Browsing Computer Science by Issue Date
Now showing items 21–28 of 28

Elfaghine, Halima (Beograd, 2016)
Abstract: The subject of this thesis belongs to the area of quality control, which represents the practical usage of statistics in monitoring and improving the production process. In 1930 Walter Shewhart started studying quality control, based on control charts and statistical principles. Later on, after World War II, Edward Deming took this discipline to Japan, where it flourished. The design and the performance of control charts are the most important problems in this area. The purpose of studying the characteristics of control charts in this thesis is to compare the existing and the suggested control charts. The thesis is divided into four chapters. The first chapter is introductory and contains motivation and basic definitions related to this subject. In this study it is always assumed that the data are normally distributed, and that the in-control process data are stationary and uncorrelated. Shewhart control charts and the corresponding limits are constructed in order to meet the given specifications for the quality characteristic that we investigate. Quality control that is applied to a production process always has costs related to the control. The important parameters connected to the cost of quality control are: the width of the control limits k, the sample size n and the interval between samples h. In Chapter 2 a new loss function is given, which is connected to the production process and to the X-bar quality control chart. Using a Matlab optimization program, the values k̂, n̂ and ĥ which minimize the loss function for given costs are found. For given values of cost, a nonlinear regression model is built using the package SigmaPlot and the obtained values are compared to those obtained by numerical optimization. In Chapter 3, the time series model Yi = λXi + (1 − λ)Yi−1 is investigated, where 0 < λ ≤ 1 is a constant and the Xi are N(μ, σ²) distributed.
Exponentially Weighted Moving Average (EWMA) control charts for this model are presented, and type I and type II errors are calculated in the case when i is large. For different sample sizes, a new comparison between the optimal designs of the X-bar and EWMA control charts for a normally distributed quality characteristic is given, comparing the corresponding cost-loss functions, power functions and average run lengths. The process of calibration is one of the methods in statistical process control, introduced for improving the quality of the products and for reducing the production costs. In Chapter 4, two new models of a nonsymmetrical loss function are introduced. Here, the loss function is connected to one product under control (not to the whole sample). Using our program, written in the statistical software R, the value which minimizes the expected loss for the Shewhart X control chart is found. This value is used as the new central target value of the quality characteristic, that is, the production process is calibrated with this new value. The thesis ends with Conclusions, where the results of the thesis are summarized, and with some open problems to be investigated. URI: http://hdl.handle.net/123456789/4340 Files in this item: 1
Disertacija_Halima_Elfaghihe.pdf ( 1.104Mb ) 
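The EWMA recursion and its standard control limits, as described in the abstract above, can be sketched in a few lines. This is a minimal illustration, not the thesis's program: the function names are made up, and the smoothing constant is written `lam`.

```python
# Sketch of the EWMA recursion Y_i = lam*X_i + (1 - lam)*Y_{i-1} from the
# abstract above, with the standard exact (time-varying) EWMA control limits
# for an in-control mean mu and standard deviation sigma. Names are
# illustrative assumptions, not taken from the thesis.

def ewma_statistics(xs, lam, y0):
    """Return the sequence Y_1..Y_n of EWMA statistics."""
    ys, y = [], y0
    for x in xs:
        y = lam * x + (1 - lam) * y   # exponentially weighted update
        ys.append(y)
    return ys

def ewma_limits(mu, sigma, lam, n_samples, width=3.0):
    """Control limits (LCL, UCL) for the EWMA statistic at each step."""
    limits = []
    for i in range(1, n_samples + 1):
        var = sigma**2 * lam / (2 - lam) * (1 - (1 - lam) ** (2 * i))
        half = width * var**0.5
        limits.append((mu - half, mu + half))
    return limits
```

With `lam = 1` the chart degenerates to the ordinary Shewhart chart with limits mu ± 3·sigma, which is a quick sanity check on the variance formula.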
Mladenović, Miljana (Beograd, 2016)
Abstract: The beginning of the new millennium was marked by the huge development of social networks, internet technologies in the cloud and applications of artificial intelligence tools on the web. The extremely rapid growth in the number of articles on the Internet (blogs, e-commerce websites, forums, discussion groups, systems for transmission of short messages, social networks and portals for publishing news) has increased the need for methods of rapid, comprehensive and accurate analysis of text. The remarkable development of language technologies has therefore enabled their application in document classification, document clustering, information retrieval, word sense disambiguation, text extraction, machine translation, computer speech recognition, natural language generation, sentiment analysis, etc. In computational linguistics, several different names for the area concerning the processing of emotions in text are in use: sentiment classification, opinion mining, sentiment analysis, sentiment extraction. According to the nature and the methods used, sentiment analysis of text belongs to the field of computational linguistics that deals with the classification of text. In the process of analysing emotions we generally speak of three kinds of text classification:
• identification of subjectivity (opinion classification or subjectivity identification), used to divide texts into those that carry emotional content and those that only have factual content;
• sentiment classification (polarity identification) of texts that carry emotional content into those with positive and those with negative emotional content;
• determining the strength or intensity of emotional polarity (strength of orientation).
In terms of the level at which the analysis of feelings is carried out, there are three methodologies: analysis at the document level, at the sentence level and at the level of attributes.
Standardized methods of text classification usually use machine learning methods or rule-based techniques. Sentiment analysis, as a specific type of classification of documents, also uses these methods. This doctoral thesis, whose main task is the analysis of emotions in text, presents research related to the sentiment classification of texts in the Serbian language, using a probabilistic machine learning method, multinomial logistic regression, i.e. the maximum entropy method. The aim of this research is to create the first comprehensive, flexible, modular system for sentiment analysis of Serbian-language texts, with the help of digital resources such as semantic networks, specialized lexicons and domain ontologies. This research is divided into two phases. The first phase is related to the development of methods and tools for detecting the sentiment polarity of the literal meaning of a text. In this part of the work, a new method of reducing the feature vector space for sentiment classification is proposed, implemented and evaluated. The proposed reduction method is applied in the maximum entropy classification model, and relies on the use of the lexical-semantic network WordNet and a specialized sentiment lexicon. The proposed method consists of two successive processes. The first process is related to the expansion of the feature vector space by the inflectional forms of features. The study has shown that the usage of stemming in sentiment analysis, as a standard method of reducing the feature vector space in text classification, can lead to incomplete or incorrect sentiment-polarity feature labelling, and that with the introduction of inflectional feature forms this problem can be avoided. The thesis shows that a feature vector space, increased due to the introduction of inflectional forms, can be successfully reduced using the other proposed procedure – semantic mapping of all predictors with the same sentiment polarity into a small number of semantic classes.
In this way, the feature vector space is reduced compared to the initial one, while retaining semantic precision. The second phase of the dissertation describes the design and implementation of formal ontologies of Serbian-language rhetorical figures – the domain ontology and the task ontology. The usage of the task ontology in generating features representing figurative speech is presented. The research aim of the second phase is to recognize figurative speech and use it to improve the existing set of predictors generated in the first phase of the research. The research results in this phase show that some classes of figures of speech can be recognized automatically. In the course of working on this dissertation, a software tool SAFOS (Sentiment Analysis Framework for Serbian), as an integrated system for sentiment classification of text in the Serbian language, has been developed, implemented and statistically evaluated. The results of the research within the scope of this thesis are shown in the papers (Mladenović & Mitrović, 2013; Mladenović & Mitrović, 2014; Mladenović, Mitrović & Krstev, 2014; Mladenović, Mitrović, Krstev & Vitas, 2015; Mladenović, Mitrović & Krstev, 2016). The dissertation consists of seven chapters with the following structure. Chapter 1 introduces and defines the methods, resources and concepts used in the first phase of research: text classification, sentiment classification, machine learning, supervised machine learning, probabilistic supervised machine learning, and language models. At the end of the introductory section, the tasks and objectives of the research are defined. Chapter 2 presents a mathematical model of text classification methods and sentiment classification methods. A mathematical model of probabilistic classification and an application of probabilistic classification in regression models are presented.
At the end of the chapter it is shown that the method using the mathematical model of maximum entropy, as one of the regression models, has been successfully applied to natural language processing tasks. Chapter 3 presents the lexical resources of the Serbian language and the methods and tools for their processing. Chapter 4 deals with comprehensive research on the currently available types and methods of sentiment classification. It surveys current work and research in the sentiment classification of texts, and presents a comparative overview of research in sentiment classification using the maximum entropy method. Chapter 5 discusses the contribution of this thesis to methods of feature space reduction for maximum entropy classification. First, an existing feature space reduction method is analysed. A new feature space reduction method which improves sentiment classification is proposed, and a mathematical model containing the proposed method is defined. The learning and testing sets and the lexical-semantic resources used in the proposed method are introduced. Chapter 5 also describes the building and evaluation of a system for sentiment classification – SAFOS – which applies and evaluates the proposed method of feature vector space reduction. The parameters and the functions of SAFOS are defined. Measures for the evaluation of the system are also discussed: precision, recall, F1-measure and accuracy. A description of the method for assessing the statistical significance of a system is given, and the implementation of the statistical test in the SAFOS system is discussed. The chapter provides an overview of the performed experiments, results and evaluation of the system. Chapter 6 deals with methods of recognizing figurative speech which can improve sentiment classification. The notion of a domain ontology is introduced, along with the role of rhetorical figures and a domain ontology of rhetorical figures.
The importance of figurative speech in sentiment classification is explored. A description of the construction and structure of the first domain ontology of rhetorical figures in the Serbian language, RetFig.owl, is given, as well as a description of the construction and structure of the corresponding task ontology that contains rules for the identification of some classes of rhetorical figures. At the end of this chapter, an overview of the performed experiments, results and evaluation of the SAFOS system plugin that improved the recognition of figurative speech is given. The final chapter of this study deals with the achievements, problems and shortcomings of the SAFOS system. The conclusion of this thesis points to the great technological, social, educational and scientific importance of sentiment analysis and the recognition of figurative speech, and gives some routes for further development of the SAFOS system. URI: http://hdl.handle.net/123456789/4422 Files in this item: 1
Mladenovic_Miljana.pdf ( 13.60Mb ) 
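The two-step feature-space reduction described in the abstract above (expand features with inflectional forms instead of stemming, then collapse same-polarity predictors into a few semantic classes) can be sketched as follows. The lexicons here are toy stand-ins invented for illustration; the thesis's actual resources are WordNet and a specialized sentiment lexicon for Serbian.

```python
# Illustrative sketch of the proposed reduction: (1) expand lemma features
# with their inflectional forms, (2) map tokens back to lemmas and collapse
# them into semantic (sentiment-polarity) classes. Both dictionaries below
# are hypothetical toy data, not the thesis's lexicons.

INFLECTIONS = {                      # hypothetical inflectional lexicon
    "dobar": ["dobar", "dobra", "dobro", "dobri"],
    "los": ["los", "losa", "lose", "losi"],
}
SEMANTIC_CLASS = {                   # hypothetical sentiment lexicon: lemma -> class
    "dobar": "POSITIVE",
    "los": "NEGATIVE",
}

def expand(lemmas):
    """Step 1: map every inflected form back to its lemma feature."""
    return {form: lemma
            for lemma in lemmas
            for form in INFLECTIONS.get(lemma, [lemma])}

def reduce_to_classes(tokens, form_to_lemma):
    """Step 2: replace known tokens by their semantic class."""
    out = []
    for t in tokens:
        lemma = form_to_lemma.get(t)
        out.append(SEMANTIC_CLASS.get(lemma, t) if lemma else t)
    return out
```

The point of the design is that inflected forms never get mislabelled by an over-eager stemmer, yet the final feature space is small: one feature per semantic class instead of one per word form.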
Vučković, Bojan (Beograd, 2017)
Abstract: We present original results from the following fields of discrete mathematics: chromatic graph theory, extremal set theory and Boolean matrix theory. From chromatic graph theory we investigate edge and total colorings satisfying the condition that neighboring vertices of a graph possess different values of the multiset, set or sum induced by the given coloring. A multiset neighbor-distinguishing edge coloring of a graph is an assignment of colors to edges such that, for every edge uv of the graph, the multiset of colors on the edges incident with the vertex u differs from the multiset on the edges incident with the vertex v. The previous best result concerning the minimum number of colors required for such a coloring of an arbitrary graph states that four colors are sufficient. The author's contribution is a proof that such a coloring is always possible with only three colors, which is in the general case the optimal number of colors. We construct a graph for which we subsequently prove that a different number of colors is required to obtain a multiset neighbor-distinguishing coloring and a neighbor-distinguishing coloring by sum; as far as we know, this is the first example of such a graph. A few results concerning the neighbor expanded sum distinguishing coloring are given. The main contribution is a proof that for an arbitrary graph there exists a total coloring with colors from the set {1, 2, 3} such that every two adjacent vertices have different sums over their adjacent vertices and incident edges. Also, for certain classes of graphs it is proved that there exists such a coloring using only the colors from the set {1, 2}. A neighbor-distinguishing edge coloring of a graph G requires that every two adjacent edges receive different colors, while the sets of colors on the edges incident with the vertices u and v differ for every edge uv of G.
The author presents an edge coloring procedure for an arbitrary graph without isolated edges that uses a smaller number of colors than all known results. For the adjacent vertex distinguishing total coloring of a graph G the condition is that every two adjacent or incident elements of V(G) ∪ E(G) receive different colors, while for every edge uv of G the set composed of the colors assigned to the edges incident with u, together with the color of u, differs from the corresponding set for v. The author improves the upper bound on the minimum number of colors needed for such a coloring, relative to the maximum degree of the graph. Frankl's conjecture from extremal set theory states that for every family closed under union there exists an element contained in at least half of the sets of the family. We give a proof that Frankl's conjecture holds for every family on 12 elements, while it was previously known to be true for families on 11 or fewer elements. Our proof is based on an efficient algorithm that exhausts all the possibilities, while using results, obtained in a number of consecutive steps, about subfamilies that an eventual counterexample cannot contain. A family of sets G is an FC-family if for every family F containing G there exists an element of ∪G that appears in at least half of the sets of F; a non-FC-family is every family that is not FC. The author's contribution is the complete classification of all families on 6 or fewer elements into FC- and non-FC-families. From Boolean matrix theory we present our results concerning the row space cardinality. Boolean matrices are matrices whose components all belong to the set {0, 1}, while the row space of a Boolean matrix is the set of vectors that can be obtained by disjunction of rows of the matrix.
We present the set consisting of all values a from the interval [2^(n−2) + 2^(n−3), 2^(n−1)] such that there exists a matrix of dimension n × n with row space cardinality equal to a. For the least positive integer a_n for which there exists no n × n matrix with row space cardinality equal to a_n, the author gives a lower bound that improves on the previously known results. All proofs of the main results in the dissertation are constructive. The proofs of some of them require the use of computers to check a great number of possibilities. For the other proofs this was not necessary, though algorithms following the steps of the proofs can be implemented to obtain a graph coloring or a matrix with the desired properties. URI: http://hdl.handle.net/123456789/4661 Files in this item: 1
Disertacija__Bojan_Vuckovic.pdf ( 1.143Mb ) 
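The row space defined in the abstract above can be computed directly for small matrices: take every subset of rows and OR them together component-wise. This brute-force sketch (exponential in the number of rows, so only for illustration) encodes each row as a Python int bit mask; whether the zero vector (the empty disjunction) is counted varies by convention, and here it is included.

```python
# Brute-force row space of a Boolean matrix: the set of all vectors
# obtainable as component-wise ORs of subsets of rows, including the empty
# disjunction (zero vector). Rows are ints used as n-bit masks.

from itertools import combinations

def row_space(rows):
    """Return the row space of a Boolean matrix given as bit-mask rows."""
    space = {0}                            # empty OR gives the zero vector
    for r in range(1, len(rows) + 1):
        for subset in combinations(rows, r):
            v = 0
            for row in subset:
                v |= row                   # Boolean disjunction of rows
            space.add(v)
    return space
```

For the n × n identity matrix every 0/1 vector is a disjunction of unit rows, so the row space cardinality reaches its maximum 2^n; most matrices realize far fewer values, which is what makes the achievable-cardinality question nontrivial.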
Simić, Danijela (Beograd, 2017)
Abstract: This thesis presents an interactive formalization of various models of geometry and algebraic methods for the automated proving of geometry theorems. We present our current work on formalizing analytic (Cartesian) plane geometries within the proof assistant Isabelle/HOL. We give several equivalent definitions of the Cartesian plane and show that it models synthetic plane geometries (using both Tarski's and Hilbert's axiom systems). We also discuss several techniques used to simplify and automate the proofs. As one of our aims is to advocate the use of proof assistants in mathematical education, our exposition tries to remain simple and close to standard textbook definitions, but completely formal and mechanically verifiable. This formalization develops the infrastructure necessary for implementing decision procedures based on analytic geometry within proof assistants. Furthermore, we investigate complex numbers. Deep connections between complex numbers and geometry have been well known and carefully studied for centuries. The fundamental objects investigated are the complex plane (usually extended by a single infinite point), its objects (points, lines and circles), and groups of transformations that act on them (e.g., inversions and Möbius transformations). In this thesis we treat the geometry of complex numbers formally and present a fully mechanically verified development within the theorem prover Isabelle/HOL. We discuss different approaches to the formalization and the major advantages of the more algebraically oriented approach. Apart from applications in formalizing mathematics and in education, this work serves as a ground for formally investigating various non-Euclidean geometries and their intimate connections. We also present a formalization of part of Tarski's axiom system within the Poincaré disk model in Isabelle/HOL. Further on, we analyze connections between geometry and polynomials and the use of these connections.
In Euclidean geometry, objects and relations between them can be expressed as polynomials. Furthermore, any geometry construction can be expressed by a set of polynomials, and geometry statements can be proved by applying algebraic methods (e.g. the Gröbner bases method or Wu's method) to that set of polynomials. We describe an implementation of an algorithm in Isabelle/HOL that accepts a term representation of a geometry construction and returns its corresponding set of polynomials. Our further work will be to use the Gröbner bases method within the Isabelle system on the generated polynomials, in order to prove the correctness of the given construction. Furthermore, we investigate how spatial geometry constructions can be represented using polynomials. We investigate two different approaches to deriving those polynomials and then compare the efficiency of algebraic provers depending on the approach used. We present a fully automated system for transforming geometry constructions into sets of polynomials. Our further work would be to connect these geometry provers with dynamic geometry software and thus make them easier for students to use. URI: http://hdl.handle.net/123456789/4499 Files in this item: 1
06062017danijela_doktorat.pdf ( 1.158Mb ) 
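The translation of geometric relations into polynomials mentioned in the abstract above can be illustrated with the simplest example: three points A, B, C in the Cartesian plane are collinear exactly when a single polynomial in their coordinates vanishes. This sketch is a generic textbook encoding, not the thesis's Isabelle/HOL implementation.

```python
# Example of encoding a geometric relation as a polynomial: collinearity of
# A = (ax, ay), B = (bx, by), C = (cx, cy) holds iff the "twice signed area"
# polynomial below is zero. Groebner-basis and Wu's-method provers work with
# systems of such polynomials.

def collinearity_polynomial(ax, ay, bx, by, cx, cy):
    """(bx-ax)(cy-ay) - (by-ay)(cx-ax): zero iff A, B, C are collinear."""
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
```

A full construction contributes one such polynomial per constraint (incidence, perpendicularity, circle membership, ...), and the statement to prove becomes membership of its polynomial in the ideal generated by the construction's polynomials.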
Čukić, Ivan (Beograd, 2018)
Abstract: There is a big class of problems that require software systems with asynchronously executed components. For example, distributed computations have distributed nodes that process the data asynchronously to one another, service-oriented architectures need to process separate requests asynchronously, and multi-core and heterogeneous systems need to have multiple separate tasks running concurrently to best utilize the hardware. Even ordinary GUI applications need asynchronous components – the user interface needs to be responsive at all times, which means that no matter what state the program is in, it needs to process and react to the input events coming from the user. The necessity of concurrency and asynchronous execution brings the added complexity of Inversion of Control (IoC) into the system, either through message passing or through event processing. IoC makes code difficult to develop and reason about, increases component coupling and inhibits clean functional or object-oriented software design. In this dissertation, a method for solving the problems that IoC introduces is presented. It presents a way to model both synchronous and different types of asynchronous tasks with the continuation monad. The continuation monad serves as a primitive to build more complex control flow structures that mimic the control flow structures of the host programming language. It also allows for building more complex control structures specialized for parallelism, transactional execution, and for simulating functional programming idioms with asynchronous tasks, through a generalization of the continuation monad that allows the asynchronous tasks to generate results one at a time.
This allows for writing programming systems with asynchronously executed components by writing seemingly synchronous imperative or functional code, while leaving it up to the compiler to do all the heavy lifting and convert the written code to an asynchronously executed set of tasks. Another benefit of the presented method is that it allows for easier automatic handling of data lifetime without the need for garbage collection. This method has been successfully applied and tested in several Free/Libre Open Source Software and proprietary real-world software projects used by hundreds of millions of people around the world. In this dissertation, an example of a secure project management system is described, which is based on a similar system implemented as a part of the KDE Plasma project. This dissertation also contains the important parts of the implementation of the AsynQt library, which extends the Qt library and its concurrency primitive – the QFuture class – with functional reactive programming patterns based on the method proposed in this dissertation. URI: http://hdl.handle.net/123456789/4738 Files in this item: 1
ivan_cukic_phd.pdf ( 1.328Mb ) 
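The continuation-monad primitive named in the abstract above can be sketched in a few lines: a computation becomes a function that takes a callback (the continuation) instead of returning a value, and `bind` chains such computations. The dissertation's actual setting is C++/Qt (AsynQt); this Python sketch is synchronous and only illustrates the monadic structure.

```python
# Minimal continuation monad: a Cont wraps run(k), which eventually calls
# the continuation k with a result. unit lifts a plain value; bind chains
# continuation-style computations. Real asynchronous variants schedule the
# callback later instead of invoking it immediately.

class Cont:
    def __init__(self, run):
        self.run = run                 # run: (callback) -> None

    @staticmethod
    def unit(value):
        return Cont(lambda k: k(value))

    def bind(self, f):
        # Feed our result v into f, which yields the next continuation.
        return Cont(lambda k: self.run(lambda v: f(v).run(k)))

result = []
Cont.unit(2).bind(lambda x: Cont.unit(x * 3)) \
            .bind(lambda y: Cont.unit(y + 1)) \
            .run(result.append)        # the chain computes (2*3)+1
```

The chain reads like straight-line code even though each step only ever hands its result to a callback, which is exactly the inversion-of-control problem the dissertation's method hides from the programmer.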
Radojčić, Nina (Beograd, 2018)
Abstract: In this dissertation, three NP-hard optimization problems are studied and various computational intelligence methods are considered for solving them, with a special emphasis on the possibilities of applying fuzzy logic to improve the performance of the proposed methods. In addition, it is shown how fuzzy logic can be incorporated into a model to make it more adequate for real-world applications. The first problem considered is the Risk-Constrained Cash-in-Transit Vehicle Routing Problem (RCTVRP), which represents a special case of the vehicle routing problem (VRP). As in the classical VRP, the aim is to determine the collection routes from one depot to a number of customers in order to minimize the overall travel distance (or cost). Additionally, in the RCTVRP the safety aspect is modeled by risk constraints on the routes. The RCTVRP concerns the issue of security during the transportation of cash or valuable goods (e.g. in the cash-in-transit industry). The other two problems studied in this dissertation belong to the class of location problems: the Load Balancing Problem (LOBA) and the Max-Min Diversity Problem (MMDP). The goal of the LOBA problem is to locate a fixed number of facilities such that the difference between the maximum and minimum number of customers served by each facility is balanced. The LOBA model is useful in cases where customers naturally choose the closest facility. The MMDP consists of selecting a subset of a fixed number of elements from a given set in such a way that the diversity among the selected elements is maximized. This problem also arises in real-world situations encompassing a variety of fields, particularly the social and biological sciences. In order to solve the RCTVRP, a fuzzy GRASP (Greedy Randomized Adaptive Search Procedure) is hybridized with the Path Relinking (PR) methodology. A carefully adjusted fuzzy modification incorporated into the proposed GRASP for the RCTVRP improved its performance.
Moreover, in this dissertation a new PR structure is implemented, which can also be used for other vehicle routing problems. To improve the algorithm's time complexity, a new data structure for the RCTVRP is incorporated. The proposed fuzzy GRASP with PR hybrid shows better computational performance compared to its non-fuzzy version. Furthermore, computational results on publicly available data sets indicate that the proposed algorithm outperforms all existing methods from the literature for solving the RCTVRP. For solving the LOBA problem, two efficient hybrid metaheuristic methods are proposed: a combination of reduced and standard variable neighborhood search methods (RVNS-VNS) and a hybridization of an evolutionary algorithm and the VNS approach (EA-VNS). The proposed hybrid methods are first benchmarked and compared to all the other methods on existing test instances for the LOBA problem with up to 100 customers and potential suppliers. In order to test the effectiveness of the proposed methods, we modify several large-scale instances from the literature with up to 402 customers and potential suppliers. Exhaustive computational experiments show that the proposed hybrid methods quickly reach all known optimal solutions while providing solutions on large-scale problem instances in short CPU times. Regarding solution quality and running times, we conclude that the proposed EA-VNS approach outperforms the other considered methods for solving the LOBA problem. An EA approach is also proposed for solving the MMDP. Computational experiments on a smaller benchmark data set showed that the classic EA quickly reached all optimal solutions obtained previously by an exact solver. However, some of the larger instances of the MMDP were challenging for the classic EA. Although researchers have established commonly used parameter settings for EAs that perform well on most problems, it is still challenging to choose adequate values for the parameters of the algorithm.
One approach to overcoming this is changing parameter values during the algorithm run. As part of this dissertation, this problem was addressed by extending the evolutionary algorithm with a fuzzy rule formulated from EA experts' knowledge and experience. The implemented fuzzy rule changes the mutation parameter during the algorithm run. The results on the tested instances indicate that the proposed fuzzy approach is more suitable for solving the MMDP than the classic EA. For all three problems addressed, on the smaller instances that CPLEX was able to solve, the obtained optimal solutions were used for comparison with the proposed methods, and all of the proposed methods reached these optimal solutions. Moreover, in this dissertation it has been shown that fuzzy logic is a successful tool in modeling the RCTVRP. In this problem the risk constraints are set by using a risk threshold T on each route, so that routes with risk larger than T are forbidden. However, in this dissertation the aim is to take into account the probability of being robbed along each route, instead of just allowing solutions with routes that satisfy the risk constraints. A new fuzzy model for the RCTVRP is developed which takes into account the value of the risk index of each route, and solutions with lower values of the risk indexes on their routes are considered superior. In order to achieve that, fuzzy numbers are used in the improved model. Moreover, two mixed integer programming formulations of the new fuzzy model are developed and presented in this dissertation. The introduced fuzzy model is compared with the model from the literature using an adequate example, and the advantages of the newly proposed fuzzy RCTVRP are demonstrated. Computational experiments are performed, and the comparison of the two models shows that the newly presented approach leads to safer routes. URI: http://hdl.handle.net/123456789/4737 Files in this item: 1
tezaNinaRadojicic.pdf ( 1.665Mb ) 
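The GRASP construction phase mentioned in the abstract above follows a generic scheme: at each step, build a restricted candidate list (RCL) of the near-best candidates and pick one of them at random. The sketch below shows that scheme on a toy "order candidates by cost" problem; the thesis's fuzzy variant additionally softens RCL membership with fuzzy logic, which is not reproduced here.

```python
# Generic GRASP construction step (greedy randomized): candidates whose
# cost is within alpha of the best form the restricted candidate list (RCL),
# and one RCL member is chosen at random. alpha = 0 is purely greedy,
# alpha = 1 is purely random. Toy objective: just per-candidate costs.

import random

def grasp_construct(costs, alpha, rng):
    """Build a greedy-randomized permutation of candidate indices."""
    remaining = list(range(len(costs)))
    solution = []
    while remaining:
        remaining.sort(key=lambda i: costs[i])
        c_min, c_max = costs[remaining[0]], costs[remaining[-1]]
        threshold = c_min + alpha * (c_max - c_min)
        rcl = [i for i in remaining if costs[i] <= threshold]
        pick = rng.choice(rcl)             # randomized choice within the RCL
        solution.append(pick)
        remaining.remove(pick)
    return solution
```

In a full GRASP each constructed solution is then improved by local search, and the best solution over many restarts is kept; path relinking, as in the thesis, additionally explores trajectories between elite solutions.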
Alshafah, Samira (Beograd, 2018)
Abstract: Proteins with intrinsically disordered regions are involved in a large number of key cell processes, including signaling, transcription, and chromatin remodeling functions. On the other hand, such proteins have been observed in people suffering from neurological and cardiovascular diseases, as well as various malignancies. The process of experimentally determining disordered regions in proteins is very expensive and long-term. As a consequence, various computer programs for predicting the position of disordered regions in proteins have been developed and constantly improved. In this thesis a new method for determining amino acid sequences that characterize ordered/disordered regions is presented. The material used in the research includes 4076 viruses with more than 190000 proteins. The proposed method is based on defining a correspondence between n-gram characteristics (including both repeats and palindromic sequences) and their belonging to ordered/disordered protein regions. Positions of ordered/disordered regions are predicted using three different predictors. The features of the repetitive strings used in the research include mole fractions, fractional differences, and z-values. Also, the data mining techniques of association rules and classification were applied to both repeats and palindromes. The results obtained by all techniques show a high level of agreement for short lengths of less than 6, while the level of agreement grows up to the maximum with increasing length of the sequences. The high reliability of the results obtained by the data mining techniques shows that there are n-grams, both repeating sequences and palindromes, which uniquely characterize the disordered/ordered regions of the proteins. The obtained results were verified by comparison with results based on n-grams from the DisProt database, which contains the positions of experimentally verified disordered regions of proteins.
The results can be used both for the fast localization of disordered/ordered regions in proteins and for further improving existing programs for their prediction. URI: http://hdl.handle.net/123456789/4746 Files in this item: 1
ThesisSamira_Alshafah.pdf ( 3.106Mb ) 
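The n-gram features described in the abstract above can be extracted with a simple sliding window. This sketch treats a "palindrome" as a substring that reads the same forwards and backwards; biological palindrome definitions vary (e.g. reverse-complement for nucleotides), so that choice is an assumption of the illustration, as are the function names.

```python
# Sliding-window extraction of n-grams from a protein sequence, flagging
# plain string palindromes among them. Toy illustration of the feature
# extraction step; the thesis additionally computes mole fractions,
# fractional differences and z-values over such n-grams.

def ngrams(sequence, n):
    """All length-n substrings of the sequence, in order of appearance."""
    return [sequence[i:i + n] for i in range(len(sequence) - n + 1)]

def palindromic_ngrams(sequence, n):
    """Those n-grams that read the same forwards and backwards."""
    return [g for g in ngrams(sequence, n) if g == g[::-1]]
```

Counting occurrences of each n-gram separately over known ordered and disordered regions then gives the per-region frequency statistics on which the predictors operate.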
Đenić, Aleksandar (Beograd, 2018)
Abstract: This thesis considers two discrete location problems: the Bus Terminal Location Problem (BTLP) and the Long-term Care Facility Location Problem (LTCFLP). A Variable Neighborhood Search (VNS) method for solving the BTLP and the LTCFLP is presented. VNS is a single-solution metaheuristic based on a systematic change of neighborhoods while searching for an optimal solution of the problem. It consists of two main phases: the shake phase and the local search phase. The BTLP is a discrete location problem which considers locating bus terminals in order to provide the highest possible quality of public service to the clients. Clients are represented as public transportation stations, such as bus or metro stations. A VNS algorithm is used for solving the BTLP. This algorithm uses an improved local search based on an efficient neighborhood interchange. The VNS is parallelized (PVNS), which leads to a significant time improvement as a function of the processor core count. Computational results show that the proposed PVNS method improves the existing results from the literature in terms of quality. Larger instances, based on instances from the Traveling Salesman Problem library, are presented and computational results for those instances are reported. The LTCFLP was created as a part of health care infrastructure planning in South Korea. Clients are considered as groups of patients in need of long-term health care, while established facilities represent locations where the centers that provide health care services should be built. There are n predefined locations where centers can be established. This problem seeks at most K locations at which to establish health centers so that they are equally loaded with client demand. For solving the LTCFLP with the VNS algorithm, a data structure based on fast interchange is presented. It reduces the time complexity of one iteration of the local search algorithm to O(n · max(n, K²)), compared to the known time complexity from the literature of O(K² · n²).
The reduced time complexity of the presented VNS leads to better quality solutions, due to the larger number of VNS iterations that can be performed in less computational time. This thesis presents computational results that outperform the best known results from the literature. URI: http://hdl.handle.net/123456789/4744 Files in this item: 1
Aleksandar_Djenic_phd.pdf ( 2.183Mb )
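The shake/local-search loop described in the abstract above is the standard VNS skeleton, and it can be sketched generically. The problem-specific pieces are toy stand-ins here: the objective is f(x) = x², the k-th neighborhood is "a random integer step of size at most k", and local search is steepest descent on the integers; none of this is the thesis's BTLP/LTCFLP machinery.

```python
# Generic VNS skeleton: shake in neighborhood k, run local search, move and
# reset to k = 1 on improvement, otherwise try a larger neighborhood.
# Toy problem: minimize f(x) = x*x over the integers.

import random

def vns(f, x0, k_max, iterations, rng):
    best = x0
    for _ in range(iterations):
        k = 1
        while k <= k_max:
            x = best + rng.randint(-k, k)              # shake in neighborhood k
            while f(x - 1) < f(x) or f(x + 1) < f(x):  # local search (descent)
                x = x - 1 if f(x - 1) < f(x) else x + 1
            if f(x) < f(best):
                best, k = x, 1                          # move; restart neighborhoods
            else:
                k += 1                                  # escalate neighborhood size
    return best
```

The parallel variant (PVNS) runs independent shake/local-search trials on separate cores and keeps the overall best, which is why the speedup scales with the core count.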