Browsing Computer Science by Title
Showing documents 28-40 of 40
-
Kartelj, Aleksandar (Beograd, 2014)
Abstract: This work investigates the potential of improving the classification process through solving three classification-related problems: feature selection, feature weighting and parameter selection. All three problems are challenging and currently in the focus of scientific research in the field of machine learning. Each problem is solved by using a population-based metaheuristic method called the electromagnetism-like method. This method is used for combinatorial and global optimization. It is inspired by the laws of attraction and repulsion among charged particles. Each particle is represented by a vector of real values. The solution of the problem of interest is then obtained by mapping these real-valued vectors to the feasible solution domain. Particles representing better solutions achieve a higher level of charge, which consequently produces a greater impact on other particles. The search process is performed by iterating the particle movement induced by the charges. Through implementing the methods, two key aspects are managed: 1) the classification quality obtained after applying the optimization method and 2) the efficiency of the proposed methods from the perspective of time and space resources. All methods are equipped with problem-specific local search procedures which tend to increase the solution quality. The benefit of applying feature selection to the classification process is twofold. Firstly, the elimination of unnecessary features decreases the data set noise, which degrades the quality of the classification model. Secondly, the problem dimension is decreased, thus the efficiency is increased. The feature selection problem is very efficiently solved by the proposed method. The classification quality is in the majority of cases (instances) improved relative to the methods from the literature. For some of the instances, computational times are up to several hundred times smaller than those of the competing methods. The feature weighting and parameter selection problems share a similar underlying solution representation, based on vectors of real values. Since the representation of charged particles is based on the same underlying domain, the transition from the particle to the solution domain behaves smoothly. The quality of the method for feature weighting is demonstrated through the nearest neighbors classification model. The testing of the method is conducted on different collections of instances, and after that, the comparison to several methods from the literature is made. In the majority of cases, the proposed method outperformed the comparison methods. Parameter selection in classification has a great impact on the classification quality. The proposed method for parameter selection is applied to the support vector machine, which has a complex parametric structure when the number of parameters and the size of their domains is in question. By using a heuristic initialization procedure, the detection of high-quality regions for parameter combinations is accelerated. Exhaustive tests are performed on various instances in terms of their dimension and feature structure: homogeneous and heterogeneous. Single kernel learning is adopted for homogeneous, and multiple kernel learning for heterogeneous instances. The comparison with methods from the literature showed superiority of the proposed method when single and multiple kernel learning based on the radial basis function is considered. The method proves to be competitive in other cases. All proposed methods improved the classification quality. Because of the way the problem is being solved, all three methods can be generalized and applied to a wide class of classification models and/or classification problems. URI: http://hdl.handle.net/123456789/4234 Files in this item: 1
phdAleksandarKartelj.pdf ( 2.121Mb ) -
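The particle-charge mechanism summarized in the abstract above can be made concrete with a short sketch. The following is a minimal, illustrative rendering of one iteration of a generic electromagnetism-like method (charges derived from solution quality, pairwise attraction and repulsion, movement along the resulting force); it is not the implementation developed in the thesis, and the charge formula, step size and toy objective are assumptions.

```python
import numpy as np

# Illustrative sketch (not the thesis implementation) of one iteration of an
# electromagnetism-like method: particles carry charges derived from solution
# quality, attract/repel each other, and move along the resulting force.
def em_like_step(particles, objective, lower, upper):
    values = np.array([objective(p) for p in particles])
    best = values.min()
    n, d = particles.shape
    denom = (values - best).sum() + 1e-12
    charges = np.exp(-d * (values - best) / denom)   # better value -> larger charge
    forces = np.zeros_like(particles)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            diff = particles[j] - particles[i]
            dist = np.linalg.norm(diff) + 1e-12
            f = diff * charges[i] * charges[j] / dist**2
            # Better particles attract, worse ones repel.
            forces[i] += f if values[j] < values[i] else -f
    # Move each particle a random step along its normalised total force.
    step = np.random.rand(n, 1)
    moved = particles + step * forces / (np.linalg.norm(forces, axis=1, keepdims=True) + 1e-12)
    return np.clip(moved, lower, upper)

# Toy usage on a sphere function (assumed objective, not from the thesis):
pop = np.random.uniform(-5, 5, size=(20, 4))
for _ in range(100):
    pop = em_like_step(pop, lambda x: float(np.sum(x**2)), -5, 5)
```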
Grbić, Milana (Beograd, 2020)
Abstract: In this dissertation some current problems of bioinformatics and computational biology are explored, together with methods for solving them. The following problems are considered: partitioning of sparse biological networks into k-plex subnetworks, prediction of the role of metabolites in metabolic reactions, partitioning of biological networks into highly connected components, and the problem of identification of significant groups of proteins by adding new edges to the weighted protein interactions network. The aforementioned problems have theoretical importance in the areas of machine learning and optimization, and practical application in biological research. In addition to solving the aforementioned problems from the computational aspect, the dissertation explores further application of the obtained results in the fields of biology and biochemistry, as well as the integration of the results within existing bioinformatics tools. The problem of predicting the role of metabolites in metabolic reactions is solved by a predictive machine learning method based on conditional random fields, while for the remaining three problems algorithms based on variable neighbourhood search are developed. For solving the problem of identification of significant groups of proteins by adding new edges to the weighted protein interactions network, the variable neighbourhood search is only the first phase of the proposed solution, while in the second and the third phase of the proposed method, the integration with additional biological information and bioinformatics tools is performed. The proposed computational methods of partitioning and grouping in biological networks confirm existing findings in a new manner and lead to new discoveries about biological elements and the connections between them. By solving these problems and by interpreting the obtained results in this dissertation, a scientific contribution was made to the scientific field of computer science, particularly to the scientific disciplines of bioinformatics and computational biology. URI: http://hdl.handle.net/123456789/5088 Files in this item: 1
grbic_Milana_disertacija.pdf ( 8.740Mb ) -
Jovanović, Jasmina (Beograd, 2022)
Abstract: The analysis of biological sequence similarity between different species is significant in identifying functional, structural or evolutionary relationships among the species. Biological sequence similarity and analysis of newly discovered nucleotide and amino acid sequences are demanding tasks in bioinformatics. As biological data is growing exponentially, new and innovative algorithms constantly need to be developed to get faster and more effective data processing. The challenge in sequence similarity analysis algorithms is that a sequence does not always have obvious features, and the dimension of sequence features may be too high to apply regular feature selection methods on sequences. It is important to have a simple and effective algorithm for determining biological sequence relationships. This thesis proposes two new methods for transforming sequences into feature vectors that take into consideration statistically significant repetitive parts of the analyzed sequences, as well as different approaches for determining nucleotide sequence similarity and sequence classification for predicting taxonomy groups of biological sequence data. The first method is based on information theory and the fact that both the position and the frequency of repeated sequences are not expected to occur with identical presence in a random sequence of the same length. The second method includes building signatures of biological sequences and profiles of taxonomic classes based on repetitive parts of sequences and the distances between these repeats. The proposed methods have been validated on multiple data sets and compared with results obtained using different well-known and accepted methods in this field such as BLAST, Clustal Omega and methods based on k-mers. The resulting precision of the proposed methods is close to the values provided by existing methods for the majority of tested data sets, and time performance depends strictly on the infrastructure used and the sequence type. The methods provide results that are comparable with other commonly used methods focused on resolving the same problem, taking into consideration statistically significant repetitive parts of sequences with different characteristics. URI: http://hdl.handle.net/123456789/5440 Files in this item: 1
JasminaJovanovic.pdf ( 3.984Mb ) -
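As a rough illustration of the sequence-to-feature-vector idea discussed above, the sketch below builds plain k-mer count vectors and compares them with cosine similarity. This is a deliberate simplification of the repeat-based signatures and profiles proposed in the thesis; the choice of k, the similarity measure and the toy sequences are assumptions.

```python
from collections import Counter
from math import sqrt

# Simplified sketch: represent a nucleotide sequence by its k-mer counts and
# compare two sequences with cosine similarity (an assumption; the thesis uses
# statistically significant repeats and repeat distances, not raw k-mer counts).
def kmer_vector(sequence, k=4):
    """Count every overlapping k-mer in the sequence."""
    return Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))

def cosine_similarity(vec_a, vec_b):
    dot = sum(vec_a[kmer] * vec_b.get(kmer, 0) for kmer in vec_a)
    norm_a = sqrt(sum(v * v for v in vec_a.values()))
    norm_b = sqrt(sum(v * v for v in vec_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Toy usage on two short made-up sequences:
seq1 = "ATGCGATGCGATGCGT"
seq2 = "ATGCGATGCCATGCGA"
print(cosine_similarity(kmer_vector(seq1), kmer_vector(seq2)))
```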
Perović, Vladimir (Beograd, 2013)
Abstract: Although long-range intermolecular interactions (interactions acting at distances >5 Å) play an important role in recognition and targeting between molecules in biological systems, there is no appropriate software package allowing the use of this important property in the investigation of biologically active molecules. The multifunctional EIIP/ISM software, which is based on physical parameters determining long-range molecular properties, was developed in this thesis. This novel and unique platform allows (i) investigation of protein-protein and protein-small molecule interactions, (ii) analysis of the structure/function relationship of proteins, (iii) assessment of the biological effects of mutations in proteins, (iv) monitoring of the functional evolution of proteins, (v) "de novo" design of molecules with a desired biological function and (vi) selection of candidate therapeutic molecules. Results of the application of the EIIP/ISM platform to diverse problems (e.g. the evolution of influenza A viruses, assessment of the biological effects of mutations on the LPL protein, representing a risk factor for cardiovascular diseases, identification of therapeutic targets for HIV and influenza viruses, virtual screening of molecular libraries for candidate antibiotics and anti-HIV drugs), which are presented in this thesis, confirm the applicability of this platform to a broad spectrum of problems in molecular biology, biomedicine and pharmacology. URI: http://hdl.handle.net/123456789/4230 Files in this item: 1
phdPerovic_Vladimir.pdf ( 11.95Mb ) -
Đenić, Aleksandar (Beograd, 2018)
Abstract: This paper considers two discrete location problems: the Bus Terminal Location Problem (BTLP) and the Long-term Care Facility Location Problem (LTCFLP). A Variable Neighborhood Search (VNS) method for solving BTLP and LTCFLP is presented in this paper. VNS is a single-solution metaheuristic based on a systematic change of neighborhoods while searching for an optimal solution of the problem. It consists of two main phases: a shake phase and a local search phase. BTLP is a discrete location problem which considers locating bus terminals in order to provide the highest possible quality of public service to the clients. Clients are presented as public transportation stations, such as bus or metro stations. A VNS algorithm is used for solving BTLP. This algorithm uses an improved local search based on efficient neighborhood interchange. The VNS is parallelized (PVNS), which leads to significant time improvement as a function of the processor core count. Computational results show that the proposed PVNS method improves existing results from the literature in terms of quality. Larger instances, based on instances from the Traveling Salesman Problem library, are presented and computational results for those instances are reported. LTCFLP was created as part of health care infrastructure planning in South Korea. Clients are considered as groups of patients with a need for long-term health care, while established facilities represent locations where the centers that provide health care services should be built. There are n predefined locations where centers can be established. This problem seeks at most K locations to establish health centers so that they are equally loaded with client demand. For solving LTCFLP with the VNS algorithm, a data structure based on fast interchange is presented. It reduces the time complexity of one iteration of the local search algorithm to O(n · max(n, K²)), compared to the known time complexity from the literature, O(K² · n²). The reduced time complexity of the presented VNS leads to better quality solutions, due to the larger number of VNS iterations that can be performed in less computational time. This paper presents computational results that outperform the best known results from the literature. URI: http://hdl.handle.net/123456789/4744 Files in this item: 1
Aleksandar_Djenic_phd.pdf ( 2.183Mb ) -
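The shake/local-search loop mentioned in the abstract follows the standard VNS template. The skeleton below is a generic illustration of that template (widening the neighborhood on failure, restarting from the first neighborhood on improvement); it is not the parallel PVNS or the fast-interchange data structure from the thesis, and the function arguments are placeholders.

```python
# Generic Variable Neighborhood Search skeleton matching the two-phase
# (shake + local search) scheme described above. The neighborhood operators,
# objective and stopping rule are placeholders, not the thesis implementation.
def vns(initial_solution, objective, shake, local_search, k_max, max_iterations):
    best = initial_solution
    best_value = objective(best)
    for _ in range(max_iterations):
        k = 1
        while k <= k_max:
            candidate = shake(best, k)            # random perturbation in the k-th neighborhood
            candidate = local_search(candidate)   # descend to a local optimum
            value = objective(candidate)
            if value < best_value:                # improvement: restart from the first neighborhood
                best, best_value = candidate, value
                k = 1
            else:                                 # no improvement: move to a wider neighborhood
                k += 1
    return best, best_value
```

Restarting from k = 1 after every improvement keeps the search close to the incumbent solution, while the progressively larger shakes supply the diversification needed to escape local optima.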
Mišković, Stefan (Beograd, 2016)
Abstract: In this dissertation, three NP-hard min-max discrete optimization problems are considered. The first considered problem is the multi-period emergency service location problem, the second one is the dynamic maximal covering location problem with multiple covering radii, and the third one is the uncapacitated multiple allocation p-hub center problem. In many practical situations, input parameters (such as user demands, transportation time or cost) often vary with unknown distributions. Therefore, it is necessary to involve these uncertainties in the deterministic variants of the problems by applying a robust optimization approach. Mathematical models for the deterministic and non-deterministic variants of all three problems are developed, except for the deterministic uncapacitated multiple allocation p-hub center problem, which has already been addressed in the literature. In addition, for the first time in the literature, it was proven that the emergency service location problem is NP-hard. The considered problems and their robust variants have numerous applications, due to the fact that in real-life situations input parameters are often subject to uncertainty. The multi-period emergency service location problem may be used when determining optimal locations for police stations, fire brigades, ambulances, and other emergency units in a given region. The dynamic maximal covering location problem with multiple covering radii is useful when choosing the optimal strategy for establishing resources (service centers, suppliers, facilities, etc.) with maximal satisfaction of customer demands in a certain region, by assuming that the service efficiency directly depends on the distance between customer and service center (i.e., the selected coverage radius). The uncapacitated multiple allocation p-hub center problem has significant applications in designing telecommunication and transportation networks, postal delivery systems, emergency systems, supply networks, etc. Since exact methods provide optimal solutions only for problem instances of small dimensions, hybrid metaheuristic algorithms are developed to solve both the deterministic and robust variants of the considered problems. The proposed hybrid algorithms are obtained by combining particle swarm optimization with a local search heuristic: classical local search or the variable neighborhood search method. For the dynamic maximal covering location problem with multiple covering radii, a hybridization of the metaheuristic algorithm with an exact method based on linear programming is developed. All elements of the proposed algorithms are adapted to the problems under consideration. Different strategies are implemented for improving the efficiency of the proposed algorithms, especially for the calculation of the objective function value and the local search part. The influence of different parameters of the hybrid algorithms on the solution quality is analyzed in detail. All parameters are adjusted by using analysis of variance. For all considered problems (both deterministic and robust variants), the performance of the proposed hybrid algorithms is evaluated on adequate test data sets. The proposed algorithms are compared with existing heuristics from the literature and exact methods incorporated in the commercial CPLEX solver. The obtained experimental results indicate the efficiency of the proposed algorithms in obtaining high-quality solutions for all considered test instances. The presented comparative analysis indicates the advantages of the proposed hybrid algorithms over existing methods in the sense of solution quality and/or required computational time, especially in the case of large problem dimensions. The results presented in this paper represent a contribution to the field of discrete optimization, robust optimization and metaheuristic methods. URI: http://hdl.handle.net/123456789/4423 Files in this item: 1
Miskovic_Stefan_teza.pdf ( 1.773Mb ) -
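To make the "particle swarm optimization combined with local search" idea above more tangible, here is a minimal, illustrative hybrid in the textbook PSO formulation with a local search step applied to each particle. The inertia and acceleration coefficients, bounds, and the do-nothing local search in the toy usage are assumptions, not the algorithms developed in the dissertation.

```python
import random

# Illustrative PSO + local search hybrid (textbook velocity/position update with
# a local search applied to each particle); parameters are assumed defaults.
def pso_with_local_search(objective, local_search, dim, swarm_size=30, iterations=200,
                          w=0.7, c1=1.5, c2=1.5, bounds=(-10.0, 10.0)):
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm_size)]
    vel = [[0.0] * dim for _ in range(swarm_size)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(swarm_size), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(iterations):
        for i in range(swarm_size):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            candidate = local_search(pos[i])          # hybridization step
            value = objective(candidate)
            if value < pbest_val[i]:
                pbest[i], pbest_val[i] = candidate[:], value
                if value < gbest_val:
                    gbest, gbest_val = candidate[:], value
    return gbest, gbest_val

# Toy usage: sphere function with a do-nothing local search.
print(pso_with_local_search(lambda x: sum(v * v for v in x), lambda x: x, dim=5))
```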
Stojadinović, Mirko (Beograd, 2016)
Abstract: Many real-world problems can be modeled as constraint satisfaction problems (CSPs) and then solved by one of many available techniques for solving these problems. One of the techniques is reduction to SAT, i.e. the Boolean satisfiability problem. Variables and constraints of a CSP are translated (encoded) into a SAT instance, which is then solved by state-of-the-art SAT solvers, and the solution, if it exists, is translated back to a solution of the original CSP. The main aim of this thesis is to improve CSP solving techniques that use reduction to SAT. Two new hybrid encodings of CSPs to SAT are presented, and they combine the good sides of the existing encodings. We give a proof of correctness of one encoding that did not exist in the literature. We developed the system meSAT that enables reduction of CSPs to SAT by using 4 basic and 2 hybrid encodings. The system also enables solving of CSPs by reduction to two problems related to SAT: SMT and PB. We developed a portfolio for automated selection of the encoding/solver to be used on a new instance that needs to be solved. The developed portfolio is comparable with the state-of-the-art portfolios. We developed a hybrid approach based on short solving timeouts with the aim of significantly reducing the preparation time of a portfolio. By using this approach, we got results comparable to the ones obtained by using preparation time of the usual length. We made a comparison between several machine learning techniques with the aim of finding out which one is best suited for the short training approach. The problem of assigning air traffic controllers to shifts is described and three models of this problem are presented. We used a large number of different solving methods and a diverse set of solvers for solving this problem. We developed optimization techniques that aim to find optimal solutions of the problem. A hybrid technique combining reduction to SAT and local search is shown to be the most efficient one. We also considered sudoku puzzles and the existing techniques of solving puzzles of greater size than 9×9. Amongst the used techniques, the existing reduction to SAT is the most efficient in solving these puzzles. We improved the existing algorithm for generating large sudoku puzzles. It is shown that simple preprocessing rules additionally improve the speed of generating large sudokus. URI: http://hdl.handle.net/123456789/4427 Files in this item: 1
MirkoStojadinovicTeza.pdf ( 2.030Mb ) -
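For readers unfamiliar with reduction of CSPs to SAT, the sketch below shows the classic direct encoding of finite-domain variables (one Boolean per value, with at-least-one and at-most-one clauses) together with a not-equal constraint. It illustrates the kind of translation the thesis works with, but the helper names and clause layout are illustrative and do not reproduce the meSAT encodings.

```python
from itertools import combinations

# Direct encoding sketch: one Boolean variable per (CSP variable, value) pair.
def direct_encode_variable(var_index, domain_size, fresh):
    """Return (value_literals, clauses) for one finite-domain CSP variable."""
    lits = [fresh(var_index, v) for v in range(domain_size)]
    clauses = [lits[:]]                                       # at-least-one value
    clauses += [[-a, -b] for a, b in combinations(lits, 2)]   # at-most-one value
    return lits, clauses

def encode_not_equal(lits_x, lits_y):
    """x != y: the two variables may not take the same value."""
    return [[-lx, -ly] for lx, ly in zip(lits_x, lits_y)]

# Toy usage: two variables with domain {0,1,2} and the constraint x != y.
counter = 0
mapping = {}
def fresh(var, val):
    global counter
    if (var, val) not in mapping:
        counter += 1
        mapping[(var, val)] = counter
    return mapping[(var, val)]

x_lits, cnf = direct_encode_variable(0, 3, fresh)
y_lits, more = direct_encode_variable(1, 3, fresh)
cnf += more + encode_not_equal(x_lits, y_lits)
print(cnf)   # DIMACS-style clause list ready for a SAT solver
```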
Kovačević, Jovana (Beograd, 2015)
Abstract: Proteins represent one of the most important groups of biomolecules. The different functions that they carry out in each organism are unique and irreplaceable, including versatile cellular processes, the structural role of proteins, catalytic function, a number of metabolic functions and so on. Knowing and understanding protein function is therefore essential in the investigation of any biological process, especially of human diseases, since a lot of them are caused by functional mutations. In this paper, we present an investigation of the protein function domain through two different approaches. In the first one, protein function is represented by GO ontologies with the structure of a directed acyclic graph. There are three GO ontologies: one for functions regarding biological processes, one for functions regarding cellular components and one for molecular functions. Each ontology contains several thousand nodes, where every node determines a more specific function than its ancestors. The task of this part of the research was to develop software for predicting protein function from its primary sequence based on the structural support vector machines method, which represents a generalization of the well-known support vector machines method to structured output. The structure-function paradigm is one of the basic concepts in molecular biology, stating that 3D protein structure is closely connected to its role in the organism. It has been detected that disordered proteins (the ones that lack 3D structure) and disordered regions of proteins are related to severe contemporary illnesses, which contributed to their popularity in modern research. In another aspect, we investigated the relationship between proteins' functional categories and their disorder, as well as other physico-chemical characteristics of proteins. Here, protein function has been observed through 25 elementary functions grouped in 4 functional groups. In this work, we present results of a thorough analysis over a large protein dataset where disorder has been determined computationally, using publicly available tools. URI: http://hdl.handle.net/123456789/4451 Files in this item: 1
DoktoratJK2015.pdf ( 1.116Mb ) -
Bačanin Džakula, Nebojša (Beograd, 2015)
Abstract: Hard optimization problems that cannot be solved within acceptable computational time by deterministic mathematical methods have been successfully solved in recent years by population-based stochastic metaheuristics, among which swarm intelligence algorithms represent a prominent class. This thesis investigates improvements of swarm intelligence metaheuristics by hybridization. During analysis of the existing swarm intelligence metaheuristics, in some cases deficiencies and weaknesses in the solution space search mechanisms were observed, primarily as a consequence of the mathematical model that simulates the natural process, as well as an inappropriate balance between intensification and diversification. The thesis examines whether existing swarm intelligence algorithms for global optimization can be improved (in the sense of obtaining better results, faster convergence and better robustness) by hybridization with other algorithms. A number of hybridized swarm intelligence metaheuristics were developed and implemented. Considering the fact that good hybrids are not created as a random combination of individual functional elements and procedures from different algorithms, but rather established on a comprehensive analysis of the functional principles of the algorithms that are used in the process of hybridization, development of the hybrid approaches was preceded by thorough research of the advantages and disadvantages of each involved algorithm in order to determine the best combination that neutralizes the disadvantages of one approach by incorporating the strengths of the other. The developed hybrid approaches were verified by testing on standard benchmark sets for global optimization, with and without constraints, as well as on well-known practical problems. Comparative analysis with the state-of-the-art algorithms from the literature demonstrated the quality of the developed hybrids and confirmed the hypothesis that swarm intelligence algorithms can be successfully improved by hybridization. URI: http://hdl.handle.net/123456789/4245 Files in this item: 1
phdBacaninNebojsa.pdf ( 3.813Mb ) -
Alatrash, Emhimed Salem (Beograd, 2015)
Abstract: Ontologies, often defined as an explicit specification of a conceptualization, are necessary for knowledge representation and knowledge exchange. This means that an ontology describes concepts and relations that exist in a domain. To enable knowledge exchange, it is necessary to describe these concepts and relations in a better way than just ordering them in a taxonomy. A computational ontology consists of a number of different components, such as concepts, instances, individuals or facts, relations and attributes. The present research considers different software tools related to the Semantic Web and provides a comparison among them. Five ontology editors are described and compared: Apollo, Onto Studio, Protégé, Swoop and TopBraid Composer Free Edition. The structure and basic features of these editors, as well as the way of using them, are described. The main criterion used in comparing these editors is their convenience for the user and the possibility of applying them in different kinds of applications. The main goal of the work is to introduce a method for ontology construction of a certain domain in applying the Semantic Web. A number of software tools adapted to build up the domain ontologies of the most widespread natural languages are available; however, accomplishing that for any given natural language presents a challenge. This research proposes a semi-automatic procedure to create ontologies for different natural languages. The approach utilizes various software tools that are available on the Internet, most notably DODDLE-OWL, which is a domain ontology development tool implemented for the English and Japanese languages. Through this tool, WordNet, Protégé and XSLT transformations, the researcher proposes a general procedure to construct a domain ontology for any natural language. URI: http://hdl.handle.net/123456789/4266 Files in this item: 1
phdEmhimedAlatrash.pdf ( 2.171Mb ) -
Nikolić, Mladen (Belgrade, 2013)
Abstract: In this thesis the problem of guiding search in automated theorem proving is considered. The thesis consists of two parts that have the CDCL search system, the system intensively used by modern SAT solvers, as their common topic. In the first part of the thesis a simple approach to guiding search is considered: guiding by the selection of the solver, its heuristics, and their parameters, based on the properties of an instance to be solved. The basis of the proposed methods for algorithm selection is the syntactical similarity of formulae, which is reflected in their graph structure. This graph similarity is established and analyzed by using an original graph similarity measure (which turned out to be useful in other contexts, too). Yet, practical approaches to measuring similarity of formulae are based on their numerical features, due to computational complexity issues. Two simple methods for algorithm selection, based on k nearest neighbors, were proposed. The first technique, ArgoSmArT, is based on classification of an instance into one of the predefined families for which efficient algorithms are known. The instance is solved by the algorithm corresponding to the family to which the instance was classified. The second technique, ArgoSmArT k-NN, is based on finding several similar instances in the training set for which the solving times of all considered algorithms are known. The instance is solved by the algorithm that behaves best on those instances. The ArgoSmArT technique is better suited for configuration selection of a SAT solver, and ArgoSmArT k-NN for SAT solver selection. The ArgoSmArT k-NN technique showed to be more efficient than the most important and very complex system for SAT solver selection, the SATzilla system. Apart from CNF SAT solver selection, the problem of non-CNF SAT solver selection is considered. The focus was not on solver selection techniques, since the proposed techniques are directly applicable, but on the attributes that can be used to describe non-CNF SAT instances, which had not been proposed earlier. The results in this domain are positive, but still limited. The main reason for that is the lack of a greater number of non-CNF SAT solvers of different behaviour, which is not surprising, having in mind that this kind of solver is in its early stage of development. Apart from the construction of an efficient SAT solver selection system, a methodology of SAT solver comparison, based on statistical hypothesis testing, is proposed. The need for such a methodology comes from great run time variations of single instance solving by a solver, which can result in different SAT solver orderings when one tries to compare their performance or rank them, as experimentally demonstrated. The proposed methodology gives the estimate of the statistical significance of the performed test and the estimate of the effect size, for instance the probability of one solver being faster than another. The second part of the thesis is concerned with generalizing the CDCL search system to fragments of first-order logic. The proposed system can be used as a basis for efficient proving in some fragment if the rules of resolution and factoring are specified for that fragment. These rules are defined for an extension of coherent logic. The soundness and completeness of the system are proved. The system has several distinguishing features which are a consequence of a previously performed analysis of challenges in coherent logic theorem proving. The system enables first-order reasoning, instead of the ground reasoning characteristic of all existing coherent logic provers. Moreover, it introduces backjumps and lemma learning. Special attention in the system design was paid to the possibility of generating readable proofs by a prover implementing the system. This possibility is one of the greatest qualities of coherent logic, but it is not easy to achieve if the CDCL search system is used. One of the properties of the system that came from the need for generation of readable proofs is the preservation of quantifiers in the proving process, which is rather unusual for existing CDCL systems. Another advantage of the proposed CDCL system is the possibility of transferring heuristics which are already successfully employed in SAT solving to other domains. Based on the proposed system, the proof procedure Calypso for extended coherent logic was defined, which can also be used in standard coherent logic. An extension of the Rete algorithm which enables detection of conflicts and of literals to be propagated or decided is proposed. The procedure Calypso is implemented in C++. It was evaluated on representative coherent logic problems and it showed to be superior to other coherent logic provers and also to the prover Vampire, the most efficient prover for first-order logic. Based on the results presented in this thesis, it can be concluded that the main hypothesis of this work, that the search system used in CDCL SAT solvers can be significantly improved by simple guiding and that it can be successfully formulated for fragments of first-order logic such as coherent logic, was confirmed, and that concrete answers on how to do that were provided. URI: http://hdl.handle.net/123456789/2584 Files in this item: 1
nikolic_mladen.pdf ( 1.448Mb ) -
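A toy version of the k-NN-based selection idea attributed above to ArgoSmArT k-NN can be sketched as follows: describe an instance by numerical features, find its k nearest training instances, and pick the solver with the smallest total running time on those neighbors. The feature values, distance function and runtimes below are made-up illustrations, not data or code from the thesis.

```python
import math

# Illustrative k-NN solver selection: choose the solver that was fastest,
# in total, on the k training instances most similar to the new one.
def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def select_solver(instance_features, training_set, k=3):
    """training_set: list of (features, {solver_name: runtime_seconds})."""
    neighbors = sorted(training_set,
                       key=lambda item: euclidean(instance_features, item[0]))[:k]
    totals = {}
    for _, runtimes in neighbors:
        for solver, seconds in runtimes.items():
            totals[solver] = totals.get(solver, 0.0) + seconds
    return min(totals, key=totals.get)

# Toy usage with made-up features and runtimes:
training = [
    ([0.1, 200.0], {"solverA": 1.0, "solverB": 9.0}),
    ([0.2, 180.0], {"solverA": 2.0, "solverB": 8.0}),
    ([0.9, 20.0],  {"solverA": 7.0, "solverB": 0.5}),
]
print(select_solver([0.15, 190.0], training))   # -> "solverA"
```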
Dimovski, Igor (Novi Sad, 2011)
Abstract: Comprehensive pedagogical research regarding teaching mathematics at the tertiary, university level is presented in this PhD dissertation. Educational resources tailored in an electronic form using the programme package Matlab are integrated into the learning process. The impact of ICT use on the essential knowledge of multivariate calculus (functions of several variables, vector-valued functions and three-dimensional analytical geometry) has been statistically explored through intensive use of 3D static and dynamic visual tools. Some of the students who participated in the research developed Matlab programmes on their own. One part of the research focused on the possible impact of programming skills on learning mathematical concepts. URI: http://hdl.handle.net/123456789/3874 Files in this item: 1
PhD_I_Dimovski.pdf ( 5.423Mb ) -
Protić, Danijela (Beograd, 2023)
Abstract: Anomaly detection is the recognition of suspicious computer network behavior by comparing unknown network traffic to a statistical model of normal network behavior. Binary classifiers based on supervised machine learning are good candidates for normality detection. This thesis presents five standard binary classifiers: k-nearest neighbors, weighted k-nearest neighbors, decision trees, support vector machines and a feedforward neural network. The main problem with supervised learning is that it takes a lot of data to train high-precision classifiers. To reduce the training time with minimal degradation of the accuracy of the models, a two-phase pre-processing step is performed. In the first phase, numeric attributes are selected to reduce the dataset. The second phase is a novel normalization method based on the hyperbolic tangent function and the damping strategy of the Levenberg-Marquardt algorithm. The Kyoto 2006+ dataset, the only publicly available dataset of real-world network traffic intended solely for anomaly detection research in computer networks, was used to demonstrate the positive impact of such pre-processing on classifier training time and accuracy. Of all the selected classifiers, the feedforward neural network has the highest processing speed, while the weighted k-nearest neighbor model proved to be the most accurate. The assumption is that when the classifiers work concurrently, they should both detect either an anomaly or normal network traffic, which occasionally is not the case, resulting in different decisions about the anomaly, i.e. a conflict arises. The conflicting-decision detector performs a logical exclusive OR (XOR) operation on the outputs of the classifiers. If both classifiers simultaneously detect an anomaly or recognize traffic as normal, no conflict has occurred; otherwise a conflict is detected. The number of conflicts detected provides an opportunity for additional detection of changes in computer network behavior. URI: http://hdl.handle.net/123456789/5599 Files in this item: 1
Danijela Protic - Doktorska Disertacija.pdf ( 3.143Mb )
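The conflicting-decision detector described in the abstract reduces to an element-wise XOR over the two classifiers' binary outputs. The sketch below shows that operation on made-up decision vectors; the classifier outputs are illustrative only, not results from the thesis.

```python
# Minimal sketch of the conflicting-decision detector described above: two
# binary classifiers label the same traffic records and a logical XOR of their
# outputs flags each conflict.
def detect_conflicts(decisions_a, decisions_b):
    """decisions_*: sequences of 0 (normal) / 1 (anomaly) from two classifiers."""
    return [int(a) ^ int(b) for a, b in zip(decisions_a, decisions_b)]

# Toy usage: the third record produces a conflict.
ffnn_out = [0, 1, 1, 0]   # e.g. feedforward neural network decisions (made up)
wknn_out = [0, 1, 0, 0]   # e.g. weighted k-NN decisions (made up)
conflicts = detect_conflicts(ffnn_out, wknn_out)
print(conflicts, "conflicts:", sum(conflicts))   # [0, 0, 1, 0] conflicts: 1
```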