Repository of Publications on Search Based Software Engineering

This page is maintained by Yuanyuan Zhang, CREST, Department of Computer Science, University College London, London, UK.

Email: yuanyuan.zhang [AT] ucl.ac.uk

The SBSE authors' information can be found in Who's Who.

 

If you would like to cite the repository website please use this BibTeX entry:

@misc{yzmham:sbse-repository,
  author = {Yuanyuan Zhang and Mark Harman and Afshin Mansouri},
  title = {The {SBSE} Repository: {A} repository and analysis of authors and research articles on Search Based Software Engineering},
  publisher = {{CREST Centre, UCL}},
  note = {crestweb.cs.ucl.ac.uk/resources/sbse_repository/}
}

 

 

    Each entry below lists: Time Stamp | Author | Title | Year | Journal / Proceedings / Book | Type | Application, followed by its Abstract and BibTeX record.
    2008.07.06 Anthony Finkelstein, Mark Harman, S. Afshin Mansouri, Jian Ren & Yuanyuan Zhang "Fairness Analysis" in Requirements Assignments 2008 Proceedings of the 16th IEEE International Requirements Engineering Conference (RE '08), pp. 115-124, Barcelona, Catalunya, Spain, 8-12 September   Inproceedings Requirements/Specifications
    Abstract: Requirements engineering for multiple customers, each of whom has competing and often conflicting priorities, raises issues of negotiation, mediation and conflict resolution. This paper uses a multi-objective optimisation approach to support investigation of the trade-offs in various notions of fairness between multiple customers. Results are presented to validate the approach using two real-world data sets and also using data sets created specifically to stress test the approach. Simple graphical techniques are used to visualize the solution space.
    BibTeX:
    @inproceedings{FinkelsteinHMRZ08,
      author = {Anthony Finkelstein and Mark Harman and S. Afshin Mansouri and Jian Ren and Yuanyuan Zhang},
      title = {``Fairness Analysis'' in Requirements Assignments},
      booktitle = {Proceedings of the 16th IEEE International Requirements Engineering Conference (RE '08)},
      publisher = {IEEE},
      year = {2008},
      pages = {115-124},
      address = {Barcelona, Catalunya, Spain},
      month = {8-12 September},
      doi = {http://dx.doi.org/10.1109/RE.2008.61}
    }
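
    Illustrative sketch (not from the paper): the Python fragment below shows the kind of per-customer satisfaction and fairness objectives that such a multi-objective formulation can trade off. The random value matrix and the two notions chosen (total welfare versus the worst-off customer) are assumptions made for illustration, not the paper's data sets or exact definitions.

    import random

    N_REQ, N_CUST = 10, 3
    # value[c][r]: how much customer c values requirement r (toy data)
    value = [[random.randint(0, 5) for _ in range(N_REQ)] for _ in range(N_CUST)]

    def satisfaction(selected, c):
        got = sum(value[c][r] for r in selected)
        total = sum(value[c])
        return got / total if total else 1.0

    def objectives(selected):
        # Two objectives a Pareto search could optimise together:
        # total welfare and a max-min notion of fairness.
        sats = [satisfaction(selected, c) for c in range(N_CUST)]
        return sum(sats), min(sats)

    selected = {r for r in range(N_REQ) if random.random() < 0.5}
    print("welfare, min-satisfaction:", objectives(selected))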
    					
    2014.09.02 Gordon Fraser & Andrea Arcuri 1600 Faults in 100 Projects: Automatically Finding Faults While Achieving High Coverage with EvoSuite 2015 Empirical Software Engineering, Vol. 20(3), pp. 611-639, June   Article Testing and Debugging
    Abstract: Automated unit test generation techniques traditionally follow one of two goals: Either they try to find violations of automated oracles (e.g., assertions, contracts, undeclared exceptions), or they aim to produce representative test suites (e.g., satisfying branch coverage) such that a developer can manually add test oracles. Search-based testing (SBST) has delivered promising results when it comes to achieving coverage, yet the use in conjunction with automated oracles has hardly been explored, and is generally hampered as SBST does not scale well when there are too many testing targets. In this paper we present a search-based approach to handle both objectives at the same time, implemented in the EvoSuite tool. An empirical study applying EvoSuite on 100 randomly selected open source software projects (the SF100 corpus) reveals that SBST has the unique advantage of being well suited to perform both traditional goals at the same time: efficiently triggering faults, while producing representative test sets for any chosen coverage criterion. In our study, EvoSuite detected twice as many failures in terms of undeclared exceptions as a traditional random testing approach, witnessing thousands of real faults in the 100 open source projects. Two out of every five classes with undeclared exceptions have actual faults, but these are buried within many failures that are caused by implicit preconditions. This "noise" can be interpreted either as a call for further research in improving automated oracles, or as a case for making tools like EvoSuite an integral part of software development to enforce clean program interfaces.
    BibTeX:
    @article{FraserA15b,
      author = {Gordon Fraser and Andrea Arcuri},
      title = {1600 Faults in 100 Projects: Automatically Finding Faults While Achieving High Coverage with EvoSuite},
      journal = {Empirical Software Engineering},
      year = {2015},
      volume = {20},
      number = {3},
      pages = {611-639},
      month = {June},
      doi = {http://dx.doi.org/10.1007/s10664-013-9288-2}
    }
    					
    2014.05.30 Simon Poulding & Tanja E.J. Vos 6th International Workshop on Search-Based Software Testing (SBST 2013): Workshop Summary 2013 Proceedings of the 6th International Workshop on Search-Based Software Testing (SBST '13), pp. 404-405, Luxembourg, Luxembourg, 22 March   Inproceedings Testing and Debugging
    Abstract: This paper summarizes the presentations and discussions at the 6th International Workshop on Search-Based Software Testing (SBST 2013), held in conjunction with the IEEE International Conference on Software Testing, Verification and Validation (ICST 2013).
    BibTeX:
    @inproceedings{PouldingV13,
      author = {Simon Poulding and Tanja E.J. Vos},
      title = {6th International Workshop on Search-Based Software Testing (SBST 2013): Workshop Summary},
      booktitle = {Proceedings of the 6th International Workshop on Search-Based Software Testing (SBST '13)},
      publisher = {IEEE},
      year = {2013},
      pages = {404-405},
      address = {Luxembourg, Luxembourg},
      month = {22 March},
      doi = {http://dx.doi.org/10.1109/ICSTW.2013.68}
    }
    					
    2010.09.13 Gregory Gay A Baseline Method For Search-based Software Engineering 2010 The 6th International Conference on Predictive Models in Software Engineering (PROMISE '10), Timisoara, Romania, 12-13 September   Inproceedings Requirements/Specifications
    Abstract: Background: Search-based Software Engineering (SBSE) uses a variety of techniques such as evolutionary algorithms or meta-heuristic searches but lacks a standard baseline method. Aims: The KEYS2 algorithm meets the criteria of a baseline. It is fast, stable, easy to understand, and presents results that are competitive with standard techniques. Method: KEYS2 operates on the theory that a small subset of variables control the majority of the search space. It uses a greedy search and a Bayesian ranking heuristic to fix the values of these variables, which rapidly forces the search towards stable high-scoring areas. Results: KEYS2 is faster than standard techniques, presents competitive results (assessed with a rank-sum test), and offers stable solutions. Conclusions: KEYS2 is a valid candidate to serve as a baseline technique for SBSE research.
    BibTeX:
    @inproceedings{Gay10,
      author = {Gregory Gay},
      title = {A Baseline Method For Search-based Software Engineering},
      booktitle = {The 6th International Conference on Predictive Models in Software Engineering (PROMISE '10)},
      publisher = {ACM},
      year = {2010},
      address = {Timisoara, Romania},
      month = {12-13 September},
      doi = {http://dx.doi.org/10.1145/1868328.1868332}
    }
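
    Illustrative sketch (not from the paper): a toy rendering of the KEYS2 idea summarised above. Configurations are sampled, each unfixed (variable, value) setting is Bayes-ranked by how often it appears in the best-scoring samples versus the rest, and the top-ranked setting is greedily fixed each round so the search collapses onto stable high-scoring regions. The model, the scoring function and the BORE-style ranking formula are assumptions.

    import random

    VARS = {f"x{i}": [0, 1, 2] for i in range(8)}

    def score(cfg):
        # Toy objective: only x0..x3 matter, echoing the theory that a
        # small subset of variables controls most of the search space.
        return sum(cfg[f"x{i}"] for i in range(4))

    def keys2_round(fixed, samples=100, top=0.1):
        runs = []
        for _ in range(samples):
            cfg = {v: fixed.get(v, random.choice(vals)) for v, vals in VARS.items()}
            runs.append((score(cfg), cfg))
        runs.sort(key=lambda r: -r[0])
        cut = int(samples * top)
        best, rest = runs[:cut], runs[cut:]
        rank = {}
        for v in VARS:
            if v in fixed:
                continue
            for val in VARS[v]:
                b = sum(1 for _, c in best if c[v] == val) / len(best)
                r = sum(1 for _, c in rest if c[v] == val) / len(rest)
                rank[(v, val)] = b * b / (b + r + 1e-9)  # BORE-style support
        (v, val), _ = max(rank.items(), key=lambda kv: kv[1])
        fixed[v] = val  # greedily fix the most influential setting
        return fixed

    fixed = {}
    for _ in range(len(VARS)):
        fixed = keys2_round(fixed)
    print("fixed settings:", fixed)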
    					
    2012.03.05 Karnig Derderian, Mercedes G. Merayo, Robert M. Hierons & Manuel Núñez A Case Study on the Use of Genetic Algorithms to Generate Test Cases for Temporal Systems 2011 Proceedings of the 11th International Work-Conference on Artificial Neural Networks (IWANN '11), Vol. 6692, pp. 396-403, Torremolinos-Málaga, Spain, 8-10 June   Inproceedings Testing and Debugging
    Abstract: Generating test data for formal state-based specifications is computationally expensive. In previous work we presented a framework that addressed this issue by representing the test data generation problem as an optimisation problem. In this paper we analyze a communications protocol to illustrate how the test case generation problem can be presented as a search problem and automated. Genetic algorithms (GAs) and random search are used to generate test data and evaluate the approach. GAs are shown to outperform random search and seem to scale well as the problem size increases. We consider a very simple fitness function that can be used with other evolutionary search techniques and automated test case generation suites.
    BibTeX:
    @inproceedings{DerderianMHN11,
      author = {Karnig Derderian and Mercedes G. Merayo and Robert M. Hierons and Manuel Núñez},
      title = {A Case Study on the Use of Genetic Algorithms to Generate Test Cases for Temporal Systems},
      booktitle = {Proceedings of the 11th International Work-Conference on Artificial Neural Networks (IWANN '11)},
      publisher = {Springer},
      year = {2011},
      volume = {6692},
      pages = {396-403},
      address = {Torremolinos-Málaga, Spain},
      month = {8-10 June},
      doi = {http://dx.doi.org/10.1007/978-3-642-21498-1_50}
    }
    					
    2015.11.06 Mark Harman, Yue Jia & Yuanyuan Zhang Achievements, Open Problems and Challenges for Search Based Software Testing (keynote) 2015 Proceedings of the 8th IEEE International Conference on Software Testing, Verification and Validation (ICST '15), pp. 1-12, Graz, Austria, 13-17 April   Inproceedings Testing and Debugging
    Abstract: Search Based Software Testing (SBST) formulates testing as an optimisation problem, which can be attacked using computational search techniques from the field of Search Based Software Engineering (SBSE). We present an analysis of the SBST research agenda, focusing on the open problems and challenges of testing non-functional properties, in particular a topic we call 'Search Based Energy Testing' (SBET), Multi-objective SBST and SBST for Test Strategy Identification. We conclude with a vision of FIFIVERIFY tools, which would automatically find faults, fix them and verify the fixes. We explain why we think such FIFIVERIFY tools constitute an exciting challenge for the SBSE community that already could be within its reach.
    BibTeX:
    @inproceedings{HarmanJZ15,
      author = {Mark Harman and Yue Jia and Yuanyuan Zhang},
      title = {Achievements, Open Problems and Challenges for Search Based Software Testing (keynote)},
      booktitle = {Proceedings of the 8th IEEE International Conference on Software Testing, Verification and Validation (ICST '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {1-12},
      address = {Graz, Austria},
      month = {13-17 April},
      doi = {http://dx.doi.org/10.1109/ICST.2015.7102580}
    }
    					
    2014.09.02 Gordon Fraser & Andrea Arcuri Achieving Scalable Mutation-based Generation of Whole Test Suites 2015 Empirical Software Engineering, Vol. 20(3), pp. 783-812, June   Article Testing and Debugging
    Abstract: Without complete formal specification, automatically generated software tests need to be manually checked in order to detect faults. This makes it desirable to produce the strongest possible test set while keeping the number of tests as small as possible. As commonly applied coverage criteria like branch coverage are potentially weak, mutation testing has been proposed as a stronger criterion. However, mutation based test generation is hampered because usually there are simply too many mutants, and too many of these are either trivially killed or equivalent. On such mutants, any effort spent on test generation would by definition be wasted. To overcome this problem, our search-based EvoSuite test generation tool integrates two novel optimizations: First, we avoid redundant test executions on mutants by monitoring state infection conditions, and second we use whole test suite generation to optimize test suites towards killing the highest number of mutants, rather than selecting individual mutants. These optimizations allowed us to apply EvoSuite to a random sample of 100 open source projects, consisting of a total of 8,963 classes and more than two million lines of code, leading to a total of 1,380,302 mutants. The experiment demonstrates that our approach scales well, making mutation testing a viable test criterion for automated test case generation tools, and allowing us to analyze the relationship of branch coverage and mutation testing in detail.
    BibTeX:
    @article{FraserA15,
      author = {Gordon Fraser and Andrea Arcuri},
      title = {Achieving Scalable Mutation-based Generation of Whole Test Suites},
      journal = {Empirical Software Engineering},
      year = {2015},
      volume = {20},
      number = {3},
      pages = {783-812},
      month = {June},
      doi = {http://dx.doi.org/10.1007/s10664-013-9299-z}
    }
    					
    2014.08.14 Luiz Martins, Ricardo Nobre, Alexandre Delbem, Eduardo Marques & Joao Cardoso Coello Coello, C.A. (Ed.) A Clustering-Based Approach for Exploring Sequences of Compiler Optimizations 2014 Proceedings of the 2014 IEEE Congress on Evolutionary Computation, pp. 2436-2443, Beijing, China, 6-11 July   Inproceedings
    Abstract: In this paper we present a clustering-based selection approach for reducing the number of compilation passes used in the search space during the exploration of optimisations, aiming at increasing the performance of a given function and/or code fragment. The basic idea is to identify similarities among functions and to use the passes previously explored each time a new function is being compiled. This subset of compiler optimisations is then used by a Design Space Exploration (DSE) process. The identification of similarities is obtained by a data mining method which is applied to a symbolic code representation that translates the main structures of the source code to a sequence of symbols based on transformation rules. Experiments were performed for evaluating the effectiveness of the proposed approach. The selection of compiler optimisation sequences considering a set of 49 compilation passes and targeting a Xilinx MicroBlaze processor was performed aiming at latency improvements for 41 functions from Texas Instruments benchmarks. The results reveal that the pass selection based on our clustering method achieves a significant gain in execution time over the full search space while still achieving important performance speedups.
    BibTeX:
    @inproceedings{MartinsNDMC14,
      author = {Luiz Martins and Ricardo Nobre and Alexandre Delbem and Eduardo Marques and Joao Cardoso},
      title = {A Clustering-Based Approach for Exploring Sequences of Compiler Optimizations},
      booktitle = {Proceedings of the 2014 IEEE Congress on Evolutionary Computation},
      publisher = {IEEE},
      year = {2014},
      pages = {2436-2443},
      address = {Beijing, China},
      month = {6-11 July},
      doi = {http://dx.doi.org/10.1109/CEC.2014.6900634}
    }
    					
    2014.08.14 Hamid Masoud & Saeed Jalili A Clustering-based Model for Class Responsibility Assignment Problem in Object-Oriented Analysis 2014 Journal of Systems and Software, Vol. 93, pp. 110-131, July   Article
    Abstract: Assigning responsibilities to classes is a vital task in object-oriented analysis and design, and it directly affects the maintainability and reusability of software systems. There are many methodologies to help recognize the responsibilities of a system and assign them to classes, but all of them depend greatly on human judgment and decision-making. In this paper, we propose a clustering-based model to solve the class responsibility assignment (CRA) problem. The proposed model employs a novel interactive graph-based method to find inheritance hierarchies, and two novel criteria to determine the appropriate number of classes. It reduces the dependency of CRA on human judgment and provides a decision-making support for CRA in class diagrams. To evaluate the proposed model, we apply three different hierarchical agglomerative clustering algorithms and two different types of similarity measures. By comparing the obtained results of clustering techniques with the models designed by multi-objective genetic algorithm (MOGA), it is revealed that clustering techniques yield promising results.
    BibTeX:
    @article{MasoudJ14,
      author = {Hamid Masoud and Saeed Jalili},
      title = {A Clustering-based Model for Class Responsibility Assignment Problem in Object-Oriented Analysis},
      journal = {Journal of Systems and Software},
      year = {2014},
      volume = {93},
      pages = {110-131},
      month = {July},
      doi = {http://dx.doi.org/10.1016/j.jss.2014.02.053}
    }
    					
    2007.12.02 Kiarash Mahdavi A Clustering Genetic Algorithm for Software Modularisation with Multiple Hill Climbing Approach 2005, February, School: Brunel University West London, UK   Phdthesis Distribution and Maintenance
    Abstract: Software clustering is a useful technique for software comprehension and re-engineering. In this thesis we examine Software Module Clustering by Hill Climbing (HC) and Genetic Algorithms (GA). Our work primarily addresses graph partitioning using HC and GA. The software modules are represented as directed graphs and clustered using novel HC and GA search techniques. We use a fitness criterion to direct the search. The search consists of multiple preliminary searches to gather information about the search landscape, which is then converted to Building Blocks and used for subsequent search. This thesis includes the results of a series of empirical studies of these novel HC and GA techniques. These results show the technique to be an effective way to improve Software Module Clustering. They also show that our GA reduces the need for a user-defined solution structure, which requires an in-depth understanding of the solution landscape, and that it can also help improve efficiency. We also present work on the automation of useful Building Block recognition and the results of an experiment which shows that Building Blocks created in this way also help our GA search, making this an ideal opportunity for further investigation.
    BibTeX:
    @phdthesis{Mahdavi05,
      author = {Kiarash Mahdavi},
      title = {A Clustering Genetic Algorithm for Software Modularisation with Multiple Hill Climbing Approach},
      school = {Brunel University West London, UK},
      year = {2005},
      month = {February}
    }
    					
    2007.12.02 Enrique Alba & Francisco Chicano ACOhg: Dealing with Huge Graphs 2007 Proceedings of the 9th annual Conference on Genetic and Evolutionary Computation (GECCO '07), pp. 10-17, London, England, 7-11 July   Inproceedings Software/Program Verification
    Abstract: Ant Colony Optimization (ACO) has been successfully applied to those combinatorial optimization problems which can be translated into a graph exploration. Artificial ants build solutions step by step, adding solution components that are represented by graph nodes. The existing ACO algorithms are suitable when the graph is not very large (thousands of nodes) but are not useful when the graph size can be a challenge for the computer memory and the graph cannot be completely generated or stored in it. In this paper we study a new ACO model that overcomes the difficulties found when working with a huge construction graph. In addition to the description of the model, we analyze in the experimental section one technique used for dealing with this huge graph exploration. The results of the analysis can help to understand the meaning of the new parameters introduced and to decide which parameterization is more suitable for a given problem. For the experiments we use one real problem of capital importance in Software Engineering: refutation of safety properties in concurrent systems. This way, we foster an innovative research line related to the application of ACO to formal methods in Software Engineering.
    BibTeX:
    @inproceedings{AlbaC07d,
      author = {Enrique Alba and Francisco Chicano},
      title = {ACOhg: Dealing with Huge Graphs},
      booktitle = {Proceedings of the 9th annual Conference on Genetic and Evolutionary Computation (GECCO '07)},
      publisher = {ACM},
      year = {2007},
      pages = {10-17},
      address = {London, England},
      month = {7-11 July},
      doi = {http://dx.doi.org/10.1145/1276958.1276961}
    }
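
    Illustrative sketch (not from the paper): a minimal ant-colony path construction on a small explicit graph, to show the kind of graph exploration the abstract builds on; ACOhg's specific mechanisms for bounding memory on huge, partially generated graphs are not reproduced here. The graph, parameters and goal node are assumptions.

    import random

    graph = {"s": ["a", "b"], "a": ["c"], "b": ["c", "d"],
             "c": ["t"], "d": ["t"], "t": []}
    tau = {(u, v): 1.0 for u in graph for v in graph[u]}  # pheromone trails
    RHO, Q = 0.5, 1.0  # evaporation rate and deposit constant

    def walk(start="s", goal="t", max_len=10):
        node, path = start, [start]
        while node != goal and len(path) < max_len and graph[node]:
            succs = graph[node]
            weights = [tau[(node, v)] for v in succs]
            node = random.choices(succs, weights=weights)[0]
            path.append(node)
        return path if node == goal else None

    for _ in range(50):  # colony iterations
        paths = [p for p in (walk() for _ in range(10)) if p]
        for e in tau:  # evaporation
            tau[e] *= 1 - RHO
        for p in paths:  # reinforce edges of short (good) paths
            for e in zip(p, p[1:]):
                tau[e] += Q / len(p)

    found = [p for p in (walk() for _ in range(20)) if p]
    print("best path found:", min(found, key=len) if found else None)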
    					
    2015.12.09 Prashant Vats & Manju Mandot A Comparative Analysis of Ant Colony Optimization for its Applications into Software Testing 2014 Proceedings of Innovative Applications of Computational Intelligence on Power, Energy and Controls with their impact on Humanity (CIPECH '14), pp. 476-481, Ghaziabad, India, 28-29 November   Inproceedings Testing and Debugging
    Abstract: Ant Colony Optimization algorithms are metaheuristics built on search-based algorithms. They apply the natural phenomenon by which ants find the best possible path, the one covering the minimum distance from the food source to the ant colony, which is then followed by the rest of the ants, resulting in an optimized path. This phenomenon can be applied to provide optimized solutions to some complex computational problems. In this paper, we carry out a review of the applications of Ant Colony Optimization algorithms at the various levels of software testing, demonstrating their worth in providing solutions to the various aspects of software testing.
    BibTeX:
    @inproceedings{VatsM14,
      author = {Prashant Vats and Manju Mandot},
      title = {A Comparative Analysis of Ant Colony Optimization for its Applications into Software Testing},
      booktitle = {Proceedings of Innovative Applications of Computational Intelligence on Power, Energy and Controls with their impact on Humanity (CIPECH '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {476-481},
      address = {Ghaziabad, India},
      month = {28-29 November},
      doi = {http://dx.doi.org/10.1109/CIPECH.2014.7019110}
    }
    					
    2016.03.08 Thelma Elita Colanzi & Silvia Regina Vergilio A Comparative Analysis of Two Multi-objective Evolutionary Algorithms in Product Line Architecture Design Optimization 2014 Proceedings of IEEE 26th International Conference on Tools with Artificial Intelligence (ICTAI '14), pp. 681-688, Limassol, Cyprus, 10-12 November   Inproceedings
    Abstract: The Product Line Architecture (PLA) design is a multi-objective optimization problem that can be properly solved with search-based algorithms. However, search-based PLA design is an incipient research field. Due to this, works in this field have addressed the main points needed to solve the problem: adequate representation, specific search operators and suitable evaluation fitness functions. Similarly to what happens in the search-based design of traditional software, existing works on search-based PLA design use NSGA-II without evaluating the characteristics of this algorithm, such as the use of the crossover operator. Considering this fact, this paper reports results from a comparative analysis of two algorithms, NSGA-II and PAES, applied to the PLA design problem. PAES was chosen because it implements a different evolution strategy that does not employ crossover. An experimental study was carried out with nine PLAs, and the results of the conducted study attest that NSGA-II performs better than PAES in the PLA design context.
    BibTeX:
    @inproceedings{ColanziV14,
      author = {Thelma Elita Colanzi and Silvia Regina Vergilio},
      title = {A Comparative Analysis of Two Multi-objective Evolutionary Algorithms in Product Line Architecture Design Optimization},
      booktitle = {Proceedings of IEEE 26th International Conference on Tools with Artificial Intelligence (ICTAI '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {681-688},
      address = {Limassol, Cyprus},
      month = {10-12 November},
      doi = {http://dx.doi.org/10.1109/ICTAI.2014.107}
    }
    					
    2009.02.16 Wasif Afzal & Richard Torkar A Comparative Evaluation of using Genetic Programming for Predicting Fault Count Data 2008 Proceedings of the 3rd International Conference on Software Engineering Advances (ICSEA '08), pp. 407-414, Sliema, Malta, 26-31 October   Inproceedings Software/Program Verification
    Abstract: There have been a number of software reliability growth models (SRGMs) proposed in the literature. Due to several reasons, such as violation of models' assumptions and complexity of models, practitioners face difficulties in knowing which models to apply in practice. This paper presents a comparative evaluation of traditional models and the use of genetic programming (GP) for modeling software reliability growth based on weekly fault count data of three different industrial projects. The motivation for using a GP approach is its ability to evolve a model based entirely on prior data without the need to make underlying assumptions. The results show the strengths of using GP for predicting fault count data.
    BibTeX:
    @inproceedings{AfzalT08b,
      author = {Wasif Afzal and Richard Torkar},
      title = {A Comparative Evaluation of using Genetic Programming for Predicting Fault Count Data},
      booktitle = {Proceedings of the 3rd International Conference on Software Engineering Advances (ICSEA '08)},
      publisher = {IEEE},
      year = {2008},
      pages = {407-414},
      address = {Sliema, Malta},
      month = {26-31 October},
      doi = {http://dx.doi.org/10.1109/ICSEA.2008.9}
    }
    					
    2009.07.29 Raluca Lefticaru & Florentin Ipate A Comparative Landscape Analysis of Fitness Functions for Search-based Testing 2008 Proceedings of the 10th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC '08), pp. 201-208, Timisoara, Romania, 26-29 September   Inproceedings Testing and Debugging
    Abstract: Landscape analysis of fitness functions is an important topic. This paper makes an attempt to characterize the search problems associated with the fitness functions used in search-based testing, employing the following measures: diameter, autocorrelation and fitness distance correlation. In a previous work, a general form of objective functions for structural search-based software testing was tailored for state-based testing. A comparison is performed in this paper between the general fitness functions and some problem-specific fitness functions, taking into account their performance with different search methods.
    BibTeX:
    @inproceedings{LefticaruI08c,
      author = {Raluca Lefticaru and Florentin Ipate},
      title = {A Comparative Landscape Analysis of Fitness Functions for Search-based Testing},
      booktitle = {Proceedings of the 10th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC '08)},
      publisher = {IEEE},
      year = {2008},
      pages = {201-208},
      address = {Timisoara, Romania},
      month = {26-29 September},
      doi = {http://dx.doi.org/10.1109/SYNASC.2008.69}
    }
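
    Illustrative sketch (not from the paper): fitness distance correlation, one of the three landscape measures named in the abstract, computed on a toy landscape. The encoding, fitness and distance-to-optimum below are placeholders, not the paper's test objects.

    import random

    def fdc(fitnesses, distances):
        # FDC = cov(f, d) / (std(f) * std(d))
        n = len(fitnesses)
        mf, md = sum(fitnesses) / n, sum(distances) / n
        cov = sum((f - mf) * (d - md) for f, d in zip(fitnesses, distances)) / n
        sf = (sum((f - mf) ** 2 for f in fitnesses) / n) ** 0.5
        sd = (sum((d - md) ** 2 for d in distances) / n) ** 0.5
        return cov / (sf * sd)

    # Toy landscape: fitness is distance from a known optimum plus noise,
    # so FDC should come out strongly positive (an "easy" landscape).
    optimum = 50
    samples = [random.randint(0, 100) for _ in range(1000)]
    dist = [abs(x - optimum) for x in samples]
    fit = [d + random.gauss(0, 2) for d in dist]
    print("FDC:", round(fdc(fit, dist), 3))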
    					
    2008.07.20 Ghinwa Baradhi & Nashat Mansour A Comparative Study of Five Regression Testing Algorithms 1997 Proceedings of the Australian Software Engineering Conference (ASWEC '97), pp. 174-182, Sydney, NSW, Australia, 28 September - 2 October   Inproceedings Testing and Debugging
    Abstract: We compare five regression testing algorithms that include: slicing, incremental, firewall, genetic, and simulated annealing algorithms. The comparison is based on the following ten quantitative and qualitative criteria: execution time, number of selected retests, precision, inclusiveness, user parameters, handling of global variables, type of maintenance, type of testing, level of testing, and type of approach. The experimental results show that the five algorithms are suitable for different requirements of regression testing. Nevertheless, the incremental algorithm shows more favorable properties than the others.
    BibTeX:
    @inproceedings{BaradhiM97,
      author = {Ghinwa Baradhi and Nashat Mansour},
      title = {A Comparative Study of Five Regression Testing Algorithms},
      booktitle = {Proceedings of the Australian Software Engineering Conference (ASWEC '97)},
      publisher = {IEEE},
      year = {1997},
      pages = {174-182},
      address = {Sydney, NSW, Australia},
      month = {28 September - 2 October},
      doi = {http://dx.doi.org/10.1109/ASWEC.1997.623769}
    }
    					
    2016.03.09 Aurora Ramírez, José Raúl Romero & Sebastián Ventura A Comparative Study of Many-objective Evolutionary Algorithms for the Discovery of Software Architectures 2016 Empirical Software Engineering, Vol. 21(6), pp. 2546-2600, December   Article
    Abstract: During the design of complex systems, software architects have to deal with a tangle of abstract artefacts, measures and ideas to discover the most fitting underlying architecture. A common way to structure such complex systems is in terms of their interacting software components, whose composition and connections need to be properly adjusted. Along with the expected functionality, non-functional requirements are key at this stage to guide the many design alternatives to be evaluated by software architects. The appearance of Search Based Software Engineering (SBSE) brings an approach that supports the software engineer along the design process. Evolutionary algorithms can be applied to deal with the abstract and highly combinatorial optimisation problem of architecture discovery from a multiple objective perspective. The definition and resolution of many-objective optimisation problems is currently becoming an emerging challenge in SBSE, where the application of sophisticated techniques within the evolutionary computation field needs to be considered. In this paper, diverse non-functional requirements are selected to guide the evolutionary search, leading to the definition of several optimisation problems with up to 9 metrics concerning the architectural maintainability. An empirical study of the behaviour of 8 multi- and many-objective evolutionary algorithms is presented, where the quality and type of the returned solutions are analysed and discussed from the perspective of both the evolutionary performance and those aspects of interest to the expert. Results show how some many-objective evolutionary algorithms provide useful mechanisms to effectively explore design alternatives on highly dimensional objective spaces.
    BibTeX:
    @article{RamirezRV16,
      author = {Aurora Ramírez and José Raúl Romero and Sebastián Ventura},
      title = {A Comparative Study of Many-objective Evolutionary Algorithms for the Discovery of Software Architectures},
      journal = {Empirical Software Engineering},
      year = {2016},
      volume = {21},
      number = {6},
      pages = {2546-2600},
      month = {December},
      doi = {http://dx.doi.org/10.1007/s10664-015-9399-z}
    }
    					
    2014.09.03 Christopher L. Simons & Jim Smith A Comparison of Meta-heuristic Search for Interactive Software Design 2013 Soft Computing, Vol. 17(11), pp. 2147-2162, April   Article Design Tools and Techniques
    Abstract: Advances in processing capacity, coupled with the desire to tackle problems where a human subjective judgment plays an important role in determining the value of a proposed solution, have led to a dramatic rise in the number of applications of Interactive Artificial Intelligence. Of particular note is the coupling of meta-heuristic search engines with user-provided evaluation and rating of solutions, usually in the form of Interactive Evolutionary Algorithms (IEAs). These have a well-documented history of successes, but arguably the preponderance of IEAs stems from this history rather than from a conscious choice of meta-heuristic based on the characteristics of the problem at hand. This paper sets out to examine the basis for that assumption, taking as a case study the domain of interactive software design. We consider a range of factors that should affect the design choice, including ease of use, scalability and, of course, performance, i.e. the ability to generate good solutions within the limited number of evaluations available in interactive work before humans lose focus. We then evaluate three methods, namely greedy local search, an evolutionary algorithm and ant colony optimization (ACO), with a variety of representations for candidate solutions. Results show that after suitable parameter tuning, ACO is highly effective within interactive search and out-performs evolutionary algorithms with respect to increasing numbers of attributes and methods in the software design problem. However, when larger numbers of classes are present in the software design, an evolutionary algorithm using a naïve grouping integer-based representation appears more scalable.
    BibTeX:
    @article{SimonsS13,
      author = {Christopher L. Simons and Jim Smith},
      title = {A Comparison of Meta-heuristic Search for Interactive Software Design},
      journal = {Soft Computing},
      year = {2013},
      volume = {17},
      number = {11},
      pages = {2147-2162},
      month = {April},
      doi = {http://dx.doi.org/10.1007/s00500-013-1039-1}
    }
    					
    2007.12.02 Joachim Wegener & Frank Mueller A Comparison of Static Analysis and Evolutionary Testing for the Verification of Timing Constraints 2001 Real-Time Systems, Vol. 21(3), pp. 241-268, November   Article Testing and Debugging
    Abstract: This paper contrasts two methods to verify timing constraints of real-time applications. The method of static analysis predicts the worst-case and best-case execution times of a task's code by analyzing execution paths and simulating processor characteristics without ever executing the program or requiring the program's input. Evolutionary testing is an iterative testing procedure, which approximates the extreme execution times within several generations. By executing the test object dynamically and measuring the execution times the inputs are guided yielding gradually tighter predictions of the extreme execution times. We examined both approaches on a number of real world examples. The results show that static analysis and evolutionary testing are complementary methods, which together provide upper and lower bounds for both worst-case and best-case execution times.
    BibTeX:
    @article{WegenerM01,
      author = {Joachim Wegener and Frank Mueller},
      title = {A Comparison of Static Analysis and Evolutionary Testing for the Verification of Timing Constraints},
      journal = {Real-Time Systems},
      publisher = {Kluwer Academic Publishers},
      year = {2001},
      volume = {21},
      number = {3},
      pages = {241-268},
      month = {November},
      doi = {http://dx.doi.org/10.1023/A:1011132221066}
    }
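
    Illustrative sketch (not from the paper): the core loop of evolutionary timing testing as summarised above; a GA evolves inputs, fitness is the measured execution time of the test object, and selection pushes towards the extreme times. The test object, encoding and mutation below are toy assumptions.

    import random
    import time

    def test_object(x):
        # Stand-in for the real-time task under test; its running time
        # depends on the input.
        t = 0
        for _ in range(x % 5000):
            t += 1
        return t

    def fitness(x):
        # Measured execution time in seconds (maximise for worst-case
        # estimates; minimise instead for best-case).
        start = time.perf_counter()
        test_object(x)
        return time.perf_counter() - start

    pop = [random.randint(0, 10**6) for _ in range(20)]
    for _ in range(30):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:10]
        children = [random.choice(parents) ^ random.getrandbits(8)  # crude mutation
                    for _ in range(10)]
        pop = parents + children
    pop.sort(key=fitness, reverse=True)
    print("input with longest observed execution time:", pop[0])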
    					
    2013.08.05 Jim Smith & Christopher L. Simons A Comparison of Two Memetic Algorithms for Software Class Modelling 2013 Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation (GECCO '13), pp. 1485-1492, Amsterdam, The Netherlands, 6-10 July   Inproceedings Design Tools and Techniques
    Abstract: Recent research has demonstrated that the problem of class modelling within early-cycle object-oriented software engineering can be successfully tackled by posing it as a search problem to be addressed with meta-heuristics. This 'Search Based Software Engineering' approach has been illustrated using both Evolutionary Algorithms and Ant Colony Optimisation to perform the underlying search. Each has been shown to display strengths and weaknesses, both in terms of how easily 'standard' algorithms can be applied to the domain, and of optimisation performance. This paper extends that work by considering the effect of incorporating Local Search. Specifically we examine the hypothesis that within a memetic framework the choice of global search heuristic does not significantly affect search performance, freeing the decision to be made on other more subjective factors. Results show that in fact the use of local search is not always beneficial to the Ant Colony Algorithm, whereas for the Evolutionary Algorithm with order-based recombination it is highly effective at improving both the quality and speed of optimisation. Across a range of parameter settings ACO found its best solutions earlier than EAs, but those solutions were of lower quality than those found by EAs. For both algorithms we demonstrated that the number of constraints present, which relates to the number of classes created, has a far bigger impact on solution quality and time than the size of the problem in terms of numbers of attributes and methods.
    BibTeX:
    @inproceedings{SmithS13,
      author = {Jim Smith and Christopher L. Simons},
      title = {A Comparison of Two Memetic Algorithms for Software Class Modelling},
      booktitle = {Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation (GECCO '13)},
      publisher = {ACM},
      year = {2013},
      pages = {1485-1492},
      address = {Amsterdam, The Netherlands},
      month = {6-10 July},
      doi = {http://dx.doi.org/10.1145/2463372.2463552}
    }
    					
    2014.08.14 Luciano Souza, Ricardo Prudencio & Flavia Barros Coello Coello, C.A. (Ed.) A Comparison Study of Binary Multi-Objective Particle Swarm Optimization Approaches for Test Case Selection 2014 Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC '14), pp. 2164-2171, Beijing, China, 6-11 July   Inproceedings Testing and Debugging
    Abstract: During the software testing process many test suites can be generated in order to evaluate and assure the quality of the products. In some cases the execution of all suites cannot fit the available resources (time, people, etc.). Hence, automatic Test Case (TC) selection can be used to reduce the suites based on some selection criterion. This process can be treated as an optimisation problem, aiming to find a subset of TCs which optimises one or more objective functions (i.e., selection criteria). The majority of search-based works focus on single-objective selection. In this light, we developed mechanisms for functional TC selection which consider two objectives simultaneously: maximising requirements coverage while minimising cost in terms of TC execution effort. These mechanisms were implemented by deploying multi-objective techniques based on Particle Swarm Optimisation (PSO). Due to the drawbacks of the original binary version of PSO, we implemented five binary PSO algorithms and combined them with multi-objective versions of PSO in order to create new optimisation strategies applied to TC selection. The experiments were performed on two real test suites, revealing the feasibility of the proposed strategies and the differences among them.
    BibTeX:
    @inproceedings{SouzaPB14,
      author = {Luciano Souza and Ricardo Prudencio and Flavia Barros},
      title = {A Comparison Study of Binary Multi-Objective Particle Swarm Optimization Approaches for Test Case Selection},
      booktitle = {Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {2164-2171},
      address = {Beijing, China},
      month = {6-11 July},
      doi = {http://dx.doi.org/10.1109/CEC.2014.6900522}
    }
    					
    2012.10.25 Bogdan Marculescu, Robert Feldt & Richard Torkar A Concept for an Interactive Search-Based Software Testing System 2012 Proceedings of the 4th International Symposium on Search Based Software Engineering (SSBSE '12), Vol. 7515, pp. 273-278, Riva del Garda, Italy, 28-30 September   Inproceedings Testing and Debugging
    Abstract: Software is an increasingly important part of various products, although not always the dominant component. For these software-intensive systems it is common that the software is assembled, and sometimes even developed, by domain specialists rather than by software engineers. To leverage the domain specialists' knowledge while maintaining quality we need testing tools that require only limited knowledge of software testing. Since each domain has unique quality criteria and trade-offs and there is a large variation in both software modeling and implementation syntax as well as semantics it is not easy to envisage general software engineering support for testing tasks. Particularly not since such support must allow interaction between the domain specialists and the testing system for iterative development. In this paper we argue that search-based software testing can provide this type of general and interactive testing support and describe a proof of concept system to support this argument. The system separates the software engineering concerns from the domain concerns and allows domain specialists to interact with the system in order to select the quality criteria being used to determine the fitness of potential solutions.
    BibTeX:
    @inproceedings{MarculescuFT12,
      author = {Bogdan Marculescu and Robert Feldt and Richard Torkar},
      title = {A Concept for an Interactive Search-Based Software Testing System},
      booktitle = {Proceedings of the 4th International Symposium on Search Based Software Engineering (SSBSE '12)},
      publisher = {Springer},
      year = {2012},
      volume = {7515},
      pages = {273-278},
      address = {Riva del Garda, Italy},
      month = {28-30 September},
      doi = {http://dx.doi.org/10.1007/978-3-642-33119-0_21}
    }
    					
    2016.03.09 Shadab Irfan & Prabhat Ranjan A Concept of Out Degree in CFG for Optimal Test Data using Genetic Algorithm 2012 Proceedings of the 1st International Conference on Recent Advances in Information Technology (RAIT '12), pp. 436-441, Dhanbad, India, 15-17 March   Inproceedings Testing and Debugging
    Abstract: Generating efficient test data is one of the major problems during the testing phase. This task is complex and very time consuming, and researchers have proposed different methods for generating test data. This paper proposes a technique that takes the source code of the program, transforms it into a Control Flow Graph (CFG), calculates the out-degree, and then applies a Genetic Algorithm over it to generate valuable test data during the testing phase. The advantage of using the concept of out-degree in the CFG is that it simplifies the calculation of the fitness function and reduces the overall testing time. The proposed technique not only simplifies the overall task of applying Genetic Algorithm operators over test data but also reduces the complexity and testing time, thereby increasing the efficiency of the technique.
    BibTeX:
    @inproceedings{IrfanR12,
      author = {Shadab Irfan and Prabhat Ranjan},
      title = {A Concept of Out Degree in CFG for Optimal Test Data using Genetic Algorithm},
      booktitle = {Proceedings of the 1st International Conference on Recent Advances in Information Technology (RAIT '12)},
      publisher = {IEEE},
      year = {2012},
      pages = {436-441},
      address = {Dhanbad, India},
      month = {15-17 March},
      doi = {http://dx.doi.org/10.1109/RAIT.2012.6194634}
    }
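
    Illustrative sketch (not from the paper): the abstract's central idea rendered as a toy, deriving each CFG node's out-degree and using it in a cheap fitness function that rewards inputs whose traces pass through high-out-degree (branching) nodes. The CFG, the trace function and the GA loop are assumptions, not the paper's exact formulation.

    import random

    cfg = {1: [2, 3], 2: [4], 3: [4, 5], 4: [6], 5: [6], 6: []}
    out_degree = {n: len(succ) for n, succ in cfg.items()}

    def trace(x):
        # Toy program path: the input decides which branches are taken.
        path = [1, 2 if x % 2 == 0 else 3]
        if path[-1] == 3:
            path.append(4 if x > 50 else 5)
        else:
            path.append(4)
        path.append(6)
        return path

    def fitness(x):
        # Favour inputs whose execution path visits branching nodes.
        return sum(out_degree[n] for n in trace(x))

    pop = [random.randint(0, 100) for _ in range(20)]
    for _ in range(25):
        pop.sort(key=fitness, reverse=True)
        pop = pop[:10] + [max(0, p + random.randint(-10, 10)) for p in pop[:10]]
    best = max(pop, key=fitness)
    print("best input:", best, "fitness:", fitness(best))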
    					
    2014.09.22 Wael Kessentini, Marouane Kessentini, Houari Sahraoui, Slim Bechikh & Ali Ouni A Cooperative Parallel Search-Based Software Engineering Approach for Code-Smells Detection 2014 IEEE Transactions on Software Engineering, Vol. 40(9), pp. 841-861, September   Article
    Abstract: We propose in this paper to consider code-smells detection as a distributed optimization problem. The idea is that different methods are combined in parallel during the optimization process to find a consensus regarding the detection of code-smells. To this end, we used Parallel Evolutionary Algorithms (P-EA), where many evolutionary algorithms with different adaptations (fitness functions, solution representations, and change operators) are executed in a parallel, cooperative manner to solve a common goal, which is the detection of code-smells. An empirical evaluation compares the implementation of our cooperative P-EA approach with random search, two single-population-based approaches and two code-smells detection techniques that are not based on meta-heuristic search. The statistical analysis of the obtained results provides evidence to support the claim that cooperative P-EA is more efficient and effective than state-of-the-art detection approaches, based on a benchmark of nine large open source systems where precision and recall scores of more than 85 percent are obtained on a variety of eight different types of code-smells.
    BibTeX:
    @article{KessentiniKSBO14,
      author = {Wael Kessentini and Marouane Kessentini and Houari Sahraoui and Slim Bechikh and Ali Ouni},
      title = {A Cooperative Parallel Search-Based Software Engineering Approach for Code-Smells Detection},
      journal = {IEEE Transactions on Software Engineering},
      year = {2014},
      volume = {40},
      number = {9},
      pages = {841-861},
      month = {September},
      doi = {http://dx.doi.org/10.1109/TSE.2014.2331057}
    }
    					
    2007.12.02 George S. Cowan, Robert G. Reynolds & George Jr. Cowan Acquisition of Software Engineering Knowledge SWEEP: An Automatic Programming System Based on Genetic Programming and Cultural Algorithms 2004, Vol. 14, pp. 156, August   Book Distribution and Maintenance
    Abstract: This is the first book that attempts to provide a framework in which to embed an automatic programming system based on evolutionary learning (genetic programming) into a traditional software engineering environment. As such, it looks at how traditional software engineering knowledge can be integrated with an evolutionary programming process in a symbiotic way. Contents: * SWEEP: A System for the Software Engineering of Evolving Programs * The Genetic Programming Element Agents * The Metrics Apprentice: Using Cultural Algorithms to Formulate Quality Metrics for Software Systems * An Example Problem for Automatic Programming: Solving the Noisy Sine Problem with Discipulus * Data Collection and Analysis * Analysis: The Relationship of Software Metrics to Bloat * Defining a New Software Metric to Estimate Generalisation Using the Metrics Apprentice
    BibTeX:
    @book{CowanR04,
      author = {George S. Cowan and Robert G. Reynolds and George Jr. Cowan},
      title = {Acquisition of Software Engineering Knowledge SWEEP: An Automatic Programming System Based on Genetic Programming and Cultural Algorithms},
      publisher = {World Scientific},
      year = {2004},
      volume = {14},
      pages = {156},
      month = {August},
      url = {http://www.amazon.com/Acquisition-Software-Engineering-Knowledge-Programming/dp/9810229208}
    }
    					
    2016.02.17 Martin Monperrus A Critical Review of "Automatic Patch Generation Learned from Human-Written Patches": Essay on the Problem Statement and the Evaluation of Automatic Software Repair 2014 Proceedings of the 36th International Conference on Software Engineering (ICSE '14), pp. 234-242, Hyderabad, India, 31 May - 7 June   Inproceedings
    Abstract: At ICSE'2013, there was the first session ever dedicated to automatic program repair. In this session, Kim et al. presented PAR, a novel template-based approach for fixing Java bugs. We strongly disagree with key points of this paper. Our critical review has two goals. First, we aim at explaining why we disagree with Kim and colleagues and why the reasons behind this disagreement are important for research on automatic software repair in general. Second, we aim at contributing to the field with a clarification of the essential ideas behind automatic software repair. In particular we discuss the main evaluation criteria of automatic software repair: understandability, correctness and completeness. We show that depending on how one sets up the repair scenario, the evaluation goals may be contradictory. Eventually, we discuss the nature of fix acceptability and its relation to the notion of software correctness.
    BibTeX:
    @inproceedings{Monperrus14,
      author = {Martin Monperrus},
      title = {A Critical Review of "Automatic Patch Generation Learned from Human-Written Patches": Essay on the Problem Statement and the Evaluation of Automatic Software Repair},
      booktitle = {Proceedings of the 36th International Conference on Software Engineering (ICSE '14)},
      publisher = {ACM},
      year = {2014},
      pages = {234-242},
      address = {Hyderabad, India},
      month = {31 May - 7 June},
      doi = {http://dx.doi.org/10.1145/2568225.2568324}
    }
    					
    2008.04.21 Christopher L. Simons & Ian C. Parmee A Cross-Disciplinary Technology Transfer for Search-based Evolutionary Computing: from Engineering Design to Software Engineering Design 2007 Engineering Optimization, Vol. 39(5), pp. 631-648, July   Article Design Tools and Techniques
    Abstract: Although object-oriented conceptual software design is difficult to learn and perform, computational tool support for the conceptual software designer is limited. In conceptual engineering design, however, computational tools exploiting interactive evolutionary computation (EC) have shown significant utility. This article investigates the cross-disciplinary technology transfer of search-based EC from engineering design to software engineering design in an attempt to provide support for the conceptual software designer. Firstly, genetic operators inspired by genetic algorithms (GAs) and evolutionary programming are evaluated for their effectiveness against a conceptual software design representation using structural cohesion as an objective fitness function. Building on this evaluation, a multi-objective GA inspired by a non-dominated Pareto sorting approach is investigated for an industrial-scale conceptual design problem. Results obtained reveal a mass of interesting and useful conceptual software design solution variants of equivalent optimality, a typical characteristic of successful multi-objective evolutionary search techniques employed in conceptual engineering design. The mass of software design solution variants produced suggests that transferring search-based technology across disciplines has significant potential to provide computationally intelligent tool support for the conceptual software designer.
    BibTeX:
    @article{SimonsP07,
      author = {Christopher L. Simons and Ian C. Parmee},
      title = {A Cross-Disciplinary Technology Transfer for Search-based Evolutionary Computing: from Engineering Design to Software Engineering Design},
      journal = {Engineering Optimization},
      year = {2007},
      volume = {39},
      number = {5},
      pages = {631-648},
      month = {July},
      doi = {http://dx.doi.org/10.1080/03052150701382974}
    }
    					
    2011.03.07 Peter M. Kruse, Joachim Wegener & Stefan Wappler A Cross-Platform Test System for Evolutionary Black-Box Testing of Embedded Systems 2010 ACM SIGEVOlution, Vol. 5(1), pp. 3-9, May   Article Testing and Debugging
    Abstract: When developing an electronic control unit (ECU) in a domain like the automotive industry, tests are performed on several test platforms, such as model-in-the-loop, software-in-the-loop and hardware-in-the-loop in order to find faults in early development stages. Test cases must be specified to verify the properties demanded of the developed system on these platforms. This is an expensive and non-trivial task. Evolutionary black-box testing, a recent approach to test case generation, can perform this task completely automatically. This paper describes our test system that implements evolutionary black-box testing and how to apply it to test functional and non-functional properties of an embedded system. Our test system supports the aforementioned test platforms and allows reuse of the generated test cases across them. We demonstrate the functioning of the test system in a case study with an antilock braking system.
    BibTeX:
    @article{KruseWW10,
      author = {Peter M. Kruse and Joachim Wegener and Stefan Wappler},
      title = {A Cross-Platform Test System for Evolutionary Black-Box Testing of Embedded Systems},
      journal = {ACM SIGEVOlution},
      year = {2010},
      volume = {5},
      number = {1},
      pages = {3-9},
      month = {May},
      doi = {http://dx.doi.org/10.1145/1811155.1811156}
    }
    					
    2010.08.25 José Carlos Bregieiro Ribeiro, Mário Alberto Zenha-Rela & Francisco Fernández de Vega Adaptive Evolutionary Testing: An Adaptive Approach to Search-Based Test Case Generation for Object-Oriented Software 2010 Proceedings of the International Workshop on Nature Inspired Cooperative Strategies for Optimization, pp. 185-197, Granada, Spain, 12-14 May   Inproceedings Testing and Debugging
    Abstract: Adaptive Evolutionary Algorithms are distinguished by their dynamic manipulation of selected parameters during the course of evolving a problem solution; they have an advantage over their static counterparts in that they are more reactive to the unanticipated particulars of the problem. This paper proposes an adaptive strategy for enhancing Genetic Programming-based approaches to automatic test case generation. The main contribution of this study is that of proposing an Adaptive Evolutionary Testing methodology for promoting the introduction of relevant instructions into the generated test cases by means of mutation; the instructions from which the algorithm can choose are ranked, with their rankings being updated every generation in accordance to the feedback obtained from the individuals evaluated in the preceding generation. The experimental studies developed show that the adaptive strategy proposed improves the test case generation algorithm's efficiency considerably, while introducing a negligible computational overhead.
    BibTeX:
    @inproceedings{RibeiroZV10b,
      author = {José Carlos Bregieiro Ribeiro and Mário Alberto Zenha-Rela and Francisco Fernández de Vega},
      title = {Adaptive Evolutionary Testing: An Adaptive Approach to Search-Based Test Case Generation for Object-Oriented Software},
      booktitle = {Proceedings of the International Workshop on Nature Inspired Cooperative Strategies for Optimization},
      publisher = {Springer},
      year = {2010},
      pages = {185-197},
      address = {Granada, Spain},
      month = {12-14 May},
      doi = {http://dx.doi.org/10.1007/978-3-642-12538-6_16}
    }
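
    Illustrative sketch (not from the paper): the adaptive mechanism described above, reduced to its skeleton. Candidate instructions carry scores that act as a ranking, mutation picks instructions in proportion to those scores, and the scores are updated each generation from fitness feedback. The instruction names, decay and reward rule are assumptions.

    import random

    instructions = ["push_arg", "call_ctor", "call_method", "assign", "cast"]
    scores = {i: 1.0 for i in instructions}  # uniform initial ranking

    def pick_instruction():
        # Roulette-wheel choice proportional to the current ranking.
        total = sum(scores.values())
        r, acc = random.uniform(0, total), 0.0
        for ins, s in scores.items():
            acc += s
            if r <= acc:
                return ins
        return ins

    def update_scores(used, improved, decay=0.9, reward=1.0):
        # Decay all scores, then reward the instructions that appeared
        # in offspring whose fitness improved on their parents'.
        for ins in scores:
            scores[ins] = max(scores[ins] * decay, 0.05)  # keep options alive
        if improved:
            for ins in used:
                scores[ins] += reward

    # One illustrative generation: mutation inserted two instructions and
    # the offspring improved, so those instructions gain probability mass.
    update_scores(used=["call_ctor", "assign"], improved=True)
    print("next pick:", pick_instruction())
    print(sorted(scores.items(), key=lambda kv: -kv[1]))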
    					
    2017.06.27 Federica Sarro, Filomena Ferrucci, Mark Harman, Alessandra Manna & Jian Ren Adaptive Multi-objective Evolutionary Algorithms for Overtime Planning in Software Projects To appear IEEE Transactions on Software Engineering   Article
    Abstract: Software engineering and development is well known to suffer from unplanned overtime, which causes stress and illness in engineers and can lead to poor quality software with higher defects. Recently, we introduced a multi-objective decision support approach to help balance project risks and duration against overtime, so that software engineers can better plan overtime. This approach was empirically evaluated on six real-world software projects and compared against state-of-the-art evolutionary approaches and currently used overtime strategies. The results showed that our proposal comfortably outperformed all the benchmarks considered. This paper extends our previous work by investigating adaptive multi-objective approaches to meta-heuristic operator selection, thereby extending and (as the results show) improving algorithmic performance. We also extended our empirical study to include two new real-world software projects, thereby enhancing the scientific evidence for the technical performance claims made in the paper. Our new results, over all eight projects studied, showed that our adaptive algorithm outperforms the considered state-of-the-art multi-objective approaches in 93% of the experiments (with large effect size). The results also confirm that our approach significantly outperforms current overtime planning practices in 100% of the experiments (with large effect size).
    BibTeX:
    @article{SarroFHMR,
      author = {Federica Sarro and Filomena Ferrucci and Mark Harman and Alessandra Manna and Jian Ren},
      title = {Adaptive Multi-objective Evolutionary Algorithms for Overtime Planning in Software Projects},
      journal = {IEEE Transactions on Software Engineering},
      year = {To appear},
      doi = {http://dx.doi.org/10.1109/TSE.2017.2650914}
    }
    					
    2015.11.06 Aldeida Aleti & Madalina Drugan Adaptive Neighbourhood Search for the Component Deployment Problem 2015 Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE '15), pp. 188-202, Bergamo, Italy, 5-7 September   Inproceedings
    Abstract: Since the establishment of the area of search-based software engineering, a wide range of optimisation techniques have been applied to automate various stages of software design and development. Architecture optimisation is one of the aspects that has been automated with methods like genetic algorithms, local search, and ant colony optimisation. A key challenge with all of these approaches is to adequately set the balance between exploration of the search space and exploitation of best candidate solutions. Different settings are required for different problem instances, and even different stages of the optimisation process. To address this issue, we investigate combinations of different search operators, which focus the search on either exploration or exploitation for an efficient variable neighbourhood search method. Three variants of the variable neighbourhood search method are investigated: the first variant has a deterministic schedule, the second variant uses fixed probabilities to select a search operator, and the third method adapts the search strategy based on feedback from the optimisation process. The adaptive strategy selects an operator based on its performance in the previous iterations. Intuitively, depending on the features of the fitness landscape, at different stages of the optimisation process different search strategies would be more suitable. Hence, the feedback from the optimisation process provides useful guidance in the choice of the best search operator, as evidenced by the experimental evaluation designed with problems of different sizes and levels of difficulty to evaluate the efficiency of varying the search strategy.
    BibTeX:
    @inproceedings{AletiD15,
      author = {Aldeida Aleti and Madalina Drugan},
      title = {Adaptive Neighbourhood Search for the Component Deployment Problem},
      booktitle = {Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE '15)},
      publisher = {Springer},
      year = {2015},
      pages = {188-202},
      address = {Bergamo, Italy},
      month = {5-7 September},
      doi = {http://dx.doi.org/10.1007/978-3-319-22183-0_13}
    }
    					
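    The adaptive variant amounts to probability matching over the search operators. A compact sketch, assuming a quality score per operator and a floor p_min that keeps every operator selectable:

    # Probability matching over search operators: each operator's selection
    # probability follows its observed quality, with a floor p_min so that
    # exploration never dies out entirely.
    def operator_probabilities(quality, p_min=0.05):
        k = len(quality)
        total = sum(quality.values())
        if total == 0:                      # no feedback yet: choose uniformly
            return {op: 1.0 / k for op in quality}
        return {op: p_min + (1 - k * p_min) * q / total
                for op, q in quality.items()}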
    2010.11.29 Bo Jiang, Zhenyu Zhang, W.K. Chan & T.H. Tse Adaptive Random Test Case Prioritization 2009 Proceedings of the 24th IEEE/ACM International Conference on Automated Software Engineering (ASE '09), pp. 233-244, Auckland New Zealand, 16-20 November   Inproceedings Testing and Debugging
    Abstract: Regression testing assures changed programs against unintended amendments. Rearranging the execution order of test cases is a key idea to improve their effectiveness. Paradoxically, many test case prioritization techniques resolve tie cases using the random selection approach, and yet random ordering of test cases has been considered ineffective. Existing unit testing research unveils that adaptive random testing (ART) is a promising candidate that may replace random testing (RT). In this paper, we not only propose a new family of coverage-based ART techniques, but also show empirically that they are statistically superior to the RT-based technique in detecting faults. Furthermore, one of the ART prioritization techniques is consistently comparable to some of the best coverage-based prioritization techniques (namely, the "additional" techniques) and yet involves much less time cost.
    BibTeX:
    @inproceedings{JiangZCT09,
      author = {Bo Jiang and Zhenyu Zhang and W. K. Chan and T. H. Tse},
      title = {Adaptive Random Test Case Prioritization},
      booktitle = {Proceedings of the 24th IEEE/ACM International Conference on Automated Software Engineering (ASE '09)},
      publisher = {IEEE},
      year = {2009},
      pages = {233-244},
      address = {Auckland, New Zealand},
      month = {16-20 November},
      doi = {http://dx.doi.org/10.1109/ASE.2009.77}
    }
    					
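    One member of the proposed coverage-based family can be approximated as below, assuming Jaccard distance on coverage sets and a fixed-size random candidate pool (the function names are ours, not the paper's):

    import random

    def jaccard_distance(a, b):
        # Distance between two coverage sets (e.g., sets of covered branches).
        union = a | b
        return 1.0 - len(a & b) / len(union) if union else 0.0

    def art_prioritize(coverage, candidates_per_round=10):
        # coverage: dict mapping test id -> set of covered items.
        remaining = set(coverage)
        first = random.choice(list(remaining))
        ordered = [first]
        remaining.discard(first)
        while remaining:
            pool = random.sample(list(remaining),
                                 min(candidates_per_round, len(remaining)))
            # Pick the candidate farthest (max-min distance) from the tests
            # already prioritized.
            best = max(pool, key=lambda t: min(
                jaccard_distance(coverage[t], coverage[s]) for s in ordered))
            ordered.append(best)
            remaining.discard(best)
        return ordered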
    2012.08.21 Andrea Arcuri & Lionel C. Briand Adaptive Random Testing: An Illusion of Effectiveness? 2011 Proceedings of the 2011 International Symposium on Software Testing and Analysis (ISSTA '11), pp. 265-275, Toronto ON Canada, 17-21 July   Inproceedings Testing and Debugging
    Abstract: Adaptive Random Testing (ART) has been proposed as an enhancement to random testing, based on assumptions on how failing test cases are distributed in the input domain. The main assumption is that failing test cases are usually grouped into contiguous regions. Many papers have been published in which ART has been described as an effective alternative to random testing when using the average number of test case executions needed to find a failure (F-measure). But all the work in the literature is based either on simulations or case studies with unreasonably high failure rates. In this paper, we report on the largest empirical analysis of ART in the literature, in which 3727 mutated programs and nearly ten trillion test cases were used. Results show that ART is highly inefficient even on trivial problems when accounting for distance calculations among test cases, to an extent that probably prevents its practical use in most situations. For example, on the infamous Triangle Classification program, random testing finds failures in a few milliseconds whereas ART execution time is prohibitive. Even when assuming a small, fixed size test set and looking at the probability of failure (P-measure), ART only fares slightly better than random testing, which is not sufficient to make it applicable in realistic conditions. We provide precise explanations of this phenomenon based on rigorous empirical analyses. For the simpler case of single-dimension input domains, we also perform formal analyses to support our claim that ART is of little use in most situations, unless drastic enhancements are developed. Such analyses help us explain some of the empirical results and identify the components of ART that need to be improved to make it a viable option in practice.
    BibTeX:
    @inproceedings{ArcuriB11b,
      author = {Andrea Arcuri and Lionel C. Briand},
      title = {Adaptive Random Testing: An Illusion of Effectiveness?},
      booktitle = {Proceedings of the 2011 International Symposium on Software Testing and Analysis (ISSTA '11)},
      publisher = {ACM},
      year = {2011},
      pages = {265-275},
      address = {Toronto, ON, Canada},
      month = {17-21 July},
      doi = {http://dx.doi.org/10.1145/2001420.2001452}
    }
    					
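    For reference, both evaluation measures named here have simple closed forms for pure random testing with a uniform failure rate theta (a worked note, not taken from the paper):

    # For random testing with failure rate theta, the number of executions to
    # the first failure is geometrically distributed, so:
    def f_measure_random(theta):
        return 1.0 / theta               # expected tests until the first failure

    def p_measure_random(theta, n):
        return 1.0 - (1.0 - theta) ** n  # P(a fixed set of n tests reveals a failure)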
    2014.11.25 Simon Poulding & Hélène Waeselynck Adding Contextual Guidance to the Automated Search for Probabilistic Test Profiles 2014 Proceedings of the 7th International Conference on Software Testing, Verification and Validation (ICST '14), pp. 293-302, Cleveland OH USA, March 31 - April 4   Inproceedings Testing and Debugging
    Abstract: Statistical testing is a probabilistic approach to test data generation that has been demonstrated to be very effective at revealing faults. Its premise is to compensate for the imperfect connection between coverage criteria and the faults to be revealed by exercising each coverage element several times with different random data. The cornerstone of the approach is the often complex task of determining a suitable input profile, and recent work has shown that automated metaheuristic search can be a practical method of synthesising such profiles. The starting point of this paper is the hypothesis that, for some software, the existing grammar-based representation used by the search algorithm fails to capture important relationships between input arguments and this can limit the fault-revealing power of the synthesised profiles. We provide evidence in support of this hypothesis, and propose a solution in which the user provides some basic contextual knowledge to guide the search. Empirical results for two case studies are promising: knowledge gained by a very straightforward review of the software-under-test is sufficient to dramatically increase the efficacy of the profiles synthesised by search.
    BibTeX:
    @inproceedings{PouldingW14,
      author = {Simon Poulding and Hélène Waeselynck},
      title = {Adding Contextual Guidance to the Automated Search for Probabilistic Test Profiles},
      booktitle = {Proceedings of the 7th International Conference on Software Testing, Verification and Validation (ICST '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {293-302},
      address = {Cleveland, OH, USA},
      month = {March 31 - April 4},
      doi = {http://dx.doi.org/10.1109/ICST.2014.42}
    }
    					
    2017.06.27 José Miguel Rojas, Mattia Vivanti, Andrea Arcuri & Gordon Fraser A Detailed Investigation of the Effectiveness of Whole Test Suite Generation 2017 Empirical Software Engineering, Vol. 22(2), pp. 852-893, April   Article
    Abstract: A common application of search-based software testing is to generate test cases for all goals defined by a coverage criterion (e.g., lines, branches, mutants). Rather than generating one test case at a time for each of these goals individually, whole test suite generation optimizes entire test suites towards satisfying all goals at the same time. There is evidence that the overall coverage achieved with this approach is superior to that of targeting individual coverage goals. Nevertheless, there remains some uncertainty on (a) whether the results generalize beyond branch coverage, (b) whether the whole test suite approach might be inferior to a more focused search for some particular coverage goals, and (c) whether generating whole test suites could be optimized by only targeting coverage goals not already covered. In this paper, we perform an in-depth analysis to study these questions. An empirical study on 100 Java classes using three different coverage criteria reveals that indeed there are some testing goals that are only covered by the traditional approach, although their number is only very small in comparison with those which are exclusively covered by the whole test suite approach. We find that keeping an archive of already covered goals along with the tests covering them and focusing the search on uncovered goals overcomes this small drawback on larger classes, leading to an improved overall effectiveness of whole test suite generation.
    BibTeX:
    @article{RojasVAF17,
      author = {José Miguel Rojas and Mattia Vivanti and Andrea Arcuri and Gordon Fraser},
      title = {A Detailed Investigation of the Effectiveness of Whole Test Suite Generation},
      journal = {Empirical Software Engineering},
      year = {2017},
      volume = {22},
      number = {2},
      pages = {852-893},
      month = {April},
      doi = {http://dx.doi.org/10.1007/s10664-015-9424-2}
    }
    					
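    The archive idea evaluated in the paper fits in a few lines; in this sketch, covers and evolve_step are assumed callables, and the code is illustrative rather than EvoSuite's own:

    def evolve_with_archive(population, goals, covers, evolve_step, generations):
        # archive: goal -> first test found to cover it.
        archive, uncovered = {}, set(goals)
        for _ in range(generations):
            for test in population:
                for goal in [g for g in uncovered if covers(test, g)]:
                    archive[goal] = test
                    uncovered.discard(goal)
            if not uncovered:
                break
            # Fitness is computed against the *uncovered* goals only, so the
            # search stops spending effort on what the archive already holds.
            population = evolve_step(population, uncovered)
        return list(archive.values())   # the final suite is assembled from the archive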
    2010.11.23 Thaís Alves Burity Pereira & Glêdson Elias A DSM-based Strategy to Optimize Software Components Clustering (in Portuguese) 2010 Proceedings of the Brazilian Workshop on Optimization in Software Engineering (WOES '10), Salvador Brazil, 30-30 September   Inproceedings Requirements/Specifications
    Abstract: In distributed Software Product Line (SPL) projects, dependencies between components influence the communication needs of their respective development teams. Thus, an alternative to reduce such needs is to cluster tightly coupled components into loosely coupled modules, as long as each module is developed by a single team. In such a context, since numerous clustering possibilities exist, this paper describes an optimization strategy for clustering software components based on the technique named numerical DSM (Design Structure Matrix), which is adopted to represent dependencies among components of the SPL architecture.
    BibTeX:
    @inproceedings{PereiraE10,
      author = {Thaís Alves Burity Pereira and Glêdson Elias},
      title = {A DSM-based Strategy to Optimize Software Components Clustering (in Portuguese)},
      booktitle = {Proceedings of the Brazilian Workshop on Optimization in Software Engineering (WOES '10)},
      year = {2010},
      address = {Salvador, Brazil},
      month = {30-30 September},
      url = {http://www.uniriotec.br/~marcio.barros/woes2010/Paper09.pdf}
    }
    					
    2013.06.28 Westley Weimer Advances in Automated Program Repair and a Call to Arms 2013 Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13) - Keynote paper, Vol. 8084, pp. 1-3, St. Petersburg Russia, 24-26 August   Inproceedings Testing and Debugging
    Abstract: In this keynote address I survey recent success and momentum in the subfield of automated program repair. I also encourage the search-based software engineering community to rise to various challenges and opportunities associated with test oracle generation, large-scale human studies, and reproducible research through benchmarks. I discuss recent advances in automated program repair, focusing on the search-based GenProg technique but also presenting a broad overview of the subfield. I argue that while many automated repair techniques are "correct by construction" or otherwise produce only a single repair (e.g., AFix [13], Axis [17], Coker and Hafiz [4], Demsky and Rinard [7], Gopinath et al. [12], Jolt [2], Juzi [8], etc.), the majority can be categorized as "generate and validate" approaches that enumerate and test elements of a space of candidate repairs and are thus directly amenable to search-based software engineering and mutation testing insights (e.g., ARC [1], AutoFix-E [23], ARMOR [3], CASC [24], ClearView [21], Debroy and Wong [6], FINCH [20], PACHIKA [5], PAR [14], SemFix [18], Sidiroglou and Keromytis [22], etc.). I discuss challenges and advances such as scalability, test suite quality, and repair quality while attempting to convey the excitement surrounding a subfield that has grown so quickly in the last few years that it merited its own session at the 2013 International Conference on Software Engineering [3,4,14,18]. Time permitting, I provide a frank discussion of mistakes made and lessons learned with GenProg [15]. In the second part of the talk, I pose three challenges to the SBSE community. I argue for the importance of human studies in automated software engineering. I present and describe multiple "how to" examples of using crowdsourcing (e.g., Amazon's Mechanical Turk) and massive on-line education (MOOCs) to enable SBSE-related human studies [10,11]. I argue that we should leverage our great strength in testing to tackle the increasingly-critical problem of test oracle generation (e.g., [9]) — not just test data generation — and draw supportive analogies with the subfields of specification mining and invariant detection [16,19]. Finally, I challenge the SBSE community to facilitate reproducible research and scientific advancement through benchmark creation, and support the need for such efforts with statistics from previous accepted papers.
    BibTeX:
    @inproceedings{Weimer13,
      author = {Westley Weimer},
      title = {Advances in Automated Program Repair and a Call to Arms},
      booktitle = {Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13) - Keynote paper},
      publisher = {Springer},
      year = {2013},
      volume = {8084},
      pages = {1-3},
      address = {St. Petersburg, Russia},
      month = {24-26 August},
      doi = {http://dx.doi.org/10.1007/978-3-642-39742-4_1}
    }
    					
    2012.10.25 Kalyanmoy Deb Advances in Evolutionary Multi-objective Optimization 2012 Proceedings of the 4th International Symposium on Search Based Software Engineering (SSBSE '12), Vol. 7515, pp. 1-26, Riva del Garda Italy, 28-30 September   Inproceedings General Aspects and Survey
    Abstract: Started during 1993-95 with three different algorithms, evolutionary multi-objective optimization (EMO) has come a long way in a short time to establish itself as a useful field of research and application. To date, there exist numerous textbooks and edited books, commercial software dedicated to EMO algorithms, freely downloadable codes in the most-used computer languages, a biennial conference series (the EMO conference series) running successfully since 2001, and special sessions and workshops held in almost all major evolutionary computing conferences. In this paper, we briefly discuss the principles of EMO through an illustration of one specific algorithm. Thereafter, we focus on a few recent research and application developments of EMO. Specifically, we discuss EMO's use with multiple criterion decision making (MCDM) procedures and EMO's applicability in handling a large number of objectives. Besides these, the concepts of multi-objectivization and innovization, which are practically motivated, are discussed next. A few other key advancements are also highlighted. The development and application of EMO to multi-objective optimization problems, and their continued extension to solve other related problems, have elevated EMO research to a level which may now undoubtedly be termed an active field of research with a wide range of theoretical and practical research and application opportunities. EMO concepts are ready to be applied to search based software engineering (SBSE) problems.
    BibTeX:
    @inproceedings{Deb12,
      author = {Kalyanmoy Deb},
      title = {Advances in Evolutionary Multi-objective Optimization},
      booktitle = {Proceedings of the 4th International Symposium on Search Based Software Engineering (SSBSE '12)},
      publisher = {Springer},
      year = {2012},
      volume = {7515},
      pages = {1-26},
      address = {Riva del Garda, Italy},
      month = {28-30 September},
      doi = {http://dx.doi.org/10.1007/978-3-642-33119-0_1}
    }
    					
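    The core EMO notion, Pareto dominance, is concrete enough for a small worked example (minimisation of every objective assumed):

    def dominates(a, b):
        # a dominates b iff a is no worse in all objectives and strictly
        # better in at least one (minimisation assumed).
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(points):
        return [p for p in points
                if not any(dominates(q, p) for q in points if q != p)]

    # pareto_front([(1, 5), (2, 2), (3, 1), (4, 4)]) -> [(1, 5), (2, 2), (3, 1)]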
    2010.01.12 Hong Zhu & Fevzi Belli Advancing Test Automation Technology to Meet the Challenges of Model-based Software Testing - Guest Editors' Introduction to the Special Section of the Third IEEE International Workshop on Automation of Software Test (AST 2008) 2009 Information and Software Technology, Vol. 51(11), pp. 1485-1486, November   Article Testing and Debugging
    Abstract: no abstract
    BibTeX:
    @article{ZhuB09,
      author = {Hong Zhu and Fevzi Belli},
      title = {Advancing Test Automation Technology to Meet the Challenges of Model-based Software Testing - Guest Editors' Introduction to the Special Section of the Third IEEE International Workshop on Automation of Software Test (AST 2008)},
      journal = {Information and Software Technology},
      year = {2009},
      volume = {51},
      number = {11},
      pages = {1485-1486},
      month = {November},
      doi = {http://dx.doi.org/10.1016/j.infsof.2009.06.012}
    }
    					
    2008.08.27 Xiaoyuan Xie, Baowen Xu, Liang Shi, Changhai Nie & Yanxiang He A Dynamic Optimization Strategy for Evolutionary Testing 2005 Proceedings of the 12th Asia-Pacific Software Engineering Conference (APSEC '05), pp. 568-575, Taipei Taiwan, 15-17 December   Inproceedings Testing and Debugging
    Abstract: Evolutionary Testing (ET) is an efficient technique for automated test case generation. ET uses a metaheuristic search technique, the Genetic Algorithm (GA), to convert the task of test case generation into an optimization problem. The configuration strategies of the GA have a notable influence on the performance of ET. In this paper, we present a dynamic self-adaptation strategy for evolutionary structural testing. It monitors the evolution process dynamically, detects symptoms of premature convergence by analyzing the population, and adjusts the mutation probability to recover the diversity of the population. The empirical results show that the strategy can greatly improve the performance of ET in many cases. In addition, the empirical study provides some valuable advice on configuration strategies for ET.
    BibTeX:
    @inproceedings{XieXSNH05,
      author = {Xiaoyuan Xie and Baowen Xu and Liang Shi and Changhai Nie and Yanxiang He},
      title = {A Dynamic Optimization Strategy for Evolutionary Testing},
      booktitle = {Proceedings of the 12th Asia-Pacific Software Engineering Conference (APSEC '05)},
      publisher = {IEEE},
      year = {2005},
      pages = {568-575},
      address = {Taipei, Taiwan},
      month = {15-17 December},
      doi = {http://dx.doi.org/10.1109/APSEC.2005.6}
    }
    					
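    The essence of the strategy, raising the mutation probability when premature convergence is suspected, might look like the sketch below; the diversity proxy and thresholds are our assumptions, and the paper's detection rule is more involved:

    def adapted_mutation_rate(population, base=0.01, boosted=0.10, threshold=0.2):
        # Crude diversity proxy: fraction of distinct individuals (individuals
        # must be hashable, e.g. tuples). When it drops below the threshold,
        # raise the mutation probability to reintroduce variation; restore the
        # base rate once diversity recovers.
        diversity = len(set(population)) / len(population)
        return boosted if diversity < threshold else base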
    2011.09.05 Arash Mehrmand & Robert Feldt A Factorial Experiment on Scalability of Search Based Software Testing 2010 Proceedings of the 3rd Artificial Intelligence Techniques in Software Engineering Workshop (AISEW '10), Larnaca Cyprus, 7-7 October   Inproceedings Testing and Debugging
    Abstract: Software testing is an expensive process, which is vital in industry. Construction of the test data accounts for the major cost of software testing, and deciding which method to use to generate the test data is important. This paper discusses the efficiency of search-based algorithms (preferably the genetic algorithm) versus random testing in software test-data generation. This study differs from previous studies in the sample programs (SUTs) used. Since we want to increase the complexity of the SUTs gradually, and the program generation is automatic as well, Grammatical Evolution is used to guide the program generation. SUTs are generated according to the grammar we provide, with different levels of complexity. The SUTs first undergo genetic-algorithm-based testing and then random testing. Based on the test results, this paper recommends one method to use for the automation of software testing.
    BibTeX:
    @inproceedings{MehrmandF10,
      author = {Arash Mehrmand and Robert Feldt},
      title = {A Factorial Experiment on Scalability of Search Based Software Testing},
      booktitle = {Proceedings of the 3rd Artificial Intelligence Techniques in Software Engineering Workshop (AISEW '10)},
      year = {2010},
      address = {Larnaca, Cyprus},
      month = {7-7 October},
      url = {http://arxiv.org/abs/1101.2301v1}
    }
    					
    2011.09.05 Arash Mehrmand A Factorial Experiment on the Scalability of Search-Based Software Testing 2009 School: School of Engineering, Blekinge Institute of Technology   Mastersthesis Testing and Debugging
    Abstract: Software testing is an expensive process, which is vital in industry. Construction of the test data accounts for the major cost of software testing, and knowing which method to use to generate the test data is very important. This thesis discusses the performance of search-based algorithms (preferably the genetic algorithm) versus random testing in software test-data generation. A factorial experiment is designed so that we have more than one factor for each experiment we make. Although much research has been done in the area of automated software testing, this work differs from all of it in the sample programs (SUTs) used. Since the program generation is automatic as well, Grammatical Evolution is used to guide the program generation. The SUTs are not goal based, but generated according to the grammar we provide, with different levels of complexity. The genetic algorithm is first applied to the programs, and then we apply random testing. Based on the results, this thesis recommends one method to use for software testing if the SUT has the same conditions as in this study. The SUTs are unlike the sample programs provided by other studies, since they are generated using a grammar.
    BibTeX:
    @mastersthesis{Mehrmand09,
      author = {Arash Mehrmand},
      title = {A Factorial Experiment on the Scalability of Search-Based Software Testing},
      school = {School of Engineering, Blekinge Institute of Technology},
      year = {2009},
      url = {http://www.bth.se/fou/cuppsats.nsf/all/3db8cb2edf809a4fc125764d006e6775/$file/ARASH_MEHRMAND_Master_Thesis.pdf}
    }
    					
    2011.07.20 Soumaya Medini, Philippe Galinier, Massimiliano Di Penta, Yann-Gaël Guéhéneuc & Giuliano Antoniol A Fast Algorithm to Locate Concepts in Execution Traces 2011 Proceedings of the 3rd International Symposium on Search Based Software Engineering (SSBSE '11), Vol. 6956, pp. 252-266, Szeged Hungary, 10-12 September   Inproceedings Coding Tools and Techniques
    Abstract: The identification of cohesive segments in execution traces is an important step in concept location which, in turn, is of paramount importance for many program-comprehension activities. In this paper, we reformulate concept location as a trace segmentation problem solved via dynamic programming. Unlike approaches based on genetic algorithms, dynamic programming can compute an exact solution, with better performance than previous approaches even on long traces. We describe the new problem formulation and the algorithmic details of our approach. We then compare the performance of dynamic programming with that of a genetic algorithm, showing that dynamic programming dramatically reduces the time required to segment traces without sacrificing precision and recall, even slightly improving them.
    BibTeX:
    @inproceedings{MediniGDGA11,
      author = {Soumaya Medini and Philippe Galinier and Massimiliano Di Penta and Yann-Gaël Guéhéneuc and Giuliano Antoniol},
      title = {A Fast Algorithm to Locate Concepts in Execution Traces},
      booktitle = {Proceedings of the 3rd International Symposium on Search Based Software Engineering (SSBSE '11)},
      publisher = {Springer},
      year = {2011},
      volume = {6956},
      pages = {252-266},
      address = {Szeged, Hungary},
      month = {10-12 September},
      doi = {http://dx.doi.org/10.1007/978-3-642-23716-4_22}
    }
    					
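    Trace segmentation by dynamic programming follows a standard O(n^2) skeleton over split points; in this sketch, segment_cost is an assumed cohesion-based cost, not the paper's exact objective:

    def segment_trace(n, segment_cost):
        # best[i] = optimal cost of segmenting the first i trace events;
        # cut[i] = start of the last segment in that optimal solution.
        INF = float("inf")
        best = [0.0] + [INF] * n
        cut = [0] * (n + 1)
        for i in range(1, n + 1):
            for j in range(i):
                c = best[j] + segment_cost(j, i)   # segment covers events [j, i)
                if c < best[i]:
                    best[i], cut[i] = c, j
        segments, i = [], n
        while i > 0:
            segments.append((cut[i], i))
            i = cut[i]
        return best[n], segments[::-1]

    Because the recurrence is exact for additive segment costs, the optimum is guaranteed, which is consistent with the reported gains over a genetic algorithm.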
    2016.03.08 Thelma Elita Colanzi & Silvia Regina Vergilio A Feature-Driven Crossover Operator for Product Line Architecture Design Optimization 2014 IEEE 38th Annual Computer Software and Applications Conference (COMPSAC '14), pp. 43-52, Västerås Sweden, 21-25 July   Inproceedings
    Abstract: The Product Line Architecture (PLA) design is a multi-objective optimization problem that can be properly solved in the Search Based Software Engineering (SBSE) field. However, the PLA design has specific characteristics. For example, the PLA is designed in terms of features and a highly modular PLA is necessary to enable the growth of a software product line. However, existing search based design approaches do not consider such needs. To overcome this limitation, this paper introduces a feature-driven crossover operator that aims at improving feature modularization. The proposed operator was applied in an empirical study using the multi-objective evolutionary algorithm named NSGAII. In comparison with another version of NSGAII that uses only mutation operators, the feature-driven crossover version found a greater diversity of solutions (potential PLA designs), with higher feature-based cohesion, and less feature scattering and tangling.
    BibTeX:
    @inproceedings{ColanziV14b,
      author = {Thelma Elita Colanzi and Silvia Regina Vergilio},
      title = {A Feature-Driven Crossover Operator for Product Line Architecture Design Optimization},
      booktitle = {IEEE 38th Annual Computer Software and Applications Conference (COMPSAC '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {43-52},
      address = {Västerås, Sweden},
      month = {21-25 July},
      doi = {http://dx.doi.org/10.1109/COMPSAC.2014.11}
    }
    					
    2013.06.28 Zheng Li, Yi Bian, Ruilian Zhao & Jun Cheng A Fine-Grained Parallel Multi-objective Test Case Prioritization on GPU 2013 Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13), Vol. 8084, pp. 111-125, St. Petersburg Russia, 24-26 August   Inproceedings Testing and Debugging
    Abstract: Multi-Objective Evolutionary Algorithms (MOEAs) have been widely used to address regression test optimization problems, including test case selection and test suite minimization. GPU-based parallel MOEAs have been proposed to increase execution efficiency to fulfill industrial demands. When using a binary representation in MOEAs, the fitness evaluation can be transformed into a parallel matrix multiplication that is implemented easily and efficiently on a GPU. Such GPU-based parallel MOEAs may achieve a higher level of speed-up for test case prioritization, because the computation load of fitness evaluation in test case prioritization exceeds that in test case selection or test suite minimization. However, the non-applicability of the binary representation to test case prioritization makes parallel fitness evaluation on the GPU challenging. In this paper, we present a GPU-based parallel fitness evaluation and three novel parallel crossover computation schemes based on ordinal and sequential representations, which form a fine-grained parallel framework for multi-objective test case prioritization. The empirical studies, based on eight benchmarks and one open source program, show that a maximum speed-up of 120x is achieved.
    BibTeX:
    @inproceedings{LiBZC13,
      author = {Zheng Li and Yi Bian and Ruilian Zhao and Jun Cheng},
      title = {A Fine-Grained Parallel Multi-objective Test Case Prioritization on GPU},
      booktitle = {Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13)},
      publisher = {Springer},
      year = {2013},
      volume = {8084},
      pages = {111-125},
      address = {St. Petersburg, Russia},
      month = {24-26 August},
      doi = {http://dx.doi.org/10.1007/978-3-642-39742-4_10}
    }
    					
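    The "fitness evaluation as matrix multiplication" idea for binary representations, which the abstract notes does not carry over to prioritization, can be written in CPU-side NumPy form (illustrative only):

    import numpy as np

    def coverage_fitness(C, P):
        # C: (tests x goals) binary coverage matrix.
        # P: (population x tests) 0/1 matrix; row k selects the tests kept by
        #    individual k. One matrix product evaluates the whole population.
        covered = (P @ C) > 0          # is goal j covered by individual k?
        return covered.sum(axis=1)     # goals covered per individual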
    2009.07.30 Raquel Blanco, José García-Fanjul & Javier Tuya A First Approach to Test Case Generation for BPEL Compositions of Web Services Using Scatter Search 2009 Proceedings of the IEEE International Conference on Software Testing, Verification, and Validation Workshops (ICSTW '09), pp. 131-140, Denver Colorado USA, 1-4 April   Inproceedings Testing and Debugging
    Abstract: A challenging part of software testing entails the generation of test cases, whose cost can be reduced by means of techniques that automate this task. In this paper we present an approach based on the metaheuristic technique Scatter Search for the automatic generation of test cases for BPEL business processes. A transition coverage criterion is used as the adequacy criterion.
    BibTeX:
    @inproceedings{BlancoGT09,
      author = {Raquel Blanco and José García-Fanjul and Javier Tuya},
      title = {A First Approach to Test Case Generation for BPEL Compositions of Web Services Using Scatter Search},
      booktitle = {Proceedings of the IEEE International Conference on Software Testing, Verification, and Validation Workshops (ICSTW '09)},
      publisher = {IEEE},
      year = {2009},
      pages = {131-140},
      address = {Denver, Colorado, USA},
      month = {1-4 April},
      doi = {http://dx.doi.org/10.1109/ICSTW.2009.24}
    }
    					
    2009.03.31 Hsinyi Jiang, Carl K. Chang, Dan Zhu & Shuxing Cheng A Foundational Study on the Applicability of Genetic Algorithm to Software Engineering Problems 2007 Proceedings of IEEE Congress on Evolutionary Computation (CEC '07), pp. 2210-2219, Singapore, 25-28 September   Inproceedings General Aspects and Survey
    Abstract: Many problems in software engineering (SE) can be formulated as optimization problems. Genetic algorithm (GA) is one of the more effective tools for solving such optimization problems and has attracted the attention of SE researchers in recent years. However, there is a general lack of sound support theory to help SE researchers investigate the applicability of GA to certain classes of SE problems. Without such a theory, numerous attempts to conduct a wide spectrum of experiments for solution validation appear to be ad hoc and the results are often difficult to generalize. This paper reports a foundational study to develop such a support theory. Some preliminary results are also given.
    BibTeX:
    @inproceedings{JiangCZC07,
      author = {Hsinyi Jiang and Carl K. Chang and Dan Zhu and Shuxing Cheng},
      title = {A Foundational Study on the Applicability of Genetic Algorithm to Software Engineering Problems},
      booktitle = {Proceedings of IEEE Congress on Evolutionary Computation (CEC '07)},
      publisher = {IEEE},
      year = {2007},
      pages = {2210-2219},
      address = {Singapore},
      month = {25-28 September},
      doi = {http://dx.doi.org/10.1109/CEC.2007.4424746}
    }
    					
    2011.05.19 Si Huang A Framework for Automatically Repairing GUI Test Suites 2010 School: University of Nebraska   Mastersthesis Testing and Debugging
    BibTeX:
    @mastersthesis{Huang10,
      author = {Si Huang},
      title = {A Framework for Automatically Repairing GUI Test Suites},
      school = {University of Nebraska},
      year = {2010},
      url = {http://digitalcommons.unl.edu/computerscidiss/14/}
    }
    					
    2007.12.02 Renée C. Bryce, Charles Joseph Colbourn & Myra B. Cohen A Framework of Greedy Methods for Constructing Interaction Test Suites 2005 Proceedings of the 27th International Conference on Software Engineering (ICSE '05), pp. 146-155, St. Louis MO USA, 15-21 May   Inproceedings Testing and Debugging
    Abstract: Greedy algorithms for the construction of software interaction test suites are studied. A framework is developed to evaluate a large class of greedy methods that build suites one test at a time. Within this framework are many instantiations of greedy methods generalizing those in the literature. Greedy algorithms are popular when the time for test suite construction is of paramount concern. We focus on the size of the test suite produced by each instantiation. Experiments are analyzed using statistical techniques to determine the importance of the implementation decisions within the framework. This framework provides a platform for optimizing the accuracy and speed of "one-test-at-a-time" greedy methods.
    BibTeX:
    @inproceedings{BryceCC05,
      author = {Renée C. Bryce and Charles Joseph Colbourn and Myra B. Cohen},
      title = {A Framework of Greedy Methods for Constructing Interaction Test Suites},
      booktitle = {Proceedings of the 27th International Conference on Software Engineering (ICSE '05)},
      publisher = {ACM},
      year = {2005},
      pages = {146-155},
      address = {St. Louis, MO, USA},
      month = {15-21 May},
      doi = {http://dx.doi.org/10.1145/1062455.1062495}
    }
    					
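    A minimal instantiation of the one-test-at-a-time family for pairwise coverage is sketched below. For clarity it scans the full Cartesian product of values when choosing each test, which is exponential; practical greedy methods in this framework build each test factor by factor:

    from itertools import combinations, product

    def greedy_pairwise(domains):
        # domains: list of value lists, one per factor.
        uncovered = {((i, v1), (j, v2))
                     for i, j in combinations(range(len(domains)), 2)
                     for v1 in domains[i] for v2 in domains[j]}
        suite = []
        while uncovered:
            # Exhaustive candidate scan: fine for tiny inputs only.
            best = max(product(*domains),
                       key=lambda t: sum(((i, t[i]), (j, t[j])) in uncovered
                                         for i, j in combinations(range(len(t)), 2)))
            suite.append(best)
            uncovered -= {((i, best[i]), (j, best[j]))
                          for i, j in combinations(range(len(best)), 2)}
        return suite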
    2016.03.08 Aurélio da Silva Grande, Rosiane de Freitas Rodrigues & Arilo Claudio Dias Neto A Framework to Support the Selection of Software Technologies by Search-Based Strategy 2014 Proceedings of IEEE 26th International Conference on Tools with Artificial Intelligence (ICTAI '14), pp. 979-983, Limassol Cyprus, 10-12 November   Inproceedings
    Abstract: This paper presents a framework to instantiate software technology selection approaches using search techniques. The software technologies selection problem (STSP) is modeled as a combinatorial optimization problem, aiming to address different real-world scenarios in Software Engineering. The proposed framework works as a top-level layer over generic optimization frameworks, such as JMetal and OPT4J, that implement a large number of the metaheuristics proposed in the technical literature. It aims to support software engineers who are unable to use optimization frameworks during a software project due to short deadlines and limited resources or skills. The framework was evaluated in a case study of a complex real-world software engineering scenario. This scenario was modeled as the STSP and experiments were executed with different metaheuristics using the proposed framework. The results indicate its feasibility in supporting the selection of software technologies.
    BibTeX:
    @inproceedings{GrandeFD14,
      author = {Aurélio da Silva Grande and Rosiane de Freitas Rodrigues and Arilo Claudio Dias Neto},
      title = {A Framework to Support the Selection of Software Technologies by Search-Based Strategy},
      booktitle = {Proceedings of IEEE 26th International Conference on Tools with Artificial Intelligence (ICTAI '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {979-983},
      address = {Limassol, Cyprus},
      month = {10-12 November},
      doi = {http://dx.doi.org/10.1109/ICTAI.2014.148}
    }
    					
    2011.07.20 Dayvison Lima, Fabrício Gomes de Freitas, Gustavo Augusto Lima de Campos & Jerffeson Teixeira de Souza A Fuzzy Approach to Requirements Prioritization 2011 Proceedings of the 3rd International Symposium on Search Based Software Engineering (SSBSE '11), Vol. 6956, pp. 64-69, Szeged Hungary, 10-12 September   Inproceedings Requirements/Specifications
    Abstract: One of the most important issues in a software development project is requirements prioritization. This task is used to indicate an order for the implementation of the requirements. The problem has uncertain aspects; therefore, Fuzzy Logic concepts can be used to properly represent and tackle the task. The objective of this work is to present a formal framework to aid decision making when prioritizing requirements in a software development process, including ambiguous and vague data.
    BibTeX:
    @inproceedings{LimaFCS11,
      author = {Dayvison Lima and Fabrício Gomes de Freitas and Gustavo Augusto Lima de Campos and Jerffeson Teixeira de Souza},
      title = {A Fuzzy Approach to Requirements Prioritization},
      booktitle = {Proceedings of the 3rd International Symposium on Search Based Software Engineering (SSBSE '11)},
      publisher = {Springer},
      year = {2011},
      volume = {6956},
      pages = {64-69},
      address = {Szeged, Hungary},
      month = {10-12 September},
      doi = {http://dx.doi.org/10.1007/978-3-642-23716-4_8}
    }
    					
    2010.02.24 Cuauhtemoc Lopez-Martin A Fuzzy Logic Model for Predicting the Development Effort of Short Scale Programs Based Upon Two Independent Variables 2010 Applied Soft Computing, Vol. 11(1), pp. 724-732, January   Article Management
    Abstract: Fuzzy models have recently been used for estimating the development effort of software projects, and this practice could start with short scale programs. In this paper, new and changed (N&C) as well as reused code were gathered from small programs developed by 74 programmers using practices of the Personal Software Process; these data were used as input for a fuzzy model for estimating the development effort. The accuracy of this fuzzy model was compared with the accuracy of a statistical regression model. Two samples of 163 and 68 programs were used for verifying and validating the models, respectively; the comparison criterion was the Mean Magnitude of Error Relative to the estimate (MMER). In the verification and validation stages, the fuzzy model kept an MMER lower than or equal to that of the regression model, and an ANOVA-based comparison of the models' accuracies did not show a statistically significant difference between their means. This result suggests that fuzzy logic could be used for predicting the effort of small programs based upon these two kinds of lines of code.
    BibTeX:
    @article{LopezMartin10,
      author = {Cuauhtemoc Lopez-Martin},
      title = {A Fuzzy Logic Model for Predicting the Development Effort of Short Scale Programs Based Upon Two Independent Variables},
      journal = {Applied Soft Computing},
      year = {2010},
      volume = {11},
      number = {1},
      pages = {724-732},
      month = {January},
      doi = {http://dx.doi.org/10.1016/j.asoc.2009.12.034}
    }
    					
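    MMER, the comparison criterion used here, divides the error by the estimate rather than by the actual value; a quick worked version:

    def mmer(actuals, estimates):
        # Mean Magnitude of Error Relative to the estimate.
        return sum(abs(a - e) / e
                   for a, e in zip(actuals, estimates)) / len(actuals)

    # mmer([10, 20], [8, 25]) == (2/8 + 5/25) / 2 == 0.225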
    2009.04.02 Gerassimos Barlas & Khaled El-Fakih A GA-based Movie-On-Demand Platform using Multiple Distributed Servers 2008 Multimedia Tools and Applications, Vol. 40(3), pp. 361-383, December   Article Design Tools and Techniques
    Abstract: In this paper we present the design and explore the performance of a unicast-based distributed system for Movie-on-Demand applications. The operation of multiple servers is coordinated with the assistance of an analytical framework that provides closed-form solutions to the content partitioning and scheduling problem, even under the presence of packet losses. The problem of mapping clients to servers is solved with a genetic algorithm, that manages to provide adequate, near-optimum solutions with a minimum of overhead. While previous studies focused on the static behavior of such a system, i.e. fixed a-priori known number of N servers and K clients commencing operation at the same time instance, this paper focuses on the dynamic behavior of such a system over a period of time with clients coming and going at random intervals. The paper includes a rigorous simulation study that shows how the system behaves in terms of a variety of metrics, including the average access time over all the requested media, in response to differences in the client arrival rate or the consumed server bandwidth. As it is shown, the proposed platform exhibits excellent performance characteristics that surpass traditional approaches that treat clients individually. This has been verified to be true up to extreme system loads, proving the scalability of the proposed content delivery scheme. The significance of our findings also stems from the assumption of unreliable communications, a first for the study of complete systems in this domain.
    BibTeX:
    @article{BarlasE08,
      author = {Gerassimos Barlas and Khaled El-Fakih},
      title = {A GA-based Movie-On-Demand Platform using Multiple Distributed Servers},
      journal = {Multimedia Tools and Applications},
      year = {2008},
      volume = {40},
      number = {3},
      pages = {361-383},
      month = {December},
      doi = {http://dx.doi.org/10.1007/s11042-008-0211-6}
    }
    					
    2012.03.09 Cosimo Birtolo, Diego De Chiara, Maria Ascione & Roberto Armenise A Generative Approach to Product Bundling in the e-Commerce domain 2011 Proceedings of 3rd World Congress on Nature and Biologically Inspired Computing (NaBIC '11), pp. 169-175, Salamanca Spain, 19-21 October   Inproceedings
    Abstract: The product bundling problem is addressed by a Genetic Algorithm. We propose a generative approach in order to find the bundle of products that best satisfies user preferences and requirements and, at the same time, to guarantee the satisfaction of merchant needs such as the minimization of the dead stocks. The proposed approach succeeds in finding the optimal trade-off between different and conflicting constraints. Experimentation investigates algorithm convergence under several conditions, such as a different number of products in the bundle, increasing number of constraints, and different user requirements.
    BibTeX:
    @inproceedings{BirtoloCAA11,
      author = {Cosimo Birtolo and Diego De Chiara and Maria Ascione and Roberto Armenise},
      title = {A Generative Approach to Product Bundling in the e-Commerce domain},
      booktitle = {Proceedings of 3rd World Congress on Nature and Biologically Inspired Computing (NaBIC '11)},
      publisher = {IEEE},
      year = {2011},
      pages = {169-175},
      address = {Salamanca, Spain},
      month = {19-21 October},
      doi = {http://dx.doi.org/10.1109/NaBIC.2011.6089454}
    }
    					
    2009.03.31 Mehdi Amoui, Siavash Mirarab, Sepand Ansari & Caro Lucas A Genetic Algorithm Approach to Design Evolution using Design Pattern Transformation 2006 International Journal of Information Technology and Intelligent Computing (ITIC '06), Vol. 1(2), pp. 235-244, June/August   Article Design Tools and Techniques
    Abstract: Improving software quality is a major concern in the software development process. Despite all previous attempts to evolve software for quality improvement, these methods are neither scalable nor fully automatable. In this research we approach the software evolution problem by reformulating it as a search problem. For this purpose, we applied software transformations in the form of GoF patterns to a UML design model and evaluated the quality of the transformed design according to object-oriented metrics, particularly 'Distance from the Main Sequence'. This search-based formulation of the problem enables us to use a Genetic Algorithm to optimize the metrics and find the best sequence of transformations. The implementation results show that the Genetic Algorithm is able to find the optimal solution efficiently, especially when different genetic operators, adapted to the characteristics of the transformations, are used. Overall, we conclude that software transformations can successfully be approached automatically using evolutionary algorithms.
    BibTeX:
    @article{AmouiMAL06,
      author = {Mehdi Amoui and Siavash Mirarab and Sepand Ansari and Caro Lucas},
      title = {A Genetic Algorithm Approach to Design Evolution using Design Pattern Transformation},
      journal = {International Journal of Information Technology and Intelligent Computing (ITIC '06)},
      year = {2006},
      volume = {1},
      number = {2},
      pages = {235-244},
      month = {June/August},
      url = {http://www.stargroup.uwaterloo.ca/~mamouika/papers/pdf/ITIC.2006.pdf}
    }
    					
    2008.07.20 Robert M. Patton, Annie S. Wu & Gwendolyn H. Walton A Genetic Algorithm Approach to Focused Software Usage Testing 2003, pp. 259-286   Inbook Testing and Debugging
    Abstract: Because software system testing typically consists of only a very small sample from the set of possible scenarios of system use, it can be difficult or impossible to generalize the test results from a limited amount of testing based on high-level usage models. It can also be very difficult to determine the nature and location of the errors that caused any failures experienced during system testing (and therefore very difficult for the developers to find and fix these errors). To address these issues, this paper presents a Genetic Algorithm (GA) approach to focused software usage testing. Based on the results of macro-level software system testing, a GA is used to select additional test cases to focus on the behavior around the initial test cases to assist in identifying and characterizing the types of test cases that induce system failures (if any) and the types of test cases that do not induce system failures. Whether or not any failures are experienced, this GA approach supports increased test automation and provides increased evidence to support reasoning about the overall quality of the software. When failures are experienced, the approach can improve the efficiency of debugging activities by providing information about similar, but different, test cases that reveal faults in the software and about the input values that triggered the faults to induce failures.
    BibTeX:
    @inbook{PattonWW03,
      author = {Robert M. Patton and Annie S. Wu and Gwendolyn H. Walton},
      title = {A Genetic Algorithm Approach to Focused Software Usage Testing},
      publisher = {Springer},
      year = {2003},
      pages = {259-286},
      url = {http://books.google.com/books?hl=en&lr=&id=nkrhQ922dBoC&oi=fnd&pg=PA259&ots=X9LYKYPkJG&sig=qntnYG34yy3HevLNXLMwPyJugYA#PPA286,M1}
    }
    					
    2017.06.27 Salim Kebir, Isabelle Borne & Djamel Meslati A Genetic Algorithm-based Approach for Automated Refactoring of Component-based Software 2017 Information and Software Technology, Vol. 88, pp. 17-36, August   Article
    Abstract: Context: During its lifecycle, a software system undergoes repeated modifications to quickly fulfill new requirements, but its underlying design is not properly adjusted after each update. This leads to the emergence of bad smells. Refactoring provides a de facto behavior-preserving approach to eliminate these anomalies. However, manually determining and performing useful refactorings is a formidable challenge, as stated in the literature. Therefore, framing object-oriented automated refactoring as a search-based technique has been proposed. However, the literature shows that search-based refactoring of component-based software has not yet received proper attention.

    Objective: This paper presents a genetic algorithm-based approach for the automated refactoring of component-based software. This approach consists of detecting component-relevant bad smells and eliminating these bad smells by searching for the best sequence of refactorings using a genetic algorithm.

    Method: Our approach consists of four steps. The first step includes studying the literature related to component-relevant bad smells and formulating bad smell detection rules. The second step involves proposing a catalog of component-relevant refactorings. The third step consists of constructing a source code model by extracting facts from the source code of a component-based software. The final step seeks to identify the best sequence of refactorings to apply to reduce the presence of bad smells in the source code model using a genetic algorithm. The latter uses bad smell detection rules as a fitness function and the catalog of refactorings as a means to explore the search space.

    Results: As a case study, we conducted experiments on an unbiased set of four real-world component-based applications. The results indicate that our approach is able to efficiently reduce the total number of bad smells by more than one half, which is an acceptable value compared to the recent literature. Moreover, we determined that our approach is also accurate in refactoring only components suffering from bad smells while leaving the remaining components untouched whenever possible. Furthermore, a statistical analysis shows that our genetic algorithm outperforms random search and local search in terms of efficiency and accuracy on almost all the systems investigated in this work.

    Conclusion: This paper presents a search-based approach for the automated refactoring of component-based software. To the best of our knowledge, our approach is the first to focus on component-based refactoring, whereas the state-of-the-art approaches focus only on object-oriented refactoring.

    BibTeX:
    @article{KebirBM17,
      author = {Salim Kebir and Isabelle Borne and Djamel Meslati},
      title = {A Genetic Algorithm-based Approach for Automated Refactoring of Component-based Software},
      journal = {Information and Software Technology},
      year = {2017},
      volume = {88},
      pages = {17-36},
      month = {August},
      doi = {http://dx.doi.org/10.1016/j.infsof.2017.03.009}
    }
    					
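    The final step's search formulation reduces to a short fitness loop. In this sketch, apply_refactoring and count_bad_smells stand in for the paper's refactoring catalog and detection rules (the names are ours):

    def fitness(sequence, model, apply_refactoring, count_bad_smells):
        # An individual is a sequence of refactorings; its fitness is the
        # number of bad smells remaining after applying the sequence to a
        # copy of the source-code model (minimised by the genetic algorithm).
        m = model.copy()
        for refactoring in sequence:
            m = apply_refactoring(m, refactoring)
        return count_bad_smells(m)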
    2012.03.07 Sangeeta Sabharwal, Ritu Sibal & Chayanika Sharma A Genetic Algorithm based Approach for Prioritization of Test Case Scenarios in Static Testing 2011 Proceedings of the 2nd International Conference on Computer and Communication Technology (ICCCT '11), pp. 304-309, Allahabad India, 15-17 September   Inproceedings Testing and Debugging
    Abstract: White box testing is a test technique that takes into account program code, code structure and internal design flow. White box testing is primarily of two kinds: static and structural. Whereas static testing requires only the source code of the product, not the binaries or executables, in structural testing the tests are actually run by the computer on the built product. In this paper, we propose a technique for optimizing static testing efficiency by identifying the critical path clusters using a genetic algorithm. The testing efficiency is optimized by applying the genetic algorithm to the test data. The test case scenarios are derived from the source code. The information flow metric is adopted in this work for calculating the information flow complexity associated with each node of the control flow graph generated from the source code. This paper is an extension of our previous work [18].
    BibTeX:
    @inproceedings{SabharwalSS11,
      author = {Sangeeta Sabharwal and Ritu Sibal and Chayanika Sharma},
      title = {A Genetic Algorithm based Approach for Prioritization of Test Case Scenarios in Static Testing},
      booktitle = {Proceedings of the 2nd International Conference on Computer and Communication Technology (ICCCT '11)},
      publisher = {IEEE},
      year = {2011},
      pages = {304-309},
      address = {Allahabad, India},
      month = {15-17 September},
      doi = {http://dx.doi.org/10.1109/ICCCT.2011.6075160}
    }
    					
    2016.02.17 Yeresime Suresh & Santanu Ku Rath A Genetic Algorithm based Approach for Test Data Generation in Basis Path Testing 2013 International Journal of Soft Computing and Software Engineering, Vol. 3(3), pp. 326-332, March   Article Testing and Debugging
    Abstract: Software testing is a process to identify the quality and reliability of software, which can be achieved with the help of proper test data. However, doing this manually is a difficult task due to the presence of a number of predicate nodes in the module, which leads to an NP-complete problem. Therefore, intelligence-based search algorithms have to be used to generate test data. In this paper, we use a soft-computing-based approach, the genetic algorithm, to generate test data based on the set of basis paths. The paper combines the characteristics of the genetic algorithm with test data, making use of the merits of the respective global and local optimization capabilities to improve the generation of test data. This automated process of generating test data optimally helps in reducing the test effort and time of a tester. Finally, the proposed approach is applied to an ATM withdrawal task. Experimental results show that the genetic algorithm was able to generate suitable test data based on a fitness value and to avoid redundant data by optimization.
    BibTeX:
    @article{SureshR13,
      author = {Yeresime Suresh and Santanu Ku Rath},
      title = {A Genetic Algorithm based Approach for Test Data Generation in Basis Path Testing},
      journal = {International Journal of Soft Computing and Software Engineering},
      year = {2013},
      volume = {3},
      number = {3},
      pages = {326-332},
      month = {March},
      url = {http://www.jscse.com/papers/?vol=3&no=3&n=49}
    }
    					
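    A common fitness shape for basis-path test data generation rewards inputs whose executed path matches a longer prefix of the target basis path; the sketch below is an assumed simplification, not necessarily the paper's exact function:

    def path_prefix_fitness(executed_path, target_path):
        # Fraction of the target basis path matched before the first
        # divergence; 1.0 means the input drives execution down the full path.
        matched = 0
        for e, t in zip(executed_path, target_path):
            if e != t:
                break
            matched += 1
        return matched / len(target_path)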
    2011.03.07 Vahid Garousi A Genetic Algorithm-based Stress Test Requirements Generator Tool and Its Empirical Evaluation 2010 IEEE Transactions on Software Engineering, Vol. 36(6), pp. 778-797, November-December   Article Testing and Debugging
    Abstract: Genetic algorithms (GAs) have been applied previously to UML-driven stress test requirements generation with the aim of increasing chances of discovering faults relating to network traffic in distributed real-time systems. However, since evolutionary algorithms are heuristic, their performance can vary across multiple executions, which may affect robustness and scalability. To address this, we present the design and technical detail of a UML-driven, GA-based stress test requirements generation tool, together with its empirical analysis. The main goal is to analyze and improve the applicability, efficiency, and effectiveness and also to validate the design choices of the GA used in the tool. Findings of the empirical evaluation reveal that the tool is robust and reasonably scalable when it is executed on large-scale experimental design models. The study also reveals the main bottlenecks and limitations of the tools, e.g., there is a performance bottleneck when the system under test has a large number of sequence diagrams which could be triggered independently from each other. In addition, issues specific to stress testing, e.g., the impact of variations in task arrival pattern types, reveal that the tool generally generates effective test requirements, although the features of those test requirements might be different in different runs (e.g., different stress times from the test start time might be chosen). While the use of evolutionary algorithms to generate software test cases has been widely reported, the extent, depth, and detail of the empirical findings presented in this paper are novel and suggest that the proposed approach is effective and efficient in generating stress test requirements. It is hoped that the findings of this empirical study will help other SBSE researchers with the empirical evaluation of their own techniques and tools.
    BibTeX:
    @article{Garousi10,
      author = {Vahid Garousi},
      title = {A Genetic Algorithm-based Stress Test Requirements Generator Tool and Its Empirical Evaluation},
      journal = {IEEE Transactions on Software Engineering},
      year = {2010},
      volume = {36},
      number = {6},
      pages = {778-797},
      month = {November-December},
      doi = {http://dx.doi.org/10.1109/TSE.2010.5}
    }
    					
    2012.02.28 Romain Delamare & Nicholas A. Kraft A Genetic Algorithm for Computing Class Integration Test Orders for Aspect-Oriented Systems 2012 Proceedings of the 5th International Workshop on Search-Based Software Testing (SBST '12), pp. 804-813, Montreal Canada, 21-21 April   Inproceedings Testing and Debugging
    Abstract: In this paper we present an approach to the class integration test order problem in aspect-oriented programs. Several approaches have been proposed for aspect-oriented systems, but ours is the first, to the best of our knowledge, to consider the indirect impact of aspects. The approach relies on a genetic algorithm and can reduce testing effort when many methods are indirectly impacted by aspects. We detail the algorithm and then discuss its parameters. The approach has been implemented for AspectJ systems and, to validate it, applied to a motivating example. (An illustrative code sketch follows this entry.)
    BibTeX:
    @inproceedings{DelamareK12,
      author = {Romain Delamare and Nicholas A. Kraft},
      title = {A Genetic Algorithm for Computing Class Integration Test Orders for Aspect-Oriented Systems},
      booktitle = {Proceedings of the 5th International Workshop on Search-Based Software Testing (SBST '12)},
      publisher = {IEEE},
      year = {2012},
      pages = {804-813},
      address = {Montreal, Canada},
      month = {21-21 April},
      doi = {http://dx.doi.org/10.1109/ICST.2012.179}
    }
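    An illustrative permutation GA for the class integration test order problem described above. The dependency map and the plain stub-count cost model are simplified assumptions; the paper's model also captures the indirect impact of aspects.

    import random

    # Hypothetical dependency map: testing a class before the classes it
    # depends on forces stubs; the GA searches for an order minimizing
    # the number of stubs.
    DEPS = {0: {1}, 1: {2}, 2: set(), 3: {0, 2}, 4: {3}}

    def stubs_needed(order):
        placed, cost = set(), 0
        for c in order:
            cost += len(DEPS[c] - placed)   # unplaced dependencies => stubs
            placed.add(c)
        return cost

    def order_crossover(p1, p2):
        # OX: copy a slice from p1, fill the remainder in p2's order.
        i, j = sorted(random.sample(range(len(p1)), 2))
        hole = p1[i:j]
        rest = [g for g in p2 if g not in hole]
        return rest[:i] + hole + rest[i:]

    def evolve(pop_size=30, gens=200):
        classes = list(DEPS)
        pop = [random.sample(classes, len(classes)) for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=stubs_needed)
            nxt = pop[:10]                      # elitist selection
            while len(nxt) < pop_size:
                child = order_crossover(*random.sample(pop[:15], 2))
                if random.random() < 0.3:       # swap mutation
                    a, b = random.sample(range(len(child)), 2)
                    child[a], child[b] = child[b], child[a]
                nxt.append(child)
            pop = nxt
        return min(pop, key=stubs_needed)

    best = evolve()
    print(best, stubs_needed(best))   # e.g. [2, 1, 0, 3, 4] with 0 stubs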
    					
    2014.05.28 Ivana Ognjanović & Ramo Šendelj A Genetic Algorithm for Configuration of the Business Process Families 2012 Proceedings of the 4th International Conference on Computer Engineering and Technology (ICCET '12), pp. 6-11, Bangkok Thailand, 12-13 May   Inproceedings
    Abstract: Business process model families (BPMF) capture all of the possible business processes that can be defined for a target domain of interest. The configuration problem that arises is the problem of both deriving a unique business process model and selecting the services that optimally implement the activities of the process model. There is an increasing number of available services for each activity in the BPMF, offering the same functionality but differing in QoS characteristics. We propose the use of Genetic Algorithms (GA), a technique that finds near-optimal solutions to combinatorial problems, adapt it to the BPMF configuration problem, and analyze its efficiency by simulation for the considered domain.
    BibTeX:
    @inproceedings{OgnjanovicS12,
      author = {Ivana Ognjanović and Ramo Šendelj},
      title = {A Genetic Algorithm for Configuration of the Business Process Families},
      booktitle = {Proceedings of the 4th International Conference on Computer Engineering and Technology (ICCET '12)},
      publisher = {IACSIT Press},
      year = {2012},
      pages = {6-11},
      address = {Bangkok, Thailand},
      month = {12-13 May},
      url = {http://www.ipcsit.com/vol40/002-ICCET2012-C20006.pdf}
    }
    					
    2011.02.05 Danielle Azar A Genetic Algorithm for Improving Accuracy of Software Quality Predictive Models: A Search-based Software Engineering Approach 2010 International Journal of Computational Intelligence and Applications, Vol. 9(2), pp. 125-136   Article Management
    Abstract: In this work, we present a genetic algorithm to optimize predictive models used to estimate software quality characteristics. Software quality assessment is crucial in the software development field since it helps reduce cost, time and effort. However, software quality characteristics cannot be directly measured but they can be estimated based on other measurable software attributes (such as coupling, size and complexity). Software quality estimation models establish a relationship between the unmeasurable characteristics and the measurable attributes. However, these models are hard to generalize and reuse on new, unseen software as their accuracy deteriorates significantly. In this paper, we present a genetic algorithm that adapts such models to new data. We give empirical evidence illustrating that our approach outperforms the machine learning algorithm C4.5 and random guessing.
    BibTeX:
    @article{Azar10,
      author = {Danielle Azar},
      title = {A Genetic Algorithm for Improving Accuracy of Software Quality Predictive Models: A Search-based Software Engineering Approach},
      journal = {International Journal of Computational Intelligence and Applications},
      year = {2010},
      volume = {9},
      number = {2},
      pages = {125-136},
      doi = {http://dx.doi.org/10.1142/S1469026810002811}
    }
    					
    2012.03.09 Jianmei Guo, Jules White, Guangxin Wang, Jian Li & Yinglin Wang A Genetic Algorithm for Optimized Feature Selection with Resource Constraints in Software Product Lines 2011 The Journal of Systems and Software, Vol. 84(12), pp. 2208-2221, December   Article Requirements/Specifications
    Abstract: Software product line (SPL) engineering is a software engineering approach to building configurable software systems. SPLs commonly use a feature model to capture and document the commonalities and variabilities of the underlying software system. A key challenge when using a feature model to derive a new SPL configuration is determining how to find an optimized feature selection that minimizes or maximizes an objective function, such as total cost, subject to resource constraints. To help address the challenges of optimizing feature selection in the face of resource constraints, this paper presents an approach that uses Genetic Algorithms for optimized FEature Selection (GAFES) in SPLs. Our empirical results show that GAFES can produce solutions with 86–97% of the optimality of other automated feature selection algorithms and in 45–99% less time than existing exact and heuristic feature selection techniques. (A minimal code sketch follows this entry.)
    BibTeX:
    @article{GuoWWLW11,
      author = {Jianmei Guo and Jules White and Guangxin Wang and Jian Li and Yinglin Wang},
      title = {A Genetic Algorithm for Optimized Feature Selection with Resource Constraints in Software Product Lines},
      journal = {The Journal of Systems and Software},
      year = {2011},
      volume = {84},
      number = {12},
      pages = {2208-2221},
      month = {December},
      doi = {http://dx.doi.org/10.1016/j.jss.2011.06.026}
    }
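    A GAFES-flavoured sketch: a bit-string GA for feature selection under a resource budget, with a repair step that restores feasibility. The feature values, costs and budget below are invented, and real feature models add mandatory/alternative and cross-tree constraints that this sketch ignores.

    import random

    # Illustrative data only: each optional feature has a value and a
    # resource cost; the budget plays the role of the resource constraint.
    VALUE = [9, 5, 7, 3, 8, 4]
    COST = [6, 3, 5, 2, 6, 3]
    BUDGET = 14

    def repair(bits):
        # Drop the least value-dense selected features until feasible.
        bits = bits[:]
        while sum(c for b, c in zip(bits, COST) if b) > BUDGET:
            worst = min((i for i, b in enumerate(bits) if b),
                        key=lambda i: VALUE[i] / COST[i])
            bits[worst] = 0
        return bits

    def fit(bits):
        return sum(v for b, v in zip(bits, VALUE) if b)

    def gafes_like(pop_size=20, gens=60):
        n = len(VALUE)
        pop = [repair([random.randint(0, 1) for _ in range(n)])
               for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=fit, reverse=True)
            nxt = pop[:4]
            while len(nxt) < pop_size:
                a, b = random.sample(pop[:10], 2)
                cut = random.randrange(1, n)          # one-point crossover
                child = a[:cut] + b[cut:]
                if random.random() < 0.2:
                    child[random.randrange(n)] ^= 1   # bit-flip mutation
                nxt.append(repair(child))
            pop = nxt
        return max(pop, key=fit)

    print(gafes_like())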
    					
    2007.12.02 Yannick Monnier, Jean-Pierre Beauvais & Anne-Marie Déplanche A Genetic Algorithm for Scheduling Tasks in a Real-Time Distributed System 1998 Proceedings of the 24th EUROMICRO Conference (EUROMICRO '98), Vol. 2, pp. 20708-20714, Västerås Sweden, 25-27 August   Inproceedings Design Tools and Techniques
    Abstract: Real-time systems must often handle several independent periodic macro-tasks, each one represented by a general task graph including communications and precedence constraints. Implementing such applications on a distributed system communicating via a bus requires task assignment and scheduling, as well as taking communication delays into account. As periodicity implies macro-task deadlines, the problem of finding a feasible schedule is critical. This paper addresses the resolution of this NP-hard problem using a genetic algorithm, under offline and non-preemptive scheduling assumptions. The algorithm's performance is evaluated on a large simulation set and compared to classical list-based algorithms, a simulated annealing algorithm, and a specific clustering algorithm.
    BibTeX:
    @inproceedings{MonnierBD98,
      author = {Yannick Monnier and Jean-Pierre Beauvais and Anne-Marie Déplanche},
      title = {A Genetic Algorithm for Scheduling Tasks in a Real-Time Distributed System},
      booktitle = {Proceedings of the 24th EUROMICRO Conference (EUROMICRO '98)},
      publisher = {IEEE},
      year = {1998},
      volume = {2},
      pages = {20708-20714},
      address = {Västerås, Sweden},
      month = {25-27 August},
      doi = {http://dx.doi.org/10.1109/EURMIC.1998.708092}
    }
    					
    2012.07.20 Liang You & Yansheng Lu A Genetic Algorithm for the Time-aware Regression Testing Reduction Problem 2012 Proceedings of the 8th International Conference on Natural Computation (ICNC '12), pp. 596-599, Chongqing China, 29-31 May   Inproceedings Testing and Debugging
    Abstract: After the programmer fixes bugs and enhances the functionality of a software project, regression testing reruns the regression test suite to ensure that the new version of the software runs smoothly and correctly. Because regression testing is the most expensive phase of software testing, regression testing reduction eliminates redundant test cases in the regression test suite and saves regression testing cost. This paper formally defines the time-aware regression testing reduction problem and proposes a novel genetic algorithm for it, defining the representation and fitness function as well as the parent selection, crossover, and mutation operators. The algorithm not only removes redundant test cases from the regression test suite but also minimizes the total running time of the remaining test cases. Finally, the paper evaluates the genetic algorithm on eight example programs; the experimental results illustrate the efficiency of the proposed algorithm. (A minimal code sketch follows this entry.)
    BibTeX:
    @inproceedings{YouL12,
      author = {Liang You and Yansheng Lu},
      title = {A Genetic Algorithm for the Time-aware Regression Testing Reduction Problem},
      booktitle = {Proceedings of the 8th International Conference on Natural Computation (ICNC '12)},
      publisher = {IEEE},
      year = {2012},
      pages = {596-599},
      address = {Chongqing, China},
      month = {29-31 May},
      doi = {http://dx.doi.org/10.1109/ICNC.2012.6234754}
    }
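    A rough sketch of a time-aware reduction GA in the spirit of the abstract above: a bit vector over test cases, uniform crossover, bit-flip mutation, and a fitness that puts requirement coverage first and total running time second. All data is illustrative.

    import random

    # Illustrative suite: each test covers some requirements and has a
    # running time (all numbers invented for the sketch).
    COVER = [{1, 2}, {2, 3}, {3}, {1, 4}, {4, 5}, {5}]
    TIME = [4, 3, 2, 5, 4, 1]
    ALL_REQS = set().union(*COVER)

    def fitness(bits):
        selected = [COVER[i] for i, b in enumerate(bits) if b]
        covered = set().union(*selected)
        missing = len(ALL_REQS - covered)
        runtime = sum(t for b, t in zip(bits, TIME) if b)
        return missing * 1000 + runtime    # coverage first, runtime second

    def reduce_suite(pop_size=30, gens=100):
        n = len(COVER)
        pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=fitness)
            nxt = pop[:6]                          # elitism
            while len(nxt) < pop_size:
                a, b = random.sample(pop[:12], 2)  # parent selection
                child = [x if random.random() < 0.5 else y
                         for x, y in zip(a, b)]    # uniform crossover
                if random.random() < 0.3:
                    child[random.randrange(n)] ^= 1
                nxt.append(child)
            pop = nxt
        return min(pop, key=fitness)

    best = reduce_suite()
    print(best, fitness(best))   # e.g. keeps only tests 1, 3 and 5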
    					
    2012.03.08 Sergio Di Martino, Filomena Ferrucci, Carmine Gravino & Federica Sarro A Genetic Algorithm to Configure Support Vector Machines for Predicting Fault-Prone Components 2011 Proceedings of the 12th International Conference on Product-Focused Software Process Improvement (PROFES '11), Vol. 6759, pp. 247-261, Torre Canne Italy, 20-22 June   Inproceedings
    Abstract: In some studies, Support Vector Machines (SVMs) have turned out to be promising for predicting fault-prone software components. Nevertheless, the performance of the method depends on the setting of some parameters. To address this issue, we propose the use of a Genetic Algorithm (GA) to search for a suitable configuration of SVM parameters that allows us to obtain optimal prediction performance. The approach has been assessed by carrying out an empirical analysis based on jEdit data from the PROMISE repository. We analyzed both the inter- and the intra-release performance of the proposed method. As benchmarks we exploited SVMs with Grid-search and several other machine learning techniques. The results show that the proposed approach improves prediction performance, increasing the Recall measure without worsening Precision; this behavior was especially remarkable for the inter-release use with respect to the other prediction techniques.
    BibTeX:
    @inproceedings{MartinoFGS11,
      author = {Sergio Di Martino and Filomena Ferrucci and Carmine Gravino and Federica Sarro},
      title = {A Genetic Algorithm to Configure Support Vector Machines for Predicting Fault-Prone Components},
      booktitle = {Proceedings of the 12th International Conference on Product-Focused Software Process Improvement (PROFES '11)},
      publisher = {Springer},
      year = {2011},
      volume = {6759},
      pages = {247-261},
      address = {Torre Canne, Italy},
      month = {20-22 June},
      doi = {http://dx.doi.org/10.1007/978-3-642-21843-9_20}
    }
    					
    2007.12.02 Babak Hodjat, Junichi Ito & Makoto Amamiya A Genetic Algorithm to Improve Agent-Oriented Natural Language Interpreters 2004 Proceedings of the 2004 Conference on Genetic and Evolutionary Computation (GECCO '04), Vol. 3103, pp. 1307-1309, Seattle Washington USA, 26-30 June   Inproceedings Distributed Artificial Intelligence
    Abstract: A genetic algorithm is used to improve the success-rate of an AAOSA-based application. Tests show promising results both in the improvement made in the success-rate of the development and test corpora, and in the nature and number of interpretation rules added to agents.
    BibTeX:
    @inproceedings{HodjatIA04,
      author = {Babak Hodjat and Junichi Ito and Makoto Amamiya},
      title = {A Genetic Algorithm to Improve Agent-Oriented Natural Language Interpreters},
      booktitle = {Proceedings of the 2004 Conference on Genetic and Evolutionary Computation (GECCO '04)},
      publisher = {Springer Berlin / Heidelberg},
      year = {2004},
      volume = {3103},
      pages = {1307-1309},
      address = {Seattle, Washington, USA},
      month = {26-30 June},
      doi = {http://dx.doi.org/10.1007/b98645}
    }
    					
    2010.07.19 James Kukunas, Robert D. Cupper & Gregory M. Kapfhammer A Genetic Algorithm to Improve Linux Kernel Performance on Resource-constrained Devices 2010 Proceedings of the 12th Annual Conference on Genetic and Evolutionary Computation (GECCO '10), pp. 2095-2096, Portland Oregon USA, 7-11 July   Inproceedings Coding Tools and Techniques
    Abstract: As computers become increasingly mobile, users demand more functionality, longer battery-life, and better performance from mobile devices. In response, chipset fabricators are focusing on elegant architectures to provide solutions that are both low-power and high-performance. Since these architectures rely on unique x86 extensions rather than fast clock speeds and large caches, careful thought must be placed into effective optimization strategies for not only user applications, but also the kernel itself, as the typical default optimizations used by modern compilers do not often take advantage of these specialized features. Focusing on the Intel Diamondville platform, this paper presents a genetic algorithm that evolves the compiler flags needed to build a Linux kernel that exhibits reduced response times.
    BibTeX:
    @inproceedings{KukunasCK10,
      author = {James Kukunas and Robert D. Cupper and Gregory M. Kapfhammer},
      title = {A Genetic Algorithm to Improve Linux Kernel Performance on Resource-constrained Devices},
      booktitle = {Proceedings of the 12th Annual Conference on Genetic and Evolutionary Computation (GECCO '10)},
      publisher = {ACM},
      year = {2010},
      pages = {2095-2096},
      address = {Portland, Oregon, USA},
      month = {7-11 July},
      doi = {http://dx.doi.org/10.1145/1830761.1830879}
    }
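    A hedged sketch of the flag-evolution idea in the entry above: a bit string selects compiler flags and a benchmark supplies the fitness. The flag list is generic and the benchmark is a fake surrogate so the sketch runs anywhere; the paper actually rebuilds and times a Linux kernel on Diamondville hardware.

    import random

    # A tiny, generic flag pool; Diamondville-specific kernel tuning
    # cannot be reproduced in a sketch.
    FLAGS = ["-O2", "-fomit-frame-pointer", "-funroll-loops",
             "-fno-inline", "-march=native", "-ffast-math"]

    def benchmark(flag_set):
        # Stand-in for "compile with these flags, then measure response
        # time"; an arbitrary surrogate, fixed within a run.
        return sum(hash(f) % 97 for f in flag_set) % 51 + len(flag_set)

    def evolve_flags(pop_size=16, gens=40):
        n = len(FLAGS)
        score = lambda bits: benchmark([f for b, f in zip(bits, FLAGS) if b])
        pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=score)                    # lower "time" is better
            nxt = pop[:4]
            while len(nxt) < pop_size:
                a, b = random.sample(pop[:8], 2)
                cut = random.randrange(1, n)       # one-point crossover
                child = a[:cut] + b[cut:]
                if random.random() < 0.25:
                    child[random.randrange(n)] ^= 1
                nxt.append(child)
            pop = nxt
        best = min(pop, key=score)
        return [f for b, f in zip(best, FLAGS) if b]

    print(evolve_flags())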
    					
    2010.09.20 Chengwen Zhang & Xiuqin Lin A Genetic Algorithm with Improved Convergence Capability for QoS-Aware Web Service Selection 2009 Proceedings of International Conference on Management and Service Science (MASS '09), pp. 1-4, Wuhan China, 20-22 September   Inproceedings Management
    Abstract: A genetic algorithm (GA) is a good service selection algorithm for choosing an optimal composite plan from many candidate plans. Since the execution of a GA relies on a random search procedure to seek possible solutions, randomly generated sequences can lead to poor convergence. To improve the convergence of GAs for Web service selection with global quality-of-service (QoS) constraints, chaos theory is introduced into the GA with a relation-matrix coding scheme, on which the chaotic law is based. During the crossover phase, chaotic time series are adopted instead of random ones. The effects of chaotic and random sequences are compared in several numerical tests, and the performance of the GA using each is investigated. The simulation results on Web service selection with global QoS constraints show that the proposed strategy based on chaotic sequences can enhance the GA's convergence capability. (A minimal code sketch follows this entry.)
    BibTeX:
    @inproceedings{ZhangL09,
      author = {Chengwen Zhang and Xiuqin Lin},
      title = {A Genetic Algorithm with Improved Convergence Capability for QoS-Aware Web Service Selection},
      booktitle = {Proceedings of International Conference on Management and Service Science (MASS '09)},
      publisher = {IEEE},
      year = {2009},
      pages = {1-4},
      address = {Wuhan, China},
      month = {20-22 September},
      doi = {http://dx.doi.org/10.1109/ICMSS.2009.5304281}
    }
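    The core trick above, sketched: replace pseudo-random draws in crossover with a deterministic chaotic sequence. A logistic map at r = 4 is a standard choice of chaotic source; the paper's relation-matrix coding scheme is not reproduced here.

    def chaotic_source(seed=0.37):
        # Logistic map at r = 4: a deterministic chaotic sequence in (0, 1),
        # used below in place of pseudo-random draws. The seed is arbitrary,
        # avoiding degenerate values such as 0.5 or 0.75.
        x = seed
        while True:
            x = 4.0 * x * (1.0 - x)
            yield x

    chaos = chaotic_source()

    def chaotic_uniform_crossover(a, b):
        # Gene-wise parent choice driven by the chaotic sequence.
        return [x if next(chaos) < 0.5 else y for x, y in zip(a, b)]

    p1, p2 = [0] * 6, [1] * 6
    print(chaotic_uniform_crossover(p1, p2))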
    					
    2011.05.19 So-Yeong Jeon & Yong-Hyuk Kim A Genetic Approach to Analyze Algorithm Performance Based on the Worst-Case Instances 2010 Journal of Software Engineering and Applications, Vol. 3(8), pp. 767-775, August   Article Testing and Debugging
    Abstract: Search-based software engineering has mainly dealt with automated test data generation by metaheuristic search techniques. Similarly, we try to generate test data (i.e., problem instances) that exhibit the worst case of an algorithm by such a technique. In this paper, in terms of non-functional testing, we redefine the worst case of several algorithms. Using genetic algorithms (GAs), we illustrate the strategies corresponding to each type of instance. We adopt three problems as examples: sorting, the 0/1 knapsack problem (0/1KP), and the travelling salesperson problem (TSP). For some algorithms solving these problems, we could successfully find worst-case instances; the success of the results is assessed by a statistical approach and by comparison with random testing. Our examples provide informative guidelines for the use of genetic algorithms in generating worst-case instances, defined in terms of algorithm performance. (A small worked example follows this entry.)
    BibTeX:
    @article{JeonK10,
      author = {So-Yeong Jeon and Yong-Hyuk Kim},
      title = {A Genetic Approach to Analyze Algorithm Performance Based on the Worst-Case Instances},
      journal = {Journal of Software Engineering and Applications},
      year = {2010},
      volume = {3},
      number = {8},
      pages = {767-775},
      month = {August},
      doi = {http://dx.doi.org/10.4236/jsea.2010.38089}
    }
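    A small worked example of the idea above: evolve inputs that maximize an instrumented algorithm's work. Here the subject is insertion sort and the fitness is its comparison count; the paper also treats 0/1KP and TSP. The mutation-only GA is a simplification.

    import random

    def insertion_sort_comparisons(arr):
        # Instrumented insertion sort: counts comparisons on this input.
        a, comps = list(arr), 0
        for i in range(1, len(a)):
            j = i
            while j > 0:
                comps += 1
                if a[j - 1] > a[j]:
                    a[j - 1], a[j] = a[j], a[j - 1]
                    j -= 1
                else:
                    break
        return comps

    def worst_case_search(n=12, pop_size=30, gens=150):
        pop = [random.sample(range(n), n) for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=insertion_sort_comparisons, reverse=True)
            nxt = pop[:6]
            while len(nxt) < pop_size:
                child = random.choice(pop[:10])[:]
                i, j = random.sample(range(n), 2)   # swap mutation only
                child[i], child[j] = child[j], child[i]
                nxt.append(child)
            pop = nxt
        return max(pop, key=insertion_sort_comparisons)

    worst = worst_case_search()
    # The search should drift towards reverse-sorted inputs, whose
    # comparison count is n*(n-1)/2 = 66 for n = 12.
    print(worst, insertion_sort_comparisons(worst))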
    					
    2010.03.12 Eduardo Oliveira Costa, Aurora Trinidad Ramirez Pozo & Silvia Regina Vergilio A Genetic Programming Approach for Software Reliability Modeling 2010 IEEE Transactions on Reliability, Vol. 59(1), pp. 222-230, March   Article Distribution and Maintenance
    Abstract: Genetic Programming (GP) models adapt better to the reliability curve when compared with other traditional and non-parametric models. In a previous work, we conducted experiments with models based on time and on coverage. We introduced an approach, named Genetic Programming and Boosting (GPB), that uses boosting techniques to improve the performance of GP. This approach presented better results than classical GP, but required ten times the number of executions. Therefore, we introduce in this paper a new GP-based approach, named (μ+λ) GP. To evaluate this new approach, we repeated the same experiments conducted before. The results show that the (μ+λ) GP approach has the same cost as classical GP, and that there is no significant difference in performance when compared with the GPB approach. Hence, it is an excellent, less expensive technique to model software reliability. (A sketch of a (μ+λ) step follows this entry.)
    BibTeX:
    @article{CostaPV10,
      author = {Eduardo Oliveira Costa and Aurora Trinidad Ramirez Pozo and Silvia Regina Vergilio},
      title = {A Genetic Programming Approach for Software Reliability Modeling},
      journal = {IEEE Transactions on Reliability},
      year = {2010},
      volume = {59},
      number = {1},
      pages = {222-230},
      month = {March},
      doi = {http://dx.doi.org/10.1109/TR.2010.2040759}
    }
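    The (μ+λ) selection scheme named in the title, as a generic sketch shown on a toy numeric objective rather than GP trees or reliability data: λ offspring compete with the μ parents, and the best μ of the union survive.

    import random

    def mu_plus_lambda(parents, mutate, fitness, lam, gens):
        # Generic (mu+lambda) loop: lam offspring are generated from the
        # mu parents; the next generation is the best mu of the union.
        mu = len(parents)
        for _ in range(gens):
            offspring = [mutate(random.choice(parents)) for _ in range(lam)]
            parents = sorted(parents + offspring, key=fitness)[:mu]
        return parents

    # Toy usage: minimise |x - 42| over integers.
    result = mu_plus_lambda(
        parents=[random.randint(0, 100) for _ in range(5)],   # mu = 5
        mutate=lambda x: x + random.randint(-3, 3),
        fitness=lambda x: abs(x - 42),
        lam=20, gens=50)
    print(result)   # clusters around 42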
    					
    2009.11.25 Stephanie Forrest, ThanhVu Nguyen, Westley Weimer & Claire Le Goues A Genetic Programming Approach to Automated Software Repair 2009 Proceedings of the 11th Annual Conference on Genetic and Evolutionary Computation (GECCO '09), pp. 947-954   Inproceedings Testing and Debugging
    Abstract: Genetic programming is combined with program analysis methods to repair bugs in off-the-shelf legacy C programs. Fitness is defined using negative test cases that exercise the bug to be repaired and positive test cases that encode program requirements. Once a successful repair is discovered, structural differencing algorithms and delta debugging methods are used to minimize its size. Several modifications to the GP technique contribute to its success: (1) genetic operations are localized to the nodes along the execution path of the negative test case; (2) high-level statements are represented as single nodes in the program tree; (3) genetic operators use existing code in other parts of the program, so new code does not need to be invented. The paper describes the method, reviews earlier experiments that repaired 11 bugs in over 60,000 lines of code, reports results on new bug repairs, and describes experiments that analyze the performance and efficacy of the evolutionary components of the algorithm.
    BibTeX:
    @inproceedings{ForrestNWG09,
      author = {Stephanie Forrest and ThanhVu Nguyen and Westley Weimer and Claire Le Goues},
      title = {A Genetic Programming Approach to Automated Software Repair},
      booktitle = {Proceedings of the 11th Annual Conference on Genetic and Evolutionary Computation (GECCO '09)},
      year = {2009},
      pages = {947-954},
      doi = {http://dx.doi.org/10.1145/1569901.1570031}
    }
    					
    2008.08.17 Arjan Seesing & Hans-Gerhard Groß A Genetic Programming Approach to Automated Test Generation for Object-Oriented Software 2006 International Transactions on System Science and Applications, Vol. 1(2), pp. 127-134, October   Article Testing and Debugging
    Abstract: This article proposes a new method for creating test software for object-oriented systems using a genetic programming approach. It is believed that this approach is advantageous over the more established search-based test-case generation approaches because the test software is represented and altered as a fully functional computer program. Genetic programming (GP) uses a tree-shaped data structure which is more directly comparable and suitable for being mapped instantly to abstract syntax trees commonly used in computer languages and compilers. These structures can be manipulated and executed directly, bypassing intricate and error prone conversion procedures between different representations. In addition, tree structures make more operations possible, which keep the structure and semantics of the evolving test software better intact during program evolution, compared to linear structures. This speeds up the evolutionary program generation process because the loss of evolved structures due to mutations and crossover is prevented more effectively.
    BibTeX:
    @article{SeesingG06,
      author = {Arjan Seesing and Hans-Gerhard Groß},
      title = {A Genetic Programming Approach to Automated Test Generation for Object-Oriented Software},
      journal = {International Transactions on System Science and Applications},
      year = {2006},
      volume = {1},
      number = {2},
      pages = {127-134},
      month = {October},
      url = {http://www.mendeley.com/research/genetic-programming-approach-automated-test-generation-objectoriented-software/}
    }
    					
    2008.08.17 Arjan Seesing & Hans-Gerhard Groß A Genetic Programming Approach to Automated Test Generation for Object-Oriented Software 2006 (TUD-SERG-2006-017)   Techreport Testing and Debugging
    Abstract: In this article we propose a new method for creating test software for object-oriented systems using a genetic programming approach. We believe this approach is advantageous over the more established search-based test-case generation approaches because the test software is represented and altered as a fully functional computer program. Genetic programming (GP) uses a tree-shaped data structure which is more directly comparable and suitable for being mapped instantly to abstract syntax trees commonly used in computer languages and compilers. These structures can be manipulated and executed directly, bypassing intricate and error prone conversion procedures between different representations. In addition, tree structures make more operations possible, which keep the structure and semantics of the evolving test software better intact during program evolution, compared to linear structures. This speeds up the evolutionary program generation process because the loss of evolved structures due to mutations and crossover is prevented more effectively.
    BibTeX:
    @techreport{SeesingG06b,
      author = {Arjan Seesing and Hans-Gerhard Groß},
      title = {A Genetic Programming Approach to Automated Test Generation for Object-Oriented Software},
      year = {2006},
      number = {TUD-SERG-2006-017},
      url = {http://swerl.tudelft.nl/twiki/pub/Main/TechnicalReports/TUD-SERG-2006-017.pdf}
    }
    					
    2008.08.17 Arjan Seesing & Hans-Gerhard Groß A Genetic Programming Approach to Automated Test Generation for Object-Oriented Software 2006 Proceedings of the 1st International Workshop on Evaluation of Novel Approaches to Software Engineering, Erfurt Germany, 19-20 September   Inproceedings Testing and Debugging
    Abstract: In this article we propose a new method for creating test software for object-oriented systems using a genetic programming approach. We believe this approach is advantageous over the more established search-based test-case generation approaches because the test software is represented and altered as a fully functional computer program. Genetic programming (GP) uses a tree-shaped data structure which is more directly comparable and suitable for being mapped instantly to abstract syntax trees commonly used in computer languages and compilers. These structures can be manipulated and executed directly, bypassing intricate and error prone conversion procedures between different representations. In addition, tree structures make more operations possible, which keep the structure and semantics of the evolving test software better intact during program evolution, compared to linear structures. This speeds up the evolutionary program generation process because the loss of evolved structures due to mutations and crossover is prevented more effectively.
    BibTeX:
    @inproceedings{SeesingG06c,
      author = {Arjan Seesing and Hans-Gerhard Groß},
      title = {A Genetic Programming Approach to Automated Test Generation for Object-Oriented Software},
      booktitle = {Proceedings of the 1st International Workshop on Evaluation of Novel Approaches to Software Engineering},
      publisher = {NetObject Days 2006},
      year = {2006},
      address = {Erfurt, Germany},
      month = {19-20 September},
      url = {http://www.cs.bham.ac.uk/~wbl/biblio/cache/http___www.st.ewi.tudelft.nl__gross_Publications_Seesing_2006.pdf}
    }
    					
    2009.02.24 Lerina Aversano, Massimiliano Di Penta & Kunal Taneja A Genetic Programming Approach to Support the Design Of Service Compositions 2006 Computer Systems Science & Engineering, Vol. 21(4), pp. 247-254, July   Article Design Tools and Techniques
    Abstract: The design of service composition is one of the most challenging research problems in service-oriented software engineering. Building composite services is concerned with identifying a suitable set of services that, orchestrated in some way, is able to solve a business goal which cannot be achieved using any single available service. Although the literature reports several approaches for (semi-)automatic service composition, several problems, such as the capability of determining the composition's topology, still remain open. This paper proposes a search-based approach to semi-automatically support the design of service compositions. In particular, the approach uses genetic programming to automatically generate workflows that accomplish a business goal and exhibit a given QoS level, with the aim of supporting the service integrator's activities in the finalization of the workflow.
    BibTeX:
    @article{AversanoDT06,
      author = {Lerina Aversano and Massimiliano Di Penta and Kunal Taneja},
      title = {A Genetic Programming Approach to Support the Design Of Service Compositions},
      journal = {Computer Systems Science & Engineering},
      year = {2006},
      volume = {21},
      number = {4},
      pages = {247-254},
      month = {July},
      url = {www.rcost.unisannio.it/mdipenta/papers/csse06.pdf}
    }
    					
    2010.03.11 Manuel Mucientes, Manuel Lama & Miguel I. Couto A Genetic Programming-based Algorithm for Composing Web Services 2009 Proceedings of 9th International Conference on Intelligent Systems Design and Applications (ISDA '09), pp. 379-384, Pisa Italy, Nov 30-Dec. 2   Inproceedings Design Tools and Techniques
    Abstract: Web services are interfaces that describe a collection of operations that are network-accessible through standardized Web protocols. When a required operation is not found, several services can be compounded to get a composite service that performs the desired task. To find this composite service, a search process over a huge search space must be performed. The algorithm that composes the services must select the adequate atomic processes and, also, must choose the correct way to combine them using the different available control structures. In this paper a genetic programming algorithm for Web services composition is presented. The algorithm has a context-free grammar to generate the valid structures of the composite services. Moreover, it includes a method to update the attributes of each node. A full experimental validation with a repository of 1,000 Web services has been done, showing a great performance as the algorithm finds a valid solution in all the tests.
    BibTeX:
    @inproceedings{MucientesLC09,
      author = {Manuel Mucientes and Manuel Lama and Miguel I. Couto},
      title = {A Genetic Programming-based Algorithm for Composing Web Services},
      booktitle = {Proceedings of 9th International Conference on Intelligent Systems Design and Applications (ISDA '09)},
      publisher = {IEEE},
      year = {2009},
      pages = {379-384},
      address = {Pisa, Italy},
      month = {Nov 30-Dec. 2},
      doi = {http://dx.doi.org/10.1109/ISDA.2009.155}
    }
    					
    2008.08.07 Christopher L. Simons & Ian C. Parmee Agent-based Support for Interactive Search in Conceptual Software Engineering Design 2008 Proceedings of the 10th Annual Conference on Genetic and Evolutionary Computation (GECCO '08), pp. 1785-1786, Atlanta GA USA, 12-16 July   Inproceedings Design Tools and Techniques
    Abstract: While recent attempts to search a conceptual software engineering design search space with multi-objective evolutionary algorithms have yielded promising results, the practical application of such search-based techniques remains to be addressed. This paper reports initial findings of the application of software agents in support of an interactive, user-centered conceptual software design scenario. The supporting role of a number of single responsibility agents is described and results for a case study indicate that the application of such agents to search-based design scenarios provides efficient, high performance and effective support. The notion of interactive, joint human-computer activity appears to map well to conceptual software design scenarios: focus on superior design concepts and thence to useful and interesting designs provides a natural and effective way of narrowing the population based search. In addition, agents and the human designer appear to interact as cooperative "team players", jointly influencing the evolutionary algorithm based search. Nevertheless, challenges remain, including expanding the scale of implementation of underlying technologies to support distributed, collaborative design.
    BibTeX:
    @inproceedings{SimonsP08,
      author = {Christopher L. Simons and Ian C. Parmee},
      title = {Agent-based Support for Interactive Search in Conceptual Software Engineering Design},
      booktitle = {Proceedings of the 10th Annual Conference on Genetic and Evolutionary Computation (GECCO '08)},
      publisher = {ACM},
      year = {2008},
      pages = {1785-1786},
      address = {Atlanta, GA, USA},
      month = {12-16 July},
      doi = {http://dx.doi.org/10.1145/1389095.1389440}
    }
    					
    2013.08.05 Rodrigo C. Barros, Márcio P. Basgalupp, Ricardo Cerri, Tiago S. da Silva & André C.P.L.F. de Carvalho A Grammatical Evolution Approach for Software Effort Estimation 2013 Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation (GECCO '13), pp. 1413-1420, Amsterdam The Netherlands, 6-10 July   Inproceedings Management
    Abstract: Software effort estimation is an important task within software engineering. It is widely used for planning and monitoring software project development as a means to deliver the product on time and within budget. Several approaches for generating predictive models from collected metrics have been proposed throughout the years. Machine learning algorithms, in particular, have been widely-employed to this task, bearing in mind their capability of providing accurate predictive models for the analysis of project stakeholders. In this paper, we propose a grammatical evolution approach for software metrics estimation. Our novel algorithm, namely SEEGE, is empirically evaluated on public project data sets, and we compare its performance with state-of-the-art machine learning algorithms such as support vector machines for regression and artificial neural networks, and also to popular linear regression. Results show that SEEGE outperforms the other algorithms considering three different evaluation measures, clearly indicating its effectiveness for the effort estimation task.
    BibTeX:
    @inproceedings{BarrosBCSC13,
      author = {Rodrigo C. Barros and Márcio P. Basgalupp and Ricardo Cerri and Tiago S. da Silva and André C.P.L.F. de Carvalho},
      title = {A Grammatical Evolution Approach for Software Effort Estimation},
      booktitle = {Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation (GECCO '13)},
      publisher = {ACM},
      year = {2013},
      pages = {1413-1420},
      address = {Amsterdam, The Netherlands},
      month = {6-10 July},
      doi = {http://dx.doi.org/10.1145/2463372.2463546}
    }
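    A sketch of grammatical evolution's genotype-to-phenotype mapping, the mechanism underlying the approach above: integer codons pick grammar productions modulo the rule count. The two-metric grammar, the depth cap, and the evaluation step are invented for illustration; SEEGE's actual grammar is richer.

    import itertools
    import random

    GRAMMAR = {
        "<expr>": [["<expr>", "<op>", "<expr>"], ["<var>"], ["<const>"]],
        "<op>": [["+"], ["-"], ["*"]],
        "<var>": [["kloc"], ["team"]],
        "<const>": [["2"], ["5"], ["10"]],
    }

    def ge_map(codons, symbol="<expr>", depth=0, it=None):
        # Each codon selects a production rule modulo the rule count;
        # codons wrap around via itertools.cycle.
        it = it or itertools.cycle(codons)
        if symbol not in GRAMMAR:
            return symbol
        rules = GRAMMAR[symbol]
        if depth > 4 and symbol == "<expr>":
            rules = rules[1:]          # depth cap: force non-recursive rules
        choice = rules[next(it) % len(rules)]
        return " ".join(ge_map(codons, s, depth + 1, it) for s in choice)

    genotype = [random.randint(0, 255) for _ in range(20)]
    expr = ge_map(genotype)
    print(expr)                                  # e.g. "kloc * 5 + team"
    print(eval(expr, {"kloc": 30, "team": 4}))   # candidate effort estimate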
    					
    2015.09.29 AbdulRahman A. Alsewari & Kamal Z. Zamli A Harmony Search based Pairwise Sampling Strategy for Combinatorial Testing 2012 International Journal of Physical Sciences, Vol. 7(7), pp. 1062-1072, February   Article Testing and Debugging
    Abstract: Over the years, we have become increasingly dependent on software in many activities of our lives. To ensure software quality and reliability, many combinations of possible input parameters, hardware/software environments and system conditions need to be tested and verified for conformance. Due to resource constraints and time-to-market pressure, exhaustive testing is practically impossible. To address this issue, a number of pairwise testing (and sampling) strategies have been developed in the literature in the past 15 years. In this paper, we propose and evaluate a novel pairwise strategy called the pairwise harmony search algorithm-based strategy (PHSS). Based on the published benchmarking results, the PHSS strategy outperforms most existing strategies in terms of the generated test size in many of the parameter configurations considered. In the cases where PHSS is not the most optimal, the resulting test size is sufficiently competitive. PHSS serves as our research vehicle to investigate the effective use of the harmony search (HS) algorithm for pairwise test data reduction. (A minimal code sketch follows this entry.)
    BibTeX:
    @article{RahmanAZ12,
      author = {AbdulRahman A. Alsewari and Kamal Z. Zamli},
      title = {A Harmony Search based Pairwise Sampling Strategy for Combinatorial Testing},
      journal = {International Journal of Physical Sciences},
      year = {2012},
      volume = {7},
      number = {7},
      pages = {1062-1072},
      month = {February},
      url = {http://www.academicjournals.org/journal/IJPS/article-full-text-pdf/1D016B216620},
      doi = {http://dx.doi.org/10.5897/IJPS11.1633}
    }
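    A pairwise-generation sketch in the spirit of PHSS: an outer greedy loop builds the suite while an inner harmony search improvises each next test. The harmony memory size, HMCR, and the three-parameter model are arbitrary, and pitch adjustment is omitted; this is not the authors' strategy.

    import random
    from itertools import combinations

    DOMAINS = [2, 2, 3]          # domain sizes of parameters p0, p1, p2

    def pairs(test):
        # The set of parameter-value pairs a single test covers.
        return {((i, test[i]), (j, test[j]))
                for i, j in combinations(range(len(test)), 2)}

    def improvise_test(uncovered, hms=8, iters=200, hmcr=0.9):
        # Harmony search inner loop: keep a memory of candidate tests and
        # improvise new ones, replacing the worst when improved.
        memory = [[random.randrange(d) for d in DOMAINS] for _ in range(hms)]
        score = lambda t: len(pairs(t) & uncovered)
        for _ in range(iters):
            new = [random.choice(memory)[i] if random.random() < hmcr
                   else random.randrange(d)
                   for i, d in enumerate(DOMAINS)]
            worst = min(range(hms), key=lambda k: score(memory[k]))
            if score(new) > score(memory[worst]):
                memory[worst] = new    # pitch adjustment omitted for brevity
        return max(memory, key=score)

    # All pairs that must be covered.
    uncovered = set()
    for t in ([a, b, c] for a in range(2) for b in range(2) for c in range(3)):
        uncovered |= pairs(t)

    suite = []
    while uncovered:
        t = improvise_test(uncovered)
        if not pairs(t) & uncovered:   # safety net: cover one pair directly
            (i, v), (j, w) = next(iter(uncovered))
            t = [random.randrange(d) for d in DOMAINS]
            t[i], t[j] = v, w
        suite.append(t)
        uncovered -= pairs(t)
    print(len(suite), suite)           # ideally 6 tests for this model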
    					
    2010.06.04 Fatemeh Asadi, Massimiliano Di Penta, Giuliano Antoniol & Yann-Gaël Guéhéneuc A Heuristic-based Approach to Identify Concepts in Execution Traces 2010 Proceedings of the 14th European Conference on Software Maintenance and Reengineering (CSMR '10), pp. 31-40, Madrid Spain, 15-18 March   Inproceedings Distribution and Maintenance
    Abstract: Concept or feature identification, i.e., the identification of the source code fragments implementing a particular feature, is a crucial task during software understanding and maintenance. This paper proposes an approach to identify concepts in execution traces by finding cohesive and decoupled fragments of the traces. The approach relies on search-based optimization techniques, textual analysis of the system source code using latent semantic indexing, and trace compression techniques. It is evaluated to identify features from execution traces of two open source systems from different domains, JHotDraw and ArgoUML. Results show that the approach is always able to identify trace segments implementing concepts with a high precision and, for highly cohesive concepts, with a high overlap with the manually-built oracle.
    BibTeX:
    @inproceedings{AsadiDA10,
      author = {Fatemeh Asadi and Massimiliano Di Penta and Giuliano Antoniol and Yann-Gaël Guéhéneuc},
      title = {A Heuristic-based Approach to Identify Concepts in Execution Traces},
      booktitle = {Proceedings of the 14th European Conference on Software Maintenance and Reengineering (CSMR '10)},
      publisher = {IEEE},
      year = {2010},
      pages = {31-40},
      address = {Madrid, Spain},
      month = {15-18 March},
      doi = {http://dx.doi.org/10.1109/CSMR.2010.17}
    }
    					
    2013.08.05 Achiya Elyasaf, Michael Orlov & Moshe Sipper A HeuristicLab Evolutionary Algorithm for FINCH 2013 Proceedings of the 15th Annual Conference Companion on Genetic and Evolutionary Computation (GECCO '13), pp. 1727-1728, Amsterdam The Netherlands, 6-10 July   Inproceedings Design Tools and Techniques
    Abstract: We present a HeuristicLab plugin for FINCH. FINCH (Fertile Darwinian Bytecode Harvester) is a system designed to evolutionarily improve actual, extant software, which was not intentionally written for the purpose of serving as a GP representation in particular, nor for evolution in general. This is in contrast to existing work that uses restricted subsets of the Java bytecode instruction set as a representation language for individuals in genetic programming. The ability to evolve Java programs will hopefully lead to a valuable new tool in the software engineer's toolkit.
    BibTeX:
    @inproceedings{ElyasafOS13,
      author = {Achiya Elyasaf and Michael Orlov and Moshe Sipper},
      title = {A HeuristicLab Evolutionary Algorithm for FINCH},
      booktitle = {Proceedings of the 15th Annual Conference Companion on Genetic and Evolutionary Computation (GECCO '13)},
      publisher = {ACM},
      year = {2013},
      pages = {1727-1728},
      address = {Amsterdam, The Netherlands},
      month = {6-10 July},
      doi = {http://dx.doi.org/10.1145/2464576.2480786}
    }
    					
    2009.02.04 Brian S. Mitchell A Heuristic Search Approach to Solving the Software Clustering Problem 2002 , March School: Drexel University, USA   Phdthesis Distribution and Maintenance
    Abstract: Most interesting software systems are large and complex, and as a consequence, understanding their structure is difficult. One of the reasons for this complexity is that source code contains many entities (e.g., classes, modules) that depend on each other in intricate ways (e.g., procedure calls, variable references). Additionally, once a software engineer understands a system's structure, it is difficult to preserve this understanding, because the structure tends to change during maintenance. Research into the software clustering problem has proposed several approaches to deal with the above issue by defining techniques that partition the structure of a software system into subsystems (clusters). Subsystems are collections of source code resources that exhibit similar features, properties or behaviors. Because there are far fewer subsystems than modules, studying the subsystem structure is easier than trying to understand the system by analyzing the source code manually. Our research addresses several aspects of the software clustering problem. Specifically, we created several heuristic search algorithms that automatically cluster the source code into subsystems. We implemented our clustering algorithms in a tool named Bunch, and conducted extensive evaluation via case studies and experiments. Bunch also includes a variety of services to integrate user knowledge into the clustering process, and to help users navigate through complex system structures manually. Since the criteria used to decompose the structure of a software system into subsystems vary across different clustering algorithms, mechanisms that can compare different clustering results objectively are needed. To address this need we first examined two techniques that have been used to measure the similarity between system decompositions, and then created two new similarity measurements to overcome some of the problems that we discovered with the existing measurements. Similarity measurements enable the results of clustering algorithms to be compared to each other, and preferably to be compared to an agreed upon "benchmark" standard. Since benchmark standards are not documented for most systems, we created another tool, called CRAFT, that derives a "reference decomposition" automatically by exploiting similarities in the results produced by several different clustering algorithms.
    BibTeX:
    @phdthesis{Mitchell02,
      author = {Brian S. Mitchell},
      title = {A Heuristic Search Approach to Solving the Software Clustering Problem},
      school = {Drexel University, USA},
      year = {2002},
      month = {March},
      url = {http://www.cs.drexel.edu/~bmitchel/research/thesis.html}
    }
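    A Bunch-flavoured sketch of the heuristic clustering the thesis above studies: random-restart hill climbing over module-to-cluster assignments, scored by a TurboMQ-like modularization quality. The six-module graph is invented, and Bunch itself provides richer neighbourhoods and algorithms.

    import random

    # A small module dependency graph; the search looks for a partition
    # of modules into clusters maximizing modularization quality.
    EDGES = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
    N = 6

    def mq(assign):
        # TurboMQ-like score: each cluster trades intra-edges against
        # edges crossing its boundary; higher is better.
        total = 0.0
        for c in set(assign):
            intra = sum(1 for u, v in EDGES
                        if assign[u] == c and assign[v] == c)
            inter = sum(1 for u, v in EDGES
                        if (assign[u] == c) != (assign[v] == c))
            if intra:
                total += 2.0 * intra / (2.0 * intra + inter)
        return total

    def hill_climb(restarts=20, steps=300):
        best, best_mq = None, -1.0
        for _ in range(restarts):
            assign = [random.randrange(N) for _ in range(N)]
            cur = mq(assign)
            for _ in range(steps):
                m, c = random.randrange(N), random.randrange(N)
                old = assign[m]
                if c == old:
                    continue
                assign[m] = c              # neighbour: move one module
                new = mq(assign)
                if new >= cur:
                    cur = new              # accept improving/equal moves
                else:
                    assign[m] = old        # revert worsening moves
            if cur > best_mq:
                best, best_mq = assign[:], cur
        return best, best_mq

    print(hill_climb())   # expect clusters {0, 1, 2} and {3, 4, 5}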
    					
    2013.10.09 Andrea Arcuri & Lionel Briand A Hitchhiker's Guide To Statistical Tests For Assessing Randomized Algorithms In Software Engineering 2014 Software Testing, Verification and Reliability, Vol. 24(3), pp. 219-250, May   Article Testing and Debugging
    Abstract: Randomized algorithms are widely used to address many types of software engineering problems, especially in the area of software verification and validation with a strong emphasis on test automation. However, randomized algorithms are affected by chance and so require the use of appropriate statistical tests to be properly analysed in a sound manner. This paper features a systematic review regarding recent publications in 2009 and 2010 showing that, overall, empirical analyses involving randomized algorithms in software engineering tend to not properly account for the random nature of these algorithms. Many of the novel techniques presented clearly appear promising, but the lack of soundness in their empirical evaluations casts unfortunate doubts on their actual usefulness. In software engineering, although there are guidelines on how to carry out empirical analyses involving human subjects, those guidelines are not directly and fully applicable to randomized algorithms. Furthermore, many of the textbooks on statistical analysis are written from the viewpoints of social and natural sciences, which present different challenges from randomized algorithms. To address the questionable overall quality of the empirical analyses reported in the systematic review, this paper provides guidelines on how to carry out and properly analyse randomized algorithms applied to solve software engineering tasks, with a particular focus on software testing, which is by far the most frequent application area of randomized algorithms within software engineering.
    BibTeX:
    @article{ArcuriB14,
      author = {Andrea Arcuri and Lionel Briand},
      title = {A Hitchhiker's Guide To Statistical Tests For Assessing Randomized Algorithms In Software Engineering},
      journal = {Software Testing, Verification and Reliability},
      year = {2014},
      volume = {24},
      number = {3},
      pages = {219-250},
      month = {May},
      doi = {http://dx.doi.org/10.1002/stvr.1486}
    }
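    One concrete instance of the kind of analysis the guidelines above recommend: comparing two randomized algorithms over multiple runs with the Mann-Whitney U test plus the Vargha-Delaney A12 effect size. Assumes SciPy is available; the coverage numbers are illustrative.

    from scipy.stats import mannwhitneyu

    def a12(x, y):
        # Vargha-Delaney effect size: probability that a run of X beats
        # a run of Y, counting ties half (0.5 means no difference).
        gt = sum(1 for a in x for b in y if a > b)
        eq = sum(1 for a in x for b in y if a == b)
        return (gt + 0.5 * eq) / (len(x) * len(y))

    # Branch coverage from 10 runs of two randomized test generators
    # (illustrative numbers only).
    ga_runs = [0.81, 0.79, 0.84, 0.80, 0.83, 0.78, 0.82, 0.85, 0.80, 0.81]
    rnd_runs = [0.74, 0.77, 0.72, 0.75, 0.79, 0.73, 0.76, 0.74, 0.78, 0.75]

    stat, p = mannwhitneyu(ga_runs, rnd_runs, alternative="two-sided")
    print(f"U = {stat}, p = {p:.5f}, A12 = {a12(ga_runs, rnd_runs):.2f}")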
    					
    2016.02.12 Zachary P. Fry, Bryan Landau & Westley Weimer A Human Study of Patch Maintainability 2012 Proceedings of the 2012 International Symposium on Software Testing and Analysis (ISSTA '12), pp. 177-187, Minneapolis USA, 15-20 July   Inproceedings
    Abstract: Identifying and fixing defects is a crucial and expensive part of the software lifecycle. Measuring the quality of bug-fixing patches is a difficult task that affects both functional correctness and the future maintainability of the code base. Recent research interest in automatic patch generation makes a systematic understanding of patch maintainability and understandability even more critical. We present a human study involving over 150 participants, 32 real-world defects, and 40 distinct patches. In the study, humans perform tasks that demonstrate their understanding of the control flow, state, and maintainability aspects of code patches. As a baseline we use both human-written patches that were later reverted and also patches that have stood the test of time to ground our results. To address any potential lack of readability with machine-generated patches, we propose a system wherein such patches are augmented with synthesized, human-readable documentation that summarizes their effects and context. Our results show that machine-generated patches are slightly less maintainable than human-written ones, but that trend reverses when machine patches are augmented with our synthesized documentation. Finally, we examine the relationship between code features (such as the ratio of variable uses to assignments) with participants' abilities to complete the study tasks and thus explain a portion of the broad concept of patch quality.
    BibTeX:
    @inproceedings{FryLW12,
      author = {Zachary P. Fry and Bryan Landau and Westley Weimer},
      title = {A Human Study of Patch Maintainability},
      booktitle = {Proceedings of the 2012 International Symposium on Software Testing and Analysis (ISSTA '12)},
      publisher = {ACM},
      year = {2012},
      pages = {177-187},
      address = {Minneapolis, USA},
      month = {15-20 July},
      doi = {http://dx.doi.org/10.1145/2338965.2336775}
    }
    					
    2011.03.07 He Jiang, Jingyuan Zhang, Jifeng Xuan, Zhilei Ren & Yan Hu A Hybrid ACO Algorithm for the Next Release Problem 2010 Proceedings of the 2nd International Conference on Software Engineering and Data Mining (SEDM '10), pp. 166-171, Chengdu China, 23-25 June   Inproceedings Requirements/Specifications
    Abstract: In this paper, we propose a Hybrid Ant Colony Optimization algorithm (HACO) for the Next Release Problem (NRP). NRP, an NP-hard problem in requirements engineering, is to balance customer requests, resource constraints, and requirement dependencies by requirement selection. Inspired by the success of Ant Colony Optimization (ACO) algorithms in solving NP-hard problems, we design HACO to approximately solve the NRP. As in traditional ACO algorithms, multiple artificial ants are employed to construct new solutions; during the solution construction phase, both pheromone trails and neighborhood information are taken into account to determine the choices of every ant. In addition, a local search (first-found hill climbing) is incorporated into HACO to improve solution quality. Extensive experiments on typical NRP test instances show that HACO outperforms the existing algorithms (GRASP and simulated annealing) in terms of both solution quality and running time. (A minimal code sketch follows this entry.)
    BibTeX:
    @inproceedings{JiangZXRH10,
      author = {He Jiang and Jingyuan Zhang and Jifeng Xuan and Zhilei Ren and Yan Hu},
      title = {A Hybrid ACO Algorithm for the Next Release Problem},
      booktitle = {Proceedings of the 2nd International Conference on Software Engineering and Data Mining (SEDM '10)},
      publisher = {IEEE},
      year = {2010},
      pages = {166-171},
      address = {Chengdu, China},
      month = {23-25 June},
      url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5542931}
    }
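    A compact ACO-plus-local-search sketch echoing HACO's structure: ants build requirement selections guided by pheromone and a value/cost heuristic, a first-found hill climber polishes each solution, and pheromone evaporates and is deposited on the best. The NRP instance and all parameters are made up, and requirement dependencies are omitted.

    import random

    VALUE = [10, 6, 8, 4, 9, 3]    # summed customer weights per requirement
    COST = [5, 3, 4, 2, 6, 2]
    BUDGET = 12

    def profit(sel):
        return sum(v for s, v in zip(sel, VALUE) if s)

    def construct(pher):
        sel, cost = [0] * len(VALUE), 0
        for i in random.sample(range(len(VALUE)), len(VALUE)):
            if cost + COST[i] > BUDGET:
                continue
            # Choice probability mixes pheromone with the value/cost ratio.
            p = pher[i] * (VALUE[i] / COST[i])
            if random.random() < p / (p + 1.0):
                sel[i], cost = 1, cost + COST[i]
        return sel

    def hill_climb(sel):
        # First-found climbing: try flipping each bit once, keep gains.
        for i in range(len(sel)):
            sel2 = sel[:]
            sel2[i] ^= 1
            if (sum(c for s, c in zip(sel2, COST) if s) <= BUDGET
                    and profit(sel2) > profit(sel)):
                sel = sel2
        return sel

    def haco_like(ants=10, iters=50, rho=0.1):
        pher = [1.0] * len(VALUE)
        best = [0] * len(VALUE)
        for _ in range(iters):
            sols = [hill_climb(construct(pher)) for _ in range(ants)]
            it_best = max(sols, key=profit)
            if profit(it_best) > profit(best):
                best = it_best
            pher = [(1 - rho) * p + (0.5 if s else 0.0)
                    for p, s in zip(pher, best)]   # evaporate + deposit
        return best

    best = haco_like()
    print(best, profit(best))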
    					
    2011.05.19 Anne Martens, Danilo Ardagna, Heiko Koziolek, Raffaela Mirandola & Ralf Reussner A Hybrid Approach for Multi-attribute QoS Optimisation in Component Based Software Systems 2010 Proceedings of the 6th International Conference on the Quality of Software Architectures (QoSA '10), pp. 84-101, Prague Czech Republic, 23-25 June   Inproceedings Management
    Abstract: Design decisions for complex, component-based systems impact multiple quality of service (QoS) properties. Often, means to improve one quality property deteriorate another one. In this scenario, selecting a good solution with respect to a single quality attribute can lead to unacceptable results with respect to the other quality attributes. A promising way to deal with this problem is to exploit multi-objective optimization where the objectives represent different quality attributes. The aim of these techniques is to devise a set of solutions, each of which assures a trade-off between the conflicting qualities. To automate this task, this paper proposes a combined use of analytical optimization techniques and evolutionary algorithms to efficiently identify a significant set of design alternatives, from which an architecture that best fits the different quality objectives can be selected. The proposed approach can lead both to a reduction of development costs and to an improvement of the quality of the final system. We demonstrate the use of this approach on a simple case study.
    BibTeX:
    @inproceedings{MartensAKMR10,
      author = {Anne Martens and Danilo Ardagna and Heiko Koziolek and Raffaela Mirandola and Ralf Reussner},
      title = {A Hybrid Approach for Multi-attribute QoS Optimisation in Component Based Software Systems},
      booktitle = {Proceedings of the 6th International Conference on the Quality of Software Architectures (QoSA '10)},
      publisher = {Springer},
      year = {2010},
      pages = {84-101},
      address = {Prague, Czech Republic},
      month = {23-25 June},
      doi = {http://dx.doi.org/10.1007/978-3-642-13821-8_8}
    }
    					
    2012.08.22 Ricardo Britto, Pedro Santos Neto, Ricardo Rabelo, Werney Ayala & Thiago Soares A Hybrid Approach to Solve the Agile Team Allocation Problem 2012 Proceedings of IEEE Congress on Evolutionary Computation (CEC '12), pp. 1-8, Brisbane Australia, 10-15 June   Inproceedings Management
    Abstract: Successful team allocation is essential in an agile software development project. Agile team allocation is an NP-hard problem, since it comprises the allocation of self-organizing and cross-functional teams. Many researchers have driven efforts to apply Computational Intelligence techniques to solve this problem. This work presents a hybrid approach based on the NSGA-II multi-objective metaheuristic and Mamdani Fuzzy Inference Systems to solve the agile team allocation problem, together with an initial evaluation of its use in a real environment.
    BibTeX:
    @inproceedings{BrittoNRAS12,
      author = {Ricardo Britto and Pedro Santos Neto and Ricardo Rabelo and Werney Ayala and Thiago Soares},
      title = {A Hybrid Approach to Solve the Agile Team Allocation Problem},
      booktitle = {Proceedings of IEEE Congress on Evolutionary Computation (CEC '12)},
      publisher = {IEEE},
      year = {2012},
      pages = {1-8},
      address = {Brisbane, Australia},
      month = {10-15 June},
      doi = {http://dx.doi.org/10.1109/CEC.2012.6252999}
    }
    					
    2012.07.20 Zhihong Xu, Yunho Kim, Moonzoo Kim & Gregg Rothermel A Hybrid Directed Test Suite Augmentation Technique 2011 Proceedings of IEEE 22nd International Symposium on Software Reliability Engineering (ISSRE '11), pp. 150-159, Hiroshima Japan, 29 November-2 December   Inproceedings Testing and Debugging
    Abstract: Test suite augmentation techniques are used in regression testing to identify code elements affected by changes and to generate test cases to cover those elements. In previous work, we studied two approaches to augmentation, one using a concolic test case generation algorithm and one using a genetic test case generation algorithm. We found that these two approaches behaved quite differently in terms of their costs and their abilities to generate effective test cases for evolving programs. In this paper, we present a hybrid test suite augmentation technique that combines these two test case generation algorithms. We report the results of an empirical study that shows that this hybrid technique can be effective, but with varying degrees of costs, and we analyze our results further to provide suggestions for reducing costs.
    BibTeX:
    @inproceedings{XuKKR11,
      author = {Zhihong Xu and Yunho Kim and Moonzoo Kim and Gregg Rothermel},
      title = {A Hybrid Directed Test Suite Augmentation Technique},
      booktitle = {Proceedings of IEEE 22nd International Symposium on Software Reliability Engineering (ISSRE '11)},
      publisher = {IEEE},
      year = {2011},
      pages = {150-159},
      address = {Hiroshima, Japan},
      month = {29 November-2 December},
      doi = {http://dx.doi.org/10.1109/ISSRE.2011.21}
    }
    					
    2008.07.27 Hsiu-Chi Wang A Hybrid Genetic Algorithm for Automatic Test Data Generation 2006 , Taiwan China, July School: National Sun Yat-sen University, Department of Information Management   Mastersthesis Testing and Debugging
    Abstract: Automatic test data generation is a hot topic in recent software testing research. Various techniques have been proposed with different emphases, and most of the methods are based on Genetic Algorithms (GA). However, whether a GA is the best metaheuristic method for this problem remains unclear. In this paper, we choose another approach that arms a GA with an intensive local searcher (the so-called Memetic Algorithm (MA), according to recent terminology). The idea of incorporating a local searcher is based on observations from many real-world programs. The results turn out to outperform many other known metaheuristic methods so far. We argue the need for local search in software testing in the discussion of the paper. (A minimal code sketch follows this entry.)
    BibTeX:
    @mastersthesis{Wang06,
      author = {Hsiu-Chi Wang},
      title = {A Hybrid Genetic Algorithm for Automatic Test Data Generation},
      school = {National Sun Yat-sen University, Department of Information Management},
      year = {2006},
      address = {Taiwan, China},
      month = {July},
      url = {http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0713106-201239}
    }
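    A memetic skeleton matching the GA-plus-intensive-local-search recipe above: every offspring is refined by first-improvement bit-flip climbing before it joins the population. The one-max-style fitness merely stands in for a real test-adequacy measure.

    import random

    def fitness(bits):
        # Toy objective standing in for a coverage-driven fitness.
        ones = sum(bits)
        return ones + (10 if ones == len(bits) else 0)

    def local_search(bits):
        # First-improvement bit-flip climbing: the intensive local
        # searcher that makes the GA "memetic".
        improved = True
        while improved:
            improved = False
            for i in range(len(bits)):
                before = fitness(bits)
                bits[i] ^= 1
                if fitness(bits) > before:
                    improved = True        # keep the flip
                else:
                    bits[i] ^= 1           # revert
        return bits

    def memetic(pop_size=12, gens=30, n=16):
        pop = [local_search([random.randint(0, 1) for _ in range(n)])
               for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=fitness, reverse=True)
            nxt = pop[:4]
            while len(nxt) < pop_size:
                a, b = random.sample(pop[:8], 2)
                cut = random.randrange(1, n)          # one-point crossover
                child = a[:cut] + b[cut:]
                if random.random() < 0.2:
                    child[random.randrange(n)] ^= 1   # mutation
                nxt.append(local_search(child))       # memetic refinement
            pop = nxt
        return max(pop, key=fitness)

    print(memetic())   # converges to the all-ones string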
    					
    2009.05.14 Fernando Netto, Márcio de Oliveira Barros & Adriana C.F. Alvim A Hybrid Heuristic Approach for Scheduling Bug Fix Tasks to Software 2009 Proceedings of the 1st International Symposium on Search Based Software Engineering (SSBSE '09), Cumberland Lodge Windsor UK, 13-15 May   Inproceedings Management
    Abstract: Large software projects usually maintain bug repositories where both developers and end users can report and track the resolution of software defects. These defects must be fixed and new versions of the software incorporating the patches that solve them must be released. The project manager must schedule a set of error correction tasks with different priorities in order to minimize the time required to make the next release available and guarantee that the more important issues are fixed. In this paper, we propose a hybrid heuristic approach to scheduling error correction tasks that combines a genetic algorithm and a constructive heuristic. We believe that our method can propose schedules for bug fix tasks with lower cost when compared to an ad-hoc schedule for non-trivial projects.
    BibTeX:
    @inproceedings{NettoBA09,
      author = {Fernando Netto and Márcio de Oliveira Barros and Adriana C. F. Alvim},
      title = {A Hybrid Heuristic Approach for Scheduling Bug Fix Tasks to Software},
      booktitle = {Proceedings of the 1st International Symposium on Search Based Software Engineering (SSBSE '09)},
      publisher = {IEEE},
      year = {2009},
      address = {Cumberland Lodge, Windsor, UK},
      month = {13-15 May},
      url = {http://www.ssbse.org/2009/fa/ssbse2009_submission_28.pdf}
    }
    					
    2009.07.30 Danielle Azar, Haidar M. Harmanani & Rita Korkmaz A Hybrid Heuristic Approach to Optimize Rule-Based Software Quality Estimation Models 2009 Information and Software Technology, Vol. 51(9), pp. 1365-1376, September   Article Management
    Abstract: Software quality is defined as the degree to which a software component or system meets specified requirements and specifications. Assessing software quality in the early stages of design and development is crucial as it helps reduce effort, time and money. However, the task is difficult since most software quality characteristics (such as maintainability, reliability and reusability) cannot be directly and objectively measured before the software product is deployed and used for a certain period of time. Nonetheless, these software quality characteristics can be predicted from other measurable software quality attributes such as complexity and inheritance. Many metrics have been proposed for this purpose. In this context, we speak of estimating software quality characteristics from measurable attributes. For this purpose, software quality estimation models have been widely used. These take different forms: statistical models, rule-based models and decision trees. However, data used to build such models is scarce in the domain of software quality. As a result, the accuracy of the built estimation models deteriorates when they are used to predict the quality of new software components. In this paper, we propose a search-based software engineering approach to improve the prediction accuracy of software quality estimation models by adapting them to new unseen software products. The method has been implemented and favorable result comparisons are reported in this work.
    BibTeX:
    @article{AzarHK09,
      author = {Danielle Azar and Haidar M. Harmanani and Rita Korkmaz},
      title = {A Hybrid Heuristic Approach to Optimize Rule-Based Software Quality Estimation Models},
      journal = {Information and Software Technology},
      year = {2009},
      volume = {51},
      number = {9},
      pages = {1365-1376},
      month = {September},
      doi = {http://dx.doi.org/10.1016/j.infsof.2009.05.003}
    }
    					
    2010.05.24 Ahmed Al-Emran, Dietmar Pfahl & Günther Ruhe A Hybrid Method for Advanced Decision Support in Strategic Product Release Planning 2010 (088/2010), March   Techreport Requirements/Specifications
    Abstract: DECIDERelease is a hybrid method that combines simulation-based robustness analysis and multi-criteria decision analysis with the existing strategic release planning approach EVOLVE*. The purpose of DECIDERelease is to qualify decision-making by pro-actively exploring the robustness of the operational plans of upcoming releases. Based on this analysis, the strategic release plan that is the most robust against assumed changes in planning parameters at the operational level can be selected. In this technical report we present the computational details of the robustness analysis and multi-criteria decision analysis with a case example.
    BibTeX:
    @techreport{Al-EmranPR10,
      author = {Ahmed Al-Emran and Dietmar Pfahl and Günther Ruhe},
      title = {A Hybrid Method for Advanced Decision Support in Strategic Product Release Planning},
      year = {2010},
      number = {088/2010},
      month = {March},
      url = {http://people.ucalgary.ca/~aalemran/re2010/SEDS-TR%20088_2010.pdf}
    }
    					
    2013.08.01 Xinye Cai & Ou Wei A Hybrid of Decomposition and Domination based Evolutionary Algorithm for Multi-objective Software Next Release Problem 2013 Proceedings of the 10th IEEE International Conference on Control and Automation (ICCA '13), pp. 412-417, Hangzhou China, 12-14 June   Inproceedings Requirements/Specifications
    Abstract: In the software industry, one common problem that companies face is deciding which requirements should be implemented in the next release of the software. From a more realistic perspective, multiple objectives, such as cost and customer satisfaction, need to be considered when making the decision; thus multi-objective formulations of the NRP have become increasingly popular. This paper studies various multi-objective evolutionary algorithms (MOEAs) to address the multi-objective NRP (MONRP). A novel multi-objective algorithm, MOEA/DD, is proposed to obtain trade-off solutions when deciding which requirements to implement in MONRP. The proposed MOEA/DD addresses several important issues of decomposition-based MOEAs in the context of MONRP by combining them with desirable features of domination-based MOEAs. A density-based mechanism is proposed to switch between decomposition and domination archives when constructing the subpopulations of subproblems. Our experimental results suggest the proposed approach outperforms state-of-the-art domination- and decomposition-based MOEAs.
    BibTeX:
    @inproceedings{CaiW13,
      author = {Xinye Cai and Ou Wei},
      title = {A Hybrid of Decomposition and Domination based Evolutionary Algorithm for Multi-objective Software Next Release Problem},
      booktitle = {Proceedings of the 10th IEEE International Conference on Control and Automation (ICCA '13)},
      publisher = {IEEE},
      year = {2013},
      pages = {412-417},
      address = {Hangzhou, China},
      month = {12-14 June},
      doi = {http://dx.doi.org/10.1109/ICCA.2013.6565143}
    }
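    To make the bi-objective formulation above concrete, the following minimal Python sketch (illustrative only, not the authors' MOEA/DD; the costs, customer weights and wanted-requirement sets are invented) enumerates requirement subsets and keeps the Pareto front that trades total cost against weighted customer satisfaction:

      import itertools

      COSTS = [4, 2, 7, 3, 5]              # hypothetical cost per requirement
      WEIGHTS = [3, 1, 2]                  # hypothetical importance per customer
      WANTS = [[0, 2], [1, 4], [0, 1, 3]]  # requirements each customer asked for

      def objectives(sol):
          cost = sum(c for c, bit in zip(COSTS, sol) if bit)
          sat = sum(w * sum(sol[r] for r in want) / len(want)
                    for w, want in zip(WEIGHTS, WANTS))
          return cost, sat

      def dominates(a, b):
          # cost is minimised, satisfaction maximised; a must differ from b
          return a != b and a[0] <= b[0] and a[1] >= b[1]

      front = []
      for sol in itertools.product((0, 1), repeat=len(COSTS)):
          obj = objectives(sol)
          if not any(dominates(objectives(f), obj) for f in front):
              front = [f for f in front if not dominates(obj, objectives(f))]
              front.append(sol)

      for sol in sorted(front, key=lambda s: objectives(s)[0]):
          print(sol, objectives(sol))

    A MOEA such as MOEA/DD replaces this exhaustive enumeration with an evolutionary search over the same objective space when the number of requirements makes enumeration infeasible.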
    					
    2012.03.05 Dharmalingam Jeya Mala, Elizabeth Ruby & Vasudev Mohan A Hybrid Test Optimization Framework - Coupling Genetic Algorithm with Local Search Technique 2010 Computing and Informatics, Vol. 29(1), pp. 133-164   Article Testing and Debugging
    Abstract: The quality of test cases is determined by their ability to uncover as many errors as possible in the software code. In our approach, we applied a Hybrid Genetic Algorithm (HGA) to improve the quality of test cases. This improvement is achieved by analyzing both the mutation score and the path coverage of each test case. Our approach selects effective test cases that have higher mutation score and path coverage from a near-infinite number of test cases. Hence, the final test set size is reduced, which in turn reduces the total time needed for the testing activity. In our proposed framework, we included two improvement heuristics, namely RemoveTop and LocalBest, to achieve a near globally optimal solution. Finally, we compared the efficiency of the test cases generated by our approach against existing test case optimization approaches such as the Simple Genetic Algorithm (SGA) and the Bacteriologic Algorithm (BA), and concluded that our approach generates better quality test cases.
    BibTeX:
    @article{MalaRM10,
      author = {Dharmalingam Jeya Mala and Elizabeth Ruby and Vasudev Mohan},
      title = {A Hybrid Test Optimization Framework - Coupling Genetic Algorithm with Local Search Technique},
      journal = {Computing and Informatics},
      year = {2010},
      volume = {29},
      number = {1},
      pages = {133-164},
      url = {http://www.cai.sk/ojs/index.php/cai/article/view/77/62}
    }
    					
    2014.05.28 Dharmalingam Jeya Mala, Sabarinathan K. & Balamurugan S. A Hybrid Test Optimization Framework using Memetic Algorithm with Cuckoo Flocking Based Search Approach 2014 Proceedings of the 7th International Workshop on Search-Based Software Testing (SBST '14), Hyderabad India, 2-3 June   Inproceedings
    Abstract: The testing process of industrial-strength applications usually takes considerable time to ensure that all components are rigorously tested for failure-free operation upon delivery. This research work proposes a hybrid optimization approach that combines a population-based multi-objective optimization approach, namely a Memetic Algorithm, with Cuckoo Search (MA-CK) to generate an optimal number of test cases that achieve the specified test adequacy criteria based on mutation score and branch coverage. Further, GA-, HGA- and MA-based heuristic algorithms are empirically evaluated, and it is shown that the proposed MA with cuckoo-search-based optimization provides an optimal solution.
    BibTeX:
    @inproceedings{DharmalingamKS14,
      author = {Dharmalingam Jeya Mala and Sabarinathan K. and Balamurugan S.},
      title = {A Hybrid Test Optimization Framework using Memetic Algorithm with Cuckoo Flocking Based Search Approach},
      booktitle = {Proceedings of the 7th International Workshop on Search-Based Software Testing (SBST '14)},
      publisher = {ACM},
      year = {2014},
      address = {Hyderabad, India},
      month = {2-3 June},
      doi = {http://dx.doi.org/10.1145/2593833.2593843}
    }
    					
    2015.08.07 Giovani Guizzo, Gian Mauricio Fritsche, Silvia Regina Vergilio & Aurora Trinidad Ramirez Pozo A Hyper-Heuristic for the Multi-Objective Integration and Test Order Problem 2015 Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15), pp. 1343-1350, Madrid Spain, 11-15 July   Inproceedings
    Abstract: Multi-objective evolutionary algorithms (MOEAs) have been efficiently applied to Search-Based Software Engineering (SBSE) problems. However, skilled software engineers waste significant effort designing such algorithms for a particular problem, adapting them, selecting operators and configuring parameters. Hyper-heuristics can help in these tasks by dynamically selecting or creating heuristics. Despite such advantages, we observe a lack of work on this subject in the SBSE field. Considering this fact, this work introduces HITO, a Hyper-heuristic for the Integration and Test Order Problem. It includes a set of well-defined steps and is based on two selection functions (Choice Function and Multi-Armed Bandit) to select the best low-level heuristic (a combination of mutation and crossover operators) in each mating. To perform the selection, a quality measure is proposed to assess the performance of low-level heuristics throughout the evolutionary process. HITO was implemented using NSGA-II and evaluated on the integration and test order problem in seven systems. The introduced hyper-heuristic obtained the best results for all systems when compared to a traditional algorithm.
    BibTeX:
    @inproceedings{GuizzoFVP15,
      author = {Giovani Guizzo and Gian Mauricio Fritsche and Silvia Regina Vergilio and Aurora Trinidad Ramirez Pozo},
      title = {A Hyper-Heuristic for the Multi-Objective Integration and Test Order Problem},
      booktitle = {Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15)},
      publisher = {ACM},
      year = {2015},
      pages = {1343-1350},
      address = {Madrid, Spain},
      month = {11-15 July},
      doi = {http://dx.doi.org/10.1145/2739480.2754725}
    }
    					
    2009.05.25 Karnig Derderian, Mercedes G. Merayo, Robert M. Hierons & Manuel Núñez Aiding Test Case Generation in Temporally Constrained State based Systems using Genetic Algorithms 2009 Proceedings of the 10th International Work-Conference on Artificial Neural Networks (IWANN '09)   Inproceedings Testing and Debugging
    Abstract: Generating test data is computationally expensive. This paper improves a framework that addresses this issue by representing the test data generation problem as an optimisation problem and uses heuristics to help generate test cases. The paper considers the temporal constraints and behaviour of a certain class of (timed) finite state machines. A very simple fitness function is defined that can be used with several evolutionary search techniques and automated test case generation tools. An extended version of this paper, including a case study, can be found in [1]. Research supported by the Spanish projects WEST/FAST (TIN2006-15578-C02-01) and MATES (CCG08-UCM/TIC-4124).
    BibTeX:
    @inproceedings{DerderianMHN09,
      author = {Karnig Derderian and Mercedes G. Merayo and Robert M. Hierons and Manuel Núñez},
      title = {Aiding Test Case Generation in Temporally Constrained State based Systems using Genetic Algorithms},
      booktitle = {Proceedings of the 10th International Work-Conference on Artificial Neural Networks (IWANN '09)},
      year = {2009},
      doi = {http://dx.doi.org/10.1007/978-3-642-02478-8_41}
    }
    					
    2009.03.31 Massimiliano Di Penta, Markus Neteler, Giuliano Antoniol & Ettore Merlo A Language-Independent Software Renovation Framework 2005 Journal of Systems and Software, Vol. 77(3), pp. 225-240, September   Article Distribution and Maintenance
    Abstract: One of the undesired effects of software evolution is the proliferation of unused components, which are not used by any application. As a consequence, the size of binaries and libraries tends to grow and system maintainability tends to decrease. At the same time, a major trend of today's software market is the porting of applications on hand-held devices or, in general, on devices which have a limited amount of available resources. Refactoring and, in particular, the miniaturization of libraries and applications are therefore necessary. We propose a Software Renovation Framework (SRF) and a toolkit covering several aspects of software renovation, such as removing unused objects and code clones, and refactoring existing libraries into smaller more cohesive ones. Refactoring has been implemented in the SRF using a hybrid approach based on hierarchical clustering, on genetic algorithms and hill climbing, also taking into account the developers' feedback. The SRF aims to monitor software system quality in terms of the identified affecting factors, and to perform renovation activities when necessary. Most of the framework activities are language-independent, do not require any kind of source code parsing, and rely on object module analysis. The SRF has been applied to GRASS, which is a large open source Geographical Information System of about one million LOCs in size. It has significantly improved the software organization, has reduced by about 50% the average number of objects linked by each application, and has consequently also reduced the applications' memory requirements.
    BibTeX:
    @article{DiPentaNAM05,
      author = {Massimiliano Di Penta and Markus Neteler and Giuliano Antoniol and Ettore Merlo},
      title = {A Language-Independent Software Renovation Framework},
      journal = {Journal of Systems and Software},
      year = {2005},
      volume = {77},
      number = {3},
      pages = {225-240},
      month = {September},
      doi = {http://dx.doi.org/10.1016/j.jss.2004.03.033}
    }
    					
    2009.02.26 Gerardo Canfora, Massimiliano Di Penta, Raffaele Esposito & Maria Luisa Villani A Lightweight Approach for QoS-Aware Service Composition 2004 Proceedings of the 2nd International Conference on Service Oriented Computing (ICSOC '04), New York USA, 15-18 November   Inproceedings Design Tools and Techniques
    Abstract: One of the most challenging issues of service-centric software engineering is the QoS-aware composition of services. The aim is to search for the optimal set of services that, composed to create a new service, result in the best QoS, under the user's or service designer's constraints. During service execution, re-planning such a composition may be needed whenever deviations from the QoS estimates occur. Both QoS-aware composition and re-planning may need to be performed in a short time, especially for interactive or real-time systems. This paper proposes a lightweight approach for QoS-aware service composition that uses genetic algorithms for the optimal QoS estimation. The paper also presents an algorithm for early triggering of service re-planning: if required, re-planning is triggered as soon as possible during service execution. The performance of our approach is evaluated by means of numerical simulation.
    BibTeX:
    @inproceedings{CanforaDEV04,
      author = {Gerardo Canfora and Massimiliano Di Penta and Raffaele Esposito and Maria Luisa Villani},
      title = {A Lightweight Approach for QoS-Aware Service Composition},
      booktitle = {Proceedings of the 2nd International Conference on Service Oriented Computing (ICSOC '04)},
      publisher = {ACM},
      year = {2004},
      address = {New York, USA},
      month = {15-18 November},
      url = {http://www.rcost.unisannio.it/mdipenta/papers/tr-qos.pdf}
    }
    					
    2007.12.02 Nicolas Gold, Mark Harman, Zheng Li & Kiarash Mahdavi Allowing Overlapping Boundaries in Source Code using a Search Based Approach to Concept Binding 2006 Proceedings of the 22nd IEEE International Conference on Software Maintenance (ICSM '06), pp. 310-319, Philadelphia USA, 24-27 September   Inproceedings Distribution and Maintenance
    Abstract: One approach to supporting program comprehension involves binding concepts to source code. Previously proposed approaches to concept binding have enforced non-overlapping boundaries. However, real-world programs may contain overlapping concepts. This paper presents techniques to allow boundary overlap in the binding of concepts to source code. In order to allow boundaries to overlap, the concept binding problem is reformulated as a search problem. It is shown that the search space of overlapping concept bindings is exponentially large, indicating the suitability of sampling-based search algorithms. Hill climbing and genetic algorithms are introduced for sampling the space. The paper reports on experiments that apply these algorithms to 21 COBOL II programs taken from the commercial financial services sector. The results show that the genetic algorithm produces significantly better solutions than both the hill climber and random search.
    BibTeX:
    @inproceedings{GoldHLM06,
      author = {Nicolas Gold and Mark Harman and Zheng Li and Kiarash Mahdavi},
      title = {Allowing Overlapping Boundaries in Source Code using a Search Based Approach to Concept Binding},
      booktitle = {Proceedings of the 22nd IEEE International Conference on Software Maintenance (ICSM '06)},
      publisher = {IEEE},
      year = {2006},
      pages = {310-319},
      address = {Philadelphia, USA},
      month = {24-27 September},
      doi = {http://dx.doi.org/10.1109/ICSM.2006.10}
    }
    					
    2010.06.04 Mark Harman, Yue Jia & William B. Langdon A Manifesto for Higher Order Mutation Testing 2010 Proceedings of the 5th International Workshop on Mutation Analysis (MUTATION '10), pp. 80-89, Paris France, 6 April   Inproceedings Testing and Debugging
    Abstract: We argue that higher order mutants are potentially better able to simulate real faults and to reveal insights into bugs than the restricted class of first order mutants. The Mutation Testing community has previously shied away from Higher Order Mutation Testing believing it to be too expensive and therefore impractical. However, this paper argues that Search Based Software Engineering can provide a solution to this apparent problem, citing results from recent work on search based optimization techniques for constructing higher order mutants. We also present a research agenda for the development of Higher Order Mutation Testing.
    BibTeX:
    @inproceedings{HarmanJL10,
      author = {Mark Harman and Yue Jia and William B. Langdon},
      title = {A Manifesto for Higher Order Mutation Testing},
      booktitle = {Proceedings of the 5th International Workshop on Mutation Analysis (MUTATION '10)},
      publisher = {IEEE},
      year = {2010},
      pages = {80-89},
      address = {Paris, France},
      month = {6 April},
      doi = {http://dx.doi.org/10.1109/ICSTW.2010.13}
    }
    					
    2009.07.03 Usman Farooq & Chiou Peng Lam A Max-Min Multiobjective Technique to Optimize Model Based Test Suite 2009 Proceedings of the 10th ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD '09), pp. 569-574, Los Alamitos CA USA   Inproceedings Testing and Debugging
    Abstract: Generally, quality software production seeks timely delivery with higher productivity at lower cost. Redundancy in a test suite raises the execution cost and wastes scarce project resources. In model-based testing, the testing process starts in earlier software development phases and enables fault detection in those earlier phases. The redundancy in test suites generated from models can be detected earlier as well and removed prior to execution. The paper presents a novel max-min multiobjective technique incorporated into a test suite optimization framework to find a better trade-off between intrinsically conflicting goals. For illustration, two objectives, i.e. the coverage and the size of a test suite, were used; however, the technique can be extended to more objectives. The study is concerned with model-based testing and reports the results of an empirical analysis on four UML-based synthetic as well as industrial Activity Diagram models.
    BibTeX:
    @inproceedings{FarooqL09,
      author = {Usman Farooq and Chiou Peng Lam},
      title = {A Max-Min Multiobjective Technique to Optimize Model Based Test Suite},
      booktitle = {Proceedings of the 10th ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD '09)},
      publisher = {IEEE},
      year = {2009},
      volume = {0},
      pages = {569-574},
      address = {Los Alamitos, CA, USA},
      doi = {http://dx.doi.org/10.1109/SNPD.2009.33}
    }
    					
    2010.09.24 Zai Wang, Ke Tang & Xin Yao A Memetic Algorithm for Multi-Level Redundancy Allocation 2010 IEEE Transactions on Reliability, Vol. 59(4), pp. 754-765, December   Article
    Abstract: Redundancy allocation problems (RAPs) have attracted much attention for the past thirty years due to their wide application in improving the reliability of various engineering systems. Because RAP is NP-hard and exact methods are only applicable to small instances, various heuristic and meta-heuristic methods have been proposed to solve it. In the literature, most studies on RAPs have been conducted for single-level systems. However, real-world engineering systems usually contain multiple levels. In this paper, the RAP on multi-level systems is investigated. A novel memetic algorithm (MA) is proposed to solve this problem. Two genetic operators, namely breadth-first crossover and breadth-first mutation, and a local search method are designed for the MA. Comprehensive experimental studies have shown that the proposed MA significantly outperformed the state-of-the-art approach on two representative examples.
    BibTeX:
    @article{WangTY10b,
      author = {Zai Wang and Ke Tang and Xin Yao},
      title = {A Memetic Algorithm for Multi-Level Redundancy Allocation},
      journal = {IEEE Transactions on Reliability},
      year = {2010},
      volume = {59},
      number = {4},
      pages = {754-765},
      month = {December},
      doi = {http://dx.doi.org/10.1109/TR.2010.2055927}
    }
    					
    2007.12.02 Andrea Arcuri & Xin Yao A Memetic Algorithm for Test Data Generation of Object-Oriented Software 2007 Proceedings of the 2007 IEEE Congress on Evolutionary Computation (CEC '07), pp. 2048-2055, Singapore, 25-28 September   Inproceedings Testing and Debugging
    Abstract: Generating test data for Object-Oriented (OO) software is a hard task. Little work has been done on the subject, and a lot of open problems still need to be investigated. In this paper we focus on container classes. They are used in almost every type of software, hence their reliability is of utmost importance. We present novel techniques to generate test data for container classes in an automatic way. A new representation with novel search operators is described and tested. A way to reduce the search space for OO software is presented. This is achieved by dynamically eliminating the functions that cannot give any further help from the search. Besides, the problem of applying the branch distances of disjunctions and conjunctions to OO software is solved. Finally, Hill Climbing, Genetic Algorithms and Memetic Algorithms are used and compared. Our empirical case study shows that our Memetic Algorithm outperforms the other algorithms.
    BibTeX:
    @inproceedings{ArcuriY07b,
      author = {Andrea Arcuri and Xin Yao},
      title = {A Memetic Algorithm for Test Data Generation of Object-Oriented Software},
      booktitle = {Proceedings of the 2007 IEEE Congress on Evolutionary Computation (CEC '07)},
      publisher = {IEEE},
      year = {2007},
      pages = {2048-2055},
      address = {Singapore},
      month = {25-28 September},
      doi = {http://dx.doi.org/10.1109/CEC.2007.4424725}
    }
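    The memetic scheme central to the entry above (a Genetic Algorithm whose offspring are refined by an intensive local search) can be sketched briefly. The Python example below is a toy illustration under invented assumptions, not the authors' implementation: the fitness is the classic branch distance |x - 42|, standing in for how far an input x is from taking a target branch such as `if x == 42`:

      import random

      def fitness(x):
          return abs(x - 42)          # 0 means the target branch is taken

      def local_search(x):
          # hill climb in unit steps until no neighbour improves
          while True:
              best = min((x - 1, x + 1), key=fitness)
              if fitness(best) >= fitness(x):
                  return x
              x = best

      def memetic(pop_size=20, generations=50):
          pop = [random.randint(-1000, 1000) for _ in range(pop_size)]
          for _ in range(generations):
              pop.sort(key=fitness)
              parents = pop[:pop_size // 2]          # truncation selection
              children = []
              while len(children) < pop_size - len(parents):
                  a, b = random.sample(parents, 2)
                  child = (a + b) // 2 + random.randint(-5, 5)  # crossover + mutation
                  children.append(local_search(child))          # the memetic step
              pop = parents + children
              if fitness(min(pop, key=fitness)) == 0:
                  break
          return min(pop, key=fitness)

      print(memetic())   # typically prints 42

    The memetic step is what distinguishes this from a plain GA: every offspring is pushed to a local optimum before it competes for survival.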
    					
    2014.09.02 Gordon Fraser, Andrea Arcuri & Phil McMinn A Memetic Algorithm for Whole Test Suite Generation 2015 Journal of Systems and Software, Vol. 103, pp. 311-327, May   Article Testing and Debugging
    Abstract: The generation of unit-level test cases for structural code coverage is a task well-suited to Genetic Algorithms. Method call sequences must be created that construct objects, put them into the right state and then execute uncovered code. However, the generation of primitive values, such as integers and doubles, characters that appear in strings, and arrays of primitive values, are not so straightforward. Often, small local changes are required to drive the value toward the one needed to execute some target structure. However, global searches like Genetic Algorithms tend to make larger changes that are not concentrated on any particular aspect of a test case. In this paper, we extend the Genetic Algorithm behind the EvoSuite test generation tool into a Memetic Algorithm, by equipping it with several local search operators. These operators are designed to efficiently optimize primitive values and other aspects of a test suite that allow the search for test cases to function more effectively. We evaluate our operators using a rigorous experimental methodology on over 12,000 Java classes, comprising open source classes of various different kinds, including numerical applications and text processors. Our study shows that increases in branch coverage of up to 53% are possible for an individual class in practice.
    BibTeX:
    @article{FraserAM15,
      author = {Gordon Fraser and Andrea Arcuri and Phil McMinn},
      title = {A Memetic Algorithm for Whole Test Suite Generation},
      journal = {Journal of Systems and Software},
      year = {2015},
      volume = {103},
      pages = {311-327},
      month = {May},
      doi = {http://dx.doi.org/10.1016/j.jss.2014.05.032}
    }
    					
    2014.05.28 Ivana Ognjanović, Bardia Mohabbati, Dragan Gašević, Ebrahim Bagheri & Marko Bošković A Metaheuristic Approach for the Configuration of Business Process Families 2012 Proceedings of the 9th International Conference on Services Computing (SCC '12), pp. 25-32, Honolulu HI USA, 24-29 June   Inproceedings
    Abstract: Business process families provide an over-arching representation of the possible business processes of a target domain. They are defined by capturing the similarities and differences among the possible business processes of the target domain. To realize a business process family as a concrete business process model, the variation points of the business process family need to be bound. The decision on how to bind these variation points boils down to the stakeholders' requirements and needs. Given specific requirements from the stakeholders, the business process family can be configured. This paper formally introduces and empirically evaluates a framework called ConfBPFM that utilizes standard techniques for identifying stakeholders' quality requirements and employs a metaheuristic search algorithm (i.e., Genetic Algorithms) to optimally configure a business process family.
    BibTeX:
    @inproceedings{OgnjanovicMGBB12,
      author = {Ivana Ognjanović and Bardia Mohabbati and Dragan Gašević and Ebrahim Bagheri and Marko Bošković},
      title = {A Metaheuristic Approach for the Configuration of Business Process Families},
      booktitle = {Proceedings of the 9th International Conference on Services Computing (SCC '12)},
      publisher = {IEEE},
      year = {2012},
      pages = {25-32},
      address = {Honolulu, HI, USA},
      month = {24-29 June},
      doi = {http://dx.doi.org/10.1109/SCC.2012.6}
    }
    					
    2011.07.20 Sebastian Bauersfeld, Stefan Wappler & Joachim Wegener A Metaheuristic Approach to Test Sequence Generation for Applications with a GUI 2011 Proceedings of the 3rd International Symposium on Search Based Software Engineering (SSBSE '11), Vol. 6956, pp. 173-187, Szeged Hungary, 10-12 September   Inproceedings Testing and Debugging
    Abstract: As the majority of today's software applications employ a graphical user interface (GUI), it is an important though challenging task to thoroughly test those interfaces. Unfortunately, few tools exist to help automate the testing process. Despite their well-known deficits, scripting and capture-and-replay applications remain among the most common tools in the industry. In this paper we present an approach in which we treat the generation of test sequences for GUIs as an optimization problem. We employ ant colony optimization and a relatively new metric called MCT (Maximum Call Tree) to search for fault-sensitive test cases. We implemented a test environment for Java SWT applications and present first results of our experiments with a graphical editor as our main application under test.
    BibTeX:
    @inproceedings{BauersfeldWW11,
      author = {Sebastian Bauersfeld and Stefan Wappler and Joachim Wegener},
      title = {A Metaheuristic Approach to Test Sequence Generation for Applications with a GUI},
      booktitle = {Proceedings of the 3rd International Symposium on Search Based Software Engineering (SSBSE '11)},
      publisher = {Springer},
      year = {2011},
      volume = {6956},
      pages = {173-187},
      address = {Szeged, Hungary},
      month = {10-12 September},
      doi = {http://dx.doi.org/10.1007/978-3-642-23716-4_17}
    }
    					
    2009.03.31 Khaled El-Fakih, Hirozumi Yamaguchi & Gregor v. Bochmann A Method and a Genetic Algorithm for Deriving Protocols for Distributed Applications with Minimum Communication Cost 1999 Proceedings of the 11th International Conference on Parallel and Distributed Computing and Systems (PDCS '99), Boston USA, 3-6 November   Inproceedings Network Protocols
    Abstract: We consider a set of rules for deriving the specification of the protocol of a distributed system from a given specification of services, and define and formulate the message exchange optimization problem using a 0-1 integer programming model. This problem is about determining the minimum number of messages to be exchanged between the physical locations of the distributed system, in order to reduce the communication cost. We then present a genetic algorithm for solving this problem. The main advantage of this algorithm, in comparison with exact algorithms, is that its complexity remains manageable for realistic large specifications. The experimental results show that the minimum number of messages to be exchanged is found in a very reasonable time.
    BibTeX:
    @inproceedings{ElFakihYB99,
      author = {Khaled El-Fakih and Hirozumi Yamaguchi and Gregor v. Bochmann},
      title = {A Method and a Genetic Algorithm for Deriving Protocols for Distributed Applications with Minimum Communication Cost},
      booktitle = {Proceedings of the 11th International Conference on Parallel and Distributed Computing and Systems (PDCS '99)},
      year = {1999},
      address = {Boston, USA},
      month = {3-6 November},
      url = {http://www-higashi.ist.osaka-u.ac.jp/~h-yamagu/resource/pdcs99.pdf}
    }
    					
    2017.07.07 Ali Aburas & Alex Groce A Method Dependence Relations Guided Genetic Algorithm 2016 Proceedings of the 8th International Symposium on Search Based Software Engineering (SSBSE '16), pp. 267-273, Raleigh NC USA, 8-10 October   Inproceedings
    Abstract: Search based test generation approaches have already been shown to be effective for generating test data that achieves high code coverage for object-oriented programs. In this paper, we present a new search-based approach, called GAMDR, that uses a genetic algorithm (GA) to generate test data. GAMDR exploits method dependence relations (MDR) to narrow down the search space and direct mutation operators to the most beneficial regions for achieving high branch coverage. We compared GAMDR's effectiveness with random testing, EvoSuite, and a simple GA. The tests generated by GAMDR achieved higher branch coverage.
    BibTeX:
    @inproceedings{AburasG16,
      author = {Ali Aburas and Alex Groce},
      title = {A Method Dependence Relations Guided Genetic Algorithm},
      booktitle = {Proceedings of the 8th International Symposium on Search Based Software Engineering (SSBSE '16)},
      publisher = {Springer},
      year = {2016},
      pages = {267-273},
      address = {Raleigh, NC, USA},
      month = {8-10 October},
      doi = {http://dx.doi.org/10.1007/978-3-319-47106-8_22}
    }
    					
    2015.09.30 Tanja E.J. Vos, Beatriz Marín, Maria Jose Escalona & Alessandro Marchetto A Methodological Framework for Evaluating Software Testing Techniques and Tools 2012 Proceedings of the 12th International Conference on Quality Software (QSIC '12), pp. 230-239, Xi'an China, 27-29 August   Inproceedings Testing and Debugging
    Abstract: There exists a real need in industry for guidelines on which testing techniques to use for different testing objectives, and on how usable (effective, efficient, satisfactory) these techniques are. To date, such guidelines do not exist. They could be obtained by performing secondary studies on a body of evidence consisting of case studies evaluating and comparing testing techniques and tools. However, such a body of evidence is also lacking. In this paper, we make a first step towards creating such a body of evidence by defining a general methodological evaluation framework that can simplify the design of case studies for comparing software testing tools, and make the results more precise, reliable, and easy to compare. Using this framework, (1) software testing practitioners can more easily define case studies through an instantiation of the framework, (2) results can be better compared since they are all executed according to a similar design, (3) the gap in existing work on methodological evaluation frameworks will be narrowed, and (4) a body of evidence will be initiated. To validate the framework, we present successful applications of this methodological framework to various case studies for evaluating testing tools in an industrial environment with real objects and real subjects.
    BibTeX:
    @inproceedings{VosMEM12,
      author = {Tanja E.J. Vos and Beatriz Marín and Maria Jose Escalona and Alessandro Marchetto},
      title = {A Methodological Framework for Evaluating Software Testing Techniques and Tools},
      booktitle = {Proceedings of the 12th International Conference on Quality Software (QSIC '12)},
      publisher = {IEEE},
      year = {2012},
      pages = {230-239},
      address = {Xi'an, China},
      month = {27-29 August},
      doi = {http://dx.doi.org/10.1109/QSIC.2012.16}
    }
    					
    2007.12.02 Kurt F. Fischer, F. Raji & A. Chruscicki A Methodology for Retesting Modified Software 1981 Proceedings of the National Telecommunications Conference (NTC '81), pp. 1-6, New Orleans LA USA, 29 November-3 December   Inproceedings Testing and Debugging
    BibTeX:
    @inproceedings{FischerRC81,
      author = {Kurt F. Fischer and F. Raji and A. Chruscicki},
      title = {A Methodology for Retesting Modified Software},
      booktitle = {Proceedings of the National Telecommunications Conference (NTC '81)},
      year = {1981},
      pages = {1-6},
      address = {New Orleans, LA, USA},
      month = {29 November-3 December}
    }
    					
    2009.02.16 Chao-Jung Hsu, Chin-Yu Huang & Tsan-Yuan Chen A Modified Genetic Algorithm for Parameter Estimation of Software Reliability Growth Models 2008 Proceedings of the 19th International Symposium on Software Reliability Engineering (ISSRE '08), pp. 281-282, Seattle/Redmond WA USA, 10-14 November   Inproceedings Software/Program Verification
    Abstract: In this paper, we propose a modified genetic algorithm (MGA) with calibrating fitness functions, weighted bit mutation, and a rebuilding mechanism for the parameter estimation of software reliability growth models (SRGMs). An example using real failure data is given to demonstrate the performance of the proposed method. The experimental results show that the MGA is effective for estimating the parameters of SRGMs.
    BibTeX:
    @inproceedings{HsuHC08,
      author = {Chao-Jung Hsu and Chin-Yu Huang and Tsan-Yuan Chen},
      title = {A Modified Genetic Algorithm for Parameter Estimation of Software Reliability Growth Models},
      booktitle = {Proceedings of the 19th International Symposium on Software Reliability Engineering (ISSRE '08)},
      publisher = {IEEE},
      year = {2008},
      pages = {281-282},
      address = {Seattle/Redmond, WA, USA},
      month = {10-14 November},
      doi = {http://dx.doi.org/10.1109/ISSRE.2008.35}
    }
    					
    2017.07.07 Jeongju Sohn, Seongmin Lee & Shin Yoo Amortised Deep Parameter Optimisation of GPGPU Work Group Size for OpenCV 2016 Proceedings of the 8th International Symposium on Search Based Software Engineering (SSBSE '16), pp. 211-217, Raleigh NC USA, 8-10 October   Inproceedings
    Abstract: The appeal of highly-configurable software systems lies in their adaptability to users' needs. Search-based Combinatorial Interaction Testing (CIT) techniques have been specifically developed to drive the systematic testing of such highly-configurable systems. In order to apply these, it is paramount to devise a model of parameter configurations which conforms to the software implementation. This is a non-trivial task. Therefore, we extend traditional search-based CIT by devising 4 new testing policies able to check if the model correctly identifies constraints among the various software parameters. Our experiments show that one of our new policies is able to detect faults both in the model and the software implementation that are missed by the standard approaches.
    BibTeX:
    @inproceedings{SohnLY16,
      author = {Jeongju Sohn and Seongmin Lee and Shin Yoo},
      title = {Amortised Deep Parameter Optimisation of GPGPU Work Group Size for OpenCV},
      booktitle = {Proceedings of the 8th International Symposium on Search Based Software Engineering (SSBSE '16)},
      publisher = {Springer},
      year = {2016},
      pages = {211-217},
      address = {Raleigh, NC, USA},
      month = {8-10 October},
      doi = {http://dx.doi.org/10.1007/978-3-319-47106-8_4}
    }
    					
    2015.11.06 Shin Yoo Amortised Optimisation of Non-functional Properties in Production Environments 2015 Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE '15), pp. 31-46, Bergamo Italy, 5-7 September   Inproceedings
    Abstract: Search Based Software Engineering has high potential for optimising non-functional properties such as execution time or power consumption. However, many non-functional properties depend not only on the software system under consideration but also on the environment that surrounds the system. This necessitates support for online, in situ optimisation. This paper introduces the novel concept of amortised optimisation, which allows such online optimisation. The paper also presents two case studies: one that seeks to optimise JIT compilation, and another to optimise a hardware-dependent algorithm. The results show that, by using the open source libraries we provide, developers can improve the speed of their Python script by up to 8.6% with virtually no extra effort, and adapt a hardware-dependent algorithm automatically for unseen CPUs.
    BibTeX:
    @inproceedings{Yoo15,
      author = {Shin Yoo},
      title = {Amortised Optimisation of Non-functional Properties in Production Environments},
      booktitle = {Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE '15)},
      publisher = {Springer},
      year = {2015},
      pages = {31-46},
      address = {Bergamo, Italy},
      month = {5-7 September},
      doi = {http://dx.doi.org/10.1007/978-3-319-22183-0_3}
    }
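    The amortised idea described above can be sketched as follows: each production run executes only a single step of an ongoing search and persists the search state between runs, so the cost of optimisation is spread across normal executions. Everything in this Python sketch, the state file name, the tuned parameter and the cost measurement, is an invented placeholder rather than the paper's actual library:

      import json
      import os
      import random
      import time

      STATE = "amortised_state.json"   # hypothetical persisted search state

      def cost(block_size):
          # stand-in for a measured non-functional property (e.g. run time)
          t0 = time.perf_counter()
          sum(range(block_size * 1000))            # pretend workload
          return time.perf_counter() - t0

      def one_search_step():
          # one hill-climbing step per production run, amortised over runs
          state = {"best": 64, "best_cost": float("inf")}
          if os.path.exists(STATE):
              with open(STATE) as f:
                  state = json.load(f)
          candidate = max(1, state["best"] + random.randint(-8, 8))  # neighbour
          c = cost(candidate)
          if c < state["best_cost"]:
              state["best"], state["best_cost"] = candidate, c
          with open(STATE, "w") as f:
              json.dump(state, f)
          return state["best"]

      block_size = one_search_step()   # use the best-known value for this run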
    					
    2014.08.14 Danilo Ardagna, Giovanni Paolo Gibilisco, Michele Ciavotta & Alexander Lavrentev A Multi-Model Optimization Framework for the Model Driven Design of Cloud Applications 2014 Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14), Vol. 8636, pp. 61-76, Fortaleza Brazil, 26-29 August   Inproceedings
    Abstract: The rise and adoption of the Cloud computing paradigm had a strong impact on the ICT world in the last few years; the technology has now reached maturity, and Cloud providers offer a variety of solutions and services to their customers. However, besides its advantages, Cloud computing has introduced new issues and challenges. In particular, the heterogeneity of the Cloud services offered and of their pricing models makes the identification of a deployment solution that minimizes costs and guarantees QoS very complex. Performance assessment of Cloud-based applications needs new models and tools that take into consideration the dynamism and multi-tenancy intrinsic to the Cloud environment. The aim of this work is to provide a novel mixed integer linear programming (MILP) approach to find a minimum-cost feasible cloud configuration for a given cloud-based application. The feasibility of the solution is considered with respect to some non-functional requirements that are analyzed through multiple performance models with different levels of accuracy. The initial solution is further improved by a local-search-based procedure. The quality of the initial feasible solution is compared against first-principle heuristics currently adopted by practitioners and Cloud providers.
    BibTeX:
    @inproceedings{ArdagnaGCL14,
      author = {Danilo Ardagna and Giovanni Paolo Gibilisco and Michele Ciavotta and Alexander Lavrentev},
      title = {A Multi-Model Optimization Framework for the Model Driven Design of Cloud Applications},
      booktitle = {Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14)},
      publisher = {Springer},
      year = {2014},
      volume = {8636},
      pages = {61-76},
      address = {Fortaleza, Brazil},
      month = {26-29 August},
      doi = {http://dx.doi.org/10.1007/978-3-319-09940-8_5}
    }
    					
    2017.06.29 Giovani Guizzo, Silvia Regina Vergilio, Aurora Trinidad Ramirez Pozo & Gian Mauricio Fritsche A Multi-objective and Evolutionary Hyper-heuristic Applied to the Integration and Test Order Problem 2017 Applied Soft Computing, Vol. 56, pp. 331-344, July   Article
    Abstract: The field of Search-Based Software Engineering (SBSE) has widely utilized Multi-Objective Evolutionary Algorithms (MOEAs) to solve complex software engineering problems. However, the use of such algorithms can be a hard task for the software engineer, mainly due to the significant range of parameter and algorithm choices. To help in this task, the use of Hyper-heuristics is recommended. Hyper-heuristics can select or generate low-level heuristics while optimization algorithms are executed, and thus can be generically applied. Despite their benefits, we find only a few works using hyper-heuristics in the SBSE field. Considering this fact, we describe HITO, a Hyper-heuristic for the Integration and Test Order Problem, to adaptively select search operators while MOEAs are executed using one of the selection methods: Choice Function and Multi-Armed Bandit. The experimental results show that HITO can outperform the traditional MOEAs NSGA-II and MOEA/DD. HITO is also a generic algorithm, since the user does not need to select crossover and mutation operators, nor adjust their parameters.
    BibTeX:
    @article{GuizzoVPF17,
      author = {Giovani Guizzo and Silvia Regina Vergilio and Aurora Trinidad Ramirez Pozo and Gian Mauricio Fritsche},
      title = {A Multi-objective and Evolutionary Hyper-heuristic Applied to the Integration and Test Order Problem},
      journal = {Applied Soft Computing},
      year = {2017},
      volume = {56},
      pages = {331-344},
      month = {July},
      doi = {http://dx.doi.org/10.1016/j.asoc.2017.03.012}
    }
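    The Choice Function selection that HITO uses can be illustrated with a minimal sketch: each low-level heuristic (a crossover/mutation pairing) is scored by combining its recent reward with the time since it was last applied, trading exploitation against exploration. The heuristic names, weights and reward signal below are placeholders, not HITO's actual quality measure:

      import random

      HEURISTICS = ["1point+uniform", "2point+uniform", "1point+swap", "2point+swap"]
      reward = {h: 0.0 for h in HEURISTICS}    # recent performance of each heuristic
      last_used = {h: 0 for h in HEURISTICS}   # when each was last applied
      ALPHA, BETA = 1.0, 0.5                   # exploitation vs. exploration weights

      def choice_function(h, now):
          return ALPHA * reward[h] + BETA * (now - last_used[h])

      for step in range(1, 101):
          chosen = max(HEURISTICS, key=lambda h: choice_function(h, step))
          # apply `chosen` to the current population here; the observed
          # improvement in offspring quality is faked with a random value
          observed = random.random()
          reward[chosen] = 0.5 * reward[chosen] + 0.5 * observed  # smoothing
          last_used[chosen] = step

      print(max(HEURISTICS, key=lambda h: reward[h]))

    The elapsed-time term guarantees that a neglected operator is eventually retried even if its reward has decayed, which is what lets such a selector adapt as the search moves through different phases.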
    					
    2009.11.01 Camila Loiola Brito Maia, Rafael Augusto Ferreira do Carmo, Fabrício Gomes de Freitas, Gustavo Augusto Lima de Campos & Jerffeson Teixeira de Souza A Multi-Objective Approach for the Regression Test Case Selection Problems 2009 Proceedings of XLI Brazilian Symposium of Operational Research (XLI SBPO '09), pp. 1824-1835, Porto Seguro Bahia Brazil, 1-4 September   Inproceedings Testing and Debugging
    Abstract: When software is modified, functionality that had been working can be affected. The reliable way to guarantee that the software is working correctly after such changes is to test the whole system again, but generally there is not sufficient time. It is therefore necessary to select significant test cases to be executed, in order to guarantee that the system is working as it should. Although there is already work on the regression test case selection problem, some important features that can influence test case selection are not considered in it. In this work, we state a new and more complete multi-objective formulation of this problem. The work also shows the results of solving the problem using a multi-objective genetic algorithm, comparing it with a random algorithm.
    BibTeX:
    @inproceedings{Maiadddd09,
      author = {Camila Loiola Brito Maia and Rafael Augusto Ferreira do Carmo and Fabrício Gomes de Freitas and Gustavo Augusto Lima de Campos and Jerffeson Teixeira de Souza},
      title = {A Multi-Objective Approach for the Regression Test Case Selection Problems},
      booktitle = {Proceedings of XLI Brazilian Symposium of Operational Research (XLI SBPO '09)},
      year = {2009},
      pages = {1824-1835},
      address = {Porto Seguro, Bahia, Brazil},
      month = {1-4 September},
      url = {http://sobrapo.org.br/simposios/XLI-2009/XLI_SBPO_2009_artigos/artigos/56096.pdf}
    }
    					
    2017.07.07 Duany Dreyton, Allysson Allex Araújo, Altino Dantas, Raphael Saraiva & Jerffeson Souza A Multi-objective Approach to Prioritize and Recommend Bugs in Open Source Repositories 2016 Proceedings of the 8th International Symposium on Search Based Software Engineering (SSBSE '16), pp. 143-158, Raleigh NC USA, 8-10 October   Inproceedings
    Abstract: Bug prioritization in open source repositories is a challenging and complex task, given the significant number of reports and the impact of a wrong bug assignment on the software's evolution. Deciding which bugs are the most suitable to solve can be considered an optimization problem. Thus, we propose a search-based approach supported by a multi-objective paradigm to tackle this problem, aiming to maximize the resolution of the most important bugs while minimizing the risk of late resolution of the most severe ones. Furthermore, we propose a strategy to reduce the developer's effort when choosing a solution from the Pareto front. In the empirical study, we evaluate the performance of three metaheuristics and investigate the human competitiveness of the approach. Overall, the proposed approach can be considered human-competitive in a real-world scenario, and NSGA-II outperformed both MOCell and IBEA on the adopted quality measures.
    BibTeX:
    @inproceedings{DreytonADSS16,
      author = {Duany Dreyton and Allysson Allex Araújo and Altino Dantas and Raphael Saraiva and Jerffeson Souza},
      title = {A Multi-objective Approach to Prioritize and Recommend Bugs in Open Source Repositories},
      booktitle = {Proceedings of the 8th International Symposium on Search Based Software Engineering (SSBSE '16)},
      publisher = {Springer},
      year = {2016},
      pages = {143-158},
      address = {Raleigh, NC, USA},
      month = {8-10 October},
      doi = {http://dx.doi.org/10.1007/978-3-319-47106-8_10}
    }
    					
    2009.08.05 Zai Wang, Tianshi Chen, Ke Tang & Xin Yao A Multi-objective Approach to Redundancy Allocation Problem in Parallel-series Systems 2009 Proceedings of the 10th IEEE Congress on Evolutionary Computation (CEC '09), pp. 582-589, Trondheim Norway, 18-21 May   Inproceedings Management
    Abstract: The Redundancy Allocation Problem (RAP) is a kind of reliability optimization problem. It involves the selection of components with appropriate levels of redundancy or reliability to maximize the system reliability under some predefined constraints. We can formulate the RAP as a combinatorial problem when just considering the redundancy level, and as a continuous problem when considering the reliability level. The RAP employed in this paper is of the combinatorial kind. During the past thirty years, there have been a number of investigations of RAP. However, these investigations often treat RAP as a single-objective problem whose only goal is to maximize the system reliability (or minimize the design cost). In this paper, we regard RAP as a multi-objective optimization problem: the reliability of the system and the corresponding design cost are considered as two different objectives. Consequently, we utilize a classical Multi-objective Evolutionary Algorithm (MOEA), named Non-dominated Sorting Genetic Algorithm II (NSGA-II), to cope with this multi-objective redundancy allocation problem (MORAP) under a number of constraints. The experimental results demonstrate that the multi-objective evolutionary approach can provide more promising solutions in comparison with two widely used single-objective approaches on two parallel-series systems which are frequently studied in the field of reliability optimization.
    BibTeX:
    @inproceedings{WangCTY09,
      author = {Zai Wang and Tianshi Chen and Ke Tang and Xin Yao},
      title = {A Multi-objective Approach to Redundancy Allocation Problem in Parallel-series Systems},
      booktitle = {Proceedings of the 10th IEEE Congress on Evolutionary Computation (CEC '09)},
      publisher = {IEEE},
      year = {2009},
      pages = {582-589},
      address = {Trondheim, Norway},
      month = {18-21 May},
      url = {http://portal.acm.org/citation.cfm?id=1689675}
    }
    					
    2007.12.02 Mark Harman, Kiran Lakhotia & Phil McMinn A Multi-Objective Approach to Search-based Test Data Generation 2007 Proceedings of the 9th annual Conference on Genetic and Evolutionary Computation (GECCO '07), pp. 1098-1105, London England, 7-11 July   Inproceedings Testing and Debugging
    Abstract: There has been a considerable body of work on search-based test data generation for branch coverage. However, hitherto, there has been no work on multi-objective branch coverage. In many scenarios a single-objective formulation is unrealistic; testers will want to find test sets that meet several objectives simultaneously in order to maximize the value obtained from the inherently expensive process of running the test cases and examining the output they produce. This paper introduces multi-objective branch coverage. The paper presents results from a case study of the twin objectives of branch coverage and dynamic memory consumption for both real and synthetic programs. Several multi-objective evolutionary algorithms are applied. The results show that multi-objective evolutionary algorithms are suitable for this problem, and illustrates the way in which a Pareto optimal search can yield insights into the trade-offs between the two simultaneous objectives.
    BibTeX:
    @inproceedings{HarmanLM07,
      author = {Mark Harman and Kiran Lakhotia and Phil McMinn},
      title = {A Multi-Objective Approach to Search-based Test Data Generation},
      booktitle = {Proceedings of the 9th annual Conference on Genetic and Evolutionary Computation (GECCO '07)},
      publisher = {ACM},
      year = {2007},
      pages = {1098-1105},
      address = {London, England},
      month = {7-11 July},
      doi = {http://dx.doi.org/10.1145/1276958.1277175}
    }
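    The two objectives studied above, branch coverage and dynamic memory consumption, can both be measured per test input with standard tooling. The sketch below (an illustration with an invented function under test, not the paper's setup) records which branch an input exercises and the peak memory it allocates, which is the raw material a Pareto search would optimise over:

      import tracemalloc

      def under_test(x):
          if x > 100:          # branch 1
              data = [0] * x   # memory use grows with x
          else:                # branch 2
              data = [0]
          return len(data)

      def evaluate(x):
          tracemalloc.start()
          under_test(x)
          _, peak = tracemalloc.get_traced_memory()
          tracemalloc.stop()
          branch = 1 if x > 100 else 2
          return branch, peak  # objectives: branch exercised, peak bytes

      print(evaluate(5), evaluate(500))

    A Pareto-optimal search over such pairs surfaces, for each coverage target, the cheapest input in memory terms, which is exactly the kind of trade-off insight the paper reports.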
    					
    2008.09.02 Zai Wang, Ke Tang & Xin Yao A Multi-Objective Approach to Testing Resource Allocation in Modular Software Systems 2008 Proceedings of the 2008 IEEE Congress on Evolutionary Computation (CEC '08), pp. 1148-1153, Hong Kong China, 1-6 June   Inproceedings Software/Program Verification
    Abstract: Nowadays, as the software systems become increasingly large and complex, the problem of allocating the limited testing-resource during the testing phase has become more and more difficult. In this paper, we propose to solve the testing-resource allocation problem (TRAP) using multi-objective evolutionary algorithms. Specifically, we formulate TRAP as two multi-objective problems. First, we consider the reliability of the system and the testing cost as two objectives. In the second formulation, the total testing-resource consumed is also taken into account as the third goal. Two multi-objective evolutionary algorithms, Non-dominated Sorting Genetic Algorithm II (NSGA2) and Multi-Objective Differential Evolution Algorithms (MODE), are applied to solve the TRAP in the two scenarios. This is the first time that the TRAP is explicitly formulated and solved by multi-objective evolutionary approaches. Advantages of our approaches over the state-of-the-art single-objective approaches are demonstrated on two parallel-series modular software models.
    BibTeX:
    @inproceedings{WangTY08,
      author = {Zai Wang and Ke Tang and Xin Yao},
      title = {A Multi-Objective Approach to Testing Resource Allocation in Modular Software Systems},
      booktitle = {Proceedings of the 2008 IEEE Congress on Evolutionary Computation (CEC '08)},
      publisher = {IEEE},
      year = {2008},
      pages = {1148-1153},
      address = {Hong Kong, China},
      month = {1-6 June},
      doi = {http://dx.doi.org/10.1109/CEC.2008.4630941}
    }
    					
    2011.07.18 Thaise Yano, Eliane Martins & Fabiano Luis de Sousa A Multi-Objective Evolutionary Algorithm to Obtain Test Cases With Variable Lengths 2011 Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation (GECCO '11), pp. 1875-1882, Dublin Ireland, 12-16 July   Inproceedings Testing and Debugging
    Abstract: In this paper a new multi-objective implementation of the generalized extremal optimization (GEO) algorithm, named M-GEOvsl, is presented. It was developed primarily to be used as a test case generator to find transition paths in extended finite state machines (EFSMs), taking into account not only the transition to be covered but also the minimization of the test length. M-GEOvsl can deal with strings whose number of elements varies dynamically, making it possible to generate solutions with different lengths. The steps of the algorithm are described for a general multi-objective problem in which the solution length is an element to be optimized. Experiments were performed to generate test cases from EFSM benchmark models using M-GEOvsl, and the approach was compared with a related work.
    BibTeX:
    @inproceedings{YanoMd11,
      author = {Thaise Yano and Eliane Martins and Fabiano Luis de Sousa},
      title = {A Multi-Objective Evolutionary Algorithm to Obtain Test Cases With Variable Lengths},
      booktitle = {Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation (GECCO '11)},
      publisher = {ACM},
      year = {2011},
      pages = {1875-1882},
      address = {Dublin, Ireland},
      month = {12-16 July},
      doi = {http://dx.doi.org/10.1145/2001576.2001828}
    }
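    M-GEOvsl's distinguishing feature is that solution length is itself subject to search. A toy illustration, with assumed operators rather than the paper's: a mutation that can grow, shrink, or rewrite one position of a transition sequence.

      import random

      def mutate_variable_length(path, transitions, max_len=20):
          # Grow, shrink, or rewrite one position of an EFSM transition path.
          path = list(path)
          op = random.choice(["grow", "shrink", "point"])
          if op == "grow" and len(path) < max_len:
              path.insert(random.randrange(len(path) + 1), random.choice(transitions))
          elif op == "shrink" and len(path) > 1:
              path.pop(random.randrange(len(path)))
          else:
              path[random.randrange(len(path))] = random.choice(transitions)
          return path

      random.seed(0)
      print(mutate_variable_length(["t1", "t4", "t2"], ["t1", "t2", "t3", "t4"]))

    Fitness would then score both coverage of the target transition and the path length, matching the two objectives described above.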
    					
    2013.06.28 Nesa Asoudeh & Yvan Labiche A Multi-Objective Genetic Algorithm for Generating Test Suites from Extended Finite State Machines 2013 Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13), Vol. 8084, St. Petersburg Russia, 24-26 August   Inproceedings Testing and Debugging
    Abstract: We propose a test suite generation technique from extended finite state machines based on a genetic algorithm that fulfills multiple (conflicting) objectives. We aim at maximizing coverage and feasibility of a set of test cases while minimizing similarity between these cases and minimizing overall cost.
    BibTeX:
    @inproceedings{AsoudehL13,
      author = {Nesa Asoudeh and Yvan Labiche},
      title = {A Multi-Objective Genetic Algorithm for Generating Test Suites from Extended Finite State Machines},
      booktitle = {Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13)},
      publisher = {Springer},
      year = {2013},
      volume = {8084},
      address = {St. Petersburg, Russia},
      month = {24-26 August},
      doi = {http://dx.doi.org/10.1007/978-3-642-39742-4_26}
    }
    					
    2013.06.28 Lionel Briand, Yvan Labiche & Kathy Chen A Multi-Objective Genetic Algorithm to Rank State-Based Test Cases 2013 Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13), Vol. 8084, pp. 66-80, St. Petersburg Russia, 24-26 August   Inproceedings Testing and Debugging
    Abstract: We propose a multi-objective genetic algorithm method to prioritize state-based test cases to achieve several competing objectives such as budget and coverage of data flow information, while hopefully detecting faults as early as possible when executing prioritized test cases. The experimental results indicate that our approach is useful and effective: prioritizations quickly achieve maximum data flow coverage and this results in early fault detection; prioritizations perform much better than random orders with much smaller variance.
    BibTeX:
    @inproceedings{BriandLC13,
      author = {Lionel Briand and Yvan Labiche and Kathy Chen},
      title = {A Multi-Objective Genetic Algorithm to Rank State-Based Test Cases},
      booktitle = {Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13)},
      publisher = {Springer},
      year = {2013},
      volume = {8084},
      pages = {66-80},
      address = {St. Petersburg, Russia},
      month = {24-26 August},
      doi = {http://dx.doi.org/10.1007/978-3-642-39742-4_7}
    }
    					
    2011.03.07 Gustavo Henrique de Lima Pinto & Silvia Regina Vergilio A Multi-Objective Genetic Algorithm to Test Data Generation 2010 Proceedings of the 22nd IEEE International Conference on Tools with Artificial Intelligence (ICTAI '10), pp. 129-134, Arras France, 27-29 October   Inproceedings Testing and Debugging
    Abstract: Evolutionary testing has successfully applied search based optimization algorithms to the test data generation problem. The existing works use different techniques and fitness functions. However, the functions used consider only one objective, which is, in general, related to the coverage of a testing criterion. In practice, however, there are many factors that can influence the generation of test data, such as memory consumption, execution time, and revealed faults. Considering this fact, this work explores a multiobjective optimization approach for test data generation. A framework that implements a multi-objective genetic algorithm is described. Two different representations for the population are used, which allows testing of both procedural and object-oriented code. Combinations of three objectives are experimentally evaluated: coverage of structural test criteria, ability to reveal faults, and execution time.
    BibTeX:
    @inproceedings{PintoV10,
      author = {Gustavo Henrique de Lima Pinto and Silvia Regina Vergilio},
      title = {A Multi-Objective Genetic Algorithm to Test Data Generation},
      booktitle = {Proceedings of the 22nd IEEE International Conference on Tools with Artificial Intelligence (ICTAI '10)},
      publisher = {IEEE},
      year = {2010},
      pages = {129-134},
      address = {Arras, France},
      month = {27-29 October},
      doi = {http://dx.doi.org/10.1109/ICTAI.2010.26}
    }
    					
    2007.12.02 Taghi M. Khoshgoftaar, Yi Liu & Naeem Seliya A Multiobjective Module-Order Model for Software Quality Enhancement 2004 IEEE Transactions on Evolutionary Computation, Vol. 8(6), pp. 593-608, December   Article Design Tools and Techniques
    Abstract: The knowledge, prior to system operations, of which program modules are problematic is valuable to a software quality assurance team, especially when there is a constraint on software quality enhancement resources. A cost-effective approach for allocating such resources is to obtain a prediction in the form of a quality-based ranking of program modules. Subsequently, a module-order model (MOM) is used to gauge the performance of the predicted rankings. From a practical software engineering point of view, multiple software quality objectives may be desired by a MOM for the system under consideration: e.g., the desired rankings may be such that 100% of the faults should be detected if the top 50% of modules with the highest number of faults are subjected to quality improvements. Moreover, the management team for the same system may also desire that 80% of the faults should be accounted for if the top 20% of the modules are targeted for improvement. Existing work related to MOM(s) uses a quantitative prediction model to obtain the predicted rankings of program modules, implying that only the fault prediction error measures such as the average, relative, or mean square errors are minimized. Such an approach does not provide a direct insight into the performance behavior of a MOM. For a given percentage of modules enhanced, the performance of a MOM is gauged by how many faults are accounted for by the predicted ranking as compared with the perfect ranking. We propose an approach for calibrating a multiobjective MOM using genetic programming. Other estimation techniques, e.g., multiple linear regression and neural networks, cannot achieve multiobjective optimization for MOM(s). The proposed methodology facilitates the simultaneous optimization of multiple performance objectives for a MOM. Case studies of two industrial software systems are presented, the empirical results of which demonstrate a new promise for goal-oriented software quality modeling.
    BibTeX:
    @article{KhoshgoftaarLS04b,
      author = {Taghi M. Khoshgoftaar and Yi Liu and Naeem Seliya},
      title = {A Multiobjective Module-Order Model for Software Quality Enhancement},
      journal = {IEEE Transactions on Evolutionary Computation},
      year = {2004},
      volume = {8},
      number = {6},
      pages = {593-608},
      month = {December},
      doi = {http://dx.doi.org/10.1109/TEVC.2004.837108}
    }
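    The performance measure described in the abstract is directly computable: for a given cutoff, compare the share of faults captured by the predicted module ranking against the share captured by the perfect ranking. A small sketch with made-up fault counts:

      def faults_accounted(faults, ranking, cutoff):
          # Fraction of all faults found in the top `cutoff` share of `ranking`.
          k = max(1, int(len(ranking) * cutoff))
          return sum(faults[m] for m in ranking[:k]) / sum(faults.values())

      faults = {"a": 9, "b": 5, "c": 3, "d": 0, "e": 1}        # hypothetical
      predicted = ["b", "a", "d", "c", "e"]                    # model's ranking
      perfect = sorted(faults, key=faults.get, reverse=True)   # ideal ranking
      for cutoff in (0.2, 0.5):
          print(cutoff,
                faults_accounted(faults, predicted, cutoff),
                faults_accounted(faults, perfect, cutoff))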
    					
    2014.02.21 Wesley Klewerton Guez Assunção, Thelma Elita Colanzi, Silvia Regina Vergilio & Aurora Pozo A Multi-objective Optimization Approach for the Integration and Test Order Problem 2014 Information Sciences, Vol. 267, pp. 119-139, May   Article Testing and Debugging
    Abstract: A common problem found during integration testing is to determine an order in which to integrate and test the units. Important factors related to stubbing costs and constraints regarding the software development context must be considered. For this problem, the most promising results were obtained with multi-objective algorithms; however, few algorithms and contexts have been addressed by existing works. Considering this, the paper introduces a generic approach based on multi-objective optimization that can be applied in different development contexts and with distinct multi-objective algorithms. The approach is instantiated in the object-oriented and aspect-oriented contexts, and evaluated with real systems and three algorithms: NSGA-II, SPEA2 and PAES. The algorithms are compared using different numbers of objectives and four quality indicators. Results point out that the characteristics of the systems, the instantiation context and the number of objectives influence the behavior of the algorithms. Although PAES reaches better results for more complex systems, NSGA-II is more suitable for the problem in general, considering all systems and indicators.
    BibTeX:
    @article{AssuncaoCVP14,
      author = {Wesley Klewerton Guez Assunção and Thelma Elita Colanzi and Silvia Regina Vergilio and Aurora Pozo},
      title = {A Multi-objective Optimization Approach for the Integration and Test Order Problem},
      journal = {Information Sciences},
      year = {2014},
      volume = {267},
      pages = {119-139},
      month = {May},
      doi = {http://dx.doi.org/10.1016/j.ins.2013.12.040}
    }
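    The cost side of the integration and test order problem can be illustrated with a simple stub count (an assumed toy model, not the paper's cost measures): when a unit is tested before something it depends on, that dependency must be stubbed.

      def stubs_needed(order, deps):
          # deps maps each unit to the units it depends on.
          integrated, stubs = set(), 0
          for unit in order:
              stubs += sum(1 for d in deps.get(unit, ()) if d not in integrated)
              integrated.add(unit)
          return stubs

      deps = {"A": ["B", "C"], "B": ["C"], "C": []}   # hypothetical dependencies
      print(stubs_needed(["C", "B", "A"], deps))      # 0 stubs
      print(stubs_needed(["A", "B", "C"], deps))      # 3 stubs

    A multi-objective search over permutations would then trade such stubbing costs off against the other context-specific objectives mentioned above.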
    					
    2010.11.23 Tarciane de Castro Andrade, Fabrício Gomes de Freitas, Daniel Pinto Coutinho & Jerffeson Teixeira de Souza A Multiobjective Optimization Approach to the Defect Correction Priorization Problem (in Portuguese) 2010 Proceedings of the Brazilian Workshop on Optimization in Software Engineering (WOES '10), Salvador Brazil, 30-30 September   Inproceedings Testing and Debugging
    Abstract: Tests are essential to improving software quality, and they produce a list of defects as a result. The prioritization of which defects should be corrected aids in a better allocation of resources, effort and planning of future versions. However, prioritizing them requires considering several factors according to the different views of the team and the client. This paper proposes a multiobjective optimization formulation of this problem and validates the approach with the metaheuristics NSGA-II and MOCell. The results of the proposed approach outperformed those of other techniques frequently applied to the problem.
    BibTeX:
    @inproceedings{deCastroAndradedCd10,
      author = {Tarciane de Castro Andrade and Fabrício Gomes de Freitas and Daniel Pinto Coutinho and Jerffeson Teixeira de Souza},
      title = {A Multiobjective Optimization Approach to the Defect Correction Priorization Problem (in Portuguese)},
      booktitle = {Proceedings of the Brazilian Workshop on Optimization in Software Engineering (WOES '10)},
      year = {2010},
      address = {Salvador, Brazil},
      month = {30-30 September},
      url = {http://www.uniriotec.br/~marcio.barros/woes2010/Paper05.pdf}
    }
    					
    2012.08.22 Márcia Maria Albuquerque Brasil, Thiago Gomes Nepomuceno da Silva, Fabrício Gomes de Freitas, Jerffeson Teixeira de Souza & Mariela Inés Cortés A Multiobjective Optimization Approach to the Software Release Planning with Undefined Number of Releases and Interdependent Requirements 2011 Proceedings of the 13th International Conference on Enterprise Information Systems (ICEIS '11), Beijing China, 8-11 June   Inproceedings Requirements/Specifications
    Abstract: In software development, release planning is a complex activity which involves several aspects related to which requirements are going to be developed in each release of the system. The planning must meet the customers' needs and comply with existing constraints. This paper presents an approach based on multiobjective optimization for release planning. The approach tackles formulations when the number of releases is not known a priori and also when the stakeholders have a desired number of releases (target). The optimization model is based on stakeholders' satisfaction, business value and risk management. Requirements interdependencies are also considered. In order to validate the approach, experiments are carried out and the results indicate the validity of the proposed approach.
    BibTeX:
    @inproceedings{BrasilSFSC11,
      author = {Márcia Maria Albuquerque Brasil and Thiago Gomes Nepomuceno da Silva and Fabrício Gomes de Freitas and Jerffeson Teixeira de Souza and Mariela Inés Cortés},
      title = {A Multiobjective Optimization Approach to the Software Release Planning with Undefined Number of Releases and Interdependent Requirements},
      booktitle = {Proceedings of the 13th International Conference on Enterprise Information Systems (ICEIS '11)},
      publisher = {Springer},
      year = {2011},
      address = {Beijing, China},
      month = {8-11 June},
      doi = {http://dx.doi.org/10.1007/978-3-642-29958-2_20}
    }
    					
    2016.02.17 Luciano S. de Souza, Pericles B.C. de Miranda, Ricardo B.C. Prudencio & Flavia de A. Barros A Multi-objective Particle Swarm Optimization for Test Case Selection based on Functional Requirements Coverage and Execution Effort 2011 Proceedings of the 23rd IEEE International Conference on Tools with Artificial Intelligence (ICTAI '11), pp. 245-252, Boca Raton USA, 7-9 November   Inproceedings Testing and Debugging
    Abstract: Although software testing is a central task in the software lifecycle, it is sometimes neglected due to its high costs. Tools that automate the testing process lower its costs; however, they generate large test suites with redundant Test Cases (TC). Automatic TC Selection aims to reduce a test suite based on some selection criterion. This process can be treated as an optimization problem, aiming to find a subset of TCs which optimizes one or more objective functions (i.e., selection criteria). The majority of search-based works focus on single-objective selection. In this light, we developed a mechanism for functional TC selection which considers two objectives simultaneously: maximize requirements' coverage while minimizing cost in terms of TC execution effort. This mechanism was implemented as a multi-objective optimization process based on Particle Swarm Optimization (PSO). We implemented two multi-objective versions of PSO (BMOPSO and BMOPSO-CDR). The experiments were performed on two real test suites, revealing very satisfactory results (attesting the feasibility of the proposed approach). We highlight that execution effort is an important aspect in the testing process, and it has not been used in a multi-objective way together with requirements coverage for functional TC selection.
    BibTeX:
    @inproceedings{SouzaMPB11,
      author = {Luciano S. de Souza and Pericles B. C. de Miranda and Ricardo B. C. Prudencio and Flavia de A. Barros},
      title = {A Multi-objective Particle Swarm Optimization for Test Case Selection based on Functional Requirements Coverage and Execution Effort},
      booktitle = {Proceedings of the 23rd IEEE International Conference on Tools with Artificial Intelligence (ICTAI '11)},
      publisher = {IEEE},
      year = {2011},
      pages = {245-252},
      address = {Boca Raton, USA},
      month = {7-9 November},
      doi = {http://dx.doi.org/10.1109/ICTAI.2011.45}
    }
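    The two selection criteria are easy to state concretely. A sketch, with assumed data rather than the paper's suites, of evaluating one binary selection vector of the kind a binary multi-objective PSO moves through the search space:

      def evaluate_selection(bits, req_cov, effort):
          # bits[i] == 1 selects test case i; returns (coverage, cost).
          covered, cost = set(), 0.0
          for i, b in enumerate(bits):
              if b:
                  covered |= req_cov[i]
                  cost += effort[i]
          return len(covered), cost    # maximize coverage, minimize cost

      req_cov = [{1, 2}, {2, 3}, {4}, {1, 4}]   # hypothetical traceability
      effort = [3.0, 2.5, 1.0, 4.0]             # hypothetical execution effort
      print(evaluate_selection([1, 0, 1, 0], req_cov, effort))   # (3, 4.0)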
    					
    2009.03.03 Taghi M. Khoshgoftaar & Yi Liu A Multi-Objective Software Quality Classification Model Using Genetic Programming 2007 IEEE Transactions on Reliability, Vol. 56(2), pp. 237-245, June   Article Management
    Abstract: A key factor in the success of a software project is achieving the best-possible software reliability within the allotted time & budget. Classification models which provide a risk-based software quality prediction, such as fault-prone & not fault-prone, are effective in providing a focused software quality assurance endeavor. However, their usefulness largely depends on whether all the predicted fault-prone modules can be inspected or improved by the allocated software quality-improvement resources, and on the project-specific costs of misclassifications. Therefore, a practical goal of calibrating classification models is to lower the expected cost of misclassification while providing a cost-effective use of the available software quality-improvement resources. This paper presents a genetic programming-based decision tree model which facilitates a multi-objective optimization in the context of the software quality classification problem. The first objective is to minimize the "Modified Expected Cost of Misclassification", which is our recently proposed goal-oriented measure for selecting & evaluating classification models. The second objective is to optimize the number of predicted fault-prone modules such that it is equal to the number of modules which can be inspected by the allocated resources. Some commonly used classification techniques, such as logistic regression, decision trees, and analogy-based reasoning, are not suited for directly optimizing multi-objective criteria. In contrast, genetic programming is particularly suited for the multi-objective optimization problem. An empirical case study of a real-world industrial software system demonstrates the promising results, and the usefulness of the proposed model.
    BibTeX:
    @article{KhoshgoftaarL07,
      author = {Taghi M. Khoshgoftaar and Yi Liu},
      title = {A Multi-Objective Software Quality Classification Model Using Genetic Programming},
      journal = {IEEE Transactions on Reliability},
      year = {2007},
      volume = {56},
      number = {2},
      pages = {237-245},
      month = {June},
      doi = {http://dx.doi.org/10.1109/TR.2007.896763}
    }
    					
    2007.12.02 Kiarash Mahdavi, Mark Harman & Robert M. Hierons A Multiple Hill Climbing Approach to Software Module Clustering 2003 Proceedings of the International Conference on Software Maintenance (ICSM '03), pp. 315-324, Amsterdam Holland, 26-27 September   Inproceedings Distribution and Maintenance
    Abstract: Automated software module clustering is important for maintenance of legacy systems written in a 'monolithic format' with inadequate module boundaries. Even where systems were originally designed with suitable module boundaries, structure tends to degrade as the system evolves, making re-modularization worthwhile. This paper focuses upon search-based approaches to the automated module clustering problem, where hitherto, the local search approach of hill climbing has been found to be most successful. In the paper we show that results from a set of multiple hill climbs can be combined to locate good 'building blocks' for subsequent searches. Building blocks are formed by identifying the common features in a selection of best hill climbs. This process reduces the search space, while simultaneously 'hard wiring' parts of the solution. The paper reports the results of an empirical study that show that the multiple hill climbing approach does indeed guide the search to higher peaks in subsequent executions. The paper also investigates the relationship between the improved results and the system size.
    BibTeX:
    @inproceedings{MahdaviHH03b,
      author = {Kiarash Mahdavi and Mark Harman and Robert M. Hierons},
      title = {A Multiple Hill Climbing Approach to Software Module Clustering},
      booktitle = {Proceedings of the International Conference on Software Maintenance (ICSM '03)},
      publisher = {IEEE},
      year = {2003},
      pages = {315-324},
      address = {Amsterdam, Holland},
      month = {26-27 September},
      doi = {http://dx.doi.org/10.1109/ICSM.2003.1235437}
    }
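    The 'building block' step can be pictured as intersecting the agreements of the best climbs: module pairs that every good solution co-clusters are hard-wired together before the next search. A simplified sketch (assumed representation: a dict from module to cluster id):

      from itertools import combinations

      def building_blocks(best_climbs):
          # Return module pairs co-clustered in every one of the best climbs.
          modules = sorted(best_climbs[0])
          return {(a, b) for a, b in combinations(modules, 2)
                  if all(c[a] == c[b] for c in best_climbs)}

      climbs = [{"m1": 0, "m2": 0, "m3": 1},
                {"m1": 2, "m2": 2, "m3": 0}]    # hypothetical best hill climbs
      print(building_blocks(climbs))            # {('m1', 'm2')}

    Hard-wiring such pairs shrinks the search space for subsequent hill climbs, which is exactly the effect the study measures.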
    					
    2012.03.08 Mohammad Alshraideh, Basel A. Mahafzah & Saleh Al-Sharaeh A Multiple-population Genetic Algorithm for Branch Coverage Test Data Generation 2011 Software Quality Journal, Vol. 19(3), pp. 489-513   Article Testing and Debugging
    Abstract: The software testing phase in the software development process is considered a time-consuming process. In order to reduce the overall development cost, automatic test data generation techniques based on genetic algorithms have been widely applied. This research explores a new approach for using genetic algorithms as test data generators to execute all the branches in a program. In the literature, existing approaches for test data generation using genetic algorithms are mainly focused on maintaining a single population of candidate tests, where the computation of the fitness function for a particular target branch is based on the closeness of the input execution path to the control dependency condition of that branch. The new approach utilizes acyclic predicate paths of the program's control flow graph containing the target branch as goals of separate search processes using distinct island populations. The advantages of the suggested approach are its ability to explore a greater variety of execution paths and, in certain conditions, its increased search effectiveness. When applied to a collection of programs with a moderate number of branches, it has been shown experimentally that the proposed multiple-population algorithm outperforms the single-population algorithm significantly in terms of the number of executions, execution time, time improvement, and search effectiveness.
    BibTeX:
    @article{AlshraidehMA11,
      author = {Mohammad Alshraideh and Basel A. Mahafzah and Saleh Al-Sharaeh},
      title = {A Multiple-population Genetic Algorithm for Branch Coverage Test Data Generation},
      journal = {Software Quality Journal},
      year = {2011},
      volume = {19},
      number = {3},
      pages = {489-513},
      doi = {http://dx.doi.org/10.1007/s11219-010-9117-4}
    }
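    Fitness functions of the kind described here are usually built from the standard branch-distance rules (a textbook formulation, not code from this paper): zero when the desired branch is taken, otherwise a measure of how close the inputs came to satisfying the controlling predicate.

      def branch_distance(op, a, b, k=1.0):
          # Korel-style distance for the *true* branch of `a op b`.
          if op == "==":
              return abs(a - b)
          if op == "<":
              return 0.0 if a < b else (a - b) + k
          if op == "<=":
              return 0.0 if a <= b else (a - b) + k
          raise ValueError("unsupported operator: " + op)

      # Distance to covering the true branch of `if x == 42:`
      for x in (0, 40, 42):
          print(x, branch_distance("==", x, 42))

    In the multiple-population scheme, each island would minimize such a distance along its own acyclic predicate path toward the shared target branch.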
    					
    2015.12.09 Rui Angelo Matnei Filho & Silvia Regina Vergilio A Mutation and Multi-objective Test Data Generation Approach for Feature Testing of Software Product Lines 2015 Proceedings of the 29th Brazilian Symposium on Software Engineering (SBES '15), pp. 21-30, Belo Horizonte Brazil, 21-26 September   Inproceedings Testing and Debugging
    Abstract: Mutation approaches have been recently applied for feature testing of Software Product Lines (SPLs). The idea is to select products, associated with mutation operators that describe possible faults in the Feature Model (FM). In this way, the operators and mutation score can be used to evaluate and generate a test set, that is, a set of SPL products to be tested. However, the generation of test sets that kill all the mutants with a reduced, possibly minimum, number of products is a complex task. To solve this problem, this paper introduces a multi-objective approach that includes a representation of the problem, search operators, and two objectives related to the number of test cases and dead mutants. The approach was implemented with three representative multi-objective evolutionary algorithms: NSGA-II, SPEA2 and IBEA. The conducted evaluation analyses the solutions obtained and compares the algorithms. An advantage of this approach is to offer a set of good solutions for the tester, with a reduced number of products and high mutation score values, that is, with a high probability of revealing the faults described by mutation testing.
    BibTeX:
    @inproceedings{FilhoV15,
      author = {Rui Angelo Matnei Filho and Silvia Regina Vergilio},
      title = {A Mutation and Multi-objective Test Data Generation Approach for Feature Testing of Software Product Lines},
      booktitle = {Proceedings of the 29th Brazilian Symposium on Software Engineering (SBES '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {21-30},
      address = {Belo Horizonte, Brazil},
      month = {21-26 September},
      doi = {http://dx.doi.org/10.1109/SBES.2015.17}
    }
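    The two objectives reduce to counting selected products and the mutants they collectively kill. A toy evaluation with a hypothetical kill matrix:

      def spl_objectives(selected, kills, total_mutants):
          # kills maps a product id to the set of FM mutants it kills.
          killed = set().union(*(kills[p] for p in selected)) if selected else set()
          # Minimize the product count, maximize the mutation score.
          return len(selected), len(killed) / total_mutants

      kills = {"p1": {1, 2}, "p2": {2, 3, 4}, "p3": {5}}   # hypothetical
      print(spl_objectives({"p1", "p2"}, kills, total_mutants=5))   # (2, 0.8)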
    					
    2017.07.07 Vivek Nair, Tim Menzies & Jianfeng Chen An (Accidental) Exploration of Alternatives to Evolutionary Algorithms for SBSE 2016 Proceedings of the 8th International Symposium on Search Based Software Engineering (SSBSE '16), pp. 96-111, Raleigh NC USA, 8-10 October   Inproceedings
    Abstract: SBSE researchers often use an evolutionary algorithm to solve various software engineering problems. This paper explores an alternate approach of sampling. This approach is called SWAY (Sampling WAY) and finds the (near) optimal solutions to the problem by (i) creating a larger initial population and (ii) intelligently sampling the solution space to find the best subspace. Unlike evolutionary algorithms, SWAY does not use mutation or cross-over or multi-generational reasoning to find interesting subspaces but relies on the underlying dimensions of the solution space. Experiments with Software Engineering (SE) models show that SWAY’s performance is competitive with standard MOEAs while terminating over an order of magnitude faster.
    BibTeX:
    @inproceedings{NairMC16,
      author = {Vivek Nair and Tim Menzies and Jianfeng Chen},
      title = {An (Accidental) Exploration of Alternatives to Evolutionary Algorithms for SBSE},
      booktitle = {Proceedings of the 8th International Symposium on Search Based Software Engineering (SSBSE '16)},
      publisher = {Springer},
      year = {2016},
      pages = {96-111},
      address = {Raleigh, NC, USA},
      month = {8-10 October},
      doi = {http://dx.doi.org/10.1007/978-3-319-47106-8_7}
    }
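    A heavily simplified sketch of the sampling idea (assumptions: numeric decision vectors and a user-supplied score; the real SWAY splits the decision space with a FastMap-style projection and compares representatives over the objectives): oversample once, split the population between its two most distant representatives, and recurse into the half whose representative scores better.

      import random

      def dist(a, b):
          return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

      def sway(pop, score, enough=10):
          if len(pop) <= enough:
              return pop
          west = max(pop, key=lambda p: dist(p, pop[0]))   # a far point
          east = max(pop, key=lambda p: dist(p, west))     # farthest from it
          pop = sorted(pop, key=lambda p: dist(p, west) - dist(p, east))
          half = len(pop) // 2
          keep = pop[:half] if score(west) <= score(east) else pop[half:]
          return sway(keep, score, enough)

      random.seed(1)
      pop = [[random.random() for _ in range(5)] for _ in range(512)]
      best = sway(pop, score=sum)            # toy objective: minimize the sum
      print(min(sum(p) for p in best))

    No mutation, crossover, or generations: the large initial sample plus recursive halving does all the work, which is the contrast with MOEAs the paper draws.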
    					
    2012.10.25 Thiago do Nascimento Ferreira & Jerffeson Teixeira de Souza An ACO approach for the Next Release Problem with Dependency among Requirements 2012 Proceedings of the 3rd Brazilian Workshop on Search-Based Software Engineering (WESB '12), Natal RN Brazil, 23 September   Inproceedings Requirements/Specifications
    Abstract: The Next Release Problem is a software engineering problem in which, given a set of requirements, one must select the subset of requirements that maximizes value without exceeding the release budget. The Next Release Problem with Requirements Interdependencies is the variant in which including a requirement in the next release affects cost and value and may force the inclusion of other, mandatory requirements. This paper presents an Ant Colony Optimization approach to this problem and performs a comparative study with other metaheuristics, such as Genetic Algorithms and Simulated Annealing.
    BibTeX:
    @inproceedings{FerreiraS12,
      author = {Thiago do Nascimento Ferreira and Jerffeson Teixeira de Souza},
      title = {An ACO approach for the Next Release Problem with Dependency among Requirements},
      booktitle = {Proceedings of the 3rd Brazilian Workshop on Search-Based Software Engineering (WESB '12)},
      year = {2012},
      address = {Natal, RN, Brazil},
      month = {23 September}
    }
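    A toy ant-construction step for the problem (an assumed pheromone model, not the authors' exact algorithm): an ant only sees requirements whose prerequisites are already selected and which still fit the budget, and picks among them in proportion to pheromone times value.

      import random

      def construct_solution(reqs, budget, pheromone):
          # reqs: id -> (cost, value, prerequisite ids). One ant's walk.
          selected, spent = set(), 0.0
          while True:
              allowed = [r for r, (c, v, pre) in reqs.items()
                         if r not in selected and set(pre) <= selected
                         and spent + c <= budget]
              if not allowed:
                  return selected
              weights = [pheromone[r] * reqs[r][1] for r in allowed]
              r = random.choices(allowed, weights=weights)[0]
              selected.add(r)
              spent += reqs[r][0]

      random.seed(0)
      reqs = {"r1": (4, 10, []), "r2": (3, 6, ["r1"]), "r3": (5, 8, [])}
      print(construct_solution(reqs, budget=8, pheromone={"r1": 1, "r2": 1, "r3": 1}))

    After each iteration the pheromone on the requirements of good solutions would be reinforced, biasing later ants toward them.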
    					
    2016.02.27 Xinye Cai, Xin Cheng, Zhun Fan, Erik Goodman & Lisong Wang An Adaptive Memetic Framework for Multi-objective Combinatorial Optimization Problems: Studies on Software Next Release and Travelling Salesman Problems 2017 Soft Computing, Vol. 21(9), pp. 2215-2236, May   Article
    Abstract: In this paper, we propose two multi-objective memetic algorithms (MOMAs) using two different adaptive mechanisms to address combinatorial optimization problems (COPs). One mechanism adaptively selects solutions for local search based on the solutions' convergence toward the Pareto front. The second adaptive mechanism uses the convergence and diversity information of an external set (dominance archive), to guide the selection of promising solutions for local search. In addition, simulated annealing is integrated in this framework as the local refinement process. The multi-objective memetic algorithms with the two adaptive schemes (called uMOMA-SA and aMOMA-SA) are tested on two COPs and compared with some well-known multi-objective evolutionary algorithms. Experimental results suggest that uMOMA-SA and aMOMA-SA outperform the other algorithms with which they are compared. The effects of the two adaptive mechanisms are also investigated in the paper. In addition, uMOMA-SA and aMOMA-SA are compared with three single-objective and three multi-objective optimization approaches on software next release problems using real instances mined from bug repositories (Xuan et al. IEEE Trans Softw Eng 38(5):1195–1212, 2012). The results show that these multi-objective optimization approaches perform better than these single-objective ones, in general, and that aMOMA-SA has the best performance among all the approaches compared.
    BibTeX:
    @article{CaiCFGW17,
      author = {Xinye Cai and Xin Cheng and Zhun Fan and Erik Goodman and Lisong Wang},
      title = {An Adaptive Memetic Framework for Multi-objective Combinatorial Optimization Problems: Studies on Software Next Release and Travelling Salesman Problems},
      journal = {Soft Computing},
      year = {2017},
      volume = {21},
      number = {9},
      pages = {2215-2236},
      month = {May},
      doi = {http://dx.doi.org/10.1007/s00500-015-1921-0}
    }
    					
    2009.07.26 José Carlos Bregieiro Ribeiro, Mário Alberto Zenha-Rela & Francisco Fernández de Vega An Adaptive Strategy for Improving the Performance of Genetic Programming-based Approaches to Evolutionary Testing 2009 Proceedings of the 11th Annual Conference on Genetic and Evolutionary Computation (GECCO '09), pp. 1949-1950, Montréal Canada, 8-12 July   Inproceedings Testing and Debugging
    Abstract: This paper proposes an adaptive strategy for enhancing Genetic Programming-based approaches to automatic test case generation. The main contribution of this study is that of proposing an adaptive Evolutionary Testing methodology for promoting the introduction of relevant instructions into the generated test cases by means of mutation; the instructions from which the algorithm can choose are ranked, with their rankings being updated every generation in accordance to the feedback obtained from the individuals evaluated in the preceding generation. The experimental studies developed show that the adaptive strategy proposed improves the algorithm's efficiency considerably, while introducing a negligible computational overhead.
    BibTeX:
    @inproceedings{RibeiroZV09b,
      author = {José Carlos Bregieiro Ribeiro and Mário Alberto Zenha-Rela and Francisco Fernández de Vega},
      title = {An Adaptive Strategy for Improving the Performance of Genetic Programming-based Approaches to Evolutionary Testing},
      booktitle = {Proceedings of the 11th Annual Conference on Genetic and Evolutionary Computation (GECCO '09)},
      publisher = {ACM},
      year = {2009},
      pages = {1949-1950},
      address = {Montréal, Canada},
      month = {8-12 July},
      doi = {http://dx.doi.org/10.1145/1569901.1570253}
    }
    					
    2012.03.07 Ruchika Malhotra & Mohit Garg An Adequacy Based Test Data Generation Technique using Genetic Algorithms 2011 Journal of Information Processing Systems, Vol. 7(2), pp. 363-384   Article Testing and Debugging
    Abstract: As the complexity of software is increasing, generating effective test data has become a necessity. This necessity has increased the demand for techniques that can generate test data effectively. This paper proposes a test data generation technique based on adequacy based testing criteria. Adequacy based testing criteria use the concept of mutation analysis to check the adequacy of test data. In general, mutation analysis is applied after the test data is generated. But, in this work, we propose a technique that applies mutation analysis at the time of test data generation, rather than applying it after the test data has been generated. This saves a significant amount of time (required to generate adequate test cases) as compared to the latter case, as the total time in the latter case is the sum of the time to generate test data and the time to apply mutation analysis to the generated test data. We also use genetic algorithms that explore the complete domain of the program to provide a near-global optimum solution. In this paper, we first define and explain the proposed technique. Then we validate the proposed technique using ten real time programs. The proposed technique is compared with the path testing technique (which uses reliability based testing criteria) for these ten programs. The results show that the adequacy based proposed technique is better than the reliability based path testing technique and that there is a significant reduction in the number of generated test cases and in the time taken to generate them.
    BibTeX:
    @article{MalhotraG11,
      author = {Ruchika Malhotra and Mohit Garg},
      title = {An Adequacy Based Test Data Generation Technique using Genetic Algorithms},
      journal = {Journal of Information Processing Systems},
      year = {2011},
      volume = {7},
      number = {2},
      pages = {363-384},
      doi = {http://dx.doi.org/10.3745/JIPS.2011.7.2.363}
    }
    					
    2016.04.29 Aldeida Aleti, I. Moser & Lars Grunske Analysing the Fitness Landscape of Search-based Software Testing Problems 2017 Automated Software Engineering, Vol. 24(3), pp. 603-621, September   Article
    Abstract: Search-based software testing automatically derives test inputs for a software system with the goal of improving various criteria, such as branch coverage. In many cases, evolutionary algorithms are implemented to find near-optimal test suites for software systems. The result of the search is usually received without any indication of how successful the search has been. Fitness landscape characterisation can help understand the search process and its probability of success. In this study, we recorded the information content, negative slope coefficient and the number of improvements during the progress of a genetic algorithm within the EvoSuite framework. Correlating the metrics with the branch and method coverages and the fitness function values reveals that the problem formulation used in EvoSuite could be improved by revising the objective function. It also demonstrates that given the current formulation, the use of crossover has no benefits for the search as the most problematic landscape features are not the number of local optima but the presence of many plateaus.
    BibTeX:
    @article{AletiMG17,
      author = {Aldeida Aleti and I. Moser and Lars Grunske},
      title = {Analysing the Fitness Landscape of Search-based Software Testing Problems},
      journal = {Automated Software Engineering},
      year = {2017},
      volume = {24},
      number = {3},
      pages = {603-621},
      month = {September},
      doi = {http://dx.doi.org/10.1007/s10515-016-0197-7}
    }
    					
    2008.08.27 Tao Jiang, Mark Harman & Youssef Hassoun Analysis of Procedure Splitability 2008 Proceedings of the 15th Working Conference on Reverse Engineering (WCRE '08), pp. 247-256, Antwerp Belgium, 15-18 October   Inproceedings Coding Tools and Techniques
    Abstract: As software evolves there is a tendency for size to increase and structure to degrade, leading to problems for ongoing maintenance and reverse engineering. This paper introduces a greedy dependence-based procedure splitting algorithm that provides automated support for analysis and intervention where procedures show signs of poor structure and over-large size. The paper reports on the algorithms, implementation and empirical evaluation of procedure splitability. The study reveals a surprising prevalence of splitable procedures and a strong correlation between procedure size and splitability.
    BibTeX:
    @inproceedings{JiangHH08,
      author = {Tao Jiang and Mark Harman and Youssef Hassoun},
      title = {Analysis of Procedure Splitability},
      booktitle = {Proceedings of the 15th Working Conference on Reverse Engineering (WCRE '08)},
      publisher = {IEEE},
      year = {2008},
      pages = {247-256},
      address = {Antwerp, Belgium},
      month = {15-18 October},
      doi = {http://dx.doi.org/10.1109/WCRE.2008.31}
    }
    					
    2009.03.31 Gabriel Jarillo, Giancarlo Succi, Witold Pedrycz & Marek Reformat Analysis of Software Engineering Data using Computational Intelligence Techniques 2001 Proceedings of the 7th International Conference on Object Oriented Information Systems (OOIS '01), pp. 133-142, Calgary Canada, 27-29 August   Inproceedings Management
    Abstract: The accurate estimation of software development effort has major implications for the management of software development in the industry. Underestimates lead to time pressures that may compromise full functional development and thorough testing of the software product. On the other hand, overestimates can result in over-allocation of development resources and personnel [7]. Many models for effort estimation have been developed during the past years: some use parametric methods with some degree of success; other methods belonging to the computational intelligence family, such as Neural Networks (NN), have also been studied in this field, showing more accurate estimations; and, finally, Genetic Programming (GP) techniques are considered promising tools for effort prediction.
    BibTeX:
    @inproceedings{JarilloSPR01,
      author = {Gabriel Jarillo and Giancarlo Succi and Witold Pedrycz and Marek Reformat},
      title = {Analysis of Software Engineering Data using Computational Intelligence Techniques},
      booktitle = {Proceedings of the 7th International Conference on Object Oriented Information Systems (OOIS '01)},
      publisher = {Springer},
      year = {2001},
      pages = {133-142},
      address = {Calgary, Canada},
      month = {27-29 August},
      url = {http://www.inf.unibz.it/~gsucci/publications/images/analysisofsoftwareengineeringdatausingcomputationalsoftwaretechniques.pdf}
    }
    					
    2009.02.16 Jun-Wei Lin & Chin-Yu Huang Analysis of Test Suite Reduction with Enhanced Tie-Breaking Techniques 2009 Information and Software Technology, Vol. 51(4), pp. 679-690, April   Article Testing and Debugging
    Abstract: Test suite minimization techniques try to remove redundant test cases of a test suite. However, reducing the size of a test suite might reduce its ability to reveal faults. In this paper, we present a novel approach for test suite reduction that uses an additional testing criterion to break the ties in the minimization process. We integrated the proposed approach with two existing algorithms and conducted experiments for evaluation. The experiment results show that our approach can improve the fault detection effectiveness of reduced suites with a negligible increase in the size of the suites. Besides, under specific conditions, the proposed approach can also accelerate the process of minimization.
    BibTeX:
    @article{LinH09,
      author = {Jun-Wei Lin and Chin-Yu Huang},
      title = {Analysis of Test Suite Reduction with Enhanced Tie-Breaking Techniques},
      journal = {Information and Software Technology},
      year = {2009},
      volume = {51},
      number = {4},
      pages = {679-690},
      month = {April},
      doi = {http://dx.doi.org/10.1016/j.infsof.2008.11.004}
    }
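    The tie-breaking idea maps naturally onto the classic greedy reduction: when several tests cover equally many still-unsatisfied requirements, a secondary criterion decides. A minimal illustration with hypothetical suites, using the size of each test's secondary coverage as the tie-breaker:

      def reduce_suite(primary, secondary):
          # Greedy set cover over `primary` (test -> requirements it satisfies),
          # breaking ties by the size of each test's `secondary` coverage.
          uncovered = set().union(*primary.values())
          kept = []
          while uncovered:
              test = max(primary, key=lambda t: (len(primary[t] & uncovered),
                                                 len(secondary[t])))
              kept.append(test)
              uncovered -= primary[test]
          return kept

      primary = {"t1": {1, 2}, "t2": {2, 3}, "t3": {1, 2}}
      secondary = {"t1": {10}, "t2": {10, 11}, "t3": {10, 12, 13}}
      print(reduce_suite(primary, secondary))   # ['t3', 't2']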
    					
    2009.12.01 Yue Jia & Mark Harman An Analysis and Survey of the Development of Mutation Testing 2009 (TR-09-06), September   Techreport General Aspects and Survey
    Abstract: Mutation Testing is a fault-based software testing technique that has been widely studied for over three decades. The literature on Mutation Testing has contributed a set of approaches, tools, developments and empirical results which have not been surveyed in detail until now. This paper provides a comprehensive analysis and survey of Mutation Testing. The paper also presents the results of several development trend analyses. These analyses provide evidence that Mutation Testing techniques and tools are reaching a state of maturity and applicability, while the topic of Mutation Testing itself is the subject of increasing interest.
    BibTeX:
    @techreport{JiaH09b,
      author = {Yue Jia and Mark Harman},
      title = {An Analysis and Survey of the Development of Mutation Testing},
      year = {2009},
      number = {TR-09-06},
      month = {September},
      url = {http://www.dcs.kcl.ac.uk/pg/jiayue/repository/TR-09-06.pdf}
    }
    					
    2010.06.04 Yue Jia & Mark Harman An Analysis and Survey of the Development of Mutation Testing 2011 IEEE Transactions on Software Engineering, Vol. 37(5), pp. 649-678, September-October   Article Testing and Debugging
    Abstract: Mutation Testing is a fault-based software testing technique that has been widely studied for over three decades. The literature on Mutation Testing has contributed a set of approaches, tools, developments and empirical results. This paper provides a comprehensive analysis and survey of Mutation Testing. The paper also presents the results of several development trend analyses. These analyses provide evidence that Mutation Testing techniques and tools are reaching a state of maturity and applicability, while the topic of Mutation Testing itself is the subject of increasing interest.
    BibTeX:
    @article{JiaH11,
      author = {Yue Jia and Mark Harman},
      title = {An Analysis and Survey of the Development of Mutation Testing},
      journal = {IEEE Transactions on Software Engineering},
      year = {2011},
      volume = {37},
      number = {5},
      pages = {649-678},
      month = {September-October},
      doi = {http://dx.doi.org/10.1109/TSE.2010.62}
    }
    					
    2013.07.26 Leandro L. Minku & Xin Yao An Analysis of Multi-objective Evolutionary Algorithms for Training Ensemble Models Based on Different Performance Measures in Software Effort Estimation 2013 Proceedings of the 9th International Conference on Predictive Models in Software Engineering (PROMISE '13), Baltimore Maryland USA, 9-9 October   Inproceedings
    Abstract: Background: Previous work showed that Multi-objective Evolutionary Algorithms (MOEAs) can be used for training ensembles of learning machines for Software Effort Estimation (SEE) by optimising different performance measures concurrently. Optimisation based on three measures (LSD, MMRE and PRED(25)) was analysed and led to promising results in terms of performance on these and other measures. Aims: (a) It is not known how well ensembles trained on other measures would behave for SEE, and whether training on certain measures would improve performance particularly on these measures. (b) It is also not known whether it is best to include all SEE models created by the MOEA into the ensemble, or solely the models with the best training performance in terms of each measure being optimised. Investigating (a) and (b) is the aim of this work. Method: MOEAs were used to train ensembles by optimising four different sets of performance measures, involving a total of nine different measures. The performance of all ensembles was then compared based on all these nine performance measures. Ensembles composed of different sets of models generated by the MOEAs were also compared. Results: (a) Ensembles trained on LSD, MMRE and PRED(25) obtained the best results in terms of most performance measures, being considered more successful than the others. Optimising certain performance measures did not necessarily lead to the best test performance on these particular measures probably due to overfitting. (b) There was no inherent advantage in using ensembles composed of all the SEE models generated by the MOEA in comparison to using solely the best SEE model according to each measure separately. Conclusions: Care must be taken to prevent overfitting on the performance measures being optimised. Our results suggest that concurrently optimising LSD, MMRE and PRED(25) promoted more ensemble diversity than other combinations of measures, and hence performed best. Low diversity is more likely to lead to overfitting.
    BibTeX:
    @inproceedings{MinkuY13,
      author = {Leandro L. Minku and Xin Yao},
      title = {An Analysis of Multi-objective Evolutionary Algorithms for Training Ensemble Models Based on Different Performance Measures in Software Effort Estimation},
      booktitle = {Proceedings of the 9th International Conference on Predictive Models in Software Engineering (PROMISE '13)},
      publisher = {ACM},
      year = {2013},
      address = {Baltimore, Maryland, USA},
      month = {9-9 October},
      doi = {http://dx.doi.org/10.1145/2499393.2499396}
    }
    					
    2016.02.12 Zichao Qi, Fan Long, Sara Achour & Martin Rinard An Analysis of Patch Plausibility and Correctness for Generate-and-Validate Patch Generation Systems 2015 Proceedings of the 2015 International Symposium on Software Testing and Analysis (ISSTA '15), pp. 24-36, Baltimore Maryland USA, 14-17 July   Inproceedings
    Abstract: We analyze reported patches for three existing generate-and-validate patch generation systems (GenProg, RSRepair, and AE). The basic principle behind generate-and-validate systems is to accept only plausible patches that produce correct outputs for all inputs in the validation test suite. Because of errors in the patch evaluation infrastructure, the majority of the reported patches are not plausible — they do not produce correct outputs even for the inputs in the validation test suite. The overwhelming majority of the reported patches are not correct and are equivalent to a single modification that simply deletes functionality. Observed negative effects include the introduction of security vulnerabilities and the elimination of desirable functionality. We also present Kali, a generate-and-validate patch generation system that only deletes functionality. Working with a simpler and more effectively focused search space, Kali generates at least as many correct patches as prior GenProg, RSRepair, and AE systems. Kali also generates at least as many patches that produce correct outputs for the inputs in the validation test suite as the three prior systems. We also discuss the patches produced by ClearView, a generate-and-validate binary hot patching system that leverages learned invariants to produce patches that enable systems to survive otherwise fatal defects and security attacks. Our analysis indicates that ClearView successfully patches 9 of the 10 security vulnerabilities used to evaluate the system. At least 4 of these patches are correct.
    BibTeX:
    @inproceedings{QiLAR15,
      author = {Zichao Qi and Fan Long and Sara Achour and Martin Rinard},
      title = {An Analysis of Patch Plausibility and Correctness for Generate-and-Validate Patch Generation Systems},
      booktitle = {Proceedings of the 2015 International Symposium on Software Testing and Analysis (ISSTA '15)},
      publisher = {ACM},
      year = {2015},
      pages = {24-36},
      address = {Baltimore, Maryland, USA},
      month = {14-17 July},
      doi = {http://dx.doi.org/10.1145/2771783.2771791}
    }
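    Kali's deletion-only search space is simple enough to sketch end-to-end (hypothetical program representation: a list of statements, with the validation suite a list of predicates over the program's output):

      def generate_and_validate(stmts, run, tests):
          # Try every single-statement deletion; keep patches that pass the
          # whole validation suite - i.e. the plausible patches.
          plausible = []
          for i in range(len(stmts)):
              patched = stmts[:i] + stmts[i + 1:]
              if all(test(run(patched)) for test in tests):
                  plausible.append(i)
          return plausible

      # Toy "program": sums its statement values; statement 1 is the bug.
      stmts = [3, 100, 4]
      run = lambda prog: sum(prog)
      tests = [lambda out: out == 7]
      print(generate_and_validate(stmts, run, tests))   # deletes index 1

    The paper's point is precisely that plausibility against a weak suite is cheap to achieve this way, and says little about correctness.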
    					
    2012.07.20 Márcio de Oliveira Barros An Analysis of the Effects of Composite Objectives in Multiobjective Software Module Clustering 2012 Proceedings of the 14th International Conference on Genetic and Evolutionary Computation Conference (GECCO '12), pp. 1205-1212, Philadelphia USA, 7-11 July   Inproceedings Design Tools and Techniques
    Abstract: The application of multiobjective optimization to address Software Engineering problems is a growing trend. Multiobjective algorithms provide a balance between the ability of the computer to search a large solution space for valuable solutions and the capacity of the human decision-maker to select an alternative when two or more incomparable objectives are presented. However, when more than a single objective is available, the set of objectives to be considered by the search becomes part of the decision. In this paper, we address the efficiency and effectiveness of using two composite objectives while searching for solutions to the software clustering problem. We designed an experimental study which shows that a multiobjective genetic algorithm can find a set of solutions with increased quality, using less processing time, when these composite objectives are suppressed from the formulation of the software clustering problem.
    BibTeX:
    @inproceedings{Barros12,
      author = {Márcio de Oliveira Barros},
      title = {An Analysis of the Effects of Composite Objectives in Multiobjective Software Module Clustering},
      booktitle = {Proceedings of the 14th International Conference on Genetic and Evolutionary Computation Conference (GECCO '12)},
      publisher = {ACM},
      year = {2012},
      pages = {1205-1212},
      address = {Philadelphia, USA},
      month = {7-11 July},
      doi = {http://dx.doi.org/10.1145/2330163.2330330}
    }
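    The objectives discussed here are built from intra- and inter-cluster dependency counts; a common form is the Bunch-style modularization quality, assumed below as an example rather than taken from the paper:

      def mq(clusters, edges):
          # Sum over clusters of intra / (intra + inter / 2), where intra counts
          # dependencies inside the cluster and inter those crossing its border.
          total = 0.0
          for c in clusters:
              intra = sum(1 for u, v in edges if u in c and v in c)
              inter = sum(1 for u, v in edges if (u in c) != (v in c))
              if intra:
                  total += intra / (intra + inter / 2.0)
          return total

      clusters = [{"a", "b"}, {"c", "d"}]            # hypothetical partition
      edges = [("a", "b"), ("b", "c"), ("c", "d")]
      print(round(mq(clusters, edges), 3))           # 1.333

    Composite objectives of the kind the paper studies combine several such counts into one number; the study above finds the search does better when such composites are suppressed from the formulation.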
    					
    2011.07.20 Camila Loiola Brito Maia, Thiago do Nascimento Ferreira, Fabrício Gomes de Freitas & Jerffeson Teixeira de Souza An Ant Colony Based Algorithm for Test Case Prioritization with Precedence 2011 Proceedings of the 3rd International Symposium on Search Based Software Engineering (SSBSE '11), Vol. 6956, Szeged Hungary, 10-12 September   Inproceedings Testing and Debugging
    Abstract: Test case prioritization is a difficult problem in Software Engineering, since several factors may be considered in order to find the best order for test cases. Search-based techniques have been applied to find solutions for the test case prioritization problem. Some of these works apply Ant Colony based algorithms, but the precedence of test cases was not considered. We propose an Ant Colony Optimization based algorithm to prioritize test cases with precedence. Each ant builds a solution, and when it is necessary to choose a new vertex (test case), only allowed test cases are seen by the ant, implementing the precedence constraint of the problem.
    BibTeX:
    @inproceedings{MaiaFFS11,
      author = {Camila Loiola Brito Maia and Thiago do Nascimento Ferreira and Fabrício Gomes de Freitas and Jerffeson Teixeira de Souza},
      title = {An Ant Colony Based Algorithm for Test Case Prioritization with Precedence},
      booktitle = {Proceedings of the 3rd International Symposium on Search Based Software Engineering (SSBSE '11)},
      publisher = {Springer},
      year = {2011},
      volume = {6956},
      address = {Szeged, Hungary},
      month = {10-12 September},
      url = {http://www.ssbse.org/2011/fastabstracts/maia.pdf}
    }
    					
    2011.03.07 Danielle Azar & Joseph Vybihal An Ant Colony Optimization Algorithm to Improve Software Quality Prediction Models: Case of Class Stability 2011 Information and Software Technology, Vol. 53(4), pp. 388-393, April   Article Management
    Abstract: Context: Assessing software quality at the early stages of the design and development process is very difficult since most of the software quality characteristics are not directly measurable. Nonetheless, they can be derived from other measurable attributes. For this purpose, software quality prediction models have been extensively used. However, building accurate prediction models is hard due to the lack of data in the domain of software engineering. As a result, the prediction models built on one data set show a significant deterioration of their accuracy when they are used to classify new, unseen data. Objective: The objective of this paper is to present an approach that optimizes the accuracy of software quality predictive models when used to classify new data. Method: This paper presents an adaptive approach that takes already built predictive models and adapts them (one at a time) to new data. We use an ant colony optimization algorithm in the adaptation process. The approach is validated on stability of classes in object-oriented software systems and can easily be used for any other software quality characteristic. It can also be easily extended to work with software quality predictive problems involving more than two classification labels. Results: Results show that our approach out-performs the machine learning algorithm C4.5 as well as random guessing. It also preserves the expressiveness of the models which provide not only the classification label but also guidelines to attain it. Conclusion: Our approach is an adaptive one that can be seen as taking predictive models that have already been built from common domain data and adapting them to context-specific data. This is suitable for the domain of software quality since the data is very scarce and hence predictive models built from one data set is hard to generalize and reuse on new data.
    BibTeX:
    @article{AzarV11,
      author = {Danielle Azar and Joseph Vybihal},
      title = {An Ant Colony Optimization Algorithm to Improve Software Quality Prediction Models: Case of Class Stability},
      journal = {Information and Software Technology},
      year = {2011},
      volume = {53},
      number = {4},
      pages = {388-393},
      month = {April},
      doi = {http://dx.doi.org/10.1016/j.infsof.2010.11.013}
    }
    					
    2007.12.02 Huaizhong Li & Chiou Peng Lam An Ant Colony Optimization Approach to Test Sequence Generation for Statebased Software Testing 2005 Proceedings of the 5th International Conference on Quality Software (QSIC '05), pp. 255-264, Melbourne Australia, 19-20 September   Inproceedings Testing and Debugging
    Abstract: Properly generated test suites may not only locate the defects in software systems, but also help in reducing the high cost associated with software testing. It is often desired that test sequences in a test suite can be automatically generated to achieve required test coverage. However, automatic test sequence generation remains a major problem in software testing. This paper proposes an Ant Colony Optimization approach to automatic test sequence generation for state-based software testing. The proposed approach can directly use UML artifacts to automatically generate test sequences to achieve required test coverage.
    BibTeX:
    @inproceedings{LiL05b,
      author = {Huaizhong Li and Chiou Peng Lam},
      title = {An Ant Colony Optimization Approach to Test Sequence Generation for Statebased Software Testing},
      booktitle = {Proceedings of the 5th International Conference on Quality Software (QSIC '05)},
      publisher = {IEEE},
      year = {2005},
      pages = {255-264},
      address = {Melbourne, Australia},
      month = {19-20 September},
      doi = {http://dx.doi.org/10.1109/QSIC.2005.12}
    }
    					
    2011.07.20 Jerffeson Teixeira de Souza, Camila Loiola Brito Maia, Thiago Ferreira, Rafael Augusto Ferreira do Carmo & Marcia Brasil An Ant Colony Optimization Approach to the Software Release Planning with Dependent Requirements 2011 Proceedings of the 3rd International Symposium on Search Based Software Engineering (SSBSE '11), Vol. 6956, pp. 142-157, Szeged Hungary, 10-12 September   Inproceedings Requirements/Specifications
    Abstract: Ant Colony Optimization (ACO) has been successfully employed to tackle a variety of hard combinatorial optimization problems, including the traveling salesman problem, vehicle routing, sequential ordering and timetabling. ACO, as a swarm intelligence framework, mimics the indirect communication strategy employed by real ants mediated by pheromone trails. Among the several algorithms following the ACO general framework, the Ant Colony System (ACS) has obtained convincing results in a range of problems. In Software Engineering, the effective application of ACO has been very narrow, being restricted to a few sparse problems. This paper expands this applicability, by adapting the ACS algorithm to solve the well-known Software Release Planning problem in the presence of dependent requirements. The evaluation of the proposed approach is performed over 72 synthetic datasets and considered, besides ACO, the Genetic Algorithm and Simulated Annealing. Results are consistent to show the ability of the proposed ACO algorithm to generate more accurate solutions to the Software Release Planning problem when compared to Genetic Algorithm and Simulated Annealing.
    BibTeX:
    @inproceedings{SouzaMFCB11,
      author = {Jerffeson Teixeira de Souza and Camila Loiola Brito Maia and Thiago Ferreira and Rafael Augusto Ferreira do Carmo and Marcia Brasil},
      title = {An Ant Colony Optimization Approach to the Software Release Planning with Dependent Requirements},
      booktitle = {Proceedings of the 3rd International Symposium on Search Based Software Engineering (SSBSE '11)},
      publisher = {Springer},
      year = {2011},
      volume = {6956},
      pages = {142-157},
      address = {Szeged, Hungary},
      month = {10-12 September},
      doi = {http://dx.doi.org/10.1007/978-3-642-23716-4_15}
    }
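    Illustrative sketch: the release planning formulation tackled here selects requirements subject to a budget and precedence (dependency) constraints. The minimal ant-style construction step below is not the authors' ACS implementation; the data structures (value, cost, deps, pheromone) and the value-per-cost desirability heuristic are assumptions for illustration.

    import random

    def construct_release(value, cost, deps, budget, pheromone, beta=2.0):
        # One ant builds a release plan: repeatedly pick an eligible
        # requirement (all dependencies already selected, cost still fits)
        # with probability proportional to pheromone times desirability.
        selected, spent = set(), 0.0
        while True:
            eligible = [r for r in value
                        if r not in selected
                        and deps.get(r, set()) <= selected
                        and spent + cost[r] <= budget]
            if not eligible:
                return selected
            weights = [pheromone[r] * (value[r] / cost[r]) ** beta
                       for r in eligible]
            r = random.choices(eligible, weights=weights)[0]
            selected.add(r)
            spent += cost[r]

    A full ACS would add the local and global pheromone updates and an explicit exploitation/exploration rule when choosing the next requirement.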
    					
    2009.02.16 Chikuang Chao, Jeffery Komada, Qing Liu, Mohit Muteja, Yahya Alsalqan & Carl Chang An Application of Genetic Algorithms to Software Project Management 1993 Proceedings of the 9th International Conference on Advanced Science and Technology, pp. 247-252, Chicago Illinois USA, March   Inproceedings Management
    BibTeX:
    @inproceedings{ChaoKLMAC93,
      author = {Chikuang Chao and Jeffery Komada and Qing Liu and Mohit Muteja and Yahya Alsalqan and Carl Chang},
      title = {An Application of Genetic Algorithms to Software Project Management},
      booktitle = {Proceedings of the 9th International Conference on Advanced Science and Technology},
      year = {1993},
      pages = {247-252},
      address = {Chicago, Illinois, USA},
      month = {March},
      url = {http://www.cs.iastate.edu/~chang/publication/conference.htm}
    }
    					
    2011.07.20 Bruno Ribeiro & Glêdson Elias An Approach Based on Genetic Algorithm for Allocation of Distributed Teams (in Portuguese) 2011 Proceedings of the 2nd Brazilian Workshop on Search Based Software Engineering (WESB '11), Sao Paulo Brazil, 26-26 September   Inproceedings Management
    BibTeX:
    @inproceedings{RibeiroE11,
      author = {Bruno Ribeiro and Glêdson Elias},
      title = {An Approach Based on Genetic Algorithm for Allocation of Distributed Teams (in Portuguese)},
      booktitle = {Proceedings of the 2nd Brazilian Workshop on Search Based Software Engineering (WESB '11)},
      year = {2011},
      address = {Sao Paulo, Brazil},
      month = {26-26 September}
    }
    					
    2007.12.02 Gerardo Canfora, Massimiliano Di Penta, Raffaele Esposito & Maria Luisa Villani An Approach for QoS-aware Service Composition based on Genetic Algorithms 2005 Proceedings of the 2005 Conference on Genetic and Evolutionary Computation (GECCO '05), pp. 1069-1075, Washington D.C. USA, 25-29 June   Inproceedings Design Tools and Techniques
    Abstract: Web services are rapidly changing the landscape of software engineering. One of the most interesting challenges introduced by web services is represented by Quality of Service (QoS)-aware composition and late-binding. This makes it possible to bind, at run-time, a service-oriented system to a set of services that, among those providing the required features, meet some non-functional constraints and optimize criteria such as the overall cost or response time. In other words, QoS-aware composition can be modeled as an optimization problem. We propose to adopt Genetic Algorithms to this aim. Genetic Algorithms, while being slower than integer programming, represent a more scalable choice, and are more suitable for handling generic QoS attributes. The paper describes our approach and its applicability, advantages and weaknesses, discussing results of some numerical simulations.
    BibTeX:
    @inproceedings{CanforaDEV05,
      author = {Gerardo Canfora and Massimiliano Di Penta and Raffaele Esposito and Maria Luisa Villani},
      title = {An Approach for QoS-aware Service Composition based on Genetic Algorithms},
      booktitle = {Proceedings of the 2005 Conference on Genetic and Evolutionary Computation (GECCO '05)},
      publisher = {ACM},
      year = {2005},
      pages = {1069-1075},
      address = {Washington, D.C., USA},
      month = {25-29 June},
      doi = {http://dx.doi.org/10.1145/1068009.1068189}
    }
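    Illustrative sketch: in the GA formulation described above, a chromosome binds each abstract task to one concrete service, and fitness aggregates QoS attributes with penalties for violated constraints. The additive aggregation and all names below are simplifying assumptions, not the paper's exact model (real workflows aggregate QoS per control-flow construct).

    def qos_fitness(binding, qos, w_cost=0.5, w_time=0.5,
                    max_cost=None, max_time=None, penalty=1000.0):
        # binding[i] is the concrete service chosen for abstract task i;
        # qos[i][s] holds that service's attributes. Lower is better.
        cost = sum(qos[i][s]['cost'] for i, s in enumerate(binding))
        time = sum(qos[i][s]['time'] for i, s in enumerate(binding))
        f = w_cost * cost + w_time * time
        if max_cost is not None and cost > max_cost:
            f += penalty * (cost - max_cost)   # constraint violation
        if max_time is not None and time > max_time:
            f += penalty * (time - max_time)
        return f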
    					
    2014.09.02 Daniele Romano, Massimiliano Di Penta & Giuliano Antoniol An Approach for Search Based Testing of Null Pointer Exceptions 2011 Proceedings of IEEE 4th International Conference on Software Testing, Verification and Validation (ICST '11), pp. 160-169, Berlin Germany, 21-25 March   Inproceedings Testing and Debugging
    Abstract: Uncaught exceptions, and in particular null pointer exceptions (NPEs), constitute a major cause of crashes for software systems. Although tools for the static identification of potential NPEs exist, there is a need for approaches able to identify system execution scenarios causing NPEs. This paper proposes a search-based test data generation approach aimed at automatically identifying NPEs. The approach consists of two steps: (i) an inter-procedural data and control flow analysis - relying on existing technology - that identifies paths between input parameters and potential NPEs, and (ii) a genetic algorithm that evolves a population of test data with the aim of covering such paths. The algorithm is able to deal with complex inputs containing arbitrary data structures. The approach has been evaluated on test class clusters from six Java open source systems, where NPE bugs have been artificially introduced. Results show that the approach is, indeed, able to identify the NPE bugs, and it outperforms random testing. Also, we show how the approach is able to identify real NPE bugs, some of which are posted in the bug-tracking system of the Apache libraries.
    BibTeX:
    @inproceedings{RomanoPA11,
      author = {Daniele Romano and Massimiliano Di Penta and Giuliano Antoniol},
      title = {An Approach for Search Based Testing of Null Pointer Exceptions},
      booktitle = {Proceedings of IEEE 4th International Conference on Software Testing, Verification and Validation (ICST '11)},
      publisher = {IEEE},
      year = {2011},
      pages = {160-169},
      address = {Berlin, Germany},
      month = {21-25 March},
      doi = {http://dx.doi.org/10.1109/ICST.2011.49}
    }
    					
    2015.02.23 Aurora Ramírez, José Raúl Romero & Sebastián Ventura An Approach for the Evolutionary Discovery of Software Architectures 2015 Information Sciences, Vol. 305, pp. 234-255, June   Article
    Abstract: Software architectures constitute important analysis artefacts in software projects, as they reflect the main functional blocks of the software. They provide high-level analysis artefacts that are useful when architects need to analyse the structure of working systems. Normally, architects perform this analysis manually, guided by their prior experience. Even so, the task can be very tedious when the actual design is unclear due to continuous uncontrolled modifications. Since the recent appearance of search based software engineering, multiple tasks in the area of software engineering have been formulated as complex search and optimisation problems, where evolutionary computation has found a new area of application. This paper explores the design of an evolutionary algorithm (EA) for the discovery of the underlying architecture of software systems. Important efforts have been directed towards the creation of a generic and human-oriented process. Hence, the selection of a comprehensible encoding, a fitness function inspired by accurate software design metrics, and a genetic operator simulating architectural transformations all represent important characteristics of the proposed approach. Finally, a complete parameter study and experimentation have been performed using real software systems, looking for a generic evolutionary approach to support software engineers in their decision-making process.
    BibTeX:
    @article{RamirezRV15,
      author = {Aurora Ramírez and José Raúl Romero and Sebastián Ventura},
      title = {An Approach for the Evolutionary Discovery of Software Architectures},
      journal = {Information Sciences},
      year = {2015},
      volume = {305},
      pages = {234-255},
      month = {June},
      doi = {http://dx.doi.org/10.1016/j.ins.2015.01.017}
    }
    					
    2010.04.13 Praveen Ranjan Srivastava, Km Baby & G Raghurama An Approach of Optimal Path Generation using Ant Colony Optimization 2009 Proceedings of the IEEE Region 10 Conference (TENCON '09), pp. 1-6, Singapore, 23-26 January   Inproceedings Testing and Debugging
    Abstract: Software testing is one of the indispensable parts of the software development lifecycle, and structural testing is one of the most widely used testing paradigms. Structural testing relies on code path identification, which in turn leads to the identification of effective paths. The aim of this paper is to present a simple and novel algorithm, based on ant colony optimization (ACO), for optimal path identification using the basic properties and behaviour of ants. This novel approach uses a set of rules to find all the effective/optimal paths via the ACO principle. The method concentrates on generating a number of paths equal to the cyclomatic complexity. This algorithm guarantees full path coverage.
    BibTeX:
    @inproceedings{SrivastavaBR09,
      author = {Praveen Ranjan Srivastava and Km Baby and G Raghurama},
      title = {An Approach of Optimal Path Generation using Ant Colony Optimization},
      booktitle = {Proceedings of the IEEE Region 10 Conference (TENCON '09)},
      publisher = {IEEE},
      year = {2009},
      pages = {1-6},
      address = {Singapore},
      month = {23-26 January},
      doi = {http://dx.doi.org/10.1109/TENCON.2009.5396088}
    }
    					
    2015.12.08 Jian Hua Sun & Shu Juan Jiang An Approach to Automatic Generating Test Data for Multi-path Coverage by Genetic Algorithm 2010 Proceedings of the 6th International Conference on Natural Computation (ICNC '10), pp. 1533-1536, Yantai Shandong China, 10-12 August   Inproceedings Testing and Debugging
    Abstract: Software testing is an important step during software development. Improving the automation of software testing can increase the robustness of software and decrease the cost of development. The key to improving the automation of testing is improving automatic test data generation. This paper presents a new method to generate test data for multi-path coverage. The method uses a two-dimensional matrix to record which paths the test data have covered, and improves the fitness function to suit the new method. Finally, a case study is presented to evaluate the efficiency of the proposed method.
    BibTeX:
    @inproceedings{SunJ10,
      author = {Jian Hua Sun and Shu Juan Jiang},
      title = {An Approach to Automatic Generating Test Data for Multi-path Coverage by Genetic Algorithm},
      booktitle = {Proceedings of the 6th International Conference on Natural Computation (ICNC '10)},
      publisher = {IEEE},
      year = {2010},
      pages = {1533-1536},
      address = {Yantai, Shandong, China},
      month = {10-12 August},
      doi = {http://dx.doi.org/10.1109/ICNC.2010.5583778}
    }
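    Illustrative sketch: the two-dimensional matrix mentioned in the abstract records which target paths each test datum has traversed, and a fitness can then reward closeness to the paths that remain uncovered. The names and the prefix-based closeness measure below are assumptions for illustration, not the paper's exact fitness function.

    def prefix_overlap(executed, target):
        # Number of leading branch decisions shared by the two paths.
        n = 0
        for a, b in zip(executed, target):
            if a != b:
                break
            n += 1
        return n

    def multipath_fitness(executed, target_paths, covered):
        # `covered` holds the matrix columns that already contain a 1
        # (paths hit by some earlier datum); only the remaining target
        # paths contribute, pulling the GA toward them.
        return sum(prefix_overlap(executed, t) / len(t)
                   for j, t in enumerate(target_paths) if j not in covered)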
    					
    2012.03.06 Sebastian Bauersfeld, Stefan Wappler & Joachim Wegener An Approach to Automatic Input Sequence Generation for GUI Testing using Ant Colony Optimization 2011 Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation (GECCO '11), pp. 1915-1922, Dublin Ireland, 12-16 July   Inproceedings Testing and Debugging
    Abstract: Testing applications with a graphical user interface (GUI) is an important, though challenging and time consuming task. The state of the art in the industry are still capture and replay tools, which greatly simplify the recording and execution of input sequences, but do not support the tester in finding fault-sensitive test cases. While search-based test case generation strategies, such as evolutionary testing, are well researched for various areas of testing, relatively little work has been done on applying these techniques to an entire GUI of an application. This paper presents an approach to finding input sequences for GUIs using ant colony optimization and a relatively new metric called maximum call stacks for use within the fitness function.
    BibTeX:
    @inproceedings{BauersfeldWW11b,
      author = {Sebastian Bauersfeld and Stefan Wappler and Joachim Wegener},
      title = {An Approach to Automatic Input Sequence Generation for GUI Testing using Ant Colony Optimization},
      booktitle = {Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation (GECCO '11)},
      publisher = {ACM},
      year = {2011},
      pages = {1915-1922},
      address = {Dublin, Ireland},
      month = {12-16 July},
      doi = {http://dx.doi.org/10.1145/2001858.2001999}
    }
    					
    2015.12.09 Yang Cao, Chunhua Hu & Luming Li An Approach to Generate Software Test Data for a Specific Path Automatically with Genetic Algorithm 2009 Proceedings of the 8th International Conference on Reliability, Maintainability and Safety (ICRMS '09), pp. 888-892, Chengdu China, 20-24 July   Inproceedings Testing and Debugging
    Abstract: We focus on software reliability, which grows with increased testing coverage; we expect to improve the quality of software testing by automating it. This paper presents an approach to generating test data for a specific single path, different from the predicate distance applied by most test data generators based on genetic algorithms. A similarity between the target path and the execution path, with overlapped sub-paths, is designed as the fitness value to evaluate the individuals of a population and drive the GA to search for appropriate solutions. Several experiments examine the effectiveness of the designed fitness function, evaluating its convergence ability and time consumption. Results show that the function performs well compared with two other typical fitness functions for specific paths.
    BibTeX:
    @inproceedings{CaoHL09b,
      author = {Yang Cao and Chunhua Hu and Luming Li},
      title = {An Approach to Generate Software Test Data for a Specific Path Automatically with Genetic Algorithm},
      booktitle = {Proceedings of the 8th International Conference on Reliability, Maintainability and Safety (ICRMS '09)},
      publisher = {IEEE},
      year = {2009},
      pages = {888-892},
      address = {Chengdu, China},
      month = {20-24 July},
      doi = {http://dx.doi.org/10.1109/ICRMS.2009.5269962}
    }
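    Illustrative sketch: the fitness described above scores a candidate input by how much of the target path its execution trace overlaps. A minimal version using the longest matching sub-path, normalised by the target length (the paper's overlap measure may differ in detail):

    from difflib import SequenceMatcher

    def path_similarity(executed, target):
        # Longest common sub-path between the execution trace and the
        # target path, as a fraction of the target; 1.0 means the target
        # path was executed exactly.
        m = SequenceMatcher(None, executed, target)
        match = m.find_longest_match(0, len(executed), 0, len(target))
        return match.size / len(target)

    A GA then maximises path_similarity(trace(x), target) over inputs x, reaching 1.0 exactly when the specific path is exercised.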
    					
    2008.09.16 Marjan Hericko, Ales Zivkovic & Ivan Rozman An Approach to Optimizing Software Development Team Size 2008 Information Processing Letters, Vol. 108(3), pp. 101-106, October   Article Management
    Abstract: This paper explores the relationship between software size, development effort and team size. We propose an approach aimed at finding the team size where the project effort has its minimum. The approach was applied to the ISBSG repository containing nearly 4000 software projects. Based on the results we provide our recommendation for the optimal or near-optimal team size in seven project groups defined by four project properties.
    BibTeX:
    @article{HerickoZR08,
      author = {Marjan Hericko and Ales Zivkovic and Ivan Rozman},
      title = {An Approach to Optimizing Software Development Team Size},
      journal = {Information Processing Letters},
      year = {2008},
      volume = {108},
      number = {3},
      pages = {101-106},
      month = {October},
      doi = {http://dx.doi.org/10.1016/j.ipl.2008.04.014}
    }
    					
    2013.06.28 Priti Bansal, Sangeeta Sabharwal, Shreya Malik, Vikhyat Arora & Vineet Kumar An Approach to Test Set Generation for Pair-wise Testing using Genetic Algorithms 2013 Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13), Vol. 8084, St. Petersburg Russia, 24-26 August   Inproceedings Testing and Debugging
    Abstract: Instead of performing exhaustive testing, which tests all possible combinations of input parameter values of a system, it is better to switch to a more efficient and effective testing technique, i.e., pairwise testing. In pairwise testing, test cases are designed to cover all possible combinations of each pair of input parameter values. It has been shown that the problem of finding the minimum set of test cases for pairwise testing is NP-complete. In this paper we apply a genetic algorithm, a metaheuristic search algorithm, to find an optimal solution to the pairwise test set generation problem. We present a method to generate the initial population using Hamming distance and an algorithm to find crossover points for combining individuals selected for reproduction. We describe the implementation of the proposed approach by extending an open source tool, PWiseGen, and evaluate its effectiveness. Empirical results indicate that our approach can generate test sets with a higher fitness level by covering more pairs of input parameter values.
    BibTeX:
    @inproceedings{BansalSMAK13,
      author = {Priti Bansal and Sangeeta Sabharwal and Shreya Malik and Vikhyat Arora and Vineet Kumar},
      title = {An Approach to Test Set Generation for Pair-wise Testing using Genetic Algorithms},
      booktitle = {Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13)},
      publisher = {Springer},
      year = {2013},
      volume = {8084},
      address = {St. Petersburg, Russia},
      month = {24-26 August},
      doi = {http://dx.doi.org/10.1007/978-3-642-39742-4_27}
    }
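    Illustrative sketch: a pairwise test-set GA needs (a) a fitness counting covered value pairs and (b), as in the paper, an initial population whose tests are mutually distant in Hamming terms. The sketch assumes its own data layout (a suite is a list of tests, a test a list of values) and is not the PWiseGen extension itself.

    import random
    from itertools import combinations

    def covered_pairs(suite):
        # All (param i, value a, param j, value b) pairs a suite covers.
        return {(i, a, j, b)
                for test in suite
                for (i, a), (j, b) in combinations(enumerate(test), 2)}

    def fitness(suite, required_pairs):
        # Fraction of required pairs covered; 1.0 is a full pairwise set.
        return len(covered_pairs(suite) & required_pairs) / len(required_pairs)

    def hamming(t1, t2):
        return sum(a != b for a, b in zip(t1, t2))

    def seed_suite(domains, size, tries=20):
        # Grow a suite by always adding the candidate test farthest (in
        # minimum Hamming distance) from the tests chosen so far.
        suite = [[random.choice(d) for d in domains]]
        while len(suite) < size:
            cands = [[random.choice(d) for d in domains] for _ in range(tries)]
            suite.append(max(cands,
                             key=lambda c: min(hamming(c, t) for t in suite)))
        return suite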
    					
    2011.07.20 Ítalo Mendonça Rocha, Gerardo Valdisio R. Viana & Jerffeson Teixeira de Souza An Approach to the Problem of Optimal Allocation of Teams and Task Scheduling for Obtaining Efficient Schedules (in Portuguese) 2011 Proceedings of the 2nd Brazilian Workshop on Search Based Software Engineering (WESB '11), Sao Paulo Brazil, 26-26 September   Inproceedings
    BibTeX:
    @inproceedings{RochaVS11,
      author = {Ítalo Mendonça Rocha and Gerardo Valdisio R. Viana and Jerffeson Teixeira de Souza},
      title = {An Approach to the Problem of Optimal Allocation of Teams and Task Scheduling for Obtaining Efficient Schedules (in Portuguese)},
      booktitle = {Proceedings of the 2nd Brazilian Workshop on Search Based Software Engineering (WESB '11)},
      year = {2011},
      address = {Sao Paulo, Brazil},
      month = {26-26 September}
    }
    					
    2012.02.28 Adil A. Aziz, Wan M.N. Wan Kadir & Adil Yousif An Architecture-based Approach to Support Alternative Design Decision in Component-Based System: A Case Study from Information System Domain 2012 International Journal of Advanced Science and Technology, Vol. 38, January   Article Design Tools and Techniques
    Abstract: Component-Based System (CBS) is a promising approach to building applications from deployed components. It provides efficiency, reliability, and maintainability. Interpreting the results of performance analysis and generating an alternative design to build a system from components (hardware/software) is a great challenge in the software performance domain. There are many options for composing the system, and the span of the design space hinders the selection of the appropriate design alternative. Currently, meta-heuristics such as Genetic Algorithm (GA) methods are used to solve the problem. In recent investigations, Particle Swarm Optimization (PSO), an alternative search technique, often outperforms GA when applied to various problems. In this paper, we describe a performance prediction approach based on PSO for component-based system development. The PSO technique can be used to effectively generate alternative design options in the spanned design space and facilitate design decisions during the development process. The proposed approach aids developers in effectively trading off between architectural design alternatives, because it covers and generates all possible options and provides the best solution. To the best of our knowledge, we are the first to use PSO in software performance prediction, particularly in the context of CBS. To this end, an outline and an example are presented in the paper to describe the approach.
    BibTeX:
    @article{AzizWY12,
      author = {Adil A. Aziz and Wan M. N. Wan Kadir and Adil Yousif},
      title = {An Architecture-based Approach to Support Alternative Design Decision in Component-Based System: A Case Study from Information System Domain},
      journal = {International Journal of Advanced Science and Technology},
      year = {2012},
      volume = {38},
      month = {January},
      url = {http://www.sersc.org/journals/IJAST/vol38/1.pdf}
    }
    					
    2016.02.17 Bo Yang, Yanmei Hu & Chin-Yu Huang An Architecture-Based Multi-Objective Optimization Approach to Testing Resource Allocation 2015 IEEE Transactions on Reliability, Vol. 64(1), pp. 497-515, March   Article
    Abstract: Software systems are widely employed in society. With a limited amount of testing resource available, testing resource allocation among components of a software system becomes an important issue. Most existing research on the testing resource allocation problem takes a single-objective optimization approach, which may not adequately address all the concerns in the decision-making process. In this paper, an architecture-based multi-objective optimization approach to testing resource allocation is proposed. An architecture-based model is used for system reliability assessment, which has the advantage of explicitly considering system architecture over the reliability block diagram (RBD)-based models, and has good flexibility to different architectural alternatives and component changes. A system cost modeling approach which is based on well-developed software cost models is proposed, which would be a more flexible, suitable approach to the cost modeling of software than the approach adopted by others which is based on an empirical cost model. A multi-objective optimization model is developed for the testing resource allocation problem, in which the three major concerns in the testing resource allocation problem, i.e., system reliability, system cost, and the total amount of testing resource consumed, are taken into consideration. A multi-objective evolutionary algorithm (MOEA), called multi-objective differential evolution based on weighted normalized sum (WNS-MODE), is developed. Experimental studies are presented, and the experiments show several results. 1) The proposed architecture-based multi-objective optimization approach can identify the testing resource allocation strategy which has a good trade-off among optimization objectives. 2) The developed WNS-MODE is better than the MOEA developed in recent research, called HaD-MOEA, in terms of both solution quality and computational efficiency. 3) The WNS-MODE seems quite robust from the sensitivity analysis results.
    BibTeX:
    @article{YangHH15,
      author = {Bo Yang and Yanmei Hu and Chin-Yu Huang},
      title = {An Architecture-Based Multi-Objective Optimization Approach to Testing Resource Allocation},
      journal = {IEEE Transactions on Reliability},
      year = {2015},
      volume = {64},
      number = {1},
      pages = {497-515},
      month = {March},
      doi = {http://dx.doi.org/10.1109/TR.2014.2372411}
    }
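    Illustrative sketch: the weighted normalised sum idea behind WNS-MODE can be shown in a few lines. Each objective is rescaled to [0, 1] using ideal and nadir points so that reliability, cost, and testing-resource consumption become commensurable before weighting. This is only the scalarisation step, under assumed minimisation objectives, not the published algorithm.

    def weighted_normalized_sum(objectives, weights, ideal, nadir):
        # objectives, ideal, nadir: one entry per objective (minimised);
        # weights sum to 1; nadir[k] > ideal[k]. Smaller is better.
        return sum(w * (f - lo) / (hi - lo)
                   for f, w, lo, hi in zip(objectives, weights, ideal, nadir))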
    					
    2010.08.25 Brian S. Mitchell, Martin Traverso & Spiros Mancoridis An Architecture for Distributing the Computation of Software Clustering Algorithms 2001 Proceedings of the Working IEEE/IFIP Conference on Software Architecture (WICSA '01), pp. 181-190, Amsterdam The Netherlands, 28-31 August   Inproceedings Design Tools and Techniques
    Abstract: Collections of general purpose networked workstations offer processing capability that often rivals or exceeds that of supercomputers. Since networked workstations are readily available in most organizations, they provide an economic and scalable alternative to parallel machines. In this paper we discuss how individual nodes in a computer network can be used as a collection of connected processing elements to improve the performance of a software engineering tool that we developed. Our tool, called Bunch, automatically clusters the structure of software systems into a hierarchy of subsystems. Clustering helps developers understand complex systems by providing them with high-level abstract (clustered) views of the software structure. The algorithms used by Bunch are computationally intensive and, hence, we would like to improve our tool's performance in order to cluster very large systems. This paper describes how we designed and implemented a distributed version of Bunch, which is useful for clustering large systems.
    BibTeX:
    @inproceedings{MitchellRM01,
      author = {Brian S. Mitchell and Martin Traverso and Spiros Mancoridis},
      title = {An Architecture for Distributing the Computation of Software Clustering Algorithms},
      booktitle = {Proceedings of the Working IEEE/IFIP Conference on Software Architecture (WICSA '01)},
      publisher = {IEEE},
      year = {2001},
      pages = {181-190},
      address = {Amsterdam, The Netherlands},
      month = {28-31 August},
      doi = {http://dx.doi.org/10.1109/WICSA.2001.948427}
    }
    					
    2011.07.20 Wesley Klewerton Guez Assunção, Thelma Elita Colanzi, Aurora Trinidad Ramirez Pozo & Silvia Regina Vergilio An Assessment of the Use of Different Evolutionary Algorithms for Multiobjective Integration Classes and Aspects (in Portuguese) 2011 Proceedings of the 2nd Brazilian Workshop on Search Based Software Engineering (WESB '11), Sao Paulo Brazil, 26-26 September   Inproceedings
    BibTeX:
    @inproceedings{AssuncaoCPV11b,
      author = {Wesley Klewerton Guez Assunção and Thelma Elita Colanzi and Aurora Trinidad Ramirez Pozo and Silvia Regina Vergilio},
      title = {An Assessment of the Use of Different Evolutionary Algorithms for Multiobjective Integration Classes and Aspects (in Portuguese)},
      booktitle = {Proceedings of the 2nd Brazilian Workshop on Search Based Software Engineering (WESB '11)},
      year = {2011},
      address = {Sao Paulo, Brazil},
      month = {26-26 September}
    }
    					
    2011.05.19 Fernando Netto, Márcio de Oliveira Barros & Adriana C.F. Alvim An Automated Approach for Scheduling Bug Fix Tasks 2010 Proceedings of the 2010 Brazilian Symposium on Software Engineering (SBES '10), pp. 80-89, 27 September - 1 October   Inproceedings Testing and Debugging
    Abstract: Even if a development team uses the best Software Engineering practices to produce high-quality software, end users may find defects that were not previously identified during the software development life-cycle. These defects must be fixed and new versions of the software incorporating the patches that solve them must be released. The project manager must schedule a set of error correction tasks with different priorities in order to minimize the time required to accomplish these tasks and guarantee that the more important issues are fixed. Given the large number of distinct schedules, an automated tool to find good schedules may be helpful to project managers. This work proposes a method which captures relevant information from bug repositories and submits it to a genetic algorithm to find near optimal bug correction task schedules. We have evaluated the approach using a subset of the Eclipse bug repository, and it suggested better schedules than the actual schedules followed by Eclipse developers.
    BibTeX:
    @inproceedings{NettoBA10,
      author = {Fernando Netto and Márcio de Oliveira Barros and Adriana C. F. Alvim},
      title = {An Automated Approach for Scheduling Bug Fix Tasks},
      booktitle = {Proceedings of the 2010 Brazilian Symposium on Software Engineering (SBES '10)},
      publisher = {IEEE},
      year = {2010},
      pages = {80-89},
      month = {27 September - 1 October},
      doi = {http://dx.doi.org/10.1109/SBES.2010.16}
    }
    					
    2009.07.26 Nigel Tracey, John A. Clark, Keith Mander & John McDermid An Automated Framework for Structural Test-Data Generation 1998 Proceedings of the 13th IEEE International Conference on Automated Software Engineering (ASE '98), pp. 285-288, Honolulu Hawaii USA, 13-16 October   Inproceedings Testing and Debugging
    Abstract: Structural testing criteria are mandated in many software development standards and guidelines. The process of generating test data to achieve 100% coverage of a given structural coverage metric is labour-intensive and expensive. This paper presents an approach to automate the generation of such test data. The test-data generation is based on the application of a dynamic optimisation-based search for the required test data. The same approach can be generalised to solve other test-data generation problems. Three such applications are discussed-boundary value analysis, assertion/run-time exception testing, and component re-use testing. A prototype tool-set has been developed to facilitate the automatic generation of test data for these structural testing problems. The results of preliminary experiments using this technique and the prototype tool-set are presented and show the efficiency and effectiveness of this approach.
    BibTeX:
    @inproceedings{TraceyCMM98,
      author = {Nigel Tracey and John A. Clark and Keith Mander and John McDermid},
      title = {An Automated Framework for Structural Test-Data Generation},
      booktitle = {Proceedings of the 13th IEEE International Conference on Automated Software Engineering (ASE '98)},
      publisher = {IEEE},
      year = {1998},
      pages = {285-288},
      address = {Honolulu, Hawaii, USA},
      month = {13-16 October},
      doi = {http://dx.doi.org/10.1109/ASE.1998.732680}
    }
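    Illustrative sketch: the dynamic optimisation-based search described above relies on an objective function measuring how close a predicate is to taking the desired outcome. The table of branch distances below follows the style Tracey's framework popularised; the failure constant K and the exact distance per operator vary across formulations.

    K = 1.0  # small constant added when the desired outcome is missed

    def branch_distance(op, a, b):
        # Zero when the desired relational outcome already holds;
        # otherwise, how far the operands are from making it hold.
        if op == '==': return abs(a - b)
        if op == '!=': return 0.0 if a != b else K
        if op == '<':  return 0.0 if a < b else (a - b) + K
        if op == '<=': return 0.0 if a <= b else (a - b) + K
        if op == '>':  return 0.0 if a > b else (b - a) + K
        if op == '>=': return 0.0 if a >= b else (b - a) + K
        raise ValueError('unknown operator: ' + op)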
    					
    2014.08.14 Frederik Schmidt, Stephen G. MacDonell & Andrew M. Connor An Automatic Architecture Reconstruction and Refactoring Framework 2012, pp. 95-111   Inbook Distribution and Maintenance
    Abstract: A variety of sources have noted that a substantial proportion of non-trivial software systems fail due to unhindered architectural erosion. This design deterioration leads to low maintainability, poor testability and reduced development speed. The erosion of software systems is often caused by inadequate understanding, documentation and maintenance of the desired implementation architecture. If the desired architecture is lost or the deterioration is advanced, the reconstruction of the desired architecture and the realignment of this desired architecture with the physical architecture both require substantial manual analysis and implementation effort. This paper describes the initial development of a framework for automatic software architecture reconstruction and source code migration. This framework offers the potential to reconstruct the conceptual architecture of software systems and to automatically migrate the physical architecture of a software system toward a conceptual architecture model. The approach is implemented within a proof-of-concept prototype which is able to analyze Java systems and reconstruct a conceptual architecture for them, as well as to refactor them towards that conceptual architecture.
    BibTeX:
    @inbook{SchmidtMacC12,
      author = {Frederik Schmidt and Stephen G. MacDonell and Andrew M. Connor},
      title = {An Automatic Architecture Reconstruction and Refactoring Framework},
      publisher = {Springer},
      year = {2012},
      pages = {95-111},
      doi = {http://dx.doi.org/10.1007/978-3-642-23202-2_7}
    }
    					
    2008.07.27 Andreas S. Andreou, Kypros A. Economides & Anastasis A. Sofokleous An Automatic Software Test-Data Generation Scheme Based on Data Flow Criteria and Genetic Algorithms 2007 Proceedings of the 7th IEEE International Conference on Computer and Information Technology (CIT '07), pp. 867-872, Fukushima Japan, 16-19 October   Inproceedings Testing and Debugging
    Abstract: Software test-data generation research primarily focuses on using control flow graphs for producing an optimum set of test cases. This paper proposes the integration of a data flow graph module with an existing testing framework and the utilisation of a specially designed genetic algorithm for automatically generating test cases based on data flow coverage criteria. The enhanced framework aims to explore promising aspects of software testing that have not yet received adequate research attention, by exploiting the data information of a program and provide a different testing coverage approach compared to existing control flow-oriented ones. The performance of the proposed approach is assessed and validated on a number of sample programs of different levels of size and complexity. The associated experimental results indicate successful performance in terms of testing coverage, which is significantly better when compared to those of existing dynamic data flow-oriented test data generation methods.
    BibTeX:
    @inproceedings{AndreouES07,
      author = {Andreas S. Andreou and Kypros A. Economides and Anastasis A. Sofokleous},
      title = {An Automatic Software Test-Data Generation Scheme Based on Data Flow Criteria and Genetic Algorithms},
      booktitle = {Proceedings of the 7th IEEE International Conference on Computer and Information Technology (CIT '07)},
      publisher = {IEEE},
      year = {2007},
      pages = {867-872},
      address = {Fukushima, Japan},
      month = {16-19 October},
      doi = {http://dx.doi.org/10.1109/CIT.2007.97}
    }
    					
    2015.02.05 Thiago Gomes Nepomuceno Da Silva, Leonardo Sampaio Rocha & José Everardo Bessa Maia An Effective Method for MOGAs Initialization to Solve the Multi-Objective Next Release Problem 2014 Proceedings of the 13th Mexican International Conference on Artificial Intelligence (MICAI '14), pp. 25-37, Tuxtla Gutiérrez Mexico, 16-22 November   Inproceedings
    Abstract: In this work we evaluate the usefulness of a Path Relinking based method for generating the initial population of Multi-Objective Genetic Algorithms and evaluate its performance on the Multi-Objective Next Release Problem. The performance of the method was evaluated for the algorithms MoCell and NSGA-II, and the experimental results have shown that it is consistently superior to the random initialization method and the extreme solutions method, considering the convergence speed and the quality of the Pareto front, which was measured using the Spread and Hypervolume indicators.
    BibTeX:
    @inproceedings{SilvaRM14,
      author = {Thiago Gomes Nepomuceno Da Silva and Leonardo Sampaio Rocha and José Everardo Bessa Maia},
      title = {An Effective Method for MOGAs Initialization to Solve the Multi-Objective Next Release Problem},
      booktitle = {Proceedings of the 13th Mexican International Conference on Artificial Intelligence (MICAI '14)},
      publisher = {Springer},
      year = {2014},
      pages = {25-37},
      address = {Tuxtla Gutiérrez, Mexico},
      month = {16-22 November},
      doi = {http://dx.doi.org/10.1007/978-3-319-13650-9_3}
    }
    					
    2009.07.24 Simon Poulding, Paul Emberson, Iain Bate & John A. Clark An Efficient Experimental Methodology for Configuring Search-Based Design Algorithms 2007 Proceedings of the 10th IEEE High Assurance Systems Engineering Symposium (HASE '07), pp. 53-62, Dallas Texas USA, 14-16 November   Inproceedings Design Tools and Techniques
    Abstract: Many problems in high assurance systems design are only tractable using computationally expensive search algorithms. For these algorithms to be useful, designers must be provided with guidance as to how to configure the algorithms appropriately. This paper presents an experimental methodology for deriving such guidance that remains efficient when the algorithm requires substantial computing resources or takes a long time to find solutions. The methodology is shown to be effective on a highly-constrained task allocation algorithm that provides design solutions for high integrity systems. Using the methodology, an algorithm configuration is derived in a matter of days that significantly outperforms one resulting from months of "trial-and-error" optimisation.
    BibTeX:
    @inproceedings{PouldingEBC07,
      author = {Simon Poulding and Paul Emberson and Iain Bate and John A. Clark},
      title = {An Efficient Experimental Methodology for Configuring Search-Based Design Algorithms},
      booktitle = {Proceedings of the 10th IEEE High Assurance Systems Engineering Symposium (HASE '07)},
      publisher = {IEEE},
      year = {2007},
      pages = {53-62},
      address = {Dallas, Texas, USA},
      month = {14-16 November},
      doi = {http://dx.doi.org/10.1109/HASE.2007.19}
    }
    					
    2016.03.08 Erik M. Fredericks & Betty H.C. Cheng An Empirical Analysis of Providing Assurance for Self-Adaptive Systems at Different Levels of Abstraction in the Face of Uncertainty 2015 IEEE/ACM 8th International Workshop on Search-Based Software Testing (SBST '15), pp. 8-14, Florence Italy, 18-19 May   Inproceedings
    Abstract: Self-adaptive systems (SAS) must frequently continue to deliver acceptable behavior at run time even in the face of uncertainty. Particularly, SAS applications can self-reconfigure in response to changing or unexpected environmental conditions and must therefore ensure that the system performs as expected. Assurance can be addressed at both design time and run time, where environmental uncertainty poses research challenges for both settings. This paper presents empirical results from a case study in which search-based software engineering techniques have been systematically applied at different levels of abstraction, including requirements analysis, code implementation, and run-time validation, to a remote data mirroring application that must efficiently diffuse data while experiencing adverse operating conditions. Experimental results suggest that our techniques perform better in terms of providing assurance than alternative software engineering techniques at each level of abstraction.
    BibTeX:
    @inproceedings{FredericksC15,
      author = {Erik M. Fredericks and Betty H. C. Cheng},
      title = {An Empirical Analysis of Providing Assurance for Self-Adaptive Systems at Different Levels of Abstraction in the Face of Uncertainty},
      booktitle = {IEEE/ACM 8th International Workshop on Search-Based Software Testing (SBST '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {8-14},
      address = {Florence, Italy},
      month = {18-19 May},
      doi = {http://dx.doi.org/10.1109/SBST.2015.9}
    }
    					
    2017.09.13 José Campos, Yan Ge, Gordon Fraser, Marcelo Eler & Andrea Arcuri An Empirical Evaluation of Evolutionary Algorithms for Test Suite Generation 2017 Proceedings of the 9th International Symposium on Search Based Software Engineering (SSBSE '17), Vol. 10452, pp. 33-48, Paderborn Germany, 9-11 September   Inproceedings
    Abstract: Evolutionary algorithms have been shown to be effective at generating unit test suites optimised for code coverage. While many aspects of these algorithms have been evaluated in detail (e.g., test length and different kinds of techniques aimed at improving performance, like seeding), the influence of the specific algorithms has to date seen less attention in the literature. As it is theoretically impossible to design an algorithm that is best on all possible problems, a common approach in software engineering problems is to first try a Genetic Algorithm, and only afterwards try to refine it or compare it with other algorithms to see if any of them is more suited for the addressed problem. This is particularly important in test generation, since recent work suggests that random search may in practice be equally effective, whereas the reformulation as a many-objective problem seems to be more effective. To shed light on the influence of the search algorithms, we empirically evaluate six different algorithms on a selection of non-trivial open source classes. Our study shows that the use of a test archive makes evolutionary algorithms clearly better than random testing, and it confirms that the many-objective search is the most effective.
    BibTeX:
    @inproceedings{CamposGFEA17,
      author = {José Campos and Yan Ge and Gordon Fraser and Marcelo Eler and Andrea Arcuri},
      title = {An Empirical Evaluation of Evolutionary Algorithms for Test Suite Generation},
      booktitle = {Proceedings of the 9th International Symposium on Search Based Software Engineering (SSBSE '17)},
      publisher = {Springer},
      year = {2017},
      volume = {10452},
      pages = {33-48},
      address = {Paderborn, Germany},
      month = {9-11 September},
      doi = {http://dx.doi.org/10.1007/978-3-319-66299-2_3}
    }
    					
    2010.09.21 Kiran Lakhotia, Phil McMinn & Mark Harman An Empirical Investigation Into Branch Coverage for C Programs Using CUTE and AUSTIN 2010 Journal of Systems and Software, Vol. 83(12), pp. 2379-2391, December   Article Testing and Debugging
    Abstract: Automated test data generation has remained a topic of considerable interest for several decades because it lies at the heart of attempts to automate the process of software testing. This paper reports the results of an empirical study using the dynamic symbolic-execution tool, CUTE, and a search based tool, AUSTIN, on five non-trivial open source applications. The aim is to provide practitioners with an assessment of what can be achieved by existing techniques with little or no specialist knowledge, and to provide researchers with baseline data against which to measure subsequent work. To achieve this, each tool is applied ‘as is', with neither additional tuning nor supporting harnesses and with no adjustments applied to the subject programs under test. The mere fact that these tools can be applied ‘out of the box' in this manner reflects the growing maturity of automated test data generation. However, as might be expected, the study reveals opportunities for improvement and suggests ways to hybridize these two approaches that have hitherto been developed entirely independently.
    BibTeX:
    @article{LakhotiaMH10,
      author = {Kiran Lakhotia and Phil McMinn and Mark Harman},
      title = {An Empirical Investigation Into Branch Coverage for C Programs Using CUTE and AUSTIN},
      journal = {Journal of Systems and Software},
      year = {2010},
      volume = {83},
      number = {12},
      pages = {2379-2391},
      month = {December},
      doi = {http://dx.doi.org/10.1016/j.jss.2010.07.026}
    }
    					
    2012.03.09 Christopher L. Simons & Ian C. Parmee An Empirical Investigation of Search-based Computational Support for Conceptual Software Engineering Design 2009 Proceedings of IEEE International Conference on Systems, Man and Cybernetics (SMC '09), pp. 2503-2508, San Antonio USA, 11-14 October   Inproceedings Design Tools and Techniques
    Abstract: Conceptual software engineering design is an intensely people-oriented and non-trivial activity, yet current computational tool support is limited. While a number of search-based software engineering approaches to support software design have been reported, few empirical studies into their application have been described. This paper reports the findings of an observational study of conceptual design episodes in a UK higher education problem domain. When compared with a manual design episode, a design episode enabled by a user-interactive, search-based, evolutionary computation tool generates a large number of useful and interesting candidate designs, and provides enhanced qualitative and quantitative evaluation. It is also found that tool-supported visualization of UML class designs offers opportunities for sudden design discovery, and that designers respond positively to opportunities to explore and exploit multiple candidate designs. It appears therefore that search-based computational tool support offers high potential in the support of conceptual software engineering design.
    BibTeX:
    @inproceedings{SimonsP09,
      author = {Christopher L. Simons and Ian C. Parmee},
      title = {An Empirical Investigation of Search-based Computational Support for Conceptual Software Engineering Design},
      booktitle = {Proceedings of IEEE International Conference on Systems, Man and Cybernetics (SMC '09)},
      publisher = {IEEE},
      year = {2009},
      pages = {2503-2508},
      address = {San Antonio, USA},
      month = {11-14 October},
      doi = {http://dx.doi.org/10.1109/ICSMC.2009.5346344}
    }
    					
    2012.03.09 Ekin Koc, Nur Ersoy, Ali Andac, Zelal Seda Camlidere, Ibrahim Cereci & Hurevren Kilic An Empirical Study about Search-Based Refactoring using Alternative Multiple and Population-Based Search Techniques 2011 Computer and Information Sciences II, pp. 59-66   Article Distribution and Maintenance
    Abstract: Automated maintenance of object-oriented software system designs via refactoring is a performance demanding combinatorial optimization problem. In this study, we conducted an empirical comparative study of the performance of alternative search algorithms under a quality model defined by an aggregated software fitness metric. We handled 20 different refactoring actions that realize searches on a design landscape defined by a combination of 24 object-oriented software metrics. The investigated algorithms include random, steepest descent, multiple first descent, multiple steepest descent, simulated annealing and artificial bee colony searches. The study is realized using a tool called A-CMA, developed in Java, that accepts compiled Java bytecode as its input. The empirical study showed that multiple steepest descent and population-based artificial bee colony algorithms are the two most suitable approaches for the efficient solution of the search based refactoring problem.
    BibTeX:
    @article{KocEACCK11,
      author = {Ekin Koc and Nur Ersoy and Ali Andac and Zelal Seda Camlidere and Ibrahim Cereci and Hurevren Kilic},
      title = {An Empirical Study about Search-Based Refactoring using Alternative Multiple and Population-Based Search Techniques},
      journal = {Computer and Information Sciences II},
      year = {2011},
      pages = {59-66},
      doi = {http://dx.doi.org/10.1007/978-1-4471-2155-8_7}
    }
    					
    2015.11.06 Yuanyuan Zhang, Mark Harman, Gabriela Ochoa, Guenther Ruhe & Sjaak Brinkkemper An Empirical Study of Meta- and Hyper-Heuristic Search for Multi-Objective Release Planning 2014 (RN/14/07), June   Techreport
    Abstract: A variety of meta-heuristic search algorithms have been introduced for optimising software release planning. However, there has been no comprehensive empirical study of different search algorithms across multiple different real world datasets. In this paper we present an empirical study of global, local and hybrid meta- and hyper-heuristic search based algorithms on 10 real world datasets. We find that the hyper-heuristics are particularly effective. For example, the hyper-heuristic genetic algorithm significantly outperformed the other six approaches (and with high effect size) for solution quality 85% of the time, and was also faster than all others 70% of the time. Furthermore, correlation analysis reveals that it scales well as the number of requirements increases.
    BibTeX:
    @techreport{ZhangHORB14,
      author = {Yuanyuan Zhang and Mark Harman and Gabriela Ochoa and Guenther Ruhe and Sjaak Brinkkemper},
      title = {An Empirical Study of Meta- and Hyper-Heuristic Search for Multi-Objective Release Planning},
      year = {2014},
      number = {RN/14/07},
      month = {June},
      url = {http://www.cs.ucl.ac.uk/fileadmin/UCL-CS/research/Research_Notes/RN_14_07.pdf}
    }
    					
    2007.12.02 Mark Harman, Stephen Swift & Kiarash Mahdavi An Empirical Study of the Robustness of Two Module Clustering Fitness Functions 2005 Proceedings of the 2005 Conference on Genetic and Evolutionary Computation (GECCO '05), Vol. 1, pp. 1029-1036, Washington D.C. USA, 25-29 June   Inproceedings Distribution and Maintenance
    Abstract: Two of the attractions of search-based software engineering (SBSE) derive from the nature of the fitness functions used to guide the search. These have proved to be highly robust (for a variety of different search algorithms) and have yielded insight into the nature of the search space itself, shedding light upon the software engineering problem in hand. This paper aims to exploit these two benefits of SBSE in the context of search based module clustering. The paper presents empirical results which compare the robustness of two fitness functions used for software module clustering: one (MQ) is used exclusively for module clustering; the other (EVM) is a clustering fitness function previously applied to time series and gene expression data. The results show that both metrics are relatively robust in the presence of noise, with EVM being the more robust of the two. The results may also yield some interesting insights into the nature of software graphs.
    BibTeX:
    @inproceedings{HarmanSM05,
      author = {Mark Harman and Stephen Swift and Kiarash Mahdavi},
      title = {An Empirical Study of the Robustness of Two Module Clustering Fitness Functions},
      booktitle = {Proceedings of the 2005 Conference on Genetic and Evolutionary Computation (GECCO '05)},
      publisher = {ACM},
      year = {2005},
      volume = {1},
      pages = {1029-1036},
      address = {Washington, D.C., USA},
      month = {25-29 June},
      doi = {http://dx.doi.org/10.1145/1068009.1068184}
    }
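    Illustrative sketch: one common formulation of the MQ fitness studied here scores each cluster by its internal edges against the edges crossing its boundary, and sums the scores. The sketch assumes the TurboMQ-style variant; the paper's exact definitions of MQ and EVM may differ.

    def mq(clusters, edges):
        # clusters: list of sets of modules; edges: (u, v) dependencies.
        which = {m: k for k, cluster in enumerate(clusters) for m in cluster}
        intra = [0] * len(clusters)
        inter = [0] * len(clusters)
        for u, v in edges:
            if which[u] == which[v]:
                intra[which[u]] += 1
            else:
                inter[which[u]] += 1
                inter[which[v]] += 1
        # Each cluster contributes i / (i + j/2); empty clusters score 0.
        return sum(i / (i + j / 2.0)
                   for i, j in zip(intra, inter) if i + j > 0)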
    					
    2014.09.22 Amarjeet & Jitender Kumar Chhabra An Empirical Study of the Sensitivity of Quality Indicator for Software Module Clustering 2014 Proceedings of the 7th International Conference on Contemporary Computing (IC3 '14), pp. 206-211, Noida India, 7-9 August   Inproceedings
    Abstract: Recently, there has been significant progress in applying evolutionary multiobjective optimization techniques to solve the software module clustering problem. The results of evolutionary multiobjective optimization techniques for the software module clustering problem are a set of many non-dominated clustering solutions. Generally, the quality indicators of clustering solutions produced by these techniques are sensitive to minor variations in the decision variables of the clustering solutions. Researchers have focused on finding software module clusterings with better quality indicators; however, in practice developers may not always be interested in clustering solutions with better quality indicators, particularly if these quality indicators are quite sensitive. Under such situations, developers look for clustering solutions whose quality indicators are not sensitive to small variations in the decision variables of the candidate clustering solution. This paper performs an experiment on the sensitivity analysis of quality indicators for software module clustering solutions with two multiobjective formulations, MCA and ECA. To perform the experiment, NSGA-II is used as the multi-objective evolutionary algorithm. We evaluate the sensitivity of quality indicators for six real-world software systems and one random problem. Results indicate that the quality indicator for the MCA formulation is less sensitive than for the ECA formulation, and hence MCA will be a better choice for multiobjective software module clustering from a sensitivity perspective.
    BibTeX:
    @inproceedings{AmarjeetC14,
      author = {Amarjeet and Jitender Kumar Chhabra},
      title = {An Empirical Study of the Sensitivity of Quality Indicator for Software Module Clustering},
      booktitle = {Proceedings of the 7th International Conference on Contemporary Computing (IC3 '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {206-211},
      address = {Noida, India},
      month = {7-9 August},
      doi = {http://dx.doi.org/10.1109/IC3.2014.6897174}
    }
    					
    2014.09.19 Mohammod Abul Kashem & Mohammad Naderuzzaman An Enhanced Pairwise Search Approach for Generating Optimum Number of Test Data and Reduce Execution Time 2013 Computer Engineering and Intelligent Systems, Vol. 4(1), pp. 19-28   Article Testing and Debugging
    Abstract: In recent times, testing is considered the most important task for building software that is free from errors. Since the resources and time available to produce software are limited, it is not possible to perform exhaustive tests (i.e., to test all possible combinations of input data). An alternative that avoids this combinatorial explosion, and also reduces cost, is the pairwise (2-way) test data generation approach, which is effective because most software faults are caused by unusual combinations of input data. Hence, the demand for optimizing the number of generated test cases and reducing execution time is growing in the software industry. This paper proposes an enhancement of the pairwise search approach which generates an optimal number of input values for testing purposes. The approach searches for the most coverable pairs by pairing parameters and adopts a one-test-at-a-time strategy for constructing a final test suite. Compared to other existing strategies, our proposed approach is effective in terms of the number of generated test cases and of execution time.
    BibTeX:
    @article{KashemN13,
      author = {Mohammod Abul Kashem and Mohammad Naderuzzaman},
      title = {An Enhanced Pairwise Search Approach for Generating Optimum Number of Test Data and Reduce Execution Time},
      journal = {Computer Engineering and Intelligent Systems},
      year = {2013},
      volume = {4},
      number = {1},
      pages = {19-28},
      url = {http://iiste.org/Journals/index.php/CEIS/article/download/3845/3916}
    }
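    Illustrative sketch: the one-test-at-a-time strategy mentioned in the abstract repeatedly keeps the candidate test covering the most still-uncovered value pairs. The random-candidate sampling below is an assumed simplification of the paper's pair search, not its algorithm.

    import random
    from itertools import combinations, product

    def one_test_at_a_time(domains, tries=50):
        # Required pairs: every value combination of every parameter pair.
        required = {(i, a, j, b)
                    for (i, di), (j, dj) in combinations(enumerate(domains), 2)
                    for a, b in product(di, dj)}
        suite = []
        while required:
            def gain(test):
                return sum((i, test[i], j, test[j]) in required
                           for i, j in combinations(range(len(test)), 2))
            # Keep the sampled candidate that covers the most new pairs.
            best = max(([random.choice(d) for d in domains]
                        for _ in range(tries)), key=gain)
            suite.append(best)
            required -= {(i, best[i], j, best[j])
                         for i, j in combinations(range(len(best)), 2)}
        return suite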
    					
    2011.05.19 Hadi Hemmati, Lionel C. Briand, Andrea Arcuri & Shaukat Ali An Enhanced Test Case Selection Approach for Model-Based Testing: An Industrial Case Study 2010 Proceedings of the 18th ACM SIGSOFT international symposium on Foundations of software engineering (FSE '10), pp. 267-276, New Mexico USA, 7-11 November   Inproceedings Testing and Debugging
    Abstract: In recent years, Model-Based Testing (MBT) has attracted an increasingly wide interest from industry and academia. MBT allows automatic generation of a large and comprehensive set of test cases from system models (e.g., state machines), which leads to the systematic testing of the system. However, even when using simple test strategies, applying MBT in large industrial systems often leads to generating large sets of test cases that cannot possibly be executed within time and cost constraints. In this situation, test case selection techniques are employed to select a subset from the entire test suite such that the selected subset conforms to available resources while maximizing fault detection. In this paper, we propose a new similarity-based selection technique for state machine-based test case selection, which includes a new similarity function using triggers and guards on transitions of state machines and a genetic algorithm-based selection algorithm. Applying this technique on an industrial case study, we show that our proposed approach is more effective in detecting real faults than existing alternatives. We also assess the overall benefits of model-based test case selection in our case study by comparing the fault detection rate of the selected subset with the maximum possible fault detection rate of the original test suite.
    BibTeX:
    @inproceedings{HemmatiBAA10,
      author = {Hadi Hemmati and Lionel C. Briand and Andrea Arcuri and Shaukat Ali},
      title = {An Enhanced Test Case Selection Approach for Model-Based Testing: An Industrial Case Study},
      booktitle = {Proceedings of the 18th ACM SIGSOFT international symposium on Foundations of software engineering (FSE '10)},
      publisher = {ACM},
      year = {2010},
      pages = {267-276},
      address = {New Mexico, USA},
      month = {7-11 November},
      doi = {http://dx.doi.org/10.1145/1882291.1882331}
    }
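    Illustrative sketch: in similarity-based selection, each abstract test case can be encoded as the set of (trigger, guard) labels on the transitions it exercises, and a GA searches for a fixed-size subset whose members are mutually dissimilar. The Jaccard encoding below is one concrete choice; the paper proposes its own trigger/guard similarity function.

    from itertools import combinations

    def jaccard(t1, t2):
        # t1, t2: sets of (trigger, guard) labels on the transitions a
        # test case exercises.
        union = t1 | t2
        return len(t1 & t2) / len(union) if union else 0.0

    def subset_fitness(subset, tests):
        # A candidate subset is better when its members are mutually
        # dissimilar, i.e. when total pairwise similarity is low.
        return -sum(jaccard(tests[i], tests[j])
                    for i, j in combinations(subset, 2))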
    					
    2009.07.26 R. Landa Becerra, Ramón Sagarna & Xin Yao An Evaluation of Differential Evolution in Software Test Data Generation 2009 Proceedings of the 10th IEEE Congress on Evolutionary Computation (CEC '09), pp. 2850-2857, Trondheim Norway, 18-21 May   Inproceedings Testing and Debugging
    Abstract: One of the main tasks software testing involves is the generation of the test inputs to be used during the test. Due to its expensive cost, the automation of this task has become one of the key issues in the area. Recently, this generation has been explicitly formulated as the resolution of a set of constrained optimisation problems. Differential Evolution (DE) is a population based evolutionary algorithm which has been successfully applied in a number of domains, including constrained optimisation. We present a test data generator employing DE to solve each of the constrained optimisation problems, and empirically evaluate its performance for several DE models. With the aim of comparing this technique with other approaches, we extend the experiments to the Breeder Genetic Algorithm and compare it against DE, and compare different test data generators in the literature with the DE approach. The results present DE as a promising solution technique for this real-world problem.
    BibTeX:
    @inproceedings{BecerraSY09,
      author = {R. Landa Becerra and Ramón Sagarna and Xin Yao},
      title = {An Evaluation of Differential Evolution in Software Test Data Generation},
      booktitle = {Proceedings of the 10th IEEE Congress on Evolutionary Computation (CEC '09)},
      publisher = {IEEE},
      year = {2009},
      pages = {2850-2857},
      address = {Trondheim, Norway},
      month = {18-21 May},
      doi = {http://dx.doi.org/10.1109/CEC.2009.4983300}
    }
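    Code sketch: A compact DE/rand/1/bin loop in Python, minimising a toy branch-distance objective; the paper evaluates several DE models on constrained formulations, while this sketch only shows the basic algorithmic shape with assumed parameter values.
    import random

    def de_minimize(f, bounds, pop_size=20, F=0.8, CR=0.9, iters=200):
        # Classic DE/rand/1/bin on a box-constrained objective f.
        dim = len(bounds)
        pop = [[random.uniform(lo, hi) for lo, hi in bounds]
               for _ in range(pop_size)]
        best = min(pop, key=f)
        for _ in range(iters):
            for i in range(pop_size):
                a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
                j_rand = random.randrange(dim)
                trial = [a[d] + F * (b[d] - c[d])
                         if (random.random() < CR or d == j_rand) else pop[i][d]
                         for d in range(dim)]
                trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
                if f(trial) <= f(pop[i]):            # greedy one-to-one selection
                    pop[i] = trial
                    if f(trial) < f(best):
                        best = trial
        return best

    def branch_distance(v):
        # Toy objective for the predicate (x == 42 and y > x); 0 means satisfied.
        x, y = v
        return abs(x - 42) + max(0.0, x - y + 1)

    print(de_minimize(branch_distance, [(-100, 100), (-100, 100)]))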
    					
    2012.10.25 Edison Klafke Fillus & Silvia Regina Vergilio An Evaluation of the Use of Different Clustering Algorithms and Distance Measures for Mining Aspects 2012 Proceedings of the 3rd Brazilian Workshop on Search-Based Software Engineering (WESB '12), Natal RN Brazil, 23 September   Inproceedings
    Abstract: Clustering-based aspect mining techniques group methods that are associated with the same crosscutting concerns, guided by a distance measure, without requiring any prior knowledge about the system. However, a limitation of most clustering approaches is that the measure used is generally based on a single kind of crosscutting symptom. To overcome this limitation, this work proposes the use of an aggregated distance measure that considers the scattering, code cloning and name convention crosscutting symptoms. Experimental results show that the introduced measure produces better results than the ones generally used by existing approaches.
    BibTeX:
    @inproceedings{FillusV12,
      author = {Edison Klafke Fillus and Silvia Regina Vergilio},
      title = {An Evaluation of the Use of Different Clustering Algorithms and Distance Measures for Mining Aspects},
      booktitle = {Proceedings of the 3rd Brazilian Workshop on Search-Based Software Engineering (WESB '12)},
      year = {2012},
      address = {Natal, RN, Brazil},
      month = {23 September}
    }
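    Code sketch: The aggregated measure can be illustrated as a weighted sum of three symptom-specific distances; the Jaccard-based components and the method representation below are simplifying assumptions, not the paper's actual measures.
    def jaccard_dist(a, b):
        # 0 when the sets coincide, 1 when they are disjoint.
        union = a | b
        return 1.0 - (len(a & b) / len(union) if union else 1.0)

    def aggregated_distance(m1, m2, weights=(1/3, 1/3, 1/3)):
        # Combine scattering, code-cloning and naming-convention symptoms.
        d_scatter = jaccard_dist(m1["callers"], m2["callers"])
        d_clone = jaccard_dist(m1["shingles"], m2["shingles"])
        d_naming = jaccard_dist(m1["name_tokens"], m2["name_tokens"])
        ws, wc, wn = weights
        return ws * d_scatter + wc * d_clone + wn * d_naming

    m1 = {"callers": {"A.f", "B.g"}, "shingles": {"log(", "enter"},
          "name_tokens": {"log", "entry"}}
    m2 = {"callers": {"A.f", "C.h"}, "shingles": {"log(", "exit"},
          "name_tokens": {"log", "exit"}}
    print(aggregated_distance(m1, m2))   # any clustering algorithm can consume this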
    					
    2016.02.02 Xin Du, Youcong Ni, Peng Ye, Xin Yao, Leandro L. Minku & Ruliang Xiao An Evolutionary Algorithm for Performance Optimization at Software Architecture Level 2015 Proceedings of IEEE Congress on Evolutionary Computation (CEC '15), pp. 2129 - 2136, Sendai Japan, 25-28 May   Inproceedings
    Abstract: Architecture-based software performance optimization can not only significantly save time but also reduce cost. A few rule-based performance optimization approaches at software architecture (SA) level have been proposed in recent years. However, in these approaches, the number of rules being used and the order of application of each rule are uncertain in the optimization process and these uncertainties have not been fully considered so far. As a result, the search space for performance improvement is limited, possibly excluding optimal solutions. Aiming to solve this problem, we propose an evolutionary algorithm for rule-based performance optimization at SA level named EA4PO. First, the rule-based software performance optimization at SA level is abstracted into a mathematical model called RPOM. RPOM can precisely characterize the mathematical relation between the usage of rules and the optimal solution in the performance improvement space. Then, a framework named RSEF is designed to support the execution of rule sequences. Based on RPOM and RSEF, EA4PO is proposed to find the optimal performance improvement solution. In EA4PO, an adaptive mutation operator is designed to guide the search direction by fully considering heuristic information of rule usage during the evolution. Finally, the effectiveness of EA4PO is validated by comparing EA4PO with a typical rule-based approach. The results show that EA4PO can explore a relatively larger space and get better solutions.
    BibTeX:
    @inproceedings{DuNYYMX15,
      author = {Xin Du and Youcong Ni and Peng Ye and Xin Yao and Leandro L. Minku and Ruliang Xiao},
      title = {An Evolutionary Algorithm for Performance Optimization at Software Architecture Level},
      booktitle = {Proceedings of IEEE Congress on Evolutionary Computation (CEC '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {2129 - 2136},
      address = {Sendai, Japan},
      month = {25-28 May},
      doi = {http://dx.doi.org/10.1109/CEC.2015.7257147}
    }
    					
    2012.03.07 Joachim Hänsel, Daniela Rose, Paula Herber & Sabine Glesner An Evolutionary Algorithm for the Generation of Timed Test Traces for Embedded Real-Time Systems 2011 Proceedings of the 4th IEEE International Conference on Software Testing, Verification and Validation (ICST '11), pp. 170-179, Berlin Germany, 21-25 March   Inproceedings Testing and Debugging
    Abstract: In safety-critical applications, the real-time behavior is crucial for the correctness of the overall system and must be tested thoroughly. However, the generation of test traces that cover most or all of the desired behavior of a real-time system is a difficult challenge. In this paper, we present an evolutionary algorithm that generates timed test traces, which achieve a given transition coverage. We generate these traces from a timed automata model. Our main contribution is a novel approach to encode timed test traces as individuals of an evolutionary algorithm. The major difficulty in doing so is that test traces for embedded real-time systems have to be very long. To solve this problem, we introduce the notion of blocks, which simplify long traces by cutting them into pieces. With that, we reduce the search space significantly. Furthermore, we have implemented crossover and mutation operators and a fitness function that takes time-dependent behavior implicitly into account. We show the success of our approach by experimental results from an anti-lock braking system.
    BibTeX:
    @inproceedings{HanselRHG11,
      author = {Joachim Hänsel and Daniela Rose and Paula Herber and Sabine Glesner},
      title = {An Evolutionary Algorithm for the Generation of Timed Test Traces for Embedded Real-Time Systems},
      booktitle = {Proceedings of the 4th IEEE International Conference on Software Testing, Verification and Validation (ICST '11)},
      publisher = {IEEE},
      year = {2011},
      pages = {170-179},
      address = {Berlin, Germany},
      month = {21-25 March},
      doi = {http://dx.doi.org/10.1109/ICST.2011.37}
    }
    					
    2015.12.08 Tao Yue, Shaukat Ali & Shuai Wang An Evolutionary and Automated Virtual Team Making Approach for Crowdsourcing Platforms 2015, pp. 113-130   Inbook
    Abstract: Crowdsourcing has demonstrated its capability of supporting various software development activities including development and testing, as can be seen from several successful crowdsourcing platforms such as TopCoder and uTest. However, to crowdsource large-scale and complex software development and testing tasks, there are several optimization challenges to be addressed, such as division of tasks, searching a set of registrants, and assignment of tasks to registrants. Since in crowdsourcing a task can be assigned to geographically distributed registrants with various backgrounds, the quality of final task deliverables is a key issue. As the first step to improve the quality, we propose a systematic and automated approach to optimize the assignment of registrants in a crowdsourcing platform to a crowdsourcing task. The objective is to find the best fit of a group of registrants to the defined task. A few examples of factors forming the optimization problem include the budget defined by the task submitter and the pay expectation of a registrant, skills required by a task, skills of a registrant, task delivery deadline, and availability of a registrant. We first collected a set of commonly seen factors that have an impact on the perfect matching between tasks submitted and a virtual team that consists of a selected set of registrants. We then formulated the optimization objective as a fitness function: the heuristic used by search algorithms (e.g., Genetic Algorithms) to find an optimal solution. We empirically evaluated a set of well-known search algorithms in software engineering, along with the proposed fitness function, to identify the best solution for our optimization problem. Results of our experiments are very positive in terms of solving optimization problems in a crowdsourcing context.
    BibTeX:
    @inbook{YueAW15,
      author = {Tao Yue and Shaukat Ali and Shuai Wang},
      title = {An Evolutionary and Automated Virtual Team Making Approach for Crowdsourcing Platforms},
      publisher = {Springer},
      year = {2015},
      pages = {113-130},
      doi = {http://dx.doi.org/10.1007/978-3-662-47011-4_7}
    }
    					
    2007.12.02 Huaizhong Li & Chiou Peng Lam (Dosch, W. & Lee, R.Y., Eds.) An Evolutionary Approach for Optimisation of State-based Test Suites for Software Systems 2003 Proceedings of the 4th International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD '03), pp. 226-233, Lübeck Germany, 16-18 October   Inproceedings Testing and Debugging
    Abstract: There are many different approaches to generating test data for software testing; one of the frequently used approaches is state-based test data generation. In general, test coverage specified by the pre-determined testing requirements is frequently used to ascertain the quality of software testing. Test suites are automatically generated to satisfy the required coverage. A properly generated test suite is one of the key assurances of software system quality. However, automatic test data generation may introduce problems in state-based testing because the generated test cases are generally not optimized for the required coverage. It is well known that the more test cases a test suite includes, the more likely it is to satisfy the required testing coverage. However, a bigger set of test cases may inevitably introduce higher testing cost due to the inclusion of many redundant test cases. It is desirable to reduce the cost associated with redundant test cases without sacrificing the quality of the software. This paper proposes an evolutionary approach to optimize state-based test suites. The optimization is achieved by reducing the number of redundant test cases while still guaranteeing the required transition coverage of the test suite.
    BibTeX:
    @inproceedings{LiL03,
      author = {Huaizhong Li and Chiou Peng Lam},
      title = {An Evolutionary Approach for Optimisation of State-based Test Suites for Software Systems},
      booktitle = {Proceedings of the 4th International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD '03)},
      publisher = {ACIS},
      year = {2003},
      pages = {226-233},
      address = {Lübeck, Germany},
      month = {16-18 October},
      url = {http://www.pubzone.org/dblp/conf/snpd/LiL03}
    }
    					
    2008.09.02 José Carlos Bregieiro Ribeiro, Mário Zenha-Rela & Francisco Fernández de Vega An Evolutionary Approach for Performing Structural Unit-Testing on Third-Party Object-Oriented Java Software 2007 Proceedings of International Workshop on Nature Inspired Cooperative Strategies for Optimization (NICSO '07), pp. 379-388, Acireale Italy, 8-10 November   Inproceedings Testing and Debugging
    Abstract: Evolutionary Testing is an emerging methodology for automatically generating high quality test data. The focus of this paper is on presenting an approach for generating test cases for the unit-testing of object-oriented programs, based on the information provided by the structural analysis and interpretation of Java bytecode and on the dynamic execution of the instrumented test object. The rationale for working at the bytecode level is that even when the source code is unavailable, insight can still be obtained and used to guide the search-based test case generation process. Test cases are represented using the Strongly Typed Genetic Programming paradigm, which effectively mimics the polymorphic relationships, inheritance dependencies and method argument constraints of object-oriented programs.
    BibTeX:
    @inproceedings{RibeiroZV07a,
      author = {José Carlos Bregieiro Ribeiro and Mário Zenha-Rela and Francisco Fernández de Vega},
      title = {An Evolutionary Approach for Performing Structural Unit-Testing on Third-Party Object-Oriented Java Software},
      booktitle = {Proceedings of International Workshop on Nature Inspired Cooperative Strategies for Optimization (NICSO '07)},
      publisher = {Springer},
      year = {2007},
      pages = {379-388},
      address = {Acireale, Italy},
      month = {8-10 November},
      doi = {http://dx.doi.org/10.1007/978-3-540-78987-1_34}
    }
    					
    2007.12.02 Jesús S. Aguilar-Ruiz, Isabel Ramos, José Cristóbal Riquelme & Miguel Toro An Evolutionary Approach to Estimating Software Development Projects 2001 Information and Software Technology, Vol. 43(14), pp. 875-882, December   Article Management
    Abstract: The use of dynamic models and simulation environments in connection with software projects paved the way for tools that allow us to simulate the behaviour of the projects. The main advantage of a Software Project Simulator (SPS) is the possibility of experimenting with different decisions to be taken at no cost. In this paper, we present a new approach based on the combination of an SPS and Evolutionary Computation. The purpose is to provide accurate decision rules in order to help the project manager to take decisions at any time in the development. The SPS generates a database from the software project, which is provided as input to the Evolutionary Algorithm for producing the set of management rules. These rules will help the project manager to keep the project within the cost, quality and duration targets. The set of alternatives within the decision-making framework is therefore reduced to a quality set of decisions.
    BibTeX:
    @article{Aguilar-RuizRRT01,
      author = {Jesús S. Aguilar-Ruiz and Isabel Ramos and José Cristóbal Riquelme and Miguel Toro},
      title = {An Evolutionary Approach to Estimating Software Development Projects},
      journal = {Information and Software Technology},
      year = {2001},
      volume = {43},
      number = {14},
      pages = {875-882},
      month = {December},
      doi = {http://dx.doi.org/10.1016/S0950-5849(01)00193-8}
    }
    					
    2009.04.01 Sunny Huynh & Yuanfang Cai An Evolutionary Approach to Software Modularity Analysis 2007 Proceedings of the 1st International Workshop on Assessment of Contemporary Modularization Techniques (ACoM'07), pp. 1-6, Minneapolis USA, 20-26 May   Inproceedings Distribution and Maintenance
    Abstract: Modularity determines software quality in terms of evolvability, changeability, maintainability, etc., and a module could be a vertical slicing through the source code directory structure or a class boundary. Given a modularized design, we need to determine whether its implementation realizes the designed modularity. Manually comparing source code modular structure with abstracted design modular structure is tedious and error-prone. In this paper, we present an automated approach to check the conformance of source code modularity to the designed modularity. Our approach uses design structure matrices (DSMs) as a uniform representation; it uses existing tools to automatically derive DSMs from the source code and design, and uses a genetic algorithm to automatically cluster DSMs and check the conformance. We applied our approach to a small canonical software system as a proof-of-concept experiment. The results supported our hypothesis that it is possible to check the conformance between source code structure and design structure automatically, and this approach has the potential to be scaled for use in large software systems.
    BibTeX:
    @inproceedings{HuynhC07,
      author = {Sunny Huynh and Yuanfang Cai},
      title = {An Evolutionary Approach to Software Modularity Analysis},
      booktitle = {Proceedings of the 1st International Workshop on Assessment of Contemporary Modularization Techniques (ACoM'07)},
      publisher = {ACM},
      year = {2007},
      pages = {1-6},
      address = {Minneapolis, USA},
      month = {20-26 May},
      doi = {http://dx.doi.org/10.1109/ACOM.2007.1}
    }
    					
    2011.02.05 Diponkar Paul An Evolutionary Testing Approach for Test Data Generation from EFSM Model with String Data Input 2009 , October School: University of Gothenburg   Mastersthesis Testing and Debugging
    Abstract: This thesis addresses test data generation from EFSM models with string data inputs. Very little work in the testing area has been done on generating test data with string inputs, which makes this topic of interest to the testing community. To reach the goal, a genetic algorithm (GA) tool was developed. A study was carried out to choose the best fitness function for string inputs; as a result, a modified edit distance algorithm was used as the fitness function. First, string and alphanumeric values of different lengths were passed through the GA tool and the results were evaluated. Then, three EFSM models were designed and deployed to the GA tool, where in most cases the whole target path was covered. This work was limited to string equality; string ordering remains as future work.
    BibTeX:
    @mastersthesis{Paul09,
      author = {Diponkar Paul},
      title = {An Evolutionary Testing Approach for Test Data Generation from EFSM Model with String Data Input},
      school = {University of Gothenburg},
      year = {2009},
      month = {October},
      url = {http://gupea.ub.gu.se/handle/2077/22085}
    }
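    Code sketch: The core ingredient is an edit-distance fitness for string equality; the following Python sketch pairs a standard Levenshtein distance with a simple (1+1) evolutionary search, which is a deliberately reduced stand-in for the thesis's GA tool.
    import random, string

    def edit_distance(s, t):
        # Levenshtein distance via dynamic programming (one row at a time).
        prev = list(range(len(t) + 1))
        for i, cs in enumerate(s, 1):
            cur = [i]
            for j, ct in enumerate(t, 1):
                cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                               prev[j - 1] + (cs != ct)))
            prev = cur
        return prev[-1]

    def mutate(s, alphabet=string.ascii_lowercase):
        op = random.choice(["insert", "replace", "delete"] if s else ["insert"])
        i = random.randrange(len(s) + 1 if op == "insert" else len(s))
        if op == "insert":
            return s[:i] + random.choice(alphabet) + s[i:]
        if op == "replace":
            return s[:i] + random.choice(alphabet) + s[i + 1:]
        return s[:i] + s[i + 1:]

    def search_string_equality(target, max_iters=100000):
        # Fitness 0 means the string-equality branch is covered.
        cand = ""
        for _ in range(max_iters):
            if edit_distance(cand, target) == 0:
                return cand
            child = mutate(cand)
            if edit_distance(child, target) <= edit_distance(cand, target):
                cand = child
        return cand

    print(search_string_equality("state"))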
    					
    2009.02.26 Giuliano Antoniol, Concettina Del Grosso & Massimiliano Di Penta An Evolutionary Testing Approach to detect Buffer Overflows 2004 Proceedings of the International Symposium on Software Reliability Engineering (ISSRE '04), pp. 77-78, Rennes and Saint-Malo France, 2-5 November   Inproceedings Testing and Debugging
    Abstract: No abstract
    BibTeX:
    @inproceedings{AntoniolDD04,
      author = {Giuliano Antoniol and Concettina Del Grosso and Massimiliano Di Penta},
      title = {An Evolutionary Testing Approach to detect Buffer Overflows},
      booktitle = {Proceedings of the International Symposium on Software Reliability Engineering (ISSRE '04)},
      year = {2004},
      pages = {77-78},
      address = {Rennes and Saint-Malo, France},
      month = {2-5 November},
      url = {http://www.rcost.unisannio.it/mdipenta/papers/issre04.pdf}
    }
    					
    2015.12.09 Azam Andalib & Seyed Morteza Babamir A New Approach for Test Case Generation by Discrete Particle Swarm Optimization Algorithm 2014 Proceedings of the 22nd Iranian Conference on Electrical Engineering (ICEE '14), pp. 1180-1185, Tehran Iran, 20-22 May   Inproceedings
    Abstract: The increasing complexity of software has reduced the efficiency of common software testing methods and created the need for new, optimal methods of producing test cases (TCs) that cover a high percentage of the target program and find existing errors. Thus, the production of TCs is today considered an important aim of software testing methods. Particle Swarm Optimization (PSO) is an intelligent technique based on the collective movement of particles, inspired by the social behaviour of flocks of birds and schools of fish. After fully introducing this algorithm and the reasons for using PSO, the present study proposes a method for the automatic production of TCs such that the highest code coverage is achieved with the minimum number of TCs. For better analysis of the results, a program was investigated as a case study with the proposed method. Since evolutionary structures such as the genetic algorithm (GA) have long been used to achieve high code coverage, the results are compared with a GA to demonstrate the optimization of the proposed method.
    BibTeX:
    @inproceedings{AndalibB14,
      author = {Azam Andalib and Seyed Morteza Babamir},
      title = {A New Approach for Test Case Generation by Discrete Particle Swarm Optimization Algorithm},
      booktitle = {Proceedings of the 22nd Iranian Conference on Electrical Engineering (ICEE '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {1180-1185},
      address = {Tehran, Iran},
      month = {20-22 May},
      doi = {http://dx.doi.org/10.1109/IranianCEE.2014.6999714}
    }
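    Code sketch: A bare-bones global-best PSO in Python over a continuous input domain, minimising a toy fitness; the paper works with a discrete PSO variant and its own coverage-based fitness, so the update rule and constants here are illustrative only.
    import random

    def pso_minimize(f, bounds, swarm=20, iters=200, w=0.7, c1=1.5, c2=1.5):
        dim = len(bounds)
        pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(swarm)]
        vel = [[0.0] * dim for _ in range(swarm)]
        pbest = [p[:] for p in pos]                 # personal bests
        gbest = min(pbest, key=f)[:]                # global best
        for _ in range(iters):
            for i in range(swarm):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest[d] - pos[i][d]))
                    lo, hi = bounds[d]
                    pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
                if f(pos[i]) < f(pbest[i]):
                    pbest[i] = pos[i][:]
                    if f(pos[i]) < f(gbest):
                        gbest = pos[i][:]
        return gbest

    toy_fitness = lambda v: abs(v[0] - 10) + abs(v[1] + 3)   # 0 is optimal
    print(pso_minimize(toy_fitness, [(-50, 50), (-50, 50)]))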
    					
    2009.11.02 Felipe Colares, Jerffeson Teixeira de Souza, Rafael Augusto Ferreira do Carmo, Clarindo Pádua & Geraldo Robson Mateus A New Approach to the Software Release Planning 2009 Proceedings of the 23rd Brazilian Symposium on Software Engineering (SBES '09), pp. 207-215, Fortaleza Ceará Brazil, 5-9 October   Inproceedings Requirements/Specifications
    Abstract: This paper presents a new approach to the software release planning problem. This approach presents a mathematical formulation that takes into account several aspects important to this problem, such as stakeholders' satisfaction, costs, deadlines, available resources, required effort, risk management and requirements interdependencies. Results from an experimental data set using a well-known metaheuristic show evidence that the proposed approach is very effective at modelling the features of this problem. Additionally, a comparison with human performance shows that the proposed approach outperforms human-based solutions.
    BibTeX:
    @inproceedings{ColaresSCPM09,
      author = {Felipe Colares and Jerffeson Teixeira de Souza and Rafael Augusto Ferreira do Carmo and Clarindo Pádua and Geraldo Robson Mateus},
      title = {A New Approach to the Software Release Planning},
      booktitle = {Proceedings of the 23rd Brazilian Symposium on Software Engineering (SBES '09)},
      publisher = {IEEE},
      year = {2009},
      pages = {207-215},
      address = {Fortaleza, Ceará, Brazil},
      month = {5-9 October},
      doi = {http://dx.doi.org/10.1109/SBES.2009.23}
    }
    					
    2014.08.14 Shuai Wang, Shaukat Ali & Arnaud Gotlieb A New Learning Mechanism for Resolving Inconsistencies in Using Cooperative Co-evolution Model 2014 Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14), Vol. 8636, pp. 215-221, Fortaleza Brazil, 26-29 August   Inproceedings
    Abstract: Many aspects of Software Engineering problems lend themselves to a coevolutionary model of optimization, because software systems are complex and rich in potential populations that could be productively coevolved. Most of these aspects can be coevolved to work better together in a cooperative manner. Compared with the simple and commonly used predator-prey co-evolution model, the cooperative co-evolution model has more challenges that need to be addressed. One of these challenges is how to resolve the inconsistencies between two populations in order to make them work together with no conflict. In this position paper, we propose a new learning mechanism based on the Baldwin effect, and introduce learning genetic operators to address the inconsistency issues. A toy example in the field of automated architectural synthesis is provided to describe the use of our proposed approach.
    BibTeX:
    @inproceedings{WangAG14,
      author = {Shuai Wang and Shaukat Ali and Arnaud Gotlieb},
      title = {A New Learning Mechanism for Resolving Inconsistencies in Using Cooperative Co-evolution Model},
      booktitle = {Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14)},
      publisher = {Springer},
      year = {2014},
      volume = {8636},
      pages = {215-221},
      address = {Fortaleza, Brazil},
      month = {26-29 August},
      doi = {http://dx.doi.org/10.1007/978-3-319-09940-8_14}
    }
    					
    2015.12.08 Ciyong Chen, Xiaofeng Xu, Yan Chen, Xiaochao Li & Donghui Guo A New Method of Test Data Generation for Branch Coverage in Software Testing based on EPDG and Genetic Algorithm 2009 Proceedings of the 3rd International Conference on Anti-counterfeiting, Security, and Identification in Communication (ASID '09), pp. 307-310, Hong Kong China, 20-22 August   Inproceedings Testing and Debugging
    Abstract: A new method called EPDG-GA, which utilizes the edge partitions dominator graph (EPDG) and a genetic algorithm (GA) for branch coverage testing, is presented in this paper. First, a set of critical branches (CBs) is obtained by analyzing the EPDG of the tested program, where covering all the CBs implies covering all the branches of the control flow graph (CFG). Then, the fitness functions are instrumented in the right position by analyzing the pre-dominator tree (PreDT), and two metrics are developed to prioritize the CBs. A Coverage-Table is established to record the CB information and keep track of whether a branch has been executed. The GA is used to generate test data that cover the CBs so as to cover all the branches. The comparison results show that this approach is more efficient than the random testing approach.
    BibTeX:
    @inproceedings{ChenXCLG09,
      author = {Ciyong Chen and Xiaofeng Xu and Yan Chen and Xiaochao Li and Donghui Guo},
      title = {A New Method of Test Data Generation for Branch Coverage in Software Testing based on EPDG and Genetic Algorithm},
      booktitle = {Proceedings of the 3rd International Conference on Anti-counterfeiting, Security, and Identification in Communication (ASID '09)},
      publisher = {IEEE},
      year = {2009},
      pages = {307-310},
      address = {Hong Kong, China},
      month = {20-22 August},
      doi = {http://dx.doi.org/10.1109/ICASID.2009.5276897}
    }
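    Code sketch: Fitness functions for branch-coverage GAs are commonly built from an approach level plus a normalised branch distance; the sketch below shows that widely used scheme, which is assumed here for illustration rather than taken from the paper's instrumentation.
    def normalize(d):
        # Map a raw branch distance into [0, 1).
        return d / (d + 1.0)

    def branch_fitness(approach_level, branch_distance):
        # 0 means the target branch was taken.
        return approach_level + normalize(branch_distance)

    def distance_x_le_y(x, y):
        # Branch distance for the predicate x <= y (0 when satisfied).
        return 0.0 if x <= y else float(x - y)

    # Example: execution diverged two control nodes away from the target,
    # where the predicate 7 <= 3 failed by 4.
    print(branch_fitness(2, distance_x_le_y(7, 3)))   # 2 + 4/5 = 2.8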
    					
    2010.11.23 Márcia Maria Albuquerque Brasil, Fabrício Gomes de Freitas, Thiago Gomes Nepomuceno da Silva, Jerffeson Teixeira de Souza & Mariela Inés Cortés A New Multiobjective Optimization Approach for Release Planning in Iterative and Incremental Software Development (in Portuguese) 2010 Proceedings of the Brazilian Workshop on Optimization in Software Engineering (WOES '10), Salvador Brazil, 30-30 September   Inproceedings Requirements/Specifications
    Abstract: Agile software development processes are designed to quickly create useful software. In this context, the iterative and incremental development approach stands out as one characterized by small, frequent system releases. The release planning aims at allocating the requirements to releases such that all restrictions are respected. Efficient planning represents an important and complex activity. This paper presents a multiobjective optimization-based approach to the release planning problem taking into account customer satisfaction and managing risks appropriately. An empirical evaluation shows the feasibility of the proposed formulation.
    BibTeX:
    @inproceedings{BrasildddC10,
      author = {Márcia Maria Albuquerque Brasil and Fabrício Gomes de Freitas and Thiago Gomes Nepomuceno da Silva and Jerffeson Teixeira de Souza and Mariela Inés Cortés},
      title = {A New Multiobjective Optimization Approach for Release Planning in Iterative and Incremental Software Development (in Portuguese)},
      booktitle = {Proceedings of the Brazilian Workshop on Optimization in Software Engineering (WOES '10)},
      year = {2010},
      address = {Salvador, Brazil},
      month = {30-30 September},
      url = {http://www.uniriotec.br/~marcio.barros/woes2010/Paper06.pdf}
    }
    					
    2007.12.02 Mark Harman, Robert M. Hierons & Mark Proctor A New Representation and Crossover Operator for Search-based Optimization of Software Modularization 2002 Proceedings of the 2002 Conference on Genetic and Evolutionary Computation (GECCO '02), pp. 1351-1358, New York USA, 9-13 July   Inproceedings Distribution and Maintenance
    Abstract: This paper reports experiments with automated software modularization and remodularization, using search-based algorithms, the fitness functions of which are derived from measures of module granularity, cohesion and coupling. The paper introduces a new representation and crossover operator for this problem and reports initial results based on simple component topologies.
    BibTeX:
    @inproceedings{HarmanHP02,
      author = {Mark Harman and Robert M. Hierons and Mark Proctor},
      title = {A New Representation and Crossover Operator for Search-based Optimization of Software Modularization},
      booktitle = {Proceedings of the 2002 Conference on Genetic and Evolutionary Computation (GECCO '02)},
      publisher = {Morgan Kaufmann Publishers},
      year = {2002},
      pages = {1351-1358},
      address = {New York, USA},
      month = {9-13 July},
      url = {http://portal.acm.org/citation.cfm?id=683110}
    }
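    Code sketch: For contrast with the paper's new representation, here is the conventional baseline it improves on: a flat module-assignment chromosome with one-point crossover and a cohesion-minus-coupling fitness. The dependency data and parameters are made up for illustration.
    import random

    # Representation: chrom[i] is the module assigned to component i.
    def fitness(chrom, deps):
        # Reward intra-module edges (cohesion), penalise inter-module ones.
        intra = sum(1 for a, b in deps if chrom[a] == chrom[b])
        return intra - (len(deps) - intra)

    def one_point_crossover(p1, p2):
        cut = random.randrange(1, len(p1))
        return p1[:cut] + p2[cut:]

    def mutate(chrom, n_modules, rate=0.1):
        return [random.randrange(n_modules) if random.random() < rate else g
                for g in chrom]

    deps = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)]
    parents = [[random.randrange(2) for _ in range(6)] for _ in range(2)]
    child = mutate(one_point_crossover(*parents), n_modules=2)
    print(child, fitness(child, deps))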
    					
    2011.05.19 Ahmed S. Ghiduk A New Software Data-Flow Testing Approach via Ant Colony Algorithms 2010 Universal Journal of Computer Science and Engineering Technology, Vol. 1(1), pp. 64-72, October   Article Testing and Debugging
    Abstract: Search-based optimization techniques (e.g., hill climbing, simulated annealing, and genetic algorithms) have been applied to a wide variety of software engineering activities including cost estimation, the next release problem, and test generation. Several search-based test generation techniques have been developed. These techniques have focused on finding suites of test data to satisfy a number of control-flow or data-flow testing criteria. Genetic algorithms have been the most widely employed search-based optimization technique in software testing issues. Recently, many novel search-based optimization techniques have been developed, such as Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), Artificial Immune Systems (AIS), and Bees Colony Optimization. ACO and AIS have been employed only in the area of control-flow testing of programs. This paper aims at employing ACO algorithms in software data-flow testing. The paper presents an ant colony optimization based approach for generating a set of optimal paths to cover all definition-use associations (du-pairs) in the program under test. Then, this approach uses ant colony optimization to generate a suite of test data satisfying the generated set of paths. In addition, the paper introduces a case study to illustrate our approach.
    BibTeX:
    @article{Ghiduk10b,
      author = {Ahmed S. Ghiduk},
      title = {A New Software Data-Flow Testing Approach via Ant Colony Algorithms},
      journal = {Universal Journal of Computer Science and Engineering Technology},
      year = {2010},
      volume = {1},
      number = {1},
      pages = {64-72},
      month = {October},
      url = {http://www.unicse.org/october2010/A%20new%20software%20data-flow%20testing%20approach%20via%20ant%20colony%20algorithms.pdf}
    }
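    Code sketch: The two-stage idea (find paths covering def-use pairs, then cover the paths) can be illustrated by its first stage: ants walk a control-flow graph, and pheromone is reinforced on paths that cover many still-uncovered du-pairs. The coverage test and all constants below are simplifying assumptions, not the paper's algorithm.
    import random

    def aco_paths(graph, start, du_pairs, ants=10, iters=50, rho=0.1, q=1.0):
        tau = {(u, v): 1.0 for u in graph for v in graph[u]}   # pheromone
        uncovered, chosen = set(du_pairs), []

        def covers(path, pair):
            # A du-pair (d, u) is covered if the use u appears after the def d.
            d, u = pair
            return d in path and u in path[path.index(d):]

        for _ in range(iters):
            if not uncovered:
                break
            best_path, best_gain = None, 0
            for _ in range(ants):
                node, path = start, [start]
                while graph.get(node) and len(path) < 50:
                    node = random.choices(graph[node],
                                          weights=[tau[(node, v)]
                                                   for v in graph[node]])[0]
                    path.append(node)
                gain = sum(1 for pair in uncovered if covers(path, pair))
                if gain > best_gain:
                    best_path, best_gain = path, gain
            for e in tau:                                      # evaporation
                tau[e] *= 1 - rho
            if best_path:
                for e in zip(best_path, best_path[1:]):        # reinforcement
                    tau[e] += q * best_gain
                chosen.append(best_path)
                uncovered = {p for p in uncovered if not covers(best_path, p)}
        return chosen

    graph = {"s": ["a", "b"], "a": ["c"], "b": ["c"], "c": []}
    print(aco_paths(graph, "s", {("a", "c"), ("b", "c")}))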
    					
    2014.02.21 Márcio de Oliveira Barros An Experimental Evaluation of the Importance of Randomness in Hill Climbing Searches applied to Software Engineering Problems 2014 Empirical Software Engineering   Article
    Abstract: Random number generators are a core component of heuristic search algorithms. They are used to build candidate solutions and reduce bias while transforming these solutions during the search. Despite their usefulness, random numbers also have drawbacks, as one cannot guarantee that all portions of the search space are covered by the search and must run an algorithm many times to statistically assess its behavior. Our objective is to determine whether deterministic quasi-random sequences can be used as an alternative to pseudo-random numbers in feeding "randomness" into Hill Climbing searches addressing Software Engineering problems. We have designed and executed three experimental studies in which a Hill Climbing search was used to find solutions for two Software Engineering problems: software module clustering and requirement selection. The algorithm was executed using both pseudo-random numbers and three distinct quasi-random sequences (Faure, Halton, and Sobol). The software clustering problem was evaluated for 32 real-world instances and the requirement selection problem was addressed using 15 instances reused from previous research works. The experimental studies were chained to allow varying as few experimental factors as possible between any given study and its subsequent one. Results found by searches powered by distinct quasi-random sequences were compared to those produced by the pseudo-random search on a per-instance basis. The comparison evaluated search efficiency (processing time required to run the search) and effectiveness (quality of results produced by the search). Contrary to previous findings observed in the context of other heuristic search algorithms, we found evidence that quasi-random sequences cannot regularly outperform pseudo-random numbers in Hill Climbing searches. Detailed statistical analysis is provided to support the evidence favoring pseudo-random numbers.
    BibTeX:
    @article{Barros14,
      author = {Márcio de Oliveira Barros},
      title = {An Experimental Evaluation of the Importance of Randomness in Hill Climbing Searches applied to Software Engineering Problems},
      journal = {Empirical Software Engineering},
      year = {2014},
      doi = {http://dx.doi.org/10.1007/s10664-013-9294-4}
    }
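    Code sketch: The experimental contrast can be reproduced in miniature: a restarting hill climber whose start points come either from a pseudo-random generator or from a quasi-random Halton sequence. The unit-box domain, step size and the choice of Halton (rather than Faure or Sobol) are assumptions made for brevity.
    import random

    def halton(index, base):
        # index-th element (1-based) of the Halton sequence in the given base.
        f, r = 1.0, 0.0
        while index > 0:
            f /= base
            r += f * (index % base)
            index //= base
        return r

    def hill_climb(f, start, step=0.05, iters=500):
        x = list(start)
        for _ in range(iters):
            neighbour = [xi + random.uniform(-step, step) for xi in x]
            if f(neighbour) < f(x):
                x = neighbour
        return x

    def restarts(f, dim, n_restarts, quasi=False):
        primes = [2, 3, 5, 7, 11, 13][:dim]   # one base per dimension (dim <= 6)
        best = None
        for k in range(1, n_restarts + 1):
            start = ([halton(k, p) for p in primes] if quasi
                     else [random.random() for _ in range(dim)])
            cand = hill_climb(f, start)
            if best is None or f(cand) < f(best):
                best = cand
        return best

    sphere = lambda v: sum((x - 0.7) ** 2 for x in v)
    print(restarts(sphere, dim=2, n_restarts=10, quasi=True))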
    					
    2009.01.21 Hao Zhong, Lu Zhang & Hong Mei An Experimental Study of Four Typical Test Suite Reduction Techniques 2008 Information and Software Technology, Vol. 50(6), pp. 534-546, May   Article Testing and Debugging
    Abstract: In software development, developers often rely on testing to reveal bugs. Typically, a test suite should be prepared before initial testing, and new test cases may be added to the test suite during the whole testing process. This usually causes the test suite to contain some redundancy. In other words, a subset of the test suite (called the representative set) may still satisfy all the test objectives. As the redundancy can increase the cost of executing the test suite, quite a few test suite reduction techniques have been proposed in spite of the NP-completeness of the general problem of finding the optimal representative set of a test suite. In the literature, there have been some experimental studies of test suite reduction techniques, but the limitations of these experimental studies are quite obvious. Recently proposed techniques are not experimentally compared against each other, and reported experiments are mainly based on small programs or even simulation data. This paper presents a new experimental study of four typical test suite reduction techniques: Harrold et al.'s heuristic, and three more recently proposed techniques, namely Chen and Lau's GRE heuristic, Mansour and El-Fakin's genetic algorithm-based approach, and Black et al.'s ILP-based approach. Based on the results of this experimental study, we also provide a guideline for choosing the appropriate test suite reduction technique and some insights into why the techniques vary in effectiveness and efficiency.
    BibTeX:
    @article{ZhongZM08,
      author = {Hao Zhong and Lu Zhang and Hong Mei},
      title = {An Experimental Study of Four Typical Test Suite Reduction Techniques},
      journal = {Information and Software Technology},
      year = {2008},
      volume = {50},
      number = {6},
      pages = {534-546},
      month = {May},
      doi = {http://dx.doi.org/10.1016/j.infsof.2007.06.003}
    }
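    Code sketch: A simple greedy baseline in the spirit of the compared heuristics: repeatedly keep the test case that satisfies the most still-unsatisfied objectives. This is a generic greedy representative-set sketch, not a faithful rendering of any of the four studied techniques.
    def greedy_reduce(suite):
        # suite maps a test id to the set of objectives it satisfies.
        uncovered = set().union(*suite.values())
        reduced = []
        while uncovered:
            best = max(suite, key=lambda t: len(suite[t] & uncovered))
            reduced.append(best)
            uncovered -= suite[best]
        return reduced

    suite = {"t1": {"r1", "r2"}, "t2": {"r2", "r3"}, "t3": {"r1", "r3", "r4"}}
    print(greedy_reduce(suite))   # e.g. ['t3', 't1'] covers all objectives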
    					
    2013.06.28 Márcio de Oliveira Barros An Experimental Study on Incremental Search-Based Software Engineering 2013 Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13), Vol. 8084, pp. 34-49, St. Petersburg Russia, 24-26 August   Inproceedings
    Abstract: Since its inception, SBSE has supported many different software engineering activities, including some which aim at improving or correcting existing systems. In such cases, search results may propose changes to the organization of the systems. Extensive changes may be inconvenient for developers, who maintain a mental model about the state of the system and use this knowledge to be productive in their daily business. Thus, a balance between optimization objectives and their impact on system structure may be pursued. In this paper, we introduce incremental search-based software engineering, an extension to SBSE which suggests optimizing a system through a sequence of restricted search turns, each limited to a maximum number of changes, so that developers can become aware of these changes before a new turn is enacted. We report on a study addressing the cost of breaking a search into a sequence of restricted turns and conclude that, at least for the selected problem and instances, there is indeed a huge penalty in doing so.
    BibTeX:
    @inproceedings{Barros13,
      author = {Márcio de Oliveira Barros},
      title = {An Experimental Study on Incremental Search-Based Software Engineering},
      booktitle = {Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13)},
      publisher = {Springer},
      year = {2013},
      volume = {8084},
      pages = {34-49},
      address = {St. Petersburg, Russia},
      month = {24-26 August},
      doi = {http://dx.doi.org/10.1007/978-3-642-39742-4_5}
    }
    					
    2008.04.12 Robert Feldt An Experiment on using Genetic Programming to Develop Multiple Diverse Software Variants 1998 (98-13), Gothenburg Sweden, September   Techreport Design Tools and Techniques
    Abstract: Software fault tolerance schemes often employ multiple software variants developed to meet the same specification. If the variants fail independently of each other, they can be combined to give high levels of reliability. While design diversity is a means to develop these variants, it has been questioned because it increases development costs and because reliability gains are limited by common-mode failures. We propose the use of a genetic programming algorithm to develop such variants. We have developed an environment to generate programs for a controller in an aircraft arrestment system. Eighty programs have been developed and tested on 10000 test cases. The experimental data shows that failure diversity is achieved, but its levels are limited for the top-performing programs.
    BibTeX:
    @techreport{Feldt98b,
      author = {Robert Feldt},
      title = {An Experiment on using Genetic Programming to Develop Multiple Diverse Software Variants},
      year = {1998},
      number = {98-13},
      address = {Gothenburg, Sweden},
      month = {September}
    }
    					
    2012.08.22 Phil McMinn An Identification of Program Factors that Impact Crossover Performance in Evolutionary Test Input Generation for the Branch Coverage of C Programs 2013 Information and Software Technology, Vol. 55(1), pp. 153-172, January   Article Testing and Debugging
    Abstract: Context: Genetic Algorithms are a popular search-based optimisation technique for automatically generating test inputs for structural coverage of a program, but there has been little work investigating the class of programs for which they will perform well. Objective: This paper presents and evaluates a series of program factors that are hypothesised to affect the performance of crossover, a key search operator in Genetic Algorithms, when searching for inputs that cover the branching structure of a C function. Method: Each program factor is evaluated with example programs using Genetic Algorithms with and without crossover. Experiments are also performed to test whether crossover is acting as a macro-mutation operator rather than usefully recombining the component parts of input vectors when searching for test data. Results: The results show that crossover has an impact for each of the program factors studied. Conclusion: It is concluded that crossover plays an increasingly important role for programs with large, multi-dimensional input spaces, where the target structure's input condition breaks down into independent sub-problems for which solutions may be sought in parallel. Furthermore, it is found that crossover can be inhibited when the program under test is unstructured or involves nested conditional statements; and when intermediate variables are used in branching conditions, as opposed to direct input values.
    BibTeX:
    @article{McMinn13,
      author = {Phil McMinn},
      title = {An Identification of Program Factors that Impact Crossover Performance in Evolutionary Test Input Generation for the Branch Coverage of C Programs},
      journal = {Information and Software Technology},
      year = {2013},
      volume = {55},
      number = {1},
      pages = {153-172},
      month = {January},
      doi = {http://dx.doi.org/10.1016/j.infsof.2012.03.010}
    }
    					
    2015.12.09 Jianfeng Wang & Shouda Jiang An Improved Algorithm for Test Data Generation Based on Particle Swarm Optimization 2011 Proceedings of the 1st International Conference on Instrumentation, Measurement, Computer, Communication and Control, pp. 404-407, Beijing China, 21-23 October   Inproceedings Testing and Debugging
    Abstract: Test case generation is one of the key issues in combinatorial testing. In this paper, a new algorithm for test data generation based on Particle Swarm Optimization (PSO) is presented. Building on PSO, an optimization base and extended parameters are introduced. The number of currently output test data is adjusted dynamically according to the previously generated data. The efficiency of test data generation is improved effectively while still ensuring the optimality of the generated data.
    BibTeX:
    @inproceedings{WangJ11,
      author = {Jianfeng Wang and Shouda Jiang},
      title = {An Improved Algorithm for Test Data Generation Based on Particle Swarm Optimization},
      booktitle = {Proceedings of the 1st International Conference on Instrumentation, Measurement, Computer, Communication and Control},
      publisher = {IEEE},
      year = {2011},
      pages = {404-407},
      address = {Beijing, China},
      month = {21-23 October},
      doi = {http://dx.doi.org/10.1109/IMCCC.2011.108}
    }
    					
    2015.11.06 Mahmoud A. Bokhari, Thorsten Bormer & Markus Wagner An Improved Beam-Search for the Test Case Generation for Formal Verification Systems 2015 Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE '15), pp. 77-92, Bergamo Italy, 5-7 September   Inproceedings
    Abstract: The correctness of software verification systems is vital, since they are used to confirm that safety- and security-critical software systems satisfy their requirements. Modern verification systems need to understand their target software, which can be done by using an axiomatization base. It captures the semantics of the programming language used for writing the target software. To ensure their correctness, it is necessary to validate both parts: the implementation and the axiomatization base. As a result, it is essential to increase the axiom coverage in order to verify its correctness. However, creating test cases manually is a time-consuming and difficult task even for verification engineers. We present a beam search approach that automatically generates test cases by modifying existing ones, as well as a comparison between axiomatization coverage and code coverage. Our results show that the overall coverage of the existing test suite can be improved by more than 20%. In addition, our approach explores the search space more efficiently than existing ones.
    BibTeX:
    @inproceedings{BokhariBW15,
      author = {Mahmoud A. Bokhari and Thorsten Bormer and Markus Wagner},
      title = {An Improved Beam-Search for the Test Case Generation for Formal Verification Systems},
      booktitle = {Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE '15)},
      publisher = {Springer},
      year = {2015},
      pages = {77-92},
      address = {Bergamo, Italy},
      month = {5-7 September},
      doi = {http://dx.doi.org/10.1007/978-3-319-22183-0_6}
    }
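    Code sketch: The search itself reduces to a standard beam search over test-case mutants scored by coverage; the toy mutation and coverage functions below are placeholders, since the paper's operators act on verification-system test cases.
    import random

    def beam_search(seed, mutate_fn, coverage_fn, width=5, depth=10):
        # Keep the `width` best mutants per level, expand for `depth` levels.
        beam = [seed]
        best = seed
        for _ in range(depth):
            candidates = [m for t in beam for m in mutate_fn(t)]
            if not candidates:
                break
            beam = sorted(candidates, key=coverage_fn, reverse=True)[:width]
            if coverage_fn(beam[0]) > coverage_fn(best):
                best = beam[0]
        return best

    # Toy setting: a "test case" is an integer vector; "coverage" counts
    # distinct remainders modulo 7.
    mutants = lambda t: [[x + random.choice((-1, 1)) for x in t] for _ in range(8)]
    coverage = lambda t: len({x % 7 for x in t})
    print(beam_search([0, 0, 0], mutants, coverage))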
    					
    2009.04.01 Sen Su, Chengwen Zhang & Junliang Chen An Improved Genetic Algorithm for Web Services Selection 2007 Proceedings of the 7th IFIP WG 6.1 International Conference on Distributed Applications and Interoperable Systems (DAIS '07), Vol. 4531, pp. 284-295, Paphos Cyprus, 6-8 June   Inproceedings Design Tools and Techniques
    Abstract: An improved genetic algorithm is presented to select optimal composite web service plans from a large number of candidate plans on the basis of global Quality-of-Service (QoS) constraints. It is based on a relation matrix coding scheme for the genome. In this genetic algorithm, a special fitness function and a mutation policy are proposed on the basis of the relation matrix coding scheme. They enhance the convergence of the genetic algorithm and obtain better composite service plans because they suit web services selection very well. The simulation results on QoS-aware web services selection show that the improved genetic algorithm can effectively find a composite service plan that satisfies the global QoS requirements, and that the convergence of the genetic algorithm is much improved.
    BibTeX:
    @inproceedings{SuZC07,
      author = {Sen Su and Chengwen Zhang and Junliang Chen},
      title = {An Improved Genetic Algorithm for Web Services Selection},
      booktitle = {Proceedings of the 7th IFIP WG 6.1 International Conference on Distributed Applications and Interoperable Systems (DAIS '07)},
      publisher = {Springer},
      year = {2007},
      volume = {4531},
      pages = {284-295},
      address = {Paphos, Cyprus},
      month = {6-8 June},
      doi = {http://dx.doi.org/10.1007/978-3-540-72883-2_21}
    }
    					
    2014.11.25 Ali Aburas & Alex Groce An Improved Memetic Algorithm with Method Dependence Relations (MAMDR) 2014 Proceedings of the 14th International Conference on Quality Software (QSIC '14), pp. 11-20, Allen TX USA, 2-3 October   Inproceedings Testing and Debugging
    Abstract: Search-based approaches are successfully used for generating unit tests for object-oriented programs in Java. However, these approaches may struggle to generate sequences of method calls with specific values to achieve high coverage, due to the large size of the search space. This paper proposes a memetic algorithm (MA) approach in which static analysis is used to identify method dependence relations (MDR) based on field accesses. This method dependence information is employed to reduce the search space and to guide the search towards regions that lead to full (or at least high) structural coverage. Our approach, MAMDR, combines both a genetic algorithm (GA) and Hill Climbing (HC) to generate test data for Java programs. The former is used to produce test cases that maximize the branch coverage of the CUT, while minimizing the length of each test case. The latter is used to target branches left uncovered in the preceding search phase, using static information that guides the search to generate sequences of method calls and values that could cover target branches. We compare MAMDR with pure random testing, a well-known search-based approach (EvoSuite), and a simple MA on several open source projects and classes, and show that the combination of MA and MDR is effective.
    BibTeX:
    @inproceedings{AburasG14,
      author = {Ali Aburas and Alex Groce},
      title = {An Improved Memetic Algorithm with Method Dependence Relations (MAMDR)},
      booktitle = {Proceedings of the 14th International Conference on Quality Software (QSIC '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {11-20},
      address = {Allen, TX, USA},
      month = {2-3 October},
      doi = {http://dx.doi.org/10.1109/QSIC.2014.12}
    }
    					
    2009.05.14 Brady J. Garvin, Myra B. Cohen & Matthew B. Dwyer An Improved Meta-Heuristic Search for Constrained Interaction Testing 2009 Proceedings of the 1st International Symposium on Search Based Software Engineering (SSBSE '09), pp. 13-22, Cumberland Lodge Windsor UK, 13-15 May   Inproceedings Testing and Debugging
    Abstract: Combinatorial interaction testing (CIT) is a cost-effective sampling technique for discovering interaction faults in highly configurable systems. Recent work with greedy CIT algorithms efficiently supports constraints on the features that can coexist in a configuration. But when testing a single system configuration is expensive, greedy techniques perform worse than meta-heuristic algorithms because they produce larger samples. Unfortunately, current meta-heuristic algorithms are inefficient when constraints are present. We investigate the sources of inefficiency, focusing on simulated annealing, a well-studied meta-heuristic algorithm. From our findings we propose changes to improve performance, including a reorganized search space based on the CIT problem structure. Our empirical evaluation demonstrates that the optimizations reduce run-time by three orders of magnitude and yield smaller samples. Moreover, on real problems the new version compares favorably with greedy algorithms.
    BibTeX:
    @inproceedings{GarvinCD09,
      author = {Brady J. Garvin and Myra B. Cohen and Matthew B. Dwyer},
      title = {An Improved Meta-Heuristic Search for Constrained Interaction Testing},
      booktitle = {Proceedings of the 1st International Symposium on Search Based Software Engineering (SSBSE '09)},
      publisher = {IEEE},
      year = {2009},
      pages = {13-22},
      address = {Cumberland Lodge, Windsor, UK},
      month = {13-15 May},
      doi = {http://dx.doi.org/10.1109/SSBSE.2009.25}
    }
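    Code sketch: The meta-heuristic under study is simulated annealing; its generic skeleton is shown below with callable neighbour and cost functions, while the paper's actual contribution (the constraint handling and reorganised search space) is not reproduced here.
    import math, random

    def simulated_annealing(init, neighbour, cost, t0=1.0, alpha=0.999,
                            iters=20000):
        # Accept worse moves with probability exp(-delta / t); cool geometrically.
        cur, cur_cost = init, cost(init)
        best, best_cost = cur, cur_cost
        t = t0
        for _ in range(iters):
            nxt = neighbour(cur)
            nxt_cost = cost(nxt)
            if (nxt_cost <= cur_cost
                    or random.random() < math.exp((cur_cost - nxt_cost) / t)):
                cur, cur_cost = nxt, nxt_cost
                if cur_cost < best_cost:
                    best, best_cost = cur, cur_cost
            t *= alpha
        return best

    # Toy usage: minimise |x - 17| over the integers with +/-1 moves.
    print(simulated_annealing(0, lambda x: x + random.choice((-1, 1)),
                              lambda x: abs(x - 17)))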
    					
    2012.03.07 Seon Yeol Lee, Hyun Jae Choi, Yeon Ji Jeong, Tae Ho Kim, Heung Seok Chae & Carl K. Chang An Improved Technique of Fitness Evaluation for Evolutionary Testing 2011 Proceedings of the 35th IEEE Annual Computer Software and Applications Conference Workshops (COMPSACW '11), pp. 190-193, Munich Germany, 18-22 July   Inproceedings Testing and Debugging
    Abstract: Many search-based software testing (SBST) techniques have been proposed, and experiments show that they can generate effective test data. However, the meta-heuristic search (MHS) algorithms in these techniques incur considerable computation cost to evaluate fitness values, which results in a huge test case generation cost. In this paper, we propose a more effective fitness evaluation technique based on a Fitness Evaluation Program (FEP). An FEP, derived from a path constraint of the SUT, is introduced as a special program for evaluating fitness values. We implement a test generation tool, named ConGA, and apply it to generate test cases for C programs in order to evaluate the efficiency of the FEP-based test case generation technique. The experiments show that the proposed technique can significantly reduce test data generation time on average.
    BibTeX:
    @inproceedings{LeeCJKCC11,
      author = {Seon Yeol Lee and Hyun Jae Choi and Yeon Ji Jeong and Tae Ho Kim and Heung Seok Chae and Carl K. Chang},
      title = {An Improved Technique of Fitness Evaluation for Evolutionary Testing},
      booktitle = {Proceedings of the 35th IEEE Annual Computer Software and Applications Conference Workshops (COMPSACW '11)},
      publisher = {IEEE},
      year = {2011},
      pages = {190-193},
      address = {Munich, Germany},
      month = {18-22 July},
      doi = {http://dx.doi.org/10.1109/COMPSACW.2011.41}
    }
    					
    2014.03.26 Raluca Lefticaru & Florentin Ipate An Improved Test Generation Approach from Extended Finite State Machines using Genetic Algorithms 2012 Proceedings of the 10th International Conference on Software Engineering and Formal Methods (SEFM '12), pp. 293-307, Thessaloniki Greece, 1-5 October   Inproceedings Testing and Debugging
    Abstract: This paper presents a new approach to test generation from extended finite state machines using genetic algorithms, by proposing a new fitness function for path data generation. The fitness function that guides the search is crucial for the success of a genetic algorithm; an improvement in the fitness function will reduce the duration of the generation process and increase the success chances of the search algorithm. The paper performs a comparison between the newly proposed fitness function and the most widely used function in the literature. The experimental results show that, for more complex paths that can be logically decomposed into independent sub-paths, the new function outperforms the previously proposed function and the difference is statistically significant.
    BibTeX:
    @inproceedings{LefticaruI12,
      author = {Raluca Lefticaru and Florentin Ipate},
      title = {An Improved Test Generation Approach from Extended Finite State Machines using Genetic Algorithms},
      booktitle = {Proceedings of the 10th International Conference on Software Engineering and Formal Methods (SEFM '12)},
      publisher = {Springer},
      year = {2012},
      pages = {293-307},
      address = {Thessaloniki, Greece},
      month = {1-5 October},
      doi = {http://dx.doi.org/10.1007/978-3-642-33826-7_20}
    }
    					
    2008.08.22 Jarmo T. Alander An Indexed Bibliography of Genetic Algorithms in Testing 2008 (94-1-TEST), Vaasa Finland, May   Techreport General Aspects and Survey
    Abstract: No abstract
    BibTeX:
    @techreport{Alander08,
      author = {Jarmo T. Alander},
      title = {An Indexed Bibliography of Genetic Algorithms in Testing},
      year = {2008},
      number = {94-1-TEST},
      address = {Vaasa, Finland},
      month = {May},
      url = {ftp://ftp.uwasa.fi/cs/report94-1/gaTESTbib.pdf}
    }
    					
    2015.12.08 Bogdan Marculescu, Robert Feldt, Richard Torkar & Simon Poulding An Initial Industrial Evaluation of Interactive Search-based Testing for Embedded Software 2015 Applied Soft Computing, Vol. 29, pp. 26-39, April   Article Testing and Debugging
    Abstract: Search-based software testing promises the ability to generate and evaluate large numbers of test cases at minimal cost. From an industrial perspective, this could enable an increase in product quality without a matching increase in the time and effort required to do so. Search-based software testing, however, is a set of quite complex techniques and approaches that do not immediately translate into a process for use with most companies. For example, even if engineers receive the proper education and training in these new approaches, it can be hard to develop a general fitness function that covers all contingencies. Furthermore, in industrial practice, the knowledge and experience of domain specialists are often key for effective testing and thus for the overall quality of the final software system. But it is not clear how such domain expertise can be utilized in a search-based system. This paper presents an interactive search-based software testing (ISBST) system designed to operate in an industrial setting and with the explicit aim of requiring only limited expertise in software testing. It uses SBST to search for test cases for an industrial software module, while also allowing domain specialists to use their experience and intuition to interactively guide the search. In addition to presenting the system, this paper reports on an evaluation of the system in a company developing a framework for embedded software controllers. A sequence of workshops provided regular feedback and validation for the design and improvement of the ISBST system. Once developed, the ISBST system was evaluated by four electrical and system engineers from the company (the 'domain specialists' in this context), who used the system to develop test cases for a commonly used controller module. As well as evaluating the utility of the ISBST system, the study generated interaction data that were used in subsequent laboratory experimentation to validate the underlying search-based algorithm in the presence of realistic, but repeatable, interactions. The results validate that automated software testing tools in general, and search-based tools in particular, can leverage input from domain specialists while generating tests. Furthermore, the evaluation highlighted benefits of using such an approach to explore areas that the current testing practices do not cover or cover insufficiently.
    BibTeX:
    @article{MarculescuFTP15,
      author = {Bogdan Marculescu and Robert Feldt and Richard Torkar and Simon Poulding},
      title = {An Initial Industrial Evaluation of Interactive Search-based Testing for Embedded Software},
      journal = {Applied Soft Computing},
      year = {2015},
      volume = {29},
      pages = {26-39},
      month = {April},
      doi = {http://dx.doi.org/10.1016/j.asoc.2014.12.025}
    }
    					
    2016.02.03 Nadarajen Veerapen, Gabriela Ochoa, Mark Harman & Edmund K. Burke An Integer Linear Programming approach to the single and bi-objective Next Release Problem 2015 Information and Software Technology, Vol. 65, pp. 1-13, September   Article Requirements/Specifications
    Abstract: Context: The Next Release Problem involves determining the set of requirements to implement in the next release of a software project. When the problem was first formulated in 2001, Integer Linear Programming, an exact method, was found to be impractical because of large execution times. Since then, the problem has mainly been addressed by employing metaheuristic techniques. Objective: In this paper, we investigate whether the single-objective and bi-objective Next Release Problem can be solved exactly and how to better approximate the results when exact resolution is costly. Methods: We revisit Integer Linear Programming for the single-objective version of the problem. In addition, we integrate it within the Epsilon-constraint method to address the bi-objective problem. We also investigate how the Pareto front of the bi-objective problem can be approximated through an anytime deterministic Integer Linear Programming-based algorithm when results are required within strict runtime constraints. Comparisons are carried out against NSGA-II. Experiments are performed on a combination of synthetic and real-world datasets. Findings: We show that a modern Integer Linear Programming solver is now a viable method for this problem. Large single-objective instances and small bi-objective instances can be solved exactly very quickly. On large bi-objective instances, execution times can be significant when calculating the complete Pareto front. However, good approximations can be found effectively. Conclusion: This study suggests that (1) approximation algorithms can be discarded in favor of the exact method for the single-objective instances and small bi-objective instances, (2) the Integer Linear Programming-based approximate algorithm outperforms the NSGA-II genetic approach on large bi-objective instances, and (3) the run times for both methods are low enough to be used in real-world situations.
    BibTeX:
    @article{VeerapenOHB15,
      author = {Nadarajen Veerapen and Gabriela Ochoa and Mark Harman and Edmund K. Burke},
      title = {An Integer Linear Programming approach to the single and bi-objective Next Release Problem},
      journal = {Information and Software Technology},
      year = {2015},
      volume = {65},
      pages = {1-13},
      month = {September},
      doi = {http://dx.doi.org/10.1016/j.infsof.2015.03.008}
    }
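    The single-objective formulation described in this entry can be stated as a 0-1 ILP. Below is a minimal sketch, assuming invented requirement costs, revenues, and budget; PuLP is used here only as a convenient ILP front-end, not the solver the authors evaluated:

    from pulp import LpProblem, LpMaximize, LpVariable, lpSum, value

    costs = [10, 4, 7, 3, 12]     # effort per requirement (illustrative)
    revenues = [15, 8, 9, 4, 20]  # revenue per requirement (illustrative)
    budget = 20                   # effort available for the next release

    prob = LpProblem("next_release", LpMaximize)
    x = [LpVariable(f"x{i}", cat="Binary") for i in range(len(costs))]
    prob += lpSum(r * xi for r, xi in zip(revenues, x))         # maximise total revenue
    prob += lpSum(c * xi for c, xi in zip(costs, x)) <= budget  # respect the effort budget
    prob.solve()
    print([int(value(xi)) for xi in x], value(prob.objective))

    The bi-objective variant studied in the paper can then be handled by the Epsilon-constraint method, i.e. re-solving this model while sweeping a bound on total cost as a second constraint.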
    					
    2011.06.08 Chen Li, Marjan van den Akker, Sjaak Brinkkemper & Guido Diepen An Integrated Approach for Requirement Selection and Scheduling in Software Release Planning 2010 Requirements Engineering, Vol. 15(4), pp. 375-396, November   Article Requirements/Specifications
    Abstract: It is essential for product software companies to decide which requirements should be included in the next release and to make an appropriate time plan of the development project. Compared to the extensive research done on requirement selection, very little research has been performed on time scheduling. In this paper, we introduce two integer linear programming models that integrate time scheduling into software release planning. Given the resource and precedence constraints, our first model provides a schedule for developing the requirements such that the project duration is minimized. Our second model combines requirement selection and scheduling, so that it not only maximizes revenues but also simultaneously calculates an on-time-delivery project schedule. Since requirement dependencies are essential for scheduling the development process, we present a more detailed analysis of these dependencies. Furthermore, we present two mechanisms that facilitate dynamic adaptation for over-estimation or under-estimation of revenues or processing time, one of which includes the Scrum methodology. Finally, several simulations based on real-life data are performed. The results of these simulations indicate that requirement dependency can significantly influence the requirement selection and the corresponding project plan. Moreover, the model for combined requirement selection and scheduling outperforms the sequential selection and scheduling approach in terms of efficiency and on-time delivery.
    BibTeX:
    @article{LiVBD10,
      author = {Chen Li and Marjan van den Akker and Sjaak Brinkkemper and Guido Diepen},
      title = {An Integrated Approach for Requirement Selection and Scheduling in Software Release Planning},
      journal = {Requirements Engineering},
      year = {2010},
      volume = {15},
      number = {4},
      pages = {375-396},
      month = {November},
      doi = {http://dx.doi.org/10.1007/s00766-010-0104-x}
    }
    					
    2012.01.09 Abdul Salam Kalaji, Robert Mark Hierons & Stephen Swift An Integrated Search-based Approach for Automatic Testing from Extended Finite State Machine (EFSM) Models 2011 Information and Software Technology, Vol. 53(12), pp. 1297-1318, December   Article Testing and Debugging
    Abstract: Context: The extended finite state machine (EFSM) is a modelling approach that has been used to represent a wide range of systems. When testing from an EFSM, it is normal to use a test criterion such as transition coverage. Such test criteria are often expressed in terms of transition paths (TPs) through an EFSM. Despite the popularity of EFSMs, testing from an EFSM is difficult for two main reasons: path feasibility and path input sequence generation. The path feasibility problem concerns generating paths that are feasible whereas the path input sequence generation problem is to find an input sequence that can traverse a feasible path. Objective: While search-based approaches have been used in test automation, there has been relatively little work that uses them when testing from an EFSM. In this paper, we propose an integrated search-based approach to automate testing from an EFSM. Method: The approach has two phases, the aim of the first phase being to produce a feasible TP (FTP) while the second phase searches for an input sequence to trigger this TP. The first phase uses a Genetic Algorithm whose fitness function is a TP feasibility metric based on dataflow dependence. The second phase uses a Genetic Algorithm whose fitness function is based on a combination of a branch distance function and approach level. Results: Experimental results using five EFSMs found the first phase to be effective in generating FTPs with a success rate of approximately 96.6%. Furthermore, the proposed input sequence generator could trigger all the generated feasible TPs (success rate = 100%). Conclusion: The results derived from the experiment demonstrate that the proposed approach is effective in automating testing from an EFSM.
    BibTeX:
    @article{KalajiHS11,
      author = {Abdul Salam Kalaji and Robert Mark Hierons and Stephen Swift},
      title = {An Integrated Search-based Approach for Automatic Testing from Extended Finite State Machine (EFSM) Models},
      journal = {Information and Software Technology},
      year = {2011},
      volume = {53},
      number = {12},
      pages = {1297-1318},
      month = {December},
      doi = {http://dx.doi.org/10.1016/j.infsof.2011.06.004}
    }
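    The second-phase fitness in this entry combines a branch distance with the approach level, a standard construction in search-based testing. A minimal sketch, assuming a simple equality predicate and one common normalisation (both illustrative choices, not taken from the paper):

    def branch_distance(a, b):
        """Distance for an `a == b` predicate: 0 when the branch is taken."""
        return abs(a - b)

    def normalise(d):
        """Map a raw branch distance into [0, 1)."""
        return d / (d + 1.0)

    def fitness(approach_level, a, b):
        """Lower is better; 0 means the target branch/transition was reached."""
        return approach_level + normalise(branch_distance(a, b))

    # Two control nodes away from the target, with predicate x == 10 and x = 7:
    print(fitness(2, 7, 10))  # 2.75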
    					
    2014.09.19 Ying Xing, Junfei Huang, Yunzhan Gong, Yawen Wang & Xuzhou Zhang An Intelligent Method Based on State Space Search for Automatic Test Case Generation 2014 Journal of Software, Vol. 9(2), pp. 358-364, February   Article Testing and Debugging
    Abstract: Search-Based Software Testing reformulates testing as search problems so that test case generation can be automated by some chosen search algorithms. This paper reformulates path-oriented test case generation as a state space search problem and proposes an intelligent method, Best-First-Search Branch & Bound, to solve it, utilizing the algorithms of Branch & Bound and Backtrack to search the space of potential test cases and adopting bisection to lower the bounds of the search space. We also propose an optimization method by removing irrelevant variables. Experiments show that the proposed search method generates test cases with promising performance and outperforms some metaheuristic search algorithms.
    BibTeX:
    @article{XingHGWZ14,
      author = {Ying Xing and Junfei Huang and Yunzhan Gong and Yawen Wang and Xuzhou Zhang},
      title = {An Intelligent Method Based on State Space Search for Automatic Test Case Generation},
      journal = {Journal of Software},
      year = {2014},
      volume = {9},
      number = {2},
      pages = {358-364},
      month = {February},
      doi = {http://dx.doi.org/10.4304/jsw.9.2.358-364}
    }
    					
    2008.04.12 Robert Feldt An Interactive Software Development Workbench based on Biomimetic Algorithms 2002 (02-16), Gothenburg Sweden, November   Techreport Design Tools and Techniques
    Abstract: Based on a theory of software development that focuses on the internal models of the developer, this paper presents a design for an interactive workbench to support the iterative refinement of developers' models. The goal of the workbench is to expose unknown features of the software being developed so that the developer can check whether they correspond to his expectations. The workbench employs a biomimetic search system to find tests with novel features. The search system assembles test templates from small pieces of test code and data packaged into a cell. We describe a prototype of the workbench implemented in Ruby and focus on the module used for evolving tests. A case study shows that the prototype supports the development of tests that are diverse, complete, and meaningful to the developer. Furthermore, the system can easily be extended by the developer when he comes up with new test strategies.
    BibTeX:
    @techreport{Feldt02,
      author = {Robert Feldt},
      title = {An Interactive Software Development Workbench based on Biomimetic Algorithms},
      year = {2002},
      number = {02-16},
      address = {Gothenburg, Sweden},
      month = {November},
      url = {http://www.cs.bham.ac.uk/~wbl/biblio/cache/http___drfeldt.googlepages.com_feldt_2002_wise_tech_report.pdf}
    }
    					
    2017.09.13 Christopher Steven Timperley, Susan Stepney & Claire Le Goues An Investigation into the Use of Mutation Analysis for Automated Program Repair 2017 Proceedings of the 9th International Symposium on Search Based Software Engineering (SSBSE '17), Vol. 10452, pp. 99-114, Paderborn Germany, 9-11 September   Inproceedings
    Abstract: Research in Search-Based Automated Program Repair has demonstrated promising results, but has nevertheless been largely confined to small, single-edit patches using a limited set of mutation operators. Tackling a broader spectrum of bugs will require multiple edits and a larger set of operators, leading to a combinatorial explosion of the search space. This motivates the need for more efficient search techniques. We propose to use the test case results of candidate patches to localise suitable fix locations. We analysed the test suite results of single-edit patches, generated from a random walk across 28 bugs in 6 programs. Based on the findings of this analysis, we propose a number of mutation-based fault localisation techniques, which we subsequently evaluate by measuring how accurately they locate the statements at which the search was able to generate a solution. After demonstrating that these techniques fail to result in a significant improvement, we discuss why this may be the case, despite the successes of mutation-based fault localisation in previous studies.
    BibTeX:
    @inproceedings{TimperleySG17,
      author = {Christopher Steven Timperley and Susan Stepney and Claire Le Goues},
      title = {An Investigation into the Use of Mutation Analysis for Automated Program Repair},
      booktitle = {Proceedings of the 9th International Symposium on Search Based Software Engineering (SSBSE '17)},
      publisher = {Springer},
      year = {2017},
      volume = {10452},
      pages = {99-114},
      address = {Paderborn, Germany},
      month = {9-11 September},
      doi = {http://dx.doi.org/10.1007/978-3-319-66299-2_7}
    }
    					
    2007.12.02 Ramón Sagarna An Optimization Approach for Software Test Data Generation: Applications of Estimation of Distribution Algorithms and Scatter Search 2007 , San Sebastian Spain, January School: University of the Basque Country, Spain   Phdthesis Testing and Debugging
    Abstract: No abstract
    BibTeX:
    @phdthesis{Sagarna07,
      author = {Ramón Sagarna},
      title = {An Optimization Approach for Software Test Data Generation: Applications of Estimation of Distribution Algorithms and Scatter Search},
      school = {University of the Basque Country, Spain},
      year = {2007},
      address = {San Sebastian, Spain},
      month = {January},
      url = {http://www.cercia.ac.uk/projects/research/SEBASE/pdfs/2007/rs-thesis.pdf}
    }
    					
    2010.09.08 Andréa Magalhães Magdaleno An Optimization-based Approach to Software Development Process Tailoring 2010 Proceedings of the 2nd International Symposium on Search Based Software Engineering (SSBSE '10), pp. 40-43, Benevento Italy, 7-9 September   Inproceedings
    Abstract: A major activity performed by the manager before starting a software project is tailoring its development process. Such activity requires information about the context under which the project will be executed, including organizational, project, and team characteristics. In addition, it also requires pondering many factors and evaluating all existing constraints. In this scenario, we claim that a balance between collaboration and discipline can be the drivers to tailor software development processes in order to meet project and organization needs. With the purpose of facilitating this balancing, it is possible to automate some of the steps to solve the problem, reducing the effort required to execute this task and improving the obtained process. Therefore, this work presents an optimization-based approach where the balancing in process tailoring is defined, modeled and briefly analyzed. This approach uses collaboration and discipline as utility functions to select the most appropriate process for a software development project, considering its current context.
    BibTeX:
    @inproceedings{Magdaleno10,
      author = {Andréa Magalhães Magdaleno},
      title = {An Optimization-based Approach to Software Development Process Tailoring},
      booktitle = {Proceedings of the 2nd International Symposium on Search Based Software Engineering (SSBSE '10)},
      publisher = {IEEE},
      year = {2010},
      pages = {40-43},
      address = {Benevento, Italy},
      month = {7-9 September},
      doi = {http://dx.doi.org/10.1109/SSBSE.2010.15}
    }
    					
    2016.02.27 Matias Nicoletti, Silvia Schiaffino & J. Andres Diaz-Pace An Optimization-based Tool to Support the Cost-effective Production of Software Architecture Documentation 2015 Journal of Software: Evolution and Process, Vol. 27(9), pp. 674-699, September   Article
    Abstract: Some of the challenges faced by most software projects are tight budget constraints and schedules, which often make managers and developers prioritize the delivery of a functional product over other engineering activities, such as software documentation. In particular, having little or low-quality documentation of the software architecture of a system can have negative consequences for the project, as the architecture is the main container of the key design decisions to fulfill the stakeholders' goals. To further complicate this situation, generating and maintaining architectural documentation is a non-trivial and time-consuming activity. In this context, we present a tool approach that aims at (i) assisting the documentation writer in their tasks and (ii) ensuring a cost-effective documentation process by means of optimization techniques. Our tool, called SADHelper, follows the principle of producing reader-oriented documentation, in order to focus the available, and often limited, resources on generating just enough documentation that satisfies the stakeholders' concerns. The approach was evaluated in two experiments with users of software architecture documents, with encouraging results. These results show evidence that our tool can be useful to reduce the documentation costs and even improve the documentation quality, as perceived by their stakeholders.
    BibTeX:
    @article{NicolettiSD15,
      author = {Matias Nicoletti and Silvia Schiaffino and J. Andres Diaz-Pace},
      title = {An Optimization-based Tool to Support the Cost-effective Production of Software Architecture Documentation},
      journal = {Journal of Software: Evolution and Process},
      year = {2015},
      volume = {27},
      number = {9},
      pages = {674-699},
      month = {September},
      doi = {http://dx.doi.org/10.1002/smr.1734}
    }
    					
    2008.03.18 Vittorio Cortellessa, Fabrizio Marinelli & Pasqualina Potena An Optimization Framework for ``Build-or-Buy" Decisions in Software Architecture 2008 Computers & Operations Research, Vol. 35(10), pp. 3090-3106, October   Article Management
    Abstract: Building a software architecture that meets functional requirements is a quite consolidated activity, whereas keeping high quality attributes is still an open challenge. In this paper we introduce an optimization framework that supports the decision whether to buy software components or to build them in-house upon designing a software architecture. We devise a non-linear cost/quality optimization model based on decision variables indicating the set of architectural components to buy and to build in order to minimize the software cost while keeping satisfactory values of quality attributes. From this point of view, our tool can be ideally embedded into a Cost Benefit Analysis Method to provide decision support to software architects. The novelty of our approach consists in building costs and quality attributes on a common set of decision variables related to software development. We start from a special case of the framework where the quality constraints are related to the delivery time and the product reliability, and the model solution also devises the amount of unit testing to be performed on built components. We generalize the framework formulation to represent a broader class of architectural cost-minimization problems under quality constraints, and discuss advantages and limitations of such approach.
    BibTeX:
    @article{CortellessaMP08,
      author = {Vittorio Cortellessa and Fabrizio Marinelli and Pasqualina Potena},
      title = {An Optimization Framework for ``Build-or-Buy" Decisions in Software Architecture},
      journal = {Computers & Operations Research},
      year = {2008},
      volume = {35},
      number = {10},
      pages = {3090-3106},
      month = {October},
      doi = {http://dx.doi.org/10.1016/j.cor.2007.01.011}
    }
    					
    2012.03.09 Zhiqiao Wu, Jiafu Tang, C.K. Kwong & C.Y. Chan An Optimization Model for Reuse Scenario Selection Considering Reliability and Cost in Software Product Line Development 2011 International Journal of Information Technology & Decision Making, Vol. 10(5), pp. 811-841   Article Requirements/Specifications
    Abstract: In this paper, a model that assists developers to evaluate and compare alternative reuse scenarios in software product line (SPL) development systematically is proposed. The model can identify basic activities (abstracted as operations) and precisely relate cost and reliability with each basic operation. A typical reuse mode is described from the perspectives of application and domain engineering. According to this scheme, six reuse modes are identified, and alternative industry reuse scenarios can be derived from these modes. A bi-objective 0-1 integer programming model is developed to help decision makers select reuse scenarios when they develop a SPL to minimize cost and maximize reliability while satisfying system requirements to a certain degree. This model is called the cost and reliability optimization under constraint satisfaction (CROS). To solve the model efficiently, a three-phase algorithm for finding all efficient solutions is developed, where the first two phases can obtain an efficient solution, and the last phase can generate a nonsupported efficient solution. Two practical methods are presented to facilitate decision making on selecting from the entire range of efficient solutions in light of the decision-maker's preference for man–computer interaction. An application of the CROS model in a mail server system development is presented as a case study.
    BibTeX:
    @article{WuTKC11,
      author = {Zhiqiao Wu and Jiafu Tang and C. K. Kwong and C. Y. Chan},
      title = {An Optimization Model for Reuse Scenario Selection Considering Reliability and Cost in Software Product Line Development},
      journal = {International Journal of Information Technology & Decision Making},
      year = {2011},
      volume = {10},
      number = {5},
      pages = {811-841},
      doi = {http://dx.doi.org/10.1142/S0219622011004580}
    }
    					
    2010.04.13 Praveen Ranjan Srivastava, Aditya Vijay, Bhupesh Barukha, Prashant Singh Sengar & Rajat Sharma An Optimized Technique for Test Case Generation and Prioritization using Tabu Search and Data Clustering 2009 Proceedings of the 4th Indian International Conference on Artificial Intelligence (IICAI '09), pp. 30-46, Tumkur India, 16-18 December   Inproceedings Testing and Debugging
    BibTeX:
    @inproceedings{SrivastavaVBSS09,
      author = {Praveen Ranjan Srivastava and Aditya Vijay and Bhupesh Barukha and Prashant Singh Sengar and Rajat Sharma},
      title = {An Optimized Technique for Test Case Generation and Prioritization using Tabu Search and Data Clustering},
      booktitle = {Proceedings of the 4th Indian International Conference on Artificial Intelligence (IICAI '09)},
      year = {2009},
      pages = {30-46},
      address = {Tumkur, India},
      month = {16-18 December},
      url = {http://www.pubzone.org/dblp/conf/iicai/SrivastavaVBSS09}
    }
    					
    2013.06.28 Saswat Anand, Edmund Burke, Tsong Yueh Chen, John A. Clark, Myra B. Cohen, Wolfgang Grieskamp, Mark Harman, Mary Jean Harrold & Phil McMinn An Orchestrated Survey on Automated Software Test Case Generation 2013 Journal of Systems and Software, Vol. 86(8), pp. 1978-2001, August   Article Testing and Debugging
    Abstract: Test case generation is among the most labour-intensive tasks in software testing. It also has a strong impact on the effectiveness and efficiency of software testing. For these reasons, it has been one of the most active research topics in software testing for several decades, resulting in many different approaches and tools. This paper presents an orchestrated survey of the most prominent techniques for automatic generation of software test cases, reviewed in self-standing sections. The techniques presented include: (a) structural testing using symbolic execution, (b) model-based testing, (c) combinatorial testing, (d) random testing and its variant of adaptive random testing, and (e) search-based testing. Each section is contributed by world-renowned active researchers on the technique, and briefly covers the basic ideas underlying the method, the current state of the art, a discussion of the open research problems, and a perspective of the future development of the approach. As a whole, the paper aims at giving an introductory, up-to-date and (relatively) short overview of research in automatic test case generation, while ensuring a comprehensive and authoritative treatment.
    BibTeX:
    @article{AnandBCCCGHHM13,
      author = {Saswat Anand and Edmund Burke and Tsong Yueh Chen and John A. Clark and Myra B. Cohen and Wolfgang Grieskamp and Mark Harman and Mary Jean Harrold and Phil McMinn},
      title = {An Orchestrated Survey on Automated Software Test Case Generation},
      journal = {Journal of Systems and Software},
      year = {2013},
      volume = {86},
      number = {8},
      pages = {1978-2001},
      month = {August},
      doi = {http://dx.doi.org/10.1016/j.jss.2013.02.061}
    }
    					
    2007.12.02 Salah Bouktif, Giuliano Antoniol, Ettore Merlo & Markus Neteler A Novel Approach to Optimize Clone Refactoring Activity 2006 Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation (GECCO '06), pp. 1885-1892, Seattle Washington USA, 8-12 July   Inproceedings Distribution and Maintenance
    Abstract: Software evolution and software quality are ever changing phenomena. As software evolves, evolution impacts software quality. On the other hand, software quality needs may drive software evolution strategies. This paper presents an approach to schedule quality improvement under constraints and priority. The general problem of scheduling quality improvement has been instantiated into the concrete problem of planning duplicated code removal in a geographical information system developed in C throughout the last 20 years. Priority and constraints arise from the development team and from the adopted development process. The developer team's long-term goal is to get rid of duplicated code, improve software structure, decrease coupling, and improve cohesion. We present our problem formulation, the adopted approach, including a model of clone removal effort, and preliminary results obtained on a real world application.
    BibTeX:
    @inproceedings{BouktifAN06,
      author = {Salah Bouktif and Giuliano Antoniol and Ettore Merlo and Markus Neteler},
      title = {A Novel Approach to Optimize Clone Refactoring Activity},
      booktitle = {Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation (GECCO '06)},
      publisher = {ACM},
      year = {2006},
      pages = {1885-1892},
      address = {Seattle, Washington, USA},
      month = {8-12 July},
      doi = {http://doi.acm.org/10.1145/1143997.1144312}
    }
    					
    2008.04.21 Andrea Arcuri & Xin Yao A Novel Co-evolutionary Approach to Automatic Software Bug Fixing 2008 Proceedings of the IEEE Congress on Evolutionary Computation (CEC '08), pp. 162-168, Hong Kong China, 1-6 June   Inproceedings Testing and Debugging
    Abstract: Many tasks in Software Engineering are very expensive, and this has led to investigations of how to automate them. In particular, Software Testing can take up to half of the resources of the development of new software. Although there has been a lot of work on automating the testing phase, fixing a bug after its presence has been discovered is still a duty of the programmers. In this paper we propose an evolutionary approach to automate the task of fixing bugs. This novel evolutionary approach is based on Co-evolution, in which programs and test cases co-evolve, influencing each other with the aim of fixing the bugs of the programs. This competitive co-evolution is similar to what happens in nature for predators and prey. The user needs only to provide a buggy program and a formal specification of it. No other information is required. Hence, the approach may work for any implementable software. We show some preliminary experiments in which bugs in an implementation of a sorting algorithm are automatically fixed.
    BibTeX:
    @inproceedings{ArcuriY08,
      author = {Andrea Arcuri and Xin Yao},
      title = {A Novel Co-evolutionary Approach to Automatic Software Bug Fixing},
      booktitle = {Proceedings of the IEEE Congress on Evolutionary Computation (CEC '08)},
      publisher = {IEEE},
      year = {2008},
      pages = {162-168},
      address = {Hong Kong, China},
      month = {1-6 June},
      doi = {http://dx.doi.org/10.1109/CEC.2008.4630793}
    }
    					
    2013.08.05 Aurora Ramírez, José Raúl Romero & Sebastián Ventura A Novel Component Identification Approach using Evolutionary Programming 2013 Proceedings of the 15th Annual Conference Companion on Genetic and Evolutionary Computation (GECCO '13), pp. 209-210, Amsterdam The Netherlands, 6-10 July   Inproceedings Design Tools and Techniques
    Abstract: Component identification is a critical phase in software architecture analysis to prevent later errors and control the project time and budget. Obtaining the most appropriate architecture according to predetermined design criteria can be treated as an optimisation problem, especially since the appearance of the Search Based Software Engineering, and its combination with bio-inspired metaheuristics. In this work, an evolutionary programming (EP) algorithm is used to identify components, based on a novel and comprehensible representation of software architectures.
    BibTeX:
    @inproceedings{RamirezRV13,
      author = {Aurora Ramírez and José Raúl Romero and Sebastián Ventura},
      title = {A Novel Component Identification Approach using Evolutionary Programming},
      booktitle = {Proceedings of the 15th Annual Conference Companion on Genetic and Evolutionary Computation (GECCO '13)},
      publisher = {ACM},
      year = {2013},
      pages = {209-210},
      address = {Amsterdam, The Netherlands},
      month = {6-10 July},
      doi = {http://dx.doi.org/10.1145/2464576.2464679}
    }
    					
    2009.12.18 Andrew F. Tappenden & James Miller A Novel Evolutionary Approach for Adaptive Random Testing 2009 IEEE Transactions on Reliability, Vol. 58(4), pp. 619-633, December   Article Testing and Debugging
    Abstract: Random testing is a low cost strategy that can be applied to a wide range of testing problems. While the cost and straightforward application of random testing are appealing, these benefits must be evaluated against the reduced effectiveness due to the generality of the approach. Recently, a number of novel techniques, coined Adaptive Random Testing, have sought to increase the effectiveness of random testing by attempting to maximize the testing coverage of the input domain. This paper presents the novel application of an evolutionary search algorithm to this problem. The results of an extensive simulation study are presented in which the evolutionary approach is compared against the Fixed Size Candidate Set (FSCS), Restricted Random Testing (RRT), quasi-random testing using the Sobol sequence (Sobol), and random testing (RT) methods. The evolutionary approach was found to be superior to FSCS, RRT, Sobol, and RT amongst block patterns, the arena in which FSCS, and RRT have demonstrated the most appreciable gains in testing effectiveness. The results among fault patterns with increased complexity were shown to be similar to those of FSCS, and RRT; and showed a modest improvement over Sobol, and RT. A comparison of the asymptotic and empirical runtimes of the evolutionary search algorithm, and the other testing approaches, was also considered, providing further evidence that the application of an evolutionary search algorithm is feasible, and within the same order of time complexity as the other adaptive random testing approaches.
    BibTeX:
    @article{TappendenM09,
      author = {Andrew F. Tappenden and James Miller},
      title = {A Novel Evolutionary Approach for Adaptive Random Testing},
      journal = {IEEE Transactions on Reliability},
      year = {2009},
      volume = {58},
      number = {4},
      pages = {619-633},
      month = {December},
      doi = {http://dx.doi.org/10.1109/TR.2009.2034288}
    }
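    Fixed Size Candidate Set (FSCS) is the best-known adaptive random testing baseline compared in this study. A minimal sketch, assuming a two-dimensional unit-square input domain and an illustrative candidate-set size:

    import random

    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    def fscs_next(executed, k=10):
        """From k random candidates, pick the one farthest from all executed tests."""
        candidates = [(random.random(), random.random()) for _ in range(k)]
        return max(candidates, key=lambda c: min(dist(c, t) for t in executed))

    tests = [(random.random(), random.random())]  # the first test is purely random
    for _ in range(9):
        tests.append(fscs_next(tests))
    print(tests)

    Maximising the minimum distance to previously executed tests is what spreads test inputs evenly over the domain, the property the evolutionary approach in this entry also targets.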
    					
    2009.04.01 Chengwen Zhang, Sen Su & Junliang Chen A Novel Genetic Algorithm for QoS-Aware Web Services Selection 2006 Proceedings of the 2nd International Workshop on Data Engineering Issues in E-Commerce and Services (DEECS '06), Vol. 4055, pp. 224-235, San Francisco CA USA, 26 June   Inproceedings Design Tools and Techniques
    Abstract: A novel genetic algorithm characterized by an improved fitness value is presented for Quality of Service (QoS)-aware web services selection. The genetic algorithm includes a special relation matrix coding scheme for chromosomes, an initial population policy and a mutation policy. The relation matrix coding scheme suits QoS-aware web service composition better than the one-dimension coding scheme. By running only once, the proposed genetic algorithm can construct a composite service plan that accords with the QoS requirements from many service compositions. Meanwhile, the adoption of the initial population policy and the mutation policy improves the fitness of the genetic algorithm. Experiments on QoS-aware web services selection show that the genetic algorithm with this matrix can obtain a better composite service plan than the genetic algorithm with the one-dimension coding scheme, and that the two policies play an important role in improving the fitness of the genetic algorithm.
    BibTeX:
    @inproceedings{ZhangSC06,
      author = {Chengwen Zhang and Sen Su and Junliang Chen},
      title = {A Novel Genetic Algorithm for QoS-Aware Web Services Selection},
      booktitle = {Proceedings of the 2nd International Workshop on Data Engineering Issues in E-Commerce and Services (DEECS '06)},
      publisher = {Springer},
      year = {2006},
      volume = {4055},
      pages = {224-235},
      address = {San Francisco, CA, USA},
      month = {26 June},
      doi = {http://dx.doi.org/10.1007/11780397_18}
    }
    					
    2010.09.08 Shin Yoo A Novel Mask-Coding Representation for Set Cover Problems with Applications in Test Suite Minimisation 2010 Proceedings of the 2nd International Symposium on Search Based Software Engineering (SSBSE '10), pp. 19-28, Benevento Italy, 7-9 September   Inproceedings Testing and Debugging
    Abstract: The Multi-Objective Set Cover problem forms the basis of many optimisation problems in software testing because the concept of code coverage is based on set theory. This paper presents Mask-Coding, a novel representation of solutions for set cover optimisation problems that explores the problem space rather than the solution space. The new representation is empirically evaluated with set cover problems formulated from real code coverage data. The results show that the Mask-Coding representation can improve both the convergence and diversity of the Pareto-efficient solution set of the multi-objective set cover optimisation.
    BibTeX:
    @inproceedings{Yoo10b,
      author = {Shin Yoo},
      title = {A Novel Mask-Coding Representation for Set Cover Problems with Applications in Test Suite Minimisation},
      booktitle = {Proceedings of the 2nd International Symposium on Search Based Software Engineering (SSBSE '10)},
      publisher = {IEEE},
      year = {2010},
      pages = {19-28},
      address = {Benevento, Italy},
      month = {7-9 September},
      doi = {http://dx.doi.org/10.1109/SSBSE.2010.12}
    }
    					
    2014.05.28 James R. Williams A Novel Representation for Search-Based Model-Driven Engineering 2013 , September School: University of York   Phdthesis
    Abstract: Model-Driven Engineering (MDE) and Search-Based Software Engineering (SBSE) are development approaches that focus on automation to increase productivity and throughput. MDE focuses on high-level domain models and the automatic management of models to perform development processes, such as model validation or code generation. SBSE, on the other hand, treats software engineering problems as optimisation problems, and aims to automatically discover solutions rather than systematically construct them. SBSE techniques have been shown to be beneficial at all stages in the software development life-cycle. There have, however, been few attempts at applying SBSE techniques to the MDE domain, and all are problem-specific. In this thesis we present a method of encoding MDE models that enables many robust SBSE techniques to be applied to a wide range of MDE problems. We use the model representation to address three in-scope MDE problems: discovering an optimal domain model; extracting a model of runtime system behaviour; and applying sensitivity analysis to model management operations in order to analyse the uncertainty present in models. We perform an empirical analysis of two important properties of the representation, locality and redundancy, which have both been shown to affect the ability of SBSE techniques to discover solutions, and we propose a detailed plan for further analysis of the representation, or other representations of its kind.
    BibTeX:
    @phdthesis{Williams13,
      author = {James R. Williams},
      title = {A Novel Representation for Search-Based Model-Driven Engineering},
      school = {University of York},
      year = {2013},
      month = {September},
      url = {http://etheses.whiterose.ac.uk/5155/1/Thesis.pdf}
    }
    					
    2016.03.08 Mohamed Boussaa, Olivier Barais, Gerson Sunyé & Benoît Baudry A Novelty Search Approach for Automatic Test Data Generation 2015 IEEE/ACM 8th International Workshop on Search-Based Software Testing (SBST '15), pp. 40-43, Florence Italy, 18-19 May   Inproceedings Testing and Debugging
    Abstract: In search-based structural testing, metaheuristic search techniques have frequently been used to automate test data generation. In Genetic Algorithms (GAs), for example, test data are rewarded on the basis of an objective function that generally represents the number of statements or branches covered. However, owing to the wide diversity of possible test data values, it is hard to find the set of test data that can satisfy a specific coverage criterion. In this paper, we introduce the use of the Novelty Search (NS) algorithm for the test data generation problem based on a statement coverage criterion. We believe that such an approach to test data generation is attractive because it allows the exploration of the huge space of test data within the input domain. In this approach, we seek to explore the search space without regard to any objectives. In fact, instead of having a fitness-based selection, we select test cases based on a novelty score showing how different they are compared to all other solutions evaluated so far.
    BibTeX:
    @inproceedings{BoussaaBSB15,
      author = {Mohamed Boussaa and Olivier Barais and Gerson Sunyé and Benoît Baudry},
      title = {A Novelty Search Approach for Automatic Test Data Generation},
      booktitle = {IEEE/ACM 8th International Workshop on Search-Based Software Testing (SBST '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {40-43},
      address = {Florence, Italy},
      month = {18-19 May},
      doi = {http://dx.doi.org/10.1109/SBST.2015.17}
    }
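    The novelty score that replaces fitness-based selection in this entry is commonly computed as the mean distance from a candidate's behaviour descriptor to its k nearest neighbours in an archive. A minimal sketch of that common formulation, assuming one-dimensional descriptors and an illustrative k (the paper's actual descriptors and parameters may differ):

    def novelty(candidate, archive, k=3):
        """Mean distance to the k nearest archived behaviour descriptors."""
        nearest = sorted(abs(candidate - other) for other in archive)[:k]
        return sum(nearest) / len(nearest)

    archive = [0.1, 0.15, 0.5, 0.9]
    print(novelty(0.85, archive, k=2))  # close to archived behaviour: low novelty
    print(novelty(2.00, archive, k=2))  # far from the archive: high novelty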
    					
    2014.05.28 Huayao Wu & Changhai Nie An Overview of Search Based Combinatorial Testing 2014 Proceedings of the 7th International Workshop on Search-Based Software Testing (SBST '14), pp. 27-30, Hyderabad India, 2-3 June   Inproceedings Testing and Debugging
    Abstract: Combinatorial testing (CT) is a branch of software testing which aims to detect interaction-triggered failures as effectively as possible. Search-based combinatorial testing uses search techniques to solve problems in combinatorial testing. It has been shown to be effective and promising. In this paper, we aim to provide an overview of search-based combinatorial testing, especially focusing on test suite generation without constraints, and discuss potential future directions in this field.
    BibTeX:
    @inproceedings{WuN14,
      author = {Huayao Wu and Changhai Nie},
      title = {An Overview of Search Based Combinatorial Testing},
      booktitle = {Proceedings of the 7th International Workshop on Search-Based Software Testing (SBST '14)},
      publisher = {ACM},
      year = {2014},
      pages = {27-30},
      address = {Hyderabad, India},
      month = {2-3 June},
      doi = {http://dx.doi.org/10.1145/2593833.2593839}
    }
    					
    2009.02.16 Rajesh K. Bhatia, Mayank Dave & R.C. Joshi Ant Colony based Rule Generation for Reusable Software Component Retrieval 2008 Proceedings of the 1st Conference on India Software Engineering Conference (ISEC '08), pp. 129-130, Hyderabad India, 19-22 February   Inproceedings Design Tools and Techniques
    Abstract: The storage and representation of reusable software components in software repositories, so as to facilitate convenient identification and retrieval, has always been a concern for software reuse researchers. This paper discusses and demonstrates an ant colony algorithm based technique that generates rules to store and then identify components from a software repository for possible reuse. The proposed technique helps users organize and store components in the repository and can later help in identifying the most appropriate component for a given context. In the first stage, while searching, it makes use of keywords, their synonyms and their inter-relationships. Then it makes use of ant colony optimization; an initial pheromone of one is assigned to all domain representative terms of components. By updating the pheromone for participating and non-participating terms iteratively, and by calculating the quality of each rule generated, it arrives at quality rules to represent and retrieve the reusable components.
    BibTeX:
    @inproceedings{BhatiaDJ08,
      author = {Rajesh K. Bhatia and Mayank Dave and R. C. Joshi},
      title = {Ant Colony based Rule Generation for Reusable Software Component Retrieval},
      booktitle = {Proceedings of the 1st Conference on India Software Engineering Conference (ISEC '08)},
      publisher = {ACM},
      year = {2008},
      pages = {129-130},
      address = {Hyderabad, India},
      month = {19-22 February},
      doi = {http://dx.doi.org/10.1145/1342211.1342237}
    }
    					
    2009.04.07 Chiou Peng Lam, Jitian Xiao & Huaizhong Li Ant Colony Optimisation for Generation of Conformance Testing Sequences using a Characterising Set 2007 Proceedings of the 3rd IASTED Conference on Advances in Computer Science and Technology, pp. 140-146, Phuket Thailand, 2-4 April   Inproceedings Testing and Debugging
    Abstract: Protocol conformance testing generally involves checking whether the protocol under test conforms to the given specification. The generation of test sequences in an efficient and effective way that achieves the required fault detection coverage is highly desirable. This paper proposes an approach that formulates the problem of finding shorter test sequences based on the Wp method as one of finding the shortest tour in the asymmetric travelling salesman problem (ATSP). In the formulation of the ATSP, the approach excludes redundant test segments, employs the concept of overlap to reduce test sequence length, and incorporates concatenation without linking cost into the test sequence generation technique. The approach recasts a software engineering problem as a search-based problem, using Ant Colony Optimization to find the shortest tour; the resulting test sequences maintain the same fault detection capability as those of the Wp method. The approach is applicable to every minimal FSM, as each one of them possesses a characterising set.
    BibTeX:
    @inproceedings{LamXL07,
      author = {Chiou Peng Lam and Jitian Xiao and Huaizhong Li},
      title = {Ant Colony Optimisation for Generation of Conformance Testing Sequences using a Characterising Set},
      booktitle = {Proceedings of the 3rd IASTED Conference on Advances in Computer Science and Technology},
      publisher = {ACTA Press},
      year = {2007},
      pages = {140-146},
      address = {Phuket, Thailand},
      month = {2-4 April},
      url = {http://portal.acm.org/citation.cfm?id=1322491}
    }
    					
    2014.05.28 Ying-lin Wang & Jin-wei Pang Ant Colony Optimization for Feature Selection in Software Product Lines 2014 Journal of Shanghai Jiaotong University (Science), Vol. 19(1), pp. 50-58, February   Article
    Abstract: Software product lines (SPLs) are important software engineering techniques for creating a collection of similar software systems. Software products can be derived from SPLs quickly. The process of software product derivation can be modeled as feature selection optimization with resource constraints, which is a nondeterministic polynomial-time hard (NP-hard) problem. In this paper, we present an approach that uses ant colony optimization to get an approximate solution of the problem in polynomial time. We evaluate our approach by comparing it to two important approximation techniques. One is the filtered Cartesian flattening and modified heuristic (FCF+M-HEU) algorithm, the other is the genetic algorithm for optimized feature selection (GAFES). The experimental results show that our approach performs 6% worse than FCF+M-HEU while greatly reducing the running time. Meanwhile, it performs 10% better than GAFES while taking more time.
    BibTeX:
    @article{WangP14,
      author = {Ying-lin Wang and Jin-wei Pang},
      title = {Ant Colony Optimization for Feature Selection in Software Product Lines},
      journal = {Journal of Shanghai Jiaotong University (Science)},
      year = {2014},
      volume = {19},
      number = {1},
      pages = {50-58},
      month = {February},
      doi = {http://dx.doi.org/10.1007/s12204-013-1468-0}
    }
    					
    2007.12.02 Enrique Alba & Francisco Chicano Ant Colony Optimization for Model Checking 2007 Proceedings of the 11th International Conference on Computer Aided Systems Theory (EUROCAST '07), Vol. 4739, pp. 523-530, Las Palmas de Gran Canaria Spain, 12-16 February   Inproceedings Software/Program Verification
    Abstract: Model Checking is a well-known and fully automatic technique for checking software properties, usually given as temporal logic formulae on the program variables. Most model checkers found in the literature use exact deterministic algorithms to check the properties. These algorithms usually require huge amounts of computational resources if the checked model is large. We propose here the use of Ant Colony Optimization (ACO) to refute safety properties in concurrent systems. ACO algorithms are stochastic techniques belonging to the class of metaheuristic algorithms and inspired by the foraging behaviour of real ants. The results state that ACO algorithms find optimal or near optimal error trails in faulty concurrent systems with a reduced amount of resources, outperforming algorithms that are the state-of-the-art in model checking. This fact makes them suitable for checking safety properties in large concurrent systems, in which traditional techniques fail to find errors because of the model size.
    BibTeX:
    @inproceedings{AlbaC07,
      author = {Enrique Alba and Francisco Chicano},
      title = {Ant Colony Optimization for Model Checking},
      booktitle = {Proceedings of the 11th International Conference on Computer Aided Systems Theory (EUROCAST '07)},
      publisher = {Springer},
      year = {2007},
      volume = {4739},
      pages = {523-530},
      address = {Las Palmas de Gran Canaria, Spain},
      month = {12-16 February},
      doi = {http://dx.doi.org/10.1007/978-3-540-75867-9}
    }
    					
    2009.05.14 José del Sagrado & Isabel María del Águila Ant Colony Optimization for Requirement Selection in Incremental Software Development 2009 Proceedings of the 1st International Symposium on Search Based Software Engineering (SSBSE '09), Cumberland Lodge Windsor UK, 13-15 May   Inproceedings Requirements/Specifications
    Abstract: No abstract
    BibTeX:
    @inproceedings{SagradoA09,
      author = {José del Sagrado and Isabel María del Águila},
      title = {Ant Colony Optimization for Requirement Selection in Incremental Software Development},
      booktitle = {Proceedings of the 1st International Symposium on Search Based Software Engineering (SSBSE '09)},
      publisher = {IEEE},
      year = {2009},
      address = {Cumberland Lodge, Windsor, UK},
      month = {13-15 May},
      url = {http://www.ssbse.org/2009/fa/ssbse2009_submission_30.pdf}
    }
    					
    2012.07.23 Wei-Neng Chen & Jun Zhang Ant Colony Optimization for Software Project Scheduling and Staffing with an Event-Based Scheduler 2013 IEEE Transactions on Software Engineering, Vol. 39(1), pp. 1-17, January   Article Management
    Abstract: Research into developing effective computer aided techniques for planning software projects is important and challenging for software engineering. Different from projects in other fields, software projects are people-intensive activities and their related resources are mainly human resources. Thus an adequate model for software project planning has to deal with not only the problem of project task scheduling but also the problem of human resource allocation. But as both of these two problems are difficult, existing models either suffer from a very large search space or have to restrict the flexibility of human resource allocation to simplify the model. To develop a flexible and effective model for software project planning, this paper develops a novel approach with an event-based scheduler (EBS) and an ant colony optimization (ACO) algorithm. The proposed approach represents a plan by a task list and a planned employee allocation matrix. In this way, both the issues of task scheduling and employee allocation can be taken into account. In the EBS, the beginning time of the project, the time when resources are released from finished tasks, and the time when employees join or leave the project are regarded as events. The basic idea of the EBS is to adjust the allocation of employees at events and keep the allocation unchanged at non-events. With this strategy, the proposed method enables the modeling of resource conflict and task preemption and preserves the flexibility in human resource allocation. To solve the planning problem, an ACO algorithm is further designed. Experimental results on 83 instances demonstrate that the proposed method is very promising.
    BibTeX:
    @article{ChenZ13,
      author = {Wei-Neng Chen and Jun Zhang},
      title = {Ant Colony Optimization for Software Project Scheduling and Staffing with an Event-Based Scheduler},
      journal = {IEEE Transactions on Software Engineering},
      year = {2013},
      volume = {39},
      number = {1},
      pages = {1-17},
      month = {January},
      doi = {http://dx.doi.org/10.1109/TSE.2012.17}
    }
    					
    2010.09.08 José del Sagrado, Isabel María del Águila & Francisco Javier Orellana Ant Colony Optimization for the Next Release Problem - A Comparative Study. 2010 Proceedings of the 2nd International Symposium on Search Based Software Engineering (SSBSE '10), pp. 67-76, Benevento Italy, 7-9 September   Inproceedings Requirements/Specifications
    Abstract: The selection of the enhancements to be included in the next software release is a complex task in every software development. Customers demand their own software enhancements, but not all of them can be included in the software product, mainly due to the existence of limited resources. In most cases, it is not feasible to develop all the new functionalities suggested by customers. Hence each new feature competes against the others to be included in the next release. This problem of minimizing development effort and maximizing customers' satisfaction is known as the next release problem (NRP). In this work we study NRP as an optimisation problem. We use and describe three different meta-heuristic search techniques for solving NRP: simulated annealing, genetic algorithms and ant colony system (specifically, we show how to adapt the ant colony system to NRP). All of them obtain good but possibly sub-optimal solutions. We also make a comparative study of these techniques on a case study. Furthermore, we have observed that the sub-optimal solutions found by applying these techniques include a high percentage of the requirements considered most important by each individual customer.
    BibTeX:
    @inproceedings{SagradoAO10,
      author = {José del Sagrado and Isabel María del Águila and Francisco Javier Orellana},
      title = {Ant Colony Optimization for the Next Release Problem -- A Comparative Study.},
      booktitle = {Proceedings of the 2nd International Symposium on Search Based Software Engineering (SSBSE '10)},
      publisher = {IEEE},
      year = {2010},
      pages = {67-76},
      address = {Benevento, Italy},
      month = {7-9 September},
      doi = {http://dx.doi.org/10.1109/SSBSE.2010.18}
    }
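    A minimal sketch of the ant construction step behind an ant colony approach to requirement selection, as compared in this entry: each ant adds requirements with probability proportional to pheromone^alpha times a heuristic^beta until the budget is exhausted. The data, parameters, and revenue-per-cost heuristic are invented for illustration; a full ant colony system would add pseudo-random proportional selection and pheromone updates on top of this step:

    import random

    costs = [10, 4, 7, 3, 12]      # effort per requirement (illustrative)
    revenues = [15, 8, 9, 4, 20]   # revenue per requirement (illustrative)
    budget = 20
    pheromone = [1.0] * len(costs)
    alpha, beta = 1.0, 2.0

    def construct_solution():
        """One ant builds a requirement subset, biased by pheromone and heuristic."""
        chosen, spent = [], 0
        remaining = list(range(len(costs)))
        while True:
            feasible = [i for i in remaining if spent + costs[i] <= budget]
            if not feasible:
                return chosen
            weights = [pheromone[i] ** alpha * (revenues[i] / costs[i]) ** beta
                       for i in feasible]
            i = random.choices(feasible, weights=weights)[0]
            chosen.append(i)
            remaining.remove(i)
            spent += costs[i]

    print(construct_solution())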
    					
    2008.10.26 Francisco Chicano & Enrique Alba Ant Colony Optimization with Partial Order Reduction for Discovering Safety Property Violations in Concurrent Models 2008 Information Processing Letters, Vol. 106(6), pp. 221-231, June   Article Software/Program Verification
    Abstract: In this article we analyze the combination of ACOhg, a new metaheuristic algorithm, plus partial order reduction applied to the problem of finding safety property violations in concurrent models using a model checking approach. ACOhg is a new kind of ant colony optimization algorithm inspired by the foraging behavior of real ants equipped with internal resorts to search in very large search landscapes. We here apply ACOhg to concurrent models in scenarios located near the edge of the existing knowledge in detecting property violations. The results state that the combination is computationally beneficial for the search and represents a considerable step forward in this field with respect to exact and other heuristic techniques.
    BibTeX:
    @article{ChicanoA08b,
      author = {Francisco Chicano and Enrique Alba},
      title = {Ant Colony Optimization with Partial Order Reduction for Discovering Safety Property Violations in Concurrent Models},
      journal = {Information Processing Letters},
      year = {2008},
      volume = {106},
      number = {6},
      pages = {221-231},
      month = {June},
      doi = {http://dx.doi.org/10.1016/j.ipl.2007.11.015}
    }
    					
    2014.08.14 Roberto E. Lopez-Herrejon & Javier Ferrer A Parallel Evolutionary Algorithm for Prioritized Pairwise Testing of Software Product Lines 2014 Proceedings of the 2014 Conference on Genetic and Evolutionary Computation (GECCO '14), pp. 1255-1262, Vancouver Canada, 12-16 July   Inproceedings Testing and Debugging
    Abstract: Software Product Lines (SPLs) are families of related software systems, which provide different feature combinations. Different SPL testing approaches have been proposed. However, despite the extensive and successful use of evolutionary computation techniques for software testing, their application to SPL testing remains largely unexplored. In this paper we present the Parallel Prioritized product line Genetic Solver (PPGS), a parallel genetic algorithm for the generation of prioritized pairwise testing suites for SPLs. We perform an extensive and comprehensive analysis of PPGS with 235 feature models covering a wide range of numbers of features and products, using 3 different priority assignment schemes and 5 product prioritization selection strategies. We also compare PPGS with the greedy algorithm prioritized-ICPL. Our study reveals that overall PPGS obtains smaller covering arrays with an acceptable performance difference relative to prioritized-ICPL.
    BibTeX:
    @inproceedings{Lopez-HerrejonF14,
      author = {Roberto E. Lopez-Herrejon and Javier Ferrer},
      title = {A Parallel Evolutionary Algorithm for Prioritized Pairwise Testing of Software Product Lines},
      booktitle = {Proceedings of the 2014 Conference on Genetic and Evolutionary Computation (GECCO '14)},
      publisher = {ACM},
      year = {2014},
      pages = {1255-1262},
      address = {Vancouver, Canada},
      month = {12-16 July},
      doi = {http://dx.doi.org/10.1145/2576768.2598305}
    }
    					
    2012.02.28 Linda Di Geronimo, Filomena Ferrucci, Alfonso Murolo & Federica Sarro A Parallel Genetic Algorithm Based on Hadoop MapReduce for the Automatic Generation of JUnit Test Suites 2012 Proceedings of the 5th International Workshop on Search-Based Software Testing (SBST '12), pp. 785-793, Montreal Canada, 21-21 April   Inproceedings Testing and Debugging
    Abstract: Software testing represents one of the most explored fields of application of Search-Based techniques, and a range of testing problems have been successfully addressed using Genetic Algorithms. Nevertheless, to date Search-Based Software Testing (SBST) has found limited application in industry. As in other fields of Search-Based Software Engineering, this is principally due to the fact that, when applied to large problems, Search-Based approaches may require too much computational effort. In this scenario, parallelization may be a suitable way to improve the performance, especially because many of these techniques are "naturally parallelizable". Nevertheless, very few attempts at SBST parallelization have been made. In this paper, we present a Parallel Genetic Algorithm for the automatic generation of test suites. The solution is based on Hadoop MapReduce since it is well supported to work also in the cloud and on graphics cards, thus being an ideal candidate for highly scalable parallelization of Genetic Algorithms. A preliminary analysis of the proposal was carried out, aiming to evaluate the speed-up with respect to the sequential execution. The analysis was based on a real-world open source library.
    BibTeX:
    @inproceedings{DiGeronimoFMS12,
      author = {Linda Di Geronimo and Filomena Ferrucci and Alfonso Murolo and Federica Sarro},
      title = {A Parallel Genetic Algorithm Based on Hadoop MapReduce for the Automatic Generation of JUnit Test Suites},
      booktitle = {Proceedings of the 5th International Workshop on Search-Based Software Testing (SBST '12)},
      publisher = {IEEE},
      year = {2012},
      pages = {785-793},
      address = {Montreal, Canada},
      month = {21-21 April},
      doi = {http://dx.doi.org/10.1109/ICST.2012.177}
    }
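    The map/reduce decomposition described above, evaluating the fitness of each individual in parallel and collecting the results, can be sketched without a Hadoop cluster. The snippet below uses Python's multiprocessing pool as a stand-in for MapReduce, and the coverage fitness is a hypothetical placeholder rather than the paper's JUnit-based one.
    Example sketch (Python):
    from multiprocessing import Pool

    def evaluate(individual):
        """'Map' step: run one candidate test suite and return its fitness.
        Here the suite is a bit-string and fitness is a coverage stand-in."""
        return individual, sum(individual) / len(individual)

    def evaluate_population(population, workers=4):
        """'Reduce' step: gather the mapped fitnesses into one list."""
        with Pool(workers) as pool:
            return pool.map(evaluate, population)

    if __name__ == "__main__":
        population = [[(i + j) % 2 for i in range(10)] for j in range(8)]
        for individual, fitness in evaluate_population(population):
            print(individual, fitness)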
    					
    2010.11.24 Rafael da Veiga Cabral, Aurora Trinidad Ramirez Pozo & Silvia Regina Vergilio A Pareto Ant Colony Algorithm applied to the Class Integration and Test Order Problem 2010 Proceedings of the 22nd IFIP International Conference on Testing Software and Systems (ICTSS '10), Vol. 6435, pp. 16-29, Natal Brazil, 8-12 November   Inproceedings Testing and Debugging
    Abstract: In the context of object-oriented software, many works have investigated the Class Integration and Test Order (CITO) problem, proposing solutions to determine test orders for the integration testing of program classes. The existing graph-based approaches can generate sub-optimal solutions and do not consider the different factors and measures that can affect the stubbing process. To overcome this limitation, solutions based on Genetic Algorithms (GAs) have presented promising results. However, determining a cost function that is able to generate the best solutions is not always a trivial task, mainly for complex systems with a great number of measures. Therefore, in this paper we introduce a multi-objective optimization approach to better represent the CITO problem. The approach generates a set of good solutions that achieve a balanced compromise between the different measures (objectives). It was implemented by a Pareto Ant Colony (P-ACO) algorithm, which is described in detail. The algorithm was applied to a set of real programs and the results obtained are compared to the GA results. The results allow us to discuss the differences between single- and multi-objective approaches, especially for complex systems with a greater number of dependencies among the classes.
    BibTeX:
    @inproceedings{daVeigaCabralPV10,
      author = {Rafael da Veiga Cabral and Aurora Trinidad Ramirez Pozo and Silvia Regina Vergilio},
      title = {A Pareto Ant Colony Algorithm applied to the Class Integration and Test Order Problem},
      booktitle = {Proceedings of the 22nd IFIP International Conference on Testing Software and Systems (ICTSS '10)},
      publisher = {Springer},
      year = {2010},
      volume = {6435},
      pages = {16-29},
      address = {Natal, Brazil},
      month = {8-12 November},
      doi = {http://dx.doi.org/10.1007/978-3-642-16573-3_3}
    }
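    The multi-objective selection step of a Pareto Ant Colony comes down to keeping the non-dominated test orders; a minimal Pareto filter is sketched below, with objectives assumed to be minimised (e.g. stubbing-cost measures). The toy orders and scores are illustrative only.
    Example sketch (Python):
    def dominates(a, b):
        """a dominates b: no worse on every objective, better on one."""
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    def pareto_front(solutions):
        """Keep the non-dominated (order, objectives) pairs."""
        return [(s, obj) for s, obj in solutions
                if not any(dominates(o, obj) for _, o in solutions)]

    # Toy class test orders scored by (stubs needed, broken dependencies).
    orders = [("ABC", (3, 5)), ("BCA", (2, 7)), ("CAB", (4, 4)), ("ACB", (3, 6))]
    print([s for s, _ in pareto_front(orders)])  # ['ABC', 'BCA', 'CAB']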
    					
    2015.12.08 Ankur Pachauri, Gursaran Srivastava & Gaurav Mishra A Path and Branch based Approach to Fitness Computation for Program Test Data Generation using Genetic Algorithm 2015 Proceedings of the 1st International Conference on Futuristic Trends on Computational Analysis and Knowledge Management (ABLAZE '15), pp. 49-55, Noida India, 25-27 February   Inproceedings
    Abstract: In this paper we present a novel approach to fitness computation for test data generation using a genetic algorithm. Fitness computation is a two-step process: in the first step a target node sequence is determined, and in the second step the actual execution path is compared with the target node sequence to compute the fitness. Fitness computation uses both branch and path information. Experiments indicate that the described fitness technique results in a significant improvement in search performance.
    BibTeX:
    @inproceedings{PachauriSM15,
      author = {Ankur Pachauri and Gursaran Srivastava and Gaurav Mishra},
      title = {A Path and Branch based Approach to Fitness Computation for Program Test Data Generation using Genetic Algorithm},
      booktitle = {Proceedings of the 1st International Conference on Futuristic Trends on Computational Analysis and Knowledge Management (ABLAZE '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {49-55},
      address = {Noida, India},
      month = {25-27 February},
      doi = {http://dx.doi.org/10.1109/ABLAZE.2015.7154969}
    }
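    One hedged reading of the two-step fitness above: fix a target node sequence, then score an executed control-flow path by how far along the target it progresses in order. The greedy matching below is an illustrative measure, not the authors' exact formula.
    Example sketch (Python):
    def path_fitness(executed, target):
        """Fraction of the target node sequence matched, in order,
        by the executed path (greedy in-order matching)."""
        i = 0
        for node in executed:
            if i < len(target) and node == target[i]:
                i += 1
        return i / len(target)

    # Node ids along a control-flow path: the match stalls at node 3.
    print(path_fitness([1, 2, 4, 5, 7], [1, 2, 3, 5, 7]))  # 0.4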
    					
    2015.12.08 Xiaofeng Xu, Yan Chen, Xiaochao Li & Donghui Guo A Path-Oriented Test Data Generation Approach for Automatic Software Testing 2008 Proceedings of the 2nd International Conference on Anti-counterfeiting, Security and Identification (ASID '08), pp. 63-66, Guiyang China, 20-23 August   Inproceedings
    Abstract: The clonal selection (CS) algorithm is an optimization algorithm based upon the clonal selection principle of the biological immune system. This paper presents a novel approach that uses the CS algorithm for path-oriented test data generation. The approach takes a selected path as a target and iteratively executes sequences of operators to generate test cases. An affinity function, made up of a similarity value and a penalty value, is developed to guide the test generator in making successive modifications to the test data, so that the test data comes ever closer to satisfying the requirement. The comparison results show that this approach is more efficient at finding solutions than those based on other heuristic algorithms.
    BibTeX:
    @inproceedings{XuCLG08,
      author = {Xiaofeng Xu and Yan Chen and Xiaochao Li and Donghui Guo},
      title = {A Path-Oriented Test Data Generation Approach for Automatic Software Testing},
      booktitle = {Proceedings of the 2nd International Conference on Anti-counterfeiting, Security and Identification (ASID '08)},
      publisher = {IEEE},
      year = {2008},
      pages = {63-66},
      address = {Guiyang, China},
      month = {20-23 August},
      doi = {http://dx.doi.org/10.1109/IWASID.2008.4688344}
    }
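    The affinity function described above combines a similarity term with a penalty. A hedged reconstruction for node sequences against a target path is sketched below; the penalty constant and the exact similarity form are assumptions, not the paper's definitions.
    Example sketch (Python):
    def affinity(executed, target, penalty_per_miss=0.5):
        """Affinity = positional path similarity minus a penalty for each
        target node never reached (an assumed formulation)."""
        common = sum(1 for a, b in zip(executed, target) if a == b)
        similarity = common / max(len(target), 1)
        missed = len(set(target) - set(executed))
        return similarity - penalty_per_miss * missed / max(len(target), 1)

    print(affinity([1, 2, 4], [1, 2, 3]))  # 2/3 - 0.5/3 = 0.5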
    					
    2014.08.14 Giovani Guizzo, Thelma Elita Colanzi & Silvia Regina Vergilio A Pattern-Driven Mutation Operator for Search-Based Product Line Architecture Design 2014 Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14), Vol. 8636, pp. 77-91, Fortaleza Brazil, 26-29 August   Inproceedings Design Tools and Techniques
    Abstract: The application of design patterns through mutation operators in search-based design may improve the quality of the architectures produced in the evolution process. However, we did not find, in the literature, any work applying such patterns to the optimization of Product Line Architecture (PLA). Existing works offer manual approaches, which are not search-based, and only apply specific patterns in particular domains. Considering this fact, this paper introduces a meta-model and a mutation operator to allow the application of design patterns in search-based PLA design. The model represents suitable scopes, that is, sets of architectural elements that are suitable to receive a pattern. The mutation operator is used with a multi-objective evolutionary approach to obtain PLA alternatives. Quantitative and qualitative analyses of empirical results show an improvement in the quality of the obtained solutions.
    BibTeX:
    @inproceedings{GuizzoCV14,
      author = {Giovani Guizzo and Thelma Elita Colanzi and Silvia Regina Vergilio},
      title = {A Pattern-Driven Mutation Operator for Search-Based Product Line Architecture Design},
      booktitle = {Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14)},
      publisher = {Springer},
      year = {2014},
      volume = {8636},
      pages = {77-91},
      address = {Fortaleza, Brazil},
      month = {26-29 August},
      doi = {http://dx.doi.org/10.1007/978-3-319-09940-8_6}
    }
    					
    2017.07.07 William B. Langdon, David R. White, Mark Harman, Yue Jia & Justyna Petke API-Constrained Genetic Improvement 2016 Proceedings of the 8th International Symposium on Search Based Software Engineering (SSBSE '16), pp. 224-230, Raleigh NC USA, 8-10 October   Inproceedings
    Abstract: ACGI respects the Application Programming Interface whilst using genetic programming to optimise the implementation of the API. This reduces the scope for improvement, but it may smooth the path to GI acceptance because the programmer's code remains unaffected; only library code is modified. We applied ACGI to C++ software for the state-of-the-art OpenCV SEEDS superpixels image segmentation algorithm, obtaining a speed-up of up to 13.2% (±1.3%) over the $50K Challenge winner announced at CVPR 2015.
    BibTeX:
    @inproceedings{LangdonWHJP16,
      author = {William B. Langdon and David R. White and Mark Harman and Yue Jia and Justyna Petke},
      title = {API-Constrained Genetic Improvement},
      booktitle = {Proceedings of the 8th International Symposium on Search Based Software Engineering (SSBSE '16)},
      publisher = {Springer},
      year = {2016},
      pages = {224-230},
      address = {Raleigh, NC, USA},
      month = {8-10 October},
      doi = {http://dx.doi.org/10.1007/978-3-319-47106-8_16}
    }
    					
    2007.12.02 Shiann-Tsong Sheu & Yue-Ru Chuang A Pipeline-based Genetic Algorithm Accelerator for Time-Critical Processes in Real-Time Systems 2006 IEEE Transactions on Computers, Vol. 55(11), pp. 1435-1448, November   Article Design Tools and Techniques
    Abstract: Meta-heuristic methods such as genetic algorithms (GAs) are frequently used to obtain optimal solutions for complicated problems. However, due to the nature of natural evolution, these methods converge slowly towards an optimal solution and are therefore usually applied to complicated, offline problems. In real-world scenarios, however, there are complicated but real-time problems that must be solved within a short response time while still obtaining an optimal or near-optimal solution, for performance reasons. Thus, the convergence speed of GAs becomes an important issue when they are applied to time-critical optimization problems. To address this, this paper presents a novel method, named the hyper-generation GA (HG-GA), to improve the convergence speed of GAs. The proposed HG-GA breaks the general rule of generation-based evolution and uses a pipeline operation to accelerate convergence towards an optimal solution. Based on an example of a time-critical scheduling process in an optical network, both analysis and simulation results show that the HG-GA can generate more and better chromosomes than general GAs within the same evolutionary period. The rapid convergence property of the HG-GA increases its potential to solve many complicated problems in real-time systems.
    BibTeX:
    @article{SheuC06,
      author = {Shiann-Tsong Sheu and Yue-Ru Chuang},
      title = {A Pipeline-based Genetic Algorithm Accelerator for Time-Critical Processes in Real-Time Systems},
      journal = {IEEE Transactions on Computers},
      year = {2006},
      volume = {55},
      number = {11},
      pages = {1435-1448},
      month = {November},
      doi = {http://dx.doi.org/10.1109/TC.2006.171}
    }
    					
    2011.05.19 Bohuslav Křena, Zdeněk Letko, Tomáš Vojnar & Shmuel Ur A Platform for Search-based Testing of Concurrent Software 2010 Proceedings of the 8th Workshop on Parallel and Distributed Systems: Testing, Analysis, and Debugging (PADTAD '10), pp. 48-58, Trento Italy, 12-16 July   Inproceedings Testing and Debugging
    Abstract: The paper describes a generic, open-source infrastructure called SearchBestie (or S'Bestie for short) that we propose as a platform for experimenting with search-based techniques and for applying them in the area of software testing. Further, motivated by a lack of work on search-based testing targeted at identifying concurrency-related problems, we instantiate S'Bestie for search-based testing of concurrent programs using IBM's concurrency testing infrastructure ConTest. We demonstrate the capabilities of S'Bestie on a series of experiments, which also show (even though these are only our first experiments with S'Bestie) that search-based testing can be useful in the context of testing concurrent programs.
    BibTeX:
    @inproceedings{KrenaLVU10,
      author = {Bohuslav Křena and Zdeněk Letko and Tomáš Vojnar and Shmuel Ur},
      title = {A Platform for Search-based Testing of Concurrent Software},
      booktitle = {Proceedings of the 8th Workshop on Parallel and Distributed Systems: Testing, Analysis, and Debugging (PADTAD '10)},
      publisher = {ACM},
      year = {2010},
      pages = {48-58},
      address = {Trento, Italy},
      month = {12-16 July},
      doi = {http://dx.doi.org/10.1145/1866210.1866215}
    }
    					
    2015.12.09 Peng Sun & Xiaoping Wang Application of Ant Colony Optimization in Preventive Software Maintenance Policy 2012 Proceedings of International Conference on Information Science and Technology (ICIST '12), pp. 141-144, Hubei China, 23-25 March   Inproceedings
    Abstract: This paper studies a preventive software maintenance policy based on an ant colony algorithm. The entire system is divided into several subsystems, and each subsystem has 4 kinds of maintenance policies with different maintenance costs. The model obtains an optimal preventive maintenance policy for each subsystem which guarantees excellent software system reliability at relatively low cost. An example is presented to verify the effectiveness of the designed method. The results show that the presented model is efficient and effective.
    BibTeX:
    @inproceedings{SunW12,
      author = {Peng Sun and Xiaoping Wang},
      title = {Application of Ant Colony Optimization in Preventive Software Maintenance Policy},
      booktitle = {Proceedings of International Conference on Information Science and Technology (ICIST '12)},
      publisher = {IEEE},
      year = {2012},
      pages = {141-144},
      address = {Hubei, China},
      month = {23-25 March},
      doi = {http://dx.doi.org/10.1109/ICIST.2012.6221624}
    }
    					
    2011.03.07 Surender Singh Dahiya, Jitender Kumar Chhabra & Shakti Kumar Application of Artificial Bee Colony Algorithm to Software Testing 2010 Proceedings of the 21st Australian Software Engineering Conference   Inproceedings Testing and Debugging
    Abstract: This paper presents a novel artificial bee colony based search technique for the automatic generation of structural software tests. Test cases are symbolically generated by measuring the fitness of individuals with the help of a branch distance based objective function. The test generator was evaluated using ten real-world programs, some of which have large ranges for their input variables. Results show that the new technique is a reasonable alternative for test data generation, but that it does not perform very well for large inputs or where there are many equality constraints.
    BibTeX:
    @inproceedings{DahiyaCK10,
      author = {Surender Singh Dahiya and Jitender Kumar Chhabra and Shakti Kumar},
      title = {Application of Artificial Bee Colony Algorithm to Software Testing},
      booktitle = {Proceedings of the 21st Australian Software Engineering Conference},
      year = {2010},
      doi = {http://dx.doi.org/10.1109/ASWEC.2010.30}
    }
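    The branch-distance objective guiding the bee colony is the standard search-based-testing measure: zero when the target branch is taken, otherwise proportional to how far the operands are from satisfying the predicate. A minimal version for relational predicates follows, with the usual small constant K.
    Example sketch (Python):
    K = 1.0  # offset added when the predicate is not yet satisfied

    def branch_distance(op, lhs, rhs):
        """Classic branch distance for a predicate `lhs <op> rhs`."""
        if op == "==":
            return 0.0 if lhs == rhs else abs(lhs - rhs) + K
        if op == "!=":
            return 0.0 if lhs != rhs else K
        if op == "<":
            return 0.0 if lhs < rhs else (lhs - rhs) + K
        if op == "<=":
            return 0.0 if lhs <= rhs else (lhs - rhs) + K
        raise ValueError(op)

    # Searching for x with x == 42: the distance shrinks towards the goal.
    for x in (0, 40, 42):
        print(x, branch_distance("==", x, 42))  # 43.0, 3.0, 0.0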
    					
    2014.08.14 Marcos Alvares, Fernando Buarque & Tshilidzi Marwala (Coello Coello, C.A., Ed.) Application of Computational Intelligence for Source Code Classification 2014 Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC '14), pp. 895-902, Beijing China, 6-11 July   Inproceedings
    Abstract: Multi-language Source Code Management systems have been largely used to collaboratively manage software development projects. These systems represent a fundamental step towards fully exploiting communication enhancements, producing concrete value in the way people collaborate to produce more reliable computational systems. These systems evaluate the results of analyses in order to organise and optimise source code. These analyses are strongly dependent on technologies (i.e. frameworks, programming languages, libraries), each with its own characteristics and syntactic structure. To overcome this limitation, source code classification is an essential pre-processing step to identify which analyses should be evaluated. This paper introduces a new approach for generating content-based classifiers by using Evolutionary Algorithms. Experiments were performed on real-world source code collected from more than 200 different open source projects. Results show that our approach can be successfully used for creating more accurate source code classifiers. The resulting classifier is also expansible and flexible to new classification scenarios (opening perspectives for new technologies).
    BibTeX:
    @inproceedings{AlvaresBM14,
      author = {Marcos Alvares and Fernando Buarque and Tshilidzi Marwala},
      title = {Application of Computational Intelligence for Source Code Classification},
      booktitle = {Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {895-902},
      address = {Beijing, China},
      month = {6-11 July},
      url = {http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=6900300}
    }
    					
    2010.04.15 Praveen Ranjan Srivastava & Tai-hoon Kim Application of Genetic Algorithm in Software Testing 2009 International Journal of Software Engineering and Its Applications, Vol. 3(4), pp. 87-95, October   Article Testing and Debugging
    Abstract: This paper presents a method for optimizing software testing efficiency by identifying the most critical path clusters in a program. We do this by developing variable length Genetic Algorithms that optimize and select the software path clusters which are weighted in accordance with the criticality of the path. Exhaustive software testing is rarely possible because it becomes intractable for even medium sized software. Typically only parts of a program can be tested, but these parts are not necessarily the most error prone. Therefore, we are developing a more selective approach to testing by focusing on those parts that are most critical so that these paths can be tested first. By identifying the most critical paths, the testing efficiency can be increased.
    BibTeX:
    @article{SrivastavaK09,
      author = {Praveen Ranjan Srivastava and Tai-hoon Kim},
      title = {Application of Genetic Algorithm in Software Testing},
      journal = {International Journal of Software Engineering and Its Applications},
      year = {2009},
      volume = {3},
      number = {4},
      pages = {87-95},
      month = {October},
      url = {http://www.sersc.org/journals/IJSEIA/vol3_no4_2009/6.pdf}
    }
    					
    2007.12.02 S. Xanthakis, C. Ellis, C. Skourlas, A. Le Gall, S. Katsikas & K. Karapoulios Application of Genetic Algorithms to Software Testing 1992 Proceedings of the 5th International Conference on Software Engineering and Applications, pp. 625-636, Toulouse France, 7-11 December   Inproceedings Testing and Debugging
    BibTeX:
    @inproceedings{XanthakisESLKK92,
      author = {S. Xanthakis and C. Ellis and C. Skourlas and A. Le Gall and S. Katsikas and K. Karapoulios},
      title = {Application of Genetic Algorithms to Software Testing},
      booktitle = {Proceedings of the 5th International Conference on Software Engineering and Applications},
      year = {1992},
      pages = {625-636},
      address = {Toulouse, France},
      month = {7-11 December}
    }
    					
    2010.03.24 Athanasios Tsakonas & Georgios Dounias (Cordeiro, J.; Shishkov, B.; Ranchordas, A. & Helfert, M., Eds.) Application of Genetic Programming in Software Engineering Empirical Data Modelling 2008 Proceedings of the 3rd International Conference on Software and Data Technologies (ICSOFT '08), pp. 295-300, Porto Portugal, 5-8 July   Inproceedings Management
    Abstract: Research in software engineering data analysis has only recently incorporated computational intelligence methodologies. Among these approaches, genetic programming retains a remarkable position, facilitating symbolic regression tasks. In this paper, we demonstrate the effectiveness of the genetic programming paradigm in two major software engineering duties: effort estimation and defect prediction. For each task, we examine data domains from both the commercial and the scientific sector. The proposed model proves superior to past works in the literature.
    BibTeX:
    @inproceedings{TsakonasD08,
      author = {Athanasios Tsakonas and Georgios Dounias},
      title = {Application of Genetic Programming in Software Engineering Empirical Data Modelling},
      booktitle = {Proceedings of the 3rd International Conference on Software and Data Technologies (ICSOFT '08)},
      publisher = {INSTICC Press},
      year = {2008},
      pages = {295-300},
      address = {Porto, Portugal},
      month = {5-8 July},
      url = {http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.148.5003}
    }
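    The symbolic-regression task GP performs here reduces to minimising prediction error over a dataset of past projects; evolved expressions are scored by how well they map project attributes to effort. The fitness sketch below uses hypothetical effort data and hand-written candidate expressions in place of evolved trees.
    Example sketch (Python):
    def mse(expr, dataset):
        """Symbolic-regression fitness: mean squared error of a candidate
        expression over (attributes, actual_effort) records."""
        errors = [(expr(x) - effort) ** 2 for x, effort in dataset]
        return sum(errors) / len(errors)

    # Hypothetical records: (kloc, team_size) -> person-months of effort.
    data = [((10, 4), 28.0), ((25, 6), 70.0), ((5, 2), 13.0)]
    candidates = {
        "2.8 * kloc": lambda x: 2.8 * x[0],
        "kloc * team / 1.5": lambda x: x[0] * x[1] / 1.5,
    }
    for name, expr in sorted(candidates.items(), key=lambda kv: mse(kv[1], data)):
        print(name, round(mse(expr, data), 2))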
    					
    2011.09.05 Hakim Sultanov & Jane Huffman Hayes Application of Swarm Techniques to Requirements Engineering: Requirements Tracing 2010 Proceedings of the 18th IEEE International Requirements Engineering Conference (RE '10), pp. 211-220, 27 September - 1 October   Inproceedings Requirements/Specifications
    Abstract: We posit that swarm intelligence can be applied to effectively address requirements engineering problems. Specifically, this paper demonstrates the applicability of swarm intelligence to the requirements tracing problem using a simple ant colony algorithm. The technique has been validated using two real-world datasets from two problem domains. The technique can generate requirements traceability matrices (RTMs) between textual requirements artifacts (high level requirements traced to low level requirements, for example) with equivalent or better accuracy than traditional information retrieval techniques.
    BibTeX:
    @inproceedings{SultanovH10,
      author = {Hakim Sultanov and Jane Huffman Hayes},
      title = {Application of Swarm Techniques to Requirements Engineering: Requirements Tracing},
      booktitle = {Proceedings of the 18th IEEE International Requirements Engineering Conference (RE '10)},
      publisher = {IEEE},
      year = {2010},
      pages = {211-220},
      month = {27 September - 1 October},
      doi = {http://dx.doi.org/10.1109/RE.2010.33}
    }
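    A hedged sketch of the ant-colony tracing idea: ants walk from each high-level requirement to a low-level one with probability proportional to pheromone times textual overlap, and reinforce the links they pick, so strong candidate traces accumulate pheromone. All names and parameters below are illustrative, not the paper's.
    Example sketch (Python):
    import random
    from collections import defaultdict

    def term_overlap(a, b):
        """Heuristic desirability: shared words between two texts."""
        return len(set(a.lower().split()) & set(b.lower().split()))

    def trace(high, low, ants=50, evaporation=0.1, seed=1):
        random.seed(seed)
        pher = defaultdict(lambda: 1.0)  # pheromone per (high, low) link
        for _ in range(ants):
            for h in high:
                weights = [pher[(h, l)] * (1 + term_overlap(high[h], low[l]))
                           for l in low]
                chosen = random.choices(list(low), weights=weights)[0]
                for l in low:                      # evaporate, then deposit
                    pher[(h, l)] *= (1 - evaporation)
                pher[(h, chosen)] += 1.0
        return pher

    high = {"HR1": "the system shall log every login attempt"}
    low = {"LR1": "write a log record for each login attempt",
           "LR2": "render the dashboard charts"}
    pher = trace(high, low)
    print(max(pher, key=pher.get))  # expect ('HR1', 'LR1')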
    					
    2011.09.05 Hakim Sultanov, Jane Huffman Hayes & Wei-Keat Kong Application of Swarm Techniques to Requirements Tracing 2011 Requirements Engineering, Vol. 16(3), pp. 209-226   Article Requirements/Specifications
    Abstract: We posit that swarm intelligence can be applied to effectively address requirements engineering problems. Specifically, this paper demonstrates the applicability of swarm intelligence to the requirements tracing problem using two techniques: a simple swarm algorithm and a pheromone swarm algorithm. The techniques have been validated using two real-world datasets from two problem domains. The simple swarm technique generated requirements traceability matrices between textual requirements artifacts (high-level requirements traced to low-level requirements, for example). When compared with a baseline information retrieval tracing method, the swarm algorithms showed mixed results. The swarms achieved statistically significant results on one of the secondary measurements for one dataset compared with the baseline method, lending support for continued investigation into swarms for tracing.
    BibTeX:
    @article{SultanovHK11,
      author = {Hakim Sultanov and Jane Huffman Hayes and Wei-Keat Kong},
      title = {Application of Swarm Techniques to Requirements Tracing},
      journal = {Requirements Engineering},
      year = {2011},
      volume = {16},
      number = {3},
      pages = {209-226},
      doi = {http://dx.doi.org/10.1007/s00766-011-0121-4}
    }
    					
    2011.07.20 Jan Staunton & John A. Clark Applications of Model Reuse when using Estimation of Distribution Algorithms to Test Concurrent Software 2011 Proceedings of the 3rd International Symposium on Search Based Software Engineering (SSBSE '11), Vol. 6956, pp. 97-111, Szeged Hungary, 10-12 September   Inproceedings Testing and Debugging
    Abstract: Previous work has shown the efficacy of using Estimation of Distribution Algorithms (EDAs) to detect faults in concurrent software/systems. A promising feature of EDAs is the ability to analyse the information or model learned from any particular execution. The analysis performed can yield insights into the target problem allowing practitioners to adjust parameters of the algorithm or indeed the algorithm itself. This can lead to a saving in the effort required to perform future executions, which is particularly important when targeting expensive fitness functions such as searching concurrent software state spaces. In this work, we describe practical scenarios related to detecting concurrent faults in which reusing information discovered in EDA runs can save effort in future runs, and prove the potential of such reuse using an example scenario. The example scenario consists of examining problem families, and we provide empirical evidence showing real effort saving properties for three such families.
    BibTeX:
    @inproceedings{StauntonC11b,
      author = {Jan Staunton and John A. Clark},
      title = {Applications of Model Reuse when using Estimation of Distribution Algorithms to Test Concurrent Software},
      booktitle = {Proceedings of the 3rd International Symposium on Search Based Software Engineering (SSBSE '11)},
      publisher = {Springer},
      year = {2011},
      volume = {6956},
      pages = {97-111},
      address = {Szeged, Hungary},
      month = {10-12 September},
      doi = {http://dx.doi.org/10.1007/978-3-642-23716-4_12}
    }
    					
    2015.12.09 Sumon Biswas, M.S. Kaiser & S.A. Mamun Applying Ant Colony Optimization in Software Testing to Generate Prioritized Optimal Path and Test Data 2015 Proceedings of International Conference on Electrical Engineering and Information Communication Technology (ICEEICT '15), pp. 1-6, Dhaka Bangladesh, 21-23 May   Inproceedings Testing and Debugging
    Abstract: Software testing is one of the most important parts of the software development lifecycle. Among the various software testing approaches, structural testing is widely used. Structural testing can be improved largely by traversing all possible code paths of the software. The genetic algorithm is the most used search technique to automate path testing and test case generation. Recently, different novel search-based optimization techniques such as Ant Colony Optimization (ACO), Artificial Bee Colony (ABC), Artificial Immune System (AIS) and Particle Swarm Optimization (PSO) have been applied to generate optimal paths for complete software coverage. In this paper, an ant colony optimization (ACO) based algorithm is proposed which generates a set of optimal paths and prioritizes them. Additionally, the approach generates test data sequences within the domain to use as inputs for the generated paths. The proposed approach guarantees full software coverage with minimum redundancy. The paper also demonstrates the proposed approach by applying it to a program module.
    BibTeX:
    @inproceedings{BiswasKM15,
      author = {Sumon Biswas and M. S. Kaiser and S. A. Mamun},
      title = {Applying Ant Colony Optimization in Software Testing to Generate Prioritized Optimal Path and Test Data},
      booktitle = {Proceedings of International Conference on Electrical Engineering and Information Communication Technology (ICEEICT '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {1-6},
      address = {Dhaka, Bangladesh},
      month = {21-23 May},
      doi = {http://dx.doi.org/10.1109/ICEEICT.2015.7307500}
    }
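    The pheromone bookkeeping behind such a path generator follows the standard ACO update: evaporate all edges, then reinforce the edges of each generated path in proportion to its quality (here, an assumed score in (0, 1]). Rates and rewards below are illustrative.
    Example sketch (Python):
    def update_pheromone(pheromone, paths, rho=0.2, q=1.0):
        """Standard ACO update over control-flow-graph edges.
        pheromone -- {(node, node): level}
        paths     -- [(node_sequence, quality)] with quality in (0, 1]"""
        for edge in pheromone:
            pheromone[edge] *= (1 - rho)           # evaporation
        for nodes, quality in paths:
            for edge in zip(nodes, nodes[1:]):     # consecutive-node edges
                pheromone[edge] = pheromone.get(edge, 0.0) + q * quality
        return pheromone

    pher = {(1, 2): 1.0, (2, 3): 1.0, (2, 4): 1.0}
    print(update_pheromone(pher, [([1, 2, 4], 0.9)]))
    # {(1, 2): 1.7, (2, 3): 0.8, (2, 4): 1.7}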
    					
    2014.05.28 Giovani Guizzo, Thelma Elita Colanzi & Silvia Regina Vergilio Applying Design Patterns in Product Line Search-based Design: Feasibility Analysis and Implementation Aspects 2013 Proceedings of the Chilean Computing Conference (JCC '13), Temuco Chile, 11-15 November   Inproceedings Design Tools and Techniques
    Abstract: Some works have manually applied design patterns to Product Line Architectures (PLAs) in order to improve the understanding and reuse of PLA artifacts. However, there is no search-based approach that considers this subject. Applying design patterns to conventional architectures through mutation processes in evolutionary approaches has been proven to be an efficient technique. In this sense, this work presents the results of a feasibility analysis for the automated application of GoF design patterns within a search-based PLA design approach. In addition, the paper proposes a metamodel representation for scopes suitable to receive the design patterns and a mutation operator procedure to apply the patterns identified as feasible.
    BibTeX:
    @inproceedings{GuizzoCV13,
      author = {Giovani Guizzo and Thelma Elita Colanzi and Silvia Regina Vergilio},
      title = {Applying Design Patterns in Product Line Search-based Design: Feasibility Analysis and Implementation Aspects},
      booktitle = {Proceedings of the Chilean Computing Conference (JCC '13)},
      year = {2013},
      address = {Temuco, Chile},
      month = {11-15 November},
      url = {http://jcc2013.inf.uct.cl/wp-content/proceedings/SCCC/Applying%20design%20patterns%20in%20product%20line%20search-based%20design%20feasibility%20analysis%20and%20implementation%20aspects.pdf}
    }
    					
    2010.09.08 Guanzhou Lu, Rami Bahsoon & Xin Yao Applying Elementary Landscape Analysis to Search-Based Software Engineering 2010 Proceedings of the 2nd International Symposium on Search Based Software Engineering (SSBSE '10), pp. 3-8, Benevento Italy, 7-9 September   Inproceedings General Aspects and Survey
    Abstract: Recent research in search-based software engineering (SBSE) has demonstrated that a number of software engineering problems can be reformulated as search problems, to which search algorithms can then be applied. However, most of the existing work has been of an empirical nature and the techniques are predominately experimental. Therefore, in-depth studies into the characteristics of SE problems and of appropriate algorithms to solve them are necessary. In this paper, we propose a novel method to gain insight into a variant of the next release problem (NRP) using elementary landscape analysis, which could be used to guide the design of more efficient algorithms. Preliminary experimental results indicate the effectiveness of the proposed method.
    BibTeX:
    @inproceedings{LuBY10,
      author = {Guanzhou Lu and Rami Bahsoon and Xin Yao},
      title = {Applying Elementary Landscape Analysis to Search-Based Software Engineering},
      booktitle = {Proceedings of the 2nd International Symposium on Search Based Software Engineering (SSBSE '10)},
      publisher = {IEEE},
      year = {2010},
      pages = {3-8},
      address = {Benevento, Italy},
      month = {7-9 September},
      doi = {http://dx.doi.org/10.1109/SSBSE.2010.10}
    }
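    For reference, the property that elementary landscape analysis exploits can be stated compactly (this is the standard result, not a formula taken from the paper): a landscape is elementary when the fitness f satisfies Grover's wave equation, so the average fitness over the neighbourhood N(x) of any solution x is determined by f(x) itself. In LaTeX notation, with d the neighbourhood size, \bar{f} the mean fitness over the search space and k a constant fixed by the neighbourhood structure:

    \mathrm{avg}_{y \in N(x)} f(y) \;=\; f(x) + \frac{k}{d}\,\bigl(\bar{f} - f(x)\bigr)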
    					
    2012.03.07 Shaukat Ali Khan & Aamer Nadeem Applying Evolutionary Approaches to Data Flow Testing at Unit Level 2011 Proceedings of International Conferences ASEA, DRBC and EL, Vol. 257, Jeju Island Korea, 8-10 December   Inproceedings Testing and Debugging
    Abstract: Data flow testing is a white box testing approach that uses the data flow relations in a program for the selection of test cases. Evolutionary testing uses evolutionary approaches for the generation and selection of test data. This paper presents a novel approach that applies evolutionary algorithms to the automatic generation of test paths using the data flow relations in a program. Our approach starts with a random initial population of test paths; based on the selected testing criteria, new paths are then generated by applying a genetic algorithm. A fitness function evaluates each chromosome (path) based on the selected data flow testing criteria and computes its fitness. We apply one-point crossover and mutation operators for the generation of the new population. The approach has been implemented in Java in a prototype tool called ETODF for validation. In experiments with this prototype, our approach achieved much better results than random testing.
    BibTeX:
    @inproceedings{KhanN11,
      author = {Shaukat Ali Khan and Aamer Nadeem},
      title = {Applying Evolutionary Approaches to Data Flow Testing at Unit Level},
      booktitle = {Proceedings of International Conferences ASEA, DRBC and EL},
      publisher = {Springer},
      year = {2011},
      volume = {257},
      address = {Jeju Island, Korea},
      month = {8-10 December},
      doi = {http://dx.doi.org/10.1007/978-3-642-27207-3}
    }
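    The data-flow criteria such a fitness function evaluates can be illustrated as def-use pair coverage: a path scores by the definition-use pairs it exercises along a definition-clear route. The sketch below is a simplified, hedged reading, not ETODF's implementation.
    Example sketch (Python):
    def covered_du_pairs(path, defs, uses):
        """Def-use pairs (v, d, u) exercised by a node path: d defines v,
        a later u uses v, and no node in between redefines v.
        defs -- {node: vars defined}; uses -- {node: vars used}"""
        covered = set()
        for i, d in enumerate(path):
            for v in defs.get(d, ()):
                for u in path[i + 1:]:
                    if v in uses.get(u, ()):
                        covered.add((v, d, u))
                    if v in defs.get(u, ()):
                        break                  # redefinition kills the def
        return covered

    # Toy path: node 1 defines x, node 3 uses it, node 2 is neutral.
    print(covered_du_pairs([1, 2, 3], {1: {"x"}}, {3: {"x"}}))  # {('x', 1, 3)}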
    					
    2007.12.02 André Baresel, Harmen-Hinrich Sthamer & Joachim Wegener Applying Evolutionary Testing to Search for Critical Defects 2004 Proceedings of the 2004 Conference on Genetic and Evolutionary Computation (GECCO '04), Vol. 3103, pp. 1427-1428, Seattle Washington USA, 26-30 June   Inproceedings Testing and Debugging
    Abstract: Software systems are used regularly in safety-relevant applications. Therefore, the occurrence of critical defects may not only cause costly recalls but may also endanger human lives. Accordingly, the development of software systems in industrial practice must comply with the highest quality requirements and standards. In practice, the most important analytical quality assurance method is dynamic testing and the most important activity to ensure this quality is test case determination. The effectiveness and efficiency of the test process can be clearly improved by Evolutionary Testing. Evolutionary Testing is a metaheuristic search technique for the generation of test cases.
    BibTeX:
    @inproceedings{BareselSW04,
      author = {André Baresel and Harmen-Hinrich Sthamer and Joachim Wegener},
      title = {Applying Evolutionary Testing to Search for Critical Defects},
      booktitle = {Proceedings of the 2004 Conference on Genetic and Evolutionary Computation (GECCO '04)},
      publisher = {Springer Berlin / Heidelberg},
      year = {2004},
      volume = {3103},
      pages = {1427-1428},
      address = {Seattle, Washington, USA},
      month = {26-30 June},
      doi = {http://dx.doi.org/10.1007/978-3-540-24855-2_163}
    }
    					
    2014.09.22 A.D. Bakar, A.B. Sultan, H. Zulzalil & J. Din Applying Evolution Programming Search Based Software Engineering (SBSE) in Selecting the Best Open Source Software Maintainability Metrics 2012 Proceedings of IEEE Symposium on Computer Applications and Industrial Electronics (ISCAIE '12), pp. 70-73, Kota Kinabalu Malaysia, 3-4 December   Inproceedings Distribution and Maintenance
    Abstract: The nature of the Open Source Software development paradigm forces individual practitioners and organizations to adopt software through a trial-and-error approach. This leads to the problem of adopting software and then abandoning it, after realizing it lacks the qualities needed to suit their requirements, or of facing negative challenges in maintaining the software. This is compounded by the lack of recognized guidelines to lead practitioners in selecting, out of the dozens of available metrics, the best metric(s) for measuring OSS quality. In this study, the results provide guidelines that lead to the development of a metrics model that can select the best metric(s) to predict the maintainability of Open Source Software.
    BibTeX:
    @inproceedings{BakarSZD12,
      author = {A. D. Bakar and A. B. Sultan and H. Zulzalil and J. Din},
      title = {Applying Evolution Programming Search Based Software Engineering (SBSE) in Selecting the Best Open Source Software Maintainability Metrics},
      booktitle = {Proceedings of IEEE Symposium on Computer Applications and Industrial Electronics (ISCAIE '12)},
      publisher = {IEEE},
      year = {2012},
      pages = {70-73},
      address = {Kota Kinabalu, Malaysia},
      month = {3-4 December},
      doi = {http://dx.doi.org/10.1109/ISCAIE.2012.6482071}
    }
    					
    2016.02.17 Sangeeta Sabharwal, Ritu Sibal & Chayanika Sharma Applying Genetic Algorithm for Prioritization of Test Case Scenarios Derived from UML Diagrams 2011 International Journal of Computer Science Issues, Vol. 8(3), pp. 433-444, May   Article Testing and Debugging
    Abstract: Software testing involves identifying the test cases which discover errors in the program. However, exhaustive testing of software is very time consuming. In this paper, a technique is proposed to prioritize test case scenarios by identifying the critical path clusters using a genetic algorithm. The test case scenarios are derived from the UML activity diagram and state chart diagram. The testing efficiency is optimized by applying the genetic algorithm on the test data. The information flow metric is adopted in this work for calculating the information flow complexity associated with each node of the activity diagram and state chart diagram.
    BibTeX:
    @article{SabharwalSS11b,
      author = {Sangeeta Sabharwal and Ritu Sibal and Chayanika Sharma},
      title = {Applying Genetic Algorithm for Prioritization of Test Case Scenarios Derived from UML Diagrams},
      journal = {International Journal of Computer Science Issues},
      year = {2011},
      volume = {8},
      number = {3},
      pages = {433-444},
      month = {May},
      url = {http://www.ijcsi.org/papers/IJCSI-8-3-2-433-444.pdf}
    }
    					
    2009.03.31 Outi Räihä Applying Genetic Algorithms in Software Architecture Design 2008 , February School: Department of Computer Sciences, University of Tampere   Mastersthesis Design Tools and Techniques
    Abstract: This thesis experiments with a novel approach to applying genetic algorithms in software architecture design by giving the structure of an architecture at a highly abstract level. Previously in the literature, genetic algorithms are used only to improve existing architectures. The structure and evaluation of software architectures and the principles of meta-heuristic search algorithms are introduced to give a basis to understand the implementation. Current research in the field of search-based software engineering is explored to give a perspective to the implementation presented in this thesis. The chosen genetic construction of software architectures is based on a model which contains information of a set of responsibilities and dependencies between them. An implementation using this model is presented, as well as test results achieved from a case study made on a sketch of an electronic home control system. The test results show that quality results can be achieved using the selected approach and that the presented implementation is a good starting point for future research.
    BibTeX:
    @mastersthesis{Raiha08,
      author = {Outi Räihä},
      title = {Applying Genetic Algorithms in Software Architecture Design},
      school = {Department of Computer Sciences, University of Tampere},
      year = {2008},
      month = {February},
      url = {http://www.cs.uta.fi/research/thesis/masters/Raiha_Outi.pdf}
    }
    					
    2009.04.02 Andres J. Ramirez, David B. Knoester, Betty H.C. Cheng & Philip K. McKinley Applying Genetic Algorithms to Decision Making in Autonomic Computing Systems 2009 Proceedings of the 6th International Conference on Autonomic Computing (ICAC '09), pp. 97-106, Barcelona Spain, 15-19 June   Inproceedings Design Tools and Techniques
    Abstract: Increasingly, applications need to be able to self-reconfigure in response to changing requirements and environmental conditions. Autonomic computing has been proposed as a means for automating software maintenance tasks. As the complexity of adaptive and autonomic systems grows, designing and managing the set of reconfiguration rules becomes increasingly challenging and may produce inconsistencies. This paper proposes an approach to leverage genetic algorithms in the decision-making process of an autonomic system. This approach enables a system to dynamically evolve reconfiguration plans at run time in response to changing requirements and environmental conditions. A key feature of this approach is incorporating system and environmental monitoring information into the genetic algorithm such that specific changes in the environment automatically drive the evolutionary process towards new viable solutions. We have applied this genetic-algorithm based approach to the dynamic reconfiguration of a collection of remote data mirrors, with the goal of minimizing costs while maximizing data reliability and network performance, even in the presence of link failures.
    BibTeX:
    @inproceedings{RamirezKCM09,
      author = {Andres J. Ramirez and David B. Knoester and Betty H.C. Cheng and Philip K. McKinley},
      title = {Applying Genetic Algorithms to Decision Making in Autonomic Computing Systems},
      booktitle = {Proceedings of the 6th International Conference on Autonomic Computing (ICAC '09)},
      publisher = {ACM},
      year = {2009},
      pages = {97-106},
      address = {Barcelona, Spain},
      month = {15-19 June},
      doi = {http://dx.doi.org/10.1145/1555228.1555258}
    }
    					
    2015.09.30 Manish Mahajan, Sumit Kumar & Rabins Porwal Applying Genetic Algorithm to Increase the Efficiency of a Data Flow-based Test Data Generation Approach 2012 ACM SIGSOFT Software Engineering Notes, Vol. 37(5), pp. 1-5, September   Article Testing and Debugging
    Abstract: The success or failure of the entire software development process relies on the software testing component which is responsible for ensuring that the software that is released is free from bugs. One of the major labor intensive activities of software testing is the generation of the test data for the purpose of applying the testing methodologies. Many approaches have been tried and tested for automating the process of generating the test data. Meta-heuristics have been applied extensively for improving the efficiency of the process. This paper analyses the effectiveness of applying genetic algorithms for generating test data automatically using data flow testing approach. An incremental coverage measurement method is used to improve the convergence.
    BibTeX:
    @article{MahajanKP12,
      author = {Manish Mahajan and Sumit Kumar and Rabins Porwal},
      title = {Applying Genetic Algorithm to Increase the Efficiency of a Data Flow-based Test Data Generation Approach},
      journal = {ACM SIGSOFT Software Engineering Notes},
      year = {2012},
      volume = {37},
      number = {5},
      pages = {1-5},
      month = {September},
      doi = {http://dx.doi.org/10.1145/2347696.2347707}
    }
    					
    2013.06.28 Justyna Petke, William B. Langdon & Mark Harman Applying Genetic Improvement to MiniSAT 2013 Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13), Vol. 8084, pp. 257-262, St. Petersburg Russia, 24-26 August   Inproceedings
    Abstract: Genetic Programming (GP) has long been applied to several SBSE problems. Recently there has been much interest in using GP and its variants to solve demanding problems in which the code evolved by GP is intended for deployment. This paper investigates the application of genetic improvement to a challenging problem of improving a well-studied system: a Boolean satisfiability (SAT) solver called MiniSAT. Many programmers have tried to make this very popular solver even faster and a separate SAT competition track has been created to facilitate this goal. Thus genetically improving MiniSAT poses a great challenge. Moreover, due to a wide range of applications of SAT solving technologies any improvement could have a great impact. Our initial results show that there is some room for improvement. However, a significantly more efficient version of MiniSAT is yet to be discovered.
    BibTeX:
    @inproceedings{PetkeLH13,
      author = {Justyna Petke and William B. Langdon and Mark Harman},
      title = {Applying Genetic Improvement to MiniSAT},
      booktitle = {Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13)},
      publisher = {Springer},
      year = {2013},
      volume = {8084},
      pages = {257-262},
      address = {St. Petersburg, Russia},
      month = {24-26 August},
      doi = {http://dx.doi.org/10.1007/978-3-642-39742-4_21}
    }
    					
    2011.03.07 Xiang Chen, Qing Gu, Jingxian Qi & Daoxu Chen Applying Particle Swarm Optimization to Pairwise Testing 2010 Proceedings of the 34th Annual Computer Software and Applications Conference (COMPSAC '10), pp. 107-116, Seoul South Korea, 19-23 July   Inproceedings Testing and Debugging
    Abstract: Combinatorial testing (also called interaction testing) is an effective specification-based test input generation technique. To date, most research work in combinatorial testing aims to propose novel approaches that generate test suites of minimum size which still cover all the pairwise, triple, or n-way combinations of factors. Since this problem has been demonstrated to be NP-hard, existing approaches have been designed to generate optimal or near-optimal combinatorial test suites in polynomial time. In this paper, we apply particle swarm optimization (PSO), a kind of meta-heuristic search technique, to pairwise testing (i.e. a special case of combinatorial testing aiming to cover all the pairwise combinations). To systematically build pairwise test suites, we propose two different PSO based algorithms: one based on a one-test-at-a-time strategy and the other on an IPO-like strategy. In both algorithms, we use PSO to complete the construction of a single test. To successfully apply PSO to cover more uncovered pairwise combinations in this construction process, we provide a detailed description of how to formulate the search space, define the fitness function, and set some heuristic settings. To verify the effectiveness of our approach, we implement these algorithms and choose some typical inputs. In our empirical study, we analyze the impact factors of our approach and compare it to other well-known approaches. The final empirical results show the effectiveness and efficiency of our approach.
    BibTeX:
    @inproceedings{ChenGQC10,
      author = {Xiang Chen and Qing Gu and Jingxian Qi and Daoxu Chen},
      title = {Applying Particle Swarm Optimization to Pairwise Testing},
      booktitle = {Proceedings of the 34th Annual Computer Software and Applications Conference (COMPSAC '10)},
      publisher = {IEEE},
      year = {2010},
      pages = {107-116},
      address = {Seoul, South Korea},
      month = {19-23 July},
      doi = {http://dx.doi.org/10.1109/COMPSAC.2010.17}
    }
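    Both algorithms build on the standard PSO velocity/position update; a minimal continuous version is sketched below with typical inertia and acceleration constants, using the sphere function as a stand-in for the paper's combinatorial fitness.
    Example sketch (Python):
    import random

    def pso(fitness, dim=2, particles=20, iters=100,
            w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimise `fitness` with a basic particle swarm."""
        rnd = random.Random(seed)
        xs = [[rnd.uniform(-5, 5) for _ in range(dim)] for _ in range(particles)]
        vs = [[0.0] * dim for _ in range(particles)]
        pbest = [x[:] for x in xs]
        gbest = min(pbest, key=fitness)[:]
        for _ in range(iters):
            for i, x in enumerate(xs):
                for d in range(dim):
                    vs[i][d] = (w * vs[i][d]
                                + c1 * rnd.random() * (pbest[i][d] - x[d])
                                + c2 * rnd.random() * (gbest[d] - x[d]))
                    x[d] += vs[i][d]
                if fitness(x) < fitness(pbest[i]):
                    pbest[i] = x[:]
                    if fitness(x) < fitness(gbest):
                        gbest = x[:]
        return gbest

    sphere = lambda x: sum(v * v for v in x)
    print([round(v, 3) for v in pso(sphere)])  # close to [0.0, 0.0]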
    					
    2009.01.28 Khin Haymar Saw Hla, YoungSik Choi & Jong Sou Park Applying Particle Swarm Optimization to Prioritizing Test Cases for Embedded Real Time Software 2008 Proceedings of the 2008 IEEE 8th International Conference on Computer and Information Technology Workshops, pp. 527-532, Sydney Australia, 8-11 July   Inproceedings Testing and Debugging
    Abstract: In recent years, complex embedded systems are used in every device infiltrating our daily lives. Since most embedded systems are multi-tasking real-time systems, task interleaving issues, deadlines and other factors mean that software units must be retested to follow subsequent changes. Regression testing is used for software maintenance to revalidate the old functionality of a software unit. Testing is one of the most complex and time-consuming activities, in which running all combinations of test cases in a test suite may require a large amount of effort. Test case prioritization techniques order test cases in an attempt to increase effectiveness in regression testing. This paper proposes to use the particle swarm optimization (PSO) algorithm to prioritize test cases automatically based on the modified software units. According to recent investigations, PSO is a multi-objective optimization technique that can find the best positions of objects. The goal is to prioritize the test cases into a new best order, based on modified software components, so that test cases with new higher priority can be selected in the regression testing process. The empirical results show that by using the PSO algorithm, test cases can be prioritized in the test suites with their new best positions effectively and efficiently.
    BibTeX:
    @inproceedings{HlaCP08,
      author = {Khin Haymar Saw Hla and YoungSik Choi and Jong Sou Park},
      title = {Applying Particle Swarm Optimization to Prioritizing Test Cases for Embedded Real Time Software},
      booktitle = {Proceedings of the 2008 IEEE 8th International Conference on Computer and Information Technology Workshops},
      publisher = {IEEE},
      year = {2008},
      pages = {527-532},
      address = {Sydney, Australia},
      month = {8-11 July},
      doi = {http://dx.doi.org/10.1109/CIT.2008.Workshops.104}
    }
    					
    2007.12.02 Andreas Windisch, Stefan Wappler & Joachim Wegener Applying Particle Swarm Optimization to Software Testing 2007 Proceedings of the 9th annual Conference on Genetic and Evolutionary Computation (GECCO '07), pp. 1121-1128, London England, 7-11 July   Inproceedings Testing and Debugging
    Abstract: Evolutionary structural testing is an approach to automatically generating test cases that achieve high structural code coverage. It typically uses genetic algorithms (GAs) to search for relevant test cases. In recent investigations particle swarm optimization (PSO), an alternative search technique, often outperformed GAs when applied to various problems. This raises the question of how PSO competes with GAs in the context of evolutionary structural testing.In order to contribute to an answer to this question, we performed experiments with 25 small artificial test objects and 13 more complex industrial test objects taken from various development projects. The results show that PSO outperforms GAs for most code elements to be covered in terms of effectiveness and efficiency.
    BibTeX:
    @inproceedings{WindischWW07,
      author = {Andreas Windisch and Stefan Wappler and Joachim Wegener},
      title = {Applying Particle Swarm Optimization to Software Testing},
      booktitle = {Proceedings of the 9th annual Conference on Genetic and Evolutionary Computation (GECCO '07)},
      publisher = {ACM},
      year = {2007},
      pages = {1121-1128},
      address = {London, England},
      month = {7-11 July},
      doi = {http://dx.doi.org/10.1145/1276958.1277178}
    }
    					
    2014.08.14 Tao Yue & Shaukat Ali Applying Search Algorithms for Optimizing Stakeholders Familiarity and Balancing Workload in Requirements Assignment 2014 Proceedings of the 2014 Conference on Genetic and Evolutionary Computation (GECCO '14), pp. 1295-1302, Vancouver Canada, 12-16 July   Inproceedings Requirements/Specifications
    Abstract: During the early phase of the project development lifecycle of large-scale cyber-physical systems, a large number of requirements need to be assigned to different stakeholders from different organizations, or from different departments of the same organization, for reviewing, clarifying and checking their conformance to industry standards and government or other regulations. These requirements have different characteristics, such as varying importance to the organization, complexity, and dependencies between each other, thereby requiring different effort (workload) to review and clarify. While working with our industrial partners in the domain of cyber-physical systems, we discovered an optimization problem, where an optimal solution is required for assigning requirements to different stakeholders by maximizing their familiarity with the assigned requirements while balancing the overall workload of each stakeholder. We propose a fitness function which was investigated with four search algorithms: (1+1) Evolutionary Algorithm (EA), Genetic Algorithm, and the Alternating Variable Method, with Random Search as a comparison baseline. We empirically evaluated their performance in finding an optimal solution using a large-scale industrial case study and 120 artificial problems of varying complexity. Results show that (1+1) EA, together with our proposed fitness function, gives the best results compared with the other three algorithms.
    BibTeX:
    @inproceedings{YueA14,
      author = {Tao Yue and Shaukat Ali},
      title = {Applying Search Algorithms for Optimizing Stakeholders Familiarity and Balancing Workload in Requirements Assignment},
      booktitle = {Proceedings of the 2014 Conference on Genetic and Evolutionary Computation (GECCO '14)},
      publisher = {ACM},
      year = {2014},
      pages = {1295-1302},
      address = {Vancouver, Canada},
      month = {12-16 July},
      doi = {http://dx.doi.org/10.1145/2576768.2598309}
    }
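    The (1+1) EA that performed best is among the simplest search algorithms to state: keep one assignment, mutate one gene, and keep the mutant if it is no worse. The fitness below, rewarding total familiarity and penalising workload spread, is a hedged reconstruction of the idea, not the paper's exact function.
    Example sketch (Python):
    import random
    import statistics

    def fitness(assign, familiarity, effort):
        """assign[r] = stakeholder for requirement r. Reward summed
        familiarity; penalise per-stakeholder workload imbalance."""
        fam = sum(familiarity[s][r] for r, s in enumerate(assign))
        loads = [0.0] * len(familiarity)
        for r, s in enumerate(assign):
            loads[s] += effort[r]
        return fam - statistics.pstdev(loads)

    def one_plus_one_ea(familiarity, effort, iters=2000, seed=0):
        rnd = random.Random(seed)
        n_req, n_stake = len(effort), len(familiarity)
        assign = [rnd.randrange(n_stake) for _ in range(n_req)]
        best = fitness(assign, familiarity, effort)
        for _ in range(iters):
            child = assign[:]
            child[rnd.randrange(n_req)] = rnd.randrange(n_stake)
            f = fitness(child, familiarity, effort)
            if f >= best:                     # accept if no worse
                assign, best = child, f
        return assign, best

    familiarity = [[0.9, 0.1, 0.8, 0.2],   # stakeholder 0
                   [0.2, 0.8, 0.3, 0.9]]   # stakeholder 1
    effort = [2.0, 2.0, 2.0, 2.0]
    print(one_plus_one_ea(familiarity, effort))  # optimum is ([0, 1, 0, 1], 3.4)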
    					
    2012.10.25 Thelma Elita Colanzi & Silvia Regina Vergilio Applying Search Based Optimization to Software Product Line Architectures: Lessons Learned 2012 Proceedings of the 4th International Symposium on Search Based Software Engineering (SSBSE '12), Vol. 7515, pp. 259-266, Riva del Garda Italy, 28-30 September   Inproceedings Design Tools and Techniques
    Abstract: The Product-Line Architecture (PLA) is a fundamental SPL artifact. However, PLA design is a people-intensive and non-trivial task, and finding the best architecture can be formulated as an optimization problem with many objectives. We found several approaches that address search-based design of software architectures by using multi-objective evolutionary algorithms; however, such approaches have not been applied to PLAs. Considering this fact, in this work we explore the use of these approaches to optimize PLAs. An extension of existing approaches is investigated, which uses specific metrics to evaluate PLA characteristics. We then performed a case study involving one SPL. From the experience acquired during this study, we relate some lessons learned, which are discussed in this work. Furthermore, the results point out that, in the case of PLAs, it is necessary to use SPL-specific measures and evolutionary operators more sensitive to the SPL context.
    BibTeX:
    @inproceedings{ColanziV12,
      author = {Thelma Elita Colanzi and Silvia Regina Vergilio},
      title = {Applying Search Based Optimization to Software Product Line Architectures: Lessons Learned},
      booktitle = {Proceedings of the 4th International Symposium on Search Based Software Engineering (SSBSE '12)},
      publisher = {Springer},
      year = {2012},
      volume = {7515},
      pages = {259-266},
      address = {Riva del Garda, Italy},
      month = {28-30 September},
      doi = {http://dx.doi.org/10.1007/978-3-642-33119-0_19}
    }
    					
    2010.11.23 Camila Loiola Brito Maia, Fabrício Gomes de Freitas & Jerffeson Teixeira de Souza Applying Search-Based Techniques for Requirements-Based Test Case Prioritization 2010 Proceedings of the Brazilian Workshop on Optimization in Software Engineering (WOES '10), Salvador Brazil, 30-30 September   Inproceedings Testing and Debugging
    Abstract: Although software testing is very important, there may be situations in which there is no time to execute all test cases. It is then important to order the test cases so that the most important ones come first. Most works on search-based test case prioritization have used unit-level techniques, which require the code to be known in advance. This work considers requirement-based metrics, such as volatility and importance, to prioritize test cases. In other words, no code has to be known to prioritize the test cases with this approach. Several search-based techniques were applied to this multi-objective requirement-based formulation of the test case prioritization problem. The results show the efficiency of search-based techniques in ordering the test cases.
    BibTeX:
    @inproceedings{Maiadd10,
      author = {Camila Loiola Brito Maia and Fabrício Gomes de Freitas and Jerffeson Teixeira de Souza},
      title = {Applying Search-Based Techniques for Requirements-Based Test Case Prioritization},
      booktitle = {Proceedings of the Brazilian Workshop on Optimization in Software Engineering (WOES '10)},
      year = {2010},
      address = {Salvador, Brazil},
      month = {30-30 September},
      url = {http://www.uniriotec.br/~marcio.barros/woes2010/Paper04.pdf}
    }
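
    The requirements-based idea is simple to illustrate: score each test case by the importance and volatility of the requirements it exercises and sort, with no knowledge of the code. In the Python sketch below the data, the weights, and the single weighted-sum objective are illustrative assumptions (the paper treats a multi-objective formulation).

    requirements = {"R1": (9, 2), "R2": (4, 8), "R3": (6, 5)}       # req -> (importance, volatility), assumed
    tests = {"T1": ["R1"], "T2": ["R2", "R3"], "T3": ["R1", "R3"]}  # test -> requirements covered, assumed
    w_imp, w_vol = 0.7, 0.3                                         # assumed criterion weights

    def score(test):
        # Sum the weighted importance and volatility of every requirement the test exercises.
        return sum(w_imp * requirements[r][0] + w_vol * requirements[r][1]
                   for r in tests[test])

    ordering = sorted(tests, key=score, reverse=True)
    print(ordering)   # most valuable test cases first: ['T3', 'T2', 'T1']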
    					
    2013.06.28 Alexey Kolesnichenko, Christopher M. Poskitt & Bertrand Meyer Applying Search in a Contract-Based Automatic Testing Tool 2013 Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13), Vol. 8084, St. Petersburg Russia, 24-26 August   Inproceedings Testing and Debugging
    Abstract: Automated random testing has been shown to be effective at finding faults in a variety of contexts and is deployed in several testing frameworks. AutoTest is one such framework, targeting programs written in Eiffel, an object-oriented language natively supporting executable pre- and postconditions, which serve, respectively, as test filters and test oracles. In this paper, we propose the integration of search-based techniques—along the lines of Tracey—to try to guide the tool towards input data that leads to violations of the postconditions present in the code; input data that random testing alone might miss, or take longer to find. Furthermore, we propose to minimise the performance impact of this extension by applying GPU programming to amenable parts of the computation.
    BibTeX:
    @inproceedings{KolesnichenkoPM13,
      author = {Alexey Kolesnichenko and Christopher M. Poskitt and Bertrand Meyer},
      title = {Applying Search in a Contract-Based Automatic Testing Tool},
      booktitle = {Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13)},
      publisher = {Springer},
      year = {2013},
      volume = {8084},
      address = {St. Petersburg, Russia},
      month = {24-26 August},
      doi = {http://dx.doi.org/10.1007/978-3-642-39742-4_31}
    }
    					
    2012.10.25 Matheus Henrique Esteves Paixão, Márcia Maria Albuquerque Brasil, Thiago Gomes Nepomuceno da Silva & Jerffeson Teixeira de Souza Applying the Ant-Q Algorithm on the Prioritization of Software Requirements with Precedence 2012 Proceedings of the 3rd Brazilian Workshop on Search-Based Software Engineering (WESB '12), Natal RN Brazil, 23 September   Inproceedings Requirements/Specifications
    Abstract: Software requirements prioritization concerns deciding the sequence in which requirements are developed, and should be done during project planning. Several aspects should be taken into consideration for this task, especially meeting customers' expectations, addressing project risk, and complying with precedences between requirements. Ant colony based approaches have become prominent as a search strategy for solving Software Engineering problems. In this work, the Ant-Q algorithm is adapted to solve the software requirements prioritization problem with precedence, and experiments are carried out on problem instances.
    BibTeX:
    @inproceedings{PaixaoBSS12,
      author = {Matheus Henrique Esteves Paixão and Márcia Maria Albuquerque Brasil and Thiago Gomes Nepomuceno da Silva and Jerffeson Teixeira de Souza},
      title = {Applying the Ant-Q Algorithm on the Prioritization of Software Requirements with Precedence},
      booktitle = {Proceedings of the 3rd Brazilian Workshop on Search-Based Software Engineering (WESB '12)},
      year = {2012},
      address = {Natal, RN, Brazil},
      month = {23 September}
    }
    					
    2015.12.08 Juhi Khandelwal & Pradeep Tomar Approach for Automated Test Data Generation for Path Testing in Aspect-Oriented Programs using Genetic Algorithm 2015 Proceedings of International Conference on Computing, Communication & Automation (ICCCA '15), pp. 854-858, Noida India, 15-16 May   Inproceedings Testing and Debugging
    Abstract: Aspect-Oriented Programming (AOP) is an emerging programming paradigm that supports the implementation of cross-cutting requirements in named program units called aspects. However, aspects are hard to deal with in many stages of the Software Development Life Cycle (SDLC), especially in aspect-oriented software testing. The main aim of testing is to find errors during the execution of the program. Errors can occur in any part of the program, so this study uses a Control Flow Graph (CFG) to depict all paths of the program during its execution. Some paths of the program execute rarely, so automated test data generation helps the tester exercise those paths, since generating test data for them manually is not feasible. The test data generation process can be automated with the help of various techniques and frameworks. This work reviews some of the recent work in the area of AOP test data generation and, based on that work, proposes an approach for generating test data for AOP using a Genetic Algorithm (GA).
    BibTeX:
    @inproceedings{KhandelwalT15,
      author = {Juhi Khandelwal and Pradeep Tomar},
      title = {Approach for Automated Test Data Generation for Path Testing in Aspect-Oriented Programs using Genetic Algorithm},
      booktitle = {Proceedings of International Conference on Computing, Communication & Automation (ICCCA '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {854-858},
      address = {Noida, India},
      month = {15-16 May},
      doi = {http://dx.doi.org/10.1109/CCAA.2015.7148494}
    }
    					
    2010.07.19 He Jiang, Jifeng Xuan & Zhilei Ren Approximate Backbone based Multilevel Algorithm for Next Release Problem 2010 Proceedings of the 12th Annual Conference on Genetic and Evolutionary Computation (GECCO '10), pp. 1333-1340, Portland Oregon USA, 7-11 July   Inproceedings Requirements/Specifications
    Abstract: The next release problem (NRP) aims to effectively select software requirements in order to acquire maximum customer profits. As an NP-hard problem in software requirement engineering, NRP lacks efficient approximate algorithms for large scale instances. In recent years, the backbone has emerged as a tool for tackling large scale NP-hard problems. In this paper, we employ the backbone to design high performance approximate algorithms for large scale NRP instances. First, we show that it is NP-hard to obtain the backbone of NRP. Then, we illustrate by fitness landscape analysis that the backbone can be well approximated by the shared common parts of local optimal solutions. Therefore, we propose an approximate backbone based multilevel algorithm (ABMA) to solve large scale NRP instances. This algorithm iteratively explores the search spaces by multilevel reductions and refinements. Experimental results demonstrate that ABMA outperforms existing algorithms on large instances in terms of solution quality and running time.
    BibTeX:
    @inproceedings{JiangXR10,
      author = {He Jiang and Jifeng Xuan and Zhilei Ren},
      title = {Approximate Backbone based Multilevel Algorithm for Next Release Problem},
      booktitle = {Proceedings of the 12th Annual Conference on Genetic and Evolutionary Computation (GECCO '10)},
      publisher = {ACM},
      year = {2010},
      pages = {1333-1340},
      address = {Portland, Oregon, USA},
      month = {7-11 July},
      doi = {http://dx.doi.org/10.1145/1830483.1830730}
    }
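
    The "shared common parts of local optimal solutions" can be computed directly. The Python sketch below approximates the backbone of a toy next-release instance as the set of requirements selected in every one of several independent hill-climbing runs; the instance data, the simple climber, and the run count are illustrative assumptions, not the ABMA algorithm itself.

    import random

    profits = [10, 4, 7, 3, 9, 6]   # customer profit per requirement (assumed)
    costs   = [5, 2, 4, 1, 6, 3]    # implementation cost per requirement (assumed)
    budget  = 12

    def fitness(sol):
        cost = sum(c for c, s in zip(costs, sol) if s)
        return sum(p for p, s in zip(profits, sol) if s) if cost <= budget else -1

    def local_optimum():
        sol = [random.random() < 0.5 for _ in profits]
        improved = True
        while improved:                      # first-improvement hill climbing
            improved = False
            for i in range(len(sol)):
                neighbour = sol[:]
                neighbour[i] = not neighbour[i]
                if fitness(neighbour) > fitness(sol):
                    sol, improved = neighbour, True
        return sol

    # Approximate backbone: requirements selected in *every* local optimum found.
    optima = [local_optimum() for _ in range(20)]
    backbone = [i for i in range(len(profits)) if all(o[i] for o in optima)]
    print("approximate backbone:", backbone)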
    					
    2012.03.05 Andrea Arcuri & Lionel C. Briand A Practical Guide for using Statistical Tests to Assess Randomized Algorithms in Software Engineering 2011 Proceedings of the 33rd International Conference on Software Engineering (ICSE '11), pp. 1-10, Honolulu HI USA, 21-28 May   Inproceedings General Aspects and Survey
    Abstract: Randomized algorithms have been used to successfully address many different types of software engineering problems. Such algorithms employ a degree of randomness as part of their logic. Randomized algorithms are useful for difficult problems where a precise solution cannot be derived in a deterministic way within reasonable time. However, randomized algorithms produce different results on every run when applied to the same problem instance. It is hence important to assess the effectiveness of randomized algorithms by collecting data from a large enough number of runs. The use of rigorous statistical tests is then essential to provide support to the conclusions derived by analyzing such data. In this paper, we provide a systematic review of the use of randomized algorithms in selected software engineering venues in 2009. Its goal is not to perform a complete survey but to get a representative snapshot of current practice in software engineering research. We show that randomized algorithms are used in a significant percentage of papers but that, in most cases, randomness is not properly accounted for. This casts doubts on the validity of most empirical results assessing randomized algorithms. There are numerous statistical tests, based on different assumptions, and it is not always clear when and how to use these tests. We hence provide practical guidelines to support empirical research on randomized algorithms in software engineering.
    BibTeX:
    @inproceedings{ArcuriB11,
      author = {Andrea Arcuri and Lionel C. Briand},
      title = {A Practical Guide for using Statistical Tests to Assess Randomized Algorithms in Software Engineering},
      booktitle = {Proceedings of the 33rd International Conference on Software Engineering (ICSE '11)},
      publisher = {IEEE},
      year = {2011},
      pages = {1-10},
      address = {Honolulu, HI, USA},
      month = {21-28 May},
      doi = {http://dx.doi.org/10.1145/1985793.1985795}
    }
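
    The statistical machinery discussed here is easy to demonstrate. Below is a minimal Python sketch (SciPy assumed available) of comparing two randomized algorithms over repeated runs with the non-parametric Mann-Whitney U test and the Vargha-Delaney A12 effect size, two of the tools this kind of guideline recommends; the two result samples are fabricated purely for illustration.

    from scipy.stats import mannwhitneyu

    runs_a = [0.81, 0.78, 0.90, 0.84, 0.79, 0.88, 0.83, 0.86, 0.80, 0.85]
    runs_b = [0.74, 0.77, 0.72, 0.80, 0.69, 0.75, 0.78, 0.71, 0.76, 0.73]

    u_stat, p_value = mannwhitneyu(runs_a, runs_b, alternative='two-sided')

    def a12(x, y):
        # P(X > Y) + 0.5 * P(X == Y): probability that a run of A beats a run of B.
        gt = sum(1 for xi in x for yi in y if xi > yi)
        eq = sum(1 for xi in x for yi in y if xi == yi)
        return (gt + 0.5 * eq) / (len(x) * len(y))

    print(f"p = {p_value:.4f}, A12 = {a12(runs_a, runs_b):.2f}")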
    					
    2012.08.22 Adil A.A Saed, Wan M.N. Wan Kadir & Adil Yousif A Prediction Approach to Support Alternative Design Decision for Component-Based System Development 2012 Recent Researches in Engineering Education and Software Engineering   Incollection
    Abstract: Interpreting the results of performance analysis and generating alternative designs for component-based systems is a main challenge in the software performance domain. Improving one quality attribute can weaken another, so quality attributes cannot be improved in isolation. Furthermore, the size of the design space hinders the selection of an appropriate design alternative. In the context of component-based systems, the paper discusses the assessment of the performance characteristics of a software architecture, the automatic generation of new candidate designs, and concepts relevant to the optimization problem, such as the design space and its degrees of freedom. We introduce an approach that supports alternative design decisions using PSO as a promising meta-heuristic technique. Performance cannot be assessed in isolation from other non-functional properties; we outline the process of evaluating software performance while considering the probability of it conflicting with reliability. The proposed approach thus enables the architect to reason over the automatically provided architectures and choose the optimal solution. Consequently, a better architecture design can be obtained and the time to develop the system reduced. Finally, a simple case study is presented in the paper to demonstrate the applicability of the approach.
    BibTeX:
    @incollection{SaedKY12,
      author = {Adil A.A Saed and Wan M.N. Wan Kadir and Adil Yousif},
      title = {A Prediction Approach to Support Alternative Design Decision for Component-Based System Development},
      booktitle = {Recent Researches in Engineering Education and Software Engineering},
      publisher = {WSEAS Press},
      year = {2012},
      url = {http://www.wseas.us/e-library/conferences/2012/CambridgeUK/SEPED/SEPED-14.pdf}
    }
    					
    2007.12.02 Hans-Gerhard Groß A Prediction System for Evolutionary Testability applied to Dynamic Execution Time Analysis 2001 Information and Software Technology, Vol. 43(14), pp. 855-862, December   Article Testing and Debugging
    Abstract: Evolutionary testing (ET) is a test case generation technique based on the application of an evolutionary algorithm. It can be applied to timing analysis of real-time systems. In this instance, timing analysis is equivalent to testing. The test objective is to uncover temporal errors. This corresponds to the violation of the system's timing specification. Testability is the ability of the test technique to uncover faults. Evolutionary testability is the ability of an evolutionary algorithm to successfully generate test cases with the goal to uncover faults, in this instance violation of the timing specification. This process attempts to find the best- and worst-case execution time of a real-time system. Some attributes of real-time systems were found to greatly inhibit the successful generation of the best- and worst-case execution times through ET. These are small path domains, high data dependence, large input vectors and nesting. This paper defines software metrics, which aim to express the effects of these attributes on ET. ET is applied to generate the best- and worst-case execution paths of test programs. Their extreme timing paths are determined analytically and the average success of ET in covering these paths is assessed. This empirical data is mapped against the software metrics to derive a prediction system for evolutionary testability. The measurement and prediction system developed from the experiments is able to forecast evolutionary testability with almost 90% accuracy. The prediction system will be used to assess whether the application of ET to a real-time system will be sufficient for successful dynamic timing analysis, or whether additional testing strategies are needed.
    BibTeX:
    @article{Gross01,
      author = {Hans-Gerhard Groß},
      title = {A Prediction System for Evolutionary Testability applied to Dynamic Execution Time Analysis},
      journal = {Information and Software Technology},
      year = {2001},
      volume = {43},
      number = {14},
      pages = {855-862},
      month = {December},
      doi = {http://dx.doi.org/10.1016/S0950-5849(01)00191-4}
    }
    					
    2011.05.19 Simon Poulding, John A. Clark & Hélène Waeselynck A Principled Evaluation of the Effect of Directed Mutation on Search-based Statistical Testing 2011 Proceedings of the 4th International Workshop on Search-Based Software Testing (SBST '11), Berlin Germany, 21-21 March   Inproceedings Testing and Debugging
    Abstract: Statistical testing generates test inputs by sampling from a probability distribution that is carefully chosen so that the inputs exercise all parts of the software being tested. Sets of such inputs have been shown to detect more faults than test sets generated using traditional random and structural testing techniques. Search-based statistical testing employs a metaheuristic search algorithm to automate the otherwise labour-intensive process of deriving the probability distribution. This paper proposes an enhancement to this search algorithm: information obtained during fitness evaluation is used to direct the mutation operator to those parts of the representation where changes may be most beneficial. A principled empirical evaluation demonstrates that this enhancement leads to a significant improvement in algorithm performance, and so increases both the cost-effectiveness and scalability of search-based statistical testing. As part of the empirical approach, we demonstrate the use of response surface methodology as an effective and objective method of tuning algorithm parameters, and suggest innovative refinements to this methodology.
    BibTeX:
    @inproceedings{PouldingCW11,
      author = {Simon Poulding and John A. Clark and Hélène Waeselynck},
      title = {A Principled Evaluation of the Effect of Directed Mutation on Search-based Statistical Testing},
      booktitle = {Proceedings of the 4th International Workshop on Search-Based Software Testing (SBST '11)},
      publisher = {IEEE},
      year = {2011},
      address = {Berlin, Germany},
      month = {21-21 March},
      doi = {http://dx.doi.org/10.1109/ICSTW.2011.36}
    }
    					
    2012.10.25 Leandro R. Pádua, Eduardo M. Guerra, Carlos H.Q. Forster & Saul C. Leite A Probabilistic Model for Prioritizing Software Releases in the Context of Agile Methodologies 2012 Proceedings of the 3rd Brazilian Workshop on Search-Based Software Engineering (WESB '12), Natal RN Brazil, 23 September   Inproceedings Requirements/Specifications
    Abstract: In Software Engineering there are several approaches to automating software release planning and software requirements prioritization. Some of them aim to optimize the risks involved in the project, but without considering the possibility of delay in the delivery of releases. This paper presents a solution to the software release planning problem in the context of agile methodologies, considering time delays in software releases using concepts from probability theory. A Markov Decision Process is used to choose the ordering. Finally, a comparison of the presented model with traditional methods shows the advantages of this approach for managing software projects.
    BibTeX:
    @inproceedings{PaduaGFL12,
      author = {Leandro R. Pádua and Eduardo M. Guerra and Carlos H. Q. Forster and Saul C. Leite},
      title = {A Probabilistic Model for Prioritizing Software Releases in the Context of Agile Methodologies},
      booktitle = {Proceedings of the 3rd Brazilian Workshop on Search-Based Software Engineering (WESB '12)},
      year = {2012},
      address = {Natal, RN, Brazil},
      month = {23 September}
    }
    					
    2011.07.20 Arilo Claudio Dias Neto & Rosiane de Freitas Rodrigues A Proposal for a Framework to Support the Selection of Software Technologies applying Search Strategies (in Portuguese) 2011 Proceedings of the 2nd Brazilian Workshop on Search Based Software Engineering (WESB '11), Sao Paulo Brazil, 26-26 September   Inproceedings
    BibTeX:
    @inproceedings{NetoR11,
      author = {Arilo Claudio Dias Neto and Rosiane de Freitas Rodrigues},
      title = {A Proposal for a Framework to Support the Selection of Software Technologies applying Search Strategies (in Portuguese)},
      booktitle = {Proceedings of the 2nd Brazilian Workshop on Search Based Software Engineering (WESB '11)},
      year = {2011},
      address = {Sao Paulo, Brazil},
      month = {26-26 September}
    }
    					
    2010.12.16 Dongsun Kim & Sooyong Park A Q-Leaning-Based On-Line Planning Approach to Autonomous Architecture Discovery for Self-managed Software 2008 Proceedings of the OTM Confederated International Workshops and Posters on On the Move to Meaningful Internet Systems, pp. 432-441, Monterrey Mexico, 9-14 November   Inproceedings Management
    Abstract: Two key concepts for architecture-based self-managed software are flexibility and autonomy. Recent discussions have focused on flexibility in self-management, but the software engineering community has not paid as much attention to autonomy as to flexibility. In this paper, we focus on achieving the autonomy of software systems through on-line planning, in which a software system can decide on an appropriate plan in the presence of change, evaluate the result of the plan, and learn from the result. Our approach applies Q-learning, a reinforcement learning technique, to self-managed systems. The paper presents a case study to illustrate the approach; its results show that our approach is effective for self-management.
    BibTeX:
    @inproceedings{KimP08,
      author = {Dongsun Kim and Sooyong Park},
      title = {A Q-Leaning-Based On-Line Planning Approach to Autonomous Architecture Discovery for Self-managed Software},
      booktitle = {Proceedings of the OTM Confederated International Workshops and Posters on On the Move to Meaningful Internet Systems},
      publisher = {Springer},
      year = {2008},
      pages = {432-441},
      address = {Monterrey, Mexico},
      month = {9-14 November},
      doi = {http://dx.doi.org/10.1007/978-3-540-88875-8_65}
    }
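
    For readers unfamiliar with the underlying technique, here is a minimal, generic tabular Q-learning sketch in Python; this is not the authors' architecture-discovery system, and the states, actions, reward signal, and parameter values are all illustrative assumptions.

    import random
    from collections import defaultdict

    alpha, gamma, epsilon = 0.1, 0.9, 0.2       # learning rate, discount, exploration (assumed)
    actions = ["keep_architecture", "switch_architecture"]
    Q = defaultdict(float)                      # Q[(state, action)] -> value

    def choose(state):
        if random.random() < epsilon:           # epsilon-greedy exploration
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(state, a)])

    def update(state, action, reward, next_state):
        # Standard Q-learning update rule.
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

    # Toy interaction loop: switching from a degraded architecture is rewarded.
    state = "degraded"
    for _ in range(200):
        action = choose(state)
        next_state = "nominal" if action == "switch_architecture" else state
        reward = 1.0 if next_state == "nominal" else 0.0   # assumed reward signal
        update(state, action, reward, next_state)
        state = next_state
    print("learned action in 'degraded':", max(actions, key=lambda a: Q[("degraded", a)]))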
    					
    2010.11.29 Robert J. Hall A Quantum Algorithm for Software Engineering Search 2009 Proceedings of the 24th IEEE/ACM International Conference on Automated Software Engineering (ASE '09), pp. 40-51, Auckland New Zealand, 16-20 November   Inproceedings Software/Program Verification
    Abstract: Quantum computers can solve a few basic problems, such as factoring an integer and searching a database, much faster than classical computers. However, the complexity of software artifacts, and the types of questions software engineers ask about them, pose significant challenges for applying existing quantum approaches to software engineering search (SES) problems. This paper first describes a new quantum search algorithm, IDGS-FA, whose design is motivated by the characteristics of SES problems. Next, it describes how to apply quantum searching to three SES problems: FSM property checking, software test generation, and library-based software synthesis. Next, the paper gives the main ideas in QSAT, a novel toolkit supporting efficient simulation of the algorithms and applications discussed. Finally, it concludes with a substantial simulation-based study of IDGS-FA, showing that it improves both the reliability and speed of other approaches.
    BibTeX:
    @inproceedings{Hall09,
      author = {Robert J. Hall},
      title = {A Quantum Algorithm for Software Engineering Search},
      booktitle = {Proceedings of the 24th IEEE/ACM International Conference on Automated Software Engineering (ASE '09)},
      publisher = {IEEE},
      year = {2009},
      pages = {40-51},
      address = {Auckland, New Zealand},
      month = {16-20 November},
      doi = {http://dx.doi.org/10.1109/ASE.2009.51}
    }
    					
    2012.03.08 Sabira Khatun, Khandakar Fazley Rabbi, C.Y. Yaakub & M.F. J Klaib A Random Search based Effective Algorithm for Pairwise Test Data Generation 2011 Proceedings of International Conference on Electrical, Control and Computer Engineering (INECCE '11), pp. 293-297, Pahang Malaysia, 21-22 June   Inproceedings Testing and Debugging
    Abstract: Testing is a very important task in building error-free software. As the resources and time to market for a software product are limited, it is impossible to perform exhaustive testing, i.e., to test all combinations of input data. To reduce the number of test cases to an acceptable level, it is preferable to use a higher interaction level (t-way, where t ≥ 2); pairwise (2-way, or t = 2) interaction can find most software faults. This paper proposes an effective random-search-based pairwise test data generation algorithm, named R2Way, to optimize the number of test cases. A Java program has been used to test the performance of the algorithm. The algorithm is able to support both uniform and non-uniform values effectively, with performance better than existing algorithms/tools in terms of the number of generated test cases and time consumption.
    BibTeX:
    @inproceedings{KhatunRYK11,
      author = {Sabira Khatun and Khandakar Fazley Rabbi and C. Y. Yaakub and M. F. J Klaib},
      title = {A Random Search based Effective Algorithm for Pairwise Test Data Generation},
      booktitle = {Proceedings of International Conference on Electrical, Control and Computer Engineering (INECCE '11)},
      publisher = {IEEE},
      year = {2011},
      pages = {293-297},
      address = {Pahang, Malaysia},
      month = {21-22 June},
      doi = {http://dx.doi.org/10.1109/INECCE.2011.5953894}
    }
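
    A greedy construction is the usual baseline for pairwise generation. The Python sketch below builds a 2-way covering set for three assumed parameters; it is not the R2Way algorithm, and the exhaustive per-step candidate scan only suits small models.

    from itertools import combinations, product

    parameters = {"os": ["linux", "win"], "db": ["pg", "my", "sq"], "ui": ["web", "cli"]}
    names = list(parameters)

    # All parameter-value pairs that must be covered by at least one test case.
    uncovered = {((n1, v1), (n2, v2))
                 for n1, n2 in combinations(names, 2)
                 for v1 in parameters[n1] for v2 in parameters[n2]}

    def pairs_of(test):
        return {((n1, test[n1]), (n2, test[n2])) for n1, n2 in combinations(names, 2)}

    tests = []
    while uncovered:
        # Greedily pick the full combination covering the most uncovered pairs.
        best = max((dict(zip(names, vals)) for vals in product(*parameters.values())),
                   key=lambda t: len(pairs_of(t) & uncovered))
        tests.append(best)
        uncovered -= pairs_of(best)

    print(f"{len(tests)} test cases cover all pairs")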
    					
    2010.11.29 Camila Loiola Brito Maia, Rafael Augusto Ferreira do Carmo, Gustavo Augusto Lima de Campos & Jerffeson Teixeira de Souza A Reactive GRASP Approach for Regression Test Case Prioritization 2008 Proceedings of the XL Brazilian Symposium of Operational Research (SBPO '08), João Pessoa Brazil, 2-5 September   Inproceedings Testing and Debugging
    Abstract: Modifications to software can affect functionality that had been working until that point. The ideal way to detect such problems would be to test the whole system once again, but there may be insufficient time or resources for this. An alternative is to order the test cases so that the most beneficial tests are executed first; in this way, only a subset of the test cases can be executed with little loss of effectiveness. This technique is known as regression test case prioritization. In this paper, we propose the use of the Reactive GRASP metaheuristic to prioritize test cases, and compare it with other search-based algorithms previously described in the literature. Four programs were used in the experiments. The results demonstrate good coverage performance with some time overhead.
    BibTeX:
    @inproceedings{MaiaCCS08,
      author = {Camila Loiola Brito Maia and Rafael Augusto Ferreira do Carmo and Gustavo Augusto Lima de Campos and Jerffeson Teixeira de Souza},
      title = {A Reactive GRASP Approach for Regression Test Case Prioritization},
      booktitle = {Proceedings of the XL Brazilian Symposium of Operational Research (SBPO '08)},
      year = {2008},
      address = {João Pessoa, Brazil},
      month = {2-5 September},
      url = {www.larces.uece.br/~carmorafael/Artigos/sbpo_grasp.pdf}
    }
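
    The core of GRASP is a greedy randomized construction over a restricted candidate list (RCL). The Python sketch below applies that construction to test case ordering by code coverage; the coverage data and the fixed alpha are assumptions, and the reactive alpha-tuning and local-search phases of Reactive GRASP are omitted.

    import random

    coverage = {"t1": {1, 2, 3}, "t2": {3, 4}, "t3": {5}, "t4": {1, 4, 5, 6}}  # assumed
    alpha = 0.3                      # 0 = pure greedy, 1 = pure random (assumed)

    def grasp_order(coverage):
        remaining, covered, order = dict(coverage), set(), []
        while remaining:
            gains = {t: len(c - covered) for t, c in remaining.items()}
            hi, lo = max(gains.values()), min(gains.values())
            threshold = hi - alpha * (hi - lo)
            rcl = [t for t, g in gains.items() if g >= threshold]  # restricted candidate list
            pick = random.choice(rcl)                              # randomized greedy choice
            order.append(pick)
            covered |= remaining.pop(pick)
        return order

    print(grasp_order(coverage))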
    					
    2013.06.28 Matheus Henrique Esteves Paixão & Jerffeson Teixeira de Souza A Recoverable Robust Approach for the Next Release Problem 2013 Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13), Vol. 8084, pp. 172-187, St. Petersburg Russia, 24-26 August   Inproceedings Requirements/Specifications
    Abstract: Selecting a set of requirements to be included in the next software release, which has come to be known as the Next Release Problem, is an important issue in the iterative and incremental software development model. Since software development is performed in a dynamic environment, some requirements aspects, like importance and effort cost values, are highly subject to uncertainties, which should be taken into account when solving this problem through a search technique. Current robust approaches for dealing with these uncertainties are very conservative, since they perform the selection of the requirements considering all possible uncertainty realizations. Accordingly, this paper presents an evolution of this robust model, exploiting the recoverable robust optimization framework, which is capable of producing recoverable solutions for the Next Release Problem. Several experiments were performed over synthetic and real-world instances, with all results showing that the recovery strategy handles the conservatism well and adds more quality to the robust solutions.
    BibTeX:
    @inproceedings{PaixaoS13b,
      author = {Matheus Henrique Esteves Paixão and Jerffeson Teixeira de Souza},
      title = {A Recoverable Robust Approach for the Next Release Problem},
      booktitle = {Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13)},
      publisher = {Springer},
      year = {2013},
      volume = {8084},
      pages = {172-187},
      address = {St. Petersburg, Russia},
      month = {24-26 August},
      doi = {http://dx.doi.org/10.1007/978-3-642-39742-4_14}
    }
    					
    2016.02.17 Tsong Yueh Chen, Fei-Ching Kuo, Dave Towey & Zhi Quan Zhou A Revisit of Three Studies related to Random Testing 2015 Science China Information Sciences, Vol. 58(5), pp. 1-9, May   Article Testing and Debugging
    Abstract: Software testing is an approach that ensures the quality of software through execution, with a goal being to reveal failures and other problems as quickly as possible. Test case selection is a fundamental issue in software testing, and has generated a large body of research, especially with regards to the effectiveness of random testing (RT), where test cases are randomly selected from the software's input domain. In this paper, we revisit three of our previous studies. The first study investigated a sufficient condition for partition testing (PT) to outperform RT, and was motivated by various controversial and conflicting results suggesting that sometimes PT performed better than RT, and sometimes the opposite. The second study aimed at enhancing RT itself, and was motivated by the fact that RT continues to be a fundamental and popular testing technique. This second study enhanced RT fault detection effectiveness by making use of the common observation that failure-causing inputs tend to cluster together, and resulted in a new family of RT techniques: adaptive random testing (ART), which is random testing with an even spread of test cases across the input domain. Following the successful use of failure-causing region contiguity insights to develop ART, we conducted a third study on how to make use of other characteristics of failure-causing inputs to develop more effective test case selection strategies. This third study revealed how best to approach testing strategies when certain characteristics of the failure-causing inputs are known, and produced some interesting and important results. In revisiting these three previous studies, we explore their unexpected commonalities, and identify diversity as a key concept underlying their effectiveness. This observation further prompted us to examine whether or not such a concept plays a role in other areas of software testing, and our conclusion is that, yes, diversity appears to be one of the most important concepts in the field of software testing.
    BibTeX:
    @article{ChenKTZ15,
      author = {Tsong Yueh Chen and Fei-Ching Kuo and Dave Towey and Zhi Quan Zhou},
      title = {A Revisit of Three Studies related to Random Testing},
      journal = {Science China Information Sciences},
      year = {2015},
      volume = {58},
      number = {5},
      pages = {1-9},
      month = {May},
      doi = {http://dx.doi.org/10.1007/s11432-015-5314-x}
    }
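
    The ART idea mentioned in the abstract is concrete enough to sketch: the fixed-size-candidate-set variant generates k random candidates and executes the one farthest from all previously executed tests, spreading tests evenly across the input domain. In the Python below, the 1-D numeric domain, the candidate-set size, and the failure predicate are illustrative assumptions.

    import random

    def art_tests(is_failure, n_tests=50, k=10, lo=0.0, hi=1.0):
        executed = [random.uniform(lo, hi)]          # first test is purely random
        for _ in range(n_tests - 1):
            candidates = [random.uniform(lo, hi) for _ in range(k)]
            # Pick the candidate farthest from all previously executed tests.
            best = max(candidates, key=lambda c: min(abs(c - e) for e in executed))
            if is_failure(best):
                return executed + [best]             # failure found
            executed.append(best)
        return executed

    tests = art_tests(lambda x: 0.42 < x < 0.43)     # assumed contiguous failure region
    print(f"executed {len(tests)} tests")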
    					
    2009.08.05 David R. White & Simon Poulding A Rigorous Evaluation of Crossover and Mutation in Genetic Programming 2009 Proceedings of the 12th European Conference on Genetic Programming (EuroGP '09), Vol. 5481, pp. 220-231, Tübingen Germany, 15-17 April   Inproceedings General Aspects and Survey
    Abstract: The role of crossover and mutation in Genetic Programming (GP) has been the subject of much debate since the emergence of the field. In this paper, we contribute new empirical evidence to this argument using a rigorous and principled experimental method applied to six problems common in the GP literature. The approach tunes the algorithm parameters to enable a fair and objective comparison of two different GP algorithms, the first using a combination of crossover and reproduction, and secondly using a combination of mutation and reproduction. We find that crossover does not significantly outperform mutation on most of the problems examined. In addition, we demonstrate that the use of a straightforward Design of Experiments methodology is effective at tuning GP algorithm parameters.
    BibTeX:
    @inproceedings{WhiteP09,
      author = {David R. White and Simon Poulding},
      title = {A Rigorous Evaluation of Crossover and Mutation in Genetic Programming},
      booktitle = {Proceedings of the 12th European Conference on Genetic Programming (EuroGP '09)},
      publisher = {Springer},
      year = {2009},
      volume = {5481},
      pages = {220-231},
      address = {Tübingen, Germany},
      month = {15-17 April},
      doi = {http://dx.doi.org/10.1007/978-3-642-01181-8_19}
    }
    					
    2014.08.14 Mohamed Wiem Mkaouer, Marouane Kessentini, Slim Bechikh & Mel Ó Cinnéide A Robust Multi-objective Approach for Software Refactoring under Uncertainty 2014 Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14), Vol. 8636, pp. 168-183, Fortaleza Brazil, 26-29 August   Inproceedings Distribution and Maintenance
    Abstract: Refactoring large systems involves several sources of uncertainty related to the severity levels of code smells to be corrected and the importance of the classes in which the smells are located. Due to the dynamic nature of software development, these values cannot be accurately determined in practice, leading to refactoring sequences that lack robustness. To address this problem, we introduced a multi-objective robust model, based on NSGA-II, for the software refactoring problem that tries to find the best trade-off between quality and robustness. We evaluated our approach using six open source systems and demonstrated that it is significantly better than state-of-the-art refactoring approaches in terms of robustness in 100% of experiments based on a variety of real-world scenarios. Our suggested refactoring solutions were found to be comparable in terms of quality to those suggested by existing approaches and to carry an acceptable robustness price. Our results also revealed an interesting feature about the trade-off between quality and robustness that demonstrates the practical value of taking robustness into account in software refactoring.
    BibTeX:
    @inproceedings{MkaouerKBO14,
      author = {Mohamed Wiem Mkaouer and Marouane Kessentini and Slim Bechikh and Mel Ó Cinnéide},
      title = {A Robust Multi-objective Approach for Software Refactoring under Uncertainty},
      booktitle = {Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14)},
      publisher = {Springer},
      year = {2014},
      volume = {8636},
      pages = {168-183},
      address = {Fortaleza, Brazil},
      month = {26-29 August},
      doi = {http://dx.doi.org/10.1007/978-3-319-09940-8_12}
    }
    					
    2017.06.27 Mohamed Wiem Mkaouer, Marouane Kessentini, Mel Ó Cinnéide, Shinpei Hayashi & Kalyanmoy Deb A Robust Multi-objective Approach to Balance Severity And Importance of Refactoring Opportunities 2017 Empirical Software Engineering, Vol. 22(2), pp. 894-927, April   Article
    Abstract: Refactoring large systems involves several sources of uncertainty related to the severity levels of code smells to be corrected and the importance of the classes in which the smells are located. Both severity and importance of identified refactoring opportunities (e.g. code smells) are difficult to estimate. In fact, due to the dynamic nature of software development, these values cannot be accurately determined in practice, leading to refactoring sequences that lack robustness. In addition, some code fragments can contain severe quality issues but they are not playing an important role in the system. To address this problem, we introduced a multi-objective robust model, based on NSGA-II, for the software refactoring problem that tries to find the best trade-off between three objectives to maximize: quality improvements, severity and importance of refactoring opportunities to be fixed. We evaluated our approach using 8 open source systems and one industrial project, and demonstrated that it is significantly better than state-of-the-art refactoring approaches in terms of robustness in all the experiments based on a variety of real-world scenarios. Our suggested refactoring solutions were found to be comparable in terms of quality to those suggested by existing approaches, to provide better prioritization of refactoring opportunities, and to carry an acceptable robustness price.
    BibTeX:
    @article{MkaouerKCHD17,
      author = {Mohamed Wiem Mkaouer and Marouane Kessentini and Mel Ó Cinnéide and Shinpei Hayashi and Kalyanmoy Deb},
      title = {A Robust Multi-objective Approach to Balance Severity And Importance of Refactoring Opportunities},
      journal = {Empirical Software Engineering},
      year = {2017},
      volume = {22},
      number = {2},
      pages = {894-927},
      month = {April},
      doi = {http://dx.doi.org/10.1007/s10664-016-9426-8}
    }
    					
    2016.02.27 Matheus Henrique Esteves Paixão & Jerffeson Teixeira de Souza A Robust Optimization Approach to the Next Release Problem in the Presence of Uncertainties 2015 Journal of Systems and Software, Vol. 103, pp. 281-295, May   Article
    Abstract: The next release problem is a significant task in the iterative and incremental software development model, involving the selection of a set of requirements to be included in the next software release. Given the dynamic environment in which modern software development occurs, the uncertainties related to the input variables of this problem should be taken into account. In this context, this paper presents a formulation to the next release problem considering the robust optimization framework, which enables the production of robust solutions. In order to measure the "price of robustness", which is the loss in solution quality due to robustness, a large empirical evaluation was executed over synthetic and real-world instances. Several next release planning situations were considered, including different numbers of requirements, estimating skills and interdependencies between requirements. All empirical results consistently show that the penalization with regard to solution quality is relatively small. In addition, the proposed model's behavior is statistically the same for all considered instances, which qualifies it to be applied even in large-scale real-world software projects.
    BibTeX:
    @article{PaixaoS15,
      author = {Matheus Henrique Esteves Paixão and Jerffeson Teixeira de Souza},
      title = {A Robust Optimization Approach to the Next Release Problem in the Presence of Uncertainties},
      journal = {Journal of Systems and Software},
      year = {2015},
      volume = {103},
      pages = {281-295},
      month = {May},
      doi = {http://dx.doi.org/10.1016/j.jss.2014.09.039}
    }
    					
    2007.12.02 Giuliano Antoniol, Massimiliano Di Penta & Mark Harman A Robust Search-based Approach to Project Management in the Presence of Abandonment, Rework, Error and Uncertainty 2004 Proceedings of the 10th International Symposium on the Software Metrics (METRICS '04), pp. 172-183, Chicago USA, 11-17 September   Inproceedings Management
    Abstract: Managing a large software project involves initial estimates that may turn out to be erroneous or that might be expressed with some degree of uncertainty. Furthermore, as the project progresses, it often becomes necessary to rework some of the work packages that make up the overall project. Other work packages might have to be abandoned for a variety of reasons. In the presence of these difficulties, optimal allocation of staff to project teams and teams to work packages is far from trivial. This paper shows how genetic algorithms can be combined with a queuing simulation model to address these problems in a robust manner. A tandem genetic algorithm is used to search for the best sequence in which to process work packages and the best allocation of staff to project teams. The simulation model, that computes the project estimated completion date, guides the search. The possible impact of rework, abandonment and erroneous or uncertain initial estimates are characterised by separate error distributions. The paper presents results from the application of these techniques to data obtained from a large scale commercial software maintenance project.
    BibTeX:
    @inproceedings{AntoniolDH04b,
      author = {Giuliano Antoniol and Massimiliano Di Penta and Mark Harman},
      title = {A Robust Search-based Approach to Project Management in the Presence of Abandonment, Rework, Error and Uncertainty},
      booktitle = {Proceedings of the 10th International Symposium on the Software Metrics (METRICS '04)},
      publisher = {IEEE},
      year = {2004},
      pages = {172-183},
      address = {Chicago, USA},
      month = {11-17 September},
      doi = {http://doi.ieeecomputersociety.org/10.1109/METRIC.2004.1357901}
    }
    					
    2008.08.26 Ramón Alvarez-Valdés, E. Crespo, José Manuel Tamarit & F. Villa A Scatter Search Algorithm for Project Scheduling under Partially Renewable Resources 2006 Journal of Heuristics, Vol. 12(1-2), pp. 95-113, March   Article Management
    Abstract: In this paper we develop a heuristic algorithm, based on Scatter Search, for project scheduling problems under partially renewable resources. This new type of resource can be viewed as a generalization of renewable and non-renewable resources, and is very helpful in modelling conditions that do not fit into classical models, but which appear in real timetabling and labor scheduling problems. The Scatter Search algorithm is tested on existing test instances and obtains the best results known so far.
    BibTeX:
    @article{Alvarez-ValdesCTV06,
      author = {Ramón Alvarez-Valdés and E. Crespo and José Manuel Tamarit and F. Villa},
      title = {A Scatter Search Algorithm for Project Scheduling under Partially Renewable Resources},
      journal = {Journal of Heuristics},
      year = {2006},
      volume = {12},
      number = {1-2},
      pages = {95-113},
      month = {March},
      doi = {http://dx.doi.org/10.1007/s10732-006-5224-6}
    }
    					
    2008.08.26 Raquel Blanco, Javier Tuya, Eugenia Díaz & Belarmino Adenso-Díaz A Scatter Search Approach for Automated Branch Coverage in Software Testing 2007 International Journal of Engineering Intelligent Systems (EIS), Vol. 15(3), pp. 135-142, September   Article Testing and Debugging
    Abstract: Software testing is an expensive, difficult, and time-consuming process. The use of techniques for automating the generation of software test cases is very important as it can reduce the time and cost of this process. The newest methods for the automatic generation of tests use metaheuristic search techniques, e.g. Genetic Algorithms, Simulated Annealing, and Tabu Search. However, other metaheuristic techniques can also be used to generate test cases automatically. This paper presents a new method based on Scatter Search that allows the automatic generation of test cases to obtain branch coverage in software testing.
    BibTeX:
    @article{BlancoTDD07,
      author = {Raquel Blanco and Javier Tuya and Eugenia Díaz and Belarmino Adenso-Díaz},
      title = {A Scatter Search Approach for Automated Branch Coverage in Software Testing},
      journal = {International Journal of Engineering Intelligent Systems (EIS)},
      year = {2007},
      volume = {15},
      number = {3},
      pages = {135-142},
      month = {September},
      url = {http://direct.bl.uk/bld/PlaceOrder.do?UIN=229732053&ETOC=RN&from=searchengine}
    }
    					
    2013.08.05 Matheus Henrique Esteves Paixão & Jerffeson Teixeira de Souza A Scenario-based Robust Model for The Next Release Problem 2013 Proceeding of The 15th Annual Conference on Genetic and Evolutionary Computation (GECCO '13), pp. 1469-1476, Amsterdam The Netherlands, 6-10 July   Inproceedings Requirements/Specifications
    Abstract: The next release problem is a significant task in the iterative and incremental software development model, involving the selection of a set of requirements to be included in the next software release. Given the dynamic environment in which modern software development occurs, the uncertainties related to the input variables considered in this problem should be taken into account. In this context, this paper proposes a novel formulation to the next release problem based on scenarios and considering the robust optimisation framework, which enables the production of robust solutions. In order to measure the 'price of robustness,' several experiments were designed and executed over artificial and real-world instances. All experimental results consistently show that the penalisation with regard to solution quality due to robustness is relatively small, which qualifies the proposed model to be applied even in large-scale real-world software projects.
    BibTeX:
    @inproceedings{PaixaoS13,
      author = {Matheus Henrique Esteves Paixão and Jerffeson Teixeira de Souza},
      title = {A Scenario-based Robust Model for The Next Release Problem},
      booktitle = {Proceeding of The 15th Annual Conference on Genetic and Evolutionary Computation (GECCO '13)},
      publisher = {ACM},
      year = {2013},
      pages = {1469-1476},
      address = {Amsterdam, The Netherlands},
      month = {6-10 July},
      doi = {http://dx.doi.org/10.1145/2463372.2463547}
    }
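
    The robust idea can be illustrated independently of the paper's formulation: score a candidate release plan by its worst case over a set of scenarios, and reject plans that become infeasible in any of them. The instance data and the exhaustive search in this Python sketch are assumptions for a toy three-requirement problem, not the paper's model.

    from itertools import product

    budget = 10
    # Each scenario gives (importance, cost) per requirement under one realization (assumed).
    scenarios = [
        [(9, 4), (5, 3), (7, 5)],     # nominal estimates
        [(8, 5), (4, 4), (7, 6)],     # pessimistic costs
        [(9, 4), (3, 3), (6, 5)],     # pessimistic importances
    ]

    def robust_value(plan):
        # Worst-case value of a plan; infeasible in any scenario -> rejected.
        worst = float("inf")
        for sc in scenarios:
            cost = sum(c for (v, c), x in zip(sc, plan) if x)
            if cost > budget:
                return float("-inf")
            worst = min(worst, sum(v for (v, c), x in zip(sc, plan) if x))
        return worst

    best = max(product([0, 1], repeat=3), key=robust_value)
    print("robust plan:", best, "worst-case value:", robust_value(best))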
    					
    2011.03.07 Jules White, Brian Doughtery & Douglas C. Schmidt ASCENT: An Algorithmic Technique for Designing Hardware and Software in Tandem 2010 IEEE Transactions on Software Engineering, Vol. 36(6), pp. 838-851, November-December   Article
    Abstract: Search-based software engineering is an emerging paradigm that uses automated search algorithms to help designers iteratively find solutions to complicated design problems. For example, when designing a climate monitoring satellite, designers may want to use the minimal amount of computing hardware to reduce weight and cost while supporting the image processing algorithms running onboard. A key problem in these situations is that the hardware and software designs are locked in a tightly coupled cost-constrained producer/consumer relationship that makes it hard to find a good hardware/software design configuration. Search-based software engineering can be used to apply algorithmic techniques to automate the search for hardware/software designs that maximize the image processing accuracy while respecting cost constraints. This paper provides the following contributions to research on search-based software engineering: 1) We show how a cost-constrained producer/consumer problem can be modeled as a set of two multidimensional multiple-choice knapsack problems (MMKPs), 2) we present a polynomial-time search-based software engineering technique, called the Allocation-baSed Configuration Exploration Technique (ASCENT), for finding near optimal hardware/software codesign solutions, and 3) we present empirical results showing that ASCENT's solutions average over 95 percent of the optimal solution's value.
    BibTeX:
    @article{WhiteDS10,
      author = {Jules White and Brian Doughtery and Douglas C. Schmidt},
      title = {ASCENT: An Algorithmic Technique for Designing Hardware and Software in Tandem},
      journal = {IEEE Transactions on Software Engineering},
      year = {2010},
      volume = {36},
      number = {6},
      pages = {838-851},
      month = {November-December},
      doi = {http://dx.doi.org/10.1109/TSE.2010.77}
    }
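
    To make the MMKP structure concrete, the Python sketch below chooses one option per design choice under two resource budgets using a simple value-per-normalized-cost greedy rule; the groups, options, and the greedy rule itself are illustrative assumptions, not the ASCENT algorithm the paper evaluates.

    budgets = {"cost": 100, "weight": 50}
    groups = {                                       # option = (value, {resource: use}), assumed
        "cpu":    [(40, {"cost": 60, "weight": 20}), (25, {"cost": 35, "weight": 15})],
        "camera": [(30, {"cost": 45, "weight": 25}), (18, {"cost": 25, "weight": 10})],
    }

    def density(value, use):
        # Value per unit of normalized resource consumption.
        return value / (1 + sum(use[r] / budgets[r] for r in budgets))

    chosen, used = {}, {r: 0 for r in budgets}
    for name, options in groups.items():
        feasible = [(v, u) for v, u in options
                    if all(used[r] + u[r] <= budgets[r] for r in budgets)]
        if feasible:
            v, u = max(feasible, key=lambda o: density(*o))
            chosen[name] = (v, u)
            for r in budgets:
                used[r] += u[r]

    print({k: v for k, (v, u) in chosen.items()}, used)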
    					
    2016.03.08 Sandro Santos Andrade & Raimundo José de A. Macêdo A Search-Based Approach for Architectural Design of Feedback Control Concerns in Self-Adaptive Systems 2013 Proceedings of IEEE 7th International Conference on Self-Adaptive and Self-Organizing Systems (SASO '13), pp. 61-70, Philadelphia USA, 9-13 September   Inproceedings
    Abstract: A number of approaches for endowing systems with self-adaptive behavior have been proposed over the past years. Among such efforts, architecture-centric solutions with explicit representation of feedback loops have currently been advocated as a promising research landscape. Although noteworthy results have been achieved on some fronts, the lack of systematic representations of architectural knowledge and effective support for well-informed trade-off decisions still poses significant challenges when designing modern self-adaptive systems. In this paper, we present a systematic and flexible representation of design dimensions related to feedback control concerns, a set of metrics which estimate quality attributes of resulting automated designs, and a search-based approach to find out a set of Pareto-optimal candidate architectures. The proposed approach has been fully implemented in a supporting tool and a case study with a self-adaptive cloud computing environment has been undertaken. The results indicate that our approach effectively captures the most prominent degrees of freedom when designing self-adaptive systems, helps to elicit effective subtle designs, and provides useful support for early analysis of trade-off decisions.
    BibTeX:
    @inproceedings{AndradeM13b,
      author = {Sandro Santos Andrade and Raimundo José de A. Macêdo},
      title = {A Search-Based Approach for Architectural Design of Feedback Control Concerns in Self-Adaptive Systems},
      booktitle = {Proceedings of IEEE 7th International Conference on Self-Adaptive and Self-Organizing Systems (SASO '13)},
      publisher = {IEEE},
      year = {2013},
      pages = {61-70},
      address = {Philadelphia, USA},
      month = {9-13 September},
      doi = {http://dx.doi.org/10.1109/SASO.2013.42}
    }
    					
    2009.05.25 AbdulSalam Kalaji, Robert M. Hierons & Stephen Swift A Search-based Approach for Automatic Test Generation from Extended Finite State Machine (EFSM) 2009 Proceedings of Testing: Academia & Industry Conference - Practice And Research Techniques (TAIC-PART '09), pp. 131-132, Windsor UK, 4-6 September   Inproceedings Testing and Debugging
    Abstract: The extended finite state machine (EFSM) is a powerful model that can capture almost all the aspects of a system. However, testing from an EFSM remains a challenging task due to two main problems: path feasibility and path test data generation. Although optimization algorithms are efficient, their application to EFSM testing has received very little attention. The aim of this paper is to develop a novel approach that utilizes optimization algorithms to test from EFSM models.
    BibTeX:
    @inproceedings{KalajiHS09b,
      author = {AbdulSalam Kalaji and Robert M. Hierons and Stephen Swift},
      title = {A Search-based Approach for Automatic Test Generation from Extended Finite State Machine (EFSM)},
      booktitle = {Proceedings of Testing: Academia & Industry Conference - Practice And Research Techniques (TAIC-PART '09)},
      publisher = {IEEE},
      year = {2009},
      pages = {131-132},
      address = {Windsor, UK},
      month = {4-6 September},
      doi = {http://dx.doi.org/10.1109/TAICPART.2009.19}
    }
    					
    2014.09.02 Yasaman Amannejad, Vahid Garousi, Rob Irving & Zahra Sahaf A Search-Based Approach for Cost-Effective Software Test Automation Decision Support and an Industrial Case Study 2014 Proceedings of IEEE 7th International Conference on Software Testing, Verification and Validation Workshops (ICSTW '14), pp. 302-311, Cleveland OH USA, 31 March - 4 April   Inproceedings Testing and Debugging
    Abstract: Test automation is a widely-used approach to reduce the cost of manual software testing. However, if it is not planned or conducted properly, automated testing would not necessarily be more cost effective than manual testing. Deciding what parts of a given System Under Test (SUT) should be tested in an automated fashion and what parts should remain manual is a frequently-asked and challenging question for practitioner testers. In this study, we propose a search-based approach for deciding what parts of a given SUT should be tested automatically to gain the highest Return On Investment (ROI). This work is the first systematic approach to this problem; its significance is that it considers automation across the entire testing process (i.e., from test-case design, to test scripting, to test execution, and test-result evaluation). The proposed approach has been applied in an industrial setting in the context of a software product used in the oil and gas industry in Canada. Among the results of the case study is that, when planned and conducted properly using our decision-support approach, test automation provides the highest ROI. In this study, we show that if the automation decision is taken effectively, test-case design, test execution, and test evaluation can result in about 307%, 675%, and 41% ROI in 10 rounds of using automated test suites.
    BibTeX:
    @inproceedings{AmannejadGIS14,
      author = {Yasaman Amannejad and Vahid Garousi and Rob Irving and Zahra Sahaf},
      title = {A Search-Based Approach for Cost-Effective Software Test Automation Decision Support and an Industrial Case Study},
      booktitle = {Proceedings of IEEE 7th International Conference on Software Testing, Verification and Validation Workshops (ICSTW '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {302-311},
      address = {Cleveland, OH, USA},
      month = {31 March - 4 April},
      doi = {http://dx.doi.org/10.1109/ICSTW.2014.34}
    }
    					
    2009.03.31 Thierry Bodhuin, Massimiliano Di Penta & Luigi Troiano A Search-based Approach for Dynamically Re-packaging Downloadable Applications 2007 Proceedings of the 2007 Conference of the IBM Center for Advanced Studies on Collaborative Research (CASCON '07), pp. 27-41, Richmond Hill Ontario Canada, 22-25 October   Inproceedings Distribution and Maintenance
    Abstract: Mechanisms such as Java Web Start enable on-the-fly downloading and execution of applications installed on remote servers, without the need for having them installed on the local machine. The rapid diffusion of mobile devices (e.g., Personal Digital Assistants - PDAs) connected to the Internet makes these applications appealing to mobile users. However, in many cases the available bandwidth is limited, and its excessive usage can even be a cost when wireless connections are paid on a Kbyte transfer basis. This paper proposes an approach based on Genetic Algorithms and an environment that, based on previous usage information of the application (i.e. scenarios), re-packages it with the objective of limiting the amount of resources transmitted when using a set of application features. The paper reports an empirical study on the application of the proposed approach to three medium-sized Java applications.
    BibTeX:
    @inproceedings{BodhuinDT07,
      author = {Thierry Bodhuin and Massimiliano Di Penta and Luigi Troiano},
      title = {A Search-based Approach for Dynamically Re-packaging Downloadable Applications},
      booktitle = {Proceedings of the 2007 Conference of the IBM Center for Advanced Studies on Collaborative Research (CASCON '07)},
      publisher = {ACM},
      year = {2007},
      pages = {27-41},
      address = {Richmond Hill, Ontario, Canada},
      month = {22-25 October},
      doi = {http://dx.doi.org/10.1145/1321211.1321215}
    }
    					
    2017.07.07 Basil Eljuse & Neil Walkinshaw A Search Based Approach for Stress-Testing Integrated Circuits 2016 Proceedings of the 8th International Symposium on Search Based Software Engineering (SSBSE '16), pp. 80-95, Raleigh NC USA, 8-10 October   Inproceedings
    Abstract: In order to reduce software complexity and be power efficient, hardware platforms are increasingly incorporating functionality that was traditionally administered at the software level (such as cache management). This functionality is often complex, incorporating multiple processors along with a multitude of design parameters. Such devices can only be reliably tested at a ‘system’ level, which presents various testing challenges; behaviour is often non-deterministic (from a software perspective), and finding suitable test sets to ‘stress’ the system adequately is often an inefficient, manual activity that yields fixed test sets that can rarely be reused. In this paper we investigate this problem with respect to ARM’s Cache Coherent Interconnect (CCI) Unit. We present an automated search-based testing approach that combines a parameterised test-generation framework with the hill-climbing heuristic to find test sets that maximally ‘stress’ the CCI by producing much larger numbers of data stall cycles than the corresponding manual test sets.
    BibTeX:
    @inproceedings{EljuseW16,
      author = {Basil Eljuse and Neil Walkinshaw},
      title = {A Search Based Approach for Stress-Testing Integrated Circuits},
      booktitle = {Proceedings of the 8th International Symposium on Search Based Software Engineering (SSBSE '16)},
      publisher = {Springer},
      year = {2016},
      pages = {80-95},
      address = {Raleigh, NC, USA},
      month = {8-10 October},
      doi = {http://dx.doi.org/10.1007/978-3-319-47106-8_6}
    }
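
    Hill climbing over a parameterised test configuration is straightforward to sketch. In the Python below, the two-parameter space and the stall-cycle objective are stand-in assumptions: a toy surrogate function replaces actually running a test on hardware.

    import random

    def stall_cycles(config):                  # assumed objective: run the test and
        a, b = config                          # measure data stall cycles
        return -(a - 7) ** 2 - (b - 3) ** 2    # toy surrogate for illustration

    def neighbours(config):
        a, b = config
        return [(a + da, b + db) for da in (-1, 0, 1) for db in (-1, 0, 1)
                if (da, db) != (0, 0)]

    config = (random.randint(0, 10), random.randint(0, 10))
    while True:
        best = max(neighbours(config), key=stall_cycles)
        if stall_cycles(best) <= stall_cycles(config):
            break                              # local optimum: no improving neighbour
        config = best
    print("stressing configuration:", config)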
    					
    2012.02.28 Matthias Woehrle A Search-based Approach for Testing for Quantitative Properties of Wireless Network Protocol Stacks 2012 Proceedings of the 5th International Workshop on Search-Based Software Testing (SBST '12), pp. 794-803, Montreal Canada, 21-21 April   Inproceedings Testing and Debugging
    Abstract: The operation of wireless network protocol stacks is heavily dependent on the actual deployment of the system and especially on the corresponding network topology, e.g. due to channel contention. The nature of wireless communication does not allow for a priori determination of network topology; network-defining metrics such as neighbor density and routing span may drastically differ across deployments. Therefore, it is a difficult problem to foresee and consider the large number of possible topologies that a system may run on during protocol stack development. We propose to use an automated approach for searching topologies for which a protocol stack exhibits particularly poor quantitative performance. We formulate stress testing of protocol stacks on specific topologies as a multi-objective optimization problem and use an evolutionary algorithm for finding a set of small topologies that particularly stress the protocol stack of a wireless network. For searching the topology space, we present novel problem-specific variation operators and show their improvements on search performance in case studies. We showcase our results on stress testing using two protocol stacks for wireless sensor networks.
    BibTeX:
    @inproceedings{Woehrle12,
      author = {Matthias Woehrle},
      title = {A Search-based Approach for Testing for Quantitative Properties of Wireless Network Protocol Stacks},
      booktitle = {Proceedings of the 5th International Workshop on Search-Based Software Testing (SBST '12)},
      publisher = {IEEE},
      year = {2012},
      pages = {794-803},
      address = {Montreal, Canada},
      month = {21-21 April},
      doi = {http://dx.doi.org/10.1109/ICST.2012.178}
    }
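    Illustrative sketch (Python): one plausible problem-specific variation operator over topologies, assuming nodes are 2D coordinates and links are implied by a fixed radio range. Both assumptions are for illustration only; the paper's actual operators are not reproduced here.

      import math, random

      def mutate_topology(positions, sigma=0.2):
          """Perturb one node's position; this changes the implied links
          and hence the contention the search is trying to stress."""
          i = random.randrange(len(positions))
          x, y = positions[i]
          child = list(positions)
          child[i] = (x + random.gauss(0, sigma), y + random.gauss(0, sigma))
          return child

      def links(positions, radio_range=1.0):
          """Links exist between nodes closer than the radio range."""
          n = len(positions)
          return [(i, j) for i in range(n) for j in range(i + 1, n)
                  if math.dist(positions[i], positions[j]) < radio_range]

      nodes = [(0.0, 0.0), (0.5, 0.0), (2.0, 1.0)]
      print(links(nodes), links(mutate_topology(nodes)))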
    					
    2009.03.31 Anthony Finkelstein, Mark Harman, S. Afshin Mansouri, Jian Ren & Yuanyuan Zhang A Search based Approach to Fairness Analysis in Requirement Assignments to Aid Negotiation, Mediation and Decision Making 2009 Requirements Engineering Journal (RE '08 Special Issue), Vol. 14(4), pp. 231-245, December   Article Requirements/Specifications
    Abstract: This paper uses a multi-objective optimisation approach to support investigation of the trade-offs in various notions of fairness between multiple customers. Results are presented to validate the approach using two real-world data sets and also using data sets created specifically to stress test the approach. Simple graphical techniques are used to visualize the solution space. The paper also reports on experiments to determine the most suitable algorithm for this problem, comparing the results of the NSGA-II algorithm, a widely used multi-objective evolutionary algorithm, and the Two-Archive evolutionary algorithm, a recently proposed alternative.
    BibTeX:
    @article{FinkelsteinHMRZ09,
      author = {Anthony Finkelstein and Mark Harman and S. Afshin Mansouri and Jian Ren and Yuanyuan Zhang},
      title = {A Search based Approach to Fairness Analysis in Requirement Assignments to Aid Negotiation, Mediation and Decision Making},
      journal = {Requirements Engineering Journal (RE '08 Special Issue)},
      year = {2009},
      volume = {14},
      number = {4},
      pages = {231-245},
      month = {December},
      doi = {http://dx.doi.org/10.1007/s00766-009-0075-y}
    }
    					
    2010.09.08 Felix F. Lindlar & Andreas Windisch A Search-Based Approach to Functional Hardware-in-the-Loop Testing 2010 Proceedings of the 2nd International Symposium on Search Based Software Engineering (SSBSE '10), pp. 111-119, Benevento Italy, 7-9 September   Inproceedings Testing and Debugging
    Abstract: The potential of applying search-based testing principles to functional testing has been demonstrated in various cases. The focus was mainly on simulating the system under test using a model or compiled source code in order to evaluate test cases. However, in many cases only the final hardware unit is available for testing. This research presents an approach in which evolutionary functional testing is performed using an actual electronic control unit for test case evaluation. A test environment designed to be used for large-scale industrial systems is introduced. An extensive case study has been carried out to assess its capabilities. Results indicate that the approach proposed in this work is suitable for automated functional testing of embedded control systems within a Hardware-in-the-Loop test environment.
    BibTeX:
    @inproceedings{LindlarW10,
      author = {Felix F. Lindlar and Andreas Windisch},
      title = {A Search-Based Approach to Functional Hardware-in-the-Loop Testing},
      booktitle = {Proceedings of the 2nd International Symposium on Search Based Software Engineering (SSBSE '10)},
      publisher = {IEEE},
      year = {2010},
      pages = {111-119},
      address = {Benevento, Italy},
      month = {7-9 September},
      doi = {http://dx.doi.org/10.1109/SSBSE.2010.22}
    }
    					
    2009.07.26 Chen Hao, John A. Clark & Jeremy Jacob A Search-based Approach to the Automated Design of Security Protocols 2004 (YCS-2004-376), May   Techreport Network Protocols
    Abstract: Security protocols play an important role in modern communications. However, security protocol development is a delicate task; experience shows that computer security protocols are notoriously difficult to get right. Recently, Clark and Jacob provided a framework for automatic protocol generation based on combinatorial optimisation techniques and the symmetric key part of BAN logic. This paper shows how such an approach can be further developed to encompass the full BAN logic without loss of efficiency and thereby synthesise public key protocols and hybrid protocols.
    BibTeX:
    @techreport{HaoCJ04,
      author = {Chen Hao and John A. Clark and Jeremy Jacob},
      title = {A Search-based Approach to the Automated Design of Security Protocols},
      year = {2004},
      number = {YCS-2004-376},
      month = {May},
      url = {http://www.cs.york.ac.uk/ftpdir/reports/2004/YCS/376/YCS-2004-376.pdf}
    }
    					
    2007.12.02 Nigel Tracey A Search-based Automated Test-Data Generation Framework for Safety-Critical Software 2000 School: University of York, UK   Phdthesis Testing and Debugging
    Abstract: Safety-critical systems are those whose failure can lead to injury or loss of life. Software is becoming increasingly relied upon for the safe and correct operation of such systems. Software testing is one of the most important techniques used in industry to assess the quality of a software product. However, software testing is an expensive process, typically consuming at least 50% of the total costs of software development. Existing testing techniques are typically success oriented and target functional correctness of the software. For safety-critical software this is insufficient, as other properties also need to be tested: functional properties, real-time properties, safety properties, behaviour under exception conditions and the meeting of assumptions. Existing approaches to automation offer limited support for safety-critical software testing as they only address single properties. This thesis develops an extensible search-based framework (exploiting heuristic optimisation techniques) to allow automated generation of test-data specifically for safety-critical software. A variety of applications of this framework are presented, targeted at testing the properties of interest when developing safety-critical software. A number of case-studies show that this framework effectively and efficiently generates test-data, enabling errors to be identified earlier in the development process and therefore offering the potential to reduce development costs.
    BibTeX:
    @phdthesis{Tracey00,
      author = {Nigel Tracey},
      title = {A Search-based Automated Test-Data Generation Framework for Safety-Critical Software},
      school = {University of York, UK},
      year = {2000},
      url = {http://portal.acm.org/citation.cfm?coll=GUIDE&dl=GUIDE&id=763025}
    }
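    Illustrative sketch (Python): the style of objective function this framework popularised, where each relational predicate is mapped to a distance that is zero when the predicate holds and grows with how badly it fails, so any heuristic optimiser can minimise it. A minimal rendering:

      K = 1.0  # constant penalty added whenever the predicate is false

      def d_le(a, b):
          """Distance for the predicate a <= b (0 when already satisfied)."""
          return 0.0 if a <= b else (a - b) + K

      def d_eq(a, b):
          """Distance for the predicate a == b."""
          return 0.0 if a == b else abs(a - b) + K

      def d_and(d1, d2):
          """Conjunction: both operand distances must be driven to zero."""
          return d1 + d2

      def d_or(d1, d2):
          """Disjunction: satisfying the cheaper operand suffices."""
          return min(d1, d2)

      # Test data for the condition (x <= 10 and x == y) is then sought by
      # minimising d_and(d_le(x, 10), d_eq(x, y)) over candidate (x, y).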
    					
    2007.12.02 Nigel Tracey, John A. Clark, John McDermid & Keith Mander (Henderson, P., Ed.) A Search-based Automated Test-Data Generation Framework for Safety-Critical Systems 2002 Systems engineering for business process change: new directions, pp. 174-213, New York NY USA   Incollection Testing and Debugging
    Abstract: This paper presents the results of a three year research program to develop an automated test-data generation framework to support the testing of safety-critical software systems. The generality of the framework comes from the exploitation of domain independent search techniques, allowing new test criteria to be addressed by constructing functions that quantify the suitability of test-data against the test-criteria. The paper presents four applications of the framework: specification falsification testing, structural testing, exception condition testing and worst-case execution time testing. The results of three industrial scale case-studies are also presented to show that the framework offers useful support in the development of safety-critical software systems.
    BibTeX:
    @incollection{TraceyCMM02,
      author = {Nigel Tracey and John A. Clark and John McDermid and Keith Mander},
      title = {A Search-based Automated Test-Data Generation Framework for Safety-Critical Systems},
      booktitle = {Systems engineering for business process change: new directions},
      editor = {P. Henderson},
      publisher = {Springer-Verlag New York, Inc.},
      year = {2002},
      pages = {174-213},
      address = {New York, NY, USA},
      url = {http://portal.acm.org/citation.cfm?coll=GUIDE&dl=GUIDE&id=763025}
    }
    					
    2008.08.28 Yuan Zhan & John A. Clark A Search-based Framework for Automatic Testing of MATLAB/Simulink Models 2008 Journal of Systems and Software, Vol. 81(2), pp. 262-285, February   Article Testing and Debugging
    Abstract: Search-based test-data generation has proved successful for code-level testing but almost no search-based work has been carried out at higher levels of abstraction. In this paper the application of such approaches at the higher levels of abstraction offered by MATLAB/Simulink models is investigated and a wide-ranging framework for test-data generation and management is presented. Model-level analogues of code-level structural coverage criteria are presented and search-based approaches to achieving them are described. The paper also describes the first search-based approach to the generation of mutant-killing test data, addressing a fundamental limitation of mutation testing. Some problems remain whatever the level of abstraction considered. In particular, complexity introduced by the presence of persistent state when generating test sequences is as much a challenge at the Simulink model level as it has been found to be at the code level. The framework addresses this problem. Finally, a flexible approach to test sub-set extraction is presented, allowing testing resources to be deployed effectively and efficiently.
    BibTeX:
    @article{ZhanC08,
      author = {Yuan Zhan and John A. Clark},
      title = {A Search-based Framework for Automatic Testing of MATLAB/Simulink Models},
      journal = {Journal of Systems and Software},
      year = {2008},
      volume = {81},
      number = {2},
      pages = {262-285},
      month = {February},
      doi = {http://dx.doi.org/10.1016/j.jss.2007.05.039}
    }
    					
    2012.10.25 Fitsum Meshesha Kifetew A Search-Based Framework for Failure Reproduction 2012 Proceedings of the 4th International Symposium on Search Based Software Engineering (SSBSE '12), Vol. 7515, pp. 279-284, Riva del Garda Italy, 28-30 September   Inproceedings
    Abstract: The task of debugging software failures is generally time consuming and involves substantial manual effort. A crucial part of this task lies in the reproduction of the reported failure at the developer's site. In this paper, we propose a novel framework that aims to address the problem of failure reproduction by employing an adaptive search-based approach in combination with a limited amount of instrumentation. In particular, we formulate the problem of reproducing failures as a search problem: reproducing a software failure can be viewed as the search for a set of inputs that lead its execution to the failing path. The search is guided by information obtained through instrumentation. Preliminary experiments on small-size programs show promising results in which the proposed approach outperforms random search.
    BibTeX:
    @inproceedings{Kifetew12,
      author = {Fitsum Meshesha Kifetew},
      title = {A Search-Based Framework for Failure Reproduction},
      booktitle = {Proceedings of the 4th International Symposium on Search Based Software Engineering (SSBSE '12)},
      publisher = {Springer},
      year = {2012},
      volume = {7515},
      pages = {279-284},
      address = {Riva del Garda, Italy},
      month = {28-30 September},
      doi = {http://dx.doi.org/10.1007/978-3-642-33119-0_22}
    }
    					
    2011.05.19 Shaukat Ali, Muhammad Zohaib Iqbal, Andrea Arcuri & Lionel C. Briand A Search-based OCL Constraint Solver for Model-based Test Data Generation 2011 Proceedings of the 11th International Conference On Quality Software (QSIC '11), pp. 41-50, Madrid Spain, 13-14 July   Inproceedings Testing and Debugging
    Abstract: Model-based testing (MBT) aims at automated, scalable, and systematic testing solutions for complex industrial software systems. To increase chances of adoption in industrial contexts, software systems should be modeled using well-established standards such as the Unified Modeling Language (UML) and Object Constraint Language (OCL). Given that test data generation is one of the major challenges to automate MBT, this is the topic of this paper, with a specific focus on test data generation from OCL constraints. Though search-based software testing (SBST) has been applied to test data generation for white-box testing (e.g., branch coverage), its application to the MBT of industrial software systems has been limited. In this paper, we propose a set of search heuristics based on OCL constraints to guide test data generation and automate MBT in industrial applications. These heuristics are used to develop an OCL solver exclusively based on search, in this particular case a genetic algorithm and the (1+1) EA. Empirical analyses to evaluate the feasibility of our approach are carried out on one industrial system.
    BibTeX:
    @inproceedings{AliIAB11,
      author = {Shaukat Ali and Muhammad Zohaib Iqbal and Andrea Arcuri and Lionel C. Briand},
      title = {A Search-based OCL Constraint Solver for Model-based Test Data Generation},
      booktitle = {Proceedings of the 11th International Conference On Quality Software (QSIC '11)},
      publisher = {IEEE},
      year = {2011},
      pages = {41-50},
      address = {Madrid, Spain},
      month = {13-14 July},
      doi = {http://dx.doi.org/10.1109/QSIC.2011.17}
    }
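    Illustrative sketch (Python): the core idea of a search-based OCL solver is to turn each constraint into a branch-distance-like function and minimise it, here with a toy (1+1) EA over numeric attribute values. The constraint, the mutation scheme and the penalty constant are illustrative assumptions, not the paper's heuristics.

      import random

      K = 1.0

      def d_greater(x, y):
          """Distance for an OCL constraint of the form x > y."""
          return 0.0 if x > y else (y - x) + K

      def one_plus_one_ea(distance, initial, budget=10000, sigma=1.0):
          """(1+1) EA: mutate the assignment, keep it when it is not worse."""
          current = dict(initial)
          best = distance(current)
          for _ in range(budget):
              if best == 0.0:
                  break
              child = {k: v + random.gauss(0, sigma) for k, v in current.items()}
              d = distance(child)
              if d <= best:
                  current, best = child, d
          return current, best

      # Find attribute values satisfying `self.age > self.limit + 10`.
      print(one_plus_one_ea(lambda a: d_greater(a["age"], a["limit"] + 10),
                            {"age": 0.0, "limit": 5.0}))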
    					
    2014.09.03 Atif Aftab Jilani, Muhammad Zohaib Iqbal & Muhammad Uzair Khan A Search Based Test Data Generation Approach for Model Transformations 2014 Proceedings of the 7th International Conference on Theory and Practice of Model Transformations (ICMT '14), pp. 17-24, York UK, 21-22 July   Inproceedings Testing and Debugging
    Abstract: Model transformations are a fundamental part of Model Driven Engineering. Automated testing of model transformation is challenging due to the complexity of generating test models as test data. In the case of model transformations, the test model is an instance of a meta-model. Generating input models manually is a laborious and error prone task. Test cases are typically generated to satisfy a coverage criterion. Test data generation corresponding to various structural testing coverage criteria requires solving a number of predicates. For model transformation, these predicates typically consist of constraints on the source meta-model elements. In this paper, we propose an automated search-based test data generation approach for model transformations. The proposed approach is based on calculating approach level and branch distances to guide the search. For this purpose, we have developed specialized heuristics for calculating branch distances of model transformations. The approach allows test data generation corresponding to various coverage criteria, including statement coverage, branch coverage, and multiple condition/decision coverage. Our approach is generic and can be applied to various model transformation languages. Our developed tool, MOTTER, works with Atlas Transformation Language (ATL) as a proof of concept. We have successfully applied our approach on a well-known case study from ATL Zoo to generate test data.
    BibTeX:
    @inproceedings{JilaniIK14,
      author = {Atif Aftab Jilani and Muhammad Zohaib Iqbal and Muhammad Uzair Khan},
      title = {A Search Based Test Data Generation Approach for Model Transformations},
      booktitle = {Proceedings of the 7th International Conference on Theory and Practice of Model Transformations (ICMT '14)},
      publisher = {Springer},
      year = {2014},
      pages = {17-24},
      address = {York, UK},
      month = {21-22 July},
      doi = {http://dx.doi.org/10.1007/978-3-319-08789-4_2}
    }
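    Illustrative sketch (Python): the approach-level plus normalised branch-distance fitness the abstract refers to is a standard construction in search-based structural testing; a minimal rendering:

      def fitness(approach_level, branch_distance):
          """Approach level counts the control dependencies not yet
          penetrated; the normalised branch distance breaks ties at the
          point where execution diverged from the target."""
          return approach_level + branch_distance / (branch_distance + 1.0)

      # An execution two dependencies away always scores worse than an
      # execution one dependency away, whatever the local branch distances:
      assert fitness(2, 0.5) > fitness(1, 99.0)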
    					
    2017.06.27 Sadeeq Jan, Cu D. Nguyen, Andrea Arcuri & Lionel Briand A Search-Based Testing Approach for XML Injection Vulnerabilities in Web Applications 2017 Proceedings of the 10th IEEE International Conference on Software Testing, Verification and Validation (ICST '17), pp. 356-366, Tokyo Japan, 13-17 March   Inproceedings
    Abstract: In most cases, web applications communicate with web services (SOAP and RESTful). The former act as a front-end to the latter, which contain the business logic. A hacker might not have direct access to those web services (e.g., they are not on public networks), but can still provide malicious inputs to the web application, thus potentially compromising related services. Typical examples are XML injection attacks that target SOAP communications. In this paper, we present a novel, search-based approach used to generate test data for a web application in an attempt to deliver malicious XML messages to web services. Our goal is thus to detect XML injection vulnerabilities in web applications. The proposed approach is evaluated on two studies, including an industrial web application with millions of users. Results show that we are able to effectively generate test data (e.g., input values in an HTML form) that detect such vulnerabilities.
    BibTeX:
    @inproceedings{JanNAB17,
      author = {Sadeeq Jan and Cu D. Nguyen and Andrea Arcuri and Lionel Briand},
      title = {A Search-Based Testing Approach for XML Injection Vulnerabilities in Web Applications},
      booktitle = {Proceedings of the 10th IEEE International Conference on Software Testing, Verification and Validation (ICST '17)},
      publisher = {IEEE},
      year = {2017},
      pages = {356-366},
      address = {Tokyo, Japan},
      month = {13-17 March},
      doi = {http://dx.doi.org/10.1109/ICST.2017.39}
    }
    					
    2016.03.09 Sihan Li, Naiwen Bian, Zhenyu Chen, Dongjiang You & Yuchen He A Simulation Study on Some Search Algorithms for Regression Test Case Prioritization 2010 Proceedings of the 10th International Conference on Quality Software (QSIC '10), pp. 72-81, Zhangjiajie China, 14-15 July   Inproceedings Testing and Debugging
    Abstract: Test case prioritization is an approach aiming at increasing the rate of faults detection during the testing phase, by reordering test case execution. Many techniques for regression test case prioritization have been proposed. In this paper, we perform a simulation experiment to study five search algorithms for test case prioritization and compare the performance of these algorithms. The target of the study is to have an in-depth investigation and improve the generality of the comparison results. The simulation study provides two useful guidelines: (1) Two search algorithms, Additional Greedy Algorithm and 2-Optimal Greedy Algorithm, outperform the other three search algorithms in most cases. (2) The performance of the five search algorithms will be affected by the overlap of test cases with regard to test requirements.
    BibTeX:
    @inproceedings{LiBCYH10,
      author = {Sihan Li and Naiwen Bian and Zhenyu Chen and Dongjiang You and Yuchen He},
      title = {A Simulation Study on Some Search Algorithms for Regression Test Case Prioritization},
      booktitle = {Proceedings of the 10th International Conference on Quality Software (QSIC '10)},
      publisher = {IEEE},
      year = {2010},
      pages = {72-81},
      address = {Zhangjiajie, China},
      month = {14-15 July},
      doi = {http://dx.doi.org/10.1109/QSIC.2010.15}
    }
    					
    2007.12.02 Yoonsik Cheon & Myoung Kim A Specification-based Fitness Function for Evolutionary Testing of Object-oriented Programs 2006 Proceedings of the 8th annual Conference on Genetic and Evolutionary Computation (GECCO '06), pp. 1953-1954, Seattle Washington USA, 8-12 July   Inproceedings Testing and Debugging
    Abstract: Encapsulation of states in object-oriented programs hinders the search for test data using evolutionary testing. As client code is oblivious to the internal state of a server object, no guidance is available to test the client code using evolutionary testing; i.e., it is difficult to determine the fitness or goodness of test data, as it may depend on the hidden internal state. Nevertheless, evolutionary testing is a promising new approach whose effectiveness has been shown by several researchers. We propose a specification-based fitness function for evolutionary testing of object-oriented programs. Our approach is modular in that fitness value calculation doesn't depend on the source code of server classes; thus it works even if the server implementation is changed or no code is available, which is frequently the case for reusable object-oriented class libraries and frameworks.
    BibTeX:
    @inproceedings{CheonK06,
      author = {Yoonsik Cheon and Myoung Kim},
      title = {A Specification-based Fitness Function for Evolutionary Testing of Object-oriented Programs},
      booktitle = {Proceedings of the 8th annual Conference on Genetic and Evolutionary Computation (GECCO '06)},
      publisher = {ACM},
      year = {2006},
      pages = {1953-1954},
      address = {Seattle, Washington, USA},
      month = {8-12 July},
      doi = {http://dx.doi.org/10.1145/1143997.1144322}
    }
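    Illustrative sketch (Python): the point of the paper is that fitness can be computed from the specification of a server class rather than its source. Assuming a hypothetical postcondition `result >= 10`, a minimal rendering of such a fitness function:

      K = 1.0

      def postcondition_distance(result, minimum):
          """Distance to the (assumed) postcondition result >= minimum,
          computed from observable behaviour only; no server source needed."""
          return 0.0 if result >= minimum else (minimum - result) + K

      def fitness(test_input, call_server):
          """Score a candidate test input purely against the specification."""
          return postcondition_distance(call_server(test_input), 10)

      # Stand-in server whose internals stay hidden from the fitness function.
      print(fitness(3, lambda x: x * 2), fitness(6, lambda x: x * 2))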
    					
    2009.11.01 Lili Yang & Bryan F. Jones Assimilation Exchange Based Software Integration 2004 Proceedings of the 17th International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems (IEA/AIE 2004): Innovations in Applied Artificial Intelligence, Vol. 3029, pp. 1229-1238, Ottawa Canada, 17-20 May   Inproceedings Design Tools and Techniques
    Abstract: This paper describes an approach to integrating software with minimum risk using Genetic Algorithms (GAs). The problem was initially motivated by the need to share common software components among various departments within the same organization. The main contribution of this study is that the software integration problem is formulated as a search problem and solved using a GA. A case study, based on an on-going software integration project carried out in the Derbyshire Fire Rescue Service, is used to illustrate the application of the approach.
    BibTeX:
    @inproceedings{YangJ04,
      author = {Lili Yang and Bryan F. Jones},
      title = {Assimilation Exchange Based Software Integration},
      booktitle = {Proceedings of the 17th International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems (IEA/AIE 2004): Innovations in Applied Artificial Intelligence},
      publisher = {Springer},
      year = {2004},
      volume = {3029},
      pages = {1229-1238},
      address = {Ottawa, Canada},
      month = {17-20 May},
      doi = {http://dx.doi.org/10.1007/978-3-540-24677-0_126}
    }
    					
    2014.02.21 Ramón Sagarna, Alexander Mendiburu, Iñaki Inza & José A. Lozano Assisting in Search Heuristics Selection through Multidimensional Supervised Classification: A Case Study on Software Testing 2014 Information Sciences, Vol. 258, pp. 122-139, February   Article Testing and Debugging
    Abstract: A fundamental question in the field of approximation algorithms, for a given problem instance, is the selection of the best (or a suitable) algorithm with regard to some performance criteria. A practical strategy for facing this problem is the application of machine learning techniques. However, limited support has been given in the literature to the case of more than one performance criteria, which is the natural scenario for approximation algorithms. We propose multidimensional Bayesian network (mBN) classifiers as a relatively simple, yet well-principled, approach for helping to solve this problem. Precisely, we relax the algorithm selection decision problem into the elucidation of the nondominated subset of algorithms, which contains the best. This formulation can be used in different ways to elucidate the main problem, each of which can be tackled with an mBN classifier. Namely, we deal with two of them: the prediction of the whole nondominated set and whether an algorithm is nondominated or not. We illustrate the feasibility of the approach for real-life scenarios with a case study in the context of Search Based Software Test Data Generation (SBSTDG). A set of five SBSTDG generators is considered and the aim is to assist a hypothetical test engineer in elucidating good generators to fulfil the branch testing of a given programme.
    BibTeX:
    @article{SagarnaMIL14,
      author = {Ramón Sagarna and Alexander Mendiburu and Iñaki Inza and José A. Lozano},
      title = {Assisting in Search Heuristics Selection through Multidimensional Supervised Classification: A Case Study on Software Testing},
      journal = {Information Sciences},
      year = {2014},
      volume = {258},
      pages = {122-139},
      month = {February},
      doi = {http://dx.doi.org/10.1016/j.ins.2013.09.050}
    }
    					
    2008.07.28 Mark O'Keeffe & Mel Ó Cinnéide A Stochastic Approach to Automated Design Improvement 2003 Proceedings of the 2nd International Conference on Principles and Practice of Programming in Java (PPPJ '03), pp. 59-62, Kilkenny City Ireland, 16-18 June   Inproceedings Design Tools and Techniques
    Abstract: The object-oriented approach to software development facilitates and encourages programming practices that increase reusability, correctness and maintainability in code. This is achieved in Java by providing mechanisms for inheritance, abstraction and encapsulation. By measuring properties that indicate to what extent these mechanisms are utilised we can determine to a large extent how good a design is. However, because these properties often conflict with other goals such as high cohesion and low coupling it can be difficult for a programmer to recognise the best compromise. We propose the novel approach of treating object-oriented design as a combinatorial optimisation problem, where the goal is maximisation of a set of design metrics. With a view to developing a fully automated design improvement tool we have developed a prototype that uses a small metrics suite combined with automated refactorings to move methods to their optimum positions in the class hierarchy. The action of this simulated annealing system produces a new design that is superior in terms of the metrics used and when judged on object-oriented principles.
    BibTeX:
    @inproceedings{OKeeffeO03,
      author = {Mark O'Keeffe and Mel Ó Cinnéide},
      title = {A Stochastic Approach to Automated Design Improvement},
      booktitle = {Proceedings of the 2nd International Conference on Principles and Practice of Programming in Java (PPPJ '03)},
      publisher = {Computer Science Press, Inc.},
      year = {2003},
      pages = {59-62},
      address = {Kilkenny City, Ireland},
      month = {16-18 June},
      url = {http://portal.acm.org/citation.cfm?doid=957289.957308}
    }
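    Illustrative sketch (Python): the simulated annealing loop behind such a system, with designs, refactoring moves and the metrics suite abstracted into stand-in callbacks. This sketches the acceptance rule under those assumptions, not the authors' prototype:

      import math, random

      def simulated_annealing(start, neighbours, quality,
                              t0=1.0, cooling=0.95, steps=500):
          """Anneal over candidate designs, maximising a metrics-based quality.
          neighbours(d) stands for the designs reachable by one refactoring,
          e.g. moving a method within the class hierarchy."""
          current, current_q, t = start, quality(start), t0
          for _ in range(steps):
              candidate = random.choice(neighbours(current))
              delta = quality(candidate) - current_q
              # Always accept improvements; accept regressions with a
              # probability that shrinks as the temperature falls.
              if delta >= 0 or random.random() < math.exp(delta / t):
                  current, current_q = candidate, current_q + delta
              t *= cooling
          return current, current_q

      # Toy stand-in: 'designs' are integers and quality peaks at 42.
      print(simulated_annealing(0, lambda d: [d - 1, d + 1],
                                lambda d: -abs(d - 42)))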
    					
    2016.03.08 Samia Naciri, Mohammed Abdou Janati Idrissi & Noureddine Kerzazi A Strategic Release Planning Model from TPM Point of View 2015 Proceedings of the 10th International Conference on Intelligent Systems: Theories and Applications (SITA '15), pp. 1-9, Rabat Morocco, 20-21 October   Inproceedings
    Abstract: Release planning is a tedious task, especially within the context of Third Party Application Maintenance (TPM). Software engineering acknowledges multiple methods and techniques applied to solve release planning problems. However, few of them offer pragmatic decision support dedicated to TPM managers. This research attempts to apply theory and methods from the Search-Based Software Engineering (SBSE) area, such as meta-heuristic techniques, to solve the complex release planning problems faced by TPM organizations. The aim of this paper is to introduce a strategic release planning model based on several TPM factors and constraints, allowing the TPM manager to carry out an effective roadmap and industrialize the planning process of software maintenance.
    BibTeX:
    @inproceedings{NaciriJK15,
      author = {Samia Naciri and Mohammed Abdou Janati Idrissi and Noureddine Kerzazi},
      title = {A Strategic Release Planning Model from TPM Point of View},
      booktitle = {Proceedings of the 10th International Conference on Intelligent Systems: Theories and Applications (SITA '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {1-9},
      address = {Rabat, Morocco},
      month = {20-21 October},
      doi = {http://dx.doi.org/10.1109/SITA.2015.7358396}
    }
    					
    2009.03.29 José Carlos Bregieiro Ribeiro, Mário Alberto Zenha-Rela & Francisco Fernández de Vega A Strategy for Evaluating Feasible and Unfeasible Test Cases for the Evolutionary Testing of Object-Oriented Software 2008 Proceedings of the 3rd International Workshop on Automation of Software Test (AST '08), pp. 85-92, Leipzig Germany, 11-11 May   Inproceedings Testing and Debugging
    Abstract: Evolutionary Testing is an emerging methodology for automatically producing high quality test data. The focus of our on-going work is precisely on generating test data for the structural unit-testing of object-oriented Java programs. The primary objective is that of efficiently guiding the search process towards the definition of a test set that achieves full structural coverage of the test object. However, the state problem of object-oriented programs requires specifying carefully fine-tuned methodologies that promote the traversal of problematic structures and difficult control-flow paths - which often involves the generation of complex and intricate test cases, that define elaborate state scenarios. This paper proposes a methodology for evaluating the quality of both feasible and unfeasible test cases - i.e., those that are effectively completed and terminate with a call to the method under test, and those that abort prematurely because a runtime exception is thrown during test case execution. With our approach, unfeasible test cases are considered at certain stages of the evolutionary search, promoting diversity and enhancing the possibility of achieving full coverage.
    BibTeX:
    @inproceedings{RibeiroZV08b,
      author = {José Carlos Bregieiro Ribeiro and Mário Alberto Zenha-Rela and Francisco Fernández de Vega},
      title = {A Strategy for Evaluating Feasible and Unfeasible Test Cases for the Evolutionary Testing of Object-Oriented Software},
      booktitle = {Proceedings of the 3rd International Workshop on Automation of Software Test (AST '08)},
      publisher = {ACM},
      year = {2008},
      pages = {85-92},
      address = {Leipzig, Germany},
      month = {11-11 May},
      doi = {http://dx.doi.org/10.1145/1370042.1370061}
    }
    					
    2007.12.02 Bryan F. Jones, David E. Eyres & Harmen-Hinrich Sthamer A Strategy for using Genetic Algorithms to Automate Branch and Fault-based Testing 1998 Computer Journal, Vol. 41(2), pp. 98-107   Article Testing and Debugging
    Abstract: Genetic algorithms have been used successfully to generate software test data automatically; all branches were covered with substantially fewer generated tests than simple random testing. We generated test sets which executed all branches in a variety of programs including a quadratic equation solver, remainder, linear and binary search procedures, and a triangle classifier comprising a system of five procedures. We regard the generation of test sets as a search through the input domain for appropriate inputs. The genetic algorithms generated test data to give 100% branch coverage in up to two orders of magnitude fewer tests than random testing. Whilst some of this benefit is offset by increased computation effort, the adequacy of the test data is improved by the genetic algorithm's ability to generate test sets which are at or close to the input subdomain boundaries. Genetic algorithms may be used for fault-based testing where faults associated with mistakes in branch predicates are revealed. The software has been deliberately seeded with faults in the branch predicates (i.e. mutation testing), and our system successfully killed 97% of the mutants.
    BibTeX:
    @article{JonesES98,
      author = {Bryan F. Jones and David E. Eyres and Harmen-Hinrich Sthamer},
      title = {A Strategy for using Genetic Algorithms to Automate Branch and Fault-based Testing},
      journal = {Computer Journal},
      year = {1998},
      volume = {41},
      number = {2},
      pages = {98-107},
      doi = {http://dx.doi.org/10.1093/comjnl/41.2.98}
    }
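    Illustrative sketch (Python): a plain generational GA of the kind the abstract describes, minimising a branch-distance fitness until the target branch is taken. The representation, operators and target predicate below are toy assumptions, not the paper's setup:

      import random

      def genetic_algorithm(fitness, random_individual, crossover, mutate,
                            pop_size=50, generations=100):
          """Generational GA minimising a branch-distance-style fitness."""
          population = [random_individual() for _ in range(pop_size)]
          for _ in range(generations):
              population.sort(key=fitness)
              if fitness(population[0]) == 0:  # the target branch was taken
                  return population[0]
              parents = population[: pop_size // 2]  # truncation selection
              population = parents + [
                  mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(pop_size - len(parents))]
          return min(population, key=fitness)

      # Toy target: find inputs (a, b) that drive the branch `a == 2 * b`.
      print(genetic_algorithm(
          fitness=lambda ind: abs(ind[0] - 2 * ind[1]),
          random_individual=lambda: (random.randint(-100, 100),
                                     random.randint(-100, 100)),
          crossover=lambda p, q: (p[0], q[1]),
          mutate=lambda ind: (ind[0] + random.randint(-1, 1),
                              ind[1] + random.randint(-1, 1))))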
    					
    2016.03.09 Yen-Ching Hsu, Kuan-Li Peng & Chin-Yu Huang A Study of Applying Severity-weighted Greedy Algorithm to Software Test Case Prioritization During Testing 2014 Proceedings of IEEE International Conference on Industrial Engineering and Engineering Management (IEEM '14), pp. 1086-1090, Bandar Sunway Malaysia, 9-12 December   Inproceedings Testing and Debugging
    Abstract: Regression testing is a very useful technique for software testing. Traditionally, there are several techniques for test case prioritization; two of the most widely used are the Greedy and Additional Greedy Algorithms (GA and AGA). However, they may not consider fault severity while prioritizing test cases. In this paper, an Enhanced Additional Greedy Algorithm (EAGA) is proposed for test case prioritization. Experiments with eight subject programs are performed to investigate the effects of different techniques under different criteria and fault severities. Experimental results show that the proposed EAGA performs better than the other techniques.
    BibTeX:
    @inproceedings{HsuPH14,
      author = {Yen-Ching Hsu and Kuan-Li Peng and Chin-Yu Huang},
      title = {A Study of Applying Severity-weighted Greedy Algorithm to Software Test Case Prioritization During Testing},
      booktitle = {Proceedings of IEEE International Conference on Industrial Engineering and Engineering Management (IEEM '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {1086-1090},
      address = {Bandar Sunway, Malaysia},
      month = {9-12 December},
      doi = {http://dx.doi.org/10.1109/IEEM.2014.7058806}
    }
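    Illustrative sketch (Python): the Additional Greedy strategy the paper extends repeatedly picks the test adding the most yet-uncovered requirements; weighting each requirement by an estimated fault severity gives the flavour of the proposed severity-weighted variant. The weights, data and reset rule below are illustrative assumptions:

      def additional_greedy(tests, coverage, weight=lambda r: 1.0):
          """Order tests by additional severity-weighted coverage.
          coverage[t] is the set of requirements test t covers; weight
          injects a per-requirement severity estimate."""
          remaining, covered, order = set(tests), set(), []
          while remaining:
              best = max(remaining,
                         key=lambda t: sum(weight(r) for r in coverage[t] - covered))
              order.append(best)
              covered |= coverage[best]
              remaining.discard(best)
              if not any(coverage[t] - covered for t in remaining):
                  covered = set()  # restart once nothing new can be covered
          return order

      print(additional_greedy(
          ["t1", "t2", "t3"],
          {"t1": {"r1", "r2"}, "t2": {"r2"}, "t3": {"r3"}},
          weight=lambda r: {"r1": 3.0, "r2": 1.0, "r3": 2.0}[r]))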
    					
    2010.02.24 Yanfu Li, Min Xie & T.N. Goh A Study of Mutual Information Based Feature Selection for Case Based Reasoning in Software Cost Estimation 2009 Expert Systems with Applications, Vol. 36(3, Part 2), pp. 5921-5931, April   Article Management
    Abstract: Software cost estimation is one of the most crucial activities in the software development process. In the past decades, many methods have been proposed for cost estimation. Case based reasoning (CBR) is one of these techniques. Feature selection is an important preprocessing stage of case based reasoning. Most existing feature selection methods of case based reasoning are 'wrappers', which can usually yield high fitting accuracy at the cost of high computational complexity and low explanation of the selected features. In our study, the mutual information based feature selection (MICBR) is proposed. This approach hybridizes both the 'wrapper' and 'filter' mechanisms; a filter is another kind of feature selector with much lower complexity than wrappers, and the features selected by filters are likely to be generalized to other conditions. The MICBR is then compared with popular feature selectors and the published works. The results show that the MICBR is an effective feature selector for case based reasoning, overcoming some of the limitations and computational complexities of other feature selection techniques in the field.
    BibTeX:
    @article{LiXG09,
      author = {Yanfu Li and Min Xie and T.N. Goh},
      title = {A Study of Mutual Information Based Feature Selection for Case Based Reasoning in Software Cost Estimation},
      journal = {Expert Systems with Applications},
      year = {2009},
      volume = {36},
      number = {3, Part 2},
      pages = {5921-5931},
      month = {April},
      doi = {http://dx.doi.org/10.1016/j.eswa.2008.07.062}
    }
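    Illustrative sketch (Python): the filter side of the idea: rank each candidate feature by its empirical mutual information with the effort target and keep the top-ranked ones. The paper's MICBR additionally hybridizes this with a wrapper stage, which this sketch omits; the toy data is an assumption.

      from collections import Counter
      from math import log2

      def mutual_information(xs, ys):
          """Empirical mutual information (in bits) of two discrete columns."""
          n = len(xs)
          px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
          return sum((c / n) * log2(c * n / (px[x] * py[y]))
                     for (x, y), c in pxy.items())

      def filter_features(features, target, k=2):
          """Keep the k features sharing the most information with the target."""
          return sorted(features,
                        key=lambda f: mutual_information(features[f], target),
                        reverse=True)[:k]

      feats = {"size": [1, 1, 2, 2], "team": [0, 1, 0, 1], "noise": [1, 1, 1, 1]}
      print(filter_features(feats, target=[10, 10, 20, 20]))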
    					
    2011.03.04 Juan J. Durillo, Yuanyuan Zhang, Enrique Alba, Mark Harman & Antonio J. Nebro A Study of the Bi-Objective Next Release Problem 2011 Empirical Software Engineering, Vol. 16(1), pp. 29-60, February   Article Requirements/Specifications
    Abstract: One important issue addressed by software companies is to determine which features should be included in the next release of their products, in such a way that the highest possible number of customers get satisfied while entailing the minimum cost for the company. This problem is known as the Next Release Problem (NRP). Since minimizing the total cost of including new features into a software package and maximizing the total satisfaction of customers are contradictory objectives, the problem has a multi-objective nature. In this work, we apply three state-of-the-art multi-objective metaheuristics (two genetic algorithms, NSGA-II and MOCell, and one evolutionary strategy, PAES) for solving NRP. Our goal is twofold: on the one hand, we are interested in analyzing the results obtained by these metaheuristics over a benchmark composed of six academic problems plus a real world data set provided by Motorola; on the other hand, we want to provide insight about the solution to the problem. The obtained results show three different kinds of conclusions: NSGA-II is the technique computing the highest number of optimal solutions, MOCell provides the product manager with the widest range of different solutions, and PAES is the fastest technique (but with the least accurate results). Furthermore, we have observed that the best solutions found so far are composed of a high percentage of low-cost requirements and of those requirements that produce the largest satisfaction on the customers as well.
    BibTeX:
    @article{DurilloZAHN11,
      author = {Juan J. Durillo and Yuanyuan Zhang and Enrique Alba and Mark Harman and Antonio J. Nebro},
      title = {A Study of the Bi-Objective Next Release Problem},
      journal = {Empirical Software Engineering},
      year = {2011},
      volume = {16},
      number = {1},
      pages = {29-60},
      month = {February},
      doi = {http://dx.doi.org/10.1007/s10664-010-9147-3}
    }
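    Illustrative sketch (Python): the bi-objective formulation itself is compact: each subset of candidate requirements is scored by total cost (to be minimised) and total customer satisfaction (to be maximised), and the outcome of interest is the set of non-dominated subsets. The numbers below are made up, and random sampling stands in for NSGA-II, MOCell and PAES:

      import random

      costs = [4, 2, 7, 1, 5]   # implementation cost per candidate requirement
      value = [9, 3, 8, 2, 6]   # aggregated customer satisfaction per requirement

      def objectives(subset):
          """(total cost, total satisfaction) of a set of chosen requirements."""
          return (sum(costs[i] for i in subset), sum(value[i] for i in subset))

      def dominates(a, b):
          """a dominates b: costs no more, satisfies no less, and differs."""
          return a[0] <= b[0] and a[1] >= b[1] and a != b

      pool = {frozenset(i for i in range(len(costs)) if random.random() < 0.5)
              for _ in range(500)}
      scored = {s: objectives(s) for s in pool}
      front = {scored[s] for s in scored
               if not any(dominates(scored[t], scored[s]) for t in scored)}
      print(sorted(front))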
    					
    2009.05.14 Juan J. Durillo, Yuanyuan Zhang, Enrique Alba & Antonio J. Nebro A Study of the Multi-Objective Next Release Problem 2009 Proceedings of the 1st International Symposium on Search Based Software Engineering (SSBSE '09), pp. 49-58, Cumberland Lodge Windsor UK, 13-15 May   Inproceedings Requirements/Specifications
    Abstract: One of the first issues which has to be taken into account by software companies is to determine what should be included in the next release of their products, in such a way that the highest possible number of customers get satisfied while this entails a minimum cost for the company. This problem is known as the Next Release Problem (NRP). Since minimizing the total cost of including new features into a software package and maximizing the total satisfaction of customers are contradictory objectives, the problem has a multi-objective nature. In this work we study the NRP from the multi-objective point of view, paying attention to the quality of the obtained solutions, the number of solutions, the range of solutions covered by these fronts, and the number of optimal solutions obtained. Also, we evaluate the performance of two state-of-the-art multi-objective metaheuristics for solving NRP: NSGA-II and MOCell. The obtained results show that MOCell outperforms NSGA-II in terms of the range of solutions covered, while the latter is able to obtain better solutions than MOCell in large instances. Furthermore, we have observed that the optimal solutions found are composed of a high percentage of low-cost requirements and, also, of the requirements that produce the most satisfaction for the customers.
    BibTeX:
    @inproceedings{DurilloZAN09,
      author = {Juan J. Durillo and Yuanyuan Zhang and Enrique Alba and Antonio J. Nebro},
      title = {A Study of the Multi-Objective Next Release Problem},
      booktitle = {Proceedings of the 1st International Symposium on Search Based Software Engineering (SSBSE '09)},
      publisher = {IEEE},
      year = {2009},
      pages = {49-58},
      address = {Cumberland Lodge, Windsor, UK},
      month = {13-15 May},
      doi = {http://dx.doi.org/10.1109/SSBSE.2009.21}
    }
    					
    2011.05.19 Chao-Jung Hsu & Chin-Yu Huang A Study on the Applicability of Modified Genetic Algorithms for the Parameter Estimation of Software Reliability Modeling 2010 Proceedings of the 34th Annual IEEE International Computer Software and Applications Conference (COMPSAC '10), pp. 531-540, Seoul Korea, 19-23 July   Inproceedings Distribution and Maintenance
    Abstract: In order to assure software quality and assess software reliability, many software reliability growth models (SRGMs) have been proposed over the past three decades for estimating the reliability growth of products. In principle, the two most widely used methods for the parameter estimation of SRGMs are maximum likelihood estimation (MLE) and least squares estimation (LSE). However, these two estimation approaches may impose some restrictions on SRGMs, such as requiring the existence of derivatives of the formulated models or complex calculations. Thus, in this paper, we propose a modified genetic algorithm (MGA) to estimate the parameters of SRGMs. Experiments based on real software failure data are performed, and the results show that the proposed genetic algorithm is more effective and faster than traditional genetic algorithms.
    BibTeX:
    @inproceedings{HsuH10,
      author = {Chao-Jung Hsu and Chin-Yu Huang},
      title = {A Study on the Applicability of Modified Genetic Algorithms for the Parameter Estimation of Software Reliability Modeling},
      booktitle = {Proceedings of the 34th Annual IEEE International Computer Software and Applications Conference (COMPSAC '10)},
      publisher = {IEEE},
      year = {2010},
      pages = {531-540},
      address = {Seoul, Korea},
      month = {19-23 July},
      doi = {http://dx.doi.org/10.1109/COMPSAC.2010.59}
    }
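    Illustrative sketch (Python): parameter estimation of an SRGM as a search problem, here least-squares fitting of the classic Goel-Okumoto mean value function m(t) = a(1 - e^(-bt)) with a toy real-coded GA. The paper's modified GA adds operators that this sketch does not reproduce:

      import math, random

      def mean_value(a, b, t):
          """Goel-Okumoto mean value function m(t) = a * (1 - exp(-b * t))."""
          return a * (1.0 - math.exp(-b * t))

      def sse(params, data):
          """Least-squares error against (time, cumulative faults) pairs."""
          a, b = params
          return sum((m - mean_value(a, b, t)) ** 2 for t, m in data)

      def estimate(data, pop_size=40, generations=200):
          """Toy real-coded GA minimising the least-squares error."""
          pop = [(random.uniform(1, 500), random.uniform(1e-3, 1.0))
                 for _ in range(pop_size)]
          for _ in range(generations):
              pop.sort(key=lambda p: sse(p, data))
              elite = pop[: pop_size // 4]  # truncation selection
              pop = elite + [
                  (max(1e-6, random.choice(elite)[0] + random.gauss(0, 5.0)),
                   max(1e-6, random.choice(elite)[1] + random.gauss(0, 0.01)))
                  for _ in range(pop_size - len(elite))]
          return min(pop, key=lambda p: sse(p, data))

      # Synthetic failure data drawn from a = 100, b = 0.05.
      data = [(t, mean_value(100, 0.05, t)) for t in range(0, 100, 10)]
      print(estimate(data))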
    					
    2011.03.07 Changhai Nie & Hareton Leung A Survey of Combinatorial Testing 2011 ACM Computing Surveys, Vol. 43(2), pp. 11:1-29, January   Article Testing and Debugging
    Abstract: Combinatorial Testing (CT) can detect failures triggered by interactions of parameters in the Software Under Test (SUT) with a covering array test suite generated by some sampling mechanisms. It has been an active field of research in the last twenty years. This article aims to review previous work on CT, highlights the evolution of CT, and identifies important issues, methods, and applications of CT, with the goal of supporting and directing future practice and research in this area. First, we present the basic concepts and notations of CT. Second, we classify the research on CT into the following categories: modeling for CT, test suite generation, constraints, failure diagnosis, prioritization, metric, evaluation, testing procedure and the application of CT. For each of the categories, we survey the motivation, key issues, solutions, and the current state of research. Then, we review the contribution from different research groups, and present the growing trend of CT research. Finally, we recommend directions for future CT research, including: (1) modeling for CT, (2) improving the existing test suite generation algorithm, (3) improving analysis of testing result, (4) exploring the application of CT to different levels of testing and additional types of systems, (5) conducting more empirical studies to fully understand limitations and strengths of CT, and (6) combining CT with other testing techniques.
    BibTeX:
    @article{NieL11,
      author = {Changhai Nie and Hareton Leung},
      title = {A Survey of Combinatorial Testing},
      journal = {ACM Computing Surveys},
      year = {2011},
      volume = {43},
      number = {2},
      pages = {11:1-29},
      month = {January},
      doi = {http://dx.doi.org/10.1145/1883612.1883618}
    }
    					
    2011.07.20 Márcio de Oliveira Barros & Arilo Claudio Dias Neto A Survey of Empirical Investigations on SSBSE Papers 2011 Proceedings of the 3rd International Symposium on Search Based Software Engineering (SSBSE '11), Vol. 6956, pp. 268-268, Szeged Hungary, 10-12 September   Inproceedings General Aspects and Survey
    Abstract: We present a survey based on papers published in the first two editions of the Symposium on Search-Based Software Engineering (2009 and 2010). The survey addresses how empirical studies are being designed and used by researchers to support evidence on the effectiveness and efficiency of heuristic search techniques when applied to Software Engineering problems. The survey reuses the structure and research questions proposed by a systematic review published in the context of search-based software testing and covering research papers up to 2007. A list of validity threats for SBSE experiments is also proposed and the extent to which the selected research papers address these threats is evaluated. We have compared our results with those presented by the former systematic review and observed that the number of Search-based Software Engineering (SBSE) research papers supported by well-designed and well-reported empirical studies seems to be growing over the years. On the other hand, construct, internal, and external validity threats are still not properly addressed in the design of many SBSE experiments.
    BibTeX:
    @inproceedings{BarrosD11,
      author = {Márcio de Oliveira Barros and Arilo Claudio Dias Neto},
      title = {A Survey of Empirical Investigations on SSBSE Papers},
      booktitle = {Proceedings of the 3rd International Symposium on Search Based Software Engineering (SSBSE '11)},
      publisher = {Springer},
      year = {2011},
      volume = {6956},
      pages = {268-268},
      address = {Szeged, Hungary},
      month = {10-12 September},
      doi = {http://dx.doi.org/10.1007/978-3-642-23716-4_24}
    }
    					
    2017.06.29 Ilhem Boussaïd, Patrick Siarry & Mohamed Ahmed-Nacer A Survey on Search-based Model-driven Engineering 2017 Automated Software Engineering, Vol. 24(2), pp. 233-294, June   Article
    Abstract: Model-driven engineering (MDE) and search-based software engineering (SBSE) are both relevant approaches to software engineering. MDE aims to raise the level of abstraction in order to cope with the complexity of software systems, while SBSE involves the application of metaheuristic search techniques to complex software engineering problems, reformulating engineering tasks as optimization problems. The purpose of this paper is to survey the relatively recent research activity lying at the interface between these two fields, an area that has come to be known as search-based model-driven engineering. We begin with an introduction to MDE, the concepts of models, of metamodels and of model transformations. We also give a brief introduction to SBSE and metaheuristics. Then, we survey the current research work centered around the combination of search-based techniques and MDE. The literature survey is accompanied by the presentation of references for further details.
    BibTeX:
    @article{BoussaidSA17,
      author = {Ilhem Boussaïd and Patrick Siarry and Mohamed Ahmed-Nacer},
      title = {A Survey on Search-based Model-driven Engineering},
      journal = {Automated Software Engineering},
      year = {2017},
      volume = {24},
      number = {2},
      pages = {233-294},
      month = {June},
      doi = {http://dx.doi.org/10.1007/s10515-017-0215-4}
    }
    					
    2011.01.19 Outi Räihä A Survey on Search-based Software Design 2010 Computer Science Review, Vol. 4(4), pp. 203-249, November   Article Design Tools and Techniques
    Abstract: This survey investigates search-based approaches to software design. The basics of the most popular meta-heuristic algorithms are presented as background to the search-based viewpoint. Software design is considered from a wide viewpoint, including topics that can also be categorized as software maintenance or re-engineering. Search-based approaches have been used in research from the high architecture design level to software clustering and finally software refactoring. Enhancing and predicting software quality with search-based methods is also taken into account as a part of the design process. The background for the underlying software engineering problems is discussed, after which search-based approaches are presented. Summarizing remarks and tables collecting the fundamental issues of approaches for each type of problem are given. The choices regarding critical decisions, such as representation and fitness function, when used in meta-heuristic search algorithms, are emphasized and discussed in detail. Ideas for future research directions are also given.
    BibTeX:
    @article{Raiha10,
      author = {Outi Räihä},
      title = {A Survey on Search-based Software Design},
      journal = {Computer Science Review},
      year = {2010},
      volume = {4},
      number = {4},
      pages = {203-249},
      month = {November},
      doi = {http://dx.doi.org/10.1016/j.cosrev.2010.06.001}
    }
    					
    2009.03.31 Outi Räihä A Survey on Search-based Software Designs 2009 (D-2009-1), March   Techreport General Aspects and Survey
    Abstract: Search-based approaches to software design are investigated. Software design is considered from a wide viewpoint, including topics that can also be categorized under software maintenance or re-engineering. Search-based approaches have been used in research from high architecture level design to software clustering and finally software refactoring. Enhancing and predicting software quality with search-based methods is also taken into account as a part of the design process. The choices regarding fundamental decisions, such as representation and fitness function, when used in meta-heuristic search algorithms, are emphasized and discussed in detail. Ideas for future research directions are also given.
    BibTeX:
    @techreport{Raiha09,
      author = {Outi Räihä},
      title = {A Survey on Search-based Software Designs},
      year = {2009},
      number = {D-2009-1},
      month = {March},
      url = {http://www.cs.uta.fi/reports/dsarja/D-2009-1.pdf}
    }
    					
    2016.02.17 Chayanika Sharma, Sangeeta Sabharwal & Ritu Sibal A Survey on Software Testing Techniques using Genetic Algorithm 2013 International Journal of Computer Science Issues, Vol. 10(1), pp. 381-393, January   Article Testing and Debugging
    Abstract: The overall aim of the software industry is to ensure delivery of high quality software to the end user. To ensure high quality software, it is required to test software. Testing ensures that software meets user specifications and requirements. However, the field of software testing has a number of underlying issues, such as effective generation of test cases and prioritisation of test cases, which need to be tackled. These issues increase the effort, time and cost of testing. Different techniques and methodologies have been proposed for taking care of these issues. The use of evolutionary algorithms for automatic test generation has been an area of interest for many researchers. The Genetic Algorithm (GA) is one such form of evolutionary algorithm. In this research paper, we present a survey of GA approaches for addressing the various issues encountered during software testing.
    BibTeX:
    @article{SharmaSS13,
      author = {Chayanika Sharma and Sangeeta Sabharwal and Ritu Sibal},
      title = {A Survey on Software Testing Techniques using Genetic Algorithm},
      journal = {International Journal of Computer Science Issues},
      year = {2013},
      volume = {10},
      number = {1},
      pages = {381-393},
      month = {January},
      url = {http://www.ijcsi.org/papers/IJCSI-10-1-1-381-393.pdf}
    }
    					
    2014.08.12 Zoltan A. Kocsis & Jerry Swan Asymptotic Genetic Improvement Programming with Type Functors and Catamorphisms 2014 Proceedings of Semantic Methods in Genetic Programming (SMGP) at Parallel Problem Solving from Nature (PPSN XIII), Ljubljana Slovenia   Inproceedings
    BibTeX:
    @inproceedings{KocsisS14,
      author = {Zoltan A. Kocsis and Jerry Swan},
      title = {Asymptotic Genetic Improvement Programming with Type Functors and Catamorphisms},
      booktitle = {Proceedings of Semantic Methods in Genetic Programming (SMGP) at Parallel Problem Solving from Nature (PPSN XIII)},
      year = {2014},
      address = {Ljubljana, Slovenia},
      url = {http://www.cs.put.poznan.pl/kkrawiec/smgp/uploads/Site/Kocsis.pdf}
    }
    					
    2011.02.05 An Ngo-The & Günther Ruhe A Systematic Approach for Solving the Wicked Problem of Software Release Planning 2008 Soft Computing - A Fusion of Foundations, Methodologies and Applications, Vol. 12(1), pp. 95-108, August   Article Requirements/Specifications
    Abstract: Release planning is known to be a cognitively and computationally difficult problem. Different kinds of uncertainties make it hard to formulate and solve the problem. Our solution approach called EVOLVE+ mitigates these difficulties by (i) an evolutionary problem solving method combining rigorous solution methods to solve the actual formalization of the problem combined with the interactive involvement of the human experts in this process, (ii) provision of a portfolio of diversified and qualified solutions at each iteration of the solution process, and (iii) the application of a multi-criteria decision aid method (ELECTRE IS) to assist the selection of the final solution from a set of qualified solutions. At the final stage of the process, an outranking relation is established among the qualified candidate solutions to address existing soft constraints or objectives. A case study is provided to illustrate and initially evaluate the given approach. The proposed method and results are not limited to software release planning, but can be adapted to a wider class of wicked planning problems.
    BibTeX:
    @article{NgoTheR08,
      author = {An Ngo-The and Günther Ruhe},
      title = {A Systematic Approach for Solving the Wicked Problem of Software Release Planning},
      journal = {Soft Computing - A Fusion of Foundations, Methodologies and Applications},
      year = {2008},
      volume = {12},
      number = {1},
      pages = {95-108},
      month = {August},
      doi = {http://dx.doi.org/10.1007/s00500-007-0219-2}
    }
    					
    2015.02.05 Roberto E. Lopez-Herrejon, Lukas Linsbauer & Alexander Egyed A Systematic Mapping Study of Search-Based Software Engineering for Software Product Lines 2015 Information and Software Technology, Vol. 61, pp. 33-51, May   Article
    Abstract: Context: Search-Based Software Engineering (SBSE) is an emerging discipline that focuses on the application of search-based optimization techniques to software engineering problems. Software Product Lines (SPLs) are families of related software systems whose members are distinguished by the set of features each one provides. SPL development practices have proven benefits such as improved software reuse, better customization, and faster time to market. A typical SPL usually involves a large number of systems and features, a fact that makes them attractive for the application of SBSE techniques which are able to tackle problems that involve large search spaces. Objective: The main objective of our work is to identify the quantity and the type of research on the application of SBSE techniques to SPL problems. More concretely, the SBSE techniques that have been used and at what stage of the SPL life cycle, the type of case studies employed and their empirical analysis, and the fora where the research has been published. Method: A systematic mapping study was conducted with five research questions and assessed 77 publications from 2001, when the term SBSE was coined, until 2014. Conclusions: Our study attested the great synergy existing between both fields, corroborated the increasing and ongoing interest in research on the subject, and revealed challenging open research questions.
    BibTeX:
    @article{Lopez-HerrejonLE15,
      author = {Roberto E. Lopez-Herrejon and Lukas Linsbauer and Alexander Egyed},
      title = {A Systematic Mapping Study of Search-Based Software Engineering for Software Product Lines},
      journal = {Information and Software Technology},
      year = {2015},
      volume = {61},
      pages = {33-51},
      month = {May},
      doi = {http://dx.doi.org/10.1016/j.infsof.2015.01.008}
    }
    					
    2013.10.09 Gerardo Quintana & Martin Solari A Systematic Mapping Study on Experiments with Automatic Structural Test Case Generation (in Spanish) 2012 Proceedings of 2012 XXXVIII Latin American Conference on Informatics (CLEI '12), pp. 1-10, Medellín, 1-5 October   Inproceedings Testing and Debugging
    Abstract: In the literature there are many empirical studies evaluating techniques for automatic structural test case generation. However, there is no systematic study of the kinds of experiments conducted to automate the process. The main objective of this paper is the classification and thematic analysis of the experiments reported in the literature to increase the efficacy and efficiency of the automation of structural testing, together with a classification of the techniques for generating test data. The methodology is a systematic mapping study. The results indicate that the experiments are mainly focused on three areas (test data generation, reduction of test suites and techniques for dealing with complex structures of programs), and that there are different combined approaches to make the generation more effective and efficient.
    BibTeX:
    @inproceedings{QuintanaS12,
      author = {Gerardo Quintana and Martin Solari},
      title = {A Systematic Mapping Study on Experiments with Automatic Structural Test Case Generation (in Spanish)},
      booktitle = {Proceedings of 2012 XXXVIII Latin American Conference on Informatics (CLEI '12)},
      publisher = {IEEE},
      year = {2012},
      pages = {1-10},
      address = {Medellín},
      month = {1-5 October},
      doi = {http://dx.doi.org/10.1109/CLEI.2012.6427203}
    }
    					
    2008.09.02 Wasif Afzal, Richard Torkar & Robert Feldt A Systematic Mapping Study on Non-Functional Search-based Software Testing 2008 Proceedings of the 20th International Conference on Software Engineering and Knowledge Engineering (SEKE '08), pp. 488-493, San Francisco USA, 1-3 July   Inproceedings Testing and Debugging
    Abstract: Automated software test generation has been applied across the spectrum of test case design methods; this includes white-box (structural), black-box (functional), grey-box (combination of structural and functional) and non-functional testing. In this paper, we undertake a systematic mapping study to present a broad review of primary studies on the application of search-based optimization techniques to non-functional testing. The motivation is to identify the evidence available on the topic and to identify gaps in the application of search-based optimization techniques to different types of non-functional testing. The study is based on a comprehensive set of 35 papers obtained after applying multi-stage selection criteria; the papers were published in workshops, conferences and journals in the time span 1996-2007. We conclude that the search-based software testing community needs to do more and broader studies on non-functional search-based software testing (NFSBST), and the results from our systematic map can help direct such efforts.
    BibTeX:
    @inproceedings{AfzalTF08,
      author = {Wasif Afzal and Richard Torkar and Robert Feldt},
      title = {A Systematic Mapping Study on Non-Functional Search-based Software Testing},
      booktitle = {Proceedings of the 20th International Conference on Software Engineering and Knowledge Engineering (SEKE '08)},
      publisher = {Knowledge Systems Institute Graduate School},
      year = {2008},
      pages = {488-493},
      address = {San Francisco, USA},
      month = {1-3 July},
      url = {http://www.torkar.se/Site/Publications_files/a_systematic_mapping_study_on_non-fu.pdf}
    }
    					
    2009.02.16 Wasif Afzal, Richard Torkar & Robert Feldt A Systematic Review of Search-based Testing for Non-Functional System Properties 2009 Information and Software Technology   Article Testing and Debugging
    Abstract: Search-based software testing is the application of metaheuristic search techniques to generate software tests. The test adequacy criterion is transformed into a fitness function and a set of solutions in the search space are evaluated with respect to the fitness function using a metaheuristic search technique. The application of metaheuristic search techniques for testing is promising due to the fact that exhaustive testing is infeasible considering the size and complexity of software under test. Search-based software testing has been applied across the spectrum of test case design methods; this includes white-box (structural), black-box (functional) and grey-box (combination of structural and functional) testing. In addition, metaheuristic search techniques have also been applied to test non-functional properties. The overall objective of undertaking this systematic review is to examine existing work into non-functional search-based software testing (NFSBST). We are interested in types of non-functional testing targeted using metaheuristic search techniques, different fitness functions used in different types of search-based non-functional testing and challenges in the application of these techniques. The systematic review is based on a comprehensive set of 35 articles obtained after a multi-stage selection process and published in the time span 1996–2007. The results of the review show that metaheuristic search techniques have been applied for non-functional testing of execution time, quality of service, security, usability and safety. A variety of metaheuristic search techniques are found to be applicable for non-functional testing including simulated annealing, tabu search, genetic algorithms, ant colony methods, grammatical evolution, genetic programming (and its variants including linear genetic programming) and swarm intelligence methods. The review reports on different fitness functions used to guide the search for each of the categories of execution time, safety, usability, quality of service and security; along with a discussion of possible challenges in the application of metaheuristic search techniques.
    BibTeX:
    @article{AfzalTF09,
      author = {Wasif Afzal and Richard Torkar and Robert Feldt},
      title = {A Systematic Review of Search-based Testing for Non-Functional System Properties},
      journal = {Information and Software Technology},
      year = {2009},
      doi = {http://dx.doi.org/10.1016/j.infsof.2008.12.005}
    }
    					
    2013.06.28 Antônio Mauricio Pitangueira, Rita Suzana P. Maciel, Márcio de Oliveira Barros & Aline Santos Andrade A Systematic Review of Software Requirements Selection and Prioritization using SBSE Approaches 2013 Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13), Vol. 8084, pp. 188-208, St. Petersburg Russia, 24-26 August   Inproceedings Requirements/Specifications
    Abstract: Selection and prioritization of software requirements represents an area of interest in Search-Based Software Engineering (SBSE) and its main focus is finding and selecting a set of requirements that may be part of a software release. This paper uses a systematic review to investigate which SBSE approaches have been proposed to address software requirement selection and prioritization problems. The search strategy identified 30 articles in this area and they were analyzed against 18 previously established quality criteria. The results of this systematic review show which aspects of the requirements selection and prioritization problems were addressed by researchers, the methods, approaches, and search techniques currently adopted to address these problems, and the strengths and weaknesses of each of these techniques. The review provides a map showing the gaps and trends in the field, which can be useful to guide further research.
    BibTeX:
    @inproceedings{PitangueiraMBA13,
      author = {Antônio Mauricio Pitangueira and Rita Suzana P. Maciel and Márcio de Oliveira Barros and Aline Santos Andrade},
      title = {A Systematic Review of Software Requirements Selection and Prioritization using SBSE Approaches},
      booktitle = {Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13)},
      publisher = {Springer},
      year = {2013},
      volume = {8084},
      pages = {188-208},
      address = {St. Petersburg, Russia},
      month = {24-26 August},
      doi = {http://dx.doi.org/10.1007/978-3-642-39742-4_15}
    }
    					
    2009.07.03 Shaukat Ali, Lionel C. Briand, Hadi Hemmati & Rajwinder Kaur Panesar-Walawege A Systematic Review of the Application and Empirical Investigation of Search-Based Test-Case Generation 2010 IEEE Transactions on Software Engineering, Vol. 36(6), pp. 742-762, Los Alamitos CA USA, November-December   Article Testing and Debugging
    Abstract: Metaheuristic search techniques have been extensively used to automate the process of generating test cases and thus providing solutions for a more cost-effective testing process. This approach to test automation, often referred to as "Search-based Software Testing" (SBST), has been used for a wide variety of test case generation purposes. Since SBST techniques are heuristic by nature, they must be empirically investigated in terms of how costly and effective they are at reaching their test objectives and whether they scale up to realistic development artifacts. However, approaches to empirically study SBST techniques have shown wide variation in the literature. This paper presents the results of a systematic, comprehensive review that aims at characterizing how empirical studies have been designed to investigate SBST cost-effectiveness and what empirical evidence is available in the literature regarding SBST cost-effectiveness and scalability. We also provide a framework that drives the data collection process of this systematic review and can be the starting point of guidelines on how SBST techniques can be empirically assessed. The intent is to aid future researchers doing empirical studies in SBST by providing an unbiased view of the body of empirical evidence and by guiding them in performing well designed empirical studies.
    BibTeX:
    @article{AliBHP10,
      author = {Shaukat Ali and Lionel C. Briand and Hadi Hemmati and Rajwinder Kaur Panesar-Walawege},
      title = {A Systematic Review of the Application and Empirical Investigation of Search-Based Test-Case Generation},
      journal = {IEEE Transactions on Software Engineering},
      publisher = {IEEE Computer Society},
      year = {2010},
      volume = {36},
      number = {6},
      pages = {742-762},
      address = {Los Alamitos, CA, USA},
      month = {November-December},
      doi = {http://dx.doi.org/10.1109/TSE.2009.52}
    }
    					
    2017.06.27 Thainá Mariani & Silvia Regina Vergilio A Systematic Review on Search-based Refactoring 2017 Information and Software Technology, Vol. 83, pp. 14-34, March   Article
    Abstract: Context: Finding the best sequence of refactorings to apply to a software artifact is an optimization problem that can be solved using search techniques, in the field called Search-Based Refactoring (SBR). In recent years, the field has gained importance, and many SBR approaches have appeared, attracting research interest.

    Objective: The objective of this paper is to provide an overview of existing SBR approaches, by presenting their common characteristics, and to identify trends and research opportunities.

    Method: A systematic review was conducted following a plan that includes the definition of research questions, selection criteria, a search string, and selection of search engines. 71 primary studies were selected, published in the last sixteen years. They were classified considering dimensions related to the main SBR elements, such as addressed artifacts, encoding, search technique, used metrics, available tools, and conducted evaluation.

    Results: Some results show that code is the most addressed artifact, and evolutionary algorithms are the most employed search technique. Furthermore, the generated solution is most often a sequence of refactorings. In this respect, the refactorings considered are usually the ones in Fowler's catalog. Some trends and opportunities for future research include the use of models as artifacts, the use of many objectives, the study of the effect of bad smells, and the use of hyper-heuristics.

    Conclusions: We have found many SBR approaches, most of them published recently. The approaches are presented, analyzed, and grouped following a classification scheme. The paper contributes to the SBR field by identifying a range of possibilities that serve as a basis to motivate future research.

    BibTeX:
    @article{MarianiV17,
      author = {Thainá Mariani and Silvia Regina Vergilio},
      title = {A Systematic Review on Search-based Refactoring},
      journal = {Information and Software Technology},
      year = {2017},
      volume = {83},
      pages = {14-34},
      month = {March},
      doi = {http://dx.doi.org/10.1016/j.infsof.2016.11.009}
    }
    					
    2011.06.07 Mikael Svahnberg, Tony Gorschek, Robert Feldt, Richard Torkar, Saad Bin Saleem & Muhammad Usman Shafique A Systematic Review on Strategic Release Planning Models 2010 Information and Software Technology, Vol. 52(3), pp. 237-248, March   Article Requirements/Specifications
    Abstract: Context: Strategic release planning (sometimes referred to as road-mapping) is an important phase of the requirements engineering process performed at product level. It is concerned with the selection and assignment of requirements to sequences of releases such that important technical and resource constraints are fulfilled. Objectives: In this study we investigate which strategic release planning models have been proposed, their degree of empirical validation, their factors for requirements selection, and whether they are intended for a bespoke or market-driven requirements engineering context. Methods: In this systematic review a number of article sources are used, including Compendex, Inspec, IEEE Xplore, ACM Digital Library, and Springer Link. Studies are selected after reading titles and abstracts to decide whether the articles are peer reviewed, and relevant to the subject. Results: Twenty-four strategic release planning models are found and mapped in relation to each other, and a taxonomy of requirements selection factors is constructed. Conclusions: We conclude that many models are related to each other and use similar techniques to address the release planning problem. We also conclude that several requirement selection factors are covered in the different models, but that many methods fail to address factors such as stakeholder value or internal value. Moreover, we conclude that there is a need for further empirical validation of the models in full-scale industry trials.
    BibTeX:
    @article{SvahnbergGFTSS10,
      author = {Mikael Svahnberg and Tony Gorschek and Robert Feldt and Richard Torkar and Saad Bin Saleem and Muhammad Usman Shafique},
      title = {A Systematic Review on Strategic Release Planning Models},
      journal = {Information and Software Technology},
      year = {2010},
      volume = {52},
      number = {3},
      pages = {237-248},
      month = {March},
      doi = {http://dx.doi.org/10.1016/j.infsof.2009.11.006}
    }
    					
    2012.08.22 Claire Le Goues, Michael Dewey-Vogt, Stephanie Forrest & Westley Weimer A Systematic Study of Automated Program Repair: Fixing 55 Out of 105 Bugs for $8 Each 2012 Proceedings of the 34th International Conference on Software Engineering (ICSE '12), pp. 3-13, Zurich Switzerland, 2-9 June   Inproceedings Testing and Debugging
    Abstract: There are more bugs in real-world programs than human programmers can realistically address. This paper evaluates two research questions: "What fraction of bugs can be repaired automatically?" and "How much does it cost to repair a bug automatically?" In previous work, we presented GenProg, which uses genetic programming to repair defects in off-the-shelf C programs. To answer these questions, we: (1) propose novel algorithmic improvements to GenProg that allow it to scale to large programs and find repairs 68% more often, (2) exploit GenProg's inherent parallelism using cloud computing resources to provide grounded, human-competitive cost measurements, and (3) generate a large, indicative benchmark set to use for systematic evaluations. We evaluate GenProg on 105 defects from 8 open-source programs totaling 5.1 million lines of code and involving 10,193 test cases. GenProg automatically repairs 55 of those 105 defects. To our knowledge, this evaluation is the largest available of its kind, and is often two orders of magnitude larger than previous work in terms of code or test suite size or defect count. Public cloud computing prices allow our 105 runs to be reproduced for $403; a successful repair completes in 96 minutes and costs $7.32, on average.
    BibTeX:
    @inproceedings{GouesDFW12,
      author = {Claire Le Goues and Michael Dewey-Vogt and Stephanie Forrest and Westley Weimer},
      title = {A Systematic Study of Automated Program Repair: Fixing 55 Out of 105 Bugs for $8 Each},
      booktitle = {Proceedings of the 34th International Conference on Software Engineering (ICSE '12)},
      publisher = {IEEE},
      year = {2012},
      pages = {3-13},
      address = {Zurich, Switzerland},
      month = {2-9 June},
      doi = {http://dx.doi.org/10.1109/ICSE.2012.6227211}
    }
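    For concreteness, here is a minimal sketch of the weighted test-case fitness that drives GenProg-style repair as the abstract describes it; the function names and weights below are our own illustrative assumptions, not the paper's code.
      # Illustrative sketch, not GenProg itself: a candidate patch is scored
      # by the weighted number of initially failing (negative) and initially
      # passing (positive) test cases it passes. The weights are hypothetical.
      def repair_fitness(passes, negative_tests, positive_tests,
                         w_neg=10.0, w_pos=1.0):
          neg_passed = sum(1 for t in negative_tests if passes(t))
          pos_passed = sum(1 for t in positive_tests if passes(t))
          return w_neg * neg_passed + w_pos * pos_passed  # higher is better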
    					
    2008.03.18 Eugenia Díaz, Javier Tuya, Raquel Blanco & José Javier Dolado A Tabu Search Algorithm for Structural Software Testing 2008 Computers & Operations Research, Vol. 35(10), pp. 3052-3072, October   Article Testing and Debugging
    Abstract: This paper presents a tabu search metaheuristic algorithm for the automatic generation of structural software tests. It is a novel work since tabu search is applied to the automation of the test generation task, whereas previous works have used other techniques such as genetic algorithms. The developed test generator has a cost function for intensifying the search and another for diversifying the search that is used when the intensification is not successful. It also combines the use of memory with a backtracking process to avoid getting stuck in local minima. Evaluation of the generator was performed using complex programs under test and large ranges for input variables. Results show that the developed generator is both effective and efficient.
    BibTeX:
    @article{DiazTBD08,
      author = {Eugenia Díaz and Javier Tuya and Raquel Blanco and José Javier Dolado},
      title = {A Tabu Search Algorithm for Structural Software Testing},
      journal = {Computers & Operations Research},
      year = {2008},
      volume = {35},
      number = {10},
      pages = {3052-3072},
      month = {October},
      doi = {http://dx.doi.org/10.1016/j.cor.2007.01.009}
    }
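    As a rough illustration of the search scheme the abstract describes, the following is a minimal tabu-search loop with a short-term memory and a switch from an intensifying to a diversifying cost function when progress stalls; all names, parameters and the stall heuristic are our own assumptions, not the paper's algorithm.
      def tabu_search(initial, neighbours, cost_intensify, cost_diversify,
                      tabu_size=50, stall_limit=100, max_iters=10000):
          current, best = initial, initial
          tabu = [initial]             # short-term memory of visited inputs
          cost = cost_intensify        # start by intensifying the search
          stalled = 0
          for _ in range(max_iters):
              candidates = [n for n in neighbours(current) if n not in tabu]
              if not candidates:
                  break                # neighbourhood exhausted
              current = min(candidates, key=cost)
              tabu.append(current)
              if len(tabu) > tabu_size:
                  tabu.pop(0)          # forget the oldest move
              if cost_intensify(current) < cost_intensify(best):
                  best, stalled = current, 0
              else:
                  stalled += 1
              if cost_intensify(best) == 0:
                  break                # target (e.g. a branch) covered
              if stalled >= stall_limit:
                  cost, stalled = cost_diversify, 0  # diversify the search
          return best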
    					
    2009.05.14 AbdulSalam Kalaji, Robert M. Hierons & Stephen Swift A Testability Transformation Approach for State-Based Programs 2009 Proceedings of the 1st International Symposium on Search Based Software Engineering (SSBSE '09), pp. 85-88, Cumberland Lodge Windsor UK, 13-15 May   Inproceedings Testing and Debugging
    Abstract: Search-based testing approaches are efficient in test data generation; however, they are likely to perform poorly when applied to programs with state variables. The problem arises when the target function includes guards that reference some of the program state variables whose values depend on previous function calls. Thus, merely considering the target function to derive test data is not sufficient. This paper introduces a testability transformation approach based on the analysis of control and data flow dependencies to bypass the state variable problem. It achieves this by eliminating state variables from guards and/or determining which functions to call in order to satisfy guards with state variables. A number of experiments demonstrate the value of the proposed approach.
    BibTeX:
    @inproceedings{KalajiHS09,
      author = {AbdulSalam Kalaji and Robert M. Hierons and Stephen Swift},
      title = {A Testability Transformation Approach for State-Based Programs},
      booktitle = {Proceedings of the 1st International Symposium on Search Based Software Engineering (SSBSE '09)},
      publisher = {IEEE},
      year = {2009},
      pages = {85-88},
      address = {Cumberland Lodge, Windsor, UK},
      month = {13-15 May},
      doi = {http://dx.doi.org/10.1109/SSBSE.2009.14}
    }
    					
    2007.12.02 K.F. Fischer A Test Case Selection Method for the Validation of Software Maintenance Modifications 1977 Proceedings of International Computer Software and Applications Conference (COMPSAC '77), pp. 421-426, Chicago USA, November   Inproceedings Testing and Debugging
    Abstract: The literature in the areas of software management and software engineering admits to a possible reduction in the reliability of software after modifications have been made. Validation of maintenance modifications is commonly referred to as retest, and has yet to be adequately resolved. The problem is how to efficiently select previously run test cases to be rerun on the software to assure no degradation of reliability. This paper develops several alternative retest philosophies and identifies a common operations research technique for solution. Detailed examples show how 0-1 integer programming can identify a minimum number of previously executed tests necessary to fully retest every affected program element at least once. Use of this model to determine proper selection of test cases can reduce the cost of software maintenance and increase confidence in the reliability of the code.
    BibTeX:
    @inproceedings{Fischer77,
      author = {K.F. Fischer},
      title = {A Test Case Selection Method for the Validation of Software Maintenance Modifications},
      booktitle = {Proceedings of International Computer Software and Applications Conference (COMPSAC '77)},
      year = {1977},
      pages = {421-426},
      address = {Chicago, USA},
      month = {November}
    }
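    The retest-selection model the abstract outlines is essentially a set-covering 0-1 integer program. In our notation (not the paper's), let x_j = 1 if previously executed test j is rerun and a_{ij} = 1 if test j exercises affected program element i; then:
      \min \sum_{j=1}^{n} x_j
      \quad \text{subject to} \quad
      \sum_{j=1}^{n} a_{ij} x_j \ge 1 \quad (i = 1, \dots, m),
      \qquad x_j \in \{0, 1\}.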
    					
    2008.07.27 Chang-ming Hsu A Test Data Evolution Strategy under Program Changes 2007 School: National Sun Yat-sen University, Department of Information Management, Taiwan, China, July   Mastersthesis Testing and Debugging
    Abstract: Since the cost of software testing has continuously accounted for a large proportion of the total cost of software development, automatic test data generation has become a hot topic in recent software testing research. These approaches attempt to reduce the cost of software testing by generating test data automatically, but they address only single-version programs, not programs that need re-testing after changes. On the other hand, regression testing research discusses how to re-test programs after changes, but not how to generate test data automatically. Therefore, we propose an automatic test data evolution strategy in this paper. We use regression testing methods to find the parts of the program that need re-testing, and then automatically evolve the test data with a hybrid genetic algorithm. According to the experimental results, our strategy has the same or better testing ability while needing less cost than the other strategies.
    BibTeX:
    @mastersthesis{Hsu07,
      author = {Chang-ming Hsu},
      title = {A Test Data Evolution Strategy under Program Changes},
      school = {National Sun Yat-sen University, Department of Information Management},
      year = {2007},
      address = {Taiwan, China},
      month = {July},
      url = {http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0723107-171116}
    }
    					
    2008.03.18 Mark Harman & Phil McMinn A Theoretical & Empirical Analysis of Evolutionary Testing and Hill Climbing for Structural Test Data Generation 2007 Proceedings of the International Symposium on Software Testing and Analysis (ISSTA '07), pp. 73-83, London England, 9-12 July   Inproceedings Testing and Debugging
    Abstract: Evolutionary testing has been widely studied as a technique for automating the process of test case generation. However, to date, there has been no theoretical examination of when and why it works. Furthermore, the empirical evidence for the effectiveness of evolutionary testing consists largely of small scale laboratory studies. This paper presents a first theoretical analysis of the scenarios in which evolutionary algorithms are suitable for structural test case generation. The theory is backed up by an empirical study that considers real world programs, the search spaces of which are several orders of magnitude larger than those previously considered.
    BibTeX:
    @inproceedings{HarmanM07,
      author = {Mark Harman and Phil McMinn},
      title = {A Theoretical & Empirical Analysis of Evolutionary Testing and Hill Climbing for Structural Test Data Generation},
      booktitle = {Proceedings of the International Symposium on Software Testing and Analysis (ISSTA '07)},
      publisher = {ACM},
      year = {2007},
      pages = {73-83},
      address = {London, England},
      month = {9-12 July},
      doi = {http://dx.doi.org/10.1145/1273463.1273475}
    }
    					
    2014.09.22 Andrea Arcuri A Theoretical and Empirical Analysis of the Role of Test Sequence Length in Software Testing for Structural Coverage 2012 IEEE Transactions on Software Engineering, Vol. 38(3), pp. 497-519, May/June   Article Testing and Debugging
    Abstract: In the presence of an internal state, often a sequence of function calls is required to test software. In fact, to cover a particular branch of the code, a sequence of previous function calls might be required to put the internal state in the appropriate configuration. Internal states are not only present in object-oriented software, but also in procedural software (e.g., static variables in C programs). In the literature, there are many techniques to test this type of software. However, to the best of our knowledge, the properties related to the choice of the length of these sequences have received only a little attention in the literature. In this paper, we analyze the role that the length plays in software testing, in particular branch coverage. We show that, on "difficult" software testing benchmarks, longer test sequences make their testing trivial. Hence, we argue that the choice of the length of the test sequences is very important in software testing. Theoretical analyses and empirical studies on widely used benchmarks and on industrial software are carried out to support our claims.
    BibTeX:
    @article{Arcuri12,
      author = {Andrea Arcuri},
      title = {A Theoretical and Empirical Analysis of the Role of Test Sequence Length in Software Testing for Structural Coverage},
      journal = {IEEE Transactions on Software Engineering},
      year = {2012},
      volume = {38},
      number = {3},
      pages = {497-519},
      month = {May/June},
      doi = {http://dx.doi.org/10.1109/TSE.2011.44}
    }
    					
    2009.07.26 Mark Harman & Phil McMinn A Theoretical and Empirical Study of Search Based Testing: Local, Global and Hybrid Search 2010 IEEE Transactions on Software Engineering, Vol. 36(2), pp. 226-247, March-April   Article Testing and Debugging
    Abstract: Search-based optimization techniques have been applied to structural software test data generation since 1992, with a recent upsurge in interest and activity within this area. However, despite the large number of recent studies on the applicability of different search-based optimization approaches, there has been very little theoretical analysis of the types of testing problem for which these techniques are well suited. There are also few empirical studies that present results for larger programs. This paper presents a theoretical exploration of the most widely studied approach, the global search technique embodied by Genetic Algorithms. It also presents results from a large empirical study that compares the behavior of both global and local search-based optimization on real-world programs. The results of this study reveal that there exist test data generation problems that suit each algorithm, thereby suggesting that a hybrid global-local search (a Memetic Algorithm) may be appropriate. The paper presents a Memetic Algorithm along with further empirical results studying its performance.
    BibTeX:
    @article{HarmanM10,
      author = {Mark Harman and Phil McMinn},
      title = {A Theoretical and Empirical Study of Search Based Testing: Local, Global and Hybrid Search},
      journal = {IEEE Transactions on Software Engineering},
      year = {2010},
      volume = {36},
      number = {2},
      pages = {226-247},
      month = {March-April},
      doi = {http://dx.doi.org/10.1109/TSE.2009.71}
    }
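    A skeletal illustration of the hybrid the abstract argues for, with global genetic search refined by a local hill climb on each offspring; the operator names and parameters are our assumptions, not the paper's Memetic Algorithm.
      import random

      # Sketch of a memetic algorithm for test data generation (minimising a
      # fitness such as branch distance). Assumes a population of at least four.
      def memetic(pop, fitness, crossover, mutate, local_search,
                  generations=100, elite=2):
          for _ in range(generations):
              pop.sort(key=fitness)                    # best individuals first
              next_pop = pop[:elite]                   # elitism
              while len(next_pop) < len(pop):
                  p1, p2 = random.sample(pop[:len(pop) // 2], 2)
                  child = mutate(crossover(p1, p2))
                  next_pop.append(local_search(child)) # Lamarckian refinement
              pop = next_pop
          return min(pop, key=fitness)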
    					
    2013.08.05 Joseph Kempka, Phil McMinn & Dirk Sudholt A Theoretical Runtime and Empirical Analysis of Different Alternating Variable Searches for Search-based Testing 2013 Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation (GECCO '13), pp. 1445-1452, Amsterdam The Netherlands, 6-10 July   Inproceedings Testing and Debugging
    Abstract: The Alternating Variable Method (AVM) has been shown to be a surprisingly effective and efficient means of generating branch-covering inputs for procedural programs. However, there has been little work that has sought to analyse the technique and further improve its performance. This paper proposes two new local searches that may be used in conjunction with the AVM, Geometric and Lattice Search. A theoretical runtime analysis shows that under certain conditions, the use of these searches is proved to outperform the original AVM. These theoretical results are confirmed by an empirical study with four programs, which shows that speed increases of over 50% are possible in practice.
    BibTeX:
    @inproceedings{KempkaMS13,
      author = {Joseph Kempka and Phil McMinn and Dirk Sudholt},
      title = {A Theoretical Runtime and Empirical Analysis of Different Alternating Variable Searches for Search-based Testing},
      booktitle = {Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation (GECCO '13)},
      publisher = {ACM},
      year = {2013},
      pages = {1445-1452},
      address = {Amsterdam, The Netherlands},
      month = {6-10 July},
      doi = {http://dx.doi.org/10.1145/2463372.2463549}
    }
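    For readers unfamiliar with the technique being analysed, this is our illustrative sketch of the classic Alternating Variable Method cycle (exploratory probes plus accelerating pattern moves); the paper's Geometric and Lattice searches refine the acceleration phase, and none of the names below come from the paper.
      # Minimises a fitness (e.g. branch distance) over a vector of integers.
      def avm(x, fitness, max_restarts=1000):
          x = list(x)
          for _ in range(max_restarts):
              improved = False
              for i in range(len(x)):
                  for direction in (-1, 1):           # exploratory +/-1 probes
                      step = direction
                      probe = list(x)
                      probe[i] += step
                      while fitness(probe) < fitness(x):
                          x = probe                   # accept the improvement
                          improved = True
                          step *= 2                   # pattern move: accelerate
                          probe = list(x)
                          probe[i] += step
              if not improved:                        # local optimum reached
                  break
          return x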
    					
    2017.06.27 Matthew Patrick, Ruairi Donnelly & Christopher A. Gilligan A Toolkit for Testing Stochastic Simulations against Statistical Oracles 2017 Proceedings of the 10th IEEE International Conference on Software Testing, Verification and Validation (ICST '17), pp. 448-453, Tokyo Japan, 13-17 March   Inproceedings
    Abstract: Stochastic simulations are developed and employed across many fields, to advise governmental policy decisions and direct future research. Faulty simulation software can have serious consequences, but its correctness is difficult to determine due to complexity and random behaviour. Stochastic simulations may output a different result each time they are run, whereas most testing techniques are designed for programs which (for a given set of inputs) always produce the same behaviour. In this paper, we introduce a new approach towards testing stochastic simulations using statistical oracles and transition probabilities. Our approach was implemented as a toolkit, which allows the frequency of state transitions to be tested, along with their final output distribution. We evaluated our toolkit on eight simulation programs from a variety of fields and found it can detect errors at least three times smaller (and in one case, over 1000 times smaller) than a conventional (tolerance threshold) approach.
    BibTeX:
    @inproceedings{PatrickDG17,
      author = {Matthew Patrick and Ruairi Donnelly and Christopher A. Gilligan},
      title = {A Toolkit for Testing Stochastic Simulations against Statistical Oracles},
      booktitle = {Proceedings of the 10th IEEE International Conference on Software Testing, Verification and Validation (ICST '17)},
      publisher = {IEEE},
      year = {2017},
      pages = {448-453},
      address = {Tokyo, Japan},
      month = {13-17 March},
      doi = {http://dx.doi.org/10.1109/ICST.2017.50}
    }
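    A minimal sketch, under our own assumptions rather than the toolkit's actual API, of how a statistical oracle for transition frequencies can work: run the simulation repeatedly, count the transitions taken, and compare observed frequencies against the expected probabilities with a chi-squared test.
      from collections import Counter
      from scipy.stats import chisquare

      def transitions_consistent(run_simulation, expected_probs,
                                 runs=1000, alpha=0.01):
          # Assumes run_simulation() yields the transitions taken in one run
          # and expected_probs maps every possible transition to a probability.
          counts = Counter()
          for _ in range(runs):
              for transition in run_simulation():
                  counts[transition] += 1
          total = sum(counts.values())
          keys = sorted(expected_probs)
          observed = [counts[k] for k in keys]
          expected = [expected_probs[k] * total for k in keys]
          _, p_value = chisquare(observed, f_exp=expected)
          return p_value >= alpha  # True: no significant deviation detected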
    					
    2016.02.27 Ning Zhang, Yuhua Huang & Xinye Cai A Two-Phase External Archive Guided Multiobjective Evolutionary Algorithm for the Software Next Release Problem 2015 Proceedings of the 10th International Conference on Bio-Inspired Computing - Theories and Applications (BIC-TA '15), pp. 664-675, Hefei China, 25-28 September   Inproceedings
    Abstract: Decomposition-based multiobjective evolutionary algorithms have been widely used to solve multiobjective optimization problems (MOPs), by decomposing a MOP into several single-objective subproblems and optimizing them simultaneously. This paper proposes an adaptive mechanism to decide the evolutionary stages and dynamically allocate computational resources using the information extracted from the external archive. Different from the previously proposed EAG-MOEA/D [2], the information extracted from the external archive is explicitly divided into two categories: convergence information and diversity information. The proposed algorithm is compared with five well-known algorithms on the Software Next Release Problem. Experimental results show that our proposed algorithm performs better than the other algorithms.
    BibTeX:
    @inproceedings{ZhangHC15,
      author = {Ning Zhang and Yuhua Huang and Xinye Cai},
      title = {A Two-Phase External Archive Guided Multiobjective Evolutionary Algorithm for the Software Next Release Problem},
      booktitle = {Proceedings of the 10th International Conference on Bio-Inspired Computing - Theories and Applications (BIC-TA '15)},
      publisher = {Springer},
      year = {2015},
      pages = {664-675},
      address = {Hefei, China},
      month = {25-28 September},
      doi = {http://dx.doi.org/10.1007/978-3-662-49014-3_59}
    }
    					
    2008.07.28 Myra B. Cohen, Charles Joseph Colbourn & Alan C.H. Ling Augmenting Simulated Annealing to Build Interaction Test Suites 2003 Proceedings of the 14th International Symposium on Software Reliability Engineering (ISSRE '03), pp. 394-405, Denver Colorado USA, 17-21 November   Inproceedings Testing and Debugging
    Abstract: Component based software development is prone to unexpected interaction faults. The goal is to test as many potential interactions as is feasible within time and budget constraints. Two combinatorial objects, the orthogonal array and the covering array, can be used to generate test suites that provide a guarantee for coverage of all t-sets of component interactions in the case when the testing of all interactions is not possible. Methods for construction of these types of test suites have focused on two main areas. The first is finding new algebraic constructions that produce smaller test suites. The second is refining computational search algorithms to find smaller test suites more quickly. In this paper we explore one method for constructing covering arrays of strength three that combines algebraic constructions with computational search. This method leverages the computational efficiency and optimality of size obtained through algebraic constructions while benefiting from the generality of a heuristic search. We present a few examples of specific constructions and provide some new bounds for some strength three covering arrays.
    BibTeX:
    @inproceedings{CohenCL03,
      author = {Myra B. Cohen and Charles Joseph Colbourn and Alan C. H. Ling},
      title = {Augmenting Simulated Annealing to Build Interaction Test Suites},
      booktitle = {Proceedings of the 14th International Symposium on Software Reliability Engineering (ISSRE '03)},
      publisher = {IEEE},
      year = {2003},
      pages = {394-405},
      address = {Denver, Colorado, USA},
      month = {17-21 November},
      doi = {http://dx.doi.org/10.1109/ISSRE.2003.1251061}
    }
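    A hedged sketch (our names, not the paper's) of the simulated-annealing core used to search for covering arrays: cost(a) counts the t-way interactions still uncovered by array a, random_move changes one symbol, and worsening moves are accepted with probability exp(-delta/temperature), which decays geometrically.
      import math, random

      def anneal(array, cost, random_move, temp=1.0, cooling=0.999,
                 temp_min=1e-4):
          best = array
          while temp > temp_min and cost(best) > 0:
              candidate = random_move(array)
              delta = cost(candidate) - cost(array)
              if delta <= 0 or random.random() < math.exp(-delta / temp):
                  array = candidate                # accept the move
                  if cost(array) < cost(best):
                      best = array                 # track the best array seen
              temp *= cooling                      # geometric cooling schedule
          return best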
    					
    2014.09.22 Marwa Shousha, Lionel C. Briand & Yvan Labiche A UML/MARTE Model Analysis Method for Uncovering Scenarios Leading to Starvation and Deadlocks in Concurrent Systems 2012 IEEE Transactions on Software Engineering, Vol. 38(2), pp. 354-374, March/April   Article
    Abstract: Concurrency problems such as starvation and deadlocks should be identified early in the design process. As larger, more complex concurrent systems are being developed, this is made increasingly difficult. We propose here a general approach based on the analysis of specialized design models expressed in the Unified Modeling Language (UML) that uses a specifically designed genetic algorithm to detect concurrency problems. Though the current paper addresses deadlocks and starvation, we will show how the approach can be easily tailored to other concurrency issues. Our main motivations are 1) to devise solutions that are applicable in the context of the UML design of concurrent systems without requiring additional modeling and 2) to use a search technique to achieve scalable automation in terms of concurrency problem detection. To achieve the first objective, we show how all relevant concurrency information is extracted from systems' UML models that comply with the UML Modeling and Analysis of Real-Time and Embedded Systems (MARTE) profile. For the second objective, a tailored genetic algorithm is used to search for execution sequences exhibiting deadlock or starvation problems. Scalability in terms of problem detection is achieved by showing that the detection rates of our approach are, in general, high and are not strongly affected by large increases in the size of complex search spaces.
    BibTeX:
    @article{ShoushaBL12,
      author = {Marwa Shousha and Lionel C. Briand and Yvan Labiche},
      title = {A UML/MARTE Model Analysis Method for Uncovering Scenarios Leading to Starvation and Deadlocks in Concurrent Systems},
      journal = {IEEE Transactions on Software Engineering},
      year = {2012},
      volume = {38},
      number = {2},
      pages = {354-374},
      month = {March/April},
      doi = {http://dx.doi.org/10.1109/TSE.2010.107}
    }
    					
    2007.12.02 Xiyang Liu, Hehui Liu, Bin Wang, Ping Chen & Xiyao Cai A Unified Fitness Function Calculation Rule for Flag Conditions to Improve Evolutionary Testing 2005 Proceedings of the 20th IEEE/ACM International Conference on Automated Software Engineering (ASE '05), pp. 337-341, Long Beach CA USA, 7-11 November   Inproceedings Testing and Debugging
    Abstract: Evolutionary testing (ET), which automatically generates good-quality test data, is an effective technique based on evolutionary algorithms. However, the presence of flag variables can make it degenerate to random testing in structural testing. Much previous work has addressed this problem, but all of it can be characterized as program-specific. In this paper, a flag cost function is introduced as the main component of the fitness function, whose value changes with the variation of the flag problem. Based on this, a unified fitness calculation rule for flag conditions is proposed. Experiments on programs with flag problems once considered intractable in previous work, and on the Traffic Alert and Collision Avoidance System (TCAS) code, showed the effectiveness of our unified approach.
    BibTeX:
    @inproceedings{LiuLWCC05,
      author = {Xiyang Liu and Hehui Liu and Bin Wang and Ping Chen and Xiyao Cai},
      title = {A Unified Fitness Function Calculation Rule for Flag Conditions to Improve Evolutionary Testing},
      booktitle = {Proceedings of the 20th IEEE/ACM International Conference on Automated Software Engineering (ASE '05)},
      publisher = {ACM},
      year = {2005},
      pages = {337-341},
      address = {Long Beach, CA, USA},
      month = {7-11 November},
      doi = {http://dx.doi.org/10.1145/1101908.1101964}
    }
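    For context, here is a standard branch-distance fitness of the kind the paper's unified flag rule builds on; this is our illustrative version, not the paper's rule, and K is the conventional constant penalty added when the branch is not taken.
      K = 1.0

      def branch_distance(a, b, op):
          # Distance is 0 when the desired condition already holds, and grows
          # with how far the operands are from satisfying it.
          if op == '==':
              return 0.0 if a == b else abs(a - b) + K
          if op == '!=':
              return 0.0 if a != b else K
          if op == '<':
              return 0.0 if a < b else (a - b) + K
          if op == '<=':
              return 0.0 if a <= b else (a - b) + K
          raise ValueError('unsupported operator: %s' % op)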
    					
    2014.09.02 Kiran Lakhotia, Mark Harman & Hamilton Gross AUSTIN: An Open Source Tool for Search Based Software Testing of C Programs 2013 Information and Software Technology, Vol. 55(1), pp. 112-125, January   Article Testing and Debugging
    Abstract: Context: Despite the large number of publications on Search-Based Software Testing (SBST), there remain few publicly available tools. This paper introduces AUSTIN, a publicly available open source SBST tool for the C language. The paper is an extension of previous work [1]. It includes a new hill climb algorithm implemented in AUSTIN and an investigation into the effectiveness and efficiency of different pointer handling techniques implemented by AUSTIN's test data generation algorithms. Objective: To evaluate the different search algorithms implemented within AUSTIN on open source systems with respect to effectiveness and efficiency in achieving branch coverage. Further, to compare AUSTIN against a non-publicly available, state-of-the-art Evolutionary Testing Framework (ETF). Method: First, we use example functions from open source benchmarks as well as common data structure implementations to check if the decision procedure for pointer inputs, introduced in this paper, differs in terms of effectiveness and efficiency compared to a simpler alternative that generates random memory graphs. A second empirical study formulates two alternate hypotheses regarding the effectiveness and efficiency of AUSTIN compared to the ETF. These hypotheses are tested using a paired Wilcoxon test. Results and Conclusion: The first study highlights some practical problems with the decision procedure for pointer inputs described in this paper. In particular, if the code under test contains insufficient guard statements to enforce constraints over pointers, then using a constraint solver for pointer inputs may be suboptimal compared to a method that generates random memory graphs. The programs used in the second study do not require any constraint solving for pointer inputs and consist of eight non-trivial, real-world C functions drawn from three embedded automotive software modules. For these functions, AUSTIN is competitive compared to the ETF, achieving an equal or higher branch coverage for six of the functions. In addition, for functions where AUSTIN's branch coverage is equal or higher, AUSTIN is more efficient than the ETF.
    BibTeX:
    @article{LakhotiaHG13,
      author = {Kiran Lakhotia and Mark Harman and Hamilton Gross},
      title = {AUSTIN: An Open Source Tool for Search Based Software Testing of C Programs},
      journal = {Information and Software Technology},
      year = {2013},
      volume = {55},
      number = {1},
      pages = {112-125},
      month = {January},
      doi = {http://dx.doi.org/10.1016/j.infsof.2012.03.009}
    }
    					
    2010.09.08 Kiran Lakhotia, Mark Harman & Hamilton Gross AUSTIN: A tool for Search Based Software Testing for the C Language and its Evaluation on Deployed Automotive Systems 2010 Proceedings of the 2nd International Symposium on Search Based Software Engineering (SSBSE '10), pp. 101-110, Benevento Italy, 7-9 September   Inproceedings Testing and Debugging
    Abstract: Despite the large number of publications on Search-Based Software Testing (SBST), there remain few publicly available tools. This paper introduces AUSTIN, a publicly available SBST tool for the C language. The paper validates the tool with an empirical study of its effectiveness and efficiency in achieving branch coverage compared to random testing and the Evolutionary Testing Framework (ETF), which is used in-house by Daimler and others for Evolutionary Testing. The programs used in the study consist of eight non-trivial, real-world C functions drawn from three embedded automotive software modules. For the majority of the functions, AUSTIN is at least as effective (in terms of achieved branch coverage) as the ETF, and is considerably more efficient.
    BibTeX:
    @inproceedings{LakhotiaHG10,
      author = {Kiran Lakhotia and Mark Harman and Hamilton Gross},
      title = {AUSTIN: A tool for Search Based Software Testing for the C Language and its Evaluation on Deployed Automotive Systems},
      booktitle = {Proceedings of the 2nd International Symposium on Search Based Software Engineering (SSBSE '10)},
      publisher = {IEEE},
      year = {2010},
      pages = {101-110},
      address = {Benevento, Italy},
      month = {7-9 September},
      doi = {http://dx.doi.org/10.1109/SSBSE.2010.21}
    }
    					
    2011.05.19 Fawad Qayum Automated Assistance for Search-Based Refactoring Using Unfolding of Graph Transformation Systems 2010 Proceedings of the 5th International Conference on Graph Transformations (ICGT '10), Vol. 6372, pp. 407-409, Enschede The Netherlands, 27 September - 2 October   Inproceedings Distribution and Maintenance
    Abstract: Refactoring has emerged as a successful technique to enhance the internal structure of software by a series of small, behaviour-preserving transformations [4]. However, due to complex dependencies and conflicts between the individual refactorings, it is difficult to choose the best sequence of refactoring steps in order to effect a specific improvement. In the case of large systems the situation becomes acute because existing tools offer only limited support for their automated application [8]. Therefore, search-based approaches have been suggested in order to provide automation in discovering appropriate refactoring sequences [6,11]. The idea is to see the design process as a combinatorial optimization problem, attempting to derive the best solution (with respect to a quality measure called objective function) from a given initial design [9].
    BibTeX:
    @inproceedings{Qayum10,
      author = {Fawad Qayum},
      title = {Automated Assistance for Search-Based Refactoring Using Unfolding of Graph Transformation Systems},
      booktitle = {Proceedings of the 5th International Conference on Graph Transformations (ICGT '10)},
      publisher = {Springer},
      year = {2010},
      volume = {6372},
      pages = {407-409},
      address = {Enschede, The Netherlands},
      month = {27 September - 2 October},
      doi = {http://dx.doi.org/10.1007/978-3-642-15928-2_34}
    }
    					
    2010.03.30 Mark O'Keeffe & Mel Ó Cinnéide Automated Design Improvement by Example 2007 Proceedings of the 2007 Conference on New Trends in Software Methodologies, Tools and Techniques (SoMeT '07), pp. 315-329, Rome Italy, 7-9 November   Inproceedings Distribution and Maintenance
    Abstract: The high cost of software maintenance could potentially be reduced by automatically improving the design of object-oriented programs without altering their behaviour. We have constructed a software tool capable of refactoring object-oriented programs to conform more closely to design quality models based on a set of metrics, by formulating the task as a search problem in the space of alternative designs. However, no consensus exists on a single quality model for object-oriented design, since the definition of 'quality' can depend on the purpose, pedigree and perception of the maintenance programmer. We therefore demonstrate here the flexibility of our approach by automatically refactoring several Java programs to conform with quality models based on the metric values of example programs. Results show that an object-oriented program can be automatically refactored to reduce its dissimilarity in terms of a set of design metrics to another program having some desirable trait, such as ease of maintenance.
    BibTeX:
    @inproceedings{OKeeffeO07b,
      author = {Mark O'Keeffe and Mel Ó Cinnéide},
      title = {Automated Design Improvement by Example},
      booktitle = {Proceedings of the 2007 Conference on New Trends in Software Methodologies, Tools and Techniques (SoMeT '07)},
      publisher = {IOS Press},
      year = {2007},
      pages = {315-329},
      address = {Rome, Italy},
      month = {7-9 November},
      url = {http://portal.acm.org/citation.cfm?id=1566997}
    }
    					
    2016.02.02 Saemundur O. Haraldsson & John R. Woodward Automated Design of Algorithms and Genetic Improvement: Contrast and Commonalities 2014 Proceedings of the Companion Publication of the 2014 Annual Conference on Genetic and Evolutionary Computation (GECCO '14), pp. 1373-1380, Vancouver Canada, 12-16 July   Inproceedings
    Abstract: Automated Design of Algorithms (ADA) and Genetic Improvement (GI) are two relatively young fields of research that have been receiving more attention in recent years. Both methodologies can improve programs using evolutionary search methods and successfully produce human-competitive programs. ADA and GI are used for improving functional properties such as quality of solution, and non-functional properties, e.g. speed, memory, and energy consumption. Of the two, only GI has been used to fix bugs, probably because it is applied globally on the whole source code while ADA typically replaces a function or a method locally. While GI is applied directly to the source code, ADA works ex-situ, i.e. as a separate process from the program it is improving. Although the methodologies overlap in many ways, they differ on some fundamentals, and for further progress to be made researchers from both disciplines should be aware of each other's work.
    BibTeX:
    @inproceedings{HaraldssonW14,
      author = {Saemundur O. Haraldsson and John R. Woodward},
      title = {Automated Design of Algorithms and Genetic Improvement: Contrast and Commonalities},
      booktitle = {Proceedings of the Companion Publication of the 2014 Annual Conference on Genetic and Evolutionary Computation (GECCO '14)},
      publisher = {ACM},
      year = {2014},
      pages = {1373-1380},
      address = {Vancouver, Canada},
      month = {12-16 July},
      doi = {http://dx.doi.org/10.1145/2598394.2609874}
    }
    					
    2015.09.28 Muzammil Shahbaz, Phil McMinn & Mark Stevenson Automated Discovery of Valid Test Strings from the Web Using Dynamic Regular Expressions Collation and Natural Language Processing 2012 Proceedings of the 12th International Conference on Quality Software (QSIC '12), pp. 79-88, Xi'an China, 27-29 August   Inproceedings Testing and Debugging
    Abstract: Classic approaches to test input generation -- such as dynamic symbolic execution and search-based testing -- are commonly driven by a test adequacy criterion such as branch coverage. However, there is no guarantee that these techniques will generate meaningful and realistic inputs, particularly in the case of string test data. Also, these techniques have trouble handling path conditions involving string operations that are inherently complex in nature. This paper presents a novel approach of finding valid values by collating suitable regular expressions dynamically that validate the format of the string values, such as an email address. The regular expressions are found using web searches that are driven by the identifiers appearing in the program, for example a string parameter called emailAddress. The identifier names are processed through natural language processing techniques to tailor the web queries. Once a regular expression has been found, a secondary web search is performed for strings matching the regular expression. An empirical study is performed on case studies involving string input validation code from 10 open source projects. Compared to other approaches, the precision of generating valid strings is significantly improved by employing regular expressions and natural language processing techniques.
    BibTeX:
    @inproceedings{ShahbazMS12,
      author = {Muzammil Shahbaz and Phil McMinn and Mark Stevenson},
      title = {Automated Discovery of Valid Test Strings from the Web Using Dynamic Regular Expressions Collation and Natural Language Processing},
      booktitle = {Proceedings of the 12th International Conference on Quality Software (QSIC '12)},
      publisher = {IEEE},
      year = {2012},
      pages = {79-88},
      address = {Xi'an, China},
      month = {27-29 August},
      doi = {http://dx.doi.org/10.1109/QSIC.2012.15}
    }
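    A minimal sketch (assumptions ours, not the paper's implementation) of the final validation step the abstract describes: once a regular expression for a field such as an email address has been collated from the web, keep only the harvested candidate strings that match it in full.
      import re

      def filter_valid_strings(candidates, pattern):
          # pattern is a collated regex, e.g. one found for "emailAddress"
          regex = re.compile(pattern)
          return [s for s in candidates if regex.fullmatch(s)]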
    					
    2015.09.29 Jie Zhang, Rui Yang, Zhenyu Chen, Zhihong Zhao & Baowen Xu Automated EFSM-based Test Case Generation with Scatter Search 2012 Proceedings of the 7th International Workshop on Automation of Software Test (AST '12), pp. 76-82, Zurich Switzerland, 2-3 June   Inproceedings Testing and Debugging
    Abstract: Extended Finite State Machines (EFSMs) are widely used to represent system specifications. Automated test data generation based on EFSM models is still a challenging task due to the complexity of transition paths. In this paper, we introduce a new approach to generate test cases automatically for given transition paths of an EFSM model. An executable EFSM model is used to provide run-time feedback information as the fitness function, and a scatter search algorithm is then used to search for test data that can trigger the given transition paths. Based on the executable model, the expected outputs associated with the test data are also collected to construct test oracles automatically. Finally, test data (inputs) and test oracles (expected outputs) are combined into test cases. The experimental results show that our approach can effectively generate test cases to exercise the feasible transition paths.
    BibTeX:
    @inproceedings{ZhangYCZX12,
      author = {Jie Zhang and Rui Yang and Zhenyu Chen and Zhihong Zhao and Baowen Xu},
      title = {Automated EFSM-based Test Case Generation with Scatter Search},
      booktitle = {Proceedings of the 7th International Workshop on Automation of Software Test (AST '12)},
      publisher = {IEEE},
      year = {2012},
      pages = {76-82},
      address = {Zurich, Switzerland},
      month = {2-3 June},
      doi = {http://dx.doi.org/10.1109/IWAST.2012.6228994}
    }
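    For orientation, a bare-bones scatter-search skeleton of the kind the abstract applies (our illustration only; the combination and improvement operators and all parameters are hypothetical): keep a small reference set of high-quality solutions, combine pairs, improve the result, and replace the worst member when the trial is better.
      def scatter_search(seed_pop, fitness, combine, improve,
                         ref_size=10, iterations=100):
          # Reference set: the best ref_size solutions seen so far (minimising).
          ref_set = sorted(seed_pop, key=fitness)[:ref_size]
          for _ in range(iterations):
              pairs = [(a, b) for i, a in enumerate(ref_set)
                              for b in ref_set[i + 1:]]
              for a, b in pairs:
                  trial = improve(combine(a, b))     # combine, then refine
                  worst = max(ref_set, key=fitness)
                  if fitness(trial) < fitness(worst) and trial not in ref_set:
                      ref_set[ref_set.index(worst)] = trial
          return min(ref_set, key=fitness)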
    					
    2014.05.28 Sergio Segura, José A. Parejo, Robert M. Hierons, David Benavides & Antonio Ruiz-Cortés Automated Generation of Computationally Hard Feature Models using Evolutionary Algorithms 2014 Expert Systems with Applications, Vol. 41(8), pp. 3975-3992, June   Article
    Abstract: A feature model is a compact representation of the products of a software product line. The automated extraction of information from feature models is a thriving topic involving numerous analysis operations, techniques and tools. Performance evaluations in this domain mainly rely on the use of random feature models. However, these only provide a rough idea of the behaviour of the tools with average problems and are not sufficient to reveal their real strengths and weaknesses. In this article, we propose to model the problem of finding computationally hard feature models as an optimization problem and we solve it using a novel evolutionary algorithm for optimized feature models (ETHOM). Given a tool and an analysis operation, ETHOM generates input models of a predefined size maximizing aspects such as the execution time or the memory consumption of the tool when performing the operation over the model. This allows users and developers to know the performance of tools in pessimistic cases, providing a better idea of their real power and revealing performance bugs. Experiments using ETHOM on a number of analyses and tools have successfully identified models producing much longer execution times and higher memory consumption than those obtained with random models of identical or even larger size.
    BibTeX:
    @article{SeguraPHBR14,
      author = {Sergio Segura and José A. Parejo and Robert M. Hierons and David Benavides and Antonio Ruiz-Cortés},
      title = {Automated Generation of Computationally Hard Feature Models using Evolutionary Algorithms},
      journal = {Expert Systems with Applications},
      year = {2014},
      volume = {41},
      number = {8},
      pages = {3975-3992},
      month = {June},
      doi = {http://dx.doi.org/10.1016/j.eswa.2013.12.028}
    }
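    Sketch (Python): the core idea as we read it, namely that fitness is the measured execution time of the analysis tool on a candidate model of predefined size, shown with a stand-in analysis function. The encoding and genetic operators are illustrative guesses, not the paper's.
    import random, time

    def analysis_under_test(model):
        # Stand-in for a real feature-model analysis tool; its running
        # time grows with the candidate model (purely illustrative work).
        n = sum(model)
        return sum(i * i for i in range(n * 500))

    def fitness(model):
        # ETHOM-style fitness: wall-clock time the tool needs on the model.
        start = time.perf_counter()
        analysis_under_test(model)
        return time.perf_counter() - start

    random.seed(0)
    SIZE = 8                                  # models have a predefined size
    pop = [[random.randint(0, 9) for _ in range(SIZE)] for _ in range(20)]
    for _ in range(30):
        pop.sort(key=fitness, reverse=True)   # keep the slowest-to-analyse
        parents, children = pop[:10], []
        for _ in range(10):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, SIZE)
            child = a[:cut] + b[cut:]                            # crossover
            child[random.randrange(SIZE)] = random.randint(0, 9)  # mutation
            children.append(child)
        pop = parents + children
    print(max(pop, key=fitness))   # hardest generated model found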
    					
    2014.05.27 Paolo Tonella, Cu Duy Nguyen, Alessandro Marchetto, Kiran Lakhotia & Mark Harman Automated Generation of State Abstraction Functions using Data Invariant Inference 2013 Proceedings of the 8th International Workshop on Automation of Software Test (AST '13), pp. 75-81, San Francisco CA USA, 18-19 May   Inproceedings Testing and Debugging
    Abstract: Model-based testing relies on the availability of models that can be defined manually or by means of model inference techniques. To generate models that include meaningful state abstractions, model inference requires a set of abstraction functions as input. However, their specification is difficult and involves substantial manual effort. In this paper, we investigate a technique to automatically infer both the abstraction functions necessary to perform state abstraction and the finite state models based on such abstractions. The proposed approach uses a combination of clustering, invariant inference and genetic algorithms to optimize the abstraction functions along three quality attributes that characterize the resulting models: size, determinism and infeasibility of the admitted behaviors. Preliminary results on a small e-commerce application are extremely encouraging because the automatically produced models include the set of manually defined gold-standard models. (A sketch of the multi-attribute fitness follows the BibTeX entry below.)
    BibTeX:
    @inproceedings{TonellaNMLH13,
      author = {Paolo Tonella and Cu Duy Nguyen and Alessandro Marchetto and Kiran Lakhotia and Mark Harman},
      title = {Automated Generation of State Abstraction Functions using Data Invariant Inference},
      booktitle = {Proceedings of the 8th International Workshop on Automation of Software Test (AST '13)},
      publisher = {IEEE},
      year = {2013},
      pages = {75-81},
      address = {San Francisco, CA, USA},
      month = {18-19 May},
      doi = {http://dx.doi.org/10.1109/IWAST.2013.6595795}
    }
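    Sketch (Python): one plausible scalarisation of the three model-quality attributes the paper optimises (size, determinism, infeasibility of admitted behaviours). The attribute names, counts and weights are assumptions for illustration; the paper's actual GA evolves abstraction functions, not these counts directly.
    def model_fitness(model, w_size=1.0, w_det=1.0, w_inf=1.0):
        # Penalise large, nondeterministic models that admit infeasible
        # behaviours; higher fitness is better.
        size_penalty = len(model["states"])
        nondet = model["nondeterministic_transitions"]
        infeasible = model["infeasible_behaviours"]
        return -(w_size * size_penalty + w_det * nondet + w_inf * infeasible)

    candidate = {"states": ["s0", "s1", "s2"],
                 "nondeterministic_transitions": 1,
                 "infeasible_behaviours": 2}
    print(model_fitness(candidate))   # -6.0 for this candidate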
    					
    2015.12.09 Chandra Prakash Vudatha, Sastry KR Jammalamadaka, Sateesh Nalliboena & Bala Krishna Kamesh Duvvuri Automated Generation of Test Cases from Output Domain and Critical Regions of Embedded Systems using Genetic Algorithms 2011 Proceedings of the 2nd National Conference on Emerging Trends and Applications in Computer Science (NCETACS '11), pp. 1-6, Shillong India, 4-5 March   Inproceedings Testing and Debugging
    Abstract: A primary issue in black-box testing is how to generate adequate test cases from the input domain of the system under test on the basis of the user's requirement specification. However, for some types of systems, including embedded systems, developing test cases from the output domain is more suitable than developing them from the input domain, especially when the output domain is smaller. Exhaustive testing of embedded systems in their critical regions is important, as embedded systems must essentially be fail-safe. The critical regions of the input space of an embedded system can be pre-identified and supplied as seeds. In this paper, the authors present an Automated Test Case Generator (ATCG) that uses genetic algorithms (GAs) to automate the generation of test cases from the output domain and the critical regions of an embedded system. The approach is applied to a pilot project, `Temperature Monitoring and Controlling of Nuclear Reactor System' (TMCNRS), which is an embedded system developed using a modified Cleanroom Software Engineering methodology. The ATCG generates test cases which are useful for conducting pseudo-exhaustive testing to detect single, double and several multimode faults in the system. The generator considers most combinations of outputs and finds the corresponding inputs while optimizing the number of test cases generated. (A sketch of output-targeted search follows the BibTeX entry below.)
    BibTeX:
    @inproceedings{VudathaJND11,
      author = {Chandra Prakash Vudatha and Sastry KR Jammalamadaka and Sateesh Nalliboena and Bala Krishna Kamesh Duvvuri},
      title = {Automated Generation of Test Cases from Output Domain and Critical Regions of Embedded Systems using Genetic Algorithms},
      booktitle = {Proceedings of the 2nd National Conference on Emerging Trends and Applications in Computer Science (NCETACS '11)},
      publisher = {IEEE},
      year = {2011},
      pages = {1-6},
      address = {Shillong, India},
      month = {4-5 March},
      doi = {http://dx.doi.org/10.1109/NCETACS.2011.5751411}
    }
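    Sketch (Python): a hypothetical GA-style loop searching, in the spirit of the paper, for inputs that produce a chosen output in a critical region. The system under test, the target output and the operators are invented for illustration.
    import random

    def system_under_test(x, y):
        # Stand-in for the embedded system: maps two inputs to one output.
        return 2 * x - y

    TARGET = 120   # an output value inside a (hypothetical) critical region

    def fitness(ind):
        # Distance from the critical output; 0 means the output is covered.
        return abs(system_under_test(*ind) - TARGET)

    random.seed(3)
    pop = [(random.randint(0, 100), random.randint(0, 100)) for _ in range(30)]
    for _ in range(300):
        pop.sort(key=fitness)
        if fitness(pop[0]) == 0:
            break
        a, b = pop[0], pop[1]
        child = (a[0], b[1])                           # one-point crossover
        child = tuple(min(100, max(0, g + random.randint(-3, 3)))
                      for g in child)                  # bounded mutation
        pop[-1] = child                                # replace the worst
    print(pop[0], "->", system_under_test(*pop[0]))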
    					
    2009.04.07 Ajay M. Joshi, Lieven Eeckhout, Lizy K. John & Ciji Isen Automated Microprocessor Stressmark Generation 2008 Proceedings of the 14th IEEE International Symposium on High Performance Computer Architecture (HPCA '08), pp. 229-239, Salt Lake City UT USA, 16-20 February   Inproceedings
    Abstract: Estimating the maximum power and thermal characteristics of a processor is essential for designing its power delivery system, packaging, cooling, and power/thermal management schemes. Typical benchmark suites used in performance evaluation do not stress the processor to its limit though, and current practice in industry is to develop artificial benchmarks that are specifically written to generate maximum processor (component) activity. However, manually developing and tuning so-called stressmarks is extremely tedious and time-consuming while requiring an intimate understanding of the processor. A synthetic program that can be tuned to produce a variety of benchmark characteristics would significantly help in addressing this problem by enabling the automatic exploration of the large temperature and power design space. This paper demonstrates that with a suitable choice of only 40 hardware-independent program characteristics related to the instruction mix, instruction-level parallelism, control flow behavior, and memory access patterns, it is possible to generate a synthetic benchmark whose performance relates to that of general-purpose and commercial applications. Leveraging this abstract workload modeling approach, we propose StressMaker, a framework that uses machine learning for the automated generation of stressmarks. A comparison with an exhaustive exploration of a large power design space demonstrates that StressMaker is very effective in automatically generating stressmarks in a limited amount of time.
    BibTeX:
    @inproceedings{JoshiEJI08,
      author = {Ajay M. Joshi and Lieven Eeckhout and Lizy K. John and Ciji Isen},
      title = {Automated Microprocessor Stressmark Generation},
      booktitle = {Proceedings of the 14th IEEE International Symposium on High Performance Computer Architecture (HPCA '08)},
      publisher = {IEEE},
      year = {2008},
      pages = {229-239},
      address = {Salt Lake City, UT, USA},
      month = {16-20 February},
      doi = {http://dx.doi.org/10.1109/HPCA.2008.4658642}
    }
    					
    2009.07.29 Raluca Lefticaru, Florentin Ipate & Cristina Tudose Automated Model Design using Genetic Algorithms and Model Checking 2009 Proceedings of the 4th Balkan Conference in Informatics (BCI '09), pp. 79-84, Thessaloniki Greece, 17-19 September   Inproceedings Software/Program Verification
    Abstract: In recent years there has been growing interest in applying metaheuristic search algorithms in model checking. On the other hand, model checking has been used far less in other software engineering activities, such as model design and software testing. In this paper we propose an automated model design strategy that integrates genetic algorithms (used for model generation) with model checking (used to evaluate the fitness, which takes into account the satisfied and unsatisfied specifications). Genetic programming is the process of evolving computer programs, using a fitness value determined by the program's ability to perform a given computational task. This evaluation is based on the output produced by the program for a set of training input samples. The consequence is that the evolved program can function well for the sample set used for training, but there is no guarantee that the program will behave properly for every possible input. Instead of training samples, in this paper we use a model checker, which verifies whether the generated model satisfies the specifications. This approach is empirically evaluated for the generation of finite state-based models. Furthermore, the fitness function previously proposed in the literature, which takes into account only the number of unsatisfied specifications, presents plateaux and so does not offer good guidance to the search. This paper proposes and evaluates the performance of a number of new fitness functions which, by also taking into account the counterexamples provided by the model checker, improve the success rate of the genetic algorithm. (A sketch of a counterexample-aware fitness follows the BibTeX entry below.)
    BibTeX:
    @inproceedings{LefticaruIT09,
      author = {Raluca Lefticaru and Florentin Ipate and Cristina Tudose},
      title = {Automated Model Design using Genetic Algorithms and Model Checking},
      booktitle = {Proceedings of the 4th Balkan Conference in Informatics (BCI '09)},
      publisher = {IEEE},
      year = {2009},
      pages = {79-84},
      address = {Thessaloniki, Greece},
      month = {17-19 September},
      doi = {http://dx.doi.org/10.1109/BCI.2009.15}
    }
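    Sketch (Python): a fitness in the spirit described, starting from the count of unsatisfied specifications and granting partial credit for longer counterexamples (a longer counterexample hints the model is closer to satisfying the property), which smooths the plateaux of a purely count-based fitness. The exact functions in the paper differ; this is only the shape of the idea.
    def mc_fitness(unsatisfied, counterexample_lengths, max_len=50):
        # Lower is better.  Each violation earns partial credit strictly
        # below 1, so the fitness still ranks by violation count first.
        score = float(unsatisfied)
        for length in counterexample_lengths:
            score -= min(length, max_len) / (max_len + 1)
        return score

    # Two models violating two specifications each: the second produces
    # longer counterexamples, so it ranks as the more promising candidate.
    print(mc_fitness(2, [3, 5]))     # ~1.84
    print(mc_fitness(2, [30, 40]))   # ~0.63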
    					
    2013.06.28 Reza Matinnejad, Shiva Nejati, Lionel Briand, Thomas Bruckmann & Claude Poull Automated Model-in-the-Loop Testing of Continuous Controllers using Search 2013 Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13), Vol. 8084, pp. 141-157, St. Petersburg Russia, 24-26 August   Inproceedings Testing and Debugging
    Abstract: The number and the complexity of software components embedded in today's vehicles are rapidly increasing. A large group of these components monitor and control the operating conditions of physical devices (e.g., components controlling engines, brakes, and airbags). These controllers are known as continuous controllers. In this paper, we study testing of continuous controllers at the Model-in-the-Loop (MiL) level, where both the controller and the environment are represented by models connected in a closed feedback-loop system. We identify a set of common requirements characterizing the desired behavior of continuous controllers, and develop a search-based technique to automatically generate test cases for these requirements. We evaluated our approach by applying it to a real automotive air compressor module. Our experience shows that our approach automatically generates several test cases for which the MiL-level simulations indicate potential violations of the system requirements. Further, not only does our approach generate better test cases faster than random test case generation, it also achieves better results than test scenarios devised by domain experts. (A sketch of a simulation-based objective follows the BibTeX entry below.)
    BibTeX:
    @inproceedings{MatinnejadNBBP13,
      author = {Reza Matinnejad and Shiva Nejati and Lionel Briand and Thomas Bruckmann and Claude Poull},
      title = {Automated Model-in-the-Loop Testing of Continuous Controllers using Search},
      booktitle = {Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13)},
      publisher = {Springer},
      year = {2013},
      volume = {8084},
      pages = {141-157},
      address = {St. Petersburg, Russia},
      month = {24-26 August},
      doi = {http://dx.doi.org/10.1007/978-3-642-39742-4_12}
    }
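    Sketch (Python): the shape of search-based MiL testing, namely simulating the closed loop for a candidate stimulus and using a requirement-oriented measure (here, overshoot above the setpoint) as the objective to maximise. The plant, controller and numbers are toys, not the paper's automotive models.
    def simulate_step(setpoint, kp=8.0, damping=0.5, steps=400, dt=0.02):
        # Toy second-order plant under proportional control, standing in
        # for the MiL simulation of a continuous controller.
        y = v = 0.0
        trace = []
        for _ in range(steps):
            u = kp * (setpoint - y)        # controller output
            v += dt * (u - damping * v)    # integrate plant acceleration
            y += dt * v                    # integrate plant velocity
            trace.append(y)
        return trace

    def objective(setpoint):
        # Requirement-driven objective: worst overshoot above the setpoint.
        return max(y - setpoint for y in simulate_step(setpoint))

    # A search algorithm would maximise `objective` over the stimulus
    # space; here we only scan a few candidate setpoints.
    print(max((objective(s), s) for s in (0.5, 1.0, 1.5, 2.0)))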
    					
    2011.05.19 Mark Harman Automated Patching Techniques: The Fix Is In: Technical Perspective 2010 Communications of the ACM, Vol. 53(5), pp. 108-108, May   Article Testing and Debugging
    Abstract: no abstract
    BibTeX:
    @article{Harman10c,
      author = {Mark Harman},
      title = {Automated Patching Techniques: The Fix Is In: Technical Perspective},
      journal = {Communications of the ACM},
      year = {2010},
      volume = {53},
      number = {5},
      pages = {108-108},
      month = {May},
      doi = {http://dx.doi.org/10.1145/1735223.1735248}
    }
    					
    2007.12.02 Nigel Tracey, John A. Clark & Keith Mander Automated Program Flaw Finding using Simulated Annealing 1998 Proceedings of the 1998 ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA '98), pp. 73-81, Clearwater Beach Florida USA, 2-4 March   Inproceedings Testing and Debugging
    Abstract: One of the major costs in a software project is the construction of test data. This paper outlines a generalised test-data generation framework based on optimisation techniques. The framework can incorporate a number of testing criteria, for both functional and non-functional properties. Application of the optimisation framework to testing specification failures and exception conditions is illustrated. The results of a number of small case studies are presented and show the efficiency and effectiveness of this dynamic, optimisation-based approach to generating test data. (A simulated-annealing sketch follows the BibTeX entry below.)
    BibTeX:
    @inproceedings{TraceyCM98b,
      author = {Nigel Tracey and John A. Clark and Keith Mander},
      title = {Automated Program Flaw Finding using Simulated Annealing},
      booktitle = {Proceedings of the 1998 ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA '98)},
      publisher = {ACM},
      year = {1998},
      pages = {73-81},
      address = {Clearwater Beach, Florida, USA},
      month = {2-4 March},
      doi = {http://dx.doi.org/10.1145/271771.271792}
    }
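    Sketch (Python): a bare simulated-annealing loop minimising a "distance to flaw" cost, where zero cost means an input violating the (hypothetical) safety condition has been found. The cost function, the system stand-in and the cooling schedule are illustrative assumptions.
    import math, random

    def cost(x):
        # Distance to violating the hypothetical safety condition
        # "output stays below 100"; 0 means a flaw-revealing input.
        output = x * x - 20 * x            # stand-in for the system under test
        return max(0.0, 100 - output)

    random.seed(7)
    x, temp = 0.0, 100.0
    while temp > 0.01 and cost(x) > 0:
        neighbour = x + random.uniform(-5, 5)
        delta = cost(neighbour) - cost(x)
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            x = neighbour                  # accept better (or sometimes worse)
        temp *= 0.99                       # geometric cooling schedule
    print(x, cost(x))                      # cost 0: a flaw-revealing input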
    					
    2011.07.26 Eric Schulte, Stephanie Forrest & Westley Weimer Automated Program Repair through the Evolution of Assembly Code 2010 Proceedings of the 25th IEEE/ACM International Conference on Automated Software Engineering (ASE '10), pp. 313-316, Antwerp Belgium, 20-24 September   Inproceedings Testing and Debugging
    Abstract: A method is described for automatically repairing legacy software at the assembly code level using evolutionary computation. The technique is demonstrated on Java byte code and x86 assembly programs, showing how to find program variations that correct defects while retaining desired behaviour. Test cases are used to demonstrate the defect and define the required functionality. The paper explores the advantages of assembly-level repair over earlier work at the source code level; the ability to repair programs written in many different languages; and the ability to repair bugs that were previously intractable. The paper reports experimental results showing reasonable performance of assembly-language repair even on non-trivial programs. (A sketch of the mutation-and-test loop follows the BibTeX entry below.)
    BibTeX:
    @inproceedings{SchulteFW10,
      author = {Eric Schulte and Stephanie Forrest and Westley Weimer},
      title = {Automated Program Repair through the Evolution of Assembly Code},
      booktitle = {Proceedings of the 25th IEEE/ACM International Conference on Automated Software Engineering (ASE '10)},
      publisher = {ACM},
      year = {2010},
      pages = {313-316},
      address = {Antwerp, Belgium},
      month = {20-24 September},
      url = {http://www.cs.unm.edu/~eschulte/data/ase2010-asm-preprint.pdf},
      doi = {http://dx.doi.org/10.1145/1858996.1859059}
    }
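    Sketch (Python): the evolutionary-repair loop over a list-of-instructions representation, with copy, delete and swap mutations and fitness measured as the number of passing tests. The pseudo-assembly interpreter and the seeded bug are invented; the paper works on real Java byte code and x86 assembly.
    import random

    def run(program, arg):
        # Tiny interpreter for a hypothetical two-opcode assembly.
        acc = arg
        for op, val in program:
            if op == "add":
                acc += val
            elif op == "mul":
                acc *= val
        return acc

    TESTS = [(1, 7), (2, 9), (3, 11)]      # (input, expected) pairs: 2x + 5

    def fitness(program):
        return sum(run(program, i) == o for i, o in TESTS)

    def mutate(program):
        p = list(program)
        op = random.choice(["delete", "copy", "swap"])
        i, j = random.randrange(len(p)), random.randrange(len(p))
        if op == "delete" and len(p) > 1:
            del p[i]
        elif op == "copy":
            p.insert(i, p[j])
        else:
            p[i], p[j] = p[j], p[i]
        return p

    random.seed(5)
    buggy = [("mul", 2), ("add", 5), ("add", 5)]   # one "add 5" too many
    best = buggy
    for _ in range(500):
        cand = mutate(best)
        if fitness(cand) >= fitness(best):
            best = cand
    print(best, fitness(best))   # a variant passing all three tests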
    					
    2014.09.17 Mustafa Bozkurt Automated Realistic Test Input Generation and Cost Reduction in Service-centric System Testing 2013 , June School: University College London   Phdthesis Testing and Debugging
    Abstract: Service-centric System Testing (ScST) is more challenging than testing traditional software due to the complexity of service technologies and the limitations imposed by the SOA environment. One of the most important problems in ScST is realistic test data generation. Realistic test data is often generated manually or drawn from an existing source, so it is hard to automate and laborious to generate. Another limitation that makes ScST challenging is the cost associated with invoking services during the testing process. This thesis aims to provide solutions to these two problems: automated realistic input generation and cost reduction in ScST. To address automation in realistic test data generation, the concept of Service-centric Test Data Generation (ScTDG) is presented, in which existing services are used as realistic data sources. ScTDG minimises the need for tester input and dependence on existing data sources by automatically generating service compositions that can generate the required test data. In experimental analysis, our approach achieved between 93% and 100% success rates in generating realistic data, while state-of-the-art automated test data generation achieved only between 2% and 34%. The thesis addresses cost concerns at the test data generation level by enabling data source selection in ScTDG. Source selection in ScTDG has many dimensions, such as cost, reliability and availability. This thesis formulates this problem as an optimisation problem and presents a multi-objective characterisation of service selection in ScTDG, aiming to reduce the cost of test data generation. A cost-aware Pareto-optimal test suite minimisation approach addressing testing cost concerns during test execution is also presented. The approach adapts traditional multi-objective minimisation approaches to the ScST domain by formulating ScST concerns, such as invocation cost and test case reliability. In experimental analysis, the approach achieved reductions of between 69% and 98.6% in the monetary cost of service invocations during testing. (A Pareto-dominance sketch follows the BibTeX entry below.)
    BibTeX:
    @phdthesis{Bozkurt13b,
      author = {Mustafa Bozkurt},
      title = {Automated Realistic Test Input Generation and Cost Reduction in Service-centric System Testing},
      school = {University College London},
      year = {2013},
      month = {June},
      url = {http://discovery.ucl.ac.uk/1400300/1/m.bozkurt_thesis_final.pdf}
    }
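    Sketch (Python): the Pareto machinery underlying cost-aware test suite minimisation, with candidate sub-suites scored on coverage and (negated) invocation cost, and the non-dominated front extracted. The suites and numbers are invented.
    def dominates(a, b):
        # a dominates b if it is no worse in every objective and strictly
        # better in at least one (objectives oriented so higher is better).
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))

    # Hypothetical sub-suites scored as (coverage %, -monetary cost).
    suites = {"S1": (95, -40), "S2": (90, -25), "S3": (70, -10), "S4": (60, -30)}
    front = [n for n, s in suites.items()
             if not any(dominates(t, s) for m, t in suites.items() if m != n)]
    print(front)   # S4 is dominated by S3; S1, S2, S3 trade coverage for cost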
    					
    2012.03.08 Frank Hermans Automated Reengineering using Evolutionary Coupling 2011 , August School: Eindhoven University of Technology   Mastersthesis Design Tools and Techniques
    Abstract: As software systems are changed in order to add, remove or modify functionality, the quality of the modular design of the code degrades. In order to improve the code structure again, the source code needs to be reengineered. However, reengineering is labor-intensive work, and thus it is desirable to automate this process. Currently available research on automated reengineering has at least one of three problems. First, a lot of research focuses only on automating a single step of the reengineering process (that is, problem detection, or finding an improved design). Secondly, existing approaches often change the package structure, making it difficult for developers familiar with the original structure to understand the new structure. Finally, the known techniques use metrics which have been shown, in another study, to have problems in the way they measure coupling and cohesion. To solve these problems, a new approach to automated reengineering is proposed. Other studies have shown that evolutionary coupling is capable of detecting relations between code entities based on the change history of the code. Based on evolutionary coupling, we define metrics capable of quantifying the modularization quality of a software system. A case study shows that these metrics are successful in measuring modularization quality. Furthermore, an optimization technique is proposed which uses a genetic algorithm to improve the code structure by optimizing the evolutionary coupling metrics. The genetic algorithm uses multi-objective optimization to simultaneously reduce the number of changes made to the code, ensuring some form of code stability. Experiments performed to validate the optimization strategy show promising results: the proposed technique is capable of improving the code structure, and the results show that multi-objective optimization is preferable to single-objective optimization. As care needs to be taken when reengineering software, in order to ensure its behavior is not changed, further research into automated reengineering is required. For this purpose, a platform has been developed for conducting research into automated reengineering; this platform is also used to execute the experiments of the validation studies performed in this thesis. (A co-change mining sketch follows the BibTeX entry below.)
    BibTeX:
    @mastersthesis{Hermans11,
      author = {Frank Hermans},
      title = {Automated Reengineering using Evolutionary Coupling},
      school = {Eindhoven University of Technology},
      year = {2011},
      month = {August},
      url = {http://alexandria.tue.nl/extra1/afstversl/wsk-i/hermans2011.pdf}
    }
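    Sketch (Python): the raw material of the thesis, evolutionary coupling mined from a change history by counting how often two entities are modified in the same commit. The commits below are invented; the thesis builds modularisation-quality metrics on top of such counts.
    from itertools import combinations
    from collections import Counter

    # Toy change history: each commit lists the files modified together.
    commits = [
        {"Order.java", "Invoice.java"},
        {"Order.java", "Invoice.java", "Customer.java"},
        {"Customer.java", "Address.java"},
        {"Order.java", "Invoice.java"},
    ]

    # Evolutionary coupling: co-change frequency per pair of entities.
    # High-count pairs that sit in different modules hint at a misplaced
    # entity and so at a candidate restructuring.
    coupling = Counter()
    for commit in commits:
        for pair in combinations(sorted(commit), 2):
            coupling[pair] += 1
    for pair, n in coupling.most_common(3):
        print(pair, n)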
    					
    2011.05.19 Mel Ó Cinnéide, Dermot Boyle & Iman Hemati Moghadam Automated Refactoring for Testability 2011 ICST Workshop on Refactoring and Testing, pp. 437-443, Berlin Germany, 21-25 March   Inproceedings Distribution and Maintenance
    Abstract: Current software practice places a strong emphasis on unit testing, to the extent that the amount of test code produced on a project can exceed the amount of actual application code required. This illustrates the importance of testability as a feature of software. In this paper we investigate whether it is possible to improve a program's testability using an automated refactoring approach. We conduct a quasi-experiment where we create a small application that scores poorly on a proven cohesion metric, LSCC. Using our automated refactoring platform, Code-Imp, this application is automatically refactored using the LSCC metric to guide the search for better solutions. To evaluate the results, a number of industrial software engineers were asked to write test cases for the application both before and after refactoring and to compare the relative difficulty involved. The results were interesting though inconclusive, and suggest that further work is required. (An LSCC sketch follows the BibTeX entry below.)
    BibTeX:
    @inproceedings{CinneideBM11,
      author = {Mel Ó Cinnéide and Dermot Boyle and Iman Hemati Moghadam},
      title = {Automated Refactoring for Testability},
      booktitle = {ICST Workshop on Refactoring and Testing},
      publisher = {IEEE},
      year = {2011},
      pages = {437-443},
      address = {Berlin, Germany},
      month = {21-25 March},
      doi = {http://dx.doi.org/10.1109/ICSTW.2011.23}
    }
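    Sketch (Python): the LSCC cohesion metric as we understand its published formulation (cohesion as the degree to which a class's methods share attribute references), of the kind Code-Imp uses to guide the search. Treat the formula and edge cases as a best-effort reconstruction and consult the metric's definition before relying on it.
    def lscc(attr_refs, k):
        # attr_refs[j] is the number of the class's k methods that
        # reference attribute j; result is in [0, 1], higher = more cohesive.
        l = len(attr_refs)
        if l == 0 and k > 1:
            return 0.0
        if k <= 1:
            return 1.0
        return sum(x * (x - 1) for x in attr_refs) / (l * k * (k - 1))

    # 4 methods; one attribute used by 3 of them, another by only 1.
    print(lscc([3, 1], k=4))   # 0.25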
    					
    2012.07.23 Iman Hemati Moghadam & Mel Ó Cinnéide Automated Refactoring using Design Differencing 2012 Proceedings of the 16th European Conference on Software Maintenance and Reengineering (CSMR '12), pp. 43-52, Szeged Hungary, 27-30 March   Inproceedings Distribution and Maintenance
    Abstract: Software systems that undergo repeated addition of functionality commonly suffer a loss of quality in their underlying designs, termed design erosion. This leads to the maintenance of a system becoming increasingly difficult and time-consuming during its lifetime. Refactoring can reduce the effects of design erosion, but this process requires significant effort on the part of the maintenance programmer. Research into automated refactoring has had some success in reducing the effort involved; however, source code refactoring uses refactoring steps that are too small to effect major design changes. Design-level refactoring is also possible, but these approaches operate on design models and do little to help in the subsequent refactoring of the source code. In this paper, we present a novel refactoring approach that refactors a program based both on its desired design and on its source code. The maintenance programmer first creates a desired design (a UML class model) for the software based on the current software design and their understanding of how it may be required to evolve. Then, the source code is refactored using the desired design as a target. The resulting source code has the same behavior as the original, but its design more closely correlates with the desired design. We conducted an investigation using several open source Java applications to determine how precisely it is possible to refactor program source code to a desired design. Our findings were that the original program could be refactored to the desired design with an accuracy of over 90%, demonstrating the viability of automated refactoring using design differencing. (A design-distance sketch follows the BibTeX entry below.)
    BibTeX:
    @inproceedings{MoghadamC12,
      author = {Iman Hemati Moghadam and Mel Ó Cinnéide},
      title = {Automated Refactoring using Design Differencing},
      booktitle = {Proceedings of the 16th European Conference on Software Maintenance and Reengineering (CSMR '12)},
      publisher = {IEEE},
      year = {2012},
      pages = {43-52},
      address = {Szeged, Hungary},
      month = {27-30 March},
      doi = {http://dx.doi.org/10.1109/CSMR.2012.15}
    }
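    Sketch (Python): a toy "design differencing" distance, the number of methods whose class placement differs between the current and the desired design, of the kind a search-based refactorer could minimise. The representation is an assumption; the paper operates on UML class models and real refactorings.
    def design_distance(current, desired):
        # Both designs are given as a mapping method -> owning class;
        # count the methods that are not yet where the desired design
        # wants them.  Zero means the target design has been reached.
        return sum(current[m] != desired[m] for m in desired)

    current = {"pay": "Order", "bill": "Order", "ship": "Depot"}
    desired = {"pay": "Account", "bill": "Order", "ship": "Depot"}
    print(design_distance(current, desired))   # 1: move 'pay' to Account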
    					
    2016.02.12 Eric Schulte, Jonathan DiLorenzo, Westley Weimer & Stephanie Forrest Automated Repair of Binary and Assembly Programs for Cooperating Embedded Devices 2013 Proceedings of the 18th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS '13), pp. 317-328, Houston USA, 16-20 March   Inproceedings
    Abstract: We present a method for automatically repairing arbitrary software defects in embedded systems, which have limited memory, disk and CPU capacities, but exist in great numbers. We extend evolutionary computation (EC) algorithms that search for valid repairs at the source code level to assembly and ELF format binaries, compensating for limited system resources with several algorithmic innovations. Our method does not require access to the source code or build toolchain of the software under repair, does not require program instrumentation, specialized execution environments or virtual machines, and needs no prior knowledge of the bug type. We repair defects in ARM and x86 assembly as well as ELF binaries, observing decreases of 86% in memory and 95% in disk requirements, with a 62% decrease in repair time, compared to similar source-level techniques. These advances allow repairs previously possible only with C source code to be applied to any ARM or x86 assembly or ELF executable. Efficiency gains are achieved by introducing stochastic fault localization, with much lower overhead than comparable deterministic methods, and low-level program representations. When distributed over multiple devices, our algorithm finds repairs faster than predicted by naive parallelism. Four devices using our approach are five times more efficient than a single device because of our collaboration model. The algorithm is implemented on Nokia N900 smartphones, with inter-phone communication fitting in 900 bytes sent in 7 SMS text messages per device per repair on average. (A sketch of trace-weighted mutation sampling follows the BibTeX entry below.)
    BibTeX:
    @inproceedings{SchulteDWF13,
      author = {Eric Schulte and Jonathan DiLorenzo and Westley Weimer and Stephanie Forrest},
      title = {Automated Repair of Binary and Assembly Programs for Cooperating Embedded Devices},
      booktitle = {Proceedings of the 18th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS '13)},
      publisher = {ACM},
      year = {2013},
      pages = {317-328},
      address = {Houston, USA},
      month = {16-20 March},
      doi = {http://dx.doi.org/10.1145/2451116.2451151}
    }
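    Sketch (Python): one way to read "stochastic fault localization": sample mutation sites with probability proportional to (smoothed) execution counts observed on the failing run, so code exercised by the failure is mutated more often, at far lower bookkeeping cost than exact localisation. The counts, addresses and smoothing are invented for illustration.
    import random

    # Hypothetical per-instruction sample counts from the failing run.
    trace_counts = {0: 1, 1: 9, 2: 42, 3: 42, 4: 0}   # addr -> samples

    def pick_mutation_site(counts, smoothing=1.0):
        # Smoothing keeps unexecuted addresses reachable with low odds.
        addrs = list(counts)
        weights = [counts[a] + smoothing for a in addrs]
        return random.choices(addrs, weights=weights, k=1)[0]

    random.seed(11)
    picks = [pick_mutation_site(trace_counts) for _ in range(1000)]
    print({a: picks.count(a) for a in sorted(trace_counts)})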
    					
    2011.03.07 Sukhee Lee, Gigon Bae, Heung Seok Chae, Doo-Hwan Bae & Yong Rae Kwon Automated Scheduling for Clone-based Refactoring using a Competent GA 2011 Software: Practice and Experience, Vol. 41(5), pp. 521-550, April   Article Distribution and Maintenance
    Abstract: Refactoring is a widely accepted technique to improve software quality by restructuring its design without changing its behavior. In general, a sequence of refactorings needs to be applied until the quality of the code is improved satisfactorily. In this case, the final design after refactoring can vary with the application order of the refactorings, thereby producing different quality improvements. Therefore, it is necessary to determine a proper refactoring schedule to obtain as many benefits as possible. However, there is little research on the problem of generating appropriate schedules to maximize quality improvement. In this paper, we propose an approach to automatically determine an appropriate schedule to maximize quality improvement through refactoring. We first detect code clones that are suitable for refactoring and generate the most beneficial refactoring schedule to remove them. It is straightforward to select the best schedule from an exhaustive enumeration, but the scheduling problem is NP-hard, so enumeration quickly becomes intractable as the number of available refactorings increases. We apply a genetic algorithm (GA) to generate the best refactoring schedule within a reasonable time to cope with this problem. We compare the GA-based approach with manual scheduling, greedy heuristic-based, and exhaustive approaches for four open systems. The results show that the proposed GA-based approach generates more beneficial schedules than the others. (A permutation-GA sketch follows the BibTeX entry below.)
    BibTeX:
    @article{LeeBCBK11,
      author = {Sukhee Lee and Gigon Bae and Heung Seok Chae and Doo-Hwan Bae and Yong Rae Kwon},
      title = {Automated Scheduling for Clone-based Refactoring using a Competent GA},
      journal = {Software: Practice and Experience},
      year = {2011},
      volume = {41},
      number = {5},
      pages = {521-550},
      month = {April},
      doi = {http://dx.doi.org/10.1002/spe.1031}
    }
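    Sketch (Python): a permutation-encoded GA for refactoring scheduling. Order matters because earlier refactorings can enable or block later ones, modelled here by an invented pairwise benefit matrix; order crossover keeps children valid permutations. Everything below is illustrative, not the paper's encoding.
    import random

    # benefit[i][j]: extra quality gained when refactoring i is applied
    # before refactoring j (hypothetical interaction values).
    benefit = [[0, 3, 1], [0, 0, 4], [2, 0, 0]]

    def schedule_value(order):
        return sum(benefit[a][b] for idx, a in enumerate(order)
                   for b in order[idx + 1:])

    def ox_child(p1, p2):
        # Order crossover: keep a slice of p1, fill the rest from p2.
        i, j = sorted(random.sample(range(len(p1)), 2))
        hole = p1[i:j]
        rest = [g for g in p2 if g not in hole]
        return rest[:i] + hole + rest[i:]

    random.seed(2)
    pop = [random.sample(range(3), 3) for _ in range(8)]
    for _ in range(50):
        pop.sort(key=schedule_value, reverse=True)
        pop[-1] = ox_child(pop[0], pop[1])   # replace the worst schedule
    print(pop[0], schedule_value(pop[0]))    # best order found and its value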
    					
    2009.04.29 Vittorio Cortellessa, Fabrizio Marinelli & Pasqualina Potena Automated Selection of Software Components Based on Cost/Reliability Tradeoff 2006 Proceedings of the 3rd European Workshop on Software Architecture (EWSA '06), Vol. 4344, pp. 66-81, Nantes France, 4-5 September   Inproceedings Requirements/Specifications
    Abstract: Functional criteria often drive the component selection in the assembly of a software system. Minimal-distance strategies are frequently adopted to select the components that require minimal adaptation effort. This type of approach hides the non-functional characteristics of components from developers, although these may play a crucial role in meeting the system specifications. In this paper we introduce the CODER framework, based on an optimization model, that supports "build-or-buy" decisions in selecting components. The selection criterion is based on cost minimization of the whole assembly, subject to constraints on system reliability and delivery time. The CODER framework is composed of a UML CASE tool, a model builder, and a model solver. The output of CODER indicates the components to buy and the ones to build, and the amount of testing to be performed on the latter in order to achieve the desired level of reliability. (A build-or-buy sketch follows the BibTeX entry below.)
    BibTeX:
    @inproceedings{CortellessaMP06,
      author = {Vittorio Cortellessa and Fabrizio Marinelli and Pasqualina Potena},
      title = {Automated Selection of Software Components Based on Cost/Reliability Tradeoff},
      booktitle = {Proceedings of the 3rd European Workshop on Software Architecture (EWSA '06)},
      publisher = {Springer},
      year = {2006},
      volume = {4344},
      pages = {66-81},
      address = {Nantes, France},
      month = {4-5 September},
      doi = {http://dx.doi.org/10.1007/11966104_6}
    }
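    Sketch (Python): the build-or-buy decision as a tiny optimisation, minimising total cost over buy/build choices subject to a floor on series-system reliability (the product of component reliabilities). The costs, reliabilities and brute-force solver are illustrative; CODER solves a proper optimisation model with delivery-time constraints as well.
    from itertools import product

    # Hypothetical components: (buy_cost, buy_rel, build_cost, build_rel),
    # where build_rel is the reliability reachable after in-house testing.
    components = [(100, 0.99, 60, 0.95),
                  (80,  0.98, 90, 0.99),
                  (120, 0.97, 70, 0.96)]
    FLOOR = 0.90   # required system reliability

    best = None
    for choice in product(("buy", "build"), repeat=len(components)):
        cost, rel = 0, 1.0
        for (bc, br, mc, mr), c in zip(components, choice):
            cost += bc if c == "buy" else mc
            rel *= br if c == "buy" else mr
        if rel >= FLOOR and (best is None or cost < best[0]):
            best = (cost, rel, choice)
    print(best)   # cheapest assembly meeting the reliability floor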
    					
    2009.03.31 Bogdan Korel Automated Software Test Data Generation 1990 IEEE Transactions on Software Engineering, Vol. 16(8), pp. 870-879, August   Article Testing and Debugging
    Abstract: An alternative approach to test-data generation based on actual execution of the program under test, function-minimization methods and dynamic data-flow analysis is presented. Test data are developed for the program using actual values of input variables. When the program is executed, the program execution flow is monitored. If during program execution an undesirable execution flow is observed, then function-minimization search algorithms are used to automatically locate the values of input variables for which the selected path is traversed. In addition, dynamic data-flow analysis is used to determine those input variables responsible for the undesirable program behavior, significantly increasing the speed of the search process. The approach to generating test data is then extended to programs with dynamic data structures, and a search method based on dynamic data-flow analysis and backtracking is presented. In the approach described, values of array indexes and pointers are known at each step of program execution; this information is used to overcome difficulties of array and pointer handling. (A branch-distance and alternating-variable sketch follows the BibTeX entry below.)
    BibTeX:
    @article{Korel90,
      author = {Bogdan Korel},
      title = {Automated Software Test Data Generation},
      journal = {IEEE Transactions on Software Engineering},
      year = {1990},
      volume = {16},
      number = {8},
      pages = {870-879},
      month = {August},
      doi = {http://dx.doi.org/10.1109/32.57624}
    }
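    Sketch (Python): the two ingredients usually associated with this line of work: a branch function that turns a predicate into a minimisable distance, and an alternating-variable style direct search that adjusts one input variable at a time. Simplified to exploratory moves only; the paper also uses larger moves and dynamic data-flow analysis to pick which variables to search.
    def branch_distance(x, y):
        # 0 exactly when the desired branch 'x == y' would be taken;
        # otherwise a graded distance the search can descend.
        return abs(x - y)

    def avm(f, point, step=1, max_iters=10000):
        # Alternating variable method: try +/- step on each variable in
        # turn, accept strict improvements, stop at a (local) minimum.
        point = list(point)
        for _ in range(max_iters):
            if f(*point) == 0:
                break
            moved = False
            for i in range(len(point)):
                for d in (step, -step):
                    trial = list(point)
                    trial[i] += d
                    if f(*trial) < f(*point):
                        point, moved = trial, True
                        break
                if moved:
                    break
            if not moved:
                break
        return point

    print(avm(branch_distance, [10, 3]))   # -> [3, 3] (branch taken)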
    					
    2008.07.28 Christoph C. Michael & Gary E. McGraw Automated Software Test Data Generation for Complex Programs 1998 Proceedings of the 13th IEEE International Conference on Automated software engineering (ASE '98), pp. 136-146, Honolulu Hawaii USA, 13-16 October   Inproceedings Testing and Debugging
    Abstract: We report on GADGET, a new software test generation system that uses combinatorial optimization to obtain condition/decision coverage of C/C++ programs. The GADGET system is fully automatic and supports all C/C++ language constructs. This allows us to generate tests for programs more complex than those previously reported in the literature. We address a number of issues that are encountered when automatically generating tests for complex software systems. These issues have not been discussed in earlier work on test-data generation, which concentrates on small programs (most often single functions) written in restricted programming languages.
    BibTeX:
    @inproceedings{MichaelM98,
      author = {Christoph C. Michael and Gary E. McGraw},
      title = {Automated Software Test Data Generation for Complex Programs},
      booktitle = {Proceedings of the 13th IEEE International Conference on Automated software engineering (ASE '98)},
      publisher = {IEEE},
      year = {1998},
      pages = {136-146},
      address = {Honolulu, Hawaii, USA},
      month = {13-16 October},
      doi = {http://dx.doi.org/10.1109/ASE.1998.732605}
    }
    					
    2008.07.21 Min Pei, Erik D. Goodman, Zongyi Gao & Kaixiang Zhong Automated Software Test Data Generation using a Genetic Algorithm