Repository of Publications on Search Based Software Engineering

This page is maintained by Yuanyuan Zhang, CREST, Department of Computer Science, University College London, London, UK.

Email: yuanyuan.zhang [AT] ucl.ac.uk

Information about SBSE authors can be found in Who's Who.


If you would like to cite the repository website, please use the following BibTeX entry:

@misc{yzmham:sbse-repository,
  author = {Yuanyuan Zhang and Mark Harman and Afshin Mansouri},
  title = {The {SBSE} Repository: {A} repository and analysis of authors and research articles on Search Based Software Engineering},
  publisher = {{CREST Centre, UCL}},
  note = {crestweb.cs.ucl.ac.uk/resources/sbse_repository/}
}

    Each entry lists: Time Stamp, Author, Title, Year, Journal / Proceedings / Book, BibTeX, Type and Application.
    2016.03.09 Ali Ouni, Marouane Kessentini, Katsuro Inoue & Mel Ó Cinnéide Search-based Web Service Antipatterns Detection To appear IEEE Transactions on Services Computing   Article
    Abstract: Service Oriented Architecture (SOA) is widely used in industry and is regarded as one of the preferred architectural design technologies. As with any other software system, service-based systems (SBSs) may suffer from poor design, i.e., antipatterns, for many reasons such as poorly planned changes, time pressure or bad design choices. Consequently, this may lead to an SBS product that is difficult to evolve and that exhibits poor quality of service (QoS). Detecting Web service antipatterns is a manual, time-consuming and error-prone process for software developers. In this paper, we propose an automated approach for detection of Web service antipatterns using a cooperative parallel evolutionary algorithm (P-EA). The idea is that several detection methods are combined and executed in parallel during an optimization process to find a consensus regarding the identification of Web service antipatterns. We report the results of an empirical study using eight types of common Web service antipatterns. We compare the implementation of our cooperative P-EA approach with random search, two single population-based approaches and one state-of-the-art detection technique not based on heuristic search. Statistical analysis of the obtained results demonstrates that our approach is efficient in antipattern detection, with a precision score of 89% and a recall score of 93%.
    BibTeX:
    @article{OuniKIO,
      author = {Ali Ouni and Marouane Kessentini and Katsuro Inoue and Mel Ó Cinnéide},
      title = {Search-based Web Service Antipatterns Detection},
      journal = {IEEE Transactions on Services Computing},
      year = {To appear},
      doi = {http://dx.doi.org/10.1109/TSC.2015.2502595}
    }
    					
    2017.06.29 Annibale Panichella, Fitsum Meshesha Kifetew & Paolo Tonella Automated Test Case Generation as a Many-Objective Optimisation Problem with Dynamic Selection of the Targets To appear IEEE Transactions on Software Engineering   Article
    Abstract: Test case generation is intrinsically a multi-objective problem, since the goal is covering multiple test targets (e.g., branches). Existing search-based approaches either consider one target at a time or aggregate all targets into a single fitness function (whole-suite approach). Multi- and many-objective optimisation algorithms (MOAs) have never been applied to this problem, because existing algorithms do not scale to the number of coverage objectives that are typically found in real-world software. In addition, the final goal for MOAs is to find alternative trade-off solutions in the objective space, while in test generation the interesting solutions are only those test cases covering one or more uncovered targets. In this paper, we present DynaMOSA (Dynamic Many-Objective Sorting Algorithm), a novel many-objective solver specifically designed to address the test case generation problem in the context of coverage testing. DynaMOSA extends our previous many-objective technique MOSA (Many-Objective Sorting Algorithm) with dynamic selection of the coverage targets based on the control dependency hierarchy. This extension makes the approach more effective and efficient in the case of a limited search budget. We carried out an empirical study on 346 Java classes using three coverage criteria (i.e., statement, branch, and strong mutation coverage) to assess the performance of DynaMOSA with respect to the whole-suite approach (WS), its archive-based variant (WSA) and MOSA. The results show that DynaMOSA outperforms WSA in 28% of the classes for branch coverage (+8% more coverage on average) and in 27% of the classes for mutation coverage (+11% more killed mutants on average). It outperforms WS in 51% of the classes for statement coverage, leading to +11% more coverage on average. Moreover, DynaMOSA outperforms its predecessor MOSA for all three coverage criteria in 19% of the classes, with +8% more code coverage on average.
    BibTeX:
    @article{PanichellaKT,
      author = {Annibale Panichella and Fitsum Meshesha Kifetew and Paolo Tonella},
      title = {Automated Test Case Generation as a Many-Objective Optimisation Problem with Dynamic Selection of the Targets},
      journal = {IEEE Transactions on Software Engineering},
      year = {To appear},
      doi = {http://dx.doi.org/10.1109/TSE.2017.2663435}
    }
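    A rough Python sketch of the dynamic target selection idea in the abstract above (everything here is a hypothetical stand-in: invented target names, a random coverage check, and no many-objective sorting). Only targets whose control-dependency parents are covered are active objectives, and covering a target unlocks its children:

    import random

    # Hypothetical control-dependency graph: covering a target unlocks its children.
    DEPENDENCIES = {"b0": ["b1", "b2"], "b1": ["b3"], "b2": [], "b3": []}
    ROOTS = ["b0"]

    def covers(test, target):
        # Stand-in for executing `test` and checking whether it covers `target`.
        return random.random() < 0.05

    def dynamic_target_selection(generate_test, budget=2000):
        active = set(ROOTS)           # only reachable, uncovered targets are objectives
        covered, archive = set(), {}  # archive keeps one covering test per target
        for _ in range(budget):
            if not active:
                break
            test = generate_test()
            for target in list(active):
                if covers(test, target):
                    covered.add(target)
                    archive[target] = test
                    active.remove(target)
                    active.update(c for c in DEPENDENCIES[target] if c not in covered)
        return archive

    print(sorted(dynamic_target_selection(lambda: "random test")))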
    					
    2017.06.27 Federica Sarro, Filomena Ferrucci, Mark Harman, Alessandra Manna & Jian Ren Adaptive Multi-objective Evolutionary Algorithms for Overtime Planning in Software Projects To appear IEEE Transactions on Software Engineering   Article
    Abstract: Software engineering and development is well-known to suffer from unplanned overtime, which causes stress and illness in engineers and can lead to poor quality software with higher defects. Recently, we introduced a multi-objective decision support approach to help balance project risks and duration against overtime, so that software engineers can better plan overtime. This approach was empirically evaluated on six real world software projects and compared against state-of-the-art evolutionary approaches and currently used overtime strategies. The results showed that our proposal comfortably outperformed all the benchmarks considered. This paper extends our previous work by investigating adaptive multi-objective approaches to meta-heuristic operator selection, thereby extending and (as the results show) improving algorithmic performance. We also extended our empirical study to include two new real world software projects, thereby enhancing the scientific evidence for the technical performance claims made in the paper. Our new results, over all eight projects studied, showed that our adaptive algorithm outperforms the considered state-of-the-art multi-objective approaches in 93% of the experiments (with large effect size). The results also confirm that our approach significantly outperforms current overtime planning practices in 100% of the experiments (with large effect size).
    BibTeX:
    @article{SarroFHMR,
      author = {Federica Sarro and Filomena Ferrucci and Mark Harman and Alessandra Manna and Jian Ren},
      title = {Adaptive Multi-objective Evolutionary Algorithms for Overtime Planning in Software Projects},
      journal = {IEEE Transactions on Software Engineering},
      year = {To appear},
      doi = {http://dx.doi.org/10.1109/TSE.2017.2650914}
    }
    					
    2016.04.29 Aldeida Aleti, I. Moser & Lars Grunske Analysing the Fitness Landscape of Search-based Software Testing Problems 2017 Automated Software Engineering, Vol. 24(3), pp. 603-621, September   Article
    Abstract: Search-based software testing automatically derives test inputs for a software system with the goal of improving various criteria, such as branch coverage. In many cases, evolutionary algorithms are implemented to find near-optimal test suites for software systems. The result of the search is usually received without any indication of how successful the search has been. Fitness landscape characterisation can help understand the search process and its probability of success. In this study, we recorded the information content, negative slope coefficient and the number of improvements during the progress of a genetic algorithm within the EvoSuite framework. Correlating the metrics with the branch and method coverages and the fitness function values reveals that the problem formulation used in EvoSuite could be improved by revising the objective function. It also demonstrates that, given the current formulation, the use of crossover has no benefit for the search, as the most problematic landscape feature is not the number of local optima but the presence of many plateaus.
    BibTeX:
    @article{AletiMG17,
      author = {Aldeida Aleti and I. Moser and Lars Grunske},
      title = {Analysing the Fitness Landscape of Search-based Software Testing Problems},
      journal = {Automated Software Engineering},
      year = {2017},
      volume = {24},
      number = {3},
      pages = {603-621},
      month = {September},
      doi = {http://dx.doi.org/10.1007/s10515-016-0197-7}
    }
    					
    2017.06.29 Ilhem Boussaïd, Patrick Siarry & Mohamed Ahmed-Nacer A Survey on Search-based Model-driven Engineering 2017 Automated Software Engineering, Vol. 24(2), pp. 233-294, June   Article
    Abstract: Model-driven engineering (MDE) and search-based software engineering (SBSE) are both relevant approaches to software engineering. MDE aims to raise the level of abstraction in order to cope with the complexity of software systems, while SBSE involves the application of metaheuristic search techniques to complex software engineering problems, reformulating engineering tasks as optimization problems. The purpose of this paper is to survey the relatively recent research activity lying at the interface between these two fields, an area that has come to be known as search-based model-driven engineering. We begin with an introduction to MDE, the concepts of models, of metamodels and of model transformations. We also give a brief introduction to SBSE and metaheuristics. Then, we survey the current research work centered around the combination of search-based techniques and MDE. The literature survey is accompanied by the presentation of references for further details.
    BibTeX:
    @article{BoussaidSA17,
      author = {Ilhem Boussaïd and Patrick Siarry and Mohamed Ahmed-Nacer},
      title = {A Survey on Search-based Model-driven Engineering},
      journal = {Automated Software Engineering},
      year = {2017},
      volume = {24},
      number = {2},
      pages = {233-294},
      month = {June},
      doi = {http://dx.doi.org/10.1007/s10515-017-0215-4}
    }
    					
    2017.06.27 Saheed A. Busari & Emmanuel Letier RADAR: A Lightweight Tool for Requirements and Architecture Decision Analysis 2017 Proceedings of the 39th International Conference on Software Engineering (ICSE '17), pp. 552-562, Buenos Aires, Argentina, 20-28 May   Inproceedings
    Abstract: Uncertainty and conflicting stakeholders' objectives make many requirements and architecture decisions particularly hard. Quantitative probabilistic models allow software architects to analyse such decisions using stochastic simulation and multi-objective optimisation, but the difficulty of elaborating the models is an obstacle to the wider adoption of such techniques. To reduce this obstacle, this paper presents a novel modelling language and analysis tool, called RADAR, intended to facilitate requirements and architecture decision analysis. The language has relations to quantitative AND/OR goal models used in requirements engineering and to feature models used in software product lines. However, it simplifies such models to a minimum set of language constructs essential for decision analysis. The paper presents RADAR's modelling language, automated support for decision analysis, and evaluates its application to four real-world examples.
    BibTeX:
    @inproceedings{BusariL17,
      author = {Saheed A. Busari and Emmanuel Letier},
      title = {RADAR: A Lightweight Tool for Requirements and Architecture Decision Analysis},
      booktitle = {Proceedings of the 39th International Conference on Software Engineering (ICSE '17)},
      publisher = {IEEE},
      year = {2017},
      pages = {552-562},
      address = {Buenos Aires, Argentina},
      month = {20-28 May},
      doi = {http://dx.doi.org/10.1109/ICSE.2017.57}
    }
    					
    2016.02.27 Xinye Cai, Xin Cheng, Zhun Fan, Erik Goodman & Lisong Wang An Adaptive Memetic Framework for Multi-objective Combinatorial Optimization Problems: Studies on Software Next Release and Travelling Salesman Problems 2017 Soft Computing, Vol. 21(9), pp. 2215-2236, May   Article
    Abstract: In this paper, we propose two multi-objective memetic algorithms (MOMAs) using two different adaptive mechanisms to address combinatorial optimization problems (COPs). One mechanism adaptively selects solutions for local search based on the solutions’ convergence toward the Pareto front. The second adaptive mechanism uses the convergence and diversity information of an external set (dominance archive), to guide the selection of promising solutions for local search. In addition, simulated annealing is integrated in this framework as the local refinement process. The multi-objective memetic algorithms with the two adaptive schemes (called uMOMA-SA and aMOMA-SA) are tested on two COPs and compared with some well-known multi-objective evolutionary algorithms. Experimental results suggest that uMOMA-SA and aMOMA-SA outperform the other algorithms with which they are compared. The effects of the two adaptive mechanisms are also investigated in the paper. In addition, uMOMA-SA and aMOMA-SA are compared with three single-objective and three multi-objective optimization approaches on software next release problems using real instances mined from bug repositories (Xuan et al. IEEE Trans Softw Eng 38(5):1195–1212, 2012). The results show that these multi-objective optimization approaches perform better than these single-objective ones, in general, and that aMOMA-SA has the best performance among all the approaches compared.
    BibTeX:
    @article{CaiCFGW17,
      author = {Xinye Cai and Xin Cheng and Zhun Fan and Erik Goodman and Lisong Wang},
      title = {An Adaptive Memetic Framework for Multi-objective Combinatorial Optimization Problems: Studies on Software Next Release and Travelling Salesman Problems},
      journal = {Soft Computing},
      year = {2017},
      volume = {21},
      number = {9},
      pages = {2215-2236},
      month = {May},
      doi = {http://dx.doi.org/10.1007/s00500-015-1921-0}
    }
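    Most entries in this repository build on the Pareto-dominance relation, so a small self-contained Python sketch may help (the example objective vectors are invented): solution a dominates b if it is no worse on every objective and strictly better on at least one, and a Pareto front keeps only the undominated solutions.

    def dominates(a, b):
        # All objectives minimised: no worse everywhere, strictly better somewhere.
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(solutions):
        # Keep only the solutions that no other solution dominates.
        return [s for s in solutions
                if not any(dominates(o, s) for o in solutions if o is not s)]

    # Invented (cost, duration) pairs for candidate plans.
    print(pareto_front([(1, 5), (2, 4), (3, 3), (2, 6), (4, 4)]))  # [(1, 5), (2, 4), (3, 3)]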
    					
    2017.06.29 Zongzheng Chi, Jifeng Xuan, Zhilei Ren, Xiaoyuan Xie & He Guo Multi-Level Random Walk for Software Test Suite Reduction 2017 IEEE Computational Intelligence Magazine, Vol. 12(2), pp. 24-33, May   Article
    Abstract: Software testing is important and time-consuming. A test suite, i.e., a set of test cases, plays a key role in validating the expected program behavior. In modern test-driven development, a test suite pushes the development progress. Software evolves over time; its test suite is executed to detect whether a new code change adds bugs to the existing code. Executing all test cases after each code change is unnecessary and may be impossible due to the limited development cycle. On the one hand, multiple test cases may focus on an identical piece of code; then several test cases cannot detect extra bugs. On the other hand, even executing a test suite once in a large project takes around one hour [1]; frequent code changes require much time for conducting testing. For instance, in Hadoop, a framework of distributed computing, 2,847 version commits are accepted within one year from September 2014 with a peak of 135 commits in one week [2].
    BibTeX:
    @article{ChiXRXG17,
      author = {Zongzheng Chi and Jifeng Xuan and Zhilei Ren and Xiaoyuan Xie and He Guo},
      title = {Multi-Level Random Walk for Software Test Suite Reduction},
      journal = {IEEE Computational Intelligence Magazine},
      year = {2017},
      volume = {12},
      number = {2},
      pages = {24-33},
      month = {May},
      doi = {http://dx.doi.org/10.1109/MCI.2017.2670460}
    }
    					
    2017.06.27 Thiago Nascimento Ferreira, Silvia Regina Vergilio & Jerffeson Teixeira de Souza Incorporating User Preferences in Search-based Software Engineering: A Systematic Mapping Study 2017 Information and Software Technology, Vol. 90, pp. 55-69, October   Article
    Abstract: Context

    Search-based algorithms have been successfully applied to solve software engineering problems in the field known as Search-based Software Engineering (SBSE). However, in practice, the user may reject the obtained solutions, since many characteristics of the problem cannot be mathematically modeled. To cope with this situation, preference-based algorithms have been investigated and raised interest in the SBSE field.

    Objective

    To identify the quantity and type of research on SBSE preference-based approaches and to contribute to this new research subject, named here Preference and Search-Based Software Engineering (PSBSE).

    Method

    We conducted a systematic mapping, following a research plan to locate, assess, extract and group the outcomes from relevant studies.

    Results

    Few software engineering activities have been addressed. The most used algorithms are evolutionary and single-objective. In most studies the preferences are provided interactively and, in many cases, the user preferences are incorporated in the fitness functions. We observe a lack of evaluation measures and works comparing existing approaches.

    Conclusions

    The use of preference-based algorithms in SBSE is an underexplored subject, and many research opportunities exist.

    BibTeX:
    @article{FerreiraVS17,
      author = {Thiago Nascimento Ferreira and Silvia Regina Vergilio and Jerffeson Teixeira de Souza},
      title = {Incorporating User Preferences in Search-based Software Engineering: A Systematic Mapping Study},
      journal = {Information and Software Technology},
      year = {2017},
      volume = {90},
      pages = {55-69},
      month = {October},
      doi = {http://dx.doi.org/10.1016/j.infsof.2017.05.003}
    }
    					
    2017.06.29 Giovani Guizzo, Silvia Regina Vergilio, Aurora Trinidad Ramirez Pozo & Gian Mauricio Fritsche A Multi-objective and Evolutionary Hyper-heuristic Applied to the Integration and Test Order Problem 2017 Applied Soft Computing, Vol. 56, pp. 331-344, July   Article
    Abstract: The field of Search-Based Software Engineering (SBSE) has widely utilized Multi-Objective Evolutionary Algorithms (MOEAs) to solve complex software engineering problems. However, the use of such algorithms can be a hard task for the software engineer, mainly due to the significant range of parameter and algorithm choices. To help in this task, the use of Hyper-heuristics is recommended. Hyper-heuristics can select or generate low-level heuristics while optimization algorithms are executed, and thus can be generically applied. Despite their benefits, we find only a few works using hyper-heuristics in the SBSE field. Considering this fact, we describe HITO, a Hyper-heuristic for the Integration and Test Order Problem, to adaptively select search operators while MOEAs are executed using one of the selection methods: Choice Function and Multi-Armed Bandit. The experimental results show that HITO can outperform the traditional MOEAs NSGA-II and MOEA/DD. HITO is also a generic algorithm, since the user does not need to select crossover and mutation operators, nor adjust their parameters.
    BibTeX:
    @article{GuizzoVPF17,
      author = {Giovani Guizzo and Silvia Regina Vergilio and Aurora Trinidad Ramirez Pozo and Gian Mauricio Fritsche},
      title = {A Multi-objective and Evolutionary Hyper-heuristic Applied to the Integration and Test Order Problem},
      journal = {Applied Soft Computing},
      year = {2017},
      volume = {56},
      pages = {331-344},
      month = {July},
      doi = {http://dx.doi.org/10.1016/j.asoc.2017.03.012}
    }
    					
    2017.06.27 Sadeeq Jan, Cu D. Nguyen, Andrea Arcuri & Lionel Briand A Search-Based Testing Approach for XML Injection Vulnerabilities in Web Applications 2017 Proceedings of the 10th IEEE International Conference on Software Testing, Verification and Validation (ICST '17), pp. 356-366, Tokyo, Japan, 13-17 March   Inproceedings
    Abstract: In most cases, web applications communicate with web services (SOAP and RESTful). The former act as a front-end to the latter, which contain the business logic. A hacker might not have direct access to those web services (e.g., they are not on public networks), but can still provide malicious inputs to the web application, thus potentially compromising related services. Typical examples are XML injection attacks that target SOAP communications. In this paper, we present a novel, search-based approach used to generate test data for a web application in an attempt to deliver malicious XML messages to web services. Our goal is thus to detect XML injection vulnerabilities in web applications. The proposed approach is evaluated on two studies, including an industrial web application with millions of users. Results show that we are able to effectively generate test data (e.g., input values in an HTML form) that detect such vulnerabilities.
    BibTeX:
    @inproceedings{JanNAB17,
      author = {Sadeeq Jan and Cu D. Nguyen and Andrea Arcuri and Lionel Briand},
      title = {A Search-Based Testing Approach for XML Injection Vulnerabilities in Web Applications},
      booktitle = {Proceedings of the 10th IEEE International Conference on Software Testing, Verification and Validation (ICST '17)},
      publisher = {IEEE},
      year = {2017},
      pages = {356-366},
      address = {Tokyo, Japan},
      month = {13-17 March},
      doi = {http://dx.doi.org/10.1109/ICST.2017.39}
    }
    					
    2017.06.27 Salim Kebir, Isabelle Borne & Djamel Meslati A Genetic Algorithm-based Approach for Automated Refactoring of Component-based Software 2017 Information and Software Technology, Vol. 88, pp. 17-36, August   Article
    Abstract: Context: During its lifecycle, a software system undergoes repeated modifications to quickly fulfill new requirements, but its underlying design is not properly adjusted after each update. This leads to the emergence of bad smells. Refactoring provides a de facto behavior-preserving approach to eliminate these anomalies. However, manually determining and performing useful refactorings is a formidable challenge, as stated in the literature. Therefore, framing object-oriented automated refactoring as a search-based technique has been proposed. However, the literature shows that search-based refactoring of component-based software has not yet received proper attention.

    Objective: This paper presents a genetic algorithm-based approach for the automated refactoring of component-based software. This approach consists of detecting component-relevant bad smells and eliminating these bad smells by searching for the best sequence of refactorings using a genetic algorithm.

    Method: Our approach consists of four steps. The first step includes studying the literature related to component-relevant bad smells and formulating bad smell detection rules. The second step involves proposing a catalog of component-relevant refactorings. The third step consists of constructing a source code model by extracting facts from the source code of a component-based software. The final step seeks to identify the best sequence of refactorings to apply to reduce the presence of bad smells in the source code model using a genetic algorithm. The latter uses bad smell detection rules as a fitness function and the catalog of refactorings as a means to explore the search space.

    Results: As a case study, we conducted experiments on an unbiased set of four real-world component-based applications. The results indicate that our approach is able to efficiently reduce the total number of bad smells by more than one half, which is an acceptable value compared to the recent literature. Moreover, we determined that our approach is also accurate in refactoring only components suffering from bad smells while leaving the remaining components untouched whenever possible. Furthermore, a statistical analysis shows that our genetic algorithm outperforms random search and local search in terms of efficiency and accuracy on almost all the systems investigated in this work.

    Conclusion: This paper presents a search-based approach for the automated refactoring of component-based software. To the best of our knowledge, our approach is the first to focus on component-based refactoring, whereas the state-of-the-art approaches focus only on object-oriented refactoring.

    BibTeX:
    @article{KebirBM17,
      author = {Salim Kebir and Isabelle Borne and Djamel Meslati},
      title = {A Genetic Algorithm-based Approach for Automated Refactoring of Component-based Software},
      journal = {Information and Software Technology},
      year = {2017},
      volume = {88},
      pages = {17-36},
      month = {August},
      doi = {http://dx.doi.org/10.1016/j.infsof.2017.03.009}
    }
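    The search loop the abstract describes can be outlined as a genetic algorithm over refactoring sequences, with a bad-smell count as the fitness to minimise. The sketch below is only illustrative: the catalog names and the smell counter are invented placeholders, not the paper's detection rules.

    import random

    CATALOG = ["move_method", "extract_component", "merge_components"]  # invented names

    def smell_count(model, sequence):
        # Placeholder for applying `sequence` to the source-code model and
        # counting the bad smells reported by the detection rules.
        return abs(hash((model, tuple(sequence)))) % 20

    def ga_refactoring(model, length=5, pop_size=20, generations=50):
        pop = [[random.choice(CATALOG) for _ in range(length)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda seq: smell_count(model, seq))  # fewer smells = fitter
            survivors = pop[: pop_size // 2]
            children = []
            while len(survivors) + len(children) < pop_size:
                a, b = random.sample(survivors, 2)
                cut = random.randrange(1, length)              # one-point crossover
                child = a[:cut] + b[cut:]
                if random.random() < 0.2:                      # point mutation
                    child[random.randrange(length)] = random.choice(CATALOG)
                children.append(child)
            pop = survivors + children
        return min(pop, key=lambda seq: smell_count(model, seq))

    print(ga_refactoring("demo-model"))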
    					
    2016.04.29 Fitsum Meshesha Kifetew, Roberto Tiella & Paolo Tonella Generating Valid Grammar-based Test Inputs by Means of Genetic Programming and Annotated Grammars 2017 Empirical Software Engineering, Vol. 22(2), pp. 928-961, April   Article
    Abstract: Automated generation of system level tests for grammar based systems requires the generation of complex and highly structured inputs, which must typically satisfy some formal grammar. In our previous work, we showed that genetic programming combined with probabilities learned from corpora gives significantly better results over the baseline (random) strategy. In this work, we extend our previous work by introducing grammar annotations as an alternative to learned probabilities, to be used when finding and preparing the corpus required for learning is not affordable. Experimental results carried out on six grammar based systems of varying levels of complexity show that grammar annotations produce a higher number of valid sentences and achieve similar levels of coverage and fault detection as learned probabilities.
    BibTeX:
    @article{KifetewTT17,
      author = {Fitsum Meshesha Kifetew and Roberto Tiella and Paolo Tonella},
      title = {Generating Valid Grammar-based Test Inputs by Means of Genetic Programming and Annotated Grammars},
      journal = {Empirical Software Engineering},
      year = {2017},
      volume = {22},
      number = {2},
      pages = {928-961},
      month = {April},
      doi = {http://dx.doi.org/10.1007/s10664-015-9422-4}
    }
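    Sentence generation from an annotated grammar can be pictured with a toy Python sketch (the grammar and its weights are invented and do not reflect the paper's annotation language): each nonterminal expands by a weighted random choice among its productions, so annotations bias the derivation towards valid, realistic inputs.

    import random

    # Invented toy grammar: nonterminal -> [(weight, expansion), ...].
    GRAMMAR = {
        "<expr>": [(3, ["<num>"]), (1, ["<expr>", "+", "<expr>"])],
        "<num>":  [(1, ["0"]), (1, ["1"]), (1, ["42"])],
    }

    def derive(symbol):
        # Terminals are returned as-is; nonterminals expand by weighted choice.
        # Termination is probabilistic: recursive productions carry low weight.
        if symbol not in GRAMMAR:
            return symbol
        weights, expansions = zip(*GRAMMAR[symbol])
        expansion = random.choices(expansions, weights=weights)[0]
        return "".join(derive(s) for s in expansion)

    print(derive("<expr>"))  # e.g. "42+1"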
    					
    2017.06.27 William B. Langdon, Brian Yee Hong Lam, Marc Modat, Justyna Petke & Mark Harman Genetic Improvement of GPU Software 2017 Genetic Programming and Evolvable Machines, Vol. 18(1), pp. 5-44, March   Article
    Abstract: We survey genetic improvement (GI) of general purpose computing on graphics cards. We summarise several experiments which demonstrate four themes. Experiments with the gzip program show that genetic programming can automatically port sequential C code to parallel code. Experiments with the StereoCamera program show that GI can upgrade legacy parallel code for new hardware and software. Experiments with NiftyReg and BarraCUDA show that GI can make substantial improvements to current parallel CUDA applications. Finally, experiments with the pknotsRG program show that with semi-automated approaches, enormous speed ups can sometimes be had by growing and grafting new code with genetic programming in combination with human input.
    BibTeX:
    @article{LangdonLMPH17,
      author = {William B. Langdon and Brian Yee Hong Lam and Marc Modat and Justyna Petke and Mark Harman},
      title = {Genetic Improvement of GPU Software},
      journal = {Genetic Programming and Evolvable Machines},
      year = {2017},
      volume = {18},
      number = {1},
      pages = {5-44},
      month = {March},
      doi = {http://dx.doi.org/10.1007/s10710-016-9273-9}
    }
    					
    2017.06.29 Usman Mansoor, Marouane Kessentini, Bruce R. Maxim & Kalyanmoy Deb Multi-objective Code-smells Detection using Good and Bad Design Examples 2017 Software Quality Journal, Vol. 25(2), pp. 529-552, June   Article
    Abstract: Code-smells are identified, in general, by using a set of detection rules. These rules are manually defined to identify the key symptoms that characterize a code-smell using combinations of mainly quantitative (metrics), structural, and/or lexical information. We propose in this work to consider the problem of code-smell detection as a multi-objective problem where examples of code-smells and well-designed code are used to generate detection rules. To this end, we use multi-objective genetic programming (MOGP) to find the best combination of metrics that maximizes the detection of code-smell examples and minimizes the detection of well-designed code examples. We evaluated our proposal on seven large open-source systems and found that, on average, most of the five different code-smell types were detected with an average precision of 87% and recall of 92%. Statistical analysis of our experiments over 51 runs shows that MOGP performed significantly better than state-of-the-art code-smell detectors.
    BibTeX:
    @article{MansoorKMD17,
      author = {Usman Mansoor and Marouane Kessentini and Bruce R. Maxim and Kalyanmoy Deb},
      title = {Multi-objective Code-smells Detection using Good and Bad Design Examples},
      journal = {Software Quality Journal},
      year = {2017},
      volume = {25},
      number = {2},
      pages = {529-552},
      month = {June},
      doi = {http://dx.doi.org/10.1007/s11219-016-9309-7}
    }
    					
    2017.06.27 Thainá Mariani & Silvia Regina Vergilio A Systematic Review on Search-based Refactoring 2017 Information and Software Technology, Vol. 83, pp. 14-34, March   Article
    Abstract: Context: To find the best sequence of refactorings to be applied in a software artifact is an optimization problem that can be solved using search techniques, in the field called Search-Based Refactoring (SBR). Over the last years, the field has gained importance, and many SBR approaches have appeared, arousing research interest.

    Objective: The objective of this paper is to provide an overview of existing SBR approaches, by presenting their common characteristics, and to identify trends and research opportunities.

    Method: A systematic review was conducted following a plan that includes the definition of research questions, selection criteria, a search string, and selection of search engines. 71 primary studies were selected, published in the last sixteen years. They were classified considering dimensions related to the main SBR elements, such as addressed artifacts, encoding, search technique, used metrics, available tools, and conducted evaluation.

    Results: Some results show that code is the most addressed artifact, and evolutionary algorithms are the most employed search technique. Furthermore, in most cases the generated solution is a sequence of refactorings. In this respect, the refactorings considered are usually those from Fowler’s Catalog. Some trends and opportunities for future research include the use of models as artifacts, the use of many objectives, the study of the effect of bad smells, and the use of hyper-heuristics.

    Conclusions: We have found many SBR approaches, most of them published recently. The approaches are presented, analyzed, and grouped following a classification scheme. The paper contributes to the SBR field by identifying a range of possibilities that serve as a basis to motivate future research.

    BibTeX:
    @article{MarianiV17,
      author = {Thainá Mariani and Silvia Regina Vergilio},
      title = {A Systematic Review on Search-based Refactoring},
      journal = {Information and Software Technology},
      year = {2017},
      volume = {83},
      pages = {14-34},
      month = {March},
      doi = {http://dx.doi.org/10.1016/j.infsof.2016.11.009}
    }
    					
    2017.06.27 Mohamed Wiem Mkaouer, Marouane Kessentini, Mel Ó Cinnéide, Shinpei Hayashi & Kalyanmoy Deb A Robust Multi-objective Approach to Balance Severity And Importance of Refactoring Opportunities 2017 Empirical Software Engineering, Vol. 22(2), pp. 894-927, April   Article
    Abstract: Refactoring large systems involves several sources of uncertainty related to the severity levels of code smells to be corrected and the importance of the classes in which the smells are located. Both severity and importance of identified refactoring opportunities (e.g. code smells) are difficult to estimate. In fact, due to the dynamic nature of software development, these values cannot be accurately determined in practice, leading to refactoring sequences that lack robustness. In addition, some code fragments can contain severe quality issues but do not play an important role in the system. To address this problem, we introduced a multi-objective robust model, based on NSGA-II, for the software refactoring problem that tries to find the best trade-off between three objectives to maximize: quality improvements, severity and importance of refactoring opportunities to be fixed. We evaluated our approach using 8 open source systems and one industrial project, and demonstrated that it is significantly better than state-of-the-art refactoring approaches in terms of robustness in all the experiments based on a variety of real-world scenarios. Our suggested refactoring solutions were found to be comparable in terms of quality to those suggested by existing approaches, to provide better prioritization of refactoring opportunities, and to carry an acceptable robustness price.
    BibTeX:
    @article{MkaouerKCHD17,
      author = {Mohamed Wiem Mkaouer and Marouane Kessentini and Mel Ó Cinnéide and Shinpei Hayashi and Kalyanmoy Deb},
      title = {A Robust Multi-objective Approach to Balance Severity And Importance of Refactoring Opportunities},
      journal = {Empirical Software Engineering},
      year = {2017},
      volume = {22},
      number = {2},
      pages = {894-927},
      month = {April},
      doi = {http://dx.doi.org/10.1007/s10664-016-9426-8}
    }
    					
    2017.06.29 Ali Ouni, Marouane Kessentini, Mel Ó Cinnéide, Houari Sahraoui, Kalyanmoy Deb & Katsuro Inoue MORE: A Multi-objective Refactoring Recommendation Approach to Introducing Design Patterns and Fixing Code Smells 2017 Journal of Software: Evolution and Process, Vol. 29(5), May   Article
    Abstract: Refactoring is widely recognized as a crucial technique applied when evolving object-oriented software systems. If applied well, refactoring can improve different aspects of software quality including readability, maintainability, and extendibility. However, despite its importance and benefits, recent studies report that automated refactoring tools are underused much of the time by software developers. This paper introduces an automated approach for refactoring recommendation, called MORE, driven by 3 objectives: (1) to improve design quality (as defined by software quality metrics), (2) to fix code smells, and (3) to introduce design patterns. To this end, we adopt the recent nondominated sorting genetic algorithm, NSGA-III, to find the best trade-off between these 3 objectives. We evaluated the efficacy of our approach using a benchmark of 7 medium and large open-source systems, 7 commonly occurring code smells (god class, feature envy, data class, spaghetti code, shotgun surgery, lazy class, and long parameter list), and 4 common design pattern types (visitor, factory method, singleton, and strategy). Our approach is empirically evaluated through a quantitative and qualitative study to compare it against 3 different state-of-the art approaches, 2 popular multiobjective search algorithms, and random search. The statistical analysis of the results confirms the efficacy of our approach in improving the quality of the studied systems while successfully fixing 84% of code smells and introducing an average of 6 design patterns. In addition, the qualitative evaluation shows that most of the suggested refactorings (an average of 69%) are considered by developers to be relevant and meaningful.
    BibTeX:
    @article{OuniKCSDI17,
      author = {Ali Ouni and Marouane Kessentini and Mel Ó Cinnéide and Houari Sahraoui and Kalyanmoy Deb and Katsuro Inoue},
      title = {MORE: A Multi-objective Refactoring Recommendation Approach to Introducing Design Patterns and Fixing Code Smells},
      journal = {Journal of Software: Evolution and Process},
      year = {2017},
      volume = {29},
      number = {5},
      month = {May},
      doi = {http://dx.doi.org/10.1002/smr.1843}
    }
    					
    2017.06.27 Ali Ouni, Raula Gaikovina Kula, Marouane Kessentini, Takashi Ishio, Daniel M. German & Katsuro Inoue Search-based Software Library Recommendation using Multi-objective Optimization 2017 Information and Software Technology, Vol. 83, pp. 55-75, March   Article
    Abstract: Context: Software library reuse has significantly increased the productivity of software developers, reduced time-to-market and improved software quality and reusability. However, with the growing number of reusable software libraries in code repositories, finding and adopting a relevant software library becomes a fastidious and complex task for developers.

    Objective: In this paper, we propose a novel approach called LibFinder to prevent missed reuse opportunities during software maintenance and evolution. The goal is to provide decision support for developers to easily find “useful” third-party libraries for the implementation of their software systems.

    Method: To this end, we used the non-dominated sorting genetic algorithm (NSGA-II), a multi-objective search-based algorithm, to find a trade-off between three objectives: 1) maximizing co-usage between a candidate library and the actual libraries used by a given system, 2) maximizing the semantic similarity between a candidate library and the source code of the system, and 3) minimizing the number of recommended libraries.

    Results: We evaluated our approach on 6083 different libraries from the Maven Central super repository that were used by 32,760 client systems obtained from the GitHub super repository. Our results show that our approach outperforms three other existing search techniques and a state-of-the-art approach, not based on heuristic search, and succeeds in recommending useful libraries at an accuracy score of 92%, precision of 51% and recall of 68%, while finding the best trade-off between the three considered objectives. Furthermore, we evaluate the usefulness of our approach in practice through an empirical study on two industrial Java systems with developers. Results show that the top 10 recommended libraries were rated by the original developers with an average of 3.25 out of 5.

    Conclusion: This study suggests that (1) library usage history collected from different client systems and (2) library semantics/content embodied in library identifiers should be balanced together for an efficient library recommendation technique.

    BibTeX:
    @article{OuniKKIGI17,
      author = {Ali Ouni and Raula Gaikovina Kula and Marouane Kessentini and Takashi Ishio and Daniel M. German and Katsuro Inoue},
      title = {Search-based Software Library Recommendation using Multi-objective Optimization},
      journal = {Information and Software Technology},
      year = {2017},
      volume = {83},
      pages = {55-75},
      month = {March},
      doi = {http://dx.doi.org/10.1016/j.infsof.2016.11.007}
    }
    					
    2017.06.27 Matthew Patrick, Ruairi Donnelly & Christopher A. Gilligan A Toolkit for Testing Stochastic Simulations against Statistical Oracles 2017 Proceedings of the 10th IEEE International Conference on Software Testing, Verification and Validation (ICST '17), pp. 448-453, Tokyo, Japan, 13-17 March   Inproceedings
    Abstract: Stochastic simulations are developed and employed across many fields, to advise governmental policy decisions and direct future research. Faulty simulation software can have serious consequences, but its correctness is difficult to determine due to complexity and random behaviour. Stochastic simulations may output a different result each time they are run, whereas most testing techniques are designed for programs which (for a given set of inputs) always produce the same behaviour. In this paper, we introduce a new approach towards testing stochastic simulations using statistical oracles and transition probabilities. Our approach was implemented as a toolkit, which allows the frequency of state transitions to be tested, along with their final output distribution. We evaluated our toolkit on eight simulation programs from a variety of fields and found it can detect errors at least three times smaller (and in one case, over 1000 times smaller) than a conventional (tolerance threshold) approach.
    BibTeX:
    @inproceedings{PatrickDG17,
      author = {Matthew Patrick and Ruairi Donnelly and Christopher A. Gilligan},
      title = {A Toolkit for Testing Stochastic Simulations against Statistical Oracles},
      booktitle = {Proceedings of the 10th IEEE International Conference on Software Testing, Verification and Validation (ICST '17)},
      publisher = {IEEE},
      year = {2017},
      pages = {448-453},
      address = {Tokyo, Japan},
      month = {13-17 March},
      doi = {http://dx.doi.org/10.1109/ICST.2017.50}
    }
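    The core of a frequency-based statistical oracle can be sketched in a few lines of Python (a loose illustration under invented assumptions, not the authors' toolkit): run the stochastic simulation many times and check each observed state-transition frequency against its specified probability within a tolerance.

    import random

    def frequency_oracle(simulate, expected, runs=10000, tolerance=0.02):
        # `expected` maps each transition to its specified probability.
        counts = {t: 0 for t in expected}
        for _ in range(runs):
            for transition in simulate():
                counts[transition] += 1
        total = sum(counts.values())
        return {t: abs(counts[t] / total - p) <= tolerance
                for t, p in expected.items()}

    # Invented two-state example: A->A with probability 0.7, A->B with 0.3.
    def simulate():
        return [("A", "A") if random.random() < 0.7 else ("A", "B")]

    print(frequency_oracle(simulate, {("A", "A"): 0.7, ("A", "B"): 0.3}))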
    					
    2017.06.27 José Miguel Rojas, Mattia Vivanti, Andrea Arcuri & Gordon Fraser A Detailed Investigation of the Effectiveness of Whole Test Suite Generation 2017 Empirical Software Engineering, Vol. 22(2), pp. 852-893, April   Article
    Abstract: A common application of search-based software testing is to generate test cases for all goals defined by a coverage criterion (e.g., lines, branches, mutants). Rather than generating one test case at a time for each of these goals individually, whole test suite generation optimizes entire test suites towards satisfying all goals at the same time. There is evidence that the overall coverage achieved with this approach is superior to that of targeting individual coverage goals. Nevertheless, there remains some uncertainty on (a) whether the results generalize beyond branch coverage, (b) whether the whole test suite approach might be inferior to a more focused search for some particular coverage goals, and (c) whether generating whole test suites could be optimized by only targeting coverage goals not already covered. In this paper, we perform an in-depth analysis to study these questions. An empirical study on 100 Java classes using three different coverage criteria reveals that indeed there are some testing goals that are only covered by the traditional approach, although their number is only very small in comparison with those which are exclusively covered by the whole test suite approach. We find that keeping an archive of already covered goals along with the tests covering them and focusing the search on uncovered goals overcomes this small drawback on larger classes, leading to an improved overall effectiveness of whole test suite generation.
    BibTeX:
    @article{RojasVAF17,
      author = {José Miguel Rojas and Mattia Vivanti and Andrea Arcuri and Gordon Fraser},
      title = {A Detailed Investigation of the Effectiveness of Whole Test Suite Generation},
      journal = {Empirical Software Engineering},
      year = {2017},
      volume = {22},
      number = {2},
      pages = {852-893},
      month = {April},
      doi = {http://dx.doi.org/10.1007/s10664-015-9424-2}
    }
    					
    2016.02.27 Isabel María del Águila & José Del Sagrado Three Steps Multiobjective Decision Process for Software Release Planning 2016 Complexity, Vol. 21(S1), pp. 250-262, September/October   Article
    Abstract: This paper deals with how to determine which features should be included in the software to be developed. Metaheuristic techniques have been applied to this problem and can help software developers when they face contradictory goals. We show how the knowledge and experience of human experts can be enriched by these techniques, with the idea of obtaining a better requirements selection than that produced by expert judgment alone. This objective is achieved by embedding metaheuristics techniques into a requirements management tool that takes advantage of them during the execution of the development stages of any software development project.
    BibTeX:
    @article{AguilaS16,
      author = {Isabel María del Águila and José Del Sagrado},
      title = {Three Steps Multiobjective Decision Process for Software Release Planning},
      journal = {Complexity},
      year = {2016},
      volume = {21},
      number = {S1},
      pages = {250-262},
      month = {September/October},
      doi = {http://dx.doi.org/10.1002/cplx.21739}
    }
    					
    2016.04.29 Shaukat Ali, Muhammad Zohaib Iqbal, Maham Khalid & Andrea Arcuri Improving the Performance of OCL Constraint Solving with Novel Heuristics for Logical Operations: A Search-based Approach 2016 Empirical Software Engineering, Vol. 21(6), pp. 2459-2502, December   Article
    Abstract: A common practice to specify constraints on Unified Modeling Language (UML) models is using the Object Constraint Language (OCL). Such constraints serve various purposes, ranging from simply providing precise meaning to the models to supporting complex verification and validation activities. In many applications, these constraints have to be solved to obtain values satisfying them, for example in model-based testing (MBT), where test data are generated for the purpose of producing executable test cases. In our previous work, we proposed novel heuristics for various OCL constructs to efficiently solve them using search algorithms. These heuristics are enhanced in this paper to further improve the performance of OCL constraint solving. We performed an empirical evaluation comprising three case studies using three search algorithms: Alternating Variable Method (AVM), (1 + 1) Evolutionary Algorithm (EA), and a Genetic Algorithm (GA); in addition, Random Search (RS) was used as a comparison baseline. In the first case study, we evaluated each heuristic using carefully designed artificial problems. In the second case study, we evaluated the heuristics on various constraints of Cisco’s Video Conferencing Systems defined to support MBT. Finally, the third case study is about the EU-Rent Car Rental specification and is obtained from the literature. The results of the empirical evaluation showed that (1 + 1) EA and AVM with the improved heuristics significantly outperform the rest of the algorithms.
    BibTeX:
    @article{AliIKA16,
      author = {Shaukat Ali and Muhammad Zohaib Iqbal and Maham Khalid and Andrea Arcuri},
      title = {Improving the Performance of OCL Constraint Solving with Novel Heuristics for Logical Operations: A Search-based Approach},
      journal = {Empirical Software Engineering},
      year = {2016},
      volume = {21},
      number = {6},
      pages = {2459-2502},
      month = {December},
      doi = {http://dx.doi.org/10.1007/s10664-015-9392-6}
    }
    					
    2015.12.08 Nazia Bibi, Zeeshan Anwar & Ali Ahsan Comparison of Search-Based Software Engineering Algorithms for Resource Allocation Optimization 2016 Journal of Intelligent Systems, Vol. 25(4), pp. 629-642, October   Article
    Abstract: A project manager balances the resource allocation using resource leveling algorithms after assigning resources to project activities. However, resource leveling does not ensure optimized allocation of resources. Furthermore, the duration and cost of a project may increase after leveling resources. The objectives of resource allocation optimization used in our research are to (i) increase resource utilization, (ii) decrease project cost, and (iii) decrease project duration. We implemented three search-based software engineering algorithms, i.e. multiobjective genetic algorithm, multiobjective particle swarm algorithm (MOPSO), and elicit nondominated sorting evolutionary strategy. Twelve experiments to optimize the resource allocation are performed on a published case study. The experimental results are analyzed and compared in the form of Pareto fronts, average Pareto fronts, percent increase in resource utilization, percent decrease in project cost, and percent decrease in project duration. The experimental results show that MOPSO is the best technique for resource optimization because after optimization with MOPSO, resource utilization is increased and the project cost and duration are reduced.
    BibTeX:
    @article{BibiAA16,
      author = {Nazia Bibi and Zeeshan Anwar and Ali Ahsan},
      title = {Comparison of Search-Based Software Engineering Algorithms for Resource Allocation Optimization},
      journal = {Journal of Intelligent Systems},
      year = {2016},
      volume = {25},
      number = {4},
      pages = {629-642},
      month = {October},
      doi = {http://dx.doi.org/10.1515/jisys-2015-0016}
    }
    					
    2017.06.27 Bogdan Marculescu, Simon Poulding, Robert Feldt, Kai Petersen & Richard Torkar Tester Interactivity Makes a Difference in Search-based Software Testing: A Controlled Experiment 2016 Information and Software Technology, Vol. 78, pp. 66-82, October   Article
    Abstract: Context: Search-based software testing promises to provide users with the ability to generate high quality test cases, and hence increase product quality, with a minimal increase in the time and effort required.
    The development of the Interactive Search-Based Software Testing (ISBST) system was motivated by a previous study to investigate the application of search-based software testing (SBST) in an industrial setting. ISBST allows users to interact with the underlying SBST system, guiding the search and assessing the results. An industrial evaluation indicated that the ISBST system could find test cases that are not created by testers employing manual techniques. The validity of the evaluation was threatened, however, by the low number of participants.

    Objective: This paper presents a follow-up study, to provide a more rigorous evaluation of the ISBST system.

    Method: To assess the ISBST system, a two-way crossover controlled experiment was conducted with 58 students taking a Verification and Validation course. The NASA Task Load Index (NASA-TLX) is used to assess the workload experienced by the participants in the experiment.

    Results: The experimental results validated the hypothesis that the ISBST system generates test cases that are not found by the same participants employing manual testing techniques. A follow-up laboratory experiment also investigated the importance of interaction in obtaining the results.
    In addition to this main result, the subjective workload was assessed for each participant by means of the NASA-TLX tool. The evaluation showed that, while the ISBST system required more effort from the participants, they achieved the same performance.

    Conclusions: The paper provides evidence that the ISBST system develops test cases that are not found by manual techniques, and that interaction plays an important role in achieving that result.

    BibTeX:
    @article{MarculescuPFPT16,
      author = {Bogdan Marculescu and Simon Poulding and Robert Feldt and Kai Petersen and Richard Torkar},
      title = {Tester Interactivity Makes a Difference in Search-based Software Testing: A Controlled Experiment},
      journal = {Information and Software Technology},
      year = {2016},
      volume = {78},
      pages = {66-82},
      month = {October},
      doi = {http://dx.doi.org/10.1016/j.infsof.2016.05.009}
    }
    					
    2016.03.09 Aurora Ramírez, José Raúl Romero & Sebastián Ventura A Comparative Study of Many-objective Evolutionary Algorithms for the Discovery of Software Architectures 2016 Empirical Software Engineering, Vol. 21(6), pp. 2546-2600, December   Article
    Abstract: During the design of complex systems, software architects have to deal with a tangle of abstract artefacts, measures and ideas to discover the most fitting underlying architecture. A common way to structure such complex systems is in terms of their interacting software components, whose composition and connections need to be properly adjusted. Along with the expected functionality, non-functional requirements are key at this stage to guide the many design alternatives to be evaluated by software architects. The appearance of Search Based Software Engineering (SBSE) brings an approach that supports the software engineer along the design process. Evolutionary algorithms can be applied to deal with the abstract and highly combinatorial optimisation problem of architecture discovery from a multiple objective perspective. The definition and resolution of many-objective optimisation problems is currently becoming an emerging challenge in SBSE, where the application of sophisticated techniques within the evolutionary computation field needs to be considered. In this paper, diverse non-functional requirements are selected to guide the evolutionary search, leading to the definition of several optimisation problems with up to 9 metrics concerning the architectural maintainability. An empirical study of the behaviour of 8 multi- and many-objective evolutionary algorithms is presented, where the quality and type of the returned solutions are analysed and discussed from the perspective of both the evolutionary performance and those aspects of interest to the expert. Results show how some many-objective evolutionary algorithms provide useful mechanisms to effectively explore design alternatives on highly dimensional objective spaces.
    BibTeX:
    @article{RamirezRV16,
      author = {Aurora Ramírez and José Raúl Romero and Sebastián Ventura},
      title = {A Comparative Study of Many-objective Evolutionary Algorithms for the Discovery of Software Architectures},
      journal = {Empirical Software Engineering},
      year = {2016},
      volume = {21},
      number = {6},
      pages = {2546-2600},
      month = {December},
      doi = {http://dx.doi.org/10.1007/s10664-015-9399-z}
    }
    					
    2016.02.02 Xiao-Ning Shen, Leandro L. Minku, Rami Bahsoon & Xin Yao Dynamic Software Project Scheduling through a Proactive-rescheduling Method 2016 IEEE Transactions on Software Engineering, Vol. 42(7), pp. 658-686, July   Article
    Abstract: Software project scheduling in dynamic and uncertain environments is of significant importance to real-world software development. Yet most studies schedule software projects by considering static and deterministic scenarios only, which may cause performance deterioration or even infeasibility when facing disruptions. In order to capture more dynamic features of software project scheduling than the previous work, this paper formulates the project scheduling problem by considering uncertainties and dynamic events that often occur during software project development, and constructs a mathematical model for the resulting Multi-objective Dynamic Project Scheduling Problem (MODPSP), where the four objectives of project cost, duration, robustness and stability are considered simultaneously under a variety of practical constraints. In order to solve MODPSP appropriately, a multi-objective evolutionary algorithm (MOEA) based proactive-rescheduling method is proposed, which generates a robust schedule predictively and adapts the previous schedule in response to critical dynamic events during the project execution. Extensive experimental results on 21 problem instances, including three instances derived from real-world software projects, show that our novel method is very effective. By introducing the robustness and stability objectives, and incorporating the dynamic optimization strategies specifically designed for MODPSP, our proactive-rescheduling method achieves a very good overall performance in a dynamic environment.
    BibTeX:
    @article{ShenMBY16,
      author = {Xiao-Ning Shen and Leandro L. Minku and Rami Bahsoon and Xin Yao},
      title = {Dynamic Software Project Scheduling through a Proactive-rescheduling Method},
      journal = {IEEE Transactions on Software Engineering},
      year = {2016},
      volume = {42},
      number = {7},
      pages = {658-686},
      month = {July},
      doi = {http://dx.doi.org/10.1109/TSE.2015.2512266}
    }
    					
    2014.05.30 Aldeida Aleti Designing Automotive Embedded Systems with Adaptive Genetic Algorithms 2015 Automated Software Engineering, Vol. 22(2), pp. 199-240, June   Article Design Tools and Techniques
    Abstract: One of the most common problems faced by planners, whether in industry or government, is optimisation—finding the optimal solution to a problem. Even a one percent improvement in a solution can make a difference of millions of dollars in some cases. Traditionally, optimisation problems are solved by analytic means or exact optimisation methods. Today, however, many optimisation problems in the design of embedded architectures involve complex combinatorial systems that make such traditional approaches unsuitable or intractable. Genetic algorithms, instead, tackle these kinds of problems by finding good solutions in a reasonable amount of time. Their successful application, however, relies on algorithm parameters which are problem dependent, and usually even depend on the problem instance at hand. To address this issue, we propose an adaptive parameter control method for genetic algorithms, which adjusts parameters during the optimisation process. The central aim of this work is to assist practitioners in solving complex combinatorial optimisation problems by adapting the optimisation strategy to the problem being solved. We present a case study from the automotive industry, which shows the efficiency and applicability of the proposed adaptive optimisation approach. The experimental evaluation indicates that the proposed approach outperforms optimisation methods with pre-tuned parameter values and three prominent adaptive parameter control techniques.
    BibTeX:
    @article{Aleti15,
      author = {Aldeida Aleti},
      title = {Designing Automotive Embedded Systems with Adaptive Genetic Algorithms},
      journal = {Automated Software Engineering},
      year = {2015},
      volume = {22},
      number = {2},
      pages = {199-240},
      month = {June},
      doi = {http://dx.doi.org/10.1007/s10515-014-0148-0}
    }
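    A toy Python sketch of the kind of feedback-based parameter control this paper builds on (the update rule, constants and the stand-in "GA step" are illustrative assumptions, not the paper's method):

    import random

    # Toy feedback-based control of the mutation rate: raise it when recent
    # mutations rarely improve fitness, lower it when they succeed often.
    def adapt_rate(rate, successes, trials, target=0.2, step=1.2, lo=1e-3, hi=0.5):
        success_ratio = successes / max(trials, 1)
        rate = rate / step if success_ratio > target else rate * step
        return min(max(rate, lo), hi)

    rate, successes, trials = 0.05, 0, 0
    for generation in range(100):
        improved = random.random() < 0.15      # stand-in for a real mutation step
        successes += improved
        trials += 1
        if trials == 10:                       # re-adapt every 10 evaluations
            rate = adapt_rate(rate, successes, trials)
            successes = trials = 0
    print(f"final mutation rate: {rate:.4f}")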
    					
    2015.11.06 Aldeida Aleti & Madalina Drugan Adaptive Neighbourhood Search for the Component Deployment Problem 2015 Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15), pp. 188-202, Bergamo Italy, 5-7 September   Inproceedings
    Abstract: Since the establishment of the area of search-based software engineering, a wide range of optimisation techniques have been applied to automate various stages of software design and development. Architecture optimisation is one of the aspects that has been automated with methods like genetic algorithms, local search, and ant colony optimisation. A key challenge with all of these approaches is to adequately set the balance between exploration of the search space and exploitation of best candidate solutions. Different settings are required for different problem instances, and even different stages of the optimisation process. To address this issue, we investigate combinations of different search operators, which focus the search on either exploration or exploitation for an efficient variable neighbourhood search method. Three variants of the variable neighbourhood search method are investigated: the first variant has a deterministic schedule, the second variant uses fixed probabilities to select a search operator, and the third method adapts the search strategy based on feedback from the optimisation process. The adaptive strategy selects an operator based on its performance in the previous iterations. Intuitively, depending on the features of the fitness landscape, at different stages of the optimisation process different search strategies would be more suitable. Hence, the feedback from the optimisation process provides useful guidance in the choice of the best search operator, as evidenced by the experimental evaluation designed with problems of different sizes and levels of difficulty to evaluate the efficiency of varying the search strategy.
    BibTeX:
    @inproceedings{AletiD15,
      author = {Aldeida Aleti and Madalina Drugan},
      title = {Adaptive Neighbourhood Search for the Component Deployment Problem},
      booktitle = {Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15)},
      publisher = {Springer},
      year = {2015},
      pages = {188-202},
      address = {Bergamo, Italy},
      month = {5-7 September},
      doi = {http://dx.doi.org/10.1007/978-3-319-22183-0_13}
    }
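    A small Python sketch of the feedback-driven operator selection in the paper's third variant: each operator is chosen with a probability proportional to its recent reward, with a floor so no operator is ever abandoned (operator names, rewards and constants below are placeholders, not the paper's):

    import random

    def roulette(probs):
        r, acc = random.random(), 0.0
        for op, p in probs.items():
            acc += p
            if r <= acc:
                return op
        return op  # guard against floating-point rounding

    operators = ["explorative_move", "exploitative_move"]  # hypothetical
    quality = {op: 1.0 for op in operators}                # running reward estimates
    p_min, alpha = 0.1, 0.3                                # floor, learning rate

    for step in range(1000):
        total = sum(quality.values())
        probs = {op: p_min + (1 - len(operators) * p_min) * q / total
                 for op, q in quality.items()}
        op = roulette(probs)
        reward = random.random() * (1.5 if op == "exploitative_move" else 1.0)
        quality[op] = (1 - alpha) * quality[op] + alpha * reward  # credit assignment

    print(max(quality, key=quality.get))   # expected: exploitative_move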
    					
    2016.03.08 Yazan A. Alsariera, Mazlina A. Majid & Kamal Z. Zamli SPLBA: An Interaction Strategy for Testing Software Product Lines using the Bat-Inspired Algorithm 2015 Proceedings of the 4th International Conference on Software Engineering and Computer Systems (ICSECS '15), pp. 148-153, Kuantan Malaysia, 19-21 August   Inproceedings Testing and Debugging
    Abstract: Software product lines (SPLs) represent an engineering method for creating a portfolio of similar software systems from a shared set of software product assets. Owing to the significant growth of SPLs, there is a need for a systematic approach to ensuring the quality of the resulting product derivatives. Combinatorial t-way testing (where t indicates the interaction strength) is known to be effective, especially when the number of features and constraints in the SPLs of interest is large. In line with the recent emergence of Search based Software Engineering (SBSE), this article presents a novel strategy for SPL test reduction using the Bat-inspired algorithm (BA), called SPLBA. Our experience with SPLBA has been promising, as the strategy performed well against existing strategies in the literature.
    BibTeX:
    @inproceedings{AlsarieraMZ15,
      author = {Yazan A. Alsariera and Mazlina A. Majid and Kamal Z. Zamli},
      title = {SPLBA: An Interaction Strategy for Testing Software Product Lines using the Bat-Inspired Algorithm},
      booktitle = {Proceedings of the 4th International Conference on Software Engineering and Computer Systems (ICSECS '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {148-153},
      address = {Kuantan, Malaysia},
      month = {19-21 August},
      doi = {http://dx.doi.org/10.1109/ICSECS.2015.7333100}
    }
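    A Python sketch of the objective behind t-way (here t = 2) SPL interaction testing: every pair of feature values must be covered by some product. For brevity this uses a simple greedy baseline rather than the paper's bat-inspired search, and the feature model (four unconstrained boolean features) is hypothetical:

    from itertools import combinations, product

    FEATURES = ["cache", "ssl", "logging", "compression"]   # hypothetical features

    def pairs(config):
        """All 2-way (feature, value) interactions a configuration covers."""
        return {frozenset(p) for p in combinations(sorted(config.items()), 2)}

    def greedy_suite(configs):
        """Greedily pick configurations until every 2-way interaction is covered."""
        universe = set().union(*(pairs(c) for c in configs))
        covered, suite = set(), []
        while covered != universe:
            best = max(configs, key=lambda c: len(pairs(c) - covered))
            suite.append(best)
            covered |= pairs(best)
        return suite

    configs = [dict(zip(FEATURES, vs)) for vs in product([0, 1], repeat=len(FEATURES))]
    print(len(greedy_suite(configs)), "of", len(configs), "products give 2-way coverage")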
    					
    2016.03.08 Wesley Klewerton Guez Assunção Search-Based Migration of Model Variants to Software Product Line Architectures 2015 Proceedings of IEEE/ACM 37th IEEE International Conference on Software Engineering (ICSE '15), pp. 895-898, Florence Italy, 16-24 May   Inproceedings
    Abstract: Software Product Lines (SPLs) are families of related software systems developed for specific market segments or domains. Commonly, SPLs emerge from sets of existing variants when their individual maintenance becomes infeasible. However, current approaches for SPL migration do not support design models, are partially automated, or do not reflect constraints from SPL domains. To tackle these limitations, the goal of this doctoral research plan is to propose an automated approach to the SPL migration process at the design level. This approach consists of three phases: detection, analysis and transformation. It uses as input the class diagrams and lists of features for each system variant, and relies on search-based algorithms to create a product line architecture that best captures the variability present in the variants. Our expected contribution is to support the adoption of SPL practices in companies that face the scenario of migrating variants to SPLs.
    BibTeX:
    @inproceedings{Assuncao15,
      author = {Wesley Klewerton Guez Assunção},
      title = {Search-Based Migration of Model Variants to Software Product Line Architectures},
      booktitle = {Proceedings of IEEE/ACM 37th IEEE International Conference on Software Engineering (ICSE '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {895-898},
      address = {Florence, Italy},
      month = {16-24 May},
      doi = {http://dx.doi.org/10.1109/ICSE.2015.286}
    }
    					
    2015.08.07 Wesley Klewerton Guez Assunção, Roberto E. Lopez-Herrejon, Lukas Linsbauer, Silvia Regina Vergilio & Alexander Egyed Extracting Variability-Safe Feature Models from Source Code Dependencies in System Variants 2015 Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15), pp. 1303-1310, Madrid Spain, 11-15 July   Inproceedings
    Abstract: To effectively cope with increasing customization demands, companies that have developed variants of software systems are faced with the challenge of consolidating all the variants into a Software Product Line, a proven development paradigm capable of handling such demands. A crucial step in this challenge is to reverse engineer feature models that capture all the required feature combinations of each system variant. Current research has explored this task using propositional logic, natural language, and search-based techniques. However, using knowledge from the implementation artifacts for the reverse engineering task has not been studied. We propose a multi-objective approach that not only uses standard precision and recall metrics for the combinations of features but that also considers variability-safety, i.e. the property that, based on structural dependencies among elements of implementation artifacts, asserts whether all feature combinations of a feature model are in fact well-formed software systems. We evaluate our approach with five case studies and highlight its benefits for the software engineer.
    BibTeX:
    @inproceedings{AssuncaoLLVE15,
      author = {Wesley Klewerton Guez Assunção and Roberto E. Lopez-Herrejon and Lukas Linsbauer and Silvia Regina Vergilio and Alexander Egyed},
      title = {Extracting Variability-Safe Feature Models from Source Code Dependencies in System Variants},
      booktitle = {Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15)},
      publisher = {ACM},
      year = {2015},
      pages = {1303-1310},
      address = {Madrid, Spain},
      month = {11-15 July},
      doi = {http://dx.doi.org/10.1145/2739480.2754720}
    }
    					
    2016.02.02 Earl T. Barr, Mark Harman, Yue Jia, Alexandru Marginean & Justyna Petke Automated Software Transplantation 2015 Proceedings of the 2015 International Symposium on Software Testing and Analysis (ISSTA '15), pp. 257-269, Baltimore USA, 14-17 July   Inproceedings
    Abstract: Automated transplantation would open many exciting avenues for software development: suppose we could autotransplant code from one system into another, entirely unrelated, system. This paper introduces a theory, an algorithm, and a tool that achieve this. Leveraging lightweight annotation, program analysis identifies an organ (interesting behavior to transplant); testing validates that the organ exhibits the desired behavior during its extraction and after its implantation into a host. While we do not claim automated transplantation is now a solved problem, our results are encouraging: we report that in 12 of 15 experiments, involving 5 donors and 3 hosts (all popular real-world systems), we successfully autotransplanted new functionality and passed all regression tests. Autotransplantation is also already useful: in 26 hours of computation time we successfully autotransplanted the H.264 video encoding functionality from the x264 system to the VLC media player; compare this to upgrading x264 within VLC, a task that we estimate, from VLC's version history, took human programmers an average of 20 days of elapsed, as opposed to dedicated, time.
    BibTeX:
    @inproceedings{BarrHJMP15,
      author = {Earl T. Barr and Mark Harman and Yue Jia and Alexandru Marginean and Justyna Petke},
      title = {Automated Software Transplantation},
      booktitle = {Proceedings of the 2015 International Symposium on Software Testing and Analysis (ISSTA '15)},
      publisher = {ACM},
      year = {2015},
      pages = {257-269},
      address = {Baltimore, USA},
      month = {14-17 July},
      doi = {http://dx.doi.org/10.1145/2771783.2771796}
    }
    					
    2015.02.05 Márcio de Oliveira Barros, Fábio de Almeida Farzat & Guilherme Horta Travassos Learning from Optimization: A Case Study with Apache Ant 2015 Information and Software Technology, Vol. 57, pp. 684-704, January   Article
    Abstract: Context: Software architecture degrades when changes violating the design-time architectural intents are imposed on the software throughout its life cycle. Such phenomenon is called architecture erosion. When changes are not controlled, erosion makes maintenance harder and negatively affects software evolution. Objective: To study the effects of architecture erosion on a large software project and determine whether search-based module clustering might reduce the conceptual distance between the current architecture and the design-time one. Method: To run an exploratory study with Apache Ant. First, we characterize Ant’s evolution in terms of size, change dispersion, cohesion, and coupling metrics, highlighting the potential introduction of architecture and code-level problems that might affect the cost of changing the system. Then, we reorganize the distribution of Ant’s classes using a heuristic search approach, intending to re-emerge its design-time architecture. Results: In characterizing the system, we observed that its original, simple design was lost due to maintenance and the addition of new features. In optimizing its architecture, we found that current models used to drive search-based software module clustering produce complex designs, which maximize the characteristics driving optimization while producing class distributions that would hardly be acceptable to developers maintaining Ant. Conclusion: The structural perspective promoted by the coupling and cohesion metrics precludes observing the adequate software module clustering from the perspective of software engineers when considering a large open source system. Our analysis adds evidence to the criticism of the dogma of driving design towards high cohesion and low coupling, at the same time observing the need for better models to drive design decisions. Apart from that, we see SBSE as a learning tool, allowing researchers to test Software Engineering models in extreme situations that would not be easily found in software projects.
    BibTeX:
    @article{BarrosdT15,
      author = {Márcio de Oliveira Barros and Fábio de Almeida Farzat and Guilherme Horta Travassos},
      title = {Learning from Optimization: A Case Study with Apache Ant},
      journal = {Information and Software Technology},
      year = {2015},
      volume = {57},
      pages = {684-704},
      month = {January},
      doi = {http://dx.doi.org/10.1016/j.infsof.2014.07.015}
    }
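    A Python sketch of the cohesion/coupling trade-off that drives search-based module clustering of the kind this paper critiques: each cluster is rewarded for internal dependencies and penalised for crossing ones, in the style of a simplified modularisation-quality score (the dependency graph is made up, and this is an illustration rather than the paper's exact model):

    DEPS = {("A", "B"), ("B", "A"), ("C", "D"), ("A", "C")}   # hypothetical class deps

    def cluster_factor(cluster, deps):
        intra = sum(1 for s, t in deps if s in cluster and t in cluster)
        inter = sum(1 for s, t in deps if (s in cluster) != (t in cluster))
        return 0.0 if intra == 0 else intra / (intra + inter / 2)

    def fitness(clustering, deps):
        return sum(cluster_factor(c, deps) for c in clustering)

    print(fitness([{"A", "B"}, {"C", "D"}], DEPS))   # cohesive split scores higher
    print(fitness([{"A", "C"}, {"B", "D"}], DEPS))   # scattered split scores lower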
    					
    2015.11.06 Yi Bian, Serkan Kirbas, Mark Harman, Yue Jia & Zheng Li Regression Test Case Prioritisation for Guava 2015 Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15), pp. 221-227, Bergamo Italy, 5-7 September   Inproceedings Testing and Debugging
    Abstract: We present a three objective formulation of regression test prioritisation. Our formulation involves the well-known, and widely-used objectives of Average Percentage of Statement Coverage (APSC) and Effective Execution Time (EET). However, we additionally include the Average Percentage of Change Coverage (APCC), which has not previously been used in search-based regression test optimisation. We apply our approach to prioritise the base and the collection package of the Guava project, which contains over 26,815 test cases. Our results demonstrate the value of search-based test case prioritisation: the sequences we find require only 0.2 % of the 26,815 test cases and only 0.45 % of their effective execution time. However, we find solutions that achieve more than 99.9 % of both regression testing objectives; covering both changed code and existing code. We also investigate the tension between these two objectives for Guava.
    BibTeX:
    @inproceedings{BianKHJL15,
      author = {Yi Bian and Serkan Kirbas and Mark Harman and Yue Jia and Zheng Li},
      title = {Regression Test Case Prioritisation for Guava},
      booktitle = {Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15)},
      publisher = {Springer},
      year = {2015},
      pages = {221-227},
      address = {Bergamo, Italy},
      month = {5-7 September},
      doi = {http://dx.doi.org/10.1007/978-3-319-22183-0_15}
    }
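    A Python sketch of the Average Percentage of Statement Coverage (APSC) objective named above, which rewards orderings that reach full coverage early (the coverage data is invented, and charging uncovered statements the full suite length is a simplifying assumption of this sketch):

    def apsc(order, coverage, num_statements):
        """order: test ids in execution order; coverage: test id -> set of stmts."""
        n, m = len(order), num_statements
        first_hit = {}
        for position, test in enumerate(order, start=1):
            for stmt in coverage[test]:
                first_hit.setdefault(stmt, position)
        total = sum(first_hit.get(s, n) for s in range(m))
        return 1 - total / (n * m) + 1 / (2 * n)

    coverage = {"t1": {0, 1}, "t2": {2}, "t3": {0, 1, 2, 3}}   # hypothetical
    print(apsc(["t3", "t1", "t2"], coverage, 4))   # ~0.83: broad test runs first
    print(apsc(["t1", "t2", "t3"], coverage, 4))   # ~0.58: coverage arrives late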
    					
    2015.12.09 Sumon Biswas, M.S. Kaiser & S.A. Mamun Applying Ant Colony Optimization in Software Testing to Generate Prioritized Optimal Path and Test Data 2015 Proceedings of International Conference on Electrical Engineering and Information Communication Technology (ICEEICT '15), pp. 1-6, Dhaka Bangladesh, 21-23 May   Inproceedings Testing and Debugging
    Abstract: Software testing is one of the most important parts of the software development lifecycle. Among the various software testing approaches, structural testing is widely used. Structural testing can be improved largely by traversing all possible code paths of the software. The genetic algorithm is the most widely used search technique to automate path testing and test case generation. Recently, different novel search based optimization techniques such as Ant Colony Optimization (ACO), Artificial Bee Colony (ABC), Artificial Immune System (AIS), and Particle Swarm Optimization (PSO) have been applied to generate optimal paths for complete software coverage. In this paper, an ant colony optimization (ACO) based algorithm is proposed which generates a set of optimal paths and prioritizes the paths. Additionally, the approach generates a test data sequence within the domain to use as inputs for the generated paths. The proposed approach guarantees full software coverage with minimum redundancy. The paper also demonstrates the proposed approach by applying it to a program module.
    BibTeX:
    @inproceedings{BiswasKM15,
      author = {Sumon Biswas and M. S. Kaiser and S. A. Mamun},
      title = {Applying Ant Colony Optimization in Software Testing to Generate Prioritized Optimal Path and Test Data},
      booktitle = {Proceedings of International Conference on Electrical Engineering and Information Communication Technology (ICEEICT '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {1-6},
      address = {Dhaka, Bangladesh},
      month = {21-23 May},
      doi = {http://dx.doi.org/10.1109/ICEEICT.2015.7307500}
    }
    					
    2015.11.06 Mahmoud A. Bokhari, Thorsten Bormer & Markus Wagner An Improved Beam-Search for the Test Case Generation for Formal Verification Systems 2015 Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15), pp. 77-92, Bergamo Italy, 5-7 September   Inproceedings
    Abstract: The correctness of software verification systems is vital, since they are used to confirm that safety and security critical software systems satisfy their requirements. Modern verification systems need to understand their target software, which can be done by using an axiomatization base. It captures the semantics of the programming language used for writing the target software. To ensure their correctness, it is necessary to validate both parts: the implementation and the axiomatization base. As a result, it is essential to increase the axiom coverage in order to verify its correctness. However, creating test cases manually is a time consuming and difficult task even for verification engineers. We present a beam search approach to automatically generate test cases by modifying existing test cases as well as a comparison between axiomatization and code coverage. Our results show that the overall coverage of the existing test suite can be improved by more than 20 %. In addition, our approach explores the search space more efficiently than existing ones.
    BibTeX:
    @inproceedings{BokhariBW15,
      author = {Mahmoud A. Bokhari and Thorsten Bormer and Markus Wagner},
      title = {An Improved Beam-Search for the Test Case Generation for Formal Verification Systems},
      booktitle = {Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15)},
      publisher = {Springer},
      year = {2015},
      pages = {77-92},
      address = {Bergamo, Italy},
      month = {5-7 September},
      doi = {http://dx.doi.org/10.1007/978-3-319-22183-0_6}
    }
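    A generic beam-search skeleton in Python of the kind the paper adapts to mutate existing test cases: expand every state in the beam, keep only the `width` best successors, repeat (the toy bit-flip neighbourhood and scoring below are placeholders for test-case mutation and axiom coverage):

    def beam_search(start, neighbours, score, width=3, depth=5):
        beam, best = [start], start
        for _ in range(depth):
            pool = [cand for state in beam for cand in neighbours(state)]
            if not pool:
                break
            beam = sorted(pool, key=score, reverse=True)[:width]  # prune to beam
            best = max([best, beam[0]], key=score)
        return best

    def flip_neighbours(s):
        """All strings obtained by flipping one bit of s."""
        return [s[:i] + ("1" if s[i] == "0" else "0") + s[i + 1:] for i in range(len(s))]

    print(beam_search("00000", flip_neighbours, score=lambda s: s.count("1")))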
    					
    2016.03.08 Mohamed Boussaa, Olivier Barais, Gerson Sunyé & Benoît Baudry A Novelty Search Approach for Automatic Test Data Generation 2015 IEEE/ACM 8th International Workshop on Search-Based Software Testing (SBST '15), pp. 40-43, Florence Italy, 18-19 May   Inproceedings Testing and Debugging
    Abstract: In search-based structural testing, metaheuristic search techniques have been frequently used to automate test data generation. In Genetic Algorithms (GAs), for example, test data are rewarded on the basis of an objective function that generally represents the number of statements or branches covered. However, owing to the wide diversity of possible test data values, it is hard to find the set of test data that can satisfy a specific coverage criterion. In this paper, we introduce the use of the Novelty Search (NS) algorithm for the test data generation problem based on a statement coverage criterion. We believe that such an approach to test data generation is attractive because it allows the exploration of the huge space of test data within the input domain. In this approach, we seek to explore the search space without regard to any objectives. In fact, instead of having a fitness-based selection, we select test cases based on a novelty score showing how different they are compared to all other solutions evaluated so far.
    BibTeX:
    @inproceedings{BoussaaBSB15,
      author = {Mohamed Boussaa and Olivier Barais and Gerson Sunyé and Benoît Baudry},
      title = {A Novelty Search Approach for Automatic Test Data Generation},
      booktitle = {IEEE/ACM 8th International Workshop on Search-Based Software Testing (SBST '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {40-43},
      address = {Florence, Italy},
      month = {18-19 May},
      doi = {http://dx.doi.org/10.1109/SBST.2015.17}
    }
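    A Python sketch of the novelty score at the heart of the approach: a candidate is rewarded for its mean distance to its k nearest neighbours among previously seen solutions, not for coverage achieved (the two-dimensional behaviour descriptors below are invented):

    def euclidean(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def novelty(candidate, archive, k=3):
        """Mean distance to the k nearest archive members (higher = more novel)."""
        if not archive:
            return float("inf")
        nearest = sorted(euclidean(candidate, other) for other in archive)[:k]
        return sum(nearest) / len(nearest)

    archive = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]   # hypothetical past behaviours
    print(novelty((0.5, 0.5), archive))   # near the archive: low novelty
    print(novelty((5.0, 5.0), archive))   # far from everything: high novelty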
    					
    2015.08.07 Bobby R. Bruce Energy Optimisation via Genetic Improvement: A SBSE technique for a new era in Software Development 2015 Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15) - the 1st International Genetic Improvement Workshop (GI '15), pp. 819-820, Madrid Spain, 12-12 July   Inproceedings
    BibTeX:
    @inproceedings{Bruce15,
      author = {Bobby R. Bruce},
      title = {Energy Optimisation via Genetic Improvement: A SBSE technique for a new era in Software Development},
      booktitle = {Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15) - the 1st International Genetic Improvement Workshop (GI '15)},
      publisher = {ACM},
      year = {2015},
      pages = {819-820},
      address = {Madrid, Spain},
      month = {12-12 July},
      doi = {http://dx.doi.org/10.1145/2739482.2768420}
    }
    					
    2015.08.07 Bobby R. Bruce, Justyna Petke & Mark Harman Reducing Energy Consumption using Genetic Improvement 2015 Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15), pp. 1327-1334, Madrid Spain, 11-15 July   Inproceedings
    Abstract: Genetic Improvement (GI) is an area of Search Based Software Engineering which seeks to improve software's non-functional properties by treating program code as if it were genetic material which is then evolved to produce more optimal solutions. Hitherto, the majority of focus has been on optimising a program's execution time which, though important, is only one of many non-functional targets. The growth in mobile computing, cloud computing infrastructure, and ecological concerns are forcing developers to focus on the energy their software consumes. We report on investigations into using GI to automatically find more energy efficient versions of the MiniSAT Boolean satisfiability solver when specialising for three downstream applications. Our results find that GI can successfully be used to reduce energy consumption by up to 25%.
    BibTeX:
    @inproceedings{BrucePH15,
      author = {Bobby R. Bruce and Justyna Petke and Mark Harman},
      title = {Reducing Energy Consumption using Genetic Improvement},
      booktitle = {Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15)},
      publisher = {ACM},
      year = {2015},
      pages = {1327-1334},
      address = {Madrid, Spain},
      month = {11-15 July},
      doi = {http://dx.doi.org/10.1145/2739480.2754752}
    }
    					
    2015.11.06 Nathan Burles, Edward Bowles, Alexander E.I. Brownlee, Zoltan A. Kocsis, Jerry Swan & Nadarajen Veerapen Object-Oriented Genetic Improvement for Improved Energy Consumption in Google Guava 2015 Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15), pp. 255-261, Bergamo Italy, 5-7 September   Inproceedings
    Abstract: In this work we use metaheuristic search to improve Google’s Guava library, finding a semantically equivalent version of com.google.common.collect.ImmutableMultimap with reduced energy consumption. Semantics-preserving transformations are found in the source code, using the principle of subtype polymorphism. We introduce a new tool, Opacitor, to deterministically measure the energy consumption, and find that a statistically significant reduction to Guava’s energy consumption is possible. We corroborate these results using Jalen, and evaluate the performance of the metaheuristic search compared to an exhaustive search—finding that the same result is achieved while requiring almost 200 times fewer fitness evaluations. Finally, we compare the metaheuristic search to an independent exhaustive search at each variation point, finding that the metaheuristic has superior performance.
    BibTeX:
    @inproceedings{BurlesBBKSV15,
      author = {Nathan Burles and Edward Bowles and Alexander E. I. Brownlee and Zoltan A. Kocsis and Jerry Swan and Nadarajen Veerapen},
      title = {Object-Oriented Genetic Improvement for Improved Energy Consumption in Google Guava},
      booktitle = {Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15)},
      publisher = {Springer},
      year = {2015},
      pages = {255-261},
      address = {Bergamo, Italy},
      month = {5-7 September},
      doi = {http://dx.doi.org/10.1007/978-3-319-22183-0_20}
    }
    					
    2015.11.06 Nathan Burles, Edward Bowles, Bobby R. Bruce & Komsan Srivisut Specialising Guava’s Cache to Reduce Energy Consumption 2015 Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15), pp. 276-281, Bergamo Italy, 5-7 September   Inproceedings
    Abstract: In this article we use a Genetic Algorithm to perform parameter tuning on Google Guava’s Cache library, specialising it to OpenTripPlanner. A new tool, Opacitor, is used to deterministically measure the energy consumed, and we find that the energy consumption of OpenTripPlanner may be significantly reduced by tuning the default parameters of Guava’s Cache library. Finally we use Jalen, which uses time and CPU utilisation as a proxy to calculate energy consumption, to corroborate these results.
    BibTeX:
    @inproceedings{BurlesBBS15,
      author = {Nathan Burles and Edward Bowles and Bobby R. Bruce and Komsan Srivisut},
      title = {Specialising Guava’s Cache to Reduce Energy Consumption},
      booktitle = {Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15)},
      publisher = {Springer},
      year = {2015},
      pages = {276-281},
      address = {Bergamo, Italy},
      month = {5-7 September},
      doi = {http://dx.doi.org/10.1007/978-3-319-22183-0_23}
    }
    					
    2015.08.07 Nathan Burles, Jerry Swan, Edward Bowles, Alexander E.I. Brownlee, Zoltan A. Kocsis & Nadarajen Veerapen Embedded Dynamic Improvement 2015 Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15) - the 1st International Genetic Improvement Workshop (GI '15), pp. 831-832, Madrid Spain, 12-12 July   Inproceedings
    Abstract: We discuss the useful role that can be played by a subtype of improvement programming, which we term `Embedded Dynamic Improvement'. In this approach, developer-specified variation points define the scope of improvement. A search framework is embedded at these variation points, facilitating the creation of adaptive software that can respond online to changes in its execution environment.
    BibTeX:
    @inproceedings{BurlesSBBKV15,
      author = {Nathan Burles and Jerry Swan and Edward Bowles and Alexander E. I. Brownlee and Zoltan A. Kocsis and Nadarajen Veerapen},
      title = {Embedded Dynamic Improvement},
      booktitle = {Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15) - the 1st International Genetic Improvement Workshop (GI '15)},
      publisher = {ACM},
      year = {2015},
      pages = {831-832},
      address = {Madrid, Spain},
      month = {12-12 July},
      doi = {http://dx.doi.org/10.1145/2739482.2768423}
    }
    					
    2015.12.08 Maxim Buzdalov & Anatoly Shalyto Hard Test Generation for Augmenting Path Maximum Flow Algorithms using Genetic Algorithms: Revisited 2015 Proceedings of IEEE Congress on Evolutionary Computation (CEC '15), pp. 2121-2128, Sendai Japan, 25-28 May   Inproceedings
    Abstract: To estimate the performance of computer science algorithms reliably, one has to create worst-case execution time tests. For certain algorithms this task can be difficult. To reduce the amount of human effort, researchers have attempted to use search-based optimization techniques, such as genetic algorithms. Our previous paper addressed test generation for several maximum flow algorithms. Genetic algorithms were applied for test generation and showed promising results. However, one aspect of maximum flow algorithm implementation was missing in that paper: parallel edges (edges which share source and target vertices) were not merged into one single edge (which is allowed in solving maximum flow problems). In this paper, parallel edge merging is implemented and new results are reported. Surprisingly, the fitness functions and choices of genetic operators which were the most efficient in the previous paper are much less efficient in the new setup, and vice versa. What is more, the set of maximum flow algorithms for which significantly better tests are generated changed completely as well.
    BibTeX:
    @inproceedings{BuzdalovS15,
      author = {Maxim Buzdalov and Anatoly Shalyto},
      title = {Hard Test Generation for Augmenting Path Maximum Flow Algorithms using Genetic Algorithms: Revisited},
      booktitle = {Proceedings of IEEE Congress on Evolutionary Computation (CEC '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {2121-2128},
      address = {Sendai, Japan},
      month = {25-28 May},
      doi = {http://dx.doi.org/10.1109/CEC.2015.7257146}
    }
    					
    2015.11.06 José Campos, Gordon Fraser, Andrea Arcuri & Rui Abreu Continuous Test Generation on Guava 2015 Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15), pp. 228-234, Bergamo Italy, 5-7 September   Inproceedings Testing and Debugging
    Abstract: Search-based testing can be applied to automatically generate unit tests that achieve high levels of code coverage on object-oriented classes. However, test generation takes time, in particular if projects consist of many classes, like in the case of the Guava library. To allow search-based test generation to scale up and to integrate it better into software development, continuous test generation applies test generation incrementally during continuous integration. In this paper, we report on the application of continuous test generation with EvoSuite at the SSBSE’15 challenge on the Guava library. Our results show that continuous test generation reduces the time spent on automated test generation by 96 %, while increasing code coverage by 13.9 % on average.
    BibTeX:
    @inproceedings{CamposFAA15,
      author = {José Campos and Gordon Fraser and Andrea Arcuri and Rui Abreu},
      title = {Continuous Test Generation on Guava},
      booktitle = {Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15)},
      publisher = {Springer},
      year = {2015},
      pages = {228-234},
      address = {Bergamo, Italy},
      month = {5-7 September},
      doi = {http://dx.doi.org/10.1007/978-3-319-22183-0_16}
    }
    					
    2016.03.09 Alexander Chatzigeorgiou, Apostolos Ampatzoglou, Areti Ampatzoglou & Theodoros Amanatidis Estimating the Breaking Point for Technical Debt 2015 Proceedings of IEEE 7th International Workshop on Managing Technical Debt (MTD '15), pp. 53-56, Bremen Germany, 2-2 October   Inproceedings
    Abstract: In classic economics, when borrowing an amount of money that creates a debt to the issuer, it is not usual for the interest to become larger than the principal. In the context of technical debt, however, accumulated debt in the form of interest can in some cases quickly sum up to an amount that, at some point, becomes larger than the effort required to repay the initial amount of technical debt. In this paper we propose an approach for estimating this breaking point. Anticipating how late the breaking point is expected to come can support decision making with respect to investments in improving quality. The approach is based on a search-based optimization tool that is capable of identifying the distance of an actual object-oriented design from the corresponding optimum one.
    BibTeX:
    @inproceedings{ChatzigeorgiouAAA15,
      author = {Alexander Chatzigeorgiou and Apostolos Ampatzoglou and Areti Ampatzoglou and Theodoros Amanatidis},
      title = {Estimating the Breaking Point for Technical Debt},
      booktitle = {Proceedings of IEEE 7th International Workshop on Managing Technical Debt (MTD '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {53-56},
      address = {Bremen, Germany},
      month = {2-2 October},
      doi = {http://dx.doi.org/10.1109/MTD.2015.7332625}
    }
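    A deliberately simplified, linear-interest worked example of the breaking-point idea in Python (the numbers are hypothetical and this is an illustration of the arithmetic only, not the paper's search-based estimation procedure):

    import math

    # If repaying the debt now costs a principal P and carrying it costs a fixed
    # interest i per period, cumulative interest first strictly exceeds the
    # principal after floor(P / i) + 1 periods.
    def breaking_point(principal, interest_per_period):
        return math.floor(principal / interest_per_period) + 1

    # Hypothetical figures: 40 person-hours to refactor now, 2.5 extra
    # person-hours of maintenance per release while the debt remains.
    print(breaking_point(40.0, 2.5))   # -> 17: repaying pays off from release 17 on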
    					
    2015.02.05 José M. Chaves-González & Miguel A. Pérez-Toledano Differential Evolution with Pareto Tournament for the Multi-objective Next Release Problem 2015 Applied Mathematics and Computation, Vol. 252, pp. 1-13, February   Article Requirements/Specifications
    Abstract: Software requirements selection is the engineering process in which the set of new requirements which will be included in the next release of a software product are chosen. This NP-hard problem is an important issue involving several contradictory objectives that have to be tackled by software companies when developing new releases of software packages. Software projects have to stick to a budget, but they also have to cover the highest number of customer requirements. Furthermore, in real instances of the problem, the requirements tackled suffer interactions and other restrictions which complicate the problem. In this paper, we use an adapted multi-objective version of the differential evolution (DE) evolutionary algorithm which has been successfully applied to several real instances of the problem. For doing this, the software requirements selection problem has been formulated as a multiobjective optimization problem with two objectives: the total software development cost and the overall customer’s satisfaction, and with three interaction constraints. On the other hand, the original DE algorithm has been adapted to solve real instances of the problem generated from data provided by experts. Numerical experiments with case studies on software requirements selection have been carried out to demonstrate the effectiveness of the multiobjective proposal and the obtained results show that the developed algorithm performs better than other relevant algorithms previously published in the literature under a set of public datasets.
    BibTeX:
    @article{Chaves-GonzalezP15,
      author = {José M. Chaves-González and Miguel A. Pérez-Toledano},
      title = {Differential Evolution with Pareto Tournament for the Multi-objective Next Release Problem},
      journal = {Applied Mathematics and Computation},
      year = {2015},
      volume = {252},
      pages = {1-13},
      month = {February},
      doi = {http://dx.doi.org/10.1016/j.amc.2014.11.093}
    }
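    A Python sketch of the bi-objective next-release evaluation described above: a candidate subset of requirements is scored on total cost and weighted customer satisfaction after being repaired against an implication ("requires") interaction (all data and the repair rule below are illustrative assumptions, not the paper's instances):

    COST = [4, 2, 5, 7, 1]                       # cost per requirement
    VALUE = [[3, 0, 5, 1, 2],                    # customer x requirement scores
             [1, 4, 2, 0, 3]]
    WEIGHT = [0.6, 0.4]                          # customer importance
    REQUIRES = [(2, 0)]                          # requirement 2 needs requirement 0

    def repair(selected):
        """Add implied requirements until the 'requires' interaction holds."""
        changed = True
        while changed:
            changed = False
            for a, b in REQUIRES:
                if a in selected and b not in selected:
                    selected.add(b)
                    changed = True
        return selected

    def evaluate(selected):
        selected = repair(set(selected))
        total_cost = sum(COST[r] for r in selected)
        satisfaction = sum(w * sum(v[r] for r in selected)
                           for w, v in zip(WEIGHT, VALUE))
        return total_cost, satisfaction          # minimise cost, maximise satisfaction

    print(evaluate({2, 4}))   # requirement 0 is pulled in by the interaction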
    					
    2016.02.27 José M. Chaves-González, Miguel A. Pérez-Toledano & Amparo Navasa Teaching Learning based Optimization with Pareto Tournament for the Multiobjective Software Requirements Selection 2015 Engineering Applications of Artificial Intelligence, Vol. 43, pp. 89-101, August   Article
    Abstract: Software requirements selection is a problem which consists of choosing the set of new requirements which will be included in the next release of a software package. This NP-hard problem is an important issue involving several contradictory objectives which have to be tackled by software companies when developing new releases of software packages. Software projects have to stick to a budget, but they also have to satisfy the highest number of customer requirements. Furthermore, when managing real instances of the problem, the requirements tackled suffer interactions and other restrictions which make the problem even harder. In this paper, a novel multi-objective teaching learning based optimization (TLBO) algorithm has been successfully applied to several instances of the problem. For doing this, the software requirements selection problem has been formulated as a multiobjective optimization problem with two objectives: the total software development cost and the overall customer's satisfaction. In addition, three interaction constraints have also been managed. In this context, the original TLBO algorithm has been adapted to solve real instances of the problem generated from data provided by experts. Numerical experiments with case studies on software requirements selection have been carried out in order to prove the effectiveness of the multiobjective proposal. In fact, the obtained results show that the developed algorithm performs better than other relevant algorithms previously published in the literature.
    BibTeX:
    @article{Chaves-GonzalezPN15,
      author = {José M. Chaves-González and Miguel A. Pérez-Toledano and Amparo Navasa},
      title = {Teaching Learning based Optimization with Pareto Tournament for the Multiobjective Software Requirements Selection},
      journal = {Engineering Applications of Artificial Intelligence},
      year = {2015},
      volume = {43},
      pages = {89-101},
      month = {August},
      doi = {http://dx.doi.org/10.1016/j.engappai.2015.04.002}
    }
    					
    2016.02.27 José M. Chaves-González, Miguel A. Pérez-Toledano & Amparo Navasa Software Requirement Optimization using a Multiobjective Swarm Intelligence Evolutionary Algorithm 2015 Knowledge-Based Systems, Vol. 83, pp. 105-115, July   Article
    Abstract: The selection of the new requirements which should be included in the development of the release of a software product is an important issue for software companies. This problem is known in the literature as the Next Release Problem (NRP). It is an NP-hard problem which simultaneously addresses two apparently contradictory objectives: the total cost of including the selected requirements in the next release of the software package, and the overall satisfaction of a set of customers who have different opinions about the priorities which should be given to the requirements, and also have different levels of importance within the company. Moreover, in the case of managing real instances of the problem, the proposed solutions have to satisfy certain interaction constraints which arise among some requirements. In this paper, the NRP is formulated as a multiobjective optimization problem with two objectives (cost and satisfaction) and three constraints (types of interactions). A multiobjective swarm intelligence metaheuristic is proposed to solve two real instances generated from data provided by experts. Analysis of the results showed that the proposed algorithm can efficiently generate high quality solutions. These were evaluated by comparing them with different proposals (in terms of multiobjective metrics). The results generated by the present approach surpass those generated in other relevant work in the literature (e.g. our technique can obtain a HV of over 60% for the most complex dataset managed, while the other approaches published cannot obtain an HV of more than 40% for the same dataset).
    BibTeX:
    @article{Chaves-GonzalezPNb15,
      author = {José M. Chaves-González and Miguel A. Pérez-Toledano and Amparo Navasa},
      title = {Software Requirement Optimization using a Multiobjective Swarm Intelligence Evolutionary Algorithm},
      journal = {Knowledge-Based Systems},
      year = {2015},
      volume = {83},
      pages = {105-115},
      month = {July},
      doi = {http://dx.doi.org/10.1016/j.knosys.2015.03.012}
    }
    					
    2016.02.17 Tsong Yueh Chen, Fei-Ching Kuo, Dave Towey & Zhi Quan Zhou A Revisit of Three Studies related to Random Testing 2015 Science China Information Sciences, Vol. 58(5), pp. 1-9, May   Article Testing and Debugging
    Abstract: Software testing is an approach that ensures the quality of software through execution, with a goal being to reveal failures and other problems as quickly as possible. Test case selection is a fundamental issue in software testing, and has generated a large body of research, especially with regards to the effectiveness of random testing (RT), where test cases are randomly selected from the software’s input domain. In this paper, we revisit three of our previous studies. The first study investigated a sufficient condition for partition testing (PT) to outperform RT, and was motivated by various controversial and conflicting results suggesting that sometimes PT performed better than RT, and sometimes the opposite. The second study aimed at enhancing RT itself, and was motivated by the fact that RT continues to be a fundamental and popular testing technique. This second study enhanced RT fault detection effectiveness by making use of the common observation that failure-causing inputs tend to cluster together, and resulted in a new family of RT techniques: adaptive random testing (ART), which is random testing with an even spread of test cases across the input domain. Following the successful use of failure-causing region contiguity insights to develop ART, we conducted a third study on how to make use of other characteristics of failure-causing inputs to develop more effective test case selection strategies. This third study revealed how best to approach testing strategies when certain characteristics of the failure-causing inputs are known, and produced some interesting and important results. In revisiting these three previous studies, we explore their unexpected commonalities, and identify diversity as a key concept underlying their effectiveness. This observation further prompted us to examine whether or not such a concept plays a role in other areas of software testing, and our conclusion is that, yes, diversity appears to be one of the most important concepts in the field of software testing.
    BibTeX:
    @article{ChenKTZ15,
      author = {Tsong Yueh Chen and Fei-Ching Kuo and Dave Towey and Zhi Quan Zhou},
      title = {A Revisit of Three Studies related to Random Testing},
      journal = {Science China Information Sciences},
      year = {2015},
      volume = {58},
      number = {5},
      pages = {1-9},
      month = {May},
      doi = {http://dx.doi.org/10.1007/s11432-015-5314-x}
    }
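    A Python sketch of fixed-size-candidate-set adaptive random testing (ART), the family of techniques this paper revisits: each new test is the random candidate farthest from all previously executed tests, spreading tests evenly across the input domain (here a unit square, purely for illustration):

    import random

    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def next_test(executed, dim=2, pool_size=10):
        """Pick the candidate maximising its distance to the nearest executed test."""
        pool = [tuple(random.random() for _ in range(dim)) for _ in range(pool_size)]
        if not executed:
            return pool[0]
        return max(pool, key=lambda c: min(distance(c, e) for e in executed))

    executed = []
    for _ in range(5):
        executed.append(next_test(executed))
    print(executed)   # five tests spread out over the unit square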
    					
    2016.02.17 Brendan Cody-Kenny Genetic Programming Bias with Software Performance Analysis 2015 PhD Thesis, University of Dublin, January   Phdthesis
    Abstract: The complexities of modern software systems make their engineering costly and time consuming. This thesis explores and develops techniques to improve software by automating re-design. Source code can be randomly modified and subsequently tested for correctness to search for improvements in existing software. By iteratively selecting useful programs for modification, a randomised search of program variants can be guided toward improved programs. Genetic Programming (GP) is a search algorithm which crucially relies on selection to guide the evolution of programs. Applying GP to software improvement represents a scalability challenge given the number of possible modification locations in even the smallest of programs. The problem addressed in this thesis is locating performance improvements within programs. By randomly modifying a location within a program and measuring the change in performance and functionality, we determine the probability of finding a performance improvement at that location under further modification. Locating performance improvements can be performed during GP, as GP relies on mutation. A probabilistic overlay of bias values for modification emerges as GP progresses and the software evolves. Measuring different aspects of program change can fine-tune the GP algorithm to focus on code which is particularly relevant to the measured aspect. Measuring execution cost reduction can indicate where an improvement is likely to exist and increase the chances of finding an improvement during GP.
    BibTeX:
    @phdthesis{Cody-Kenny15,
      author = {Brendan Cody-Kenny},
      title = {Genetic Programming Bias with Software Performance Analysis},
      school = {University of Dublin},
      year = {2015},
      month = {January},
      url = {http://hdl.handle.net/2262/76251}
    }
    					
    2015.08.07 Brendan Cody-Kenny, Edgar Galván-López & Stephen Barrett locoGP: Improving Performance by Genetic Programming Java Source Code 2015 Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15) - the 1st International Genetic Improvement Workshop (GI '15), pp. 811-818, Madrid Spain, 12-12 July   Inproceedings
    Abstract: We present locoGP, a Genetic Programming (GP) system written in Java for evolving Java source code. locoGP was designed to improve the performance of programs as measured in the number of operations executed. Variable test cases are used to maintain functional correctness during evolution. The operation of locoGP is demonstrated on a number of typically constructed "off-the-shelf" hand-written implementations of sort and prefix-code programs. locoGP was able to find improvement opportunities in all test problems.
    BibTeX:
    @inproceedings{Cody-KennyGB15,
      author = {Brendan Cody-Kenny and Edgar Galván-López and Stephen Barrett},
      title = {locoGP: Improving Performance by Genetic Programming Java Source Code},
      booktitle = {Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15) - the 1st International Genetic Improvement Workshop (GI '15)},
      publisher = {ACM},
      year = {2015},
      pages = {811-818},
      address = {Madrid, Spain},
      month = {12-12 July},
      doi = {http://dx.doi.org/10.1145/2739482.2768419}
    }
    					
    2015.12.09 Yueming Dai, Yiting Wu & Dinghui Wu Particle Swarm Optimization Algorithm For Test Case Automatic Generation Based On Clustering Thought 2015 Proceedings of IEEE International Conference on Cyber Technology in Automation, Control, and Intelligent Systems (CYBER '15), pp. 1479-1485, Shenyang China, 8-12 June   Inproceedings
    Abstract: In order to improve the efficiency and quality of automatic software test case generation, a particle swarm optimization algorithm with adaptive optimization based on clustering is proposed. During execution, the algorithm divides the population into two types of particles, main particles and secondary particles, which use different search strategies so that the algorithm expands the search scope of the particles and speeds up its running. The experimental results show that the proposed algorithm has more advantages and is more effective than the other contrasted algorithms for automatic software test case generation.
    BibTeX:
    @inproceedings{DaiWW15,
      author = {Yueming Dai and Yiting Wu and Dinghui Wu},
      title = {Particle Swarm Optimization Algorithm For Test Case Automatic Generation Based On Clustering Thought},
      booktitle = {Proceedings of IEEE International Conference on Cyber Technology in Automation, Control, and Intelligent Systems (CYBER '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {1479-1485},
      address = {Shenyang, China},
      month = {8-12 June},
      doi = {http://dx.doi.org/10.1109/CYBER.2015.7288163}
    }
    					
    2015.11.06 Ermira Daka, José Campos, Jonathan Dorn, Gordon Fraser & Westley Weimer Generating Readable Unit Tests for Guava 2015 Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15), pp. 235-241, Bergamo Italy, 5-7 September   Inproceedings Testing and Debugging
    Abstract: Unit tests for object-oriented classes can be generated automatically using search-based testing techniques. As the search algorithms are typically guided by structural coverage criteria, the resulting unit tests are often long and confusing, with possible negative implications for developer adoption of such test generation tools, and the difficulty of the test oracle problem and test maintenance. To counter this problem, we integrate a further optimization target based on a model of test readability learned from human annotation data. We demonstrate on a selection of classes from the Guava library how this approach produces more readable unit tests without loss of coverage.
    BibTeX:
    @inproceedings{DakaCDFW15,
      author = {Ermira Daka and José Campos and Jonathan Dorn and Gordon Fraser and Westley Weimer},
      title = {Generating Readable Unit Tests for Guava},
      booktitle = {Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15)},
      publisher = {Springer},
      year = {2015},
      pages = {235-241},
      address = {Bergamo, Italy},
      month = {5-7 September},
      doi = {http://dx.doi.org/10.1007/978-3-319-22183-0_17}
    }
    					
    2015.11.06 Altino Dantas, Italo Yeltsin, Allysson Allex Araújo & Jerffeson Souza Interactive Software Release Planning with Preferences Base 2015 Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15), pp. 341-346, Bergamo Italy, 5-7 September   Inproceedings
    Abstract: Release planning is a complex task in the software development process and involves many aspects related to the decision about which requirements should be allocated to each system release. Several search based techniques have been proposed to tackle this problem, but in most cases human expertise and preferences are not effectively considered. In this context, this work presents an approach in which the search is guided by a Preferences Base supplied by the user. Preliminary empirical results showed the approach is able to find solutions which satisfy the most important user preferences.
    BibTeX:
    @inproceedings{DantasYAS15,
      author = {Altino Dantas and Italo Yeltsin and Allysson Allex Araújo and Jerffeson Souza},
      title = {Interactive Software Release Planning with Preferences Base},
      booktitle = {Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15)},
      publisher = {Springer},
      year = {2015},
      pages = {341-346},
      address = {Bergamo, Italy},
      month = {5-7 September},
      doi = {http://dx.doi.org/10.1007/978-3-319-22183-0_32}
    }
    					
    2015.11.06 Kenneth A. De Jong Co-Evolutionary Algorithms: A Useful Computational Abstraction? 2015 Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15), pp. 3-11, Bergamo Italy, 5-7 September   Inproceedings
    Abstract: Interest in co-evolutionary algorithms was triggered in part by Hillis's 1991 paper describing his success in using one to evolve sorting networks. Since then there have been heightened expectations for using this nature-inspired technique to improve on the range and power of evolutionary algorithms for solving difficult computational problems. However, after more than two decades of exploring this promise, the results have been somewhat mixed. In this talk I summarize the progress made and the lessons learned, with a goal of understanding how these algorithms are best used, and identify a variety of interesting open issues that need to be explored in order to make further progress in this area.
    BibTeX:
    @inproceedings{DeJong15,
      author = {Kenneth A. De Jong},
      title = {Co-Evolutionary Algorithms: A Useful Computational Abstraction?},
      booktitle = {Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15)},
      publisher = {Springer},
      year = {2015},
      pages = {3-11},
      address = {Bergamo, Italy},
      month = {5-7 September},
      doi = {http://dx.doi.org/10.1007/978-3-319-22183-0_1}
    }
    					
    2014.05.30 José del Sagrado, Isabel María del Águila & Francisco Javier Orellana Multi-objective Ant Colony Optimization for Requirements Selection 2015 Empirical Software Engineering, Vol. 20(3), pp. 577-610, June   Article Requirements/Specifications
    Abstract: The selection of a set of requirements between all the requirements previously defined by customers is an important process, repeated at the beginning of each development step when an incremental or agile software development approach is adopted. The set of selected requirements will be developed during the actual iteration. This selection problem can be reformulated as a search problem, allowing its treatment with metaheuristic optimization techniques. This paper studies how to apply Ant Colony Optimization algorithms to select requirements. First, we describe this problem formally extending an earlier version of the problem, and introduce a method based on Ant Colony System to find a variety of efficient solutions. The performance achieved by the Ant Colony System is compared with that of Greedy Randomized Adaptive Search Procedure and Non-dominated Sorting Genetic Algorithm, by means of computational experiments carried out on two instances of the problem constructed from data provided by the experts.
    BibTeX:
    @article{delSagradodO15,
      author = {José del Sagrado and Isabel María del Águila and Francisco Javier Orellana},
      title = {Multi-objective Ant Colony Optimization for Requirements Selection},
      journal = {Empirical Software Engineering},
      year = {2015},
      volume = {20},
      number = {3},
      pages = {577-610},
      month = {June},
      doi = {http://dx.doi.org/10.1007/s10664-013-9287-3}
    }
    					
    2015.11.06 Dario Di Nucci, Annibale Panichella, Andy Zaidman & Andrea De Lucia Hypervolume-Based Search for Test Case Prioritization 2015 Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15), pp. 157-172, Bergamo Italy, 5-7 September   Inproceedings
    Abstract: Test case prioritization (TCP) is aimed at finding an ideal ordering for executing the available test cases to reveal faults earlier. To solve this problem greedy algorithms and meta-heuristics have been widely investigated, but in most cases there is no statistically significant difference between them in terms of effectiveness. The fitness function used to guide meta-heuristics condenses the cumulative coverage scores achieved by a test case ordering using the Area Under Curve (AUC) metric. In this paper we notice that the AUC metric represents a simplified version of the hypervolume metric used in many-objective optimization, and we propose HGA, a Hypervolume-based Genetic Algorithm, to solve the TCP problem when using multiple test criteria. The results show that HGA is more cost-effective than the additional greedy algorithm on large systems and on average requires 36% of the execution time required by the additional greedy algorithm.
    BibTeX:
    @inproceedings{DiNucciPZD15,
      author = {Dario Di Nucci and Annibale Panichella and Andy Zaidman and Andrea De Lucia},
      title = {Hypervolume-Based Search for Test Case Prioritization},
      booktitle = {Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15)},
      publisher = {Springer},
      year = {2015},
      pages = {157-172},
      address = {Bergamo, Italy},
      month = {5-7 September},
      doi = {http://dx.doi.org/10.1007/978-3-319-22183-0_11}
    }
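    The AUC-based fitness mentioned in the abstract can be pictured as the area under the cumulative-coverage curve of an ordering, one unit of "time" per executed test; HGA generalises this single-criterion area to a hypervolume over several test criteria. A minimal sketch, with the coverage data and test names invented for the example:

    def auc_fitness(ordering, coverage):
        """Area under the cumulative-coverage curve for a test ordering.

        `coverage` maps each test id to the set of targets (e.g. branches)
        it covers; a higher value means targets are covered earlier.
        """
        total = len(set().union(*coverage.values()))
        covered, area = set(), 0.0
        for test in ordering:
            covered |= coverage[test]
            area += len(covered) / total    # one unit "time step" per test
        return area / len(ordering)         # normalised to [0, 1]

    # Orderings that cover many targets early score higher.
    cov = {"t1": {1, 2}, "t2": {1, 2, 3, 4}, "t3": {5}}
    assert auc_fitness(["t2", "t3", "t1"], cov) > auc_fitness(["t1", "t2", "t3"], cov)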
    					
    2015.11.06 Duany Dreyton, Allysson Allex Araújo, Altino Dantas, Átila Freitas & Jerffeson Souza Search-Based Bug Report Prioritization for Kate Editor Bugs Repository 2015 Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15), pp. 295-300, Bergamo Italy, 5-7 September   Inproceedings
    Abstract: The prioritization of bugs in online repositories can be considered a complex and important task. Thus, providing an automatic strategy to deal with this challenge can be useful and can contribute significantly to the use of the repository. In this paper, a search-based approach to prioritize bugs in the Kate Editor Bugs Repository is proposed, taking into account some valuable information given by the repository users about the bugs. Experiments demonstrate that the proposed approach can be calibrated to fit particular scenarios and can produce intelligent bug orderings.
    BibTeX:
    @inproceedings{DreytonADFS15,
      author = {Duany Dreyton and Allysson Allex Araújo and Altino Dantas and Átila Freitas and Jerffeson Souza},
      title = {Search-Based Bug Report Prioritization for Kate Editor Bugs Repository},
      booktitle = {Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15)},
      publisher = {Springer},
      year = {2015},
      pages = {295-300},
      address = {Bergamo, Italy},
      month = {5-7 September},
      doi = {http://dx.doi.org/10.1007/978-3-319-22183-0_26}
    }
    					
    2016.02.02 Xin Du, Youcong Ni, Peng Ye, Xin Yao, Leandro L. Minku & Ruliang Xiao An Evolutionary Algorithm for Performance Optimization at Software Architecture Level 2015 Proceedings of IEEE Congress on Evolutionary Computation (CEC '15), pp. 2129-2136, Sendai Japan, 25-28 May   Inproceedings
    Abstract: Architecture-based software performance optimization can not only significantly save time but also reduce cost. A few rule-based performance optimization approaches at software architecture (SA) level have been proposed in recent years. However, in these approaches, the number of rules being used and the order of application of each rule are uncertain in the optimization process and these uncertainties have not been fully considered so far. As a result, the search space for performance improvement is limited, possibly excluding optimal solutions. Aiming to solve this problem, we propose an evolutionary algorithm for rule-based performance optimization at SA level named EA4PO. First, the rule-based software performance optimization at SA level is abstracted into a mathematical model called RPOM. RPOM can precisely characterize the mathematical relation between the usage of rules and the optimal solution in the performance improvement space. Then, a framework named RSEF is designed to support the execution of rule sequences. Based on RPOM and RSEF, EA4PO is proposed to find the optimal performance improvement solution. In EA4PO, an adaptive mutation operator is designed to guide the search direction by fully considering heuristic information of rule usage during the evolution. Finally, the effectiveness of EA4PO is validated by comparing EA4PO with a typical rule-based approach. The results show that EA4PO can explore a relatively larger space and get better solutions.
    BibTeX:
    @inproceedings{DuNYYMX15,
      author = {Xin Du and Youcong Ni and Peng Ye and Xin Yao and Leandro L. Minku and Ruliang Xiao},
      title = {An Evolutionary Algorithm for Performance Optimization at Software Architecture Level},
      booktitle = {Proceedings of IEEE Congress on Evolutionary Computation (CEC '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {2129-2136},
      address = {Sendai, Japan},
      month = {25-28 May},
      doi = {http://dx.doi.org/10.1109/CEC.2015.7257147}
    }
    					
    2016.04.29 Ahmed El-Serafy, Ghada El-Sayed, Cherif Salama & Ayman Wahba Enhanced Genetic Algorithm for MC/DC Test Data Generation 2015 Proceedings of International Symposium on Innovations in Intelligent SysTems and Applications (INISTA '15), pp. 1-8, Madrid Spain, 2-4 September   Inproceedings
    Abstract: Structural testing is concerned with the internal structures of the written software. The targeted structural coverage criteria are usually based on the criticality of the application. Modified Condition/Decision Coverage (MC/DC) is a structural coverage criterion that was introduced to the industry by NASA. MC/DC also comes either highly recommended or mandated by multiple standards, including ISO 26262 from the automotive industry and DO-178C from the aviation industry, due to its efficiency in bug finding while maintaining a compact test suite. However, due to its complexity, huge amounts of resources are dedicated to fulfilling it. Hence, automation efforts were directed at generating test data that satisfy MC/DC. Genetic Algorithms (GA) in particular have shown promising results in achieving high coverage percentages. Our results show that coverage levels can be further improved using a batch of enhancements applied to the GA search.
    BibTeX:
    @inproceedings{El-SerafyESW15,
      author = {Ahmed El-Serafy and Ghada El-Sayed and Cherif Salama and Ayman Wahba},
      title = {Enhanced Genetic Algorithm for MC/DC Test Data Generation},
      booktitle = {Proceedings of International Symposium on Innovations in Intelligent SysTems and Applications (INISTA '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {1-8},
      address = {Madrid, Spain},
      month = {2-4 September},
      doi = {http://dx.doi.org/10.1109/INISTA.2015.7276794}
    }
    					
    2015.11.06 Édipo Luis Féderle, Thiago do Nascimento Ferreira, Thelma Elita Colanzi & Silvia Regina Vergilio Optimizing Software Product Line Architectures with OPLA-Tool 2015 Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15), pp. 325-331, Bergamo Italy, 5-7 September   Inproceedings
    Abstract: MOA4PLA is an approach proposed for Product Line Architecture (PLA) design optimization, based on multi-objective algorithms and different metrics that consider specific PLA characteristics. To allow the practical use of MOA4PLA, this paper describes OPLA-Tool, a supporting tool that implements the complete MOA4PLA process. OPLA-Tool has a graphical interface used to choose the algorithms, parameters and search operators used in the optimization, and to visualize the alternative PLAs (solutions) with their associated fitness values and corresponding class diagrams. The paper also describes an experiment conducted to evaluate the usefulness of OPLA-Tool. Results show that OPLA-Tool achieves its purpose and that improved solutions are obtained.
    BibTeX:
    @inproceedings{FederleFCV15,
      author = {Édipo Luis Féderle and Thiago do Nascimento Ferreira and Thelma Elita Colanzi and Silvia Regina Vergilio},
      title = {Optimizing Software Product Line Architectures with OPLA-Tool},
      booktitle = {Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15)},
      publisher = {Springer},
      year = {2015},
      pages = {325-331},
      address = {Bergamo, Italy},
      month = {5-7 September},
      doi = {http://dx.doi.org/10.1007/978-3-319-22183-0_30}
    }
    					
    2015.12.08 Édipo Luis Féderle, Thiago do Nascimento Ferreira, Thelma Elita Colanzi & Silvia Regina Vergilio OPLA-tool: A Support Tool for Search-based Product Line Architecture Design 2015 Proceedings of the 19th International Conference on Software Product Line (SPLC '15), pp. 370-373, Nashville TN USA, 20-24 July   Inproceedings
    Abstract: The Product Line Architecture (PLA) design is a complex task, influenced by many factors such as feature modularization and PLA extensibility, which are usually evaluated according to different metrics. Hence, PLA design is an optimization problem, and problems like this have been successfully solved in the Search-Based Software Engineering (SBSE) area using metaheuristics such as Genetic Algorithms. Considering this fact, this paper introduces a tool named OPLA-Tool, conceived to provide computer support to a search-based approach for PLA design. OPLA-Tool implements all the steps necessary to use multi-objective optimization algorithms, including PLA transformations and visualization through a graphical interface. OPLA-Tool receives as input a PLA at the class diagram level, and produces a set of good alternative diagrams in terms of cohesion, feature modularization and reduction of crosscutting concerns.
    BibTeX:
    @inproceedings{FederleFCV15b,
      author = {Édipo Luis Féderle and Thiago do Nascimento Ferreira and Thelma Elita Colanzi and Silvia Regina Vergilio},
      title = {OPLA-tool: A Support Tool for Search-based Product Line Architecture Design},
      booktitle = {Proceedings of the 19th International Conference on Software Product Line (SPLC '15)},
      publisher = {ACM},
      year = {2015},
      pages = {370-373},
      address = {Nashville, TN, USA},
      month = {20-24 July},
      doi = {http://dx.doi.org/10.1145/2791060.2791096}
    }
    					
    2016.03.08 Robert Feldt & Simon Poulding Broadening the Search in Search-Based Software Testing: It Need Not Be Evolutionary 2015 IEEE/ACM 8th International Workshop on Search-Based Software Testing (SBST '15), pp. 1-7, Florence Italy, 18-19 May   Inproceedings Testing and Debugging
    Abstract: Search-based software testing (SBST) can potentially help software practitioners create better test suites using less time and resources by employing powerful methods for search and optimization. However, research on SBST has typically focused on only a few search approaches and basic techniques. A majority of publications in recent years use some form of evolutionary search, typically a genetic algorithm, or, alternatively, some other optimization algorithm inspired by nature. This paper argues that SBST researchers and practitioners should not restrict themselves to a limited choice of search algorithms or approaches to optimization. To support our argument we empirically investigate three alternatives and compare them to the de facto SBST standards with regard to performance, resource efficiency and robustness on different test data generation problems: classic algorithms from the optimization literature, Bayesian optimization with Gaussian processes from machine learning, and nested Monte Carlo search from game playing / reinforcement learning. In all cases we show comparable and sometimes better performance than the current state of the SBST art. We conclude that SBST researchers should consider a more general set of solution approaches, give more consideration to combinations and hybrid solutions, and look to other areas for how to develop the field.
    BibTeX:
    @inproceedings{FeldtP15,
      author = {Robert Feldt and Simon Poulding},
      title = {Broadening the Search in Search-Based Software Testing: It Need Not Be Evolutionary},
      booktitle = {IEEE/ACM 8th International Workshop on Search-Based Software Testing (SBST '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {1-7},
      address = {Florence, Italy},
      month = {18-19 May},
      doi = {http://dx.doi.org/10.1109/SBST.2015.8}
    }
    					
    2014.09.03 Javier Ferrer, Peter M. Kruse, Francisco Chicano & Enrique Alba Search Based Algorithms for Test Sequence Generation in Functional Testing 2015 Information and Software Technology, Vol. 58, pp. 419-432, February   Article Testing and Debugging
    Abstract: Context: The generation of dynamic test sequences from a formal specification complements traditional testing methods in order to find errors in the source code. Objective: In this paper we extend one specific combinatorial test approach, the Classification Tree Method (CTM), with transition information to generate test sequences. Although we use CTM, this extension is also possible for any combinatorial testing method. Method: The generation of minimal test sequences that fulfill the demanded coverage criteria is an NP-hard problem. Therefore, search-based approaches are required to find such (near) optimal test sequences. Results: The experimental analysis compares the search-based technique with a greedy algorithm on a set of 12 hierarchical concurrent models of programs extracted from the literature. Our proposed search-based approaches (GTSG and ACOts) are able to generate test sequences by finding the shortest valid path to achieve full class (state) and transition coverage. Conclusion: The extended classification tree is useful for generating test sequences. Moreover, the experimental analysis reveals that our search-based approaches are better than the greedy deterministic approach, especially on the most complex instances. All presented algorithms are actually integrated into a professional tool for functional testing.
    BibTeX:
    @article{FerrerKCA15,
      author = {Javier Ferrer and Peter M. Kruse and Francisco Chicano and Enrique Alba},
      title = {Search Based Algorithms for Test Sequence Generation in Functional Testing},
      journal = {Information and Software Technology},
      year = {2015},
      volume = {58},
      pages = {419-432},
      month = {February},
      doi = {http://dx.doi.org/10.1016/j.infsof.2014.07.014}
    }
    					
    2015.12.09 Rui Angelo Matnei Filho & Silvia Regina Vergilio A Mutation and Multi-objective Test Data Generation Approach for Feature Testing of Software Product Lines 2015 Proceedings of the 29th Brazilian Symposium on Software Engineering (SBES '15), pp. 21-30, Belo Horizonte Brazil, 21-26 September   Inproceedings Testing and Debugging
    Abstract: Mutation approaches have recently been applied for feature testing of Software Product Lines (SPLs). The idea is to select products associated with mutation operators that describe possible faults in the Feature Model (FM). In this way, the operators and mutation score can be used to evaluate and generate a test set, that is, a set of SPL products to be tested. However, the generation of test sets that kill all the mutants with a reduced, possibly minimum, number of products is a complex task. To solve this problem, this paper introduces a multi-objective approach that includes a representation of the problem, search operators, and two objectives related to the number of test cases and dead mutants. The approach was implemented with three representative multi-objective evolutionary algorithms: NSGA-II, SPEA2 and IBEA. The conducted evaluation analyses the solutions obtained and compares the algorithms. An advantage of this approach is to offer the tester a set of good solutions with a reduced number of products and high mutation score values, that is, with a high probability of revealing the faults described by mutation testing.
    BibTeX:
    @inproceedings{FilhoV15,
      author = {Rui Angelo Matnei Filho and Silvia Regina Vergilio},
      title = {A Mutation and Multi-objective Test Data Generation Approach for Feature Testing of Software Product Lines},
      booktitle = {Proceedings of the 29th Brazilian Symposium on Software Engineering (SBES '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {21-30},
      address = {Belo Horizonte, Brazil},
      month = {21-26 September},
      doi = {http://dx.doi.org/10.1109/SBES.2015.17}
    }
    					
    2014.09.02 Gordon Fraser & Andrea Arcuri Achieving Scalable Mutation-based Generation of Whole Test Suites 2015 Empirical Software Engineering, Vol. 20(3), pp. 783-812, June   Article Testing and Debugging
    Abstract: Without a complete formal specification, automatically generated software tests need to be manually checked in order to detect faults. This makes it desirable to produce the strongest possible test set while keeping the number of tests as small as possible. As commonly applied coverage criteria like branch coverage are potentially weak, mutation testing has been proposed as a stronger criterion. However, mutation-based test generation is hampered because usually there are simply too many mutants, and too many of these are either trivially killed or equivalent. On such mutants, any effort spent on test generation would by definition be wasted. To overcome this problem, our search-based EvoSuite test generation tool integrates two novel optimizations: First, we avoid redundant test executions on mutants by monitoring state infection conditions, and second, we use whole test suite generation to optimize test suites towards killing the highest number of mutants, rather than selecting individual mutants. These optimizations allowed us to apply EvoSuite to a random sample of 100 open source projects, consisting of a total of 8,963 classes and more than two million lines of code, leading to a total of 1,380,302 mutants. The experiment demonstrates that our approach scales well, making mutation testing a viable test criterion for automated test case generation tools, and allowing us to analyze the relationship of branch coverage and mutation testing in detail.
    BibTeX:
    @article{FraserA15,
      author = {Gordon Fraser and Andrea Arcuri},
      title = {Achieving Scalable Mutation-based Generation of Whole Test Suites},
      journal = {Empirical Software Engineering},
      year = {2015},
      volume = {20},
      number = {3},
      pages = {783-812},
      month = {June},
      doi = {http://dx.doi.org/10.1007/s10664-013-9299-z}
    }
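    The two optimizations highlighted in the abstract (skipping executions on mutants when no state infection occurs, and scoring a whole suite by the number of mutants it kills) can be caricatured as below. The infects/kills predicates are toy stand-ins, not EvoSuite's actual instrumentation:

    def killed_mutants(tests, mutants, infects, kills):
        """Count mutants killed by a suite, skipping any test execution whose
        state-infection condition is false (such a run cannot kill)."""
        killed = 0
        for m in mutants:
            for t in tests:
                if infects(t, m) and kills(t, m):
                    killed += 1
                    break                  # one killing test per mutant suffices
        return killed

    # Toy setting: mutants are wrong variants of a one-argument function.
    original = lambda x: x + 1
    mutants = [lambda x: x + 2, lambda x: abs(x) + 1]
    infects = lambda t, m: True            # trivially true in this toy setting
    kills = lambda t, m: m(t) != original(t)
    print(killed_mutants([0, -5], mutants, infects, kills))   # -> 2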
    					
    2014.09.02 Gordon Fraser & Andrea Arcuri 1600 Faults in 100 Projects: Automatically Finding Faults While Achieving High Coverage with Evosuite 2015 Empirical Software Engineering, Vol. 20(3), pp. 611-639, June   Article Testing and Debugging
    Abstract: Automated unit test generation techniques traditionally follow one of two goals: Either they try to find violations of automated oracles (e.g., assertions, contracts, undeclared exceptions), or they aim to produce representative test suites (e.g., satisfying branch coverage) such that a developer can manually add test oracles. Search-based testing (SBST) has delivered promising results when it comes to achieving coverage, yet the use in conjunction with automated oracles has hardly been explored, and is generally hampered as SBST does not scale well when there are too many testing targets. In this paper we present a search-based approach to handle both objectives at the same time, implemented in the EvoSuite tool. An empirical study applying EvoSuite on 100 randomly selected open source software projects (the SF100 corpus) reveals that SBST has the unique advantage of being well suited to perform both traditional goals at the same time, efficiently triggering faults while producing representative test sets for any chosen coverage criterion. In our study, EvoSuite detected twice as many failures in terms of undeclared exceptions as a traditional random testing approach, witnessing thousands of real faults in the 100 open source projects. Two out of every five classes with undeclared exceptions have actual faults, but these are buried within many failures that are caused by implicit preconditions. This “noise” can be interpreted either as a call for further research on improving automated oracles, or as a call to make tools like EvoSuite an integral part of software development to enforce clean program interfaces.
    BibTeX:
    @article{FraserA15b,
      author = {Gordon Fraser and Andrea Arcuri},
      title = {1600 Faults in 100 Projects: Automatically Finding Faults While Achieving High Coverage with Evosuite},
      journal = {Empirical Software Engineering},
      year = {2015},
      volume = {20},
      number = {3},
      pages = {611-639},
      month = {June},
      doi = {http://dx.doi.org/10.1007/s10664-013-9288-2}
    }
    					
    2016.03.08 Gordon Fraser & Andrea Arcuri EvoSuite at the SBST 2015 Tool Competition 2015 IEEE/ACM 8th International Workshop on Search-Based Software Testing (SBST '15), pp. 25-27, Florence Italy, 18-19 May   Inproceedings Testing and Debugging
    Abstract: EvoSuite is a mature research prototype that automatically generates unit tests for Java code. This paper summarizes the results and experiences of EvoSuite's participation in the third unit testing competition at SBST 2015. An unfortunate issue of conflicting dependency versions in two out of the nine benchmark projects reduced EvoSuite's overall score to 190.6, leading to the overall second rank.
    BibTeX:
    @inproceedings{FraserA15c,
      author = {Gordon Fraser and Andrea Arcuri},
      title = {EvoSuite at the SBST 2015 Tool Competition},
      booktitle = {IEEE/ACM 8th International Workshop on Search-Based Software Testing (SBST '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {25-27},
      address = {Florence, Italy},
      month = {18-19 May},
      doi = {http://dx.doi.org/10.1109/SBST.2015.13}
    }
    					
    2014.09.02 Gordon Fraser, Andrea Arcuri & Phil McMinn A Memetic Algorithm for Whole Test Suite Generation 2015 Journal of Systems and Software, Vol. 103, pp. 311-327, May   Article Testing and Debugging
    Abstract: The generation of unit-level test cases for structural code coverage is a task well-suited to Genetic Algorithms. Method call sequences must be created that construct objects, put them into the right state and then execute uncovered code. However, the generation of primitive values, such as integers and doubles, characters that appear in strings, and arrays of primitive values, is not so straightforward. Often, small local changes are required to drive the value toward the one needed to execute some target structure. However, global searches like Genetic Algorithms tend to make larger changes that are not concentrated on any particular aspect of a test case. In this paper, we extend the Genetic Algorithm behind the EvoSuite test generation tool into a Memetic Algorithm, by equipping it with several local search operators. These operators are designed to efficiently optimize primitive values and other aspects of a test suite that allow the search for test cases to function more effectively. We evaluate our operators using a rigorous experimental methodology on over 12,000 Java classes, comprising open source classes of various different kinds, including numerical applications and text processors. Our study shows that increases in branch coverage of up to 53% are possible for an individual class in practice.
    BibTeX:
    @article{FraserAM15,
      author = {Gordon Fraser and Andrea Arcuri and Phil McMinn},
      title = {A Memetic Algorithm for Whole Test Suite Generation},
      journal = {Journal of Systems and Software},
      year = {2015},
      volume = {103},
      pages = {311-327},
      month = {May},
      doi = {http://dx.doi.org/10.1016/j.jss.2014.05.032}
    }
    					
    2016.03.08 Erik M. Fredericks & Betty H.C. Cheng An Empirical Analysis of Providing Assurance for Self-Adaptive Systems at Different Levels of Abstraction in the Face of Uncertainty 2015 IEEE/ACM 8th International Workshop on Search-Based Software Testing (SBST '15), pp. 8-14, Florence Italy, 18-19 May   Inproceedings
    Abstract: Self-adaptive systems (SAS) must frequently continue to deliver acceptable behavior at run time even in the face of uncertainty. Particularly, SAS applications can self-reconfigure in response to changing or unexpected environmental conditions and must therefore ensure that the system performs as expected. Assurance can be addressed at both design time and run time, where environmental uncertainty poses research challenges for both settings. This paper presents empirical results from a case study in which search-based software engineering techniques have been systematically applied at different levels of abstraction, including requirements analysis, code implementation, and run-time validation, to a remote data mirroring application that must efficiently diffuse data while experiencing adverse operating conditions. Experimental results suggest that our techniques perform better in terms of providing assurance than alternative software engineering techniques at each level of abstraction.
    BibTeX:
    @inproceedings{FredericksC15,
      author = {Erik M. Fredericks and Betty H. C. Cheng},
      title = {An Empirical Analysis of Providing Assurance for Self-Adaptive Systems at Different Levels of Abstraction in the Face of Uncertainty},
      booktitle = {IEEE/ACM 8th International Workshop on Search-Based Software Testing (SBST '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {8-14},
      address = {Florence, Italy},
      month = {18-19 May},
      doi = {http://dx.doi.org/10.1109/SBST.2015.9}
    }
    					
    2015.08.07 Giovani Guizzo, Gian Mauricio Fritsche, Silvia Regina Vergilio & Aurora Trinidad Ramirez Pozo A Hyper-Heuristic for the Multi-Objective Integration and Test Order Problem 2015 Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15), pp. 1343-1350, Madrid Spain, 11-15 July   Inproceedings
    Abstract: Multi-objective evolutionary algorithms (MOEAs) have been efficiently applied to Search-Based Software Engineering (SBSE) problems. However, skilled software engineers waste significant effort designing such algorithms for a particular problem, adapting them, selecting operators and configuring parameters. Hyper-heuristics can help in these tasks by dynamically selecting or creating heuristics. Despite such advantages, we observe a lack of work on this subject in the SBSE field. Considering this fact, this work introduces HITO, a Hyper-heuristic for the Integration and Test Order Problem. It includes a set of well-defined steps and is based on two selection functions (Choice Function and Multi-armed Bandit) to select the best low-level heuristic (combination of mutation and crossover operators) in each mating. To perform the selection, a quality measure is proposed to assess the performance of low-level heuristics throughout the evolutionary process. HITO was implemented using NSGA-II and evaluated to solve the integration and test order problem in seven systems. The introduced hyper-heuristic obtained the best results for all systems, when compared to a traditional algorithm.
    BibTeX:
    @inproceedings{GuizzoFVP15,
      author = {Giovani Guizzo and Gian Mauricio Fritsche and Silvia Regina Vergilio and Aurora Trinidad Ramirez Pozo},
      title = {A Hyper-Heuristic for the Multi-Objective Integration and Test Order Problem},
      booktitle = {Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15)},
      publisher = {ACM},
      year = {2015},
      pages = {1343-1350},
      address = {Madrid, Spain},
      month = {11-15 July},
      doi = {http://dx.doi.org/10.1145/2739480.2754725}
    }
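    HITO's Multi-armed Bandit selection function can be thought of along the lines of UCB1: each low-level heuristic (a crossover/mutation pair) is an arm, credited with the quality of the offspring it produces. The class below is an illustrative assumption about such a scheme, with invented names and exploration constant, not HITO's exact formulation:

    import math

    class UCBHeuristicSelector:
        """UCB1-style choice among low-level heuristics (illustrative names)."""

        def __init__(self, heuristics, c=2.0):
            self.heuristics = heuristics            # e.g. (crossover, mutation) pairs
            self.c = c                              # exploration constant
            self.uses = [0] * len(heuristics)
            self.reward = [0.0] * len(heuristics)   # cumulative quality credit
            self.total = 0

        def select(self):
            # Try every arm once before trusting the UCB estimate.
            for i, n in enumerate(self.uses):
                if n == 0:
                    return i
            return max(range(len(self.heuristics)),
                       key=lambda i: self.reward[i] / self.uses[i]
                       + math.sqrt(self.c * math.log(self.total) / self.uses[i]))

        def update(self, i, quality):
            """Credit arm i, e.g. with a dominance-based offspring quality."""
            self.uses[i] += 1
            self.total += 1
            self.reward[i] += quality

    Each generation, select() would pick the operator pair for a mating and update() would credit it afterwards.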
    					
    2016.03.08 Giovani Guizzo, Silvia Regina Vergilio & Aurora Pozo Evaluating a Multi-objective Hyper-Heuristic for the Integration and Test Order Problem 2015 Proceedings of Brazilian Conference on Intelligent Systems (BRACIS '15), pp. 1-6, Natal Brazil, 4-7 November   Inproceedings
    Abstract: Multi-objective evolutionary algorithms (MOEAs) have been successfully applied for solving different software engineering problems. However, adapting and configuring these algorithms for a specific problem can demand significant effort from software engineers. Therefore, to help in this task, a hyper-heuristic named HITO (Hyper-heuristic for the Integration and Test Order problem) was proposed to adaptively select search operators during the optimization process. HITO was successfully applied using NSGA-II for solving the integration and test order problem. HITO can use two hyper-heuristic selection methods: Choice Function and Multi-armed Bandit. However, a hypothesis behind this study is that HITO does not depend on NSGA-II and can be used with other MOEAs. To this end, this paper presents results from evaluation experiments comparing the performance of HITO using two different MOEAs: NSGA-II and SPEA2. The results show that HITO is able to outperform both MOEAs.
    BibTeX:
    @inproceedings{GuizzoVP15,
      author = {Giovani Guizzo and Silvia Regina Vergilio and Aurora Pozo},
      title = {Evaluating a Multi-objective Hyper-Heuristic for the Integration and Test Order Problem},
      booktitle = {Proceedings of Brazilian Conference on Intelligent Systems (BRACIS '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {1-6},
      address = {Natal, Brazil},
      month = {4-7 November},
      doi = {http://dx.doi.org/10.1109/BRACIS.2015.11}
    }
    					
    2015.08.07 Saemundur O. Haraldsson & John R. Woodward Genetic Improvement of Energy Usage is only as Reliable as the Measurements are Accurate 2015 Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15) - the 1st International Genetic Improvement Workshop (GI '15), pp. 821-822, Madrid Spain, 12-12 July   Inproceedings
    Abstract: Energy has recently become an objective for Genetic Improvement. Measuring software energy use is complicated, which might tempt us to use simpler measurements. However, if we base the GI on inaccurate measurements, we cannot expect improvements. This paper seeks to highlight important issues in evaluating the energy use of programs.
    BibTeX:
    @inproceedings{HaraldssonW15,
      author = {Saemundur O. Haraldsson and John R. Woodward},
      title = {Genetic Improvement of Energy Usage is only as Reliable as the Measurements are Accurate},
      booktitle = {Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15) - the 1st International Genetic Improvement Workshop (GI '15)},
      publisher = {ACM},
      year = {2015},
      pages = {821-822},
      address = {Madrid, Spain},
      month = {12-12 July},
      doi = {http://dx.doi.org/10.1145/2739482.2768421}
    }
    					
    2015.11.06 Mark Harman, Yue Jia & Yuanyuan Zhang Achievements, Open Problems and Challenges for Search Based Software Testing (keynote) 2015 Proceedings of the 8th IEEE International Conference on Software Testing, Verification and Validation (ICST ’15), pp. 1-12, Graz Austria, 13-17 April   Inproceedings Testing and Debugging
    Abstract: Search Based Software Testing (SBST) formulates testing as an optimisation problem, which can be attacked using computational search techniques from the field of Search Based Software Engineering (SBSE). We present an analysis of the SBST research agenda, focusing on the open problems and challenges of testing non-functional properties, in particular a topic we call 'Search Based Energy Testing' (SBET), Multi-objective SBST and SBST for Test Strategy Identification. We conclude with a vision of FIFIVERIFY tools, which would automatically find faults, fix them and verify the fixes. We explain why we think such FIFIVERIFY tools constitute an exciting challenge for the SBSE community that already could be within its reach.
    BibTeX:
    @inproceedings{HarmanJZ15,
      author = {Mark Harman and Yue Jia and Yuanyuan Zhang},
      title = {Achievements, Open Problems and Challenges for Search Based Software Testing (keynote)},
      booktitle = {Proceedings of the 8th IEEE International Conference on Software Testing, Verification and Validation (ICST ’15)},
      publisher = {IEEE},
      year = {2015},
      pages = {1-12},
      address = {Graz, Austria},
      month = {13-17 April},
      doi = {http://dx.doi.org/10.1109/ICST.2015.7102580}
    }
    					
    2015.08.07 Mark Harman & Justyna Petke GI4GI: Improving Genetic Improvement Fitness Functions 2015 Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15) - the 1st International Genetic Improvement Workshop (GI '15), pp. 793-794, Madrid Spain, 12-12 July   Inproceedings
    Abstract: Genetic improvement (GI) has been successfully used to optimise non-functional properties of software, such as execution time, by automatically manipulating a program's source code. Measurement of non-functional properties, however, is a non-trivial task; energy consumption, for instance, is highly dependent on the hardware used. Therefore, we propose the GI4GI framework (and two illustrative applications). GI4GI first applies GI to improve the fitness function for the particular environment within which software is subsequently optimised using traditional GI.
    BibTeX:
    @inproceedings{HarmanP15,
      author = {Mark Harman and Justyna Petke},
      title = {GI4GI: Improving Genetic Improvement Fitness Functions},
      booktitle = {Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15) - the 1st International Genetic Improvement Workshop (GI '15)},
      publisher = {ACM},
      year = {2015},
      pages = {793-794},
      address = {Madrid, Spain},
      month = {12-12 July},
      doi = {http://dx.doi.org/10.1145/2739482.2768415}
    }
    					
    2015.02.05 S.M.H. Hasheminejad & S. Jalili CCIC: Clustering Analysis Classes to Identify Software Components 2015 Information and Software Technology, Vol. 57, pp. 329-351, January   Article
    Abstract: Context: Component identification during the software design phase denotes a process of partitioning the functionalities of a system into distinct components. Several component identification methods have been proposed that cannot be customized to the software architect’s preferences. Objectives: In this paper, we propose a clustering-based method by the name of CCIC (Clustering analysis Classes to Identify software Components) to identify logical components from analysis classes according to the software architect’s preferences. Method: CCIC uses a customized HEA (Hierarchical Evolutionary Algorithm) to automatically classify analysis classes into appropriate logical components and avoid the problem of searching for the proper number of components. Furthermore, it allows software architects to determine the constraints in their deployment and implementation framework. Results: A series of experiments were conducted for four real-world case studies according to various proposed weighting schemes. Conclusion: According to the experimental results, CCIC can identify more cohesive and independent components with respect to the software architect’s preferences in comparison with existing component identification methods such as FCA-based and CRUD-based methods.
    BibTeX:
    @article{HasheminejadJ15,
      author = {S.M.H. Hasheminejad and S. Jalili},
      title = {CCIC: Clustering Analysis Classes to Identify Software Components},
      journal = {Information and Software Technology},
      year = {2015},
      volume = {57},
      pages = {329-351},
      month = {January},
      doi = {http://dx.doi.org/10.1016/j.infsof.2014.05.013}
    }
    					
    2016.03.08 Yue Jia Hyperheuristic Search for SBST 2015 Proceedings of IEEE/ACM 8th International Workshop on Search-Based Software Testing (SBST '15), pp. 15-16, Florence Italy, 18-19 May   Inproceedings Testing and Debugging
    Abstract: This paper argues that incorporating hyper-heuristic techniques into existing SBST approaches could help to increase their applicability and generality. We propose a general two-layer selective hyper-heuristic approach for SBST and provide an example of its use for Combinatorial Interaction Testing (CIT).
    BibTeX:
    @inproceedings{Jia15,
      author = {Yue Jia},
      title = {Hyperheuristic Search for SBST},
      booktitle = {Proceedings of IEEE/ACM 8th International Workshop on Search-Based Software Testing (SBST '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {15-16},
      address = {Florence, Italy},
      month = {18-19 May},
      doi = {http://dx.doi.org/10.1109/SBST.2015.10}
    }
    					
    2016.02.03 Yue Jia, Myra B. Cohen, Mark Harman & Justyna Petke Learning Combinatorial Interaction Test Generation Strategies using Hyperheuristic Search 2015 Proceedings of IEEE/ACM 37th IEEE International Conference on Software Engineering (ICSE ‘15), pp. 540-550, Florence Italy, 16-24 May   Inproceedings Testing and Debugging
    Abstract: The surge of search based software engineering research has been hampered by the need to develop customized search algorithms for different classes of the same problem. For instance, two decades of bespoke Combinatorial Interaction Testing (CIT) algorithm development, our exemplar problem, has left software engineers with a bewildering choice of CIT techniques, each specialized for a particular task. This paper proposes the use of a single hyperheuristic algorithm that learns search strategies across a broad range of problem instances, providing a single generalist approach. We have developed a Hyperheuristic algorithm for CIT, and report experiments that show that our algorithm competes with known best solutions across constrained and unconstrained problems: For all 26 real-world subjects, it equals or outperforms the best result previously reported in the literature. We also present evidence that our algorithm's strong generic performance results from its unsupervised learning. Hyperheuristic search is thus a promising way to relocate CIT design intelligence from human to machine.
    BibTeX:
    @inproceedings{JiaCHP15,
      author = {Yue Jia and Myra B. Cohen and Mark Harman and Justyna Petke},
      title = {Learning Combinatorial Interaction Test Generation Strategies using Hyperheuristic Search},
      booktitle = {Proceedings of IEEE/ACM 37th IEEE International Conference on Software Engineering (ICSE ‘15)},
      publisher = {IEEE},
      year = {2015},
      pages = {540-550},
      address = {Florence, Italy},
      month = {16-24 May},
      doi = {http://dx.doi.org/10.1109/ICSE.2015.71}
    }
    					
    2015.11.06 Yue Jia, Mark Harman, William B. Langdon & Alexandru Marginean Grow and Serve: Growing Django Citation Services using SBSE 2015 Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15), pp. 269-275, Bergamo Italy, 5-7 September   Inproceedings
    Abstract: We introduce a ‘grow and serve’ approach to Genetic Improvement (GI) that grows new functionality as a web service running on the Django platform. Using our approach, we successfully grew and released a citation web service. This web service can be invoked by existing applications to introduce a new citation counting feature. We demonstrate that GI can grow genuinely useful code in this way, so we deployed the SBSE-grown web service into widely-used publications repositories, such as the GP bibliography. In the first 24 hours of deployment alone, the service was used to provide GP bibliography citation data 369 times from 29 countries.
    BibTeX:
    @inproceedings{JiaHLM15,
      author = {Yue Jia and Mark Harman and William B. Langdon and Alexandru Marginean},
      title = {Grow and Serve: Growing Django Citation Services using SBSE},
      booktitle = {Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15)},
      publisher = {Springer},
      year = {2015},
      pages = {269-275},
      address = {Bergamo, Italy},
      month = {5-7 September},
      doi = {http://dx.doi.org/10.1007/978-3-319-22183-0_22}
    }
    					
    2016.03.09 Bo Jiang, W.K. Chan & T.H. Tse PORA: Proportion-Oriented Randomized Algorithm for Test Case Prioritization 2015 Proceedings of IEEE International Conference on Software Quality, Reliability and Security (QRS '15), pp. 131-140, Vancouver Canada, 3-5 August   Inproceedings Testing and Debugging
    Abstract: Effective testing is essential for assuring software quality. While regression testing is time-consuming, the fault detection capability may be compromised if some test cases are discarded. Test case prioritization is a viable solution. To the best of our knowledge, the most effective test case prioritization approach is still the additional greedy algorithm, and existing search-based algorithms have been shown to be visually less effective than the former algorithms in previous empirical studies. This paper proposes a novel Proportion-Oriented Randomized Algorithm (PORA) for test case prioritization. PORA guides test case prioritization by optimizing the distance between the prioritized test suite and a hierarchy of distributions of test input data. Our experiment shows that PORA test case prioritization techniques are as effective as, if not more effective than, the total greedy, additional greedy, and ART techniques, which use code coverage information. Moreover, the experiment shows that PORA techniques are more stable in effectiveness than the others.
    BibTeX:
    @inproceedings{JiangCT15,
      author = {Bo Jiang and W.K. Chan and T.H. Tse},
      title = {PORA: Proportion-Oriented Randomized Algorithm for Test Case Prioritization},
      booktitle = {Proceedings of IEEE International Conference on Software Quality, Reliability and Security (QRS '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {131-140},
      address = {Vancouver, Canada},
      month = {3-5 August},
      doi = {http://dx.doi.org/10.1109/QRS.2015.28}
    }
    					
    2015.11.06 He Jiang, Zhilei Ren, Xiaochen Li & Xiaochen Lai Transformed Search Based Software Engineering: A New Paradigm of SBSE 2015 Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15), pp. 203-218, Bergamo Italy, 5-7 September   Inproceedings
    Abstract: Recent years have witnessed sharp growth of research interest in Search Based Software Engineering (SBSE) within the Software Engineering (SE) community. In SBSE, an SE task is generally transformed into a combinatorial optimization problem and search algorithms are employed to achieve solutions within its search space. Since the terrain of the search space is rugged, with numerous local optima, it remains a great challenge for search algorithms to achieve high-quality solutions in SBSE. In this paper, we propose a new paradigm of SBSE, namely Transformed Search Based Software Engineering (TSBSE). Given a new SE task, TSBSE first transforms its search space into either a reduced one or a series of gradually smoothed spaces, and then employs search algorithms to effectively seek high-quality solutions. More specifically, we investigate two techniques for TSBSE, namely search space reduction and search space smoothing. We demonstrate the effectiveness of these new techniques on a typical SE task, namely the Next Release Problem (NRP). The work of this paper provides a new way of tackling SE tasks in SBSE.
    BibTeX:
    @inproceedings{JiangRLL15,
      author = {He Jiang and Zhilei Ren and Xiaochen Li and Xiaochen Lai},
      title = {Transformed Search Based Software Engineering: A New Paradigm of SBSE},
      booktitle = {Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15)},
      publisher = {Springer},
      year = {2015},
      pages = {203-218},
      address = {Bergamo, Italy},
      month = {5-7 September},
      doi = {http://dx.doi.org/10.1007/978-3-319-22183-0_14}
    }
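    Of the two techniques, search space smoothing is the easier to picture: interpolate the true objective towards a flat landscape, search, and gradually restore the ruggedness while reseeding each stage with the previous result. The sketch below is a generic rendering under assumed interfaces (the local_search callable, schedule and baseline are illustrative), not the paper's concrete NRP instantiation:

    def smoothed(f_value, baseline, factor):
        """Interpolate a raw fitness value towards a flat landscape.

        factor = 1.0 flattens everything to `baseline` (few local optima);
        factor = 0.0 restores the original rugged objective.
        """
        return baseline + (1.0 - factor) * (f_value - baseline)

    def search_on_smoothed_spaces(local_search, objective, baseline, steps=5):
        """Run a local search on progressively less-smoothed objectives,
        seeding each stage with the previous stage's best solution."""
        solution = None
        for k in range(steps, -1, -1):
            factor = k / steps
            stage = lambda x, f=factor: smoothed(objective(x), baseline, f)
            solution = local_search(stage, seed=solution)
        return solution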
    					
    2015.08.07 Yue Jia, Fan Wu, Mark Harman & Jens Krinke Genetic Improvement using Higher Order Mutation 2015 Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15) - the 1st International Genetic Improvement Workshop (GI '15), pp. 803-804, Madrid Spain, 12-12 July   Inproceedings
    Abstract: This paper presents a brief outline of a higher-order mutation-based framework for Genetic Improvement (GI). We argue that search-based higher-order mutation testing can be used to implement a form of genetic programming (GP) to increase the search granularity and testability of GI.
    BibTeX:
    @inproceedings{JiaWHK15,
      author = {Yue Jia and Fan Wu and Mark Harman and Jens Krinke},
      title = {Genetic Improvement using Higher Order Mutation},
      booktitle = {Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15) - the 1st International Genetic Improvement Workshop (GI '15)},
      publisher = {ACM},
      year = {2015},
      pages = {803-804},
      address = {Madrid, Spain},
      month = {12-12 July},
      doi = {http://dx.doi.org/10.1145/2739482.2768417}
    }
    					
    2015.08.07 Colin G. Johnson & John R. Woodward Fitness as Task-relevant Information Accumulation 2015 Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15) - the 1st International Genetic Improvement Workshop (GI '15), pp. 855-856, Madrid Spain, 12-12 July   Inproceedings
    Abstract: "If you cannot measure it, you cannot improve it. Lord Kelvin Fitness in GP/GI is usually a short-sighted greedy fitness function counting the number of satisfied test cases (or some other score based on error). If GP/GI is to be extended to successfully tackle "full software systems", which is the stated domain of Genetic Improvement, with loops, conditional statements and function calls, then this kind of fitness will fail to scale. One alternative approach is to measure the fitness gain in terms of the accumulated information at each executed step of the program. This paper discusses methods for measuring the way in which programs accumulate information relevant to their task as they run, by building measures of this information gain based on information theory and model complexity.
    BibTeX:
    @inproceedings{JohnsonW15,
      author = {Colin G. Johnson and John R. Woodward},
      title = {Fitness as Task-relevant Information Accumulation},
      booktitle = {Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15) - the 1st International Genetic Improvement Workshop (GI '15)},
      publisher = {ACM},
      year = {2015},
      pages = {855-856},
      address = {Madrid, Spain},
      month = {12-12 July},
      doi = {http://dx.doi.org/10.1145/2739482.2768428}
    }
    					
    2016.04.29 Joseph Kempka, Phil McMinn & Dirk Sudholt Design and Analysis of Different Alternating Variable Searches for Search-based Software Testing 2015 Theoretical Computer Science, Vol. 605, pp. 1-20, November   Article
    Abstract: Manual software testing is a notoriously expensive part of the software development process, and its automation is of high concern. One aspect of the testing process is the automatic generation of test inputs. This paper studies the Alternating Variable Method (AVM) approach to search-based test input generation. The AVM has been shown to be an effective and efficient means of generating branch-covering inputs for procedural programs. However, there has been little work that has sought to analyse the technique and further improve its performance. This paper proposes two different local searches that may be used in conjunction with the AVM, Geometric and Lattice Search. A theoretical runtime analysis proves that under certain conditions, the use of these searches results in better performance compared to the original AVM. These theoretical results are confirmed by an empirical study with five programs, which shows that increases of speed of over 50% are possible in practice.
    BibTeX:
    @article{KempkaMS15,
      author = {Joseph Kempka and Phil McMinn and Dirk Sudholt},
      title = {Design and Analysis of Different Alternating Variable Searches for Search-based Software Testing},
      journal = {Theoretical Computer Science},
      year = {2015},
      volume = {605},
      pages = {1-20},
      month = {November},
      doi = {http://dx.doi.org/10.1016/j.tcs.2014.12.009}
    }
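    For reference, the baseline AVM analysed in the paper alternates exploratory +/-1 probes on one variable with accelerating "pattern" moves in the improving direction; Geometric and Lattice Search replace that pattern phase. A minimal integer version under a minimising fitness, with a made-up branch-distance objective:

    def avm(fitness, x, max_evals=10000):
        """Classic Alternating Variable Method over a list of integers."""
        best = fitness(x)
        i, evals, stuck = 0, 1, 0
        while best > 0 and evals < max_evals and stuck < len(x):
            improved = False
            for delta in (1, -1):                  # exploratory moves
                step = delta
                while True:                        # pattern moves
                    trial = x[:]
                    trial[i] += step
                    f = fitness(trial)
                    evals += 1
                    if f < best:
                        x, best = trial, f
                        step *= 2                  # accelerate in this direction
                        improved = True
                    else:
                        break
                if improved:
                    break
            stuck = 0 if improved else stuck + 1   # give up after a fruitless sweep
            i = (i + 1) % len(x)                   # move on to the next variable
        return x, best

    # Branch distance for "a == 42 and b == -7" (an assumed SBST-style target).
    f = lambda v: abs(v[0] - 42) + abs(v[1] + 7)
    print(avm(f, [0, 0]))                          # -> ([42, -7], 0)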
    					
    2015.12.08 Juhi Khandelwal & Pradeep Tomar Approach for Automated Test Data Generation for Path Testing in Aspect-Oriented Programs using Genetic Algorithm 2015 Proceedings of International Conference on Computing, Communication & Automation (ICCCA '15), pp. 854-858, Noida India, 15-16 May   Inproceedings Testing and Debugging
    Abstract: Aspect-Oriented Programming (AOP) is an emerging programming paradigm that supports implementation of cross-cutting requirements in named program units called aspects. However, these aspects are hard to deal with at many stages of the Software Development Life Cycle (SDLC), especially in Aspect-Oriented software testing. The main aim of testing is to find errors during execution of the program. Errors can prevail in any part of the program, so this study uses a Control Flow Graph (CFG) to depict all paths of the program during its execution. Some paths of the program execute rarely, so automated test data generation helps the tester exercise those paths, since generating test data for them manually is not feasible. The test data generation process can be automated with the help of various techniques and frameworks. This work reviews some of the recent work that has been done in the area of AOP test data generation and, based on that work, proposes an approach for generating test data for AOP using a Genetic Algorithm (GA).
    BibTeX:
    @inproceedings{KhandelwalT15,
      author = {Juhi Khandelwal and Pradeep Tomar},
      title = {Approach for Automated Test Data Generation for Path Testing in Aspect-Oriented Programs using Genetic Algorithm},
      booktitle = {Proceedings of International Conference on Computing, Communication & Automation (ICCCA '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {854-858},
      address = {Noida, India},
      month = {15-16 May},
      doi = {http://dx.doi.org/10.1109/CCAA.2015.7148494}
    }
    					
    2016.04.29 Cody Kinneer, Gregory M. Kapfhammer, Chris J. Wright & Phil McMinn Automatically Evaluating the Efficiency of Search-Based Test Data Generation for Relational Database Schemas 2015 Proceedings of the 27th International Conference on Software Engineering and Knowledge Engineering (SEKE '15), Pittsburgh USA, 6-8 July   Inproceedings
    Abstract: The characterization of an algorithm’s worst-case time complexity is useful because it succinctly captures how its runtime will grow as the input size becomes arbitrarily large. However, for certain algorithms—such as those performing search-based test data generation—a theoretical analysis to determine worst-case time complexity is difficult to generalize and thus not often reported in the literature. This paper introduces a framework that empirically determines an algorithm’s worst-case time complexity by doubling the size of the input and observing the change in runtime. Since the relational database is a centerpiece of modern software and the database’s schema is frequently untested, we apply the doubling technique to the domain of data generation for relational database schemas, a field where worst-case time complexities are often unknown. In addition to demonstrating the feasibility of suggesting the worst-case runtimes of the chosen algorithms and configurations, the results of our study reveal performance tradeoffs in testing strategies for relational database schemas.
    BibTeX:
    @inproceedings{KinneerKWM15,
      author = {Cody Kinneer and Gregory M. Kapfhammer and Chris J. Wright and Phil McMinn},
      title = {Automatically Evaluating the Efficiency of Search-Based Test Data Generation for Relational Database Schemas},
      booktitle = {Proceedings of the 27th International Conference on Software Engineering and Knowledge Engineering (SEKE '15)},
      year = {2015},
      address = {Pittsburgh, USA},
      month = {6-8 July},
      doi = {http://dx.doi.org/10.18293/SEKE2015-205}
    }
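    The doubling technique itself is compact: time the algorithm on inputs of size n, 2n, 4n, ..., and read the growth order off consecutive runtime ratios (a ratio near 2 suggests O(n), near 4 suggests O(n^2), and log2 of the ratio approximates the polynomial exponent). A minimal sketch, with the timing harness and the deliberately quadratic subject invented for illustration:

    import time

    def doubling_ratios(algorithm, make_input, sizes=(100, 200, 400, 800)):
        """Time `algorithm` on doubled input sizes and return runtime ratios."""
        times = []
        for n in sizes:
            data = make_input(n)
            start = time.perf_counter()
            algorithm(data)
            times.append(time.perf_counter() - start)
        return [t2 / t1 for t1, t2 in zip(times, times[1:])]

    # A quadratic subject: the ratios should approach 4.
    quadratic = lambda xs: [x for x in xs for y in xs if x == y]
    print(doubling_ratios(quadratic, lambda n: list(range(n))))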
    					
    2015.11.06 Zoltan A. Kocsis, Alexander E.I. Brownlee, Jerry Swan & Richard Senington Haiku - a Scala Combinator Toolkit for Semi-automated Composition of Metaheuristics 2015 Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15), pp. 125-140, Bergamo Italy, 5-7 September   Inproceedings
    Abstract: There is an emerging trend towards the automated design of metaheuristics at the software component level. In principle, metaheuristics have a relatively clean decomposition, where well-known frameworks such as ILS and EA are parametrised by variant components for acceptance, perturbation, etc. Automated generation of these frameworks is not so simple in practice, since the coupling between components may be implementation specific. Compositionality is the ability to freely express a space of designs ‘bottom up’ in terms of elementary components: previous work in this area has used combinators, a modular and functional approach to componentisation arising from foundational Computer Science. In this article, we describe Haiku, a combinator tool-kit written in the Scala language, which builds upon previous work to further automate the process by automatically composing the external dependencies of components. We provide examples of use and give a case study in which a programmatically generated heuristic is applied to the Travelling Salesman Problem within an Evolutionary Strategies framework.
    BibTeX:
    @inproceedings{KocsisBSS15,
      author = {Zoltan A. Kocsis and Alexander E. I. Brownlee and Jerry Swan and Richard Senington},
      title = {Haiku - a Scala Combinator Toolkit for Semi-automated Composition of Metaheuristics},
      booktitle = {Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15)},
      publisher = {Springer},
      year = {2015},
      pages = {125-140},
      address = {Bergamo, Italy},
      month = {5-7 September},
      doi = {http://dx.doi.org/10.1007/978-3-319-22183-0_9}
    }
    					
    2015.12.09 Chahine Koleejan, Bing Xue & Mengjie Zhang Code Coverage Optimisation in Genetic Algorithms and Particle Swarm Optimisation for Automatic Software Test Data Generation 2015 Proceedings of the IEEE Congress on Evolutionary Computation (CEC '15), pp. 1204-1211, Sendai Japan, 25-28 May   Inproceedings
    Abstract: Automatic software test data generation is the process of generating a set of test cases for a given program which can achieve a high code coverage. Genetic algorithms (GAs) and particle swarm optimisation (PSO) can automatically evolve a set of test data, but the traditional representation in GAs and PSO produces solutions with a single set of data cases, which may not achieve good performance on programs with many complex conditions. This paper proposes a multi-vector representation in GAs and PSO, which can generate multiple sets of data cases in a single run, to generate test data for complex test programs. Experiments have been conducted to examine and compare the performance of GAs and PSO on six commonly used benchmark test programs and three newly developed programs with a relatively large number of complex conditions. The experimental results show that the proposed multi-vector representation can improve the performance of GAs and PSO on all the nine tested programs, achieving the optimal 100% code coverage on the relatively easy programs. PSO outperforms GAs in terms of both the code coverage and the computational efficiency, especially on the hard programs.
    BibTeX:
    @inproceedings{KoleejanXZ15,
      author = {Chahine Koleejan and Bing Xue and Mengjie Zhang},
      title = {Code Coverage Optimisation in Genetic Algorithms and Particle Swarm Optimisation for Automatic Software Test Data Generation},
      booktitle = {Proceedings of the IEEE Congress on Evolutionary Computation (CEC '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {1204-1211},
      address = {Sendai, Japan},
      month = {25-28 May},
      doi = {http://dx.doi.org/10.1109/CEC.2015.7257026}
    }
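    The proposed multi-vector representation can be pictured as one individual carrying several input vectors whose union of covered branches defines its fitness, so a single individual can satisfy mutually exclusive conditions. A toy version with hand-made branch predicates (real fitness would come from instrumented program executions):

    def suite_coverage(individual, branches):
        """Fraction of branches covered by the union of an individual's vectors."""
        covered = set()
        for vector in individual:
            for branch, predicate in branches.items():
                if predicate(vector):
                    covered.add(branch)
        return len(covered) / len(branches)

    # Hypothetical program with mutually exclusive branches.
    branches = {
        "b1_true":  lambda v: v[0] == 10,
        "b1_false": lambda v: v[0] != 10,
        "b2_true":  lambda v: v[0] == 10 and v[1] < 0,
    }
    individual = [[10, -3], [7, 2]]                # two test vectors, one individual
    print(suite_coverage(individual, branches))    # -> 1.0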
    					
    2015.12.09 Patipat Konsaard & Lachana Ramingwong Total Coverage Based Regression Test Case Prioritization using Genetic Algorithm 2015 Proceedings of the 12th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON '15), pp. 1-6, Hua Hin Thailand, 24-27 June   Inproceedings Testing and Debugging
    Abstract: Regression testing is performed to ensure that a program that was changed is still working. Changes introduced to a software product often come with defects, so additional test cases are required; one of the main challenges of regression testing is test case prioritization, which could reduce the time, effort and budget needed to retest the software. Former studies in test case prioritization confirm the benefits of prioritization techniques. Most prioritization techniques are concerned with choosing test cases based on their ability to cover more faults. Other techniques aim to maximize code coverage. Thus, the test cases selected should secure the total coverage to assure the adequacy of software testing. In this paper, we present an algorithm to prioritize test cases based on total coverage using a modified genetic algorithm. Its performance, in terms of the average percentage of conditions covered and execution time, is compared with five other approaches.
    BibTeX:
    @inproceedings{KonsaardR15,
      author = {Patipat Konsaard and Lachana Ramingwong},
      title = {Total Coverage Based Regression Test Case Prioritization using Genetic Algorithm},
      booktitle = {Proceedings of the 12th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {1-6},
      address = {Hua Hin, Thailand},
      month = {24-27 June},
      doi = {http://dx.doi.org/10.1109/ECTICon.2015.7207103}
    }
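    Illustrative sketch (Python; coverage data invented, and the fitness is an APFD-style stand-in for the paper's average percentage of conditions covered): individuals are permutations of the test suite, and orderings that reach every condition early score higher.
      import random

      coverage = {'t1': {'c1', 'c2'}, 't2': {'c3'}, 't3': {'c1', 'c4'},
                  't4': {'c2', 'c3', 'c5'}, 't5': {'c5'}}
      conditions = set().union(*coverage.values())

      def fitness(order):
          first = {}
          for pos, t in enumerate(order, 1):
              for c in coverage[t]:
                  first.setdefault(c, pos)       # first position covering c
          n, m = len(order), len(conditions)
          return 1 - sum(first.values()) / (n * m) + 1 / (2 * n)

      def swap(order):
          a, b = random.sample(range(len(order)), 2)
          child = order[:]
          child[a], child[b] = child[b], child[a]
          return child

      pop = [random.sample(list(coverage), len(coverage)) for _ in range(20)]
      for _ in range(100):
          pop.sort(key=fitness, reverse=True)
          pop = pop[:5] + [swap(random.choice(pop[:5])) for _ in range(15)]
      print(pop[0], round(fitness(pop[0]), 3))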
    					
    2016.02.19 Joseph Krall, Tim Menzies & Misty Davies GALE: Geometric Active Learning for Search-Based Software Engineering 2015 IEEE Transactions on Software Engineering, Vol. 41(10), pp. 1001-1018, October   Article
    Abstract: Multi-objective evolutionary algorithms (MOEAs) help software engineers find novel solutions to complex problems. When automatic tools explore too many options, they are slow to use and hard to comprehend. GALE is a near-linear time MOEA that builds a piecewise approximation to the surface of best solutions along the Pareto frontier. For each piece, GALE mutates solutions towards the better end. In numerous case studies, GALE finds comparable solutions to standard methods (NSGA-II, SPEA2) using far fewer evaluations (e.g. 20 evaluations, not 1,000). GALE is recommended when a model is expensive to evaluate, or when some audience needs to browse and understand how an MOEA has made its conclusions.
    BibTeX:
    @article{KrallMD15,
      author = {Joseph Krall and Tim Menzies and Misty Davies},
      title = {GALE: Geometric Active Learning for Search-Based Software Engineering},
      journal = {IEEE Transactions on Software Engineering},
      year = {2015},
      volume = {41},
      number = {10},
      pages = {1001-1018},
      month = {October},
      doi = {http://dx.doi.org/10.1109/TSE.2015.2432024}
    }
    					
    2016.02.12 Krzysztof Krawiec, Jerry Swan & Una-May O’Reilly Behavioral Program Synthesis: Insights and Prospects 2015 Genetic Programming Theory and Practice XIII   Incollection
    Abstract: Genetic programming (GP) is a stochastic, iterative generate-and-test approach to synthesizing programs from tests, i.e. examples of the desired input-output mapping. The number of passed tests, or the total error in continuous domains, is a natural objective measure of a program’s performance and a common yardstick when experimentally comparing algorithms. In GP, it is also by default used to guide the evolutionary search process. An assumption that an objective function should also be an efficient ‘search driver’ is common for all metaheuristics, such as the evolutionary algorithms which GP is a member of. Programs are complex combinatorial structures that exhibit even more complex input-output behavior, and in this chapter we discuss why this complexity cannot be effectively reflected by a single scalar objective. In consequence, GP algorithms are systemically ‘underinformed’ about the characteristics of programs they operate on, and pay for this with unsatisfactory performance and limited scalability. This chapter advocates behavioral program synthesis, where programs are characterized by informative execution traces that enable multifaceted evaluation and substantially change the roles of components in an evolutionary infrastructure. We provide a unified perspective on past work in this area, discuss the consequences of the behavioral viewpoint, and outline future avenues for program synthesis and the wider application areas that lie beyond.
    BibTeX:
    @incollection{KrawiecSO15,
      author = {Krzysztof Krawiec and Jerry Swan and Una-May O’Reilly},
      title = {Behavioral Program Synthesis: Insights and Prospects},
      booktitle = {Genetic Programming Theory and Practice XIII},
      publisher = {Springer},
      year = {2015},
      url = {http://www.cs.put.poznan.pl/kkrawiec/wiki/uploads/Research/2015GPTP.pdf}
    }
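    Illustrative sketch (Python; one possible instantiation of behaviour-aware evaluation, not the chapter's exact proposal): where a scalar objective collapses per-test outcomes into one number, lexicase-style selection filters candidates test by test, so programs with identical pass counts but different behaviours remain distinguishable.
      import random

      def lexicase_select(population, outcomes):
          # outcomes[p][t] is True when program p passes test t.
          tests = list(range(len(next(iter(outcomes.values())))))
          random.shuffle(tests)
          pool = list(population)
          for t in tests:
              passing = [p for p in pool if outcomes[p][t]]
              if passing:
                  pool = passing
              if len(pool) == 1:
                  break
          return random.choice(pool)

      outcomes = {'p1': [True, True, False, False],
                  'p2': [False, False, True, True],
                  'p3': [True, False, True, False]}
      # All three programs pass 2 of 4 tests, so a scalar objective cannot
      # tell them apart; lexicase keeps selecting different specialists.
      print([lexicase_select(list(outcomes), outcomes) for _ in range(5)])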
    					
    2016.03.08 Robert Lagerström, Pontus Johnson & Mathias Ekstedt Search-Based Design of Large Software Systems-of-Systems 2015 Proceedings of IEEE/ACM 3rd International Workshop on Software Engineering for Systems-of-Systems (SESoS '15), pp. 44-47, Florence Italy, 17-17 May   Inproceedings
    Abstract: This work in progress paper presents the foundation for an Automatic Designer of large software systems-of-systems. The core formalism for the Automatic Designer is UML. The Automatic Designer extends UML with a fitness function, which uses analysis of non-functional requirements, utility theory, and stakeholder requirements, as the basis for its design suggestions. This extension logic is formalized using an OCL-based Predictive, Probabilistic Architecture Modeling Framework (called P2AMF). A set of manipulation operators is used on the UML model in order to modify it. Then, from a component library (with OTS products), new components will be introduced to the design. Using operators, a search algorithm will look for an optimal solution.
    BibTeX:
    @inproceedings{LagerstromJE15,
      author = {Robert Lagerström and Pontus Johnson and Mathias Ekstedt},
      title = {Search-Based Design of Large Software Systems-of-Systems},
      booktitle = {Proceedings of IEEE/ACM 3rd International Workshop on Software Engineering for Systems-of-Systems (SESoS '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {44-47},
      address = {Florence, Italy},
      month = {17-17 May},
      doi = {http://dx.doi.org/10.1109/SESoS.2015.15}
    }
    					
    2015.08.07 Jason Landsborough, Stephen Harding & Sunny Fugate Removing the Kitchen Sink from Software 2015 Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15) - the 1st International Genetic Improvement Workshop (GI '15), pp. 833-838, Madrid Spain, 12-12 July   Inproceedings
    Abstract: We would all benefit if software were slimmer, thinner, and generally only did what we needed and nothing more. To this end, our research team has been exploring methods for removing unused and undesirable features from compiled programs. Our primary goal is to improve software security by removing rarely used features in order to decrease a program's attack surface. We describe two different approaches for "thinning" binary images of compiled programs. The first approach removes specific program features using dynamic tracing as a guide. This approach is safer than many alternatives, but is limited to removing code which is reachable in a trace when an undesirable feature is enabled. The second approach uses a genetic algorithm (GA) to mutate a program until a suitable variant is found. Our GA-based approach can potentially remove any code that is not strictly required for proper execution, but may break program semantics in unpredictable ways. We show results of these approaches on a simple program and real-world software and explore some of the implications for software security.
    BibTeX:
    @inproceedings{LandsboroughHF15,
      author = {Jason Landsborough and Stephen Harding and Sunny Fugate},
      title = {Removing the Kitchen Sink from Software},
      booktitle = {Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15) - the 1st International Genetic Improvement Workshop (GI '15)},
      publisher = {ACM},
      year = {2015},
      pages = {833-838},
      address = {Madrid, Spain},
      month = {12-12 July},
      doi = {http://dx.doi.org/10.1145/2739482.2768424}
    }
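    Illustrative sketch (Python; the 'program', its units and tests are toy stand-ins for the paper's binary-level mutation): a variant is a bit-mask over code units, and fitness demands that the retained units still pass the regression tests while rewarding every unit removed.
      import random

      units = ['parse', 'validate', 'render', 'telemetry',
               'easter_egg', 'update_check']
      required = {'parse', 'validate', 'render'}   # behaviour tests exercise

      def fitness(mask):
          kept = {u for u, bit in zip(units, mask) if bit}
          if not required <= kept:
              return -1                 # broken variants rank last
          return mask.count(0)          # reward removed code

      def mutate(mask):
          i = random.randrange(len(mask))
          return mask[:i] + [1 - mask[i]] + mask[i + 1:]

      pop = [[1] * len(units) for _ in range(10)]  # start from full program
      for _ in range(100):
          pop.sort(key=fitness, reverse=True)
          pop = pop[:3] + [mutate(random.choice(pop[:3])) for _ in range(7)]
      print([u for u, bit in zip(units, pop[0]) if bit])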
    					
    2015.11.06 William B. Langdon Genetic Improvement of Software for Multiple Objectives 2015 Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15), pp. 12-28, Bergamo Italy, 5-7 September   Inproceedings
    Abstract: Genetic programming (GP) can increase computer programs’ functional and non-functional performance. It can automatically port or refactor legacy code written by domain experts. Working with programmers, it can grow and graft (GGGP) new functionality into legacy systems and parallel Bioinformatics GPGPU code. We review Genetic Improvement (GI) and SBSE research on evolving software.
    BibTeX:
    @inproceedings{Langdon15,
      author = {William B. Langdon},
      title = {Genetic Improvement of Software for Multiple Objectives},
      booktitle = {Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15)},
      publisher = {Springer},
      year = {2015},
      pages = {12-28},
      address = {Bergamo, Italy},
      month = {5-7 September},
      doi = {http://dx.doi.org/10.1007/978-3-319-22183-0_2}
    }
    					
    2015.12.08 William B. Langdon Genetically Improved Software 2015, pp. 181-220   Inbook
    Abstract: Genetic programming (GP) can dramatically increase computer programs’ performance. It can automatically port or refactor legacy code written by domain experts and specialist software engineers. After reviewing SBSE research on evolving software we describe an open source parallel StereoCamera image processing application in which GI optimisation gave a seven fold speedup on nVidia Tesla GPU hardware not even imagined when the original state-of-the-art CUDA GPGPU C++ code was written.
    BibTeX:
    @inbook{Langdon15b,
      author = {William B. Langdon},
      title = {Genetically Improved Software},
      publisher = {Springer},
      year = {2015},
      pages = {181-220},
      doi = {http://dx.doi.org/10.1007/978-3-319-20883-1_8}
    }
    					
    2016.02.02 William B. Langdon Performance of Genetic Programming Optimised Bowtie2 on Genome Comparison and Analytic Testing (GCAT) Benchmarks 2015 BioData Mining, Vol. 8(1), pp. 1-7, June   Article
    Abstract: Background Genetic studies are increasingly based on short noisy next generation scanners. Typically complete DNA sequences are assembled by matching short NextGen sequences against reference genomes. Despite considerable algorithmic gains since the turn of the millennium, matching both single ended and paired end strings to a reference remains computationally demanding. Further tailoring Bioinformatics tools to each new task or scanner remains highly skilled and labour intensive. With this in mind, we recently demonstrated a genetic programming based automated technique which generated a version of the state-of-the-art alignment tool Bowtie2 which was considerably faster on short sequences produced by a scanner at the Broad Institute and released as part of The Thousand Genome Project. Results Bowtie2GP and the original Bowtie2 release were compared on bioplanet’s GCAT synthetic benchmarks. Bowtie2GP enhancements were also applied to the latest Bowtie2 release (2.2.3, 29 May 2014) and retained both the GP and the manually introduced improvements. Conclusions On both single ended and paired-end synthetic next generation DNA sequence GCAT benchmarks Bowtie2GP runs up to 45% faster than Bowtie2. The loss in accuracy can be as little as 0.2–0.5% but up to 2.5% for longer sequences.
    BibTeX:
    @article{Langdon15c,
      author = {William B. Langdon},
      title = {Performance of Genetic Programming Optimised Bowtie2 on Genome Comparison and Analytic Testing (GCAT) Benchmarks},
      journal = {BioData Mining},
      year = {2015},
      volume = {8},
      number = {1},
      pages = {1-7},
      month = {June},
      doi = {http://dx.doi.org/10.1186/s13040-014-0034-0}
    }
    					
    2015.08.07 William B. Langdon & Mark Harman Grow and Graft a Better CUDA pknotsRG for RNA Pseudoknot Free Energy Calculation 2015 Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15) - the 1st International Genetic Improvement Workshop (GI '15), pp. 805-810, Madrid Spain, 12-12 July   Inproceedings
    Abstract: Grow and graft genetic programming greatly improves GPGPU dynamic programming software for predicting the minimum binding energy for folding of RNA molecules. The parallel code inserted into the existing CUDA version of pknots was grown using a BNF grammar. On an nVidia Tesla K40 GPU GGGP gives a speed up of up to 10000 times.
    BibTeX:
    @inproceedings{LangdonH15,
      author = {William B. Langdon and Mark Harman},
      title = {Grow and Graft a Better CUDA pknotsRG for RNA Pseudoknot Free Energy Calculation},
      booktitle = {Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15) - the 1st International Genetic Improvement Workshop (GI '15)},
      publisher = {ACM},
      year = {2015},
      pages = {805-810},
      address = {Madrid, Spain},
      month = {12-12 July},
      doi = {http://dx.doi.org/10.1145/2739482.2768418}
    }
    					
    2016.02.02 William B. Langdon & Mark Harman Optimizing Existing Software with Genetic Programming 2015 IEEE Transactions on Evolutionary Computation, Vol. 19(1), pp. 118-135, February   Article
    Abstract: We show that the genetic improvement of programs (GIP) can scale by evolving increased performance in a widely-used and highly complex 50000 line system. Genetic improvement of software for multiple objective exploration (GISMOE) found code that is 70 times faster (on average) and yet is at least as good functionally. Indeed, it even gives a small semantic gain.
    BibTeX:
    @article{LangdonH15b,
      author = {William B. Langdon and Mark Harman},
      title = {Optimizing Existing Software with Genetic Programming},
      journal = {IEEE Transactions on Evolutionary Computation},
      year = {2015},
      volume = {19},
      number = {1},
      pages = {118-135},
      month = {February},
      doi = {http://dx.doi.org/10.1109/TEVC.2013.2281544}
    }
    					
    2016.02.02 William B. Langdon & Brian Yee Hong Lam Genetically Improved BarraCUDA 2015 (RN/15/03), May   Techreport
    Abstract: BarraCUDA is a C program which uses the BWA algorithm in parallel with nVidia CUDA to align short next generation DNA sequences against a reference genome. The genetically improved (GI) code is up to three times faster on short paired end reads from The 1000 Genomes Project and 60% more accurate on a short BioPlanet.com GCAT alignment benchmark. GPGPU Barracuda running on a single K80 Tesla GPU can align short paired end nextgen sequences up to ten times faster than bwa on a 12 core CPU.
    BibTeX:
    @techreport{LangdonL15,
      author = {William B. Langdon and Brian Yee Hong Lam},
      title = {Genetically Improved BarraCUDA},
      year = {2015},
      number = {RN/15/03},
      month = {May},
      url = {http://www.cs.ucl.ac.uk/fileadmin/UCL-CS/research/Research_Notes/rn-15-03.pdf}
    }
    					
    2015.08.07 William B. Langdon, Brian Yee Hong Lam, Justyna Petke & Mark Harman Improving CUDA DNA Analysis Software with Genetic Programming 2015 Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15), pp. 1063-1070, Madrid Spain, 11-15 July   Inproceedings
    Abstract: We genetically improve BarraCUDA using a BNF grammar incorporating C scoping rules with GP. Barracuda maps next generation DNA sequences to the human genome using the Burrows-Wheeler algorithm (BWA) on nVidia Tesla parallel graphics hardware (GPUs). GI using phenotypic tabu search with manually grown code can graft new features giving more than 100 fold speed up on a performance critical kernel without loss of accuracy.
    BibTeX:
    @inproceedings{LangdonLPH15,
      author = {William B. Langdon and Brian Yee Hong Lam and Justyna Petke and Mark Harman},
      title = {Improving CUDA DNA Analysis Software with Genetic Programming},
      booktitle = {Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15)},
      publisher = {ACM},
      year = {2015},
      pages = {1063-1070},
      address = {Madrid, Spain},
      month = {11-15 July},
      doi = {http://dx.doi.org/10.1145/2739480.2754652}
    }
    					
    2016.03.08 Maurizio Leotta, Andrea Stocco, Filippo Ricca & Paolo Tonella Meta-heuristic Generation of Robust XPath Locators for Web Testing 2015 IEEE/ACM 8th International Workshop on Search-Based Software Testing (SBST '15), pp. 36-39, Florence Italy, 18-19 May   Inproceedings Testing and Debugging
    Abstract: Test scripts used for web testing rely on DOM locators, often expressed as XPaths, to identify the active web page elements and the web page data to be used in assertions. When the web application evolves, the major cost incurred for the evolution of the test scripts is due to broken locators, which fail to locate the target element in the new version of the software. We formulate the problem of automatically generating robust XPath locators as a graph exploration problem, for which we provide an optimal, greedy algorithm. Since such an algorithm has exponential time and space complexity, we also present a genetic algorithm.
    BibTeX:
    @inproceedings{LeottaSRT15,
      author = {Maurizio Leotta and Andrea Stocco and Filippo Ricca and Paolo Tonella},
      title = {Meta-heuristic Generation of Robust XPath Locators for Web Testing},
      booktitle = {IEEE/ACM 8th International Workshop on Search-Based Software Testing (SBST '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {36-39},
      address = {Florence, Italy},
      month = {18-19 May},
      doi = {http://dx.doi.org/10.1109/SBST.2015.16}
    }
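    Illustrative sketch (Python; DOMs are modelled as attribute dictionaries and the search is plain random sampling, much simpler than the paper's XPath/graph formulation): a candidate locator is a set of attribute predicates, and fitness prefers locators that uniquely match the target element and keep matching after the page evolves.
      import random

      old_dom = [{'id': 'btn-1', 'class': 'btn', 'text': 'Save'},
                 {'id': 'btn-2', 'class': 'btn', 'text': 'Cancel'}]
      new_dom = [{'id': 'btn-9', 'class': 'btn primary', 'text': 'Save'},
                 {'id': 'btn-2', 'class': 'btn', 'text': 'Cancel'}]
      target_old, target_new = old_dom[0], new_dom[0]

      def matches(locator, element):
          return all(v in element.get(a, '') for a, v in locator)

      def fitness(locator):
          unique = (sum(matches(locator, e) for e in old_dom) == 1
                    and matches(locator, target_old))
          robust = matches(locator, target_new)    # survives the change
          return 2 * unique + robust - 0.1 * len(locator)

      predicates = list(target_old.items())
      best = max((frozenset(random.sample(predicates, k))
                  for _ in range(200) for k in (1, 2)), key=fitness)
      print(sorted(best), fitness(best))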
    					
    2015.11.06 Lingbo Li, Mark Harman, Fan Wu & Yuanyuan Zhang SBSelector: Search Based Component Selection for Budget Hardware 2015 Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15), pp. 289-294, Bergamo Italy, 5-7 September   Inproceedings
    Abstract: Determining which functional components should be integrated into a large system is a challenging task when hardware constraints, such as available memory, are taken into account. We formulate such a problem as a multi-objective component selection problem, which searches for feature subsets that balance the provision of maximal functionality at minimal memory resource cost. We developed a search-based component selection tool, and applied it to the KDE-based application, Kate, to find a set of Kate instantiations that balance functionalities and memory consumption. Our results report that, compared to the best attainment of random search, our approach can reduce memory consumption by up to 23.70% for the same number of components. Compared to greedy search, the memory reduction can be up to 19.04%. SBSelector finds an instantiation of Kate that provides 16 more components, while only increasing memory by 1.7%.
    BibTeX:
    @inproceedings{LiHWZ15,
      author = {Lingbo Li and Mark Harman and Fan Wu and Yuanyuan Zhang},
      title = {SBSelector: Search Based Component Selection for Budget Hardware},
      booktitle = {Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15)},
      publisher = {Springer},
      year = {2015},
      pages = {289-294},
      address = {Bergamo, Italy},
      month = {5-7 September},
      doi = {http://dx.doi.org/10.1007/978-3-319-22183-0_25}
    }
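    Illustrative sketch (Python; the component table is invented and random search plus a Pareto filter stands in for the paper's search machinery): component selection is treated as two objectives, maximising the number of selected components while minimising their total memory footprint.
      import random

      components = {f'c{i}': random.randint(1, 20) for i in range(30)}  # KB

      def objectives(sel):
          return (len(sel), sum(components[c] for c in sel))  # (max, min)

      def dominates(a, b):
          return a[0] >= b[0] and a[1] <= b[1] and a != b

      cands = {frozenset(random.sample(list(components),
                                       random.randint(0, 30)))
               for _ in range(500)}
      scored = {s: objectives(s) for s in cands}
      front = sorted({scored[s] for s in scored
                      if not any(dominates(scored[t], scored[s])
                                 for t in scored)})
      print(front)      # non-dominated (components, memory) trade-offs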
    					
    2015.08.07 Roberto E. Lopez-Herrejon, Lukas Linsbauer, Wesley Klewerton Guez Assunção, Stefan Fischer, Silvia Regina Vergilio & Alexander Egyed Genetic Improvement for Software Product Lines: An Overview and a Roadmap 2015 Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15) - the 1st International Genetic Improvement Workshop (GI '15), pp. 823-830, Madrid Spain, 12-12 July   Inproceedings
    Abstract: Software Product Lines (SPLs) are families of related software systems that provide different combinations of features. Extensive research and application attest to the significant economical and technological benefits of employing SPL practices. However, there are still several challenges that remain open. Salient among them is reverse engineering SPLs from existing variants of software systems and their subsequent evolution. In this paper, we aim at sketching connections between research on these open SPL challenges and ongoing work on Genetic Improvement. Our hope is that by drawing such connections we can spark the interest of both research communities on the exciting synergies at the intersection of these subject areas.
    BibTeX:
    @inproceedings{Lopez-HerrejonLAFVE15,
      author = {Roberto E. Lopez-Herrejon and Lukas Linsbauer and Wesley Klewerton Guez Assunção and Stefan Fischer and Silvia Regina Vergilio and Alexander Egyed},
      title = {Genetic Improvement for Software Product Lines: An Overview and a Roadmap},
      booktitle = {Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15) - the 1st International Genetic Improvement Workshop (GI '15)},
      publisher = {ACM},
      year = {2015},
      pages = {823-830},
      address = {Madrid, Spain},
      month = {12-12 July},
      doi = {http://dx.doi.org/10.1145/2739482.2768422}
    }
    					
    2015.02.05 Roberto E. Lopez-Herrejon, Lukas Linsbauer & Alexander Egyed A Systematic Mapping Study of Search-Based Software Engineering for Software Product Lines 2015 Information and Software Technology, Vol. 61, pp. 33-51, May   Article
    Abstract: Context Search-Based Software Engineering (SBSE) is an emerging discipline that focuses on the application of search-based optimization techniques to software engineering problems. Software Product Lines (SPLs) are families of related software systems whose members are distinguished by the set of features each one provides. SPL development practices have proven benefits such as improved software reuse, better customization, and faster time to market. A typical SPL usually involves a large number of systems and features, a fact that makes them attractive for the application of SBSE techniques which are able to tackle problems that involve large search spaces. Objective The main objective of our work is to identify the quantity and the type of research on the application of SBSE techniques to SPL problems. More concretely, the SBSE techniques that have been used and at what stage of the SPL life cycle, the type of case studies employed and their empirical analysis, and the fora where the research has been published. Method A systematic mapping study was conducted with five research questions and assessed 77 publications from 2001, when the term SBSE was coined, until 2014. Conclusions Our study attested the great synergy existing between both fields, corroborated the increasing and ongoing interest in research on the subject, and revealed challenging open research questions.
    BibTeX:
    @article{Lopez-HerrejonLE15,
      author = {Roberto E. Lopez-Herrejon and Lukas Linsbauer and Alexander Egyed},
      title = {A Systematic Mapping Study of Search-Based Software Engineering for Software Product Lines},
      journal = {Information and Software Technology},
      year = {2015},
      volume = {61},
      pages = {33-51},
      month = {May},
      doi = {http://dx.doi.org/10.1016/j.infsof.2015.01.008}
    }
    					
    2016.03.08 Lei Ma, Cyrille Artho, Cheng Zhang, Hiroyuki Sato, Masami Hagiya, Yoshinori Tanabe & Mitsuharu Yamamoto GRT at the SBST 2015 Tool Competition 2015 IEEE/ACM 8th International Workshop on Search-Based Software Testing (SBST '15), pp. 48-51, Florence Italy, 18-19 May   Inproceedings Testing and Debugging
    Abstract: GRT (Guided Random Testing) is an automatic test generation tool for Java code, which leverages static and dynamic program analysis to guide run-time test generation. In this paper, we summarize competition results and experiences of GRT in participating in SBST 2015, where GRT ranked first with a score of 203.73 points over 63 Java classes from 10 packages of 9 open-source software projects.
    BibTeX:
    @inproceedings{MaAZSHTY15,
      author = {Lei Ma and Cyrille Artho and Cheng Zhang and Hiroyuki Sato and Masami Hagiya and Yoshinori Tanabe and Mitsuharu Yamamoto},
      title = {GRT at the SBST 2015 Tool Competition},
      booktitle = {IEEE/ACM 8th International Workshop on Search-Based Software Testing (SBST '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {48-51},
      address = {Florence, Italy},
      month = {18-19 May},
      doi = {http://dx.doi.org/10.1109/SBST.2015.19}
    }
    					
    2016.02.27 Andréa Magalhães Magdaleno, Marcio de Oliveira Barros, Cláudia Maria Lima Werner, Renata Mendes de Araujo & Carlos Freud Alves Batista Collaboration Optimization in Software Process Composition 2015 Journal of Systems and Software, Vol. 103, May   Article
    Abstract: Purpose: The purpose of this paper is to describe an optimization approach to maximize collaboration in software process composition. The research question is: how to compose a process for a specific software development project context aiming to maximize collaboration among team members? The optimization approach uses heuristic search algorithms to navigate the solution space and look for acceptable solutions. Design/methodology/approach: The process composition approach was evaluated through an experimental study conducted in the context of a large oil company in Brazil. The objective was to evaluate the feasibility of composing processes for three software development projects. We have also compared genetic algorithm (GA) and hill climbing (HC) algorithms driving the optimization with a simple random search (RS) in order to determine which would be more effective in addressing the problem. In addition, human specialist point-of-view was explored to verify if the composed processes were in accordance with his/her expectations regarding size, complexity, diversity, and reasonable sequence of components. Findings: The main findings indicate that GA is more effective (best results regarding the fitness function) than HC and RS in the search of solutions for collaboration optimization in software process composition for large instances. However, all algorithms are competitive for small instances and even brute force can be a feasible alternative in such a context. These SBSE results were complemented by the feedback given by specialist, indicating his satisfaction with the correctness, diversity, adherence to the project context, and support to the project manager during the decision making in process composition. Research limitations: This work was evaluated in the context of a single company and used only three project instances. Due to confidentiality restrictions, the data describing these instances could not be disclosed to be used in other research works. The reduced size of the sample prevents generalization for other types of projects or different contexts. Implications: This research is important for practitioners who are facing challenges to handle diversity in software process definition, since it proposes an approach based on context, reuse and process composition. It also contributes to research on collaboration by presenting a collaboration management solution (COMPOOTIM) that includes both an approach to introduce collaboration in organizations through software processes and a collaboration measurement strategy. From the standpoint of software engineering looking for collaborative solutions in distributed software development, free/open source software, agile, and ecosystems initiatives, the results also indicate how to increase collaboration in software development. Originality/value: This work proposes a systematic strategy to manage collaboration in software development process composition. Moreover, it brings together a mix of computer-oriented and human-oriented studies on the search-based software engineering (SBSE) research area. Finally, this work expands the body of knowledge in SBSE to the field of software process which has not been properly explored by former research.
    BibTeX:
    @article{MagdalenoBWAB15,
      author = {Andréa Magalhães Magdaleno and Marcio de Oliveira Barros and Cláudia Maria Lima Werner and Renata Mendes de Araujo and Carlos Freud Alves Batista},
      title = {Collaboration Optimization in Software Process Composition},
      journal = {Journal of Systems and Software},
      year = {2015},
      volume = {103},
      month = {May},
      doi = {http://dx.doi.org/10.1016/j.jss.2014.11.036}
    }
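    Illustrative sketch (Python; a toy bit-string fitness stands in for the paper's collaboration measure over process compositions): the comparison made in the study, hill climbing against random search under an equal budget of fitness evaluations, can be reproduced in a few lines.
      import random

      N = 40
      w = [random.random() for _ in range(N)]
      fitness = lambda s: (sum(x for x, bit in zip(w, s) if bit)
                           - 0.5 * abs(s.count(1) - N // 2))

      def hill_climb(evals=2000):
          cur = [random.randint(0, 1) for _ in range(N)]
          cur_f = fitness(cur)
          for _ in range(evals):
              i = random.randrange(N)
              nxt = cur[:]
              nxt[i] = 1 - nxt[i]
              nxt_f = fitness(nxt)
              if nxt_f >= cur_f:
                  cur, cur_f = nxt, nxt_f
          return cur_f

      def random_search(evals=2000):
          return max(fitness([random.randint(0, 1) for _ in range(N)])
                     for _ in range(evals))

      print('HC:', round(hill_climb(), 2), 'RS:', round(random_search(), 2))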
    					
    2015.12.09 Surendra Mahajan, Shashank D. Joshi & V. Khanaa Component-Based Software System Test Case Prioritization with Genetic Algorithm Decoding Technique using Java Platform 2015 Proceedings of International Conference on Computing Communication Control and Automation (ICCUBEA '15), pp. 847-851, Pune India, 26-27 February   Inproceedings Testing and Debugging
    Abstract: Test case prioritization schedules test cases in an order that increases their effectiveness in achieving some performance goals. Among the most important testing goals is a fast rate of fault detection: test cases ought to run in an order that maximizes the likelihood of fault detection and that detects the most serious issues at an early stage of the testing life cycle. In this paper, we develop and motivate a Component-Based Software testing prioritization framework which aims to uncover more severe bugs at an early stage and to enhance the quality of the software product deliverable, utilizing a Genetic Algorithm (GA) with a Java decoding technique. For this, we propose a set of prioritization keys to guide the proposed Component-Based Software Java framework. In our proposed method, we refer to these as Prioritization Keys (PK); they may include project size, code coverage, information flow, bug proneness and the impact of a fault or bug on the overall system. The integrity of these keys is measured with a key assessment metric, called KAM, which is also calculated. This paper demonstrates how software testing can be made more efficient by managing the data integrity factor to avoid major security issues. One of the main advantages of our approach is that domain-specific semantics can be integrated with data quality test case prioritization, making it possible to discover test feed data quality problems beyond conventional quality measures.
    BibTeX:
    @inproceedings{MahajanJK15,
      author = {Surendra Mahajan and Shashank D. Joshi and V. Khanaa},
      title = {Component-Based Software System Test Case Prioritization with Genetic Algorithm Decoding Technique using Java Platform},
      booktitle = {Proceedings of International Conference on Computing Communication Control and Automation (ICCUBEA '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {847-851},
      address = {Pune, India},
      month = {26-27 February},
      doi = {http://dx.doi.org/10.1109/ICCUBEA.2015.169}
    }
    					
    2015.12.08 Bogdan Marculescu, Robert Feldt, Richard Torkar & Simon Poulding An Initial Industrial Evaluation of Interactive Search-based Testing for Embedded Software 2015 Applied Soft Computing, Vol. 29, pp. 26-39, April   Article Testing and Debugging
    Abstract: Search-based software testing promises the ability to generate and evaluate large numbers of test cases at minimal cost. From an industrial perspective, this could enable an increase in product quality without a matching increase in the time and effort required to do so. Search-based software testing, however, is a set of quite complex techniques and approaches that do not immediately translate into a process for use with most companies. For example, even if engineers receive the proper education and training in these new approaches, it can be hard to develop a general fitness function that covers all contingencies. Furthermore, in industrial practice, the knowledge and experience of domain specialists are often key for effective testing and thus for the overall quality of the final software system. But it is not clear how such domain expertise can be utilized in a search-based system. This paper presents an interactive search-based software testing (ISBST) system designed to operate in an industrial setting and with the explicit aim of requiring only limited expertise in software testing. It uses SBST to search for test cases for an industrial software module, while also allowing domain specialists to use their experience and intuition to interactively guide the search. In addition to presenting the system, this paper reports on an evaluation of the system in a company developing a framework for embedded software controllers. A sequence of workshops provided regular feedback and validation for the design and improvement of the ISBST system. Once developed, the ISBST system was evaluated by four electrical and system engineers from the company (the ‘domain specialists’ in this context), who used the system to develop test cases for a commonly used controller module. As well as evaluating the utility of the ISBST system, the study generated interaction data that were used in subsequent laboratory experimentation to validate the underlying search-based algorithm in the presence of realistic, but repeatable, interactions. The results validate the importance of automated software testing tools in general, and search-based tools in particular, being able to leverage input from domain specialists while generating tests. Furthermore, the evaluation highlighted the benefits of using such an approach to explore areas that the current testing practices do not cover or cover insufficiently.
    BibTeX:
    @article{MarculescuFTP15,
      author = {Bogdan Marculescu and Robert Feldt and Richard Torkar and Simon Poulding},
      title = {An Initial Industrial Evaluation of Interactive Search-based Testing for Embedded Software},
      journal = {Applied Soft Computing},
      year = {2015},
      volume = {29},
      pages = {26-39},
      month = {April},
      doi = {http://dx.doi.org/10.1016/j.asoc.2014.12.025}
    }
    					
    2015.11.06 Alexandru Marginean, Earl T. Barr, Mark Harman & Yue Jia Automated Transplantation of Call Graph and Layout Features into Kate 2015 Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15), pp. 262-268, Bergamo Italy, 5-7 September   Inproceedings
    Abstract: We report the automated transplantation of two features currently missing from Kate: call graph generation and automatic layout for C programs, which have been requested by users on the Kate development forum. Our approach uses a lightweight annotation system with Search Based techniques augmented by static analysis for automated transplantation. The results are promising: on average, our tool requires 101 min of standard desktop machine time to transplant the call graph feature, and 31 min to transplant the layout feature. We repeated each experiment 20 times and validated the resulting transplants using unit, regression and acceptance test suites. In 34 of the 40 experiments conducted, our search-based autotransplantation tool, μScalpel, was able to successfully transplant the new functionality, passing all tests.
    BibTeX:
    @inproceedings{MargineanBHJ15,
      author = {Alexandru Marginean and Earl T. Barr and Mark Harman and Yue Jia},
      title = {Automated Transplantation of Call Graph and Layout Features into Kate},
      booktitle = {Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15)},
      publisher = {Springer},
      year = {2015},
      pages = {262-268},
      address = {Bergamo, Italy},
      month = {5-7 September},
      doi = {http://dx.doi.org/10.1007/978-3-319-22183-0_21}
    }
    					
    2015.11.06 Thainá Mariani, Silvia Regina Vergilio & Thelma Elita Colanzi Optimizing Aspect-Oriented Product Line Architectures with Search-Based Algorithms 2015 Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15), pp. 173-187, Bergamo Italy, 5-7 September   Inproceedings
    Abstract: The adoption of Aspect-Oriented Product Line Architectures (AOPLA) brings many benefits to the software product line design. It contributes to improve modularity, stability and to reduce feature tangling and scattering. Improvements can also be obtained with a search-based and multi-objective approach, such as MOA4PLA, which generates PLAs with the best trade-off between different measures, such as cohesion, coupling and feature modularization. However, MOA4PLA operators may violate the aspect-oriented modeling (AOM) rules, impacting negatively on the architecture understanding. In order to solve this problem, this paper introduces a more adequate representation for AOPLAs and a set of search operators, called SO4ASPAR (Search Operators for Aspect-Oriented Architectures). Results from an empirical evaluation show that the proposed operators yield better solutions regarding the fitness values, besides preserving the AOM rules.
    BibTeX:
    @inproceedings{MarianiVC15,
      author = {Thainá Mariani and Silvia Regina Vergilio and Thelma Elita Colanzi},
      title = {Optimizing Aspect-Oriented Product Line Architectures with Search-Based Algorithms},
      booktitle = {Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15)},
      publisher = {Springer},
      year = {2015},
      pages = {173-187},
      address = {Bergamo, Italy},
      month = {5-7 September},
      doi = {http://dx.doi.org/10.1007/978-3-319-22183-0_12}
    }
    					
    2016.03.08 Thainá Mariani, Silvia Regina Vergilio & Thelma Elita Colanzi Search Based Design of Layered Product Line Architectures 2015 Proceedings of IEEE 39th Annual Computer Software and Applications Conference (COMPSAC '15), pp. 270-275, Taichung Taiwan, 1-5 July   Inproceedings
    Abstract: The adoption of the layered architectural style can improve the Product Line Architecture (PLA) design by providing a better organization of the elements, flexibility and maintainability. Search based optimization approaches can also benefit the PLA design, by generating PLA alternatives associated with the best trade-offs between different measures (fitness) related to cohesion, coupling and feature modularization. However, the usage of existing search operators changes the PLA organization, and consequently may violate layered rules, impacting negatively on architecture understanding. In order to solve this problem, this work introduces search operators that consider the layered architectural style rules. Results show that the proposed operators contribute to obtaining better solutions, maintaining the adopted style, with better or equivalent fitness values.
    BibTeX:
    @inproceedings{MarianiVC15b,
      author = {Thainá Mariani and Silvia Regina Vergilio and Thelma Elita Colanzi},
      title = {Search Based Design of Layered Product Line Architectures},
      booktitle = {Proceedings of IEEE 39th Annual Computer Software and Applications Conference (COMPSAC '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {270-275},
      address = {Taichung, Taiwan},
      month = {1-5 July},
      doi = {http://dx.doi.org/10.1109/COMPSAC.2015.30}
    }
    					
    2016.02.17 Matias Martinez & Martin Monperrus Mining Software Repair Models for Reasoning on the Search Space of Automated Program Fixing 2015 Empirical Software Engineering, Vol. 20(1), pp. 176-205, February   Article
    Abstract: This paper is about understanding the nature of bug fixing by analyzing thousands of bug fix transactions of software repositories. It then places this learned knowledge in the context of automated program repair. We give extensive empirical results on the nature of human bug fixes at a large scale and a fine granularity with abstract syntax tree differencing. We set up mathematical reasoning on the search space of automated repair and the time to navigate through it. By applying our method on 14 repositories of Java software and 89,993 versioning transactions, we show that not all probabilistic repair models are equivalent.
    BibTeX:
    @article{MartinezM15,
      author = {Matias Martinez and Martin Monperrus},
      title = {Mining Software Repair Models for Reasoning on the Search Space of Automated Program Fixing},
      journal = {Empirical Software Engineering},
      year = {2015},
      volume = {20},
      number = {1},
      pages = {176-205},
      month = {February},
      doi = {http://dx.doi.org/10.1007/s10664-013-9282-8}
    }
    					
    2015.11.06 Andrea Mattavelli, Alberto Goffi & Alessandra Gorla Synthesis of Equivalent Method Calls in Guava 2015 Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15), pp. 248-254, Bergamo Italy, 5-7 September   Inproceedings
    Abstract: We developed a search-based technique to automatically synthesize sequences of method calls that are functionally equivalent to a given target method. This paper presents challenges and results of applying our technique to Google Guava. Guava heavily uses Java generics, and the large number of classes, methods and parameter values required us to tune our technique to deal with a search space that is much larger than what we originally envisioned. We modified our technique to cope with such challenges. The evaluation of the improved version of our technique shows that we can synthesize 188 equivalent method calls for relevant components of Guava, outperforming by 86% the original version.
    BibTeX:
    @inproceedings{MattavelliGG15,
      author = {Andrea Mattavelli and Alberto Goffi and Alessandra Gorla},
      title = {Synthesis of Equivalent Method Calls in Guava},
      booktitle = {Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15)},
      publisher = {Springer},
      year = {2015},
      pages = {248-254},
      address = {Bergamo, Italy},
      month = {5-7 September},
      doi = {http://dx.doi.org/10.1007/978-3-319-22183-0_19}
    }
    					
    2015.08.07 Vojtech Mrazek, Zdenek Vasicek & Lukas Sekanina Evolutionary Approximation of Software for Embedded Systems: Median Function 2015 Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15) - the 1st International Genetic Improvement Workshop (GI '15), pp. 795-801, Madrid Spain, 12-12 July   Inproceedings
    Abstract: This paper deals with genetic programming-based improvement of non-functional properties of programs intended for low-cost microcontrollers. As the objective is to significantly reduce power consumption and execution time, the approximate computing scenario is considered in which occasional errors in results are acceptable. The method is based on Cartesian genetic programming and evaluated in the task of approximation of 9-input and 25-input median function. Resulting approximations show a significant improvement in the execution time and power consumption with respect to the accurate median function while the observed errors are moderate.
    BibTeX:
    @inproceedings{MrazekVS15,
      author = {Vojtech Mrazek and Zdenek Vasicek and Lukas Sekanina},
      title = {Evolutionary Approximation of Software for Embedded Systems: Median Function},
      booktitle = {Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15) - the 1st International Genetic Improvement Workshop (GI '15)},
      publisher = {ACM},
      year = {2015},
      pages = {795-801},
      address = {Madrid, Spain},
      month = {12-12 July},
      doi = {http://dx.doi.org/10.1145/2739482.2768416}
    }
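    Illustrative sketch (Python; the 'median of medians-of-three' shortcut below is a known inexact approximation used in place of the paper's CGP-evolved code): evaluating an approximate median means measuring how often, and by how much, it deviates from the exact function.
      import random
      import statistics

      def med3(a, b, c):
          return sorted((a, b, c))[1]

      def approx_median9(v):            # cheap but occasionally wrong
          return med3(med3(*v[0:3]), med3(*v[3:6]), med3(*v[6:9]))

      errors = []
      for _ in range(10000):
          v = [random.randint(0, 255) for _ in range(9)]
          errors.append(abs(approx_median9(v) - statistics.median(v)))

      exact = sum(e == 0 for e in errors) / len(errors)
      print(f'exact {exact:.1%} of the time, '
            f'mean abs error {statistics.mean(errors):.2f}')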
    					
    2016.03.08 Samia Naciri, Mohammed Abdou Janati Idrissi & Noureddine Kerzazi A Strategic Release Planning Model from TPM Point of View 2015 Proceedings of the 10th International Conference on Intelligent Systems: Theories and Applications (SITA '15), pp. 1-9, Rabat Morocco, 20-21 October   Inproceedings
    Abstract: Release planning is a tedious task, especially within the context of Third Party Application Maintenance (TPM). Software engineering research offers multiple methods and techniques applied to solve release planning problems. However, few of them offer pragmatic decision support dedicated to TPM managers. This research attempts to apply theory and methods from the Search-Based Software Engineering (SBSE) area, such as meta-heuristic techniques, to solve complex release planning problems faced by TPM organizations. The aim of this paper is to introduce a strategic release planning model based on several TPM factors and constraints, allowing the TPM manager to carry out an effective roadmap and industrialize the planning process of software maintenance.
    BibTeX:
    @inproceedings{NaciriJK15,
      author = {Samia Naciri and Mohammed Abdou Janati Idrissi and Noureddine Kerzazi},
      title = {A Strategic Release Planning Model from TPM Point of View},
      booktitle = {Proceedings of the 10th International Conference on Intelligent Systems: Theories and Applications (SITA '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {1-9},
      address = {Rabat, Morocco},
      month = {20-21 October},
      doi = {http://dx.doi.org/10.1109/SITA.2015.7358396}
    }
    					
    2016.03.09 Reetika Nagar, Arvind Kumar, Gaurav Pratap Singh & Sachin Kumar Test Case Selection and Prioritization using Cuckoos Search Algorithm 2015 Proceedings of the 1st International Conference on Futuristic trend in Computational Analysis and Knowledge Management (ABLAZE '15), pp. 283-288, Noida India, 25-27 February   Inproceedings Testing and Debugging
    Abstract: Regression testing is an inevitable and very costly activity that is performed to ensure the validity of a new version of software in a time- and resource-constrained environment. Execution of the entire test suite is not possible, so it is necessary to apply techniques like Test Case Selection and Test Case Prioritization for proper selection and scheduling of test cases in a specific sequence, fulfilling some chosen criteria. The Cuckoo Search (CS) algorithm is an optimization algorithm proposed by Yang and Deb [13]. It is inspired by the obligate brood parasitism of some cuckoo species, which lay their eggs in the nests of other host birds. Cuckoo Search is very easy to implement as it depends on a single parameter only, unlike other optimization algorithms. In this paper a test case selection and prioritization algorithm is proposed using Cuckoo Search. This algorithm selects and prioritizes the test cases based on the number of faults covered in minimum time. The proposed approach provides optimal results in minimum time.
    BibTeX:
    @inproceedings{NagarKSK15,
      author = {Reetika Nagar and Arvind Kumar and Gaurav Pratap Singh and Sachin Kumar},
      title = {Test Case Selection and Prioritization using Cuckoos Search Algorithm},
      booktitle = {Proceedings of the 1st International Conference on Futuristic trend in Computational Analysis and Knowledge Management (ABLAZE '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {283-288},
      address = {Noida, India},
      month = {25-27 February},
      doi = {http://dx.doi.org/10.1109/ABLAZE.2015.7155012}
    }
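    Illustrative sketch (Python; fault data, step sizes and the heavy-tailed step are simplified assumptions): a cuckoo-search-style prioritizer can use a random-key encoding, where each nest is a vector of real keys whose sort order yields a test ordering, new solutions come from Levy-like steps, and the worst nests are abandoned.
      import math
      import random

      faults = {'t1': {1, 2}, 't2': {3}, 't3': {2, 4}, 't4': {1, 3, 5}}
      tests = list(faults)

      def order_of(keys):
          return [t for _, t in sorted(zip(keys, tests))]

      def fitness(keys):                # reward early fault detection
          found, score = set(), 0.0
          for pos, t in enumerate(order_of(keys), 1):
              score += len(faults[t] - found) / pos
              found |= faults[t]
          return score

      def levy_step():                  # Cauchy-distributed, heavy-tailed
          return math.tan(math.pi * (random.random() - 0.5))

      nests = [[random.random() for _ in tests] for _ in range(15)]
      for _ in range(200):
          i = random.randrange(len(nests))
          cand = [k + 0.1 * levy_step() for k in nests[i]]
          j = random.randrange(len(nests))
          if fitness(cand) > fitness(nests[j]):
              nests[j] = cand                       # replace a worse nest
          nests.sort(key=fitness, reverse=True)
          nests[-3:] = [[random.random() for _ in tests] for _ in range(3)]
      print(order_of(nests[0]))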
    					
    2015.11.06 Geoffrey Neumann, Mark Harman & Simon Poulding Transformed Vargha-Delaney Effect Size 2015 Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15), pp. 318-324, Bergamo Italy, 5-7 September   Inproceedings
    Abstract: Researchers without a technical background in statistics may be tempted to apply analytical techniques in a ritualistic manner. SBSE research is not immune to this problem. We argue that emerging rituals surrounding the use of the Vargha-Delaney effect size statistic may pose serious threats to the scientific validity of the findings. We believe investigations of effect size are important, but more care is required in the application of this statistic. In this paper, we illustrate the problems that can arise, and give guidelines for avoiding them, by applying a ‘transformed’ Vargha-Delaney effect size measurement. We argue that researchers should always consider which transformation is best suited to their problem domain before calculating the Vargha-Delaney statistic.
    BibTeX:
    @inproceedings{NeumannHP15,
      author = {Geoffrey Neumann and Mark Harman and Simon Poulding},
      title = {Transformed Vargha-Delaney Effect Size},
      booktitle = {Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15)},
      publisher = {Springer},
      year = {2015},
      pages = {318-324},
      address = {Bergamo, Italy},
      month = {5-7 September},
      doi = {http://dx.doi.org/10.1007/978-3-319-22183-0_29}
    }
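    Illustrative sketch (Python; the rounding transformation is just one example of a domain-motivated choice): the Vargha-Delaney A12 statistic is the probability that a random sample from one algorithm beats a random sample from the other, and applying a transformation first collapses differences that are not practically significant.
      def a12(xs, ys):
          # Probability a random x beats a random y (ties count half).
          wins = sum((x > y) + 0.5 * (x == y) for x in xs for y in ys)
          return wins / (len(xs) * len(ys))

      alg_a = [80.01, 80.02, 80.03, 80.04]   # coverage %, trivially higher
      alg_b = [80.00, 80.00, 80.01, 80.02]

      print('raw A12        :', a12(alg_a, alg_b))           # 0.875: 'large'
      t = round                              # collapse negligible deltas
      print('transformed A12:', a12([t(v) for v in alg_a],
                                    [t(v) for v in alg_b]))  # 0.5: no effect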
    					
    2016.02.27 Matias Nicoletti, Silvia Schiaffino & J. Andres Diaz-Pace An Optimization-based Tool to Support the Cost-effective Production of Software Architecture Documentation 2015 Journal of Software: Evolution and Process, Vol. 27(9), pp. 674-699, September   Article
    Abstract: Some of the challenges faced by most software projects are tight budget constraints and schedules, which often make managers and developers prioritize the delivery of a functional product over other engineering activities, such as software documentation. In particular, having little or low-quality documentation of the software architecture of a system can have negative consequences for the project, as the architecture is the main container of the key design decisions to fulfill the stakeholders' goals. To further complicate this situation, generating and maintaining architectural documentation is a non-trivial and time-consuming activity. In this context, we present a tool approach that aims at (i) assisting the documentation writer in their tasks and (ii) ensuring a cost-effective documentation process by means of optimization techniques. Our tool, called SADHelper, follows the principle of producing reader-oriented documentation, in order to focus the available, and often limited, resources on generating just enough documentation that satisfies the stakeholders' concerns. The approach was evaluated in two experiments with users of software architecture documents, with encouraging results. These results show evidence that our tool can be useful to reduce the documentation costs and even improve the documentation quality, as perceived by their stakeholders.
    BibTeX:
    @article{NicolettiSD15,
      author = {Matias Nicoletti and Silvia Schiaffino and J. Andres Diaz-Pace},
      title = {An Optimization-based Tool to Support the Cost-effective Production of Software Architecture Documentation},
      journal = {Journal of Software: Evolution and Process},
      year = {2015},
      volume = {27},
      number = {9},
      pages = {674-699},
      month = {September},
      doi = {http://dx.doi.org/10.1002/smr.1734}
    }
    					
    2015.12.09 Tadahiro Noguchi, Hironori Washizaki, Yoshiaki Fukazawa, Atsutoshi Sato & Kenichiro Ota History-Based Test Case Prioritization for Black Box Testing using Ant Colony Optimization 2015 Proceedings of the 8th International Conference on Software Testing, Verification and Validation (ICST '15), pp. 1-2, Graz Austria, 13-17 April   Inproceedings Testing and Debugging
    Abstract: Test case prioritization is a technique to improve software testing. Although a lot of work has investigated test case prioritization, most of it focuses on white box testing or regression testing. However, software testing is often outsourced to a software testing company, where testers are rarely able to access the source code due to contractual constraints. Herein a framework is proposed to prioritize test cases for black box testing of a new product using the test execution history collected from a similar prior product and Ant Colony Optimization. A simulation using two actual products shows the effectiveness and practicality of our proposed framework.
    BibTeX:
    @inproceedings{NoguchiWFSO15,
      author = {Tadahiro Noguchi and Hironori Washizaki and Yoshiaki Fukazawa and Atsutoshi Sato and Kenichiro Ota},
      title = {History-Based Test Case Prioritization for Black Box Testing using Ant Colony Optimization},
      booktitle = {Proceedings of the 8th International Conference on Software Testing, Verification and Validation (ICST '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {1-2},
      address = {Graz, Austria},
      month = {13-17 April},
      doi = {http://dx.doi.org/10.1109/ICST.2015.7102622}
    }
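    Illustrative sketch (Python; history values, parameters and the quality measure are invented): in an ant-colony-style prioritizer, tests that failed often on the similar prior product attract pheromone, ants build orderings biased by it, and good orderings reinforce the trail.
      import random

      fail_rate = {'t1': 0.6, 't2': 0.1, 't3': 0.4, 't4': 0.05}  # history
      tests = list(fail_rate)
      pheromone = {t: 1.0 for t in tests}

      def pick(remaining):
          cands = sorted(remaining)
          weights = [pheromone[t] * (0.1 + fail_rate[t]) for t in cands]
          return random.choices(cands, weights=weights)[0]

      def build_order():
          remaining, order = set(tests), []
          while remaining:
              t = pick(remaining)
              order.append(t)
              remaining.remove(t)
          return order

      def quality(order):   # earlier historically-failing tests are better
          return sum(fail_rate[t] / pos for pos, t in enumerate(order, 1))

      for _ in range(50):
          best = max((build_order() for _ in range(10)), key=quality)
          for t in pheromone:
              pheromone[t] *= 0.9                   # evaporation
          for pos, t in enumerate(best, 1):
              pheromone[t] += 1.0 / pos             # deposit, earlier = more
      print(max((build_order() for _ in range(10)), key=quality))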
    					
    2014.05.30 Ali Ouni, Marouane Kessentini, Slim Bechikh & Houari Sahraoui Prioritizing Code-smells Correction Tasks using Chemical Reaction Optimization 2015 Software Quality Journal, Vol. 23(2), pp. 323-361, June   Article Distribution and Maintenance
    Abstract: The presence of code-smells increases significantly the cost of maintenance of systems and makes them difficult to change and evolve. To remove code-smells, refactoring operations are used to improve the design of a system by changing its internal structure without altering the external behavior. In large-scale systems, the number of code-smells to fix can be very large and not all of them can be fixed automatically. Thus, prioritization of the list of code-smells is required based on different criteria such as the risk and importance of classes. However, most of the existing refactoring approaches treat the code-smells to fix with the same importance. In this paper, we propose an approach based on a chemical reaction optimization metaheuristic search to find suitable refactoring solutions (i.e., sequences of refactoring operations) that maximize the number of the riskiest code-smells fixed, according to the maintainer’s preferences/criteria. We evaluate our approach on five medium- and large-sized open-source systems and seven types of code-smells. Our experimental results show the effectiveness of our approach compared to existing approaches and three other metaheuristic searches.
    BibTeX:
    @article{OuniKBS15,
      author = {Ali Ouni and Marouane Kessentini and Slim Bechikh and Houari Sahraoui},
      title = {Prioritizing Code-smells Correction Tasks using Chemical Reaction Optimization},
      journal = {Software Quality Journal},
      year = {2015},
      volume = {23},
      number = {2},
      pages = {323-361},
      month = {June},
      doi = {http://dx.doi.org/10.1007/s11219-014-9233-7}
    }
    					
    2015.08.07 Ali Ouni, Raula Gaikovina Kula, Marouane Kessentini & Katsuro Inoue Web Service Antipatterns Detection Using Genetic Programming 2015 Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15), pp. 1351-1358, Madrid Spain, 11-15 July   Inproceedings
    Abstract: Service-Oriented Architecture (SOA) is an emerging paradigm that has radically changed the way software applications are architected, designed and implemented. SOA allows developers to structure their systems as a set of ready-made, reusable and composable services. The leading technology used today for implementing SOA is Web Services. Indeed, like all software, Web services are prone to change constantly to add new user requirements or to adapt to environment changes. Poorly planned changes may risk introducing antipatterns into the system. Consequently, this may ultimately lead to a degradation of software quality, evident in poor quality of service (QoS). In this paper, we introduce an automated approach to detect Web service antipatterns using genetic programming. Our approach consists of using knowledge from real-world examples of Web service antipatterns to generate detection rules based on combinations of metrics and threshold values. We evaluate our approach on a benchmark of 310 Web services and a variety of five types of Web service antipatterns. The statistical analysis of the obtained results provides evidence that our approach is efficient in detecting most of the existing antipatterns, with a score of 85% precision and 87% recall.
    BibTeX:
    @inproceedings{OuniKKI15,
      author = {Ali Ouni and Raula Gaikovina Kula and Marouane Kessentini and Katsuro Inoue},
      title = {Web Service Antipatterns Detection Using Genetic Programming},
      booktitle = {Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15)},
      publisher = {ACM},
      year = {2015},
      pages = {1351-1358},
      address = {Madrid, Spain},
      month = {11-15 July},
      doi = {http://dx.doi.org/10.1145/2739480.2754724}
    }
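
The detection rules evolved in the entry above combine service metrics with threshold values. The sketch below shows one plausible rule representation a genetic program could search over; the metric names and thresholds are assumptions for illustration only.

def make_rule(conditions):
    # A detection rule: a conjunction of (metric, threshold) tests.
    def rule(metrics):
        return all(metrics.get(name, 0) > threshold for name, threshold in conditions)
    return rule

# Hypothetical rule for a "god object" Web service: many operations, high coupling.
god_object_ws = make_rule([("num_operations", 30), ("coupling", 0.8)])
print(god_object_ws({"num_operations": 42, "coupling": 0.9}))   # True -> flagged
print(god_object_ws({"num_operations": 12, "coupling": 0.9}))   # False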
    					
    2015.12.08 Ankur Pachauri & Gursaran Srivastava Towards A Parallel Approach for Test Data Generation for Branch Coverage with Genetic Algorithm using the Extended Path Prefix Strategy 2015 Proceedings of the 2nd International Conference on Computing for Sustainable Global Development (INDIACom '15), pp. 1786-1792, New Delhi India, 11-13 March   Inproceedings
    Abstract: In this paper we propose an approach to test data generation for branch coverage with a structured genetic algorithm (GA) using the extended path prefix strategy. The structured GA implements a parallel master-slave distributed model in which each slave implements an elitist panmictic GA. Branches to be covered are selected by the master using the extended path prefix strategy and then dispatched to slaves. The slaves then search for test data to cover the assigned target branch. The extended path prefix strategy ensures that each time a branch is selected for coverage, its sibling branch is already covered and individuals are available that traverse the sibling. The strategy also permits a variable number of slaves to be used, which can help speed up the test data generation process. Experiments on two programs with real inputs indicate that significant improvements are achieved over a simple panmictic GA in terms of the number of generations and the coverage achieved.
    BibTeX:
    @inproceedings{PachauriS15,
      author = {Ankur Pachauri and Gursaran Srivastava},
      title = {Towards A Parallel Approach for Test Data Generation for Branch Coverage with Genetic Algorithm using the Extended Path Prefix Strategy},
      booktitle = {Proceedings of the 2nd International Conference on Computing for Sustainable Global Development (INDIACom '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {1786-1792},
      address = {New Delhi, India},
      month = {11-13 March},
      url = {http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7100554}
    }
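
A minimal sketch of the master-slave structure described above, under assumptions: the master picks target branches whose sibling branch is already covered (the extended path prefix idea) and dispatches each to a slave, while the slave GA itself is stubbed out as a placeholder.

from concurrent.futures import ThreadPoolExecutor

def slave_search(branch):
    # placeholder for an elitist panmictic GA hunting an input covering `branch`
    return branch, f"test-input-covering-{branch}"

covered = {"b1"}                         # branches already covered
sibling = {"b2": "b1", "b3": "b1"}       # target branch -> its sibling branch
targets = [b for b, s in sibling.items() if s in covered and b not in covered]

with ThreadPoolExecutor(max_workers=2) as pool:   # one worker per slave
    for branch, test in pool.map(slave_search, targets):
        covered.add(branch)
        print(branch, "covered by", test)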
    					
    2015.12.08 Ankur Pachauri, Gursaran Srivastava & Gaurav Mishra A Path and Branch based Approach to Fitness Computation for Program Test Data Generation using Genetic Algorithm 2015 Proceedings of the 1st International Conference on Futuristic Trends on Computational Analysis and Knowledge Management (ABLAZE '15), pp. 49-55, Noida India, 25-27 February   Inproceedings
    Abstract: In this paper we present a novel approach for fitness computation for test data generation using genetic algorithm. Fitness computation is a two-step process. In the first step a target node sequence is determined and in the second step the actual execution path is compared with the target node sequence to compute fitness. Fitness computation uses both branch and path information. Experiments indicate that the described fitness technique results in significant improvement in search performance.
    BibTeX:
    @inproceedings{PachauriSM15,
      author = {Ankur Pachauri and Gursaran Srivastava and Gaurav Mishra},
      title = {A Path and Branch based Approach to Fitness Computation for Program Test Data Generation using Genetic Algorithm},
      booktitle = {Proceedings of the 1st International Conference on Futuristic Trends on Computational Analysis and Knowledge Management (ABLAZE '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {49-55},
      address = {Noida, India},
      month = {25-27 February},
      doi = {http://dx.doi.org/10.1109/ABLAZE.2015.7154969}
    }
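
The two-step fitness described above compares the actual execution path against a target node sequence. One simple way to realise such a comparison is prefix matching, sketched below; the paper's exact formula may differ.

def path_fitness(executed, target):
    # Fraction of the target node sequence matched as a prefix by the execution.
    matched = 0
    for actual, wanted in zip(executed, target):
        if actual != wanted:
            break
        matched += 1
    return matched / len(target)

print(path_fitness(["n1", "n2", "n5"], ["n1", "n2", "n3", "n4"]))  # 0.5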
    					
    2015.11.06 Matheus Paixao, Mark Harman & Yuanyuan Zhang Multi-objective Module Clustering for Kate 2015 Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15), pp. 282-288, Bergamo Italy, 5-7 September   Inproceedings
    Abstract: This paper applies multi-objective search based software remodularization to the program Kate, showing how this can improve cohesion and coupling, and investigating differences between weighted and unweighted approaches and between equal-size and maximising clusters approaches. We also investigate the effects of considering omnipresent modules. Overall, we provide evidence that search based modularization can benefit Kate developers.
    BibTeX:
    @inproceedings{PaixaoHZ15,
      author = {Matheus Paixao and Mark Harman and Yuanyuan Zhang},
      title = {Multi-objective Module Clustering for Kate},
      booktitle = {Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15)},
      publisher = {Springer},
      year = {2015},
      pages = {282-288},
      address = {Bergamo, Italy},
      month = {5-7 September},
      doi = {http://dx.doi.org/10.1007/978-3-319-22183-0_24}
    }
    					
    2015.11.06 Matheus Paixao, Mark Harman & Yuanyuan Zhang Improving the Module Clustering of a C/C++ Editor using a Multi-objective Genetic Algorithm 2015 (RN/15/02), May   Techreport
    Abstract: This Technical Report applies multi-objective search based software remodularization to a C/C++ editor called Kate, showing how this can improve cohesion and coupling, and investigating differences between weighted and unweighted approaches and between equal-size and maximising clusters approaches. We also investigate the effects of considering omnipresent modules. Overall, we provide evidence that search based modularization can benefit Kate developers.
    BibTeX:
    @techreport{PaixaoHZ15b,
      author = {Matheus Paixao and Mark Harman and Yuanyuan Zhang},
      title = {Improving the Module Clustering of a C/C++ Editor using a Multi-objective Genetic Algorithm},
      year = {2015},
      number = {RN/15/02},
      month = {May},
      url = {http://www.cs.ucl.ac.uk/fileadmin/UCL-CS/research/Research_Notes/rn-15-02-matheus_paixao.pdf}
    }
    					
    2016.02.27 Matheus Henrique Esteves Paixão & Jerffeson Souza A Robust Optimization Approach to the Next Release Problem in the Presence of Uncertainties 2015 Journal of Systems and Software, Vol. 103, pp. 281-295, May   Article
    Abstract: The next release problem is a significant task in the iterative and incremental software development model, involving the selection of a set of requirements to be included in the next software release. Given the dynamic environment in which modern software development occurs, the uncertainties related to the input variables of this problem should be taken into account. In this context, this paper presents a formulation of the next release problem under the robust optimization framework, which enables the production of robust solutions. In order to measure the “price of robustness”, which is the loss in solution quality due to robustness, a large empirical evaluation was performed over synthetic and real-world instances. Several next release planning situations were considered, including different numbers of requirements, estimation skills and interdependencies between requirements. All empirical results consistently show that the penalty with regard to solution quality is relatively small. In addition, the proposed model's behavior is statistically the same across all considered instances, which qualifies it for application even in large-scale real-world software projects.
    BibTeX:
    @article{PaixaoS15,
      author = {Matheus Henrique Esteves Paixão and Jerffeson Souza},
      title = {A Robust Optimization Approach to the Next Release Problem in the Presence of Uncertainties},
      journal = {Journal of Systems and Software},
      year = {2015},
      volume = {103},
      pages = {281-295},
      month = {May},
      doi = {http://dx.doi.org/10.1016/j.jss.2014.09.039}
    }
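
To make the abstract's "robust optimization framework" concrete, here is a toy next-release selection in which a plan must fit the budget even if the Gamma largest cost deviations are realised (a Bertsimas-Sim-style protection; the paper's exact model may differ). Exhaustive enumeration keeps the sketch short, and the data are invented.

from itertools import combinations

# requirement -> (nominal cost, worst-case cost deviation, value to customers)
reqs = {"r1": (10, 4, 30), "r2": (6, 2, 18), "r3": (8, 5, 22)}
budget, gamma = 20, 1          # protect against the single worst deviation

def robust_cost(subset):
    devs = sorted((reqs[r][1] for r in subset), reverse=True)
    return sum(reqs[r][0] for r in subset) + sum(devs[:gamma])

plans = (s for k in range(len(reqs) + 1) for s in combinations(reqs, k)
         if robust_cost(s) <= budget)
best = max(plans, key=lambda s: sum(reqs[r][2] for r in s))
print(best, sum(reqs[r][2] for r in best))   # ('r1', 'r2') with value 48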
    					
    2016.02.17 Annibale Panichella, Fitsum Meshesha Kifetew & Paolo Tonella Reformulating Branch Coverage as a Many-Objective Optimization Problem 2015 Proceedings of IEEE 8th International Conference on Software Testing, Verification and Validation (ICST '15), pp. 1-10, Graz Austria, 13-17 April   Inproceedings Testing and Debugging
    Abstract: Test data generation has been extensively investigated as a search problem, where the search goal is to maximize the number of covered program elements (e.g., branches). Recently, the whole suite approach, which combines the fitness functions of single branches into an aggregate, test suite-level fitness, has been demonstrated to be superior to the traditional single-branch at a time approach. In this paper, we propose to consider branch coverage directly as a many-objective optimization problem, instead of aggregating multiple objectives into a single value, as in the whole suite approach. Since programs may have hundreds of branches (objectives), traditional many-objective algorithms that are designed for numerical optimization problems with less than 15 objectives are not applicable. Hence, we introduce a novel highly scalable many-objective genetic algorithm, called MOSA (Many-Objective Sorting Algorithm), suitably defined for the many-objective branch coverage problem. Results achieved on 64 Java classes indicate that the proposed many-objective algorithm is significantly more effective and more efficient than the whole suite approach. In particular, effectiveness (coverage) was significantly improved in 66% of the subjects and efficiency (search budget consumed) was improved in 62% of the subjects on which effectiveness remains the same.
    BibTeX:
    @inproceedings{PanichellaKT15,
      author = {Annibale Panichella and Fitsum Meshesha Kifetew and Paolo Tonella},
      title = {Reformulating Branch Coverage as a Many-Objective Optimization Problem},
      booktitle = {Proceedings of IEEE 8th International Conference on Software Testing, Verification and Validation (ICST '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {1-10},
      address = {Graz, Austria},
      month = {13-17 April},
      doi = {http://dx.doi.org/10.1109/ICST.2015.7102604}
    }
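
In the many-objective formulation above, each branch is one objective (e.g., its branch distance, 0 meaning covered) and test cases are compared by Pareto dominance over those objectives. The sketch below computes only the first non-dominated front; MOSA's preference criterion and archive are omitted, and the distances are invented.

def dominates(a, b):
    # a dominates b if it is no worse on every branch and better on at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# branch distances (one objective per uncovered branch) for three test cases
tests = {"t1": [0.0, 0.7, 0.3], "t2": [0.2, 0.7, 0.9], "t3": [0.1, 0.2, 0.8]}
front = [t for t in tests
         if not any(dominates(tests[u], tests[t]) for u in tests if u != t)]
print(front)   # ['t1', 't3']: the first non-dominated front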
    					
    2016.03.08 Annibale Panichella, Fitsum Meshesha Kifetew & Paolo Tonella Results for EvoSuite - MOSA at the Third Unit Testing Tool Competition 2015 IEEE/ACM 8th International Workshop on Search-Based Software Testing (SBST '15), pp. 28-31, Florence Italy, 18-19 May   Inproceedings Testing and Debugging
    Abstract: EvoSuite-MOSA is a unit test data generation tool that employs a novel many-objective optimization algorithm suitably developed for branch coverage. It was implemented by extending the EvoSuite test data generation tool. In this paper we present the results achieved by EvoSuite-MOSA in the third Unit Testing Tool Competition at SBST'15. Among six participants, EvoSuite-MOSA stood third with an overall score of 189.22.
    BibTeX:
    @inproceedings{PanichellaKT15b,
      author = {Annibale Panichella and Fitsum Meshesha Kifetew and Paolo Tonella},
      title = {Results for EvoSuite - MOSA at the Third Unit Testing Tool Competition},
      booktitle = {IEEE/ACM 8th International Workshop on Search-Based Software Testing (SBST '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {28-31},
      address = {Florence, Italy},
      month = {18-19 May},
      doi = {http://dx.doi.org/10.1109/SBST.2015.14}
    }
    					
    2015.11.06 David Paterson, Jonathan Turner, Thomas White & Gordon Fraser Parameter Control in Search-Based Generation of Unit Test Suites 2015 Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15), pp. 141-156, Bergamo Italy, 5-7 September   Inproceedings Testing and Debugging
    Abstract: Search-based testing supports developers by automatically generating test suites with high coverage, but the effectiveness of a search-based test generator depends on numerous parameters. It is unreasonable to expect developers to understand search algorithms well enough to find the optimal parameter settings for a problem at hand, and even if they did, a static value for a parameter can be suboptimal at any given point during the search. To counter this problem, parameter control methods have been devised to automatically determine and adapt parameter values throughout the search. To investigate whether parameter control methods can also improve search-based generation of test suites, we have implemented and evaluated different methods to control the crossover and mutation rate in the EvoSuite unit test generation tool. Evaluation on a selection of open source Java classes reveals that while parameter control improves the values of mutation and crossover rate successfully during runtime, the positive effects of this improvement are often countered by increased costs of fitness evaluation.
    BibTeX:
    @inproceedings{PatersonTWF15,
      author = {David Paterson and Jonathan Turner and Thomas White and Gordon Fraser},
      title = {Parameter Control in Search-Based Generation of Unit Test Suites},
      booktitle = {Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15)},
      publisher = {Springer},
      year = {2015},
      pages = {141-156},
      address = {Bergamo, Italy},
      month = {5-7 September},
      doi = {http://dx.doi.org/10.1007/978-3-319-22183-0_10}
    }
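
As one concrete instance of the runtime parameter control studied above, the sketch below adapts a mutation rate with a 1/5-success-rule-style update. The paper evaluates several control methods inside EvoSuite; this particular rule and its constants are assumptions for illustration.

def adapt_rate(rate, successes, trials, factor=1.2, lo=0.001, hi=0.5):
    # Raise the rate while mutations often improve fitness, lower it otherwise.
    if trials:
        rate = rate * factor if successes / trials > 0.2 else rate / factor
    return min(hi, max(lo, rate))

rate = 0.05
for successes in [1, 4, 0]:           # improving mutations per window of 10
    rate = adapt_rate(rate, successes, 10)
    print(round(rate, 4))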
    					
    2015.11.06 Matthew Patrick & Yue Jia Exploring the Landscape of Non-Functional Program Properties using Spatial Analysis 2015 Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15), pp. 332-338, Bergamo Italy, 5-7 September   Inproceedings
    Abstract: Deciding on a trade-off between the non-functional properties of a system is challenging, as it is never possible to have complete information about what can be achieved. We may at first assume it is vitally important to minimise the processing requirements of a system, but if it is possible to halve the response time with only a small increase in computational power, would this cause us to change our minds? This lack of clarity makes program optimisation difficult, as it is unclear which non-functional properties to focus on improving. We propose to address this problem by applying spatial analysis techniques used in ecology to characterise and explore the landscape of non-functional properties. We can use these techniques to extract and present key information about the trade-offs that exist between non-functional properties, so that developers have a clearer understanding of the decisions they are making.
    BibTeX:
    @inproceedings{PatrickJ15,
      author = {Matthew Patrick and Yue Jia},
      title = {Exploring the Landscape of Non-Functional Program Properties using Spatial Analysis},
      booktitle = {Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15)},
      publisher = {Springer},
      year = {2015},
      pages = {332-338},
      address = {Bergamo, Italy},
      month = {5-7 September},
      doi = {http://dx.doi.org/10.1007/978-3-319-22183-0_31}
    }
    					
    2015.11.06 Justyna Petke Testing Django Configurations using Combinatorial Interaction Testing 2015 Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15), pp. 242-247, Bergamo Italy, 5-7 September   Inproceedings Testing and Debugging
    Abstract: Combinatorial Interaction Testing (CIT) is important because it tests the interactions between the many parameters that make up the configuration space of software systems. We apply this testing paradigm to a Python-based framework for rapid development of web-based applications called Django. In particular, we automatically create a CIT model for Django website configurations and run a state-of-the-art tool for CIT test suite generation to obtain sets of test configurations. Our automatic CIT-based approach is able to efficiently detect invalid configurations.
    BibTeX:
    @inproceedings{Petke15,
      author = {Justyna Petke},
      title = {Testing Django Configurations using Combinatorial Interaction Testing},
      booktitle = {Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15)},
      publisher = {Springer},
      year = {2015},
      pages = {242-247},
      address = {Bergamo, Italy},
      month = {5-7 September},
      doi = {http://dx.doi.org/10.1007/978-3-319-22183-0_18}
    }
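
The sketch below is a toy greedy generator for a pairwise (2-way) covering array, the core object of CIT: configurations are added until every pair of parameter values co-occurs in some test. The parameter names echo Django settings but are purely illustrative, and production CIT tools are far more sophisticated.

from itertools import combinations, product

params = {"DEBUG": [True, False], "CACHE": ["none", "redis"],
          "DB": ["sqlite", "postgres", "mysql"]}
names = list(params)

def covered_by(cfg):
    # the set of parameter-value pairs a single configuration exercises
    return {((a, cfg[a]), (b, cfg[b])) for a, b in combinations(names, 2)}

pairs = set().union(*(covered_by(dict(zip(names, vals)))
                      for vals in product(*params.values())))
suite = []
while pairs:                       # greedily pick the config covering most pairs
    vals = max(product(*params.values()),
               key=lambda v: len(pairs & covered_by(dict(zip(names, v)))))
    cfg = dict(zip(names, vals))
    suite.append(cfg)
    pairs -= covered_by(cfg)
print(len(suite), "tests for full pairwise coverage")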
    					
    2016.03.08 Justyna Petke Constraints: The Future of Combinatorial Interaction Testing 2015 IEEE/ACM 8th International Workshop on Search-Based Software Testing (SBST '15), pp. 17-18, Florence Italy, 18-19 May   Inproceedings Testing and Debugging
    Abstract: Combinatorial Interaction Testing (CIT) has gained a lot of attention in the area of software engineering in the last few years. CIT problems have their roots in combinatorics. Mathematicians have been concerned with the NP-complete problem of finding minimal covering arrays (in other words, minimal CIT test suites) since the early nineties. With the adoption of these techniques in the area of software testing, an important gap has been identified, namely the consideration of real-world constraints. We show that finding an efficient way of handling constraints during search is indeed the key factor in the wider applicability of CIT techniques.
    BibTeX:
    @inproceedings{Petke15b,
      author = {Justyna Petke},
      title = {Constraints: The Future of Combinatorial Interaction Testing},
      booktitle = {IEEE/ACM 8th International Workshop on Search-Based Software Testing (SBST '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {17-18},
      address = {Florence, Italy},
      month = {18-19 May},
      doi = {http://dx.doi.org/10.1109/SBST.2015.11}
    }
    					
    2015.12.08 Antônio Mauricio Pitangueira, Rita Suzana P. Maciel & Márcio de Oliveira Barros Software Requirements Selection and Prioritization using SBSE Approaches: A Systematic Review and Mapping of the Literature 2015 Journal of Systems and Software, Vol. 103, pp. 267-280, May   Article
    Abstract: The selection and prioritization of software requirements represent an area of interest in Search-Based Software Engineering (SBSE); its main focus is finding and selecting a set of requirements that may be part of a software release. This paper presents a systematic review and mapping that investigated, analyzed, categorized and classified the SBSE approaches proposed to address software requirement selection and prioritization problems, reporting a quantitative and qualitative assessment. Initially, 39 papers were returned by our search strategy in this area, and they were analyzed against 18 previously established quality criteria. The results of this systematic review show which aspects of the requirements selection and prioritization problems have been addressed by researchers, which approaches and search techniques are currently adopted to address these problems, and the strengths and weaknesses in this research area as highlighted by the quality criteria.
    BibTeX:
    @article{PitangueiraMB15,
      author = {Antônio Mauricio Pitangueira and Rita Suzana P. Maciel and Márcio de Oliveira Barros},
      title = {Software Requirements Selection and Prioritization using SBSE Approaches: A Systematic Review and Mapping of the Literature},
      journal = {Journal of Systems and Software},
      year = {2015},
      volume = {103},
      pages = {267-280},
      month = {May},
      doi = {http://dx.doi.org/10.1016/j.jss.2014.09.038}
    }
    					
    2015.08.07 Simon Poulding & Robert Feldt Heuristic Model Checking using a Monte-Carlo Tree Search Algorithm 2015 Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15), pp. 1359-1366, Madrid Spain, 11-15 July   Inproceedings
    Abstract: Monte-Carlo Tree Search algorithms have proven extremely effective at playing games that were once thought to be difficult for AI techniques owing to the very large number of possible game states. The key feature of these algorithms is that rather than exhaustively searching game states, the algorithm navigates the tree using information returned from a relatively small number of random game simulations. A practical limitation of software model checking is the very large number of states that a model can take. Motivated by an analogy between exploring game states and checking model states, we propose that Monte-Carlo Tree Search algorithms might also be applied in this domain to efficiently navigate the model state space with the objective of finding counterexamples which correspond to potential software faults. We describe such an approach based on Nested Monte-Carlo Search---a tree search algorithm applicable to single player games---and compare its efficiency to traditional heuristic search algorithms when using Java PathFinder to locate deadlocks in 12 Java programs.
    BibTeX:
    @inproceedings{PouldingF15,
      author = {Simon Poulding and Robert Feldt},
      title = {Heuristic Model Checking using a Monte-Carlo Tree Search Algorithm},
      booktitle = {Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15)},
      publisher = {ACM},
      year = {2015},
      pages = {1359-1366},
      address = {Madrid, Spain},
      month = {11-15 July},
      doi = {http://dx.doi.org/10.1145/2739480.2754767}
    }
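
A compact sketch of level-1 Nested Monte-Carlo Search, the algorithm the entry above builds on: each legal move is scored by a random rollout and the best-scoring move is played. In the paper a "move" would be a model transition and the score would reward progress toward a counterexample; the toy problem here is a placeholder.

import random

def rollout(state, moves, step, score, done):
    # level-0 search: play random moves to the end and report the score
    while not done(state):
        state = step(state, random.choice(moves(state)))
    return score(state)

def nmcs(state, moves, step, score, done):
    # level-1 NMCS: at each step, play the move whose random rollout scores best
    while not done(state):
        best = max(moves(state),
                   key=lambda m: rollout(step(state, m), moves, step, score, done))
        state = step(state, best)
    return state

# Toy problem: build a 5-bit string maximising the number of 1s.
result = nmcs((), lambda s: [0, 1], lambda s, m: s + (m,), sum,
              lambda s: len(s) == 5)
print(result)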
    					
    2015.12.08 Dipesh Pradhan Test Optimization using Weight Based Search Algorithms in a Maritime Application 2015 School: Department of Informatics, University of Oslo   Mastersthesis
    BibTeX:
    @mastersthesis{Pradhan15,
      author = {Dipesh Pradhan},
      title = {Test Optimization using Weight Based Search Algorithms in a Maritime Application},
      school = {Department of Informatics, University of Oslo},
      year = {2015},
      url = {https://www.duo.uio.no/bitstream/handle/10852/44698/Thesis_DipeshPradhan.pdf?sequence=1&isAllowed=y}
    }
    					
    2016.03.08 I.S.W.B. Prasetya T3: Benchmarking at Third Unit Testing Tool Contest 2015 IEEE/ACM 8th International Workshop on Search-Based Software Testing (SBST '15), pp. 44-47, Florence Italy, 18-19 May   Inproceedings Testing and Debugging
    Abstract: T3 is a lightweight automated unit testing tool for Java. This paper presents the results of benchmarking T3 at the 3rd Java Unit Testing Tool Contest, organized at the 8th International Workshop on Search-Based Software Testing (SBST) in 2015.
    BibTeX:
    @inproceedings{Prasetya15,
      author = {I.S.W.B. Prasetya},
      title = {T3: Benchmarking at Third Unit Testing Tool Contest},
      booktitle = {IEEE/ACM 8th International Workshop on Search-Based Software Testing (SBST '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {44-47},
      address = {Florence, Italy},
      month = {18-19 May},
      doi = {http://dx.doi.org/10.1109/SBST.2015.18}
    }
    					
    2016.02.12 Zichao Qi, Fan Long, Sara Achour & Martin Rinard An Analysis of Patch Plausibility and Correctness for Generate-and-Validate Patch Generation Systems 2015 Proceedings of the 2015 International Symposium on Software Testing and Analysis (ISSTA '15), pp. 24-36, Baltimore USA, 14-17 July   Inproceedings
    Abstract: We analyze reported patches for three existing generate-and-validate patch generation systems (GenProg, RSRepair, and AE). The basic principle behind generate-and-validate systems is to accept only plausible patches that produce correct outputs for all inputs in the validation test suite. Because of errors in the patch evaluation infrastructure, the majority of the reported patches are not plausible — they do not produce correct outputs even for the inputs in the validation test suite. The overwhelming majority of the reported patches are not correct and are equivalent to a single modification that simply deletes functionality. Observed negative effects include the introduction of security vulnerabilities and the elimination of desirable functionality. We also present Kali, a generate-and-validate patch generation system that only deletes functionality. Working with a simpler and more effectively focused search space, Kali generates at least as many correct patches as prior GenProg, RSRepair, and AE systems. Kali also generates at least as many patches that produce correct outputs for the inputs in the validation test suite as the three prior systems. We also discuss the patches produced by ClearView, a generate-and-validate binary hot patching system that leverages learned invariants to produce patches that enable systems to survive otherwise fatal defects and security attacks. Our analysis indicates that ClearView successfully patches 9 of the 10 security vulnerabilities used to evaluate the system. At least 4 of these patches are correct.
    BibTeX:
    @inproceedings{QiLAR15,
      author = {Zichao Qi and Fan Long and Sara Achour and Martin Rinard},
      title = {An Analysis of Patch Plausibility and Correctness for Generate-and-Validate Patch Generation Systems},
      booktitle = {Proceedings of the 2015 International Symposium on Software Testing and Analysis (ISSTA '15)},
      publisher = {ACM},
      year = {2015},
      pages = {24-36},
      address = {Baltimore, USA},
      month = {14-17 July},
      doi = {http://dx.doi.org/10.1145/2771783.2771791}
    }
    					
    2015.02.23 Aurora Ramírez, José Raúl Romero & Sebastián Ventura An Approach for the Evolutionary Discovery of Software Architectures 2015 Information Sciences, Vol. 305, pp. 234-255, June   Article
    Abstract: Software architectures constitute important analysis artefacts in software projects, as they reflect the main functional blocks of the software. They provide high-level analysis artefacts that are useful when architects need to analyse the structure of working systems. Normally, architects perform this process manually, supported by their prior experience. Even so, the task can be very tedious when the actual design is unclear due to continuous uncontrolled modifications. Since the recent appearance of search based software engineering, multiple tasks in the area of software engineering have been formulated as complex search and optimisation problems, where evolutionary computation has found a new area of application. This paper explores the design of an evolutionary algorithm (EA) for the discovery of the underlying architecture of software systems. Important efforts have been directed towards the creation of a generic and human-oriented process. Hence, the selection of a comprehensible encoding, a fitness function inspired by accurate software design metrics, and a genetic operator simulating architectural transformations all represent important characteristics of the proposed approach. Finally, a complete parameter study and experimentation have been performed using real software systems, looking for a generic evolutionary approach to support software engineers in their decision-making process.
    BibTeX:
    @article{RamirezRV15,
      author = {Aurora Ramírez and José Raúl Romero and Sebastián Ventura},
      title = {An Approach for the Evolutionary Discovery of Software Architectures},
      journal = {Information Sciences},
      year = {2015},
      volume = {305},
      pages = {234-255},
      month = {June},
      doi = {http://dx.doi.org/10.1016/j.ins.2015.01.017}
    }
    					
    2015.12.08 José Carlos Bregieiro Ribeiro, Ana Filipa Nogueira, Francisco Fernandéz de Vega & Mário Alberto Zenha-Rela eCrash: a Genetic Programming-Based Testing Tool for Object-Oriented Software 2015 , pp. 575-593   Inbook Testing and Debugging
    Abstract: This paper describes the methodology, architecture and features of the eCrash framework, a Java-based tool which employs Strongly-Typed Genetic Programming to automate the generation of test data for the structural unit testing of Object-Oriented programs. The application of Evolutionary Algorithms to Test Data generation is often referred to as Evolutionary Testing. eCrash implements an Evolutionary Testing strategy developed with three major purposes: improving the level of performance and automation of the Software Testing process; reducing the interference of the tool’s users in the Test Object analysis to a minimum; and mitigating the impact of users’ decisions in the Test Data generation process.
    BibTeX:
    @inbook{RibeiroNVZ15,
      author = {José Carlos Bregieiro Ribeiro and Ana Filipa Nogueira and Francisco Fernandéz de Vega and Mário Alberto Zenha-Rela},
      title = {eCrash: a Genetic Programming-Based Testing Tool for Object-Oriented Software},
      publisher = {Springer},
      year = {2015},
      pages = {575-593},
      doi = {http://dx.doi.org/10.1007/978-3-319-20883-1_23}
    }
    					
    2015.11.06 José Miguel Rojas, José Campos, Mattia Vivanti, Gordon Fraser & Andrea Arcuri Combining Multiple Coverage Criteria in Search-Based Unit Test Generation 2015 Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15), pp. 93-108, Bergamo Italy, 5-7 September   Inproceedings Testing and Debugging
    Abstract: Automated test generation techniques typically aim at maximising coverage of well-established structural criteria such as statement or branch coverage. In practice, generating tests only for one specific criterion may not be sufficient when testing object-oriented classes, as standard structural coverage criteria do not fully capture the properties developers may desire of their unit test suites. For example, covering a large number of statements could be easily achieved by just calling the main method of a class; yet, a good unit test suite would consist of smaller unit tests invoking individual methods, and checking return values and states with test assertions. There are several different properties that test suites should exhibit, and a search-based test generator could easily be extended with additional fitness functions to capture these properties. However, does search-based testing scale to combinations of multiple criteria, and what is the effect on the size and coverage of the resulting test suites? To answer these questions, we extended the EvoSuite unit test generation tool to support combinations of multiple test criteria, defined and implemented several different criteria, and applied combinations of criteria to a sample of 650 open source Java classes. Our experiments suggest that optimising for several criteria at the same time is feasible without increasing computational costs: when combining nine different criteria, we observed an average decrease of only 0.4% for the constituent coverage criteria, while the test suites may grow by up to 70%.
    BibTeX:
    @inproceedings{RojasCVFA15,
      author = {José Miguel Rojas and José Campos and Mattia Vivanti and Gordon Fraser and Andrea Arcuri},
      title = {Combining Multiple Coverage Criteria in Search-Based Unit Test Generation},
      booktitle = {Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15)},
      publisher = {Springer},
      year = {2015},
      pages = {93-108},
      address = {Bergamo, Italy},
      month = {5-7 September},
      doi = {http://dx.doi.org/10.1007/978-3-319-22183-0_7}
    }
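
One way to picture the combination of criteria studied above: each criterion contributes a normalised fitness in [0, 1] (0 meaning fully satisfied) and the search minimises their sum. The criterion functions below are stand-ins, not EvoSuite's implementations.

def combined_fitness(suite, criteria):
    # Sum of per-criterion fitness values, each normalised into [0, 1].
    return sum(f(suite) for f in criteria)

branch = lambda s: 1 - s["branches_covered"] / s["branches_total"]
method = lambda s: 1 - s["methods_covered"] / s["methods_total"]
suite = {"branches_covered": 8, "branches_total": 10,
         "methods_covered": 4, "methods_total": 4}
print(combined_fitness(suite, [branch, method]))   # 0.2: only branches remain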
    					
    2015.11.06 Emil Rubinić, Goran Mauša & Tihana Galinac Grbac Software Defect Classification with a Variant of NSGA-II and Simple Voting Strategies 2015 Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15), pp. 347-353, Bergamo Italy, 5-7 September   Inproceedings
    Abstract: Software Defect Prediction is based on datasets that are imbalanced and therefore limit the use of machine learning based classification. Ensembles of genetic classifiers show good performance and provide a promising solution to this problem. To further examine this solution, we performed additional experiments in that direction. In this paper we report preliminary results obtained by using a Matlab variant of NSGA-II in combination with four simple voting strategies on three subsequent releases of the Eclipse Plug-in Development Environment (PDE) project. Preliminary results indicate that the voting procedure might influence software defect prediction performance.
    BibTeX:
    @inproceedings{RubinicMG15,
      author = {Emil Rubinić and Goran Mauša and Tihana Galinac Grbac},
      title = {Software Defect Classification with a Variant of NSGA-II and Simple Voting Strategies},
      booktitle = {Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15)},
      publisher = {Springer},
      year = {2015},
      pages = {347-353},
      address = {Bergamo, Italy},
      month = {5-7 September},
      doi = {http://dx.doi.org/10.1007/978-3-319-22183-0_33}
    }
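
The voting step experimented with above can be as simple as the sketch below: each evolved classifier votes defect-prone or clean for a module and an aggregation rule decides. Majority voting is shown; the paper compares four strategies, not necessarily this one.

def majority_vote(votes):
    # votes: one 0/1 prediction per classifier; ties count as defect-prone
    return int(sum(votes) * 2 >= len(votes))

ensemble_votes = [1, 0, 1]            # three genetic classifiers on one module
print(majority_vote(ensemble_votes))  # 1 -> predicted defect-prone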
    					
    2016.03.08 Urko Rueda, Tanja E.J. Vos & I.S.W.B. Prasetya Unit Testing Tool Competition - Round Three 2015 IEEE/ACM 8th International Workshop on Search-Based Software Testing (SBST '15), pp. 19-24, Florence Italy, 18-19 May   Inproceedings Testing and Debugging
    Abstract: This paper describes the third round of the Java Unit Testing Tool Competition. This edition of the contest evaluates no fewer than seven automated testing tools. As in the second round, test suites written by human testers are also used for comparison. This paper contains the full results of the evaluation.
    BibTeX:
    @inproceedings{RuedaVP15,
      author = {Urko Rueda and Tanja E. J. Vos and I.S.W.B. Prasetya},
      title = {Unit Testing Tool Competition - Round Three},
      booktitle = {IEEE/ACM 8th International Workshop on Search-Based Software Testing (SBST '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {19-24},
      address = {Florence, Italy},
      month = {18-19 May},
      doi = {http://dx.doi.org/10.1109/SBST.2015.12}
    }
    					
    2014.11.25 Abdelilah Sakti, Gilles Pesant & Yann-Gaël Guéhéneuc Instance Generator and Problem Representation to Improve Object Oriented Code Coverage 2015 IEEE Transactions on Software Engineering, Vol. 41(3), pp. 294-313, March   Article Testing and Debugging
    Abstract: Search-based approaches have been extensively applied to solve the problem of software test-data generation. Yet, test-data generation for object-oriented programming (OOP) is challenging due to the features of OOP, e.g., abstraction, encapsulation, and visibility that prevent direct access to some parts of the source code. To address this problem we present a new automated search-based software test-data generation approach that achieves high code coverage for unit-class testing. We first describe how we structure the test-data generation problem for unit-class testing to generate relevant sequences of method calls. Through a static analysis, we consider only methods or constructors changing the state of the class-under-test or that may reach a test target. Then we introduce a generator of instances of classes that is based on a family of means-of-instantiation including subclasses and external factory methods. It also uses a seeding strategy and a diversification strategy to increase the likelihood to reach a test target. Using a search heuristic to reach all test targets at the same time, we implement our approach in a tool, JTExpert, that we evaluate on more than a hundred Java classes from different open-source libraries. JTExpert gives better results in terms of search time and code coverage than the state of the art, EvoSuite, which uses traditional techniques.
    BibTeX:
    @article{SaktiPG15,
      author = {Abdelilah Sakti and Gilles Pesant and Yann-Gaël Guéhéneuc},
      title = {Instance Generator and Problem Representation to Improve Object Oriented Code Coverage},
      journal = {IEEE Transactions on Software Engineering},
      year = {2015},
      volume = {41},
      number = {3},
      pages = {294-313},
      month = {March},
      doi = {http://dx.doi.org/10.1109/TSE.2014.2363479}
    }
    					
    2016.03.08 Abdelilah Sakti, Gilles Pesant & Yann-Gaël Guéhéneuc JTExpert at the Third Unit Testing Tool Competition 2015 IEEE/ACM 8th International Workshop on Search-Based Software Testing (SBST '15), pp. 52-55, Florence Italy, 18-19 May   Inproceedings Testing and Debugging
    Abstract: JTExpert is a software testing tool that automatically generates a whole test suite to satisfy the branch-coverage criterion on a given Java source code. It takes as inputs a Java source code and its dependencies and automatically produces a test-case suite in JUnit format. In this paper, we summarize our results for the Unit Testing Tool Competition held at the third SBST Contest, where JTExpert received 159.16 points and was ranked sixth of seven participating tools. We discuss the internal and external reasons behind the relatively poor score and ranking.
    BibTeX:
    @inproceedings{SaktiPG15b,
      author = {Abdelilah Sakti and Gilles Pesant and Yann-Gaël Guéhéneuc},
      title = {JTExpert at the Third Unit Testing Tool Competition},
      booktitle = {IEEE/ACM 8th International Workshop on Search-Based Software Testing (SBST '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {52-55},
      address = {Florence, Italy},
      month = {18-19 May},
      doi = {http://dx.doi.org/10.1109/SBST.2015.20}
    }
    					
    2016.03.08 Kevin Salvesen, Juan P. Galeotti, Florian Gross, Gordon Fraser & Andreas Zeller Using Dynamic Symbolic Execution to Generate Inputs in Search-Based GUI Testing 2015 IEEE/ACM 8th International Workshop on Search-Based Software Testing (SBST '15), pp. 32-35, Florence Italy, 18-19 May   Inproceedings Testing and Debugging
    Abstract: Search-based testing has been successfully applied to generate complex sequences of events for graphical user interfaces (GUIs), but typically relies on simple heuristics or random values for data widgets like text boxes. This may greatly reduce the effectiveness of test generation for applications which expect specific input values to be entered in their GUI by users. Generating such specific input values is one of the virtues of dynamic symbolic execution (DSE), but DSE is less suitable to generate sequences of events. Therefore, this paper describes a hybrid approach that uses search-based testing to generate sequences of events, and DSE to build input data for text boxes. This is achieved by replacing standard widgets in a system under test with symbolic ones, allowing us to execute GUIs symbolically. In this paper, we demonstrate an extension of the search-based GUI testing tool EXSYST, which uses DSE to successfully increase the obtained code coverage on two case study applications.
    BibTeX:
    @inproceedings{SalvesenGGFZ15,
      author = {Kevin Salvesen and Juan P. Galeotti and Florian Gross and Gordon Fraser and Andreas Zeller},
      title = {Using Dynamic Symbolic Execution to Generate Inputs in Search-Based GUI Testing},
      booktitle = {IEEE/ACM 8th International Workshop on Search-Based Software Testing (SBST '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {32-35},
      address = {Florence, Italy},
      month = {18-19 May},
      doi = {http://dx.doi.org/10.1109/SBST.2015.15}
    }
    					
    2015.08.07 Eric Schulte, Westley Weimer & Stephanie Forrest Repairing COTS Router Firmware without Access to Source Code or Test Suites: A Case Study in Evolutionary Software Repair 2015 Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15) - the 1st International Genetic Improvement Workshop (GI '15), pp. 847-854, Madrid Spain, 12-12 July   Inproceedings
    Abstract: The speed with which newly discovered software vulnerabilities are patched is a critical factor in mitigating the harm caused by subsequent exploits. Unfortunately, software vendors are often slow or unwilling to patch vulnerabilities, especially in embedded systems which frequently have no mechanism for updating factory-installed firmware. The situation is particularly dire for commercial off the shelf (COTS) software users, who lack source code and are wholly dependent on patches released by the vendor. We propose a solution in which the vulnerabilities drive an automated evolutionary computation repair process capable of directly patching embedded systems firmware. Our approach does not require access to source code, regression tests, or any participation from the software vendor. Instead, we present an interactive evolutionary algorithm that searches for patches that resolve target vulnerabilities while relying heavily on post-evolution difference minimization to remove most regressions. Extensions to prior work in evolutionary program repair include: repairing vulnerabilities in COTS router firmware; handling stripped MIPS executables; operating without fault localization information; operating without a regression test suite; and incorporating user interaction into the evolutionary repair process. We demonstrate this method by repairing two well-known vulnerabilities in version 4 of NETGEAR's WNDR3700 wireless router before NETGEAR released patches publicly for the vulnerabilities. Without fault localization we are able to find repair edits that are not located on execution traces. Without the advantage of regression tests to guide the search, we find that 80% of repairs of the example vulnerabilities retain program functionality after minimization. With minimal user interaction to demonstrate required functionality, 100% of the proposed repairs were able to address the vulnerabilities while retaining required functionality.
    BibTeX:
    @inproceedings{SchulteWF15,
      author = {Eric Schulte and Westley Weimer and Stephanie Forrest},
      title = {Repairing COTS Router Firmware without Access to Source Code or Test Suites: A Case Study in Evolutionary Software Repair},
      booktitle = {Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15) - the 1st International Genetic Improvement Workshop (GI '15)},
      publisher = {ACM},
      year = {2015},
      pages = {847-854},
      address = {Madrid, Spain},
      month = {12-12 July},
      doi = {http://dx.doi.org/10.1145/2739482.2768427}
    }
    					
    2016.03.09 Sina Shamshiri, René Just, José Miguel Rojas, Gordon Fraser, Phil McMinn & Andrea Arcuri Do Automatically Generated Unit Tests Find Real Faults? An Empirical Study of Effectiveness and Challenges 2015 Proceedings of the 30th IEEE/ACM International Conference on Automated Software Engineering (ASE '15), pp. 201-211, Lincoln USA, 9-13 November   Inproceedings Testing and Debugging
    Abstract: Rather than tediously writing unit tests manually, tools can be used to generate them automatically - sometimes even resulting in higher code coverage than manual testing. But how good are these tests at actually finding faults? To answer this question, we applied three state-of-the-art unit test generation tools for Java (Randoop, EvoSuite, and Agitar) to the 357 real faults in the Defects4J dataset and investigated how well the generated test suites perform at detecting these faults. Although the automatically generated test suites detected 55.7% of the faults overall, only 19.9% of all the individual test suites detected a fault. By studying the effectiveness and problems of the individual tools and the tests they generate, we derive insights to support the development of automated unit test generators that achieve a higher fault detection rate. These insights include 1) improving the obtained code coverage so that faulty statements are executed in the first instance, 2) improving the propagation of faulty program states to an observable output, coupled with the generation of more sensitive assertions, and 3) improving the simulation of the execution environment to detect faults that are dependent on external factors such as date and time.
    BibTeX:
    @inproceedings{ShamshiriJRFMA15,
      author = {Sina Shamshiri and René Just and José Miguel Rojas and Gordon Fraser and Phil McMinn and Andrea Arcuri},
      title = {Do Automatically Generated Unit Tests Find Real Faults? An Empirical Study of Effectiveness and Challenges},
      booktitle = {Proceedings of the 30th IEEE/ACM International Conference on Automated Software Engineering (ASE '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {201-211},
      address = {Lincoln, USA},
      month = {9-13 November},
      doi = {http://dx.doi.org/10.1109/ASE.2015.86}
    }
    					
    2015.08.07 Sina Shamshiri, José Miguel Rojas, Gordon Fraser & Phil McMinn Random or Genetic Algorithm Search for Object-Oriented Test Suite Generation? 2015 Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15), pp. 1367-1374, Madrid Spain, 11-15 July   Inproceedings Testing and Debugging
    Abstract: Achieving high structural coverage is an important aim in software testing. Several search-based techniques have proved successful at automatically generating tests that achieve high coverage. However, despite the well-established arguments behind using evolutionary search algorithms (e.g., genetic algorithms) in preference to random search, it remains an open question whether the benefits can actually be observed in practice when generating unit test suites for object-oriented classes. In this paper, we report an empirical study on the effects of using a genetic algorithm (GA) to generate test suites over generating test suites incrementally with random search, by applying the EvoSuite unit test suite generator to 1,000 classes randomly selected from the SF110 corpus of open source projects. Surprisingly, the results show little difference between the coverage achieved by test suites generated with evolutionary search compared to those generated using random search. A detailed analysis reveals that the genetic algorithm covers more branches of the type where standard fitness functions provide guidance. In practice, however, we observed that the vast majority of branches in the analyzed projects provide no such guidance.
    BibTeX:
    @inproceedings{ShamshiriRFM15,
      author = {Sina Shamshiri and José Miguel Rojas and Gordon Fraser and Phil McMinn},
      title = {Random or Genetic Algorithm Search for Object-Oriented Test Suite Generation?},
      booktitle = {Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15)},
      publisher = {ACM},
      year = {2015},
      pages = {1367-1374},
      address = {Madrid, Spain},
      month = {11-15 July},
      doi = {http://dx.doi.org/10.1145/2739480.2754696}
    }
    					
    2016.04.29 Du Shen, Qi Luo, Denys Poshyvanyk & Mark Grechanik Automating Performance Bottleneck Detection using Search-Based Application Profiling 2015 Proceedings of the International Symposium on Software Testing and Analysis (ISSTA '15), pp. 270-281, Baltimore USA, 14-17 July   Inproceedings
    Abstract: Application profiling is an important performance analysis technique, where an application under test is analyzed dynamically to determine its space and time complexities and the usage of its instructions. A big and important challenge is to profile nontrivial web applications with large numbers of combinations of their input parameter values. Identifying and understanding particular subsets of inputs leading to performance bottlenecks is a mostly manual, intellectually intensive and laborious procedure. We propose a novel approach for automating performance bottleneck detection using search-based input-sensitive application profiling. Our key idea is to use a genetic algorithm as a search heuristic for obtaining combinations of input parameter values that maximize a fitness function representing the elapsed execution time of the application. We implemented our approach, coined Genetic Algorithm-driven Profiler (GA-Prof), which combines a search-based heuristic with contrast data mining of execution traces to accurately determine performance bottlenecks. We evaluated GA-Prof to determine how effectively and efficiently it can detect performance bottlenecks injected into three popular open source web applications. Our results demonstrate that GA-Prof efficiently explores a large space of input value combinations while automatically and accurately detecting performance bottlenecks, thus suggesting that it is effective for automatic profiling.
    BibTeX:
    @inproceedings{ShenLPG15,
      author = {Du Shen and Qi Luo and Denys Poshyvanyk and Mark Grechanik},
      title = {Automating Performance Bottleneck Detection using Search-Based Application Profiling},
      booktitle = {Proceedings of the International Symposium on Software Testing and Analysis (ISSTA '15)},
      publisher = {ACM},
      year = {2015},
      pages = {270-281},
      address = {Baltimore, USA},
      month = {14-17 July},
      doi = {http://dx.doi.org/10.1145/2771783.2771816}
    }
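
To illustrate the central idea of the abstract above, the sketch below runs a tiny GA whose fitness for an input is its measured elapsed execution time, steering the search toward bottleneck-triggering inputs. The profiled "application" and the GA parameters are toy stand-ins, not GA-Prof.

import random, time

def app(n):                                  # placeholder application under test
    start = time.perf_counter()
    sum(i * i for i in range(n))
    return time.perf_counter() - start       # fitness = elapsed time

def evolve(pop_size=8, generations=5, lo=1, hi=100_000):
    pop = [random.randint(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=app, reverse=True)      # slower inputs are fitter
        elite = pop[: pop_size // 2]
        pop = elite + [min(hi, max(lo, p + random.randint(-1000, 1000)))
                       for p in elite]       # refill by mutating the elite
    return max(pop, key=app)

print(evolve())                              # an input that maximises elapsed time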
    					
    2016.02.12 Stelios Sidiroglou-Douskos, Eric Lahtinen, Fan Long & Martin Rinard Automatic Error Elimination by Horizontal Code Transfer across Multiple Applications 2015 Proceedings of the 36th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI '15), pp. 43-54, Portland USA, 13-17 June   Inproceedings
    Abstract: We present Code Phage (CP), a system for automatically transferring correct code from donor applications into recipient applications that process the same inputs to successfully eliminate errors in the recipient. Experimental results using seven donor applications to eliminate ten errors in seven recipient applications highlight the ability of CP to transfer code across applications to eliminate out of bounds access, integer overflow, and divide by zero errors. Because CP works with binary donors with no need for source code or symbolic information, it supports a wide range of use cases. To the best of our knowledge, CP is the first system to automatically transfer code across multiple applications.
    BibTeX:
    @inproceedings{Sidiroglou-DouskosLLR15,
      author = {Stelios Sidiroglou-Douskos and Eric Lahtinen and Fan Long and Martin Rinard},
      title = {Automatic Error Elimination by Horizontal Code Transfer across Multiple Applications},
      booktitle = {Proceedings of the 36th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI '15)},
      publisher = {ACM},
      year = {2015},
      pages = {43-54},
      address = {Portland, USA},
      month = {13-17 June},
      doi = {http://dx.doi.org/10.1145/2737924.2737988}
    }
    					
    2015.11.06 Chris Simons, Jeremy Singer & David R. White Search-Based Refactoring: Metrics Are Not Enough 2015 Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15), pp. 47-61, Bergamo Italy, 5-7 September   Inproceedings
    Abstract: Search-based Software Engineering (SBSE) techniques have been applied extensively to refactor software, often based on metrics that describe the object-oriented structure of an application. Recent work shows that in some cases applying popular SBSE tools to open-source software does not necessarily lead to an improved version of the software as assessed by some subjective criteria. Through a survey of professionals, we investigate the relationship between popular SBSE refactoring metrics and the subjective opinions of software engineers. We find little or no correlation between the two. Through qualitative analysis, we find that a simple static view of software is insufficient to assess software quality, and that software quality is dependent on factors that are not amenable to measurement via metrics. We recommend that future SBSE refactoring research should incorporate information about the dynamic behaviour of software, and conclude that a human-in-the-loop approach may be the only way to refactor software in a manner helpful to an engineer.
    BibTeX:
    @inproceedings{SimonsSW15,
      author = {Chris Simons and Jeremy Singer and David R. White},
      title = {Search-Based Refactoring: Metrics Are Not Enough},
      booktitle = {Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15)},
      publisher = {Springer},
      year = {2015},
      pages = {47-61},
      address = {Bergamo, Italy},
      month = {5-7 September},
      doi = {http://dx.doi.org/10.1007/978-3-319-22183-0_4}
    }
    					
    2016.02.02 Jerry Swan & Nathan Burles Templar – A Framework for Template-Method Hyper-Heuristics 2015 Proceedings of the 18th European Conference on Genetic Programming (EuroGP '15), pp. 205-216, Copenhagen Denmark, 8-10 April   Inproceedings
    Abstract: In this work we introduce Templar, a software framework for customising algorithms via the generative technique of template-method hyper-heuristics. We first discuss the need for such an approach, presenting Quicksort as an example. We provide a functional definition of template-method hyper-heuristics, describe how this is implemented by Templar, and show how Templar may be invoked using simple client-code. Finally, we describe experiments using Templar to define a ‘hyper-quicksort’ with the aim of reducing power consumption—the results demonstrate that the generated algorithm has significantly improved performance on the test set.
    BibTeX:
    @inproceedings{SwanB15,
      author = {Jerry Swan and Nathan Burles},
      title = {Templar – A Framework for Template-Method Hyper-Heuristics},
      booktitle = {Proceedings of the 18th European Conference on Genetic Programming (EuroGP '15)},
      publisher = {Springer},
      year = {2015},
      pages = {205-216},
      address = {Copenhagen, Denmark},
      month = {8-10 April},
      doi = {http://dx.doi.org/10.1007/978-3-319-16501-1_17}
    }
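
The template-method idea behind Templar can be pictured as follows: an algorithm skeleton is fixed while a variation point is supplied as a parameter that a hyper-heuristic could search over. The quicksort sketch below mirrors the paper's hyper-quicksort example in spirit only; the pivot strategies are illustrative assumptions.

def quicksort(xs, choose_pivot):
    # Fixed skeleton; `choose_pivot` is the searchable variation point.
    if len(xs) <= 1:
        return xs
    pivot = choose_pivot(xs)
    lo = [x for x in xs if x < pivot]
    eq = [x for x in xs if x == pivot]
    hi = [x for x in xs if x > pivot]
    return quicksort(lo, choose_pivot) + eq + quicksort(hi, choose_pivot)

first = lambda xs: xs[0]                                    # naive strategy
median3 = lambda xs: sorted([xs[0], xs[len(xs) // 2], xs[-1]])[1]
print(quicksort([5, 3, 8, 1, 9, 2], median3))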
    					
    2016.02.17 Shin Hwei Tan & Abhik Roychoudhury relifix: Automated Repair of Software Regressions 2015 Proceedings of IEEE/ACM 37th IEEE International Conference on Software Engineering (ICSE '15), pp. 471-482, Florence Italy, 16-24 May   Inproceedings
    Abstract: Regression occurs when code changes introduce failures in previously passing test cases. As software evolves, regressions may be introduced. Fixing regression errors manually is time-consuming and error-prone. We propose an approach to the automated repair of software regressions, called relifix, which considers the regression repair problem as a problem of reconciling problematic changes. Specifically, we derive a set of code transformations obtained from our manual inspection of 73 real software regressions; this set of code transformations uses syntactical information from changed statements. Regression repair is then accomplished via a search over the code transformation operators - which operator to apply, and where. Our evaluation compares the repairability of relifix with GenProg on 35 real regression errors. relifix repairs 23 bugs, while GenProg only fixes five bugs. We also measure the likelihood of both approaches introducing new regressions given a reduced test suite. Our experimental results show that our approach is less likely to introduce new regressions than GenProg.
    BibTeX:
    @inproceedings{TanR15,
      author = {Shin Hwei Tan and Abhik Roychoudhury},
      title = {relifix: Automated Repair of Software Regressions},
      booktitle = {Proceedings of IEEE/ACM 37th IEEE International Conference on Software Engineering (ICSE '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {471-482},
      address = {Florence, Italy},
      month = {16-24 May},
      doi = {http://dx.doi.org/10.1109/ICSE.2015.65}
    }
    					
    2015.11.06 José Manuel Calderón Trilla, Simon Poulding & Colin Runciman Weaving Parallel Threads 2015 Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15), pp. 62-76, Bergamo Italy, 5-7 September   Inproceedings
    Abstract: As the speed of processors is starting to plateau, chip manufacturers are instead looking to multi-core architectures for increased performance. The ubiquity of multi-core hardware has made parallelism an important tool in writing performant programs. Unfortunately, parallel programming is still considered an advanced technique and most programs are written as sequential programs. We propose that we lift this burden from the programmer and allow the compiler to automatically determine which parts of a program can be executed in parallel. Historically, most attempts at auto-parallelism depended on static analysis alone. While static analysis is often able to find safe parallelism, it is difficult to determine worthwhile parallelism. This is known as the granularity problem. Our work shows that we can use static analysis in conjunction with search techniques by having the compiler execute the program and then alter the amount of parallelism based on execution speed. We do this by annotating the program with parallel annotations and using search to determine which annotations to enable. This allows the static analysis to find the safe parallelism and shift the burden of finding worthwhile parallelism to search. Our results show that by searching over the possible parallel settings we can achieve better performance than static analysis alone.
    BibTeX:
    @inproceedings{TrillaPR15,
      author = {José Manuel Calderón Trilla and Simon Poulding and Colin Runciman},
      title = {Weaving Parallel Threads},
      booktitle = {Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15)},
      publisher = {Springer},
      year = {2015},
      pages = {62-76},
      address = {Bergamo, Italy},
      month = {5-7 September},
      doi = {http://dx.doi.org/10.1007/978-3-319-22183-0_5}
    }
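    The search described above amounts to toggling compiler-inserted parallel annotations and timing the result. A minimal sketch of that loop as a hill climb over a bitmask of annotations; compile_and_time is a hypothetical stand-in for building and benchmarking the annotated program, not part of the authors' tooling:

    import random

    def hill_climb_annotations(n_annotations, compile_and_time, iterations=100):
        """Flip one annotation at a time; keep the flip if the program runs faster.
        compile_and_time(mask) -> runtime in seconds (hypothetical harness)."""
        mask = [True] * n_annotations      # start with every annotation enabled
        best = compile_and_time(mask)
        for _ in range(iterations):
            i = random.randrange(n_annotations)
            mask[i] = not mask[i]          # tentative flip
            t = compile_and_time(mask)
            if t < best:
                best = t                   # improvement: keep the flip
            else:
                mask[i] = not mask[i]      # no improvement: revert
        return mask, best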
    					
    2016.02.03 Nadarajen Veerapen, Gabriela Ochoa, Mark Harman & Edmund K. Burke An Integer Linear Programming approach to the single and bi-objective Next Release Problem 2015 Information and Software Technology, Vol. 65, pp. 1-13, September   Article
    Abstract: Context: The Next Release Problem involves determining the set of requirements to implement in the next release of a software project. When the problem was first formulated in 2001, Integer Linear Programming, an exact method, was found to be impractical because of large execution times. Since then, the problem has mainly been addressed by employing metaheuristic techniques. Objective: In this paper, we investigate if the single-objective and bi-objective Next Release Problem can be solved exactly and how to better approximate the results when exact resolution is costly. Methods: We revisit Integer Linear Programming for the single-objective version of the problem. In addition, we integrate it within the Epsilon-constraint method to address the bi-objective problem. We also investigate how the Pareto front of the bi-objective problem can be approximated through an anytime deterministic Integer Linear Programming-based algorithm when results are required within strict runtime constraints. Comparisons are carried out against NSGA-II. Experiments are performed on a combination of synthetic and real-world datasets. Findings: We show that a modern Integer Linear Programming solver is now a viable method for this problem. Large single-objective instances and small bi-objective instances can be solved exactly very quickly. On large bi-objective instances, execution times can be significant when calculating the complete Pareto front. However, good approximations can be found effectively. Conclusion: This study suggests that (1) approximation algorithms can be discarded in favor of the exact method for the single-objective instances and small bi-objective instances, (2) the Integer Linear Programming-based approximate algorithm outperforms the NSGA-II genetic approach on large bi-objective instances, and (3) the run times for both methods are low enough to be used in real-world situations.
    BibTeX:
    @article{VeerapenOHB15,
      author = {Nadarajen Veerapen and Gabriela Ochoa and Mark Harman and Edmund K. Burke},
      title = {An Integer Linear Programming approach to the single and bi-objective Next Release Problem},
      journal = {Information and Software Technology},
      year = {2015},
      volume = {65},
      pages = {1-13},
      month = {September},
      doi = {http://dx.doi.org/10.1016/j.infsof.2015.03.008}
    }
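    The single-objective formulation revisited above fits in a few lines: maximise total requirement value subject to a cost budget and precedence constraints. A minimal sketch using the open-source PuLP modelling library; all data values (value, cost, budget, precedes) are illustrative assumptions, not figures from the paper:

    # Minimal single-objective Next Release Problem as an ILP (PuLP).
    # All data below is illustrative, not taken from the paper.
    import pulp

    value = [10, 6, 8, 4]   # stakeholder value of each requirement
    cost = [5, 3, 6, 2]     # implementation cost of each requirement
    budget = 10             # cost bound for the next release
    precedes = [(0, 2)]     # requirement 0 must be included if 2 is

    prob = pulp.LpProblem("next_release", pulp.LpMaximize)
    x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(len(value))]
    prob += pulp.lpSum(v * xi for v, xi in zip(value, x))            # objective
    prob += pulp.lpSum(c * xi for c, xi in zip(cost, x)) <= budget   # budget
    for i, j in precedes:
        prob += x[j] <= x[i]                                         # precedence
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print([i for i, xi in enumerate(x) if xi.value() == 1])

    The bi-objective version in the paper is then amenable to the Epsilon-constraint method: optimise one objective while sweeping a bound on the other, re-solving a model like this one at every step.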
    					
    2016.03.09 Xinying Wang, Xiajun Jiang & Huibin Shi Prioritization of Test Scenarios using Hybrid Genetic Algorithm Based on UML Activity Diagram 2015 Proceedings of the 6th IEEE International Conference on Software Engineering and Service Science (ICSESS '15), pp. 854-857, Beijing China, 23-25 September   Inproceedings Testing and Debugging
    Abstract: Software testing is an essential part of the SDLC (Software Development Life Cycle). Test scenarios are used to derive test cases for model-based testing. However, with software rapidly growing in size and complexity, the cost of software will be too high if we want to test all the test cases. So this paper presents an approach using a Hybrid Genetic Algorithm (HGA) to prioritize test scenarios, which improves efficiency and reduces cost as well. The algorithm combines a Genetic Algorithm (GA) with the Particle Swarm Optimization (PSO) algorithm and uses a Local Search Strategy to update the local and global best information of the PSO. The proposed algorithm can prioritize test scenarios so as to find a critical scenario. Finally, the proposed method is applied to several typical UML activity diagrams and compared with the Simple Genetic Algorithm (SGA). The experimental results show that the proposed method not only prioritizes test scenarios, but also improves efficiency and further saves effort, time, and cost.
    BibTeX:
    @inproceedings{WangJS15,
      author = {Xinying Wang and Xiajun Jiang and Huibin Shi},
      title = {Prioritization of Test Scenarios using Hybrid Genetic Algorithm Based on UML Activity Diagram},
      booktitle = {Proceedings of the 6th IEEE International Conference on Software Engineering and Service Science (ICSESS '15)},
      publisher = {IEEE},
      year = {2015},
      pages = {854-857},
      address = {Beijing, China},
      month = {23-25 September},
      doi = {http://dx.doi.org/10.1109/ICSESS.2015.7339189}
    }
    					
    2015.08.07 David R. White & Jeremy Singer Rethinking Genetic Improvement Programming 2015 Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15) - the 1st International Genetic Improvement Workshop (GI '15), pp. 845-846, Madrid Spain, 12-12 July   Inproceedings
    Abstract: We re-examine the central motivation behind Genetic Improvement Programming (GIP), and argue that the most important insight is the concept of applying Genetic Programming to existing software. This viewpoint allows us to make several observations about potential directions for GIP research.
    BibTeX:
    @inproceedings{WhiteS15,
      author = {David R. White and Jeremy Singer},
      title = {Rethinking Genetic Improvement Programming},
      booktitle = {Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15) - the 1st International Genetic Improvement Workshop (GI '15)},
      publisher = {ACM},
      year = {2015},
      pages = {845-846},
      address = {Madrid, Spain},
      month = {12-12 July},
      doi = {http://dx.doi.org/10.1145/2739482.2768426}
    }
    					
    2015.12.08 Zhiqiao Wu & Jiafu Tang Designing and Reporting on Computational Experiments of Multi-objective Component Selection Algorithm 2015 International Journal of Information Technology & Decision Making, Vol. 14(2), March   Article
    Abstract: One important issue in component-based software development is the minimization of the development cost and the maximization of the system reliability while satisfying functional requirements. There are numerous publications on this issue based on metaheuristic techniques, but they exhibit two deficiencies: it is hard to evaluate the performance of the algorithms, and hard to fix their parameters in real-world applications. To address this problem, a three-phase algorithm was proposed by Wu et al. [International Journal of Information Technology & Decision Making 5 (2011) 811–841]. This paper describes computational experience in solving the problems using the metaheuristics and the proposed algorithm. The results indicate the efficiency of the proposed algorithms in terms of overall nondominated vector generation, a well-converged set of solutions, and diversity of solutions. Computational results and simulation analysis further assist a decision maker in fixing the optimal parameters of metaheuristics, including the number of iterations, crossover rate, and mutation rate, and in exploring hints for using metaheuristics for the problem.
    BibTeX:
    @article{WuT15,
      author = {Zhiqiao Wu and Jiafu Tang},
      title = {Designing and Reporting on Computational Experiments of Multi-objective Component Selection Algorithm},
      journal = {International Journal of Information Technology & Decision Making},
      year = {2015},
      volume = {14},
      number = {2},
      month = {March},
      doi = {http://dx.doi.org/10.1142/S0219622015500066}
    }
    					
    2015.08.07 Fan Wu, Westley Weimer, Mark Harman, Yue Jia & Jens Krinke Deep Parameter Optimisation 2015 Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15), pp. 1375-1382, Madrid Spain, 11-15 July   Inproceedings
    Abstract: We introduce a mutation-based approach to automatically discover and expose `deep' (previously unavailable) parameters that affect a program's runtime costs. These discovered parameters, together with existing (`shallow') parameters, form a search space that we tune using search-based optimisation in a bi-objective formulation that optimises both time and memory consumption. We implemented our approach and evaluated it on four real-world programs. The results show that we can improve execution time by 12% or achieve a 21% memory consumption reduction in the best cases. In three subjects, our deep parameter tuning results in a significant improvement over the baseline of shallow parameter tuning, demonstrating the potential value of our deep parameter extraction approach.
    BibTeX:
    @inproceedings{WuWHJK15,
      author = {Fan Wu and Westley Weimer and Mark Harman and Yue Jia and Jens Krinke},
      title = {Deep Parameter Optimisation},
      booktitle = {Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15)},
      publisher = {ACM},
      year = {2015},
      pages = {1375-1382},
      address = {Madrid, Spain},
      month = {11-15 July},
      doi = {http://dx.doi.org/10.1145/2739480.2754648}
    }
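    The bi-objective formulation above compares candidate parameter settings by Pareto dominance over execution time and memory consumption. A minimal dominance check for two minimised objectives (the sample objective vectors are made up):

    def dominates(a, b):
        """True if objective vector a Pareto-dominates b (all objectives minimised)."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    # (runtime in seconds, memory in MB) of two hypothetical configurations
    print(dominates((11.2, 340.0), (12.8, 390.0)))  # True: faster and smaller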
    					
    2015.08.07 Jing Xiao, Mei-Ling Gao & Min-Mei Huang Empirical Study of Multi-objective Ant Colony Optimization to Software Project Scheduling Problems 2015 Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15), pp. 759-766, Madrid Spain, 11-15 July   Inproceedings
    Abstract: The Software Project Scheduling Problem (SPSP) focuses on the management of software engineers and tasks in a software project so as to complete the tasks with minimal cost and duration. It is becoming more and more important and challenging with the rapid development of the software industry. In this paper, we employ a Multi-objective Evolutionary Algorithm using Decomposition and Ant Colony (MOEA/D-ACO) to solve the SPSP. To the best of our knowledge, it is the first application of Multi-objective Ant Colony Optimization (MOACO) to SPSP. Two heuristics capable of guiding the algorithm to search better in the SPSP model are examined. Experiments are conducted on a set of 36 publicly available instances. The results are compared with an implementation of another multi-objective evolutionary algorithm, NSGA-II, for SPSP. MOEA/D-ACO does not outperform NSGA-II on most of the complex instances in terms of the Pareto front, but it obtains solutions in much less time for all instances in our experiments, and it outperforms NSGA-II with shorter durations on most test instances. The performance may be improved by tuning the algorithm, such as incorporating more heuristic information or using other MOACO algorithms, which deserves further investigation.
    BibTeX:
    @inproceedings{XiaoGH15,
      author = {Jing Xiao and Mei-Ling Gao and Min-Mei Huang},
      title = {Empirical Study of Multi-objective Ant Colony Optimization to Software Project Scheduling Problems},
      booktitle = {Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15)},
      publisher = {ACM},
      year = {2015},
      pages = {759-766},
      address = {Madrid, Spain},
      month = {11-15 July},
      doi = {http://dx.doi.org/10.1145/2739480.2754702}
    }
    					
    2015.11.06 Yongrui Xu, Peng Liang & Muhammad Ali Babar Introducing Learning Mechanism for Class Responsibility Assignment Problem 2015 Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15), pp. 311-317, Bergamo Italy, 5-7 September   Inproceedings
    Abstract: Assigning responsibilities to classes is a vital task in object-oriented design, which has a great impact on the overall design of an application. However, this task is not easy for designers due to its complexity. Though many automated approaches have been developed to help designers assign responsibilities to classes, none of them considers extracting design knowledge (DK) about the relations between responsibilities in order to better adapt designs to design problems. To address this issue, we propose a novel Learning-based Genetic Algorithm (LGA) for the Class Responsibility Assignment (CRA) problem. In the proposed algorithm, a learning mechanism is introduced to extract DK about which responsibilities have a high probability of being assigned to the same class, and the extracted DK is employed to improve the design quality of the generated solutions. An experiment was conducted which shows the effectiveness of the proposed approach.
    BibTeX:
    @inproceedings{XuLB15,
      author = {Yongrui Xu and Peng Liang and Muhammad Ali Babar},
      title = {Introducing Learning Mechanism for Class Responsibility Assignment Problem},
      booktitle = {Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15)},
      publisher = {Springer},
      year = {2015},
      pages = {311-317},
      address = {Bergamo, Italy},
      month = {5-7 September},
      doi = {http://dx.doi.org/10.1007/978-3-319-22183-0_28}
    }
    					
    2016.02.17 Bo Yang, Yanmei Hu & Chin-Yu Huang An Architecture-Based Multi-Objective Optimization Approach to Testing Resource Allocation 2015 IEEE Transactions on Reliability, Vol. 64(1), pp. 497-515, March   Article
    Abstract: Software systems are widely employed in society. With a limited amount of testing resource available, testing resource allocation among components of a software system becomes an important issue. Most existing research on the testing resource allocation problem takes a single-objective optimization approach, which may not adequately address all the concerns in the decision-making process. In this paper, an architecture-based multi-objective optimization approach to testing resource allocation is proposed. An architecture-based model is used for system reliability assessment, which has the advantage of explicitly considering system architecture over reliability block diagram (RBD)-based models, and has good flexibility towards different architectural alternatives and component changes. A system cost modeling approach based on well-developed software cost models is proposed; it is a more flexible and suitable approach to the cost modeling of software than approaches based on an empirical cost model. A multi-objective optimization model is developed for the testing resource allocation problem, in which the three major concerns of the problem, i.e., system reliability, system cost, and the total amount of testing resource consumed, are taken into consideration. A multi-objective evolutionary algorithm (MOEA), called multi-objective differential evolution based on weighted normalized sum (WNS-MODE), is developed. Experimental studies are presented, and the experiments show several results. 1) The proposed architecture-based multi-objective optimization approach can identify testing resource allocation strategies with a good trade-off among the optimization objectives. 2) The developed WNS-MODE is better than the MOEA developed in recent research, called HaD-MOEA, in terms of both solution quality and computational efficiency. 3) WNS-MODE appears quite robust according to the sensitivity analysis results.
    BibTeX:
    @article{YangHH15,
      author = {Bo Yang and Yanmei Hu and Chin-Yu Huang},
      title = {An Architecture-Based Multi-Objective Optimization Approach to Testing Resource Allocation},
      journal = {IEEE Transactions on Reliability},
      year = {2015},
      volume = {64},
      number = {1},
      pages = {497-515},
      month = {March},
      doi = {http://dx.doi.org/10.1109/TR.2014.2372411}
    }
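    At its core, a weighted-normalised-sum scalarisation of the kind WNS-MODE is named after maps each objective onto a common [0, 1] scale before weighting. A generic sketch under that assumption (the weights, bounds, and objective values are illustrative, and this is not necessarily the authors' exact formulation):

    def weighted_normalised_sum(objs, mins, maxs, weights):
        """Scalarise an objective vector by min-max normalisation, then weighting."""
        return sum(w * (f - lo) / (hi - lo)
                   for f, lo, hi, w in zip(objs, mins, maxs, weights))

    # objectives: (negated reliability, cost, testing resource), all minimised
    print(weighted_normalised_sum((-0.95, 120.0, 30.0),
                                  (-1.0, 100.0, 10.0),
                                  (-0.9, 200.0, 50.0),
                                  (0.5, 0.3, 0.2)))  # 0.41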
    					
    2015.08.07 Kwaku Yeboah-Antwi & Benoit Baudry Embedding Adaptivity in Software Systems using the ECSELR Framework 2015 Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15) - the 1st International Genetic Improvement Workshop (GI '15), pp. 839-844, Madrid Spain, 12-12 July   Inproceedings
    Abstract: ECSELR is an ecologically-inspired approach to software evolution that enables environmentally driven evolution at runtime in extant software systems without relying on any offline components or management. ECSELR embeds adaptation and evolution inside the target software system, enabling the system to transform itself via Darwinian evolutionary mechanisms and adapt in a self-contained manner. This allows the software system to benefit autonomously from the useful emergent byproducts of evolution, like adaptivity and bio-diversity, avoiding the problems involved in engineering and maintaining such properties. ECSELR enables software systems to address changing environments at runtime, ensuring benefits like mitigation of attacks and memory optimization among others, while avoiding time-consuming and costly maintenance and downtime. ECSELR differs from existing work in that 1) adaptation is embedded in the target system, 2) evolution and adaptation happen online (i.e., in situ at runtime), and 3) ECSELR is able to embed adaptation inside systems that have already been started and are in the midst of execution. We demonstrate the use of ECSELR and present results on using the ECSELR framework to slim a software system.
    BibTeX:
    @inproceedings{Yeboah-AntwiB15,
      author = {Kwaku Yeboah-Antwi and Benoit Baudry},
      title = {Embedding Adaptivity in Software Systems using the ECSELR Framework},
      booktitle = {Proceedings of the Companion Publication of the 2015 on Genetic and Evolutionary Computation Conference (GECCO '15) - the 1st International Genetic Improvement Workshop (GI '15)},
      publisher = {ACM},
      year = {2015},
      pages = {839-844},
      address = {Madrid, Spain},
      month = {12-12 July},
      doi = {http://dx.doi.org/10.1145/2739482.2768425}
    }
    					
    2015.11.06 Shin Yoo Amortised Optimisation of Non-functional Properties in Production Environments 2015 Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15), pp. 31-46, Bergamo Italy, 5-7 September   Inproceedings
    Abstract: Search Based Software Engineering has high potential for optimising non-functional properties such as execution time or power consumption. However, many non-functional properties are dependent not only on the software system under consideration but also on the environment that surrounds the system. This necessitates support for online, in situ optimisation. This paper introduces the novel concept of amortised optimisation, which allows such online optimisation. The paper also presents two case studies: one that seeks to optimise JIT compilation, and another to optimise a hardware-dependent algorithm. The results show that, by using the open source libraries we provide, developers can improve the speed of their Python script by up to 8.6% with virtually no extra effort, and adapt a hardware-dependent algorithm automatically for unseen CPUs.
    BibTeX:
    @inproceedings{Yoo,
      author = {Shin Yoo},
      title = {Amortised Optimisation of Non-functional Properties in Production Environments},
      booktitle = {Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15)},
      publisher = {Springer},
      year = {2015},
      pages = {31-46},
      address = {Bergamo, Italy},
      month = {5-7 September},
      doi = {http://dx.doi.org/10.1007/978-3-319-22183-0_3}
    }
    					
    2015.11.06 Fang Yuan, Yi Bian, Zheng Li & Ruilian Zhao Epistatic Genetic Algorithm for Test Case Prioritization 2015 Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15), pp. 109-124, Bergamo Italy, 5-7 September   Inproceedings Testing and Debugging
    Abstract: Search-based technologies have been widely used in regression test suite optimization, including test case prioritization, test case selection and test suite minimization, to improve the efficiency and reduce the cost of testing. Unlike test case selection and test suite minimization, the evaluation of test case prioritization is based on the test case execution sequence, for which the genetic algorithm is one of the most popular algorithms employed. When permutation encoding is used to represent the execution sequence, the execution of previous test cases can affect the presence of the following test cases, namely the epistatic effect. In this paper, the application of epistatic domains theory in genetic algorithms for test case prioritization is analyzed, and the Epistatic Test Case Segment is defined. Two associated crossover operators are proposed based on epistasis. The empirical studies show that the proposed two-point crossover operator, E-Ord, outperforms the PMX crossover and can produce higher fitness with faster convergence.
    BibTeX:
    @inproceedings{YuanBLZ15,
      author = {Fang Yuan and Yi Bian and Zheng Li and Ruilian Zhao},
      title = {Epistatic Genetic Algorithm for Test Case Prioritization},
      booktitle = {Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15)},
      publisher = {Springer},
      year = {2015},
      pages = {109-124},
      address = {Bergamo, Italy},
      month = {5-7 September},
      doi = {http://dx.doi.org/10.1007/978-3-319-22183-0_8}
    }
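    The baseline operator named above, PMX (partially mapped crossover), preserves a segment of one parent and repairs the rest so the child remains a valid permutation of test cases. A textbook sketch of PMX, not the authors' E-Ord implementation:

    import random

    def pmx(parent1, parent2):
        """Partially mapped crossover for permutations (one child shown)."""
        n = len(parent1)
        a, b = sorted(random.sample(range(n), 2))
        child = [None] * n
        child[a:b + 1] = parent1[a:b + 1]        # copy the mapping segment
        for i in range(a, b + 1):
            gene = parent2[i]
            if gene in child[a:b + 1]:
                continue                          # already placed
            pos = i
            while a <= pos <= b:                  # follow the mapping chain
                pos = parent2.index(parent1[pos])
            child[pos] = gene
        for i in range(n):                        # fill the remaining slots
            if child[i] is None:
                child[i] = parent2[i]
        return child

    print(pmx([0, 1, 2, 3, 4, 5], [5, 4, 3, 2, 1, 0]))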
    					
    2015.12.08 Tao Yue, Shaukat Ali & Shuai Wang An Evolutionary and Automated Virtual Team Making Approach for Crowdsourcing Platforms 2015, pp. 113-130   Inbook
    Abstract: Crowdsourcing has demonstrated its capability of supporting various software development activities, including development and testing, as can be seen in several successful crowdsourcing platforms such as TopCoder and uTest. However, to crowdsource large-scale and complex software development and testing tasks, there are several optimization challenges to be addressed, such as division of tasks, searching a set of registrants, and assignment of tasks to registrants. Since in crowdsourcing a task can be assigned to geographically distributed registrants with various backgrounds, the quality of the final task deliverables is a key issue. As the first step to improve the quality, we propose a systematic and automated approach to optimize the assignment of registrants in a crowdsourcing platform to a crowdsourcing task. The objective is to find the best fit of a group of registrants to the defined task. A few examples of factors forming the optimization problem include the budget defined by the task submitter and the pay expectation of a registrant, the skills required by a task and the skills of a registrant, the task delivery deadline, and the availability of a registrant. We first collected a set of commonly seen factors that have an impact on the perfect matching between submitted tasks and a virtual team that consists of a selected set of registrants. We then formulated the optimization objective as a fitness function - the heuristic used by search algorithms (e.g., Genetic Algorithms) to find an optimal solution. We empirically evaluated a set of well-known search algorithms in software engineering, along with the proposed fitness function, to identify the best solution for our optimization problem. Results of our experiments are very positive in terms of solving optimization problems in a crowdsourcing context.
    BibTeX:
    @inbook{YueAW15,
      author = {Tao Yue and Shaukat Ali and Shuai Wang},
      title = {An Evolutionary and Automated Virtual Team Making Approach for Crowdsourcing Platforms},
      publisher = {Springer},
      year = {2015},
      pages = {113-130},
      doi = {http://dx.doi.org/10.1007/978-3-662-47011-4_7}
    }
    					
    2016.02.27 Ning Zhang, Yuhua Huang & Xinye Cai A Two-Phase External Archive Guided Multiobjective Evolutionary Algorithm for the Software Next Release Problem 2015 Proceedings of the 10th International Conference on Bio-Inspired Computing - Theories and Applications (BIC-TA '15), pp. 664-675, Hefei China, 25-28 September   Inproceedings
    Abstract: Decomposition-based multiobjective evolutionary algorithms have been widely used to solve multiobjective optimization problems (MOPs), by decomposing a MOP into several single-objective subproblems and optimizing them simultaneously. This paper proposes an adaptive mechanism to decide the evolutionary stages and dynamically allocate computational resource by using the information extracted from the external archive. Different from the previously proposed EAG-MOEA/D [2], the information extracted from the external archive is explicitly divided into two categories - the convergence information and the diversity information. The proposed algorithm is compared with five well-known algorithms on the Software Next Release Problem. Experimental results show that our proposed algorithm performs better than the other algorithms.
    BibTeX:
    @inproceedings{ZhangHC15,
      author = {Ning Zhang and Yuhua Huang and Xinye Cai},
      title = {A Two-Phase External Archive Guided Multiobjective Evolutionary Algorithm for the Software Next Release Problem},
      booktitle = {Proceedings of the 10th International Conference on Bio-Inspired Computing - Theories and Applications (BIC-TA '15)},
      publisher = {Springer},
      year = {2015},
      pages = {664-675},
      address = {Hefei, China},
      month = {25-28 September},
      doi = {http://dx.doi.org/10.1007/978-3-662-49014-3_59}
    }
    					
    2015.11.06 Yuanyuan Zhang, Mark Harman, Yue Jia & Federica Sarro Inferring Test Models from Kate’s Bug Reports using Multi-objective Search 2015 Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15), pp. 301-307, Bergamo Italy, 5-7 September   Inproceedings
    Abstract: Models inferred from system execution logs can be used to test general system behaviour. In this paper, we infer test models from user bug reports that are written in natural language. The inferred models can be used to derive new tests which further exercise the buggy features reported by users. Our search-based model inference approach considers three objectives: (1) to reduce the number of invalid user events generated (over-approximation), (2) to reduce the number of unrecognised user events (under-approximation), (3) to reduce the size of the model (readability). We apply our approach to 721 of Kate’s bug reports which contain the information required to reproduce the bugs. We compare our results to the state-of-the-art KLFA tool. Our results show that our inferred models require 19 tests to reveal a bug on average, which is 98 times fewer than the models inferred by KLFA.
    BibTeX:
    @inproceedings{ZhangHJS15,
      author = {Yuanyuan Zhang and Mark Harman and Yue Jia and Federica Sarro},
      title = {Inferring Test Models from Kate’s Bug Reports using Multi-objective Search},
      booktitle = {Proceedings of the 7th International Symposium on Search-Based Software Engineering (SSBSE ’15)},
      publisher = {Springer},
      year = {2015},
      pages = {301-307},
      address = {Bergamo, Italy},
      month = {5-7 September},
      doi = {http://dx.doi.org/10.1007/978-3-319-22183-0_27}
    }
    					
    2014.11.25 Ali Aburas & Alex Groce An Improved Memetic Algorithm with Method Dependence Relations (MAMDR) 2014 Proceedings of the 14th International Conference on Quality Software (QSIC '14), pp. 11-20, Allen TX USA, 2-3 October   Inproceedings Testing and Debugging
    Abstract: Search-based approaches are successfully used for generating unit tests for object-oriented programs in Java. However, these approaches may struggle to generate sequences of method calls with specific values to achieve high coverage, due to the large size of the search space. This paper proposes a memetic algorithm (MA) approach in which static analysis is used to identify method dependence relations (MDR) based on field access. This method dependence information is employed to reduce the search space and to guide the search towards regions that lead to full (or at least high) structural coverage. Our approach, MAMDR, combines both a genetic algorithm (GA) and Hill Climbing (HC) to generate test data for Java programs. The former is used to produce test cases that maximize the branch coverage of the CUT while minimizing the length of each test case. The latter is used to target branches left uncovered in the preceding search phase, using static information that guides the search to generate sequences of method calls and values that could cover the target branches. We compare MAMDR with pure random testing, a well-known search-based approach (EvoSuite), and a simple MA on several open source projects and classes, and show that the combination of MA and MDR is effective.
    BibTeX:
    @inproceedings{AburasG14,
      author = {Ali Aburas and Alex Groce},
      title = {An Improved Memetic Algorithm with Method Dependence Relations (MAMDR)},
      booktitle = {Proceedings of the 14th International Conference on Quality Software (QSIC '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {11-20},
      address = {Allen, TX, USA},
      month = {2-3 October},
      doi = {http://dx.doi.org/10.1109/QSIC.2014.12}
    }
    					
    2014.08.14 Shaukat Ali & Muhammad Zohaib Iqbal Improved Heuristics for Solving OCL Constraints using Search Algorithms 2014 Proceedings of the 2014 Conference on Genetic and Evolutionary Computation (GECCO '14), pp. 1231-1238, Vancouver Canada, 12-16 July   Inproceedings Testing and Debugging
    Abstract: The Object Constraint Language (OCL) is a standard language for specifying constraints on Unified Modeling Language (UML) models. The specified constraints can be used for various purposes, including verification and model-based testing (e.g., test data generation). Efficiently solving OCL constraints is one of the key requirements for the practical use of OCL. In this paper, we propose an improvement to existing heuristics for solving OCL constraints using search algorithms. We evaluate our improved heuristics in two empirical studies with three search algorithms: the Alternating Variable Method (AVM), (1+1) Evolutionary Algorithm (EA), and a Genetic Algorithm (GA). We also used Random Search (RS) as a comparison baseline. The first empirical study was conducted using carefully designed artificial problems (constraints) to assess each individual heuristic. The second empirical study is based on an industrial case study provided by Cisco about model-based testing of Video Conferencing Systems. The results of both empirical evaluations reveal that the effectiveness of the search algorithms, measured in terms of the time taken to solve the OCL constraints to generate data, is significantly improved when using the novel heuristics presented in this paper. In particular, our experiments show that (1+1) EA with the novel heuristics has the highest success rate among all the analyzed algorithms, as it requires the fewest iterations to solve constraints.
    BibTeX:
    @inproceedings{AliI14,
      author = {Shaukat Ali and Muhammad Zohaib Iqbal},
      title = {Improved Heuristics for Solving OCL Constraints using Search Algorithms},
      booktitle = {Proceedings of the 2014 Conference on Genetic and Evolutionary Computation (GECCO '14)},
      publisher = {ACM},
      year = {2014},
      pages = {1231-1238},
      address = {Vancouver, Canada},
      month = {12-16 July},
      doi = {http://dx.doi.org/10.1145/2576768.2598308}
    }
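    Search-based constraint solvers of this kind steer the search with branch-distance functions: a predicate maps to 0 when satisfied and to "how far it is from being true" otherwise. A sketch of the classic Tracey-style distances on which such heuristics build (the constant K and the example predicate are illustrative; this is not the paper's improved heuristics):

    K = 1.0  # penalty constant added when a predicate is false

    def distance(op, lhs, rhs):
        """Distance of a relational predicate from being true (0 = satisfied)."""
        if op == "==":
            return abs(lhs - rhs)
        if op == "<":
            return 0.0 if lhs < rhs else (lhs - rhs) + K
        if op == "<=":
            return 0.0 if lhs <= rhs else (lhs - rhs) + K
        if op == ">":
            return distance("<", rhs, lhs)
        if op == ">=":
            return distance("<=", rhs, lhs)
        raise ValueError(op)

    # conjunctions sum distances, disjunctions take the minimum:
    # e.g. "x > 10 and x <= 20" for x = 3
    print(distance(">", 3, 10) + distance("<=", 3, 20))  # 8.0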
    					
    2014.08.14 Marcos Alvares, Fernando Buarque & Tshilidzi Marwala Application of Computational Intelligence for Source Code Classification 2014 Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC '14), pp. 895-902, Beijing China, 6-11 July   Inproceedings
    Abstract: Multi-language Source Code Management systems have been largely used to collaboratively manage software development projects. These systems represent a fundamental step towards fully exploiting communication enhancements by producing concrete value in the way people collaborate to produce more reliable computational systems. These systems evaluate results of analyses in order to organise and optimise source code. These analyses are strongly dependent on technologies (i.e. frameworks, programming languages, libraries), each with its own characteristics and syntactic structure. To overcome this limitation, source code classification is an essential pre-processing step to identify which analyses should be evaluated. This paper introduces a new approach for generating content-based classifiers by using Evolutionary Algorithms. Experiments were performed on real-world source code collected from more than 200 different open source projects. Results show that our approach can be successfully used for creating more accurate source code classifiers. The resulting classifier is also expansible and flexible to new classification scenarios (opening perspectives for new technologies).
    BibTeX:
    @inproceedings{AlvaresBM14,
      author = {Marcos Alvares and Fernando Buarque and Tshilidzi Marwala},
      title = {Application of Computational Intelligence for Source Code Classification},
      booktitle = {Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {895-902},
      address = {Beijing, China},
      month = {6-11 July},
      url = {http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=6900300}
    }
    					
    2014.08.14 Boukhdhir Amal, Marouane Kessentini, Slim Bechikh, Josselin Dea & Lamjed Ben Said On the Use of Machine Learning and Search-Based Software Engineering for Ill-Defined Fitness Function: A Case Study on Software Refactoring 2014 Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14), Vol. 8636, pp. 31-45, Fortaleza Brazil, 26-29 August   Inproceedings Distribution and Maintenance
    Abstract: The most challenging step when adapting a search-based technique to a software engineering problem is the definition of the fitness function. For several software engineering problems, the fitness function is ill-defined, subjective, or difficult to quantify. For example, the evaluation of a software design is subjective. This paper introduces the use of a neural network-based fitness function for the problem of software refactoring. The software engineers manually evaluate the refactoring solutions suggested by a Genetic Algorithm (GA) for a few iterations; then an Artificial Neural Network (ANN) uses these training examples to evaluate the refactoring solutions for the remaining iterations. We evaluate the efficiency of our approach on six different open-source systems through an empirical study and compare the performance of our technique with several existing refactoring studies.
    BibTeX:
    @inproceedings{AmalKBDS14,
      author = {Boukhdhir Amal and Marouane Kessentini and Slim Bechikh and Josselin Dea and Lamjed Ben Said},
      title = {On the Use of Machine Learning and Search-Based Software Engineering for Ill-Defined Fitness Function: A Case Study on Software Refactoring},
      booktitle = {Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14)},
      publisher = {Springer},
      year = {2014},
      volume = {8636},
      pages = {31-45},
      address = {Fortaleza, Brazil},
      month = {26-29 August},
      doi = {http://dx.doi.org/10.1007/978-3-319-09940-8_3}
    }
    					
    2014.09.02 Yasaman Amannejad, Vahid Garousi, Rob Irving & Zahra Sahaf A Search-Based Approach for Cost-Effective Software Test Automation Decision Support and an Industrial Case Study 2014 Proceedings of the IEEE 7th International Conference on Software Testing, Verification and Validation Workshops (ICSTW '14), pp. 302-311, Cleveland OH USA, 31 March - 4 April   Inproceedings Testing and Debugging
    Abstract: Test automation is a widely-used approach to reduce the cost of manual software testing. However, if it is not planned or conducted properly, automated testing will not necessarily be more cost-effective than manual testing. Deciding what parts of a given System Under Test (SUT) should be tested in an automated fashion and what parts should remain manual is a frequently-asked and challenging question for practitioner testers. In this study, we propose a search-based approach for deciding what parts of a given SUT should be tested automatically to gain the highest Return On Investment (ROI). This work is the first systematic approach to this problem, and the significance of our approach is that it considers automation in the entire testing process (i.e., from test-case design, to test scripting, to test execution, and test-result evaluation). The proposed approach has been applied in an industrial setting in the context of a software product used in the oil and gas industry in Canada. Among the results of the case study is that, when planned and conducted properly using our decision-support approach, test automation provides the highest ROI. In this study, we show that if the automation decision is taken effectively, test-case design, test execution, and test evaluation can result in about 307%, 675%, and 41% ROI, respectively, in 10 rounds of using automated test suites.
    BibTeX:
    @inproceedings{AmannejadGIS14,
      author = {Yasaman Amannejad and Vahid Garousi and Rob Irving and Zahra Sahaf},
      title = {A Search-Based Approach for Cost-Effective Software Test Automation Decision Support and an Industrial Case Study},
      booktitle = {Proceedings of the IEEE 7th International Conference on Software Testing, Verification and Validation Workshops (ICSTW '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {302-311},
      address = {Cleveland, OH, USA},
      month = {31 March - 4 April},
      doi = {http://dx.doi.org/10.1109/ICSTW.2014.34}
    }
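    The ROI figures quoted above follow the usual definition: the benefit of automation relative to its cumulative cost over repeated testing rounds. A worked sketch of that arithmetic (all person-hour figures are invented placeholders, not the case-study data):

    def roi(manual_per_round, setup_cost, automated_per_round, rounds):
        """ROI of automating an activity vs. repeating it manually each round."""
        benefit = rounds * manual_per_round
        cost = setup_cost + rounds * automated_per_round
        return (benefit - cost) / cost

    # e.g. 10 rounds: 20 h/round manual vs. 40 h setup plus 2 h/round automated
    print(f"{roi(20, 40, 2, 10):.0%}")  # 233%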
    					
    2014.09.22 Amarjeet & Jitender Kumar Chhabra An Empirical Study of the Sensitivity of Quality Indicator for Software Module Clustering 2014 Proceedings of the 7th International Conference on Contemporary Computing (IC3 '14), pp. 206-211, Noida India, 7-9 August   Inproceedings
    Abstract: Recently, there has been significant progress in applying evolutionary multiobjective optimization techniques to the software module clustering problem. The result of an evolutionary multiobjective optimization technique for the software module clustering problem is a set of many non-dominated clustering solutions. Generally, the quality indicators of the clustering solutions produced by these techniques are sensitive to minor variations in the decision variables of the clustering solutions. Researchers have focused on finding software module clusterings with better quality indicators; however, in practice developers may not always be interested in clustering solutions with better quality indicators, particularly if these quality indicators are quite sensitive. In such situations, developers look for clustering solutions whose quality indicators are not sensitive to small variations in the decision variables of the candidate clustering solution. This paper performs an experiment on the sensitivity of quality indicators for software module clustering solutions with two multiobjective formulations, MCA and ECA. To perform the experiment, NSGA-II is used as the multi-objective evolutionary algorithm. We evaluate the sensitivity of quality indicators for six real-world software systems and one random problem. Results indicate that the quality indicator for the MCA formulation is less sensitive than for the ECA formulation, and hence MCA will be a better choice for multiobjective software module clustering from a sensitivity perspective.
    BibTeX:
    @inproceedings{AmarjeetC14,
      author = {Amarjeet and Jitender Kumar Chhabra},
      title = {An Empirical Study of the Sensitivity of Quality Indicator for Software Module Clustering},
      booktitle = {Proceedings of the 7th International Conference on Contemporary Computing (IC3 '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {206-211},
      address = {Noida, India},
      month = {7-9 August},
      doi = {http://dx.doi.org/10.1109/IC3.2014.6897174}
    }
    					
    2015.12.08 Amarjeet & Jitender Kumar Chhabra Preserving Core Components of Object-Oriented Packages while Maintaining Structural Quality 2014 Proceedings of International Conference on Information and Communication Technologies (ICICT '14), pp. 833-840, Kochi India, 3-5 December   Inproceedings
    Abstract: Different software maintenance activities, carried out from time to time, lead to structural quality degradation. To improve the degraded structural quality of a software system, re-structuring of software entities is desirable, and this can be achieved by using a suitable software clustering technique. Current techniques require too many components (e.g., classes) to be moved between modules (e.g., packages) to achieve high quality software. In such a scenario, the core components of the packages may also move, resulting in a loss of identity of the packages. This paper presents a multi-objective evolutionary optimization technique to improve the quality of existing software while preserving the core components of the packages. We evaluate the structural quality of six real-world and one random problem instances, using the MCA and ECA multi-objective approaches.
    BibTeX:
    @inproceedings{AmarjeetC14b,
      author = {Amarjeet and Jitender Kumar Chhabra},
      title = {Preserving Core Components of Object-Oriented Packages while Maintaining Structural Quality},
      booktitle = {Proceedings of International Conference on Information and Communication Technologies (ICICT '14)},
      publisher = {Elsevier},
      year = {2014},
      pages = {833-840},
      address = {Kochi, India},
      month = {3-5 December},
      doi = {http://dx.doi.org/10.1016/j.procs.2015.02.152}
    }
    					
    2015.12.09 Azam Andalib & Seyed Morteza Babamir A New Approach for Test Case Generation by Discrete Particle Swarm Optimization Algorithm 2014 Proceedings of the 22nd Iranian Conference on Electrical Engineering (ICEE '14), pp. 1180-1185, Tehran Iran, 20-22 May   Inproceedings
    Abstract: The increasing complexity of software has reduced the efficiency of common software testing methods and made it necessary to use new, optimised methods to produce test cases (TCs) that cover a high percentage of the target program and find existing errors. Thus, the production of TCs is today considered an important aim of software testing methods. Particle Swarm Optimization (PSO) is an intelligent technique based on the collective movement of particles, inspired by the social behavior of flocks of birds and schools of fish. After a full introduction of this algorithm and the reasons for using PSO, the present study proposes a method for the automatic production of TCs such that the highest code coverage is achieved with the minimum number of TCs. For better analysis of the results, a program was investigated as a case study with the proposed method. As evolutionary structures such as the genetic algorithm (GA), which achieve a high percentage of code coverage, have long been in use, the results are compared with a GA to demonstrate the optimization achieved by the proposed method.
    BibTeX:
    @inproceedings{AndalibB14,
      author = {Azam Andalib and Seyed Morteza Babamir},
      title = {A New Approach for Test Case Generation by Discrete Particle Swarm Optimization Algorithm},
      booktitle = {Proceedings of the 22nd Iranian Conference on Electrical Engineering (ICEE '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {1180-1185},
      address = {Tehran, Iran},
      month = {20-22 May},
      doi = {http://dx.doi.org/10.1109/IranianCEE.2014.6999714}
    }
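    Discrete (binary) PSO typically keeps the continuous velocity update and maps each velocity component to a bit-flip probability through a sigmoid, as in Kennedy and Eberhart's binary PSO. A generic sketch of one particle update for a bit-vector test-data encoding, not necessarily the authors' exact variant:

    import math
    import random

    def binary_pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, vmax=4.0):
        """One velocity/position update of binary PSO for a single particle."""
        for i in range(len(x)):
            r1, r2 = random.random(), random.random()
            v[i] = (w * v[i] + c1 * r1 * (pbest[i] - x[i])
                             + c2 * r2 * (gbest[i] - x[i]))
            v[i] = max(-vmax, min(vmax, v[i]))    # clamp the velocity
            prob = 1.0 / (1.0 + math.exp(-v[i]))  # sigmoid -> bit probability
            x[i] = 1 if random.random() < prob else 0
        return x, v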
    					
    2016.03.08 Sandro Santos Andrade & Raimundo José de A. Macêdo Do Search-Based Approaches Improve the Design of Self-Adaptive Systems? A Controlled Experiment 2014 Proceedings of Brazilian Symposium on Software Engineering (SBES '14), pp. 101-110, Maceio Brazil, 28 September - 3 October   Inproceedings
    Abstract: Endowing software systems with self-adaptation capabilities has shown to be quite effective in coping with uncertain and dynamic operational environments as well as managing the complexity generated by non-functional requirements. Nowadays, a large number of approaches tackle the issue of enabling self-adaptive behavior from different perspectives and under diverse assumptions, making it harder for architects to make judicious decisions about design alternatives and quality attribute tradeoffs. It has been claimed that search-based software design approaches may improve the quality of resulting artifacts and the productivity of design processes, as a consequence of promoting a more comprehensive and systematic representation of design knowledge and preventing design bias and false intuition. To the best of our knowledge, no controlled experiments have been performed to provide sound evidence for this claim in the self-adaptive systems domain. In this paper, we report the results of a quasi-experiment performed with 24 students of a graduate program in Distributed and Ubiquitous Computing. The experiment evaluated the design of self-adaptive systems using a search-based approach, in contrast to the use of a style-based non-automated approach. The results show that search-based approaches can improve the effectiveness of resulting architectures and reduce design complexity. We found no evidence regarding the method's potential for leveraging the acquisition of distilled design knowledge by novice software architects.
    BibTeX:
    @inproceedings{AndradeM14,
      author = {Sandro Santos Andrade and Raimundo José de A. Macêdo},
      title = {Do Search-Based Approaches Improve the Design of Self-Adaptive Systems? A Controlled Experiment},
      booktitle = {Proceedings of Brazilian Symposium on Software Engineering (SBES '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {101-110},
      address = {Maceio, Brazil},
      month = {28 September - 3 October},
      doi = {http://dx.doi.org/10.1109/SBES.2014.17}
    }
    					
    2014.08.14 Allysson Allex Araújo & Matheus Henrique Esteves Paixão Machine Learning for User Modeling in an Interactive Genetic Algorithm for the Next Release Problem 2014 Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14), Vol. 8636, pp. 228-233, Fortaleza Brazil, 26-29 August   Inproceedings Requirements/Specifications
    Abstract: The Next Release Problem consists in selecting which requirements will be implemented in the next software release. Many SBSE approaches to the NRP still lack the ability to efficiently include human opinion and its peculiarities in the search process. Most of these difficulties are due to the problem of human fatigue. Thus, we propose the use of a machine learning technique to model the user and replace them in an Interactive Genetic Algorithm for the NRP. Intermediate results show that an IGA can successfully incorporate the user's preferences in the final solution.
    BibTeX:
    @inproceedings{AraujoP14,
      author = {Allysson Allex Araújo and Matheus Henrique Esteves Paixão},
      title = {Machine Learning for User Modeling in an Interactive Genetic Algorithm for the Next Release Problem},
      booktitle = {Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14)},
      publisher = {Springer},
      year = {2014},
      volume = {8636},
      pages = {228-233},
      address = {Fortaleza, Brazil},
      month = {26-29 August},
      doi = {http://dx.doi.org/10.1007/978-3-319-09940-8_16}
    }
    					
    2013.10.09 Andrea Arcuri & Lionel Briand A Hitchhiker's Guide To Statistical Tests For Assessing Randomized Algorithms In Software Engineering 2014 Journal of Software Testing, Verification and Reliability, Vol. 24(3), pp. 219-250, May   Article Testing and Debugging
    Abstract: Randomized algorithms are widely used to address many types of software engineering problems, especially in the area of software verification and validation with a strong emphasis on test automation. However, randomized algorithms are affected by chance and so require the use of appropriate statistical tests to be properly analysed in a sound manner. This paper features a systematic review regarding recent publications in 2009 and 2010 showing that, overall, empirical analyses involving randomized algorithms in software engineering tend to not properly account for the random nature of these algorithms. Many of the novel techniques presented clearly appear promising, but the lack of soundness in their empirical evaluations casts unfortunate doubts on their actual usefulness. In software engineering, although there are guidelines on how to carry out empirical analyses involving human subjects, those guidelines are not directly and fully applicable to randomized algorithms. Furthermore, many of the textbooks on statistical analysis are written from the viewpoints of social and natural sciences, which present different challenges from randomized algorithms. To address the questionable overall quality of the empirical analyses reported in the systematic review, this paper provides guidelines on how to carry out and properly analyse randomized algorithms applied to solve software engineering tasks, with a particular focus on software testing, which is by far the most frequent application area of randomized algorithms within software engineering.
    BibTeX:
    @article{ArcuriB14,
      author = {Andrea Arcuri and Lionel Briand},
      title = {A Hitchhiker's Guide To Statistical Tests For Assessing Randomized Algorithms In Software Engineering},
      journal = {Journal of Software Testing, Verification and Reliability},
      year = {2014},
      volume = {24},
      number = {3},
      pages = {219-250},
      month = {May},
      doi = {http://dx.doi.org/10.1002/stvr.1486}
    }
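    Two of the recommendations in this guide are the Mann-Whitney U test for statistical significance and the Vargha-Delaney A12 effect size when comparing two randomized algorithms. A sketch with SciPy (the two samples of coverage scores are made-up placeholders):

    from scipy.stats import mannwhitneyu

    # branch coverage achieved over repeated runs of two algorithms (made up)
    a = [0.81, 0.79, 0.84, 0.80, 0.83]
    b = [0.74, 0.77, 0.73, 0.78, 0.75]

    u, p = mannwhitneyu(a, b, alternative="two-sided")

    # Vargha-Delaney A12: probability that a random run of A beats one of B
    wins = sum(1.0 if x > y else 0.5 if x == y else 0.0 for x in a for y in b)
    a12 = wins / (len(a) * len(b))
    print(f"p-value = {p:.4f}, A12 = {a12:.2f}")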
    					
    2014.08.14 Andrea Arcuri & Gordon Fraser On the Effectiveness of Whole Test Suite Generation 2014 Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14), Vol. 8636, pp. 1-15, Fortaleza Brazil, 26-29 August   Inproceedings Testing and Debugging
    Abstract: A common application of search-based software testing is to generate test cases for all goals defined by a coverage criterion (e.g., statements, branches, mutants). Rather than generating one test case at a time for each of these goals individually, whole test suite generation optimizes entire test suites towards satisfying all goals at the same time. There is evidence that the overall coverage achieved with this approach is superior to that of targeting individual coverage goals. Nevertheless, there remains some uncertainty on whether the whole test suite approach might be inferior to a more focused search in the case of particularly difficult coverage goals. In this paper, we perform an in-depth analysis to study if this is the case. An empirical study on 100 Java classes reveals that indeed there are some testing goals that are easier to cover with the traditional approach. However, their number is not only very small in comparison with those which are only covered by the whole test suite approach, but also those coverage goals appear in small classes for which both approaches already obtain high coverage.
    BibTeX:
    @inproceedings{ArcuriF14,
      author = {Andrea Arcuri and Gordon Fraser},
      title = {On the Effectiveness of Whole Test Suite Generation},
      booktitle = {Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14)},
      publisher = {Springer},
      year = {2014},
      volume = {8636},
      pages = {1-15},
      address = {Fortaleza, Brazil},
      month = {26-29 August},
      doi = {http://dx.doi.org/10.1007/978-3-319-09940-8_1}
    }
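    Whole test suite generation scores an entire suite at once: for each coverage goal it takes the best (lowest) branch distance achieved by any test, normalises it, and sums over goals. A sketch of that aggregation using the d/(d+1) normalisation common in the search-based testing literature (the distance data is illustrative):

    def normalise(d):
        """Map a raw branch distance into [0, 1)."""
        return d / (d + 1.0)

    def whole_suite_fitness(suite_distances, n_branches):
        """suite_distances: one dict per test, {branch_id: raw distance}.
        Minimised fitness: per branch, the best normalised distance any
        test achieves, or 1.0 if the branch is never reached."""
        total = 0.0
        for branch in range(n_branches):
            best = min((t.get(branch, float("inf")) for t in suite_distances),
                       default=float("inf"))
            total += 1.0 if best == float("inf") else normalise(best)
        return total

    suite = [{0: 0.0, 1: 4.0}, {1: 1.0}]   # two tests; branch 2 never reached
    print(whole_suite_fitness(suite, 3))   # 0.0 + 0.5 + 1.0 = 1.5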
    					
    2014.09.02 Andrea Arcuri, Gordon Fraser & Juan Pablo Galeotti Automated Unit Test Generation for Classes with Environment Dependencies 2014 Proceedings of the 2014 International Conference on Automated Software Engineering (ASE '14), pp. 79-90, Västerås Sweden, 15-19 September   Inproceedings Testing and Debugging
    Abstract: Automated test generation for object-oriented software typically consists of producing sequences of calls aiming at high code coverage. In practice, the success of this process may be inhibited when classes interact with their environment, such as the file system, network, user-interactions, etc. This leads to two major problems: First, code that depends on the environment can sometimes not be fully covered simply by generating sequences of calls to a class under test, for example when execution of a branch depends on the contents of a file. Second, even if code that is environment-dependent can be covered, the resulting tests may be unstable, i.e., they would pass when first generated, but then may fail when executed in a different environment. For example, tests on classes that make use of the system time may have failing assertions if the tests are executed at a different time than when they were generated. In this paper, we apply bytecode instrumentation to automatically separate code from its environmental dependencies, and extend the EVOSUITE Java test generation tool such that it can explicitly set the state of the environment as part of the sequences of calls it generates. Using a prototype implementation, which handles a wide range of environmental interactions such as the file system, console inputs and many non-deterministic functions of the Java virtual machine (JVM), we performed experiments on 100 Java projects randomly selected from SourceForge (the SF100 corpus). The results show significantly improved code coverage, in some cases even in the order of +80%/+90%. Furthermore, our techniques reduce the number of unstable tests by more than 50%.
    BibTeX:
    @inproceedings{ArcuriFG14,
      author = {Andrea Arcuri and Gordon Fraser and Juan Pablo Galeotti},
      title = {Automated Unit Test Generation for Classes with Environment Dependencies},
      booktitle = {Proceedings of the 2014 International Conference on Automated Software Engineering (ASE '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {79-90},
      address = {Västerås, Sweden},
      month = {15-19 September},
      doi = {http://dx.doi.org/10.1145/2642937.2642986}
    }
    					
    2014.08.14 Danilo Ardagna, Giovanni Paolo Gibilisco, Michele Ciavotta & Alexander Lavrentev A Multi-Model Optimization Framework for the Model Driven Design of Cloud Applications 2014 Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14), Vol. 8636, pp. 61-76, Fortaleza Brazil, 26-29 August   Inproceedings
    Abstract: The rise and adoption of the Cloud computing paradigm has had a strong impact on the ICT world in the last few years; this technology has now reached maturity, and Cloud providers offer a variety of solutions and services to their customers. However, besides its advantages, Cloud computing has introduced new issues and challenges. In particular, the heterogeneity of the Cloud services offered and their relative pricing models makes the identification of a deployment solution that minimizes costs and guarantees QoS very complex. Performance assessment of Cloud-based applications calls for new models and tools that take into consideration the dynamism and multi-tenancy intrinsic to the Cloud environment. The aim of this work is to provide a novel mixed integer linear program (MILP) approach to find a minimum-cost feasible cloud configuration for a given cloud-based application. The feasibility of the solution is considered with respect to some non-functional requirements that are analyzed through multiple performance models with different levels of accuracy. The initial solution is further improved by a local search based procedure. The quality of the initial feasible solution is compared against first-principle heuristics currently adopted by practitioners and Cloud providers.
    BibTeX:
    @inproceedings{ArdagnaGCL14,
      author = {Danilo Ardagna and Giovanni Paolo Gibilisco and Michele Ciavotta and Alexander Lavrentev},
      title = {A Multi-Model Optimization Framework for the Model Driven Design of Cloud Applications},
      booktitle = {Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14)},
      publisher = {Springer},
      year = {2014},
      volume = {8636},
      pages = {61-76},
      address = {Fortaleza, Brazil},
      month = {26-29 August},
      doi = {http://dx.doi.org/10.1007/978-3-319-09940-8_5}
    }
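
    Illustrative sketch (Python): to make the MILP step concrete, a deliberately tiny cost-minimisation model written with the PuLP library. The VM types, prices and the single capacity constraint are invented, and the paper's formulation additionally encodes QoS feasibility through multiple performance models; this is not the authors' model.
    # Minimal minimum-cost configuration MILP (invented data).
    import pulp

    vm_types = {"small": (1.0, 0.05), "medium": (2.5, 0.11), "large": (6.0, 0.24)}
    demand = 10.0  # required aggregate capacity (hypothetical units)

    prob = pulp.LpProblem("min_cost_cloud_config", pulp.LpMinimize)
    n = {t: pulp.LpVariable(f"n_{t}", lowBound=0, cat="Integer") for t in vm_types}

    prob += pulp.lpSum(price * n[t] for t, (_, price) in vm_types.items())   # cost
    prob += pulp.lpSum(cap * n[t] for t, (cap, _) in vm_types.items()) >= demand

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print({t: int(n[t].value()) for t in vm_types}, pulp.value(prob.objective))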
    					
    2014.02.21 Wesley Klewerton Guez Assunção, Thelma Elita Colanzi, Silvia Regina Vergilio & Aurora Pozo A Multi-objective Optimization Approach for the Integration and Test Order Problem 2014 Information Sciences, Vol. 267, pp. 119-139, May   Article Testing and Debugging
    Abstract: A common problem found during integration testing is to determine an order in which to integrate and test the units. Important factors related to stubbing costs and constraints regarding the software development context must be considered. To solve this problem, the most promising results were obtained with multi-objective algorithms; however, few algorithms and contexts have been addressed by existing works. Considering this fact, this paper aims at introducing a generic approach based on multi-objective optimization that can be applied in different development contexts and with distinct multi-objective algorithms. The approach is instantiated in the object- and aspect-oriented contexts, and evaluated with real systems and three algorithms: NSGA-II, SPEA2 and PAES. The algorithms are compared using different numbers of objectives and four quality indicators. Results point out that the characteristics of the systems, the instantiation context and the number of objectives influence the behavior of the algorithms. Although PAES reaches better results for more complex systems, NSGA-II is more suitable to solve the referred problem in general cases, considering all systems and indicators.
    BibTeX:
    @article{AssuncaoCVP14,
      author = {Wesley Klewerton Guez Assunção and Thelma Elita Colanzi and Silvia Regina Vergilio and Aurora Pozo},
      title = {A Multi-objective Optimization Approach for the Integration and Test Order Problem},
      journal = {Information Sciences},
      year = {2014},
      volume = {267},
      pages = {119-139},
      month = {May},
      doi = {http://dx.doi.org/10.1016/j.ins.2013.12.040}
    }
    					
    2014.02.21 Márcio de Oliveira Barros An Experimental Evaluation of the Importance of Randomness in Hill Climbing Searches applied to Software Engineering Problems 2014 Empirical Software Engineering   Article
    Abstract: Random number generators are a core component of heuristic search algorithms. They are used to build candidate solutions and reduce bias while transforming these solutions during the search. Despite their usefulness, random numbers also have drawbacks, as one cannot guarantee that all portions of the search space are covered by the search and must run an algorithm many times to statistically assess its behavior. Our objective is to determine whether deterministic quasi-random sequences can be used as an alternative to pseudo-random numbers in feeding “randomness” into Hill Climbing searches addressing Software Engineering problems. We have designed and executed three experimental studies in which a Hill Climbing search was used to find solutions for two Software Engineering problems: software module clustering and requirement selection. The algorithm was executed using both pseudo-random numbers and three distinct quasi-random sequences (Faure, Halton, and Sobol). The software clustering problem was evaluated on 32 real-world instances and the requirement selection problem was addressed using 15 instances reused from previous research works. The experimental studies were chained so that as few experimental factors as possible varied between any given study and the next. Results found by searches powered by distinct quasi-random sequences were compared to those produced by the pseudo-random search on a per-instance basis. The comparison evaluated search efficiency (processing time required to run the search) and effectiveness (quality of results produced by the search). Contrary to previous findings observed in the context of other heuristic search algorithms, we found evidence that quasi-random sequences cannot regularly outperform pseudo-random numbers in Hill Climbing searches. Detailed statistical analysis is provided to support the evidence favoring pseudo-random numbers.
    BibTeX:
    @article{Barros14,
      author = {Márcio de Oliveira Barros},
      title = {An Experimental Evaluation of the Importance of Randomness in Hill Climbing Searches applied to Software Engineering Problems},
      journal = {Empirical Software Engineering},
      year = {2014},
      doi = {http://dx.doi.org/10.1007/s10664-013-9294-4}
    }
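
    Illustrative sketch (Python): a hill climber fed either from Python's PRNG or from a hand-rolled Halton sequence, one base per dimension. The objective function and step scheme are invented; the study itself used Faure, Halton and Sobol sequences on real SBSE instances.
    import random

    def halton(index: int, base: int) -> float:
        """Radical-inverse (van der Corput) value in [0, 1)."""
        f, r = 1.0, 0.0
        while index > 0:
            f /= base
            r += f * (index % base)
            index //= base
        return r

    def halton_stream(base):
        i = 0
        while True:
            i += 1
            yield halton(i, base)

    def hill_climb(objective, x, streams, iters=2000, step=0.1):
        best = objective(x)
        for _ in range(iters):
            # neighbour proposal drawn from the supplied "randomness" source
            y = [xi + step * (2 * next(s) - 1) for xi, s in zip(x, streams)]
            fy = objective(y)
            if fy < best:
                x, best = y, fy
        return best

    sphere = lambda v: sum(c * c for c in v)
    prng = [iter(random.random, None)] * 2        # pseudo-random stream
    qrng = [halton_stream(2), halton_stream(3)]   # quasi-random, one base per dim
    print(hill_climb(sphere, [1.0, 1.0], prng))
    print(hill_climb(sphere, [1.0, 1.0], qrng))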
    					
    2014.08.14 Omar Benomar, Houari Sahraoui & Pierre Poulin Detecting Program Execution Phases using Heuristic Search 2014 Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14), Vol. 8636, pp. 16-30, Fortaleza Brazil, 26-29 August   Inproceedings
    Abstract: Understanding a program from its execution traces is extremely difficult because a trace consists of thousands to millions of events, such as method calls, object creation and destruction, etc. Nonetheless, execution traces can provide valuable information, once abstracted from their low-level events. We propose to identify feature-level phases based on events collected from traces of the program execution. We cast our approach as an optimization problem, searching through the dynamic information provided by the program’s execution traces to form a set of phases that minimizes coupling while maximizing cohesion. We applied and evaluated our search algorithms on different execution scenarios of JHotDraw and Pooka.
    BibTeX:
    @inproceedings{BenomarSP14,
      author = {Omar Benomar and Houari Sahraoui and Pierre Poulin},
      title = {Detecting Program Execution Phases using Heuristic Search},
      booktitle = {Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14)},
      publisher = {Springer},
      year = {2014},
      volume = {8636},
      pages = {16-30},
      address = {Fortaleza, Brazil},
      month = {26-29 August},
      doi = {http://dx.doi.org/10.1007/978-3-319-09940-8_2}
    }
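
    Illustrative sketch (Python): the fitness idea (high cohesion within phases, low coupling between them) over a toy trace in which each event is tagged with the class it touches. Both measures below are simplified inventions; the paper's formulation over trace events is richer.
    # Toy fitness: reward phases that reuse few distinct classes internally
    # (cohesion) and share few classes with their neighbours (coupling).
    def fitness(trace, cuts):
        bounds = [0, *sorted(cuts), len(trace)]
        phases = [trace[a:b] for a, b in zip(bounds, bounds[1:]) if a < b]
        sets = [set(p) for p in phases]
        cohesion = sum(1.0 - len(s) / len(p) for s, p in zip(sets, phases))
        coupling = sum(len(a & b) for a, b in zip(sets, sets[1:]))
        return cohesion - coupling

    trace = ["Editor", "Editor", "Figure", "Figure", "Net", "Net", "Net"]
    print(fitness(trace, [4]))   # split before the network phase: higher
    print(fitness(trace, [2]))   # split inside the drawing phase: lower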
    					
    2016.03.08 Nazia Bibi, Ali Ahsan & Zeeshan Anwar Project Resource Allocation Optimization using Search Based Software Engineering - A Framework 2014 Proceedings of the 9th International Conference on Digital Information Management (ICDIM '14), pp. 226-229, Phitsanulok Thailand, 29 September - 1 October   Inproceedings
    Abstract: Human Resource Management is an important area of project management. The concept of Human Resource Allocation is not new, and it can also be used for resource allocation in software projects. Software projects are more critical compared to projects of other disciplines because the success of software projects depends on human resources. In software projects, the Project Manager (PM) allocates and levels resources using Resource Leveling techniques that are already implemented in various project management tools. However, Resource Leveling is a resource smoothing technique, not an optimization technique, and it does not ensure optimized resource allocation. Furthermore, project duration and cost may increase after resource leveling. Therefore, resource leveling is not always a reliable method for resource allocation optimization. An exact solution of the resource optimization problem cannot be determined because resource optimization is an NP-hard problem. Search Based Software Engineering (SBSE) has been used in various studies to solve resource optimization problems. However, existing SBSE implementations for resource allocation optimization do not consider many objectives. Resource allocation optimization is a multi-objective optimization problem, and many important factors such as activity criticality, resource skills, activity precedence and the skills required to perform activities must be addressed. Our research fills this gap and uses multiple objectives for resource allocation optimization, which are to (1) increase resource utilization, (2) decrease project duration and (3) decrease project cost. A framework and mathematical model for the implementation of resource allocation optimization is also proposed.
    BibTeX:
    @inproceedings{BibiAA14,
      author = {Nazia Bibi and Ali Ahsan and Zeeshan Anwar},
      title = {Project Resource Allocation Optimization using Search Based Software Engineering - A Framework},
      booktitle = {Proceedings of the 9th International Conference on Digital Information Management (ICDIM '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {226-229},
      address = {Phitsanulok, Thailand},
      month = {29 September - 1 October},
      doi = {http://dx.doi.org/10.1109/ICDIM.2014.6991431}
    }
    					
    2011.05.19 Paulo M.S. Bueno, Mario Jino & W. Eric Wong Diversity Oriented Test Data Generation using Metaheuristic Search Techniques 2014 Information Sciences, Vol. 259, pp. 490-509   Article Testing and Debugging
    Abstract: We present a new test data generation technique which uses the concept of diversity of test sets as the basis for diversity oriented test data generation (DOTG). With DOTG we translate into an automatic test data generation technique the intuitive belief that increasing the variety, or diversity, of the test data used to test a program can lead to an improvement in the completeness, or quality, of the testing performed. We define the input domain perspective for diversity (DOTG-ID), which considers the distances among the test data in the program input domain to compute a diversity value for test sets. We describe metaheuristics which can be used to automate the generation of test sets for the DOTG-ID testing technique: simulated annealing; a genetic algorithm; and a proposed metaheuristic named simulated repulsion. The effectiveness of DOTG-ID was evaluated using a Monte Carlo simulation, and also by applying the technique to test simple programs and measuring the data-flow coverage and mutation scores achieved. The standard random testing technique was used as a baseline for these evaluations. Results provide an understanding of the potential gains in testing effectiveness of DOTG-ID over random testing and also reveal testing factors which can make DOTG-ID less effective.
    BibTeX:
    @article{BuenoJW14,
      author = {Paulo M.S. Bueno and Mario Jino and W. Eric Wong},
      title = {Diversity Oriented Test Data Generation using Metaheuristic Search Techniques},
      journal = {Information Sciences},
      year = {2014},
      volume = {259},
      pages = {490-509},
      doi = {http://dx.doi.org/10.1016/j.ins.2011.01.025}
    }
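
    Illustrative sketch (Python): one plausible instantiation of an input-domain diversity value, assuming Euclidean distance and a nearest-neighbour formulation; the paper's exact definition may differ.
    import math

    def diversity(test_set):
        """Sum of each test datum's distance to its nearest neighbour."""
        return sum(min(math.dist(p, q) for q in test_set if q is not p)
                   for p in test_set)

    clustered = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1)]
    spread    = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
    print(diversity(clustered) < diversity(spread))  # True: spread set is more diverse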
    					
    2016.03.08 Thelma Elita Colanzi & Silvia Regina Vergilio A Comparative Analysis of Two Multi-objective Evolutionary Algorithms in Product Line Architecture Design Optimization 2014 Proceedings of IEEE 26th International Conference on Tools with Artificial Intelligence (ICTAI '14), pp. 681-688, Limassol Cyprus, 10-12 November   Inproceedings
    Abstract: Product Line Architecture (PLA) design is a multi-objective optimization problem that can be properly solved with search-based algorithms. However, search-based PLA design is an incipient research field. Because of this, works in the field have addressed the main points needed to solve the problem: an adequate representation, specific search operators and suitable fitness functions. Similarly to what happens in the search-based design of traditional software, existing works on search-based PLA design use NSGA-II without evaluating the characteristics of this algorithm, such as its use of the crossover operator. Considering this fact, this paper reports results from a comparative analysis of two algorithms, NSGA-II and PAES, applied to the PLA design problem. PAES was chosen because it implements a different evolution strategy that does not employ crossover. An experimental study was carried out with nine PLAs, and the results attest that NSGA-II performs better than PAES in the PLA design context.
    BibTeX:
    @inproceedings{ColanziV14,
      author = {Thelma Elita Colanzi and Silvia Regina Vergilio},
      title = {A Comparative Analysis of Two Multi-objective Evolutionary Algorithms in Product Line Architecture Design Optimization},
      booktitle = {Proceedings of IEEE 26th International Conference on Tools with Artificial Intelligence (ICTAI '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {681-688},
      address = {Limassol, Cyprus},
      month = {10-12 November},
      doi = {http://dx.doi.org/10.1109/ICTAI.2014.107}
    }
    					
    2016.03.08 Thelma Elita Colanzi & Silvia Regina Vergilio A Feature-Driven Crossover Operator for Product Line Architecture Design Optimization 2014 IEEE 38th Annual Computer Software and Applications Conference (COMPSAC '14), pp. 43-52, Västerås Sweden, 21-25 July   Inproceedings
    Abstract: The Product Line Architecture (PLA) design is a multi-objective optimization problem that can be properly solved in the Search Based Software Engineering (SBSE) field. However, the PLA design has specific characteristics. For example, the PLA is designed in terms of features, and a highly modular PLA is necessary to enable the growth of a software product line. Yet existing search-based design approaches do not consider such needs. To overcome this limitation, this paper introduces a feature-driven crossover operator that aims at improving feature modularization. The proposed operator was applied in an empirical study using the multi-objective evolutionary algorithm NSGA-II. In comparison with another version of NSGA-II that uses only mutation operators, the feature-driven crossover version found a greater diversity of solutions (potential PLA designs), with higher feature-based cohesion, and less feature scattering and tangling.
    BibTeX:
    @inproceedings{ColanziV14b,
      author = {Thelma Elita Colanzi and Silvia Regina Vergilio},
      title = {A Feature-Driven Crossover Operator for Product Line Architecture Design Optimization},
      booktitle = {IEEE 38th Annual Computer Software and Applications Conference (COMPSAC '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {43-52},
      address = {Västerås, Sweden},
      month = {21-25 July},
      doi = {http://dx.doi.org/10.1109/COMPSAC.2014.11}
    }
    					
    2014.08.14 Andrew M. Connor & Amit Shah Resource Allocation using Metaheuristic Search 2014 Proceedings of the 4th International Conference on Computer Science and Information Technology (CCSIT '14), Sydney Australia, 21-22 February   Inproceedings Management
    Abstract: This research is focused on solving problems in the area of software project management using metaheuristic search algorithms and as such is research in the field of search based software engineering. The main aim of this research is to evaluate the performance of different metaheuristic search techniques in resource allocation and scheduling problems that would be typical of software development projects. This paper reports a set of experiments which evaluate the performance of three algorithms, namely simulated annealing, tabu search and genetic algorithms. The experimental results indicate that all of the metaheuristic search techniques can be used to solve problems in resource allocation and scheduling within a software project. Finally, a comparative analysis suggests that overall the genetic algorithm performed better than simulated annealing and tabu search.
    BibTeX:
    @inproceedings{ConnorS14,
      author = {Andrew M. Connor and Amit Shah},
      title = {Resource Allocation using Metaheuristic Search},
      booktitle = {Proceedings of the 4th International Conference on Computer Science and Information Technology (CCSIT '14)},
      year = {2014},
      address = {Sydney, Australia},
      month = {21-22 February},
      doi = {http://dx.doi.org/10.5121/csit.2014.4230}
    }
    					
    2014.08.14 Haitao Dan, Mark Harman, Jens Krinke, Lingbo Li, Alexandru Marginean & Fan Wu Pidgin Crasher: Searching for Minimised Crashing GUI Event Sequences 2014 Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14), Vol. 8636, pp. 253-258, Fortaleza Brazil, 26-29 August   Inproceedings Testing and Debugging
    Abstract: We present a search based testing system that automatically explores the space of all possible GUI event interleavings. Search guides our system to novel crashing sequences using Levenshtein distance and minimises the resulting fault-revealing UI sequences in a post-processing hill climb. We report on the application of our system to the SSBSE 2014 challenge program, Pidgin. Overall, our Pidgin Crasher found 20 different events that caused 2 distinct kinds of bugs, while the event sequences that caused them were reduced by 84% on average using our minimisation post processor.
    BibTeX:
    @inproceedings{DanHKLMW14,
      author = {Haitao Dan and Mark Harman and Jens Krinke and Lingbo Li and Alexandru Marginean and Fan Wu},
      title = {Pidgin Crasher: Searching for Minimised Crashing GUI Event Sequences},
      booktitle = {Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14)},
      publisher = {Springer},
      year = {2014},
      volume = {8636},
      pages = {253-258},
      address = {Fortaleza, Brazil},
      month = {26-29 August},
      doi = {http://dx.doi.org/10.1007/978-3-319-09940-8_20}
    }
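
    Illustrative sketch (Python): both ingredients named in the abstract on toy data; Levenshtein distance, used to steer the search towards novel event sequences, and a greedy shrinking pass that minimises a crashing sequence. The crashes oracle below is a hypothetical stand-in; the real system executes GUI event sequences against Pidgin.
    def levenshtein(a, b):
        """Standard dynamic-programming edit distance."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
            prev = cur
        return prev[-1]

    def minimise(seq, crashes):
        """Drop one event at a time while the shorter sequence still crashes."""
        i = 0
        while i < len(seq):
            shorter = seq[:i] + seq[i + 1:]
            if crashes(shorter):
                seq = shorter      # keep the reduction, retry same index
            else:
                i += 1
        return seq

    crashes = lambda s: "open" in s and "close" in s  # hypothetical oracle
    print(levenshtein("abc", "axc"))                  # 1
    print(minimise(["open", "type", "scroll", "close"], crashes))  # ['open', 'close']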
    					
    2014.05.28 Dharmalingam Jeya Mala, Sabarinathan K. & Balamurugan S. A Hybrid Test Optimization Framework using Memetic Algorithm with Cuckoo Flocking Based Search Approach 2014 Proceedings of the 7th International Workshop on Search-Based Software Testing (SBST '14), Hyderabad India, 2-3 June   Inproceedings
    Abstract: The testing process for industrial-strength applications usually takes considerable time to ensure that all the components are rigorously tested and exhibit failure-free operation upon delivery. This research work proposes a hybrid optimization approach that combines a population-based multi-objective optimization approach, namely a Memetic Algorithm, with Cuckoo Search (MA-CK) to generate an optimal number of test cases that achieve the specified test adequacy criteria based on mutation score and branch coverage. Further, GA, HGA and MA based heuristic algorithms are empirically evaluated, and it is shown that the proposed MA with cuckoo search based optimization algorithm provides an optimal solution.
    BibTeX:
    @inproceedings{DharmalingamKS14,
      author = {Dharmalingam Jeya Mala and Sabarinathan K. and Balamurugan S.},
      title = {A Hybrid Test Optimization Framework using Memetic Algorithm with Cuckoo Flocking Based Search Approach},
      booktitle = {Proceedings of the 7th International Workshop on Search-Based Software Testing (SBST '14)},
      publisher = {ACM},
      year = {2014},
      address = {Hyderabad, India},
      month = {2-3 June},
      doi = {http://dx.doi.org/10.1145/2593833.2593843}
    }
    					
    2014.08.14 J. Andres Diaz-Pace, Matias Nicoletti, Silvia Schiaffino & Santiago Vidal Producing Just Enough Documentation: The Next SAD Version Problem 2014 Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14), Vol. 8636, pp. 46-60, Fortaleza Brazil, 26-29 August   Inproceedings
    Abstract: Software architecture knowledge is an important asset in today’s projects, as it serves to share the main design decisions among the project stakeholders. Architectural knowledge is commonly captured by the Software Architecture Document (SAD), an artifact that is useful but can also be costly to produce and maintain. In practice, the SAD often fails to fulfill its mission of addressing the stakeholders’ information needs, due to factors such as: detailed or high-level contents that do not consider all stakeholders, outdated documentation, or documentation generated late in the lifecycle, among others. To alleviate this problem, we propose a documentation strategy that seeks to balance the stakeholders’ interests in the SAD against the efforts of producing it. Our strategy is cast as an optimization problem called "the next SAD version problem" (NSVP) and several search-based techniques for it are discussed. A preliminary evaluation of our approach has shown its potential for exploring cost-benefit tradeoffs in documentation production.
    BibTeX:
    @inproceedings{Diaz-PaceNSV14,
      author = {J. Andres Diaz-Pace and Matias Nicoletti and Silvia Schiaffino and Santiago Vidal},
      title = {Producing Just Enough Documentation: The Next SAD Version Problem},
      booktitle = {Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14)},
      publisher = {Springer},
      year = {2014},
      volume = {8636},
      pages = {46-60},
      address = {Fortaleza, Brazil},
      month = {26-29 August},
      doi = {http://dx.doi.org/10.1007/978-3-319-09940-8_4}
    }
    					
    2014.09.03 Peter Dinges & Gul Agha Solving Complex Path Conditions through Heuristic Search on Induced Polytopes 2014 Proceedings of the 22nd ACM SIGSOFT Symposium on Foundations of Software Engineering (FSE '14), pp. 425-436, Hong Kong China, 16-21 November   Inproceedings
    Abstract: Test input generators using symbolic and concolic execution must solve path conditions to systematically explore a program and generate high coverage tests. However, path conditions may contain complicated arithmetic constraints that are infeasible to solve: a solver may be unavailable, solving may be computationally intractable, or the constraints may be undecidable. Existing test generators either simplify such constraints with concrete values to make them decidable, or rely on strong but incomplete constraint solvers. Unfortunately, simplification yields coarse approximations whose solutions rarely satisfy the original constraint. Moreover, constraint solvers cannot handle calls to native library methods. We show how a simple combination of linear constraint solving and heuristic search can overcome these limitations. We call this technique Concolic Walk. On a corpus of 11 programs, an instance of our Concolic Walk algorithm using tabu search generates tests with two to three times higher coverage than simplification-based tools while being up to five times as efficient. Furthermore, our algorithm improves the coverage of two state-of-the-art test generators by 21% and 32%. Other concolic and symbolic testing tools could integrate our algorithm to solve complex path conditions without having to sacrifice any of their own capabilities, leading to higher overall coverage.
    BibTeX:
    @inproceedings{DingesA14,
      author = {Peter Dinges and Gul Agha},
      title = {Solving Complex Path Conditions through Heuristic Search on Induced Polytopes},
      booktitle = {Proceedings of the 22nd ACM SIGSOFT Symposium on Foundations of Software Engineering (FSE '14)},
      publisher = {ACM},
      year = {2014},
      pages = {425-436},
      address = {Hong Kong, China},
      month = {16-21 November},
      doi = {http://dx.doi.org/10.1145/2635868.2635889}
    }
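
    Illustrative sketch (Python): a caricature of the core loop, assuming an invented path condition. Candidates are kept inside the region defined by the linear conjuncts (here, by clamping to the bounds) while a violation score for the non-linear conjunct is hill-climbed; the tabu memory used by the paper's algorithm is omitted.
    # Path condition: 0 <= x <= 10 and 0 <= y <= 10 (linear, enforced)
    #                 and x * y == 24               (non-linear, scored)
    import random

    def violation(x, y):
        return abs(x * y - 24)

    def walk(steps=5000):
        x, y = random.uniform(0, 10), random.uniform(0, 10)
        step = 1.0
        for _ in range(steps):
            nx = min(10.0, max(0.0, x + random.uniform(-step, step)))
            ny = min(10.0, max(0.0, y + random.uniform(-step, step)))
            if violation(nx, ny) < violation(x, y):
                x, y = nx, ny
            step *= 0.999                  # slowly focus the walk
            if violation(x, y) < 1e-6:
                break
        return x, y, violation(x, y)

    print(walk())   # a point close to satisfying the whole path condition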
    					
    2014.08.14 Dionysios Efstathiou, Peter McBurney, Steffen Zschaler & Johann Bourcier Surrogate-Assisted Optimisation of Composite Applications in Mobile Ad hoc Networks 2014 Proceedings of the 2014 Conference on Genetic and Evolutionary Computation (GECCO '14), pp. 1239-1246, Vancouver Canada, 12-16 July   Inproceedings
    Abstract: Infrastructure-less mobile ad-hoc networks enable the development of collaborative pervasive applications. Within such dynamic networks, collaboration between devices can be realised through service-orientation by abstracting device resources as services. Recently, a framework for QoS-aware service composition has been introduced which takes into account a spectrum of orchestration patterns, and enables compositions with better QoS than traditional centralised orchestration approaches. In this paper, we focus on the automated exploration of trade-off compositions within the search space defined by this flexible composition model. For the studied problem, the evaluation of the fitness functions guiding the search process is computationally expensive because it either involves a high-fidelity simulation or actually requires calling the composite service. To overcome this limitation, we have developed efficient surrogate models for estimating the QoS metrics of a candidate solution during the search. Our experimental results show that the use of surrogates can produce solutions with good convergence and diversity properties at a much lower computational effort.
    BibTeX:
    @inproceedings{EfstathiouMcZB14,
      author = {Dionysios Efstathiou and Peter McBurney and Steffen Zschaler and Johann Bourcier},
      title = {Surrogate-Assisted Optimisation of Composite Applications in Mobile Ad hoc Networks},
      booktitle = {Proceedings of the 2014 Conference on Genetic and Evolutionary Computation (GECCO '14)},
      publisher = {ACM},
      year = {2014},
      pages = {1239-1246},
      address = {Vancouver, Canada},
      month = {12-16 July},
      doi = {http://dx.doi.org/10.1145/2576768.2598307}
    }
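
    Illustrative sketch (Python): the screening role of a surrogate in a few lines; rank a batch of candidates with a cheap estimator and spend expensive (simulation-backed) evaluations only on a shortlist. The 1-nearest-neighbour surrogate and the toy QoS function are assumptions, not the paper's models.
    import math, random

    archive = []  # (candidate, true_qos) pairs evaluated so far

    def expensive_qos(c):            # stands in for a high-fidelity simulation
        return (c[0] - 0.3) ** 2 + (c[1] - 0.7) ** 2

    def surrogate(c):
        """Cheap estimate: true QoS of the nearest archived candidate."""
        if not archive:
            return 0.0
        return min(archive, key=lambda e: math.dist(e[0], c))[1]

    random.seed(1)
    for _ in range(20):                              # seed the archive
        c = (random.random(), random.random())
        archive.append((c, expensive_qos(c)))

    batch = [(random.random(), random.random()) for _ in range(100)]
    shortlist = sorted(batch, key=surrogate)[:5]     # cheap pre-screening
    best = min(shortlist, key=expensive_qos)         # few true evaluations
    print(best)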
    					
    2014.05.28 Yong-Yi FanJiang & Yang Syu Semantic-based Automatic Service Composition with Functional and Non-functional Requirements in Design Time: A Genetic Algorithm Approach 2014 Information and Software Technology, Vol. 56(3), pp. 352-373, March   Article
    Abstract: Context: In recent years, the composition of ready-made and loosely coupled services into desired systems is a common industrial approach and a widely followed research topic in academia. In the field, the current research trend is to automate this composition; however, each of the existing efforts automates only a component of the entire problem. Therefore, a real automation process that addresses all composition concerns is lacking. Objective: The objective is first to identify the present composition concerns and subsequently to devise a compositional approach that covers all concerns. Ultimately, we conduct a number of experiments to investigate the proposed approach. Method: We identify the current composition concerns by surveying and briefly describing the existing approaches. To include all of the identified concerns, the solution space that must be searched is highly dimensioned. Thus, we adopt a genetic algorithm (GA) due to its ability to solve problems with such characteristics. The proposed GA-based approach is designed with four unusual independent fitness functions. Additionally, experiments are carried out and discussions are presented for verification of the design, including the necessity for and correctness of the independence and priority of the four fitness functions. Results: The case studies demonstrate that our approach can automatically generate the required composite services and considers all identified concerns simultaneously. The results confirm the need for the independence of the fitness functions and also identify a more efficient priority for these functions. Conclusions: In this study, we present an all-inclusive automatic composer that does not require human intervention and effort during the composition process and is designed for users who must address multiple composition concerns simultaneously, including requirements for overall functionality, internally workable dataflow, and non-functional transaction and quality-of-service considerations. Such multiple and complex composition requirements cannot be satisfied by any of the previous single-concern composition approaches.
    BibTeX:
    @article{FanJiangS14,
      author = {Yong-Yi FanJiang and Yang Syu},
      title = {Semantic-based Automatic Service Composition with Functional and Non-functional Requirements in Design Time: A Genetic Algorithm Approach},
      journal = {Information and Software Technology},
      year = {2014},
      volume = {56},
      number = {3},
      pages = {352-373},
      month = {March},
      doi = {http://dx.doi.org/10.1016/j.infsof.2013.12.001}
    }
    					
    2014.08.12 Filomena Ferrucci, Mark Harman & Federica Sarro Search-Based Software Project Management 2014 Chapter 15 in: Ruhe, G. & Wohlin, C. (Eds.), Software Project Management in a Changing World, pp. 373-399   Inbook
    Abstract: Project management presents the manager with a complex set of related optimisation problems. The decisions made can affect the outcome of a project more profoundly than any other activity. In this chapter, we provide an overview of Search-Based Software Project Management, in which search-based software engineering (SBSE) is applied to problems in software project management. We show how SBSE has been used to attack the problems of staffing, scheduling, risk, and effort estimation. SBSE can help to solve the optimisation problems the manager faces, but it can also yield insight. SBSE therefore provides both decision making and decision support. We provide a comprehensive survey of search-based software project management and give directions for the development of this subfield of SBSE.
    BibTeX:
    @inbook{FerrucciHS14,
      author = {Filomena Ferrucci and Mark Harman and Federica Sarro},
      title = {Search-Based Software Project Management},
      booktitle = {Software Project Management in a Changing World},
      publisher = {Springer},
      year = {2014},
      pages = {373-399},
      doi = {http://dx.doi.org/10.1007/978-3-642-55035-5_15}
    }
    					
    2014.05.28 Gordon Fraser & Andrea Arcuri Automated Test Generation for Java Generics 2014 Proceedings of the 6th International Conference on Software Quality. Model-Based Approaches for Advanced Software and Systems Engineering (SWQD '14), pp. 185-198, Vienna Austria, 14-16 January   Inproceedings Testing and Debugging
    Abstract: Software testing research has resulted in effective white-box test generation techniques that can produce unit test suites achieving high code coverage. However, research prototypes usually only cover subsets of the basic programming language features, thus inhibiting practical use and evaluation. One commonly omitted feature is Java’s generics, which have been present in the language since 2004. In Java, a generic class has type parameters and can be instantiated for different types; for example, a collection can be parameterized with the type of values it contains. To enable test generation tools to cover generics, two simple changes are required to existing approaches: First, the test generator needs to use Java’s extended reflection API to retrieve the little information that remains after type erasure. Second, a simple static analysis can identify candidate classes for type parameters of generic classes. The presented techniques are implemented in the EvoSuite test data generation tool and their feasibility is demonstrated with an example.
    BibTeX:
    @inproceedings{FraserA14,
      author = {Gordon Fraser and Andrea Arcuri},
      title = {Automated Test Generation for Java Generics},
      booktitle = {Proceedings of the 6th International Conference on Software Quality. Model-Based Approaches for Advanced Software and Systems Engineering (SWQD '14)},
      publisher = {Springer},
      year = {2014},
      pages = {185-198},
      address = {Vienna, Austria},
      month = {14-16 January},
      doi = {http://dx.doi.org/10.1007/978-3-319-03602-1_12}
    }
    					
    2014.08.14 Richard Fuchshuber & Márcio de Oliveira Barros Improving Heuristics for the Next Release Problem through Landscape Visualization 2014 Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14), Vol. 8636, pp. 222-227, Fortaleza Brazil, 26-29 August   Inproceedings Requirements/Specifications
    Abstract: The selection of the requirements to be included in the next release of a software product is a complex task. Each customer has their needs, but it is usually impossible to fulfill all of them due to constraints such as budget availability. The Next Release Problem (NRP) aims to select the requirements that maximize customers' benefit while minimizing development effort. Visualizing the problem search space from a customer concentration perspective, we observed a recurring behavior in all instances analyzed. This work presents these findings and shows some initial results of a Hill Climbing algorithm modified to take advantage of this pattern. The modified algorithm was able to generate solutions that are statistically better than those generated by the original Hill Climbing.
    BibTeX:
    @inproceedings{FuchshuberB14,
      author = {Richard Fuchshuber and Márcio de Oliveira Barros},
      title = {Improving Heuristics for the Next Release Problem through Landscape Visualization},
      booktitle = {Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14)},
      publisher = {Springer},
      year = {2014},
      volume = {8636},
      pages = {222-227},
      address = {Fortaleza, Brazil},
      month = {26-29 August},
      doi = {http://dx.doi.org/10.1007/978-3-319-09940-8_15}
    }
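
    Illustrative sketch (Python): a baseline single-objective NRP hill climber with invented costs, benefits and budget, for orientation only. The paper's contribution, a modification that exploits the customer-concentration pattern observed in the landscape, is not reproduced here.
    import random

    costs    = [4, 3, 6, 2, 5]
    benefits = [9, 4, 8, 3, 7]
    budget   = 10

    def fitness(sol):
        cost = sum(c for c, s in zip(costs, sol) if s)
        if cost > budget:
            return -1   # infeasible selection
        return sum(b for b, s in zip(benefits, sol) if s)

    def hill_climb(iters=200):
        sol = [0] * len(costs)
        for _ in range(iters):
            i = random.randrange(len(sol))
            neigh = sol[:]
            neigh[i] ^= 1   # flip one requirement in/out of the release
            if fitness(neigh) >= fitness(sol):
                sol = neigh
        return sol, fitness(sol)

    print(hill_climb())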
    					
    2014.09.03 Juan Pablo Galeotti, Gordon Fraser & Andrea Arcuri Extending A Search-based Test Generator with Adaptive Dynamic Symbolic Execution 2014 Proceedings of the 2014 International Symposium on Software Testing and Analysis (ISSTA '14), pp. 421-424, California USA, 21-25 July   Inproceedings Testing and Debugging
    Abstract: Automatic unit test generation aims to support developers by alleviating the burden of test writing. Different techniques have been proposed over the years, each with distinct limitations. To overcome these limitations, we present an extension to the EvoSuite unit test generator that combines two of the most popular techniques for test case generation: Search-Based Software Testing (SBST) and Dynamic Symbolic Execution (DSE). A novel integration of DSE as a step of local improvement in a genetic algorithm results in an adaptive approach, such that the best test generation technique for the problem at hand is favoured, resulting in overall higher code coverage.
    BibTeX:
    @inproceedings{GaleottiFA14,
      author = {Juan Pablo Galeotti and Gordon Fraser and Andrea Arcuri},
      title = {Extending A Search-based Test Generator with Adaptive Dynamic Symbolic Execution},
      booktitle = {Proceedings of the 2014 International Symposium on Software Testing and Analysis (ISSTA '14)},
      publisher = {ACM},
      year = {2014},
      pages = {421-424},
      address = {California, USA},
      month = {21-25 July},
      doi = {http://dx.doi.org/10.1145/2610384.2628049}
    }
    					
    2014.05.28 Gregory Gay, Matt Staats, Michael W. Whalen & Mats P.E. Heimdahl Moving the Goalposts: Coverage Satisfaction Is Not Enough 2014 Proceedings of the 7th International Workshop on Search-Based Software Testing (SBST '14), pp. 19-22, Hyderabad India, 2-3 June   Inproceedings Testing and Debugging
    Abstract: Structural coverage criteria have been proposed to measure the adequacy of testing efforts. Indeed, in some domains—e.g., critical systems areas—structural coverage criteria must be satisfied to achieve certification. The advent of powerful search-based test generation tools has given us the ability to generate test inputs to satisfy these structural coverage criteria. While tempting, recent empirical evidence indicates these tools should be used with caution, as merely achieving high structural coverage is not necessarily indicative of high fault detection ability. In this report, we review some of these findings, and offer recommendations on how the strengths of search-based test generation methods can alleviate these issues.
    BibTeX:
    @inproceedings{GaySWH14,
      author = {Gregory Gay and Matt Staats and Michael W. Whalen and Mats P. E. Heimdahl},
      title = {Moving the Goalposts: Coverage Satisfaction Is Not Enough},
      booktitle = {Proceedings of the 7th International Workshop on Search-Based Software Testing (SBST '14)},
      publisher = {ACM},
      year = {2014},
      pages = {19-22},
      address = {Hyderabad, India},
      month = {2-3 June},
      doi = {http://dx.doi.org/10.1145/2593833.2593837}
    }
    					
    2014.08.14 Shadi Ghaith, Miao Wang, Philip Perry & John Murphy Transaction Profile Estimation of Queueing Network Models for IT Systems using a Search-Based Technique 2014 Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14), Vol. 8636, pp. 234-239, Fortaleza Brazil, 26-29 August   Inproceedings
    Abstract: The software and hardware systems required to deliver modern Web based services are becoming increasingly complex. Management and evolution of these systems require periodic analysis of performance and capacity to maintain quality of service and maximise efficient use of resources. In this work we present a method that uses a repeated local search technique to improve the accuracy of modelling such systems while also reducing the complexity and time required to perform this task. The accuracy of the model derived from the search-based approach is validated by extrapolating the performance to multiple load levels, which enables system capacity and performance to be planned and managed more efficiently.
    BibTeX:
    @inproceedings{GhaithWPM14,
      author = {Shadi Ghaith and Miao Wang and Philip Perry and John Murphy},
      title = {Transaction Profile Estimation of Queueing Network Models for IT Systems using a Search-Based Technique},
      booktitle = {Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14)},
      publisher = {Springer},
      year = {2014},
      volume = {8636},
      pages = {234-239},
      address = {Fortaleza, Brazil},
      month = {26-29 August},
      doi = {http://dx.doi.org/10.1007/978-3-319-09940-8_18}
    }
    					
    2014.09.02 Alberto Goffi, Alessandra Gorla, Andrea Mattavelli, Mauro Pezzè & Paolo Tonella Search-Based Synthesis of Equivalent Method Sequences 2014 Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering (FSE '14), pp. 366-376, Hong Kong China, 16-21 November   Inproceedings Testing and Debugging
    Abstract: Software components are usually redundant, since their interface offers different operations that are equivalent in their functional behavior. Several reliability techniques exploit this redundancy to either detect or tolerate faults in software. Metamorphic testing, for instance, executes pairs of sequences of operations that are expected to produce equivalent results, and identifies faults in case of mismatching outcomes. Some popular fault tolerance and self-healing techniques execute redundant operations in an attempt to avoid failures at runtime. The common assumption of these techniques, though, is that such redundancy is known a priori. This means that the set of operations that are supposed to be equivalent in a given component should be available in the specifications. Unfortunately, inferring this information manually can be expensive and error prone. This paper proposes a search-based technique to synthesize sequences of method invocations that are equivalent to a target method within a finite set of execution scenarios. The experimental results obtained on 47 methods from 7 classes show that the proposed approach correctly identifies equivalent method sequences in the majority of the cases where redundancy was known to exist, with very few false positives.
    BibTeX:
    @inproceedings{GoffiGMPT14,
      author = {Alberto Goffi and Alessandra Gorla and Andrea Mattavelli and Mauro Pezzè and Paolo Tonella},
      title = {Search-Based Synthesis of Equivalent Method Sequences},
      booktitle = {Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering (FSE '14)},
      publisher = {ACM},
      year = {2014},
      pages = {366-376},
      address = {Hong Kong, China},
      month = {16-21 November},
      doi = {http://dx.doi.org/10.1145/2635868.2635888}
    }
    					
    2016.03.08 Aurélio da Silva Grande, Rosiane de Freitas Rodrigues & Arilo Claudio Dias Neto A Framework to Support the Selection of Software Technologies by Search-Based Strategy 2014 Proceedings of IEEE 26th International Conference on Tools with Artificial Intelligence (ICTAI '14), pp. 979-983, Limassol Cyprus, 10-12 November   Inproceedings
    Abstract: This paper presents a framework to instantiate software technology selection approaches by using search techniques. The software technologies selection problem (STSP) is modeled as a Combinatorial Optimization problem, aiming to address different real-world scenarios in Software Engineering. The proposed framework works as a top-level layer over generic optimization frameworks that implement a high number of metaheuristics proposed in the technical literature, such as jMetal and Opt4J. It aims to support software engineers who are not able to use optimization frameworks during a software project due to short deadlines and limited resources or skills. The framework was evaluated in a case study of a complex real-world software engineering scenario. This scenario was modeled as the STSP and some experiments were executed with different metaheuristics using the proposed framework. The results indicate its feasibility as support for the selection of software technologies.
    BibTeX:
    @inproceedings{GrandeFD14,
      author = {Aurélio da Silva Grande and Rosiane de Freitas Rodrigues and Arilo Claudio Dias Neto},
      title = {A Framework to Support the Selection of Software Technologies by Search-Based Strategy},
      booktitle = {Proceedings of IEEE 26th International Conference on Tools with Artificial Intelligence (ICTAI '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {979-983},
      address = {Limassol, Cyprus},
      month = {10-12 November},
      doi = {http://dx.doi.org/10.1109/ICTAI.2014.148}
    }
    					
    2014.08.14 Giovani Guizzo, Thelma Elita Colanzi & Silvia Regina Vergilio A Pattern-Driven Mutation Operator for Search-Based Product Line Architecture Design 2014 Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14), Vol. 8636, pp. 77-91, Fortaleza Brazil, 26-29 August   Inproceedings Design Tools and Techniques
    Abstract: The application of design patterns through mutation operators in search-based design may improve the quality of the architectures produced in the evolution process. However, we did not find in the literature any works applying such patterns in the optimization of Product Line Architecture (PLA) design. Existing works offer manual approaches, which are not search-based, and only apply specific patterns in particular domains. Considering this fact, this paper introduces a meta-model and a mutation operator to allow the application of design patterns in search-based PLA design. The model represents suitable scopes, that is, sets of architectural elements that are suitable to receive a pattern. The mutation operator is used with a multi-objective and evolutionary approach to obtain PLA alternatives. Quantitative and qualitative analysis of empirical results shows an improvement in the quality of the obtained solutions.
    BibTeX:
    @inproceedings{GuizzoCV14,
      author = {Giovani Guizzo and Thelma Elita Colanzi and Silvia Regina Vergilio},
      title = {A Pattern-Driven Mutation Operator for Search-Based Product Line Architecture Design},
      booktitle = {Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14)},
      publisher = {Springer},
      year = {2014},
      volume = {8636},
      pages = {77-91},
      address = {Fortaleza, Brazil},
      month = {26-29 August},
      doi = {http://dx.doi.org/10.1007/978-3-319-09940-8_6}
    }
    					
    2014.05.28 Joachim Hänsel Model Based Test Case Generation with Metaheuristics for Networks of Timed Automata 2014 Proceedings of the 7th International Workshop on Search-Based Software Testing (SBST '14), pp. 31-34, Hyderabad India, 2-3 June   Inproceedings Testing and Debugging
    Abstract: Model-Based Testing, the task of generating test inputs and oracles from a test model, has been successfully applied in the context of safety-critical real time systems. As these systems grow in complexity, test models, designed to reflect the system's behaviour, will grow too. Currently, testers face situations where test models are too complex for present test generators. In this paper, we outline a software tool for the evaluation of the scalability of a combination of approaches for model-based test generation. We chose Networks of Timed Automata (NTA) as the modeling formalism because real-time properties can be specified and the semantics are well-defined. However, the tool input is given as a restricted UML statechart which is internally transformed. We expect this to increase industrial acceptance. The tool will provide the selection, parametrization and generation of a metaheuristic algorithm. The aim is to support test-model-specific generation algorithms. A simulator for NTAs will enable the metaheuristic to search for test goals in the model. For better performance, it will feature advanced parallelisation. Furthermore, input models will be used for search space reduction for even faster test case generation. The proposed approach allows the inclusion of an oracle generator that is able to provide expected outputs; this enables conformance checking between test models and systems under test. We plan to implement the outlined tool to enable test case generation even for models that are beyond the scope of currently available generators.
    BibTeX:
    @inproceedings{Hansel14,
      author = {Joachim Hänsel},
      title = {Model Based Test Case Generation with Metaheuristics for Networks of Timed Automata},
      booktitle = {Proceedings of the 7th International Workshop on Search-Based Software Testing (SBST '14)},
      publisher = {ACM},
      year = {2014},
      pages = {31-34},
      address = {Hyderabad, India},
      month = {2-3 June},
      doi = {http://dx.doi.org/10.1145/2593833.2593840}
    }
    					
    2016.02.02 Saemundur O. Haraldsson & John R. Woodward Automated Design of Algorithms and Genetic Improvement: Contrast and Commonalities 2014 Proceedings of the Companion Publication of the 2014 Annual Conference on Genetic and Evolutionary Computation (GECCO '14), pp. 1373-1380, Vancouver Canada, 12-16 July   Inproceedings
    Abstract: Automated Design of Algorithms (ADA) and Genetic Improvement (GI) are two relatively young fields of research that have been receiving more attention in recent years. Both methodologies can improve programs using evolutionary search methods and successfully produce human competitive programs. ADA and GI are used for improving functional properties such as quality of solution, and non-functional properties, e.g. speed, memory, and energy consumption. Of the two, only GI has been used to fix bugs, probably because it is applied globally on the whole source code while ADA typically replaces a function or a method locally. While GI is applied directly to the source code, ADA works ex-situ, i.e., as a separate process from the program it is improving. Although the methodologies overlap in many ways, they differ on some fundamentals, and for further progress to be made researchers from both disciplines should be aware of each other's work.
    BibTeX:
    @inproceedings{HaraldssonW14,
      author = {Saemundur O. Haraldsson and John R. Woodward},
      title = {Automated Design of Algorithms and Genetic Improvement: Contrast and Commonalities},
      booktitle = {Proceedings of the Companion Publication of the 2014 Annual Conference on Genetic and Evolutionary Computation (GECCO '14)},
      publisher = {ACM},
      year = {2014},
      pages = {1373-1380},
      address = {Vancouver, Canada},
      month = {12-16 July},
      doi = {http://dx.doi.org/10.1145/2598394.2609874}
    }
    					
    2014.08.14 Mark Harman, Syed Islam, Yue Jia, Leandro L. Minku, Federica Sarro & Komsan Srivisut Less is More: Temporal Fault Predictive Performance over Multiple Hadoop Releases 2014 Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14), Vol. 8636, pp. 240-246, Fortaleza Brazil, 26-29 August   Inproceedings
    Abstract: We investigate search based fault prediction over time based on 8 consecutive Hadoop versions, aiming to analyse the impact of chronology on fault prediction performance. Our results confound the assumption, implicit in previous work, that additional information from historical versions improves prediction; though G-mean tends to improve, Recall can be reduced.
    BibTeX:
    @inproceedings{HarmanIJMSS14,
      author = {Mark Harman and Syed Islam and Yue Jia and Leandro L. Minku and Federica Sarro and Komsan Srivisut},
      title = {Less is More: Temporal Fault Predictive Performance over Multiple Hadoop Releases},
      booktitle = {Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14)},
      publisher = {Springer},
      year = {2014},
      volume = {8636},
      pages = {240-246},
      address = {Fortaleza, Brazil},
      month = {26-29 August},
      doi = {http://dx.doi.org/10.1007/978-3-319-09940-8_18}
    }
    					
    2015.02.05 Mark Harman, Yue Jia, Jens Krinke, William B. Langdon, Justyna Petke & Yuanyuan Zhang Search Based Software Engineering for Software Product Line Engineering: A Survey and Directions for Future Work 2014 Proceedings of the 18th International Software Product Line Conference (SPLC '14), pp. 5-18, Florence Italy, 15-19 September   Inproceedings
    Abstract: This paper presents a survey of work on Search Based Software Engineering (SBSE) for Software Product Lines (SPLs). We have attempted to be comprehensive, in the sense that we have sought to include all papers that apply computational search techniques to problems in software product line engineering. Having surveyed the recent explosion in SBSE for SPL research activity, we highlight some directions for future work. We focus on suggestions for the development of recent advances in genetic improvement, showing how these might be exploited by SPL researchers and practitioners: Genetic improvement may grow new products with new functional and non-functional features and graft these into SPLs. It may also merge and parameterise multiple branches to cope with SPL branchmania.
    BibTeX:
    @inproceedings{HarmanJKLPZ14,
      author = {Mark Harman and Yue Jia and Jens Krinke and William B. Langdon and Justyna Petke and Yuanyuan Zhang},
      title = {Search Based Software Engineering for Software Product Line Engineering: A Survey and Directions for Future Work},
      booktitle = {Proceedings of the 18th International Software Product Line Conference (SPLC '14)},
      publisher = {ACM},
      year = {2014},
      pages = {5-18},
      address = {Florence, Italy},
      month = {15-19 September},
      doi = {http://dx.doi.org/10.1145/2648511.2648513}
    }
    					
    2014.08.14 Mark Harman, Yue Jia & William B. Langdon Babel Pidgin: SBSE Can Grow and Graft Entirely New Functionality into a Real World System 2014 Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14), Vol. 8636, pp. 247-252, Fortaleza Brazil, 26-29 August   Inproceedings
    Abstract: Adding new functionality to an existing, large, and perhaps poorly-understood system is a challenge, even for the most competent human programmer. We introduce a ‘grow and graft’ approach to Genetic Improvement (GI) that transplants new functionality into an existing system. We report on the trade offs between varying degrees of human guidance to the GI transplantation process. Using our approach, we successfully grew and transplanted a new ‘Babel Fish’ linguistic translation feature into the Pidgin instant messaging system, creating a genetically improved system we call ‘Babel Pidgin’. This is the first time that SBSE has been used to evolve and transplant entirely novel functionality into an existing system. Our results indicate that our grow and graft approach requires surprisingly little human guidance.
    BibTeX:
    @inproceedings{HarmanJL14,
      author = {Mark Harman and Yue Jia and William B. Langdon},
      title = {Babel Pidgin: SBSE Can Grow and Graft Entirely New Functionality into a Real World System},
      booktitle = {Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14)},
      publisher = {Springer},
      year = {2014},
      volume = {8636},
      pages = {247-252},
      address = {Fortaleza, Brazil},
      month = {26-29 August},
      doi = {http://dx.doi.org/10.1007/978-3-319-09940-8_19}
    }
    					
    2014.05.28 Mark Harman, Yue Jia, William B. Langdon, Justyna Petke, Iman Hemati Moghadam, Shin Yoo & Fan Wu Genetic Improvement for Adaptive Software Engineering (keynote) 2014 Proceedings of the 9th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS '14), pp. 1-4, Hyderabad India, 2-3 June   Inproceedings
    Abstract: This paper presents a brief outline of an approach to online genetic improvement. We argue that existing progress in genetic improvement can be exploited to support adaptivity. We illustrate our proposed approach with a 'dreaming smart device' example that combines online and offline machine learning and optimisation.
    BibTeX:
    @inproceedings{HarmanJLPMYW14,
      author = {Mark Harman and Yue Jia and William B. Langdon and Justyna Petke and Iman Hemati Moghadam and Shin Yoo and Fan Wu},
      title = {Genetic Improvement for Adaptive Software Engineering (keynote)},
      booktitle = {Proceedings of the 9th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS '14)},
      publisher = {ACM},
      year = {2014},
      pages = {1-4},
      address = {Hyderabad, India},
      month = {2-3 June},
      doi = {http://dx.doi.org/10.1145/2593929.2600116}
    }
    					
    2014.11.26 Nikolas Havrikov, Matthias Höschele, Juan Pablo Galeotti & Andreas Zeller XMLMate: Evolutionary XML Test Generation 2014 Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering (FSE '14), pp. 719-722, Hong Kong China, 16-21 November   Inproceedings Testing and Debugging
    Abstract: Generating system inputs satisfying complex constraints is still a challenge for modern test generators. We present XMLMATE, a search-based test generator specially aimed at XML-based systems. XMLMATE leverages program structure, existing XML schemas, and XML inputs to generate, mutate, recombine, and evolve valid XML inputs. Over a set of seven XML-based systems, XMLMATE detected 31 new unique failures in production code, all triggered by system inputs and thus true alarms.
    BibTeX:
    @inproceedings{HavrikovHGZ14,
      author = {Nikolas Havrikov and Matthias Höschele and Juan Pablo Galeotti and Andreas Zeller},
      title = {XMLMate: Evolutionary XML Test Generation},
      booktitle = {Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering (FSE '14)},
      publisher = {ACM},
      year = {2014},
      pages = {719-722},
      address = {Hong Kong, China},
      month = {16-21 November},
      doi = {http://dx.doi.org/10.1145/2635868.2661666}
    }
    					
    2014.08.14 Christopher Henard, Mike Papadakis & Yves Le Traon Mutation-Based Generation of Software Product Line Test Configurations 2014 Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14), Vol. 8636, pp. 92-106, Fortaleza Brazil, 26-29 August   Inproceedings Testing and Debugging
    Abstract: Software Product Lines (SPLs) are families of software products that can be configured and managed through a combination of features. Such products are usually represented with a Feature Model (FM). Testing the entire SPL may not be conceivable due to economical or time constraints and, more simply, because of the large number of potential products. Thus, defining methods for generating test configurations is required, and is now a very active research topic for the testing community. In this context, mutation has recently been advertised as a promising technique. Mutation evaluates the ability of the test suite to detect defective versions of the FM, called mutants. In particular, it has been shown that existing test configurations achieving the mutation criterion correlate with fault detection. Despite the potential benefit of mutation, there is no approach which aims at generating test configurations for SPL with respect to the mutation criterion. In this direction, we introduce a search-based approach which explores the SPL product space to generate product test configurations with the aim of detecting mutants.
    BibTeX:
    @inproceedings{HenardPT14,
      author = {Christopher Henard and Mike Papadakis and Yves Le Traon},
      title = {Mutation-Based Generation of Software Product Line Test Configurations},
      booktitle = {Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14)},
      publisher = {Springer},
      year = {2014},
      volume = {8636},
      pages = {92-106},
      address = {Fortaleza, Brazil},
      month = {26-29 August},
      doi = {http://dx.doi.org/10.1007/978-3-319-09940-8_7}
    }
    					
    2014.02.21 Irman Hermadi, Chris Lokan & R. Sarker Dynamic Stopping Criteria for Search-based Test Data Generation for Path Testing 2014 Information and Software Technology, Vol. 56(4), pp. 395-407, April   Article Testing and Debugging
    Abstract: Context: Evolutionary algorithms have proved to be successful for generating test data for path coverage testing. However, in this approach the set of target paths to be covered may include some that are infeasible, and it is impossible to find test data to cover those paths. Rather than searching indefinitely, or until a fixed limit of generations is reached, it would be desirable to stop searching as soon as it seems likely that all feasible paths have been covered and all remaining uncovered target paths are infeasible. Objective: The objective is to develop criteria to halt the evolutionary test data generation process as soon as it seems not worth continuing, without compromising testing confidence level. Method: Drawing on software reliability growth models as an analogy, this paper proposes and evaluates a method for determining when it is no longer worthwhile to continue searching for test data to cover uncovered target paths. We outline the method, its key parameters, and how it can be used as the basis for different decision rules for early termination of a search. Twenty-one test programs from the SBSE path testing literature are used to evaluate the method. Results: Compared to searching for a standard number of generations, an average of 30–75% of total computation was avoided in test programs with infeasible paths, and no feasible paths were missed due to early termination. The extra computation in programs with no infeasible paths was negligible. Conclusions: The method is effective and efficient. It avoids the need to specify a limit on the number of generations for searching, and it can help to overcome problems caused by infeasible paths in search-based test data generation for path testing.
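    The paper's stopping rule is derived from software reliability growth models; the sketch below is only a simplified stand-in for that idea, stopping once the idle time since the last newly covered path greatly exceeds the gaps observed between earlier discoveries (the factor and minimum gap are illustrative parameters, not the paper's).

    def should_stop(discovery_generations, current_generation, factor=3, min_gap=10):
        """discovery_generations: generations at which new paths were covered."""
        if not discovery_generations:
            return current_generation > factor * min_gap
        gaps = [b - a for a, b in zip(discovery_generations, discovery_generations[1:])]
        longest = max(gaps, default=min_gap)
        idle = current_generation - discovery_generations[-1]
        return idle > factor * max(longest, min_gap)

    print(should_stop([1, 4, 9, 12], current_generation=80))   # True: give up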
    BibTeX:
    @article{HermadiLS14,
      author = {Irman Hermadi and Chris Lokan and R. Sarker},
      title = {Dynamic Stopping Criteria for Search-based Test Data Generation for Path Testing},
      journal = {Information and Software Technology},
      year = {2014},
      volume = {56},
      number = {4},
      pages = {395-407},
      month = {April},
      doi = {http://dx.doi.org/10.1016/j.infsof.2014.01.001}
    }
    					
    2014.05.28 Matthias Höschele, Juan Pablo Galeotti & Andreas Zeller Test Generation across Multiple Layers 2014 Proceedings of the 7th International Workshop on Search-Based Software Testing (SBST '14), pp. 1-4, Hyderabad India, 2-3 June   Inproceedings Testing and Debugging
    Abstract: Complex software systems frequently come in many layers, each realized in a different programming language. This is a challenge for test generation, as the semantics of each layer have to be determined and integrated. An automatic test generator for Java, for instance, is typically unable to deal with the internals of lower level code (such as C-code), which results in lower coverage and fewer test cases of interest. In this paper, we sketch a novel approach to help search-based test generators for Java to achieve better coverage of underlying native code layers. The key idea is to apply test generation to the native layer first, and then to use the inputs to the native test cases as targets for search-based testing of the higher Java layers. We demonstrate our approach on a case study combining KLEE and EVOSUITE.
    BibTeX:
    @inproceedings{HoscheleGZ14,
      author = {Matthias Höschele and Juan Pablo Galeotti and Andreas Zeller},
      title = {Test Generation across Multiple Layers},
      booktitle = {Proceedings of the 7th International Workshop on Search-Based Software Testing (SBST '14)},
      publisher = {ACM},
      year = {2014},
      pages = {1-4},
      address = {Hyderabad, India},
      month = {2-3 June},
      doi = {http://dx.doi.org/10.1145/2593833.2593834}
    }
    					
    2015.12.09 Bahare Hoseini & Saeed Jalili Automatic Test Path Generation from Sequence Diagram using Genetic Algorithm 2014 Proceedings of the 7th International Symposium on Telecommunications (IST '14), pp. 106-111, Tehran Iran, 9-11 September   Inproceedings
    Abstract: Software testing is an important and complicated phase of the software development cycle. The software test process acquires test cases as input for the system under test to evaluate the behavior of the product. If test cases are prepared before coding, they help the developers to check that their code conforms to the specification. White box testing requires a set of predefined test paths to generate test cases, therefore generating a set of reliable test paths is a critical task. The most common approach in white box testing is to generate test paths from source code, but then the generation process must be delayed until the source code is complete. Using the sequence diagram as an input artifact for generating test paths is cost and time efficient, because the test process starts before the implementation phase; furthermore, tester involvement in source code complexity is reduced to a minimum. Test paths are generated from the control flow graph, which is extracted from sequence diagrams. Among all graph-based coverage criteria, prime path coverage subsumes the other graph-based coverage criteria, leading towards complete path coverage. Also, prime path coverage concentrates on visiting all nodes and edges in the control flow graph rather than traversing all existing paths, which results in test effort reduction. A genetic algorithm is applied to minimize the number of test cases required to reach the highest coverage. In this paper, we propose a model to generate all prime paths automatically and to extract minimum paths of the shortest possible length which cover all prime paths by means of a genetic algorithm. The experimental results show the generated paths can easily turn into optimal test paths with the best prime path coverage and the least number of test paths.
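    The selection step described above, covering all prime paths with few, short test paths, can be expressed as a fitness function over a candidate set of paths. A toy version on hypothetical graph data (not the authors' model):

    def is_subpath(sub, path):
        n = len(sub)
        return any(path[i:i + n] == sub for i in range(len(path) - n + 1))

    def fitness(candidate_paths, prime_paths):
        covered = {pp for pp in prime_paths
                   if any(is_subpath(pp, p) for p in candidate_paths)}
        coverage = len(covered) / len(prime_paths)
        cost = sum(len(p) for p in candidate_paths)
        return coverage - 0.001 * cost              # reward coverage, penalise length

    primes = [(1, 2, 4), (1, 3, 4)]
    print(fitness([(0, 1, 2, 4, 5), (0, 1, 3, 4, 5)], primes))   # full coverage: 0.99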
    BibTeX:
    @inproceedings{HoseiniJ14,
      author = {Bahare Hoseini and Saeed Jalili},
      title = {Automatic Test Path Generation from Sequence Diagram using Genetic Algorithm},
      booktitle = {Proceedings of the 7th International Symposium on Telecommunications (IST '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {106-111},
      address = {Tehran, Iran},
      month = {9-11 September},
      doi = {http://dx.doi.org/10.1109/ISTEL.2014.7000678}
    }
    					
    2014.08.14 Vendula Hrubá, Bohuslav Křena, Zdeněk Letko, Hana Pluháčková & Tomáš Vojnar Multi-objective Genetic Optimization for Noise-Based Testing of Concurrent Software 2014 Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14), Vol. 8636, pp. 107-122, Fortaleza Brazil, 26-29 August   Inproceedings Testing and Debugging
    Abstract: Testing of multi-threaded programs is demanding work due to the many possible thread interleavings one should examine. The noise injection technique helps to increase the number of thread interleavings examined during repeated test executions, provided that a suitable setting of noise injection heuristics is used. The problem of finding such a setting, i.e., the so-called test and noise configuration search problem (TNCS problem), is not easy to solve. In this paper, we show how to apply a multi-objective genetic algorithm (MOGA) to the TNCS problem. In particular, we focus on generation of TNCS solutions that cover a high number of distinct interleavings (especially those which are rare) and provide stable results at the same time. To achieve this goal, we study suitable metrics and ways to suppress the effects of non-deterministic thread scheduling on the proposed MOGA-based approach. We also discuss a choice of a concrete MOGA and its parameters suitable for our setting. Finally, we show on a set of benchmark programs that our approach provides better results when compared to the commonly used random approach as well as to the previously proposed use of a single-objective genetic approach.
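    The two objectives described above, many distinct (rare) interleavings and stable results across runs, might be evaluated roughly as follows; run_test is a hypothetical stub standing in for executing the test suite under one noise configuration and reporting the interleavings observed.

    import random
    import statistics

    def run_test(noise_cfg, rng=random):            # hypothetical stub
        return {rng.randrange(noise_cfg["strength"] + 2) for _ in range(5)}

    def evaluate(noise_cfg, repetitions=10):
        runs = [run_test(noise_cfg) for _ in range(repetitions)]
        distinct = len(set().union(*runs))                      # objective 1: maximise
        stability = statistics.pstdev([len(r) for r in runs])   # objective 2: minimise
        return distinct, stability

    print(evaluate({"strength": 3}))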
    BibTeX:
    @inproceedings{HrubaKLPV14,
      author = {Vendula Hrubá and Bohuslav Křena and Zdeněk Letko and Hana Pluháčková and Tomáš Vojnar},
      title = {Multi-objective Genetic Optimization for Noise-Based Testing of Concurrent Software},
      booktitle = {Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14)},
      publisher = {Springer},
      year = {2014},
      volume = {8636},
      pages = {107-122},
      address = {Fortaleza, Brazil},
      month = {26-29 August},
      doi = {http://dx.doi.org/10.1007/978-3-319-09940-8_8}
    }
    					
    2016.03.09 Yen-Ching Hsu, Kuan-Li Peng & Chin-Yu Huang A Study of Applying Severity-weighted Greedy Algorithm to Software Test Case Prioritization During Testing 2014 Proceedings of IEEE International Conference on Industrial Engineering and Engineering Management (IEEM '14), pp. 1086-1090, Bandar Sunway Malaysia, 9-12 December   Inproceedings Testing and Debugging
    Abstract: Regression testing is a very useful technique for software testing. Traditionally, there are several techniques for test case prioritization; two of the most used are the Greedy and Additional Greedy Algorithms (GA and AGA). However, they may not consider fault severity while prioritizing test cases. In this paper, an Enhanced Additional Greedy Algorithm (EAGA) is proposed for test case prioritization. Experiments with eight subject programs are performed to investigate the effects of different techniques under different criteria and fault severities. Experimental results show that the proposed EAGA performs better than the other techniques.
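    Since the EAGA itself is not spelled out in the abstract, the sketch below shows only the underlying idea of an 'additional greedy' prioritisation weighted by fault severity, on hypothetical data: each step picks the test adding the most uncovered severity weight, resetting once everything is covered.

    def prioritise(tests, severity):
        """tests: {test: set of faults it detects}; severity: {fault: weight}."""
        order, remaining = [], dict(tests)
        uncovered = set(severity)
        while remaining and uncovered:
            best = max(remaining,
                       key=lambda t: sum(severity[f] for f in remaining[t] & uncovered))
            order.append(best)
            uncovered -= remaining.pop(best)
            if not uncovered and remaining:         # 'additional' reset step
                uncovered = set(severity)
        return order + list(remaining)

    tests = {"t1": {"f1"}, "t2": {"f2", "f3"}, "t3": {"f1", "f3"}}
    severity = {"f1": 5, "f2": 1, "f3": 2}
    print(prioritise(tests, severity))              # ['t3', 't2', 't1']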
    BibTeX:
    @inproceedings{HsuPH14,
      author = {Yen-Ching Hsu and Kuan-Li Peng and Chin-Yu Huang},
      title = {A Study of Applying Severity-weighted Greedy Algorithm to Software Test Case Prioritization During Testing},
      booktitle = {Proceedings of IEEE International Conference on Industrial Engineering and Engineering Management (IEEM '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {1086-1090},
      address = {Bandar Sunway, Malaysia},
      month = {9-12 December},
      doi = {http://dx.doi.org/10.1109/IEEM.2014.7058806}
    }
    					
    2015.12.09 Ming Huang, Chunlei Zhang & Xu Liang Software Test Cases Generation based on Improved Particle Swarm Optimization 2014 Proceedings of the 2nd International Conference on Information Technology and Electronic Commerce (ICITEC '14), pp. 52-55, Dalian China, 20-21 December   Inproceedings
    Abstract: To improve the performance of particle swarm optimization (PSO) for test case generation, this paper introduces a group self-activity feedback (SAF) operator and a Gaussian mutation (G) of the changing inertia weight. Applying the improved algorithm to software test case generation, experiments show that the single-path fitness function structure and the parallel multi-path fitness calculation yield shorter iteration times than standard PSO on single-path tests, and are more efficient for multi-path test case generation.
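    One concrete reading of 'Gaussian mutation of the inertia weight' is a velocity update whose inertia term is perturbed per step; the parameters below are illustrative defaults, not the paper's.

    import random

    def pso_step(pos, vel, pbest, gbest, rng, w0=0.7, c1=1.5, c2=1.5, sigma=0.1):
        w = max(0.0, rng.gauss(w0, sigma))          # Gaussian-mutated inertia weight
        new_vel = [w * v
                   + c1 * rng.random() * (pb - x)
                   + c2 * rng.random() * (gb - x)
                   for x, v, pb, gb in zip(pos, vel, pbest, gbest)]
        new_pos = [x + v for x, v in zip(pos, new_vel)]
        return new_pos, new_vel

    rng = random.Random(1)
    print(pso_step([0.0, 0.0], [0.1, -0.1], [1.0, 1.0], [2.0, 2.0], rng))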
    BibTeX:
    @inproceedings{HuangZL14,
      author = {Ming Huang and Chunlei Zhang and Xu Liang},
      title = {Software Test Cases Generation based on Improved Particle Swarm Optimization},
      booktitle = {Proceedings of the 2nd International Conference on Information Technology and Electronic Commerce (ICITEC '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {52-55},
      address = {Dalian, China},
      month = {20-21 December},
      doi = {http://dx.doi.org/10.1109/ICITEC.2014.7105570}
    }
    					
    2014.09.03 Atif Aftab Jilani, Muhammad Zohaib Iqbal & Muhammad Uzair Khan A Search Based Test Data Generation Approach for Model Transformations 2014 Proceedings of the 7th International Conference on Theory and Practice of Model Transformations (ICMT '14), pp. 17-24, York UK, 21-22 July   Inproceedings Testing and Debugging
    Abstract: Model transformations are a fundamental part of Model Driven Engineering. Automated testing of model transformation is challenging due to the complexity of generating test models as test data. In the case of model transformations, the test model is an instance of a meta-model. Generating input models manually is a laborious and error prone task. Test cases are typically generated to satisfy a coverage criterion. Test data generation corresponding to various structural testing coverage criteria requires solving a number of predicates. For model transformation, these predicates typically consist of constraints on the source meta-model elements. In this paper, we propose an automated search-based test data generation approach for model transformations. The proposed approach is based on calculating approach level and branch distances to guide the search. For this purpose, we have developed specialized heuristics for calculating branch distances of model transformations. The approach allows test data generation corresponding to various coverage criteria, including statement coverage, branch coverage, and multiple condition/decision coverage. Our approach is generic and can be applied to various model transformation languages. Our developed tool, MOTTER, works with Atlas Transformation Language (ATL) as a proof of concept. We have successfully applied our approach on a well-known case study from ATL Zoo to generate test data.
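    The fitness shape named in the abstract, approach level plus branch distance, is the standard one from search-based testing; a minimal rendering follows (the example predicate is hypothetical, and the real approach computes these values over model-transformation constraints):

    def normalise(d):
        return d / (d + 1.0)                        # maps any distance into [0, 1)

    def fitness(approach_level, branch_distance):
        return approach_level + normalise(branch_distance)

    # Two control dependencies from the target; the nearest uncovered predicate
    # 'x == 10' evaluated with x = 7 gives branch distance |7 - 10| = 3.
    print(fitness(approach_level=2, branch_distance=abs(7 - 10)))   # 2.75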
    BibTeX:
    @inproceedings{JilaniIK14,
      author = {Atif Aftab Jilani and Muhammad Zohaib Iqbal and Muhammad Uzair Khan},
      title = {A Search Based Test Data Generation Approach for Model Transformations},
      booktitle = {Proceedings of the 7th International Conference on Theory and Practice of Model Transformations (ICMT '14)},
      publisher = {Springer},
      year = {2014},
      pages = {17-24},
      address = {York, UK},
      month = {21-22 July},
      doi = {http://dx.doi.org/10.1007/978-3-319-08789-4_2}
    }
    					
    2014.08.14 Nanlin Jin & Xin Yao Heuristic Optimization for Software Project Management with Impacts of Team Efficiency 2014 Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC '14), pp. 3016-3023, Beijing China, 6-11 July   Inproceedings Management
    Abstract: Most of the studies on project scheduling problems assume that every assigned participant or every team of the same number of participants, completes tasks with an equal efficiency, but this is usually not the case for real world problems. This paper presents a more realistic and complex model with extra considerations on team efficiency which are quantitatively measured on employee task assignment. This study demonstrates the impacts of team efficiency in a well-studied software project management problem. Moreover, this study illustrates how a heuristic optimisation method, population-based incremental learning, copes with such added complexity. The experimental results show that the resulting near optimal solutions not only satisfy constraints, but also reflect the impacts of team efficiency. The findings will hopefully motivate future studies on comprehensive understandings of the quality and efficiency of team work.
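    Population-based incremental learning, the heuristic named above, maintains a probability vector from which candidate solutions are sampled and nudges it towards the best sample. A minimal loop over bit-string assignments with a toy fitness (not the paper's scheduling model):

    import random

    def pbil(fitness, n_bits, rng, pop=20, iters=100, lr=0.1):
        p = [0.5] * n_bits                          # probability of each bit being 1
        for _ in range(iters):
            samples = [[rng.random() < pi for pi in p] for _ in range(pop)]
            best = max(samples, key=fitness)
            p = [(1 - lr) * pi + lr * bi for pi, bi in zip(p, best)]
        return p

    rng = random.Random(0)
    print(pbil(lambda bits: sum(bits), n_bits=8, rng=rng))   # drifts towards all ones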
    BibTeX:
    @inproceedings{JinY14,
      author = {Nanlin Jin and Xin Yao},
      title = {Heuristic Optimization for Software Project Management with Impacts of Team Efficiency},
      booktitle = {Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {3016-3023},
      address = {Beijing, China},
      month = {6-11 July},
      doi = {http://dx.doi.org/10.1109/CEC.2014.6900527}
    }
    					
    2014.08.14 Muhammad Rezaul Karim & Guenther Ruhe Bi-Objective Genetic Search for Release Planning in Support of Themes 2014 Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14), Vol. 8636, pp. 123-137, Fortaleza Brazil, 26-29 August   Inproceedings
    Abstract: Release planning is a mandatory part of incremental and iterative software development. For the decision about which features should be implemented next, the values of features need to be balanced with the effort and readiness of their implementation. Traditional planning looks at the sum of the values of individual and potentially isolated features. As an alternative idea, a theme is a meta-functionality which integrates a number of individual features under a joint umbrella. That way, possible value synergies from offering features in conjunction (theme-related) can be utilized. In this paper, we model theme-based release planning as a bi-objective (search-based) optimization problem. Each solution of this optimization problem balances the preference between individual and theme-based planning objectives. We apply a two-stage solution approach. In Phase 1, the existing Non-dominated Sorting Genetic Algorithm-II (NSGA-II) is adapted. Subsequently, the problem of guiding the user in the set of non-dominated solutions is addressed in Phase 2. We propose and explore two alternative ways to select among the potentially large number of Pareto-optimal solutions. The applicability and empirical analysis of the proposed approach is evaluated for two explorative case study projects having 50 and 25 features, respectively, grouped around 8 and 5 themes.
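    A toy rendering of the two competing objectives, plain feature value versus theme-completion value, together with the non-dominated filter used to present trade-off solutions (data and bonus factor are hypothetical):

    def objectives(plan, value, themes, bonus=1.5):
        individual = sum(value[f] for f in plan)
        theme = sum(bonus * sum(value[f] for f in t)
                    for t in themes if t <= plan)   # only fully included themes count
        return individual, theme

    def non_dominated(points):
        return [p for p in points
                if not any(q != p and q[0] >= p[0] and q[1] >= p[1] for q in points)]

    value = {"f1": 3, "f2": 2, "f3": 4}
    themes = [{"f1", "f2"}]
    plans = [{"f1", "f2"}, {"f3"}, {"f1", "f3"}]
    print(non_dominated([objectives(p, value, themes) for p in plans]))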
    BibTeX:
    @inproceedings{KarimR14,
      author = {Muhammad Rezaul Karim and Guenther Ruhe},
      title = {Bi-Objective Genetic Search for Release Planning in Support of Themes},
      booktitle = {Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14)},
      publisher = {Springer},
      year = {2014},
      volume = {8636},
      pages = {123-137},
      address = {Fortaleza, Brazil},
      month = {26-29 August},
      doi = {http://dx.doi.org/10.1007/978-3-319-09940-8_9}
    }
    					
    2014.09.22 Wael Kessentini, Marouane Kessentini, Houari Sahraoui, Slim Bechikh & Ali Ouni A Cooperative Parallel Search-Based Software Engineering Approach for Code-Smells Detection 2014 IEEE Transactions on Software Engineering, Vol. 40(9), pp. 841-861, September   Article
    Abstract: We propose in this paper to consider code-smells detection as a distributed optimization problem. The idea is that different methods are combined in parallel during the optimization process to find a consensus regarding the detection of code-smells. To this end, we used parallel evolutionary algorithms (P-EA) where many evolutionary algorithms with different adaptations (fitness functions, solution representations, and change operators) are executed, in a parallel cooperative manner, to solve a common goal, namely the detection of code-smells. We conducted an empirical evaluation to compare the implementation of our cooperative P-EA approach with random search, two single population-based approaches and two code-smells detection techniques that are not based on meta-heuristic search. The statistical analysis of the obtained results provides evidence that cooperative P-EA is more efficient and effective than state-of-the-art detection approaches: on a benchmark of nine large open source systems, precision and recall scores of more than 85 percent are obtained for a variety of eight different types of code-smells.
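    The consensus idea, flagging a code element only when several independently adapted detectors agree, reduces to a vote; the detectors below are hypothetical stubs, not the paper's evolved detection rules.

    def consensus(detectors, elements, quorum=None):
        quorum = quorum or len(detectors) // 2 + 1
        votes = {e: sum(1 for d in detectors if d(e)) for e in elements}
        return {e for e, v in votes.items() if v >= quorum}

    detectors = [lambda e: len(e) > 8,              # stub metric-based detector
                 lambda e: "_" not in e,            # stub naming-based detector
                 lambda e: e.islower()]             # stub style-based detector
    print(consensus(detectors, ["godclass", "helper_fn", "x"]))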
    BibTeX:
    @article{KessentiniKSBO14,
      author = {Wael Kessentini and Marouane Kessentini and Houari Sahraoui and Slim Bechikh and Ali Ouni},
      title = {A Cooperative Parallel Search-Based Software Engineering Approach for Code-Smells Detection},
      journal = {IEEE Transactions on Software Engineering},
      year = {2014},
      volume = {40},
      number = {9},
      pages = {841-861},
      month = {September},
      doi = {http://dx.doi.org/10.1109/TSE.2014.2331057}
    }
    					
    2014.09.02 Fitsum Meshesha Kifetew, Wei Jin, Roberto Tiella, Alessandro Orso & Paolo Tonella Reproducing Field Failures for Programs with Complex Grammar Based Input 2014 Proceedings of the 7th International Conference on Software Testing, Verification and Validation (ICST '14), pp. 163-172, Cleveland OH USA, 31 March - 4 April   Inproceedings Testing and Debugging
    Abstract: To isolate and fix failures that occur in the field, after deployment, developers must be able to reproduce and investigate such failures in-house. In practice, however, bug reports rarely provide enough information to recreate field failures, thus making in-house debugging an arduous task. This task becomes even more challenging for programs whose input must adhere to a formal specification, such as a grammar. To help developers address this issue, we propose an approach for automatically generating inputs that recreate field failures in-house. Given a faulty program and a field failure for this program, our approach exploits the potential of grammar-guided genetic programming to iteratively find legal inputs that can trigger the observed failure using a limited amount of runtime data collected in the field. When applied to 11 failures of 5 real-world programs, our approach was able to reproduce all but one of the failures while imposing a limited amount of overhead.
    BibTeX:
    @inproceedings{KifetewJTOT14,
      author = {Fitsum Meshesha Kifetew and Wei Jin and Roberto Tiella and Alessandro Orso and Paolo Tonella},
      title = {Reproducing Field Failures for Programs with Complex Grammar Based Input},
      booktitle = {Proceedings of the 7th International Conference on Software Testing, Verification and Validation (ICST '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {163-172},
      address = {Cleveland, OH, USA},
      month = {31 March - 4 April},
      doi = {http://dx.doi.org/10.1109/ICST.2014.29}
    }
    					
    2014.08.14 Fitsum Meshesha Kifetew, Roberto Tiella & Paolo Tonella Combining Stochastic Grammars and Genetic Programming for Coverage Testing at the System Level 2014 Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14), Vol. 8636, pp. 138-152, Fortaleza Brazil, 26-29 August   Inproceedings Testing and Debugging
    Abstract: When tested at the system level, many programs require complex and highly structured inputs, which must typically satisfy some formal grammar. Existing techniques for grammar based testing make use of stochastic grammars that randomly derive test sentences from grammar productions, trying at the same time to avoid unbounded recursion. In this paper, we combine stochastic grammars with genetic programming, so as to take advantage of the guidance provided by a coverage oriented fitness function during the sentence derivation and evolution process. Experimental results show that the combination of stochastic grammars and genetic programming outperforms stochastic grammars alone.
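    A minimal stochastic-grammar generator: productions are drawn by weight with a depth limit against unbounded recursion. A coverage-driven GP layer, as in the paper, would evolve the derivations rather than leaving the weights fixed (the grammar below is a toy example).

    import random

    GRAMMAR = {  # nonterminal -> [(weight, production)]
        "EXPR": [(3, ["NUM"]), (1, ["EXPR", "+", "EXPR"])],
        "NUM":  [(1, ["1"]), (1, ["2"])],
    }

    def derive(symbol, rng, depth=0, max_depth=6):
        if symbol not in GRAMMAR:
            return symbol                           # terminal symbol
        rules = GRAMMAR[symbol]
        if depth >= max_depth:                      # steer to least recursive rule
            rules = [min(rules, key=lambda r: sum(s in GRAMMAR for s in r[1]))]
        weights, prods = zip(*rules)
        prod = rng.choices(prods, weights=weights)[0]
        return "".join(derive(s, rng, depth + 1, max_depth) for s in prod)

    rng = random.Random(4)
    print(derive("EXPR", rng))                      # e.g. "1+2"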
    BibTeX:
    @inproceedings{KifetewTT14,
      author = {Fitsum Meshesha Kifetew and Roberto Tiella and Paolo Tonella},
      title = {Combining Stochastic Grammars and Genetic Programming for Coverage Testing at the System Level},
      booktitle = {Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14)},
      publisher = {Springer},
      year = {2014},
      volume = {8636},
      pages = {138-152},
      address = {Fortaleza, Brazil},
      month = {26-29 August},
      doi = {http://dx.doi.org/10.1007/978-3-319-09940-8_10}
    }
    					
    2014.09.02 Yunho Kim, Zhihong Xu, Moonzoo Kim, Myra B. Cohen & Gregg Rothermel Hybrid Directed Test Suite Augmentation: An Interleaving Framework 2014 Proceedings of the 7th International Conference on Software Testing, Verification and Validation (ICST '14), pp. 263-272, Cleveland OH USA, 31 March - 4 April   Inproceedings Testing and Debugging
    Abstract: Test suite augmentation techniques generate test cases to cover code missed by existing regression test suites. Various augmentation techniques have been proposed, utilizing several test case generation algorithms. Research has shown that different algorithms have different strengths, and that combining them into a single hybrid approach may be cost-effective. In this paper we present a framework for hybrid test suite augmentation that allows test case generation algorithms to be interleaved dynamically and that can easily incorporate new algorithms, interleaving strategies, and choices of other parameters that influence algorithm performance. We empirically study an implementation of this framework in which we use two test case generation algorithms and several algorithm interleavings. Our results show that specific instantiations of our framework can produce augmentation techniques that are more cost-effective than others, and illustrate tradeoffs between instantiations.
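    A bare-bones version of the interleaving loop: two generators take turns spending a small budget against the shared set of uncovered branches. Both generators here are hypothetical stubs; the framework's interest lies in swapping in real concolic and genetic generators and other interleaving strategies.

    import random

    def make_stub(name, hit_rate, rng):             # hypothetical generator stub
        def gen(uncovered, budget):
            hits = sorted(b for b in uncovered if rng.random() < hit_rate)[:budget]
            return [f"{name}-test-{b}" for b in hits], set(hits)
        return gen

    def augment(generators, uncovered, budget=5, rounds=10):
        suite = []
        for _ in range(rounds):
            if not uncovered:
                break
            for gen in generators:                  # round-robin interleaving
                tests, covered = gen(uncovered, budget)
                suite.extend(tests)
                uncovered -= covered
        return suite, uncovered

    rng = random.Random(2)
    gens = [make_stub("concolic", 0.3, rng), make_stub("genetic", 0.2, rng)]
    print(augment(gens, uncovered=set(range(10))))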
    BibTeX:
    @inproceedings{KimXKCR14,
      author = {Yunho Kim and Zhihong Xu and Moonzoo Kim and Myra B. Cohen and Gregg Rothermel},
      title = {Hybrid Directed Test Suite Augmentation: An Interleaving Framework},
      booktitle = {Proceedings of the 7th International Conference on Software Testing, Verification and Validation (ICST '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {263-272},
      address = {Cleveland, OH, USA},
      month = {31 March - 4 April},
      doi = {http://dx.doi.org/10.1109/ICST.2014.39}
    }
    					
    2016.02.17 Kevilienuo Kire & Neha Malhotra Software Testing using Intelligent Technique 2014 International Journal of Computer Applications, Vol. 90(19), pp. 22-25, March   Article Testing and Debugging
    Abstract: This paper proposes a software testing system using Artificial Intelligence techniques. In today's scenario, software testing is a critical issue in software development and maintenance for increasing the quality and reliability of the software. In software testing, regression testing is often performed, and researchers are finding ways to reduce its cost. In this paper, an approach is proposed which draws inspiration from Swarm Intelligence to reduce the test suite for regression testing. This approach strives to find an optimal solution and can contribute considerably to reducing the cost, effort and time of regression testing.
    BibTeX:
    @article{KireM14,
      author = {Kevilienuo Kire and Neha Malhotra},
      title = {Software Testing using Intelligent Technique},
      journal = {International Journal of Computer Applications},
      year = {2014},
      volume = {90},
      number = {19},
      pages = {22-25},
      month = {March},
      doi = {http://dx.doi.org/10.5120/15829-4637}
    }
    					
    2014.08.12 Z.A. Kocsis, G. Neumann, J. Swan, M.G. Epitropakis, A.E.I. Brownlee, S.O. Haraldsson & E. Bowles Repairing and Optimizing Hadoop hashCode Implementations 2014 Proceedings of Symposium on Search-Based Software Engineering (SSBSE '14), pp. 259-264, Fortaleza Brazil, 26-29 August   Inproceedings
    Abstract: We describe how contract violations in Java™ hashCode methods can be repaired using a novel combination of semantics-preserving and generative methods, the latter being achieved via Automatic Improvement Programming. The method described is universally applicable. When applied to the Hadoop platform, it was established that it produces hashCode functions that are at least as good as the original, broken method, as well as those produced by a widely-used alternative method from the 'Apache Commons' library.
    BibTeX:
    @inproceedings{KocsisNSEBHB14,
      author = {Z. A. Kocsis and G. Neumann and J. Swan and M. G. Epitropakis and A. E. I. Brownlee and S. O. Haraldsson and E. Bowles},
      title = {Repairing and Optimizing Hadoop hashCode Implementations},
      booktitle = {Proceedings of Symposium on Search-Based Software Engineering (SSBSE '14)},
      publisher = {Springer},
      year = {2014},
      pages = {259-264},
      address = {Fortaleza, Brazil},
      month = {26-29 August},
      doi = {http://dx.doi.org/10.1007/978-3-319-09940-8_22}
    }
    					
    2014.08.12 Zoltan A. Kocsis & Jerry Swan Asymptotic Genetic Improvement Programming with Type Functors and Catamorphisms 2014 Proceedings of Semantic Methods in Genetic Programming (SMGP) at Parallel Problem Solving from Nature (PPSN XIV), Ljubljana Slovenia   Inproceedings
    BibTeX:
    @inproceedings{KocsisS14,
      author = {Zoltan A. Kocsis and Jerry Swan},
      title = {Asymptotic Genetic Improvement Programming with Type Functors and Catamorphisms},
      booktitle = {Proceedings of Semantic Methods in Genetic Programming (SMGP) at Parallel Problem Solving from Nature (PPSN XIV)},
      year = {2014},
      address = {Ljubljana, Slovenia},
      url = {http://www.cs.put.poznan.pl/kkrawiec/smgp/uploads/Site/Kocsis.pdf}
    }
    					
    2014.11.25 Anton Kotelyanskii & Gregory M. Kapfhammer Parameter Tuning for Search-Based Test-Data Generation Revisited: Support for Previous Results 2014 Proceedings of the 14th International Conference on Quality Software (QSIC '14), pp. 79-84, Allen TX USA, 2-3 October   Inproceedings Testing and Debugging
    Abstract: Although search-based test-data generators, like EvoSuite, efficiently and automatically create effective JUnit test suites for Java classes, these tools are often difficult to configure. Prior work by Arcuri and Fraser revealed that the tuning of EvoSuite with response surface methodology (RSM) yielded a configuration of the test data generator that did not outperform the default configuration. Following the experimental design and protocol described by Arcuri and Fraser, this paper presents the results of a study that lends further support to prior results: like RSM, the EvoSuite configuration identified by the well-known Sequential Parameter Optimization Toolbox (SPOT) failed to significantly outperform the default settings. Although this result is negative, it furnishes further empirical evidence of the challenge associated with tuning a complex search-based test data generator. Moreover, the outcomes of the presented experiments also suggest that EvoSuite's default parameters have been set by experts in the field and are thus suitable for use in future experimental studies and industrial testing efforts.
    BibTeX:
    @inproceedings{KotelyanskiiK14,
      author = {Anton Kotelyanskii and Gregory M. Kapfhammer},
      title = {Parameter Tuning for Search-Based Test-Data Generation Revisited: Support for Previous Results},
      booktitle = {Proceedings of the 14th International Conference on Quality Software (QSIC '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {79-84},
      address = {Allen, TX, USA},
      month = {2-3 October},
      doi = {http://dx.doi.org/10.1109/QSIC.2014.43}
    }
    					
    2016.02.02 William B. Langdon Genetic Improvement of Programs 2014 Proceedings of the 16th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC '14), pp. 14-19, Timisoara Romania, 22-25 September   Inproceedings
    Abstract: Genetic programming can optimise software, including: evolving test benchmarks, generating hyper-heuristics by searching meta-heuristics, generating communication protocols, composing telephony systems and web services, generating improved hashing and C++ heap managers, redundant programming and even automatic bug fixing. Particularly in embedded real-time or mobile systems, there may be many ways to trade off expenses (such as time, memory, energy, power consumption) vs. functionality. Human programmers cannot try them all. Also, the best multi-objective Pareto trade-off may change with time, underlying hardware, network connection or user behaviour. GP may be able to automatically suggest different trade-offs for each new market. Recent results include substantial speed-ups obtained by evolving a new version of a program customised for a special case.
    BibTeX:
    @inproceedings{Langdon14,
      author = {William B. Langdon},
      title = {Genetic Improvement of Programs},
      booktitle = {Proceedings of the 16th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {14-19},
      address = {Timisoara, Romania},
      month = {22-25 September},
      doi = {http://dx.doi.org/10.1109/SYNASC.2014.10}
    }
    					
    2016.02.02 William B. Langdon & Mark Harman Genetically Improved CUDA C++ Software 2014 Proceedings of the 17th European Conference on Genetic Programming (EuroGP '14), pp. 87-99, Granada Spain, 23-25 April   Inproceedings
    Abstract: Genetic Programming (GP) may dramatically increase the performance of software written by domain experts. GP and autotuning are used to optimise and refactor legacy GPGPU C code for modern parallel graphics hardware and software. Speed ups of more than six times on recent nVidia GPU cards are reported compared to the original kernel on the same hardware.
    BibTeX:
    @inproceedings{LangdonH14,
      author = {William B. Langdon and Mark Harman},
      title = {Genetically Improved CUDA C++ Software},
      booktitle = {Proceedings of the 17th European Conference on Genetic Programming (EuroGP '14)},
      publisher = {Springer},
      year = {2014},
      pages = {87-99},
      address = {Granada, Spain},
      month = {23-25 April},
      url = {http://link.springer.com/chapter/10.1007%2F978-3-662-44303-3_8}
    }
    					
    2014.08.14 William B. Langdon, Marc Modat, Justyna Petke & Mark Harman Improving 3D Medical Image Registration CUDA Software with Genetic Programming 2014 Proceedings of the 2014 Conference on Genetic and Evolutionary Computation (GECCO '14), pp. 951-958, Vancouver Canada, 12-16 July   Inproceedings
    Abstract: Genetic Improvement (GI) is shown to optimise, in some cases by more than 35 percent, a critical component of healthcare industry software across a diverse range of six nVidia graphics processing units (GPUs). GP and other search based software engineering techniques can automatically optimise the current rate-limiting CUDA parallel function in the NiftyReg open source C++ project used to align or register high resolution nuclear magnetic resonance (NMRI) and other diagnostic NIfTI images. Future neurosurgery techniques will require hardware acceleration, such as GPGPU, to enable real-time comparison of three-dimensional in-theatre images with earlier patient images and reference data. With millimetre resolution brain scan measurements comprising more than ten million voxels, the modified kernel can process in excess of 3 billion active voxels per second.
    BibTeX:
    @inproceedings{LangdonMPH14,
      author = {William B. Langdon and Marc Modat and Justyna Petke and Mark Harman},
      title = {Improving 3D Medical Image Registration CUDA Software with Genetic Programming},
      booktitle = {Proceedings of the 2014 Conference on Genetic and Evolutionary Computation (GECCO '14)},
      publisher = {ACM},
      year = {2014},
      pages = {951-958},
      address = {Vancouver, Canada},
      month = {12-16 July},
      doi = {http://dx.doi.org/10.1145/2576768.2598244}
    }
    					
    2014.08.14 Lingbo Li, Mark Harman, Emmanuel Letier & Yuanyuan Zhang Robust Next Release Problem: Handling Uncertainty during Optimization 2014 Proceedings of the 2014 Conference on Genetic and Evolutionary Computation (GECCO '14), pp. 1247-1254, Vancouver Canada, 12-16 July   Inproceedings Requirements/Specifications
    Abstract: Uncertainty is inevitable in real world requirement engineering. It has a significant impact on the feasibility of proposed solutions and thus brings risks to the software release plan. This paper proposes a multi-objective optimization technique, augmented with Monte-Carlo Simulation, that optimizes requirement choices for the three objectives of cost, revenue, and uncertainty. The paper reports the results of an empirical study over four data sets derived from a single real world data set. The results show that the robust optimal solutions obtained by our approach are conservative compared to their corresponding optimal solutions produced by traditional Multi-Objective Next Release Problem. We obtain a robustness improvement of at least 18% at a small cost (a maximum 0.0285 shift in the 2D Pareto-front in the unit space). Surprisingly we found that, though a requirement's cost is correlated with inclusion on the Pareto-front, a requirement's expected revenue is not.
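    The Monte-Carlo ingredient can be pictured as estimating how often a requirement selection stays within budget when costs are sampled from assumed distributions (the Gaussian costs and data below are illustrative, not the paper's data sets):

    import random

    def robustness(plan, cost_mean, cost_sd, budget, rng, samples=1000):
        ok = 0
        for _ in range(samples):
            total = sum(rng.gauss(cost_mean[r], cost_sd[r]) for r in plan)
            ok += total <= budget
        return ok / samples         # probability the plan stays within budget

    rng = random.Random(3)
    cost_mean = {"r1": 10, "r2": 20, "r3": 15}
    cost_sd = {"r1": 2, "r2": 5, "r3": 3}
    print(robustness({"r1", "r3"}, cost_mean, cost_sd, budget=30, rng=rng))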
    BibTeX:
    @inproceedings{LiHLZ14,
      author = {Lingbo Li and Mark Harman and Emmanuel Letier and Yuanyuan Zhang},
      title = {Robust Next Release Problem: Handling Uncertainty during Optimization},
      booktitle = {Proceedings of the 2014 Conference on Genetic and Evolutionary Computation (GECCO '14)},
      publisher = {ACM},
      year = {2014},
      pages = {1247-1254},
      address = {Vancouver, Canada},
      month = {12-16 July},
      doi = {http://dx.doi.org/10.1145/2576768.2598334}
    }
    					
    2014.08.14 Lukas Linsbauer, Roberto Erick Lopez-Herrejon & Alexander Egyed Feature Model Synthesis with Genetic Programming 2014 Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14), Vol. 8636, pp. 153-167, Fortaleza Brazil, 26-29 August   Inproceedings
    Abstract: Search-Based Software Engineering (SBSE) has proven successful on several stages of the software development life cycle. It has also been applied to different challenges in the context of Software Product Lines (SPLs) like generating minimal test suites. When reverse engineering SPLs from legacy software an important challenge is the reverse engineering of variability, often expressed in the form of Feature Models (FMs). The synthesis of FMs has been studied with techniques such as Genetic Algorithms. In this paper we explore the use of Genetic Programming for this task. We sketch our general workflow, the GP pipeline employed, and its evolutionary operators. We report our experience in synthesizing feature models from sets of feature combinations for 17 representative feature models, and analyze the results using standard information retrieval metrics.
    BibTeX:
    @inproceedings{LinsbauerLE14,
      author = {Lukas Linsbauer and Roberto Erick Lopez-Herrejon and Alexander Egyed},
      title = {Feature Model Synthesis with Genetic Programming},
      booktitle = {Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14)},
      publisher = {Springer},
      year = {2014},
      volume = {8636},
      pages = {153-167},
      address = {Fortaleza, Brazil},
      month = {26-29 August},
      doi = {http://dx.doi.org/10.1007/978-3-319-09940-8_11}
    }
    					
    2016.02.17 Gautam M Lodha & Rahul S Gaikwad Search Based Software Testing with Genetic using Fitness Function 2014 Proceedings of International Conference on Innovative Applications of Computational Intelligence on Power, Energy and Controls with their Impact on Humanity (CIPECH '14), pp. 159-163, Ghaziabad India, 28-29 November   Inproceedings Testing and Debugging
    Abstract: Software engineering deals with development activities, of which testing is an important part. If test cases can be generated for code, it becomes an easy task to find errors in the program, so white box testing is possible using a search-based algorithm. In white box testing, the code used to develop the software is tested; the system proposed in this article uses a genetic algorithm. Software testing is a branch of software engineering in which testing plays a vital role for any kind of software system. Basically, the intention of testing is to find bugs as early as possible. Software testing is an essential but expensive activity in the software development life cycle, and hence much research effort has been put into the automation of software testing. In software testing, it is of interest to see how well a series of test inputs tests a piece of code with the intention of finding bugs.
    BibTeX:
    @inproceedings{LodhaG14,
      author = {Gautam M Lodha and Rahul S Gaikwad},
      title = {Search Based Software Testing with Genetic using Fitness Function},
      booktitle = {Proceedings of International Conference on Innovative Applications of Computational Intelligence on Power, Energy and Controls with their Impact on Humanity (CIPECH '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {159-163},
      address = {Ghaziabad, India},
      month = {28-29 November},
      doi = {http://dx.doi.org/10.1109/CIPECH.2014.7019065}
    }
    					
    2014.08.14 Roberto E. Lopez-Herrejon & Javier Ferrer A Parallel Evolutionary Algorithm for Prioritized Pairwise Testing of Software Product Lines 2014 Proceedings of the 2014 Conference on Genetic and Evolutionary Computation (GECCO '14), pp. 1255-1262, Vancouver Canada, 12-16 July   Inproceedings Testing and Debugging
    Abstract: Software Product Lines (SPLs) are families of related software systems, which provide different feature combinations. Different SPL testing approaches have been proposed. However, despite the extensive and successful use of evolutionary computation techniques for software testing, their application to SPL testing remains largely unexplored. In this paper we present the Parallel Prioritized product line Genetic Solver (PPGS), a parallel genetic algorithm for the generation of prioritized pairwise testing suites for SPLs. We perform an extensive and comprehensive analysis of PPGS with 235 feature models from a wide range of number of features and products, using 3 different priority assignment schemes and 5 product prioritization selection strategies. We also compare PPGS with the greedy algorithm prioritized-ICPL. Our study reveals that overall PPGS obtains smaller covering arrays with an acceptable performance difference with prioritized-ICPL.
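    The prioritised objective can be read as weighted pairwise coverage: each selected product covers feature pairs, and a pair inherits weight from its features' priorities. A toy evaluation (not PPGS itself):

    from itertools import combinations

    def weighted_pair_coverage(products, priority):
        covered = set().union(*(set(combinations(sorted(p), 2)) for p in products))
        return sum(priority[a] * priority[b] for a, b in covered)

    products = [{"A", "B"}, {"B", "C"}]
    priority = {"A": 3, "B": 1, "C": 2}
    print(weighted_pair_coverage(products, priority))   # 3*1 + 1*2 = 5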
    BibTeX:
    @inproceedings{Lopez-HerrejonF14,
      author = {Roberto E. Lopez-Herrejon and Javier Ferrer},
      title = {A Parallel Evolutionary Algorithm for Prioritized Pairwise Testing of Software Product Lines},
      booktitle = {Proceedings of the 2014 Conference on Genetic and Evolutionary Computation (GECCO '14)},
      publisher = {ACM},
      year = {2014},
      pages = {1255-1262},
      address = {Vancouver, Canada},
      month = {12-16 July},
      doi = {http://dx.doi.org/10.1145/2576768.2598305}
    }
    					
    2014.08.14 Roberto Erick Lopez-Herrejon, Javier Ferrer, Francisco Chicano, Alexander Egyed & Enrique Alba Comparative Analysis of Classical Multi-Objective Evolutionary Algorithms and Seeding Strategies for Pairwise Testing of Software Product Lines 2014 Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC '14), pp. 387-396, Beijing China, 6-11 July   Inproceedings Testing and Debugging
    Abstract: Software Product Lines (SPLs) are families of related software products, each with its own set of feature combinations. Their commonly large number of products poses a unique set of challenges for software testing as it might not be technologically or economically feasible to test all of them individually. SPL pairwise testing aims at selecting a set of products to test such that all possible combinations of two features are covered by at least one selected product. Most approaches for SPL pairwise testing have focused on achieving full coverage of all pairwise feature combinations with the minimum number of products to test. Though useful in many contexts, this single-objective perspective does not reflect the prevailing scenario where software engineers face trade-offs between maximising coverage and minimising the number of products to test. In contrast, our work is the first to propose a classical multi-objective formalisation where both objectives are equally important. We study the application to SPL pairwise testing of four classical multi-objective evolutionary algorithms. We developed three seeding strategies that leverage problem domain knowledge and measured their performance impact on a large and diverse corpus of case studies using two well-known multi-objective quality metrics. Our study identifies performance differences among the algorithms and corroborates that the more domain knowledge leveraged, the better the search results. Our findings enable software engineers to select not just one solution (like single-objective techniques) but instead to select from an array of test suite possibilities the one that best matches the economical and technological constraints of their testing context.
    BibTeX:
    @inproceedings{Lopez-HerrejonFCEA14,
      author = {Roberto Erick Lopez-Herrejon and Javier Ferrer and Francisco Chicano and Alexander Egyed and Enrique Alba},
      title = {Comparative Analysis of Classical Multi-Objective Evolutionary Algorithms and Seeding Strategies for Pairwise Testing of Software Product Lines},
      booktitle = {Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {387-396},
      address = {Beijing, China},
      month = {6-11 July},
      doi = {http://dx.doi.org/10.1109/CEC.2014.6900473}
    }
    					
    2014.05.28 Roberto E. Lopez-Herrejon, Javier Ferrer, Francisco Chicano, Evelyn Nicole Haslinger, Alexander Egyed & Enrique Alba Towards a Benchmark and a Comparison Framework for Combinatorial Interaction Testing of Software Product Lines 2014 (arXiv:1401.5367), January   Techreport Testing and Debugging
    Abstract: As Software Product Lines (SPLs) are becoming a more pervasive development practice, their effective testing is becoming a more important concern. In the past few years many SPL testing approaches have been proposed, among them, are those that support Combinatorial Interaction Testing (CIT) whose premise is to select a group of products where faults, due to feature interactions, are more likely to occur. Many CIT techniques for SPL testing have been put forward; however, no systematic and comprehensive comparison among them has been performed. To achieve such goal two items are important: a common benchmark of feature models, and an adequate comparison framework. In this research-in-progress paper, we propose 19 feature models as the base of a benchmark, which we apply to three different techniques in order to analyze the comparison framework proposed by Perrouin et al. We identify the shortcomings of this framework and elaborate alternatives for further study.
    BibTeX:
    @techreport{Lopez-HerrejonFCHEA14,
      author = {Roberto E. Lopez-Herrejon and Javier Ferrer and Francisco Chicano and Evelyn Nicole Haslinger and Alexander Egyed and Enrique Alba},
      title = {Towards a Benchmark and a Comparison Framework for Combinatorial Interaction Testing of Software Product Lines},
      year = {2014},
      number = {arXiv:1401.5367},
      month = {January},
      url = {http://arxiv.org/pdf/1401.5367v1.pdf}
    }
    					
    2014.05.28 Francisco Luna, David L. González-Álvarez, Francisco Chicano & Miguel A. Vega-Rodríguez The Software Project Scheduling Problem: A Scalability Analysis of Multi-objective Metaheuristics 2014 Applied Soft Computing, Vol. 15, pp. 136-148, February   Article
    Abstract: Computer aided techniques for scheduling software projects are a crucial step in the software development process within the highly competitive software industry. The Software Project Scheduling (SPS) problem relates to the decision of who does what during a software project lifetime, thus involving mainly both people-intensive activities and human resources. Two major conflicting goals arise when scheduling a software project: reducing both its cost and duration. A multi-objective approach is therefore the natural way of facing the SPS problem. As companies are getting involved in larger and larger software projects, there is an actual need of algorithms that are able to deal with the tremendous search spaces imposed. In this paper, we analyze the scalability of eight multi-objective algorithms when they are applied to the SPS problem using instances of increasing size. The algorithms are classical algorithms from the literature (NSGA-II, PAES, and SPEA2) and recent proposals (DEPT, MOCell, MOABC, MO-FA, and GDE3). From the experimentation conducted, the results suggest that PAES is the algorithm with the best scalability features.
    BibTeX:
    @article{LunaGCV14,
      author = {Francisco Luna and David L. González-Álvarez and Francisco Chicano and Miguel A. Vega-Rodríguez},
      title = {The Software Project Scheduling Problem: A Scalability Analysis of Multi-objective Metaheuristics},
      journal = {Applied Soft Computing},
      year = {2014},
      volume = {15},
      pages = {136-148},
      month = {February},
      doi = {http://dx.doi.org/10.1016/j.asoc.2013.10.015}
    }
    					
    2016.03.09 Surendra Mahajan, S.D. Joshi & V. Khanaa Genetic Algorithm Secure Procedures Algorithm to Manage Data Integrity Of Test Case Prioritization Methodology 2014 Proceedings of IEEE Global Conference on Wireless Computing and Networking (GCWCN '14), pp. 208-212, Lonavala India, 22-24 December   Inproceedings
    Abstract: The focus of the present research paper is to manage the data integrity and trustworthiness issues involved in the software testing phase of the software development life cycle. Often, data integrity is left out of the software testing phase and only addressed at software deployment or delivery time. To avoid issues due to this lack of data integration, we developed an algorithm which can track the integration of test case prioritization along with the trustworthiness of test case execution. This paper demonstrates how software testing can be made efficient by managing the data integrity factor to avoid major security issues. One of the main advantages of our approach is that domain-specific semantics can be integrated with data quality test case prioritization, making it possible to discover test feed data quality problems beyond conventional quality measures.
    BibTeX:
    @inproceedings{MahajanJK14,
      author = {Surendra Mahajan and S.D. Joshi and V. Khanaa},
      title = {Genetic Algorithm Secure Procedures Algorithm to Manage Data Integrity Of Test Case Prioritization Methodology},
      booktitle = {Proceedings of IEEE Global Conference on Wireless Computing and Networking (GCWCN '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {208-212},
      address = {Lonavala, India},
      month = {22-24 December},
      doi = {http://dx.doi.org/10.1109/GCWCN.2014.7030880}
    }
    					
    2014.05.28 Sonal Mahajan, Bailan Li & William G.J. Halfond Root Cause Analysis for HTML Presentation Failures using Search-based Techniques 2014 Proceedings of the 7th International Workshop on Search-Based Software Testing (SBST '14), pp. 15-18, Hyderabad India, 2-3 June   Inproceedings Testing and Debugging
    Abstract: Presentation failures in web applications can negatively impact users' perception of the application's quality and its usability. Such failures are challenging to diagnose and correct since the user interfaces of modern web applications are defined by a complex interaction between HTML tags and their visual properties defined by CSS and HTML attributes. In this paper, we introduce a novel approach for automatically identifying the root cause of presentation failures in web applications that uses image processing and search based techniques. In an experiment conducted for assessing the accuracy of our approach, we found that it was able to identify the correct root cause with 100% accuracy.
    BibTeX:
    @inproceedings{MahajanLH14,
      author = {Sonal Mahajan and Bailan Li and William G. J. Halfond},
      title = {Root Cause Analysis for HTML Presentation Failures using Search-based Techniques},
      booktitle = {Proceedings of the 7th International Workshop on Search-Based Software Testing (SBST '14)},
      publisher = {ACM},
      year = {2014},
      pages = {15-18},
      address = {Hyderabad, India},
      month = {2-3 June},
      doi = {http://dx.doi.org/10.1145/2593833.2593836}
    }
    					
    2014.09.03 Jan Malburg & Gordon Fraser Search-based Testing using Constraint-based Mutation 2014 Software: Testing, Verification and Reliability, Vol. 24(6), pp. 472-495, September   Article Testing and Debugging
    Abstract: Many modern automated test generators are based on either metaheuristic search techniques or use constraint solvers. Both approaches have their advantages, but they also have specific drawbacks: Search-based methods may get stuck in local optima and degrade when the search landscape offers no guidance; constraint-based approaches, on the other hand, can only handle certain domains efficiently. This paper describes a method that integrates both techniques and delivers the best of both worlds. On a high-level view, the proposed method uses a genetic algorithm to generate tests, but the twist is that during evolution, a constraint solver is used to ensure that mutated offspring efficiently explores different control flow. Experiments on 20 case study programmes show that on average the combination improves branch coverage by 28% over search-based techniques while reducing the number of tests by 55%, and improves coverage by 13% over constraint-based techniques while reducing the number of tests by 73%.
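    The integration can be sketched as a mutation operator that, instead of perturbing an input at random, negates the final branch predicate on the parent's path and asks a constraint solver for an input taking the other branch. The snippet assumes the z3-solver package and toy integer path conditions; it is a simplified illustration, not the authors' implementation.

    from z3 import Int, Not, Solver, sat

    def constraint_based_mutation(path_conditions):
        s = Solver()
        s.add(*path_conditions[:-1])        # keep the explored path prefix
        s.add(Not(path_conditions[-1]))     # flip the final branch decision
        if s.check() == sat:
            return s.model()                # input exercising the sibling branch
        return None                         # the other branch is infeasible

    x = Int("x")
    # The parent test input followed the path: x > 0, then x < 10.
    print(constraint_based_mutation([x > 0, x < 10]))   # e.g. [x = 10]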
    BibTeX:
    @article{MalburgF14,
      author = {Jan Malburg and Gordon Fraser},
      title = {Search-based Testing using Constraint-based Mutation},
      journal = {Software: Testing, Verification and Reliability},
      year = {2014},
      volume = {24},
      number = {6},
      pages = {472-495},
      month = {September},
      doi = {http://dx.doi.org/10.1002/stvr.1508}
    }
    					
    2014.05.28 Ruchika Malhotra Search Based Techniques for Software Fault Prediction: Current Trends and Future Directions 2014 Proceedings of the 7th International Workshop on Search-Based Software Testing (SBST '14), pp. 35-36, Hyderabad India, 2-3 June   Inproceedings Testing and Debugging
    Abstract: The effective allocation of resources is crucial in the testing phase of the software development life cycle so that the weak areas in the software can be verified and validated efficiently. The prediction of fault prone classes in the early phases of software development can help software developers to focus the limited available resources on those portions of the software that are more prone to faults. Recently, search based techniques have been successfully applied in the software engineering domain. In this study, we analyze the position of search based techniques for use in software fault prediction by collecting relevant studies from the literature which were conducted during the period January 1991 to October 2013. We further summarize current trends by assessing the performance capability of the search based techniques in the existing research and suggest future directions.
    BibTeX:
    @inproceedings{Malhotra14,
      author = {Ruchika Malhotra},
      title = {Search Based Techniques for Software Fault Prediction: Current Trends and Future Directions},
      booktitle = {Proceedings of the 7th International Workshop on Search-Based Software Testing (SBST '14)},
      publisher = {ACM},
      year = {2014},
      pages = {35-36},
      address = {Hyderabad, India},
      month = {2-3 June},
      doi = {http://dx.doi.org/10.1145/2593833.2593842}
    }
    					
    2016.03.08 Ruchika Malhotra, Nakul Pritam & Yogesh Singh On the Applicability of Evolutionary Computation for Software Defect Prediction 2014 Proceedings of International Conference on Advances in Computing, Communications and Informatics (ICACCI '14), New Delhi India, 24-27 September   Inproceedings
    Abstract: Removal of defects is key to ensuring long-term error-free operation of a software system. Although improvements in the software testing process have resulted in better coverage, it is evident that some parts of a software system tend to be more defect prone than others, and identification of these parts can greatly help software practitioners deliver high-quality maintainable software products. A defect prediction model is built by training a learner using software metrics. These models can later be used to predict defective classes in a software system. Many studies have been conducted in the past on predicting defective classes in the early phases of software development. However, evolutionary computation techniques have not yet been explored for predicting defective classes. The nature of evolutionary computation techniques makes them well suited to software engineering problems. In this study we explore the predictive ability of evolutionary computation and hybridized evolutionary computation techniques for defect prediction. This work contributes to the literature by examining the effectiveness of 15 evolutionary computation and hybridized evolutionary computation techniques on 5 datasets obtained from the Apache Software Foundation using the Defect Collection and Reporting System. The results are evaluated in terms of accuracy. We further compare the evolutionary computation techniques using the Friedman ranking. The results suggest that the defect prediction models built using evolutionary computation techniques perform well over all the datasets in terms of prediction accuracy.
    BibTeX:
    @inproceedings{MalhotraPS14,
      author = {Ruchika Malhotra and Nakul Pritam and Yogesh Singh},
      title = {On the Applicability of Evolutionary Computation for Software Defect Prediction},
      booktitle = {Proceedings of International Conference on Advances in Computing, Communications and Informatics (ICACCI '14)},
      publisher = {IEEE},
      year = {2014},
      address = {New Delhi, India},
      month = {24-27 September},
      doi = {http://dx.doi.org/10.1109/ICACCI.2014.6968592}
    }
    					
    2014.09.19 Chengying Mao Generating Test Data for Software Structural Testing Based on Particle Swarm Optimization 2014 Arabian Journal for Science and Engineering, Vol. 39(6), pp. 4593-4607, June   Article Testing and Debugging
    Abstract: Testing is an important way to ensure and improve the quality of a software system. However, it is a time-consuming and labor-intensive activity. In this paper, our main concern is software structural testing, and we propose a search-based test data generation solution. In our framework, the particle swarm optimization (PSO) technique is adopted due to its simplicity and fast convergence speed. For the test data generation problem, the inputs of the program under test are encoded into particles. Once a set of test inputs is produced, coverage information can be collected by the test driver. Meanwhile, the fitness value of branch coverage can be calculated based on such information. Then, the fitness is used by PSO to adjust the search direction. Finally, the test data set with the highest coverage rate is yielded. In addition, eight well-known programs are used for experimental analysis. The results show that the PSO-based approach has a distinct advantage over traditional evolutionary algorithms such as the genetic algorithm and simulated annealing, and also outperforms comprehensive learning PSO in both coverage effectiveness and evolution speed.
    BibTeX:
    @article{Mao14,
      author = {Chengying Mao},
      title = {Generating Test Data for Software Structural Testing Based on Particle Swarm Optimization},
      journal = {Arabian Journal for Science and Engineering},
      year = {2014},
      volume = {39},
      number = {6},
      pages = {4593-4607},
      month = {June},
      doi = {http://dx.doi.org/10.1007/s13369-014-1074-y}
    }
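    As a concrete, hedged illustration of the scheme described above, the sketch below runs a plain PSO over two-dimensional inputs, using a conventional branch-distance fitness for an invented target branch ((a + b == 100) and (a > b)). It is not Mao's implementation, and the program under test, the search bounds and the PSO constants are all assumptions:

      import random

      def fitness(a, b):
          # Branch distance for the invented condition (a + b == 100) and (a > b).
          return abs(a + b - 100) + max(0.0, b - a + 1)

      def pso(n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
          xs = [[random.uniform(-500, 500) for _ in range(2)] for _ in range(n)]
          vs = [[0.0, 0.0] for _ in range(n)]
          pbest = [x[:] for x in xs]
          gbest = min(pbest, key=lambda p: fitness(*p))
          for _ in range(iters):
              for i, x in enumerate(xs):
                  for d in range(2):
                      vs[i][d] = (w * vs[i][d]
                                  + c1 * random.random() * (pbest[i][d] - x[d])
                                  + c2 * random.random() * (gbest[d] - x[d]))
                      x[d] += vs[i][d]
                  if fitness(*x) < fitness(*pbest[i]):
                      pbest[i] = x[:]
              gbest = min(pbest, key=lambda p: fitness(*p))
              if fitness(*gbest) < 1e-6:
                  break                      # branch covered
          return gbest

      print(pso())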
    					
    2014.08.14 Luiz Martins, Ricardo Nobre, Alexandre Delbem, Eduardo Marques & Joao Cardoso A Clustering-Based Approach for Exploring Sequences of Compiler Optimizations 2014 Proceedings of the 2014 IEEE Congress on Evolutionary Computation, pp. 2436-2443, Beijing China, 6-11 July   Inproceedings
    Abstract: In this paper we present a clustering-based selection approach for reducing the number of compilation passes used in the search space during the exploration of optimisations aiming at increasing the performance of a given function and/or code fragment. The basic idea is to identify similarities among functions and to reuse the passes previously explored each time a new function is being compiled. This subset of compiler optimisations is then used by a Design Space Exploration (DSE) process. The identification of similarities is obtained by a data mining method applied to a symbolic code representation that translates the main structures of the source code to a sequence of symbols based on transformation rules. Experiments were performed to evaluate the effectiveness of the proposed approach. The selection of compiler optimisation sequences, considering a set of 49 compilation passes and targeting a Xilinx MicroBlaze processor, was performed aiming at latency improvements for 41 functions from Texas Instruments benchmarks. The results reveal that pass selection based on our clustering method achieves a significant gain in exploration time over the full search space while still achieving important performance speedups.
    BibTeX:
    @inproceedings{MartinsNDMC14,
      author = {Luiz Martins and Ricardo Nobre and Alexandre Delbem and Eduardo Marques and Joao Cardoso},
      title = {A Clustering-Based Approach for Exploring Sequences of Compiler Optimizations},
      booktitle = {Proceedings of the 2014 IEEE Congress on Evolutionary Computation},
      publisher = {IEEE},
      year = {2014},
      pages = {2436-2443},
      address = {Beijing, China},
      month = {6-11 July},
      doi = {http://dx.doi.org/10.1109/CEC.2014.6900634}
    }
    					
    2014.08.14 Hamid Masoud & Saeed Jalili A Clustering-based Model for Class Responsibility Assignment Problem in Object-Oriented Analysis 2014 Journal of Systems and Software, Vol. 93, pp. 110-131, July   Article
    Abstract: Assigning responsibilities to classes is a vital task in object-oriented analysis and design, and it directly affects the maintainability and reusability of software systems. There are many methodologies to help recognize the responsibilities of a system and assign them to classes, but all of them depend greatly on human judgment and decision-making. In this paper, we propose a clustering-based model to solve the class responsibility assignment (CRA) problem. The proposed model employs a novel interactive graph-based method to find inheritance hierarchies, and two novel criteria to determine the appropriate number of classes. It reduces the dependency of CRA on human judgment and provides a decision-making support for CRA in class diagrams. To evaluate the proposed model, we apply three different hierarchical agglomerative clustering algorithms and two different types of similarity measures. By comparing the obtained results of clustering techniques with the models designed by multi-objective genetic algorithm (MOGA), it is revealed that clustering techniques yield promising results.
    BibTeX:
    @article{MasoudJ14,
      author = {Hamid Masoud and Saeed Jalili},
      title = {A Clustering-based Model for Class Responsibility Assignment Problem in Object-Oriented Analysis},
      journal = {Journal of Systems and Software},
      year = {2014},
      volume = {93},
      pages = {110-131},
      month = {July},
      doi = {http://dx.doi.org/10.1016/j.jss.2014.02.053}
    }
    					
    2014.09.22 Leandro L. Minku, Dirk Sudholt & Xin Yao Improved Evolutionary Algorithm Design for the Project Scheduling Problem Based on Runtime Analysis 2014 IEEE Transactions on Software Engineering, Vol. 40(1), pp. 83-102, January   Article
    Abstract: Several variants of evolutionary algorithms (EAs) have been applied to solve the project scheduling problem (PSP), yet their performance highly depends on design choices for the EA. It is still unclear how and why different EAs perform differently. We present the first runtime analysis for the PSP, gaining insights into the performance of EAs on the PSP in general, and on specific instance classes that are easy or hard. Our theoretical analysis has practical implications: based on it, we derive an improved EA design. This includes normalizing employees' dedication for different tasks to ensure they are not working overtime; a fitness function that requires fewer pre-defined parameters and provides a clear gradient towards feasible solutions; and an improved representation and mutation operator. Both our theoretical and empirical results show that our design is very effective. Combining the use of normalization with a population gave the best results in our experiments, and normalization was a key component of the practical effectiveness of the new design. Not only does our paper offer a new and effective algorithm for the PSP, it also provides a rigorous theoretical analysis to explain the efficiency of the algorithm, especially for increasingly large projects.
    BibTeX:
    @article{MinkuSY14,
      author = {Leandro L. Minku and Dirk Sudholt and Xin Yao},
      title = {Improved Evolutionary Algorithm Design for the Project Scheduling Problem Based on Runtime Analysis},
      journal = {IEEE Transactions on Software Engineering},
      year = {2014},
      volume = {40},
      number = {1},
      pages = {83-102},
      month = {January},
      doi = {http://dx.doi.org/10.1109/TSE.2013.52}
    }
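    The normalisation component of the improved design lends itself to a short illustration. In the sketch below, assuming a dedication matrix in which dedication[e][t] is employee e's dedication to task t and all listed tasks run concurrently, any employee whose summed dedication exceeds 1.0 (overtime) is scaled back proportionally; this is a simplified reading of the paper's operator:

      def normalise(dedication):
          # Scale each employee's dedications so their total never exceeds 1.0.
          normalised = []
          for row in dedication:
              total = sum(row)
              normalised.append([d / total for d in row] if total > 1.0 else row)
          return normalised

      # First employee works 140% and is scaled back; second is left alone.
      print(normalise([[0.8, 0.6], [0.3, 0.2]]))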
    					
    2014.03.26 Leandro L. Minku & Xin Yao How to Make Best Use of Cross-Company Data in Software Effort Estimation? 2014 Proceedings of the 36th International Conference on Software Engineering, pp. 446-456, Hyderabad India, 31 May - 7 June   Inproceedings
    Abstract: Previous works using Cross-Company (CC) data for making Within-Company (WC) Software Effort Estimation (SEE) try to use CC data or models directly to provide predictions in the WC context. So, these data or models are only helpful when they match the WC context well. When they do not, a fair amount of WC training data, which are usually expensive to acquire, is still necessary to achieve good performance. We investigate how to make best use of CC data, so that we can reduce the amount of WC data while maintaining or improving performance in comparison to WC SEE models. This is done by proposing a new framework to learn the relationship between CC and WC projects explicitly, allowing CC models to be mapped to the WC context. Such mapped models can be useful even when the CC models themselves do not match the WC context directly. Our study shows that a new approach instantiating this framework is able not only to use substantially less WC data than a corresponding WC model, but also to achieve similar or better performance. This approach can also be used to provide insight into the behaviour of a company in comparison to others.
    BibTeX:
    @inproceedings{MinkuY14,
      author = {Leandro L. Minku and Xin Yao},
      title = {How to Make Best Use of Cross-Company Data in Software Effort Estimation?},
      booktitle = {Proceedings of the 36th International Conference on Software Engineering},
      publisher = {IEEE},
      year = {2014},
      pages = {446-456},
      address = {Hyderabad, India},
      month = {31 May - 7 June},
      doi = {http://dx.doi.org/10.1145/2568225.2568228}
    }
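    One way to picture the mapping idea is the hedged sketch below: a linear relationship between a cross-company model's predictions and within-company actual efforts is fitted from just three WC projects. The cc_model predictor, the KLOC-based inputs and the effort figures are all invented, and the paper's framework is considerably more general:

      def cc_model(size_kloc):
          return 12.0 * size_kloc  # hypothetical CC effort predictor

      def fit_mapping(wc_sizes, wc_efforts):
          # Least-squares fit of wc_effort ~ a * cc_prediction + b.
          xs = [cc_model(s) for s in wc_sizes]
          n = len(xs)
          mx = sum(xs) / n
          my = sum(wc_efforts) / n
          a = sum((x - mx) * (y - my) for x, y in zip(xs, wc_efforts)) / \
              sum((x - mx) ** 2 for x in xs)
          b = my - a * mx
          return lambda size: a * cc_model(size) + b

      mapped = fit_mapping([10, 20, 40], [90, 170, 330])
      print(mapped(30))  # mapped CC prediction for a new 30 KLOC WC project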
    					
    2014.08.14 Mohamed Wiem Mkaouer, Marouane Kessentini, Slim Bechikh, Kalyanmoy Deb & Mel Ó Cinnéide High Dimensional Search-based Software Engineering: Finding Tradeoffs among 15 Objectives for Automating Software Refactoring using NSGA-III 2014 Proceedings of the 2014 Conference on Genetic and Evolutionary Computation (GECCO '14), pp. 1263-1270, Vancouver Canada, 12-16 July   Inproceedings Distribution and Maintenance
    Abstract: There is a growing need for scalable search-based software engineering approaches that address software engineering problems where a large number of objectives are to be optimized. Software refactoring is one of these problems where a refactoring sequence is sought that optimizes several software metrics. Most of the existing refactoring work uses a large set of quality metrics to evaluate the software design after applying refactoring operations, but current search-based software engineering approaches are limited to using a maximum of five metrics. We propose for the first time a scalable search-based software engineering approach based on a newly proposed evolutionary optimization method NSGA-III where there are 15 different objectives to be optimized. In our approach, automated refactoring solutions are evaluated using a set of 15 distinct quality metrics. We evaluated this approach on seven large open source systems and found that, on average, more than 92% of code smells were corrected. Statistical analysis of our experiments over 31 runs shows that NSGA-III performed significantly better than two other many-objective techniques (IBEA and MOEA/D), a multi-objective algorithm (NSGA-II) and two mono-objective approaches, hence demonstrating that our NSGA-III approach represents the new state of the art in fully-automated refactoring.
    BibTeX:
    @inproceedings{MkaouerKBDO14,
      author = {Mohamed Wiem Mkaouer and Marouane Kessentini and Slim Bechikh and Kalyanmoy Deb and Mel Ó Cinnéide},
      title = {High Dimensional Search-based Software Engineering: Finding Tradeoffs among 15 Objectives for Automating Software Refactoring using NSGA-III},
      booktitle = {Proceedings of the 2014 Conference on Genetic and Evolutionary Computation (GECCO '14)},
      publisher = {ACM},
      year = {2014},
      pages = {1263-1270},
      address = {Vancouver, Canada},
      month = {12-16 July},
      doi = {http://dx.doi.org/10.1145/2576768.2598366}
    }
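    The many-objective machinery that such work rests on can be shown in a few lines. The sketch below checks Pareto dominance over a 15-value metric vector (all to be minimised here, by assumption) and extracts the non-dominated front from a random population; NSGA-III's reference-point niching and the actual refactoring quality metrics are omitted:

      import random

      def dominates(a, b):
          # a dominates b: no worse on every objective, strictly better on one.
          return (all(x <= y for x, y in zip(a, b))
                  and any(x < y for x, y in zip(a, b)))

      def non_dominated(solutions):
          return [s for s in solutions
                  if not any(dominates(t, s) for t in solutions if t is not s)]

      # Each solution is a vector of 15 metric values for a refactoring sequence.
      population = [[random.random() for _ in range(15)] for _ in range(50)]
      front = non_dominated(population)
      print(len(front), "non-dominated refactoring solutions")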
    					
    2014.08.14 Mohamed Wiem Mkaouer, Marouane Kessentini, Slim Bechikh & Mel Ó Cinnéide A Robust Multi-objective Approach for Software Refactoring under Uncertainty 2014 Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14), Vol. 8636, pp. 168-183, Fortaleza Brazil, 26-29 August   Inproceedings Distribution and Maintenance
    Abstract: Refactoring large systems involves several sources of uncertainty related to the severity levels of code smells to be corrected and the importance of the classes in which the smells are located. Due to the dynamic nature of software development, these values cannot be accurately determined in practice, leading to refactoring sequences that lack robustness. To address this problem, we introduced a multi-objective robust model, based on NSGA-II, for the software refactoring problem that tries to find the best trade-off between quality and robustness. We evaluated our approach using six open source systems and demonstrated that it is significantly better than state-of-the-art refactoring approaches in terms of robustness in 100% of experiments based on a variety of real-world scenarios. Our suggested refactoring solutions were found to be comparable in terms of quality to those suggested by existing approaches and to carry an acceptable robustness price. Our results also revealed an interesting feature about the trade-off between quality and robustness that demonstrates the practical value of taking robustness into account in software refactoring.
    BibTeX:
    @inproceedings{MkaouerKBO14,
      author = {Mohamed Wiem Mkaouer and Marouane Kessentini and Slim Bechikh and Mel Ó Cinnéide},
      title = {A Robust Multi-objective Approach for Software Refactoring under Uncertainty},
      booktitle = {Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14)},
      publisher = {Springer},
      year = {2014},
      volume = {8636},
      pages = {168-183},
      address = {Fortaleza, Brazil},
      month = {26-29 August},
      doi = {http://dx.doi.org/10.1007/978-3-319-09940-8_12}
    }
    					
    2016.02.17 Martin Monperrus A Critical Review of “Automatic Patch Generation Learned from Human-Written Patches”: Essay on the Problem Statement and the Evaluation of Automatic Software Repair 2014 Proceedings of the 36th International Conference on Software Engineering (ICSE '14), pp. 234-242, Hyderabad India, 31 May - 7 June   Inproceedings
    Abstract: At ICSE'2013, there was the first session ever dedicated to automatic program repair. In this session, Kim et al. presented PAR, a novel template-based approach for fixing Java bugs. We strongly disagree with key points of this paper. Our critical review has two goals. First, we aim at explaining why we disagree with Kim and colleagues and why the reasons behind this disagreement are important for research on automatic software repair in general. Second, we aim at contributing to the field with a clarification of the essential ideas behind automatic software repair. In particular we discuss the main evaluation criteria of automatic software repair: understandability, correctness and completeness. We show that depending on how one sets up the repair scenario, the evaluation goals may be contradictory. Eventually, we discuss the nature of fix acceptability and its relation to the notion of software correctness.
    BibTeX:
    @inproceedings{Monperrus14,
      author = {Martin Monperrus},
      title = {A Critical Review of “Automatic Patch Generation Learned from Human-Written Patches”: Essay on the Problem Statement and the Evaluation of Automatic Software Repair},
      booktitle = {Proceedings of the 36th International Conference on Software Engineering (ICSE '14)},
      publisher = {ACM},
      year = {2014},
      pages = {234-242},
      address = {Hyderabad, India},
      month = {31 May - 7 June},
      doi = {http://dx.doi.org/10.1145/2568225.2568324}
    }
    					
    2015.02.05 Shiva Nejati & Lionel C. Briand Identifying Optimal Trade-offs between CPU Time Usage and Temporal Constraints using Search 2014 Proceedings of the 2014 International Symposium on Software Testing and Analysis (ISSTA '14), pp. 351-361, Bay Area California USA, 21-25 July   Inproceedings
    Abstract: Integration of software from different sources is a critical activity in many embedded systems across most industry sectors. Software integrators are responsible for producing reliable systems that fulfil various functional and performance requirements. In many situations, these requirements inversely impact one another. In particular, embedded system integrators often need to make compromises regarding some of the functional system properties to optimize the use of various resources, such as CPU time. In this paper, motivated by challenges faced by industry, we introduce a multi-objective decision support approach to help balance the minimization of CPU time usage and the satisfaction of temporal constraints in automotive systems. We develop a multi-objective, search-based optimization algorithm, specifically designed to work for large search spaces, to identify optimal trade-off solutions fulfilling these two objectives. We evaluated our algorithm by applying it to a large automotive system. Our results show that our algorithm can find solutions that are very close to the estimated ideal optimal values, and further, it finds significantly better solutions than a random strategy while being faster. Finally, our approach efficiently identifies a large number of diverse solutions, helping domain experts and other stakeholders negotiate the solutions to reach an agreement.
    BibTeX:
    @inproceedings{NejatiB14,
      author = {Shiva Nejati and Lionel C. Briand},
      title = {Identifying Optimal Trade-offs between CPU Time Usage and Temporal Constraints using Search},
      booktitle = {Proceedings of the 2014 International Symposium on Software Testing and Analysis (ISSTA '14)},
      publisher = {ACM},
      year = {2014},
      pages = {351-361},
      address = {Bay Area, California, USA},
      month = {21-25 July},
      doi = {http://dx.doi.org/10.1145/2610384.2610396}
    }
    					
    2014.08.14 Elmahdi Omar, Sudipto Ghosh & Darrell Whitley Comparing Search Techniques for Finding Subtle Higher Order Mutants 2014 Proceedings of the 2014 Conference on Genetic and Evolutionary Computation (GECCO '14), pp. 1271-1278, Vancouver Canada, 12-16 July   Inproceedings Testing and Debugging
    Abstract: Subtle Higher Order Mutants (HOMs) are those HOMs that cannot be killed by existing test suites that kill all First Order Mutants (FOMs) for the program under test. Subtle HOMs simulate complex, real faults, whose behavior cannot be simulated using FOMs. However, due to the coupling effect, subtle HOMs are rare in the exponentially large space of candidate HOMs and they can be costly to find even for small programs. In this paper we propose new search techniques for finding subtle HOMs and extend our prior work with new heuristics and search strategies. We compare the effectiveness of six search techniques applied to Java and AspectJ programs. Our study shows that more subtle HOMs were found when the new heuristics and search strategies were used. The programming language (Java or AspectJ) did not affect the effectiveness of any search technique.
    BibTeX:
    @inproceedings{OmarGW14,
      author = {Elmahdi Omar and Sudipto Ghosh and Darrell Whitley},
      title = {Comparing Search Techniques for Finding Subtle Higher Order Mutants},
      booktitle = {Proceedings of the 2014 Conference on Genetic and Evolutionary Computation (GECCO '14)},
      publisher = {ACM},
      year = {2014},
      pages = {1271-1278},
      address = {Vancouver, Canada},
      month = {12-16 July},
      doi = {http://dx.doi.org/10.1145/2576768.2598286}
    }
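    A toy example makes the notion of a subtle HOM concrete. In the invented program below, the single test kills each first-order mutant, yet the higher-order mutant combining both mutations returns the correct value on that test while still differing from the original elsewhere (try x=1, y=0):

      def original(x, y):
          return x * 2 + y

      def fom_a(x, y):           # '+' mutated to '-'
          return x * 2 - y

      def fom_b(x, y):           # '*' mutated to '+'
          return x + 2 + y

      def hom_ab(x, y):          # both mutations applied
          return x + 2 - y

      tests = [(0, 1)]           # a suite that kills fom_a and fom_b

      def killed(mutant):
          return any(mutant(x, y) != original(x, y) for x, y in tests)

      print(killed(fom_a), killed(fom_b), killed(hom_ab))  # True True False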
    					
    2014.09.22 Elmahdi Omar, Sudipto Ghosh & Darrell Whitley HOMAJ: A Tool for Higher Order Mutation Testing in AspectJ and Java 2014 Proceedings of the 7th International Conference on Software Testing, Verification and Validation Workshops (ICSTW '14), pp. 165-170, Cleveland OH USA, 31 March - 4 April   Inproceedings Testing and Debugging
    Abstract: The availability of automated tool support is an important consideration for software developers before they can incorporate higher order mutation testing into their software development processes. This paper presents HOMAJ, a higher order mutation testing tool for AspectJ and Java. HOMAJ automates the process of generating and evaluating first order mutants (FOMs) and higher order mutants (HOMs). In particular, HOMAJ can be used to generate subtle HOMs, which are HOMs that cannot be killed by an existing test set that kills all the FOMs. Subtle HOMs can be valuable for improving test effectiveness because they can simulate complex and non-trivial faults that cannot be simulated with the use of traditional FOMs. HOMAJ implements a number of different techniques for generating subtle HOMs, including several search-based software engineering techniques, enumeration search, and random search. HOMAJ is designed in a modular way to make it easy to incorporate a new search strategy. In this paper we demonstrate the use of HOMAJ to evaluate the implemented techniques.
    BibTeX:
    @inproceedings{OmarGW14b,
      author = {Elmahdi Omar and Sudipto Ghosh and Darrell Whitley},
      title = {HOMAJ: A Tool for Higher Order Mutation Testing in AspectJ and Java},
      booktitle = {Proceedings of the 7th International Conference on Software Testing, Verification and Validation Workshops (ICSTW '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {165-170},
      address = {Cleveland, OH, USA},
      month = {31 March - 4 April},
      doi = {http://dx.doi.org/10.1109/ICSTW.2014.19}
    }
    					
    2014.08.14 Michael O'Neill, Miguel Nicolau & Alexandros Agapitos Experiments in Program Synthesis with Grammatical Evolution: A Focus on Integer Sorting 2014 Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC '14), pp. 1504-1511, Beijing China, 6-11 July   Inproceedings
    Abstract: We present the results of a series of investigations where we apply a form of grammar-based genetic programming to the problem of program synthesis in an attempt to evolve an Integer Sorting algorithm. The results confirm earlier research in the field on the difficulty of the problem given a primitive set of functions and terminals. The inclusion of a swap(i,j) function in combination with a nested for loop in the grammar enabled a successful solution to be found in every run. We suggest some future research directions to overcome the challenge of evolving sorting algorithms from primitive functions and terminals.
    BibTeX:
    @inproceedings{ONeillNA14,
      author = {Michael O'Neill and Miguel Nicolau and Alexandros Agapitos},
      title = {Experiments in Program Synthesis with Grammatical Evolution: A Focus on Integer Sorting},
      booktitle = {Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {1504-1511},
      address = {Beijing, China},
      month = {6-11 July},
      doi = {http://dx.doi.org/10.1109/CEC.2014.6900578}
    }
    					
    2016.03.08 Daniela C.C. Peixoto, Geraldo R. Mateus & Rodolfo F. Resende The Issues of Solving Staffing and Scheduling Problems in Software Development Projects 2014 Proceedings of IEEE 38th Annual Computer Software and Applications Conference (COMPSAC '14), pp. 1-10, Västerås Sweden, 21-25 July   Inproceedings
    Abstract: Search-Based Software Engineering (SBSE) applies search-based optimization techniques to solve complex Software Engineering problems. In recent years there has been a dramatic increase in the number of SBSE applications in areas such as Software Testing, Requirements Engineering, and Project Planning. Our focus is on the analysis of the literature in Project Planning, specifically the research conducted in software project scheduling and resource allocation. SBSE project scheduling and resource allocation solutions basically use optimization algorithms. Considering the results of a previous Systematic Literature Review, in this work we analyze the issues of adopting these optimization algorithms in what are considered typical settings found in software development organizations. We found little evidence that the expectations of software development organizations are being met.
    BibTeX:
    @inproceedings{PeixotoMR14,
      author = {Daniela C. C. Peixoto and Geraldo R. Mateus and Rodolfo F. Resende},
      title = {The Issues of Solving Staffing and Scheduling Problems in Software Development Projects},
      booktitle = {Proceedings of IEEE 38th Annual Computer Software and Applications Conference (COMPSAC '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {1-10},
      address = {Västerås, Sweden},
      month = {21-25 July},
      doi = {http://dx.doi.org/10.1109/COMPSAC.2014.96}
    }
    					
    2014.08.12 Justyna Petke, Mark Harman, William B. Langdon & Westley Weimer Using Genetic Improvement & Code Transplants to Specialise a C++ Program to a Problem Class 2014 Proceedings of the 17th European Conference on Genetic Programming (EuroGP '14), pp. 137-149, Granada Spain, 23-25 April   Inproceedings
    Abstract: Genetic Improvement (GI) is a form of Genetic Programming that improves an existing program. We use GI to evolve a faster version of a C++ program, a Boolean satisfiability (SAT) solver called MiniSAT, specialising it for a particular problem class, namely Combinatorial Interaction Testing (CIT), using automated code transplantation. Our GI-evolved solver achieves an overall 17% improvement, making it comparable with average expert human performance. Additionally, this automatically evolved solver is faster than any of the human-improved solvers for the CIT problem.
    BibTeX:
    @inproceedings{PetkeHLW14,
      author = {Justyna Petke and Mark Harman and William B. Langdon and Westley Weimer},
      title = {Using Genetic Improvement & Code Transplants to Specialise a C++ Program to a Problem Class},
      booktitle = {Proceedings of the 17th European Conference on Genetic Programming (EuroGP '14)},
      publisher = {Springer},
      year = {2014},
      pages = {137-149},
      address = {Granada, Spain},
      month = {23-25 April},
      doi = {http://dx.doi.org/10.1007/978-3-662-44303-3_12}
    }
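    The genetic-improvement loop itself is simple enough to sketch, with heavy caveats: real GI, as in the paper, edits C++ source, recompiles and benchmarks, whereas the stand-in below deletes lines from a tiny list of Python statements, keeps only variants that still pass the (single, invented) test, and prefers faster ones. Every identifier here is made up for illustration:

      import random, time

      SOURCE = [
          "total = 0",
          "for v in data: total += v",
          "unused = sorted(data)",   # redundant work the search can remove
          "result = total",
      ]

      def run(lines, data):
          env = {"data": data}
          start = time.perf_counter()
          exec("\n".join(lines), env)
          return env.get("result"), time.perf_counter() - start

      def passes(lines):
          expected, _ = run(SOURCE, list(range(100)))
          try:
              actual, _ = run(lines, list(range(100)))
          except Exception:
              return False
          return actual == expected

      def mutate(lines):
          lines = lines[:]
          del lines[random.randrange(len(lines))]   # single-line deletion edit
          return lines

      best = SOURCE
      for _ in range(50):
          variant = mutate(best)
          if variant and passes(variant):
              _, t_new = run(variant, list(range(5000)))
              _, t_old = run(best, list(range(5000)))
              if t_new < t_old:
                  best = variant
      print(best)   # typically ends without the redundant sorted() line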
    					
    2014.08.14 Simon Poulding & Robert Feldt Generating Structured Test Data with Specific Properties using Nested Monte-Carlo Search 2014 Proceedings of the 2014 Conference on Genetic and Evolutionary Computation (GECCO '14), pp. 1279-1286, Vancouver Canada, 12-16 July   Inproceedings Testing and Debugging
    Abstract: Software acting on complex data structures can be challenging to test: it is difficult to generate diverse test data that satisfies structural constraints while simultaneously exhibiting properties, such as a particular size, that the test engineer believes will be effective in detecting faults. In our previous work we introduced GödelTest, a framework for generating such data structures using non-deterministic programs, and combined it with Differential Evolution to optimize the generation process. Monte-Carlo Tree Search (MCTS) is a search technique that has shown great success in playing games that can be represented as a sequence of decisions. In this paper we apply Nested Monte-Carlo Search, a single-player variant of MCTS, to the sequence of decisions made by the generating programs used by GödelTest, and show that this combination can efficiently generate random data structures which exhibit the specific properties that the test engineer requires. We compare the results to Boltzmann sampling, an analytical approach to generating random combinatorial data structures.
    BibTeX:
    @inproceedings{PouldingF14,
      author = {Simon Poulding and Robert Feldt},
      title = {Generating Structured Test Data with Specific Properties using Nested Monte-Carlo Search},
      booktitle = {Proceedings of the 2014 Conference on Genetic and Evolutionary Computation (GECCO '14)},
      publisher = {ACM},
      year = {2014},
      pages = {1279-1286},
      address = {Vancouver, Canada},
      month = {12-16 July},
      doi = {http://dx.doi.org/10.1145/2576768.2598339}
    }
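    A compact sketch of the search component, under invented assumptions: data generation is reduced to a sequence of 20 binary decisions that derive a bracket structure, scored by nesting depth (with unbalanced strings rejected). The function below is a simplified level-1 nested search that estimates each decision with a single random playout; real NMCS also memorises the best playout found, and GödelTest's generators are far richer:

      import random

      LENGTH = 20

      def score(seq):
          # Depth of the bracket structure derived from the decisions; -1 if
          # the string ever closes too early or does not end balanced.
          depth, best = 0, 0
          for open_bracket in seq:
              depth += 1 if open_bracket else -1
              if depth < 0:
                  return -1
              best = max(best, depth)
          return best if depth == 0 else -1

      def playout(prefix):
          seq = prefix + [random.random() < 0.5
                          for _ in range(LENGTH - len(prefix))]
          return score(seq)

      def nmcs_level1():
          prefix = []
          while len(prefix) < LENGTH:
              # Estimate each decision with one random playout, keep the best.
              best_choice = max((True, False),
                                key=lambda c: playout(prefix + [c]))
              prefix.append(best_choice)
          return prefix, score(prefix)

      print(nmcs_level1())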
    					
    2014.11.25 Simon Poulding & Hélène Waeselynck Adding Contextual Guidance to the Automated Search for Probabilistic Test Profiles 2014 Proceedings of the 7th International Conference on Software Testing, Verification and Validation (ICST '14), pp. 293-302, Cleveland OH USA, March 31 - April 4   Inproceedings Testing and Debugging
    Abstract: Statistical testing is a probabilistic approach to test data generation that has been demonstrated to be very effective at revealing faults. Its premise is to compensate for the imperfect connection between coverage criteria and the faults to be revealed by exercising each coverage element several times with different random data. The cornerstone of the approach is the often complex task of determining a suitable input profile, and recent work has shown that automated metaheuristic search can be a practical method of synthesising such profiles. The starting point of this paper is the hypothesis that, for some software, the existing grammar-based representation used by the search algorithm fails to capture important relationships between input arguments and this can limit the fault-revealing power of the synthesised profiles. We provide evidence in support of this hypothesis, and propose a solution in which the user provides some basic contextual knowledge to guide the search. Empirical results for two case studies are promising: knowledge gained by a very straightforward review of the software-under-test is sufficient to dramatically increase the efficacy of the profiles synthesised by search.
    BibTeX:
    @inproceedings{PouldingW14,
      author = {Simon Poulding and Hélène Waeselynck},
      title = {Adding Contextual Guidance to the Automated Search for Probabilistic Test Profiles},
      booktitle = {Proceedings of the 7th International Conference on Software Testing, Verification and Validation (ICST '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {293-302},
      address = {Cleveland, OH, USA},
      month = {March 31 - April 4},
      doi = {http://dx.doi.org/10.1109/ICST.2014.42}
    }
    					
    2014.05.30 Yuhua Qi, Xiaoguang Mao, Yan Lei, Ziying Dai & Chengsong Wang The Strength of Random Search on Automated Program Repair 2014 Proceedings of the 36th International Conference on Software Engineering (ICSE '14), pp. 254-265, Hyderabad India, 31 May - 7 June   Inproceedings Testing and Debugging
    Abstract: Automated program repair has recently received considerable attention, and many techniques in this research area have been proposed. Among them, two genetic-programming-based techniques, GenProg and Par, have shown promising results. In particular, GenProg has been used as the baseline technique to check the repair effectiveness of new techniques in much of the literature. Although GenProg and Par have shown their strong ability to fix real-life bugs in nontrivial programs, to what extent GenProg and Par benefit from genetic programming, which they use to guide the patch search process, is still unknown. To address this question, we present a new automated repair technique using random search, which is commonly considered much simpler than genetic programming, and implement a prototype tool called RSRepair. Experiments on 7 programs with 24 versions shipping with real-life bugs suggest that RSRepair, in most cases (23/24), outperforms GenProg in terms of both repair effectiveness (requiring fewer patch trials) and efficiency (requiring fewer test case executions), demonstrating the strength of random search relative to genetic programming. Based on these experimental results, we suggest that every proposed technique using an optimization algorithm should check its effectiveness by comparing it with random search.
    BibTeX:
    @inproceedings{QiMLDW14,
      author = {Yuhua Qi and Xiaoguang Mao and Yan Lei and Ziying Dai and Chengsong Wang},
      title = {The Strength of Random Search on Automated Program Repair},
      booktitle = {Proceedings of the 36th International Conference on Software Engineering (ICSE '14)},
      publisher = {ACM},
      year = {2014},
      pages = {254-265},
      address = {Hyderabad, India},
      month = {31 May - 7 June},
      doi = {http://dx.doi.org/10.1145/2568225.2568254}
    }
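    Stripped to its essentials, repair by random search needs only a defect, an edit pool and a test suite, as in the invented example below (a faulty absolute-value function): candidate patches are drawn uniformly at random and the first one passing all tests is reported. This is the spirit of RSRepair rather than its implementation, which edits C programs at scale:

      import random

      def buggy_abs(x):
          return x if x > 0 else x       # defect: should negate when x <= 0

      TESTS = [(-3, 3), (0, 0), (5, 5)]

      EDIT_POOL = [                      # invented candidate replacement bodies
          "return x if x > 0 else -x",
          "return -x if x > 0 else x",
          "return x + 1 if x > 0 else x",
      ]

      def make_variant(body):
          env = {}
          exec("def candidate(x):\n    " + body, env)
          return env["candidate"]

      def repair(max_trials=100):
          for trial in range(1, max_trials + 1):
              candidate = make_variant(random.choice(EDIT_POOL))
              if all(candidate(x) == want for x, want in TESTS):
                  return trial
          return None

      print("repaired on trial", repair())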
    					
    2014.08.14 Aurora Ramírez, José Raúl Romero & Sebastián Ventura On the Performance of Multiple Objective Evolutionary Algorithms for Software Architecture Discovery 2014 Proceedings of the 2014 Conference on Genetic and Evolutionary Computation (GECCO '14), pp. 1287-1294, Vancouver Canada, 12-16 July   Inproceedings
    Abstract: During the design of complex systems, software architects have to deal with a tangle of abstract artefacts, measures and ideas to discover the most fitting underlying architecture. A common way to structure these systems is in terms of their interacting software components, whose composition and connections need to be properly adjusted. The abstract and highly combinatorial nature of this task increases the complexity of the problem. In this scenario, Search-based Software Engineering (SBSE) may serve to support this decision making process from initial analysis models, since the discovery of component-based architectures can be formulated as a challenging multiple optimisation problem, where different metrics and configurations can be applied depending on the design requirements and its specific domain. Many-objective evolutionary optimisation algorithms can provide an interesting alternative to classical multi-objective approaches. This paper presents a comparative study of five different algorithms, including an empirical analysis of their behaviour in terms of quality and variety of the returned solutions. Results are also discussed considering those aspects of concern to the expert in the decision making process, such as the number and type of architectures found. The analysis of many-objective algorithms constitutes an important challenge, since some of them have never been explored before in SBSE.
    BibTeX:
    @inproceedings{RamirezRV14,
      author = {Aurora Ramírez and José Raúl Romero and Sebastián Ventura},
      title = {On the Performance of Multiple Objective Evolutionary Algorithms for Software Architecture Discovery},
      booktitle = {Proceedings of the 2014 Conference on Genetic and Evolutionary Computation (GECCO '14)},
      publisher = {ACM},
      year = {2014},
      pages = {1287-1294},
      address = {Vancouver, Canada},
      month = {12-16 July},
      doi = {http://dx.doi.org/10.1145/2576768.2598310}
    }
    					
    2016.03.08 Daniele Romano, Steven Raemaekers & Martin Pinzger Refactoring Fat Interfaces using a Genetic Algorithm 2014 Proceedings of IEEE International Conference on Software Maintenance and Evolution (ICSME '14), pp. 351-360, Victoria Canada, 29 September - 3 October   Inproceedings
    Abstract: Recent studies have shown that the violation of the Interface Segregation Principle (ISP) is critical for maintaining and evolving software systems. Fat interfaces (i.e., interfaces violating the ISP) change more frequently and degrade the quality of the components coupled to them. According to the ISP the interfaces' design should force no client to depend on methods it does not invoke. Fat interfaces should be split into smaller interfaces exposing only the methods invoked by groups of clients. However, applying the ISP is a challenging task when fat interfaces are invoked differently by many clients. In this paper, we formulate the problem of applying the ISP as a multi-objective clustering problem and we propose a genetic algorithm to solve it. We evaluate the capability of the proposed genetic algorithm with 42,318 public Java APIs whose clients' usage has been mined from the Maven repository. The results of this study show that the genetic algorithm outperforms other search based approaches (i.e., random and simulated annealing approaches) in splitting the APIs according to the ISP.
    BibTeX:
    @inproceedings{RomanoRP14,
      author = {Daniele Romano and Steven Raemaekers and Martin Pinzger},
      title = {Refactoring Fat Interfaces using a Genetic Algorithm},
      booktitle = {Proceedings of IEEE International Conference on Software Maintenance and Evolution (ICSME '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {351-360},
      address = {Victoria, Canada},
      month = {29 September - 3 October},
      doi = {http://dx.doi.org/10.1109/ICSME.2014.57}
    }
    					
    2014.02.21 Ramón Sagarna, Alexander Mendiburu, Iñaki Inza & José A. Lozano Assisting in Search Heuristics Selection through Multidimensional Supervised Classification: A Case Study on Software Testing 2014 Information Sciences, Vol. 258, pp. 122–139, February   Article Testing and Debugging
    Abstract: A fundamental question in the field of approximation algorithms, for a given problem instance, is the selection of the best (or a suitable) algorithm with regard to some performance criteria. A practical strategy for facing this problem is the application of machine learning techniques. However, limited support has been given in the literature to the case of more than one performance criteria, which is the natural scenario for approximation algorithms. We propose multidimensional Bayesian network (mBN) classifiers as a relatively simple, yet well-principled, approach for helping to solve this problem. Precisely, we relax the algorithm selection decision problem into the elucidation of the nondominated subset of algorithms, which contains the best. This formulation can be used in different ways to elucidate the main problem, each of which can be tackled with an mBN classifier. Namely, we deal with two of them: the prediction of the whole nondominated set and whether an algorithm is nondominated or not. We illustrate the feasibility of the approach for real-life scenarios with a case study in the context of Search Based Software Test Data Generation (SBSTDG). A set of five SBSTDG generators is considered and the aim is to assist a hypothetical test engineer in elucidating good generators to fulfil the branch testing of a given programme.
    BibTeX:
    @article{SagarnaMIL14,
      author = {Ramón Sagarna and Alexander Mendiburu and Iñaki Inza and José A. Lozano},
      title = {Assisting in Search Heuristics Selection through Multidimensional Supervised Classification: A Case Study on Software Testing},
      journal = {Information Sciences},
      year = {2014},
      volume = {258},
      pages = {122–139},
      month = {February},
      doi = {http://dx.doi.org/10.1016/j.ins.2013.09.050}
    }
    					
    2014.11.26 Eric Schkufza, Rahul Sharma & Alex Aiken Stochastic Optimization of Floating-Point Programs with Tunable Precision 2014 Proceedings of the 35th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI '14), pp. 53-64, Edinburgh UK, 9-11 June   Inproceedings Testing and Debugging
    Abstract: The aggressive optimization of floating-point computations is an important problem in high-performance computing. Unfortunately, floating-point instruction sets have complicated semantics that often force compilers to preserve programs as written. We present a method that treats floating-point optimization as a stochastic search problem. We demonstrate the ability to generate reduced precision implementations of Intel's handwritten C numeric library which are up to 6 times faster than the original code, and achieve end-to-end speedups of over 30% on a direct numeric simulation and a ray tracer by optimizing kernels that can tolerate a loss of precision while still remaining correct. Because these optimizations are mostly not amenable to formal verification using the current state of the art, we present a stochastic search technique for characterizing maximum error. The technique comes with an asymptotic guarantee and provides strong evidence of correctness.
    BibTeX:
    @inproceedings{SchkufzaSA14,
      author = {Eric Schkufza and Rahul Sharma and Alex Aiken},
      title = {Stochastic Optimization of Floating-Point Programs with Tunable Precision},
      booktitle = {Proceedings of the 35th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI '14)},
      publisher = {ACM},
      year = {2014},
      pages = {53-64},
      address = {Edinburgh, UK},
      month = {9-11 June},
      doi = {http://dx.doi.org/10.1145/2594291.2594302}
    }
    					
    2016.02.17 Eric Schulte Neutral Networks of Real-World Programs and their Application to Automated Software Evolution 2014 PhD Thesis, University of New Mexico, July   Phdthesis
    Abstract: The existing software development ecosystem is the product of evolutionary forces, and consequently real-world software is amenable to improvement through automated evolutionary techniques. This dissertation presents empirical evidence that software is inherently robust to small randomized program transformations, or mutations. Simple and general mutation operations are demonstrated that can be applied to software source code, compiled assembler code, or directly to binary executables. These mutations often generate variants of working programs that differ significantly from the original, yet remain fully functional. Applying successive mutations to the same software program uncovers large neutral networks of fully functional variants of real-world software projects. These properties of mutational robustness and the corresponding neutral networks have been studied extensively in biology and are believed to be related to the capacity for unsupervised evolution and adaptation. As in biological systems, mutational robustness and neutral networks in software systems enable automated evolution. The dissertation presents several applications that leverage software neutral networks to automate common software development and maintenance tasks. Neutral networks are explored to generate diverse implementations of software for improving runtime security and for proactively repairing latent bugs. Next, a technique is introduced for automatically repairing bugs in the assembler and executables compiled from off-the-shelf software. As demonstration, a proprietary executable is manipulated to patch security vulnerabilities without access to source code or any aid from the software vendor. Finally, software neutral networks are leveraged to optimize complex nonfunctional runtime properties. This optimization technique is used to reduce the energy consumption of the popular PARSEC benchmark applications by 20% as compared to the best available public domain compiler optimizations. The applications presented herein apply evolutionary computation techniques to existing software using common software engineering tools. By enabling evolutionary techniques within the existing software development toolchain, this work is more likely to be of practical benefit to the developers and maintainers of real-world software systems.
    BibTeX:
    @phdthesis{Schulte14,
      author = {Eric Schulte},
      title = {Neutral Networks of Real-World Programs and their Application to Automated Software Evolution},
      school = {University of New Mexico},
      year = {2014},
      month = {July},
      url = {http://www.cs.unm.edu/~eschulte/dissertation/schulte-dissertation.pdf}
    }
    					
    2016.02.17 Eric Schulte, Jonathan Dorn, Stephen Harding, Stephanie Forrest & Westley Weimer Post-compiler Software Optimization for Reducing Energy 2014 Proceedings of the 19th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS '14), pp. 639-652, Salt Lake City USA, 1-5 March   Inproceedings
    Abstract: Modern compilers typically optimize for executable size and speed, rarely exploring non-functional properties such as power efficiency. These properties are often hardware-specific, time-intensive to optimize, and may not be amenable to standard dataflow optimizations. We present a general post-compilation approach called Genetic Optimization Algorithm (GOA), which targets measurable non-functional aspects of software execution in programs that compile to x86 assembly. GOA combines insights from profile-guided optimization, superoptimization, evolutionary computation and mutational robustness. GOA searches for program variants that retain required functional behavior while improving non-functional behavior, using characteristic workloads and predictive modeling to guide the search. The resulting optimizations are validated using physical performance measurements and a larger held-out test suite. Our experimental results on PARSEC benchmark programs show average energy reductions of 20%, both for a large AMD system and a small Intel system, while maintaining program functionality on target workloads.
    BibTeX:
    @inproceedings{SchulteDHFW14,
      author = {Eric Schulte and Jonathan Dorn and Stephen Harding and Stephanie Forrest and Westley Weimer},
      title = {Post-compiler Software Optimization for Reducing Energy},
      booktitle = {Proceedings of the 19th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS '14)},
      publisher = {ACM},
      year = {2014},
      pages = {639-652},
      address = {Salt Lake City, USA},
      month = {1-5 March},
      doi = {http://dx.doi.org/10.1145/2541940.2541980}
    }
    					
    2013.09.26 Eric Schulte, Zachary P. Fry, Ethan Fast, Westley Weimer & Stephanie Forrest Software Mutational Robustness 2014 Genetic Programming and Evolvable Machines, Vol. 15(3), pp. 281-312, September   Article Testing and Debugging
    Abstract: Neutral landscapes and mutational robustness are believed to be important enablers of evolvability in biology. We apply these concepts to software, defining mutational robustness to be the fraction of random mutations to program code that leave a program’s behavior unchanged. Test cases are used to measure program behavior and mutation operators are taken from earlier work on genetic programming. Although software is often viewed as brittle, with small changes leading to catastrophic changes in behavior, our results show surprising robustness in the face of random software mutations. The paper describes empirical studies of the mutational robustness of 22 programs, including 14 production software projects, the Siemens benchmarks, and four specially constructed programs. We find that over 30 % of random mutations are neutral with respect to their test suite. The results hold across all classes of programs, for mutations at both the source code and assembly instruction levels, across various programming languages, and bear only a limited relation to test suite coverage. We conclude that mutational robustness is an inherent property of software, and that neutral variants (i.e., those that pass the test suite) often fulfill the program’s original purpose or specification. Based on these results, we conjecture that neutral mutations can be leveraged as a mechanism for generating software diversity. We demonstrate this idea by generating a population of neutral program variants and showing that the variants automatically repair latent bugs. Neutral landscapes also provide a partial explanation for recent results that use evolutionary computation to automatically repair software bugs.
    BibTeX:
    @article{SchulteFFWF14,
      author = {Eric Schulte and Zachary P. Fry and Ethan Fast and Westley Weimer and Stephanie Forrest},
      title = {Software Mutational Robustness},
      journal = {Genetic Programming and Evolvable Machines},
      year = {2014},
      volume = {15},
      number = {3},
      pages = {281-312},
      month = {September},
      doi = {http://dx.doi.org/10.1007/s10710-013-9195-8}
    }
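    The central measurement is easy to reproduce in miniature. The sketch below applies random source-level swaps to an invented clamp function and reports the fraction of mutants that still pass its three tests; two of the four swap rules happen to be semantics-preserving, so the measured neutral fraction comes out near 0.5. The paper's experiments, of course, use real mutation operators on real systems:

      import random

      SOURCE = "def clamp(x, lo, hi):\n    return min(max(x, lo), hi)"
      TESTS = [((5, 0, 10), 5), ((-5, 0, 10), 0), ((15, 0, 10), 10)]
      SWAPS = [
          ("min(max(x, lo), hi)", "max(min(x, hi), lo)"),  # neutral rewrite
          ("max(x, lo)", "max(lo, x)"),                    # neutral: commutative
          ("min", "max"),                                  # killed by the tests
          ("lo", "hi"),                                    # killed by the tests
      ]

      def mutate(src):
          old, new = random.choice(SWAPS)
          i = src.find(old, src.find(":"))   # only mutate the function body
          return src if i < 0 else src[:i] + new + src[i + len(old):]

      def passes(src):
          env = {}
          try:
              exec(src, env)
              return all(env["clamp"](*args) == want for args, want in TESTS)
          except Exception:
              return False

      trials = 1000
      neutral = sum(passes(mutate(SOURCE)) for _ in range(trials))
      print(f"neutral fraction: {neutral / trials:.2f}")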
    					
    2014.05.28 Sergio Segura, José A. Parejo, Robert M. Hierons, David Benavides & Antonio Ruiz-Cortés Automated Generation of Computationally Hard Feature Models using Evolutionary Algorithms 2014 Expert Systems with Applications, Vol. 41(8), pp. 3975-3992, June   Article
    Abstract: A feature model is a compact representation of the products of a software product line. The automated extraction of information from feature models is a thriving topic involving numerous analysis operations, techniques and tools. Performance evaluations in this domain mainly rely on the use of random feature models. However, these only provide a rough idea of the behaviour of the tools with average problems and are not sufficient to reveal their real strengths and weaknesses. In this article, we propose to model the problem of finding computationally hard feature models as an optimization problem and we solve it using a novel evolutionary algorithm for optimized feature models (ETHOM). Given a tool and an analysis operation, ETHOM generates input models of a predefined size maximizing aspects such as the execution time or the memory consumption of the tool when performing the operation over the model. This allows users and developers to know the performance of tools in pessimistic cases providing a better idea of their real power and revealing performance bugs. Experiments using ETHOM on a number of analyses and tools have successfully identified models producing much longer executions times and higher memory consumption than those obtained with random models of identical or even larger size.
    BibTeX:
    @article{SeguraPHBR14,
      author = {Sergio Segura and José A. Parejo and Robert M. Hierons and David Benavides and Antonio Ruiz-Cortés},
      title = {Automated Generation of Computationally Hard Feature Models using Evolutionary Algorithms},
      journal = {Expert Systems with Applications},
      year = {2014},
      volume = {41},
      number = {8},
      pages = {3975-3992},
      month = {June},
      doi = {http://dx.doi.org/10.1016/j.eswa.2013.12.028}
    }
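    The core of the idea, treating the analysis tool as a black box whose measured execution time is the fitness to maximise, fits in a short hedged sketch. The tool below is an invented input-sensitive routine rather than a feature-model analyser, and the individuals are flat integer lists rather than feature models, but the evolutionary loop mirrors the paper's setup of evolving pessimistic inputs:

      import random, time

      def tool(model):
          # Invented stand-in whose running time grows with its input values.
          total = 0
          for v in model:
              for _ in range(v):
                  total += 1
          return total

      def fitness(model):
          start = time.perf_counter()
          tool(model)
          return time.perf_counter() - start   # slower inputs are fitter

      def evolve_hard_input(size=30, pop=20, gens=30):
          population = [[random.randint(0, 50) for _ in range(size)]
                        for _ in range(pop)]
          for _ in range(gens):
              population.sort(key=fitness, reverse=True)
              survivors = population[:pop // 2]
              children = [[max(0, g + random.randint(-5, 5)) for g in parent]
                          for parent in survivors]
              population = survivors + children
          return population[0]

      hardest = evolve_hard_input()
      print(f"hardest input takes {fitness(hardest) * 1e6:.0f} microseconds")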
    					
    2015.02.05 Thiago Gomes Nepomuceno Da Silva, Leonardo Sampaio Rocha & José Everardo Bessa Maia An Effective Method for MOGAs Initialization to Solve the Multi-Objective Next Release Problem 2014 Proceedings of the 13th Mexican International Conference on Artificial Intelligence (MICAI '14), pp. 25-37, Tuxtla Gutiérrez Mexico, 16-22 November   Inproceedings
    Abstract: In this work we evaluate the usefulness of a Path Relinking based method for generating the initial population of Multi-Objective Genetic Algorithms and assess its performance on the Multi-Objective Next Release Problem. The performance of the method was evaluated for the algorithms MoCell and NSGA-II, and the experimental results show that it is consistently superior to the random initialization method and the extreme solutions method, considering the convergence speed and the quality of the Pareto front, measured using the Spread and Hypervolume indicators.
    BibTeX:
    @inproceedings{SilvaRM14,
      author = {Thiago Gomes Nepomuceno Da Silva and Leonardo Sampaio Rocha and José Everardo Bessa Maia},
      title = {An Effective Method for MOGAs Initialization to Solve the Multi-Objective Next Release Problem},
      booktitle = {Proceedings of the 13th Mexican International Conference on Artificial Intelligence (MICAI '14)},
      publisher = {Springer},
      year = {2014},
      pages = {25-37},
      address = {Tuxtla Gutiérrez, Mexico},
      month = {16-22 November},
      doi = {http://dx.doi.org/10.1007/978-3-319-13650-9_3}
    }
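    Path relinking itself takes only a few lines on a binary next-release encoding: walk from an initiating solution towards a guiding solution one bit-flip at a time and collect every intermediate solution as an initial individual. The two extreme solutions below are invented, and the paper's method chooses and combines such endpoints more carefully:

      def path_relink(start, guide):
          path, current = [], start[:]
          for i in range(len(start)):
              if current[i] != guide[i]:
                  current[i] = guide[i]       # one bit-flip towards the guide
                  path.append(current[:])
          return path

      cheapest = [1, 0, 0, 1, 0]   # e.g. a minimal-cost extreme solution
      valuable = [1, 1, 1, 0, 1]   # e.g. a maximal-value extreme solution
      for individual in path_relink(cheapest, valuable):
          print(individual)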
    					
    2014.08.14 Christopher L. Simons, Jim Smith & Paul White Interactive Ant Colony Optimization (iACO) for Early Lifecycle Software Design 2014 Swarm Intelligence, Vol. 8(2), pp. 139-157, June   Article Design Tools and Techniques
    Abstract: Finding good designs in the early stages of the software development lifecycle is a demanding multi-objective problem that is crucial to success. Previously, both interactive and non-interactive techniques based on evolutionary algorithms (EAs) have been successfully applied to assist the designer. However, recently ant colony optimization was shown to outperform EAs at optimising quantitative measures of software designs with a limited computational budget. In this paper, we propose a novel interactive ACO (iACO) approach, in which the search is steered jointly by an adaptive model that combines subjective and objective measures. Results show that iACO is speedy, responsive and effective in enabling interactive, dynamic multi-objective search. Indeed, study participants rate the iACO search experience as compelling. Moreover, inspection of the learned model facilitates understanding of factors affecting users’ judgements, such as the interplay between a design’s elegance and the interdependencies between its components.
    BibTeX:
    @article{SimonsSW14,
      author = {Christopher L. Simons and Jim Smith and Paul White},
      title = {Interactive Ant Colony Optimization (iACO) for Early Lifecycle Software Design},
      journal = {Swarm Intelligence},
      year = {2014},
      volume = {8},
      number = {2},
      pages = {139-157},
      month = {June},
      doi = {http://dx.doi.org/10.1007/s11721-014-0094-2}
    }
    					
    2014.08.14 Luciano Souza, Ricardo Prudencio & Flavia Barros A Comparison Study of Binary Multi-Objective Particle Swarm Optimization Approaches for Test Case Selection 2014 Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC '14), pp. 2164-2171, Beijing China, 6-11 July   Inproceedings Testing and Debugging
    Abstract: During the software testing process many test suites can be generated in order to evaluate and assure the quality of the products. In some cases the execution of all suites cannot fit the available resources (time, people, etc.). Hence, automatic Test Case (TC) selection can be used to reduce the suites based on some selection criterion. This process can be treated as an optimisation problem, aiming to find a subset of TCs which optimises one or more objective functions (i.e., selection criteria). The majority of search-based works focus on single-objective selection. In this light, we developed mechanisms for functional TC selection which consider two objectives simultaneously: maximise requirements coverage while minimising cost in terms of TC execution effort. These mechanisms were implemented by deploying multi-objective techniques based on Particle Swarm Optimisation (PSO). Due to the drawbacks of the original binary version of PSO, we implemented five binary PSO algorithms and combined them with multi-objective versions of PSO in order to create new optimisation strategies applied to TC selection. The experiments were performed on two real test suites, revealing the feasibility of the proposed strategies and the differences among them.
    BibTeX:
    @inproceedings{SouzaPB14,
      author = {Luciano Souza and Ricardo Prudencio and Flavia Barros},
      title = {A Comparison Study of Binary Multi-Objective Particle Swarm Optimization Approaches for Test Case Selection},
      booktitle = {Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {2164-2171},
      address = {Beijing, China},
      month = {6-11 July},
      doi = {http://dx.doi.org/10.1109/CEC.2014.6900522}
    }
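    Illustrative sketch (Python): the crux of a binary PSO, as used in the paper's strategies, is that velocities stay real-valued while positions are bit vectors (here, a test-selection mask); a bit is set with probability sigmoid(velocity). The sketch below is the classic binary PSO update of Kennedy and Eberhart, a simplification of the five variants the authors compare.

    import math, random

    def sigmoid(v):
        return 1.0 / (1.0 + math.exp(-v))

    def update_particle(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, vmax=4.0):
        # x, pbest, gbest are bit vectors (1 = test case selected);
        # v is the real-valued velocity vector.
        new_x, new_v = [], []
        for xi, vi, pi, gi in zip(x, v, pbest, gbest):
            vi = w * vi + c1 * random.random() * (pi - xi) + c2 * random.random() * (gi - xi)
            vi = max(-vmax, min(vmax, vi))  # clamp to avoid saturation
            new_v.append(vi)
            new_x.append(1 if random.random() < sigmoid(vi) else 0)
        return new_x, new_v

    x = [random.randint(0, 1) for _ in range(10)]
    x, v = update_particle(x, [0.0] * 10, pbest=x, gbest=x)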
    					
    2014.09.22 Ashish Sureka Requirements Prioritization and Next-Release Problem under Non-additive Value Conditions 2014 Proceedings of the 23rd Australian Software Engineering Conference (ASWEC '14), pp. 120-123, Milsons Point NSW Australia, 7-10 April   Inproceedings Requirements/Specifications
    Abstract: Next Release Problem (NRP) is a complex combinatorial optimization problem consisting of identifying a subset of software requirements maximizing the business value under given constraints such as cost and resource limitation, time and functionality related dependencies between requirements. NRP can be mathematically formulated as an integer linear programming problem, and previous research solves the NRP multi-objective optimization problem using exact and metaheuristic search techniques. We present a mathematical formulation of the NRP under conditions of non-additive customer valuations (positive and negative synergies) across requirements. We present a model that allows customers to state their preferences or valuations across bundles or combinations of requirements. We analyze the economic efficiency gains and the cognitive and computational complexity of the proposed model. We conduct experiments to investigate the applicability of multi-objective evolutionary algorithms (MOEAs) in solving the NRP with non-additive valuations and implication constraints on requirements. We compare and contrast the performance of state-of-the-art MOEAs such as NSGA-II and GDE3 on synthetic datasets representing multiple problem characteristics and sizes, and present the results of our empirical analysis.
    BibTeX:
    @inproceedings{Sureka14,
      author = {Ashish Sureka},
      title = {Requirements Prioritization and Next-Release Problem under Non-additive Value Conditions},
      booktitle = {Proceedings of the 23rd Australian Software Engineering Conference (ASWEC '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {120-123},
      address = {Milsons Point, NSW, Australia},
      month = {7-10 April},
      doi = {http://dx.doi.org/10.1109/ASWEC.2014.12}
    }
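    Illustrative sketch (Python): non-additive valuation means the value of a requirement subset is not just the sum of individual values; bundles carry positive or negative synergies. A minimal encoding of such a fitness, with all names and figures hypothetical:

    def release_value(selected, values, synergies):
        # base additive value plus bundle synergies: a synergy maps a
        # frozenset of requirement ids to a value adjustment
        total = sum(values[r] for r in selected)
        for bundle, bonus in synergies.items():
            if bundle <= selected:
                total += bonus
        return total

    values = {1: 10, 2: 8, 3: 5}
    synergies = {frozenset({1, 2}): 4,    # positive synergy
                 frozenset({2, 3}): -3}   # negative synergy
    print(release_value({1, 2, 3}, values, synergies))  # 10+8+5+4-3 = 24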
    					
    2016.02.17 Bharti Suri & Shweta Singhal Development and Validation of an Improved Test Selection and Prioritization Algorithm based on ACO 2014 International Journal of Reliability, Quality and Safety Engineering, Vol. 21(6), December   Article Testing and Debugging
    Abstract: Regression testing is an important and often costly software maintenance activity. Retesting the software using the existing test suite whenever modifications are made to the system, in order to regain confidence in the correctness of the system, is called Regression Testing. Regression test suites are often too large to re-execute within the given time and cost constraints. Reordering of the test suite according to appropriate criteria like code, branch, condition and fault coverage is known as Test Suite Prioritization. We can also select a subset of the original test suite on the basis of some criteria, which is often called Regression Test Selection. The research problem that arises from this is the choice of technique or process to be used for selecting and prioritizing according to one or more of the chosen criteria. Ant Colony Optimization (ACO) is one such technique, used by Singh et al. for solving the Time-Constrained Test Suite Selection and Prioritization problem using Fault Exposing Potential (FEP). In this paper, we propose improvements to the existing algorithm along with details of its time complexity. The results obtained make a convincing case for the technique. Implementation of the proposed algorithm has also been demonstrated: the tool was repeatedly run on sample programs while varying the time constraint. The analysis shows the usefulness and effectiveness of the ACO technique for test suite selection and prioritization.
    BibTeX:
    @article{SuriS14,
      author = {Bharti Suri and Shweta Singhal},
      title = {Development and Validation of an Improved Test Selection and Prioritization Algorithm based on ACO},
      journal = {International Journal of Reliability, Quality and Safety Engineering},
      year = {2014},
      volume = {21},
      number = {6},
      month = {December},
      doi = {http://dx.doi.org/10.1142/S0218539314500326}
    }
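    Illustrative sketch (Python): at the heart of an ACO-based selection and prioritization loop is the probabilistic transition rule, with selection probability proportional to pheromone^alpha times heuristic^beta (the heuristic could be fault-exposing potential per unit execution time). A generic sketch of that rule, not the improved algorithm proposed in the paper:

    import random

    def choose_next(candidates, pheromone, heuristic, alpha=1.0, beta=2.0):
        # classic ACO transition rule over the not-yet-ordered test cases
        weights = [(pheromone[t] ** alpha) * (heuristic[t] ** beta) for t in candidates]
        r, acc = random.uniform(0, sum(weights)), 0.0
        for t, wgt in zip(candidates, weights):
            acc += wgt
            if acc >= r:
                return t
        return candidates[-1]

    pheromone = {"t1": 1.0, "t2": 1.0, "t3": 1.0}
    heuristic = {"t1": 0.9, "t2": 0.4, "t3": 0.7}  # e.g. FEP / execution time
    print(choose_next(["t1", "t2", "t3"], pheromone, heuristic))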
    					
    2014.08.12 Jerry Swan, Michael G Epitropakis & John Woodward Gen-O-Fix: An Embeddable Framework for Dynamic Adaptive Genetic Improvement Programming 2014 (CSM-195)   Techreport
    Abstract: Genetic Improvement Programming (GIP) is concerned with automating the burden of software maintenance, the most costly phase of the software life cycle. We describe Gen-O-Fix, a GIP framework which allows a software system hosted on the Java Virtual Machine to be continually improved (e.g. make better predictions; pass more regression tests; reduce power consumption). It is the first exemplar of a dynamic adaptive GIP framework, i.e. it can improve a system as it runs. It is written in the Scala programming language and uses reflection to yield source-to-source transformation. One of the design goals for Gen-O-Fix was to create a tool that is user-centric rather than researcher-centric: the end-user is required only to provide a measure of system quality and the URL of the source code to be improved. We discuss potential applications to predictive, embedded and high-performance systems.
    BibTeX:
    @techreport{SwanEW14,
      author = {Jerry Swan and Michael G Epitropakis and John Woodward},
      title = {Gen-O-Fix: An Embeddable Framework for Dynamic Adaptive Genetic Improvement Programming},
      year = {2014},
      number = {CSM-195},
      url = {http://www.cs.stir.ac.uk/~jrw/publications/genofix-TR.pdf}
    }
    					
    2014.05.28 Julian Thomé, Alessandra Gorla & Andreas Zeller Search-based Security Testing of Web Applications 2014 Proceedings of the 7th International Workshop on Search-Based Software Testing (SBST '14), pp. 5-14, Hyderabad India, 2-3 June   Inproceedings Testing and Debugging
    Abstract: SQL injections are still the most exploited web application vulnerabilities. We present a technique to automatically detect such vulnerabilities through targeted test generation. Our approach uses search-based testing to systematically evolve inputs to maximize their potential to expose vulnerabilities. Starting from an entry URL, our BIOFUZZ prototype systematically crawls a web application and generates inputs whose effects on the SQL interaction are assessed at the interface between Web server and database. By evolving those inputs whose resulting SQL interactions show best potential, BIOFUZZ exposes vulnerabilities on real-world Web applications within minutes. As a black-box approach, BIOFUZZ requires neither analysis nor instrumentation of server code; however, it even outperforms state-of-the-art white-box vulnerability scanners.
    BibTeX:
    @inproceedings{ThomeGZ14,
      author = {Julian Thomé and Alessandra Gorla and Andreas Zeller},
      title = {Search-based Security Testing of Web Applications},
      booktitle = {Proceedings of the 7th International Workshop on Search-Based Software Testing (SBST '14)},
      publisher = {ACM},
      year = {2014},
      pages = {5-14},
      address = {Hyderabad, India},
      month = {2-3 June},
      doi = {http://dx.doi.org/10.1145/2593833.2593835}
    }
    					
    2014.05.28 Nikolai Tillmann, Judith Bishop, Nigel Horspool, Daniel Perelman & Tao Xie Code Hunt: Searching for Secret Code for Fun 2014 Proceedings of the 7th International Workshop on Search-Based Software Testing (SBST '14), pp. 23-26, Hyderabad India, 2-3 June   Inproceedings Testing and Debugging
    Abstract: Learning to code can be made more effective and sustainable if it is perceived as fun by the learner. Code Hunt uses puzzles that players have to explore by means of clues presented as test cases. Players iteratively modify their code to match the functional behavior of secret solutions. This way of learning to code is very different to learning from a specification. It is essentially re-engineering from test cases. Code Hunt is based on the test/clue generation of Pex, a white-box test generation tool that uses dynamic symbolic execution. Pex performs a guided search to determine feasible execution paths. Conceptually, solving a puzzle is the manual process of conducting search-based test generation: the “test data” to be generated by the player is the player’s code, and the “fitness values” that reflect the closeness of the player’s code to the secret code are the clues (i.e., Pex-generated test cases). This paper is the first one to describe Code Hunt and its extensions over its precursor Pex4Fun. Code Hunt represents a high-impact educational gaming platform that not only internally leverages fitness values to guide test/clue generation but also externally offers fun user experiences where search-based test generation is manually emulated. Because the amount of data is growing all the time, the entire system runs in the cloud on Windows Azure.
    BibTeX:
    @inproceedings{TillmannBHPX14,
      author = {Nikolai Tillmann and Judith Bishop and Nigel Horspool and Daniel Perelman and Tao Xie},
      title = {Code Hunt: Searching for Secret Code for Fun},
      booktitle = {Proceedings of the 7th International Workshop on Search-Based Software Testing (SBST '14)},
      publisher = {ACM},
      year = {2014},
      pages = {23-26},
      address = {Hyderabad, India},
      month = {2-3 June},
      doi = {http://dx.doi.org/10.1145/2593833.2593838}
    }
    					
    2015.12.09 Prashant Vats & Manju Mandot A Comparative Analysis of Ant Colony Optimization for its Applications into Software Testing 2014 Proceedings of Innovative Applications of Computational Intelligence on Power, Energy and Controls with their impact on Humanity (CIPECH '14), pp. 476-481, Ghaziabad India, 28-29 November   Inproceedings Testing and Debugging
    Abstract: Ant Colony Optimization (ACO) algorithms are metaheuristics built on search-based algorithms. They exploit the natural phenomenon by which ants find the best possible path, the one covering the minimum distance from the food source to the colony, which the rest of the ants then follow, resulting in an optimized path. This phenomenon can be applied to provide optimized solutions to complex computational problems. In this paper, we review applications of ACO algorithms at the various levels of software testing, demonstrating their worth in addressing the various aspects of software testing.
    BibTeX:
    @inproceedings{VatsM14,
      author = {Prashant Vats and Manju Mandot},
      title = {A Comparative Analysis of Ant Colony Optimization for its Applications into Software Testing},
      booktitle = {Proceedings of Innovative Applications of Computational Intelligence on Power, Energy and Controls with their impact on Humanity (CIPECH '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {476-481},
      address = {Ghaziabad, India},
      month = {28-29 November},
      doi = {http://dx.doi.org/10.1109/CIPECH.2014.7019110}
    }
    					
    2014.05.27 Tanja E.J. Vos, Paolo Tonella, Wishnu Prasetya, Peter M. Kruse, Alessandra Bagnato, Mark Harman & Onn Shehory FITTEST: A New Continuous and Automated Testing Process for Future Internet Applications 2014 Proceedings of the IEEE Conference on Software Maintenance, Reengineering and Reverse Engineering (CSMR-WCRE) Software Evolution Week, pp. 407-410, Antwerp Belgium, 3-6 February   Inproceedings Testing and Debugging
    BibTeX:
    @inproceedings{VosTPKBHS14,
      author = {Tanja E. J. Vos and Paolo Tonella and Wishnu Prasetya and Peter M. Kruse and Alessandra Bagnato and Mark Harman and Onn Shehory},
      title = {FITTEST: A New Continuous and Automated Testing Process for Future Internet Applications},
      booktitle = {Proceedings of the IEEE Conference on Software Maintenance, Reengineering and Reverse Engineering (CSMR-WCRE) Software Evolution Week},
      publisher = {IEEE},
      year = {2014},
      pages = {407-410},
      address = {Antwerp, Belgium},
      month = {3-6 February},
      doi = {http://dx.doi.org/10.1109/CSMR-WCRE.2014.6747206}
    }
    					
    2014.08.14 Markus Wagner Coello Coello, C.A. (Ed.) Maximising Axiomatization Coverage and Minimizing Regression Testing Time 2014 Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC '14), pp. 2885-2892, Beijing China, 6-11 July   Inproceedings Testing and Debugging
    Abstract: The correctness of program verification systems is of great importance, as they are used to formally prove that safety and security critical programs follow their specification. One of the contributing factors to the correctness of the whole verification system is the correctness of the background axiomatisation, which captures the semantics of the target program language. We present a framework for the maximisation of the proportion of the axiomatisation that is used ("covered") during testing of the verification tool. The diverse set of test cases found not only increases the trust in the verification system, but it can also be used to reduce the time needed for regression testing.
    BibTeX:
    @inproceedings{Wagner14,
      author = {Markus Wagner},
      title = {Maximising Axiomatization Coverage and Minimizing Regression Testing Time},
      booktitle = {Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {2885-2892},
      address = {Beijing, China},
      month = {6-11 July},
      doi = {http://dx.doi.org/10.1109/CEC.2014.6900324}
    }
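    Illustrative sketch (Python): the maximisation problem in the abstract is set-cover-like: pick test cases whose proofs together exercise as many axioms as possible. The paper uses search; the greedy baseline below just conveys the objective, with all test and axiom names hypothetical.

    def greedy_cover(test_axioms, budget=None):
        # repeatedly pick the test whose proof uses the most axioms
        # not yet covered (classic greedy set-cover approximation)
        covered, order, remaining = set(), [], dict(test_axioms)
        while remaining:
            best = max(remaining, key=lambda t: len(remaining[t] - covered))
            if not (remaining[best] - covered):
                break  # no test adds coverage any more
            order.append(best)
            covered |= remaining.pop(best)
            if budget and len(order) >= budget:
                break
        return order, covered

    tests = {"t1": {"ax1", "ax2"}, "t2": {"ax2", "ax3", "ax4"}, "t3": {"ax4"}}
    print(greedy_cover(tests))  # picks t2 first, then t1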
    					
    2014.08.14 Shuai Wang, Shaukat Ali & Arnaud Gotlieb Random-Weighted Search-Based Multi-Objective Optimization Revisited 2014 Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14), Vol. 8636, pp. 199-214, Fortaleza Brazil, 26-29 August   Inproceedings
    Abstract: Weight-based multi-objective optimization requires assigning appropriate weights using a weight strategy to each of the objectives such that an overall optimal solution can be obtained with a search algorithm. Choosing weights using an appropriate weight strategy has a huge impact on the obtained solutions and thus warrants the need to seek the best weight strategy. In this paper, we propose a new weight strategy called Uniformly Distributed Weights (UDW), which generates weights from uniform distribution, while satisfying a set of user-defined constraints among various cost and effectiveness measures. We compare UDW with two commonly used weight strategies, i.e., Fixed Weights (FW) and Randomly-Assigned Weights (RAW), based on five cost/effectiveness measures for an industrial problem of test minimization defined in the context of Video Conferencing System Product Line developed by Cisco Systems. We empirically evaluate the performance of UDW, FW, and RAW in conjunction with four search algorithms ((1+1) Evolutionary Algorithm (EA), Genetic Algorithm, Alternating Variable Method, and Random Search) using the industrial case study and 500 artificial problems of varying complexity. Results show that UDW along with (1+1) EA achieves the best performance among the other combinations of weight strategies and algorithms.
    BibTeX:
    @inproceedings{WangAG14,
      author = {Shuai Wang and Shaukat Ali and Arnaud Gotlieb},
      title = {Random-Weighted Search-Based Multi-Objective Optimization Revisited},
      booktitle = {Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14)},
      publisher = {Springer},
      year = {2014},
      volume = {8636},
      pages = {199-214},
      address = {Fortaleza, Brazil},
      month = {26-29 August},
      doi = {http://dx.doi.org/10.1007/978-3-319-09940-8_14}
    }
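    Illustrative sketch (Python): one way to realise a UDW-style strategy is rejection sampling: draw weights from a uniform distribution, normalise them to sum to one, and keep only vectors satisfying the user-defined constraints. A sketch in that spirit, not the paper's exact procedure:

    import random

    def uniform_weights(n, constraints, max_tries=10000):
        for _ in range(max_tries):
            w = [random.random() for _ in range(n)]
            s = sum(w)
            w = [x / s for x in w]  # normalise so the weights sum to 1
            if all(c(w) for c in constraints):
                return w
        raise RuntimeError("no feasible weight vector found")

    # hypothetical constraint: cost weight must not exceed effectiveness weight
    print(uniform_weights(2, [lambda w: w[0] <= w[1]]))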
    					
    2016.02.27 Shuai Wang, David Buchmann, Shaukat Ali, Arnaud Gotlieb, Dipesh Pradhan & Marius Liaaen Multi-objective Test Prioritization in Software Product Line Testing: An Industrial Case Study 2014 Proceedings of the 18th International Software Product Line Conference (SPLC '14), pp. 32-41, Florence Italy, 15-19 September   Inproceedings Testing and Debugging
    Abstract: Test prioritization is crucial for testing products in a product line given a limited budget in terms of available time and resources. In general, it is not practically feasible to execute all the possible test cases, so ordering test case execution permits test engineers to discover faults earlier in the testing process. An efficient prioritization of test cases for one or more products requires a clear consideration of the tradeoff among various cost (e.g., time, required resources) and effectiveness (e.g., feature coverage) objectives. As an integral part of Cisco's future test scheduling system for validating video conferencing products, we introduce a search-based multi-objective test prioritization technique considering multiple cost and effectiveness measures. In particular, our multi-objective optimization setup includes the minimization of execution cost (e.g., time) and the maximization of the number of prioritized test cases, feature pairwise coverage and fault detection capability. Based on cost-effectiveness measures, a novel fitness function is defined for this test prioritization problem. The fitness function is empirically evaluated together with three commonly used search algorithms (e.g., (1+1) Evolutionary Algorithm (EA)) and Random Search as a comparison baseline, based on the Cisco industrial case study and 500 artificially designed problems. The results show that (1+1) EA achieves the best performance for solving the test prioritization problem and that it scales up to solve problems of varying complexity.
    BibTeX:
    @inproceedings{WangBAGPL14,
      author = {Shuai Wang and David Buchmann and Shaukat Ali and Arnaud Gotlieb and Dipesh Pradhan and Marius Liaaen},
      title = {Multi-objective Test Prioritization in Software Product Line Testing: An Industrial Case Study},
      booktitle = {Proceedings of the 18th International Software Product Line Conference (SPLC '14)},
      publisher = {ACM},
      year = {2014},
      pages = {32-41},
      address = {Florence, Italy},
      month = {15-19 September},
      doi = {http://dx.doi.org/10.1145/2648511.2648515}
    }
    					
    2014.05.28 Ying-lin Wang & Jin-wei Pang Ant Colony Optimization for Feature Selection in Software Product Lines 2014 Journal of Shanghai Jiaotong University (Science), Vol. 19(1), pp. 50-58, February   Article
    Abstract: Software product lines (SPLs) are important software engineering techniques for creating a collection of similar software systems. Software products can be derived from SPLs quickly. The process of software product derivation can be modeled as feature selection optimization with resource constraints, which is a nondeterministic polynomial-time hard (NP-hard) problem. In this paper, we present an approach that uses ant colony optimization to obtain an approximate solution to the problem in polynomial time. We evaluate our approach by comparing it to two important approximation techniques. One is the filtered Cartesian flattening and modified heuristic (FCF+M-HEU) algorithm; the other is the genetic algorithm for optimized feature selection (GAFES). The experimental results show that our approach performs 6% worse than FCF+M-HEU while greatly reducing running time, and 10% better than GAFES while taking more time.
    BibTeX:
    @article{WangP14,
      author = {Ying-lin Wang and Jin-wei Pang},
      title = {Ant Colony Optimization for Feature Selection in Software Product Lines},
      journal = {Journal of Shanghai Jiaotong University (Science)},
      year = {2014},
      volume = {19},
      number = {1},
      pages = {50-58},
      month = {February},
      doi = {http://dx.doi.org/10.1007/s12204-013-1468-0}
    }
    					
    2014.05.28 Huayao Wu & Changhai Nie An Overview of Search Based Combinatorial Testing 2014 Proceedings of the 7th International Workshop on Search-Based Software Testing (SBST '14), pp. 27-30, Hyderabad India, 2-3 June   Inproceedings Testing and Debugging
    Abstract: Combinatorial testing (CT) is a branch of software testing which aims to detect as many interaction-triggered failures as possible. Search-based combinatorial testing uses search techniques to solve problems in combinatorial testing, and has been shown to be effective and promising. In this paper, we aim to provide an overview of search-based combinatorial testing, focusing especially on test suite generation without constraints, and discuss potential future directions in this field.
    BibTeX:
    @inproceedings{WuN14,
      author = {Huayao Wu and Changhai Nie},
      title = {An Overview of Search Based Combinatorial Testing},
      booktitle = {Proceedings of the 7th International Workshop on Search-Based Software Testing (SBST '14)},
      publisher = {ACM},
      year = {2014},
      pages = {27-30},
      address = {Hyderabad, India},
      month = {2-3 June},
      doi = {http://dx.doi.org/10.1145/2593833.2593839}
    }
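    Illustrative sketch (Python): a typical fitness in search-based combinatorial test generation (one test at a time) counts how many still-uncovered parameter-value pairs a candidate test would hit. A toy version for pairwise coverage, with a hypothetical 3-parameter, 2-value model:

    from itertools import combinations, product

    def pairs_of(test):
        # all parameter-value pairs exercised by one concrete test
        return {((i, a), (j, b)) for (i, a), (j, b) in combinations(enumerate(test), 2)}

    def fitness(test, uncovered):
        # number of still-uncovered pairs this candidate covers
        return len(pairs_of(test) & uncovered)

    uncovered = set()
    for t in product([0, 1], repeat=3):
        uncovered |= pairs_of(t)  # initially every pair is uncovered
    print(fitness((0, 1, 0), uncovered))  # this candidate covers 3 pairs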
    					
    2014.04.16 Xin Xia, Yang Feng, David Lo, Zhenyu Chen & Xinyu Wang Towards More Accurate Multi-label Software Behavior Learning 2014 Proceedings of IEEE Conference on Software Maintenance, Reengineering and Reverse Engineering (CSMR-WCRE '14), pp. 134-143, Antwerp Belgium, 3-6 February   Inproceedings
    Abstract: In a modern software system, when a program fails, a crash report which contains an execution trace would be sent to the software vendor for diagnosis. A crash report which corresponds to a failure could be caused by multiple types of faults simultaneously. Many large companies such as Baidu organize a team to analyze these failures, and classify them into multiple labels (i.e., multiple types of faults). However, it would be time-consuming and difficult for developers to manually analyze these failures and arrive at appropriate fault labels. In this paper, we automatically classify a failure into multiple types of faults, using a composite algorithm named MLL-GA, which combines various multi-label learning algorithms by leveraging a genetic algorithm (GA). To evaluate the effectiveness of MLL-GA, we perform experiments on 6 open source programs and show that MLL-GA could achieve average F-measures of 0.6078 to 0.8665. We also compare our algorithm with ML.KNN and show that, on average across the 6 datasets, MLL-GA improves the average F-measure of ML.KNN by 14.43%.
    BibTeX:
    @inproceedings{XiaFLCW14,
      author = {Xin Xia and Yang Feng and David Lo and Zhenyu Chen and Xinyu Wang},
      title = {Towards More Accurate Multi-label Software Behavior Learning},
      booktitle = {Proceedings of IEEE Conference on Software Maintenance, Reengineering and Reverse Engineering (CSMR-WCRE '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {134-143},
      address = {Antwerp, Belgium},
      month = {3-6 February},
      doi = {http://dx.doi.org/10.1109/CSMR-WCRE.2014.6747163}
    }
    					
    2014.09.19 Ying Xing, Junfei Huang, Yunzhan Gong, Yawen Wang & Xuzhou Zhang An Intelligent Method Based on State Space Search for Automatic Test Case Generation 2014 Journal of Software, Vol. 9(2), pp. 358-364, February   Article Testing and Debugging
    Abstract: Search-Based Software Testing reformulates testing as search problems so that test case generation can be automated by some chosen search algorithms. This paper reformulates path-oriented test case generation as a state space search problem and proposes an intelligent method, Best-First-Search Branch & Bound, to solve it, utilizing Branch & Bound and Backtracking algorithms to search the space of potential test cases and adopting bisection to narrow the bounds of the search space. We also propose an optimization method that removes irrelevant variables. Experiments show that the proposed search method generates test cases with promising performance and outperforms some metaheuristic search algorithms.
    BibTeX:
    @article{XingHGWZ14,
      author = {Ying Xing and Junfei Huang and Yunzhan Gong and Yawen Wang and Xuzhou Zhang},
      title = {An Intelligent Method Based on State Space Search for Automatic Test Case Generation},
      journal = {Journal of Software},
      year = {2014},
      volume = {9},
      number = {2},
      pages = {358-364},
      month = {February},
      doi = {http://dx.doi.org/10.4304/jsw.9.2.358-364}
    }
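    Illustrative sketch (Python): a best-first Branch & Bound skeleton expands the most promising state first and prunes states whose bound cannot beat the incumbent. The toy demo bisects an integer interval domain, echoing the abstract's use of bisection; the real method works over program inputs and branch conditions.

    import heapq

    def best_first_bb(root, expand, bound, is_goal):
        best, incumbent, counter = None, float("inf"), 1
        heap = [(bound(root), 0, root)]
        while heap:
            b, _, state = heapq.heappop(heap)
            if b >= incumbent:
                continue  # pruned: cannot beat the incumbent
            if is_goal(state):
                incumbent, best = b, state
                continue
            for child in expand(state):
                cb = bound(child)
                if cb < incumbent:
                    heapq.heappush(heap, (cb, counter, child))
                    counter += 1
        return best

    # toy: shrink an interval by bisection until it pins down the input
    # minimising |x - 7| (a stand-in for a branch-distance fitness)
    def expand(iv):
        lo, hi = iv
        mid = (lo + hi) // 2
        return [(lo, mid), (mid + 1, hi)] if lo < hi else []

    def bound(iv):
        lo, hi = iv
        return 0 if lo <= 7 <= hi else min(abs(lo - 7), abs(hi - 7))

    print(best_first_bb((0, 100), expand, bound, lambda iv: iv[0] == iv[1]))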
    					
    2014.08.14 Yongrui Xu & Peng Liang A New Learning Mechanism for Resolving Inconsistencies in Using Cooperative Co-evolution Model 2014 Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14), Vol. 8636, pp. 215-221, Fortaleza Brazil, 26-29 August   Inproceedings
    Abstract: Many aspects of Software Engineering problems lend themselves to a coevolutionary model of optimization, because software systems are complex and rich in potential populations that could be productively coevolved. Most of these aspects can be coevolved to work better together in a cooperative manner. Compared with the simple and commonly used predator-prey co-evolution model, the cooperative co-evolution model has more challenges that need to be addressed. One of these challenges is how to resolve the inconsistencies between two populations in order to make them work together with no conflict. In this position paper, we propose a new learning mechanism based on the Baldwin effect, and introduce learning genetic operators to address the inconsistency issues. A toy example in the field of automated architectural synthesis is provided to describe the use of our proposed approach.
    BibTeX:
    @inproceedings{XuL14,
      author = {Yongrui Xu and Peng Liang},
      title = {A New Learning Mechanism for Resolving Inconsistencies in Using Cooperative Co-evolution Model},
      booktitle = {Proceedings of the 6th International Symposium on Search-Based Software Engineering (SSBSE '14)},
      publisher = {Springer},
      year = {2014},
      volume = {8636},
      pages = {215-221},
      address = {Fortaleza, Brazil},
      month = {26-29 August},
      doi = {http://dx.doi.org/10.1007/978-3-319-09940-8_14}
    }
    					
    2014.08.14 Tao Yue & Shaukat Ali Applying Search Algorithms for Optimizing Stakeholders Familiarity and Balancing Workload in Requirements Assignment 2014 Proceedings of the 2014 Conference on Genetic and Evolutionary Computation (GECCO '14), pp. 1295-1302, Vancouver Canada, 12-16 July   Inproceedings Requirements/Specifications
    Abstract: During the early phase of the project development lifecycle of large-scale cyber-physical systems, a large number of requirements need to be assigned to different stakeholders from different organizations, or different departments of the same organization, for reviewing, clarifying and checking their conformance to industry standards and government or other regulations. These requirements have different characteristics, such as varying extents of importance to the organization, complexity, and dependencies between each other, thereby requiring different effort (workload) to review and clarify. While working with our industrial partners in the domain of cyber-physical systems, we discovered an optimization problem, where an optimal solution is required for assigning requirements to different stakeholders by maximizing their familiarity with the assigned requirements while balancing the overall workload of each stakeholder. We propose a fitness function which was investigated with four search algorithms: (1+1) Evolutionary Algorithm (EA), Genetic Algorithm, and Alternating Variable Method, with Random Search used as a comparison baseline. We empirically evaluated their performance for finding an optimal solution using a large-scale industrial case study and 120 artificial problems of varying complexity. Results show that (1+1) EA gives the best results together with our proposed fitness function as compared to the other three algorithms.
    BibTeX:
    @inproceedings{YueA14,
      author = {Tao Yue and Shaukat Ali},
      title = {Applying Search Algorithms for Optimizing Stakeholders Familiarity and Balancing Workload in Requirements Assignment},
      booktitle = {Proceedings of the 2014 Conference on Genetic and Evolutionary Computation (GECCO '14)},
      publisher = {ACM},
      year = {2014},
      pages = {1295-1302},
      address = {Vancouver, Canada},
      month = {12-16 July},
      doi = {http://dx.doi.org/10.1145/2576768.2598309}
    }
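    Illustrative sketch (Python): a (1+1) EA, the best performer in the study, keeps a single parent and accepts a mutated child whenever it is at least as fit. The fitness below, rewarding familiarity and penalising workload imbalance, is a hypothetical stand-in for the paper's function, as is the trade-off weight 0.5.

    import random

    def one_plus_one_ea(n_reqs, n_stakeholders, fitness, iters=10000):
        # mutate each gene (requirement assignment) with probability 1/n;
        # accept the child if it is at least as fit as the parent
        parent = [random.randrange(n_stakeholders) for _ in range(n_reqs)]
        for _ in range(iters):
            child = [random.randrange(n_stakeholders)
                     if random.random() < 1.0 / n_reqs else g for g in parent]
            if fitness(child) >= fitness(parent):
                parent = child
        return parent

    familiarity = [[random.random() for _ in range(3)] for _ in range(12)]

    def fitness(assign):
        fam = sum(familiarity[r][s] for r, s in enumerate(assign))
        loads = [assign.count(s) for s in range(3)]
        return fam - 0.5 * (max(loads) - min(loads))  # hypothetical trade-off

    print(one_plus_one_ea(12, 3, fitness))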
    					
    2014.08.14 Zeratul Mohd Yusoh & Maolin Tang Coello Coello, C.A. (Ed.) Composite SaaS Scaling in Cloud Computing using a Hybrid Genetic Algorithm 2014 Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC '14), pp. 1609-1616, Beijing China, 6-11 July   Inproceedings
    Abstract: A Software-as-a-Service, or SaaS, can be delivered in a composite form, consisting of a set of application and data components that work together to deliver higher-level functional software. Components in a composite SaaS may need to be scaled, replicated or deleted to accommodate the user's load. It may not be necessary to replicate all components of the SaaS, as some components can be shared by other instances. On the other hand, when the load is low, some of the instances may need to be deleted to avoid resource underuse. Thus, it is important to determine which components are to be scaled such that the performance of the SaaS is maintained. Extensive research on SaaS resource management in the Cloud has not yet addressed the challenges of the scaling process for composite SaaS. Therefore, a hybrid genetic algorithm is proposed that utilises problem knowledge and explores the best combination of scaling plans for the components. Experimental results demonstrate that the proposed algorithm outperforms existing heuristic-based solutions.
    BibTeX:
    @inproceedings{YusohT14,
      author = {Zeratul Mohd Yusoh and Maolin Tang},
      title = {Composite SaaS Scaling in Cloud Computing using a Hybrid Genetic Algorithm},
      booktitle = {Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC '14)},
      publisher = {IEEE},
      year = {2014},
      pages = {1609-1616},
      address = {Beijing, China},
      month = {6-11 July},
      doi = {http://dx.doi.org/10.1109/CEC.2014.6900614}
    }
    					
    2015.11.06 Yuanyuan Zhang, Mark Harman, Gabriela Ochoa, Guenther Ruhe & Sjaak Brinkkemper An Empirical Study of Meta- and Hyper-Heuristic Search for Multi-Objective Release Planning 2014 (RN/14/07), June   Techreport
    Abstract: A variety of meta-heuristic search algorithms have been introduced for optimising software release planning. However, there has been no comprehensive empirical study of different search algorithms across multiple different real world datasets. In this paper we present an empirical study of global, local and hybrid meta- and hyper-heuristic search based algorithms on 10 real world datasets. We find that the hyper-heuristics are particularly effective. For example, the hyper-heuristic genetic algorithm significantly outperformed the other six approaches (and with high effect size) for solution quality 85% of the time, and was also faster than all others 70% of the time. Furthermore, correlation analysis reveals that it scales well as the number of requirements increases.
    BibTeX:
    @techreport{ZhangHORB14,
      author = {Yuanyuan Zhang and Mark Harman and Gabriela Ochoa and Guenther Ruhe and Sjaak Brinkkemper},
      title = {An Empirical Study of Meta- and Hyper-Heuristic Search for Multi-Objective Release Planning},
      year = {2014},
      number = {RN/14/07},
      month = {June},
      url = {http://www.cs.ucl.ac.uk/fileadmin/UCL-CS/research/Research_Notes/RN_14_07.pdf}
    }
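    Illustrative sketch (Python): a selection hyper-heuristic searches over low-level heuristics rather than solutions directly, favouring the ones that have delivered improvement so far. A minimal online version on a toy release-planning stand-in; the hyper-heuristic genetic algorithm in the report is considerably richer.

    import random

    def hyper_heuristic(solution, heuristics, evaluate, iters=1000):
        scores = {h: 1.0 for h in heuristics}  # credit per low-level heuristic
        best, best_f = solution, evaluate(solution)
        for _ in range(iters):
            h = random.choices(heuristics, weights=[scores[k] for k in heuristics])[0]
            cand = h(best)
            f = evaluate(cand)
            if f > best_f:
                scores[h] += f - best_f  # reward the heuristic that improved
                best, best_f = cand, f
        return best

    vals = [random.randint(1, 9) for _ in range(20)]  # hypothetical requirement values
    def flip_one(s):
        i = random.randrange(len(s)); s = list(s); s[i] ^= 1; return s
    def flip_two(s):
        return flip_one(flip_one(s))
    print(hyper_heuristic([0] * 20, [flip_one, flip_two],
                          lambda s: sum(v for v, b in zip(vals, s) if b)))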
    					
    2014.09.17 Sheeva Afshan Search-Based Generation of Human Readable Test Data and Its Impact on Human Oracle Costs 2013, March School: University of Sheffield   Phdthesis Testing and Debugging
    Abstract: The frequent non-availability of an automated oracle makes software testing a tedious manual task which involves the expensive performance of a human oracle. Despite this, the literature concerning automated test data generation has mainly focused on achieving structural code coverage, without simultaneously considering the reduction of human oracle cost. One source of human oracle cost is the unreadability of machine-generated test inputs, which can result in test scenarios that are hard to comprehend and time-consuming to verify. This is particularly apparent for string inputs consisting of arbitrary sequences of characters that are dissimilar to values a human tester would normally generate. The key objectives of this research are to investigate the impact of a seeded search-based test data generation approach on human oracle costs, and to propose a novel technique that can generate human readable test inputs for string data types. The first contribution of this thesis is the result of an empirical study in which human subjects are invited to manually evaluate test inputs generated using the seeded and unseeded search-based approaches for 14 open source case studies. For 9 of the case studies, the human manual evaluation was significantly less time-consuming for inputs produced using the seeded approach, while the accuracy of test input evaluation was also significantly improved in 2 cases. The second contribution is the introduction of a novel technique in which a natural language model is incorporated into the search-based process with the aim of improving the human readability of generated strings. A human study is performed in which test inputs generated using the technique for 17 open source case studies are evaluated manually by human subjects. For 10 of the case studies, the human manual evaluation was significantly less time-consuming for inputs produced using the language model. In addition, the results revealed that the accuracy of test input evaluation was also significantly enhanced for 3 of the case studies.
    BibTeX:
    @phdthesis{Afshan13,
      author = {Sheeva Afshan},
      title = {Search-Based Generation of Human Readable Test Data and Its Impact on Human Oracle Costs},
      school = {University of Sheffield},
      year = {2013},
      month = {March},
      url = {http://etheses.whiterose.ac.uk/4337/1/Thesis.pdf}
    }
    					
    2014.09.02 Sheeva Afshan, Phil McMinn & Mark Stevenson Evolving Readable String Test Inputs using a Natural Language Model to Reduce Human Oracle Cost 2013 Proceedings of IEEE 6th International Conference on Software Testing, Verification and Validation (ICST '13), pp. 352-361, Luxembourg Luxembourg, 18-22 March   Inproceedings Testing and Debugging
    Abstract: The frequent non-availability of an automated oracle means that, in practice, checking software behaviour is frequently a painstakingly manual task. Despite the high cost of human oracle involvement, there has been little research investigating how to make the role easier and less time-consuming. One source of human oracle cost is the inherent unreadability of machine-generated test inputs. In particular, automatically generated string inputs tend to be arbitrary sequences of characters that are awkward to read. This makes test cases hard to comprehend and time-consuming to check. In this paper we present an approach in which a natural language model is incorporated into a search-based input data generation process with the aim of improving the human readability of generated strings. We further present a human study of test inputs generated using the technique on 17 open source Java case studies. For 10 of the case studies, the participants recorded significantly faster times when evaluating inputs produced using the language model, with medium to large effect sizes 60% of the time. In addition, the study found that accuracy of test input evaluation was also significantly improved for 3 of the case studies.
    BibTeX:
    @inproceedings{AfshanMS13,
      author = {Sheeva Afshan and Phil McMinn and Mark Stevenson},
      title = {Evolving Readable String Test Inputs using a Natural Language Model to Reduce Human Oracle Cost},
      booktitle = {Proceedings of IEEE 6th International Conference on Software Testing, Verification and Validation (ICST '13)},
      publisher = {IEEE},
      year = {2013},
      pages = {352-361},
      address = {Luxembourg, Luxembourg},
      month = {18-22 March},
      doi = {http://dx.doi.org/10.1109/ICST.2013.11}
    }
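    Illustrative sketch (Python): the core idea is to fold a character-level language model into the string fitness so that probable (readable) character sequences score higher. A smoothed bigram model is the simplest such scorer; the corpus and test strings below are hypothetical.

    import math
    from collections import Counter

    def bigram_model(corpus):
        pairs = Counter(zip(corpus, corpus[1:]))
        unigrams = Counter(corpus)
        alphabet = len(set(corpus))
        def logprob(s):
            # add-one smoothed log-probability of the bigrams in s
            return sum(math.log((pairs[(a, b)] + 1) / (unigrams[a] + alphabet))
                       for a, b in zip(s, s[1:]))
        return logprob

    score = bigram_model("the quick brown fox jumps over the lazy dog " * 3)
    print(score("there") > score("xqzkv"))  # the readable string scores higher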
    					
    2014.09.17 Nassima Aleb & Samir Kechid Automatic Test Data Generation using a Genetic Algorithm 2013 Proceedings of the 13th International Conference on Computational Science and Its Applications (ICCSA '13), Vol. 7972, pp. 574-586, Ho Chi Minh City Vietnam, 24-27 June   Inproceedings Testing and Debugging
    Abstract: The use of metaheuristic search techniques for the automatic generation of test data has been a burgeoning interest for many researchers in recent years. Previous attempts to automate the test generation process have been limited, having been constrained by the size and complexity of software, and the basic fact that, in general, test data generation is an undecidable problem. Metaheuristic search techniques offer much promise in regard to these problems: they are high-level frameworks which utilize heuristics to seek solutions for combinatorial problems at a reasonable computational cost. In this paper, we present a new evolutionary approach to automated test data generation for structural testing. Our method presents several noteworthy features: it uses a newly defined program model that allows easy program manipulation. Furthermore, instead of assigning a single value to each input variable, we assign each input an interval. This representation has the advantage of first delimiting the input value and then refining the interval progressively. In this manner, the search space is explored more efficiently. We use an original fitness function that faithfully expresses individual quality, and we define a crossover operator that effectively improves individuals.
    BibTeX:
    @inproceedings{AlebK13,
      author = {Nassima Aleb and Samir Kechid},
      title = {Automatic Test Data Generation using a Genetic Algorithm},
      booktitle = {Proceedings of the 13th International Conference on Computational Science and Its Applications (ICCSA '13)},
      publisher = {Springer},
      year = {2013},
      volume = {7972},
      pages = {574-586},
      address = {Ho Chi Minh City, Vietnam},
      month = {24-27 June},
      doi = {http://dx.doi.org/10.1007/978-3-642-39643-4_41}
    }
    					
    2013.08.05 Aldeida Aleti, Barbora Buhnova, Lars Grunske, Anne Koziolek & Indika Meedeniya Software Architecture Optimization Methods: A Systematic Literature Review 2013 IEEE Transactions on Software Engineering, Vol. 39(5), pp. 658-683, May   Article Design Tools and Techniques
    Abstract: Due to significant industrial demands toward software systems with increasing complexity and challenging quality requirements, software architecture design has become an important development activity and the research domain is rapidly evolving. In the last decades, software architecture optimization methods, which aim to automate the search for an optimal architecture design with respect to a (set of) quality attribute(s), have proliferated. However, the reported results are fragmented over different research communities, multiple system domains, and multiple quality attributes. To integrate the existing research results, we have performed a systematic literature review and analyzed the results of 188 research papers from the different research communities. Based on this survey, a taxonomy has been created which is used to classify the existing research. Furthermore, the systematic analysis of the research literature provided in this review aims to help the research community in consolidating the existing research efforts and deriving a research agenda for future developments.
    BibTeX:
    @article{AletiBGKM13,
      author = {Aldeida Aleti and Barbora Buhnova and Lars Grunske and Anne Koziolek and Indika Meedeniya},
      title = {Software Architecture Optimization Methods: A Systematic Literature Review},
      journal = {IEEE Transactions on Software Engineering},
      year = {2013},
      volume = {39},
      number = {5},
      pages = {658-683},
      month = {May},
      doi = {http://dx.doi.org/10.1109/TSE.2012.64}
    }
    					
    2012.10.25 Shaukat Ali, Muhammad Zohaib Iqbal, Andrea Arcuri & Lionel C. Briand Generating Test Data from OCL Constraints with Search Techniques 2013 IEEE Transactions on Software Engineering, Vol. 39(10), pp. 1376-1402, October   Article Testing and Debugging
    Abstract: Model-based testing (MBT) aims at automated, scalable, and systematic testing solutions for complex industrial software systems. To increase chances of adoption in industrial contexts, software systems can be modeled using well-established standards such as the Unified Modeling Language (UML) and the Object Constraint Language (OCL). Given that test data generation is one of the major challenges to automate MBT, we focus on test data generation from OCL constraints in this paper. Though search-based software testing has been applied to test data generation for white-box testing (e.g., branch coverage), its application to the MBT of industrial software systems has been limited. In this paper, we propose a set of search heuristics targeted to OCL constraints to guide test data generation and automate MBT in industrial applications. We evaluate these heuristics for three search algorithms: Genetic Algorithm, (1+1) Evolutionary Algorithm, and Alternating Variable Method. We empirically evaluate our heuristics using complex artificial problems, followed by empirical analyses of the feasibility of our approach on one industrial system in the context of robustness testing. Our approach is also compared with the most widely referenced OCL solver (UMLtoCSP) in the literature and shows to be significantly more efficient.
    BibTeX:
    @article{AliIAB13,
      author = {Shaukat Ali and Muhammad Zohaib Iqbal and Andrea Arcuri and Lionel C. Briand},
      title = {Generating Test Data from OCL Constraints with Search Techniques},
      journal = {IEEE Transactions on Software Engineering},
      year = {2013},
      volume = {39},
      number = {10},
      pages = {1376-1402},
      month = {October},
      doi = {http://dx.doi.org/10.1109/TSE.2013.17}
    }
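    Illustrative sketch (Python): the heuristics in this line of work build on standard branch-distance functions, where each relational predicate yields a distance that is zero when satisfied and grows with how badly it is violated, and conjunction/disjunction combine as sum/min. A simplified numeric version; the paper extends such distances to OCL's collection and enumeration constructs.

    K = 1.0  # small positive constant for strict relations

    def d_eq(a, b): return abs(a - b)                      # a = b
    def d_lt(a, b): return 0.0 if a < b else (a - b) + K   # a < b
    def d_le(a, b): return 0.0 if a <= b else (a - b)      # a <= b
    def d_and(d1, d2): return d1 + d2                      # conjunction
    def d_or(d1, d2): return min(d1, d2)                   # disjunction

    # guide the search toward x satisfying (x = 10) or (x < 3 and x >= 0)
    x = 7
    print(d_or(d_eq(x, 10), d_and(d_lt(x, 3), d_le(0, x))))  # 3.0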
    					
    2013.08.05 Faisal Alrebeish & Rami Bahsoon Using Portfolio Theory to Diversify the Dynamic Allocation of Web Services in the Cloud 2013 Proceedings of the 15th Annual Conference Companion on Genetic and Evolutionary Computation (GECCO '13), pp. 197-198, Amsterdam The Netherlands, 6-10 July   Inproceedings Management
    Abstract: In this paper, we view the Cloud as a marketplace for trading instances of Web Services, which can be bought or leased by web applications. Applications can buy diversity by selecting web services from multiple cloud sellers in a cloud-based market. We argue that by diversifying the selection, we can improve the dependability of the application and reduce the risks associated with Service Level Agreement (SLA) violations. We propose a novel dynamic adaptive search-based software engineering approach, which uses portfolio theory to construct a diversified portfolio of web service instances traded from multiple Cloud providers. The approach systematically evaluates the Quality of Service (QoS) and risks of the portfolio, compares it to the optimal traded portfolio at a given time, and dynamically decides on a new portfolio, adapting the application accordingly.
    BibTeX:
    @inproceedings{AlrebeishB13,
      author = {Faisal Alrebeish and Rami Bahsoon},
      title = {Using Portfolio Theory to Diversify the Dynamic Allocation of Web Services in the Cloud},
      booktitle = {Proceedings of the 15th Annual Conference Companion on Genetic and Evolutionary Computation (GECCO '13)},
      publisher = {ACM},
      year = {2013},
      pages = {197-198},
      address = {Amsterdam, The Netherlands},
      month = {6-10 July},
      doi = {http://dx.doi.org/10.1145/2464576.2464674}
    }
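    Illustrative sketch (Python): the portfolio-theory core is that a weighted mix of service instances has expected QoS sum(w_i * q_i) and risk w^T Sigma w, so diversifying across providers with weakly correlated failures lowers risk. A numeric toy, with all figures hypothetical:

    def portfolio_qos(weights, expected):
        return sum(w * q for w, q in zip(weights, expected))

    def portfolio_risk(weights, cov):
        # Markowitz portfolio variance: w^T * Sigma * w
        n = len(weights)
        return sum(weights[i] * cov[i][j] * weights[j]
                   for i in range(n) for j in range(n))

    expected = [0.95, 0.90, 0.85]           # e.g. expected availability per provider
    cov = [[0.010, 0.002, 0.001],
           [0.002, 0.020, 0.003],
           [0.001, 0.003, 0.030]]
    w = [0.5, 0.3, 0.2]
    print(portfolio_qos(w, expected), portfolio_risk(w, cov))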
    					
    2014.09.03 Mohammad A. Alshraideh, Basel A. Mahafzah, Hamzeh S. Eyal Salman & Imad Salah Using Genetic Algorithm as Test Data Generator for Stored PL/SQL Program Units 2013 Journal of Software Engineering and Applications, Vol. 6, pp. 65-73   Article Testing and Debugging
    Abstract: PL/SQL is the most common language for ORACLE database applications. It allows the developer to create stored program units (Procedures, Functions, and Packages) to improve software reusability and hide the complexity of the execution of a specific operation behind a name. It also acts as an interface between the SQL database and DEVELOPER. Therefore, it is important to test these modules, which consist of procedures and functions. In this paper, a new genetic algorithm (GA) is used as a search technique to find the test data required, according to branch criteria, to test stored PL/SQL program units. The experimental results show that this was not fully achieved: the test target in some branches is not reached, and the coverage percentage is 98%. A problem arises when the target branch depends on data retrieved from tables; in this case, the GA is not able to generate test cases for that branch.
    BibTeX:
    @article{AlshraidehMSS13,
      author = {Mohammad A. Alshraideh and Basel A. Mahafzah and Hamzeh S. Eyal Salman and Imad Salah},
      title = {Using Genetic Algorithm as Test Data Generator for Stored PL/SQL Program Units},
      journal = {Journal of Software Engineering and Applications},
      year = {2013},
      volume = {6},
      pages = {65-73},
      doi = {http://dx.doi.org/10.4236/jsea.2013.62011}
    }
    					
    2013.06.28 Saswat Anand, Edmund Burke, Tsong Yueh Chen, John A. Clark, Myra B. Cohen, Wolfgang Grieskamp, Mark Harman, Mary Jean Harrold & Phil McMinn An Orchestrated Survey on Automated Software Test Case Generation 2013 Journal of Systems and Software, Vol. 86(8), pp. 1978-2001, August   Article Testing and Debugging
    Abstract: Test case generation is among the most labour-intensive tasks in software testing. It also has a strong impact on the effectiveness and efficiency of software testing. For these reasons, it has been one of the most active research topics in software testing for several decades, resulting in many different approaches and tools. This paper presents an orchestrated survey of the most prominent techniques for automatic generation of software test cases, reviewed in self-standing sections. The techniques presented include: (a) structural testing using symbolic execution, (b) model-based testing, (c) combinatorial testing, (d) random testing and its variant of adaptive random testing, and (e) search-based testing. Each section is contributed by world-renowned active researchers on the technique, and briefly covers the basic ideas underlying the method, the current state of the art, a discussion of the open research problems, and a perspective of the future development of the approach. As a whole, the paper aims at giving an introductory, up-to-date and (relatively) short overview of research in automatic test case generation, while ensuring a comprehensive and authoritative treatment.
    BibTeX:
    @article{AnandBCCCGHHM13,
      author = {Saswat Anand and Edmund Burke and Tsong Yueh Chen and John A. Clark and Myra B. Cohen and Wolfgang Grieskamp and Mark Harman and Mary Jean Harrold and Phil McMinn},
      title = {An Orchestrated Survey on Automated Software Test Case Generation},
      journal = {Journal of Systems and Software},
      year = {2013},
      volume = {86},
      number = {8},
      pages = {1978-2001},
      month = {August},
      doi = {http://dx.doi.org/10.1016/j.jss.2013.02.061}
    }
    					
    2014.09.22 Sandro Santos Andrade & Raimundo José de A. Macêdo Toward Systematic Conveying of Architecture Design Knowledge for Self-Adaptive Systems 2013 Proceedings of the 7th International Conference on Self-Adaptation and Self-Organizing Systems Workshops (SASOW '13), pp. 23-24, Philadelphia PA USA, 9-13 September   Inproceedings
    Abstract: With the increasing complexity and stringent requirements of modern large-scale distributed systems, well-structured representations of software design knowledge arise as a promising approach to keep delivering high quality products in a timely and cost-effective way. Although domain-specific architecture styles and reference architectures help in conveying such design knowledge, the lack of systematic and structured representations makes it hard to grasp design alternatives promptly and support well-informed trade-off analysis. This short paper presents DuSE-MT - a supporting tool for the DuSE approach to architectural design of self-adaptive systems. DuSE-MT implements: i) a generic metamodel for systematic representation of design spaces (DuSE), which enables automated architecture design and analysis; ii) a specific design space for the self-adaptive systems domain (SA:DuSE); iii) a set of metrics that capture quality attributes of the resulting self-adaptive architectures; and iv) a multi-objective optimization approach that explicitly elicits trade-off decisions by finding a set of Pareto-optimal candidate architectures. Our approach has been evaluated in a case study involving self-adaptive cloud-based services.
    BibTeX:
    @inproceedings{AndradeM13,
      author = {Sandro Santos Andrade and Raimundo José de A. Macêdo},
      title = {Toward Systematic Conveying of Architecture Design Knowledge for Self-Adaptive Systems},
      booktitle = {Proceedings of the 7th International Conference on Self-Adaptation and Self-Organizing Systems Workshops (SASOW '13)},
      publisher = {IEEE},
      year = {2013},
      pages = {23-24},
      address = {Philadelphia, PA, USA},
      month = {9-13 September},
      doi = {http://dx.doi.org/10.1109/SASOW.2013.13}
    }
    					
    2016.03.08 Sandro Santos Andrade & Raimundo José de A. Macêdo A Search-Based Approach for Architectural Design of Feedback Control Concerns in Self-Adaptive Systems 2013 Proceedings of IEEE 7th International Conference on Self-Adaptive and Self-Organizing Systems (SASO '13), pp. 61-70, Philadelphia USA, 9-13 September   Inproceedings
    Abstract: A number of approaches for endowing systems with self-adaptive behavior have been proposed over the past years. Among such efforts, architecture-centric solutions with explicit representation of feedback loops have currently been advocated as a promising research landscape. Although noteworthy results have been achieved on some fronts, the lack of systematic representations of architectural knowledge and effective support for well-informed trade-off decisions still poses significant challenges when designing modern self-adaptive systems. In this paper, we present a systematic and flexible representation of design dimensions related to feedback control concerns, a set of metrics which estimate quality attributes of resulting automated designs, and a search-based approach to find a set of Pareto-optimal candidate architectures. The proposed approach has been fully implemented in a supporting tool and a case study with a self-adaptive cloud computing environment has been undertaken. The results indicate that our approach effectively captures the most prominent degrees of freedom when designing self-adaptive systems, helps to elicit effective subtle designs, and provides useful support for early analysis of trade-off decisions.
    BibTeX:
    @inproceedings{AndradeM13b,
      author = {Sandro Santos Andrade and Raimundo José de A. Macêdo},
      title = {A Search-Based Approach for Architectural Design of Feedback Control Concerns in Self-Adaptive Systems},
      booktitle = {Proceedings of IEEE 7th International Conference on Self-Adaptive and Self-Organizing Systems (SASO '13)},
      publisher = {IEEE},
      year = {2013},
      pages = {61-70},
      address = {Philadelphia, USA},
      month = {9-13 September},
      doi = {http://dx.doi.org/10.1109/SASO.2013.42}
    }
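    Illustrative sketch (Python): the search returns a Pareto front: candidate architectures not dominated by any other, where domination means no worse in every objective and strictly better in at least one (maximisation assumed below). A minimal filter over hypothetical objective vectors:

    def dominates(a, b):
        # a dominates b: at least as good everywhere, better somewhere
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

    def pareto_front(candidates):
        return [c for c in candidates
                if not any(dominates(o, c) for o in candidates if o is not c)]

    # objective vectors, e.g. (adaptability, -cost) of candidate architectures
    print(pareto_front([(3, -2), (2, -1), (1, -3), (3, -3)]))  # keeps the first two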
    					
    2012.03.06 Andrea Arcuri It Really Does Matter How You Normalize the Branch Distance in Search-based Software Testing 2013 Software Testing, Verification and Reliability, Vol. 23(2), pp. 119-147, March   Article Testing and Debugging
    Abstract: The use of search algorithms for test data generation has seen many successful results. For structural criteria like branch coverage, heuristics have been designed to help the search. The most common heuristic is the use of approach level (usually represented with an integer) to reward test cases whose executions get close (in the control flow graph) to the target branch. To solve the constraints of the predicates in the control flow graph, the branch distance is commonly employed. These two measures are linearly combined. Since the approach level is more important, the branch distance is normalized, often in the range [0, 1]. In this paper, different types of normalizing functions are analyzed. The analyses show that the one that is usually employed in the literature has several flaws. The paper presents a different normalizing function that is very simple and does not suffer from these limitations. Empirical and analytical analyses are carried out to compare these two functions. In particular, their effect is studied on commonly used search algorithms, such as Hill Climbing, Simulated Annealing and Genetic Algorithms.
    BibTeX:
    @article{Arcuri13,
      author = {Andrea Arcuri},
      title = {It Really Does Matter How You Normalize the Branch Distance in Search-based Software Testing},
      journal = {Software Testing, Verification and Reliability},
      year = {2013},
      volume = {23},
      number = {2},
      pages = {119-147},
      month = {March},
      doi = {http://dx.doi.org/10.1002/stvr.457}
    }
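    Illustrative sketch (Python): the two normalising functions compared in the paper. The commonly used form is 1 - beta^(-x) (often with beta = 1.001); the simple alternative the paper advocates is x / (x + beta) with beta = 1, which is monotone in x and always in [0, 1).

    def normalise_common(x, beta=1.001):
        # widely used normalisation; the paper shows its flaws, e.g. a
        # near-flat gradient and sensitivity to the choice of beta
        return 1.0 - beta ** (-x)

    def normalise_proposed(x, beta=1.0):
        # the simple alternative analysed in the paper
        return x / (x + beta)

    for x in (0, 1, 10, 1000):
        print(x, normalise_common(x), normalise_proposed(x))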
    					
    2013.08.05 Andrea Arcuri & Gordon Fraser Parameter Tuning or Default Values? An Empirical Investigation in Search-based Software Engineering 2013 Empirical Software Engineering, Vol. 18(3), pp. 594-623, June   Article Testing and Debugging
    Abstract: Many software engineering problems have been addressed with search algorithms. Search algorithms usually depend on several parameters (e.g., population size and crossover rate in genetic algorithms), and the choice of these parameters can have an impact on the performance of the algorithm. It has been formally proven in the No Free Lunch theorem that it is impossible to tune a search algorithm such that it will have optimal settings for all possible problems. So, how to properly set the parameters of a search algorithm for a given software engineering problem? In this paper, we carry out the largest empirical analysis so far on parameter tuning in search-based software engineering. More than one million experiments were carried out and statistically analyzed in the context of test data generation for object-oriented software using the EvoSuite tool. Results show that tuning does indeed have impact on the performance of a search algorithm. But, at least in the context of test data generation, it does not seem easy to find good settings that significantly outperform the “default” values suggested in the literature. This has very practical value for both researchers (e.g., when different techniques are compared) and practitioners. Using “default” values is a reasonable and justified choice, whereas parameter tuning is a long and expensive process that might or might not pay off in the end.
    BibTeX:
    @article{ArcuriF13,
      author = {Andrea Arcuri and Gordon Fraser},
      title = {Parameter Tuning or Default Values? An Empirical Investigation in Search-based Software Engineering},
      journal = {Empirical Software Engineering},
      year = {2013},
      volume = {18},
      number = {3},
      pages = {594-623},
      month = {June},
      doi = {http://dx.doi.org/10.1007/s10664-013-9249-9}
    }
    					
    2013.06.28 Nesa Asoudeh & Yvan Labiche A Multi-Objective Genetic Algorithm for Generating Test Suites from Extended Finite State Machines 2013 Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13), Vol. 8084, St. Petersburg, Russia, 24-26 August   Inproceedings Testing and Debugging
    Abstract: We propose a test suite generation technique from extended finite state machines based on a genetic algorithm that fulfills multiple (conflicting) objectives. We aim at maximizing coverage and feasibility of a set of test cases while minimizing similarity between these cases and minimizing overall cost.
    BibTeX:
    @inproceedings{AsoudehL13,
      author = {Nesa Asoudeh and Yvan Labiche},
      title = {A Multi-Objective Genetic Algorithm for Generating Test Suites from Extended Finite State Machines},
      booktitle = {Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13)},
      publisher = {Springer},
      year = {2013},
      volume = {8084},
      address = {St. Petersburg, Russia},
      month = {24-26 August},
      doi = {http://dx.doi.org/10.1007/978-3-642-39742-4_26}
    }
    					
    2013.06.28 Wesley Klewerton Guez Assunção, Thelma Elita Colanzi, Silvia Regina Vergilio & Aurora Pozo On the Application of the Multi-Evolutionary and Coupling-Based Approach with Different Aspect-Class Integration Testing Strategies 2013 Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13), Vol. 8084, pp. 19-33, St. Petersburg, Russia, 24-26 August   Inproceedings Testing and Debugging
    Abstract: During the integration test of aspect-oriented software, it is necessary to determine an aspect-class integration and test order, associated with a minimal possible stubbing cost. To determine such optimal orders, an approach based on multi-objective evolutionary algorithms was proposed. It generates a set of good orders with a balanced compromise among different measures and factors that may influence the stubbing process. However, in the literature there are different strategies proposed for aspect-class integration. For instance, the classes and aspects can be integrated in a combined strategy, or in an incremental way. The few works evaluating such strategies do not consider the multi-objective, coupling-based approach. Given the importance of such an approach to reducing testing effort, in this work we conduct an empirical study and present results from the application of the multi-objective approach with both mentioned strategies. The approach is implemented with four coupling measures and three evolutionary algorithms that are also evaluated: NSGA-II, SPEA2 and PAES. We observe that different strategies imply different ways of exploring the search space. Moreover, other results related to the practical use of both strategies are presented.
    BibTeX:
    @inproceedings{AssuncaoCVP13,
      author = {Wesley Klewerton Guez Assunção and Thelma Elita Colanzi and Silvia Regina Vergilio and Aurora Pozo},
      title = {On the Application of the Multi-Evolutionary and Coupling-Based Approach with Different Aspect-Class Integration Testing Strategies},
      booktitle = {Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13)},
      publisher = {Springer},
      year = {2013},
      volume = {8084},
      pages = {19-33},
      address = {St. Petersburg, Russia},
      month = {24-26 August},
      doi = {http://dx.doi.org/10.1007/978-3-642-39742-4_4}
    }
    					
    2013.06.28 Colin Atkinson, Marcus Kessel & Marcus Schumacher On the Synergy between Search-Based and Search-Driven Software Engineering 2013 Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13), Vol. 8084, pp. 239-244, St. Petersburg, Russia, 24-26 August   Inproceedings
    Abstract: Two notions of “search” can be used to enhance the software engineering process — the notion of searching for optimal architectures/designs using AI-motivated optimization algorithms, and the notion of searching for reusable components using query-driven search engines. To date these possibilities have largely been explored separately within different communities. In this paper we suggest there is a synergy between the two approaches, and that a hybrid approach which integrates their strengths could be more useful and powerful than either approach individually. After first characterizing the two approaches we discuss some of the opportunities and challenges involved in their synergetic integration.
    BibTeX:
    @inproceedings{AtkinsonKS13,
      author = {Colin Atkinson and Marcus Kessel and Marcus Schumacher},
      title = {On the Synergy between Search-Based and Search-Driven Software Engineering},
      booktitle = {Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13)},
      publisher = {Springer},
      year = {2013},
      volume = {8084},
      pages = {239-244},
      address = {St. Petersburg, Russia},
      month = {24-26 August},
      doi = {http://dx.doi.org/10.1007/978-3-642-39742-4_18}
    }
    					
    2013.06.28 Priti Bansal, Sangeeta Sabharwal, Shreya Malik, Vikhyat Arora & Vineet Kumar An Approach to Test Set Generation for Pair-wise Testing using Genetic Algorithms 2013 Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13), Vol. 8084, St. Petersburg, Russia, 24-26 August   Inproceedings Testing and Debugging
    Abstract: Instead of performing exhaustive testing that tests all possible combinations of input parameter values of a system, it is better to switch to a more efficient and effective testing technique, i.e., pair-wise testing. In pair-wise testing, test cases are designed to cover all possible combinations of each pair of input parameter values. It has been shown that the problem of finding the minimum set of test cases for pair-wise testing is an NP-complete problem. In this paper we apply a genetic algorithm, a metaheuristic search algorithm, to find an optimal solution to the pair-wise test set generation problem. We present a method to generate the initial population using Hamming distance and an algorithm to find crossover points for combining individuals selected for reproduction. We describe the implementation of the proposed approach by extending the open source tool PWiseGen and evaluate the effectiveness of the proposed approach. Empirical results indicate that our approach can generate test sets with higher fitness levels by covering more pairs of input parameter values.
    BibTeX:
    @inproceedings{BansalSMAK13,
      author = {Priti Bansal and Sangeeta Sabharwal and Shreya Malik and Vikhyat Arora and Vineet Kumar},
      title = {An Approach to Test Set Generation for Pair-wise Testing using Genetic Algorithms},
      booktitle = {Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13)},
      publisher = {Springer},
      year = {2013},
      volume = {8084},
      address = {St. Petersburg, Russia},
      month = {24-26 August},
      doi = {http://dx.doi.org/10.1007/978-3-642-39742-4_27}
    }
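    As a rough illustration of the fitness signal behind pair-wise test set generation (a generic Python sketch with our own names, not PWiseGen's actual implementation), a candidate test set can be scored by the number of distinct pairs of parameter values it covers:
      from itertools import combinations

      def covered_pairs(test_set):
          # Collect the distinct ((i, vi), (j, vj)) pairs, with i < j, covered
          # by a test set; each test case is a tuple of parameter values.
          pairs = set()
          for case in test_set:
              for (i, vi), (j, vj) in combinations(enumerate(case), 2):
                  pairs.add(((i, vi), (j, vj)))
          return pairs

      def pairwise_fitness(test_set):
          # GA fitness: the more pairs covered, the fitter the individual.
          return len(covered_pairs(test_set))

      # Two binary parameters: these four cases cover all four value pairs.
      print(pairwise_fitness([(0, 0), (0, 1), (1, 0), (1, 1)]))  # -> 4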
    					
    2013.06.28 Márcio de Oliveira Barros An Experimental Study on Incremental Search-Based Software Engineering 2013 Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13), Vol. 8084, pp. 34-49, St. Petersburg, Russia, 24-26 August   Inproceedings
    Abstract: Since its inception, SBSE has supported many different software engineering activities, including some which aim at improving or correcting existing systems. In such cases, search results may propose changes to the organization of the systems. Extensive changes may be inconvenient for developers, who maintain a mental model about the state of the system and use this knowledge to be productive in their daily business. Thus, a balance between optimization objectives and their impact on system structure may be pursued. In this paper, we introduce incremental search-based software engineering, an extension to SBSE which suggests optimizing a system through a sequence of restricted search turns, each limited to a maximum number of changes, so that developers can become aware of these changes before a new turn is enacted. We report on a study addressing the cost of breaking a search into a sequence of restricted turns and conclude that, at least for the selected problem and instances, there is indeed a huge penalty in doing so.
    BibTeX:
    @inproceedings{Barros13,
      author = {Márcio de Oliveira Barros},
      title = {An Experimental Study on Incremental Search-Based Software Engineering},
      booktitle = {Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13)},
      publisher = {Springer},
      year = {2013},
      volume = {8084},
      pages = {34-49},
      address = {St. Petersburg, Russia},
      month = {24-26 August},
      doi = {http://dx.doi.org/10.1007/978-3-642-39742-4_5}
    }
    					
    2013.08.05 Rodrigo C. Barros, Márcio P. Basgalupp, Ricardo Cerri, Tiago S. da Silva & André C.P.L.F. de Carvalho A Grammatical Evolution Approach for Software Effort Estimation 2013 Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation (GECCO '13), pp. 1413-1420, Amsterdam, The Netherlands, 6-10 July   Inproceedings Management
    Abstract: Software effort estimation is an important task within software engineering. It is widely used for planning and monitoring software project development as a means to deliver the product on time and within budget. Several approaches for generating predictive models from collected metrics have been proposed throughout the years. Machine learning algorithms, in particular, have been widely employed for this task, bearing in mind their capability of providing accurate predictive models for the analysis of project stakeholders. In this paper, we propose a grammatical evolution approach for software metrics estimation. Our novel algorithm, namely SEEGE, is empirically evaluated on public project data sets, and we compare its performance with state-of-the-art machine learning algorithms such as support vector machines for regression and artificial neural networks, as well as with popular linear regression. Results show that SEEGE outperforms the other algorithms considering three different evaluation measures, clearly indicating its effectiveness for the effort estimation task.
    BibTeX:
    @inproceedings{BarrosBCSC13,
      author = {Rodrigo C. Barros and Márcio P. Basgalupp and Ricardo Cerri and Tiago S. da Silva and André C.P.L.F. de Carvalho},
      title = {A Grammatical Evolution Approach for Software Effort Estimation},
      booktitle = {Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation (GECCO '13)},
      publisher = {ACM},
      year = {2013},
      pages = {1413-1420},
      address = {Amsterdam, The Netherlands},
      month = {6-10 July},
      doi = {http://dx.doi.org/10.1145/2463372.2463546}
    }
    					
    2013.06.28 Márcio de Oliveira Barros & Fábio de Almeida Farzat What Can a Big Program Teach Us about Optimization? 2013 Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13), Vol. 8084, St. Petersburg, Russia, 24-26 August   Inproceedings
    Abstract: In this paper we report on the evolution of Apache Ant, a build automation tool developed in Java. We observed a typical case of architectural mismatch in this system: its original simple design was lost due to maintenance and addition of new features. We have applied SBSE techniques to determine whether the search would be able to recover at least parts of the original design, in a metrics-based optimization. We observed that current SBSE techniques produce complex designs, but they also allow us to study the limitations of present design metrics. In the end, we propose a new research perspective joining software clustering and refactoring selection to improve software evolution.
    BibTeX:
    @inproceedings{Barrosd13,
      author = {Márcio de Oliveira Barros and Fábio de Almeida Farzat},
      title = {What Can a Big Program Teach Us about Optimization?},
      booktitle = {Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13)},
      publisher = {Springer},
      year = {2013},
      volume = {8084},
      address = {St. Petersburg, Russia},
      month = {24-26 August},
      doi = {http://dx.doi.org/10.1007/978-3-642-39742-4_24}
    }
    					
    2013.08.05 Karel P. Bergmann & Jörg Denzinger Testing of precision agricultural networks for adversary-induced problems 2013 Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation (GECCO '13), pp. 1421-1428, Amsterdam, The Netherlands, 6-10 July   Inproceedings Testing and Debugging
    Abstract: We present incremental adaptive corrective learning as a method to test ad-hoc wireless network protocols and applications. This learning method allows for the evolution of complex, variable-length, cooperative behaviour patterns for adversarial agents acting in such networks. We used the method to test precision agriculture sensor networks for vulnerabilities which could be exploited by attackers to significantly increase power consumption within the network. Our technique was able to find behaviours which increased power consumption by at least a factor of 3.6 for a node in each of the tested scenarios.
    BibTeX:
    @inproceedings{BergmannD13,
      author = {Karel P. Bergmann and Jörg Denzinger},
      title = {Testing of precision agricultural networks for adversary-induced problems},
      booktitle = {Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation (GECCO '13)},
      publisher = {ACM},
      year = {2013},
      pages = {1421-1428},
      address = {Amsterdam, The Netherlands},
      month = {6-10 July},
      doi = {http://dx.doi.org/10.1145/2463372.2463544}
    }
    					
    2013.06.28 Mohamed Boussaa, Wael Kessentini, Marouane Kessentini, Slim Bechikh & Soukeina Ben Chikha Competitive Coevolutionary Code-Smells Detection 2013 Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13), Vol. 8084, pp. 50-65, St. Petersburg, Russia, 24-26 August   Inproceedings Distribution and Maintenance
    Abstract: Software bad-smells, also called design anomalies, refer to design situations that may adversely affect the maintenance of software. Bad-smells are unlikely to cause failures directly, but may do so indirectly. In general, they make a system difficult to change, which may in turn introduce bugs. Although these bad practices are sometimes unavoidable, they should in general be fixed by the development teams and removed from their code base as early as possible. In this paper, we propose, for the first time, the use of competitive coevolutionary search for the code-smells detection problem. We believe that such an approach to code-smells detection is attractive because it allows combining the generation of code-smell examples with the production of detection rules based on quality metrics. The main idea is to evolve two populations simultaneously, where the first one generates a set of detection rules (combinations of quality metrics) that maximizes the coverage of a base of code-smell examples, and the second one maximizes the number of generated “artificial” code-smells that are not covered by solutions (detection rules) of the first population. The statistical analysis of the obtained results shows that our proposed approach is promising when compared to two single population-based metaheuristics on a variety of benchmarks.
    BibTeX:
    @inproceedings{BoussaaKKBC13,
      author = {Mohamed Boussaa and Wael Kessentini and Marouane Kessentini and Slim Bechikh and Soukeina Ben Chikha},
      title = {Competitive Coevolutionary Code-Smells Detection},
      booktitle = {Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13)},
      publisher = {Springer},
      year = {2013},
      volume = {8084},
      pages = {50-65},
      address = {St. Petersburg, Russia},
      month = {24-26 August},
      doi = {http://dx.doi.org/10.1007/978-3-642-39742-4_6}
    }
    					
    2013.08.05 Mustafa Bozkurt Cost-aware Pareto Optimal Test Suite Minimisation for Service-centric Systems 2013 Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation (GECCO '13), pp. 1429-1436, Amsterdam, The Netherlands, 6-10 July   Inproceedings Testing and Debugging
    Abstract: Runtime testing cost caused by service invocations is considered one of the major limitations in Service-centric System Testing (ScST). Unfortunately, most of the existing work cannot achieve cost reduction at runtime as it performs offline testing. In this paper, we introduce a novel cost-aware Pareto optimal test suite minimisation approach for ScST aimed at reducing runtime testing cost. In experimental analysis, the proposed approach achieved reductions between 69% and 98.6% in the monetary cost of service invocations while retaining test suite coverage. The results also provided evidence for the effectiveness of the selected algorithm HNSGA-II over the two commonly used algorithms: Greedy and NSGA-II.
    BibTeX:
    @inproceedings{Bozkurt13,
      author = {Mustafa Bozkurt},
      title = {Cost-aware Pareto Optimal Test Suite Minimisation for Service-centric Systems},
      booktitle = {Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation (GECCO '13)},
      publisher = {ACM},
      year = {2013},
      pages = {1429-1436},
      address = {Amsterdam, The Netherlands},
      month = {6-10 July},
      doi = {http://dx.doi.org/10.1145/2463372.2463551}
    }
    					
    2014.09.17 Mustafa Bozkurt Automated Realistic Test Input Generation and Cost Reduction in Service-centric System Testing 2013 School: University College London, June   Phdthesis Testing and Debugging
    Abstract: Service-centric System Testing (ScST) is more challenging than testing traditional software due to the complexity of service technologies and the limitations that are imposed by the SOA environment. One of the most important problems in ScST is the problem of realistic test data generation. Realistic test data is often generated manually or using an existing source, thus it is hard to automate and laborious to generate. One of the limitations that makes ScST challenging is the cost associated with invoking services during the testing process. This thesis aims to provide solutions to the aforementioned problems: automated realistic input generation and cost reduction in ScST. To address automation in realistic test data generation, the concept of Service-centric Test Data Generation (ScTDG) is presented, in which existing services are used as realistic data sources. ScTDG minimises the need for tester input and dependence on existing data sources by automatically generating service compositions that can generate the required test data. In experimental analysis, our approach achieved between 93% and 100% success rates in generating realistic data while state-of-the-art automated test data generation achieved only between 2% and 34%. The thesis addresses cost concerns at the test data generation level by enabling data source selection in ScTDG. Source selection in ScTDG has many dimensions, such as cost, reliability and availability. This thesis formulates this problem as an optimisation problem and presents a multi-objective characterisation of service selection in ScTDG, aiming to reduce the cost of test data generation. A cost-aware Pareto optimal test suite minimisation approach addressing testing cost concerns during test execution is also presented. The approach adapts traditional multi-objective minimisation approaches to the ScST domain by formulating ScST concerns, such as invocation cost and test case reliability. In experimental analysis, the approach achieved reductions between 69% and 98.6% in the monetary cost of service invocations during testing.
    BibTeX:
    @phdthesis{Bozkurt13b,
      author = {Mustafa Bozkurt},
      title = {Automated Realistic Test Input Generation and Cost Reduction in Service-centric System Testing},
      school = {University College London},
      year = {2013},
      month = {June},
      url = {http://discovery.ucl.ac.uk/1400300/1/m.bozkurt_thesis_final.pdf}
    }
    					
    2015.11.06 Mustafa Bozkurt & Yuanyuan Zhang Customer-centric Optimal Software Release Problem in Cloud 2013 (RN/13/16), September   Techreport
    Abstract: As the number of cloud providers increases, competition forces providers to reduce their prices to stay competitive. A recent testimony to this claim is the “price wars” between the cloud storage service providers [1]. In such competitive business environments, it is safe to assume that the providers will want to know more about the needs of their customers in order to shape their services based on customer demand. At the same time, they will try to find ways of reducing the cost of their services in order to maintain profitability. This problem is an instance of the optimal software release problem, which is NP-hard [2]. In this paper, we propose an approach aimed at helping service providers to optimise their service configurations based on user demand. Unlike the existing work, our approach does not just provide n optimal configurations but rather allows the decision maker to discover trade-offs between service cost and customer satisfaction by comparing a set of Pareto optimal solutions. Our approach can help providers to discover the configurations that maximise profit while providing a high level of customer satisfaction.
    BibTeX:
    @techreport{BozkurtZ13,
      author = {Mustafa Bozkurt and Yuanyuan Zhang},
      title = {Customer-centric Optimal Software Release Problem in Cloud},
      year = {2013},
      number = {RN/13/16},
      month = {September},
      url = {http://www.cs.ucl.ac.uk/fileadmin/UCL-CS/research/Research_Notes/RN_13_16.pdf}
    }
    					
    2014.05.27 Jeremy S. Bradbury, David Kelk & Mark Green Effectively using Search-based Software Engineering Techniques within Model Checking and its Applications 2013 Proceedings of the 1st International Workshop on Combining Modelling and Search-Based Software Engineering (CMSBSE '13), pp. 67-70, San Francisco, CA, USA, 20-20 May   Inproceedings
    Abstract: In this position paper, we affirm that there are synergies to be gained by using search-based techniques within software model checking. We will show from the literature how metaheuristic search-based techniques can augment both the model checking process and its applications. We will provide evidence to support this assertion in the form of existing research work and open problems that may benefit from combining Search-Based Software Engineering (SBSE) techniques and software model checking.
    BibTeX:
    @inproceedings{BradburyKG13,
      author = {Jeremy S. Bradbury and David Kelk and Mark Green},
      title = {Effectively using Search-based Software Engineering Techniques within Model Checking and its Applications},
      booktitle = {Proceedings of the 1st International Workshop on Combining Modelling and Search-Based Software Engineering (CMSBSE '13)},
      publisher = {IEEE},
      year = {2013},
      pages = {67-70},
      address = {San Francisco, CA, USA},
      month = {20-20 May},
      doi = {http://dx.doi.org/10.1109/CMSBSE.2013.6605713}
    }
    					
    2013.06.28 Lionel Briand, Yvan Labiche & Kathy Chen A Multi-Objective Genetic Algorithm to Rank State-Based Test Cases 2013 Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13), Vol. 8084, pp. 66-80, St. Petersburg, Russia, 24-26 August   Inproceedings Testing and Debugging
    Abstract: We propose a multi-objective genetic algorithm method to prioritize state-based test cases to achieve several competing objectives such as budget and coverage of data flow information, while hopefully detecting faults as early as possible when executing prioritized test cases. The experimental results indicate that our approach is useful and effective: prioritizations quickly achieve maximum data flow coverage and this results in early fault detection; prioritizations perform much better than random orders with much smaller variance.
    BibTeX:
    @inproceedings{BriandLC13,
      author = {Lionel Briand and Yvan Labiche and Kathy Chen},
      title = {A Multi-Objective Genetic Algorithm to Rank State-Based Test Cases},
      booktitle = {Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13)},
      publisher = {Springer},
      year = {2013},
      volume = {8084},
      pages = {66-80},
      address = {St. Petersburg, Russia},
      month = {24-26 August},
      doi = {http://dx.doi.org/10.1007/978-3-642-39742-4_7}
    }
    					
    2014.05.27 Frank R. Burton & Simon Poulding Complementing Metaheuristic Search with Higher Abstraction Techniques 2013 Proceedings of the 1st International Workshop on Combining Modelling and Search-Based Software Engineering (CMSBSE '13), pp. 45-48, San Francisco, CA, USA, 20-20 May   Inproceedings
    Abstract: Search-Based Software Engineering and Model-Driven Engineering are both innovative approaches to software engineering. The premise of Search-Based Software Engineering is to reformulate engineering tasks as optimisation problems that can be solved using metaheuristic search techniques. Model-Driven Engineering aims to apply greater levels of abstraction to software engineering problems. In this paper, it is argued that these two approaches are complementary and that both research fields can make further progress by applying techniques from the other. We suggest ways in which synergies between the fields can be exploited.
    BibTeX:
    @inproceedings{BurtonP13,
      author = {Frank R. Burton and Simon Poulding},
      title = {Complementing Metaheuristic Search with Higher Abstraction Techniques},
      booktitle = {Proceedings of the 1st International Workshop on Combining Modelling and Search-Based Software Engineering (CMSBSE '13)},
      publisher = {IEEE},
      year = {2013},
      pages = {45-48},
      address = {San Francisco, CA, USA},
      month = {20-20 May},
      doi = {http://dx.doi.org/10.1109/CMSBSE.2013.6604436}
    }
    					
    2013.06.28 Arina Buzdalova, Maxim Buzdalov & Vladimir Parfenov Generation of Tests for Programming Challenge Tasks using Helper-Objectives 2013 Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13), Vol. 8084, St. Petersburg, Russia, 24-26 August   Inproceedings Testing and Debugging
    Abstract: Generation of performance tests for programming challenge tasks is considered. A number of evolutionary approaches are compared on two different solutions of an example problem. It is shown that using helper-objectives enhances evolutionary algorithms in the considered case. The general approach involves automated selection of such objectives.
    BibTeX:
    @inproceedings{BuzdalovaBP13,
      author = {Arina Buzdalova and Maxim Buzdalov and Vladimir Parfenov},
      title = {Generation of Tests for Programming Challenge Tasks using Helper-Objectives},
      booktitle = {Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13)},
      publisher = {Springer},
      year = {2013},
      volume = {8084},
      address = {St. Petersburg, Russia},
      month = {24-26 August},
      doi = {http://dx.doi.org/10.1007/978-3-642-39742-4_28}
    }
    					
    2013.08.01 Xinye Cai & Ou Wei A Hybrid of Decomposition and Domination based Evolutionary Algorithm for Multi-objective Software Next Release Problem 2013 Proceedings of the 10th IEEE International Conference on Control and Automation (ICCA '13), pp. 412-417, Hangzhou, China, 12-14 June   Inproceedings Requirements/Specifications
    Abstract: In the software industry, one common problem that companies face is deciding which requirements should be implemented in the next release of the software. From a more realistic perspective, multiple objectives, such as cost and customer satisfaction, need to be considered when making the decision; thus multi-objective formulations of the NRP have become increasingly popular. This paper studies various multi-objective evolutionary algorithms (MOEAs) to address the multi-objective NRP (MONRP). A novel multi-objective algorithm, MOEA/DD, is proposed to obtain trade-off solutions when deciding which requirements to implement in the MONRP. The proposed MOEA/DD addresses several important issues of decomposition-based MOEAs in the context of the MONRP by combining them with desirable features of domination-based MOEAs. A density-based mechanism is proposed to switch between decomposition and domination archives when constructing the subpopulation of subproblems. Our experimental results suggest the proposed approach outperforms state-of-the-art domination- and decomposition-based MOEAs.
    BibTeX:
    @inproceedings{CaiW13,
      author = {Xinye Cai and Ou Wei},
      title = {A Hybrid of Decomposition and Domination based Evolutionary Algorithm for Multi-objective Software Next Release Problem},
      booktitle = {Proceedings of the 10th IEEE International Conference on Control and Automation (ICCA '13)},
      publisher = {IEEE},
      year = {2013},
      pages = {412-417},
      address = {Hangzhou, China},
      month = {12-14 June},
      doi = {http://dx.doi.org/10.1109/ICCA.2013.6565143}
    }
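    For illustration, the two competing objectives of the multi-objective next release problem can be sketched as follows (a generic Python sketch with our own names; the MOEA/DD machinery the paper proposes is not reproduced here):
      def nrp_objectives(selection, costs, values):
          # Bi-objective next release problem: given a boolean vector saying
          # which requirements go into the release, return (total cost, total
          # customer satisfaction). An MOEA searches for selections that
          # minimize the first objective while maximizing the second.
          cost = sum(c for s, c in zip(selection, costs) if s)
          satisfaction = sum(v for s, v in zip(selection, values) if s)
          return cost, satisfaction

      print(nrp_objectives([True, False, True],
                           costs=[10, 4, 7], values=[30, 5, 12]))  # -> (17, 42)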
    					
    2014.09.03 José Campos, Rui Abreu, Gordon Fraser & Marcelo d’Amorim Entropy-based Test Generation for Improved Fault Localization 2013 Proceedings of IEEE/ACM 28th International Conference on Automated Software Engineering (ASE '13), pp. 257-267, Silicon Valley, CA, USA, 11-15 November   Inproceedings Testing and Debugging
    Abstract: Spectrum-based Bayesian reasoning can effectively rank candidate fault locations based on passing/failing test cases, but the diagnostic quality highly depends on the size and diversity of the underlying test suite. As test suites in practice often do not exhibit the necessary properties, we present a technique to extend existing test suites with new test cases that optimize the diagnostic quality. We apply probability theory concepts to guide test case generation using entropy, such that the amount of uncertainty in the diagnostic ranking is minimized. Our ENTBUG prototype extends the search-based test generation tool EVOSUITE to use entropy in the fitness function of its underlying genetic algorithm, and we applied it to seven real faults. Empirical results show that our approach reduces the entropy of the diagnostic ranking by 49% on average (compared to using the original test suite), leading to a 91% average reduction in the number of diagnosis candidates that need to be inspected to find the true faulty one.
    BibTeX:
    @inproceedings{CamposAFA13,
      author = {José Campos and Rui Abreu and Gordon Fraser and Marcelo d’Amorim},
      title = {Entropy-based Test Generation for Improved Fault Localization},
      booktitle = {Proceedings of IEEE/ACM 28th International Conference on Automated Software Engineering (ASE '13)},
      publisher = {IEEE},
      year = {2013},
      pages = {257-267},
      address = {Silicon Valley, CA, USA},
      month = {11-15 November},
      doi = {http://dx.doi.org/10.1109/ASE.2013.6693085}
    }
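    The entropy quantity at the heart of this approach can be illustrated with a generic Shannon-entropy computation over the probabilities of the candidate fault locations (a Python sketch with our own names; ENTBUG's actual fitness combines such a measure with coverage inside EvoSuite):
      import math

      def diagnostic_entropy(probabilities):
          # Shannon entropy (in bits) of a diagnostic ranking. The inputs are
          # the likelihoods of each candidate fault location, assumed to sum
          # to 1; lower entropy means less uncertainty about the true fault.
          return -sum(p * math.log2(p) for p in probabilities if p > 0.0)

      # An evenly spread ranking is maximally uncertain...
      print(diagnostic_entropy([0.25, 0.25, 0.25, 0.25]))  # -> 2.0 bits
      # ...while a concentrated one leaves few candidates to inspect.
      print(diagnostic_entropy([0.85, 0.05, 0.05, 0.05]))  # -> ~0.85 bits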
    					
    2014.09.22 Gerardo Canfora, Andrea De Lucia, Massimiliano Di Penta, Rocco Oliveto, Annibale Panichella & Sebastiano Panichella Multi-Objective Cross-Project Defect Prediction 2013 Proceedings of the 6th International Conference on Software Testing, Verification and Validation (ICST '13), pp. 252-261, Luxembourg, Luxembourg, 18-22 March   Inproceedings
    Abstract: Cross-project defect prediction is very appealing because (i) it allows predicting defects in projects for which the availability of data is limited, and (ii) it allows producing generalizable prediction models. However, existing research suggests that cross-project prediction is particularly challenging and, due to heterogeneity of projects, prediction accuracy is not always very good. This paper proposes a novel, multi-objective approach for cross-project defect prediction, based on a multi-objective logistic regression model built using a genetic algorithm. Instead of providing the software engineer with a single predictive model, the multi-objective approach allows software engineers to choose predictors achieving a compromise between number of likely defect-prone artifacts (effectiveness) and LOC to be analyzed/tested (which can be considered as a proxy of the cost of code inspection). Results of an empirical evaluation on 10 datasets from the Promise repository indicate the superiority and the usefulness of the multi-objective approach with respect to single-objective predictors. Also, the proposed approach outperforms an alternative approach for cross-project prediction, based on local prediction upon clusters of similar classes.
    BibTeX:
    @inproceedings{CanforaLPOPP13,
      author = {Gerardo Canfora and Andrea De Lucia and Massimiliano Di Penta and Rocco Oliveto and Annibale Panichella and Sebastiano Panichella},
      title = {Multi-Objective Cross-Project Defect Prediction},
      booktitle = {Proceedings of the 6th International Conference on Software Testing, Verification and Validation (ICST '13)},
      publisher = {IEEE},
      year = {2013},
      pages = {252-261},
      address = {Luxembourg, Luxembourg},
      month = {18-22 March},
      doi = {http://dx.doi.org/10.1109/ICST.2013.38}
    }
    					
    2014.05.27 Betty H.C. Cheng, Andres J. Ramirez & Philip K. McKinley Harnessing Evolutionary Computation to Enable Dynamically Adaptive Systems to Manage Uncertainty 2013 Proceedings of the 1st International Workshop on Combining Modelling and Search-Based Software Engineering (CMSBSE '13), pp. 1-6, San Francisco, CA, USA, 20-20 May   Inproceedings
    Abstract: This keynote talk and paper intend to motivate research projects that investigate novel ways to model, analyze, and mitigate uncertainty arising in three different aspects of cyber-physical systems. First, uncertainty about the physical environment can lead to suboptimal, and sometimes catastrophic, results as the system tries to adapt to unanticipated or poorly-understood environmental conditions. Second, uncertainty in the cyber environment can lead to unexpected and adverse effects, including not only performance impacts (load, traffic, etc.) but also potential threats or overt attacks. Finally, uncertainty can exist with the components themselves and how they interact upon reconfiguration, including unexpected and unwanted feature interactions. Each of these sources of uncertainty can potentially be identified at different stages, respectively run time, design time, and requirements, but their mitigation might be done at the same or a different stage. Based on the related literature and our preliminary investigations, we argue that the following three overarching techniques are essential and warrant further research to provide enabling technologies to address uncertainty at all three stages: model-based development, assurance, and dynamic adaptation. Furthermore, we posit that in order to go beyond incremental improvements to current software engineering techniques, we need to leverage, extend, and integrate techniques from other disciplines.
    BibTeX:
    @inproceedings{ChengRM13,
      author = {Betty H.C. Cheng and Andres J. Ramirez and Philip K. McKinley},
      title = {Harnessing Evolutionary Computation to Enable Dynamically Adaptive Systems to Manage Uncertainty},
      booktitle = {Proceedings of the 1st International Workshop on Combining Modelling and Search-Based Software Engineering (CMSBSE '13)},
      publisher = {IEEE},
      year = {2013},
      pages = {1-6},
      address = {San Francisco, CA, USA},
      month = {20-20 May},
      doi = {http://dx.doi.org/10.1109/CMSBSE.2013.6604427}
    }
    					
    2012.07.23 Wei-Neng Chen & Jun Zhang Ant Colony Optimization for Software Project Scheduling and Staffing with an Event-Based Scheduler 2013 IEEE Transactions on Software Engineering, Vol. 39(1), pp. 1-17, January   Article Management
    Abstract: Research into developing effective computer aided techniques for planning software projects is important and challenging for software engineering. Different from projects in other fields, software projects are people-intensive activities and their related resources are mainly human resources. Thus an adequate model for software project planning has to deal with not only the problem of project task scheduling but also the problem of human resource allocation. But as both of these two problems are difficult, existing models either suffer from a very large search space or have to restrict the flexibility of human resource allocation to simplify the model. To develop a flexible and effective model for software project planning, this paper develops a novel approach with an event-based scheduler (EBS) and an ant colony optimization (ACO) algorithm. The proposed approach represents a plan by a task list and a planned employee allocation matrix. In this way, both the issues of task scheduling and employee allocation can be taken into account. In the EBS, the beginning time of the project, the time when resources are released from finished tasks, and the time when employees join or leave the project are regarded as events. The basic idea of the EBS is to adjust the allocation of employees at events and keep the allocation unchanged at non-events. With this strategy, the proposed method enables the modeling of resource conflict and task preemption and preserves the flexibility in human resource allocation. To solve the planning problem, an ACO algorithm is further designed. Experimental results on 83 instances demonstrate that the proposed method is very promising.
    BibTeX:
    @article{ChenZ13,
      author = {Wei-Neng Chen and Jun Zhang},
      title = {Ant Colony Optimization for Software Project Scheduling and Staffing with an Event-Based Scheduler},
      journal = {IEEE Transactions on Software Engineering},
      year = {2013},
      volume = {39},
      number = {1},
      pages = {1-17},
      month = {January},
      doi = {http://dx.doi.org/10.1109/TSE.2012.17}
    }
    					
    2013.08.05 Daniil Chivilikhin & Vladimir Ulyantsev MuACOsm: A New Mutation-based Ant Colony Optimization Algorithm for Learning Finite-State Machines 2013 Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation (GECCO '13), pp. 511-518, Amsterdam, The Netherlands, 6-10 July   Inproceedings Testing and Debugging
    Abstract: In this paper we present MuACOsm, a new method of learning Finite-State Machines (FSM) based on Ant Colony Optimisation (ACO) and a graph representation of the search space. The input data is a set of events, a set of actions and the number of states in the target FSM. The goal is to maximise the given fitness function, which is defined on the set of all FSMs with given parameters. The new algorithm is compared with evolutionary algorithms and a genetic programming related approach on the well-known Artificial Ant problem.
    BibTeX:
    @inproceedings{ChivilikhinU13,
      author = {Daniil Chivilikhin and Vladimir Ulyantsev},
      title = {MuACOsm: A New Mutation-based Ant Colony Optimization Algorithm for Learning Finite-State Machines},
      booktitle = {Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation (GECCO '13)},
      publisher = {ACM},
      year = {2013},
      pages = {511-518},
      address = {Amsterdam, The Netherlands},
      month = {6-10 July},
      doi = {http://dx.doi.org/10.1145/2463372.2463440}
    }
    					
    2013.08.05 Brendan Cody-Kenny & Stephen Barrett Self-focusing Genetic Programming for Software Optimisation 2013 Proceedings of the 15th Annual Conference Companion on Genetic and Evolutionary Computation (GECCO '13), pp. 203-204, Amsterdam, The Netherlands, 6-10 July   Inproceedings
    Abstract: Approaches in the area of Search Based Software Engineering (SBSE) have seen Genetic Programming (GP) algorithms applied to the optimisation of software. While the potential of GP for this task has been demonstrated, the complexity of real-world software code bases poses a scalability problem for its serious application. To address this scalability problem, we inspect a form of GP which incorporates a mechanism to focus operators to relevant locations within a program code base. When creating offspring individuals, we introduce operator node selection bias by allocating values to nodes within an individual. Offspring values are inherited and updated when a difference in behaviour between offspring and parent is found. We argue that this approach may scale to find optimal solutions in more complex code bases under further development.
    BibTeX:
    @inproceedings{Cody-KennyB13,
      author = {Brendan Cody-Kenny and Stephen Barrett},
      title = {Self-focusing Genetic Programming for Software Optimisation},
      booktitle = {Proceedings of the 15th Annual Conference Companion on Genetic and Evolutionary Computation (GECCO '13)},
      publisher = {ACM},
      year = {2013},
      pages = {203-204},
      address = {Amsterdam, The Netherlands},
      month = {6-10 July},
      doi = {http://dx.doi.org/10.1145/2464576.2464681}
    }
    					
    2013.06.28 Brendan Cody-Kenny & Stephen Barrett The Emergence of Useful Bias in Self-focusing Genetic Programming for Software Optimisation 2013 Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13), Vol. 8084, St. Petersburg, Russia, 24-26 August   Inproceedings
    Abstract: The use of Genetic Programming (GP) to optimise increasingly large software code has been enabled through biasing the application of GP operators to code areas relevant to the optimisation of interest. As previous approaches have used various forms of static bias applied before the application of GP, we show the emergence of bias learned within the GP process itself which improves solution finding probability in a similar way. As this variant technique is sensitive to the evolutionary lineage, we argue that it may more accurately provide bias in programs which have undergone heavier modification and thus find solutions addressing more complex issues.
    BibTeX:
    @inproceedings{Cody-KennyB13b,
      author = {Brendan Cody-Kenny and Stephen Barrett},
      title = {The Emergence of Useful Bias in Self-focusing Genetic Programming for Software Optimisation},
      booktitle = {Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13)},
      publisher = {Springer},
      year = {2013},
      volume = {8084},
      address = {St. Petersburg, Russia},
      month = {24-26 August},
      doi = {http://dx.doi.org/10.1007/978-3-642-39742-4_29}
    }
    					
    2014.05.27 Thelma Elita Colanzi & Silvia Regina Vergilio Representation of Software Product Line Architectures for Search-based Design 2013 Proceedings of the 1st International Workshop on Combining Modelling and Search-Based Software Engineering (CMSBSE '13), pp. 28-3, San Francisco, CA, USA, 20-20 May   Inproceedings Design Tools and Techniques
    Abstract: The Product-Line Architecture (PLA) is the main artifact of a Software Product Line (SPL). Search-based approaches can provide automated discovery of near-optimal PLAs and make their design less dependent on human architects. To do this, it is necessary to adopt a suitable PLA representation to apply the search operators. In this sense, we review existing architecture representations proposed in related work, but all of them need to be extended to encompass specific characteristics of SPLs. The use of such representations for PLAs is then discussed and, based on the performed analysis, we introduce a novel direct PLA representation for search-based optimization. Some implementation aspects are discussed, involving details of the proposed PLA representation, constraints and the impact on specific search operators. Ongoing work addresses the application of specific search operators to the proposed representation and the definition of a fitness function to be applied in a multi-objective search-based approach to PLA design.
    BibTeX:
    @inproceedings{ColanziV13,
      author = {Thelma Elita Colanzi and Silvia Regina Vergilio},
      title = {Representation of Software Product Line Architectures for Search-based Design},
      booktitle = {Proceedings of the 1st International Workshop on Combining Modelling and Search-Based Software Engineering (CMSBSE '13)},
      publisher = {IEEE},
      year = {2013},
      pages = {28-3},
      address = {San Francisco, CA, USA},
      month = {20-20 May},
      doi = {http://dx.doi.org/10.1109/CMSBSE.2013.6604433}
    }
    					
    2012.08.21 Thelma Elita Colanzi, Silvia Regina Vergilio, Wesley Klewerton Guez Assunção & Aurora Trinidad Ramirez Pozo Search Based Software Engineering: Review and Analysis of the Field in Brazil 2013 Journal of Systems and Software, Vol. 86(4), pp. 970–984, April   Article General Aspects and Survey
    Abstract: Search Based Software Engineering (SBSE) is the field of software engineering research and practice that applies search based techniques to solve different optimization problems from diverse software engineering areas. SBSE approaches allow software engineers to automatically obtain solutions for complex and labor-intensive tasks, contributing to reducing the effort and costs associated with software development. The SBSE field is growing rapidly in Brazil. The number of published works and research groups has significantly increased in the last three years and a Brazilian SBSE community is emerging. This is mainly due to the Brazilian Workshop on Search Based Software Engineering (WOES), co-located with the Brazilian Symposium on Software Engineering (SBES). Considering these facts, this paper presents the results of a mapping we have performed in order to provide an overview of the SBSE field in Brazil. The main goal is to map the Brazilian SBSE community on SBES by identifying the main researchers, the focus of the published works, and the fora and frequency of publications. The paper also introduces SBSE concerns and discusses trends, challenges, and open research problems in this emergent area. We hope the work serves as a reference for this novel field, contributing to disseminating SBSE and to its consolidation in Brazil.
    BibTeX:
    @article{ColanziVAP13,
      author = {Thelma Elita Colanzi and Silvia Regina Vergilio and Wesley Klewerton Guez Assunção and Aurora Trinidad Ramirez Pozo},
      title = {Search Based Software Engineering: Review and Analysis of the Field in Brazil},
      journal = {Journal of Systems and Software},
      year = {2013},
      volume = {86},
      number = {4},
      pages = {970–984},
      month = {April},
      doi = {http://dx.doi.org/10.1016/j.jss.2012.07.041}
    }
    					
    2014.05.28 Jonathas Cruz, Pedro Santos Neto, Ricardo Britto, Ricardo Rabelo, Werney Ayala, Thiago Soares & Mauricio Mota Toward a Hybrid Approach to Generate Software Product Line Portfolios 2013 Proceedings of IEEE Congress on Evolutionary Computation (CEC '13), pp. 2229-2236, Cancún, Mexico, 20-23 June   Inproceedings
    Abstract: Software Product Line (SPL) development is a new approach to software engineering that aims at the development of a whole range of products. One of the problems which hinders the adoption of this approach is related to the management of the products of the line. Additionally, the scope of a software product line is determined by the bounds of the capabilities provided by the collection of products in the product line. This introduces new challenges related to the scoping problem. One of the three main forms of scoping is Product Portfolio Scoping (PPS). PPS aims at defining the products that should be developed as well as their key features. While this has an impact on the actual reuse opportunities, it is usually driven by marketing aspects. Defining a product portfolio by considering customer satisfaction and cost aspects is an NP-hard problem. This work presents a hybrid approach, which combines fuzzy inference systems and the multi-objective metaheuristic NSGA-II, to support product management by generating portfolios of products based on segments of users and the development cost of the assets of the SPL. Fuzzy inference systems are used to estimate the development cost of an asset by using coupling, number of code lines and cyclomatic complexity, and also to estimate the quality of the products generated by the optimization module of our approach. The NSGA-II metaheuristic is used to search for products minimizing the cost and maximizing the relevance of the candidate products. The results show that the proposed approach is effective in proposing the best products in terms of relevance and cost of the assets.
    BibTeX:
    @inproceedings{CruzNBRASM13,
      author = {Jonathas Cruz and Pedro Santos Neto and Ricardo Britto and Ricardo Rabelo and Werney Ayala and Thiago Soares and Mauricio Mota},
      title = {Toward a Hybrid Approach to Generate Software Product Line Portfolios},
      booktitle = {Proceedings of IEEE Congress on Evolutionary Computation (CEC '13)},
      publisher = {IEEE},
      year = {2013},
      pages = {2229-2236},
      address = {Cancún, Mexico},
      month = {20-23 June},
      doi = {http://dx.doi.org/10.1109/CEC.2013.6557834}
    }
    					
    2014.05.30 Kivanc Doganay, Markus Bohlin & Ola Sellin Search Based Testing of Embedded Systems Implemented in IEC 61131-3: An Industrial Case Study 2013 Proceedings of the 6th International Workshop on Search-Based Software Testing (SBST '13), pp. 425-432, Luxembourg, Luxembourg, 22-22 March   Inproceedings Testing and Debugging
    Abstract: This paper presents a case study of search-based test generation for embedded system software units developed using the Function Block Diagrams (FBDs), a graphical language in the IEC 61131-3 standard aimed at programmable logic controllers (PLCs). We consider 279 different components from the train control software developed by Bombardier Transportation, a major rail vehicle manufacturer. The software is compiled into C code with a particular structure. We use a modified hill climbing algorithm for generating test data to maximize MC/DC coverage for assignments with logical expressions in the C code, while retaining the semantics of the original FBD implementation. An experimental evaluation for comparing the effectiveness (coverage rate) and the efficiency (required number of executions) of hill climbing algorithm with random testing is presented. The results show that random testing performs well for most units under test, while around 30% of the artifacts significantly benefit from the hill climbing algorithm. Structural properties of the units that affect the performance of hill climbing and random testing are also discussed.
    BibTeX:
    @inproceedings{DoganayBS13,
      author = {Kivanc Doganay and Markus Bohlin and Ola Sellin},
      title = {Search Based Testing of Embedded Systems Implemented in IEC 61131-3: An Industrial Case Study},
      booktitle = {Proceedings of the 6th International Workshop on Search-Based Software Testing (SBST '13)},
      publisher = {IEEE},
      year = {2013},
      pages = {425-432},
      address = {Luxembourg, Luxembourg},
      month = {22-22 March},
      doi = {http://dx.doi.org/10.1109/ICSTW.2013.78}
    }
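    A basic hill-climbing loop of the kind used here for test data generation might look like the following (a minimal Python sketch under our own assumptions; the paper's modified algorithm, neighbourhood definition and MC/DC-oriented fitness are not reproduced):
      import random

      def hill_climb(initial, fitness, neighbours, max_evaluations=1000):
          # First-ascent hill climbing with random restarts. `fitness` scores
          # a candidate input (higher = closer to covering the target) and
          # `neighbours` yields inputs adjacent to the current one.
          current, current_score = initial, fitness(initial)
          best, best_score = current, current_score
          evaluations = 1
          while evaluations < max_evaluations:
              improved = False
              for candidate in neighbours(current):
                  score = fitness(candidate)
                  evaluations += 1
                  if score > current_score:
                      current, current_score = candidate, score
                      improved = True
                      break  # take the first improving move
              if not improved:  # local optimum: restart at a random point
                  current = tuple(random.randint(-100, 100) for _ in current)
                  current_score = fitness(current)
                  evaluations += 1
              if current_score > best_score:
                  best, best_score = current, current_score
          return best, best_score

      # Toy target: drive x + y towards 42 over integer pairs.
      step = lambda p: [(p[0] + 1, p[1]), (p[0] - 1, p[1]),
                        (p[0], p[1] + 1), (p[0], p[1] - 1)]
      print(hill_climb((0, 0), lambda p: -abs(p[0] + p[1] - 42), step))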
    					
    2013.06.28 Dionysios Efstathiou, Peter Mcburney, Steffen Zschaler & Johann Bourcier Exploring Optimal Service Compositions in Highly Heterogeneous and Dynamic Service-Based Systems 2013 Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13), Vol. 8084, St. Petersburg, Russia, 24-26 August   Inproceedings
    Abstract: Dynamic and heterogeneous service-oriented systems present challenges when developing composite applications that exhibit specified quality properties. Resource heterogeneity, mobility, and a large number of spatially distributed service providers complicate the process of composing complex applications with specified QoS requirements. This PhD project aims at enabling the efficient run-time generation of service compositions that share functionality, but differ in their trade-offs between multiple competing and conflicting quality objectives such as application response time, availability and consumption of resources. In this paper we present a research roadmap towards an approach for flexible service composition in dynamic and heterogeneous environments.
    BibTeX:
    @inproceedings{EfstathiouMZB13,
      author = {Dionysios Efstathiou and Peter Mcburney and Steffen Zschaler and Johann Bourcier},
      title = {Exploring Optimal Service Compositions in Highly Heterogeneous and Dynamic Service-Based Systems},
      booktitle = {Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13)},
      publisher = {Springer},
      year = {2013},
      volume = {8084},
      address = {St. Petersburg, Russia},
      month = {24-26 August},
      doi = {http://dx.doi.org/10.1007/978-3-642-39742-4_30}
    }
    					
    2013.08.05 Achiya Elyasaf, Michael Orlov & Moshe Sipper A Heuristiclab Evolutionary Algorithm for FINCH 2013 Proceedings of the 15th Annual Conference Companion on Genetic and Evolutionary Computation (GECCO '13), pp. 1727-1728, Amsterdam, The Netherlands, 6-10 July   Inproceedings Design Tools and Techniques
    Abstract: We present a HeuristicLab plugin for FINCH. FINCH (Fertile Darwinian Bytecode Harvester) is a system designed to evolutionarily improve actual, extant software, which was not intentionally written for the purpose of serving as a GP representation in particular, nor for evolution in general. This is in contrast to existing work that uses restricted subsets of the Java bytecode instruction set as a representation language for individuals in genetic programming. The ability to evolve Java programs will hopefully lead to a valuable new tool in the software engineer's toolkit.
    BibTeX:
    @inproceedings{ElyasafOS13,
      author = {Achiya Elyasaf and Michael Orlov and Moshe Sipper},
      title = {A Heuristiclab Evolutionary Algorithm for FINCH},
      booktitle = {Proceedings of the 15th Annual Conference Companion on Genetic and Evolutionary Computation (GECCO '13)},
      publisher = {ACM},
      year = {2013},
      pages = {1727-1728},
      address = {Amsterdam, The Netherlands},
      month = {6-10 July},
      doi = {http://dx.doi.org/10.1145/2464576.2480786}
    }
    					
    2014.05.27 Eduard Paul Enoiu, Kivanc Doganay, Markus Bohlin, Daniel Sundmark & Paul Pettersson MOS: An Integrated Model-based and Search-based Testing Tool for Function Block Diagrams 2013 Proceedings of the 1st International Workshop on Combining Modelling and Search-Based Software Engineering (CMSBSE '13), pp. 55-60, San Francisco, CA, USA, 20-20 May   Inproceedings Testing and Debugging
    Abstract: In this paper we present a new testing tool for safety critical applications described in the Function Block Diagram (FBD) language, aimed at supporting both a model-based and a search-based approach. Many benefits emerge from this tool, including the ability to automatically generate test suites from an FBD program in order to comply with quality requirements such as component testing and specific coverage measurements. Search-based testing methods are used to generate test data based on executable code rather than the FBD program, alleviating any problems that may arise from the ambiguities that occur while creating FBD programs. Test cases generated by both approaches are executed and used as a way of cross validation. In the current work, we describe the architecture of the tool, its workflow process, and a case study in which the tool has been applied in a real industrial setting to test a train control management system.
    BibTeX:
    @inproceedings{EnoiuDBSP13,
      author = {Eduard Paul Enoiu and Kivanc Doganay and Markus Bohlin and Daniel Sundmark and Paul Pettersson},
      title = {MOS: An Integrated Model-based and Search-based Testing Tool for Function Block Diagrams},
      booktitle = {Proceedings of the 1st International Workshop on Combining Modelling and Search-Based Software Engineering (CMSBSE '13)},
      publisher = {IEEE},
      year = {2013},
      pages = {55-60},
      address = {San Francisco, CA, USA},
      month = {20-20 May},
      doi = {http://dx.doi.org/10.1109/CMSBSE.2013.6605711}
    }
    					
    2013.08.13 Robert Feldt & Simon Poulding Finding Test Data with Specific Properties via Metaheuristic Search 2013 Proceedings of the 24th IEEE International Symposium on Software Reliability Engineering (ISSRE '13), pp. 350-359, Pasadena CA USA, 4-7 November   Inproceedings Testing and Debugging
    Abstract: For software testing to be effective the test data should cover a large and diverse range of the possible input domain. Boltzmann samplers were recently introduced as a systematic method to randomly generate data with a range of sizes from combinatorial classes, and there are a number of automated testing frameworks that serve a similar purpose. However, size is only one of many possible properties that data generated for software testing should exhibit. For the testing of realistic software systems we also need to trade off between multiple different properties or search for specific instances of data that combine several properties. In this paper we propose a general search-based framework for finding test data with specific properties. In particular, we use a metaheuristic, differential evolution, to search for stochastic models for the data generator. Evaluation of the framework demonstrates that it is more general and flexible than existing solutions based on random sampling.
    BibTeX:
    @inproceedings{FeldtP13,
      author = {Robert Feldt and Simon Poulding},
      title = {Finding Test Data with Specific Properties via Metaheuristic Search},
      booktitle = {Proceedings of the 24th IEEE International Symposium on Software Reliability Engineering (ISSRE '13)},
      publisher = {IEEE},
      year = {2013},
      pages = {350-359},
      address = {Pasadena, CA, USA},
      month = {4-7 November},
      doi = {http://dx.doi.org/10.1109/ISSRE.2013.6698888}
    }
    					
    2014.05.27 Filomena Ferrucci, Mark Harman, Jian Ren & Federica Sarro Not Going to Take This Anymore: Multi-objective Overtime Planning for Software Engineering Projects 2013 Proceedings of the 2013 International Conference on Software Engineering (ICSE '13), pp. 462-471, San Francisco CA USA, 18-26 May   Inproceedings Management
    Abstract: Software Engineering and development is well-known to suffer from unplanned overtime, which causes stress and illness in engineers and can lead to poor quality software with higher defects. In this paper, we introduce a multi-objective decision support approach to help balance project risks and duration against overtime, so that software engineers can better plan overtime. We evaluate our approach on 6 real world software projects, drawn from 3 organisations using 3 standard evaluation measures and 3 different approaches to risk assessment. Our results show that our approach was significantly better (p < 0.05) than standard multi-objective search in 76% of experiments (with high Cohen effect size in 85% of these) and was significantly better than currently used overtime planning strategies in 100% of experiments (with high effect size in all). We also show how our approach provides actionable overtime planning results and investigate the impact of the three different forms of risk assessment.
    BibTeX:
    @inproceedings{FerrucciHRS13,
      author = {Filomena Ferrucci and Mark Harman and Jian Ren and Federica Sarro},
      title = {Not Going to Take This Anymore: Multi-objective Overtime Planning for Software Engineering Projects},
      booktitle = {Proceedings of the 2013 International Conference on Software Engineering (ICSE '13)},
      publisher = {IEEE},
      year = {2013},
      pages = {462-471},
      address = {San Francisco, CA, USA},
      month = {18-26 May},
      doi = {http://dx.doi.org/10.1109/ICSE.2013.6606592}
    }
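
    Illustration: the decision support rests on standard multi-objective search; the following minimal Python sketch (the objective vectors are invented, not project data) shows the Pareto-dominance test such a search uses to keep only non-dominated trade-offs between overtime, risk and duration:

      def dominates(a, b):
          # a dominates b when it is no worse on every (minimised) objective
          # and strictly better on at least one.
          return all(x <= y for x, y in zip(a, b)) and \
                 any(x < y for x, y in zip(a, b))

      # Hypothetical (overtime hours, risk, duration in days) candidate plans.
      plans = [(40, 0.2, 120), (30, 0.3, 125), (45, 0.2, 120)]
      front = [p for p in plans
               if not any(dominates(q, p) for q in plans if q != p)]
      print(front)  # [(40, 0.2, 120), (30, 0.3, 125)]: the trade-offs kept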
    					
    2012.02.28 Gordon Fraser & Andrea Arcuri Whole Test Suite Generation 2013 IEEE Transactions on Software Engineering, Vol. 39, pp. 276-291, February   Article Testing and Debugging
    Abstract: Not all bugs lead to program crashes, and not always is there a formal specification to check the correctness of a software test's outcome. A common scenario in software testing is therefore that test data is generated, and a tester manually adds test oracles. As this is a difficult task, it is important to produce small yet representative test sets, and this representativeness is typically measured using code coverage. There is, however, a fundamental problem with the general approach of targeting one coverage goal at a time: Coverage goals are not independent, not equally difficult, and sometimes infeasible - the result of test generation is therefore dependent on the order of coverage goals and their feasibility. To overcome this problem, we propose a novel paradigm in which whole test suites are evolved with the aim of covering all coverage goals at the same time, while keeping the total size as small as possible. We compared our EvoSuite prototype implementation with the common approach of addressing one goal at a time on open source libraries and an industrial case study for a total of 1,741 classes, showing that EvoSuite achieved up to 188 times the branch coverage of a traditional approach targeting single branches, with up to 62% smaller test suites.
    BibTeX:
    @article{FraserA13,
      author = {Gordon Fraser and Andrea Arcuri},
      title = {Whole Test Suite Generation},
      journal = {IEEE Transactions on Software Engineering},
      year = {2013},
      volume = {39},
      pages = {276-291},
      month = {February},
      doi = {http://dx.doi.org/10.1109/TSE.2012.14}
    }
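
    Illustration: as a minimal Python sketch of the whole-suite fitness idea (the numeric "branch distances" and goals below are toy assumptions, not the EvoSuite implementation), the following scores one candidate suite against all coverage goals at once:

      def normalize(d):
          # Map a raw branch distance into [0, 1); 0 means the goal is covered.
          return d / (d + 1.0)

      def whole_suite_fitness(suite, branches):
          # Sum, over all goals, of the best normalised distance any test in
          # the suite achieves; 0.0 means every goal is covered.
          return sum(normalize(min(abs(t - b) for t in suite)) for b in branches)

      branches = [3, 8, 15]                             # toy goals: hit these values
      print(whole_suite_fitness([3, 8, 15], branches))  # 0.0 -- full coverage
      print(whole_suite_fitness([3, 9], branches))      # ~1.36 -- two goals missed

    Because every goal contributes to a single score, the search cannot stall on an infeasible goal, which is precisely the ordering problem the abstract identifies.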
    					
    2014.05.30 Gordon Fraser & Andrea Arcuri EvoSuite at the SBST 2013 Tool Competition 2013 Proceedings of the 6th International Workshop on Search-Based Software Testing (SBST '13), pp. 406-409, Luxembourg Luxembourg, 22-22 March   Inproceedings Testing and Debugging
    Abstract: EvoSuite is a mature research prototype that automatically generates unit tests for Java code. This paper summarizes the results and experiences in participating at the unit testing competition held at SBST 2013, where EvoSuite ranked first with a score of 156.95.
    BibTeX:
    @inproceedings{FraserA13b,
      author = {Gordon Fraser and Andrea Arcuri},
      title = {EvoSuite at the SBST 2013 Tool Competition},
      booktitle = {Proceedings of the 6th International Workshop on Search-Based Software Testing (SBST '13)},
      publisher = {IEEE},
      year = {2013},
      pages = {406-409},
      address = {Luxembourg, Luxembourg},
      month = {22-22 March},
      doi = {http://dx.doi.org/10.1109/ICSTW.2013.53}
    }
    					
    2014.09.02 Gordon Fraser & Andrea Arcuri Handling Test Length Bloat 2013 Software Testing, Verification and Reliability, Vol. 23(7), pp. 553-582, November   Article Testing and Debugging
    Abstract: The length of test cases is a little investigated topic in search-based test generation for object-oriented software, where test cases are sequences of method calls. Although intuitively longer tests can achieve higher overall code coverage, there is always the threat of bloat – a complex phenomenon in evolutionary computation, where the length abnormally grows over time. In this paper, we show that bloat indeed also occurs in the context of test generation for object-oriented software. We present different techniques to overcome the problem of length bloat, and evaluate all possible combinations of these techniques using different starting lengths for the search. Experiments on a set of difficult search targets, selected from several open source and industrial projects, show that controlling bloat with the appropriate techniques can significantly improve the search performance.
    BibTeX:
    @article{FraserA13c,
      author = {Gordon Fraser and Andrea Arcuri},
      title = {Handling Test Length Bloat},
      journal = {Software Testing, Verification and Reliability},
      year = {2013},
      volume = {23},
      number = {7},
      pages = {553-582},
      month = {November},
      doi = {http://dx.doi.org/10.1002/stvr.1495}
    }
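
    Illustration: one generic counter-measure in the spirit of the techniques evaluated here is parsimony pressure at selection time; in the Python sketch below (the toy "tests" and fitness are assumptions, not the paper's operators), length is used only to break fitness ties, so it never trades off against coverage:

      import random

      def select_parent(population, fitness, k=4):
          # Tournament selection ranked by (fitness, length): a longer test
          # can never beat an equally fit shorter one.
          candidates = random.sample(population, k)
          return min(candidates, key=lambda t: (fitness(t), len(t)))

      population = [[1, 2], [3], [1, 1, 1], [9]]
      parent = select_parent(population, fitness=lambda t: abs(sum(t) - 3))
      print(parent)  # [3]: the shortest of the three equally fit candidates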
    					
    2014.09.22 Gordon Fraser & Andrea Arcuri EvoSuite: On the Challenges of Test Case Generation in the Real World 2013 Proceedings of the 6th International Conference on Software Testing, Verification and Validation (ICST '13), pp. 362-369, Luxembourg Luxembourg, 18-22 March   Inproceedings Testing and Debugging
    Abstract: Test case generation is an important but tedious task, such that researchers have devised many different prototypes that aim to automate it. As these are research prototypes, they are usually only evaluated on a few hand-selected case studies, such that despite great results there remains the question of usability in the “real world”. EVOSUITE is such a research prototype, which automatically generates unit test suites for classes written in the Java programming language. In our ongoing endeavour to achieve real-world usability, we recently passed the milestone success of applying EVOSUITE on one hundred projects randomly selected from the SourceForge open source platform. This paper discusses the technical challenges that a testing tool like EVOSUITE needs to address when handling Java classes coming from real-world open source projects, and when producing JUnit test suites intended for real users.
    BibTeX:
    @inproceedings{FraserA13d,
      author = {Gordon Fraser and Andrea Arcuri},
      title = {EvoSuite: On the Challenges of Test Case Generation in the Real World},
      booktitle = {Proceedings of the 6th International Conference on Software Testing, Verification and Validation (ICST '13)},
      publisher = {IEEE},
      year = {2013},
      pages = {362-369},
      address = {Luxembourg, Luxembourg},
      month = {18-22 March},
      doi = {http://dx.doi.org/10.1109/ICST.2013.51}
    }
    					
    2013.08.05 Gordon Fraser, Andrea Arcuri & Phil McMinn Test Suite Generation with Memetic Algorithms 2013 Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation (GECCO '13), pp. 1437-1444, Amsterdam The Netherlands, 6-10 July   Inproceedings Testing and Debugging
    Abstract: Genetic Algorithms have been successfully applied to the generation of unit tests for classes, and are well suited to create complex objects through sequences of method calls. However, because the neighbourhood in the search space for method sequences is huge, even supposedly simple optimisations on primitive variables (e.g., numbers and strings) can be ineffective or unsuccessful. To overcome this problem, we extend the global search applied in the EVOSUITE test generation tool with local search on the individual statements of method sequences. In contrast to previous work on local search, we also consider complex data types including strings and arrays. A rigorous experimental methodology has been applied to properly evaluate these new local search operators. In our experiments on a set of open source classes of different kinds (e.g., numerical applications and text processing), the resulting test data generation technique increased branch coverage by up to 32% on average over the normal Genetic Algorithm.
    BibTeX:
    @inproceedings{FraserAM13,
      author = {Gordon Fraser and Andrea Arcuri and Phil McMinn},
      title = {Test Suite Generation with Memetic Algorithms},
      booktitle = {Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation (GECCO '13)},
      publisher = {ACM},
      year = {2013},
      pages = {1437-1444},
      address = {Amsterdam, The Netherlands},
      month = {6-10 July},
      doi = {http://dx.doi.org/10.1145/2463372.2463548}
    }
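
    Illustration: the combination described here, global evolution plus local refinement of individuals, is the defining memetic pattern. The Python sketch below shows that pattern on a toy numeric encoding; the operators and fitness are placeholders, not EvoSuite's statement-level local search:

      import random

      def local_search(x, f, step=1, tries=20):
          # Hill-climb one gene at a time, keeping any improving move.
          for _ in range(tries):
              i = random.randrange(len(x))
              for delta in (-step, step):
                  y = x[:]
                  y[i] += delta
                  if f(y) < f(x):
                      x = y
          return x

      def memetic(f, n=20, length=3, gens=50):
          pop = [[random.randint(-10, 10) for _ in range(length)] for _ in range(n)]
          for _ in range(gens):
              pop.sort(key=f)                                            # global ranking
              child = [random.choice(g) for g in zip(pop[0], pop[1])]    # crossover
              child[random.randrange(length)] += random.choice((-1, 1))  # mutation
              pop[-1] = local_search(child, f)                           # memetic step
          return min(pop, key=f)

      target = (3, -7, 5)
      print(memetic(lambda v: sum(abs(g - t) for g, t in zip(v, target))))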
    					
    2014.09.02 Gordon Fraser, Matt Staats, Phil McMinn, Andrea Arcuri & Frank Padberg Does Automated White-box Test Generation Really Help Software Testers? 2013 Proceedings of the 2013 International Symposium on Software Testing and Analysis (ISSTA '13), pp. 291-301, Lugano Switzerland, 15-20 July   Inproceedings Testing and Debugging
    Abstract: Automated test generation techniques can efficiently produce test data that systematically cover structural aspects of a program. In the absence of a specification, a common assumption is that these tests relieve a developer of most of the work, as the act of testing is reduced to checking the results of the tests. Although this assumption has persisted for decades, there has been no conclusive evidence to date confirming it. However, the fact that the approach has only seen a limited uptake in industry suggests the contrary, and calls into question its practical usefulness. To investigate this issue, we performed a controlled experiment comparing a total of 49 subjects split between writing tests manually and writing tests with the aid of an automated unit test generation tool, EvoSuite. We found that, on one hand, tool support leads to clear improvements in commonly applied quality metrics such as code coverage (up to 300% increase). However, on the other hand, there was no measurable improvement in the number of bugs actually found by developers. Our results not only cast some doubt on how the research community evaluates test generation tools, but also point to improvements and future work necessary before automated test generation tools will be widely adopted by practitioners.
    BibTeX:
    @inproceedings{FraserSMAP13,
      author = {Gordon Fraser and Matt Staats and Phil McMinn and Andrea Arcuri and Frank Padberg},
      title = {Does Automated White-box Test Generation Really Help Software Testers?},
      booktitle = {Proceedings of the 2013 International Symposium on Software Testing and Analysis (ISSTA '13)},
      publisher = {ACM},
      year = {2013},
      pages = {291-301},
      address = {Lugano, Switzerland},
      month = {15-20 July},
      doi = {http://dx.doi.org/10.1145/2483760.2483774}
    }
    					
    2013.08.05 Erik M. Fredericks & Betty H.C. Cheng Exploring Automated Software Composition with Genetic Programming 2013 Proceedings of the 15th Annual Conference Companion on Genetic and Evolutionary Computation (GECCO '13), pp. 1733-1734, Amsterdam The Netherlands, 6-10 July   Inproceedings Design Tools and Techniques
    Abstract: Much research has been performed in investigating the numerous dimensions of software composition. Challenges include creating a composition-based design process, designing software for reuse, investigating various strategies for composition, and automating the composition process. Depending on the complexity of the relevant components, numerous composition strategies may exist, each of which may have several options and variations for aggregate steps in realising these strategies. This paper presents an evolutionary computation-based framework for automatically searching for and realising an optimal composition strategy for composing a given target module into an existing software system.
    BibTeX:
    @inproceedings{FredericksC13,
      author = {Erik M. Fredericks and Betty H.C. Cheng},
      title = {Exploring Automated Software Composition with Genetic Programming},
      booktitle = {Proceedings of the 15th Annual Conference Companion on Genetic and Evolutionary Computation (GECCO '13)},
      publisher = {ACM},
      year = {2013},
      pages = {1733-1734},
      address = {Amsterdam, The Netherlands},
      month = {6-10 July},
      doi = {http://dx.doi.org/10.1145/2464576.2480790}
    }
    					
    2013.06.28 Erik M. Fredericks, Andres J. Ramirez & Betty H.C. Cheng Validating Code-Level Behavior of Dynamic Adaptive Systems in the Face of Uncertainty 2013 Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13), Vol. 8084, pp. 81-95, St. Petersburg Russia, 24-26 August   Inproceedings
    Abstract: A dynamically adaptive system (DAS) self-reconfigures at run time in order to handle adverse combinations of system and environmental conditions. Techniques are needed to make DASs more resilient to system and environmental uncertainty. Furthermore, automated support to validate that a DAS provides acceptable behavior even through reconfigurations is essential to address assurance concerns. This paper introduces Fenrir, an evolutionary computation-based approach to address these challenges. By explicitly searching for diverse and interesting operational contexts and examining the resulting execution traces generated by a DAS as it reconfigures in response to adverse conditions, Fenrir can discover undesirable behaviors triggered by unexpected environmental conditions at design time, which can be used to revise the system appropriately. We illustrate Fenrir by applying it to a dynamically adaptive remote data mirroring network that must efficiently diffuse data even in the face of adverse network conditions.
    BibTeX:
    @inproceedings{FredericksRC13,
      author = {Erik M. Fredericks and Andres J. Ramirez and Betty H.C. Cheng},
      title = {Validating Code-Level Behavior of Dynamic Adaptive Systems in the Face of Uncertainty},
      booktitle = {Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13)},
      publisher = {Springer},
      year = {2013},
      volume = {8084},
      pages = {81-95},
      address = {St. Petersburg, Russia},
      month = {24-26 August},
      doi = {http://dx.doi.org/10.1007/978-3-642-39742-4_8}
    }
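
    Illustration: one way to operationalise the search for "diverse and interesting operational contexts" is novelty-style selection against an archive; the context encoding and distance below are invented for illustration and are not necessarily Fenrir's actual mechanism:

      def novelty(candidate, archive):
          # Mean Manhattan distance to previously seen contexts; higher is
          # more novel. Unexplored space scores infinitely high.
          if not archive:
              return float("inf")
          return sum(sum(abs(a - b) for a, b in zip(candidate, seen))
                     for seen in archive) / len(archive)

      archive = [(0.1, 0.9), (0.2, 0.8)]        # e.g. (loss rate, link load)
      candidates = [(0.15, 0.85), (0.9, 0.1)]
      print(max(candidates, key=lambda c: novelty(c, archive)))  # (0.9, 0.1)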
    					
    2014.09.22 Sören Frey, Florian Fittkau & Wilhelm Hasselbring Search-Based Genetic Optimization for Deployment and Reconfiguration of Software in the Cloud 2013 Proceedings of the 35th International Conference on Software Engineering (ICSE '13), pp. 512-521, San Francisco CA USA, 18-26 May   Inproceedings
    Abstract: Migrating existing enterprise software to cloud platforms involves the comparison of competing cloud deployment options (CDOs). A CDO comprises a combination of a specific cloud environment, deployment architecture, and runtime reconfiguration rules for dynamic resource scaling. Our simulator CDOSim can evaluate CDOs, e.g., regarding response times and costs. However, the design space to be searched for well-suited solutions is extremely huge. In this paper, we approach this optimization problem with the novel genetic algorithm CDOXplorer. It uses techniques of the search-based software engineering field and CDOSim to assess the fitness of CDOs. An experimental evaluation that employs, among others, the cloud environments Amazon EC2 and Microsoft Windows Azure, shows that CDOXplorer can find solutions that surpass those of other state-of-the-art techniques by up to 60%. Our experiment code and data and an implementation of CDOXplorer are available as open source software.
    BibTeX:
    @inproceedings{FreyFH13,
      author = {Sören Frey and Florian Fittkau and Wilhelm Hasselbring},
      title = {Search-Based Genetic Optimization for Deployment and Reconfiguration of Software in the Cloud},
      booktitle = {Proceedings of the 35th International Conference on Software Engineering (ICSE '13)},
      publisher = {IEEE},
      year = {2013},
      pages = {512-521},
      address = {San Francisco, CA, USA},
      month = {18-26 May},
      doi = {http://dx.doi.org/10.1109/ICSE.2013.6606597}
    }
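
    Illustration: a cloud deployment option can be pictured as a small chromosome scored by simulation. In the Python sketch below the encoding, instance catalogue and cost model are invented placeholders (CDOSim would supply the real estimates), and plain random sampling stands in for the genetic search:

      import random

      CATALOGUE = {"small": 1, "medium": 2, "large": 4}   # hypothetical instance types

      def random_cdo():
          # (instance type, number of VMs, CPU% threshold that triggers scaling)
          return (random.choice(list(CATALOGUE)), random.randint(1, 8),
                  random.randint(50, 90))

      def fitness(cdo):
          kind, vms, _ = cdo
          capacity = CATALOGUE[kind] * vms
          cost, response_time = capacity, 100.0 / capacity  # stand-in for CDOSim
          return 0.5 * cost + 0.5 * response_time           # minimise a weighted blend

      print(min((random_cdo() for _ in range(100)), key=fitness))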
    					
    2014.09.02 Juan Pablo Galeotti, Gordon Fraser & Andrea Arcuri Improving Search-based Test Suite Generation with Dynamic Symbolic Execution 2013 Proceedings of IEEE 24th International Symposium on Software Reliability Engineering (ISSRE '13), pp. 360-369, Pasadena CA USA, 4-7 November   Inproceedings Testing and Debugging
    Abstract: Search-based testing can automatically generate unit test suites for object oriented code, but may struggle to generate specific values necessary to cover difficult parts of the code. Dynamic symbolic execution (DSE) efficiently generates such specific values, but may struggle with complex datatypes, in particular those that require sequences of calls for construction. The solution to these problems lies in a hybrid approach that integrates the best of both worlds, but such an integration needs to adapt to the problem at hand so that higher coverage in a few corner cases does not come at the price of lower coverage in the general case. We have extended the Genetic Algorithm (GA) in the Evosuite unit test generator to integrate DSE in an adaptive approach where feedback from the search determines when a problem is suitable for DSE. In experiments on a set of difficult classes our adaptive hybrid approach achieved an increase in code coverage of up to 63% (11% on average); experiments on the SF100 corpus of roughly 9,000 open source classes confirm that the improvement is of practical value, and a comparison with a DSE tool on the Roops set of benchmark classes shows that the hybrid approach improves over both its constituent techniques, GA and DSE.
    BibTeX:
    @inproceedings{GaleottiFA13,
      author = {Juan Pablo Galeotti and Gordon Fraser and Andrea Arcuri},
      title = {Improving Search-based Test Suite Generation with Dynamic Symbolic Execution},
      booktitle = {Proceedings of IEEE 24th International Symposium on Software Reliability Engineering (ISSRE '13)},
      publisher = {IEEE},
      year = {2013},
      pages = {360-369},
      address = {Pasadena, CA, USA},
      month = {4-7 November},
      doi = {http://dx.doi.org/10.1109/ISSRE.2013.6698889}
    }
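
    Illustration: the adaptive integration can be pictured as a probability of invoking the expensive DSE step, nudged by search feedback. The update rule and numbers below are invented for illustration, not EvoSuite's actual mechanism:

      def update_dse_rate(rate, dse_helped, step=0.05, lo=0.05, hi=0.95):
          # Raise the chance of running DSE when it improved coverage,
          # lower it when it wasted search budget.
          rate += step if dse_helped else -step
          return max(lo, min(hi, rate))

      rate = 0.5
      for helped in (True, True, False, True):
          rate = update_dse_rate(rate, helped)
      print(rate)  # ~0.6 after three successes and one failure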
    					
    2014.05.27 Alessio Gambi, Waldemar Hummer & Schahram Dustdar Testing Elastic Systems with Surrogate Models 2013 Proceedings of the 1st International Workshop on Combining Modelling and Search-Based Software Engineering (CMSBSE '13), pp. 8-11, San Francisco CA USA, 20-20 May   Inproceedings Testing and Debugging
    Abstract: We combine search-based test case generation and surrogate models for black-box system testing of elastic systems. We aim to efficiently generate tests that expose functional errors and performance problems related to system elasticity. Elastic systems dynamically change their resource allocation to provide consistent quality of service in the face of workload fluctuations. However, their ability to adapt could be a double-edged sword if not properly designed: They may fail to acquire the right amount of resources or even fail to release them. Black-box system testing may expose such problems by stimulating system elasticity with suitable sequences of interactions. However, finding such sequences is far from trivial because the number of possible combinations of requests over time is unbounded. In this paper, we analyze the problem of generating test cases for elastic systems, cast it as a search-based optimization combined with surrogate models, and present the conceptual framework that supports its execution.
    BibTeX:
    @inproceedings{GambiHD13,
      author = {Alessio Gambi and Waldemar Hummer and Schahram Dustdar},
      title = {Testing Elastic Systems with Surrogate Models},
      booktitle = {Proceedings of the 1st International Workshop on Combining Modelling and Search-Based Software Engineering (CMSBSE '13)},
      publisher = {IEEE},
      year = {2013},
      pages = {8-11},
      address = {San Francisco, CA, USA},
      month = {20-20 May},
      doi = {http://dx.doi.org/10.1109/CMSBSE.2013.6604429}
    }
    					
    2013.06.28 Adnane Ghannem, Ghizlane El Boussaidi & Marouane Kessentini Model Refactoring using Interactive Genetic Algorithm 2013 Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13), Vol. 8084, pp. 96-110, St. Petersburg Russia, 24-26 August   Inproceedings Distribution and Maintenance
    Abstract: Refactoring aims at improving the quality of a design while preserving its semantics. Providing automatic support for refactoring is a challenging problem. This problem can be considered as an optimization problem where the goal is to find appropriate refactoring suggestions using a set of refactoring examples. However, some of the refactorings proposed using this approach do not necessarily make sense depending on the context and the semantics of the system under analysis. This paper proposes an approach that tackles this problem by adapting the Interactive Genetic Algorithm (IGA), which enables interaction with users and integrates their feedback into a classic GA. The proposed algorithm uses a fitness function that combines the structural similarity between the analyzed design model and models from a base of examples, and the designers’ ratings of the refactorings proposed during execution of the classic GA. Experimentation with the approach yielded interesting and promising results.
    BibTeX:
    @inproceedings{GhannemBK13,
      author = {Adnane Ghannem and Ghizlane El Boussaidi and Marouane Kessentini},
      title = {Model Refactoring using Interactive Genetic Algorithm},
      booktitle = {Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13)},
      publisher = {Springer},
      year = {2013},
      volume = {8084},
      pages = {96-110},
      address = {St. Petersburg, Russia},
      month = {24-26 August},
      doi = {http://dx.doi.org/10.1007/978-3-642-39742-4_9}
    }
    					
    2014.05.27 Francisco Gomes de Oliveira Neto, Robert Feldt, Richard Torkar & Patricia D.L. Machado Searching for Models to Evaluate Software Technology 2013 Proceedings of the 1st International Workshop on Combining Modelling and Search-Based Software Engineering (CMSBSE '13), pp. 12-15, San Francisco CA USA, 20-20 May   Inproceedings
    Abstract: Modeling and abstraction are key in all engineering processes and have found extensive use in software engineering. When developing new methodologies and techniques to support software engineers we want to evaluate them on realistic models. However, this is a challenge since (1) it is hard to get industry to give access to their models, and (2) we need a large number of models to systematically evaluate a technology. This paper proposes that search-based techniques can be used to search for models with desirable properties, which can then be used to systematically evaluate model-based technologies. By targeting properties seen in industrial models we can then get the best of both worlds: models that are similar to models used in industry but in quantities that allow extensive experimentation. To exemplify our ideas we consider a specific case in which a model generator is used to create models to test a regression test optimization technique.
    BibTeX:
    @inproceedings{GomesdeOliveiraNetoFTM13,
      author = {Francisco Gomes de Oliveira Neto and Robert Feldt and Richard Torkar and Patricia D. L. Machado},
      title = {Searching for Models to Evaluate Software Technology},
      booktitle = {Proceedings of the 1st International Workshop on Combining Modelling and Search-Based Software Engineering (CMSBSE '13)},
      publisher = {IEEE},
      year = {2013},
      pages = {12-15},
      address = {San Francisco, CA, USA},
      month = {20-20 May},
      doi = {http://dx.doi.org/10.1109/CMSBSE.2013.6604430}
    }
    					
    2014.09.19 Dunwei Gong & Yan Zhang Generating Test Data for Both Path Coverage and Fault Detection using Genetic Algorithms 2013 Frontiers of Computer Science, Vol. 7(6), pp. 822-837, December   Article Testing and Debugging
    Abstract: The aim of software testing is to find faults in a program under test, so generating test data that can expose the faults of a program is very important. To date, current studies on generating test data for path coverage do not perform well in detecting low probability faults on the covered path. The automatic generation of test data for both path coverage and fault detection using genetic algorithms is the focus of this study. To this end, the problem is first formulated as a bi-objective optimization problem with one constraint whose objectives are the number of faults detected in the traversed path and the risk level of these faults, and whose constraint is that the traversed path must be the target path. An evolutionary algorithm is employed to solve the formulated model, and several types of fault detection methods are given. Finally, the proposed method is applied to several real-world programs, and compared with a random method and an evolutionary optimization method in the following three aspects: the number of generations and the time consumption needed to generate desired test data, and the success rate of detecting faults. The experimental results confirm that the proposed method can effectively generate test data that not only traverse the target path but also detect faults lying in it.
    BibTeX:
    @article{GongZ13,
      author = {Dunwei Gong and Yan Zhang},
      title = {Generating Test Data for Both Path Coverage and Fault Detection using Genetic Algorithms},
      journal = {Frontiers of Computer Science},
      year = {2013},
      volume = {7},
      number = {6},
      pages = {822-837},
      month = {December},
      doi = {http://dx.doi.org/10.1007/s11704-013-3024-3}
    }
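
    Illustration: approaches of this kind are typically guided by branch distances along the target path. The toy program and path below are invented to show that ingredient only; the paper's bi-objective model layers fault counts and risk levels on top of such a path constraint:

      K = 1.0  # penalty added for each branch not yet reached

      def path_distance(x):
          # Target path: take the true branch of (x > 10), then of (x % 2 == 0).
          d1 = 0.0 if x > 10 else (10 - x) + K
          if d1 > 0.0:
              return d1 + K          # second branch not reached yet
          return 0.0 if x % 2 == 0 else K

      best = min(range(-50, 51), key=path_distance)
      print(best, path_distance(best))  # 12 0.0: an input traversing the path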
    					
    2016.02.12 Claire Le Goues Automatic Program Repair using Genetic Programming 2013 School: University of Virginia, May   Phdthesis
    Abstract: Software quality is an urgent problem. There are so many bugs in industrial program source code that mature software projects are known to ship with both known and unknown bugs [1], and the number of outstanding defects typically exceeds the resources available to address them [2]. This has become a pressing economic problem whose costs in the United States can be measured in the billions of dollars annually [3]. A dominant reason that software defects are so expensive is that fixing them remains a manual process. The process of identifying, triaging, reproducing, and localizing a particular bug, coupled with the task of understanding the underlying error, identifying a set of code changes that address it correctly, and then verifying those changes, costs both time [4] and money, and the cost of repairing a defect can increase by orders of magnitude as development progresses [5]. As a result, many defects, including critical security defects [6], remain unaddressed for long periods of time [7]. Moreover, humans are error-prone, and many human fixes are imperfect, in that they are either incorrect or lead to crashes, hangs, corruption, or security problems [8]. As a result, defect repair has become a major component of software maintenance, which in turn consumes up to 90% of the total lifecycle cost of a given piece of software [9]. Although considerable research attention has been paid to supporting various aspects of the manual debugging process [10, 11], and also to preempting or dynamically addressing particular classes of vulnerabilities, such as buffer overruns [12, 13], there exist virtually no previous automated solutions that address the synthesis of patches for general bugs as they are reported in real-world software. The primary contribution of this dissertation is GenProg, one of the very first automatic solutions designed to help alleviate the manual bug repair burden by automatically and generically patching bugs in deployed and legacy software. GenProg uses a novel genetic programming algorithm, guided by test cases and domain-specific operators, to effect scalable, expressive, and high quality automated repair. We present experimental evidence to substantiate our claims that GenProg can repair multiple types of bugs in multiple types of programs, and that it can repair a large proportion of the bugs that human developers address in practice (that it is expressive); that it scales to real-world system sizes (that it is scalable); and that it produces repairs that are of sufficiently high quality. Over the course of this evaluation, we contribute new benchmark sets of real bugs in real open-source software and novel experimental frameworks for quantitatively evaluating an automated repair technique. We also contribute a novel characterization of the automated repair search space, and provide analysis both of that space and of the performance and scaling behavior of our technique. General automated software repair was unheard of in 2009. In 2013, it has its own multi-paper sessions in top tier software engineering conferences. The research area shows no signs of slowing down. This dissertation’s description of GenProg provides a detailed report on the state of the art for early automated software repair efforts.
    BibTeX:
    @phdthesis{Goues13,
      author = {Claire Le Goues},
      title = {Automatic Program Repair using Genetic Programming},
      school = {University of Virginia},
      year = {2013},
      month = {May},
      url = {https://www.cs.virginia.edu/~weimer/students/claire-phd.pdf}
    }
    					
    2016.02.12 Claire Le Goues, Stephanie Forrest & Westley Weimer Current Challenges in Automatic Software Repair 2013 Software Quality Journal, Vol. 21(3), pp. 421-443, September   Article
    Abstract: The abundance of defects in existing software systems is unsustainable. Addressing them is a dominant cost of software maintenance, which in turn dominates the life cycle cost of a system. Recent research has made significant progress on the problem of automatic program repair, using techniques such as evolutionary computation, instrumentation and run-time monitoring, and sound synthesis with respect to a specification. This article serves three purposes. First, we review current work on evolutionary computation approaches, focusing on GenProg, which uses genetic programming to evolve a patch to a particular bug. We summarize algorithmic improvements and recent experimental results. Second, we review related work in the rapidly growing subfield of automatic program repair. Finally, we outline important open research challenges that we believe should guide future research in the area.
    BibTeX:
    @article{GouesFW13,
      author = {Claire Le Goues and Stephanie Forrest and Westley Weimer},
      title = {Current Challenges in Automatic Software Repair},
      journal = {Software Quality Journal},
      year = {2013},
      volume = {21},
      number = {3},
      pages = {421-443},
      month = {September},
      doi = {http://dx.doi.org/10.1007/s11219-013-9208-0}
    }
    					
    2014.05.28 Giovani Guizzo, Thelma Elita Colanzi & Silvia Regina Vergilio Applying Design Patterns in Product Line Search-based Design: Feasibility Analysis and Implementation Aspects 2013 Proceedings of the Chilean Computing Conference (JCC '13), Temuco Chile, 11-15 November   Inproceedings Design Tools and Techniques
    Abstract: Some works have manually applied design patterns in Product Line Architectures (PLAs) in order to improve the understanding and reuse of PLA artifacts. However, no search-based approach considers this subject. Applying design patterns in conventional architectures through mutation processes in evolutionary approaches has been proven an efficient technique. In this sense, this work presents results of a feasibility analysis for the automated application of GoF design patterns with a search-based PLA design approach. In addition, the paper proposes a metamodel representation of suitable scopes for applying the design patterns, and a mutation operator procedure to apply the patterns identified as feasible.
    BibTeX:
    @inproceedings{GuizzoCV13,
      author = {Giovani Guizzo and Thelma Elita Colanzi and Silvia Regina Vergilio},
      title = {Applying Design Patterns in Product Line Search-based Design: Feasibility Analysis and Implementation Aspects},
      booktitle = {Proceedings of the Chilean Computing Conference (JCC '13)},
      year = {2013},
      address = {Temuco, Chile},
      month = {11-15 November},
      url = {http://jcc2013.inf.uct.cl/wp-content/proceedings/SCCC/Applying%20design%20patterns%20in%20product%20line%20search-based%20design%20feasibility%20analysis%20and%20implementation%20aspects.pdf}
    }
    					
    2015.12.08 Nirmal Kumar Gupta & Mukesh Kumar Rohil Improving GA based Automated Test Data Generation Technique for Object Oriented Software 2013 Proceedings of the IEEE 3rd International Advance Computing Conference (IACC '13), pp. 249-253, Ghaziabad India, 22-23 February   Inproceedings
    Abstract: Genetic algorithms have been successfully applied in the area of software testing. The demand for automation of test case generation in object-oriented software testing is increasing. Extensive tests can only be achieved through a test automation process. The benefits achieved through test automation include lowering the cost of tests and, consequently, the cost of the whole software development process. Several studies have applied this technique to automate test data generation, but it is expensive and cannot be applied properly to programs with complex structures. Moreover, previous approaches in the area of object-oriented testing are limited in terms of test case feasibility due to call dependences and runtime exceptions. This paper proposes a strategy for evaluating the fitness of both feasible and unfeasible test cases, leading to improved evolutionary search that achieves higher coverage and evolves more unfeasible test cases into feasible ones.
    BibTeX:
    @inproceedings{GuptaR13,
      author = {Nirmal Kumar Gupta and Mukesh Kumar Rohil},
      title = {Improving GA based Automated Test Data Generation Technique for Object Oriented Software},
      booktitle = {Proceedings of the IEEE 3rd International Advance Computing Conference (IACC '13)},
      publisher = {IEEE},
      year = {2013},
      pages = {249-253},
      address = {Ghaziabad, India},
      month = {22-23 February},
      doi = {http://dx.doi.org/10.1109/IAdCC.2013.6514229}
    }
    					
    2013.07.26 Mark Harman Software Engineering: An Ideal Set of Challenges for Evolutionary Computation 2013 Proceedings of The 15th International Conference Companion on Genetic and Evolutionary Computation (GECCO '13), pp. 1759-1760, Amsterdam The Netherlands, 6-10 July   Inproceedings General Aspects and Survey
    Abstract: Software is an engineering material to be optimised. Until comparatively recently many computer scientists doubted this; why would one want to optimise something that could be made perfect by pure logical reasoning? However, the wider community has come to realise that, while very small programs may be perfect in isolation, larger software systems may never be (because the world in which they operate is not perfect). Once we accept this, we soon arrive at evolutionary computation as a means of optimising software. However, software is not merely some other engineering material to be optimised. Software is virtual and inherently adaptive, making it better suited to evolutionary computation than any other engineering material. This is leading to breakthroughs at the interface of software engineering and evolutionary computation, though there are still many exciting open problems for evolutionary computation researchers to get their teeth into. This talk will cover some of these recent developments in Search Based Software Engineering (SBSE) and Dynamic Adaptive SBSE.
    BibTeX:
    @inproceedings{Harman13,
      author = {Mark Harman},
      title = {Software Engineering: An Ideal Set of Challenges for Evolutionary Computation},
      booktitle = {Proceedings of The 15th International Conference Companion on Genetic and Evolutionary Computation (GECCO '13)},
      year = {2013},
      pages = {1759-1760},
      address = {Amsterdam, The Netherlands},
      month = {6-10 July},
      doi = {http://dx.doi.org/10.1145/2464576.2480770}
    }
    					
    2013.08.05 Mark Harman, John A. Clark & Mel Ó Cinnéide Dynamic Adaptive Search Based Software Engineering Needs Fast Approximate Metrics 2013 Proceedings of the 4th International Workshop on Emerging Trends in Software Metrics (WETSoM ‘13), pp. 1-6, San Francisco CA USA, 21-21 May   Inproceedings General Aspects and Survey
    Abstract: Search Based Software Engineering (SBSE) uses fitness functions to guide an automated search for solutions to challenging software engineering problems. The fitness function is a form of software metric, so there is a natural and close interrelationship between software metrics and SBSE. SBSE can be used as a way to experimentally validate metrics, revealing startling conflicts between metrics that purport to measure the same software attributes. SBSE also requires new forms of surrogate metrics. This topic is less well studied and, therefore, remains an interesting open problem for future work. This paper overviews recent results on SBSE for experimental metric validation and discusses the open problem of fast approximate surrogate metrics for dynamic adaptive SBSE.
    BibTeX:
    @inproceedings{HarmanCO13,
      author = {Mark Harman and John A. Clark and Mel Ó Cinnéide},
      title = {Dynamic Adaptive Search Based Software Engineering Needs Fast Approximate Metrics},
      booktitle = {Proceedings of the 4th International Workshop on Emerging Trends in Software Metrics (WETSoM ‘13)},
      publisher = {IEEE},
      year = {2013},
      pages = {1-6},
      address = {San Francisco, CA, USA},
      month = {21-21 May},
      doi = {http://dx.doi.org/10.1109/WETSoM.2013.6619329}
    }
    					
    2013.04.12 Mark Harman, Kiran Lakhotia, Jeremy Singer, David R. White & Shin Yoo Cloud Engineering is Search Based Software Engineering too 2013 Journal of Systems and Software, Vol. 86(9), pp. 2225-2241, September   Article General Aspects and Survey
    Abstract: Many of the problems posed by the migration of computation to cloud platforms can be formulated and solved using techniques associated with Search Based Software Engineering (SBSE). Much of cloud software engineering involves problems of optimisation: performance, allocation, assignment and the dynamic balancing of resources to achieve pragmatic trade-offs between many competing technical and business objectives. SBSE is concerned with the application of computational search and optimisation to solve precisely these kinds of software engineering challenges. Interest in both cloud computing and SBSE has grown rapidly in the past five years, yet there has been little work on SBSE as a means of addressing cloud computing challenges. Like many computationally demanding activities, SBSE has the potential to benefit from the cloud; ‘SBSE in the cloud’. However, this paper focuses, instead, on the ways in which SBSE can benefit cloud computing. It thus develops the theme of ‘SBSE for the cloud’, formulating cloud computing challenges in ways that can be addressed using SBSE.
    BibTeX:
    @article{HarmanLSWY,
      author = {Mark Harman and Kiran Lakhotia and Jeremy Singer and David R. White and Shin Yoo},
      title = {Cloud Engineering is Search Based Software Engineering too},
      journal = {Journal of Systems and Software},
      year = {2013},
      volume = {86},
      number = {9},
      pages = {2225-2241},
      month = {September},
      doi = {http://dx.doi.org/10.1016/j.jss.2012.10.027}
    }
    					
    2013.09.26 Mark Harman, William B. Langdon & Westley Weimer Genetic Programming for Reverse Engineering 2013 Proceedings of the 20th Working Conference on Reverse Engineering (WCRE '13) - Keynote, pp. 1-10, Landau Germany, 14-17 October   Inproceedings Distribution and Maintenance
    Abstract: This paper overviews the application of Search Based Software Engineering (SBSE) to reverse engineering with a particular emphasis on the growing importance of recent developments in genetic programming and genetic improvement for reverse engineering. This includes work on SBSE for re-modularisation, refactoring, regression testing, syntax-preserving slicing and dependence analysis, concept assignment and feature location, bug fixing, and code migration. We also explore the possibilities for new directions in research using GP and GI for partial evaluation, amorphous slicing, and product lines, with a particular focus on code transplantation.
    BibTeX:
    @inproceedings{HarmanLW13,
      author = {Mark Harman and William B. Langdon and Westley Weimer},
      title = {Genetic Programming for Reverse Engineering},
      booktitle = {Proceedings of the 20th Working Conference on Reverse Engineering (WCRE '13) - Keynote},
      publisher = {IEEE},
      year = {2013},
      pages = {1-10},
      address = {Landau, Germany},
      month = {14-17 October},
      doi = {http://dx.doi.org/10.1109/WCRE.2013.6671274}
    }
    					
    2014.05.28 Evelyn Nicole Haslinger, Roberto E. Lopez-Herrejon & Alexander Egyed Improving CASA Runtime Performance by Exploiting Basic Feature Model Analysis 2013 (arXiv:1311.7313v2)   Techreport Testing and Debugging
    Abstract: In Software Product Line Engineering (SPLE) families of systems are designed, rather than developing the individual systems independently. Combinatorial Interaction Testing has proven to be effective for testing in the context of SPLE, where a representative subset of products is chosen for testing in place of the complete family. Such a subset of products can be determined by computing a so called t-wise Covering Array (tCA), whose computation is NP-complete. Recently, reduction rules that exploit basic feature model analysis have been proposed that reduce the number of elements that need to be considered during the computation of tCAs for Software Product Lines (SPLs). We applied these rules to CASA, a simulated annealing algorithm for tCA generation for SPLs. We evaluated the adapted version of CASA using 133 publicly available feature models and recorded an average speedup of 61.8% in median execution time, while at the same time preserving the coverage of the generated array.
    BibTeX:
    @techreport{HaslingerLE13,
      author = {Evelyn Nicole Haslinger and Roberto E. Lopez-Herrejon and Alexander Egyed},
      title = {Improving CASA Runtime Performance by Exploiting Basic Feature Model Analysis},
      year = {2013},
      number = {arXiv:1311.7313v2},
      url = {http://arxiv.org/pdf/1311.7313v2.pdf}
    }
    					
    2014.05.28 Christopher Henard, Mike Papadakis, Gilles Perrouin, Jacques Klein & Yves Le Traon Towards Automated Testing and Fixing of Re-engineered Feature Models 2013 Proceedings of the 35th International Conference on Software Engineering (ICSE '13), pp. 1245-1248, San Francisco CA USA, 18-26 May   Inproceedings Testing and Debugging
    Abstract: Mass customization of software products requires their efficient tailoring performed through combination of features. Such features and the constraints linking them can be represented by Feature Models (FMs), allowing formal analysis, derivation of specific variants and interactive configuration. Since they are seldom present in existing systems, techniques to re-engineer FMs have been proposed. These are nevertheless error-prone and require human intervention. This paper introduces an automated search-based process to test and fix FMs so that they adequately represent actual products. Preliminary evaluation on the Linux kernel FM exhibits erroneous FM constraints and a significant reduction of the inconsistencies.
    BibTeX:
    @inproceedings{HenardPPKT13,
      author = {Christopher Henard and Mike Papadakis and Gilles Perrouin and Jacques Klein and Yves Le Traon},
      title = {Towards Automated Testing and Fixing of Re-engineered Feature Models},
      booktitle = {Proceedings of the 35th International Conference on Software Engineering (ICSE '13)},
      publisher = {IEEE},
      year = {2013},
      pages = {1245-1248},
      address = {San Francisco, CA, USA},
      month = {18-26 May},
      doi = {http://dx.doi.org/10.1109/ICSE.2013.6606689}
    }
    					
    2014.09.02 Wei He & Ruilian Zhao Sequential Pattern Mining Based Test Case Regeneration 2013 Journal of Software, Vol. 8(12), pp. 3105-3113, December   Article Testing and Debugging
    Abstract: Automated test generation for object-oriented programs is an essential and yet a difficult task. Many automated test generation approaches produce test cases entirely from the program under test, without considering useful information from already created test cases. This paper presents an approach to regenerate test cases by exploiting frequently-used method call sequences from a test repository. Particularly, for an object-oriented program under test, a sequential pattern mining strategy is employed to obtain frequent subsequences of method invocations as sequential patterns from the corresponding test repository, and then a GA-based test case regeneration strategy is used to produce new test cases on the basis of the sequential patterns. A prototype called SPM-RGN is developed and is applied to generate test cases for actual Java programs. Empirical results show that SPM-RGN can achieve 47.5%, 11.2% and 4.5% higher branch coverage than three existing automated test generators. Besides, SPM-RGN produces 85.1%, 28.1% and 27.4% shorter test cases than those test generators. Therefore, the test cases generated by SPM-RGN are more effective and easier to understand.
    BibTeX:
    @article{HeZ13,
      author = {Wei He and Ruilian Zhao},
      title = {Sequential Pattern Mining Based Test Case Regeneration},
      journal = {Journal of Software},
      year = {2013},
      volume = {8},
      number = {12},
      pages = {3105-3113},
      month = {December},
      doi = {http://dx.doi.org/10.4304/jsw.8.12.3105-3113}
    }
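
    Illustration: the mining half of such a pipeline can be sketched as counting how often a candidate call sequence occurs, in order, inside existing tests; the method names, tests and threshold below are invented, and real sequential pattern miners are considerably more elaborate:

      def contains(test, pattern):
          # True if `pattern` is an in-order (not necessarily contiguous)
          # subsequence of the method calls in `test`.
          it = iter(test)
          return all(call in it for call in pattern)

      tests = [["new", "open", "write", "close"],
               ["new", "open", "read", "close"],
               ["new", "write"]]
      pattern = ["new", "open", "close"]
      support = sum(contains(t, pattern) for t in tests) / len(tests)
      print(support)  # ~0.67: frequent enough to seed regenerated tests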
    					
    2016.03.09 Ming Huang, Shujie Guo, Xu Liang & Xuan Jiao Research on Regression Test Case Selection Based on Improved Genetic Algorithm 2013 Proceedings of the 3rd International Conference on Computer Science and Network Technology (ICCSNT '13), pp. 256-259, Dalian China, 12-13 October   Inproceedings
    Abstract: In order to obtain an optimal selection of test cases, an approach based on an improved genetic algorithm is proposed. The genetic operators of selection, crossover and mutation are improved, which effectively enhances the algorithm's global search ability. Experiments are carried out to verify the feasibility of the improved algorithm. The results show that, compared with the classical algorithm, the improved algorithm greatly improves search performance and is effective in solving the regression test case selection problem.
    BibTeX:
    @inproceedings{HuangGLJ13,
      author = {Ming Huang and Shujie Guo and Xu Liang and Xuan Jiao},
      title = {Research on Regression Test Case Selection Based on Improved Genetic Algorithm},
      booktitle = {Proceedings of the 3rd International Conference on Computer Science and Network Technology (ICCSNT '13)},
      publisher = {IEEE},
      year = {2013},
      pages = {256-259},
      address = {Dalian, China},
      month = {12-13 October},
      doi = {http://dx.doi.org/10.1109/ICCSNT.2013.6967108}
    }
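
    Illustration: for reference, a generic bit-string GA for test selection; the coverage data, penalty weight and plain replace-the-worst loop are illustrative assumptions, not the paper's improved operators:

      import random

      coverage = [{1, 2}, {2, 3}, {4}, {1, 4, 5}]    # requirements hit per test

      def fitness(bits):
          # Reward covered requirements, lightly penalise suite size.
          covered = set()
          for i, keep in enumerate(bits):
              if keep:
                  covered |= coverage[i]
          return len(covered) - 0.1 * sum(bits)

      def mutate(bits):
          child = bits[:]
          child[random.randrange(len(child))] ^= 1   # flip one selection bit
          return child

      pop = [[random.randint(0, 1) for _ in coverage] for _ in range(10)]
      for _ in range(100):
          pop.sort(key=fitness, reverse=True)        # best first
          pop[-1] = mutate(pop[0])                   # replace the worst
      print(pop[0], fitness(pop[0]))                 # e.g. [0, 1, 0, 1] 4.8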
    					
    2014.09.02 Muhammad Zohaib Iqbal, Andrea Arcuri & Lionel Briand Environment Modeling and Simulation for Automated Testing of Soft Real-time Embedded Software 2013 Software & Systems Modeling, April   Article Testing and Debugging
    Abstract: Given the challenges of testing at the system level, only a fully automated approach can really scale up to industrial real-time embedded systems (RTES). Our goal is to provide a practical approach to the model-based testing of RTES by allowing system testers, who are often not familiar with the system’s design but are application domain experts, to model the system environment in such a way as to enable its black-box test automation. Environment models can support the automation of three tasks: the code generation of an environment simulator to enable testing on the development platform or without involving actual hardware, the selection of test cases, and the evaluation of their expected results (oracles). From a practical standpoint—and such considerations are crucial for industrial adoption—environment modeling should be based on modeling standards (1) that are at an adequate level of abstraction, (2) that software engineers are familiar with, and (3) that are well supported by commercial or open source tools. In this paper, we propose a precise environment modeling methodology fitting these requirements and discuss how these models can be used to generate environment simulators. The environment models are expressed using UML/MARTE and OCL, which are international standards for real-time systems and constraint modeling. The presented techniques are evaluated on a set of three artificial problems and on two industrial RTES.
    BibTeX:
    @article{IqbalAB13,
      author = {Muhammad Zohaib Iqbal and Andrea Arcuri and Lionel Briand},
      title = {Environment Modeling and Simulation for Automated Testing of Soft Real-time Embedded Software},
      journal = {Software & Systems Modeling},
      year = {2013},
      month = {April},
      doi = {http://dx.doi.org/10.1007/s10270-013-0328-6}
    }
    					
    2014.08.12 Yue Jia, Myra B. Cohen, Mark Harman & Justyna Petke Learning Combinatorial Interaction Testing Strategies using Hyperheuristic Search 2013 (RN/13/17)   Techreport
    Abstract: Two decades of bespoke Combinatorial Interaction Testing (CIT) algorithm development have left software engineers with a bewildering choice of configurable system testing techniques. This paper introduces a single hyperheuristic algorithm that learns CIT strategies, providing a single generalist approach. We report experiments that show that our algorithm competes with known best solutions across constrained and unconstrained problems. For all 26 real world subjects and 29 of the 30 constrained benchmark problems studied, it equals or improves upon the best known result. We also present evidence that our algorithm's strong generic performance is caused by its effective unsupervised learning. Hyperheuristic search is thus a promising way to relocate CIT design intelligence from human to machine.
    BibTeX:
    @techreport{JiaCHP13,
      author = {Yue Jia and Myra B. Cohen and Mark Harman and Justyna Petke},
      title = {Learning Combinatorial Interaction Testing Strategies using Hyperheuristic Search},
      year = {2013},
      number = {RN/13/17},
      url = {http://www.cs.ucl.ac.uk/fileadmin/UCL-CS/research/Research_Notes/RN_13_17.pdf}
    }
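
    Illustration: the hyperheuristic control layer can be sketched as a score per low-level operator, with operators chosen in proportion to how often they recently helped; the operator names, decay rate and reward below are invented, not the paper's learning mechanism:

      import random

      scores = {"add_row": 1.0, "swap_values": 1.0, "mutate_cell": 1.0}

      def pick_operator():
          ops, weights = zip(*scores.items())
          return random.choices(ops, weights=weights)[0]

      def reward(op, improved):
          # Exponentially decay old evidence; reinforce operators that helped.
          scores[op] = 0.9 * scores[op] + (1.0 if improved else 0.0)

      op = pick_operator()
      reward(op, improved=True)   # feedback from the fitness change
      print(op, scores)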
    					
    2013.06.28 Sabrine Kalboussi, Slim Bechikh, Marouane Kessentini & Lamjed Ben Said Preference-Based Many-Objective Evolutionary Testing Generates Harder Test Cases for Autonomous Agents 2013 Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13), Vol. 8084, pp. 245-250, St. Petersburg Russia, 24-26 August   Inproceedings Testing and Debugging
    Abstract: Despite the high number of existing works on software testing within the SBSE community, very few address the problem of agent testing. The most prominent work in this direction is by Nguyen et al. [13], which formulates this problem as a bi-objective optimization problem to search for hard test cases from a robustness viewpoint. In this paper, we extend this work by: (1) proposing a new seven-objective formulation of this problem and (2) solving it by means of a preference-based many-objective evolutionary method. The obtained results show that our approach generates harder test cases than those of Nguyen et al.'s method. Moreover, Nguyen et al.'s method becomes a special case of ours, since the user can incorporate his/her preferences within the search process by emphasizing some testing aspects over others.
    BibTeX:
    @inproceedings{KalboussiBKS13,
      author = {Sabrine Kalboussi and Slim Bechikh and Marouane Kessentini and Lamjed Ben Said},
      title = {Preference-Based Many-Objective Evolutionary Testing Generates Harder Test Cases for Autonomous Agents},
      booktitle = {Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13)},
      publisher = {Springer},
      year = {2013},
      volume = {8084},
      pages = {245-250},
      address = {St. Petersburg, Russia},
      month = {24-26 August},
      doi = {http://dx.doi.org/10.1007/978-3-642-39742-4_19}
    }
    					
    2014.09.19 Sabrine Kalboussi, Slim Bechikh, Marouane Kessentini & Lamjed Ben Said On the Influence of the Number of Objectives in Evolutionary Autonomous Software Agent Testing 2013 Proceedings of the 25th International Conference on Tools with Artificial Intelligence (ICTAI '13), pp. 229-234, Herndon VA USA, 4-6 November   Inproceedings Testing and Debugging
    Abstract: Autonomous software agents are increasingly used in a wide range of applications, so testing these entities is crucial. However, testing autonomous agents remains a hard task, since they may react in different ways to the same input over time. To address this problem, Nguyen et al. [6] introduced the first approach that uses evolutionary optimization to search for challenging test cases. In this paper, we extend this work by studying experimentally the effect of the number of objectives on the obtained test cases. This is achieved by proposing five additional objectives and solving the resulting problem by means of a Preference-based Many-Objective Evolutionary Testing (P-MOET) method. The obtained results show that the hardness of test cases increases as the number of objectives rises.
    BibTeX:
    @inproceedings{KalboussiBKS13b,
      author = {Sabrine Kalboussi and Slim Bechikh and Marouane Kessentini and Lamjed Ben Said},
      title = {On the Influence of the Number of Objectives in Evolutionary Autonomous Software Agent Testing},
      booktitle = {Proceedings of the 25th International Conference on Tools with Artificial Intelligence (ICTAI '13)},
      publisher = {IEEE},
      year = {2013},
      pages = {229-234},
      address = {Herndon, VA, USA},
      month = {4-6 November},
      doi = {http://dx.doi.org/10.1109/ICTAI.2013.43}
    }
    					
    2014.09.02 Gregory M. Kapfhammer, Phil McMinn & Chris J. Wright Search-Based Testing of Relational Schema Integrity Constraints Across Multiple Database Management Systems 2013 Proceedings of IEEE 6th International Conference on Software Testing, Verification and Validation (ICST '13), pp. 31-40, Luxembourg Luxembourg, 18-22 March   Inproceedings Testing and Debugging
    Abstract: Much attention has been given to testing applications that interact with database management systems, and to testing database management systems themselves. However, very little work has been devoted to testing arguably the most important artefact of an application supported by a relational database - the underlying schema. This paper introduces a search-based technique for generating database table data with the intention of exercising the integrity constraints placed on table columns. The development of a schema is a process open to flaws, like any stage of application development, and its cornerstone role in an application means that defects need to be found early in order to prevent knock-on effects to other parts of a project and the spiralling bug-fixing costs that may be incurred. Examples of such flaws include incomplete primary keys, incorrect foreign keys, and omissions of NOT NULL declarations. Using mutation analysis, this paper presents an empirical study evaluating the effectiveness of our proposed technique and comparing it against DBMonster, a popular tool for generating table data. With competitive or faster data generation times, our method outperforms DBMonster in terms of both constraint coverage and mutation score.
    BibTeX:
    @inproceedings{KapfhammerMW13,
      author = {Gregory M. Kapfhammer and Phil McMinn and Chris J. Wright},
      title = {Search-Based Testing of Relational Schema Integrity Constraints Across Multiple Database Management Systems},
      booktitle = {Proceedings of IEEE 6th International Conference on Software Testing, Verification and Validation (ICST '13)},
      publisher = {IEEE},
      year = {2013},
      pages = {31-40},
      address = {Luxembourg, Luxembourg},
      month = {18-22 March},
      doi = {http://dx.doi.org/10.1109/ICST.2013.47}
    }
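    Code sketch: the fitness functions typically used in this line of work are distance-style measures over the integrity-constraint predicates. The following is a minimal illustrative sketch, not the tool evaluated in the paper; the column names, constraint set and distance rules are all assumed for illustration.

      # Hypothetical distance-based fitness for driving table data toward
      # exercising integrity constraints (both satisfying and violating them).
      # Not the implementation evaluated in the paper.

      def not_null_distance(value, want_violation):
          """0 when the row exercises the desired NOT NULL outcome."""
          violated = value is None
          return 0.0 if violated == want_violation else 1.0

      def unique_distance(value, existing, want_violation):
          """0 when the row exercises the desired UNIQUE outcome."""
          if want_violation:   # need a duplicate: reward closeness to one
              return 0.0 if value in existing else min(abs(value - e) for e in existing)
          return 0.0 if value not in existing else 1.0

      def fitness(row, existing_ids, want_violation):
          """Sum of constraint distances; 0 means the test goal is reached."""
          return (not_null_distance(row["name"], want_violation)
                  + unique_distance(row["id"], existing_ids, want_violation))

      # A search such as the alternating variable method minimises this value.
      print(fitness({"id": 7, "name": None}, {5, 7}, want_violation=True))  # 0.0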
    					
    2014.05.27 Reza Karimpour & Guenther Ruhe Bi-criteria Genetic Search for Adding New Features into an Existing Product Line 2013 Proceedings of the 1st International Workshop on Combining Modelling and Search-Based Software Engineering (CMSBSE '13), pp. 34-38, San Francisco CA USA, 20-20 May   Inproceedings
    Abstract: Software product line evolution involves decisions such as finding which products are better candidates for realizing new feature requests. In this paper, we propose a solution for finding trade-off evolution alternatives for products, balancing overall value against product integrity. The purpose of this study is to support product managers in feature selection for an existing product line. To this end, the feature model of the product line is first encoded as a binary string. We then employ a bi-criteria genetic search algorithm, NSGA-II, to find possible alternatives with different value and product integrity. From the proposed set of trade-off alternatives, the product line manager can select the solutions that best fit their concerns. The implementation has been initially evaluated on two product line configurations.
    BibTeX:
    @inproceedings{KarimpourR13,
      author = {Reza Karimpour and Guenther Ruhe},
      title = {Bi-criteria Genetic Search for Adding New Features into an Existing Product Line},
      booktitle = {Proceedings of the 1st International Workshop on Combining Modelling and Search-Based Software Engineering (CMSBSE '13)},
      publisher = {IEEE},
      year = {2013},
      pages = {34-38},
      address = {San Francisco, CA, USA},
      month = {20-20 May},
      doi = {http://dx.doi.org/10.1109/CMSBSE.2013.6604434}
    }
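    Code sketch: to make the encoding concrete, each bit of the chromosome switches one optional feature on or off, and each bitstring is scored on the two objectives (overall value, product integrity). The sketch below uses random sampling plus a Pareto-dominance filter in place of the NSGA-II loop used in the paper; the feature values and penalties are invented for illustration.

      import random

      # Hypothetical per-feature data: value added by a feature, and an
      # integrity penalty when the feature fits the product poorly.
      VALUE   = [5, 3, 8, 2, 6]
      PENALTY = [1, 4, 2, 5, 1]

      def objectives(bits):
          """(overall value, integrity) of one configuration; both maximised."""
          value = sum(v for v, b in zip(VALUE, bits) if b)
          integrity = -sum(p for p, b in zip(PENALTY, bits) if b)
          return value, integrity

      def dominates(a, b):
          """Pareto dominance between two maximised objective tuples."""
          return all(x >= y for x, y in zip(a, b)) and a != b

      # Random sampling stands in for the NSGA-II loop used in the paper.
      pop = [[random.randint(0, 1) for _ in VALUE] for _ in range(64)]
      scored = [(bits, objectives(bits)) for bits in pop]
      front = [(b, f) for b, f in scored
               if not any(dominates(g, f) for _, g in scored)]
      for bits, (value, integrity) in front:
          print(bits, "value:", value, "integrity:", integrity)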
    					
    2015.12.09 K. Karnavel & J. Santhoshkumar Automated Software Testing for Application Maintenance by using Bee Colony Optimization algorithms (BCO) 2013 Proceedings of International Conference on Information Communication and Embedded Systems (ICICES '13), pp. 327-330, Chennai India, 21-22 February   Inproceedings
    Abstract: The discipline of software engineering encompasses knowledge, tools, and methods for defining software requirements and for performing software design, construction, testing, and maintenance tasks. In software development practice, testing accounts for as much as 50% of total development effort. Testing can be manual, automated, or a combination of both; manual testing is the process of executing the application and interacting with it by hand, specifying inputs and observing outputs. The objective of this work is to test a software application with a suitable algorithm: the proposed system aims to reduce testing time and to find and resolve bugs through regression testing. A safe and efficient regression-selection approach based on a Bee Colony Optimization (BCO) algorithm constructs control flow graphs for a program and its modified version, and uses these graphs to select, from the original test suite, the tests that execute changed code. Traceability relations are used for completeness checking, identifying missing elements across the application and classifying defects found during testing.
    BibTeX:
    @inproceedings{KarnavelS13,
      author = {K. Karnavel and J. Santhoshkumar},
      title = {Automated Software Testing for Application Maintenance by using Bee Colony Optimization algorithms (BCO)},
      booktitle = {Proceedings of International Conference on Information Communication and Embedded Systems (ICICES '13)},
      publisher = {IEEE},
      year = {2013},
      pages = {327-330},
      address = {Chennai, India},
      month = {21-22 February},
      doi = {http://dx.doi.org/10.1109/ICICES.2013.6508211}
    }
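    Code sketch: the safe selection rule described in the abstract can be shown independently of the bee colony heuristic. Assuming each test's control-flow-graph node coverage is known (all data below is hypothetical), a minimal sketch selects exactly the tests that execute changed code.

      # Hypothetical safe regression test selection: keep every test whose
      # recorded coverage touches a node that changed between versions. The
      # BCO heuristic in the paper searches over such selections; the
      # selection rule itself is shown directly here.

      coverage = {                      # CFG nodes executed by each test (assumed)
          "t1": {"n1", "n2", "n5"},
          "t2": {"n1", "n3"},
          "t3": {"n4", "n6"},
      }
      changed_nodes = {"n3", "n6"}      # nodes differing between old and new CFG

      selected = {t for t, nodes in coverage.items() if nodes & changed_nodes}
      print(sorted(selected))           # ['t2', 't3']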
    					
    2014.09.19 Mohammod Abul Kashem & Mohammad Naderuzzaman An Enhanced Pairwise Search Approach for Generating Optimum Number of Test Data and Reduce Execution Time 2013 Computer Engineering and Intelligent Systems, Vol. 4(1), pp. 19-28   Article Testing and Debugging
    Abstract: Testing is among the most important tasks in building error-free software. Because the resources and time available for producing software are limited, exhaustive testing (testing all possible combinations of input data) is not feasible. Pairwise (2-way) test data generation is an effective alternative that avoids this combinatorial explosion and reduces cost; the underlying assumption is that most software faults are caused by unusual combinations of pairs of input values. Demand is therefore growing in the software industry for minimising the number of generated test cases and reducing execution time. This paper proposes an enhanced pairwise search approach that generates a near-optimal number of input values for testing. The approach searches for the pairs with the greatest coverage by pairing parameters, and adopts a one-test-at-a-time strategy to construct the final test suite. Compared with existing strategies, the proposed approach is effective in terms of both the number of generated test cases and execution time.
    BibTeX:
    @article{KashemN13,
      author = {Mohammod Abul Kashem and Mohammad Naderuzzaman},
      title = {An Enhanced Pairwise Search Approach for Generating Optimum Number of Test Data and Reduce Execution Time},
      journal = {Computer Engineering and Intelligent Systems},
      year = {2013},
      volume = {4},
      number = {1},
      pages = {19-28},
      url = {http://iiste.org/Journals/index.php/CEIS/article/download/3845/3916}
    }
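    Code sketch: the one-test-at-a-time strategy is essentially a greedy covering loop: repeatedly construct the test that covers the most not-yet-covered value pairs. A minimal sketch under assumed parameter domains follows; it is an illustration of the general strategy, not the paper's exact algorithm.

      from itertools import combinations, product

      domains = {"os": ["linux", "mac"], "db": ["mysql", "pg"], "ui": ["web", "cli"]}
      params = list(domains)

      def pairs_of(test):
          """All parameter-value pairs exercised by one complete test."""
          return {((p1, test[p1]), (p2, test[p2]))
                  for p1, p2 in combinations(params, 2)}

      # Every pair a pairwise-adequate suite must cover at least once.
      uncovered = {((p1, v1), (p2, v2))
                   for p1, p2 in combinations(params, 2)
                   for v1 in domains[p1] for v2 in domains[p2]}

      suite = []
      while uncovered:
          # Greedy: pick the candidate test covering the most uncovered pairs.
          best = max((dict(zip(params, values))
                      for values in product(*(domains[p] for p in params))),
                     key=lambda t: len(pairs_of(t) & uncovered))
          suite.append(best)
          uncovered -= pairs_of(best)

      print(len(suite), "tests:", suite)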
    					
    2013.08.05 Joseph Kempka, Phil McMinn & Dirk Sudholt A Theoretical Runtime and Empirical Analysis of Different Alternating Variable Searches for Search-based Testing 2013 Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation (GECCO '13), pp. 1445-1452, Amsterdam The Netherlands, 6-10 July   Inproceedings Testing and Debugging
    Abstract: The Alternating Variable Method (AVM) has been shown to be a surprisingly effective and efficient means of generating branch-covering inputs for procedural programs. However, there has been little work that has sought to analyse the technique and further improve its performance. This paper proposes two new local searches that may be used in conjunction with the AVM, Geometric and Lattice Search. A theoretical runtime analysis shows that under certain conditions, the use of these searches is proved to outperform the original AVM. These theoretical results are confirmed by an empirical study with four programs, which shows that speed increases of over 50% are possible in practice.
    BibTeX:
    @inproceedings{KempkaMS13,
      author = {Joseph Kempka and Phil McMinn and Dirk Sudholt},
      title = {A Theoretical Runtime and Empirical Analysis of Different Alternating Variable Searches for Search-based Testing},
      booktitle = {Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation (GECCO '13)},
      publisher = {ACM},
      year = {2013},
      pages = {1445-1452},
      address = {Amsterdam, The Netherlands},
      month = {6-10 July},
      doi = {http://dx.doi.org/10.1145/2463372.2463549}
    }
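    Code sketch: for readers unfamiliar with the AVM, it probes one variable at a time with +/-1 exploratory moves and then accelerates the step geometrically while fitness improves; this is the mechanism the paper's Geometric and Lattice Search variants refine. A minimal sketch on a hypothetical branch-distance-style fitness (not the paper's benchmark programs):

      def avm(fitness, x, max_evals=10_000):
          """Minimal Alternating Variable Method: exploratory +/-1 probes
          followed by accelerating pattern moves in the improving direction."""
          best = fitness(x)
          evals, improved = 1, True
          while improved and evals < max_evals:
              improved = False
              for i in range(len(x)):
                  for direction in (+1, -1):
                      step = direction
                      while True:                     # pattern moves: 1, 2, 4, ...
                          candidate = list(x)
                          candidate[i] += step
                          score = fitness(candidate)
                          evals += 1
                          if score < best:
                              x, best, improved = candidate, score, True
                              step *= 2               # geometric acceleration
                          else:
                              break
          return x, best

      # Hypothetical target: reach a == 40 and b == -7.
      x, best = avm(lambda v: abs(v[0] - 40) + abs(v[1] + 7), [0, 0])
      print(x, best)   # [40, -7] 0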
    					
    2014.05.27 Marouane Kessentini, Philip Langer & Manuel Wimmer Searching Models, Modeling Search: On the Synergies of SBSE and MDE 2013 Proceedings of the 1st International Workshop on Combining Modelling and Search-Based Software Engineering (CMSBSE '13), pp. 51-54, San Francisco CA USA, 20-20 May   Inproceedings
    Abstract: In past years, several researchers have successfully applied search-based optimization algorithms in the software engineering domain to obtain automatically near-optimal solutions to complex problems with huge solution spaces. More recently, such algorithms have also proven useful for solving problems in model engineering. However, applying search-based optimization algorithms to model engineering problems efficiently and effectively is a challenging endeavor, demanding expertise in both search-based algorithms and model engineering formalisms and techniques. In this paper, we report on our experiences in applying such search-based algorithms to model engineering problems and propose a model-driven approach to ease the adoption of search-based algorithms in the area of model engineering.
    BibTeX:
    @inproceedings{KessentiniLW13,
      author = {Marouane Kessentini and Philip Langer and Manuel Wimmer},
      title = {Searching Models, Modeling Search: On the Synergies of SBSE and MDE},
      booktitle = {Proceedings of the 1st International Workshop on Combining Modelling and Search-Based Software Engineering (CMSBSE '13)},
      publisher = {IEEE},
      year = {2013},
      pages = {51-54},
      address = {San Francisco, CA, USA},
      month = {20-20 May},
      doi = {http://dx.doi.org/10.1109/CMSBSE.2013.6604438}
    }
    					
    2013.08.05 Marouane Kessentini, Wafa Werda, Philip Langer & Manuel Wimmer Search-based Model Merging 2013 Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation (GECCO '13), pp. 1453-1460, Amsterdam The Netherlands, 6-10 July   Inproceedings Design Tools and Techniques
    Abstract: In Model-Driven Engineering (MDE) adequate means for collaborative modelling among multiple team members is crucial for large projects. To this end, several approaches exist to identify the operations applied in parallel, to detect conflicts among them, as well as to construct a merged model by incorporating all non-conflicting operations. Conflicts often denote situations where the application of one operation disables the applicability of another operation. Whether one operation disables the other, however, often depends on their application order. To obtain a merged model that maximises the combined effect of all parallel operations, we propose an automated approach for finding the optimal merging sequence that maximises the number of successfully applied operations. Therefore, we adapted and used a heuristic search algorithm to explore the huge search space of all possible operation sequences. The validation results on merging various versions of real-world models confirm that our approach finds operation sequences that successfully incorporate a high number of conflicting operations, which are otherwise not reflected in the merge by current approaches.
    BibTeX:
    @inproceedings{KessentiniWLW13,
      author = {Marouane Kessentini and Wafa Werda and Philip Langer and Manuel Wimmer},
      title = {Search-based Model Merging},
      booktitle = {Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation (GECCO '13)},
      publisher = {ACM},
      year = {2013},
      pages = {1453-1460},
      address = {Amsterdam, The Netherlands},
      month = {6-10 July},
      doi = {http://dx.doi.org/10.1145/2463372.2463553}
    }
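    Code sketch: the underlying search space is the set of orderings of edit operations, where an ordering is better if more operations remain applicable when replayed. A hypothetical hill-climbing sketch over a toy disabling relation follows; the paper uses a heuristic search of this same space, and the operation names and conflicts below are invented.

      import random

      # Toy operations: applying one may disable others (e.g. deleting a
      # class disables renaming it). Modelled abstractly as a mapping.
      DISABLES = {"deleteA": {"renameA"}, "renameA": set(),
                  "addB": set(), "moveA": {"deleteA"}}
      OPS = list(DISABLES)

      def applied_count(order):
          """Replay operations in order; count those still applicable."""
          disabled, applied = set(), 0
          for op in order:
              if op not in disabled:
                  applied += 1
                  disabled |= DISABLES[op]
          return applied

      def hill_climb(order, steps=200):
          order = list(order)
          best = applied_count(order)
          for _ in range(steps):
              i, j = random.sample(range(len(order)), 2)   # swap neighbourhood
              order[i], order[j] = order[j], order[i]
              score = applied_count(order)
              if score >= best:
                  best = score                              # keep the swap
              else:
                  order[i], order[j] = order[j], order[i]   # revert
          return order, best

      print(hill_climb(OPS))   # an order applying all four operations exists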
    					
    2014.09.02 Shaukat Ali Khan & Aamer Nadeem Automated Test Data Generation for Coupling Based Integration Testing of Object Oriented Programs Using Particle Swarm Optimization (PSO) 2013 Proceedings of the Seventh International Conference on Genetic and Evolutionary Computing (ICGEC '13), pp. 115-124, Prague Czech Republic, 25-27 August   Inproceedings Testing and Debugging
    Abstract: Automated test data generation is a challenging problem for researchers in the area of software testing. To date, most work on test data generation has been at the unit level, where generation involves executing a single test path, interaction with other components is minimal, and there is no passing of formal and actual parameters. The problem becomes much more challenging at other levels of testing, including integration and system-level testing. At the integration level, variables are passed as arguments to other components and change their names, and multiple paths across different components must be executed to ensure proper functionality. Recently, evolutionary approaches have proven to be a powerful tool for test data generation. In this paper, we propose a novel approach to test data generation for coupling-based integration testing using particle swarm optimization; to the best of our knowledge, no prior research has applied particle swarm optimization to this problem. Our approach takes as input a coupling path, containing different sub-paths, and generates test data using particle swarm optimization. We also propose a tool architecture for automating the approach; in future work, we will implement it and perform experiments to demonstrate its significance.
    BibTeX:
    @inproceedings{KhanN13,
      author = {Shaukat Ali Khan and Aamer Nadeem},
      title = {Automated Test Data Generation for Coupling Based Integration Testing of Object Oriented Programs Using Particle Swarm Optimization (PSO)},
      booktitle = {Proceedings of the Seventh International Conference on Genetic and Evolutionary Computing (ICGEC '13)},
      publisher = {Springer},
      year = {2013},
      pages = {115-124},
      address = {Prague, Czech Republic},
      month = {25-27 August},
      doi = {http://dx.doi.org/10.1007/978-3-319-01796-9_12}
    }
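    Code sketch: as a minimal illustration of the underlying optimiser, the following bare-bones particle swarm minimises a test-goal fitness. The fitness here is a hypothetical branch-distance function, not the paper's coupling-path fitness, and all parameter settings are assumed defaults.

      import random

      def pso(fitness, dim, n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
          """Bare-bones particle swarm optimisation minimising `fitness`."""
          pos = [[random.uniform(-100, 100) for _ in range(dim)] for _ in range(n)]
          vel = [[0.0] * dim for _ in range(n)]
          pbest = [p[:] for p in pos]
          pbest_f = [fitness(p) for p in pos]
          g = pbest[min(range(n), key=lambda i: pbest_f[i])][:]
          for _ in range(iters):
              for i in range(n):
                  for d in range(dim):
                      r1, r2 = random.random(), random.random()
                      vel[i][d] = (w * vel[i][d]
                                   + c1 * r1 * (pbest[i][d] - pos[i][d])
                                   + c2 * r2 * (g[d] - pos[i][d]))
                      pos[i][d] += vel[i][d]
                  f = fitness(pos[i])
                  if f < pbest_f[i]:
                      pbest[i], pbest_f[i] = pos[i][:], f
                      if f < fitness(g):
                          g = pos[i][:]
          return g, fitness(g)

      # Hypothetical branch distance: inputs must satisfy x + y == 10 and x > 5.
      g, f = pso(lambda v: abs(v[0] + v[1] - 10) + max(0, 6 - v[0]), dim=2)
      print(g, f)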
    					
    2014.05.27 Fitsum Meshesha Kifetew, Wei Jin, Roberto Tiella, Alessandro Orso & Paolo Tonella SBFR: A Search Based Approach for Reproducing Failures of Programs with Grammar Based Input 2013 Proceedings of IEEE/ACM 28th International Conference on Automated Software Engineering (ASE '13), pp. 604-609, Silicon Valley CA USA, 11-15 November   Inproceedings
    Abstract: Reproducing field failures in-house, a step developers must perform when assigned a bug report, is an arduous task. In most cases, developers must be able to reproduce a reported failure using only a stack trace and/or some informal description of the failure. The problem becomes even harder for the large class of programs whose input is highly structured and strictly specified by a grammar. To address this problem, we present SBFR, a search-based failure-reproduction technique for programs with structured input. SBFR formulates failure reproduction as a search problem. Starting from a reported failure and a limited amount of dynamic information about the failure, SBFR exploits the potential of genetic programming to iteratively find legal inputs that can trigger the failure.
    BibTeX:
    @inproceedings{KifetewJTOT13,
      author = {Fitsum Meshesha Kifetew and Wei Jin and Roberto Tiella and Alessandro Orso and Paolo Tonella},
      title = {SBFR: A Search Based Approach for Reproducing Failures of Programs with Grammar Based Input},
      booktitle = {Proceedings of IEEE/ACM 28th International Conference on Automated Software Engineering (ASE '13)},
      publisher = {IEEE},
      year = {2013},
      pages = {604-609},
      address = {Silicon Valley, CA, USA},
      month = {11-15 November},
      doi = {http://dx.doi.org/10.1109/ASE.2013.6693120}
    }
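    Code sketch: the key property SBFR exploits is that candidate inputs are derived from the input grammar, so every individual is legal by construction. A toy illustration of grammar-based derivation follows; the grammar is hypothetical, and the paper evolves such derivations with genetic programming, scored against the reported failure.

      import random

      # Toy grammar (assumed). A search-based failure-reproduction loop would
      # evolve derivations like these, guided by closeness to the stack trace.
      GRAMMAR = {
          "expr": [["num"], ["expr", "+", "expr"], ["(", "expr", ")"]],
          "num":  [["1"], ["2"], ["3"]],
      }

      def derive(symbol, depth=0, max_depth=6):
          """Randomly expand a nonterminal into a grammatically legal string."""
          if symbol not in GRAMMAR:
              return symbol                       # terminal symbol
          options = GRAMMAR[symbol]
          if depth >= max_depth:                  # force short expansions deep down
              options = [o for o in options if len(o) == 1] or options
          return "".join(derive(s, depth + 1, max_depth)
                         for s in random.choice(options))

      print(derive("expr"))   # e.g. "(1+2)+3" -- always legal input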
    					
    2014.05.27 Fitsum Meshesha Kifetew, Annibale Panichella, Andrea De Lucia, Rocco Oliveto & Paolo Tonella Orthogonal Exploration of the Search Space in Evolutionary Test Case Generation 2013 Proceedings of the 2013 International Symposium on Software Testing and Analysis (ISSTA '13), pp. 257-267, Lugano Switzerland, 15-20 July   Inproceedings Testing and Debugging
    Abstract: The effectiveness of evolutionary test case generation based on Genetic Algorithms (GAs) can be seriously impacted by genetic drift, a phenomenon that inhibits the ability of such algorithms to effectively diversify the search and look for alternative potential solutions. In such cases, the search becomes dominated by a small set of similar individuals that lead GAs to converge to a sub-optimal solution and to stagnate, without reaching the desired objective. This problem is particularly common for hard-to-cover program branches, associated with an extremely large solution space. In this paper, we propose an approach to solve this problem by integrating a mechanism for orthogonal exploration of the search space into standard GA. The diversity in the population is enriched by adding individuals in orthogonal directions, hence providing a more effective exploration of the solution space. To the best of our knowledge, no prior work has addressed explicitly the issue of evolution direction based diversification in the context of evolutionary testing. Results achieved on 17 Java classes indicate that the proposed enhancements make GA much more effective and efficient in automating the testing process. In particular, effectiveness (coverage) was significantly improved in 47% of the subjects and efficiency (search budget consumed) was improved in 85% of the subjects on which effectiveness remains the same.
    BibTeX:
    @inproceedings{KifetewPLOT13,
      author = {Fitsum Meshesha Kifetew and Annibale Panichella and Andrea De Lucia and Rocco Oliveto and Paolo Tonella},
      title = {Orthogonal Exploration of the Search Space in Evolutionary Test Case Generation},
      booktitle = {Proceedings of the 2013 International Symposium on Software Testing and Analysis (ISSTA '13)},
      publisher = {ACM},
      year = {2013},
      pages = {257-267},
      address = {Lugano, Switzerland},
      month = {15-20 July},
      doi = {http://dx.doi.org/10.1145/2483760.2483789}
    }
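    Code sketch: the diversification idea can be illustrated in two dimensions: when the search stagnates, inject individuals displaced along directions orthogonal to the population's recent direction of movement. The construction below is a simplified, hypothetical 2-D analogue of the paper's chromosome-level mechanism.

      import random

      def orthogonal(direction):
          """A vector orthogonal to a 2-D evolution direction (2-D case only)."""
          dx, dy = direction
          return (-dy, dx)

      def inject_orthogonal(population, prev_centroid, scale=5.0):
          """Add individuals displaced orthogonally to recent population movement."""
          cx = sum(x for x, _ in population) / len(population)
          cy = sum(y for _, y in population) / len(population)
          ox, oy = orthogonal((cx - prev_centroid[0], cy - prev_centroid[1]))
          return population + [(cx + scale * ox, cy + scale * oy),
                               (cx - scale * ox, cy - scale * oy)]

      pop = [(random.gauss(10, 1), random.gauss(0, 1)) for _ in range(10)]
      print(inject_orthogonal(pop, prev_centroid=(0.0, 0.0)))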
    					
    2013.06.28 Alexey Kolesnichenko, Christopher M. Poskitt & Bertrand Meyer Applying Search in a Contract-Based Automatic Testing Tool 2013 Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13), Vol. 8084, St. Petersburg Russia, 24-26 August   Inproceedings Testing and Debugging
    Abstract: Automated random testing has been shown to be effective at finding faults in a variety of contexts and is deployed in several testing frameworks. AutoTest is one such framework, targeting programs written in Eiffel, an object-oriented language natively supporting executable pre- and postconditions; these respectively serving as test filters and test oracles. In this paper, we propose the integration of search-based techniques—along the lines of Tracey—to try and guide the tool towards input data that leads to violations of the postconditions present in the code; input data that random testing alone might miss, or take longer to find. Furthermore, we propose to minimise the performance impact of this extension by applying GPU programming to amenable parts of the computation.
    BibTeX:
    @inproceedings{KolesnichenkoPM13,
      author = {Alexey Kolesnichenko and Christopher M. Poskitt and Bertrand Meyer},
      title = {Applying Search in a Contract-Based Automatic Testing Tool},
      booktitle = {Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13)},
      publisher = {Springer},
      year = {2013},
      volume = {8084},
      address = {St. Petersburg, Russia},
      month = {24-26 August},
      doi = {http://dx.doi.org/10.1007/978-3-642-39742-4_31}
    }
    					
    2014.09.22 Segla Kpodjedo, Filippo Ricca, Philippe Galinier, Giuliano Antoniol & Yann-Gaël Guéhéneuc MADMatch: Many-to-Many Approximate Diagram Matching for Design Comparison 2013 IEEE Transactions on Software Engineering, Vol. 39(8), pp. 1090-1111, August   Article
    Abstract: Matching algorithms play a fundamental role in many important but difficult software engineering activities, especially design evolution analysis and model comparison. We present MADMatch, a fast and scalable many-to-many approximate diagram matching approach based on an error-tolerant graph matching (ETGM) formulation. Diagrams are represented as graphs, costs are assigned to possible differences between two given graphs, and the goal is to retrieve the cheapest matching. We address the resulting optimization problem with a tabu search enhanced by the novel use of lexical and structural information. Through several case studies with different types of diagrams and tasks, we show that our generic approach obtains better results than dedicated state-of-the-art algorithms, such as AURA, PLTSDiff, or UMLDiff, on the exact same datasets used to introduce (and evaluate) these algorithms.
    BibTeX:
    @article{KpodjedoRGAG13,
      author = {Segla Kpodjedo and Filippo Ricca and Philippe Galinier and Giuliano Antoniol and Yann-Gaël Guéhéneuc},
      title = {MADMatch: Many-to-Many Approximate Diagram Matching for Design Comparison},
      journal = {IEEE Transactions on Software Engineering},
      year = {2013},
      volume = {39},
      number = {8},
      pages = {1090-1111},
      month = {August},
      doi = {http://dx.doi.org/10.1109/TSE.2013.9}
    }
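    Code sketch: a toy analogue of the tabu search at MADMatch's core, searching node mappings between two graphs to minimise edge mismatches. The graphs, the cost (one unit per unpreserved edge) and the swap neighbourhood are all simplified assumptions; MADMatch's real cost model is richer and also exploits lexical information.

      import random

      # Toy graphs as adjacency sets (assumed data).
      G1 = {0: {1}, 1: {2}, 2: set()}
      G2 = {"a": {"b"}, "b": {"c"}, "c": set()}

      def cost(mapping):
          """Unit penalty for each edge of G1 not preserved under the mapping."""
          return sum(1 for u, nbrs in G1.items() for v in nbrs
                     if mapping[v] not in G2[mapping[u]])

      def tabu_search(iters=200, tenure=5):
          nodes1, nodes2 = list(G1), list(G2)
          random.shuffle(nodes2)                        # random initial mapping
          mapping = dict(zip(nodes1, nodes2))
          best, best_cost, tabu = dict(mapping), cost(mapping), []
          for _ in range(iters):
              i, j = random.sample(nodes1, 2)           # move: swap two images
              if frozenset((i, j)) in tabu:
                  continue
              mapping[i], mapping[j] = mapping[j], mapping[i]
              c = cost(mapping)
              tabu = (tabu + [frozenset((i, j))])[-tenure:]
              if c < best_cost:
                  best, best_cost = dict(mapping), c
              elif c > best_cost:                       # revert worsening moves
                  mapping[i], mapping[j] = mapping[j], mapping[i]
          return best, best_cost

      print(tabu_search())                              # cost 0 is a perfect match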
    					
    2016.02.27 A.Charan Kumari & K. Srinivas Scheduling and Inspection Planning in Software Development Projects using Multi-Objective Hyper-Heuristic Evolutionary Algorithm 2013 International Journal of Software Engineering & Applications, Vol. 4(3), May   Article
    Abstract: This paper presents a Multi-objective Hyper-heuristic Evolutionary Algorithm (MHypEA) for the solution of scheduling and inspection planning in software development projects. Scheduling and inspection planning is a vital problem in software engineering whose main objective is to assign people to the various activities in the software development process, such as coding, inspection, testing and rework, in such a way that the quality of the software product is maximized while the project makespan and cost are minimized. The problem becomes challenging when the size of the project is huge. MHypEA is an effective metaheuristic search technique for suggesting scheduling and inspection plans. It incorporates twelve low-level heuristics based on different methods of selection, crossover and mutation operations of evolutionary algorithms. The mechanism to select a low-level heuristic is based on reinforcement learning with adaptive weights. The efficacy of the algorithm has been studied on randomly generated test problems.
    BibTeX:
    @article{KumariS13,
      author = {A.Charan Kumari and K. Srinivas},
      title = {Scheduling and Inspection Planning in Software Development Projects using Multi-Objective Hyper-Heuristic Evolutionary Algorithm},
      journal = {International Journal of Software Engineering & Applications},
      year = {2013},
      volume = {4},
      number = {3},
      month = {May},
      doi = {http://dx.doi.org/10.5121/ijsea.2013.4304}
    }
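    Code sketch: the heuristic-selection mechanism named in the abstract (reinforcement learning with adaptive weights) can be illustrated compactly. The operator names, reward values and update rule below are assumed placeholders, not the paper's exact scheme.

      import random

      # Stand-ins for the twelve low-level selection/crossover/mutation
      # heuristics; each would transform candidate solutions in a real run.
      HEURISTICS = ["swap_mutation", "uniform_crossover", "tournament_selection"]
      weights = {h: 1.0 for h in HEURISTICS}

      def choose():
          """Roulette-wheel choice proportional to the adaptive weights."""
          r, acc = random.uniform(0, sum(weights.values())), 0.0
          for h, w in weights.items():
              acc += w
              if r <= acc:
                  return h
          return h

      def update(h, improved, bonus=1.0, penalty=0.1):
          """Reinforce heuristics whose application improved fitness."""
          weights[h] = max(0.1, weights[h] + (bonus if improved else -penalty))

      for _ in range(200):
          h = choose()
          improved = random.random() < 0.3   # placeholder for a real fitness check
          update(h, improved)
      print(weights)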
    					
    2014.05.30 A. Charan Kumari, K. Srinivas & M.P. Gupta Software Requirements Optimization using Multi-Objective Quantum-Inspired Hybrid Differential Evolution 2013, pp. 107-120   Inbook Requirements/Specifications
    Abstract: The Multi-Objective Next Release Problem (MONRP) is an important software requirements optimization problem in Search-Based Software Engineering. As customer requirements vary from time to time, software products are often required to incorporate these changes. It is a hard task to optimize the requirements from a large number of candidates so as to accomplish the business goals and, at the same time, satisfy the customers. MONRP identifies a set of requirements to be included in the next release of the product by minimizing the cost in terms of money or resources and maximizing the number of customers satisfied by including these requirements. The problem is multi-objective in nature and the objectives conflict. It is NP-hard, and since it cannot be solved effectively and efficiently by traditional optimization techniques, especially for large problem instances, metaheuristic search and optimization techniques are required. Because MONRP has wide applicability in software and manufacturing companies, there is a need for efficient solution techniques for large problem instances. This paper therefore presents a Multi-objective Quantum-inspired Hybrid Differential Evolution (MQHDE) for the solution of MONRP, which combines the strengths of Quantum Computing, Differential Evolution and Genetic Algorithms. The features of MQHDE help in achieving consistent performance in terms of convergence to the Pareto-optimal front, good spread among the obtained solutions, faster convergence and obtaining relatively more solutions along the Pareto-optimal front. The performance of MQHDE is tested on six benchmark problems using the Spread and HyperVolume metrics. Comparison of the obtained results indicates consistent and superior performance of MQHDE over the other methods reported in the literature.
    BibTeX:
    @inbook{KumariSG13,
      author = {A. Charan Kumari and K. Srinivas and M.P. Gupta},
      title = {Software Requirements Optimization using Multi-Objective Quantum-Inspired Hybrid Differential Evolution},
      publisher = {Springer},
      year = {2013},
      pages = {107-120},
      doi = {http://dx.doi.org/10.1007/978-3-642-31519-0_7}
    }
    					
    2014.09.22 A. Charan Kumari, K. Srinivas & M.P. Gupta Software Module Clustering using a Hyper-heuristic based Multi-Objective Genetic Algorithm 2013 Proceedings of the 3rd International Advance Computing Conference (IACC '13), pp. 813-818, Ghaziabad India, 22-23 February   Inproceedings
    Abstract: This paper presents a fast Multi-objective Hyper-heuristic Genetic Algorithm (MHypGA) for the solution of the multi-objective software module clustering problem, an important and challenging problem in software engineering whose main goal is to obtain a good modular structure of the software system. Software engineers place great emphasis on good modular structure, as it makes software systems easier to comprehend, develop and maintain. In recent times, the problem has been converted into a search-based software engineering problem with multiple objectives. It is NP-hard, as it is an instance of graph partitioning, and hence cannot be solved using traditional optimization techniques. MHypGA is a fast and effective metaheuristic search technique for suggesting software module clusters while maximizing cohesion and minimizing coupling of the software modules. It incorporates twelve low-level heuristics based on different methods of selection, crossover and mutation operations of Genetic Algorithms. The mechanism to select a low-level heuristic is based on reinforcement learning with adaptive weights. The efficacy of the algorithm has been studied on six real-world module clustering problems reported in the literature, and a comparison of the results demonstrates the superiority of MHypGA in terms of quality of solutions and computational time.
    BibTeX:
    @inproceedings{KumariSG13b,
      author = {A. Charan Kumari and K. Srinivas and M.P. Gupta},
      title = {Software Module Clustering using a Hyper-heuristic based Multi-Objective Genetic Algorithm},
      booktitle = {Proceedings of the 3rd International Advance Computing Conference (IACC '13)},
      publisher = {IEEE},
      year = {2013},
      pages = {813-818},
      address = {Ghaziabad, India},
      month = {22-23 February},
      doi = {http://dx.doi.org/10.1109/IAdCC.2013.6514331}
    }
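    Code sketch: the two clustering objectives named in the abstract can be computed directly from a module dependency graph: cohesion counts intra-cluster dependency edges and coupling counts inter-cluster ones. The dependency data and clustering below are assumed for illustration.

      # Hypothetical module dependency edges and a candidate clustering.
      DEPS = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e"), ("a", "e")]

      def cohesion_and_coupling(clusters):
          """Count intra-cluster (cohesion) and inter-cluster (coupling) edges."""
          where = {m: i for i, cluster in enumerate(clusters) for m in cluster}
          intra = sum(1 for u, v in DEPS if where[u] == where[v])
          inter = len(DEPS) - intra
          return intra, inter

      # A search maximises the first value while minimising the second.
      print(cohesion_and_coupling([{"a", "b", "c"}, {"d", "e"}]))  # (3, 2)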
    					
    2014.05.30 Kiran Lakhotia En Garde: Winning Coding Duels through Genetic Programming 2013 Proceedings of the 6th International Workshop on Search-Based Software Testing (SBST '13), pp. 421-424, Luxembourg Luxembourg, 22-22 March   Inproceedings Testing and Debugging
    Abstract: In this paper we present a Genetic Programming system to solve coding duels on the Pex4Fun website. Users create simple puzzle methods in a .NET supported programming language, and other users have to `guess' the puzzle implementation through trial and error. We have replaced the human user who solves a puzzle (i.e. implements a program that matches the implementation of the puzzle) with a Genetic Programming system that tries to win such coding duels. During a proof of concept experiment we found that our system can indeed automatically generate code that matches the behaviour of a secret puzzle method. It takes on average 76.57 fitness evaluations to succeed.
    BibTeX:
    @inproceedings{Lakhotia13,
      author = {Kiran Lakhotia},
      title = {En Garde: Winning Coding Duels through Genetic Programming},
      booktitle = {Proceedings of the 6th International Workshop on Search-Based Software Testing (SBST '13)},
      publisher = {IEEE},
      year = {2013},
      pages = {421-424},
      address = {Luxembourg, Luxembourg},
      month = {22-22 March},
      doi = {http://dx.doi.org/10.1109/ICSTW.2013.79}
    }
    					
    2014.09.02 Kiran Lakhotia, Mark Harman & Hamilton Gross AUSTIN: An Open Source Tool for Search Based Software Testing of C Programs 2013 Information and Software Technology, Vol. 55(1), pp. 112-125, January   Article Testing and Debugging
    Abstract: Context: Despite the large number of publications on Search-Based Software Testing (SBST), there remain few publicly available tools. This paper introduces AUSTIN, a publicly available open source SBST tool for the C language. The paper is an extension of previous work [1]. It includes a new hill climb algorithm implemented in AUSTIN and an investigation into the effectiveness and efficiency of different pointer handling techniques implemented by AUSTIN’s test data generation algorithms. Objective: To evaluate the different search algorithms implemented within AUSTIN on open source systems with respect to effectiveness and efficiency in achieving branch coverage, and to compare AUSTIN against a non-publicly available, state-of-the-art Evolutionary Testing Framework (ETF). Method: First, we use example functions from open source benchmarks as well as common data structure implementations to check whether the decision procedure for pointer inputs, introduced in this paper, differs in terms of effectiveness and efficiency from a simpler alternative that generates random memory graphs. A second empirical study formulates two alternate hypotheses regarding the effectiveness and efficiency of AUSTIN compared to the ETF. These hypotheses are tested using a paired Wilcoxon test. Results and Conclusion: The first study highlights some practical problems with the decision procedure for pointer inputs described in this paper. In particular, if the code under test contains insufficient guard statements to enforce constraints over pointers, then using a constraint solver for pointer inputs may be suboptimal compared to a method that generates random memory graphs. The programs used in the second study do not require any constraint solving for pointer inputs and consist of eight non-trivial, real-world C functions drawn from three embedded automotive software modules. For these functions, AUSTIN is competitive compared to the ETF, achieving an equal or higher branch coverage for six of the functions. In addition, for functions where AUSTIN’s branch coverage is equal or higher, AUSTIN is more efficient than the ETF.
    BibTeX:
    @article{LakhotiaHG13,
      author = {Kiran Lakhotia and Mark Harman and Hamilton Gross},
      title = {AUSTIN: An Open Source Tool for Search Based Software Testing of C Programs},
      journal = {Information and Software Technology},
      year = {2013},
      volume = {55},
      number = {1},
      pages = {112-125},
      month = {January},
      doi = {http://dx.doi.org/10.1016/j.infsof.2012.03.009}
    }
    					
    2013.06.28 Zheng Li, Yi Bian, Ruilian Zhao & Jun Cheng A Fine-Grained Parallel Multi-objective Test Case Prioritization on GPU 2013 Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13), Vol. 8084, pp. 111-125, St. Petersburg Russia, 24-26 August   Inproceedings Testing and Debugging
    Abstract: Multi-Objective Evolutionary Algorithms (MOEAs) have been widely used to address regression test optimization problems, including test case selection and test suite minimization. GPU-based parallel MOEAs have been proposed to increase execution efficiency and meet industrial demands. When binary representation is used in MOEAs, fitness evaluation can be transformed into a parallel matrix multiplication that is implemented easily and efficiently on a GPU. Such GPU-based parallel MOEAs may achieve a higher level of speed-up for test case prioritization, because the computational load of fitness evaluation in prioritization exceeds that in test case selection or test suite minimization. However, the inapplicability of binary representation to test case prioritization makes parallel fitness evaluation on a GPU challenging. In this paper, we present a GPU-based parallel fitness evaluation and three novel parallel crossover computation schemes based on ordinal and sequential representations, which form a fine-grained parallel framework for multi-objective test case prioritization. Empirical studies based on eight benchmarks and one open source program show that a maximum speed-up of 120x is achieved.
    BibTeX:
    @inproceedings{LiBZC13,
      author = {Zheng Li and Yi Bian and Ruilian Zhao and Jun Cheng},
      title = {A Fine-Grained Parallel Multi-objective Test Case Prioritization on GPU},
      booktitle = {Proceedings of the 5th International Symposium on Search Based Software Engineering (SSBSE '13)},
      publisher = {Springer},
      year = {2013},
      volume = {8084},
      pages = {111-125},
      address = {St. Petersburg, Russia},
      month = {24-26 August},
      doi = {http://dx.doi.org/10.1007/978-3-642-39742-4_10}
    }
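    Code sketch: the reason binary-encoded selection problems map cleanly onto matrix multiplication, while prioritization does not, can be seen with a small coverage matrix. NumPy stands in for the GPU kernels here; the matrix and the test ordering are invented for illustration.

      import numpy as np

      # Coverage matrix: M[i, j] == 1 iff test i covers branch j (assumed data).
      M = np.array([[1, 0, 1, 0],
                    [0, 1, 1, 0],
                    [0, 0, 0, 1]])

      # Binary selection reduces to one matrix product, which parallelises well:
      x = np.array([1, 0, 1])                    # select tests 0 and 2
      covered = (x @ M) > 0
      print("branches covered:", covered.sum())  # 3

      # Prioritization instead needs a permutation; credit depends on *when*
      # each branch is first covered, which breaks the single-product trick:
      order = [2, 0, 1]
      first_cover = {}
      for position, test in enumerate(order, start=1):
          for branch in np.flatnonzero(M[test]):
              first_cover.setdefault(int(branch), position)
      print("first-coverage positions:", first_cover)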
    					
    2014.09.22 Roberto E. Lopez-Herrejon & Alexander Egyed SBSE4VM: Search Based Software Engineering for Variability Management