Please refresh page for updates – Live blog commencing 10.30am on 22nd April 2013
The DAASE COW will be running all day 22nd and 23rd April at UCL
Program here: http://crest.cs.ucl.ac.uk/cow/26/
Professor Mark Harman, DAASE project PI introduces the day by welcoming everyone to UCL, to London and to #CREST26.
Prof Mark Harman welcomes everyone
Professor Harman asks everyone at the workshop to introduce themselves.
11:15 Keynote: Living with change and uncertainty, and the quest for incrementality. Prof Carlo Ghezzi, Dipartimento di Elettronica e Informazione, Politecnico di Milano, Italy
Prof Ghezzi begins by asking everyone to interrupt if they have any questions.
Basic findings from his research of the last 4 years. Software is everywhere, there are cyber-physical systems, new behaviours emerge dynamically and systems need to run continuously.
The challenge: continuous change and uncertainty in the requirements, the infrastructure, the platform. At the same time our systems need to be dependable.
Change and dependability don’t go together very well.
Slide: The questions and the answers
The traditional separation between development time and runtime must be broken.
You need to convince yourself that your specification and your assumptions entail the requirements.
Hidden assumptions need to be made explicit. Assumptions are heavily affected by change and uncertainty. Changes lead to software evolution, adaptation is a special case of evolution. Changes matter because they may affect dependability.
Professor Carlo Ghezzi on Google Scholar
Continuous verification needs to be performed to detect the need for evolution.
Q: is it possible to identify a subset of priorities at runtime?
A: verification needs to be made incremental. It is a challenge.
We have focused mainly on non-functional requirements, stated quantitatively in probabilistic terms. These factors all change outside your control, and the user profile is also subject to change.
In Prof Ghezzi’s team’s approach there are several models with different viewpoints, focusing on non-functional properties and Markov models.
The DTMC (Discrete-Time Markov Chain) model
The problem: is it affordable to run the model checker from scratch every time there is a change? Can changes be handled incrementally? This requires revisiting verification procedures.
Running the model checker can be impractical. Verification needs to have an agile approach, it must be incremental.
Incrementality by parameterization: this requires anticipating which parameters may change. The model can then be partially evaluated with transitions as variables: things that may change are represented symbolically, so matrix entries are partly numeric and partly symbolic.
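A minimal sketch of the idea, using a toy retry-protocol DTMC (the model, states, parameter names and threshold are illustrative assumptions, not from the talk): the reachability probability is derived once with the changing transition probabilities p and q left symbolic, so re-verification after a detected change is a cheap formula evaluation rather than a fresh model-checking run.

```python
# Toy retry-protocol DTMC (illustrative): from state "try" the message is
# delivered with probability p, retried with probability q, and lost with
# probability 1 - p - q. Solving P = p + q * P gives P(success) = p / (1 - q).

def solve_by_iteration(p, q, tol=1e-12):
    """Baseline: re-solve the reachability probability from scratch."""
    prob = 0.0
    while True:
        nxt = p + q * prob
        if abs(nxt - prob) < tol:
            return nxt
        prob = nxt

def precomputed_formula(p, q):
    """'Partial evaluation': the model was solved once with p, q symbolic;
    at runtime only this cheap closed form is re-evaluated."""
    return p / (1 - q)

# Hypothetical non-functional requirement: delivery succeeds w.p. >= 0.99.
requirement = lambda prob: prob >= 0.99

print(requirement(precomputed_formula(0.90, 0.095)))  # True  (P ~ 0.994)
# Monitoring observes the network degrading: p drops to 0.85.
print(requirement(precomputed_formula(0.85, 0.095)))  # False (P ~ 0.939)
```

The baseline and the precomputed formula agree; the point is that only the latter is cheap enough to re-run continuously at runtime.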
Prof Ghezzi states that we must move verification to runtime; through incrementality, formal specification/verification can be reconciled with agility.
[SB This reminds me of our 2009 paper “Formal vs Agile: survival of the fittest?”]
Summary: self-adaptation requires continuous reasoning and verification at runtime in response to detected changes. Verification has to be the driver: if you want the system to be safe, the only way to be dependable is to use a model view. Mainstream verification approaches must be adapted.
Prof Carlo Ghezzi can be contacted via Twitter: @carloghezzi
Next is Gabriela Ochoa from Stirling University talking about “Hyper-heuristics and cross-domain optimisation”
There are various autonomous/adaptive search approaches.
Hyper-heuristics aim to work well across different problems rather than being bespoke to one. They fall into two classes: heuristic selection and heuristic generation.
Gabriela describes several case studies in hyper-heuristics. One, a university course timetabling study, looked at minimising conflicts: many operators were available and could be selected on the fly, and the five best operators were identified.
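A hedged sketch of what selecting operators on the fly can look like (the operator names, simulated rewards and epsilon-greedy credit assignment here are illustrative assumptions, not the method from the talk):

```python
import random

# Illustrative selection hyper-heuristic: low-level operators are chosen
# on the fly, guided by the improvement each has produced so far
# (a simple epsilon-greedy credit-assignment scheme).

class OperatorSelector:
    def __init__(self, operators, epsilon=0.1):
        self.operators = list(operators)
        self.epsilon = epsilon
        self.score = {op: 0.0 for op in self.operators}  # running mean reward
        self.count = {op: 0 for op in self.operators}

    def choose(self):
        if random.random() < self.epsilon:               # explore
            return random.choice(self.operators)
        return max(self.operators, key=lambda op: self.score[op])  # exploit

    def reward(self, op, improvement):
        self.count[op] += 1
        self.score[op] += (improvement - self.score[op]) / self.count[op]

random.seed(0)
selector = OperatorSelector(["swap", "move", "kempe-chain"])
# Simulated run: "move" tends to reduce timetable conflicts the most.
mean_gain = {"swap": 0.2, "move": 0.8, "kempe-chain": 0.5}
for _ in range(500):
    op = selector.choose()
    selector.reward(op, random.gauss(mean_gain[op], 0.1))

print(max(selector.score, key=selector.score.get))  # "move" earns top credit
```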
Gabriela has found that good algorithms are hybrid and dynamic, and that adaptive approaches can beat state-of-the-art algorithms.
Prof Marc Schoenauer from INRIA speaks next about “Adaptive operator selection with rank-based multi-armed bandits”
Why do we want to set parameters online? Because there is no single best operator.
Operator selection factors to consider: the exploration vs exploitation balance and dynamic setting, using a change-detection test, e.g. Page-Hinkley, which enables efficient detection of changes in a time series.
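The Page-Hinkley test mentioned above can be sketched in a few lines (the parameter values delta and lambda here are illustrative):

```python
# Sketch of the Page-Hinkley change-detection test: accumulate deviations
# of the series from its running mean and raise an alarm when the
# cumulative drift exceeds a threshold lambda.

def page_hinkley(series, delta=0.05, lam=5.0):
    """Return the index at which a change is detected, or None."""
    mean = 0.0
    cum = 0.0      # cumulative deviation m_t
    cum_min = 0.0  # running minimum M_t
    for t, x in enumerate(series, start=1):
        mean += (x - mean) / t               # incremental mean
        cum += x - mean - delta
        cum_min = min(cum_min, cum)
        if cum - cum_min > lam:              # drift exceeds threshold
            return t - 1
    return None

# A series whose mean jumps from 0 to 2 at index 50:
data = [0.0] * 50 + [2.0] * 50
print(page_hinkley(data))   # 52: flagged shortly after the shift
```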
The Area Under the Curve (AUC), used in machine learning to evaluate binary classifiers, with performance measured as the percentage of misclassifications, is equivalent to the Mann-Whitney-Wilcoxon statistic, which measures whether two sequences have the same order.
Rank-based AUC can be used with Multi-Armed Bandits (MAB).
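The AUC/Mann-Whitney-Wilcoxon equivalence can be illustrated with a small check, computing the same quantity both as the fraction of correctly ordered (positive, negative) pairs and via the rank-sum formula (the scores are made up; ties across labels are not handled in this sketch):

```python
# Two ways to compute the same quantity, illustrating the equivalence.

def auc_pairwise(pos, neg):
    """AUC as the fraction of (pos, neg) pairs ranked in the right order."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def auc_rank_sum(pos, neg):
    """Same quantity via the rank-sum (Mann-Whitney-Wilcoxon) formula."""
    scores = sorted([(s, 1) for s in pos] + [(s, 0) for s in neg])
    # assign ranks 1..N
    rank_sum = sum(r for r, (s, label) in enumerate(scores, 1) if label == 1)
    n_pos, n_neg = len(pos), len(neg)
    u = rank_sum - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

pos = [0.9, 0.8, 0.4]   # scores for positive examples (made up)
neg = [0.7, 0.3, 0.1]   # scores for negative examples (made up)
print(auc_pairwise(pos, neg), auc_rank_sum(pos, neg))  # both equal 8/9
```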
The goals of the experiments carried out were performance and robustness/generality.
The results show that MAB outperforms other methods.
Sasa Misailovic talks next about “Accuracy-aware program transformations”.
Current trends are big data sets and energy consciousness.
This gives an opportunity to automatically transform and adapt programs. It can also help with energy-related problems such as battery life.
To optimise transformations, a code perforation framework has been developed that enables automatic discovery of profitable trade-offs between the time required to perform a computation and the accuracy of the final result. The process has three main stages: find candidates, analyse effects, navigate the trade-off space.
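Loop perforation itself can be sketched in a few lines (the data and perforation factor below are illustrative, not from the talk): skipping a fraction of loop iterations trades accuracy of the result for execution time.

```python
# Illustrative loop perforation: visit only every k-th element, trading
# a small accuracy loss for roughly k-fold less work.

def mean_intensity(pixels, perforation=1):
    """Average of `pixels`, visiting only every `perforation`-th element."""
    sampled = pixels[::perforation]     # the perforated loop
    return sum(sampled) / len(sampled)

pixels = [(i * 37) % 256 for i in range(10_000)]   # synthetic image data

exact = mean_intensity(pixels)                  # full computation
fast = mean_intensity(pixels, perforation=4)    # ~4x less work
error = abs(fast - exact) / exact
print(error < 0.10)   # accuracy loss stays small for this input
```

Note the hedge in the transcript applies here too: the error happens to be small for this input, but nothing in the transformation guarantees it.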
Results of a code perforation experiment showed that performance improved by a factor of more than two across a range of applications, while changing the result that the application produces by less than 10%; however, there is no hard guarantee of accuracy or safety.
TEA BREAK [Yay! This is thirsty work]
UCL CREST’s Yuanyuan Zhang is next presenting “Hyper-heuristic based strategic release planning”
Difficulties with release planning: complex constraints, incomplete information, multiple objectives and large decision space.
Strategic Release Planning: using hyper heuristic algorithms in selecting and assigning requirements.
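The selection-and-assignment problem can be sketched as a toy budgeted selection (the requirements, costs, stakeholder weights and the greedy value-per-cost heuristic are all illustrative assumptions; the actual work uses hyper-heuristic algorithms over multiple objectives):

```python
# Toy release planning: pick a subset of requirements that fits a budget
# while maximising weighted stakeholder value.

requirements = {            # name: (cost, {stakeholder: value}) -- made up
    "login":   (4, {"sh_a": 9, "sh_b": 7}),
    "search":  (6, {"sh_a": 3, "sh_b": 8}),
    "export":  (3, {"sh_a": 5, "sh_b": 1}),
    "offline": (8, {"sh_a": 2, "sh_b": 4}),
}
weights = {"sh_a": 0.6, "sh_b": 0.4}   # stakeholder importance (made up)
budget = 10

def total_value(values):
    return sum(weights[s] * v for s, v in values.items())

# Greedy stand-in: take requirements by value-per-cost until budget runs out.
release, spent = [], 0
ranked = sorted(requirements.items(),
                key=lambda kv: total_value(kv[1][1]) / kv[1][0],
                reverse=True)
for name, (cost, values) in ranked:
    if spent + cost <= budget:
        release.append(name)
        spent += cost

print(release, spent)   # ['login', 'export'] 7
```

A single greedy pass like this is exactly the kind of low-level heuristic a hyper-heuristic would select among or generate, rather than the whole method.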
A model has been developed which has been used with five different data sets, including two from Motorola and Ericsson.
The data sets all have multiple stakeholders with multiple needs and requirements. Yuanyuan presents a detailed breakdown of the results.
Another experiment compares the performance of hyper heuristic algorithms.
Leandro Minku from CERCIA, University of Birmingham presents on “Ensemble learning for software effort estimation: from static to dynamic analysis”
Simon Poulding from University of York on “Searching for strategies that verify MDE Toolchains”
DINNER and see you all tomorrow….