DAASE project blogpost by Dr Sandy Brownlee, Senior Research Assistant, DAASE project, Stirling University
As more of our time is spent on our smartphones, the curse of the empty battery grows ever more pressing. While battery technology is continually advancing, we can also tackle the problem by reducing the power the phone consumes. Several factors (like the display, wifi and GPS) affect power consumption, but the CPU that runs the phone’s software is still a big consumer of energy. For example, the maximum CPU power of a Samsung Galaxy S3 is 2,845 mW, 2.53× the maximum power consumption of the screen and 2.5× that of the 3G hardware.
At the other end of the scale, the electricity consumed by data centres run by the likes of Google and Facebook contributes to global climate change: in fact, servers accounted for between 1.1 and 1.5% of global electricity production in 2010. So, reducing the energy consumed by CPUs is important!
Much of the code that’s being run relies on the same few algorithms, reimplemented for particular applications and hardware. Ideally, code should be retuned for each new application, but in practice this is rarely feasible by hand, so automated approaches are needed. We already have this for making software run faster: a tool called a “compiler” takes human-readable source code and turns it into machine code, making speed improvements along the way. Often making a program run faster will mean it consumes less energy, but some instructions are more power-hungry than others, so simply speeding the code up doesn’t guarantee a longer-lasting battery.
We have developed a tool called Opacitor, designed to measure the energy consumed by programs written in Java, one of the most popular programming languages. Opacitor estimates the energy each instruction will consume on a live system. This can be combined with search-based techniques to explore possible changes to a given program, finding a low-energy variation. The work is reported in a journal paper published this month, which focuses on three aspects of reducing the energy consumed by software.
Firstly, programs perform differently depending on the data they process. A common software task is sorting data, and variants of the Quicksort algorithm are a popular way to do this. Quicksort repeatedly chooses a point in the data called the pivot, and rearranges the data around it. The choice of the pivot and the “distribution” of the data being sorted affect how well this works. Our approach generated an automatic way to choose the pivot: this can be tuned to different distributions of data, with the sorting operation using up to 70% less energy than conventional approaches.
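To make the idea concrete, here is a minimal sketch (our own illustration, not the code from the paper) of a Quicksort in which the pivot-selection strategy is a pluggable parameter. A search-based tool could then swap in, or evolve, different strategies and measure the energy each one costs on a given distribution of data:

```java
import java.util.Arrays;

// Illustrative sketch: Quicksort with a pluggable pivot-selection strategy,
// so a search could tune the strategy to the data distribution.
// All names here are hypothetical, not taken from the published work.
public class TunableQuicksort {
    // A pivot strategy maps a subarray (array, lo, hi) to a pivot index.
    interface PivotStrategy {
        int choose(int[] a, int lo, int hi);
    }

    static final PivotStrategy FIRST  = (a, lo, hi) -> lo;
    static final PivotStrategy MIDDLE = (a, lo, hi) -> lo + (hi - lo) / 2;
    // Median-of-three often behaves better on nearly sorted data.
    static final PivotStrategy MEDIAN3 = (a, lo, hi) -> {
        int mid = lo + (hi - lo) / 2;
        if ((a[lo] <= a[mid] && a[mid] <= a[hi]) || (a[hi] <= a[mid] && a[mid] <= a[lo])) return mid;
        if ((a[mid] <= a[lo] && a[lo] <= a[hi]) || (a[hi] <= a[lo] && a[lo] <= a[mid])) return lo;
        return hi;
    };

    static void sort(int[] a, PivotStrategy pivot) {
        sort(a, 0, a.length - 1, pivot);
    }

    private static void sort(int[] a, int lo, int hi, PivotStrategy pivot) {
        if (lo >= hi) return;
        int p = pivot.choose(a, lo, hi);
        swap(a, p, hi);                  // move the pivot value to the end
        int store = lo;
        for (int i = lo; i < hi; i++) {  // Lomuto partition around a[hi]
            if (a[i] < a[hi]) swap(a, i, store++);
        }
        swap(a, store, hi);
        sort(a, lo, store - 1, pivot);
        sort(a, store + 1, hi, pivot);
    }

    private static void swap(int[] a, int i, int j) {
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }

    public static void main(String[] args) {
        int[] data = {5, 3, 8, 1, 9, 2};
        sort(data, MEDIAN3);
        System.out.println(Arrays.toString(data)); // [1, 2, 3, 5, 8, 9]
    }
}
```

Every strategy produces a correctly sorted result; what changes between strategies is how many comparisons and swaps are performed on a particular distribution of inputs, and hence how much energy the sort consumes.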
Secondly, energy consumption relates to other software properties. Would you exchange a lower quality camera or a few more speech recognition mistakes for another few minutes of battery when it’s nearly empty? We were able to show how multi-layer perceptrons, the core of deep learning and applications like text recognition, can be tuned to balance accuracy and energy use. In fact, sometimes we could get the best of both, but this relationship is complicated enough that automatically exploring the trade-off between the two properties is very helpful.
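One standard way to explore such a trade-off automatically is to keep only the Pareto front: the configurations that no other candidate beats on both accuracy and energy at once. The sketch below (our own illustration with made-up names and numbers, not the paper's code) shows the idea:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: given candidate configurations scored for accuracy
// (higher is better) and measured energy (lower is better), keep only the
// Pareto front. Names and values are hypothetical.
public class ParetoSketch {
    record Config(String name, double accuracy, double energy) {}

    // True if a is at least as good as b on both objectives and strictly
    // better on at least one.
    static boolean dominates(Config a, Config b) {
        return a.accuracy() >= b.accuracy() && a.energy() <= b.energy()
            && (a.accuracy() > b.accuracy() || a.energy() < b.energy());
    }

    static List<Config> paretoFront(List<Config> candidates) {
        List<Config> front = new ArrayList<>();
        for (Config c : candidates) {
            boolean dominated = false;
            for (Config other : candidates) {
                if (dominates(other, c)) { dominated = true; break; }
            }
            if (!dominated) front.add(c);
        }
        return front;
    }

    public static void main(String[] args) {
        List<Config> candidates = List.of(
            new Config("small-net", 0.90, 5.0),
            new Config("big-net",   0.99, 10.0),
            new Config("mid-net",   0.88, 6.0)   // dominated by small-net
        );
        for (Config c : paretoFront(candidates)) {
            System.out.println(c.name());        // small-net, big-net
        }
    }
}
```

The user (or the phone, when the battery is nearly empty) can then pick a point on the front, rather than being stuck with a single configuration chosen in advance.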
Thirdly, different components of an application interact in very subtle ways. Modern software typically makes use of libraries (software written by others to do common tasks), and there are often many alternatives doing the same job. It isn’t as simple as choosing the most energy efficient libraries and assembling them into one application, because two parts might be more energy efficient together than separately, perhaps because they share a common resource. We showed that by searching different combinations of “container classes” (software components for storing data), we were able to reduce energy consumption by almost 40%.
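Because Java container classes share common interfaces, the concrete implementation can be treated as a tunable choice. The sketch below (our own illustration, not the paper's code) shows a workload written against the `List` interface, so a search could plug in each candidate container, run the program, and compare measured energy across combinations rather than picking each "best" container in isolation:

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.Vector;
import java.util.function.Supplier;

// Illustrative sketch: the concrete container class is a tunable choice.
// The workload depends only on the List interface, so any candidate can
// be substituted without changing the program's result.
public class ContainerChoice {
    // Candidate implementations of the same interface (a hypothetical set).
    static final Map<String, Supplier<List<Integer>>> CANDIDATES = Map.of(
        "ArrayList", ArrayList::new,
        "LinkedList", LinkedList::new,
        "Vector", Vector::new
    );

    // The program under optimisation: same answer whichever container is used,
    // but a different time and energy profile.
    static int workload(Supplier<List<Integer>> makeContainer) {
        List<Integer> xs = makeContainer.get();
        for (int i = 0; i < 1000; i++) xs.add(i);
        int sum = 0;
        for (int x : xs) sum += x;
        return sum;
    }

    public static void main(String[] args) {
        for (var e : CANDIDATES.entrySet()) {
            // Every variant prints 499500; an energy-measurement tool such as
            // Opacitor would distinguish them by cost, not by output.
            System.out.println(e.getKey() + " -> " + workload(e.getValue()));
        }
    }
}
```

In a real application there are many such choice points, and because the choices interact, it is the combination across the whole program that a search has to evaluate.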
Overall then, this seems to be an area ripe for further development, and we hope that this work represents a step towards the ideal where software compilers include energy improvements as a matter of course.
DAASE researcher Dr Justyna Petke has been awarded a prestigious five year fellowship by the Engineering and Physical Sciences Research Council for her research project: Automated Software Specialisation Using Genetic Improvement.
Dr Petke aims to change the face of software engineering by transferring the task of software specialisation from human to machine. The genetic improvement techniques that will be developed will provide an automated way to speed up computationally intensive calculations within software, saving time and money.
Genetic improvement is a novel field of research that has only arisen as a standalone area in the last few years. Several factors have contributed to the development and success of this field recently including the sheer amount of code now available online and the focus on automated improvement of non-functional properties of software, such as memory consumption.
“This fellowship is a dream come true. It will allow me to start my own small research group and pursue the development of automated software improvement techniques. We have already had a few success stories in our group, yet the area of genetic improvement is still in its early stages, leaving lots of research opportunities to explore.”
Dr Justyna Petke
Dr Petke is a world-leading expert on genetic improvement, publishing award-winning work on automated software specialisation and transplantation. She has won two ‘Humies’, awarded for human-competitive results produced by genetic and evolutionary computation, and a distinguished paper award at the International Symposium on Software Testing and Analysis. This work was also widely covered in the media, including Wired magazine and BBC Click.
Dr Petke will collaborate with Satalia, a UK-based company that provides the latest optimisation techniques to industry.
DAASE research group MaJiCKe have moved to work with Facebook in London. MaJiCKe have developed software that uses Search Based Software Engineering (SBSE) to help engineers find bugs while reducing the inefficiencies of writing test code. Their product Sapienz automatically generates test sequences, using SBSE to provoke crashes with the shortest sequences it can discover. Sapienz is the world’s first automated test tool able to minimise the length of tests while simultaneously maximising the amount of code checked.
“We are very excited to have this outstanding opportunity to achieve real world impact for UCL’s excellent software engineering research”
Professor Mark Harman
The MaJiCKe team comprises Professor Mark Harman (Scientific Advisor), Yue Jia (CEO) and Ke Mao (CTO), all from the Department of Computer Science at University College London (UCL). Professor Harman co-founded the area of SBSE in 2001.
“We provide an atmosphere to nurture collaborations with industry; it’s great to see a team like this taking their world-leading expertise to Facebook”
Jane Butler, UCL Engineering Vice Dean
All at DAASE wish Mark, Yue and Ke the very best of luck and look forward to hearing all about their contribution to Facebook’s goal of connecting the world.
DAASE researcher Alexandru Marginean describes his research into automated software transplantation
Federica Sarro, research associate on the DAASE project, describes her research in requirements engineering
Yuanyuan Zhang, senior research associate on the DAASE project, describes her research in software testing, Natural Language Processing and model inference.