Our projects

With software systems becoming more complex, the need for good quality code is becoming increasingly important. Confidence in the code can only be gained if one has confidence in the test suite which checks that code. To gauge this, code coverage is widely used in industry as a means of measuring how many lines of code are executed by the test suite. However, high coverage does not guarantee the absence of faults.

Mutation testing is a fault-injection technique which has been shown to be better than code coverage at measuring the quality of a test suite. If the test suite does not flag injected faults, this may indicate shortcomings in its fault-detection ability. Although useful, the technique comes with a heavy computational expense, making it unattractive for software companies. The Agile development process is based on developing software through numerous iterations. Incremental mutation testing exploits this iterative nature by mutating only the methods modified between iterations.

While this has been shown to be faster, accuracy is lost because methods directly related to the changed methods are not included in the mutation process. With the help of static analysis techniques, these related methods will be included in mutation analysis, while also finding a suitable compromise between speed and accuracy.
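
By way of illustration only (the class, method and mutation below are ours, not the project's), mutation analysis injects a small syntactic fault and asks whether the test suite can tell the mutant apart from the original:

    // Illustrative sketch of a single mutant; class and method names are hypothetical.
    public class Discount {
        // Original method: free shipping for orders strictly above 100.
        public static boolean freeShipping(double total) {
            return total > 100.0;
        }

        // Mutant: the relational operator '>' is replaced by '>='.
        public static boolean freeShippingMutant(double total) {
            return total >= 100.0;
        }

        public static void main(String[] args) {
            // A test on the boundary value kills the mutant: the two versions disagree at 100.0.
            double boundary = 100.0;
            boolean original = freeShipping(boundary);        // false
            boolean mutated  = freeShippingMutant(boundary);  // true
            System.out.println("Mutant killed: " + (original != mutated));
        }
    }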

Luke Bajada

When it comes to software development, modifications to the code occur frequently for reasons such as bug fixes and changing requirements. Regression testing is therefore carried out to make sure that the program's changed parts behave as anticipated and that the unmodified parts are not negatively affected by the changes. This is done by rerunning the tests in the program's test suite, which is expensive because running all the tests may take a very long time, especially if the developer runs them several times a day. For this reason, selective retest strategies are used to reduce test execution time while still uncovering any bugs which surface due to the modifications. In this project the plan is to investigate different selective retest strategies through static analysis of object-oriented code, specifically code written in Java. This allows each strategy to be evaluated as well as compared and contrasted with the others, an evaluation which is only possible if the strategies are applied in the context of a real-life case study.
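
As a simplified illustration of one selective retest strategy (not the project's actual implementation), the sketch below re-runs only those tests whose statically computed set of reachable methods overlaps the set of methods changed since the last run; the call-graph computation itself is assumed to be supplied by the static analysis:

    import java.util.*;

    // Hypothetical selective-retest sketch: re-run a test only if it can reach a changed method.
    public class TestSelector {
        public static Set<String> selectTests(Map<String, Set<String>> reachableMethodsPerTest,
                                              Set<String> changedMethods) {
            Set<String> selected = new TreeSet<>();
            for (Map.Entry<String, Set<String>> e : reachableMethodsPerTest.entrySet()) {
                // Select the test if any method it (transitively) calls has been modified.
                if (!Collections.disjoint(e.getValue(), changedMethods)) {
                    selected.add(e.getKey());
                }
            }
            return selected;
        }

        public static void main(String[] args) {
            Map<String, Set<String>> callGraph = new HashMap<>();
            callGraph.put("CartTest.testAdd", Set.of("Cart.add", "Cart.total"));
            callGraph.put("UserTest.testLogin", Set.of("User.login"));
            System.out.println(selectTests(callGraph, Set.of("Cart.total"))); // [CartTest.testAdd]
        }
    }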

The project has been successfully completed by Graziella Galea.

Good software design is critically important in industry, as studies have outlined its role in producing quality software. When designing software, models are used to obtain a higher level of abstraction before coding, since these give insight into various properties of the program such as its coupling and cohesion. During this stage developers face a search space too large to explore manually in full. The scope of this study is to determine the efficiency and effectiveness of various search techniques when, using a series of automated refactorings, a search is guided towards optimising coupling and cohesion.
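
By way of illustration only, search-based refactoring typically scores each candidate design with a fitness function over such metrics; the weighting and method names below are hypothetical, not those used in the study:

    // Hypothetical fitness function for search-based refactoring:
    // reward high cohesion and penalise high coupling (weights are illustrative).
    public class DesignFitness {
        public static double fitness(double avgCohesion, double avgCoupling) {
            final double cohesionWeight = 1.0;
            final double couplingWeight = 1.0;
            return cohesionWeight * avgCohesion - couplingWeight * avgCoupling;
        }

        public static void main(String[] args) {
            double before = fitness(0.45, 0.60);
            double after  = fitness(0.55, 0.40); // e.g. after a "move method" refactoring
            System.out.println("Refactoring accepted: " + (after > before));
        }
    }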

The project has been successfully completed by Clyde Vassallo.

Gaming is nowadays very popular, and delivering a quality product requires adequate testing. However, current automation technologies suffer from understandability and maintainability issues. Domain specific languages (DSLs) have been used successfully to express notions in other domains. The aim of this project is to investigate the use of DSLs in testing graphically intensive games.

Alan Grixti is currently working on the project.

The number of mobile applications has continued to grow over the last few years, many of which are expected to function across multiple mobile platforms (and versions thereof). This considerably increases the testing effort required, especially since users have high expectations and assume apps to be fully functional and bug free. If an app is found to be less reliable than others, users will easily switch to other similar apps and the reputation mechanisms further compound the problem.

Although there are multiple platforms, there is effectively one domain, that is to say mobile applications. Gestures, accelerometers and GPS locators are some examples of the main notions in this domain and remain consistent regardless of the specific platform.

This project is concerned with consolidating concepts in the mobile and testing domains into one domain-specific language, which can then be used to express and automate tests on numerous device platforms. Whilst language design is currently the main aim of this project, a prototype tool which implements a subset of this language is also being developed to evaluate the effectiveness of this technique.

The project has been successfully completed by Daryl Camilleri.

Business-to-Consumer (B2C) e-commerce systems are commonplace and most companies launch their own online stores with the hope of reaching a wider spectrum of customers. The impact of such systems on a company’s income emphasises the need for ensuring a high level of technical quality. With an industry-wide shift to automated test execution justified by regular and ongoing changes to such systems, most companies invest considerably in setting up and maintaining test automation frameworks. In this work, we argue that whilst all systems will have their own particular implementations, they nevertheless share a common domain with identical notions such as shopping carts, product lists, and so on. We explore the design of a Domain Specific Language (DSL) which can express tests over the B2C domain and implement a prototype tool which, given a script in the DSL, is able to execute tests on a B2C system without requiring intimate knowledge of its underlying implementation.
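
A minimal sketch of how such a DSL might be decoupled from individual implementations is shown below; the command names and the Store interface are illustrative assumptions, not the project's actual language:

    // Hypothetical sketch of how a B2C test DSL could be mapped onto system-agnostic actions.
    import java.util.List;

    interface Store {                    // implemented once per system under test
        void addToCart(String product);
        double cartTotal();
    }

    public class B2cDslRunner {
        public static void run(List<String> script, Store store) {
            for (String line : script) {
                String[] parts = line.split("\\s+", 2);
                switch (parts[0]) {
                    case "add"    -> store.addToCart(parts[1]);
                    case "expect" -> {
                        double expected = Double.parseDouble(parts[1]);
                        if (store.cartTotal() != expected)
                            throw new AssertionError("cart total != " + expected);
                    }
                    default -> throw new IllegalArgumentException("unknown command: " + parts[0]);
                }
            }
        }
    }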

The project has been successfully completed by Neil Thomas Abela.

Mutation testing is based on fault injection and is an effective technique for assessing the quality of test suites. The modern software engineering industry is highly dependent on automated test suites in that they are used to detect regressions as software systems undergo continuous change. In such situations, trust in the quality of the system can only follow from trust in the quality of the automated tests. Despite this scenario, the industry seems to favour more primitive measures such as statement coverage analysis for analysing test suites. Such techniques have been shown to give a false sense of security. This project is concerned with identifying factors preventing the wider uptake of mutation testing in the industry and finding new ways to address them. Although this has been attempted in the past, we take a novel approach in that we leverage various aspects of Agile Development Processes in order to obtain faster results. We are currently working on the concept of localised mutation as well as addressing the equivalent mutant problem using differential symbolic execution.
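
The equivalent mutant problem mentioned above arises when a mutation changes the syntax of a program but not its behaviour, so no test can ever kill the mutant; the Java example below is a standard textbook-style illustration rather than a case from the project:

    public class EquivalentMutantExample {
        // Original method.
        static int sum(int[] xs) {
            int total = 0;
            for (int i = 0; i < xs.length; i++) {   // original loop condition
                total += xs[i];
            }
            return total;
        }

        // Mutant: '<' replaced by '!='. Because i starts at 0 and increases by exactly one,
        // it can never jump past xs.length, so both loops behave identically on every input:
        // the mutant is equivalent, and no test can kill it.
        static int sumMutant(int[] xs) {
            int total = 0;
            for (int i = 0; i != xs.length; i++) {
                total += xs[i];
            }
            return total;
        }
    }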

Mark Anthony Cachia has successfully completed the project.

While testing is still the prevailing approach to ensure software correctness, the use of runtime verification as a form of post-deployment testing is on the rise. Such continuous testing ensures that if bugs occur, they don't go unnoticed. Apart from being complementary, testing and runtime verification have a lot in common: Runtime verification of programs requires a formal specification of requirements against which the runs of the program can be verified. Similarly, in model-based testing, checks are written such that on each (model-triggered) execution step, the system state is checked for correctness. Due to this similarity, applying both testing and runtime verification techniques is frequently perceived as duplicating work. Attempts have already been made to integrate the runtime verification tool Larva with QuickCheck. We plan to take this forward by integrating the Larva tool with a Java model-based testing tool, ModelJUnit. The aim is to write one set of properties which can be used seamlessly for both testing and runtime verification.
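
For readers unfamiliar with model-based testing, the following plain-Java sketch (deliberately not tied to the ModelJUnit or Larva APIs, and with illustrative names) shows the kind of state-machine model whose action guards and state checks could, in principle, serve both for offline test generation and as runtime-verification properties:

    // Minimal sketch of a state-machine model for a login session; names are illustrative.
    // The same guards and checks could drive test generation offline and monitoring at runtime.
    public class SessionModel {
        enum State { LOGGED_OUT, LOGGED_IN }

        private State state = State.LOGGED_OUT;

        // Guard: the login action is only enabled in the LOGGED_OUT state.
        public boolean loginEnabled() { return state == State.LOGGED_OUT; }

        public void login() {
            if (!loginEnabled()) throw new AssertionError("login while already logged in");
            state = State.LOGGED_IN;
        }

        public void logout() {
            if (state != State.LOGGED_IN) throw new AssertionError("logout while logged out");
            state = State.LOGGED_OUT;
        }

        public State getState() { return state; }
    }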

Renzo Schembri has successfully completed the project.

This project is concerned with combining tried-and-tested technologies and techniques from two different worlds in order to achieve new value in the world of web testing. Selenium is an open source technology framework which allows developers to automate interaction with browsers. It is widely used in the industry as the predominant technology for web test automation. On the other hand, QuickCheck is a model-based test suite generation and execution framework which works in Erlang, a language which has been steadily gaining in popularity. In this project we are exposing the Selenium API in Erlang, wrapping it in an Erlang API and subsequently connecting it to QuickCheck so that web applications can be tested automatically through model-based testing.
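
To give a flavour of the browser interactions being wrapped, below is a minimal Selenium WebDriver snippet using Selenium's standard Java API (the URL and element ids are placeholders); the Erlang wrapper exposes equivalent operations to QuickCheck:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    // Minimal browser automation sketch; the URL and element ids are placeholders.
    public class SeleniumSketch {
        public static void main(String[] args) {
            WebDriver driver = new FirefoxDriver();
            try {
                driver.get("https://example.com/login");
                driver.findElement(By.id("username")).sendKeys("alice");
                driver.findElement(By.id("password")).sendKeys("secret");
                driver.findElement(By.id("submit")).click();
                System.out.println("Page title after login: " + driver.getTitle());
            } finally {
                driver.quit();
            }
        }
    }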

Mark Scerri has successfully completed the project.

The ICT industry invests a substantial amount of resources in the development of unit tests for their products. These tests are then used as a safety net in that they are executed every time a change is carried out to ensure that no regression has occurred. This project is concerned with the automated generation of unit tests. More specifically, the research will complement existing techniques to address the problem of indirect inputs.  In object oriented programming, an indirect input occurs when a method foo() in object X modifies its path of execution based on the return value of its call to a method foobar() in object Y. Since foo() calls foobar() as part of its internal algorithm, any unit tests which exercise foo() do not have control over the return value of foobar() through parameter manipulation.  The industry's response to this is the development of mock objects which behave in a specific manner in order to force a certain path of execution during particular tests.  For non-trivial interactions, this can become cumbersome when done manually. This project will investigate ways of generating such mocks automatically as part of automated unit test generation.
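
By way of illustration, a hand-written mock for the scenario described above might look like the Mockito-based sketch below (the X and Y types follow the description; the stubbed value is arbitrary); generating this kind of boilerplate automatically is precisely the project's goal:

    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    // Hand-written mock forcing a specific indirect input into foo().
    public class IndirectInputTest {

        interface Y { int foobar(); }

        static class X {
            private final Y y;
            X(Y y) { this.y = y; }
            // foo() branches on the indirect input returned by y.foobar().
            String foo() { return y.foobar() > 0 ? "positive" : "non-positive"; }
        }

        public static void main(String[] args) {
            Y stub = mock(Y.class);
            when(stub.foobar()).thenReturn(5);   // force the "positive" path
            X x = new X(stub);
            System.out.println("foo() -> " + x.foo() + " (forced via the mocked indirect input)");
        }
    }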

A project by Matthew Farrugia

While monitors are typically specified in a high-level language to minimise the chances of errors, testing that monitors are correctly specified is still a significant concern, particularly if the monitor executes reparatory code at runtime. Testing monitors manually is a challenging task, as one would need to exercise the monitored system to drive the monitor through satisfying and violating executions. Automating this process would give monitor developers precious feedback at no extra cost.

This project aims to investigate whether symbolic execution techniques can be used successfully to automatically generate test cases for monitor testing.
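
As a hedged illustration of the idea (the monitored property and all names are hypothetical), consider a monitor with one satisfying and one violating branch; a symbolic executor analysing driveMonitor() would solve the two path conditions to generate concrete inputs that exercise each branch:

    // Hypothetical monitor for the property "withdrawals never exceed the current balance".
    public class BalanceMonitor {
        private double balance;
        private boolean violated = false;

        BalanceMonitor(double openingBalance) { this.balance = openingBalance; }

        void onWithdraw(double amount) {
            if (amount > balance) {
                violated = true;          // violating execution
            } else {
                balance -= amount;        // satisfying execution
            }
        }

        boolean isViolated() { return violated; }

        // Entry point a symbolic-execution engine could analyse with a symbolic 'amount'.
        static boolean driveMonitor(double amount) {
            BalanceMonitor m = new BalanceMonitor(100.0);
            m.onWithdraw(amount);
            return m.isViolated();
        }
    }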

A project by Mark Tanti

The problem stems from the real situation whereby systems developed within a company are deployed worldwide to a large number of different environments. These environments can consist of different devices, operating systems, device drivers, internet browsers, and so on. It is impractical for all these scenarios to be reproduced in a lab environment within a company. We are working on developing a framework that enables companies to set up trusted peer-to-peer networks which are subsequently used to deploy systems and automated tests to participants. This enables companies to test systems in realistic environments whilst massively increasing their capability to test different scenarios.

A project by Andrea Mangion

Malware analysis is the reverse engineering of malicious binaries in order to explore and extract their complete behaviour for protecting against attacks and to disinfect affected sites. The dynamic analysis of malware inside sandboxes is useful since it removes the need for the analyst to look into the malware code itself. However, this approach could end up disclosing no behaviour at all if faced with trigger-based malware. Existing work in this area takes an execution path exploration approach with the aim of maximising effectiveness by increasing both path coverage and precision, the latter being improved by excluding infeasible paths and executing paths under the correct runtime values. For this purpose, symbolic execution fits the bill. The main aim of this project is to build on existing work in order to provide a solution for automated malware analysis that is capable of uncovering hidden behaviour in malware, but is also tunable in terms of its efficiency versus its effectiveness, while also being sandbox independent.
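
The snippet below is a deliberately harmless Java illustration (not real malware) of trigger-based behaviour: a single concrete run in a sandbox on any other day observes only the benign branch, whereas path exploration with a symbolic date uncovers the hidden one:

    import java.time.LocalDate;
    import java.time.MonthDay;

    // Harmless illustration of trigger-based behaviour guarded by a date check.
    public class TriggerExample {
        static void maybeTrigger(LocalDate today) {
            MonthDay trigger = MonthDay.of(4, 1);               // hypothetical trigger date
            if (MonthDay.from(today).equals(trigger)) {
                System.out.println("hidden behaviour executed"); // stands in for the payload
            } else {
                System.out.println("benign behaviour only");
            }
        }

        public static void main(String[] args) {
            maybeTrigger(LocalDate.now());
        }
    }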

A project by James Gatt

With mobile devices set to outnumber people by the end of 2013, mobile applications are the holy grail for developers and companies alike. Yet the relative ease with which one can deploy and sell a mobile application comes with a steep increase in competition. This effectively means that anyone who deploys an inferior quality application is likely to suffer swift negative repercussions. Yet testing an application across a wide variety of devices, each with varying screen sizes, features and operating systems, is daunting to say the least. This is further compounded by the varying environmental factors which each device is exposed to as its owner moves around with it. No lab can feasibly replicate a representative number of scenarios.

In this project we are looking at utilising mobile devices made available by their owners for testing purposes. This can be on a voluntary or (for example) pay-per-test model. The idea is to automatically deploy an app and test suite to devices which fit a particular profile, automatically test the app and return results to the developer. This of course brings up a considerable number of non-trivial challenges, ranging from technical ones such as how to deploy code remotely to mobile devices, all the way to complex security concerns which require us to guarantee that the interests of all parties are protected.

A project by Sebastian Attard

Testing is an integral part of any software development life-cycle. Unit testing is usually the first form of testing that is undertaken, focused primarily on examining the components that make up the system. That said, test engineers might also require a means of monitoring certain critical points during the system's execution.

Runtime verification is designed to handle such a requirement. It monitors the system during its execution to check that the system adheres to a set of predefined properties. Either technique alone is not enough to sufficiently verify a system. However, designing systems for both unit testing and runtime verification is a time-consuming and repetitive affair, with the former taking precedence since the definition of properties for monitors is seldom straightforward. This often results in systems being finalised without proper verification.

The automated translation from one form of verification to the other would therefore save time, money and man-power by removing the need to manually write both unit tests and runtime verification monitors, whilst keeping the advantages of both within reach. The proposed solution is a system which extracts the necessary data from unit tests and, using this data, generates the appropriate monitors. With this solution, developers can achieve greater verification coverage without any unnecessary increase in workload.
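
As a purely illustrative sketch of the translation envisaged (the Account class, its invariant and the generated wrapper are hypothetical), an assertion found in a unit test can be lifted into a runtime check that re-validates the same property after every production call:

    // Illustrative only: a unit-test assertion (left as a comment) and the runtime check
    // that could be generated from it.
    //
    //   // unit test:  account.deposit(50);  assertTrue(account.getBalance() >= 0);
    //
    public class AccountMonitor {
        private final Account account;

        AccountMonitor(Account account) { this.account = account; }

        // Generated monitor: re-check the assertion after every deposit at runtime.
        void deposit(double amount) {
            account.deposit(amount);
            if (account.getBalance() < 0) {
                throw new AssertionError("monitored property violated: balance went negative");
            }
        }
    }

    class Account {
        private double balance;
        void deposit(double amount) { balance += amount; }
        double getBalance() { return balance; }
    }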

A project by Jonathan Micallef

Currently working on the development of an aspect-oriented system-specification language based on the widely used Gherkin specification language, as well as investigating the effects that such a language might have on system specification.

Verifying that a system meets its specification requirements is one of the most common software development challenges. Specification languages have been widely employed to mitigate this issue; however, if the specification language is not designed to clearly capture the system's aspects, redundancy and maintainability issues arise at the specification level. This may in turn degenerate into developing a system that does not meet its specification requirements.

The investigation and improvement of system specification languages is essential to developing software which meets its specification requirements.

A project by John Aquilina Alamango

