
SnT Team wins Facebook Software Testing Competition

Published on Wednesday, 30 October 2019

When Facebook develops new features, they rely on software-quality engineers to ensure that everything will work according to plan. When their tests indicate a problem, these software-quality engineers spring into action and begin tracking it down.

An alert could mean anything from a minor bug to a security failure to, well… nothing at all. In fact, many alerts turn out to be just that – false alarms. That’s because, as SnT’s Renaud Rwemalika explains, “all tests are flaky”.

“A flaky test is a software test that can put out false alarms, perhaps due to a minor fluke, like a third-party server being temporarily down,” Renaud continues. “Up until quite recently, perhaps three years ago or so, academics touted the idea that only ‘bad’ tests are flaky – ‘good’ tests account for every contingency. So when a test was flaky, the solution used to be to ‘fix’ the test.”
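To make the idea concrete, here is a minimal, hypothetical sketch of a flaky test: the test's outcome depends on whether a third-party server happens to be reachable, not on the correctness of the code under test. All names (`fetch_exchange_rate`, `test_conversion`) are illustrative, not from any real codebase.

```python
def fetch_exchange_rate(server_up: bool) -> float:
    """Stand-in for a call to a third-party service.

    Fails when the server is down -- an external condition the
    code under test cannot control.
    """
    if not server_up:
        raise ConnectionError("third-party server unreachable")
    return 1.08


def test_conversion(server_up: bool = True) -> bool:
    """A 'flaky' test: its result flips with an external condition.

    Returns True (pass) or False (fail) so the flakiness is easy
    to see; a real test framework would raise on failure instead.
    """
    try:
        rate = fetch_exchange_rate(server_up)
        # The conversion logic itself is always correct...
        return round(100 * rate, 2) == 108.0
    except ConnectionError:
        # ...but the test still fails when the server is down:
        # a false alarm, not a bug in the code under test.
        return False
```

The same test passes or fails depending purely on server availability, which is exactly the kind of alert that wastes engineers' time.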

The trouble with this philosophy has been that beyond the ivory towers of academia, it was simply unworkable. Organisations (like Facebook) manage increasingly complex – even sprawling – systems of software. Managing flaky tests across these massive systems has proved to be a Sisyphean task, even for Facebook’s army of expert engineers. They needed a better way to work with flaky tests. So they set up the 2019 Facebook Testing and Verification competition to see what new approaches to code analysis and testing researchers might propose.

Facebook found their winners, PhD student Renaud Rwemalika, research scientist Michail Papadakis, and Professor Yves Le Traon, right here at SnT. “Because we assume all tests are flaky, rather than attempting to fix every test over and over again, what we do is essentially a triage of the results. Then developers can use the triage information to decide which alerts most merit their attention.” Their tool does this by producing a 0-to-1 score of the likelihood that a given alert is the result of flakiness and providing contextual information that can help developers decide which alerts really deserve attention.
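As a rough illustration of this triage idea (a toy heuristic, not the SnT team's actual model), one could score each failing test by how often its outcome has historically flipped between pass and fail, then surface the most credible alerts first. All function names and the scoring rule here are assumptions for illustration only.

```python
def flakiness_score(history: list[bool]) -> float:
    """Toy heuristic: estimate how flaky a test is from its past outcomes.

    history lists past runs, True = pass, False = fail.
    Returns a 0-to-1 score: the fraction of consecutive runs whose
    outcome flipped. Higher = more likely a failure is just noise.
    """
    if len(history) < 2:
        return 0.0
    flips = sum(1 for prev, curr in zip(history, history[1:]) if prev != curr)
    return flips / (len(history) - 1)


def triage(failing_tests: dict[str, list[bool]]) -> list[str]:
    """Order failing tests so the least-flaky (most credible) come first,
    letting developers spend attention where it matters most."""
    return sorted(failing_tests, key=lambda name: flakiness_score(failing_tests[name]))
```

A test that fails consistently would score low (credible alert), while one that constantly alternates between pass and fail would score near 1 (likely noise). The team's real tool goes further, learning such patterns automatically and attaching contextual information to each score.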

“The goal here is to keep as much of the testing process as automated as possible. We want to use human resources where they are most effective,” Michail adds. “So we’ve created a tool that can learn independently and that can be added to existing automated systems to keep those systems running smoothly, which means keeping them running with as little human interference as possible.”

Their greatest challenge has been developing a model flexible enough to be used on a variety of software, while also being reliable. “On the one hand, if you have too many false positives, users might start to ignore the tool. On the other hand, just one false negative in a safety-critical system could mean the end of a company (or worse),” Renaud explains. “And on top of that,” Michail continues, “the system needs to be able to grow with the software it is monitoring, so it needs to be capable of learning as features are updated.”

Their tool will help developers cut through the noise that flaky tests inevitably create around the complicated software systems they monitor. “Working on Facebook’s challenges is great because they can really show you where the pain points are,” Michail adds. And in November they will have another opportunity to hear back from their peers both at Facebook and in the research community when they present their award-winning work at the Facebook Testing and Verification Workshop & Symposium 2019.
