Lightweight Lexical Test Prioritization for Immediate Feedback
The practice of unit testing enables programmers to obtain automated feedback on whether a currently edited program is consistent with the expectations specified in test cases. Feedback is most valuable when it arrives immediately, as defects can be corrected right away, before they become harder to fix. With growing and longer-running test suites, however, feedback is obtained less frequently and lags behind program changes.
The objective of test prioritization is to rank tests so that defects, if present, are found as early as possible or at the lowest cost. While there are numerous static approaches that rank tests based solely on the current version of a program, we focus on change-based test prioritization, which recommends tests that are likely to fail in response to the most recent program change. The canonical approach relies on coverage data and prioritizes tests that cover the changed region, but obtaining and updating coverage data is costly. More recently, information retrieval (IR) techniques that exploit the overlapping vocabulary between a change and the tests have proven to be powerful yet lightweight.
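To make the retrieval step concrete, the following is a minimal sketch of vocabulary-overlap prioritization, assuming plain TF-IDF weighting and cosine similarity; it is illustrative rather than the authors' implementation, and the names `tokens` and `tf_idf_rank` are made up for this example.

```python
import math
import re
from collections import Counter

def tokens(source: str) -> list[str]:
    # Extract identifier-like tokens; splitting camelCase/snake_case
    # further is one of the choices such approaches can tune.
    return re.findall(r"[A-Za-z_]\w*", source.lower())

def tf_idf_rank(change_text: str, tests: dict[str, str]) -> list[str]:
    """Order test names by TF-IDF cosine similarity to the change text."""
    tfs = {name: Counter(tokens(src)) for name, src in tests.items()}
    df = Counter()
    for tf in tfs.values():
        df.update(tf.keys())
    n = len(tfs)
    idf = {term: math.log(n / df[term]) for term in df}

    def weight(tf: Counter) -> dict[str, float]:
        # Terms unseen in the test corpus get zero weight.
        return {t: count * idf.get(t, 0.0) for t, count in tf.items()}

    def cosine(a: dict[str, float], b: dict[str, float]) -> float:
        dot = sum(w * b.get(t, 0.0) for t, w in a.items())
        norm = (math.sqrt(sum(w * w for w in a.values()))
                * math.sqrt(sum(w * w for w in b.values())))
        return dot / norm if norm else 0.0

    query = weight(Counter(tokens(change_text)))
    return sorted(tests, key=lambda name: cosine(query, weight(tfs[name])),
                  reverse=True)
```

Given the text of a change and a mapping from test names to their source code, `tf_idf_rank(diff_text, tests)` returns the test names ordered from most to least lexically similar to the change.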
In this work, we demonstrate the capabilities of information retrieval for prioritizing tests in dynamic programming languages, using Python as an example. We discuss and measure previously understudied variation points, including how contextual information around a program change can be used, and design alternatives to the widespread TF-IDF retrieval model that are tailored to retrieving failing tests.
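As an illustration of one such variation point, the sketch below widens the query from the changed lines to their enclosing definitions, under the assumption that a whole function or class is a useful unit of context; the helper `enclosing_context` is hypothetical, not the paper's exact design.

```python
import ast

def enclosing_context(source: str, changed_lines: set[int]) -> str:
    """Return the source of all function/class definitions that overlap
    the changed lines, so their vocabulary can enrich the query."""
    parts = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef,
                             ast.ClassDef)):
            end = getattr(node, "end_lineno", node.lineno)
            if any(node.lineno <= line <= end for line in changed_lines):
                parts.append(ast.get_source_segment(source, node) or "")
    return "\n".join(parts)
```

The returned text can then be tokenized and fed into the same retrieval model as the changed lines themselves.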
To obtain program changes with associated test failures, we designed a tool that generates a large set of faulty changes from version history, along with their test results. Using this data set, we compared existing and new lexical prioritization strategies on four open-source Python projects, showing large improvements over untreated and random test orders and results consistent with related work on statically typed languages.
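A data-generation loop of this kind might look roughly like the sketch below, assuming a git repository and a pytest suite; the `mutate` hook, which seeds a fault and returns its diff, is a hypothetical stand-in for the tool's actual fault-injection step.

```python
import subprocess

def failed_tests(repo: str) -> set[str]:
    # -rf prints one "FAILED path::test" summary line per failing test.
    proc = subprocess.run(
        ["python", "-m", "pytest", "-q", "--tb=no", "-rf"],
        cwd=repo, capture_output=True, text=True,
    )
    return {line.split()[1]
            for line in proc.stdout.splitlines()
            if line.startswith("FAILED ")}

def faulty_change(repo: str, commit: str, mutate) -> tuple[str, set[str]]:
    """Check out a historical commit, seed a fault, and record failures."""
    subprocess.run(["git", "checkout", "--force", commit],
                   cwd=repo, check=True)
    diff = mutate(repo)              # hypothetical fault-injection hook
    failures = failed_tests(repo)
    subprocess.run(["git", "checkout", "--force", "."],
                   cwd=repo, check=True)
    return diff, failures
```

Repeating this over many commits and seeded faults yields pairs of changes and failing tests against which prioritization strategies can be scored.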
We conclude that lightweight IR-based prioritization strategies are effective tools for predicting failing tests when coverage data is absent or static analysis is intractable, as in dynamic languages. This knowledge can benefit both individual programmers who rely on fast feedback and operators of continuous integration infrastructure, where resources can be freed sooner by detecting defects earlier in the build cycle.
Wed 25 Mar (time zone: Belfast)

14:00 - 15:30, Research Papers

14:00 (30m, research paper): Lightweight Lexical Test Prioritization for Immediate Feedback. Toni Mattis (Hasso Plattner Institute, University of Potsdam) and Robert Hirschfeld (Hasso-Plattner-Institut (HPI), Germany).

14:30 (30m, research paper): Robust Contract Evolution in a TypeSafe MicroServices Architecture. João Costa Seco (NOVA LINCS, Universidade Nova de Lisboa), Paulo Ferreira (OutSystems SA), Hugo Lourenço (OutSystems SA), Carla Ferreira (Universidade Nova Lisboa), and Lucio Ferrao (OutSystems).

15:00 (30m, research paper): Sthread: In-Vivo Model-Checking of Multithreaded Programs.