‹Programming› 2020
Mon 23 - Thu 26 March 2020 Porto, Portugal
Tue 24 Mar 2020 11:50 - 12:10 at W1 - Runtime Systems and Performance Analysis

Continuous performance tracking for language implementations and systems work is a best practice. WebKit, Chrome, Firefox, PyPy, Python, Rust, GraalVM, and others keep track of their performance, often at the level of commits or pull requests. It enables us to see performance over time, but also to answer questions like: does my commit affect performance, does this idea improve things, and what are the tradeoffs between two possible optimizations?

Unfortunately, continuous performance tracking is even rarer in academia than automated testing. This talk is a plea for better software engineering in the programming language implementation community, following the credo: if it ain’t automatically tested, it is broken; if there aren’t any benchmarks, it never worked; and if there is no continuous benchmarking, good performance is merely a dream!

This talk is going to make the case for using automated infrastructure for continuous performance tracking, going from the first benchmark, via performance analysis, to paper writing, and eventually having everything packaged up as an artifact. Automated benchmark execution can be the foundation for such a complete tool chain, supporting good software engineering practice in our community. It enables us to better understand the systems we build and the ideas we evaluate. Using the same technology for continuous performance tracking and for producing beautiful plots for our papers also reduces the likelihood of falling victim to unexpected bugs and mistakes that invalidate results entirely. Additionally, automation can enable repeatability of experiments and traceability of results, which are worthwhile goals for scientific experiments in their own right.
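
A small but recurring source of such mistakes is summarizing noisy benchmark runs with a single number. As a minimal sketch of a more careful summary, the following plain-Python snippet (the measurements and names are purely illustrative, not taken from the talk) reports the median run time together with a percentile-bootstrap confidence interval:

    # Illustrative only: summarize repeated benchmark runs with a median
    # and a bootstrap confidence interval, so the reported numbers reflect
    # run-to-run variation rather than a single lucky measurement.
    import random
    import statistics

    def bootstrap_ci(samples, stat=statistics.median, n=1000, alpha=0.05):
        # Percentile bootstrap: resample with replacement, collect the
        # statistic, and take the alpha/2 and 1-alpha/2 quantiles.
        estimates = sorted(
            stat(random.choices(samples, k=len(samples))) for _ in range(n)
        )
        return estimates[int(n * alpha / 2)], estimates[int(n * (1 - alpha / 2)) - 1]

    # Hypothetical run times in milliseconds from repeated executions.
    run_times = [103.2, 98.7, 101.5, 99.9, 104.1, 100.3, 102.8, 97.6]
    low, high = bootstrap_ci(run_times)
    print(f"median {statistics.median(run_times):.1f} ms, "
          f"95% CI [{low:.1f}, {high:.1f}] ms")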

As part of the talk, I am going to introduce ReBench and ReBenchDB, and demonstrate how to use them to go from the first benchmark, via beautiful and statistically sensible plots in a paper, to an academic artifact ready for evaluation by our peers. While such an approach has clear benefits, it also has problems and pitfalls, which will be discussed.
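
To give a flavor of the first step, here is a minimal benchmark configuration, loosely following the YAML format documented for ReBench; the suite, executor, and benchmark names are invented for illustration:

    # rebench.conf (sketch): suite, executor, and benchmark names are made up
    default_experiment: all

    benchmark_suites:
      ExampleSuite:
        gauge_adapter: RebenchLog        # parse run times from the harness output
        command: harness %(benchmark)s   # %(benchmark)s is substituted by ReBench
        benchmarks:
          - Bench1
          - Bench2

    executors:
      MyVM:
        path: bin
        executable: my-vm

    experiments:
      Example:
        suites:
          - ExampleSuite
        executions:
          - MyVM

Invoking rebench on such a configuration file runs all defined experiments and records the measurements, which can then feed into plots, papers, and ultimately a ReBenchDB instance for tracking results over time.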

Nonetheless, continuous performance tracking is sound software engineering advice for a wide range of projects, and the necessary automation facilitates the good scientific practice that we should all strive for.

Tue 24 Mar
Times are displayed in time zone: (GMT+01:00) Greenwich Mean Time: Belfast

11:00 - 12:30: MoreVMs'20 - Runtime Systems and Performance Analysis at W1

11:00 - 11:20 Talk
Aleksandar Prokopec (Oracle Labs), Andrea Rosà (University of Lugano, Switzerland), David Leopoldseder (Oracle Labs), Gilles Duboscq (Oracle Labs), Petr Tůma (Charles University), Martin Studener (JKU Linz, Austria), Lubomír Bulej (Charles University), Yudi Zheng (Oracle Labs), Alex Villazón (Universidad Privada Boliviana, Bolivia), Doug Simon (Oracle Labs), Thomas Wuerthinger (Oracle Labs), Walter Binder (University of Lugano, Switzerland)

11:20 - 11:50 Talk
Eduardo Rosales (University of Lugano, Switzerland), Andrea Rosà (University of Lugano, Switzerland), Walter Binder (University of Lugano, Switzerland)

11:50 - 12:10 Talk
Stefan Marr (University of Kent)

12:10 - 12:30 Talk
Oleks Shturmov (University of Oslo)