‹Programming› 2020
Mon 23 - Thu 26 March 2020 Porto, Portugal
Tue 24 Mar 2020 11:50 - 12:10 at W1 - Runtime Systems and Performance Analysis

Continuous performance tracking for language implementations and systems work is a best practice. WebKit, Chrome, Firefox, PyPy, Python, Rust, GraalVM, and others keep track of their performance, often at the level of commits or pull requests. This enables us not only to see performance over time, but also to answer questions like: does my commit affect performance, does this idea improve things, and what are the tradeoffs between two possible optimizations?

Unfortunately, continuous performance tracking is even rarer in academia than automated testing. This talk is a plea for better software engineering in the programming language implementation community, following the credo: if it ain’t automatically tested, it is broken; if there aren’t any benchmarks, it never worked; and if there is no continuous benchmarking, good performance is merely a dream!

This talk makes the case for using automated infrastructure for continuous performance tracking, going from the first benchmark, via performance analysis, to paper writing, and eventually to having everything packaged up as an artifact. Automated benchmark execution can be the foundation of such a complete toolchain for good software engineering practice in our community. It enables us to better understand the systems we build and the ideas we evaluate. Using the same technology for continuous performance tracking and for producing beautiful plots for our papers also reduces the likelihood of falling victim to unexpected bugs and mistakes that invalidate results entirely. Additionally, automation can enable repeatability of experiments and traceability of results, which are worthwhile goals for scientific experiments in their own right.
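To make this concrete: ReBench drives experiments from a declarative configuration file, so the full set of benchmarks, executors, and experiments is versioned alongside the code. Below is a minimal sketch of such a configuration. The overall structure follows ReBench's documented YAML format; the suite, executor, command, and benchmark names are hypothetical placeholders.

    # rebench.conf -- minimal, hypothetical example configuration
    default_experiment: Example

    benchmark_suites:
      ExampleSuite:
        gauge_adapter: RebenchLog        # parses ReBench's benchmark log output
        command: "harness %(benchmark)s %(iterations)s"
        iterations: 10                   # repeated runs, so statistics are possible
        benchmarks:
          - Bounce
          - List

    executors:
      MyVM:                              # the language implementation under test
        path: bin
        executable: my-vm

    experiments:
      Example:
        suites:
          - ExampleSuite
        executions:
          - MyVM

Running "rebench rebench.conf" then executes every benchmark on every executor and records the measurements, so the same configuration can back one-off experiments, continuous tracking on every commit, and the eventual artifact.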

As part of the talk, I am going to introduce ReBench and ReBenchDB and demonstrate how to use them to go from the first benchmark, via beautiful and statistically sensible plots in a paper, to an academic artifact for evaluation by our peers. While such an approach has clear benefits, it also has problems and pitfalls, which will be discussed.
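To illustrate the plotting step: a statistically sensible plot reports uncertainty, not just means. The following Python sketch computes bootstrapped 95% confidence intervals per benchmark and executor and plots them. The input file and its column names (benchmark, executor, runtime_ms) are assumptions made for this example, not the actual data format of ReBench or ReBenchDB.

    # plot_results.py -- hypothetical analysis script; the CSV columns
    # benchmark, executor, runtime_ms are assumed for illustration only.
    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt

    def bootstrap_ci(samples, n_boot=10_000, level=0.95):
        """Percentile-bootstrap confidence interval for the mean."""
        rng = np.random.default_rng(42)  # fixed seed keeps the plot repeatable
        means = rng.choice(samples, size=(n_boot, len(samples))).mean(axis=1)
        lo, hi = np.percentile(means, [100 * (1 - level) / 2,
                                       100 * (1 + level) / 2])
        return samples.mean(), lo, hi

    data = pd.read_csv("measurements.csv")
    rows = []
    for (bench, vm), group in data.groupby(["benchmark", "executor"]):
        mean, lo, hi = bootstrap_ci(group["runtime_ms"].to_numpy())
        rows.append({"benchmark": bench, "executor": vm, "mean": mean,
                     "err_lo": mean - lo, "err_hi": hi - mean})

    summary = pd.DataFrame(rows)
    for vm, group in summary.groupby("executor"):
        plt.errorbar(group["benchmark"], group["mean"],
                     yerr=[group["err_lo"], group["err_hi"]],
                     fmt="o", capsize=3, label=vm)
    plt.ylabel("Runtime (ms), mean with 95% bootstrap CI")
    plt.legend()
    plt.savefig("results.pdf")  # the same figure can feed the paper and a dashboard

Because such a script is deterministic and runs unattended, it can be part of the same automated pipeline that produces the paper's figures and the artifact.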

Nonetheless, continuous performance tracking is sound software engineering advice for a wide range of projects, and the necessary automation facilitates the good scientific practice we should all strive for.

Tue 24 Mar

Displayed time zone: Belfast

11:00 - 12:30
Runtime Systems and Performance Analysis (MoreVMs) at W1
11:00
20m
Talk
Renaissance: Benchmarking Suite for Parallel Applications on the JVM (Talk)
MoreVMs
Aleksandar Prokopec Oracle Labs, Andrea Rosà University of Lugano, Switzerland, David Leopoldseder Oracle Labs, Gilles Duboscq Oracle Labs, Petr Tuma Charles University, Martin Studener JKU Linz, Austria, Lubomír Bulej Charles University, Yudi Zheng Oracle Labs, Alex Villazón Universidad Privada Boliviana, Bolivia, Doug Simon Oracle Labs, Thomas Wuerthinger Oracle Labs, Walter Binder University of Lugano, Switzerland
11:20
30m
Talk
Profiling Streams on the Java Virtual Machine
MoreVMs
Eduardo Rosales University of Lugano, Switzerland, Andrea Rosà University of Lugano, Switzerland, Walter Binder University of Lugano, Switzerland
11:50
20m
Talk
Continuous Performance Tracking for Better "Everything"! (Talk)
MoreVMs
Stefan Marr University of Kent
12:10
20m
Talk
Towards Modern Runtime Support for an Object-Based Distributed Programming Language (Talk)
MoreVMs
Oleks Shturmov University of Oslo