Reproducible Evaluation of Systems
Website: getpopper.io
Funding: NSF OAC-1450488, NSF OAC-1836650, CROSS
Overview: USENIX ;login: Winter 2016
Workshops:
- 1st International Workshop on Practical Reproducible Evaluation of Computer Systems (P-RECS 2018) held in conjunction with ACM HPDC 2018.
- 2nd International Workshop on Practical Reproducible Evaluation of Computer Systems (P-RECS 2019) held in conjunction with ACM HPDC 2019.
- 3rd International Workshop on Practical Reproducible Evaluation of Computer Systems (P-RECS 2020) held in conjunction with ACM HPDC 2020.
Independently validating experimental results in computer systems research is a challenging task: recreating an environment that resembles the one where an experiment was originally executed is a time-consuming endeavor. Popper is a convention (or protocol) for conducting experiments following a DevOps approach, in which researchers make all associated artifacts publicly available, with the goal of maximizing automation in the re-execution of an experiment and the validation of its results.
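As a rough illustration of this DevOps approach, the sketch below shows what a Popper-style workflow file might look like: each step of an experiment is declared in YAML and executed inside a container image, so the whole pipeline can be re-run with a single command (for example, `popper run -f wf.yml`). The step names, images, scripts, and URLs here are hypothetical placeholders, not taken from any published Popper experiment.

```yaml
# wf.yml -- a minimal, hypothetical Popper workflow sketch.
# Each step runs inside a container image, so re-executing the
# experiment only requires a container engine and the Popper CLI.
steps:
# Fetch the input dataset (URL is a placeholder).
- id: download-data
  uses: docker://byrnedo/alpine-curl:0.1.8
  args: [-LO, https://example.org/dataset.csv]

# Run the analysis script that lives in the repository.
- id: analyze
  uses: docker://python:3.9-slim
  runs: [python]
  args: [scripts/analyze.py, dataset.csv, -o, results/]

# Generate figures from the results.
- id: plot
  uses: docker://python:3.9-slim
  runs: [python]
  args: [scripts/plot.py, results/, -o, figures/]
```

Because every step pins a container image and all scripts live in the publicly available repository, a reviewer can validate the results by cloning the repository and re-running the workflow instead of reconstructing the original environment by hand.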
Carlos Maltzahn
Adjunct Professor, Sage Weil Presidential Chair for Open Source Software, Founder & Director of CROSS and OSPO
My research interests include programmable storage systems, big data storage & processing, scalable data management, distributed systems performance management, and practical reproducible research.