December 4 @ 10:00 am - 11:00 am MST
Title: Obtaining Real-World Benchmark Programs From Open-Source Repositories Through Abstract-Semantics Preserving Transformations
Program: Master of Science in Computer Science
Advisor: Dr. Elena Sherman, Computer Science
Committee Members: Dr. Catherine Olschanowsky, Computer Science, and Dr. Sole Pera, Computer Science
Benchmark programs are an integral part of program analysis research. Researchers use benchmark programs to evaluate existing techniques and test the feasibility of new approaches. The larger and more realistic the set of benchmarks, the more confident a researcher can be about the correctness and reproducibility of their results. However, obtaining an adequate set of benchmark programs has been a long-standing challenge in the program analysis community.
In this thesis, we design and implement APT, a framework that automates the generation of realistic benchmark programs suitable for program analysis evaluations. Our tool targets a family of intra-procedural analyses that operate on an integer domain, specifically symbolic execution. The framework is composed of two main components. The first component extracts potential benchmark programs suitable for symbolic execution from open-source repositories. The second component transforms the extracted programs into compilable, stand-alone benchmarks by removing their external dependencies. Our work provides researchers with concise, compilable benchmark programs that are relevant to symbolic execution, allowing them to focus their efforts on advancing symbolic execution techniques.
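As a hypothetical illustration (the abstract does not specify the tool's target language or its actual transformation rules; Java and the names below are assumptions), a dependency-removing, semantics-preserving transformation of the kind described might turn an externally obtained value into a method parameter, so that symbolic execution can treat it as a symbolic integer input while the integer-domain branching behavior is preserved:

```java
// Hypothetical sketch of a stand-alone benchmark produced by such a transformation.
// Original (not stand-alone) code might have read:
//   int threshold = Config.lookup("limit");   // external dependency
// After transformation, the external value becomes a parameter instead.
public class Benchmark {
    static int clamp(int value, int threshold) {
        // Integer-domain branching: the behavior a symbolic executor explores
        if (value > threshold) {
            return threshold;
        }
        return value;
    }

    public static void main(String[] args) {
        // Concrete driver so the benchmark compiles and runs on its own
        System.out.println(clamp(7, 5)); // prints 5
    }
}
```

The transformed version has no external dependencies, so it compiles and runs in isolation, which is the stand-alone property the abstract describes.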