POPL 2015 Artifact

Quantitative Interprocedural Analysis (#22)

This page contains a prototype implementation to accompany the paper "Quantitative Interprocedural Analysis", accepted for publication at POPL 2015 (pdf).

The package can be downloaded here.


Static profiling (Section 5.3)

You may execute the artifact without building it. Simply extract the package and run the profiles.sh script on a specific benchmark, or on all of them:

./profiles.sh <benchmark>        (runs a specific benchmark)
./profiles.sh                    (runs all benchmarks)

The choices of <benchmark> are the following. Running all benchmarks is expected to take several hours.
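Since a full run can take hours, it may help to time a single benchmark first. A minimal bash sketch (antlr is one benchmark name, taken from the container results below; the log file name is arbitrary):

{ time ./profiles.sh antlr ; } > profiles_antlr.log 2>&1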

Results are gathered for each benchmark in the folder results_methods. Each file contains a header with summary statistics of the run, followed by one entry per examined method. Each entry consists of the method's name, the thresholds it was found to satisfy, and the time taken to examine that method, reported in seconds.
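If you want to skim the results from the command line, the following bash sketch prints, for each results file, the five entries with the largest last field. It assumes each method entry ends with its time in seconds as the last whitespace-separated field; adjust the field index to the actual file layout:

for f in results_methods/*; do
    echo "== $f =="
    awk 'NF { print $NF, $0 }' "$f" | sort -nr | head -n 5
done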

After the static analysis has finished, you can generate the ROC curves (Figure 4), which compare the static profiling results with dynamic profiling results.
Dynamic profiles have been obtained using the YourKit Java Profiler on the inputs provided by the DaCapo suite. The dynamic profiles reside in the folder methods_profiles_ypj (you can also generate your own dynamic profiles).
Running methods.py requires Python 2 with numpy and matplotlib installed on your system; a quick way to check this is shown after the commands below. The -m option masks out methods that are present in the static, but not the dynamic, analysis.

python methods.py            (Creates file False_masked.eps)
python methods.py -m         (Creates file True_masked.eps)
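To quickly verify that the dependencies are available (assuming python resolves to a Python 2 interpreter):

python -c "import numpy, matplotlib; print 'dependencies OK'"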

You can use methods.py with existing static profiling results (from our runs), found in the folder results_methods_ours, to generate the ROC curves as reported in the paper:

python methods.py -s results_methods_ours/
python methods.py -s results_methods_ours/ -m


Container analysis (Section 5.2)

You may execute the artifact without building it.
For the detection of overpopulated containers (column #OP in Table 1), run overpopulated.sh on a specific benchmark, or on all of them:

./overpopulated.sh <benchmark>        (runs a specific benchmark)
./overpopulated.sh                    (runs all benchmarks)

For the detection of underutilized containers (column #UC in Table 1), run underutilized.sh on a specific benchmark, or on all of them (a sketch for batching both analyses follows these commands):

./underutilized.sh <benchmark>        (runs a specific benchmark)
./underutilized.sh                    (runs all benchmarks)
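To batch both analyses over a subset of benchmarks and keep per-benchmark logs, a small bash sketch (only antlr is a benchmark name confirmed below; add your own):

for b in antlr; do    # add further benchmark names here
    ./overpopulated.sh "$b" > "op_$b.log" 2>&1
    ./underutilized.sh "$b" > "uc_$b.log" 2>&1
done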

The choices of <benchmark> are the following. Running all benchmarks is expected to take several hours.

Results are gathered in the folder results_containers, with one txt file per benchmark and analysis (e.g., the file final_overpopulated_manu_antlr.txt contains the analysis of overpopulated containers for the benchmark antlr).

Each file contains a header with summary statistics for the whole benchmark, followed by details on the analysis of each container, including the entries reported in Table 1.
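To skim just the summary headers from the command line, a bash sketch (it assumes the header fits within the first ten lines of each file; adjust as needed):

for f in results_containers/final_*.txt; do
    echo "== $f =="
    head -n 10 "$f"
done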

As reported in Table 1, we have observed that the alias analysis can take a considerable amount of time.

The reported results for the container analysis depend on the precision of the underlying control-flow graph, which is obtained using the Soot framework. We have observed that this precision can differ depending on the machine and Java version used, occasionally leading to small differences in the reported results.

Build

You may wish to build the artifact yourself, or import it into your own project. If so, please follow these steps in Eclipse (a command-line alternative is sketched after the list):

1. Create a new Java project.
2. Copy the contents of the src folder of the artifact into the src folder of your project.
3. Under the Java build path of your project, add the following jars: jasminclasses-2.5.0.jar, sootclasses-2.5.0.jar, polyglotclasses-1.3.5.jar.
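Alternatively, if you prefer building from the command line instead of Eclipse, something along these lines should work (a sketch: it assumes the three jars and the src folder are in the current directory, and uses Unix classpath separators):

mkdir -p bin
find src -name '*.java' > sources.txt
javac -d bin -cp jasminclasses-2.5.0.jar:sootclasses-2.5.0.jar:polyglotclasses-1.3.5.jar @sources.txt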

Contact

For questions, please send email to pavlogiannis-ist_ac_at [replace '-' with '@' and '_' with '.'].