This page contains a prototype implementation to accompany the paper "Quantitative Interprocedural Analysis", accepted for publication at POPL 2015 (pdf)
The package can be downloaded here.
You may execute the artifact without building it. Simply extract the package and execute the profiles.sh script, either for a specific benchmark or for all of them:
./profiles.sh <benchmark> (Runs for specific benchmark)
./profiles.sh (Runs for all benchmarks)
The choices for <benchmark> are the following. Running all benchmarks is expected to take several hours.
Results for each benchmark are gathered in the folder results_methods. Each file contains a header with summary statistics of the run, followed by one entry per examined method. Each entry consists of the method's name, the thresholds it was found to satisfy, and the time it took to examine that particular method. Times are reported in seconds.
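If you want to post-process these per-method entries yourself, a small parser can be sketched as follows. Note that the exact record layout is an assumption here: the sketch supposes one whitespace-separated line per method, with the method name first, the running time (in seconds) last, and the satisfied thresholds in between.

```python
def parse_entry(line):
    """Parse one hypothetical per-method entry: name, thresholds, time (seconds)."""
    parts = line.split()
    name = parts[0]            # fully qualified method name
    time_sec = float(parts[-1])  # examination time in seconds
    thresholds = parts[1:-1]   # thresholds the method was found to satisfy
    return name, thresholds, time_sec

# Made-up example line for illustration only:
name, thresholds, time_sec = parse_entry("com.example.Foo.bar 0.5 0.9 12.3")
```

Adjust the field positions to match the actual files produced by profiles.sh; the header lines with summary statistics would need to be skipped before parsing.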
After the static analysis has finished, you can generate the ROC curves (Figure 4) which compare the static profiling results with dynamic profiling results.
Dynamic profiles have been obtained using YourKit Java Profiler on the inputs provided by the DaCapo suite. The dynamic profiles are located in the folder methods_profiles_ypj (you can also generate your own dynamic profiles).
Running methods.py requires that you have Python 2, numpy, and matplotlib installed on your system. The -m option masks out methods that are present in the static analysis but not in the dynamic one.
You can use methods.py with the existing static profiling results from our runs, found in the folder results_methods_ours, to generate the ROC curves as reported in the paper:
python methods.py -s results_methods_ours/
python methods.py -s results_methods_ours/ -m
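For readers unfamiliar with how such curves are built, here is a minimal, self-contained sketch of computing ROC points; it is independent of methods.py, and the scores and labels below are made up purely for illustration:

```python
def roc_points(scores, labels):
    """Sweep a decision threshold over descending scores and record (FPR, TPR) pairs.

    scores: per-item scores (e.g. from a static profile); labels: 1/0 ground
    truth (e.g. from a dynamic profile). Assumes both classes are present.
    """
    pairs = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for _, label in pairs:
        if label:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

# Toy data: 4 items, 2 of which are truly "hot" according to the labels.
pts = roc_points([0.9, 0.8, 0.4, 0.2], [1, 0, 1, 0])
```

methods.py performs this comparison (static vs. dynamic profiles) with real data and renders the curves with matplotlib; the sketch only shows the underlying computation.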
For the detection of overpopulated containers (column #OP in Table 1), execute overpopulated.sh on a specific benchmark, or on all of them:
./overpopulated.sh <benchmark> (Runs for specific benchmark)
./overpopulated.sh (Runs for all benchmarks)
For the detection of underutilized containers (column #UC in Table 1), execute underutilized.sh on a specific benchmark, or on all of them:
./underutilized.sh <benchmark> (Runs for specific benchmark)
./underutilized.sh (Runs for all benchmarks)
Results are gathered in the folder "results_containers", one txt file for each benchmark and analysis (e.g. the file "final_overpopulated_manu_antlr.txt" will contain the analysis of overpopulated containers for the benchmark antlr).
Each file contains a header with summary statistics for the whole benchmark, followed by details on the analysis of each container. The relevant entries (reported in Table 1) are the following:
As reported in Table 1, we have observed that the alias analysis may take a considerable amount of time.
The reported results for the container analysis depend on the precision of the underlying control-flow graph, which is obtained using the Soot framework. We have observed that this precision may differ depending on the machine and Java version used, occasionally leading to small differences in the reported results.
You may wish to build the artifact yourself, or import it into your own project. If so, please follow these steps in Eclipse:
1. Create a new Java project