[Soot-list] Paddle - BDD variable orderings

Edgar Pek edgar.pek at gmail.com
Sat Nov 12 00:49:40 EST 2011


Thank you very much for the elaborate reply!

On Fri, 2011-11-11 at 21:17 -0500, Ondrej Lhotak wrote:
> This is quite an unusual configuration. CHA will generate a very
> imprecise call graph, which will slow down the rest of the analysis. I
> don't have very much experience with this configuration.
> 
> Are you also running Spark in such an unusual configuration?
I've used the configuration that you refer to as ot-cha-fs in the Spark
paper (CC03), and as context-sensitive AOT (ot-aot-fs) in the Paddle
paper (TOSEM08). You wrote in the latter paper that this configuration
"was identified as very fast and also quite precise" [p3:26, Section
4.2, par. 3].

Specifically, for Spark the relevant option is: "on-fly-cg:false", and
for Paddle the relevant options are: "conf:cha,bdd:true,backend:buddy".
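For completeness, this is roughly how those options look on the Soot
command line (the jar names and the benchmark path below are
placeholders, not my exact invocation):

```shell
# Spark with ahead-of-time (non-on-the-fly) call-graph construction:
java -cp soot.jar soot.Main \
  -w -p cg.spark enabled:true,on-fly-cg:false \
  -process-dir path/to/benchmark/classes

# Paddle with a CHA-initialized call graph and the BuDDy BDD back-end:
java -cp soot.jar:paddle.jar soot.Main \
  -w -p cg.paddle enabled:true,conf:cha,bdd:true,backend:buddy \
  -process-dir path/to/benchmark/classes
```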

Are you saying that points-to analysis with ahead-of-time call-graph
construction doesn't make much sense because it lacks precision?

> Are you analyzing the benchmark with a very large standard library?
> This could increase the effect of the imprecise initial call graph
> due to CHA, since CHA will include many unreachable parts of the
> library.
I've experimented with JRE versions 1.4_18, 1.5_0.16, and 1.6_0.10, and
even the 1.4_18 analysis still took a long time. I'll have to see what
is going on, because Bravenboer reports much better Paddle times in the
DOOP paper (OOPSLA 09).

> 
> How are you telling Soot which classes to include in the CHA analysis?
I'm using Eric Bodden's script for running the DaCapo 2006 benchmarks;
at the top of the script there is a JRE variable, which I set to the
various JREs mentioned above.
> 
> Are the results that you get (e.g. call graph size, number of classes
> analyzed) comparable between Spark and Paddle?

Can you tell me which option to set to get that information for
Paddle?
Ideally, I'd like to generate a solution file the way Spark does, so I
can make an exact comparison.
Also, Paddle's verbose mode doesn't generate timing information, which I
found very useful when running Spark in verbose mode.

> Which Paddle backend are you using? BDD or non-BDD? If BDD, which BDD
> implementation? It may be that due to the 2GB of memory, you are running
> out of memory and starting to thrash. This is especially likely if you
> are using one of the native BDD backends (BuDDy or CUDD) since they
> need memory for the BDD node table in addition to the memory that the
> JVM needs for its heap.
I'm using the BuDDy BDD back-end. I was tracking memory consumption
using "top", and it never indicated swapping.
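(For reference, this is roughly how I watched for thrashing alongside
"top"; a simple sketch, nothing Soot-specific:)

```shell
# Sample system-wide memory/swap activity every 5 seconds while the
# analysis runs; the si/so (swap-in/swap-out) columns should stay at 0
# if the machine is not thrashing.
vmstat 5
```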

Thanks for the detailed answer! 

> > For example, analyzing the eclipse benchmark with the default ordering
> > takes 20 minutes ([Paddle] Propagation in 1214.3 seconds.), while
> > Spark, with what I think is the same configuration, takes 7 seconds
> > ([Spark] Propagation in 7.2 seconds.)
> > Is this expected, or am I using the wrong ordering for Paddle?
> > 
> > Btw. I'm using paddle-nightly-build and experimenting on a low-end
> > laptop (Intel Core 2, T5500, 1.66GHz with 2GB).
> > 
> > Thanks,
> > Edgar
> > 
> > _______________________________________________
> > Soot-list mailing list
> > Soot-list at sable.mcgill.ca
> > http://mailman.cs.mcgill.ca/mailman/listinfo/soot-list
> > 




