soot.options
Class SparkOptions
java.lang.Object
soot.options.SparkOptions
public class SparkOptions
- extends Object
Option parser for Spark.
Method Summary

boolean  add_tags()
          Add Tags -- Output points-to results in tags for viewing with the Jimple code.
boolean  class_method_var()
          Class Method Var -- In dump, label variables by class and method.
boolean  cs_demand()
          Demand-driven refinement-based context-sensitive points-to analysis -- After running Spark, refine points-to sets on demand with context information.
int      double_set_new()
          Double Set New -- Select implementation of points-to set for new part of double set.
int      double_set_old()
          Double Set Old -- Select implementation of points-to set for old part of double set.
boolean  dump_answer()
          Dump Answer -- Dump computed reaching types for comparison with other solvers.
boolean  dump_html()
          Dump HTML -- Dump pointer assignment graph to HTML for debugging.
boolean  dump_pag()
          Dump PAG -- Dump pointer assignment graph for other solvers.
boolean  dump_solution()
          Dump Solution -- Dump final solution for comparison with other solvers.
boolean  dump_types()
          Dump Types -- Include declared types in dump.
boolean  empties_as_allocs()
          Treat EMPTY as Alloc -- Treat singletons for empty sets etc. as allocation sites.
boolean  enabled()
          Enabled -- Enable Spark.
boolean  field_based()
          Field Based -- Use a field-based rather than field-sensitive representation.
boolean  force_gc()
          Force Garbage Collections -- Force garbage collection for measuring memory usage.
boolean  geom_blocking()
          Blocking strategy for recursive calls -- Enable blocking strategy for recursive calls.
String   geom_dump_verbose()
          Verbose dump file -- Filename for detailed execution log.
int      geom_encoding()
          Encoding methodology used -- Encoding methodology.
int      geom_eval()
          Precision evaluation methodologies -- Precision evaluation methodologies.
int      geom_frac_base()
          Fractional parameter -- Fractional parameter for precision/performance trade-off.
boolean  geom_pta()
          Geometric, context-sensitive points-to analysis -- Enable or disable the geometric analysis.
int      geom_runs()
          Iterations -- Iterations of analysis.
boolean  geom_trans()
          Transform to context-insensitive result -- Transform to context-insensitive result.
String   geom_verify_name()
          Verification file -- Filename for verification file.
int      geom_worklist()
          Worklist type -- Worklist type.
boolean  ignore_types_for_sccs()
          Ignore Types For SCCs -- Ignore declared types when determining node equivalence for SCCs.
boolean  ignore_types()
          Ignore Types Entirely -- Make Spark completely ignore declared types of variables.
boolean  lazy_pts()
          Create lazy points-to sets -- Create lazy points-to sets that create context information only when needed.
boolean  merge_stringbuffer()
          Merge String Buffer -- Represent all StringBuffers as one object.
boolean  on_fly_cg()
          On Fly Call Graph -- Build call graph as receiver types become known.
int      passes()
          Maximal number of passes -- Perform at most this number of refinement iterations.
boolean  pre_jimplify()
          Pre Jimplify -- Jimplify all methods before starting Spark.
int      propagator()
          Propagator -- Select propagation algorithm.
boolean  rta()
          RTA -- Emulate Rapid Type Analysis.
int      set_impl()
          Set Implementation -- Select points-to set implementation.
boolean  set_mass()
          Calculate Set Mass -- Calculate statistics about points-to set sizes.
boolean  simple_edges_bidirectional()
          Simple Edges Bidirectional -- Equality-based analysis between variable nodes.
boolean  simplify_offline()
          Simplify Offline -- Collapse single-entry subgraphs of the PAG.
boolean  simplify_sccs()
          Simplify SCCs -- Collapse strongly-connected components of the PAG.
boolean  simulate_natives()
          Simulate Natives -- Simulate effects of native methods in standard class library.
boolean  string_constants()
          Propagate All String Constants -- Propagate all string constants, not just class names.
boolean  topo_sort()
          Topological Sort -- Sort variable nodes in dump.
int      traversal()
          Maximal traversal -- Make the analysis traverse at most this number of nodes per query.
boolean  types_for_sites()
          Types For Sites -- Represent objects by their actual type rather than allocation site.
boolean  verbose()
          Verbose -- Print detailed information about the execution of Spark.
boolean  vta()
          VTA -- Emulate Variable Type Analysis.
Methods inherited from class java.lang.Object |
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait |
propagator_iter
public static final int propagator_iter
- See Also:
- Constant Field Values
propagator_worklist
public static final int propagator_worklist
- See Also:
- Constant Field Values
propagator_cycle
public static final int propagator_cycle
- See Also:
- Constant Field Values
propagator_merge
public static final int propagator_merge
- See Also:
- Constant Field Values
propagator_alias
public static final int propagator_alias
- See Also:
- Constant Field Values
propagator_none
public static final int propagator_none
- See Also:
- Constant Field Values
set_impl_hash
public static final int set_impl_hash
- See Also:
- Constant Field Values
set_impl_bit
public static final int set_impl_bit
- See Also:
- Constant Field Values
set_impl_hybrid
public static final int set_impl_hybrid
- See Also:
- Constant Field Values
set_impl_array
public static final int set_impl_array
- See Also:
- Constant Field Values
set_impl_heintze
public static final int set_impl_heintze
- See Also:
- Constant Field Values
set_impl_sharedlist
public static final int set_impl_sharedlist
- See Also:
- Constant Field Values
set_impl_double
public static final int set_impl_double
- See Also:
- Constant Field Values
double_set_old_hash
public static final int double_set_old_hash
- See Also:
- Constant Field Values
double_set_old_bit
public static final int double_set_old_bit
- See Also:
- Constant Field Values
double_set_old_hybrid
public static final int double_set_old_hybrid
- See Also:
- Constant Field Values
double_set_old_array
public static final int double_set_old_array
- See Also:
- Constant Field Values
double_set_old_heintze
public static final int double_set_old_heintze
- See Also:
- Constant Field Values
double_set_old_sharedlist
public static final int double_set_old_sharedlist
- See Also:
- Constant Field Values
double_set_new_hash
public static final int double_set_new_hash
- See Also:
- Constant Field Values
double_set_new_bit
public static final int double_set_new_bit
- See Also:
- Constant Field Values
double_set_new_hybrid
public static final int double_set_new_hybrid
- See Also:
- Constant Field Values
double_set_new_array
public static final int double_set_new_array
- See Also:
- Constant Field Values
double_set_new_heintze
public static final int double_set_new_heintze
- See Also:
- Constant Field Values
double_set_new_sharedlist
public static final int double_set_new_sharedlist
- See Also:
- Constant Field Values
geom_encoding_Geom
public static final int geom_encoding_Geom
- See Also:
- Constant Field Values
geom_encoding_HeapIns
public static final int geom_encoding_HeapIns
- See Also:
- Constant Field Values
geom_encoding_PtIns
public static final int geom_encoding_PtIns
- See Also:
- Constant Field Values
geom_worklist_PQ
public static final int geom_worklist_PQ
- See Also:
- Constant Field Values
geom_worklist_FIFO
public static final int geom_worklist_FIFO
- See Also:
- Constant Field Values
SparkOptions
public SparkOptions(Map options)
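A minimal sketch of building the option map this constructor parses. The hyphenated keys mirror the option names documented on this page; treating the map as String-to-String, and the exact key spellings, are assumptions. The `SparkOptions` call itself is left as a comment since it needs Soot on the classpath.

```java
import java.util.HashMap;
import java.util.Map;

public class SparkOptionsSketch {
    // Assumed key format: hyphenated option names with string values,
    // matching the option names documented on this page.
    public static Map<String, String> exampleOptions() {
        Map<String, String> opts = new HashMap<>();
        opts.put("enabled", "true");
        opts.put("verbose", "true");
        opts.put("propagator", "worklist");    // see propagator()
        opts.put("set-impl", "double");        // see set_impl()
        opts.put("double-set-old", "hybrid");  // see double_set_old()
        opts.put("double-set-new", "hybrid");  // see double_set_new()
        return opts;
    }

    public static void main(String[] args) {
        Map<String, String> opts = exampleOptions();
        // With Soot on the classpath, the map would be consumed as:
        // SparkOptions spark = new SparkOptions(opts);
        // boolean verbose = spark.verbose();
        System.out.println(opts.size()); // 6
    }
}
```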
enabled
public boolean enabled()
- Enabled --
Enable Spark.
verbose
public boolean verbose()
- Verbose --
Print detailed information about the execution of Spark.
When this option is set to true, Spark prints detailed
information about its execution.
ignore_types
public boolean ignore_types()
- Ignore Types Entirely --
Make Spark completely ignore declared types of variables.
When this option is set to true, all parts of Spark completely
ignore declared types of variables and casts.
force_gc
public boolean force_gc()
- Force Garbage Collections --
Force garbage collection for measuring memory usage.
When this option is set to true, calls to System.gc() will be
made at various points to allow memory usage to be measured.
pre_jimplify
public boolean pre_jimplify()
- Pre Jimplify --
Jimplify all methods before starting Spark.
When this option is set to true, Spark converts all available
methods to Jimple before starting the points-to analysis. This
allows the Jimplification time to be separated from the
points-to time. However, it increases the total time and memory
requirement, because all methods are Jimplified, rather than
only those deemed reachable by the points-to analysis.
vta
public boolean vta()
- VTA --
Emulate Variable Type Analysis.
Setting VTA to true has the effect of setting field-based,
types-for-sites, and simplify-sccs to true, and on-fly-cg to
false, to simulate Variable Type Analysis, described in our
OOPSLA 2000 paper. Note that the algorithm differs from the
original VTA in that it handles array elements more precisely.
rta
public boolean rta()
- RTA --
Emulate Rapid Type Analysis.
Setting RTA to true sets types-for-sites to true, and causes
Spark to use a single points-to set for all variables, giving
Rapid Type Analysis.
field_based
public boolean field_based()
- Field Based --
Use a field-based rather than field-sensitive representation.
When this option is set to true, fields are represented by
variable (Green) nodes, and the object that the field belongs to
is ignored (all objects are lumped together), giving a
field-based analysis. Otherwise, fields are represented by field
reference (Red) nodes, and the objects that they belong to are
distinguished, giving a field-sensitive analysis.
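The field-based versus field-sensitive distinction can be sketched with a toy heap model (illustrative only, not Soot code; all names are hypothetical):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy model: a field-based analysis keys the heap by field name alone,
// so stores through different base objects land in one shared set,
// while a field-sensitive analysis keys by (allocation site, field).
public class FieldRepresentation {
    private final Map<String, Set<String>> heap = new HashMap<>();
    private final boolean fieldBased;

    FieldRepresentation(boolean fieldBased) { this.fieldBased = fieldBased; }

    private String key(String baseAlloc, String field) {
        return fieldBased ? field : baseAlloc + "." + field;
    }

    void store(String baseAlloc, String field, String pointee) {
        heap.computeIfAbsent(key(baseAlloc, field), k -> new HashSet<>()).add(pointee);
    }

    Set<String> load(String baseAlloc, String field) {
        return heap.getOrDefault(key(baseAlloc, field), Set.of());
    }

    public static void main(String[] args) {
        FieldRepresentation fb = new FieldRepresentation(true);
        fb.store("new@1", "f", "o1");
        // Field-based: the store through new@1 is visible through new@2 too.
        System.out.println(fb.load("new@2", "f")); // [o1]

        FieldRepresentation fs = new FieldRepresentation(false);
        fs.store("new@1", "f", "o1");
        // Field-sensitive: distinct base objects keep distinct sets.
        System.out.println(fs.load("new@2", "f")); // []
    }
}
```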
types_for_sites
public boolean types_for_sites()
- Types For Sites --
Represent objects by their actual type rather than allocation
site.
When this option is set to true, types rather than allocation
sites are used as the elements of the points-to sets.
merge_stringbuffer
public boolean merge_stringbuffer()
- Merge String Buffer --
Represent all StringBuffers as one object.
When this option is set to true, all allocation sites creating
java.lang.StringBuffer objects are grouped together as a single
allocation site.
string_constants
public boolean string_constants()
- Propagate All String Constants --
Propagate all string constants, not just class names.
When this option is set to false, Spark only distinguishes
string constants that may be the name of a class loaded
dynamically using reflection, and all other string constants are
lumped together into a single string constant node. Setting this
option to true causes all string constants to be propagated
individually.
simulate_natives
public boolean simulate_natives()
- Simulate Natives --
Simulate effects of native methods in standard class library.
When this option is set to true, the effects of native methods
in the standard Java class library are simulated.
empties_as_allocs
public boolean empties_as_allocs()
- Treat EMPTY as Alloc --
Treat singletons for empty sets etc. as allocation sites.
When this option is set to true, Spark treats references to
EMPTYSET, EMPTYMAP, and EMPTYLIST as allocation sites for
HashSet, HashMap and LinkedList objects respectively, and
references to Hashtable.emptyIterator as allocation sites for
Hashtable.EmptyIterator. This enables subsequent analyses to
differentiate different uses of Java's immutable empty
collections.
simple_edges_bidirectional
public boolean simple_edges_bidirectional()
- Simple Edges Bidirectional --
Equality-based analysis between variable nodes.
When this option is set to true, all edges connecting variable
(Green) nodes are made bidirectional, as in Steensgaard's
analysis.
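The effect of making simple edges bidirectional can be sketched with a small union-find over variable names (a stand-in for Spark's variable nodes, not Soot code): once assignment edges are undirected, connected variables collapse into one equivalence class sharing a points-to set, as in Steensgaard's analysis.

```java
import java.util.HashMap;
import java.util.Map;

// Union-find illustration: with bidirectional simple edges, every
// assignment "a = b" unifies a and b into one equivalence class.
public class BidirectionalEdges {
    private final Map<String, String> parent = new HashMap<>();

    String find(String x) {
        parent.putIfAbsent(x, x);
        String p = parent.get(x);
        if (!p.equals(x)) {
            p = find(p);
            parent.put(x, p); // path compression
        }
        return p;
    }

    // A simple (assignment) edge between two variable nodes merges them.
    void addSimpleEdge(String src, String dst) {
        parent.put(find(src), find(dst));
    }

    public static void main(String[] args) {
        BidirectionalEdges uf = new BidirectionalEdges();
        uf.addSimpleEdge("b", "a"); // a = b
        uf.addSimpleEdge("c", "b"); // b = c
        // a, b, and c now share a single points-to set.
        System.out.println(uf.find("a").equals(uf.find("c"))); // true
    }
}
```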
on_fly_cg
public boolean on_fly_cg()
- On Fly Call Graph --
Build call graph as receiver types become known.
When this option is set to true, the call graph is computed
on-the-fly as points-to information is computed. Otherwise, an
initial CHA approximation to the call graph is used.
simplify_offline
public boolean simplify_offline()
- Simplify Offline --
Collapse single-entry subgraphs of the PAG.
When this option is set to true, variable (Green) nodes which
form single-entry subgraphs (so they must have the same
points-to set) are merged before propagation begins.
simplify_sccs
public boolean simplify_sccs()
- Simplify SCCs --
Collapse strongly-connected components of the PAG.
When this option is set to true, variable (Green) nodes which
form strongly-connected components (so they must have the same
points-to set) are merged before propagation begins.
ignore_types_for_sccs
public boolean ignore_types_for_sccs()
- Ignore Types For SCCs --
Ignore declared types when determining node equivalence for SCCs.
When this option is set to true, when collapsing
strongly-connected components, nodes forming SCCs are collapsed
regardless of their declared type. The collapsed SCC is given
the most general type of all the nodes in the component. When
this option is set to false, only edges connecting nodes of the
same type are considered when detecting SCCs. This option has
no effect unless simplify-sccs is true.
dump_html
public boolean dump_html()
- Dump HTML --
Dump pointer assignment graph to HTML for debugging.
When this option is set to true, a browseable HTML
representation of the pointer assignment graph is output to a
file called pag.jar after the analysis completes. Note that this
representation is typically very large.
dump_pag
public boolean dump_pag()
- Dump PAG --
Dump pointer assignment graph for other solvers.
When this option is set to true, a representation of the
pointer assignment graph suitable for processing with other
solvers (such as the BDD-based solver) is output before the
analysis begins.
dump_solution
public boolean dump_solution()
- Dump Solution --
Dump final solution for comparison with other solvers.
When this option is set to true, a representation of the
resulting points-to sets is dumped. The format is similar to
that of the Dump PAG option, and is therefore suitable for
comparison with the results of other solvers.
topo_sort
public boolean topo_sort()
- Topological Sort --
Sort variable nodes in dump.
When this option is set to true, the representation dumped by
the Dump PAG option is dumped with the variable (green) nodes in
(pseudo-)topological order. This option has no effect unless
Dump PAG is true.
dump_types
public boolean dump_types()
- Dump Types --
Include declared types in dump.
When this option is set to true, the representation dumped by
the Dump PAG option includes type information for all nodes.
This option has no effect unless Dump PAG is true.
class_method_var
public boolean class_method_var()
- Class Method Var --
In dump, label variables by class and method.
When this option is set to true, the representation dumped by
the Dump PAG option represents nodes by numbering each class,
method, and variable within the method separately, rather than
assigning a single integer to each node. This option has no
effect unless Dump PAG is true. Setting Class Method Var to
true has the effect of setting Topological Sort to false.
dump_answer
public boolean dump_answer()
- Dump Answer --
Dump computed reaching types for comparison with other solvers.
When this option is set to true, the computed reaching types
for each variable are dumped to a file, so that they can be
compared with the results of other analyses (such as the old
VTA).
add_tags
public boolean add_tags()
- Add Tags --
Output points-to results in tags for viewing with the Jimple code.
When this option is set to true, the results of the
analysis are encoded within tags and printed with the resulting
Jimple code.
set_mass
public boolean set_mass()
- Calculate Set Mass --
Calculate statistics about points-to set sizes.
When this option is set to true, Spark computes and prints
various cryptic statistics about the size of the points-to sets
computed.
cs_demand
public boolean cs_demand()
- Demand-driven refinement-based context-sensitive points-to analysis --
After running Spark, refine points-to sets on demand with
context information.
When this option is set to true, Manu Sridharan's
demand-driven, refinement-based points-to analysis (PLDI 06) is
applied after Spark has run.
lazy_pts
public boolean lazy_pts()
- Create lazy points-to sets --
Create lazy points-to sets that create context information only
when needed.
When this option is disabled, context information is computed
for every query to the reachingObjects method. When it is
enabled, a call to reachingObjects returns a lazy wrapper object
that contains a context-insensitive points-to set. This set is
then automatically refined with context information when
necessary, i.e. when we try to determine the intersection with
another points-to set and this intersection seems to be
non-empty.
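The lazy scheme described above can be sketched as follows. This is an illustration, not Soot's actual API: the wrapper holds a cheap context-insensitive set and runs the expensive context-sensitive refinement (modeled here as a supplier) only when an intersection test against another set looks non-empty.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.function.Supplier;

public class LazyPointsToSet {
    private final Set<String> contextInsensitive;
    private final Supplier<Set<String>> refiner; // stands in for the context-sensitive analysis
    private Set<String> refined;                 // computed on demand

    public LazyPointsToSet(Set<String> contextInsensitive, Supplier<Set<String>> refiner) {
        this.contextInsensitive = contextInsensitive;
        this.refiner = refiner;
    }

    public boolean hasNonEmptyIntersection(Set<String> other) {
        Set<String> overlap = new HashSet<>(contextInsensitive);
        overlap.retainAll(other);
        if (overlap.isEmpty()) {
            return false;                // cheap answer; no refinement needed
        }
        if (refined == null) {
            refined = refiner.get();     // refine with context info only now
        }
        Set<String> precise = new HashSet<>(refined);
        precise.retainAll(other);
        return !precise.isEmpty();
    }
}
```

The refinement may prune elements that the context-insensitive set over-approximated, so an apparent overlap can turn out to be empty after refinement.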
geom_pta
public boolean geom_pta()
- Geometric, context-sensitive points-to analysis --
This switch enables or disables the geometric analysis.
geom_trans
public boolean geom_trans()
- Transform to context-insensitive result --
Transform to context-insensitive result.
If you only need context-insensitive points-to
information, you can use this option to transform the
context-sensitive result into a context-insensitive one. It is
also useful if your code reads the points-to vectors directly
rather than through the standard query interface, because the
SPARK points-to result is cleared when the geom solver runs;
this option guarantees correct behavior in that case. After the
transformation, the context-sensitive points-to result is
cleared to save memory for your other jobs.
geom_blocking
public boolean geom_blocking()
- Blocking strategy for recursive calls --
Enable blocking strategy for recursive calls.
When this option is on, a blocking strategy is applied
to recursive calls. This strategy significantly improves
precision; the details are presented in our paper.
traversal
public int traversal()
- Maximal traversal --
Make the analysis traverse at most this number of nodes per
query.
Make the analysis traverse at most this number of nodes per
query. This quota is evenly shared between multiple passes (see
next option).
passes
public int passes()
- Maximal number of passes --
Perform at most this number of refinement iterations.
Perform at most this number of refinement iterations. Each
iteration traverses at most (traversal / passes) nodes.
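The per-pass budget implied by the two options above is simple integer arithmetic; the numbers below are illustrative, not Spark's defaults:

```java
public class RefinementBudget {
    // The total traversal quota is shared evenly among the passes,
    // so each refinement pass may visit at most traversal / passes nodes.
    static int perPassBudget(int traversal, int passes) {
        return traversal / passes;
    }

    public static void main(String[] args) {
        // e.g. a 75000-node quota split over 10 refinement passes
        System.out.println(perPassBudget(75000, 10)); // 7500
    }
}
```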
geom_eval
public int geom_eval()
- Precision evaluation methodologies --
Precision evaluation methodologies.
We internally provide several precision evaluation
methodologies and classify the evaluation strength into three
levels. If the level is 0, we do nothing. If the level is 1, we
report basic information about the points-to result. If the
level is 2, we perform virtual call site resolution, static cast
safety, and all-alias-pairs evaluations.
geom_frac_base
public int geom_frac_base()
- Fractional parameter --
Fractional parameter for precision/performance trade-off.
This option specifies the fractional parameter, which
is used to manually tune the precision and performance
trade-off. The smaller the value, the better the performance but
the worse the precision.
geom_runs
public int geom_runs()
- Iterations --
Iterations of analysis.
The geometric analysis can be run multiple times to
successively improve its precision.
geom_dump_verbose
public String geom_dump_verbose()
- Verbose dump file --
Filename for detailed execution log.
To persist the detailed execution information for
later analysis, provide a file name.
geom_verify_name
public String geom_verify_name()
- Verification file --
Filename for verification file.
To compare the precision of the points-to results
with other solvers (e.g. Paddle), you can use the "verify-file"
to specify the list of methods (in Soot method signature format)
that are reachable by that solver. Then, in the internal
evaluations (see the geom-eval switch), we only consider the
methods that are present in both solvers' results.
propagator
public int propagator()
- Propagator --
Select propagation algorithm.
This option tells Spark which propagation algorithm to use.
set_impl
public int set_impl()
- Set Implementation --
Select points-to set implementation.
Select an implementation of points-to sets for Spark to use.
double_set_old
public int double_set_old()
- Double Set Old --
Select implementation of points-to set for old part of double
set.
Select an implementation for sets of old objects in the double
points-to set implementation. This option has no effect unless
Set Implementation is set to double.
double_set_new
public int double_set_new()
- Double Set New --
Select implementation of points-to set for new part of double
set.
Select an implementation for sets of new objects in the double
points-to set implementation. This option has no effect unless
Set Implementation is set to double.
geom_encoding
public int geom_encoding()
- Encoding methodology used --
Encoding methodology.
This switch specifies the encoding methodology used
in the analysis. All possible options are: Geom,
HeapIns, PtIns. The efficiency order is (from slow to
fast) Geom - HeapIns - PtIns, but the precision order is
the reverse.
geom_worklist
public int geom_worklist()
- Worklist type --
Worklist type.
Specifies the worklist used for selecting the next
propagation pointer. The possible options are PQ and FIFO, which
stand for a priority queue (sorted by last fire time and
topological order) and a FIFO queue, respectively.