[Soot-list] Spark and custom entry points

Marc-Andre Laverdiere-Papineau marc-andre.laverdiere-papineau at polymtl.ca
Wed Mar 6 15:24:18 EST 2013


Hello,

Because Spark recognizes nulls and handles them accordingly (a null 
argument has an empty points-to set, so nothing is reachable through 
it), you need to create actual instances rather than passing null. I 
have done a bit of that using constants, which is a pretty limited 
approach.

I am afraid you are stuck with two options: either you change Spark's 
behavior to deal with non-static entry points, or you find a way to 
create lots and lots of dummy objects. Both options are going to be a 
bit of work :(
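
If you go the dummy-main route programmatically, the rough shape is 
something like the sketch below. This is from memory and untested, so 
double-check it against the Jimple API; "Foo", its no-arg constructor 
and its run() method are just placeholders for your library classes.

// imports: soot.*, soot.jimple.*, java.util.*
SootClass fooClass = Scene.v().getSootClass("Foo");

List<Type> params = Collections.<Type>singletonList(
        ArrayType.v(RefType.v("java.lang.String"), 1));
SootMethod main = new SootMethod("main", params, VoidType.v(),
        Modifier.PUBLIC | Modifier.STATIC);
JimpleBody body = Jimple.v().newBody(main);
main.setActiveBody(body);

// foo = new Foo();  (a concrete allocation site instead of null)
Local foo = Jimple.v().newLocal("foo", fooClass.getType());
body.getLocals().add(foo);
body.getUnits().add(Jimple.v().newAssignStmt(foo,
        Jimple.v().newNewExpr(fooClass.getType())));
// specialinvoke foo.<init>();
body.getUnits().add(Jimple.v().newInvokeStmt(
        Jimple.v().newSpecialInvokeExpr(foo,
                fooClass.getMethod("void <init>()").makeRef())));
// virtualinvoke foo.run();  (the library method under analysis)
body.getUnits().add(Jimple.v().newInvokeStmt(
        Jimple.v().newVirtualInvokeExpr(foo,
                fooClass.getMethod("void run()").makeRef())));
body.getUnits().add(Jimple.v().newReturnVoidStmt());

// register the dummy main as the single entry point
SootClass dummy = new SootClass("DummyMain", Modifier.PUBLIC);
dummy.setSuperclass(Scene.v().getSootClass("java.lang.Object"));
dummy.addMethod(main);
Scene.v().addClass(dummy);
Scene.v().setEntryPoints(Collections.singletonList(main));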

I also think that you should make sure not to share any instances 
between the calls, so that the points-to analysis doesn't get 
oversimplified by detecting that some of the objects are identical. 
Using a fresh instance for every call reflects the 'any object' 
reality pretty well.
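
For example (Foo and process() being placeholder names):

// Each call site gets its own allocation, so the points-to sets of
// the two arguments stay distinct:
lib.process(new Foo());
lib.process(new Foo());

// Sharing one instance would make both arguments alias the same
// abstract object and over-merge the results:
Foo shared = new Foo();
lib.process(shared);
lib.process(shared);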

Marc-André Laverdière-Papineau
Doctorant - PhD Candidate

On 13-03-06 12:38 PM, Michael Faes wrote:
> Hi Marc-André,
>
> I'm not doing an IFDS analysis directly, but my analysis is also
> interprocedural.
>
> The main problem with generating dummy main methods is really the
> parameters that are passed to the library methods. Basically, the
> analysis should assume that any object (of the right type) can be
> passed to a method. In particular, this includes any object that is
> passed to or returned from any other method, or stored in a public
> field of any object. So really *any* object.
>
> Michael
>
> -------- Original Message --------
> Subject: Re: [Soot-list] Spark and custom entry points
> From: Marc-Andre Laverdiere-Papineau
> <marc-andre.laverdiere-papineau at polymtl.ca>
> To: soot-list at sable.mcgill.ca
> Date: 06.03.2013 18:18
>
>> Hello Michael,
>>
>> I have been working on Web services and Servlets with Bernhard Berger,
>> so we have both gained some good insights so far.
>>
>> Were you thinking of doing IFDS analyses? If so, one 'trick' is that you
>> can make the calls to your library in your dummy main in a loop that
>> looks like this:
>>
>> while (true) {
>>     switch (random.nextInt()) {
>>         case 1: a(); break;
>>         case 2: b(); break;
>>         ...
>>         case n: return;
>>     }
>> }
>> That way, the solver will operate without any assumption about the
>> ordering of the operations.
>>
>> If you want to model some life cycles, then that is a bit more work, but
>> still doable in this pattern.
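>>
>> For example, a servlet-like life cycle (init once, service an unknown
>> number of times, then destroy; the method names here are only
>> illustrative) could be modeled as:
>>
>> obj.init();
>> while (random.nextBoolean()) {
>>     obj.service();
>> }
>> obj.destroy();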
>>
>> Marc-André Laverdière-Papineau
>> Doctorant - PhD Candidate
>>
>> On 13-03-06 11:58 AM, Michael Faes wrote:
>>> Wow, that is a big bummer. If this is truly the case, then I think it's
>>> absolutely necessary that this be stated somewhere.
>>>
>>> I also already thought about generating dummy methods. The problem is
>>> that I'm analyzing libraries and that my analysis should consider all
>>> possible uses of a library. So every method could be called with any set
>>> of parameters. Generating a main method that reflects this usage is not
>>> possible, I think.
>>>
>>> If someone has another idea, please let me know.
>>>
>>> Thanks,
>>> Michael
>>>
>>> -------- Original Message --------
>>> Subject: Re: [Soot-list] Spark and custom entry points
>>> From: Marc-Andre Laverdiere-Papineau
>>> <marc-andre.laverdiere-papineau at polymtl.ca>
>>> To: soot-list at sable.mcgill.ca
>>> Date: 06.03.2013 17:14
>>>
>>>> Hello,
>>>>
>>>> I worked for a while on a custom entry point framework, and hit the
>>>> same brick wall.
>>>>
>>>> Spark doesn't reason well when the entry points are not static, so the
>>>> trick is to generate a dummy main that creates the instances of your
>>>> classes and calls them.
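>>>>
>>>> For the CallGraphTest class you posted (quoted below), such a dummy
>>>> main would be something along these lines:
>>>>
>>>> public class DummyMain {
>>>>     public static void main(String[] args) {
>>>>         // a concrete allocation gives Spark an allocation site to track
>>>>         CallGraphTest t = new CallGraphTest(new Object());
>>>>         t.hashCode();
>>>>     }
>>>> }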
>>>>
>>>> Marc-André Laverdière-Papineau
>>>> Doctorant - PhD Candidate
>>>>
>>>> On 13-03-06 11:11 AM, Michael Faes wrote:
>>>>> Hi again,
>>>>>
>>>>> Using the information in Quentin's script I was able to build the
>>>>> develop branch of soot. It took me quite some time, as the whole build
>>>>> procedure is not really compatible with Windows, even in a Cygwin
>>>>> environment. But it worked in the end, so thanks!
>>>>>
>>>>> However, I encountered another problem. Using Spark with custom entry
>>>>> points seems not to work at all. Using CHA, this simple class:
>>>>>
>>>>> public class CallGraphTest {
>>>>>
>>>>>     private final Object object;
>>>>>
>>>>>     public CallGraphTest(final Object object) {
>>>>>         this.object = object;
>>>>>     }
>>>>>
>>>>>     @Override
>>>>>     public int hashCode() {
>>>>>         return object.hashCode();
>>>>>     }
>>>>> }
>>>>>
>>>>> produces a reasonable call graph with about 90 edges. Using Spark, the
>>>>> call graph is plain empty. As mentioned before, I'm setting up Soot to
>>>>> use all public methods as entry points.
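>>>>>
>>>>> In case it matters, the setup is roughly the standard
>>>>> Scene.v().setEntryPoints() pattern:
>>>>>
>>>>> // collect all public methods of the class under analysis and
>>>>> // register them as entry points before the call-graph phase runs
>>>>> List<SootMethod> entryPoints = new ArrayList<SootMethod>();
>>>>> for (SootMethod m :
>>>>>         Scene.v().getSootClass("CallGraphTest").getMethods()) {
>>>>>     if (m.isPublic()) {
>>>>>         entryPoints.add(m);
>>>>>     }
>>>>> }
>>>>> Scene.v().setEntryPoints(entryPoints);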
>>>>>
>>>>> Now, I checked the mailing list archive and found this:
>>>>>
>>>>> http://www.sable.mcgill.ca/pipermail/soot-list/2011-December/003983.html
>>>>>
>>>>> It suggests that Spark may have problems with non-static entry points.
>>>>> Is this still the case? Is there a way around this problem?
>>>>>
>>>>> Thanks again for your help.
>>>>> Michael

