Re: [abc] oopsla outline

From: Eric Bodden <eric.bodden@mail.mcgill.ca>
Date: Tue Mar 07 2006 - 14:52:38 GMT

Great!

As far as I understand, the usual process would now be to derive a
native AspectJ aspect from this to test against, and a PQL query if
possible (though I seem to remember we already anticipated that PQL
would not be able to handle this). We should also rerun the benchmark
on cardinal and measure the memory footprint.

About the modification to abc: I think it's good to have it there for
testing the tracematch / aspect, but for the actual benchmark it might
not even be necessary: the version of JHotDraw we use for the iterator
example also generates no violations when simply run as a benchmark.
We can still measure the overhead of all the bookkeeping that takes
place.

Eric

Ondrej Lhotak wrote:

>I have the reweaving tracematch setup and checked in.
>It's in cvs on musketeer, module tmbenches, directory reweave.
>There are currently two versions: plain abc with no tracematch, and
>abc with the tracematch woven in. Both compile the quicksort example
>with reweaving enabled. On my workstation, they take roughly 4.4s
>and 8.4s, respectively. The abc in the benchmark has a very slight
>modification to include a field which is not unwoven, so that the
>tracematch actually finds something.
>
>The build.sh script builds both variations (allow a good 15 minutes).
>The run.plain.sh and run.tm.sh scripts run each of the two variations.
>
>Am I correct in assuming that next we want an AspectJ non-tracematch
>implementation of the same checker?
>
>Ondrej
>
>On Sat, Mar 04, 2006 at 01:09:48PM +0000, Pavel Avgustinov wrote:
>
>>Ondrej Lhotak wrote:
>>
>>>Are there some common benchmarking scripts I should be using,
>>>or is everyone rolling their own?
>>>
>>In principle, I have a set of scripts to run benchmarks automatically.
>>They rely on a file specialised to each benchmark, however, so if you
>>just provide *some* scripts to compile and run the benchmark (preferably
>>separately), then I could easily use those by calling them from the
>>specialised script.
>>
>>If you want the exact details, the assumption is that there is a script
>>called run-benchmark.sh in the root directory of the example. This
>>script should
>>
>>- accept three flags: --clean to get rid of previously generated output
>>(*not* compiled code, just the results), --compile to remove previous
>>compilation results and recompile, and <anything-else>, which is used
>>as a 'tag': it becomes part of the output filename, allowing several
>>iterations to be run.
>>- produce timing/memory measurements in some output directory, usually
>><benchmark-root>/output/Benchmark-description.$tag.out, where $tag is
>>the <anything-else> value above.
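[Editor's note: the conventions described above could be sketched as something like the script below. This is purely illustrative, not Pavel's actual script; the directory name output/, the file stem Benchmark-description, and the recorded measurement are all assumptions, and the real script would run and time the benchmark instead of the placeholders shown.]

```shell
#!/bin/sh
# Hypothetical skeleton of run-benchmark.sh following the conventions
# described above. OUTDIR and DESC are illustrative names only.
OUTDIR=output
DESC=Benchmark-description
tag=default

for arg in "$@"; do
  case "$arg" in
    --clean)
      # Remove previously generated results only, not compiled code.
      rm -f "$OUTDIR"/*.out
      ;;
    --compile)
      # Placeholder: remove previous compilation results and recompile.
      echo "recompiling benchmark..."
      ;;
    *)
      # Anything else is treated as the tag for this run.
      tag="$arg"
      ;;
  esac
done

mkdir -p "$OUTDIR"
# Placeholder measurement; a real script would time the benchmark run
# and record its memory footprint here.
echo "time=0.0s mem=0kB" > "$OUTDIR/$DESC.$tag.out"
echo "wrote $OUTDIR/$DESC.$tag.out"
```

Run with no arguments it writes output/Benchmark-description.default.out; run as `./run-benchmark.sh iter2` it writes the same measurement under the iter2 tag instead.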
>>
>>If you want a template for such a run-benchmark script that already
>>handles the command-line arguments, I can give you one. But, as I said,
>>just providing 'compile' and 'run' scripts (where 'run' produces
>>measurements in some predefined format) would make it quite easy for me
>>to reuse these.
>>
>>- P
>>
Received on Tue Mar 07 14:52:41 2006

This archive was generated by hypermail 2.1.8 : Tue Mar 06 2007 - 16:13:27 GMT