Re: [abc] oopsla outline

From: Ondrej Lhotak <olhotak@uwaterloo.ca>
Date: Tue Mar 07 2006 - 22:22:47 GMT

On Tue, Mar 07, 2006 at 03:36:04PM -0000, Oege de Moor wrote:
> ah, that's a very interesting example. Thanks, Ondrej!

Great, glad you like it.

> Yes, an AspectJ version is the next step. If it's not obvious how to do
> it efficiently, we sometimes have done multiple aspects (in particular
> for jhotdraw+safeenum).

I'm working on an AspectJ version. At first glance, it seemed easy, but
it turns out it's surprisingly tricky to correctly exclude events that
are indirectly caused by the aspect itself. This really makes me
appreciate how tracematches handle such ugly details for you magically.
Anyway, still working on it.
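For what it's worth, the standard AspectJ idiom for that exclusion is to guard the pointcut with !cflow(adviceexecution()), which filters out join points reached from any advice body, including the aspect's own. A minimal sketch (the aspect and method names here are made up; the real tmbenches checker is different):

```aspectj
// Hypothetical sketch: ReweaveChecker and Reweave.weave() are invented
// names for illustration. The point is the !cflow(adviceexecution())
// guard, which excludes events the aspect itself indirectly causes.
public aspect ReweaveChecker {
    pointcut traced():
        call(* Reweave.weave(..))
        && !cflow(adviceexecution());

    before(): traced() {
        // feed the event into the hand-coded state machine
        System.err.println("weave event at " + thisJoinPoint);
    }
}
```

This is exactly the bookkeeping a tracematch does implicitly; writing it by hand is where the trickiness comes from.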

Ondrej

> Looking at the pattern, it should also be a good showcase for the
> indexing scheme that Julian and Pavel are currently implementing :-)
>
> -Oege
>
>
> > -----Original Message-----
> > From: Majordomo list server [mailto:majordomo@comlab.ox.ac.uk] On Behalf
> > Of Ondrej Lhotak
> > Sent: 07 March 2006 14:32
> > To: abc@comlab.ox.ac.uk
> > Subject: Re: [abc] oopsla outline
> >
> > I have the reweaving tracematch setup and checked in.
> > It's in cvs on musketeer, module tmbenches, directory reweave.
> > There are currently two versions: plain abc with no tracematch, and
> > abc with the tracematch woven in. Both compile the quicksort example
> > with reweaving enabled. On my workstation, they take roughly 4.4s
> > and 8.4s, respectively. The abc in the benchmark has a very slight
> > modification to include a field which is not unwoven, so that the
> > tracematch actually finds something.
> >
> > The build.sh script builds both variations (allow a good 15 minutes).
> > The run.plain.sh and run.tm.sh scripts run each of the two variations.
> >
> > Am I correct in assuming that next we want an AspectJ non-tracematch
> > implementation of the same checker?
> >
> > Ondrej
> >
> > On Sat, Mar 04, 2006 at 01:09:48PM +0000, Pavel Avgustinov wrote:
> > > Ondrej Lhotak wrote:
> > >
> > > >Are there some common benchmarking scripts I should be using,
> > > >or is everyone rolling their own?
> > > >
> > > >
> > > In principle, I have a set of scripts to run benchmarks automatically.
> > > They rely on a file specialised to each benchmark, however, so if you
> > > just provide *some* scripts to compile and run the benchmark (preferably
> > > separately), then I could easily use those by calling them from the
> > > specialised script.
> > >
> > > If you want the exact details, the assumption is that there is a script
> > > called run-benchmark.sh in the root directory of the example. This
> > > script should
> > >
> > > - accept three flags: --clean to get rid of previously generated output
> > > (*not* compiled code, just the results), --compile to remove previous
> > > compilation results and recompile, and <anything-else>. This would then
> > > be used as a 'tag'; it becomes part of the output filename, to allow
> > > several iterations to be run.
> > > - produce timing/memory measurements in some output directory, usually
> > > <benchmark-root>/output/Benchmark-description.$tag.out, where $tag is the
> > > <anything-else> value above.
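The flag handling described above might look roughly like this; a sketch only, with assumed names (OUTDIR, BENCH, the compile/run placeholders), not the actual tmbenches script:

```shell
#!/bin/sh
# Hypothetical sketch of the run-benchmark.sh contract described above.
# Directory and file names are assumptions, not the real layout.
OUTDIR=output
BENCH=Benchmark-description

case "$1" in
  --clean)
    # Remove previously generated results only, *not* compiled code.
    rm -f "$OUTDIR"/*.out
    ;;
  --compile)
    # Remove previous compilation results and recompile.
    rm -rf classes && mkdir -p classes
    # ... invoke the benchmark's compile step here ...
    ;;
  *)
    # Anything else is a 'tag' naming this run's output file,
    # so several iterations can be kept side by side.
    tag="${1:-run1}"
    mkdir -p "$OUTDIR"
    # ... invoke the benchmark's run step here, capturing measurements ...
    echo "time=0.0s" > "$OUTDIR/$BENCH.$tag.out"
    ;;
esac
```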
> > >
> > > If you want a template for such a run-benchmark script that already does
> > > the command-line argument handling, I can give you one, but, as I said,
> > > just providing 'compile' and 'run' scripts (where 'run' produces
> > > measurements in some predefined format) would make it quite easy for me
> > > to just reuse these.
> > >
> > > - P
> > >
>
>
Received on Tue Mar 07 22:22:05 2006

This archive was generated by hypermail 2.1.8 : Tue Mar 06 2007 - 16:13:27 GMT