I think the story is unfolding quite nicely. There is actually
a lot of material, so we will have to make sure that we cover
the really important points in the most detail and that readers
don't lose track of the story.
The last time we had a deadline I had a working Weka example, but I
had to change a bunch of things to avoid j2j bugs. I'll go back to
that example with a fresh copy of Weka and hopefully I can produce a
proper version.
I'm also looking at some of the benchmarks submitted to my 621 class.
I will continue with that, looking for nullcheck and hashcode
examples.
Cheers, Laurie
+-----------------------------------------------------------------
| Laurie Hendren --- laurie.hendren@mcgill.ca
| Associate Dean (Academic), Faculty of Science,
| Dawson Hall, McGill University, 853 Sherbrooke St W,
| Montreal QC H3A 2T6 Canada, 514-398-7179, fax 514-398-1774
+----------------------------------------------------------------
| For contact and home page info as Professor, Computer Science:
| http://www.sable.mcgill.ca/~hendren --- hendren@cs.mcgill.ca
| Research: http://www.sable.mcgill.ca http://aspectbench.org
+----------------------------------------------------------------
On Wed, 1 Mar 2006, Oege de Moor wrote:
> * deadline: March 18
>
> * aim to have preliminary numbers for all
> entries of tables outlined below asap
> >> These tables are the heart of the paper: <<
> >> if they're ok, the paper will be fine. <<
>
>  * please comment, especially on the issues
>    in capitals and the ??? issues
>
> * Oxford authors meet 1:30pm in 005
> on March 2 to discuss this paper plan
> and who does what
>
> Efficient trace monitoring
> --------------------------
>
> 1. Introduction
>
> - lots of proposals for trace monitoring
>    consensus that it's useful
> - severe efficiency problems, and therefore
> hardly any practical implementations
>
> contributions of this paper:
> - we carefully investigate these problems
> through a representative set of benchmarks
> - based on the benchmark results, we
> propose the following techniques for
> an improved implementation:
> * careful automaton generation
> (cf PQL: Julian's safeenum variation)
> * specialise code to automaton
> (cf J-Lo statemachine interpreter)
> * avoid memory leaks by pattern analysis
> (cf all other systems)
>    * quickly find relevant bindings through
>      indexing of partial solutions; the indexing
>      scheme again determined by analysis of the
>      pattern (rough sketch at the end of this list)
> all of these techniques are low-cost at
> compilation time, and only require careful
> analysis of the pattern, not whole-program
> analysis
> - our experiments show these techniques are
> effective
> - we also examine the relative cost of different
> features, in particular
> * cost of negative bindings
> * context-free properties
> - finally, we discuss how more advanced
> analysis of the monitored program can
> yield even better results, and present
> some experiments where such optimisations
> have been applied by hand
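>
>  [rough sketch of the indexing idea, in plain Java, purely
>   illustrative -- not the abc data structures: keep partial
>   matches keyed by one of the objects they bind, so an event on
>   that object only touches the relevant matches]
>
>       import java.util.*;
>
>       // Illustrative index of partial matches, keyed by the bound
>       // data source; names and representation are made up here.
>       class PartialMatchIndex {
>           private final Map<Object, Set<Integer>> bySource =
>               new HashMap<Object, Set<Integer>>();
>
>           // record that a partial match in automaton state 'state'
>           // has bound 'source'
>           void add(Object source, int state) {
>               Set<Integer> states = bySource.get(source);
>               if (states == null) {
>                   states = new HashSet<Integer>();
>                   bySource.put(source, states);
>               }
>               states.add(state);
>           }
>
>           // on an event involving 'source', only these partial
>           // matches need to be advanced
>           Set<Integer> relevant(Object source) {
>               Set<Integer> states = bySource.get(source);
>               if (states == null) return Collections.<Integer>emptySet();
>               return states;
>           }
>       }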
>
> 2. Systems
>
> 2.1 Benchmarked systems [what's this all about?]
>
>   sketch of safe enum in AspectJ, J-Lo, tracematches
> and PQL
>
> WOULD BE NICE TO ALSO HAVE ALPHA, or HAWK.
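>
>  [for concreteness, a rough sketch of what the safe enum pattern
>   might look like as a tracematch -- illustrative only, the actual
>   benchmark code in tmbenches may well differ; the J-Lo and PQL
>   formulations express the same property in their own notations]
>
>       import java.util.*;
>
>       aspect SafeEnum {
>           tracematch(Vector ds, Enumeration e) {
>               // an enumeration e is created over data source ds
>               sym create after returning(e):
>                   call(Enumeration+.new(..)) && args(ds);
>               // ds is updated after the enumeration exists
>               sym update after:
>                   (call(* Vector.add*(..)) || call(* Vector.remove*(..)))
>                   && target(ds);
>               // an element is requested from e
>               sym next before:
>                   call(Object Enumeration.nextElement()) && target(e);
>
>               // match: e is used again after ds has been updated
>               create next* update+ next
>               {
>                   throw new ConcurrentModificationException();
>               }
>           }
>       }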
>
> 2.2 Other systems [wow, this is a hot topic!]
>
> comparative table with + and - to highlight differences
> at a glance
>
> rows: (in this order)
>
>    AspectJ,
>    J-Lo: Bodden & Stolz,
>    Hawk: Havelund,
>    us: tracematches,
>    tracecuts: Walker & Viggers,
>    PQL: Martin et al.,
>    PTQL: Goldsmith et al.,
>    JAsCo: De Fraine et al.,
>    (calculi): Douence et al.,
>    Jagadeesan et al.
>
> columns (in this order, with subheadings)
> purpose
> - fault finding
> - functionality
> integration in Java
> patterns
> - free variables
> - filtering
>      - context-free
> implementation
> - semantics
> - indexing
> - leak busting
> - static matching
>
>
> 3. Experiments
>
> 3.1 General challenges
> Attempt to identify the main difficulties.
> Only one benchmark: safeenum on jhotdraw.
>
>   AspectJ:
>     naive (hash table)
>        tmbenches/jhotdraw/naiveaspects
>     proper (weak refs) [naive vs. weak-refs bookkeeping sketched after 3.1]
>        tmbenches/jhotdraw/aspects
>     smart aspect (the gold standard for efficiency)
>        tmbenches/jhotdraw/handoptim
> J-Lo
> tmbenches/jhotdraw/jlo
> tracematch:
> without leakbusting [need switch for that!]
> tmbenches/jhotdraw/tracematches
> with leakbusting but without indexing
> tmbenches/jhotdraw/tracematches
> without leakbusting but with indexing
> with leakbusting and with indexing
> tmbenches/jhotdraw/tracematches
> pql:
> plain
> ??? NOT CHECKED IN!! ???
> with Julian's variation
> ??? NOT CHECKED IN!! ???
>
> Conclusion: further experiments with AspectJ,
> tracematches (4 versions)
> and PQL
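>
>  [to make the naive (hash table) vs. proper (weak refs) distinction
>   concrete, a minimal sketch of the per-object bookkeeping a
>   hand-written monitor does -- illustrative names, not the code in
>   tmbenches:]
>
>       import java.util.*;
>
>       // Illustrative per-object state for a hand-coded monitor.
>       class MonitorState {
>           // naive version: a strong HashMap pins every monitored
>           // object in memory for the life of the program (the leak)
>           private final Map<Object, Integer> naive =
>               new HashMap<Object, Integer>();
>
>           // proper version: a WeakHashMap lets an entry disappear
>           // once the monitored object itself becomes unreachable
>           private final Map<Object, Integer> proper =
>               new WeakHashMap<Object, Integer>();
>
>           void record(Object monitored, int state, boolean useWeak) {
>               (useWeak ? proper : naive).put(monitored, state);
>           }
>
>           Integer lookup(Object monitored, boolean useWeak) {
>               return (useWeak ? proper : naive).get(monitored);
>           }
>       }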
>
> 3.2 Further experiments
>
> for each (pattern, base program) combination
> runtime numbers for 7 systems:
> no observer
>      AspectJ,
> tms (with indexing and leakbusting on and off),
> pql where available
> also #lines for each of these
> (or some better metric of complexity?)
> KSLOC for base program
>
>
> pattern base program pql? where
> ------- ------------ --- -----
> nulltracker certrevsim no tmbenches/nulltracker
> hashcode aprove no tmbenches/aprove
> observer ajhotdraw yes tmbenches/ajhotdraw/ajhotdraw-tm/ajhotdraw-0.3-src/src/aspects/observdrawchange
> dbpooling artificial ?? tmbenches/dbpooling
> locallocks jigsaw yes tmbenches/Jigsaw/{LockAspect.aj,ReducedThisJPLockTraceMatch.aj}
> lor jigsaw ?? ??
>
> [6*7 table not bad, however:
>
> WOULD BE GOOD TO TRY NULLTRACKER AND HASHCODE ON SOME OTHER
> EXAMPLES SUCH AS WEKA ...
>
> more pql comparisons would be good, too]
>
> 3.3 What makes a good pattern?
> Main issues:
> - cost of negative bindings
> - leaky patterns
> - importance of restricting instrumentation points with "contains"
>    - context-free with regexps plus binding thisJP
>    - or with cflowdepth? (tiny counting sketch below)
>
> Discuss with all the different versions of Jigsaw [all 17 of them!]
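>
>  [for the context-free point: the canonical case is matched
>   acquire/release nesting, which a finite pattern alone cannot
>   count; a per-lock depth counter, in the spirit of cflowdepth,
>   is one workaround. Tiny illustration in plain Java, not the
>   Jigsaw benchmark code:]
>
>       import java.util.*;
>
>       // Illustrative: matched lock/unlock is a counting
>       // (context-free) property, so keep a nesting depth per lock.
>       class LockNestingMonitor {
>           private final Map<Object, Integer> depth =
>               new HashMap<Object, Integer>();
>
>           void onLock(Object lock) {
>               Integer d = depth.get(lock);
>               depth.put(lock, d == null ? 1 : d + 1);
>           }
>
>           void onUnlock(Object lock) {
>               Integer d = depth.get(lock);
>               if (d == null || d == 0)
>                   throw new IllegalStateException(
>                       "unlock without matching lock");
>               depth.put(lock, d - 1);
>           }
>       }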
>
> 3.4 Conclusions of experiments [ooh! how well abc did there!]
>
> Main challenges in implementing trace monitoring features are:
> - careful construction of state machine
> - representing positive and negative bindings
> - specialising code gen to pattern
> - avoiding leaks
> - quick look up of partial matches in index
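>
>  [for the "specialising code gen to pattern" point, a rough contrast
>   in plain Java -- illustrative only, not generated abc code: a
>   generic interpreter consults a transition table on every event,
>   whereas code specialised to the safeenum automaton hard-wires the
>   few transitions it actually has]
>
>       // (a) generic interpreter: table lookup per event
>       class InterpretedMonitor {
>           private final int[][] delta;  // delta[state][symbol] -> next
>           private int state = 0;
>           InterpretedMonitor(int[][] delta) { this.delta = delta; }
>           void event(int symbol) { state = delta[state][symbol]; }
>       }
>
>       // (b) specialised to the safeenum automaton: each event is a
>       // couple of compares, no table in sight
>       class SpecialisedSafeEnumMonitor {
>           private int state = 0;  // 0=start, 1=enum created, 2=updated
>           void onCreate() { if (state == 0) state = 1; }
>           void onUpdate() { if (state == 1) state = 2; }
>           void onNext() {
>               if (state == 2)
>                   throw new java.util.ConcurrentModificationException();
>           }
>       }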
>
>
> 4. Local optimisation techniques [how abc does it]
> Rules of the game: only analyse monitor code, not
> the base program
> 4.1 state machine
>  4.2 representing bindings (rough sketch below)
> 4.3 specialising codegen to pattern
> 4.4 detecting and avoiding leaks
> 4.5 indexing
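>
>  [for 4.2, a rough picture of the kind of representation meant --
>   illustrative Java, not the abc classes: each partial match records
>   which objects its variables must be bound to (positive) and must
>   not be bound to (negative); identity comparison and weak
>   references are glossed over in this sketch]
>
>       import java.util.*;
>
>       // Illustrative: one disjunct of a partial-match constraint.
>       class Disjunct {
>           private final Map<String, Object> positive =
>               new HashMap<String, Object>();
>           private final Map<String, Set<Object>> negative =
>               new HashMap<String, Set<Object>>();
>
>           // returns false if binding var to value contradicts
>           // what this disjunct already requires
>           boolean bind(String var, Object value) {
>               Object existing = positive.get(var);
>               if (existing != null) return existing == value;
>               Set<Object> excluded = negative.get(var);
>               if (excluded != null && excluded.contains(value))
>                   return false;
>               positive.put(var, value);
>               return true;
>           }
>
>           // record that var must never be bound to value
>           void exclude(String var, Object value) {
>               Set<Object> excluded = negative.get(var);
>               if (excluded == null) {
>                   excluded = new HashSet<Object>();
>                   negative.put(var, excluded);
>               }
>               excluded.add(value);
>           }
>       }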
>
> 5. Future work [Scope for interprocedural optims]
> 5.1 predicting paths and bindings
>  5.2 compile binding info to ITDs (inter-type declarations)
> numbers for tmbenches/itds
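>
>  [for 5.2, a rough illustration of the idea -- hypothetical AspectJ,
>   not the tmbenches/itds code: when the pattern binds a single
>   object, its automaton state could live in a field introduced on
>   that object's class by an inter-type declaration, so the
>   partial-match lookup becomes a plain field access. The class name
>   and pointcut below are made up.]
>
>       // Hypothetical class standing in for whatever the pattern binds.
>       class PooledConnection {
>           void release() { /* ... */ }
>       }
>
>       aspect StateAsItd {
>           // inter-type field: automaton state stored on the bound
>           // object itself, so no global map of partial matches
>           private int PooledConnection.tmState = 0;
>
>           after(PooledConnection c) returning:
>                   call(void PooledConnection.release()) && target(c) {
>               c.tmState = 2;  // advance the state with a field write
>           }
>       }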
>
> 6. Conclusions
>
>
>
>