Eric Bodden wrote:
>Hi.
>
>I just evaluated the benchmark results for JHotDraw with J-LO
>instrumentation. They seem pretty interesting:
>
>http://www.sable.mcgill.ca/~ebodde/data/jlo-benchmark-results.htm
>
>Total runtime was 4565207ms, which is about 76 minutes, but more
>interesting is the memory and runtime behaviour: first of all, the two seem
>strongly correlated, which makes sense because higher memory consumption
>means that more objects are bound, which in turn means that J-LO has to
>iterate over larger formulae. However, I cannot explain where those peaks
>actually come from. Have you guys seen similar peaks in your
>implementation?
>
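For what it's worth, the correlation Eric describes can be sketched in a few lines. This is not J-LO's actual code, just an illustrative monitor (class and method names are made up) that keeps one partial match per bound object and has to touch all of them at every event, so memory footprint and per-event time grow together:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: a monitor whose per-event work is proportional to
// the number of objects currently bound to partial matches.
class NaiveMonitor {
    // one entry per formula instance bound to a tracked object
    private final List<Object> partialMatches = new ArrayList<>();

    // binding another object grows the monitor's memory footprint
    void bind(Object o) {
        partialMatches.add(o);
    }

    // each event forces an iteration over all partial matches,
    // so per-event runtime grows with memory consumption
    int step() {
        int touched = 0;
        for (Object m : partialMatches) {
            touched++;
        }
        return touched;
    }
}
```

Under this model a graph of heap usage and a graph of per-event time would track each other, which matches what the benchmark page shows.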
I have looked at your graphs, and am rather baffled by them. No, we have
no such peaks in our TM graphs. Even without calling System.gc() before
each measurement (which, as Oege said, results in a nicely smoothed
line), our 'peaks' are at *most* about 40-50% higher than the base
value, nothing like the full two orders of magnitude J-LO shows. PQL
doesn't exhibit such drops either.
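The smoothing Oege mentioned can be reproduced with a small helper along these lines (names are illustrative, not from our harness): forcing a collection before sampling means the reading reflects live objects rather than garbage that simply hasn't been collected yet.

```java
// Sketch of a memory-sampling helper; System.gc() is only a hint to the
// JVM, but in practice it removes dead objects from the reading and
// smooths the resulting memory curve.
public class MemorySampler {
    /** Used heap in bytes, sampled after requesting a collection. */
    static long usedHeapAfterGc() {
        System.gc();
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        long before = usedHeapAfterGc();
        byte[] ballast = new byte[16 * 1024 * 1024]; // simulate bound state
        long during = usedHeapAfterGc();
        System.out.println("delta bytes: " + (during - before));
        // keep 'ballast' live so the allocation shows up in the sample
        if (ballast.length == 0) throw new AssertionError();
    }
}
```

Without the System.gc() call, consecutive samples also include uncollected garbage, which is one common source of jagged memory curves.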
In fact, these graphs make me a little worried that something isn't
right. From my knowledge of the benchmark (which is reasonably
well-founded), I can see absolutely no reason why your memory
consumption should drop so dramatically at five points during the
execution history, nor (correspondingly) why you should get such
sudden huge speed boosts five times. I don't know your implementation,
but the graphs suggest that it drops the vast majority of the state it
keeps track of at each of these points; the benchmark doesn't justify
that, as it behaves perfectly uniformly.
So, please please please investigate what causes the strange picture we
see. If we were to put the graphs in the paper, we would have to explain
them anyway, so I'd rather know now why they look the way they do.
- P
Received on Sat Mar 04 13:21:35 2006
This archive was generated by hypermail 2.1.8 : Tue Mar 06 2007 - 16:13:27 GMT