> Great! It's good to see that J-LO does not suffer a memory leak here.
> I think the paper needs a couple of sentences that explain
> this: because J-LO does not have a facility for using
> bindings in advice, you can make all variable bindings weak
> references. The same does not hold true for tracematches. Also, for
> tracematches there is a potential problem of disjuncts
> leaking. Why does that not happen in J-LO, do you know?
Hmm, to be honest, I do not see why that problem exists in the TM
implementation. In J-LO, all the data is held within the formulae. So
whenever we take a transition, we build the formula's successor formula
under the given input propositions. At this point, whenever we evaluate
a proposition held in the formula that has in the meantime lost a
binding (the proposition has an internal counter to detect this), we
evaluate this proposition to "False". This is on the one hand
semantically correct, and on the other hand it leads to the elimination
of the proposition by the way the successor formula is built, because
"False" AND "something" is "False", and "False" OR "something" is
"something". Does that make sense?
So when exactly does this happen in the TM impl.?
> In any event, I think this is pretty strong evidence for the
> conclusion I prematurely wrote into the paper, namely that
> the #1 difference between the tm implementation and J-LO is
> the level of specialisation.
> Do you agree with this?
Yes, I would say exactly the same.
> Hm, calling gc does make a difference for the tm picture,
> making it a nice smooth line and taking away any strange peaks.
> So I don't quite understand what is going on with your peaks.
Me neither. If I have the time I will try to investigate this further.
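If it helps, one thing to check is that we force a collection right
before each heap sample; a sketch of that (the surrounding measurement
harness is assumed here, this is not J-LO or abc code):

  // Force a collection before sampling used heap, to smooth the curve.
  static long sampleUsedHeap() {
      System.gc();                     // request a full collection
      Runtime rt = Runtime.getRuntime();
      return rt.totalMemory() - rt.freeMemory();  // bytes currently in use
  }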
Eric