A very quick response from Eric...
---------- Forwarded message ----------
Date: Fri, 15 Apr 2005 17:41:35 +0200
From: Eric Bodden <eric@bodden.de>
To: 'Oege de Moor' <Oege.de.Moor@comlab.ox.ac.uk>
Subject: RE: RV05 publication
Oege de Moor wrote:
> Hopefully this addresses the main point you raised. I'd be
> interested to hear in detail how your framework addresses this
> issue.
Ok, I think now I understand the major difference: LTL as we see it
is "stutter-free". So we do *not* allow one to express whether there
has been one single call v.f() or a sequence of those. Maybe this did
not become clear in our paper, but we allow the "Next" operator "X"
only in combination with another temporal operator, e.g. "XF
call(c.f())". Thus, things like "call(v.f()) && X call(v.f())" cannot
be expressed. So in fact the trace "v.f(); v.g(); v.g(); v.h();"
*would* in our semantics match the formula "call(v.f()) && XF(
call(v.g()) && XF( call(v.h()) ))". I think this is the major
difference, and it might be why I feel we need no "skip". Does that
make sense?
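To make that stutter-free reading concrete, here is a minimal sketch (my own toy evaluator for illustration, not our actual implementation) of the restricted logic over a finite trace of call events, where XF is read as "at some strictly later position":

```python
# Toy evaluator (illustration only): restricted LTL with atomic call
# events, conjunction, and the combined operator XF ("next-finally").

def holds(formula, trace, i=0):
    """Evaluate a formula at position i of a finite trace of event names."""
    op = formula[0]
    if op == "call":                      # atomic: event at i is the named call
        return i < len(trace) and trace[i] == formula[1]
    if op == "and":
        return holds(formula[1], trace, i) and holds(formula[2], trace, i)
    if op == "XF":                        # holds at some position j > i
        return any(holds(formula[1], trace, j) for j in range(i + 1, len(trace)))
    raise ValueError(op)

# call(v.f()) && XF( call(v.g()) && XF( call(v.h()) ) )
phi = ("and", ("call", "f"),
              ("XF", ("and", ("call", "g"),
                             ("XF", ("call", "h")))))

trace = ["f", "g", "g", "h"]
print(holds(phi, trace))   # True: the formula "stutters over" the second g
```

The repeated g causes no problem precisely because no plain X is available to count individual events.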
>> General comments on tracematches: you model events "which prevent
>> a tracematch from matching" by symbols which do not occur in the
>> pattern. I would maybe find one *distinct* "exit" symbol or
>> something like this more intuitive. Using or (|) it could fulfill
>> the same purpose.
>
> I do not understand this comment; can you illustrate it with one of
> the examples?
Ok, assume an extended version of your example regarding db queries.
Assume the trace should fail to match not only when a logout occurs
but also when the user is banned. In your formalism this would look
like:
tracematch(User u, Query q) {
    sym login after returning:
        call(LoginManager.login(User, ..)) && args(u, ..);
    sym query before:
        call(Database.query(Query)) && args(q);
    sym logout after:
        call(LoginManager.logout());
    sym banned after:
        call(LoginManager.banned());

    login query+
    {
        System.out.println(u + " made query " + q);
    }
}
So you have two symbols which "prevent matching". My proposal would
be to have one distinct symbol, always named "exit", that prevents
matching:
tracematch(User u, Query q) {
    sym login after returning:
        call(LoginManager.login(User, ..)) && args(u, ..);
    sym query before:
        call(Database.query(Query)) && args(q);
    sym exit after:
        call(LoginManager.logout()) ||
        call(LoginManager.banned());

    login query+
    {
        System.out.println(u + " made query " + q);
    }
}
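To see that the merged "exit" symbol changes nothing observable, here is a toy sketch (entirely my own encoding, not the paper's formal semantics): a tracematch pattern matches a suffix of the trace projected onto the declared symbols, and any declared symbol outside the pattern breaks a partial match. The one-letter symbol names L/Q/X/B/E are made up for illustration.

```python
# Toy model (illustration only): suffix matching over a projected trace.
import re

def fires(events, declared, pattern):
    # project the trace onto declared symbols; other events are invisible
    word = "".join(e for e in events if e in declared)
    # the tracematch fires if some suffix of the projection is in the pattern
    return re.search(pattern + r"\Z", word) is not None

login_q = ["L", "Q", "Q"]        # login query query       -> fires
killed  = ["L", "Q", "X", "Q"]   # logout between queries  -> no fire

# two separate kill symbols (X = logout, B = banned) ...
two = fires(killed, {"L", "Q", "X", "B"}, r"LQ+")
# ... versus one merged "exit" symbol E standing for both
merged = ["E" if e in {"X", "B"} else e for e in killed]
one = fires(merged, {"L", "Q", "E"}, r"LQ+")

print(fires(login_q, {"L", "Q", "X", "B"}, r"LQ+"))  # True
print(two, one)                                      # False False
```

Whether the kill events are declared as two symbols or merged into one with ||, the projected word breaks the pattern in exactly the same places.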
>> Section 4.4: you define "interleaving" rather informally. Thus I
>> would expect "\Sigma* skip* f g" to be a valid interleaving for
>> your example as well. "\Sigma* f g skip*" is obviously not
>> contained, since it's not in "\Sigma A".
>
> correct, but note that
> \Sigma* skip* f g
> <= {because skip in \Sigma}
> \Sigma* f g
> <= {because epsilon in skip*}
> \Sigma* f skip* g
>
> So the statement that Pat = \Sigma* f skip* g is correct.
> I'm not sure a formal definition of interleaving would help?
No, that's ok then. My fault :-)
>> At the end of section 4.4 you write that the complete constraint
>> is (x=v)^(x!=w). This is correct; however, I had expected a more
>> verbose form such as (x=v)^(x!=w)^(x!=w)^(x=v) according to
>> Figure 4. In theory this is equivalent, but your implementation
>> raises the question of which of the two we actually get at
>> runtime, because I did not read about any simplification that you
>> apply within the constraint. If you use a set-based
>> implementation, that's of course no issue.
>
> We intentionally left the constraint implementation open at this
> point, and didn't want to talk about the simplification rules. The
> actual implementation uses a set-based representation.
Same for us ;-)
>> Also on the aspect you generate: what confuses me is that I see
>> no explicit state variable (state \in {1,2,3}) here. I am sure
>> this is somehow hidden inside your constraints, but I don't see
>> it.
>
> The fact is that you can be in several states at once. The
> constraint tells you under what assumptions you are in a particular
> state. The intuition behind this is spelled out in the second
> paragraph of p. 11.
Yes, I began to understand this last week... That's basically why we
now model things with alternating automata: they allow one to be in
several states at a time.
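For illustration, here is a toy acceptance check (my own encoding, not from either of our papers) for an alternating automaton over finite words, with transitions in disjunctive normal form: delta[state][letter] is a list of conjuncts, and each conjunct is a set of successor states that must ALL accept the rest of the word, so a run really is "in several states at once".

```python
# Toy alternating automaton over finite words (illustration only).

def accepts(delta, final, state, word):
    if not word:
        return state in final
    letter, rest = word[0], word[1:]
    conjuncts = delta.get(state, {}).get(letter, [])
    # disjunction over conjuncts, conjunction inside each conjunct
    return any(all(accepts(delta, final, q, rest) for q in conj)
               for conj in conjuncts)

# After 'a', q0 forks into q1 AND q2: q1 waits for a 'b', q2 for a 'c'.
delta = {
    "q0": {"a": [{"q1", "q2"}]},
    "q1": {"b": [{"qf"}], "c": [{"q1"}]},
    "q2": {"c": [{"qf"}], "b": [{"q2"}]},
    "qf": {"b": [{"qf"}], "c": [{"qf"}]},
}
final = {"qf"}

print(accepts(delta, final, "q0", "abc"))  # True: both branches satisfied
print(accepts(delta, final, "q0", "ab"))   # False: q2 never saw its 'c'
```

The conjunctive fork is exactly the "several states at once" intuition; a plain NFA would only give the disjunctive part.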
>> I find your operational semantics really hard to follow, so it
>> would be nice to have more explanation on your example to see why
>> the constraints have to be updated the way you do.
>
> The fact is that it is tricky; the ultimate explanation of why it
> must be this way is the proof of Section 4.5. The intuition is
> explained in a lot of detail in Section 4.1. We had a lot of
> discussions whether it would be better to give the whole intuition
> first, or to intersperse that explanation with the formal
> presentation. In the end we decided that it was better to give a
> roadmap, and then leave it up to the reader whether they want to
> understand all the formal details. Do you think it would be better
> to merge 4.1 with the formal definitions that follow? Ideally we
> would do both, but then we have a space problem!
Hmm... no, in that case I would leave it as is. Maybe you should just
point the reader back to section 4.1 at some point.
By the way: interesting trick with the some()-pointcut. We had a
similar problem when it came to "matching several pointcuts on a
single joinpoint", which basically came down to either generating all
2^n boolean combinations of pointcuts, or gathering a list of
pointcuts that match, adding one after the other as you do with n+1
pieces of advice. Obviously the latter is much better.
> Hm, not 100% certain... but would it make a big difference, as it
> is more common to have non-matching traces, and thus that is where
> the memory problems are?
This is certainly true, yes.
> In the end it is a question of pragmatics: are most applications
> easier to express with regexps or with LTL? I don't know. Do you
> have LTL formulae that would be difficult to express, even if we
> allowed negation in regexps?
It depends. Negation already helps a lot. However, during our
research we found that LTL actually *is* quite a difficult formalism,
one which does not have an intuitive translation into regexes at all.
For example, we thought that one could easily translate LTL to
regexes as follows:
regex(F \phi) = \sigma* regex(\phi) \sigma*
regex(\phi U \psi) = regex(\phi)* regex(\psi) \sigma*
and so on...
But take for example the formula (Fp) U q. Translating this as above,
you get:
regex((Fp) U q) = regex(Fp)* q \sigma* = (\sigma* p \sigma*)* q \sigma*
This would mean that the trace {},{q},{p} would not match, because
the prefix before {q} contains no p and is therefore not in
(\sigma* p \sigma*)*.
However, {},{q},{p} *does* satisfy (Fp) U q, because it is true that
for every state before {q} the proposition p does finally occur!
This was hard to get into our heads, and I also find it not very
intuitive. But that's the way LTL works, and it makes the translation
not so straightforward: there can be no direct translation, and
negation does not help here either. One does need some kind of
intermediate automaton...
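The counterexample can be checked mechanically; here is a quick sketch (my own, encoding each state as one letter: 'e' for {}, 'q' for {q}, 'p' for {p}) showing that the LTL formula holds on the trace while the naive regex translation fails to match:

```python
# Sanity check (illustration only) of the (Fp) U q counterexample.
import re

def F(p, trace, i):                  # "finally p" from position i onwards
    return any(p in s for s in trace[i:])

def U(lhs, rhs_prop, trace):         # lhs U rhs over a finite trace
    for i in range(len(trace)):
        if rhs_prop in trace[i]:
            return True              # rhs holds here; lhs held at 0..i-1
        if not lhs(trace, i):
            return False
    return False

trace = [set(), {"q"}, {"p"}]
ltl_result = U(lambda t, i: F("p", t, i), "q", trace)

# Naive translation (sigma* p sigma*)* q sigma* over the assumed
# one-letter-per-state alphabet {e, q, p}:
word = "eqp"
naive = re.fullmatch(r"(?:[eqp]*p[eqp]*)*q[eqp]*", word)

print(ltl_result)        # True: p finally occurs after every pre-q state
print(naive is not None) # False: the leading {} state breaks the match
```

The mismatch is exactly the one described above: the regex forces a p into the prefix before q, while the LTL formula only demands that p eventually occur.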
> Finally, I've briefly read your paper; hopefully I'll have more
> time to study it in detail soon. I don't understand why you say
> that handling variable bindings is more cumbersome in our setting
> than in the use of alternating automata. Above you seem to be
> complaining about the complexity of our semantics, not the way
> variable bindings are described via constraints. It's a bit unfair
> to compare the complexity of our formal account with the lightness
> of your own informal treatment...
Ok, maybe we did formulate this a bit too sharply. It is true, you
provided a much more formal account than we did. My feeling, however,
is that it might have helped you, too, to use something like an
alternating-automaton representation, just because the way you
gather, chain and propagate constraints at runtime actually *is*
nothing but an automaton in this model.
I tried to read and understand your formal proof and I am still not
sure I fully understood everything. I just have the feeling that some
kind of visual model could have helped a lot in actually seeing why
things work the way they work.
In fact, the way it worked for us was that the Büchi automata we
tried first had some semantic problems that made them simply unsound
in certain cases. Thus we chose alternating automata, because they
are the natural representation for LTL. Then, when we experimented
with those, and with how we would implement them and especially
bindings (constraints), I suddenly felt that we were doing exactly
the same things you had done before; I simply did not recognize
it... That was actually all I wanted to say. Anyway, I will check
this section again.
> but then, everyone always finds their own ideas easier to
> understand :-)
Yes, that's unfortunately true. I was having a hard time explaining
all the alternating-automata stuff to my mentor during the last weeks
;-)
Regarding the implementation: I think in that case I will just stick
with what we have. It's actually working quite smoothly. It would
just have been nice to inherit some of the bits you mentioned (e.g.
an "after returning/throwing" pointcut etc.). But for a prototype, we
should be fine.
Thanks a lot,
Eric
--
Eric Bodden
Chair I2 for Programming Languages and Program Analysis
RWTH Aachen University
Received on Fri Apr 15 18:10:10 2005