The question remains whether virtually unlimited processing time ahead of
time can be beaten by having this additional information at run time, when
there are in fact few opportunities to draw conclusions from it.
But why should we choose? It seems obvious to me that future compiler
technology will use *both* aggressive static analysis *and* dynamic
compilation of specialisations (the latter presumably tightly guided by
the former). All we're waiting for is for mainstream compiler writers to
stop ignoring (or 'finessing') higher-order types and functions, which,
aside from being desirable language features in their own right, can be
extracted by static analysis from programs written in common languages.
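
To make the combination concrete, here is a minimal sketch (in Python, purely illustrative; the function name and staging strategy are my own) of run-time specialisation via a higher-order function. All decisions that depend on the exponent `n` are made once, when the specialised function is built, so the returned function is just a fixed chain of multiplications:

```python
def specialise_power(n):
    """Build a function computing x**n, making every decision that
    depends on n now (at 'specialisation time'), so the returned
    function contains no logic about n at all."""
    if n == 0:
        return lambda x: 1
    # Recursively specialise for n // 2 (repeated squaring).
    half = specialise_power(n // 2)
    if n % 2 == 0:
        return lambda x: half(x) ** 2
    return lambda x: x * half(x) ** 2

cube = specialise_power(3)  # all dispatch on n happens here, once
```

An aggressive static analyser could perform the same unrolling ahead of time whenever the exponent is a known constant; dynamic compilation extends the trick to values that only become known at run time, which is the complementarity argued for above.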