Indeed, as observed with OCaml profiling tools, Coq's engine, and in particular the allocation of evars, seems to be far from the state of the art; looking at the memory numbers, I wonder how much of the problem is due to this alone.
So indeed, comparing the proof engine to a reflective procedure is comparing apples to oranges, in the sense that one engine does not even have a GC, whereas the other uses OCaml's GC, which is pretty nice.
There are other sources of quadratic behaviour.
As mentioned, VM compilation of closures is quadratic in the number of binders as well.
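Roughly, the shape of that kind of quadratic behaviour can be illustrated with a toy evaluator where every closure copies its whole environment; this is only an illustration of the pattern, nothing to do with the actual VM code:

```ocaml
(* Toy evaluator where every closure captures a fresh copy of its whole
   environment: working through n nested binders copies environments of
   sizes 1, 2, ..., n, i.e. O(n^2) work in total. *)

type lam = Var of int | Lam of lam | App of lam * lam

type value = Closure of value array * lam   (* captured environment + body *)

let rec eval (env : value array) (t : lam) : value =
  match t with
  | Var i -> env.(i)                            (* de Bruijn index, innermost first *)
  | Lam body -> Closure (Array.copy env, body)  (* the copy is the quadratic step *)
  | App (f, a) ->
    let Closure (cenv, body) = eval env f in
    let arg = eval env a in
    (* extend the captured environment with the new argument *)
    eval (Array.append [| arg |] cenv) body
```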
Sure, it's just that the memory analysis / sharing situation is scary.
The numbers I saw seem pretty worrying; that's a hard problem to solve, sure.
Also, a lot of the code is very naive, e.g. unification, where we repeatedly recompute compositional data over subterms.
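Taking the free-variable set as one example of such compositional data, here is a toy sketch (made-up types, not Coq's actual code) of recomputing it from scratch versus caching it on each node:

```ocaml
module S = Set.Make (String)

type term = Var of string | App of term * term | Lam of string * term

(* Naive: recomputed from scratch at every call, O(term size) each time;
   calling this at every unification step on nested subterms is quadratic. *)
let rec free_vars = function
  | Var x -> S.singleton x
  | App (f, a) -> S.union (free_vars f) (free_vars a)
  | Lam (x, b) -> S.remove x (free_vars b)

(* Cached: each node carries its free-variable set, computed once at
   construction time, so later queries are O(1). *)
type cterm = { node : cnode; fvs : S.t }
and cnode = CVar of string | CApp of cterm * cterm | CLam of string * cterm

let cvar x = { node = CVar x; fvs = S.singleton x }
let capp f a = { node = CApp (f, a); fvs = S.union f.fvs a.fvs }
let clam x b = { node = CLam (x, b); fvs = S.remove x b.fvs }
```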
But indeed, it's not surprising: the current architecture simply cannot do the kind of proofs Jason is doing efficiently.
On that we agree.
It would be super-interesting to compare the numbers if the engine had a GC and better evar management.
GC wouldn't help much; rather, we should change the evar representation.
But I don't know how to do that.
Actually, the question just asked now is mine.
I was wondering if not storing contexts (having closed evars c : forall ctx, ty) would change anything, but I guess not.
Nah, the problem lies in the instances.
It's an engineering problem. We want a compact instance representation, but then we need to duplicate the kernel to handle evar-bearing terms.
'cause every time you see an Evar node, you need an evar map to retrieve the uncompressed evar instance.
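To spell that out, a toy model (made-up types, not Coq's actual constr/Evd API) of why expanding an Evar node needs the evar map:

```ocaml
(* An Evar node only carries a key and an instance, i.e. terms for the
   variables of the evar's own named context. Without the evar map
   (sigma) we cannot know that context, nor whether the evar is defined. *)

type term =
  | Var of string                  (* named variable of the local context *)
  | App of term * term
  | Lambda of string * term
  | Evar of int * term list        (* evar key + instance for its context *)

type evar_info = {
  ctx : string list;               (* the evar's named context *)
  body : term option;              (* Some t once the evar is defined *)
}

module IMap = Map.Make (Int)
type evar_map = evar_info IMap.t

(* Replacing defined evars by their bodies: sigma is needed to pair each
   instance entry with the context variable it stands for. *)
let rec expand (sigma : evar_map) (t : term) : term =
  match t with
  | Var _ -> t
  | App (f, a) -> App (expand sigma f, expand sigma a)
  | Lambda (x, b) -> Lambda (x, expand sigma b)
  | Evar (k, inst) ->
    (match IMap.find_opt k sigma with
     | Some { ctx; body = Some b } ->
       let s = List.combine ctx (List.map (expand sigma) inst) in
       apply_subst s b
     | _ -> Evar (k, List.map (expand sigma) inst))

and apply_subst s = function
  | Var x -> (try List.assoc x s with Not_found -> Var x)
  | App (f, a) -> App (apply_subst s f, apply_subst s a)
  | Lambda (x, b) -> Lambda (x, apply_subst (List.remove_assoc x s) b)
  | Evar (k, inst) -> Evar (k, List.map (apply_subst s) inst)
```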
I believe that implicit modules will help on that level.
You mean an algebraic representation would help? Or would it just move the problem around?
What I mean is that when you get an EConstr in the kernel, you don't have enough data to do anything unless you have either the sigma or the full instance around.
e.g. how do you substitute named variables without the instance?
Sure.
What about the wonderful locally nameless?
@Enrico Tassi this is moving the cost somewhere else.
I'd be very pleased to see a variant of the proof engine with locally nameless
?
but apart from backwards compat which would be hellish
the proof engine is in locally nameless
It's named though?
Locally named, in contrast to nameless.
ah yeah
so indeed, I'd be curious to see a nameless variant à la Matita
so yes, it is named (w.r.t. the kernel)
gotta go
But I suspect that it'd be actually worse than the named approach
Because you need to substitute eagerly in huge proof terms
in that sense you're moving complexity around.
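To make the contrast concrete, a toy picture of the two representations (not Coq's actual constr type):

```ocaml
(* "Locally nameless, but named w.r.t. the kernel": bound variables are
   de Bruijn indices, while goal/section context variables keep names. *)
type named_constr =
  | Rel of int                      (* de Bruijn index under binders *)
  | Var of string                   (* named variable of the goal context *)
  | App of named_constr * named_constr
  | Lambda of named_constr

(* A fully nameless variant (à la Matita) would index the context as
   well, so every change of context forces re-indexing / eager
   substitution throughout the (possibly huge) proof term. *)
type nameless_constr =
  | NRel of int                     (* index into binders *and* context *)
  | NApp of nameless_constr * nameless_constr
  | NLambda of nameless_constr
```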
(see ya)
@Enrico Do you mean changing evars to use de Bruijn instead of names?
But as for the evar representation, there is hope if we make the kernel parametric in the kind of terms it manipulates.
Would be the best of both worlds.
Lean uses local names iirc
Does that mean much more than passing functions like "safe_evar_value"?
That means passing the evar map essentially.
You want the evar_info
I don't think that Lean has reached the "wall of complexity" yet.
@Matthieu Sozeau ideally you'd have a type 'a constr, and a type 'a env.
Where the env would be instantiated with a global_env + evar_map, I guess?
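To make the idea concrete, a hypothetical sketch of what an 'a constr / 'a env kernel could look like (all names made up, this is not the current Coq API):

```ocaml
(* Terms parametric in an "extra node" payload 'a, e.g. evar occurrences. *)
type 'a constr =
  | Rel of int
  | App of 'a constr * 'a constr
  | Lambda of 'a constr
  | Extra of 'a                              (* e.g. an evar occurrence *)

(* Environments parametric in the same 'a: besides globals, they can
   resolve extra nodes to terms (or not). *)
type 'a env = {
  globals : (string * 'a constr) list;       (* stands for the global env *)
  resolve : 'a -> 'a constr option;          (* e.g. lookup in the evar_map *)
}

(* Kernel code written once, for any 'a. *)
let rec whd (env : 'a env) (c : 'a constr) : 'a constr =
  match c with
  | Extra x ->
    (match env.resolve x with Some body -> whd env body | None -> c)
  | _ -> c

(* Ground instance: no extra nodes at all, just the global env. *)
type empty = |
let ground_env : empty env =
  { globals = []; resolve = (fun (_ : empty) -> None) }

(* Engine instance: extra nodes are evars, resolved through an evar map. *)
type evar = int
let engine_env (sigma : (evar * evar constr) list) : evar env =
  { globals = []; resolve = (fun k -> List.assoc_opt k sigma) }
```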
In Why3, we use a mixed representation. Bound variables are locally named, but substitutions stop at binders. In other words, we have implicit substitutions for unbound terms and explicit ones for bound terms. Thus, performance would be disastrous if we were opening the same quantifiers again and again; but in practice a quantified term is visited at most once, so the whole mechanism is essentially constant time. (That said, since Why3 is first order while Coq is higher order, I am not sure we can have the same expectation for Coq.)
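A rough sketch of that scheme (toy types, not Why3's actual Term module; bound names are assumed fresh, so capture is not handled):

```ocaml
type var = string

type term =
  | Var of var
  | App of string * term list
  | Forall of var * closure              (* substitution suspended here *)

and closure = { pending : (var * term) list list; body : term }

(* Substitute eagerly above binders, but merely stack the substitution
   on closures: the body of a quantifier is not traversed. *)
let rec subst (s : (var * term) list) (t : term) : term =
  match t with
  | Var x -> (try List.assoc x s with Not_found -> t)
  | App (f, args) -> App (f, List.map (subst s) args)
  | Forall (x, cl) ->
    let s' = List.remove_assoc x s in      (* the binder shadows x *)
    Forall (x, { cl with pending = cl.pending @ [ s' ] })

(* Opening a quantifier applies the accumulated layers, oldest first.
   If each quantifier is opened at most once, the overall cost stays
   roughly linear in the size of the formula. *)
let open_forall (cl : closure) : term =
  List.fold_left (fun t s -> subst s t) cl.body cl.pending
```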
Gaëtan Gilbert said:
Lean uses local names iirc
Lean uses imperative updates + copy, yet another beast, AFAIK