Currently we have:

- All interactive proofs (manual and, via build_by_tactic, auto obligations, Qed/Defined proofs) call (in `declare.ml`):
  - a shelf emptiness check
  - minimize_universes (= collapse_sort_variables + normalize_variables + UState.minimize), except for close_proof_delayed
  - one of make_univs_deferred, make_univs_private_poly, make_univs (+ a special case for close_proof_delayed), which all call some form of:
    - UState.restrict
    - check_univ_decl
- `Definition` with body, `Parameter`, `Program Definition` call (all in `declare.ml`):
  - finalize, that is:
    - minimize_universes
    - UState.restrict
  - evars defined check
  - check_univ_decl
  - for Program, what ending obligations call (that is, again Evd.minimize_universes, UState.restrict, check_univ_decl)
- `Fixpoint` and `Program Fixpoint` call:
  - at the end of interpretation (before entering `declare.ml`):
    - minimize_universes
    - evars defined check on the statically-given part
  - in `declare.ml`, before registration:
    - UState.restrict
    - check_univ_decl
  - for Program, what ending obligations call (that is, again minimize_universes, UState.restrict, check_univ_decl)
- `Theorem`, `Theorem with`, `Definition` without body, `Fixpoint` without body call:
  - at the end of interpretation (before entering `declare.ml`):
    - a first call to Evd.minimize_universes (thus before starting the proof(s))
    - evars defined check in the type(s)
  - the calls for interactive proofs (that is, minimize_universes, UState.restrict, check_univ_decl)
- `Derive`/`Equations` call what proofs do (that is, minimize_universes, UState.restrict, check_univ_decl)
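Despite the differences, the same three operations recur in every path above. A rough OCaml sketch of that common core, using the function names quoted above but with invented argument shapes and plumbing (an illustration, not the actual `declare.ml` code):

```ocaml
(* Hypothetical sketch of the post-processing that every path performs
   in some variant; Evd.minimize_universes, UState.restrict and
   check_univ_decl are the real names, the signature is guessed. *)
let common_univ_postprocessing ~poly ~udecl sigma used_univs =
  (* minimization = collapse_sort_variables + normalize_variables
     + UState.minimize *)
  let sigma = Evd.minimize_universes sigma in
  (* restrict the universe state to the universes actually used *)
  let uctx = UState.restrict (Evd.ustate sigma) used_univs in
  (* check the result against the user-written universe declaration *)
  UState.check_univ_decl ~poly uctx udecl
```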

In short, if I'm not mistaken, all paths call minimization, restriction and universe declaration check, some of them doing it several times.

The case most different from the others is the one for all of `Definition` with body, `Parameter`, `Program Definition` (that is, `declare_definition`), which does in `declare.ml` the interpretation checks (esp. check_evars) that the others do before entering `declare.ml`.

Not clear yet how to make the different paths structured in a way that makes them more easily comparable.

Hi @Hugo Herbelin, that's very valuable info, thanks a lot!

I think working on this would fit well with the rest of the work you (mainly) have been doing in this area.

For https://github.com/coq/coq/issues/10041 , using a project https://github.com/coq/coq/projects/33 worked fine for me. What do you think?

[In fact, I'd encourage you to add to that project your PRs, so they can be counted as part of the effort to rework how interactive proofs are handled internally]

> using a project https://github.com/coq/coq/projects/33 worked fine for me. What do you think?

OK, I looked at it and made a pass on some of the issues. (BTW, is there a way to see the reasons why a PR was closed without being merged, like replaced, too complicated, etc.?) I saw you added the relevant PRs from me to the project. Thanks.

Could `Declare.close_proof` and `close_proof_delayed` be merged, so that what is modified in one is for sure also modified in the other? Do I understand correctly that the only basic difference is that the former starts from a list of constr bodies while the latter starts from a list of lazy computations of constr bodies, plus that the former supports more variants (such as private universes, `Defined`, etc.)? Then, the latter with the generality of variants of the former should do the job?

Or are there subtleties, maybe in computation sharing, that I don't see.

IIRC they used to be merged

sometimes leaving similar but different things split is better

I'm partly asking because I got a bit scalded by the number of inconsistencies I had found in the different execution paths for Fixpoint, due to modifications on one side not being reported on the other sides; but clarity is also important.

Maybe I can put my question another way: how much is `declare.ml` considered satisfactory or unsatisfactory? What could be done to improve it, if that is worth doing?

Actually, what I'm trying to see is whether we can share the universe treatment in `declare_mutually_recursive` and `declare_mutdef`, and, for that, move the `make_univs` stuff after `finalize_proof`; but if those who have worked with `declare.ml` are satisfied, maybe this is not so necessary.

I have CEP #42 in mind. All of `Definition`, `Theorem`, `Co/Fixpoint`, etc. could be the same, with only differences such as: recursive or not, with obligations or not, turning evars into a goal or not, evaluating immediately or in the background, opaque or not, etc. So, ideally, I feel that there should be only one core routine which makes local choices on the way, but these choices should not fundamentally change the structure of the core routine.

Currently, we have 6 paths for starting (mutual or not, immediate or proof or obligations), 4 paths for ending (admitted or saved, immediate or delayed), while I feel we could have 1 and 2.

On the other hand, making the 6 starting paths close in style and structure would be good too.

@Hugo Herbelin my feeling is that all your intuitions about the state of the code are very sound; for historical reasons the paths diverged, but they share many common operations. IMHO the state of declare.ml is like 50% towards a satisfactory state.

Now it is easier to understand what is going on, but there are still too many separate but almost-equal paths, and many cleanups to do.

IMHO work in this area has been pretty fruitful in terms of gaining experience about what works well for example in the document context

> IMHO work in this area has been pretty fruitful in terms of gaining experience about what works well for example in the document context

Indeed, that's one part of my questions: what works well, and what are the requisites, in the document context?

Right now the document context works pretty well IMHO; coq-lsp is blocked on other issues like multi-threading (and per_thread map).

Incremental checking requires that opaque proofs don't leak universes; that's IMHO the main requirement for the part about declarations. Investigations done mostly by Gaëtan and me show that there is hope with this approach. Then you can change opaque proofs, but the rest of the commands won't be re-executed. Note how this unifies the proof being broken / delegated / absent, which is super-convenient for incremental execution.

TL;DR: here follows a simple proposal to restructure `declare_definition` and `close_proof` so that there is only one way to deal with universes, whether we are building an interactive, non-interactive, or obligation-based constant.

I continued trying to understand the execution paths for universes and I now feel rather close to a good picture of what happens.

So, after completion of elaboration (and a first universe minimization), we have three ways to go from an econstr to a new constant in the environment, depending on how we deal with evars:

A1. error on unresolved evars and, if no error, trivially embed the body as a `constr Future.computation` and *continue*

A2. open subgoals for evars, and when they are fully resolved, *continue* with the proof seen as a `constr Future.computation`

A3. open obligations for unresolved evars, and when all are resolved, *continue*

To continue means here going from a `constr Future.computation` to a `constant_entry`. It means (roughly in sequence): B1. minimizing body universes, B2. restricting universes, B3. checking the universe declaration, B4. computing the universe entry, then B5. computing the "using" clause, registering the resulting entry to safe_typing, and doing auxiliary registrations (impargs, notations, etc.).

So morally, it seems we should have only three functions for dealing with evars (possibly times 2 if we treat recursive definitions differently from non-recursive ones) + one unique function to take over once we have a full `constr Future.computation` proof (or rather two, counting also the choice between `Qed/Defined` and `Admitted`).
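To fix intuitions, the three modes and their common continuation could be pictured with types like these (the constructor names are invented for illustration; only `constr Future.computation` is from the actual code):

```ocaml
(* Hypothetical summary of the A/B structure described above. *)
type evar_mode =
  | ErrorOnEvars        (* A1: fail on unresolved evars, body is immediate *)
  | SubgoalsForEvars    (* A2: evars become goals of an interactive proof *)
  | ObligationsForEvars (* A3: unresolved evars become obligations *)

(* Each mode eventually yields a delayed body; a single continuation
   (B1-B5) should then build the constant_entry and register it. *)
type continuation = Constr.t Future.computation -> Names.Constant.t
```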

But currently, we have different ways to do so:

- there are two variants of B1, B2 and B3, one used by A1 (say B1-B3') and the other used by A2 and A3 (say B1-B3'')
- in the no-unresolved-evars mode A1, we do B1-B3' as part of A1 and resume at the level of doing B4, B5
- in the goal mode A2, we resume with B1-B3'', then with the common B4, B5
- in the obligation mode A3, we do B1-B3'', then resume towards the beginning of A1 (non-recursive case) or at the B1-B3' part of A1 (recursive case), redoing in any case B1-B3', then B4, B5

**The proposition**: **no more B1-B3 in A1 and, for each of A1-A3, after obtaining a `constr Future.computation`, continue with a common B1-B3'', B4, B5.**

Is this compatible with the various analyses already done on the code? Is it compatible with the various ongoing projects? If yes, I'd be tempted to proceed.

Technically, this means:

- moving B1-B3' out of A1 (= `declare_definition`) and replacing it by B1-B3'', e.g. by splitting `declare_definition` into a `finalize` part doing a first minimization and evar checks and, say, a `process` part that does B1-B3'', B4, B5.
- removing B1-B3'' from the Qed-triggered resumption (= `close_proof` and `close_proof_delayed`) and letting A2 and A3 continue instead at proof completion with the new `process` function.
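Sketched as an interface, the split could look like this (signatures invented; only the names `finalize` and `process` come from the proposal above):

```ocaml
(* Hypothetical interface for the proposed restructuring of
   declare_definition / close_proof. *)

(* First minimization + evars-defined check, done once at the end of
   elaboration and shared by A1, A2 and A3. *)
val finalize : Evd.evar_map -> Evd.evar_map

(* The unique continuation: B1-B3'' (minimize, restrict,
   check_univ_decl), B4 (compute the universe entry), B5 ("using"
   clause, registration to safe_typing, auxiliary registrations). *)
val process :
  poly:bool ->
  udecl:UState.universe_decl ->
  Evd.evar_map ->
  Constr.t Future.computation ->
  Names.Constant.t
```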

Additionally, there are various other behaviors in the wild:

- funind calls `build_by_tactics`, which produces a constr and does B1-B3'' and B4, then calls `declare_entry` to do B5; so we would need to remove B1-B3'' and B4 from `build_by_tactics` and call `process` instead of `declare_entry` (or even propose a variant of `declare_definition` that computes the body with tactics???)

Open questions:

- I don't understand well the properties of minimization. It seems necessary to minimize before checking evars (otherwise I get QVar errors), but it also seems necessary to re-minimize in cases A2 and A3 once the proof is completed. So, apparently, we should not be afraid to minimize twice.
- Sometimes there is also an `inline_private_constants` whose generality I don't understand yet.
- Sometimes there is also a `fix_undefined_variables` whose generality I don't understand yet either.
- `declare_definition` does its own evar check (which has its own error message), but some callers (e.g. classes) also do their own evar check (using pretyping, which has its own error message too); what would be the most convenient policy for the purpose of plugins?

> It seems necessary to minimize before checking evars (otherwise I get QVar errors)

what check produces qvar errors?

E.g. if I don't do the initial minimization prior to calling `to_constr` in `Evarutil.finalize`, it fails already in `Logic.v` on:

```
Definition ex_rect (P0 : ex P -> Type) (f : forall x p, P0 (ex_intro P x p))
: forall e, P0 e
:= fun e => rew <- ex_eta e in f _ _.
(* Undeclared qualities: α142 α143 *)
```

isn't that error from the kernel not finalize?

I don't understand where you see a second minimization

> I don't understand where you see a second minimization

In `prepare_proof`, used for `Qed`, there is also a minimization. So, if you do `Theorem f : T. ... Qed.`, there is a minimization at the time of `T` (in `start_lemma_com`) and another one at the time of `Qed` (in `prepare_proof`).

> isn't that error from the kernel not finalize?

According to `-bt`, it seems so. But you get it when you move the `minimize_universes` line of `finalize` after the `f nf_constr` line (or even after the `restrict` line, thus suggesting that minimize and restrict do not commute, and I don't know whether this is intended or not).

no, it's minimize and to_constr which don't commute

OK, I think I see. Then, if we would like to do minimization only once, e.g. at the end of the proof, we would have to postpone the `to_constr` of the type to the end of the proof too, or alternatively, adjust the type with some substitution that minimization would have produced, right? (And, if so, is there already a function for this "adjustment"?)

the function is to_constr

I made CEP #89 with more details.

Last updated: Oct 13 2024 at 01:02 UTC