These files have been deprecated since 8.16:
./Div2.v
./Even.v
./Gt.v
./Le.v
./Lt.v
./Max.v
./Minus.v
./Min.v
./Mult.v
./Plus.v
Is it time now to remove them? If so, may I give it a try?
Yes, that would be welcome. Someone has to open a pull request to remove them, they won't magically disappear.
If I remember correctly, the main question is "what to do with the Arith_prebase file?" introduced in coq/coq#14736. Should the statements be removed? Incorporated into PeanoNat directly? Incorporated through Numbers? What to do with bidirectional statements? Etc.
The simplest thing is probably to merge Arith_prebase and Arith_base, but I am not sure it is very clean.
Note that, from my experience, most of the work in such removal PRs can be fixing (or obtaining fixes for) all the devs in the CI that have simply ignored the deprecation warning up to now.
those deprecations can actually be a lot of work to remove; a lot of old implication lemmas now have bidirectional new versions, etc.
I can try and see if it is too much work or too complicated for my competence level. In my (very) humble opinion, killing 10 files and many lines of code, as well as simplifying the lib might be worth it.
I'm noticing more deprecated files (e.g. Numbers/Natural/Peano/NPeano.v) and lemmas. I believe they are all linked to this general policy of deprecation in favour of PeanoNat. Do you think it will be too much? Or is it better to clean up as much as possible in one go?
As you can see, the main difficulty in removing deprecated things is fixing the CI targets that are still ignoring the warnings. So I would say it all depends on whether those targets are the same or not. If both deletions affect the same targets, doing things together can avoid having to prepare overlays twice, thus indeed saving time and work.
Hmm, I think it's really the same philosophy here, so I will try to remove everything related. I have a first prototype that builds. Next we will have to discuss the status of Arith_prebase.v.
Pierre Rousselin said:
I'm noticing more deprecated files (e.g. Numbers/Natural/Peano/NPeano.v) and lemmas.
The list of files I addressed in the previous step is here: coq/coq#14736.
Thanks, these are exactly the files I removed so far.
Hmm... it passes the tests except bugs/bug_4725.v, and this one is above my knowledge. There is some Register command that goes bad (num.nat.le_lt_trans), but it is in Arith_prebase and I did not touch this file... Related files are funind/Recdef.v and plugins/funind/recdef.ml.
So, I have a branch in which the files are removed and the tests pass. I have not really changed Arith_prebase at this point (except for moving the Register commands to PeanoNat). Do you think it's worth making a first PR and then making decisions about Arith_prebase later?
I have put the Register commands in PeanoNat and the hint db in Arith_base. I have Search-blacklisted the lemmas that are only used for hints. The PR is opened here.
speaking from experience, writing regexp-based substitutions for the removed Arith stuff does not work very well (e.g., due to some things switching to double-implications)
Karl Palmskog said:
speaking from experience, writing regexp-based substitutions for the removed Arith stuff does not work very well (e.g., due to some things switching to double-implications)
Thanks for the advice. Some things can be automated, e.g. mult_comm can be sed-ed into Nat.mul_comm.
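For the subset of lemmas where only the name changed, a plain textual substitution is indeed enough. A minimal sketch of the kind of rename meant here (the lemma pair comes from the messages above; the word-boundary anchor \b is a GNU sed extension):

```shell
# Rewrite the deprecated name to its PeanoNat counterpart.
# \b keeps longer identifiers such as mult_comm_iff intact
# (though not primed names like mult_comm', since ' is not a
# regex word character -- one reason blind sed-ing is fragile).
echo 'apply mult_comm.' | sed 's/\bmult_comm\b/Nat.mul_comm/g'
# prints: apply Nat.mul_comm.
```

On a real project one would run something like `sed -i 's/\bmult_comm\b/Nat.mul_comm/g' theories/*.v` (the path is illustrative).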
right, there is a subset of lemmas for which it works, such as le_trans -> Nat.le_trans
a lot of old implication lemmas now have bidirectional new versions
substitutions for the removed Arith stuff does not work very well (e.g., due to some things switching to double-implications)
Maybe this means that the versions with unary implication should be kept. Would there be a way to name them rather canonically?
IIRC, Jason had a Python script running on .v files to rename automatically the deprecated names, using the location of the warning.
I think this [bidirectionality new lemmas] is mostly just an inconvenience, probably the bidirectional versions make more sense. But maybe there is some convenient tactic expression that applies a bidirectional lemma as a right or left implication?
Yes, there are many workarounds like proj1 (...) or apply ->. I think it's better to keep only the bidirectional ones.
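For instance, a small sketch of the proj1/proj2 workaround (only Nat.succ_lt_mono is from the stdlib; the lemma names succ_lt_intro and succ_lt_elim are made up here):

```coq
From Coq Require Import PeanoNat.

(* Nat.succ_lt_mono : forall n m, n < m <-> S n < S m.
   Selecting one direction of the bidirectional lemma explicitly: *)
Lemma succ_lt_intro (n m : nat) : n < m -> S n < S m.
Proof. exact (proj1 (Nat.succ_lt_mono n m)). Qed.

Lemma succ_lt_elim (n m : nat) : S n < S m -> n < m.
Proof. exact (proj2 (Nat.succ_lt_mono n m)). Qed.
```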
Small issue there with implicit arguments (trying to patch color) and equivalences.
From Coq Require Import PeanoNat.
Arguments Nat.succ_lt_mono {n m}.
Lemma foo (n m : nat) : S n < S m -> n < m.
Proof. intros H. now apply Nat.succ_lt_mono. Qed. (* ok *)
Lemma bar (n m : nat) : n < m -> S n < S m.
Proof.
intros H.
apply <-Nat.succ_lt_mono. (* not ok *)
The last one fails with
Cannot infer the implicit parameter n of Nat.succ_lt_mono whose type is
"nat" in environment:
n, m : nat
H : n < m
Pierre Rousselin said:
Yes, there are many workarounds like proj1 (...) or apply ->
I had trouble with apply <- not allowing for apply <- ... in ... as ....
Pierre Rousselin said:
apply <-Nat.succ_lt_mono. (* not ok *)
it seems to work with apply <-@Nat.succ_lt_mono.
Can you make those args non-maximally inferred, i.e. [n m] instead of {n m}?
Paolo Giarrusso said:
Can you make those args non-maximally inferred, i.e. [n m] instead of {n m}?
No, with Arguments Nat.succ_lt_mono [n m]. Coq tells me:
Argument m is a trailing implicit, so it can't be declared non maximal.
Please use { } instead of [ ].
Olivier Laurent said:
Pierre Rousselin said:
apply <-Nat.succ_lt_mono. (* not ok *)
it seems to work with apply <-@Nat.succ_lt_mono.
Yes, but it's counter-intuitive that Coq is not able to infer these parameters.
That is due to the definition of apply <-. The definition
Tactic Notation "apply" "->" constr(lemma) := ...
means that Coq should fill all the holes in lemma before even starting, which is not possible here.
If the definition had been as follows, it would be a whole different matter.
Tactic Notation "apply" "->" open_constr(lemma) := ...
Indeed, this works:
From Coq Require Import PeanoNat.
Tactic Notation "apply" "->" open_constr(lemma) :=
bapply lemma ltac:(fun H => destruct H as [H _]; apply H).
Tactic Notation "apply" "<-" open_constr(lemma) :=
bapply lemma ltac:(fun H => destruct H as [_ H]; apply H).
Tactic Notation "apply" "->" open_constr(lemma) "in" hyp(J) :=
bapply lemma ltac:(fun H => destruct H as [H _]; apply H in J).
Tactic Notation "apply" "<-" open_constr(lemma) "in" hyp(J) :=
bapply lemma ltac:(fun H => destruct H as [_ H]; apply H in J).
Arguments Nat.succ_lt_mono {n m}.
Lemma foo (n m : nat) : S n < S m -> n < m.
Proof.
intros H.
now apply Nat.succ_lt_mono.
Qed.
Lemma bar (n m : nat) : n < m -> S n < S m.
Proof.
intros H.
apply ->Nat.succ_lt_mono.
exact H.
Qed.
I guess it can't be changed because it would break too many things?
Hard to tell. In my opinion, this is what apply <-/-> should have been in the first place (though it might not have been possible at the time). But some users might unknowingly depend on the fact that implicit arguments are solved eagerly.
It seems to make sense to try to have apply -> as close as possible to apply (I only discovered quite recently that they are built quite differently).
(the discussion is slowly but surely drifting away from the effort to remove Arith files, but...) What should we do with this discussion about apply? An issue? An entry somewhere (where?)? I don't think it should only be recorded in a Zulip post.
Certainly looks like a bug in apply ->, and worth an issue... And an MR to assess breakage?
the failure in SerAPI is a simple test-case failure. Just a heads up @Emilio Jesús Gallego Arias, I plan to fix this and merge to SerAPI master later today (no need to change other branches)
The final step is at hand: fiat-crypto, fiat_parsers and cross-crypto now build. Please consider https://github.com/coq/coq/pull/18362 for merging: if these libs get back into the CI quickly, there will be less risk of breaking them silently.
while updating some projects, I noticed that some Div2 constants that lived in Set completely disappeared and have no replacement. For example, Div2.even_2n had signature forall n : nat, Even.even n -> {p : nat | n = Nat.double p}, and I had to prove it from scratch to get something similar. See commit here. I've hit this in two different projects now, so I think it makes sense to ask whether this was intended.
what's the point of that when there's Even_double n : Even n -> n = double (div2 n)?
this was inside something living in Set, so Even_double is not a direct replacement.
I don't understand.
Why do you need a direct replacement?
if I write a function that uses Div2.even_2n, and I replace that with Even_double, how will that typecheck?
it's been in Stdlib for ages, why not keep it as-is but just adapt to new definition of even/odd?
new statement that is a direct replacement: Lemma even_2n : forall n : nat, Nat.Even_alt n -> {p : nat | n = Nat.double p}.
what library design principle(s) says that Even_double alone must replace even_2n?
The old Even.even was:
Inductive even : nat -> Prop :=
| even_O : even 0
| even_S : forall n, odd n -> even (S n)
with odd : nat -> Prop :=
odd_S : forall n, even n -> odd (S n).
I guess it was not very practical. There are many things to choose from: Even (in Prop), Even_alt (in Set), Nat.even (in bool), and helper lemmas to switch between them.
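As a sketch of such switching between representations (assuming the conversion lemmas Nat.Even_alt_Even and Nat.even_spec, which recent stdlib versions provide; the lemma name even_alt_to_bool is made up):

```coq
From Coq Require Import PeanoNat.

(* From the mutual-inductive characterization to the boolean one,
   going through the existential characterization Nat.Even. *)
Lemma even_alt_to_bool (n : nat) : Nat.Even_alt n -> Nat.even n = true.
Proof.
  intros H%Nat.Even_alt_Even. (* H : Nat.Even n *)
  now apply Nat.even_spec.
Qed.
```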
For your problem:
Lemma plop : forall n : nat, Nat.Even n -> {p : nat | n = Nat.double p}.
Proof. now intros n H%Nat.Even_double; exists (Nat.div2 n). Qed.
In practice, when patching libs, I agree this could be a bit painful. But in the end it was always a lot simpler (and indeed many lemmas could just disappear). Now, I do agree that there is no perfect replacement for every case.
Some lemmas about sub are missing, e.g. n + m - n = m.
apply is still not perfect at either correctly guessing the right direction, or selecting it explicitly easily, so in the end many proj1 and proj2 were added, which is not very "mathy".
TBH, I did not really think about it when proceeding with the removal, I was just trying to help since these files were already deprecated. Still, IMHO, the result is better than before. Not perfect, sure.
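As an aside, the missing subtraction fact mentioned above is at least easy to re-derive from what remains (a sketch; the lemma name add_sub_cancel_l is made up, only Nat.add_comm and Nat.add_sub are from the stdlib):

```coq
From Coq Require Import PeanoNat.

(* Re-deriving n + m - n = m from Nat.add_comm and
   Nat.add_sub : forall n m, n + m - m = n. *)
Lemma add_sub_cancel_l (n m : nat) : n + m - n = m.
Proof. now rewrite Nat.add_comm, Nat.add_sub. Qed.
```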
There are still other problems with Arith, but they're not related to this PR.
to be more precise, I wasn't criticizing the removal itself, but rather flagging up issues likely to affect maintainers. The key issue is that I think a lot of projects will "die" from the removal and people will have to scramble to fix their projects, and possibly run into things like lacking a substitute for even_2n.
also, I think the deprecation policy is very generous as it is, so the removal is completely justified (and we probably need to do more removals), but if we had a policy about keeping/removing equivalent results, it might be clearer what to expect
This, I completely agree with. Also, as @Jason Gross was telling me, having a parsable warning which gives an exact replacement (even though it may be a bit ugly, like fun n m => proj1 (Nat.succ_lt_mono n m)) would allow people to repair their libraries automatically (well, maybe not, anything can happen).
Actually, the deprecation warnings were already nice (thank you @Olivier Laurent), but could probably be better.
Still, the problem of having many names for a single lemma (le_refl or Le.le_refl or Arith.Le.le_refl, ...) makes an automatic translation from the outside difficult.
While I'm at it... (sorry for the noise), I have also had many frustrating experiences with things like rewrite ? and try.
Lemma plop : 2 + 2 = 4.
Proof.
rewrite ?this_lemma_does_not_exist_anymore.
try apply this_one_neither.
In a mountain of Ltac, these things can be easy to miss. Now, I realise I could/should have used more sh than coq for this. But I'm still very convinced that Coq should be more helpful in that regard.
Pierre Rousselin said:
Still, the problem of having many names for a single lemma (le_refl or Le.le_refl or Arith.Le.le_refl, ...) makes an automatic translation from the outside difficult.
If the deprecation warning always gives the character location of the full identifier in the source, and gives a printing of the replacement that will be globalized correctly in the current environment, then I think this is not an issue (we can just parse the character locations from the warning log)
Last updated: Dec 05 2023 at 11:01 UTC