Stream: Coq users

Topic: ssreflect intropattern blowup


Andrej Dudenhefner (Jun 30 2021 at 10:56):

Compared to "standard" intropatterns, the ssreflect intropatterns result in larger terms. This could lead to longer Qed times. Here is an example:

Lemma test : (False \/ (True /\ False)) -> False.
Proof.
  intros [H0|[H1 H2]]; assumption.
Qed.
Print test.
(*
fun H : False \/ True /\ False =>
match H with
| or_introl H0 => H0
| or_intror (conj _ H2) => H2
end
*)

Require Import ssreflect.

Lemma test_ssr : (False \/ (True /\ False)) -> False.
Proof.
  move=> [H0|[H1 H2]]; assumption.
Qed.
Print test_ssr.
(*
fun __top_assumption_ : False \/ True /\ False =>
(fun
   (_evar_0_ : forall a : False,
               (fun _ : False \/ True /\ False => False)
                 (or_introl a))
   (_evar_0_0 : forall b : True /\ False,
                (fun _ : False \/ True /\ False => False)
                  (or_intror b)) =>
 match
   __top_assumption_ as o
   return ((fun _ : False \/ True /\ False => False) o)
 with
 | or_introl x => _evar_0_ x
 | or_intror x => _evar_0_0 x
 end) (fun H0 : False => H0)
  (fun __top_assumption_0 : True /\ False =>
   (fun
      _evar_0_ : forall (a : True) (b : False),
                 (fun _ : True /\ False => False) (conj a b) =>
    match
      __top_assumption_0 as a
      return ((fun _ : True /\ False => False) a)
    with
    | conj x x0 => _evar_0_ x x0
    end) (fun (_ : True) (H2 : False) => H2))
*)

Is this by design? Should this be reported as an issue and possibly improved upon?

Christian Doczkal (Jun 30 2021 at 11:17):

Summoning @Enrico Tassi, who might know. I'm curious too.

Janno (Jun 30 2021 at 11:17):

I have also wondered about this. I much prefer ssreflect proofs, but whenever I use tactics to build transparent terms I tend to use vanilla Coq tactics to avoid the blowup. (Also for better readability of the term.)

Enrico Tassi (Jun 30 2021 at 11:57):

I know where this comes from: there is a routine which takes a term with holes and abstracts the holes out with lambdas (see the _evar_n_ thing). Then, when you chain tactics, the proof is not beta-reduced.
To my knowledge this is not a source of slow Qed (for the type checker, a variable of the right type is better than a term of the right type). Did you measure a slowdown, or is it speculation?

I recall having added explicit beta steps for tactics which produce "transparent" steps, e.g. have @H (compared to have H). I can add more if it helps Qed.

If you care about the readability of the proof term, you should use no tactics at all, IMO. It's like hoping a compiler generates readable assembly: an objective which is orthogonal, if not opposite, to its main purpose.

Andrej Dudenhefner (Jun 30 2021 at 12:10):

I will try to measure whether the slow Qed is due to the large terms. (I just saw huge terms constantly repeating subgoals as the output of a larger automation procedure.)

Christian Doczkal (Jun 30 2021 at 12:11):

I agree with everything you (Enrico) said except for the last part, because I have a use case similar to @Janno's: sometimes a (dependently typed) term is annoying/tricky to write by hand. In those cases I like to write it using tactics, followed by some post-processing. So the less noise there is, the less post-processing is required. Admittedly, this has nothing to do with proof-checking time.

Enrico Tassi (Jun 30 2021 at 12:28):

Unless I'm mistaken, the blowup we see here can be solved by eval lazy in (or even lazy beta). But maybe you mean "manual post-processing"...
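
For instance, applied to the test_ssr lemma from the first message, that post-processing could look as follows (a minimal sketch; the name test_ssr_beta is ours for illustration):

Definition test_ssr_beta := Eval lazy beta delta [test_ssr] in test_ssr.
Print test_ssr_beta.
(* The delta step unfolds test_ssr and the beta step collapses the _evar_
   abstractions, which should leave a term close to the one produced by
   the vanilla intro pattern. *)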

Andrej Dudenhefner (Jun 30 2021 at 12:54):

Here is a pathological example (quite close to actual code): more than twice the time for less than half the iterations.
This shows that for automation, standard Coq could be a faster choice. I still use ssreflect for consistency.

Require Import List.
Require Import ssreflect.

Lemma neq_0_1 : 0 = 1 -> False.
Proof. easy. Qed.

Set Ltac Profiling.
Reset Ltac Profile.
Lemma fast : In 1 (repeat 0 500) -> False.
Proof.
  intros H.
  do 500 (destruct H as [H|H]; [apply (neq_0_1 H)|]).
  exact H.
Qed.
Show Ltac Profile. (* 500 iterations; total time: 0.578s *)

Reset Ltac Profile.
Lemma slow : In 1 (repeat 0 200) -> False.
Proof.
  do 200 (move=> [/neq_0_1|]; [exact id|]).
  exact id.
Qed.
Show Ltac Profile. (* 200 iterations; total time: 1.281s *)

Andrej Dudenhefner (Jun 30 2021 at 12:55):

(Also I'm not sure how to measure Qed time. The timings given are only for the tactics.)

Andrej Dudenhefner (Jun 30 2021 at 12:58):

*Just counting seconds, the Qed time for larger iteration numbers also differs by more than a factor of 2.

Janno (Jun 30 2021 at 12:59):

You can use Time Qed. It doesn't track universe minimization, so be aware that the timings will only be comparable if both proofs have similar universes and constraints.

Janno (Jun 30 2021 at 13:02):

A useful trick for performing multiple measurements is to use Fail Fail Time Qed., which allows backtracking over just that statement instead of going back to the beginning of the proof.
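
For instance, restating the slow lemma above (as slow', a name of our own, just for illustration), the trick slots in right before the final Qed:

Lemma slow' : In 1 (repeat 0 200) -> False.
Proof.
  do 200 (move=> [/neq_0_1|]; [exact id|]).
  exact id.
Fail Fail Time Qed. (* Qed is checked and timed, then rolled back *)
Fail Fail Time Qed. (* a second measurement, without replaying the tactics *)
Qed.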

Andrej Dudenhefner (Jun 30 2021 at 13:02):

Thanks @Janno. With Time Qed.: fast with 1000 iterations: 5.018 secs; slow with 500 iterations: 8.272 secs.

Andrej Dudenhefner (Jun 30 2021 at 13:04):

To be clear: I am OK with the more powerful ssreflect tactics taking more time to construct a term. I wish that the constructed term would Qed as fast as with standard Coq.

Janno (Jun 30 2021 at 13:05):

I am getting similar results, I think. For 500 iterations in both proofs I get 1.4s versus 8.4s. Admittedly my machine is quite busy, but the timings seem stable. I have also changed the ssreflect version to be a bit closer to the vanilla Coq one:

  intros H.
  do 500 (case: H => [/neq_0_1 H|H]; [exact H|]).
  exact H.

Janno (Jun 30 2021 at 17:47):

@Enrico Tassi pinging you just in case you missed the last bits of discussion above. There does indeed seem to be a measurable difference in Qed times.

Enrico Tassi (Jun 30 2021 at 17:53):

Thanks for the heads-up and the test case; I'll open an issue.

Enrico Tassi (Jun 30 2021 at 17:56):

https://github.com/coq/coq/issues/14584

