Stream: Coq users

Topic: `dependent destruction` of expressions


view this post on Zulip Joshua Grosso (Sep 13 2020 at 06:20):

AFAICT, dependent destruction requires an ident as its argument. Is there a way to apply it to an expression instead? Usually, I'd think to remember the expression in question, but I can't without running into a "Conclusion depends on the bodies of ..." error. Is there some sort of dependent remember tactic, perhaps?

view this post on Zulip Paolo Giarrusso (Sep 13 2020 at 08:23):

I'm thinking you can emulate something like remember with pose and change, but I expect that might not help.
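
For instance, a minimal sketch of that emulation on a toy goal: pose names the expression as a local definition and change folds its occurrences, without introducing the equation hypothesis that remember would add.

Goal forall n : nat, n + 0 = n.
Proof.
  intro n.
  pose (e := n + 0).        (* local definition e := n + 0, no equation in the context *)
  change (n + 0) with e.    (* fold the occurrences; the goal is now e = n *)
Abort.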

view this post on Zulip Paolo Giarrusso (Sep 13 2020 at 08:30):

I don't know without more details, but it seems like one of those times where destruct will give you an ill-typed term because your goal is too restrictive, and you need some ad-hoc solution.

view this post on Zulip Joshua Grosso (Sep 13 2020 at 19:32):

Gotcha, that's what I was afraid of... I'll try to cook up an MVCE and ask again, in that case.

view this post on Zulip Joshua Grosso (Sep 13 2020 at 21:41):

Hmm, I think it's because I'm using a Program-generated equality proof as part of what I'm returning from the match (whose "matchee" I'm trying to destruct).

EDIT: The below example doesn't actually cause the issue in question, I don't think. Please disregard it until I can find a better one.

That is to say, I'm trying to destruct (foo bar) with (something analogous to) the following as my goal:

match foo bar as x return x = foo bar -> A with
| Some x => fun _ : Some x = foo bar => x
| None => fun Heq : None = foo bar => baz Heq
end eq_refl = quux

(for appropriate definitions of foo, bar, baz, and quux).

The specific error when I use normal destruct is analogous to:

Error: Abstracting over the term "o" leads to a term
fun o0 : option A =>
match o0 as x return x = o0 -> A with
| Some x => fun _ : Some x = o0 => x
| None => fun Heq : None = o0 => baz Heq
end eq_refl = x which is ill-typed.
Reason is: Illegal application:
The term "baz" of type
    "let filtered_var := foo bar in None = filtered_var -> A"
cannot be applied to the terms
    "Heq" : "None = o0"
The 1st term has type "None = o0" which should be coercible to
 "None = foo bar".

and remembering foo bar gives:

Error: Conclusion depends on the bodies of Heqo o

view this post on Zulip Joshua Grosso (Sep 14 2020 at 07:31):

Okay, I've managed to find a reasonably(?) small example (I can't figure out anything else to remove while preserving the error in question):

From Coq Require Import Lia Lists.List Program.Equality.

Program Definition safe_nth
        {A : Type}
        (l : list A)
        (i : nat)
        (H__i : i < length l) :
  A :=
  match nth_error l i with
  | None => _
  | Some x => x
  end.
Obligation 1.
  symmetry in Heq_anonymous.
  apply nth_error_None in Heq_anonymous.
  lia.
Qed.

Program Lemma safe_nth_correct :
  forall {A : Type}
    (l : list A)
    (i : nat)
    (H__i : i < length l),
  nth_error l i = Some (safe_nth l i H__i).
Proof.
  unfold safe_nth.
  intros.
  simpl in *.
  destruct (nth_error l i). (* THIS FAILS :-( *)

view this post on Zulip Jakob Botsch Nielsen (Sep 14 2020 at 08:14):

Here is a very indirect way to prove it. The thing to realize is that the only dependency is in the None branch, and there is a contradiction there. Then you generalize it to show that no matter what is in the None case, they will be equal.

Lemma eta_some
      {A}
      (o : option A)
      (i : option A -> option A -> Type)
      (f : forall (a : A), i o (Some a)) g h :
  match o with
  | Some _ => True
  | _ => False
  end ->
  match o as o' return i o o' with
  | Some a => f a
  | None => g
  end =
  match o as o' return i o o' with
  | Some a => f a
  | None => h
  end.
Proof.
  intros.
  now destruct o.
Qed.

Program Lemma safe_nth_correct :
  forall {A : Type}
    (l : list A)
    (i : nat)
    (H__i : i < length l),
  nth_error l i = Some (safe_nth l i H__i).
Proof.
  unfold safe_nth.
  intros.
  destruct (nth_error l i) eqn:nth in |-.
  - replace (match nth_error l i as anonymous' return (anonymous' = nth_error l i -> A) with
     | Some x => fun _ : Some x = nth_error l i => x
     | None =>
         fun Heq_anonymous : None = nth_error l i =>
         safe_nth_obligation_1 A l i H__i Heq_anonymous
     end) with
      (match nth_error l i as o return (o = nth_error l i -> A) with
       | Some x => fun _ => x
       | None => fun _ => a
       end); cycle 1.
    { apply (eta_some _ (fun o' o => o = o' -> A)).
      now rewrite nth. }
    now rewrite nth.
  - apply nth_error_None in nth.
    lia.
Qed.

There is definitely a more direct way than this, but this is a pretty general way to handle this problem that I have used before.

view this post on Zulip Paolo Giarrusso (Sep 14 2020 at 09:51):

“Abstracting” errors are actually rather accurate. They’re even readable, if you accept that destruct e first abstracts on e. However! The dependency is often unnecessary and can be thrown away via e.g. generalize — and as @Jakob Botsch Nielsen said, that’s the case here.

view this post on Zulip Paolo Giarrusso (Sep 14 2020 at 09:55):

that is, one caller of e (here e is nth_error l i) demands in its type that e appear literally rather than as the abstracted variable, but you can throw away the body of that caller.
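
Concretely, a sketch of that generalize route on the safe_nth example (untested; it assumes the imports and Program definition from Joshua's message above, and that the obligation constant is safe_nth_obligation_1 applied to A l i H__i, as in the goal Jakob's replace spells out above):

Lemma safe_nth_correct :
  forall {A : Type} (l : list A) (i : nat) (H__i : i < length l),
    nth_error l i = Some (safe_nth l i H__i).
Proof.
  intros A l i H__i.
  unfold safe_nth.
  (* Throw away the body of the obligation proof; only its type matters. *)
  generalize (safe_nth_obligation_1 A l i H__i).
  destruct (nth_error l i) eqn:E; intros f.
  - reflexivity.
  - apply nth_error_None in E. lia.
Qed.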

view this post on Zulip Paolo Giarrusso (Sep 14 2020 at 09:57):

separately, it’s not nice to do proofs against Program-generated definitions — Equations gives you unfolding lemmas (tho honestly I’m not sure what they’d look like here).

view this post on Zulip Joshua Grosso (Sep 14 2020 at 21:14):

Jakob Botsch Nielsen said:

Here is a very indirect way to prove it. The thing to realize is that the only dependency is in the None branch, and there is a contradiction there. Then you generalize it to show that no matter what is in the None case, they will be equal.

Lemma eta_some
      {A}
      (o : option A)
      (i : option A -> option A -> Type)
      (f : forall (a : A), i o (Some a)) g h :
  match o with
  | Some _ => True
  | _ => False
  end ->
  match o as o' return i o o' with
  | Some a => f a
  | None => g
  end =
  match o as o' return i o o' with
  | Some a => f a
  | None => h
  end.
Proof.
  intros.
  now destruct o.
Qed.

Program Lemma safe_nth_correct :
  forall {A : Type}
    (l : list A)
    (i : nat)
    (H__i : i < length l),
  nth_error l i = Some (safe_nth l i H__i).
Proof.
  unfold safe_nth.
  intros.
  destruct (nth_error l i) eqn:nth in |-.
  - replace (match nth_error l i as anonymous' return (anonymous' = nth_error l i -> A) with
     | Some x => fun _ : Some x = nth_error l i => x
     | None =>
         fun Heq_anonymous : None = nth_error l i =>
         safe_nth_obligation_1 A l i H__i Heq_anonymous
     end) with
      (match nth_error l i as o return (o = nth_error l i -> A) with
       | Some x => fun _ => x
       | None => fun _ => a
       end); cycle 1.
    { apply (eta_some _ (fun o' o => o = o' -> A)).
      now rewrite nth. }
    now rewrite nth.
  - apply nth_error_None in nth.
    lia.
Qed.

There is definitely a more direct way than this, but this is a pretty general way to handle this problem that I have used before.

That's a pretty cool solution! One question I have, re: generality: It seems to me like this requires having a term of type A to put into the None case (in this case, the a from the Some case). If that were not the case (e.g. if we weren't dealing with option here), would there still be a way to make this approach work?

view this post on Zulip Joshua Grosso (Sep 14 2020 at 21:16):

Paolo Giarrusso said:

separately, it’s not nice to do proofs against Program-generated definitions — Equations gives you unfolding lemmas (tho honestly I’m not sure what they’d look like here).

Gotcha, I'll put learning Equations on my todo list :-)

view this post on Zulip Joshua Grosso (Sep 14 2020 at 21:17):

(And thank you both—as a newcomer to Coq, I was starting to feel like the problem may have just been—for whatever reason—unsolvable, so I'm super excited that there's a way to move forward.)

view this post on Zulip Paolo Giarrusso (Sep 15 2020 at 02:20):

Newcomer? I inferred you were a rather advanced user... For a newcomer, my recommendation would include "consider avoiding that"

view this post on Zulip Paolo Giarrusso (Sep 15 2020 at 02:22):

E.g. Chlipala advocates doing proofs about ML-typed functions (unlike safe_nth); while going beyond that Coq subset sometimes has advantages, it does require more challenging proof techniques

view this post on Zulip Jasper Hugunin (Sep 15 2020 at 03:18):

Paolo's advice is very practical, and maintaining a separation between the relatively simply typed programs you are studying and their (mostly erasable/non-computationally-relevant/sub-singleton) dependently typed proofs is probably the most efficient approach for doing much program verification or formalized mathematics in Coq today.
To provide a different and somewhat irresponsible perspective however, I often feel that complex dependent types are where all the fun is. (Even though Coq sometimes makes it difficult and ugly.)

view this post on Zulip Jasper Hugunin (Sep 15 2020 at 03:19):

Not that the simpler typed approach is un-fun, just that I find complex dependent types more fun. Opinions vary.

view this post on Zulip Jakob Botsch Nielsen (Sep 15 2020 at 08:33):

Joshua Grosso said:

Jakob Botsch Nielsen said:

Here is a very indirect way to prove it. The thing to realize is that the only dependency is in the None branch, and there is a contradiction there. Then you generalize it to show that no matter what is in the None case, they will be equal.
<snip>
There is definitely a more direct way than this, but this is a pretty general way to handle this problem that I have used before.

That's a pretty cool solution! One question I have, re: generality: It seems to me like this requires having a term of type A to put into the None case (in this case, the a from the Some case). If that were not the case (e.g. if we weren't dealing with option here), would there still be a way to make this approach work?

Indeed, it is possible that the type returned by the match depends on the shape of o, in which case the body of the Some branch might not be well-typed when substituted into the body of the None branch. For example:

Definition weird_safe_nth {A} (l : list A) (i : nat) (H__i : i < length l) :
  match nth_error l i with
  | Some _ => A
  | None => False
  end.
Proof.
  destruct (nth_error l i) as [a|] eqn:nth; [exact a|].
  abstract (apply nth_error_None in nth; lia).
Defined.

From Coq Require Import JMeq.
Lemma foo {A} (l : list A) (i : nat) (H__i : i < length l) :
  JMeq (nth_error l i) (weird_safe_nth l i H__i).
Proof.

I'm not sure what one would do for this case. I guess stare at it for a while and try to generalize it even more :-)

view this post on Zulip Christian Doczkal (Sep 15 2020 at 08:34):

It might be worth pointing out that families of types indexed by natural numbers (e.g., 'I_n as the type of integers less than n in mathcomp) often behave reasonably well in practice. Indeed mathcomp includes both a simply-typed nth function (with default) and a dependently typed tnth (nth for tuples). However, it is defined as

Definition tnth (n : nat) (t : n.-tuple T) (i : 'I_n)  : T := nth (tnth_default t i) t i.

where tnth_default : forall (n : nat) (T : Type), n.-tuple T -> 'I_n -> T shows that given a correct index into the tuple, the base type must be inhabited. So the safe version is merely a non-recursive wrapper around the version with default. This makes it very easy to lift previously established lemmas for nth.
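
In plain Coq, a rough sketch of that design (hypothetical names, not mathcomp's tnth): recover a default element from the bounds proof, then defer to the simply-typed nth, so existing lemmas about nth can be reused.

Require Import List.

Lemma lt_0_False (i : nat) : i < 0 -> False.
Proof. intros H; inversion H. Qed.

(* A valid index means the list is nonempty, so it yields a default element. *)
Definition default_of {A} (l : list A) (i : nat) (H : i < length l) : A :=
  match l as l' return i < length l' -> A with
  | nil => fun H0 => False_rect A (lt_0_False i H0)
  | cons x _ => fun _ => x
  end H.

(* The "safe" lookup is then a non-recursive wrapper around nth-with-default. *)
Definition safe_nth' {A} (l : list A) (i : nat) (H : i < length l) : A :=
  nth i l (default_of l i H).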

view this post on Zulip Paolo Giarrusso (Sep 15 2020 at 09:46):

Having used Agda, what you say on tnth is somewhat sad...

view this post on Zulip Paolo Giarrusso (Sep 15 2020 at 09:48):

It seems to me "non-dependent Coq" (for brevity) and dependent Agda are two local optima with a pretty wide gulf in-between

view this post on Zulip Karl Palmskog (Sep 15 2020 at 09:50):

how easy is it to use symbolic computation in proofs à la SSReflect in Agda? It seems to me that Coq has many different styles of encoding/proofs - how wide is this range in current Agda practice?

view this post on Zulip Karl Palmskog (Sep 15 2020 at 09:59):

I think it should also be noted that automation such as hammers (taken for granted in the HOL family of proof assistants) currently works best on non-dependent encodings

view this post on Zulip Paolo Giarrusso (Sep 15 2020 at 10:00):

IME, without significant deployed metaprogramming, I think there's much less variation in Agda proof styles. Your tactics are (beyond a very limited rewrite) definitional equality, dependent pattern matching, with, and design patterns.

view this post on Zulip Paolo Giarrusso (Sep 15 2020 at 10:01):

well, I forgot induction-recursion.

view this post on Zulip Paolo Giarrusso (Sep 15 2020 at 10:02):

It's a bit like programming in Haskell without Template Haskell. If you can come up with a principled metaprogram in a certain style, or your proof is manual, great. Not otherwise.

view this post on Zulip Christian Doczkal (Sep 15 2020 at 10:24):

Paolo Giarrusso said:

Having used Agda, what you say on tnth is somewhat sad...

Not having used Agda but having used mathcomp quite a bit, I wonder what would make you sad.

Indeed, mathcomp is geared heavily towards a very specific proof style: equational reasoning and symbolic computation. Just to have an idea, the mathcomp library has approximately 2.5 times more invocations of rewrite than apply, and many rewrites actually rewrite a long sequence of rules. So the ratio of rewrite steps to lemma applications is probably closer to 5:1.

view this post on Zulip Paolo Giarrusso (Sep 15 2020 at 10:37):

sorry, “sad” was not very specific; but indexing into Vector n with Fin n is the poster child for dependent pattern-matching.

view this post on Zulip Paolo Giarrusso (Sep 15 2020 at 10:38):

OTOH, reusing non-dependent proofs on dependent containers (or vice versa) is a partially open question (a relevant answer involves ornaments, but I don’t understand them).

view this post on Zulip Paolo Giarrusso (Sep 15 2020 at 10:40):

in any case, I guess there’s nothing wrong about it within mathcomp. Just don’t show it to McBride.

view this post on Zulip Karl Palmskog (Sep 15 2020 at 10:42):

I think there is some kind of productivity study hiding there. Intuitively, the practical expressivity gains from going from FOL -> HOL are much greater than going from HOL -> Agda/Coq (e.g., Harrison has done just fine without having an explicit R^n)

view this post on Zulip Paolo Giarrusso (Sep 15 2020 at 10:45):

IIUC even mathematicians would argue about “just fine without R^n” — IIRC you can encode it as a predicate (?), but libraries using types and those using predicates are not fully compatible

view this post on Zulip Paolo Giarrusso (Sep 15 2020 at 10:47):

OTOH, with “light” dependent types you’re pretty productive with Agda — and it’s not about the type system (you’re much more productive than with Equations, because of the much better tool support)

view this post on Zulip Enrico Tassi (Sep 15 2020 at 10:47):

Indeed it's all about productivity. Using vectors is fun, but too hard and, more importantly, not modular. The next time you have a property on lists, say being duplicate free, what do you do? You define a new inductive data type and write for the third time rev, append, ... ? The whole point of defining tuples as a sigma type over lists is that you can reuse all you know about lists, and some stuff can be lifted to tuples since it changes the length in a predictable way and you cut out of your proofs the reasoning part about size. You can't reuse much if you define a new inductive type (that embeds the property you are interested in) every time.
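
As an illustration of that reuse in plain Coq (a hypothetical sketch, not mathcomp's actual tuple packaging): lifting rev to fixed-length tuples is a thin wrapper plus one appeal to the existing rev_length lemma, instead of a new recursive definition and proof.

Require Import List.

Definition tuple (n : nat) (A : Type) := { l : list A | length l = n }.
Definition tval {n A} (t : tuple n A) : list A := proj1_sig t.

(* The only new proof is about the length; everything about rev is reused. *)
Lemma trev_length {n A} (t : tuple n A) : length (rev (tval t)) = n.
Proof. rewrite rev_length. exact (proj2_sig t). Qed.

Definition trev {n A} (t : tuple n A) : tuple n A :=
  exist _ (rev (tval t)) (trev_length t).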

view this post on Zulip Karl Palmskog (Sep 15 2020 at 10:51):

I have watched quite a few people learn Coq "on their own" lately. The problem I see is that people sort of mix and match booleans and some exotic dependent type constructions. This style tends to bring the worst of HOL and the worst of Coq.

view this post on Zulip Karl Palmskog (Sep 15 2020 at 10:55):

one way to view MathComp is that it gives you the proper machinery to deal with booleans (and other symbolic stuff) in a logic like Coq's. I conjecture that many people should probably stay at that level (principled booleans + minimum measure of dependent types) to be productive, since many of their FOL intuitions still hold

view this post on Zulip Karl Palmskog (Sep 15 2020 at 10:57):

for example, "proper" DTT style would probably never use booleans for anything (as per https://existentialtype.wordpress.com/2011/03/15/boolean-blindness/)

view this post on Zulip Christian Doczkal (Sep 15 2020 at 11:02):

Karl Palmskog said:

I think there is some kind of productivity study hiding there. Intuitively, the practical expressivity gains from going from FOL -> HOL are much greater than going from HOL -> Agda/Coq (e.g., Harrison has done just fine without having an explicit R^n)

While you are probably right, there are many concepts that can be expressed naturally (and painlessly) with dependent types. My favorite example is (of course) finite automata. The point here being that the typical textbook presentation (Q,delta,s,F) is dependent. Moreover, one rarely indexes other things on the states of an automaton, so one almost never runs into dependent-types-related problems. "Dependent types hell" usually starts when indexing on something that has nontrivial operations (eg, vectors and arithmetic operations).
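
A tiny sketch of that point (a hypothetical record, not taken from any library): the types of the later fields depend on the state field itself, which is exactly the textbook (Q, delta, s, F) presentation.

Record dfa := {
  state : Type;
  symbol : Type;
  delta : state -> symbol -> state;   (* transitions over the state field *)
  start : state;
  accept : state -> bool              (* F as a decidable set of states *)
}.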

view this post on Zulip Christian Doczkal (Sep 15 2020 at 11:05):

Okay, another common form of "dependent types hell" is nested sigma types (e.g., removing three nodes from some graph, one after the other). In our latest paper, this forced us to resort to a non-dependently typed auxiliary representation :neutral:

view this post on Zulip Karl Palmskog (Sep 15 2020 at 11:11):

finite automata are a clean example. Gonthier wrote about how he used dependent types for accessing reflection in the four-color theorem, but to me it's less obvious that this is unique to dependent types (Isabelle has some limited forms of reflection).

view this post on Zulip Christian Doczkal (Sep 15 2020 at 11:42):

I think we are starting to digress here, but for what it's worth: I think the issue with reflection is more about how much is (conceptually) done within the theory. Isabelle/HOL doesn't have an intrinsic notion of computation, so IIUC they call a "ground evaluator" which compiles a term to ML, evaluates it, and then accepts the result as "equal". In Coq, everything beyond reification is part of the core theory. Of course, for performance reasons, people tend to use vm_compute for their reflective proofs, making the difference a lot smaller. :shrug:

view this post on Zulip Karl Palmskog (Sep 15 2020 at 16:29):

on this topic, I think we are all also waiting for @Jason Gross to sum up his mega-thread spanning the Agda and Coq mailing lists (https://coq.discourse.group/t/why-dependent-type-theory/657)

view this post on Zulip Alexander Gryzlov (Sep 15 2020 at 16:47):

Enrico Tassi said:

Indeed it's all about productivity. Using vectors is fun, but too hard and, more importantly, not modular. The next time you have a property on lists, say being duplicate free, what do you do? You define a new inductive data type and write for the third time rev, append, ... ? The whole point of defining tuples as a sigma type over lists is that you can reuse all you know about lists, and some stuff can be lifted to tuples since it changes the length in a predictable way and you cut out of your proofs the reasoning part about size. You can't reuse much if you define a new inductive type (that embeds the property you are interested in) every time.

Algebraic ornaments were supposed to be an answer to that problem, however I'm not sure what's the current state of that branch.

view this post on Zulip Alexander Gryzlov (Sep 15 2020 at 16:50):

Oh, @Paolo Giarrusso has already mentioned that

view this post on Zulip Joshua Grosso (Sep 15 2020 at 20:05):

Paolo Giarrusso said:

E.g. Chlipala advocates doing proofs about ML-typed functions (unlike safe_nth); while going beyond that Coq subset sometimes has advantages, it does require more challenging proof techniques

I guess I'm a bit confused re: Chlipala's approach: I recently read/skimmed CPDT, and it seemed to me like he recommended using "fancy" dependently-typed functions (with sigma types, etc.)—did I miss the book's point entirely?

view this post on Zulip Joshua Grosso (Sep 15 2020 at 20:06):

Jasper Hugunin said:

Paolo's advice is very practical, and maintaining a separation between the relatively simply typed programs you are studying and their (mostly erasable/non-computationally-relevant/sub-singleton) dependently typed proofs is probably the most efficient approach for doing much program verification or formalized mathematics in Coq today.
To provide a different and somewhat irresponsible perspective however, I often feel that complex dependent types are where all the fun is. (Even though Coq sometimes makes it difficult and ugly.)

(Just to make sure I understand: by "dependently typed proofs" do you mean e.g. those in sigma types?)

view this post on Zulip Enrico Tassi (Sep 15 2020 at 20:10):

A pointer to Adam's opinion on Vectors for beginners: https://sympa.inria.fr/sympa/arc/coq-club/2017-12/msg00087.html

view this post on Zulip Joshua Grosso (Sep 15 2020 at 20:12):

Thanks, @Enrico Tassi !

view this post on Zulip Karl Palmskog (Sep 15 2020 at 20:14):

I always saw the FRAP book (http://adam.chlipala.net/frap/) as a "tempering" (cooling down?) of the sometimes dependent-types-happy CPDT for engineering purposes

view this post on Zulip Joshua Grosso (Sep 15 2020 at 20:16):

Paolo Giarrusso said:

“Abstracting” errors are actually rather accurate. They’re even readable, if you accept that destruct e first abstracts on e. However! The dependency is often unnecessary and can be thrown away via e.g. generalize — and as Jakob Botsch Nielsen said, that’s the case here.

Hmm, since this really is a bona fide "issue" with dependent pattern matching, does Agda also run into this with its pattern matching?

view this post on Zulip Jasper Hugunin (Sep 15 2020 at 23:44):

How I see things: vectors defined as {l : list A & length l <= n} are difficult to use because they are dependently typed, while vectors defined as Inductive vec (A : Type) : nat -> Type := nil : vec A 0 | cons (a : A) (n : nat) (v : vec A n) : vec A (S n). are even more difficult to use because they are both dependently typed and you don't get to reuse proofs about <=. If you want to keep a bright separation between the simply and dependently typed worlds, don't write safe_index : forall n, vec A n -> fin n -> A, instead write index : list A -> nat -> option A and prove forall l i, i < length l -> is_some (index l i). (In short, sigma-ing together data with a proof has pretty much all the problems of dependent types)
An example of a problem you start running into with sigma types: given (vec1 vec2 : {l : list A & length l <= n}), does fst vec1 = fst vec2 imply vec1 = vec2? (Yes, but it's not at all straightforward to prove).
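
For reference, a sketch of that last statement (hedged: it leans on le_unique from the stdlib's Peano_dec, i.e. uniqueness of <= proofs, which is exactly the non-straightforward part).

Require Import List Arith.

Definition vec (A : Type) (n : nat) := { l : list A & length l <= n }.

Lemma vec_eq {A n} (v1 v2 : vec A n) :
  projT1 v1 = projT1 v2 -> v1 = v2.
Proof.
  destruct v1 as [l1 H1], v2 as [l2 H2]; simpl.
  intros E; subst l2.
  (* All that is left is that the two <= proofs coincide. *)
  now rewrite (le_unique _ _ H1 H2).
Qed.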

view this post on Zulip Joshua Grosso (Sep 16 2020 at 00:37):

Jasper Hugunin said:

How I see things: vectors defined as {l : list A & length l <= n} are difficult to use because they are dependently typed, while vectors defined as Inductive vec (A : Type) : nat -> Type := nil : vec A 0 | cons (a : A) (n : nat) (v : vec A n) : vec A (S n). are even more difficult to use because they are both dependently typed and you don't get to reuse proofs about <=. If you want to keep a bright separation between the simply and dependently typed worlds, don't write safe_index : forall n, vec A n -> fin n -> A, instead write index : list A -> nat -> option A and prove forall l i, i < length l -> is_some (index l i). (In short, sigma-ing together data with a proof has pretty much all the problems of dependent types)
An example of a problem you start running into with sigma types: given (vec1 vec2 : {l : list A & length l <= n}), does fst vec1 = fst vec2 imply vec1 = vec2? (Yes, but it's not at all straightforward to prove).

(Does fst equal projT1 here? or am I misunderstanding)

view this post on Zulip Jasper Hugunin (Sep 16 2020 at 01:01):

Yes, fst is projT1.

view this post on Zulip Paolo Giarrusso (Sep 16 2020 at 01:09):

FWIW in Agda, Vector is very easy to use thanks to pattern matching, while length l <= n seems a nightmare.

view this post on Zulip Paolo Giarrusso (Sep 16 2020 at 01:13):

concretely, this thread discusses how to write the following (https://github.com/agda/agda-stdlib/blob/master/src/Data/Vec/Base.agda#L58-L60):

lookup : ∀ {n} → Vec A n → Fin n → A
lookup (x ∷ xs) zero    = x
lookup (x ∷ xs) (suc i) = lookup xs i

view this post on Zulip Paolo Giarrusso (Sep 16 2020 at 01:16):

which reminds me that POPLMark Reloaded (arguably) takes more code in Coq than in Agda.

view this post on Zulip Paolo Giarrusso (Sep 16 2020 at 01:17):

can you get into trouble with DTs in Agda? Yes; as usual, you need your proofs to use the right inductive structure, but this becomes harder.

view this post on Zulip Paolo Giarrusso (Sep 16 2020 at 01:20):

And sometimes you can paint yourself into a corner, especially if your statements/definitions require _propositional_ equalities to become type-correct — and you need to start over more cleverly, or to reduce the amount of dependent types

view this post on Zulip Paolo Giarrusso (Sep 16 2020 at 01:21):

that is: if you wrote Vector.cons to return Vec A (n + 1) instead of Vec A (S n), you’d need to rewrite n + 1 with S n often — but using S n in the first place makes your life easier.
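
A tiny way to see the difference (a hypothetical Check, no vectors needed): 1 + n computes to S n, while n + 1 is only propositionally equal to it for an abstract n.

Fail Check (fun n : nat => eq_refl : n + 1 = S n). (* rejected: not a definitional equality *)
Check (fun n : nat => eq_refl : 1 + n = S n).      (* accepted: 1 + n reduces to S n *)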

view this post on Zulip Paolo Giarrusso (Sep 16 2020 at 01:23):

however, other cases can be harder. if you want well-scoped/typed substitutions, you’ll basically want to follow the sigma calculus to write your definitions — almost anything else will be painful...

view this post on Zulip Paolo Giarrusso (Sep 16 2020 at 01:24):

and in yet other cases, getting the right structure requires lots of cleverness — enough that I gave up, quite honestly.

view this post on Zulip Paolo Giarrusso (Sep 16 2020 at 01:26):

to put it better: in Agda you must “coordinate” the inductive structure of your type and your index, and vectors are easy because those structures coincide.

view this post on Zulip Joshua Grosso (Sep 16 2020 at 02:14):

Jasper Hugunin said:

Yes, fst is projT1.

Is it "cheating" to assume proof irrelevance? (Is that not done very often in practice?)

view this post on Zulip Jasper Hugunin (Sep 16 2020 at 02:21):

Joshua Grosso said:

Is it "cheating" to assume proof irrelevance

Depends on what game you are playing :stuck_out_tongue_wink: If you do use it, you want to be careful that your computations don't get stuck on the axiom. You often end up being able to prove irrelevance for all the propositions you care about if you make careful choices and maybe add some assumptions.

view this post on Zulip Jasper Hugunin (Sep 16 2020 at 03:36):

Paolo Giarrusso said:

FWIW in Agda, Vector is very easy to use thanks to pattern matching, while length l <= n seems a nightmare.

Coq is worse than Agda here, but not terribly so. If you have v : Vector.t A (S n), you can write match v in Vector.t _ (S _) with cons _ a n v => ... end to get a similar, though somewhat less smart and more verbose, result as Agda's pattern matching support. However, if other things in the context depend on n, you have to pass them through the match with the convoy pattern in Coq, while Agda seems to manage that automatically at least for earlier arguments of the same function.

view this post on Zulip Enrico Tassi (Sep 16 2020 at 06:27):

If the predicate is decidable you can prove that all its proofs are the same. And if you express the predicate as a function to bool, then you can prove it once and for all decidable predicates. This is how we do it in mathcomp, no need for an axiom.
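
For example, a sketch of the once-and-for-all statement for boolean predicates, using the stdlib's UIP_dec and bool_dec rather than mathcomp's own machinery:

Require Import Bool Eqdep_dec.

(* Any two proofs that a boolean is true are equal (Hedberg/UIP on bool). *)
Lemma bool_proof_irrelevance (b : bool) (p q : b = true) : p = q.
Proof. apply UIP_dec. apply bool_dec. Qed.

Any predicate stated as f x = true then inherits this, which is the "once and for all decidable predicates" part.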

view this post on Zulip Joshua Grosso (Sep 16 2020 at 06:52):

Enrico Tassi said:

If the predicate is decidable you can prove that all its proofs are the same. And if you express the predicate as a function to bool, then you can prove it once and for all decidable predicates. This is how we do it in mathcomp, no need for an axiom.

That's really interesting, I wasn't aware of that!

view this post on Zulip Paolo Giarrusso (Sep 16 2020 at 07:04):

Isn’t that too strong? _Equality_ on a decidable predicate is proof irrelevant (aka if P decidable and p1, p2: P, then p1 = p2 is proof-irrelevant/has only one inhabitant — that’s Hedberg’s theorem), and bool_decide P = true (or however you write it) is proof-irrelevant itself

view this post on Zulip Paolo Giarrusso (Sep 16 2020 at 07:05):

But even Inductive Foo : Prop := A | B seems decidable without being proof-irrelevant

view this post on Zulip Paolo Giarrusso (Sep 16 2020 at 07:09):

@Jasper Hugunin I’ve used Coq’s dependent pattern matching, but I think you’re underselling Agda and Coq-Equations quite a bit. The reason Agda’s fine without reuse from List to Vector is that Agda proofs by pattern matching here are short and easier (they’re not shorter by golfing).

view this post on Zulip Enrico Tassi (Sep 16 2020 at 07:10):

Indeed I don't know what the exact conditions are for an arbitrary inductive. In particular, if the predicate "does not predicate", as in your example, it is likely to be false. My point was that some predicates can be expressed either as a boolean function or as an inductive relation, and if you need proof irrelevance you'd better use the boolean function and use UIP from Hedberg. The classical example is leq : nat -> nat -> ?. You can prove proof canonicity even for the one in Prop, but it is not trivial at all. And the proof depends on the shape of the constructors, so you cannot reuse the same proof for another decidable predicate. Hedberg's proof is very tricky, but you do it once and for all.

view this post on Zulip Paolo Giarrusso (Sep 16 2020 at 19:23):

agreed... OT: somehow, stdpp seems to prove Hedberg in 10 lines. 30 if you add other typeclass instances (not sure if they’re needed): https://gitlab.mpi-sws.org/iris/stdpp/-/blob/master/theories/proof_irrel.v#L1-31

view this post on Zulip Paolo Giarrusso (Sep 16 2020 at 19:24):

I know nothing about it, except that Coq’s stdlib one is IIRC much bigger.

view this post on Zulip Karl Palmskog (Sep 16 2020 at 19:26):

MathComp version: https://github.com/math-comp/math-comp/blob/ddac4a5d1e560458c61faf81c14db8abfdd06a0c/mathcomp/ssreflect/eqtype.v#L299-L306

view this post on Zulip Karl Palmskog (Sep 16 2020 at 19:28):

so I think what Enrico means is that it's conceptually tricky, but seemingly straightforward to implement?

view this post on Zulip Enrico Tassi (Sep 17 2020 at 07:46):

IIRC it is very tricky because it cannot easily be decomposed into trivial steps without ending up with an ill-typed term. And the original paper does not help much there, since it is essentially a big lambda term written in TeX. It's the usual thing with dependent elimination: it's powerful because it replaces many things at the same time, but you can't run it in slow motion, since replacing only "a few things" gives you an ill-typed goal.

view this post on Zulip Enrico Tassi (Sep 17 2020 at 07:50):

IIRC the proof by Georges carefully prepares the goal for the last line which is the brain killer. I believe the proof in the stdlib is essentially the same, modulo size.

view this post on Zulip Kenji Maillard (Sep 17 2020 at 08:04):

I really like Nicolai Kraus's proof of Hedberg's theorem (in particular because it easily generalizes to a locally decidable type, so it can be used to show that some types are hSets without having decidable equality globally)

view this post on Zulip Christian Doczkal (Sep 17 2020 at 11:18):

Enrico Tassi said:

IIRC the proof by Georges carefully prepares the goal for the last line which is the brain killer.

Yes. My own understanding, after disassembling the by case: y /; case: _ / (proj x _). into:

rewrite /cancel. case: y /. move: (proj x _). move: {-1}x => z. case: z /. reflexivity.

is that in order to case on an equality, it must be possible to generalize the goal w.r.t. both the equality and its right-hand side. This is why join needs to have the type forall s s' : T, x = s -> x = s' -> s = s'. Nifty! :smile:

Also, it seems like the _ in the original line cannot be made explicit, because it's the newly introduced variable for the RHS that is being replaced. But the typing errors one gets if one tries do help in understanding what's happening.

view this post on Zulip Joshua Grosso (Sep 25 2020 at 22:00):

Paolo Giarrusso said:

separately, it’s not nice to do proofs against Program-generated definitions — Equations gives you unfolding lemmas (tho honestly I’m not sure what they’d look like here).

Does using a with x clause in an Equations definition not auto-generate an equality proof for x, like when using match x with in Program?

view this post on Zulip Paolo Giarrusso (Sep 25 2020 at 22:05):

Lifting my Agda experience, you’ll need the same with clause at the call site

view this post on Zulip Paolo Giarrusso (Sep 25 2020 at 22:07):

in general, the RHS is only well-typed assuming the results of all the matches on the LHS

view this post on Zulip Paolo Giarrusso (Sep 25 2020 at 22:08):

So, at the call site, you need to do the same matches, then you can refine the type of the LHS, and then the LHS and RHS have the same type

view this post on Zulip Paolo Giarrusso (Sep 25 2020 at 22:08):

Without any extra cast introduced by the translation.

view this post on Zulip Paolo Giarrusso (Sep 25 2020 at 22:09):

(Apologies that this is a horrible explanation)

view this post on Zulip Paolo Giarrusso (Sep 25 2020 at 22:10):

Also, back to your question, you don’t get that extra equation “for free” in your context; you must use the “inspect pattern” (which I learned from Agda’s stdlib, but which Equations reuses)

view this post on Zulip Paolo Giarrusso (Sep 25 2020 at 22:12):

In fact, my last message is probably the actual answer; the earlier messages answer a more advanced question, “how could you possibly get unfolding lemmas in the presence of with clauses, without seeing noise introduced by compiling to eliminators”.

view this post on Zulip Joshua Grosso (Sep 25 2020 at 22:25):

Paolo Giarrusso said:

Also, back to your question, you don’t get that extra equation “for free” in your context; you must use the “inspect pattern” (which I learned from Agda’s stdlib, but which Equations reuses)

Thanks! (For posterity: https://github.com/mattam82/Coq-Equations/blob/dea7a063492327bcb62eee9148187712ffbff2bf/examples/Basics.v#L506)
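
The core of that idiom is tiny; a minimal sketch (the linked example defines essentially this, plus a notation for using it in with clauses):

(* Pack a value with a proof that it equals itself; matching on
   [inspect (f x)] then also brings the equation into scope. *)
Definition inspect {A} (x : A) : { y : A | x = y } :=
  exist _ x eq_refl.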

view this post on Zulip Joshua Grosso (Sep 25 2020 at 23:52):

uhhhh... if funelim foo generates a hypothesis of the form False_rect ... = foo, does that mean anything useful? It seems vacuous to me, but I'm scared nonetheless by such a wacky hypothesis.

view this post on Zulip Paolo Giarrusso (Sep 26 2020 at 00:45):

Well, why do you have a False in context, assuming False_rect is fully applied?

view this post on Zulip Paolo Giarrusso (Sep 26 2020 at 00:45):

It sounds like you're in an absurd branch and should dispatch the goal by destructing the False expression.

view this post on Zulip Paolo Giarrusso (Sep 26 2020 at 00:46):

In fact, maybe you had False before, but it wasn’t obvious?

view this post on Zulip Joshua Grosso (Sep 26 2020 at 00:54):

When I Unset Printing Notations I get False_rect A (ZMicromega.ZTautoChecker_sound ...), where that ZMicromega.... proof object has type False

view this post on Zulip Joshua Grosso (Sep 26 2020 at 00:54):

A ha, destruct (ZMicromega....) works!

view this post on Zulip Paolo Giarrusso (Sep 26 2020 at 00:55):

for clarity: a function forall n, Fin n -> T needs to handle the n = 0 case, and one way to do that is to return False_rect applied to a proof of Fin 0 -> False, itself applied to the Fin 0 input
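
A self-contained sketch of that shape, with a hypothetical hand-rolled fin rather than the stdlib's Fin.t:

Inductive fin : nat -> Type :=
| F1 : forall n, fin (S n)
| FS : forall n, fin n -> fin (S n).

Lemma fin_0_empty : fin 0 -> False.
Proof. intros f; inversion f. Qed.

(* The n = 0 case of a [forall n, fin n -> T] function can be dispatched
   by eliminating the resulting proof of False. *)
Definition fin0_case {T : Type} (f : fin 0) : T :=
  False_rect T (fin_0_empty f).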

view this post on Zulip Paolo Giarrusso (Sep 26 2020 at 00:55):

but (without knowing funelim’s spec) it seems unfortunate that funelim does not dispatch this case for you

view this post on Zulip Paolo Giarrusso (Sep 26 2020 at 00:56):

FWIW: that ZMicromega... term likely comes from one of the micromega tactics, e.g. lia — which I guess your obligations are using ;-)

view this post on Zulip Joshua Grosso (Sep 26 2020 at 01:00):

I'm apparently a bit unclear on how abstract interfaces with Obligations, since I'm only ever using abstract lia and abstract tauto. (Is it possible program_simpl is auto-discharging obligations with non-abstract-ified versions of such tactics?)

view this post on Zulip Paolo Giarrusso (Sep 26 2020 at 10:00):

IIRC program_simpl does not call lia. But Equations does not call program_simpl; it uses its own tactics. Moreover, this is controlled by the Coq flags we mentioned, and Coq flags are pieces of global mutable state.

view this post on Zulip Paolo Giarrusso (Sep 26 2020 at 10:01):

So if you require multiple modules setting the flags, predicting the result is tricky. Test what you have.

view this post on Zulip Paolo Giarrusso (Sep 26 2020 at 10:02):

Otherwise, abstract does not do that much inside Qed-ed proofs.

view this post on Zulip Paolo Giarrusso (Sep 26 2020 at 10:03):

It only creates a separate function inside Defined blocks.

view this post on Zulip Paolo Giarrusso (Sep 26 2020 at 10:05):

Finally, I’m not sure myself, but Equations must process your definition somewhat to generate induction schemes. Still, there’s no reason the proof of False must be exposed.

view this post on Zulip Kenji Maillard (Sep 26 2020 at 12:32):

@Joshua Grosso Do you have a reasonably sized example of funelim producing a goal with False as hypothesis ? If so, you might want to open an issue on the github bug-tracker.

