Quoted here is a typical example from Software Foundations. My novice question is: does the first row here, p : nat, "mean" anything?
I realize that the example wouldn't work if p had another type, like (p : Type), since the odd and even functions only accept nat. But it's still puzzling that p : nat is (or seems to be?) a hypothesis and not "just" a type declaration.
And so far p : nat and similar lines are seldom used. (Maybe they participate with the exists tactic sometimes.)
1 subgoal
p : nat
H : forall n : nat, even n = true -> even (S n) = false
H0 : forall n : nat, even n = false -> odd n = true
H1 : even p = true
______________________________________(1/1)
odd (S p) = true
(Maybe everything becomes clear when I come to the "The Curry-Howard Correspondence" chapter. Who knows? :-) )
the general meaning of t : T is that t is a term that has type T. So, p is here a term that has type nat, and nat is an inductive type defined in the standard library.
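For instance, all of these are reported as the same kind of judgment (a throwaway illustration in a fresh session, not from the book):
Check 3.
(* 3 : nat *)
Check nat.
(* nat : Set *)
Check (fun n : nat => n + 1).
(* fun n : nat => n + 1 : nat -> nat *)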
there isn't much difference between a term-has-type declaration and a "hypothesis" inside a proof, if that's what you're after; they would both become part of the environment you're working in
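for example, after intros both kinds of entries sit above the line in exactly the same way (a minimal sketch of my own, not the Software Foundations example):
Goal forall p : nat, p = 0 -> S p = 1.
Proof.
intros p H.
(* the context now shows both  p : nat  and  H : p = 0 *)
rewrite H. reflexivity.
Qed.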
but maybe you are after some external mathematical explanation of what a typing judgment is? For example, this recent overview of some of the foundations used by Coq could be helpful.
Thanks for the comments and the PDF links. They seem to be somewhat close to what I can digest.
I'm slightly surprised that you call t in t : T a term and not a variable (yes, I'm a programmer. Maybe that shows). I would have guessed it was a variable containing a term of type T.
Coq and this type-centric world is a fascinating place.
yes, I would say that t : T (over the _____ line, or in a binder) is a variable, not a term
(: can also be used in terms to force the type, e.g. (fun x => x) : nat -> nat is a valid term)
(of course a variable x can be used to make a term also written x, so it can be a little confusing)
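e.g., spelling out that cast in a fresh session (my own throwaway example):
Check ((fun x => x) : nat -> nat).
(* accepted, with type nat -> nat *)
Fail Check ((fun x => x) : nat).
(* the cast forces elaboration at that type, so this one is rejected *)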
I tend to divide into ground terms like 1, which are definitely not variables, and non-ground terms like x or S x, some of which, e.g., x, behave like mathematical variables
not sure this can be refined/justified more theoretically
at least in HOL, which has simpler foundations, the t in t : T is defined as a term in the metalanguage, regardless of whether it's ground or not
Even in simple type theory (hence HOL?), there’s a distinction between x : T (a variable) and t : T (a term). It’s not just a distinction for programmers.
from HOL4:
> ``1 : num``;
val it = “1”: term
> ``x : num``;
val it = “x”: term
the x is a (free) variable, sure, which is a form of term in HOL.
Variables are terms in Coq too, but a context is a list of variable : type pairs (with optional terms as definition bodies).
To recap: all variables are terms; not all terms are variables; your context can only declare variables, not arbitrary terms.
I think you have to qualify "context" as being in some abstract type-theoretic sense or at the implementation level then, since users will be faced with stuff like this:
Section sec.
Context (x := 5 + 3).
Lemma x8 : x = 8.
Proof.
(* context shows x := 5 + 3 : nat *)
reflexivity.
Qed.
End sec.
what qualification is needed?
your context can only declare variables, not arbitrary terms.
Do you think x in my example is a variable? It sure seems declared by the context.
x is a variable yes
the usual meaning of variable is that it is given meaning by substitution. But I don't see how I can substitute x for anything (it's already given a definite meaning)
There is only one part of Coq's logic that is able to distinguish between variable and non-variable: dependent pattern-matching. And according to its rules x := 5 + 3 is not a variable.
There is only one part of Coq's logic that is able to distinguish between variable and non-variable: dependent pattern-matching.
how so?
The abstraction that is performed upon the branches of a match is forced to discard any definition and to keep only declarations. (Perhaps it is possible to create a kernel term with branches of the form let x := 5 + 3 in ... instead of fun x => ..., but this is certainly not available from the surface language.)
can you give an example because I don't get it
In a match construct, the return clause as well as the branches are all represented by abstractions. It is impossible to put a let-in construct in there. (At least not in the source language. I remember that, in the early days, SSReflect was wreaking havoc in the kernel by creating branches that were not abstractions.)
that's not the case anymore since 8.14
match branches (and return clause) have exactly the same shape as the context of the corresponding inductive (including let-ins)
except that they don't store any data, just the names (no types, and no body for the let)
Just out of curiosity, does that mean that it would now be possible to lighten the requirement of the return clause being well-typed? (That is something we were discussing with Christine Paulin recently. This would give UIP in the general case, and hopefully without introducing any inconsistency, but who knows.)
what does that mean?
you need the type of the return clause to type check the match
if it's not well-typed...
(also, we already have UIP through SProp if you set the right flag)
you could say that the return clause needs to be well-typed after substituting for each branch and for the return type, but not in its abstracted form
how do you guarantee it's still well-typed as a type applied to variables?
so you get
Definition K {A x} (e : x = x :> A) : e = eq_refl
:= match e as e in _ = y return e = eq_refl with eq_refl => eq_refl end.
(* inside branch: eq_refl = eq_refl
return type: e = eq_refl with e : x = x *)
Pierre-Marie Pédrot said:
how do you guarantee it's still well-typed as a type applied to variables?
not sure I understand the question but I guess you don't
give me the typing rule of a match
in your example if e : x = y it doesn't type check
(I know that historically in the old CIC versions, pattern-matching was enforcing UIP, but I never quite understood the rule)
something like
Γ ⊢ e : Ind params indices
Γ ⊢ rt[varis:=indices, vare:=e] : Type
Γ, patvars ⊢ branch : rt[varis:=indices of (c params patvars), vare:=(c params patvars)]
(implies Γ, patvars ⊢ rt[...c params patvars...] : Type)
————————————————————————————————————————————————————————————————————————————
Γ ⊢ match e in Ind _ varis as vare return rt with c _ patvars => branch end : rt[varis:=indices, vare:=e]
you forgot the type of the whole expression...
lol
how do you know that the substitution for the type of branches is valid?
??
I mean, you check it already when you say 'implies ...'
the usual Γ ⊢ e : t ==> Γ ⊢ t : Type theorem
I see
the inference algorithm probably checks it semi-redundantly since it has to convert with whatever gets inferred for the branch
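(for reference, a completely standard instance of that rule, where the return clause only makes sense once the index is substituted in each branch; a home-made vector type so the snippet is self-contained, and the names are mine:)
Inductive vec (A : Type) : nat -> Type :=
| vnil : vec A 0
| vcons : A -> forall n, vec A n -> vec A (S n).
Definition vhead {A n} (v : vec A (S n)) : A :=
match v in vec _ m return match m with 0 => unit | S _ => A end with
| vnil _ => tt
| vcons _ a _ _ => a
end.
(* in the vnil branch the return clause becomes unit, in the vcons branch it
becomes A, and for the whole expression it is A since m := S n *)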
As Gaetan said, what I meant is that the n+1 instances of the return clause are well-typed (with n the number of constructors), but the return clause itself does not need to be well-typed in isolation.
Christine's claim is that this is strictly equivalent to UIP. (But I have yet to double check.)
at least propositional UIP
Yes.
Isn't this what agda people call the difference between pattern matching and pattern matching without K? https://agda.readthedocs.io/en/v2.6.1/language/without-k.html
not sure how you define being equivalent, because you'll probably get more reduction rules in one case still?
@Ali Caglayan at least that was the way people were referring to these different things 30 years ago
claims that "pattern-matching" (i.e. with K) was more expressive than MLTT recursors
The reduction rules do not depend on the types, so I do not see how it would change anything in that regard.
wdym? You can phrase UIP as "there exists an axiom uip : forall A (x : A) (e : x = x), e = refl"
but this breaks canonicity immediately
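for instance (a throwaway sketch; the axiom name is arbitrary):
Axiom uip : forall (A : Type) (x : A) (e : x = x), e = eq_refl.
Check uip nat 0 eq_refl.
(* uip nat 0 eq_refl : eq_refl = eq_refl
a closed proof of an equality that never reduces to eq_refl,
so canonicity is gone *)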
with the generalized pattern-matching you don't lose canonicity a priori
(you might lose SN, in which case you lose canonicity strictly speaking but you still have progress)
I guess you want uip to literally be refl, but that doesn't quite typecheck right?
Yes, sorry, I thought you meant the reduction rules of the match construct. Indeed, you need a dedicated rule for UIP itself.
this is definitional uip
(I was answering to @Ali Caglayan )
Since you can prove that uip = refl, definitional uip is the only way to have uip and canonicity right?
looks like a legit argument but I am somewhat still unconvinced
is the fact that CIC + pattern-matching with K not canonical a known thing?
uip = refl is not well-typed
If you match on the end points then it is true, no?
hmm, you can prove forall (e : x = x), e = refl
for sure
but by canonicity there is only one such closed e, and that's refl, so your pattern-matching reduces
this should still be canonical
I believe I heard that lean3 does this now
the Lean3 story is a totally different one
they have definitional UIP
Ok I won't open that box here :)
Going back to Guillaume's question, which J rule is the match supposed to be?
The usual one or Christine's one?
i.e. is the first end point of = an index or a parameter?
Having an untyped return clause is like not typing J right?
It is like not typing the argument P of J. J itself is still typed.
Karl Palmskog said:
the usual meaning of variable is that it is given meaning by substitution. But I don't see how I can substitute x for anything (it's already given a definite meaning)
Yes, the context just instructs which substitution to use. "Variable" isn't a semantic or philosophical concept here...
Formally, we let x range over a countable set of variables, and define t ::= x | t t | lambda x. t | let x := t in t | ... — no difference here. The only funky thing is that Coq contexts are given by Gamma ::= <empty> | Gamma, x : t | Gamma, x : t := t.
And indeed, the elaborated term x = 8 (or rather, @eq nat x 8) is the same no matter what x is bound to — depending on Gamma, it might be well-typed or not.
To be precise, I'm talking of the raw term/Curry term/preterm here — some formalisms combine the term and the typing derivation, while I'm talking of the term separate from the typing derivations.
since the term is the same, we can also substitute x by 7 or any other term — even if the result might not be well-typed.
And of course, if a term is typed assuming x := 8, substituting x by 7 won't preserve well-typing.
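(a tiny illustration of that last point, reusing a section definition as above; the section name is mine:)
Section subst_demo.
Context (x := 8).
(* well-typed, because x unfolds to 8 *)
Check (eq_refl : x = 8).
End subst_demo.
(* the same term with 7 substituted for x is no longer well-typed *)
Fail Check (eq_refl : 7 = 8).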
Having said all that, https://coq.inria.fr/refman/language/cic.html#id6 seems to claim that top-level definitions are constants. But that's a bit funny: the module system lets you instantiate Parameter by Definition but "not via constants" such as inductive types and their constructors...
Quotes:
Local context. A local context is an ordered list of declarations of variables. The declaration of a variable 𝑥 is either an assumption, written 𝑥:𝑇 (where 𝑇 is a type) or a definition, written 𝑥:=𝑡:𝑇.
Definitions are written as 𝑐:=𝑡:𝑇, indicating that 𝑐 has the value 𝑡 and type 𝑇. We shall call such names constants.
To be clear: while all this is a matter of definitions, there are pretty standard ways to set things up in core calculi; but since calculi leave out top-level definitions, matters there are less standard :-)
more precisely, standard in the type system community (and probably also in type theory?); I'm not speaking for interactive theorem proving/formal methods/... etc, because as usual they seem like separate communities.
Gaëtan Gilbert said:
the usual Γ ⊢ e : t ==> Γ ⊢ t : Type theorem
Validity is not a thing when you do bidirectional typing. Or rather, not for checking: if you call it (which is what you do with branches, at least in the current situation), then you must ensure that the type you give is well formed. Here it is not… So if you want to do that you’d have to at least separately check that the type you give for each branch is sensible.
Ali Caglayan said:
Since you can prove that uip = refl, definitional uip is the only way to have uip and canonicity right?
Sorry for the late reply, but it looks like this wasn't cleared up. As I understand it, the claim was that ⊢ uip A x e : e = refl, so we should have uip A x e ↝ refl. But I think with a pattern-matching definition of uip, we already have this. Proof sketch: by canonicity (inductively), e must be refl. But in that case, we hit a pattern for uip, and indeed uip A x refl ↝ refl.