When we say that x : nat, we know for sure that x can only be either O or S x', where x' is another nat; that is because nat has two constructors, and we simply do case analysis on both cases.
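For instance, a minimal illustration of that case analysis on nat (a throwaway example of my own):
Goal forall n : nat, n = O \/ exists n', n = S n'.
Proof.
  destruct n as [| n'].
  - left. reflexivity.
  - right. exists n'. reflexivity.
Qed.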
Why can't we say the same thing for the identity type, argue that there is only one case, and thus prove UIP?
Goal forall A (x y :A) (p1 p2 : x = y), p1 = p2.
Proof.
destruct p1.
Fail destruct p2.
(* Cannot instantiate metavariable P of type "forall a : A, x = a -> Prop" with
abstraction "fun (x : A) (p2 : x = x) => eq_refl = p2" of incompatible type
"forall x : A, x = x -> Prop".
*)
That should have worked, and I fail to understand what the problem is!
instead of tactics, try to write the match terms out, with full return annotations
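For instance, here is a hedged sketch of the first step written as an explicit match with its full return annotation (the statement and the name uip_first_step are mine): it reduces UIP to its reflexive instance, and that remaining instance is exactly what you will not manage to write as a second match on p2.
Definition uip_first_step (A : Type) (x y : A) (p1 : x = y)
  : (forall p2 : x = x, eq_refl = p2) -> forall p2 : x = y, p1 = p2
  := match p1 as p in (_ = y0)
           return ((forall p2 : x = x, eq_refl = p2) -> forall p2 : x = y0, p = p2)
     with
     | eq_refl => fun H => H
     end.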
Note that UIP _is_ provable for nat — in fact, for any type with decidable equality — and that general statement is called Hedberg's theorem.
I've never learned to _write_ that proof (for nat or in general), but both stdlib and stdpp have proofs of this idea (with different packagings).
The proof of Hedberg's theorem is not that hard: basically, you rectify the wrong equality in the return clause, since you have a way to produce a canonical inhabitant of the equality type from any proof of equality.
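To make that concrete, here is a minimal sketch of that idea (my own packaging and names; stdlib and stdpp organise it differently), assuming a decision procedure dec:
Section Hedberg_sketch.
  Variable A : Type.
  Variable dec : forall x y : A, {x = y} + {x <> y}.

  (* "rectify" any proof of x = y into the canonical one produced by dec *)
  Definition canon {x y : A} (p : x = y) : x = y :=
    match dec x y with
    | left e => e
    | right n => match n p with end
    end.

  (* canon does not depend on its argument *)
  Lemma canon_const {x y : A} (p q : x = y) : canon p = canon q.
  Proof.
    unfold canon. destruct (dec x y) as [e | n].
    - reflexivity.
    - destruct (n p).
  Qed.

  Lemma trans_sym_refl {B : Type} {a b : B} (e : a = b) :
    eq_trans (eq_sym e) e = eq_refl.
  Proof. destruct e. reflexivity. Qed.

  (* every proof is recoverable, uniformly, from its rectified form *)
  Lemma canon_inv {x y : A} (p : x = y) :
    p = eq_trans (eq_sym (canon (eq_refl x))) (canon p).
  Proof. destruct p. symmetry. apply trans_sym_refl. Qed.

  Theorem uip_sketch {x y : A} (p q : x = y) : p = q.
  Proof.
    rewrite (canon_inv p), (canon_inv q), (canon_const p q). reflexivity.
  Qed.
End Hedberg_sketch.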
stdlib's statement is UIP_dec (in theories/Logic/Eqdep_dec.v); stdpp's statement is eq_pi (in stdpp/stdpp/proof_irrel.v).
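For example, a hedged usage sketch of the stdlib version (the instantiation at nat and the name UIP_nat are mine):
From Coq Require Import Eqdep_dec Arith.

Lemma UIP_nat (n m : nat) (p q : n = m) : p = q.
Proof. apply UIP_dec. apply Nat.eq_dec. Qed.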
Besides becoming an expert in dependent pattern-matching to convince yourself that you can't do it, note that it took until 1996 to confirm UIP isn't derivable, so it's no wonder it's not that obvious https://ncatlab.org/nlab/files/HofmannStreicherGroupoidInterpretation.pdf
I'm sure @Li-yao knows, but understanding dependent pattern matching (and any syntactic reasoning) only explains why _this proof_ (or similar ones) won't work — UIP is independent, so Hofmann and Streicher needed a countermodel to UIP.
Is there no hope for a purely syntactic proof?
I googled "syntactic independence proofs" and learned that at least one exists: https://dpt-info.di.unistra.fr/~narboux/slides/Herbrand-Euclid-vulgarization.pdf. Slide 9 has a toy example, but I didn't get through the real thing yet (work calls)
Can't we do something like what Boulier et al. did in their "next 700 models" paper: define a new Eq type whose refl constructor carries an additional dummy Boolean, and then define the usual/normal constructor and eliminator on top of it?
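Concretely, the proposal might look something like this (a rough sketch, not the paper's actual construction; Eqb, reflb and refl' are made-up names):
Inductive Eqb {A : Type} (x : A) : A -> Type :=
| reflb : bool -> Eqb x x.

(* the "usual" constructor, fixing the dummy Boolean *)
Definition refl' {A : Type} (x : A) : Eqb x x := reflb x true.
(* Whether the usual eliminator can then be defined on top of refl' is the delicate part; see the replies below. *)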
that's still a (syntactic) model, but I imagine it could work
That seems to be what's happening in the proof of Theorem 5.1 of Hofmann & Streicher.
Since equality of nats has UIP, enriching every Eq with a bool can't work, I think.
^ this. You need to preserve this additional structure everywhere, so it's very non-trivial (if doable).
I am not aware of a syntactic model of MLTT with one universe + ¬ UIP, i.e. one where we interpret conversion as some decidable conversion in a target type theory, and it looks very hard to do.
even doing a groupoid model is going to break the definitional J rule for refl.
Gaëtan Gilbert said:
instead of tactics, try to write the match terms out, with full return annotations
I guess this is even more reason to learn the match ... with ... syntax.
you can also try to write it using
Definition eq_match {A : Type} (x : A) (P : forall y, x = y -> Type)
  (v : P x eq_refl) (y : A) (e : x = y) : P y e :=
  match e with eq_refl => v end.
which provides the full power of matching on eq (the autogenerated eq_rect is not dependent enough because of a generation heuristic when the inductive is Prop)
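For example (a hedged sketch; the statement and the name uip_from_refl are mine), eq_match is enough to redo the first step of the goal above, reducing UIP to its reflexive instance:
Definition uip_from_refl {A : Type} (x : A)
  (H : forall p : x = x, eq_refl = p)
  : forall (y : A) (p1 p2 : x = y), p1 = p2
  := fun y p1 => eq_match x (fun y p1 => forall p2 : x = y, p1 = p2) H y p1.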
I don't think you can prove independence results syntactically, but I'm not an expert either
Indeed you can, that's the whole point of syntactic models.
Beyond syntactic models — which _are_ models — I linked earlier a pure proof-theoretic approach — that is, one that does _not_ construct a model (https://coq.zulipchat.com/#narrow/stream/237655-Miscellaneous/topic/Why.20is.20UIP.20not.20a.20lemma/near/315443781). I point this out because, like @Xuanrui Qi , I didn't know this was possible.
For this kind of result to hold, you still need some form of cut elimination / strong normalization, and for this you will never escape from a model somewhere.
Good point: "there exists an unprovable statement in X" implies "X is consistent". And if your theory extends PA, a consistency proof can't be formalized purely in PA — so it can't be _purely_ proof-theoretic/syntactic.
Right? And logical relations and syntactic models escape that by exceeding PA
But I haven't studied this "properly" — I learned this point about PA from "Proofs and Types", but I might be missing something. And whether "syntactic proofs" implies "formalizable in PA" (or even PRA?) is super-hazy.
I'm not sure what you mean, but ultimately everything is syntactic. Semantics is just the syntax in the meta-theory. The point about syntactic models is that they are better behaved, as they lower the expressivity requirements on the meta-theory by giving a relative proof of consistency (or whatever).
It's very easy to reach confusing situations in this kind of discussion, since the very notion of "syntactic" is just like "truth", it's not absolute but relative to some ambient theory.
I agree it's relative, so for this discussion I can choose the ambient metatheory to be about a (Platonist?) world with genuinely uncountable objects.
People are free to choose strict finitism/formalism, but that's philosophy outside the maths: for the actual maths I prefer things that people can agree on regardless of their philosophy.
Regardless, "you can/cannot construct a syntactic model using only PA" should be a precise question?
(I confess I've never learned how "syntactic" proof theory can be carried out in PA, but finitists seem adamant they did).
Pierre-Marie Pédrot said:
For this kind of result to hold, you still need some form of cut elimination / strong normalization, and for this you will never escape from a model somewhere.
What do you mean by this? I would say the opposite. To me, ordinal analysis is relatively far from model theory.
And ordinal analysis hasn't been brought up. EDIT: I was wrong here
more helpfully: at least as I understand from Wikipedia, Coq's infinitely more powerful than the systems ordinal analysis studies — so we don't know "how infinitely"
I'm not an expert, but that's what I gather from
Most theories capable of describing the power set of the natural numbers have proof-theoretic ordinals that are so large that no explicit combinatorial description has yet been given.
and from the fact that Coq can go quite a bit beyond nat -> Prop
https://en.wikipedia.org/wiki/Ordinal_analysis#Theories_with_larger_proof-theoretic_ordinals
back to the topic, most proofs of strong normalization proceed by logical relations (or one of the twenty other names for it), which are ways to construct models. For systems powerful enough (like System F, and even more so for Coq), we can _prove_ that pure proof theory (anything formalizable in PA) isn't sufficient: I think System T is already too powerful.
(technically, logical relations are often built on operational semantics for term reduction — but the logical relation remains rather semantic)
but corrections welcome — I understand the "logical relations" part much better than the ordinal analysis part
looking at https://en.wikipedia.org/wiki/Gentzen%27s_consistency_proof I owe @Patrick Nicodemus an apology for my comment. It's fair to say that some cut elimination proofs involve ordinal analysis, but I think the ones we are discussing do not.
To prove consistency of System F's impredicativity, for instance, instead of using a different system, we map object-language impredicativity to meta-language impredicativity (though in some proofs you can use a slightly weaker form).
Paolo Giarrusso said:
(I confess I've never learned how "syntactic" proof theory can be carried out in PA, but finitists seem adamant they did).
Gentzen's proof is something like: we define a linear ordering "<_C" on natural numbers, C for "complexity", which we, working outside the formalism, can "easily see" is an embedding of a certain countable ordinal into ℕ. Then we associate to each proof a measure of its complexity, which is an element of this ordinal. We prove that all the various reductions strictly reduce the complexity, so that a sequence of reductions of a proof term gives a sequence of decreasing elements of the ordinal. Since the ordinal is well founded, the sequence of reductions must eventually terminate.
All of this coding, and the proof that the translation from proofs into (ℕ, <_C) is an order homomorphism, can be done in primitive recursive arithmetic. That is what finitists mean by saying it is a finitistic proof of consistency. However, one cannot prove in PA that the embedded ordinal is actually well-founded. More specifically, for any definable binary relation on ℕ one can formulate the accessibility predicate and then ask whether all natural numbers are accessible, or whether all natural numbers m with m <_C n are accessible for some fixed n (so an initial segment of the ordering), and for our 'ordinal' we cannot prove this. Thus, this is where incompleteness rears its head.
So it is a finitistic proof except that one needs an extremely strong induction principle to carry it out.
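(In symbols, roughly: the principle needed is transfinite induction along <_C up to ε₀, schematically (∀n. (∀m. m <_C n → φ(m)) → φ(n)) → ∀n. φ(n) for the relevant predicates φ; PA does prove this for each fixed initial segment below ε₀, just not for the whole ordering.)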
I've often wondered how that principle compares with structural induction on PA_ω proof trees — those are infinitely branching trees, so structural induction on them is transfinite
see e.g. http://lists.seas.upenn.edu/pipermail/types-list/2018/001985.html.
structural induction is, in a precise sense, equivalent to ordinal induction
also https://www.cs.bham.ac.uk/~mhe/papers/ordinals/ordinals.html#8626 seems to encode ε₀ using
data B : Set where
  Z : B
  S : B → B
  L : (ℕ → B) → B
where L makes this an infinitely branching node, similarly to what we see in PA_ω.
either way, thanks for the explanation!