Disclaimer: I have a strong suspicion these questions are borne out of ignorance and inexperience, but hopefully they're worthwhile questions nonetheless. (Also, I feel like the below tone is unexpectedly argumentative—please be assured that my questions are sincere.)

As I learn more about Coq, I've started to feel a sort of cognitive dissonance—constructivity is obviously foundational to Coq, but the fact that `Prop` is separate from `Set` makes me wonder how practically relevant it is. E.g. `exists x, P x` means that an `x` has been explicitly provided; but as far as computations are concerned, whatever proof it came from might as well be non-constructive. Unless constructive proofs are a desirable goal in and of themselves (i.e. you'd try for it even if Martin-Löf didn't require it), why not include LEM for proofs from the get-go?

Proof (ir)relevance feels to me like it's related to all this, but thinking of how it fits in just makes me more confused :-P
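A minimal Coq sketch (mine, not from the discussion; the names `extract`/`extract'` are just illustrative) of the separation in question: a `Prop`-level existential cannot be eliminated into an informative type, whereas its `Set`-level counterpart `sig` can.

```
(* Eliminating a Prop-level existential into nat is rejected: *)
Fail Definition extract (P : nat -> Prop) (H : exists x, P x) : nat :=
  match H with ex_intro _ x _ => x end.

(* The informative sigma type does allow the projection: *)
Definition extract' (P : nat -> Prop) (H : {x | P x}) : nat :=
  proj1_sig H.
```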

I have a strong suspicion these questions are borne out of ignorance and inexperience

On the contrary! This line of thinking is interesting and is actually shared by several major Coq developers and contributors today (e.g. @Hugo Herbelin, @Matthieu Sozeau, @Gaëtan Gilbert, @nicolas tabareau). I'll let these experts explain better than I can, but there is this idea that by having separate sorts, you can add all sorts of axioms "for free", including axioms which are supposed to be incompatible (but that won't create inconsistencies as long as they are well separated in different sorts). `SProp` is also an attempt in this direction: it is a sort similar to `Prop`, but where proof irrelevance holds "definitionally". Excluded middle and even stronger classical axioms are also not frowned upon by theoreticians (in particular @Hugo Herbelin), since the constructivist understanding of these things has expanded.

It's a great observation that to preserve the important computational properties of Coq, such as extractability to general-purpose programming languages, it seems like constructiveness of logic in Prop is entirely irrelevant. However, there are various other reasons why it is still desirable to keep Prop constructive in principle. For instance, it allows doing constructive reverse mathematics, where you classify theorems (traditionally of analysis, but also of computability theory and other areas) in terms of the axioms they require, i.e. you prove e.g. that LLPO (a consequence of excluded middle) is equivalent to the intermediate value theorem (I just looked up a random statement from constructive reverse mathematics). But keeping Prop axiom-free also means that one can assume axioms contradicting the law of excluded middle, as is often done in synthetic mathematics (https://ncatlab.org/nlab/show/synthetic+mathematics). So in other words, for me keeping Prop constructive is not about decidedly being against classical axioms, but just about being agnostic towards them.

This "agnostic" constructive mathematics is often connected to Bishop's school of constructive mathematics.

Here's a bit of discussion about possible interpretations of `Prop`: https://github.com/coq/ceps/pull/55/files

A problem with LEM is that computation in Coq can get stuck when adding it as an axiom.

Somewhat related plug: we have a new paper that’s just out of the factory specifically about interaction between universes: https://sozeau.gitlabpages.inria.fr/www/research/publications/drafts/multiverse.pdf

@Matthieu Sozeau Nice! I've only read the intro. Have you considered a categorical semantics?

@Kenji Maillard might answer better. My guess is that we could build a kind of generic CwF easily.

I think the way the people behind `SProp` think about this (at least that's my understanding) is that certain things are "naturally" irrelevant. Think about the h-propositions of HoTT if that means something to you: you can *prove* that those are irrelevant. And since irrelevance is something really useful to know (e.g. knowing that a type has an irrelevant equality gives you plenty of nice properties on it), automatically detecting it is cool. This is what `Prop` does to some extent with this weird sub-singleton elimination rule that lets only irrelevant content leak out, and what `SProp` does even more radically by giving you definitional irrelevance. So in the end `Prop`/`SProp` are a way to segregate the fragments of your theory that are "naturally irrelevant" and do the bookkeeping for you on those.

Whether you want to add classical axioms is somewhat independent of that, but only having them on the irrelevant part of your theory is better in the sense that they should not disturb the computational content of the rest of the system.
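A small Coq illustration (my own, not from the thread) of the sub-singleton elimination rule mentioned above: `eq` lives in `Prop` but has a single constructor, so it may be eliminated into informative types, whereas a `Prop` with two constructors, like `or`, may not.

```
(* eq : singleton elimination lets the cast land in Type *)
Definition cast {A B : Type} (e : A = B) : A -> B :=
  match e in (_ = Y) return (A -> Y) with
  | eq_refl => fun a => a
  end.

(* or : two constructors, so elimination into bool is rejected *)
Fail Definition choose (P Q : Prop) (H : P \/ Q) : bool :=
  match H with
  | or_introl _ => true
  | or_intror _ => false
  end.
```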

A CwF only gives you type dependency. It would be interesting to know in which categorical structures your type theory can be interpreted.

E.g. @Mike Shulman type theoretic model toposes

@Bas Spitters the original intended categorical model for this work is a variation on CwF with a single category of contexts (as usual) but multiple presheaves of types and terms, one for each sort. Then we would ask for a slightly peculiar representability condition, namely that each of these families is represented by a universe, a type in some distinguished family. Additional structure such as $\Pi$ or Id-types should be added afterwards.

Does it matter whether it would be Russell, Tarski or Coquand style universes?

I don't think it matters much for what we are doing: our main technical source of inspiration is Abel et al's proof of decidability for MLTT, which uses a Russell presentation for its universe; I personally have a natural inclination for Tarski-style universes, and Coquand's style is a compromise we took in the paper.

I didn't get that far in the paper yet. Great to see you are using Coquand style!

This paper looks interesting! I'm just starting out reading it, but I was a bit puzzled by this sentence:

...mixing classical logic with the axiom of choice... is a well-known source of foundational problems.

Surely this must be saying something other than it sounds like?

I guess the "type-theoretic"/non-truncated "axiom" of choice is meant there?

@Matthieu Sozeau Does MuTT shed any light on CEP 55 ?

Matthieu Sozeau said:

Somewhat related plug: we have a new paper that’s just out of the factory specifically about interaction between universes: https://sozeau.gitlabpages.inria.fr/www/research/publications/drafts/multiverse.pdf

Looks very interesting, thanks for the link!

Yannick Forster said:

So in other words, for me keeping Prop constructive is not about decidedly being against classical axioms, but just about being agnostic towards them.

Out of curiosity, how widespread is this attitude in the Coq community? As a newcomer, it seems to me like there's a strong cultural lean (no pun intended) towards purely-constructive formalism (rather than simple agnosticism), at least when formalizing computer science–specific material—is that actually the case?

e.g. if a non-constructive approach to some problem is significantly easier than the alternative, would such a solution be considered "valuable" by the general Coq community?

Classical logic is just not very useful in typical PL formalizations, so people tend to avoid it.

Matthieu Sozeau said:

I guess the "type-theoretic"/non-truncated "axiom" of choice is meant there?

But since that one is *provable*, whether or not one has classical logic, it surely seems even *less* problematic to mix it with anything else?
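For reference, here is the standard proof of the non-truncated ("type-theoretic") axiom of choice in Coq — since the witness lives in an informative sigma type, the choice function can simply be read off. This is a well-known term; I'm writing it out here as a sketch:

```
Definition type_theoretic_AC (A B : Type) (R : A -> B -> Type)
  (H : forall x : A, {y : B & R x y}) :
  {f : A -> B & forall x : A, R x (f x)} :=
  existT _ (fun x => projT1 (H x)) (fun x => projT2 (H x)).
```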

The problem would be to have fully informative excluded middle, as provided by control operators for some simply typed languages (e.g. `call/cc` or the $\mu$ operator in the $\lambda\mu$-calculus), together with unrestricted dependent elimination as we are used to in MLTT.

So perhaps it should say "mixing *computational* classical logic..."?

Oh, so it's a PL-specific custom rather than just Coq... that makes sense!

Yannick Forster said:

It's a great observation that to preserve the important computational properties of Coq such as extractability to general-purpose programming languages, it seems like constructiveness of logic in Prop is entirely irrelevant.

Incidentally, I was reading through the "Lean is like Coq but better" coq-club thread from last year when I came across https://sympa.inria.fr/sympa/arc/coq-club/2020-01/msg00048.html:

The standard computational uses of Prop are Acc (the inductive type of well-founded accessibility proofs, needed for running well-founded recursion) and eq (which, unlike Acc, one might claim should be erased in classical / anti-univalent settings).

So does this imply that computability is definitively necessary for proofs after all (even if, say, we're eventually going to erase them during extraction)?

In my case, i.e. coming to logic and proof systems from the side of programming languages, constructive logic is simply the more natural choice. Oddly enough, it also makes impredicativity seem quite natural.

Joshua Grosso said:

So does this imply that computability is definitively necessary for proofs after all (even if. say, we're eventually going to erase them during extraction)?

I think this depends on what you mean by "computability" :) I'm trying to give my explanation of what @Jason Gross said there, so you can continue thinking about it. Maybe afterwards Jason wants to correct me if I misinterpreted. I think the point Jason is making is that for computation *inside Coq* one sometimes has to compute on proofs. For instance to write programs by well-founded recursion and then compute values inside Coq. Here is a trivial program computing the double of `x` via an (unnecessary) well-founded recursion:

```
Require Import Arith Lia Program.
Obligation Tactic := cbn.
Program Fixpoint multwo x {wf lt x (* 4 *)} : {y | y = 2 * x} :=
  match x with
  | 0 => exist _ x _ (* 1 *)
  | S n => exist _ (S (S (proj1_sig (multwo n (* 2 *))))) _ (* 3 *)
  end.
Next Obligation.
  intros. lia.
Qed.
Next Obligation.
  intros. lia.
Qed.
Next Obligation.
  intros. destruct multwo; cbn; lia.
Qed.
Next Obligation.
  apply lt_wf.
Defined.
Compute (proj1_sig (multwo 2)).
```

Try closing the last obligation with `Qed`, which will break computation inside Coq. However, `Require Import Extraction. Recursive Extraction multwo.` will always yield an OCaml program which can compute, because the proof is erased.

Instead of making the well-foundedness proof of `lt` opaque, we could as well prove it using classical logic. Here's a simple example doing that:

```
Axiom DN : forall P : Prop, ~~ P -> P.
Program Fixpoint multwobad x {wf lt x (* 4 *)} : {y | y = 2 * x} :=
  match x with
  | 0 => exist _ x _ (* 1 *)
  | S n => exist _ (S (S (proj1_sig (multwobad n (* 2 *))))) _ (* 3 *)
  end.
Next Obligation.
  intros. lia.
Qed.
Next Obligation.
  intros. lia.
Qed.
Next Obligation.
  intros. destruct multwobad; cbn; lia.
Qed.
Next Obligation.
  eapply DN.
  intros H. apply H.
  apply lt_wf.
Defined.
Compute (proj1_sig (multwobad 2)).
Recursive Extraction multwobad.
```

As you can see, computation in Coq is stuck once again because the `DN` axiom cannot be computed, but extraction to OCaml still works fine.

If somebody has an example where using classical logic in a proof breaks extraction to OCaml, I'd be interested in that, but right now I don't remember ever seeing that.

Yannick Forster said:

I think this depends on what you mean by "computability" :) I'm trying to give my explanation of what Jason Gross said there, so you can continue thinking about it. Maybe afterwards Jason wants to correct me if I misinterpreted. I think the point Jason is making is that for computation *inside Coq* one sometimes has to compute on proofs. For instance to write programs by well-founded recursion and then compute values inside Coq. Here is a trivial program computing the double of `x` via an (unnecessary) well-founded recursion:

I wasn't aware that computation could fail even when extraction succeeds. Thanks for the explanation, that's very interesting!

Intuitively, this situation seems weird to me—are there theoretical ramifications (e.g. you 100% need proofs to be constructive in order to have recursion without unsoundness), or is it just a technical curiosity?

There is no unsoundness involved, but it's also more than a technical curiosity.

Intuitively, every Coq program has to be structurally recursive. Now if an algorithm isn't structurally recursive one can still implement it in Coq by (1) proving that the recursion relation of the algorithm is terminating and (2) then implementing the algorithm by recursion on the termination proof. Classical termination proofs are completely fine and by no means unsound - but of course recursion on them will get stuck when evaluated. The extraction however is fine, and we obtain an OCaml program with a classical termination proof.
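To make step (2) concrete, here is a sketch (mine, not from the thread; `half` is a hypothetical example name) of implementing a non-structural algorithm by recursion on the accessibility proof. The inner `match` on `x` is dependent so that the recursive call is guarded by the `Acc` argument, mirroring how the standard library's `Fix_F` is defined.

```
Require Import Arith Wf_nat.

(* Structural recursion on the Acc proof, not on x itself. *)
Fixpoint half (x : nat) (a : Acc lt x) {struct a} : nat :=
  match x as x' return Acc lt x' -> nat with
  | S (S n) => fun a => S (half n (Acc_inv a (le_S _ _ (le_n _))))
  | _ => fun _ => 0
  end a.

Definition half' (x : nat) : nat := half x (lt_wf x).
Compute (half' 10). (* should reduce, since lt_wf is transparent *)
```

If the termination proof `lt_wf` were replaced by an opaque or classical one, `Compute` would get stuck here in exactly the way described above, while extraction would still erase the `Acc` argument.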

In ConCert (https://github.com/AU-COBRA/ConCert) we're using MetaCoq to do the extraction internally in Coq. So the extracted term is again a Coq function, and one can compute with it.

Can you really extract `multwobad` from above to an executable Coq function of type `nat -> nat`, @Bas Spitters? Would the extracted function then still depend on `DN`?

Joshua Grosso said:

I wasn't aware that computation could fail even when extraction succeeds. Thanks for the explanation, that's very interesting!

Intuitively, this situation seems weird to me—are there theoretical ramifications (e.g. you 100% need proofs to be constructive in order to have recursion without unsoundness), or is it just a technical curiosity?

The key problem is that *normalization* can fail when evaluation can succeed — and typical extraction targets only use evaluation: in an evaluator, reducing a match on a stuck proof by ignoring the proof is not a problem — if your axioms are sound! However, a normalizer must also deal with open terms, and free variables can behave like unsound axioms — for instance, a proof of `forall (H : 0 = 1), False` is totally free to cast a `nat` to a function using the `H` assumption, but the cast had better stay stuck on `H`.
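As a concrete Coq sketch (mine) of the kind of cast mentioned above: under the absurd hypothesis `H : 0 = 1` one can coerce a `nat` into a function. The term is well-typed, but it can only reduce once `H` is instantiated — which, soundly, it never will be.

```
(* The return predicate sends 0 to nat and successors to nat -> nat,
   so the eq_refl branch returns n : nat while the whole match
   has type nat -> nat. *)
Definition weird_cast (H : 0 = 1) (n : nat) : nat -> nat :=
  match H in (_ = y) return (match y with
                             | 0 => nat
                             | S _ => nat -> nat
                             end) with
  | eq_refl => n
  end.
```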

Which I guess implies you _could_ write an evaluator for Coq functions that works like extraction, and make that available through a Coq command (just not in terms).

This would probably break subject reduction.

Extraction is essentially a realizability model, so the trade-off is that you get erasure at the cost of undecidability of type-checking.

I did say for closed terms, and without going under binders. That might be what happens in Lean; there are formal proofs that similar things work for Scala, and intuitively I'd expect the same to happen even in ETT (not that we know how to prove that).

you're right that in ITT intermediate steps might violate SR, but I'd still expect type safety.

AFAIK, endowing ETT with a weak head reduction that preserves typing derivations in some sense is still an open problem.

It has been debated a lot, but the experts here are probably @Théo Winterhalter and @Kenji Maillard

Dependent types are really troublesome when it comes to usual PL properties.

Are there counterexamples to type soundness, or do we just lack proofs that type soundness works?

We just don't know how to retype a term in general, so it's a lack of proof so far.

But it's a very subtle problem that depends a lot on the way it is phrased.

Pierre-Marie Pédrot said:

AFAIK, endowing ETT with a weak head reduction that preserves typing derivations in some sense is still an open problem.

I did propose https://github.com/TheoWinterhalter/sirtt at TYPES, a variant of ETT with strong normalisation, but I haven't read the whole discussion so it might be completely irrelevant.

Plain ETT without type annotations on applications and binders is pretty broken reduction-wise.

it might be completely irrelevant

it would thus be totally on-topic :goat:

Can you really extract `multwobad` from above to an executable Coq function of type `nat -> nat`, @Bas Spitters? Would the extracted function then still depend on `DN`?

@Yannick Forster even though in ConCert we have all the extraction pipeline in Coq, what we get is an AST of the extracted function in Coq (it's basically lambda-box + type annotations). From this syntax, we can get code in one of the target languages. So we don't unquote the result back to Coq. However, in the case of `multwobad` it actually should be possible to unquote it back (given that we have all required typing information), since what we get using the standard Coq extraction to OCaml is

```
(** val multwobad : nat -> nat **)
let rec multwobad x = match x with
| O -> x
| S n -> S (S (multwobad n))
```

Which clearly can be written in Coq as

```
Fixpoint multwobad (x : nat) : nat :=
  match x with
  | O => x
  | S n => S (S (multwobad n))
  end.
```

But this is only because wf recursion is not necessary for `multwobad`, as was pointed out in the example.
