I have a relation, let's call it `P : T -> T -> Prop`, where `T` is some type. I am formalizing a paper proof about `P`, and at some point it says "let `s` be the set of pairs `(a, b)` such that `P a b` holds (`a` and `b` restricted to a specific finite set)". I am trying to create this set `s` in Coq, but the only way I can think of doing it assumes `P` is decidable, in the sense of `{P a b} + {~ P a b}`. This seems stronger than simply assuming the excluded middle axiom, which is what I was hoping to assume.

1. Am I right that in classical logic it is fine to create such a set `s` without assuming the decidability of `P`?
2. Is there a way of doing that in Coq?

For 1., classical logic in Type implies that all predicates are decidable, so it's trivially correct.

For 2., there is not enough information in your post but the natural way to do that in type theory would simply be to declare your type to be `{p : T * T | P (fst p) (snd p)}` or something like that.

I don't know how much of set theory you're using, but most of the time just looking at subsets captured via unary predicates is enough.

In this view, `s` is trivial, it's just `s : T * T -> Prop := fun p => P (fst p) (snd p)`.

Pierre-Marie Pédrot said:

For 1., classical logic in Type implies that all predicates are decidable, so it's trivially correct.

Isn't "classical logic in Type" precisely `{p} + {~ p}`? In that sense I agree, but here I was expecting that LEM in `Prop` would be enough.

As for 2, you're right that `s` is kind of trivial. I should have given more details. Here is what I would do if `P` were decidable:

```
From mathcomp Require Import all_ssreflect finmap.
Variable T : choiceType.
Variable P : T -> T -> Prop.
Axiom decP : forall (a b : T), decidable (P a b).
Definition s (a : T) (B : {fset T}) := [fset b in B | decP a b]%fset.
```

Obtaining an `fset` was my goal, as it meshes well with the rest of my formalized proof. If that was a bad decision to begin with, well... I will have to rethink basically everything.

Originally I was assuming `LEM` would imply `decP`, but on reflection it seems clear that it doesn't, because the former is in `Prop` and the latter is in `Set`.

I think that declaring the type to be a sigma type doesn't imply that it is inhabited, which you need.

I dunno, but I guess I would use the `pick` infrastructure for choicetypes. That, and your relation being boolean-valued, should provide a convenient way to say "pick a set of pairs `a, b`".

I was lazy so did it for finset rather than finmap, sorry:

```
Section NE.
Variables T : finType.
Variable P : rel T.
Definition someAB : { X : {set T * T} | forall x, x \in X -> P x.1 x.2 }.
Proof. by exists [set x : T * T | P x.1 x.2] => x; rewrite inE. Defined.
End NE.
```

Having the relation be boolean-valued is precisely the thing I don't see how to do.

Oh

```
Axiom EM : forall (P : Prop), {P} + {~ P}.
Definition to_rel T (P : T -> T -> Prop) : rel T :=
  fun x y =>
    match EM (P x y) with
    | left _ => true
    | right _ => false
    end.
```

So `EM` in `Set` is ok? I thought only the `Prop` version was allowed for some reason.

```
Lemma to_relP T (P : T -> T -> Prop) x y : reflect (P x y) (to_rel P x y).
Proof. by rewrite /to_rel; case: EM; [left | right]. Qed.
```

It's because here this version of `EM` says that every proposition is decidable in the sense that you can extract either a proof of `P` or a proof of `~ P`. This isn't as harmless as `forall P, P \/ ~ P`.
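The difference shows up in what each axiom lets you define. A minimal sketch (the axiom names here are mine): the informative version can be case-analysed to build a `bool`, while the propositional version cannot, because Coq forbids eliminating a `Prop` disjunction into `Type`.

```
(* Informative excluded middle: lives in Type, so it can build a bool. *)
Axiom EM_informative : forall P : Prop, {P} + {~ P}.

Definition oracle (P : Prop) : bool :=
  if EM_informative P then true else false.

(* Propositional excluded middle: the analogous definition is rejected,
   since a Prop-level disjunction cannot be eliminated into bool. *)
Axiom EM_prop : forall P : Prop, P \/ ~ P.

Fail Definition oracle' (P : Prop) : bool :=
  match EM_prop P with
  | or_introl _ => true
  | or_intror _ => false
  end.
```

The `Fail` command documents that the second definition is rejected by the kernel, which is exactly the Prop/Type barrier being discussed.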

@Ana de Almeida Borges it is stronger of course, see the beginning of `Logic/ClassicalDescription.v` in the stdlib

or in particular this one lemma:

```
Theorem constructive_definite_descr_excluded_middle :
(forall A : Type, ConstructiveDefiniteDescription_on A) ->
(forall P:Prop, P \/ ~ P) -> (forall P:Prop, {P} + {~ P}).
```

In general it is a safe procedure to construct oracles; this is, for example, the way math-comp analysis decides reals

but as always, you need to be aware of the global set of axioms you are using in the proof

and check that the final output of `Print Assumptions` yields a consistent axiom set

Ok, this looks like a very worthwhile file for me to read, thank you!

This whole thing is messing with my head because the proof I'm formalizing is later used to prove the decidability of `P`. So it seems very wrong to have an axiom saying `decidable (P a b)` lying around, or the stronger version of `EM` that implies everything is decidable.

That's an interesting point you make @Ana de Almeida Borges! I'd dare say formal proof engineers run into this kind of thing often.

Maybe you'll have to untie the knot somehow, and then prove the decidability of P directly, which you can use later. But indeed, who knows; usually it takes a lot of walks in the forest to figure things out, at least for me.

So what is actually happening here? Is it a clash between classical logic as it's usually presented and type theory? If you read this paper proof as a mathematician you don't see anything wrong, but formalizing it in Coq necessarily makes it circular?

Actually, taking a step back: I am pretty convinced not everything is decidable. Doesn't this version of `EM` allow one to solve the halting problem?

Yes, as I was saying this is assuming the existence of a function which determines if a proposition is provable or not, which is not the same as excluded middle.

And how is assuming the existence of such a function consistent?

Ana de Almeida Borges said:

And how is assuming the existence of such a function consistent?

I think there are many possible ways to see it, but indeed it breaks nothing

@Ana de Almeida Borges I think that it'd be easier if we could get a glimpse of the theorem you want to prove. But I believe you're implicitly assuming the synthetic computability approach, where Type-level functions in Coq represent computable functions.

Algebraically, it can be understood as the inclusion of Heyting algebras into Boolean algebras.

Obviously if you assume EM in Type you break this interpretation.

Yup, as Pierre-Marie says it is safe as long as you don't use any other axioms that are known to be "anti-classical"

I think that it'd be easier if we could get a glimpse of the theorem you want to prove.

It's a Kripke completeness proof for a modal logic. I think the relevant details for this discussion are fairly standard. So you start with an unprovable formula and build a term model where the formula isn't satisfied. The worlds of the model are sets of consequences of certain formulas. Thus the set `s` I mentioned above would be a world of the model. For this particular logic you can build finite counter-models, so you get decidability via Post's Theorem. Lemma 5.5 of this paper, although hopefully we can have a meaningful conversation without anyone needing to read it.

But I believe you're implicitly assuming the synthetic computability approach, where Type-level functions in Coq represent computable functions.

I probably am, since I'm failing to think of an alternative mental model. Or rather, it at least feels true assuming there are no axioms lying around. Evidently my understanding of what happens when one adds axioms is lacking.

You can think of classical logic as being a subset of intuitionistic logic

when you add EM

a lot of things "collapse"

thus you get a boolean system, where every proposition can be decided

this is for simpler systems than the CoC

Indeed Coq is weaker than your mental model

_internally_

think of it in terms of model theory

A model that is "classical" is still a model for Coq

thus, you can add EM, and still show a model for the calculus

there are other models

for example, the initial model, which you could think of as being a form of "intuitionistic logic"

in fact one can prove this "free" or initial model to be indeed the most general one, in the sense that any other model arises from it through restrictions

also, the computable model Pierre-Marie refers to is a model of Coq

however, that model is ruled out if you add EM

so when you add an axiom you change your class of models

of course you gotta be careful not to change it too much, as in for it to become the empty set

I see

Historically, I believe that the consistency of the CoC was a great milestone, as it is a relatively simple calculus, but on the other hand very general and powerful, and it has some very interesting features, as well as downsides of course

So my takeaways are: 1) `EM` is consistent with vanilla Coq, and the only thing I need to do is be careful with any other axioms I might want to add. 2) My mental model of Coq stops being a model upon the addition of `EM`. 3) The framework I usually use to prove things on paper is incompatible with my mental model of Coq.

Indeed, relating to 2), the addition of EM is maybe not such a great disaster; you can still recover a more pleasant model if you look at all the literature starting from Gödel's double negation translation

but yes, axioms do break a lot of properties one would expect for the CiC

however these properties are not provable internally, otherwise you would get a direct proof of contradiction

this is an area where model theory does OK so far [but it requires some esoteric stuff IMHO], but not every person likes model theory, there is a bit of a clash there :)

Well. I'll have to tune my intuitions. Thanks a lot for this conversation!

May I ask on what modal logic you are working on exactly? :) It might be interesting for you to talk to @Dominik Kirst and @Christian Hagemeier who are also working on constructive completeness and decidability proofs for modal logic in Coq at the moment

Regarding what Emilio and Pierre-Marie said, I would argue that adding `EM := forall P : Prop, P \/ ~ P` still keeps computational intuitions about Coq consistent. For all I know, you will still not be able to prove a predicate decidable (`forall x, {p x} + {~ p x}`) which is not actually Turing-machine decidable, even when adding `EM`. So while a fully constructive decidability proof might be interesting, I see no reason why a proof of `EM -> forall x, {p x} + {~ p x}` for a predicate `p` is per se uninteresting

More generally, it can be easier to first use `EM` to get the proof done, and then revisit uses of `EM` to get rid of them

Yannick Forster said:

May I ask on what modal logic you are working on exactly? :)

Yes, of course! It's described here. It's a strictly positive predicate provability logic that we came up with in order to defy Vardanyan's Theorem. My current efforts are here, but they are not particularly public-friendly right now.

Yannick Forster said:

Regarding what Emilio and Pierre-Marie said, I would argue that adding `EM := forall P : Prop, P \/ ~ P` still keeps computational intuitions about Coq consistent. For all I know, you will still not be able to prove a predicate decidable (`forall x, {p x} + {~ p x}`) which is not actually Turing-machine decidable, even when adding `EM`. So while a fully constructive decidability proof might be interesting, I see no reason why a proof of `EM -> forall x, {p x} + {~ p x}` for a predicate `p` is per se uninteresting

I agree, but the issue here is that the paper proof does not show `EM -> forall x, {p x} + {~ p x}`; it shows `(forall P : Prop, {P} + {~ P}) -> forall x, {p x} + {~ p x}`, which has a much more straightforward proof :)

But then the paper proof is just wrong, no? :) Or maybe the paper proof shows `(forall P, {P} + {~P}) -> exists f : nat -> bool, f is Turing-machine computable /\ forall x, p x <-> f x = true`? That's usually how paper proofs about decidability look, but then they sweep the Turing machine under the carpet because it's obvious that it exists

In my experience, the assumptions can always be weakened to just use propositional `EM`. In case it's really crucial in your example, I'd be very interested to discuss in more detail

Yes, hopefully that's not literally what the paper proof does, and I'm keen to figure out exactly what is going on. The only non-constructive step in the proof is a Lindenbaum-type lemma. My original question in this stream was precisely trying to understand how I could prove such a lemma using only `P \/ ~ P`, although I freely admit that I didn't explain this goal very well.

Hi Ana, you could have a look at @Christian Hagemeier 's memo on the completeness proof for intuitionistic epistemic logic. In the Lindenbaum section starting on page 2, you'll see how the construction can be done in Coq, and EM on Prop is actually only needed to establish primeness (Lemma 11).

And one more general comment: a paper proof of decidability of `p` can be translated in three ways into Coq. First, fully analytic, i.e. very faithful to the axioms on paper: you assume arbitrarily strong things like `forall P, {P} + {~P}`, and then you explicitly construct a Turing machine deciding `p`. Secondly, fully synthetic: you assume nothing, and prove `forall x, {p x} + {~ p x}`. And then thirdly, classical but synthetic: `(forall P, P \/ ~ P) -> forall x, {p x} + {~ p x}`. The first route usually is impossible, because constructing Turing machines is way too tedious. So routes 2 and 3 are open. Usually, you then want to translate the points where on paper somebody writes "function from X to Y" _not_ to `X -> Y`, but to a total functional relation `X -> Y -> Prop`. Because once you're committing to the mental model that every function is computable, translating a set-theoretic function (which might not be computable) to `X -> Y` is wrong, but total functional relations will still behave like set-theoretic functions.

I would guess that pinpointing the difference between what e.g. Christian is doing and paper proofs comes down to the question whether a model is a function `X -> bool` (it should not be, because then this would be a decidable model), or a predicate `X -> Prop` (that's completely fine and is also what Christian does)

Yannick Forster said:

Because once you're committing to the mental model that every function is computable, then translating a set-theoretic function (which might not be computable) to `X -> Y` is wrong, but total functional relations will still behave like set-theoretic functions.

Does this approach work out in practice in the absence of unique choice? It sounds promising but I'm a bit puzzled by the fact that you cannot evaluate a functional relation to obtain an actual point of your codomain.

You can evaluate it, but only propositionally. That means you can always obtain an element of the co-domain because it `exists` via totality. What you can indeed not do is compute the element. But in computability proofs, you don't need that, because also on paper nowhere do you have a Turing machine computing elements in the co-domain for a non-computable relation. In particular, when constructing a model of type `X -> Prop`, you can do `M x := forall y, R x y -> ...` and in the omitted part `y` will be the unique element corresponding to `x` in the relation (equivalently, at least using propositional EM: `M x := exists y, R x y /\ ...`)
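To make "evaluate it, but only propositionally" concrete, here is a small self-contained sketch (the names are mine, not from the thread): for a total functional relation, the universal and existential phrasings of the omitted part are provably equivalent, here without even needing EM thanks to the functionality hypothesis.

```
Section Eval.
  Variables (X Y : Type) (R : X -> Y -> Prop).
  Hypothesis Rtot : forall x, exists y, R x y.
  Hypothesis Rfun : forall x y y', R x y -> R x y' -> y = y'.

  (* "Q holds of the unique y related to x" can be said universally
     or existentially; totality and functionality make them coincide. *)
  Lemma eval_equiv (Q : Y -> Prop) (x : X) :
    (forall y, R x y -> Q y) <-> (exists y, R x y /\ Q y).
  Proof.
    split.
    - intros H. destruct (Rtot x) as [y Hy].
      exists y. split; [exact Hy | apply H; exact Hy].
    - intros [y [Hy HQ]] y' Hy'.
      rewrite (Rfun x y' y Hy' Hy). exact HQ.
  Qed.
End Eval.
```

This is the sense in which a total functional relation behaves like a set-theoretic function even though no Coq-level function is ever produced.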

So I'd say unique choice just gives you a more convenient way to write things down, because you can write `R(x)` rather than spelling out "the unique `y` such that `R x y`" every time, and not actually more proof power, at least when you're concerned with a computability / decidability statement

But, just for completeness: unique choice and EM together allow proving that `forall x, {p x} + {~ p x}` is inhabited, which you don't want in a synthetic computability setting. So better to avoid unique choice and spell things out
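For reference, that claim can be stated in Coq roughly as follows. This is a sketch with axiom and lemma names of my own choosing; the proof, which goes through the relation `fun x b => p x <-> b = true` (shown total and functional using `EM`), is left admitted here.

```
Axiom EM : forall P : Prop, P \/ ~ P.

(* Unique choice, stated informatively: a total functional relation
   yields an actual function. *)
Axiom unique_choice :
  forall (X Y : Type) (R : X -> Y -> Prop),
    (forall x, exists! y, R x y) -> { f : X -> Y | forall x, R x (f x) }.

(* Together they inhabit informative decidability for any predicate,
   which is why both are avoided in a synthetic computability setting. *)
Lemma dec_from_em_uc (A : Type) (p : A -> Prop) (x : A) :
  {p x} + {~ p x}.
Proof.
  (* Sketch: apply unique_choice to R := fun x b => p x <-> b = true.
     EM (p x) shows R is total and functional into bool; case analysis
     on the resulting f x then decides p x. *)
Admitted.
```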

Yannick Forster said:

So I'd say unique choice just gives you a more convenient way to write things down, because you can write `R(x)` rather than spelling out "the unique `y` such that `R x y`" every time, and not actually more proof power, at least when you're concerned with a computability / decidability statement

Indeed, and you can recover some convenience using monadic notations. My usual concern is rather with respect to type dependency and dependent elimination; I guess it does not appear too much in this kind of proof.

Dominik Kirst said:

Hi Ana, you could have a look at Christian Hagemeier 's memo on the completeness proof for intuitionistic epistemic logic. In the Lindenbaum section starting on page 2, you'll see how the construction can be done in Coq, and EM on Prop is actually only needed to establish primeness (Lemma 11).

This is very nice! But doing it like this would basically mean starting my formalization from scratch, including the soundness part, which I'd rather not do.

After thinking for a bit, I believe I should be able to avoid informative EM (the `{P} + {~P}` version) without going for this route. In my case I only wish to create a Lindenbaum set `s` for a finite set of formulas `Phi`. Thus there are only `2^|Phi|` possible sets, picking either `phi` or `~phi` for each `phi \in Phi`. The Lindenbaum lemma states that there is a set `s` with some properties. I can prove that one of these `2^|Phi|` sets has the desired properties. Thus there is no need to explicitly construct `s`.

Now, how to actually do this in Coq I have no idea. If I actually had a specific set `Phi` with, say, two formulas `a` and `b`, I would destruct `EM a` and `EM b` and build `s` accordingly. In the general case...?

The general case should be by induction on either the size of `Phi` or on `Phi` directly. But it sounds like an indirect approach. Is soundness hard to prove? And is the paper proof of completeness significantly harder than the one for soundness? (which usually is the case) If a finite set suffices, I'd normally prove that there exists a list `A : list form` s.t. `fun phi => In phi A` is a model of type `form -> Prop`, and the boolean version is the corresponding decidable model
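The induction being described can be factored through a small axiom-free lemma (my own formulation, not from the thread): if every element of a list has a witness, then a list of witnesses exists. Propositional `EM` then supplies, for each `phi` in `Phi`, the per-element existential (whether `phi` goes in or out of the set).

```
From Coq Require Import List.
Import ListNotations.

(* "Finite choice" over lists: witnesses for each element of a finite
   list can be collected into a list, by induction, with no axioms. *)
Lemma finite_choice_list (X Y : Type) (R : X -> Y -> Prop) (l : list X) :
  (forall x, In x l -> exists y, R x y) ->
  exists l' : list Y, Forall2 R l l'.
Proof.
  induction l as [| x l IH]; intros H.
  - exists []. constructor.
  - assert (Hx : In x (x :: l)) by (left; reflexivity).
    destruct (H x Hx) as [y Hy].
    assert (H' : forall x', In x' l -> exists y', R x' y').
    { intros x' Hx'. apply H. right. exact Hx'. }
    destruct (IH H') as [l' Hl'].
    exists (y :: l'). constructor; assumption.
Qed.
```

This mirrors the `finite_choice` lemma mentioned later in the thread for `finType`, but over plain lists.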

There's nothing particularly hard about any of these proofs, even if they were a bit tricky to come up with. The places where they deviate from the standard soundness and completeness proofs are easy to understand as long as one is familiar with the context.

Yannick Forster said:

If a finite set suffices, I'd normally prove that there exists a list `A : list form` s.t. `fun phi => In phi A` is a model of type `form -> Prop`, and the boolean version is the corresponding decidable model

So you're saying that you would use the `Prop` membership instead of the decidable `fset`s. This was my reading of what Christian did. I wanted to avoid it because I didn't allow for this possibility in the already existing formalization, assuming that since everything was finite I would be fine with `fset`. Perhaps refactoring so that more generic types are allowed wouldn't be so bad, and it would end up stronger. I'll look into it.

I don't know what Christian is doing in detail, I'm just high-level familiar with his work. What I was trying to say is that no matter what your notion of model is, you should be able to prove that there exists a `list` and then show that every `list` gives rise to a model

list gives rise to a model

Do you mean that the model is defined precisely as satisfying the formulas present in the list and not satisfying the formulas not present in the list?

Yes! No matter how models are defined, from a list you should be able to define such a model

Well, the list should have some properties, like consistency and completeness at least. I feel like saying "just build the model from the list" is sweeping the only interesting part under the rug. The question is precisely how to build such a model.

We're misunderstanding each other. I don't know what your models are. You mentioned that you do not want to start from scratch, so I suggested that no matter what your notion of model is, you can always obtain a model from a list. That means it is sufficient to build a list with the right properties. Provided `EM`, you can prove the existence of a list, because there will only be finitely many case distinctions via `EM` involved

Ah, I see. I guess my answer is that building the procedure to go from a list to a model seems to be more annoying than directly building the set I had in mind here, which as you rightfully pointed out should be just a simple induction using `EM`.

I brought up factoring the proof via a list because thinking about numbers like `2^|Phi|` sounded way too complicated for such a proof

It does, but I think the `2^|Phi|` will never actually be typed. I was just forgetting that induction is a thing :)

Ah, I see. Then good luck with your proof! :)

Thank you!

Usually, you then want to translate the points where on paper somebody writes "function from X to Y" _not_ to X -> Y, but to a total functional relation X -> Y -> Prop. Because once you're committing to the mental model that every function is computable, then translating a set-theoretic function (which might not be computable) to X -> Y is wrong, but total functional relations will still behave like set-theoretic functions.

This is super interesting, by the way! I am really enjoying this thread.

You might find http://www.cs.ru.nl/~spitters/coqw_files/paper_1.pdf interesting. I would guess that, at least for classical logics, directly building the lists which will correspond to states is possible. The part which might be hard (I guess) is how to obtain, for example, consistent sets of formulas: if you use Prop you can just assume that the sets are consistent; if you use lists you need to prove this (and you also need a way to construct them). (One possible way, which is done sometimes, is to use something like Hintikka sets.)

@Christian Hagemeier @Ana de Almeida Borges I'm only seeing this now and I only skimmed this rather lengthy thread, but the reference you give is a pretty terrible one. This is like the first snapshot of a long series of iterations on how to do modal logic (properly and constructively) in Coq. We cleaned this up quite significantly for our CPP 2011 paper. Moreover, we changed the setup significantly for our later papers. For instance, there is CPP 2018 where we prove soundness, completeness, and decidability for PDL.

The gist of it is that we realized that we can do everything completely constructively by adapting the notion of model to provide just enough instances of LEM to prove soundness. In the PDL case this amounts to taking as "models" those "transition systems" for which the semantics is stable under double negation ($\forall \phi, \neg \neg M \models \phi \to M \models \phi$). The only unpleasant part is that the notion of model depends on the interpretation and the type of formulas. However, this assumption becomes trivial if either the model is finite or if one assumes LEM. So one can see this as an interface to provide two kinds of soundness/completeness results.
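For illustration, the stability condition can be phrased as follows. This is a sketch with made-up names (the actual development is parameterized differently); it also records the easy half of the claim, that under LEM every model is trivially stable.

```
Section Stability.
  Variable model : Type.
  Variable form : Type.
  Variable sat : model -> form -> Prop.   (* M |= phi *)

  (* "Models" are those transition systems whose semantics is stable
     under double negation. *)
  Definition stable (M : model) : Prop :=
    forall phi : form, ~ ~ sat M phi -> sat M phi.

  (* Under LEM, every model is stable, so the side condition vanishes. *)
  Lemma lem_stable :
    (forall P : Prop, P \/ ~ P) -> forall M, stable M.
  Proof.
    intros lem M phi Hnn.
    destruct (lem (sat M phi)) as [H | H]; [exact H | destruct (Hnn H)].
  Qed.
End Stability.
```

The other trivializing case mentioned (finite models) would instead prove `stable M` directly from decidability of the finite semantics.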

Ana de Almeida Borges said:

After thinking for a bit, I believe I should be able to avoid informative EM (the `{P} + {~P}` version) without going for this route. In my case I only wish to create a Lindenbaum set `s` for a finite set of formulas `Phi`. Thus there are only `2^|Phi|` possible sets, picking either `phi` or `~phi` for each `phi \in Phi`. The Lindenbaum lemma states that there is a set `s` with some properties. I can prove that one of these `2^|Phi|` sets has the desired properties. Thus there is no need to explicitly construct `s`.

This indeed sounds related to the guessing stage we used to reduce Hybrid logic to Modal logic in the CPP 2011 paper, guessing for every nominal the Hintikka set in which it is true. I reduced this to a lemma I called finite choice:

```
Lemma finite_choice (X : finType) Y (R : X -> Y -> Prop) :
  (forall x : X, exists y, R x y) -> exists f, forall x, R x (f x).
```

which is provable by induction on the enumeration of `X`.

Thank you for the references!

Last updated: Jan 31 2023 at 13:02 UTC