I was under the impression that classical logic is strictly wrong because of the law of excluded middle and the fact that we have undecidable problems that can neither be true nor false; I thought that was the reason we have intuitionistic logic/CIC. But then I learned about Isabelle and realized that it uses classical logic. The fact that classical logic is still used makes me wonder why, if it is wrong. But then I thought perhaps it is not wrong, so the question becomes: why would we create a weaker CIC when classical logic has a very strong axiom, forall X, X \/ ~ X?
"Neither true nor false" implies both "not true" and "not false", i.e. both false and true, otherwise known as a contradiction.
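To make that concrete, here is a minimal Coq sketch (mine, just for illustration; the lemma name is made up): "P is not true and P is not false" is already contradictory, even intuitionistically.

(* Illustrative only: "P is not true and P is not false" cannot hold,
   even in intuitionistic logic. *)
Lemma neither_is_contradictory (P : Prop) : ~ (~ P /\ ~ ~ P).
Proof.
  intros [np nnp].
  exact (nnp np).
Qed.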
Undecidable problems can consistently be assumed true or assumed false (obviously not at the same time), i.e. neither assumption leads to a contradiction.
so when would it make sense to consider them true/false and when would it make sense to leave them undecidable?
I am trying to understand the tradeoff.
The continuum hypothesis is a good example. In standard ZFC you cannot prove CH, nor can you prove its negation.
The reason is that there are models of ZFC which you can extend (using a technique called forcing) so that CH holds, and models where its negation holds.
Therefore, unless ZFC is inconsistent, it was impossible to prove CH in the first place.
I could describe to you in detail how to build a wooden toy car, for example, and we could invent games to play with it. But if I asked you what color the car was, nothing in the description would tell you. It is perfectly consistent to play these games with a blue car or a red car.
In that analogy, classical logic makes the assumption that the color of the car has to be blue or red (here the example breaks down since color is not a dichotomy).
That way I can reason about the car like: in the case that it is blue, do this; and in the case that it is red, do this.
But at no point have I assumed that it has to be one or the other.
Continuing the analogy, intuitionistic logic would be like never reasoning about the color of the car by cases. It would force me to think about the important parts: the wood, the wheels etc.
It would also mean there are more examples of toy cars fitting this description (i.e. models)
Of course at this point I am really stretching the analogy.
I think I am starting to see the whole picture. I really thought classical logic was wrong for some time!
Thanks for the explanation.
Sometimes it helps to think of classical logic as a special case of intuitionistic logic.
Not assuming classical principles forces people to argue more directly, and the resulting theory has more models.
Everything true in intuitionistic logic is true in classical logic, but not the other way round.
walker said:
I think I am starting to see the whole picture. I really thought classical logic was wrong for some time!
It is not wrong, it is just not right ;-)
if you want to be non-wrong ("consistent"), classical is for you
I think this was the point of confusion: I always assumed that if classical logic says it is true and intuitionistic logic says "we don't know if it is true", then this is a contradiction. I never thought of it as "we can assume it is either true or false".
Equivalent to the law of excluded middle is double negation elimination, which says if something is not not true then it is true.
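As a rough sketch of that equivalence in Coq (my own, and the lemma names are made up), both directions are provable intuitionistically once the principles are stated for all propositions at once:

(* Illustrative sketch: excluded middle and double negation elimination,
   quantified over all propositions, imply each other. *)
Lemma em_implies_dne :
  (forall P : Prop, P \/ ~ P) -> (forall P : Prop, ~ ~ P -> P).
Proof.
  intros em P nnp.
  destruct (em P) as [p | np].
  - exact p.
  - exfalso. exact (nnp np).
Qed.

Lemma dne_implies_em :
  (forall P : Prop, ~ ~ P -> P) -> (forall P : Prop, P \/ ~ P).
Proof.
  intros dne P.
  apply dne. intro nem.
  apply nem. right. intro p.
  apply nem. left. exact p.
Qed.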
Curiously, in intuitionistic logic you can prove if something is not not not true then it is not true, but not the first (or otherwise it would be classical logic).
I think I imagined what the proof would look like, but it feels spooky.
In propositional intuitionistic logic you can even prove any theorem of classical logic as long as you put two negations on the outside.
I think in first-order and higher-order this gets a bit more complicated, but it is still nice nonetheless :)
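For instance, a small sketch (my own, just illustrating the claim): the double-negated excluded middle is provable intuitionistically.

(* Illustrative instance: excluded middle becomes provable once wrapped
   in a double negation. *)
Lemma nn_excluded_middle (P : Prop) : ~ ~ (P \/ ~ P).
Proof.
  intro nem.
  apply nem. right. intro p.
  apply nem. left. exact p.
Qed.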
In https://github.com/coq-community/coq-art/blob/master/ch5_everydays_logic/SRC/class.v , you may look at the equivalence of five characterizations of propositional classical logic. Btw, one of the main differences between intuitionistic and classical logic is that, in the latter, you can prove a proposition exists x:T, P x without giving any witness.
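A rough illustration of that, assuming Coq's standard Classical module (which, if I remember correctly, also ships a similar lemma under the name not_all_not_ex): from "P is not everywhere false" you get an existence statement without exhibiting any witness.

Require Import Classical.

(* Illustrative sketch: a classical existence proof with no witness. *)
Lemma exists_without_witness (P : nat -> Prop) :
  ~ (forall n, ~ P n) -> exists n, P n.
Proof.
  intro H.
  destruct (classic (exists n, P n)) as [Hex | Hnex].
  - exact Hex.
  - exfalso. apply H. intros n Hp. apply Hnex. exists n. exact Hp.
Qed.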
I think one might say that the classical meaning of "there exists x such that P x" is roughly "it's not wrong that there could exist an x such that P x".
walker said:
I think I imagined what the proof would look like, but it feels spooky.
Indeed, it's an instance of a simple scheme of minimal propositional logic:
Section PQ.
Variables P Q: Prop.
Lemma P3Q: (((P -> Q) -> Q) -> Q) -> P -> Q.
Proof.
intros H p; apply H; intro pq; apply pq, p.
Qed.
End PQ.
Lemma triple_neg (P:Prop) : ~ ~ ~ P -> ~ P.
Proof. exact (P3Q P False). Qed.
Intuitionistic logic is a simple language, a pure lambda calculus. This makes it relatively easy to implement and to gain confidence in the implementation. In contrast, classical logic is a language with call/cc, so in a sense it is more complex to interpret. At the same time, there are various ways of embedding it in intuitionistic logic, such as the double-negation translation or just adding an axiom, so there is little to lose in starting with the more minimalistic foundation.
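For the "just adding an axiom" route, a minimal sketch (the axiom and lemma names here are mine; Coq's Coq.Logic.Classical provides essentially the same axiom under the name classic):

(* Illustrative sketch: postulate excluded middle and recover classical
   case analysis on an arbitrary proposition. *)
Axiom excluded_middle_ax : forall P : Prop, P \/ ~ P.

Lemma classical_case (P Q : Prop) : (P -> Q) -> (~ P -> Q) -> Q.
Proof.
  intros hp hnp.
  destruct (excluded_middle_ax P) as [p | np].
  - exact (hp p).
  - exact (hnp np).
Qed.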
But from a user's perspective, once you have a proof assistant that you trust, a non-constructive proof of correctness is just as reliable as a constructive one if all you care about is truth, as would likely be the case for most stakeholders of a verified software company.
It's bold to assume stakeholders of a software company care about truth :upside_down: .
Also worth noting that as far as arithmetic is concerned, classical and intuitionistic logic prove all the same theorems up to forall x, exists y, P(x, y) statements, where P(x, y) is decidable for concrete x and y.
This covers most software correctness applications.
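A toy Coq sketch of a correctness statement of that shape (the function and spec are made up purely for illustration): for every input there is an output satisfying a decidable property.

(* Illustrative only: a spec of the form
   "forall x, exists y, (decidable property of x and y)". *)
Fixpoint double (n : nat) : nat :=
  match n with
  | 0 => 0
  | S m => S (S (double m))
  end.

Lemma double_spec : forall x, exists y, double x = y /\ Nat.even y = true.
Proof.
  induction x as [| x IH].
  - exists 0. split; reflexivity.
  - destruct IH as [y [Hy He]].
    exists (S (S y)). split.
    + simpl. rewrite Hy. reflexivity.
    + simpl. exact He.
Qed.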
I'm not sure what's bold about "truth"... OTOH, that claim about software correctness seems differently bold
arguably (and very much tongue-in-cheek!): since most code uses OOP, hence complex higher-order state, a modern separation logic is essential, and I don't think those statements are enough to formalize e.g. Iris :-D
(and yes, I'm far from fully serious)
OT warning, but has anyone really captured dynamic dispatch OOP in a formal and practical way convincingly? Even Dafny doesn't go there, for example. Should one even try to capture it? Some truths are probably not worth obtaining...
Not sure what you mean by "captured dynamic dispatch OOP", but "Precision in Practice", which encodes Java objects as a combination of exists+forall+mu, might be such an example.
OK, I guess I can be more specific: does anyone have a machine-checked separation-logic-or-equivalent for a (realistic fragment of a) dynamic dispatch OOP language? I think sep-log-or-equivalent has been around for a while for plain OOP, like in Dafny.
Paolo Giarrusso said:
arguably (and very much tongue-in-cheek!): since most code uses OOP, hence complex higher-order state, a modern separation logic is essential, and I don't think those statements are enough to formalize e.g. Iris :-D
Ok, but the top-level theorem is at worst forall-exists, right?
@Karl Palmskog that seems odd, but if you’re talking about https://github.com/dafny-lang/dafny/issues/1588, that seems about termination, which is beyond what step-indexing handles. Ignoring termination, dynamic dispatch seems about as hard as closures?
@Cody Roux I've never been taught the arithmetical hierarchy, but Iris adequacy works for arbitrary pure properties: if WP e { v, bi_pure (Phi v) } and e ->* v0, then Phi v0, and Phi is an arbitrary val -> Prop.
Paolo Giarrusso said:
Cody Roux I've never been taught the arithmetical hierarchy, but Iris adequacy works for arbitrary pure properties: if WP e { v, bi_pure (Phi v) } and e ->* v0, then Phi v0, and Phi is an arbitrary val -> Prop.
Of course, higher-order logic is great for stating and proving things, but my point is that proofs about concrete programs tend to say "this program, under such and such conditions, behaves in such and such a way", which is of the form my_prog_is_correct : forall x, some_decidable_property.
Perhaps that proof uses all sorts of fancy higher-order and classical principles, but I claim (by some metatheoretical nonsense) that the use of excluded middle in the proof of my_prog_is_correct could be eliminated.
Thanks for the link, but I don't think all top-level specs need to be written as executable/decidable ones, and while I'm not sure, I don't expect to rephrase my properties using Dialectica by hand.
I suspect all your top-level specs happen to be "forall inputs, some decidable property" just by accident! (though I'd be happy to hear about counter-examples)
I'm not suggesting you translate any of your proofs; I'm just saying that anything you prove in Coq with EM, if it's in that fragment, will also be true without EM, nothing more.