I noted that all definitions that are inductively defined can also be stated as separate axioms, and vice versa; for instance, the axioms of Hoare logic can be inductively defined in Coq. So when would the usage of axioms differ from an inductive definition? (I am asking from the logic perspective.)
Axioms don't have the ability to be destructed by a match statement.
For instance you can do:
Axiom nat : Type.
Axiom O : nat.
Axiom S : nat -> nat.
But you won't be able to define a function on nat by cases and reduce it.
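For contrast, here is a minimal sketch (plain Coq; the names mynat/MyO/MyS are made up to avoid clashing with the axioms above) of what the inductive version buys you:

Inductive mynat : Type :=
  | MyO : mynat
  | MyS : mynat -> mynat.

(* Functions defined by cases on the constructors actually compute: *)
Definition pred (n : mynat) : mynat :=
  match n with
  | MyO => MyO
  | MyS m => m
  end.

Eval compute in pred (MyS MyO). (* reduces to MyO *)

Nothing like this is possible with the three Axioms alone.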
Also, if you care about that sort of thing, inductive types are generally constructible out of sigma types and nat, which means they exist in models of type theory, e.g. Set. By assuming axioms you have no guarantee that there is indeed an object that acts this way.
Actually, more important than functions is the fact that you have no dependent functions, i.e. no induction principle. Natural numbers without induction aren't very good at proving much.
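Concretely, Coq generates the dependent eliminator for free from an Inductive declaration; for the axiomatized version you would have to assume it as yet another axiom, and it still would not compute. A sketch, reusing the mynat from above:

Check mynat_ind.
(* mynat_ind
     : forall P : mynat -> Prop,
       P MyO ->
       (forall m : mynat, P m -> P (MyS m)) ->
       forall n : mynat, P n *)

(* For the axiomatized nat, induction must itself be postulated: *)
Axiom nat_ind_ax :
  forall P : nat -> Prop,
    P O -> (forall n : nat, P n -> P (S n)) -> forall n : nat, P n.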
What about the other way around: things that are axiomatic by nature but then get represented by inductive types, such as the axioms of Hoare logic or separation logic? I think those do work as axioms on their own, yet when developed in Coq they are not presented as axioms.
People define "axioms of logic" all the time using inductive types. However, don't conflate the two uses of the word axiom here. One is about the metatheory (Coq); the other has only the meaning you intend to give it. :-)
It's the same as in usual mathematics: when formalizing set theory in ZFC, you never assume extra axioms on top of ZFC to talk about models of set theory; you simply have a set of sentences which you term "axioms".
A simpler example is Peano arithmetic, which is just a list of arithmetic statements you expect should hold, and these are termed axioms.
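To make that pattern concrete, here is a hypothetical toy sketch (all names are made up): the "axioms" of an object logic become constructors of an inductive provability predicate, while the metatheory (Coq) assumes nothing.

(* Formulas of a minimal Hilbert-style propositional logic. *)
Inductive form : Type :=
  | Atom : nat -> form
  | Impl : form -> form -> form.

(* The constructors play the role of axiom schemes and rules. *)
Inductive provable : form -> Prop :=
  | ax_K : forall A B, provable (Impl A (Impl B A))
  | ax_S : forall A B C,
      provable (Impl (Impl A (Impl B C))
                     (Impl (Impl A B) (Impl A C)))
  | mp : forall A B, provable (Impl A B) -> provable A -> provable B.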
Because those are provable in ZFC.
But the axiom of choice is not provable in ZF, so they decided to add it when they need it.
I assume that Hoare logic axioms are not provable in CIC, so why was "let's assume CIC + Hoare logic axioms" not the way to go?
Also, I could be wrong; I am just trying to understand why no one did it.
here is what the ANSSI guide to reviewing Coq code says about axioms:
The introduction of an axiom is delicate, as it may result in making the Coq logic inconsistent. Besides, certain “standard axioms” are known to be inconsistent with Coq logic (e.g., the axiom of description together with the excluded middle). However, the standard library of Coq provides several axioms that can be safely added to Coq, according to the Coq FAQ. That being said, the standard library provides the means to avoid using most of these axioms. For instance, e.g., generalized rewriting can be leveraged in place of relying on the proof irrelevance and functional extensionality axioms.
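For instance, here is a small sketch of using one of those "standard axioms", functional extensionality, from the standard library:

From Coq Require Import FunctionalExtensionality.

Check @functional_extensionality.
(* : forall (A B : Type) (f g : A -> B),
     (forall x : A, f x = g x) -> f = g *)

Example fext_demo : (fun n : nat => n + 0) = (fun n : nat => 0 + n).
Proof.
  apply functional_extensionality; intro n.
  rewrite <- plus_n_O. reflexivity.
Qed.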
I think you are getting confused here. When you study "Hoare logic" in CIC or set theory, you are not "proving its axioms"; that doesn't even make sense in most cases because proof systems can have different languages.
You can sit down and formalize the syntax of Hoare logic in CIC or set theory, and then prove things about it. But the metatheory and the theory of Hoare logic don't coincide.
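As a concrete sketch of that separation (a toy example, not from any particular development): the object logic's judgment is just an inductive definition, and metatheoretic statements about it are ordinary Coq theorems.

Definition state := nat -> nat.
Definition assertion := state -> Prop.

(* A toy command language... *)
Inductive cmd : Type :=
  | Skip : cmd
  | Seq : cmd -> cmd -> cmd.

(* ...and a Hoare judgment whose constructors are the object-level
   "axioms"/rules. To Coq they are just data; nothing is assumed. *)
Inductive hoare : assertion -> cmd -> assertion -> Prop :=
  | H_Skip : forall P, hoare P Skip P
  | H_Seq : forall P Q R c1 c2,
      hoare P c1 Q -> hoare Q c2 R -> hoare P (Seq c1 c2) R.

(* A metatheorem *about* the object logic: *)
Lemma hoare_skip_inv : forall P Q, hoare P Skip Q -> P = Q.
Proof. intros P Q H. inversion H; subst; reflexivity. Qed.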
That's how logic is typically studied in mathematics.
Of course there are ways to make the metatheory and theory coincide (somewhat) but this is quite a complicated and delicate topic.
Where can I read more about that?
Well, there are efforts to formalize CIC in CIC, such as MetaCoq https://github.com/MetaCoq/metacoq, and in mathematical logic you have "formal theories of truth" or "axiomatic theories of truth". I would warn you, however: none of this would make much sense without having studied some logic first.
I assume boolean logic does not qualify ?
If you learn about Gödel's incompleteness theorems, then you can see these as efforts to thwart them as much as possible.
If by boolean logic you mean propositional logic, then probably not since it is too weak to begin talking about its own metatheory.
understood, then I have lots of reading to do, sadly...
I think the key point is that however you define an inductive type, you won't get unsoundness because of it (so far as we know, in the current Coq implementation). But it takes just one Axiom line to get an unsound system.
Karl Palmskog said:
I think the key point is that however you define an inductive type, you won't get unsoundness because of it (so far as we know, in the current Coq implementation). But it takes just one Axiom line to get an unsound system.
Right, this is what I hinted at when I talked about models earlier.
some systems such as HOL don't even have inductive types built in; instead you have to use "definitional packages" to get something resembling inductive types (but which are just regular stuff, no special status)
My advice would be to read up on Gödel's incompleteness theorems, since they will pull in most of the logical theory you need.
Ali Caglayan said:
Karl Palmskog said:
I think the key point is that however you define an inductive type, you won't get unsoundness because of it (so far as we know, in the current Coq implementation). But it takes just one Axiom line to get an unsound system.
Right, this is what I hinted at when I talked about models earlier.
My problem is I cannot intuit the reason that gives inductive types this advantage.
Ali Caglayan said:
My advice would be to read up on Gödel's incompleteness theorems, since they will pull in most of the logical theory you need.
I will start by reading about that, thanks a lot!
walker said:
Ali Caglayan said:
Karl Palmskog said:
I think the key point is that however you define an inductive type, you won't get unsoundness because of it (so far as we know, in the current Coq implementation). But it takes just one Axiom line to get an unsound system.
Right, this is what I hinted at when I talked about models earlier.
My problem is I cannot intuit the reason that gives inductive types this advantage.
If I write down an inductive type, there is a "process" that lets me turn it into an equivalent type made out of simpler type formers, such as sigma types and the natural numbers. Those are constructible in models of type theory, hence it is consistent to assume inductive types exist (they always have a possible implementation).
There is no such guarantee when I write down an axiom; I am just assuming it.
For instance I can write Axiom oops : 0 = 1.
and this wouldn't be great for my formalization since I can derive falsehood and prove anything.
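A minimal sketch of the damage (with the standard library's nat):

Axiom oops : 0 = 1.

(* 0 = 1 is contradictory for an inductive nat, so False follows: *)
Lemma boom : False.
Proof. discriminate oops. Qed.

(* ...and from False, anything at all: *)
Lemma anything : forall P : Prop, P.
Proof. intros P. destruct boom. Qed.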
There are much less obvious axioms which one could assume, that could unintentionally lead to contradictions.
Hence what Karl said.
understood, I will read about the incompleteness theorems (I vaguely know what they are about, but I think I will read the proof now) and move from there.
I'm not sure I understand why the Gödel theorems are relevant to Hoare logic...
Ali Caglayan said:
I think you are getting confused here. When you study "Hoare logic" in CIC or set theory, you are not "proving its axioms"; that doesn't even make sense in most cases because proof systems can have different languages.
To me, of course you want to "prove the axioms of Hoare logic" — by giving a model
hmm, I thought one could justify Hoare logic rules by giving a WP calculus and proving relative completeness, etc.
the whole "giving a model" stuff sounds separation logic-y
Separation logic is a kind of Hoare logic, and for this discussion I think they are the same thing.
Iris treats { P } e { Q } as a notation for P -* WP e { Q }, so I don't know if translating between the two notations is a very interesting proof?
there might be another cultural divide elsewhere, but a more interesting proof seems to be the one relating "axiomatic semantics" to e.g. operational semantics.
or to the "canonical" semantics for your programming language (the one that's the official "language definition"), whatever that is.
but if you define { P } e { Q } using a WP calculus, then your Hoare "rules" are not axioms anyway. To me, it's fine to make the WP calculus definition the canonical semantics. But sure, it's a form of validating/justifying one form of semantics if you relate it to another form.
but a WP calculus is a form of characterization of the state you get after running a program. It's an unambiguous description that could be used by a compiler implementer. Where does the model stuff come in? Is it really needed?
because of that notation, I'm not sure what the difference between Hoare logic and WP semantics is. Are you saying that people would choose different axioms in the two?
(I'd say the standard Hoare rules for loops seem something I'd want to prove and not a primitive)
in my reading of the classic presentation, the Hoare logic rules are taken as given "from nothing". You could/should declare them axiomatically as a bunch of logical implications.
sure, that seems mostly a stylistic choice. You can take those as primitive or prove them sound relative to an operational semantics.
then people figure out that this WP stuff, with an actual predicate transformer function, is actually a better way to do it, both in implementation and in formalization
one Hoare-separation difference is that it's probably more reasonable to take a Hoare logic than a modern separation logic as primitive — with some emphasis on "modern"
but a WP calculus is a form of characterization of the state you get after running a program.
I don't think that's true of the WP rules in Iris, but that might be down to the "modern separation logic" part in Iris
I just checked software foundations, and they indeed use a shallow embedding of Hoare logic using operational semantics: https://softwarefoundations.cis.upenn.edu/plf-current/Hoare.html#hoare_triple
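For reference, the definition there is roughly the following (lightly paraphrased; Assertion, com, and the big-step relation st =[ c ]=> st' come from the Imp chapters of Software Foundations):

Definition hoare_triple
    (P : Assertion) (c : com) (Q : Assertion) : Prop :=
  forall st st',
    st =[ c ]=> st' -> (* running c in st terminates in st' *)
    P st ->
    Q st'.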
yeah, I guess you can do that, but that's not how it's done classically I believe? the most convincing formalizations separate each semantic approach and have standalone theorems connecting them
I'm not pushing strongly for shallow embeddings, just saying "here's a common textbook which gives a model for Hoare logic"
ah OK, so that's what you mean by give a model in this context
yep, that's what we do in separation logics as well
I wasn't hinting at domain-theoretic denotational semantics or anything like that :sweat_smile:
but we agree that one could just as well take the Hoare-sep-log as primitive, and prove the opsem "correct" by giving it a "Hoare model"? Or one might say: interpreting the opsem in terms of Hoare-sep-log
yes, tho I confess I wouldn't know how to set that up — but I recall Mitchell's Foundations of Programming Languages shows both directions?
in practice, if you used Iris, then Iris experts might object strongly but for other reasons (and I suspect the same could be true in other such logics)
hmm, but couldn't I write my WP calculus / predicate transformer separately from Iris?
and then just plug it in and get proof mode stuff?
re proofmode: yes but let's set that aside
re "separate WP calculus", wikipedia's example has an imperative language without aliasing, which simplifies questions a bit too much
my naive understanding has been that if I bring with me some PL semantics in some reasonable form, I can get something close to "my own VST" from Iris, i.e., a Hoare interface of sorts to prove my before-after specs for stateful programs
Iris is a flexible framework, but the Iris program logic proper demands specifically an operational semantics
ah, so the Iris WP calc is "fixed" and based on the operational semantics parameter?
that's a bit less flexible than I expected
the definition of WP by the core program logic is fixed and based on operational semantics; the "actual WP calculus" is something that you derive on top...
but also, you _can_ swap out the program logics with something else.
you can just reuse the proofmode (as you said), or you can also reuse the Iris base logic (which gives you step-indexing, higher-order ghost state, etc.)
I think the main rub is that nobody knows how to make sense of _step-indexing_ without steps
and that there are too many ways to write fancy theorems which amount to True or False -> ... (a funny way to write True is ∃ i, ▷^i False)
just for reference, the abstract https://drops.dagstuhl.de/opus/volltexte/2017/7275/ also mentions that Iris relies on some operational semantics:
Iris only applies to languages with an operational interleaving semantic
hmm, I think this is the problem with these separation logic specs, there are so many fancy built-in predicates it's really hard to follow what's going on
yeah that's basically it
ah, interesting, then also the Nuprl style "logic of events" is ruled out (based on happens-before relation rather than interleaving)
and there are tons of "obviously true" rules which are unsound — if you store propositions P in ghost state, for instance, you can't read them precisely
more accurately: if you store step-indexed predicates in ghost state :wink:
I don't know that logic of events, but I've seen happens-before for concurrency, and that paper indeed transforms such an axiomatic semantics (for C/C++/...) into an operational one to apply Iris on top.
anyway, you shouldn't worry about those fancy predicates in works using Iris: they apply Iris adequacy to end up with Iris-free top-level theorems about operational-semantics traces
I think I would be happy with trace properties as a final deliverable regardless of whether they came out of an operational semantics or not. Nakata and Uustalu even did a Hoare logic over traces, so it should be doable in Iris?
/me trying to follow the discussion; the problem is there are things I don't know, and I don't even know what they are.
the recent discussion has been about programming language theory, in particular programming language semantics. I think it's a bit unfortunate that Hoare conflated his particular PL semantics style with "logic" and "axioms"
I think there are other semantics styles that could just as well be said to be about logic, such as denotational
@walker sorry for hijacking the discussion, this discussion is probably not the most accessible
but back to your questions, it's not impossible that Gödel's theorems are relevant, I just guess they're unlikely to be especially relevant depending on what you want to do
maybe if you say a bit more we can be more helpful
So let me see. I was watching Prof. Dreyer talking once about Iris, saying that the problem with previous separation logics is that they have complex axioms. But then, when I read an Iris journal paper for Iris 3, the axioms (I am not even sure if that is the right name) are really, really complex. So I thought that perhaps a better approach would be to extend CIC to reason about pointer accesses, given that CIC is powerful; this might be similar to the way typed lambda calculus was extended with inductive constructions to form CIC. But given that my theoretical knowledge is really shallow, I wanted to know why no one did that before.
walker said:
I assume that hoare logic axioms are not provable in CIC so why was "lets assume CIC + hoare logic axioms" not the way to go ?
https://softwarefoundations.cis.upenn.edu/plf-current/Hoare.html "proves the axioms" of Hoare logic
it feels to me that they didn't prove it, they modeled it?
isn't it possible to also model unsound logic rules the same way?
no you can't model unsound logics
or rather: the point of modeling a logic A (Hoare logic) in a logic B (Coq) is to prove relative soundness: if A is unsound, then B is unsound as well!
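For example, with the shallow embedding above, each Hoare "axiom" becomes a proof obligation; an unsound rule simply could not be discharged. A sketch against the Software Foundations definitions (CSkip is Imp's skip command; E_Skip is its big-step rule):

Theorem hoare_skip : forall P, hoare_triple P CSkip P.
Proof.
  intros P st st' Heval HP.
  (* the only way to derive st =[ skip ]=> st' is E_Skip, so st' = st *)
  inversion Heval; subst. assumption.
Qed.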
From another POV:
So I thought that perhaps a better approach would be to extend CIC to reason about pointer accesses given that CIC is powerful, this might be similar to the way typed lambda calculus was extended with inductive constructions to form CIC.
When people add rules to CIC, they also write complex proofs to make sure those rules are sound.
Finally, people _have_ built tools that take complex separation logics as primitive rules. The only question is whether those axioms are true.
Paolo Giarrusso said:
When people add rules to CIC, they also write complex proofs to make sure those rules are sound.
Are these proofs something different from the local soundness and local completeness proofs (eliminating introductions and going the other way round)?
it sounds like you're referring to Pfenning's concepts? Those are different indeed.
Paolo Giarrusso said:
it sounds like you're referring to Pfenning's concepts? Those are different indeed.
Yes I learned it from one of his lectures.
Reposting:
Finally, people _have_ built tools that take complex separation logics as primitive rules. The only question is whether those axioms are true.
... It's a bit as if somebody wrote rules as complex as those of Iris, said "these rules are obviously sound", and skipped writing a proof.
Paolo Giarrusso said:
Reposting:
Finally, people _have_ built tools that take complex separation logics as primitive rules. The only question is whether those axioms are true.
... It's a bit as if somebody wrote rules as complex as those of Iris, said "these rules are obviously sound", and skipped writing a proof.
yes I understand.
one last question: so what is the problem with Pfenning's approach to soundness and completeness proofs?
there's no problem, but the soundness in "local soundness" is just a different thing.
(I'm not even entirely sure why it has that name...)
I understand ... thanks for the discussion, I will do a bit more research/learning.
I should take back the last answer; I'm not entirely sure
If I recall correctly he always talked about local soundness vs global soundness but never mentioned what that global soundness is.
but "soundness" in general means "you can't prove False", and for a logic as complex as Coq you can't just prove local soundness
for instance, Coq's soundness is proven by showing that all proofs can be normalized, and that there's no proof of False in normal form
(oversimplifying quite a bit!)
"no proof of False in normal form" is typically easy
"all proofs can be normalized" is very hard
Paolo Giarrusso said:
but "soundness" in general means "you can't prove False", and for a logic as complex as Coq you can't just prove local soundness
It is funny because Coq's logic looks deceptively simple and very, very small, so I would never think of it as complex until I hear it from people or go read papers about MetaCoq.
https://coq.github.io/doc/master/refman/language/core/inductive.html#theory-of-inductive-definitions is not that simple or small
imo
the notation looks complex, but I assumed the rules are the ones here https://hal.inria.fr/hal-01094195/document
nearly all of us want inductive types to get our work done. So we can either choose to build inductive types from inside the logic (lots of encoding work), or the designers of the logic can make inductive types primitive. If we do the latter, the designers and implementers of the logic have a lot of work to prove soundness. With a logic without inductive types, we have to do work over and over to get inductive types, but we know we are sound without extra work.
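A taste of the "encoding work" option, as a sketch: Church-style numerals are definable with nothing but functions, and compute fine, but no induction principle for them is derivable inside the logic.

(* Naturals encoded as their own iterators. *)
Definition cnat := forall X : Type, X -> (X -> X) -> X.

Definition czero : cnat := fun X z s => z.
Definition csucc (n : cnat) : cnat := fun X z s => s (n X z s).
Definition cplus (m n : cnat) : cnat := fun X z s => m X (n X z s) s.

(* Computation works: this normalizes to the Church numeral for 2. *)
Eval compute in cplus (csucc czero) (csucc czero).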
Paolo Giarrusso said:
I'm not sure I understand why the Gödel theorems are relevant to Hoare logic...
I mentioned Gödel not because I know anything about Hoare, but because it highlights the difficulties in formalizing logic in logic. The question from before was roughly "where can I learn about logics which are their own metatheory", and I replied with some pointers, but noted that in order to appreciate them, studying Gödel is a good first step.
@Karl Palmskog we know they’re sound, but I’ve seen people point out that “do they have the right meaning?” can be tricky (e.g. Enrico I think brought that up?)
I agree, if you make inductive types primitive, there are many ways that can be sound but useless/strange
Enrico’s point was about the encodings
OK, a logic like HOL is to me a very "extensional" logic - one shouldn't really look inside, just take stuff at face value.
so most of the time, you don't really care that these encodings of inductive types are super convoluted, because the system shields you from the horrible details
But what happens if the elaboration in HOL is buggy?
I can definitely see that in a "non-extensional" theory like Coq's, bad encodings can make the whole thing crumble
I mean, someone can correct me here, but HOL really doesn't do much elaboration, at least it's very modest [compared to Coq]
You know HOL better than me, but in principle a buggy encoding could let you prove 0 = 1, right?
every definition has to go through the core operations
Of course, but that doesn’t prevent Pollack inconsistency
OK, sure, yes, HOL doesn't protect from that
And maybe 0 = 1 holds because the code that encodes inductive types has a Pollack-inconsistency bug.
hmm, not immediately obvious, but, maybe?
I have a colleague who works on soundness of Isabelle/HOL kernel, can ask him at some point
IIUC, this isn't about the "kernel"
the definitional mechanism interacts directly with the kernel
but IIUC somebody (@Enrico Tassi ?) was pointing out that HOL does not let you prove 0 = 1, only "crazy encoding of 0 = crazy encoding of 1", so if the crazy encoding is subtly wrong it could produce equivalent results for 0 and 1
hence (he said) if you don't check that the crazy encoding did the right thing, you need to treat it as part of your TCB. Which seems pretty much tautological.
not sure how that would work, but I can't say it's impossible
Coq elaboration has the same problem in principle, but people here suggest you should look at the elaboration result for the top-level theorems
(anyway, sorry @walker we digressed again in your thread)
don't be sorry, I am really enjoying reading stuff.
I confirm I consider the non-trivial encoding of datatypes in HOL part of the TCB, exactly as the positivity checker or the termination checker are part of the Coq one. But some Isabelle core devs had different opinions IIRC last time a loophole was found in their datatype definition mechanism (which let you prove False in HOL but not in Pure).
The elaboration of Coq can be left out, since its output is not totally different from the input. Still, you can have Pollack inconsistencies due to notations and stuff like that... but well, that is a matter of UI to me: it is as if VS Code stopped coloring errors in red, not food for logicians :-)
I think Coq's users are a bit too eager to forget that there is a ton of encoding in Coq too. Equality, existential, disjunction, even implication, are all encoded. And there is not even a consensus about this encoding. We had to modify the public API of Coq to please the many users that redefine equality with their own version.
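Indeed, these are ordinary inductive definitions in Coq's prelude (Init.Logic, shown here without the notations and implicit-argument declarations):

Inductive eq (A : Type) (x : A) : A -> Prop :=
  | eq_refl : eq A x x.

Inductive ex (A : Type) (P : A -> Prop) : Prop :=
  | ex_intro : forall x : A, P x -> ex A P.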
Where do you put the limit between an encoding and a definition then?
There is no limit. There is a continuum. My point was more that, if you actually compare the encoding of 0 = 1 in HOL and Coq, the one in HOL does not look so crazy anymore.
For example, if you take a bunch of students with only basic knowledge of first-order logic, set theory, Peano arithmetic, etc., I would not be surprised if you can convince them faster that HOL's encoding of 0 = 1 makes sense.
quick question: is Coq not a HOL? @Guillaume Melquiond you are drawing a distinction between the two?
I'm a bit lost; how can you see Peano numbers in Coq as encoded? They are literally how you find them on paper.
"HOL" is usually used to mean a specific higher order logic variant: https://plato.stanford.edu/entries/type-theory-church/
This does not rule out that there are other higher order logics
Enrico Tassi said:
I'm a bit lost, how can you see peano numbers in Coq as encoded? They are literally how you find them on paper.
No, Peano never talked about inductive types. And the equality he used was the primitive one from the logic, not a really strange thing with hard-to-explain eliminators.
walker said:
quick question, is coq not a HOL ?
And to extend on Karl's answer, what people tend to call HOL (written as such) is the logic used in the HOL family of proof systems: Isabelle/HOL, HOL4, HOL Light, and so on. Coq is completely unrelated to it and follows different paradigms.
Hmm, he did not call it as such because the concept was not there... But the fact that the naturals are made of a base symbol and another one meaning the successor is, afaict, very close to what the inductive nat says in Coq. If you start to define plus as a type-theoretic function with computational rules, then you start to diverge, but you don't need that to write 0=1. And the eq of Coq is the equality of the logic. The inductive incarnates the introduction and elimination of that concept, which is hardcoded (unlike, say, setoids). That is technical to explain, sure.
maybe relevant here, the paper on the verified subset of HOL Light incl user interaction came out: https://cakeml.org/itp22-candle.pdf
It outlines what is typically assumed in a proof assistant implementation (background ZF(C) set theory)