I am struggling to understand how to use the seq permutation (perm_eq), and I was wondering if there is a version of it that does not require eqType, and if there is, how to use it?
you can always use Permutation from Stdlib. I don't think there are any MathComp projects reasoning on permutations where the seq type is not an eqType.
Without eqType, nothing is computable, so it's definitely not in the spirit of MathComp (small scale reflection).
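For concreteness, a minimal sketch of the two options (a rough sketch; exact import lines may vary slightly by Coq/MathComp version):

```coq
From Coq Require Import List Permutation.
From mathcomp Require Import all_ssreflect.
Import ListNotations.

(* Stdlib: Permutation is an inductive relation in Prop, so it works for
   any element type, with or without decidable equality. *)
Example stdlib_perm : Permutation [2; 1] [1; 2].
Proof. apply perm_swap. Qed.

(* MathComp: perm_eq is a boolean predicate, so the element type must be
   an eqType; in exchange it computes, in the small-scale-reflection spirit. *)
Example mathcomp_perm : perm_eq [:: 1; 2; 3] [:: 3; 1; 2].
Proof. by []. Qed.
```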
alright, a follow-up question: what is an example of a type that is not an eqType?
Every type I can come up with looks like an eqType, so I am trying to understand the limitations here
nat -> bool
quotients with undecidable relations
type universes
constructive reals as in CoRN don't have any decidable equality IIRC
forgot: codata
but there is a huge class of types where decidable equality is guaranteed, not sure how to phrase that clearly and correctly: finitely branching trees / finitary syntax / ...
Informally this should mean: if you build types from scratch and never use codata, functions from infinite domains, or elements of universes, you can't build types without decidable equality.
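To make the finitary-syntax case concrete, a small plain-Coq sketch (the type and names are made up for illustration): for a first-order inductive type, decidable equality falls out of structural recursion, e.g. via decide equality; turning that into a MathComp eqType is then just a matter of packaging the boolean test with its reflection lemma.

```coq
From Coq Require Import PeanoNat.

(* A toy piece of finitary syntax: binary trees with nat-labelled leaves. *)
Inductive tree : Type :=
  | Leaf : nat -> tree
  | Node : tree -> tree -> tree.

(* decide equality handles the recursive structure; any leftover subgoal is
   decidable equality on nat, which is Nat.eq_dec from the standard library. *)
Lemma tree_eq_dec : forall t1 t2 : tree, {t1 = t2} + {t1 <> t2}.
Proof. decide equality; apply Nat.eq_dec. Qed.
```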
Even functions from finite domains can have decidable equality, but the proof requires fun_ext (thankfully, its use remains proof-irrelevant)
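A sketch of that last point (the names here are mine, plain Coq rather than a MathComp eqType instance): for bool -> bool the boolean test just compares outputs on the two inputs, and fun_ext is exactly what upgrades pointwise agreement to equality of the functions.

```coq
From Coq Require Import Bool FunctionalExtensionality.

(* Pointwise comparison over the finite domain bool. *)
Definition eqfb (f g : bool -> bool) : bool :=
  eqb (f true) (g true) && eqb (f false) (g false).

Lemma eqfb_correct (f g : bool -> bool) : eqfb f g = true -> f = g.
Proof.
  unfold eqfb; intros H.
  apply andb_prop in H as [H1 H2].
  apply eqb_prop in H1; apply eqb_prop in H2.
  (* fun_ext turns agreement on every input into equality of the functions. *)
  apply functional_extensionality; intros [|]; assumption.
Qed.
```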
Alright, follow-up question 3: since I was working on a library implementing a trie data structure, many of the operations either require support from MathComp (hence requiring both keys and values to be eqType), or else I will have to not use MathComp and instead use the Stdlib version.
At this point it is not clear what kind of limitation I am putting on myself by making everything an eqType. Ideally my goal is stuff that extracts, and my intuition is that anything extractable is an eqType by definition (since it takes finite memory, so it must be comparable memory-wise).
But on the other hand it feels wrong to restrict everything to eqType (I think in a previous thread I received advice not to do that).
My take is that restricting to eqType is natural for keys and unnatural for values, but maybe it's natural in MathComp?
(I think I was the one objecting in the other thread, maybe because I don't get/agree with MathComp here; different opinions are reasonable!)
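A hypothetical association-list sketch of that split (names are mine, not from any existing trie library): lookup only ever compares keys, so the key type needs an eqType while the value type can stay a bare Type.

```coq
From mathcomp Require Import all_ssreflect.

Section AssocList.
(* Keys need decidable equality; values are completely unconstrained. *)
Variables (K : eqType) (V : Type).

Fixpoint lookup (m : seq (K * V)) (k : K) : option V :=
  match m with
  | (k', v) :: m' => if k == k' then Some v else lookup m' k
  | [::] => None
  end.

Definition insert (k : K) (v : V) (m : seq (K * V)) : seq (K * V) :=
  (k, v) :: m.

End AssocList.
```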
anything extractable is an eqType by definition (since it takes finite memory, so it must be comparable memory-wise)
I'd bet you can extract functions from N, even if they don't have decidable equality.
this restriction is probably fine in most cases; it's just stricter. And I suspect some/many of the other examples are also extractable.
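For instance (a throwaway definition, just to illustrate the point): extraction happily accepts an inhabitant of nat -> bool even though that type has no decidable equality.

```coq
From Coq Require Extraction.

(* nat -> bool has no decidable equality, yet any particular inhabitant
   extracts to ordinary OCaml code. *)
Definition sample : nat -> bool := Nat.even.

Extraction sample. (* prints the corresponding OCaml definition *)
```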
I see, I guess I will have to use Permutation then; I am basically following how stdpp does things.
of course, _implementations_ of functions use finite memory, and comparing that memory will terminate. But you can't write this in Coq, and it'll give the wrong value in many cases: fun x : nat => x and fun x : nat => x + 0 are propositionally equal (with funext), but they probably translate to different code.
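A small sketch of exactly that pair of functions: with funext they are provably equal, yet they are not convertible, which is all a memory/code comparison could see.

```coq
From Coq Require Import FunctionalExtensionality.

(* Extensionally equal, and funext turns that into a propositional equality. *)
Lemma id_eq_plus0 : (fun x : nat => x) = (fun x : nat => x + 0).
Proof.
  apply functional_extensionality; intro x.
  apply plus_n_O.
Qed.

(* ...but the two terms are not convertible, so plain reflexivity fails;
   their compiled/extracted code can legitimately differ. *)
Goal (fun x : nat => x) = (fun x : nat => x + 0).
Proof. Fail reflexivity. Abort.
```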
I guess funext is a reasonable assumption in general?
based on my limited understanding it is a lemma in cubical type theory, which is a superset of MLTT/CIC in some sense
but I am talking about the proof-relevant version of it here*
fun_ext is true in (almost?) every foundation of maths, starting from ZFC and the ZFC-based models of Coq
cubical TT lets you prove it as well
when you say almost, it makes me wonder when it is false
most of the time, its only problem is that it won't compute, but that's seldom a problem if you're not using equality proofs in a proof-relevant way.
walker said:
when you say almost, it makes me wonder when it is false
In some intensional models (exceptional MLTT + parametricity, or the model of polynomial functors)
intuitively, when funext is false, it's because the interpretation of a function in the model contains something extra beyond its input-output behavior; typically it's the function's "code" (that's what "intensional model" means), but https://hal.inria.fr/hal-01445835/document shows a relatively simple model which stores less extra data (just booleans).
One issue with the "×bool" model is that it also breaks the η-laws for functions, which are usually included, at least in MLTT. You need the extra data of the fancier models to negate function extensionality while retaining η
Indeed, · × bool is nasty because functions are not really functions anymore. In call-by-name, η laws for function types should hold.
but interestingly parametric exceptional TT and polynomial functors break funext for somewhat distinct reasons
the first one breaks it because there are "junk" inhabitants of types (exceptions) that do not exist at toplevel but exist transiently in the model, so that in practice, e.g. for unit -> unit, there are two distinct inhabitants, the identity and the constant function. They do not behave the same on junk inhabitants, hence breaking funext.
the latter breaks it for more fundamental reasons, namely that it exposes details about the way arguments are used
polynomial functors are basically a variant of Dialectica, and that means that you can get real intensional content out of functions.
a not-so-wrong way to picture it is that this gives you the O(f n) runtime of the function
In a complexity-aware type theory, two functions can compute the same graph but have wildly different algorithmic complexities. As such they can't be equal, hence breaking funext.
Karl Palmskog said:
you can always use Permutation from Stdlib. I don't think there are any MathComp projects reasoning on permutations where the seq type is not an eqType.
In FCSL-PCM we have https://github.com/imdea-software/fcsl-pcm/blob/master/core/seqperm.v
This comes in handy for reasoning on heap predicates sometimes.
@Alexander Gryzlov side question: if I wanted to learn about PCMs in general, would there be a specific paper that in your opinion is a must-read?
walker said:
Alexander Gryzlov side question: if I wanted to learn about PCMs in general, would there be a specific paper that in your opinion is a must-read?
Depends on what you want to do with them; in the context of proving things about single-threaded heap-based programs in SSReflect, a good starting point is probably Nanevski, Vafeiadis, Berdine [2010], "Structuring the verification of heap-manipulating programs"
Scaling this to concurrency requires quite a few generalizations of this base idea; my current working hypothesis is that there is some middle ground between classical "tree-like" separation logic proofs and full-blown concurrent SL featuring linearizability.
Specifically, that the ideas from the latter can be motivated by working on proofs for graph- and array-manipulating programs
so what you are saying is that the PCM approach is analogous to Hoare logic, and what you are trying to do is to generalize PCMs to handle concurrency the same way CSL is a generalization of Hoare logic; is my understanding correct?
Well, we are also using some form of Hoare logic; the idea is that we work directly with models of state (PCMs) instead of axiomatizing it at the logical level with things like separating conjunction/implication.
So basically, instead of viewing HL/SL as a sort of logical API that you can use to construct your proofs with, we say it's like a bridge that lets you generate a bunch of PCM equations from the syntax of your stateful program and use algebraic-style reasoning on them.
For single-threaded heaps (and tree-like structures built on top of them) the difference between the two styles is not that big, but they really start to diverge once you go into concurrency.
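To make "working directly with models of state (PCMs)" concrete, here is a bare-bones sketch of what such a structure packages (this is not FCSL-PCM's actual interface, just the algebraic content):

```coq
(* A partial commutative monoid, informally: a commutative, associative join
   with a unit, where partiality is expressed by a validity predicate
   (e.g. for heaps, join is union and validity means the parts are disjoint). *)
Record pcm := PCM {
  carrier :> Type;
  valid : carrier -> bool;
  join : carrier -> carrier -> carrier;
  Unit : carrier;
  joinC : forall x y, join x y = join y x;
  joinA : forall x y z, join x (join y z) = join (join x y) z;
  unitL : forall x, join Unit x = x;
  valid_unit : valid Unit = true;
  valid_joinL : forall x y, valid (join x y) = true -> valid x = true
}.
```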
Re the side question, just to see if I understand (but I haven't used FCSL, only Iris):