I have not worked much with setoid equality.
I have heard that setoid equality is somewhat awkward to work with.
If I have the choice between two implementations of a data type, one where the appropriate notion of equality agrees with the inductive equality type, and one where I would have to use a setoid equality, it seems like it would in general be preferable to use the one that avoids setoid equality. I am asking for your opinion: to what extent is this justified, even if the resulting type is somewhat awkward/unintuitive?
For example, if I want to work with the type of functions { x : nat | x < n } -> { y : nat | y < m }, I could try to replace it, for practical purposes, everywhere with the type of lists of n elements from the set { y : nat | y < m }; then evaluating the function at k is simply accessing the k-th element of the list. Lists are equal iff their entries are equal, so I wouldn't need setoid equality here to get the desired equivalence relation on functions.
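For concreteness, here is a minimal sketch of what I have in mind (my own code, using the standard library's Vector.t and Fin.t in place of the sigma types, so the elements carry no proof components):

Require Vector Fin.

(* A function [n] -> [m] represented by its graph:
   a length-n vector of elements of Fin.t m. *)
Definition tab_fn (n m : nat) : Type := Vector.t (Fin.t m) n.

(* Evaluation at k is just looking up the k-th entry. *)
Definition apply_tab {n m : nat} (f : tab_fn n m) (k : Fin.t n) : Fin.t m :=
  Vector.nth f k.

Two such functions are then equal exactly when their entries are equal, so the ordinary eq suffices.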
This representation seems more than justified. There's even an intuition: you're representing certain functions by their graph.
Ok, cool. Let me go a bit further and suggest a slightly more awkward / excessively clever example that I was thinking about earlier, which maybe better demonstrates the trade-off I'm wondering about.
We could define a monotonic function from [n] = {0, ..., n-1} to [m] = {0, ..., m-1} inductively as follows:
- the identity map [0] -> [0] is monotonic;
- given a monotonic f : [n] -> [m], there is an induced monotonic function f' : [n] -> [m+1], given by composing with the obvious inclusion [m] -> [m+1];
- given a monotonic f : [n] -> [m+1], there is an induced map f' : [n+1] -> [m+1] which sends the top element to the top element.
So, something like:
Inductive monfn : nat -> nat -> Type :=
| id0 : monfn 0 0
| dn {n m : nat} : monfn n m -> monfn n (S m)
| addone {n m : nat} : monfn n (S m) -> monfn (S n) (S m).
I saw similar definitions in proper developments in Agda to represent weakenings between contexts of variables, so yes, this presentation is used in practice. The trade-off, however, is that writing concrete instances of this type is relatively bothersome compared to a functional representation (I wonder whether it would be possible in practice to generate elements of monfn from the functional representation on concrete instances, maybe using parametricity).
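To give a taste of the bothersome-ness, here is a sketch (my own code, assuming the monfn definition above with the n and m indices implicit): even the identity maps take a small tower of constructors, whereas the vector representation would just be a literal.

(* Identity on [1] = {0}: weaken the empty map into [1], then add the top element. *)
Definition id1 : monfn 1 1 := addone (dn id0).

(* Identity on [2] = {0, 1}: weaken id1 into [2], then add the top element. *)
Definition id2 : monfn 2 2 := addone (dn id1).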
FTR, the main well-known trade-off with this encoding is that you'll lose the definitional laws of composition / identity. This may or may not matter in your setting.
@Pierre-Marie Pédrot is your point here that definitionally (wrt ordinary function composition) we have
id x = x
and
f (g x) = (f o g) x,
but this would fail in the inductive-type setting, so we would only have propositionally
apply_fn idmap x = x
and
apply_fn (f o g) x = apply_fn f (apply_fn g x)?
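(For my own understanding, here is a guess at what such an apply_fn could look like; the name apply_fn is just the one I used above, this is my own sketch, reading a monfn simply as a function on nat that is only meaningful on inputs below n, and assuming the monfn definition with implicit indices.)

Fixpoint apply_fn {n m} (f : monfn n m) (k : nat) : nat :=
  match f with
  | id0 => k                   (* vacuous: [0] has no elements *)
  | dn g => apply_fn g k       (* the inclusion [m] -> [m+1] does not change values *)
  | @addone n' m' g =>
      if Nat.eqb k n'          (* the top element n' of [n'+1] ... *)
      then m'                  (* ... goes to the top element m' of [m'+1] *)
      else apply_fn g k        (* everything below the top goes through g *)
  end.

With this, apply_fn id2 0 and apply_fn id2 1 compute to 0 and 1 as expected, but apply_fn id2 k = k for a variable k holds only propositionally, not by mere computation, and likewise for composition.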
Indeed. Most of my syntactic model tricks are about replacing inductive definitions with functional ones because "they compute better".
(aka the Yoneda embedding)
My curiosity is piqued. I don't think I understand what you're referencing there with the Yoneda embedding, or how you use it to think about computation. I came across a paper of Čubrić, Dybjer and Scott, "Normalization and the Yoneda embedding", at one point, where they discuss reduction via the Yoneda embedding in categories enriched over PERs. Is this something like what you're suggesting?
The Yoneda embedding is a great way to strictify some structure. I am not exactly sure how it works with regard to models of type theory (i.e. what happens with CwFs), but I can say it is used for other strictifications. A classical example is the biequivalence of bicategories and strict 2-categories. The idea here is to (bi-)Yoneda embed the bicategory, giving a biequivalent bicategory which happens to be a strict 2-category.
(Typically) in type theory we have three forms of equivalence between terms: judgemental/definitional equality, the identity type, and the "canonical" form of equivalence (if our terms were types, this is the HoTT notion of equivalence; for groups, isomorphisms; etc.). What is strange is that even though we have three forms of equivalence, we are only able to talk explicitly about the latter two in our language.
You can even design a type theory where all the judgemental equalities are actually terms of an identity type. You quickly find out that this is severely impractical. The solution is then to pick some identity types to be strict. If you pick all of them, you get a (by some accounts) poorly behaved type theory called extensional type theory, which loses properties such as decidable type checking.
In Coq's type theory, a few computation rules have been hand-picked to be definitional: substitution into functions (β-reduction), matching on constructors, etc. The reason we can do this is that we have models of type theory where these things are actually strict.
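(A trivial illustration: because β-reduction and matching on constructors are definitional, the following typecheck with a bare eq_refl and no explicit proof steps.)

Check (eq_refl : (fun x => x + 1) 2 = 3).
Check (eq_refl : match 2 with 0 => true | S _ => false end = false).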
You might be interested to know what HoTT does with the identity type: the answer is that it makes it equivalent to the third kind of equivalence, i.e. identity types and isomorphisms become the same thing. In Coq, we typically employ UIP and so on, and this can make the identity type more like the definitional equality, but it can never fully reach it, due to the issues with type-theoretic properties outlined before.
Another thing people have experimented with is 2-level type theories, which consist of two type theories, where the first layer has an identity type which corresponds in some way to the judgemental equality of the second layer. This is a sort of formalization of a theory together with its metatheory.
In set theory, we very rarely worry about computation rules, since we never actually have to compute things. In FOL you are free to substitute terms in predicates by equal terms, but there is no explicit way to run such a computation, and no need for one.
Which equalities we can make judgemental in Coq is, however, still an open topic, and by no means settled. Proof assistants such as Agda have implemented "rewrite rules", which come with a confluence check (i.e. every route to normalization gives the same result). This has been used in formalizations, especially of type theory itself, since it allows this strict-rewriting business: no need to explicitly transport over equalities.
The trickiness with strict equality is not even inherent to the complexity of a type theory like Coq's. Having a very simple (dependent) type theory with some terms and predicates, in which you have strict substitution, is already a tricky thing to formalize the syntax for. You will never be able to write down an inductive type in Coq that computes the way the type theory should (this is what is known as a shallow embedding, as opposed to mapping the judgemental equality of the theory under study to the identity type, which is known as a deep embedding).
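(To make the deep-embedding side concrete, a toy sketch of my own: a tiny equational theory whose "judgemental" equality is represented as an inductive relation in Prop, standing in for the identity type. Nothing here computes; every use of an equation is an explicit proof step.)

(* Object-language terms: a unit element and a binary operation. *)
Inductive tm : Type :=
| e  : tm
| op : tm -> tm -> tm.

(* Its "judgemental" equality, deeply embedded as an ordinary relation. *)
Inductive conv : tm -> tm -> Prop :=
| conv_unit_l : forall t, conv (op e t) t
| conv_assoc  : forall a b c, conv (op (op a b) c) (op a (op b c))
| conv_refl   : forall t, conv t t
| conv_sym    : forall t u, conv t u -> conv u t
| conv_trans  : forall t u v, conv t u -> conv u v -> conv t v.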
I think @Pierre-Marie Pédrot is referring to tricks like these to transform propositional equations into definitional ones.
Kenji Maillard said:
I think Pierre-Marie Pédrot is referring to tricks like these to transform propositional equations into definitional ones.
Very cool, I get it now.
Like embedding a monoidal category into its own category of endofunctors by sending X to X ⊗ −.