As I understand it, MetaCoq proves consistency (no axiom-free proof of False in the empty context) assuming strong normalization (SN) and some other axioms. What's the gap between this and a proof of soundness? (Which is the same as a denotation function, right?) (Is there any literature on the relationship between proving consistency, SN, and soundness?)

This topic was moved here from #MetaCoq > Soundness vs Strong Normalizing by Jason Gross.

I don't think we can get a denotation function currently, even though we can prove consistency. The denotation function would have to define inductive types on the fly.

Something along the lines of "given a denotation of a global environment, produce the denotation of a well-typed term in this environment" might be more feasible, but I don't know how feasible.

What would be the target of the denotation function you think of? The meta-theory?

I think there are quite a few difficulties. First, if you want to define denotation as a function, then you can only define it on well-typed terms. This means you'd need to mutually define the denotation of types (as types), of terms (as inhabitants of the denotation of their types), and of conversion (as equalities between inhabitants of a given type). The induction principle you'd need for this is probably some kind of inductive-inductive or inductive-recursive scheme, which would not be trivial to handle. Moreover, I am afraid of issues appearing with proof-relevant equality and setoid hell.
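To make the shape of that mutual scheme concrete, here is a rough sketch (all identifiers such as `den_ty` and `wf_type` are hypothetical, and Coq has no induction-recursion, so this is written only as a comment, not as accepted code):

```coq
(* Hypothetical shape of the mutual interpretation, where
   [Σ ;;; Γ |- t : T] is MetaCoq's typing judgment and [wf_type] is an
   assumed well-formedness predicate on types. Coq cannot accept this
   inductive-recursive definition as-is:

     den_ty   : forall Γ T, wf_type Σ Γ T -> Type
     den_tm   : forall Γ t T (d : Σ ;;; Γ |- t : T) (dT : wf_type Σ Γ T),
                den_ty Γ T dT
     den_conv : forall Γ T U dT dU, Σ ;;; Γ |- T = U ->
                den_ty Γ T dT = den_ty Γ U dU
*)
```

Note how `den_tm` can only be stated once `den_ty` is available, and `den_conv` produces equalities between outputs of `den_ty`: that circularity is exactly the inductive-recursive flavor mentioned above.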

Defining denotation as a (functional) relation instead might be slightly nicer, although you might still hit the same problem: you need to know the interpretation of the type `T` of a term `t` to know what the type of its interpretation is.
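A minimal sketch of what the relational presentation could look like (constructor-free placeholder inductives; `den_ty` and `den_tm` are made-up names, not MetaCoq definitions):

```coq
From MetaCoq.Template Require Import All.

(* Hypothetical sketch: denotation as a relation. The term-level
   relation [den_tm] can only be stated once the interpretation [A]
   of the type [T] is in hand, which is the dependency noted above. *)
Inductive den_ty (Σ : global_env_ext) (Γ : context)
  : term -> Type -> Prop := .

Inductive den_tm (Σ : global_env_ext) (Γ : context)
  : forall (T : term) (A : Type), term -> A -> Prop := .
```

Even as a relation, the index `A : Type` has to be the interpretation of `T`, so the two relations cannot be defined independently.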

And there is a second big issue which is that to type the interpretation relation/function, you'd need a very big universe – one that is able to contain all the types in your denotation…

Defining denotation as a functional relation effectively amounts to defining denotation as a function on `typing`, which gives simultaneously an interpretation to the context, the type, and the term, right? Or, I guess it also separates out "existence", "uniqueness", and "correctness"? I guess I have not thought much about this part of the problem.

The issues of proof-relevant equality can, I hope, be solved by parameterizing over some sort of "partial gadget of semisimplicial types" (this is a phrase I'm making up now, not something from the literature) which provides enough layers of equality coherence for the typing derivation we happen to be dealing with.

The second big issue is comparatively easy to solve, though, I hope: parameterize over a model of the universe graph of the given environment, which would include enough evidence to guarantee that we have a universe big enough to contain all the types in the denotation. More specifically, I imagine that we'd carry a finite map of universe names to Gallina sorts (maybe we need a finite map of algebraic universe expressions to sorts?), cast functions exhibiting all given cumulativity constraints, interpretation functions of all (co)inductive families and type-valued constants into their corresponding universes, and a universe big enough to contain them all, so we can talk about elements of types.
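One possible shape for such a model, sketched Tarski-style (all field names are hypothetical; only `Level.t`, MetaCoq's type of universe names, is real):

```coq
From MetaCoq.Template Require Import All.

(* Hypothetical sketch of a model of the universe graph of a fixed
   global environment: a big universe of codes, a decoding function,
   and casts standing in for cumulativity. In reality one would only
   require [cast] for the declared constraints l <= l'. *)
Record universe_model := {
  Big     : Type ;                      (* big enough to contain everything *)
  decode  : Big -> Type ;               (* codes to actual Gallina types    *)
  sort_of : Level.t -> Big ;            (* a code per universe name         *)
  cast    : forall l l',
            decode (sort_of l) -> decode (sort_of l')
}.
```

Interpretations of (co)inductive families and constants would then land in `decode (sort_of l)` for the appropriate level `l`, all of which live inside `Big`.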

And, yes, the target would be the meta-theory. My goal here would be to be able to use Template Coq terms for reflective reasoning in the standard way, where we can reify a goal and then `change` the goal with the denotation of the reified term.
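The intended workflow, sketched with a hypothetical `denote` axiom standing in for the missing denotation function (`MetaCoq Quote Definition` is real Template Coq syntax; everything else here is an assumption):

```coq
From MetaCoq.Template Require Import All.

(* [denote] is hypothetical: the denotation function under discussion. *)
Axiom denote : term -> Prop.

(* Reify a statement into a Template Coq [term]. *)
MetaCoq Quote Definition reified := (1 + 1 = 2).

Goal 1 + 1 = 2.
  (* With a real [denote] that computes back to the original
     statement, one could replace the goal by its denotation: *)
  (* change (denote reified). *)
Abort.
```

For `change` to succeed, `denote reified` must be convertible to the goal, which is why a function (which computes) is needed rather than a relation.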

The latter is for sure only going to work with a denotation function, not with a functional relation.

Regarding your idea of parameterizing with some kind of map from universe levels to universes: I believe this would be hard to write in Coq, because universe levels are only second-class entities, so I don't know how we could define such a mapping as a first-class object…

As for the interpretation of a derivation, you are right that we could try and have a "packed" version, which decodes a derivation $\Gamma \vdash t \colon T$ not as a term of type $\llbracket T \rrbracket$ over $\llbracket \Gamma \rrbracket$, but as a triple of a type (interpreting $\Gamma$), a type family over it (interpreting $T$), and a section of that family (interpreting $t$).
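The packed output could be a record along these lines (hypothetical names, not MetaCoq code):

```coq
(* Hypothetical "packed" result of interpreting a derivation
   Γ ⊢ t : T: all three components are produced simultaneously. *)
Record packed_denotation := {
  den_ctx : Type ;                 (* interpretation of Γ            *)
  den_ty  : den_ctx -> Type ;      (* interpretation of T over it    *)
  den_tm  : forall g, den_ty g     (* interpretation of t, a section *)
}.
```

This avoids indexing the interpretation of `t` by a previously computed interpretation of `T`, at the cost of packing everything into one dependent record.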

Yet another issue (I'm just piling up all the difficulties I have in mind, which does not mean they can't be solved), probably related to equality, is that of coherence: because decoding takes in a typing derivation for `t`, it is not obvious a priori that two interpretations of `t` based on two different typing derivations will end up being the same. If you use different conversion rules in different places, you'll end up with different transports and different subterms living in different types, which might not even be equal due to cumulativity. So probably you should at least try to interpret only very canonical derivations, i.e. ones where you have at most one derivation per term (the bidirectional judgments go in this direction).

Last updated: Jul 23 2024 at 20:01 UTC