In MetaCoq erasure (and in Pierre Letouzey's thesis) the target language $\lambda_\square$ is untyped. However, the eventual target languages of Coq extraction (OCaml, Haskell) are usually typed. The previous extraction from CoC (without inductives) targeted System Fω. What is the exact reason for having an untyped intermediate step?

Ok, I'll try to answer it myself, as I see it :)

Since "internal" extraction in the style of the previous Coq extraction from the calculus of constructions (CoC) to System Fω by Christine Paulin-Mohring does not scale to the full calculus of inductive constructions (CIC), it makes sense to let the target language be untyped. This way it can accommodate all of CIC (no restrictions on typing, guardedness, etc.) while still allowing the flexibility of choosing target languages with various type systems. One downside of this flexibility is the need to recover types afterwards, when extracting to typed functional languages.
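To make the untyped intermediate language concrete, here is a minimal sketch of a $\lambda_\square$-style syntax: an untyped lambda calculus with an extra constant $\square$ standing in for erased, computationally irrelevant subterms. The constructor names are illustrative and do not match MetaCoq's actual AST:

```ocaml
(* A hypothetical sketch of lambda-box, the untyped target of erasure.
   Constructor names are illustrative, not MetaCoq's actual definitions. *)
type term =
  | Box                    (* [] : placeholder for erased, irrelevant terms  *)
  | Rel of int             (* de Bruijn variable                             *)
  | Lam of term            (* abstraction; no type annotation: it's untyped  *)
  | App of term * term     (* application                                    *)

(* Erasing, e.g., a function that takes a proof argument and ignores it
   leaves the lambda in place but replaces uses of the proof by Box. *)
let example = App (Lam (Rel 0), Box)

(* A small structural measure, just to exercise the datatype. *)
let rec size = function
  | Box | Rel _ -> 1
  | Lam b -> 1 + size b
  | App (f, a) -> 1 + size f + size a
```

Because the language is untyped, any CIC term has an image here regardless of whether its type survives erasure.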

I wonder if we could provide an intermediate step, for instance extracting to a "gradual/dynamic" simply typed language (replacing the expressivity of dependent types with unsafe coercions from/to a universal type).

Maybe I can try to explain the intuition in a less cryptic way: you can see the current extraction mechanism as replacing dependent types by a single uni-type $\star$ that satisfies all the isomorphisms you may need between type constructors, e.g. $\star \to \star \cong \star$, $\texttt{List }\star \cong \star$, and these are used implicitly everywhere. In a gradual language you manipulate both this universal type $\star$ (often written $\texttt{Dyn}$ or `?`) and the usual type constructors, and relate the two through explicit coercions in a cast calculus. In particular, when compiling from CIC you could retain some structure of the types that are expressible with simple types and hide away the parts that are actually dependent behind coercions to/from the dynamic type.
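A minimal sketch of this idea in OCaml, assuming a hypothetical universal type `dyn` with explicit embed/project coercions (the names `of_int`, `to_int`, etc. are made up for illustration; a real cast calculus would also carry blame labels):

```ocaml
(* A universal type Dyn: every value is tagged with its top-level shape. *)
type dyn =
  | DInt of int
  | DFun of (dyn -> dyn)
  | DList of dyn list

exception Cast_error of string

(* Embeddings into Dyn: always safe. *)
let of_int n = DInt n
let of_fun f = DFun f

(* Projections out of Dyn: the unsafe coercions, failing at run time. *)
let to_int = function DInt n -> n | _ -> raise (Cast_error "expected int")
let to_fun = function DFun f -> f | _ -> raise (Cast_error "expected fun")

(* A function whose simply-typed skeleton is kept (dyn -> dyn), with the
   formerly dependent parts hidden behind the inserted coercions. *)
let succ_dyn : dyn -> dyn = fun d -> of_int (to_int d + 1)
```

For example, `to_int (succ_dyn (of_int 41))` evaluates to `42`, while projecting at the wrong type raises `Cast_error` instead of going wrong silently, which is exactly the discipline the implicit $\star$-isomorphisms lack.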

That could be interesting.

Is your suggestion related to "Gradual Certified Programming in Coq" https://arxiv.org/abs/1506.04205 ?

Thanks @Kenji Maillard for your answer! I agree with @Bas Spitters that it looks very interesting. Maybe this is a principled way of going about extraction from strong type systems like CIC.

In particular, when compiling from CIC you could retain some structure of the types that are expressible with simple types and hide away the parts that are actually dependent behind coercions to/from the dynamic type.

In fact, what we can already do in our extraction is annotate all (or the required) subterms with "erased" types. These erased types are prenex-polymorphic types plus $\square$ for irrelevant parts and $\mathbb{T}$ for types that cannot be expressed as prenex-polymorphic. Our erasure procedure for types is written entirely in Coq and is defined on the PCUIC representation of MetaCoq. Essentially, we follow the way Letouzey describes it in his thesis.
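One hypothetical way to render the erased-type grammar just described as an OCaml datatype (the constructor names are invented for illustration and do not match the actual Coq development):

```ocaml
(* Sketch of the "erased types" grammar: prenex-polymorphic simple types,
   plus Box for irrelevant parts and Any for types that fall outside the
   prenex fragment. Names are illustrative only. *)
type erased_type =
  | TBox                                  (* square: logical/irrelevant parts *)
  | TAny                                  (* T: not prenex-polymorphic        *)
  | TVar of int                           (* prenex-bound type variable       *)
  | TArr of erased_type * erased_type     (* function type                    *)
  | TInd of string * erased_type list     (* applied inductive type           *)

(* e.g. the body of (forall (A : Type), A -> list A), after stripping the
   prenex quantifier, could be rendered as: *)
let example = TArr (TVar 0, TInd ("list", [TVar 0]))
```

A genuinely dependent type, such as a vector indexed by a term-level length, would have its index collapsed to `TAny` in this fragment.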

Bas Spitters said:

Is your suggestion related to "Gradual Certified Programming in Coq" https://arxiv.org/abs/1506.04205 ?

This is one of the points in the cloud of the gradual typing + dependent types literature (more recent ones are "Approximate Normalization for Gradual Dependent Types" or "Gradualizing the Calculus of Inductive Constructions", which is still under review). But I guess that since the targets are not dependently typed, a gradual simple type system should be enough, something along the lines of "Refined Criteria for Gradual Typing", which is *much* easier to handle because it does not have to wrestle with the thorny issues of normalization and computation at typing time.

Last updated: Jun 14 2024 at 18:01 UTC