The question is as follows: I imagine it is possible to create some form of Bnat, an infinite sequence of bits, with an induction principle that says P Zero -> (forall (n : Bnat), P n -> P (n + 1)) -> forall n, P n. I have seen this done for complex data structures like sets, where the whole implementation is efficient but the induction principle is high level. So why don't we have this efficient Bnat in Coq's standard library?
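For concreteness, a rough sketch of the kind of definition being described (Bnat, bsucc and the statement below are made-up names for illustration; Coq's actual positive/N types are a polished version of the same idea):

(* Sketch only: a made-up binary representation; note it even has redundant
   representations of zero, which Coq's positive/N avoid. *)
Inductive Bnat : Type :=
| bzero : Bnat                  (* 0 *)
| bdouble : Bnat -> Bnat        (* 2 * n *)
| bdouble1 : Bnat -> Bnat.      (* 2 * n + 1 *)

Fixpoint bsucc (n : Bnat) : Bnat :=
  match n with
  | bzero => bdouble1 bzero
  | bdouble m => bdouble1 m
  | bdouble1 m => bdouble (bsucc m)
  end.

(* The hoped-for Peano-style induction principle, stated over Bnat: *)
Definition Bnat_peano_ind_statement : Prop :=
  forall P : Bnat -> Prop,
    P bzero ->
    (forall n : Bnat, P n -> P (bsucc n)) ->
    forall n : Bnat, P n.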
Peano nats are still usually more convenient for proofs. It's still convenient to be able to do case analysis. It can also be convenient for small-scale reflection (although this is better exploited in ssreflect than in the stdlib). Anyway, don't expect everything in the stdlib to be state of the art; it's just the original library of Coq, which grew organically with it over decades.
The standard library does have a high-level induction principle for binary nats (N.peano_rect), though of course those are unbounded finite sequences. Proofs are still more annoying, since the reduction rules you actually want don't hold definitionally.
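For reference, this is what that stdlib principle looks like (the comment shows roughly what Coq prints):

Require Import NArith.

Check N.peano_rect.
(* N.peano_rect
     : forall P : N -> Type,
       P 0%N ->
       (forall n : N, P n -> P (N.succ n)) ->
       forall n : N, P n *)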
And yet, you can work around this (mostly) by stating those rules for N.zero and N.succ, proving them, and using rewriting. Rewriting is much slower than simpl, but for small proofs it might be tolerable, and for large proofs you often need binary nats anyway.
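A small sketch of that workaround, using the stdlib lemma N.peano_rect_succ; double2 is just a toy function invented for illustration:

Require Import NArith.
Open Scope N_scope.

(* A toy function defined with the Peano-style recursor on N. *)
Definition double2 (n : N) : N :=
  N.peano_rect (fun _ => N) 0 (fun _ acc => acc + 2) n.

(* The base case happens to compute, but the successor equation only holds
   propositionally for an abstract n, so it is proved once and then used
   with rewrite rather than simpl. *)
Lemma double2_0 : double2 0 = 0.
Proof. reflexivity. Qed.

Lemma double2_succ : forall n, double2 (N.succ n) = double2 n + 2.
Proof. intros n. unfold double2. now rewrite N.peano_rect_succ. Qed.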
You can't, for instance, easily use nat for C verification: reducing 2 ^ 64 will surely DoS Coq.
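A quick way to see the difference (the binary version should answer instantly, while the unary one would have to build a term with 2^64 constructors):

Require Import NArith.

Compute (2 ^ 64)%N.   (* should print 18446744073709551616 immediately *)
(* Compute (2 ^ 64).     -- the same on unary nat: do not try this *)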
You may not be able to use binary nats as indices, e.g. for the size of vectors.
With lists/vectors, it seems unary nats add at most a constant factor to operation cost. And at least for fin.t, binary indexes work fine using sigma types
(I can link the implementation of fin.t : N -> Type we use at bedrock)
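Not the actual code, but the idea is roughly a dependent sum; finN is a made-up name here:

Require Import NArith.

(* A binary-indexed finite type as a sigma type: an N together with a proof
   that it is below the bound. *)
Definition finN (n : N) : Type := { m : N | (m < n)%N }.

(* Recovering the underlying index is just the first projection. *)
Definition finN_to_N {n : N} (i : finN n) : N := proj1_sig i.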
I see, I was curious, and good to know that even with a nicer induction, binary nats won't work as an index for vectors.
I guess if you wanted to have vectors indexed by binary nats, then you'd want some "binary vectors": the reason why vectors play rather nicely with unary nats is that a vector and its index have the same structure, so in a lot of situations the two are "in sync": typically you can match on one when you can match on the other, and the pattern matchings will reduce at the same time. So if you wanted the same with binary nats, you'd probably want "vectors" which somehow mimic the structure of binary nats, to get these niceties again. Although then your vectors would really look more like some sort of binary tree, I guess.
but I fail to see the problem with
Inductive vect (T : Type) : bnat -> Type :=
| vnil : vect T 0
| vcons : forall (t : T) (n : bnat), vect T n -> vect T (1 + n).
Obviously, this is not a complete example, but what I see is that vect will also require its own custom induction principle / recursor. But it should work and be as flexible as a normal vector, right?
The issue is that you will not be able to easily write a function concat : vect m -> vect n -> vect (m + n). (But anyway, indexed vectors are quite awful in comparison with dependent sums.)
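To see why, here is the unary-nat analogue of the definition above (standalone names, written as a sketch); it typechecks only because addition on unary nats reduces by matching on its first argument, in step with the match on the vector:

Inductive vectn (A : Type) : nat -> Type :=
| vniln  : vectn A 0
| vconsn : forall (a : A) (n : nat), vectn A n -> vectn A (S n).

Fixpoint concatn {A : Type} {m n : nat}
    (v : vectn A m) (w : vectn A n) {struct v} : vectn A (m + n) :=
  match v with
  | vniln _ => w
  | vconsn _ a _ v' => vconsn A a _ (concatn v' w)
  end.

(* The nil branch needs 0 + n to reduce to n, and the cons branch needs
   S m' + n to reduce to S (m' + n); both hold by computation for nat.
   With a binary index, 1 + n is stuck for an abstract n, so the analogous
   definition is rejected unless you insert casts along proved equations. *)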
Meven Lennon-Bertrand said:
Although then your vectors would really look more like some sort of binary tree, I guess.
I think the appropriate structure is a snoc-list of leaf-labelled complete binary trees of strictly decreasing sizes. I think there's a sort of merge operation on these structures which is reasonably simple and efficient (following addition of binary numbers), but I don't think it does anything very sensible with the order of elements.
I suppose you could also arrange it as a complete node-labelled binary tree, but where some layers are empty.
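One possible reading of that structure, as a rough sketch (names invented; the "strictly decreasing sizes" invariant of the outer snoc-list is left informal):

(* Leaf-labelled complete binary trees, indexed by depth so completeness
   holds by construction: a tree of depth d stores 2^d elements. *)
Inductive ctree (A : Type) : nat -> Type :=
| cleaf : A -> ctree A 0
| cnode : forall d, ctree A d -> ctree A d -> ctree A (S d).

(* The container: a snoc-list of complete trees, intended to be kept with
   strictly decreasing depths (one tree per set bit of the length); that
   ordering invariant is not enforced by this sketch. *)
Inductive bseq (A : Type) : Type :=
| bnil  : bseq A
| bsnoc : bseq A -> { d : nat & ctree A d } -> bseq A.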
the reason why vectors play rather nicely with unary nats is that a vector and its index have the same structure
That, and the fact that if the length is N, representing it as a binary nat may reduce its cost to O(log N), but the cost of the vector would still be O(N), so the gain is elusive.
@Guillaume Melquiond well vectors as lists packed with a proof of bounded length do not nest into trees unfortunately...
I suppose that you are talking about trees recursively defined as vectors of trees. If so, sure. But I would nevertheless go for a dependent sum, by moving the proof term to the root of the tree (so that trees are defined as lists of trees). That is less modular and elegant than having vectors at the nodes, but it should still prove much simpler to use.
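A minimal sketch of that suggestion (all names invented; arity_ok stands in for whatever well-formedness condition is actually needed):

Require Import List.
Import ListNotations.

(* The recursive structure uses plain lists; no indices inside the tree. *)
Inductive rawtree (A : Type) : Type :=
| RNode : A -> list (rawtree A) -> rawtree A.

(* A single well-formedness predicate, e.g. "every node has exactly k children". *)
Fixpoint arity_ok {A : Type} (k : nat) (t : rawtree A) : Prop :=
  match t with
  | RNode _ _ children =>
      length children = k /\
      (fix all (l : list (rawtree A)) : Prop :=
         match l with
         | [] => True
         | c :: l' => arity_ok k c /\ all l'
         end) children
  end.

(* The proof term sits at the root as a dependent sum, instead of being
   threaded through every node as a type index. *)
Definition ktree (A : Type) (k : nat) : Type := { t : rawtree A | arity_ok k t }.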