I can fairly easily get Coq to stack overflow by running reduction tactics on terms that have a large normal form. Is this a known limitation that we just have to deal with, or is this considered a bug (perhaps this can be solved by rewriting the code in continuation passing style)?
I consider this a known limitation. Coq cannot handle term ASTs whose depth exceeds roughly 10000 to 70000 nodes; you can find the exact limit on your system by reducing large nats. There is also a maximum width to ASTs, since you can't nest the types of the head identifier more than 10000--70000 deep (though in theory you could form an application of an identifier to an array of 18014398509481983 terms on a 64-bit machine, or of at most 4194303 terms on a 32-bit machine; these are the maximum array lengths in OCaml according to https://www.systutorials.com/max-array-length-in-ocaml/ ). Granted, a balanced tree of width 1000 and depth 5000 would be quite enormous, and I don't believe Coq's reduction tactics have any trouble with such ASTs.
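The two array-length figures quoted above are OCaml's `Sys.max_array_length` values: 2^54 - 1 on 64-bit systems and 2^22 - 1 on 32-bit systems. Since Coq is implemented in OCaml, its application nodes inherit this bound; the numbers can be checked directly:

```ocaml
(* Sys.max_array_length is the largest array an OCaml program can
   allocate: 2^54 - 1 = 18014398509481983 on 64-bit systems and
   2^22 - 1 = 4194303 on 32-bit systems. *)
let () =
  Printf.printf "max_array_length = %d\n" Sys.max_array_length;
  assert (Sys.max_array_length = (1 lsl 54) - 1     (* 64-bit *)
          || Sys.max_array_length = (1 lsl 22) - 1) (* 32-bit *)
```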
Thanks Jason. Well, I guess you can increase it a bit further by raising the stack size with ulimit, but that also has limits (apparently 2^16 on my system). Looking at the code paths of the reduction functions, it would be quite a lot of work to rewrite them in CPS form. Still, it would be nice if these stack overflows at least did not fatally crash Ltac expressions. Currently, if these reduction tactics occur in hints, they can crash the entire proof search of auto. Perhaps it would be an idea to make Stack_overflow a non-fatal exception?
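To illustrate the CPS rewrite mentioned above: in direct style, a recursive walk over a deeply nested term consumes one stack frame per constructor, while the CPS version makes only tail calls and so runs in constant stack space (the pending work lives in heap-allocated closures instead). The `term` type here is a toy stand-in, not Coq's actual AST:

```ocaml
type term = Leaf | Node of term

(* Direct style: one stack frame per [Node], so a term nested a few
   million deep typically raises Stack_overflow. *)
let rec depth = function
  | Leaf -> 0
  | Node t -> 1 + depth t

(* CPS: the pending "+ 1" is captured in the continuation [k], so both
   the recursive call and the calls to [k] are tail calls, and the
   stack stays flat no matter how deep the term is. *)
let rec depth_cps t k =
  match t with
  | Leaf -> k 0
  | Node t' -> depth_cps t' (fun d -> k (d + 1))

(* Build a term nested [n] constructors deep (tail-recursively). *)
let deep n =
  let rec build acc i = if i = 0 then acc else build (Node acc) (i - 1) in
  build Leaf n

let () =
  assert (depth (deep 100) = 100);
  (* [depth (deep 1_000_000)] would likely overflow the default stack;
     the CPS version handles it without growing the stack. *)
  assert (depth_cps (deep 1_000_000) (fun d -> d) = 1_000_000)
```

This is the shape of the transformation the reduction functions would need, which is why it touches every recursive code path.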
Ah, yeah, stack overflow in reduction tactics should not be a fatal error, I'll open an issue on the bug tracker.
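For reference, `Stack_overflow` is an ordinary OCaml exception that a caller can trap, at least in bytecode and on the common native platforms, though recovering safely also requires that the interrupted code left no shared state inconsistent. A hypothetical guard (`run_guarded` is an illustrative name, not a Coq function):

```ocaml
(* A deliberately non-tail-recursive function that exhausts the stack. *)
let rec blow_up n = 1 + blow_up (n + 1)

(* Trap Stack_overflow and report it as a recoverable error instead of
   letting it abort the whole session. *)
let run_guarded f =
  try Ok (f ()) with Stack_overflow -> Error "stack overflow (non-fatal)"

let () =
  match run_guarded (fun () -> blow_up 0) with
  | Ok _ -> assert false
  | Error msg -> print_endline msg
```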
Last updated: Oct 16 2021 at 03:02 UTC