Coq has nice libraries for interval arithmetic and rigorous polynomial approximations, based on Coquelicot. Is there any reason why this should/could not be done in mathcomp-analysis? Would there be any interest in doing so? Does anything already exist here or in math-comp?
Because the Interval library depends on Flocq, which has not been ported to MathComp yet (and, I guess, is very hard to port)? @Reynald Affeldt
I'm not sure it is hard in the sense of being tricky, but rather in the sense that it requires a non-negligible amount of work.
However, thanks to the Rstruct file of Analysis, results from Analysis and from stdlib-based libraries can be combined. I wouldn't call it comfortable, but it definitely works.
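Something along these lines, for instance (a minimal sketch; the exact Require path and the names of the `RplusE`-style conversion lemmas may vary between versions of Analysis):

```coq
(* Sketch: Rstruct equips the stdlib reals with MathComp's algebraic
   structures, so generic ring lemmas apply to stdlib statements. *)
From Coq Require Import Reals.
From mathcomp Require Import all_ssreflect all_algebra.
From mathcomp Require Import Rstruct.

Local Open Scope ring_scope.

(* a stdlib-flavoured statement proved with a MathComp lemma *)
Lemma demo (x y : R) : Rplus x y = Rplus y x.
Proof. by rewrite RplusE addrC. Qed.
```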
Would there be any interest in doing so?
I did express this interest on another channel and discussed it with other people. If I remember correctly, we identified the port of Flocq as the time-consuming bit.
So there is no reason why using canonical structures vs. typeclasses would make this better or worse? I understand that if there is no specific reason for wanting a Flocq compatible with mathcomp, it might not be worth the time.
Is there a particular theorem you were looking for regarding polynomial approximations? It's one of my favorite topics, so I definitely share some interest in porting (or recreating?) that work.
Zachary Stone said:
Is there a particular theorem you were looking for regarding polynomial approximations? It's one of my favorite topics, so I definitely share some interest in porting (or recreating?) that work.
No theorem at all. I agree that a lot of it is fun, but I was talking with people who use it for certifying numerical approximations (approx-models, coq-interval) and just wondered about mathcomp/mathcomp-analysis. I'm not even sure polynomials work well within mathcomp.
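(To check my own claim: mathcomp does ship a polynomial library in all_algebra; a minimal sketch, with notations from poly.v as far as I recall:)

```coq
From mathcomp Require Import all_ssreflect all_algebra.

Local Open Scope ring_scope.

(* X^2 - 2 as a polynomial over the rationals *)
Definition p : {poly rat} := 'X^2 - 2%:R%:P.

(* Horner evaluation and the formal derivative come out of the box *)
Check p.[3%:R] : rat.
Check p^`() : {poly rat}.
```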
I would love to learn more about verifying floating-point algorithms. Do they care about the "constructiveness" of their theorems? For example, our Intermediate Value Theorem won't extract due to some classical axioms, but there are constructive "approximate" IVT alternatives. To use mathcomp, you'd probably need to write an extractable Flocq implementation and a "pure" one, and prove they agree up to some perturbation in classical logic. We're missing some perturbation theory you'd probably want for that (Taylor's theorem, the implicit function theorem). But I'd be happy to just know what theorems they use in their proofs.
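To make the "agree up to some perturbation" idea concrete, one possible shape for such a specification (just a sketch over stdlib reals and rationals; the record, its field names, and the 2^-n bound are made up for illustration):

```coq
(* Sketch only: a computable (extractable) approximation of a real
   function, packaged with a classical proof of an error bound. *)
From Coq Require Import Reals QArith Qreals.
Open Scope R_scope.

Record approx (f : R -> R) := {
  (* the extractable part: a rational answer at precision n *)
  compute : Q -> nat -> Q;
  (* the pure specification: the answer is within 2^-n of f *)
  compute_spec : forall (q : Q) (n : nat),
    Rabs (Q2R (compute q n) - f (Q2R q)) <= / 2 ^ n
}.
```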
I don't think they need constructiveness; they are, however, interested in specific approximations for certain problems rather than general approximation theory, I guess. I don't know enough of that theory to understand why we would need the implicit function theorem. @Guillaume Melquiond should be the one to answer here.
Flocq does not care about constructiveness. In fact, it cares about non-constructiveness, since it critically depends on the existence of a truncation function. As for the analysis theorems Flocq requires, the answer is basically none. Outside of arithmetic, the most "advanced" fact it depends on is the existence of an increasing morphism from (R+,1,beta,*) to (R,0,1,+), i.e., the logarithmic functions.
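Spelled out, that fact is essentially the following (a sketch over stdlib reals, not Flocq's actual formulation):

```coq
From Coq Require Import Reals.
Open Scope R_scope.

(* an increasing morphism sending (R+, 1, beta, *) to (R, 0, 1, +),
   i.e. a logarithm in base beta *)
Record log_base (beta : R) (f : R -> R) : Prop := {
  log_of_one  : f 1 = 0;
  log_of_base : f beta = 1;
  log_mul     : forall x y, 0 < x -> 0 < y -> f (x * y) = f x + f y;
  log_incr    : forall x y, 0 < x -> x < y -> f x < f y
}.

(* e.g. fun x => ln x / ln beta satisfies this whenever 1 < beta *)
```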
Ah, thank you. That makes sense. MathComp Analysis has some neat features that are probably still relevant, such as automating proofs of positivity. By number of lines, the bulk of mathcomp-analysis depends on either measure.v or topology.v. It sounds like you're saying Flocq never mentions integration or continuity, so perhaps there just isn't much overlap in the goals? Sounds like I should read up on this.
I wonder if formalizations of computable analysis (e.g. https://holgerthies.github.io/continuity/) are better suited for floating-point algorithms, via exact real arithmetic?