I started to look into mathcomp and I am a bit confused about the naming of number types, specifically `Num.RealField`. Do I understand correctly that this does not require a closed field, so that Q would be a `Num.RealField`? If so, is there an instantiation of `Num.RealField` with an efficiently computing (native-int-based) Q type?

Also, is there an instantiation of mathcomp's `RealClosedField` with the CoRN constructive reals? If not, is this considered difficult, or has it just not been done yet?

Hi @Michael Soegtrop, $\mathbb{Q}$ is indeed a `Num.RealField`, and there could indeed be an instantiation with an efficiently computing `Q` type. My personal take is that it should not be done; a refinement approach should be used instead when we want efficient computations. The rationale is that using several types to denote the same mathematical object can only lead to API confusion and poor proof reuse: it is less costly to have the discipline of always using the same type for proofs, and to translate just for computations.
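As a quick sanity check, the instance can be queried directly in Coq (a minimal sketch; the exact idiom depends on the MathComp version):

```coq
From mathcomp Require Import all_ssreflect all_algebra.

(* MathComp 1.x idiom: display the canonical realFieldType structure on rat. *)
Check [realFieldType of rat].

(* With MathComp 2.x (Hierarchy Builder) one would instead write:
     Check (rat : realFieldType). *)
```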

On the other hand, constructive reals would not be an instance of a mathcomp `rcfType`, because the latter requires at least decidable equality, so it's not only difficult, it's just not possible while remaining axiom-free. Further improvements of mathcomp could try to relax that, but I'm not sure there is an agreement among constructive mathematicians on what exactly the axiomatics of a constructive non-discrete real closed field should be...

@Cyril Cohen I am not so sure how I would do such a refinement. To get concrete: I am currently looking into https://github.com/Coq-Polyhedra/Coq-Polyhedra/blob/master/theories/simplex.v, which, as far as I understand, defines a conventional (but proven correct) simplex algorithm for solving linear programming optimization problems. The type of the resulting function is:

```
Simplex.simplex
     : forall (R : realFieldType) (m n : nat),
       'M_(m, n) -> 'cV_m -> 'cV_n -> Simplex.simplex_final_result R m n
```

Now I would like to actually compute the solution of an LP problem with this function; as far as I understand, it should compute if its field type does. What method would you recommend for doing this? My assumption was that I would somehow instantiate the parameter R with a realFieldType structure on Q, but if I understood you correctly, there is a better way of doing this.
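To illustrate that assumption, here is a hedged sketch of the direct instantiation at `rat`. The simplex function is abstracted as a section variable with the type quoted above, so this is not Coq-Polyhedra's actual code:

```coq
From mathcomp Require Import all_ssreflect all_algebra.

Section SimplexAtRat.
(* Stand-ins for Coq-Polyhedra's Simplex.simplex_final_result and
   Simplex.simplex, abstracted here so the sketch is self-contained: *)
Variable simplex_final_result : realFieldType -> nat -> nat -> Type.
Variable simplex : forall (R : realFieldType) (m n : nat),
  'M[R]_(m, n) -> 'cV[R]_m -> 'cV[R]_n -> simplex_final_result R m n.

(* Concrete rational input data for the LP: *)
Variables (m n : nat) (A : 'M[rat]_(m, n)) (b : 'cV[rat]_m) (c : 'cV[rat]_n).

(* rat is canonically a realFieldType, so instantiation is just application;
   the underscores are inferred as rat's canonical structure and the
   dimensions m and n. *)
Definition rat_result := simplex _ _ _ A b c.
End SimplexAtRat.
```

Whether `vm_compute` then actually reduces the result depends on the definitions in `simplex.v` being computation-friendly, which is exactly where the refinement discussion comes in.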

Regarding constructive reals: it might be worthwhile to have structures which do not require decidable equality. E.g. for the simplex algorithm it should be sufficient to ask whether x > y with limited effort or precision, so that the answer might be "don't know". Edges of the polyhedron whose comparison is "don't know" one would follow anyway, but one would have to track such edges so that one doesn't visit them twice. Of course this might lead to a situation in which the solution contains several points which are mutually unordered.

Right now I round reals to rationals if needed, but tracking the rounding through longer computations is tricky. If one works with intervals, one has the same "don't know" comparisons as with constructive reals; otherwise one has to do a complicated error-propagation analysis. Whether something like a simplex algorithm would actually work with constructive reals is an interesting question I would like to explore.

Pinging @Xavier Allamigeon and @Quentin Canu who might be interested in this discussion.

If I were to do the refinement myself, I would probably go for CoqEAL, which already offers a refinement of MathComp's `rat` by `bigQ`: https://github.com/CoqEAL/CoqEAL/blob/master/refinements/binrat.v (but I might be biased).

@Pierre Roux : thanks for the pointer! I came across CoqEAL a few times, but never really had a deeper look, mostly because I didn't understand what it really is about. Can you shed some light on this?

Trying to sum up in two lines: when you have two types A and C, where A is convenient for proofs but doesn't really compute and C computes efficiently but is a nightmare for proofs, CoqEAL offers machinery to refine programs on A to programs on C. It also offers a few ready-made refinements such as int/Z, rat/bigQ, and matrices/lists of lists. This allows one to conduct all proofs on the program on A and then change it to the one on C before computing.

Indeed, a refinement here is usually a pair of relations `R_i` and `R_o` such that, given two programs `f : I -> O` and `f_r : I_r -> O_r`, the following holds:

`forall i1 i2, R_i i1 i2 -> R_o (f i1) (f_r i2)`
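The statement can be written out in plain Coq; all names (`In`, `Out`, `R_i`, `R_o`, `f`, `f_r`) are purely illustrative:

```coq
Section Refinement.
(* Proof-oriented types In/Out and computation-oriented types In_r/Out_r,
   related by R_i on inputs and R_o on outputs. *)
Variables (In Out In_r Out_r : Type).
Variables (R_i : In -> In_r -> Prop) (R_o : Out -> Out_r -> Prop).
Variables (f : In -> Out) (f_r : In_r -> Out_r).

(* f_r refines f: related inputs are mapped to related outputs. *)
Definition refines : Prop :=
  forall i1 i2, R_i i1 i2 -> R_o (f i1) (f_r i2).
End Refinement.
```

CoqEAL packages this property as a type class and provides instances for its stock refinements, so composed programs can be refined mostly automatically.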

@Pierre Roux : thanks for the explanation! I somehow didn't see the link between the various papers and developments in CoqEAL - now it is clear!

Hi @Michael Soegtrop, if you're interested in solving linear programs, I would recommend certifying a pair of primal/dual solutions precomputed by an LP solver (not formally proven) rather than trying to obtain a formally proven simplex algorithm from Coq-Polyhedra and CoqEAL.

In short, you just need to check that the value of the primal solution equals the value of the dual one.

(and that both are primal/dual feasible)
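A hedged sketch of that check in MathComp terms (this is not Coq-Polyhedra's API; I assume the convention max c^T x subject to A x <= b, with dual min b^T y subject to A^T y = c, y >= 0):

```coq
From mathcomp Require Import all_ssreflect all_algebra.
Open Scope ring_scope.

Section Certificate.
Variables (R : realFieldType) (m n : nat).
Variables (A : 'M[R]_(m, n)) (b : 'cV[R]_m) (c : 'cV[R]_n).
(* Candidate primal solution x and dual solution y, e.g. produced by an
   unverified external LP solver and converted to exact rationals. *)
Variables (x : 'cV[R]_n) (y : 'cV[R]_m).

Definition primal_feasible := [forall i, (A *m x) i ord0 <= b i ord0].
Definition dual_feasible :=
  (A^T *m y == c) && [forall i, 0 <= y i ord0].

(* By weak duality, c^T x <= b^T y holds for any feasible pair, so equality
   of the two objective values certifies optimality of both x and y. *)
Definition certifies_optimum :=
  [&& primal_feasible, dual_feasible
    & (c^T *m x) ord0 ord0 == (b^T *m y) ord0 ord0].
End Certificate.
```

All three definitions are boolean, so at `rat` (or, via refinement, `bigQ`) the certificate can be checked by computation.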

@Xavier Allamigeon: my main problem is that some mainstream LP solvers fail rather frequently on my simple everyday problems (say 5 variables and 10 inequalities). I have many cases where the respective routines of e.g. Matlab and a Python wrapper for GLPK fail, that is, give wrong solutions, frequently without even producing a warning. Interestingly, GLPK also seems to give slightly (but for me substantially) off solutions in the "exact" variant, which is supposed to use rationals; I guess the rounding from the float input is a bit too sloppy for my application, and I couldn't figure out how to adjust this in the Python wrapper. Anyway, inspired by the computational performance that e.g. Coq-interval demonstrates inside Coq, I thought a simplex algorithm written in Coq might be more efficient than using something which fails every other day and having to investigate these failures, or, even worse, not noticing and using a wrong solution in a production design. If this is of interest to you, I can supply examples, including the Python and Matlab code.

The only LP code which works reliably for me is the Maxima one, when all inputs are rational up front, and even this only after some discussions with the authors which led to a bug report. I would expect that Coq is not much slower than an interpreted language written in Lisp, and Maxima can solve my problems in a few milliseconds; the rendering of the output typically takes longer than computing it.

This is not the only case in which mainstream math tools give wrong answers quickly rather than taking a bit more time to give the correct answer. I think that computers are meanwhile fast enough to switch to tools which always give the correct answer. Sure, an expert in LP solvers could probably tweak parameters (which the GLPK manual suggests to more or less never touch) so that they give correct results, but I have enough work to do without bothering about wrong solutions of LP solvers.

@Michael Soegtrop I'm quite surprised that LP solvers fail, especially on such small instances. Most solvers (e.g. in Matlab) are not based on the simplex method but rather on interior-point methods for more general convex programming. This can explain why they return incorrect results on badly conditioned instances (numerically speaking). But GLPK should work. You may also try other solvers like CPLEX. In any case, don't hesitate to share some examples.

Let me also mention that, with @Quentin Canu and @Pierre-Yves Strub, we're currently working on the design/extraction/proof of computationally efficient algorithms from Coq-Polyhedra. For now we focus on the vertex enumeration problem rather than on LP, but extracting the simplex method is also an interesting problem.

@Xavier Allamigeon : sorry for the delay - I had to find a quiet hour to put something together. I summarized it in this gist. Please first look at the file "Description.txt".

The example is piecewise Linf approximation of sin and cos by a degree-3 polynomial with a varying number of pieces, with implementations in Maxima, Matlab and Python (GLPK). The results are astonishingly bad for such a simple task (unless I messed up the code; I am not that fluent in Matlab and Python). The Maxima results are correct as far as I can tell.

Last updated: Jan 29 2023 at 18:03 UTC