Hi.

I was trying to find the transpose of a matrix implemented as vector of vectors.

How would we go about that?

I tried:

```
Fixpoint trans {A : Type} {rows cols : nat}
    (mat : matrix A rows cols) : matrix A cols rows :=
  match mat with
  | [] => []
  | _ =>
    let rw := Vector.map hd mat in
    let mat' := Vector.map tl mat in
    rw :: trans mat'
  end.
```

and got errors all over.

Or would it be better to implement with lists and convert to vector only as needed?
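For reference, a sketch of a `Vector`-based transpose that does typecheck, recursing on the outer vector and using `Vector.map2` to cons each row element onto the transposed tail (an illustrative sketch, not from the thread; `transpose` is a made-up name):

```
From Coq Require Import Vector.
Import VectorNotations.

(* Transpose by structural recursion on the outer vector:
   the nil case yields m empty rows; the cons case zips the
   head row onto the columns of the transposed tail. *)
Fixpoint transpose {A : Type} {n m : nat}
    (mat : Vector.t (Vector.t A m) n) : Vector.t (Vector.t A n) m :=
  match mat in Vector.t _ n return Vector.t (Vector.t A n) m with
  | [] => Vector.const [] m
  | row :: rest => Vector.map2 (fun x col => x :: col) row (transpose rest)
  end.
```

The `in`/`return` annotation is the part the attempt above is missing: it tells Coq how the result type depends on the length being matched.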

The usual advice is: avoid `Vector` entirely and/or switch to Equations and/or use something like mathcomp's `tuple`, basically `tuple A n := { xs : list A | length xs = n }`

Does using Equations alone help? I'm not familiar with it, but isn't that meant to make proofs involving recursion easier, or something along those lines?

It also helps with dependent pattern matching

Actually, here's an option: https://coq.discourse.group/t/proving-matrix-transpose-function/803/10?u=blaisorblade

mathcomp already has some definitions and proofs involving matrices, right?

https://github.com/math-comp/math-comp/blob/master/mathcomp/algebra/matrix.v

Can they be reused just like that? For example, for finding the transpose of a matrix represented as a list of lists?

Or are they meant to be used only within mathcomp? Any idea?

Does anyone know of some examples of using the mathcomp version of `matrix`? The docs by themselves look a bit intimidating at first.

Feels like one has to know many other mathcomp definitions before one can start to use the matrix definition.

One can read Section 7.8 of the MathComp book: https://zenodo.org/record/7118596. Note that mathcomp matrices are abstract functions, not lists of lists, but CoqEAL offers a refinement enabling you to develop all proofs with the nice mathcomp matrices and refine them to lists of lists if you need to perform actual computations: https://github.com/coq-community/coqeal/ (CoqEAL itself is unfortunately not very well documented)

https://proofassistants.stackexchange.com/questions/2104/how-to-prove-this-correctness-principle-of-transposition-of-lists-of-lists-in-co is relevant if you are ok with lists of lists
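If plain lists of lists are acceptable, a common total definition can be sketched with a helper that conses a row onto each column (an illustrative sketch; `zip_cons`/`transpose` are made-up names, and the result is only the intended transpose for rectangular inputs):

```
From Coq Require Import List.
Import ListNotations.

(* Cons each element of xs onto the corresponding column of yss,
   starting fresh columns when yss runs out. *)
Fixpoint zip_cons {A : Type} (xs : list A) (yss : list (list A))
  : list (list A) :=
  match xs, yss with
  | [], _ => yss
  | x :: xs', [] => [x] :: zip_cons xs' []
  | x :: xs', ys :: yss' => (x :: ys) :: zip_cons xs' yss'
  end.

Fixpoint transpose {A : Type} (m : list (list A)) : list (list A) :=
  match m with
  | [] => []
  | row :: rest => zip_cons row (transpose rest)
  end.

(* e.g. transpose [[1;2;3];[4;5;6]] = [[1;4];[2;5];[3;6]] *)
```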

Pierre Roux said:

Note that mathcomp matrices are abstract functions, not lists of lists

Thanks. Does that mean we cannot extract (as in, to OCaml) those matrix elements in any way?

djao said:

https://proofassistants.stackexchange.com/questions/2104/how-to-prove-this-correctness-principle-of-transposition-of-lists-of-lists-in-co is relevant if you are ok with lists of lists

How is the matrix constraint being satisfied in this version of `transpose`?

https://gist.github.com/davidjao/1a857a3c2d39dfec5cde0339d742be10#file-awesomelistlemmas-v-L266

Matrix seems to be represented just as `list (list A)` without any constraint (or am I missing something?).

Wouldn't that cause problems?

And how does CoLoR look on the matrix front? I mean, is it easy to work with?

@Julin Shaji the key question about matrices is whether you want a representation that is "extensional", that is to say, two matrices A and B are equal in Coq iff `forall i j, A i j = B i j`

This is super-convenient for proving (and defining) things; transpose is just `tr A := \matrix_(i, j) A j i`. However, it doesn't yield the representation you'd expect (sadly, as there is no deep reason for that)

When that happens, there are several choices; a sensible one IMHO is to use program refinement to do your proofs on the extensional representation, but computations on a lower-level one. See the work on CoqEAL.

Emilio Jesús Gallego Arias said:

Julin Shaji the key question about matrices is whether you want to find a representation that is "extensional", that is to say, two matrices A B are equal in Coq iff `forall i j, A i j = B i j`

Probably a trivial question, but if `forall i j, A i j = B i j` is an extensional proof, how would a more involved one look?

(Is that called an 'intensional' proof or something?)

Ah.. I guess that means it has to be exactly similar. Constructor-wise itself.

As in not just the meaning, but the structure itself has to be same as well.

Yes, so that's why it's a bit tricky

Let me write lemma more exactly: `Lemma matrixP R n m (A B : matrix R n m) : (A = B) <-> (forall i j, A i j = B i j).`

So you use this lemma to prove a lot of stuff; for example, you can trivially show with it that matrix addition is associative and commutative, etc...

(if the underlying operator on `R` is, of course)

Why's extensionality hard? Doesn't it just work with lists of lists and partial lookups?

(of course, with functions you'd need funext)
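Spelled out, the functions-plus-funext flavour (the style attributed to QWIRE above) might look like this minimal sketch, with unbounded `nat` indices and made-up names `fmatrix`/`tr`:

```
From Coq Require Import FunctionalExtensionality.

(* Matrices as total functions; transpose is just argument swapping. *)
Definition fmatrix (A : Type) : Type := nat -> nat -> A.

Definition tr {A : Type} (M : fmatrix A) : fmatrix A := fun i j => M j i.

(* Equality of such matrices needs funext, applied once per argument. *)
Lemma tr_involutive {A : Type} (M : fmatrix A) : tr (tr M) = M.
Proof.
  apply functional_extensionality; intro i.
  apply functional_extensionality; intro j.
  reflexivity.
Qed.
```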

@Paolo Giarrusso for simple cases it may be easy, but think for example of more complex matrix representations for calculus, that may be sparse, use non-canonical pasting techniques, etc...

Also defining the "partial lookup" may be tricky, a subtype works but you need to use something like math-comp infra to make it usable IMVHO

`lookup : nat -> list A -> option A`, or in this case, probably `lookupMat : nat -> nat -> matrix A -> option A` — that's enough to get `(A = B) <-> (forall i j, lookupMat i j A = lookupMat i j B)`.

not saying that's enough for all cases, but sparse matrices seem off-topic in this thread.
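Concretely, the `lookupMat` being discussed could be sketched over bare lists of lists like this (hypothetical definition, not from the thread):

```
From Coq Require Import List.

(* Partial lookup: None when either index is out of range.
   Row i first, then column j within that row. *)
Definition lookupMat {A : Type} (i j : nat) (m : list (list A))
  : option A :=
  match nth_error m i with
  | Some row => nth_error row j
  | None => None
  end.
```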

Would that definition admit a group / nonCom ring structure?

Yup that def works, but may not be fully usable, depending on your context. I see other efforts like QWIRE went and defined matrices as functions (+ funext), so indeed YMMV.

I'd rephrase my thoughts on that as: "you need a usable matrix definition for your case; it may or may not be extensional; it is common that extensional representations are not good for computing, and non-extensional representations make reasoning too hard, so in that case you'd have to use program refinement techniques"

@Julin Shaji I guess the key question is how much are you going to use your matrices, is this a small exercise or do you plan to use this a lot in a large project? The more you use it, the more it makes sense to invest in advanced techniques

e.g. math-comp definitely has a learning curve

Yes, the math students I've taken on here usually become productive with math-comp matrices from zero in about 2 months, but we code together often

It has a learning curve, and their representation doesn't compute without CoqEAL

that's not an easy obstacle

(and to Emilio's question, I'm not 100% sure lists of lists form a group)

[I'm getting offtopic, but indeed I was trying to figure out how much simpler you can make an extensional matrix representation and still keep most of the nice properties of math-comp's matrices, like the instances for algebraic structures; for example, you can benefit from the fact that matrices with positive dimension over a (non-)commutative ring are automatically understood by math-comp to be a nc ring. See this simpler example with an abelian group:

```
From mathcomp Require Import all_ssreflect all_algebra.
Import GRing.Theory.
Section MxEx.
Variable (R : zmodType).
Open Scope ring_scope.
Lemma mxA n m (A B : 'M[R]_(m,n)) :
  A + B = B + A.
Proof. by rewrite addrC. Qed.
End MxEx.
```
]

Paolo Giarrusso said:

(and to Emilio's question, I'm not 100% sure lists of lists form a group)

I guess they would, but to make them into a ring you'd need a constraint on size, as you need `0 != 1`, and that's not possible for the empty matrix

in math-comp this is done nicely by defining the instance to be over (square) matrices for all sizes n, of true size `n.+1`

A very nice trick when combined with bidirectional type checking but that's offtopic.

Paolo Giarrusso said:

Julin Shaji I guess the key question is how much are you going to use your matrices, is this a small exercise or do you plan to use this a lot in a large project? The more you use it, the more it makes sense to invest in advanced techniques

I just need a representation of matrices which can be defined over a semiring. But I need it to be extractable.

Just matrix addition and multiplication, padding matrices with blocks to make them square, and the like (I'm not sure whether I described that properly).
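For the extractability requirement, a list-of-lists representation does extract directly; a minimal sketch (not CoqEAL's actual interface; `map2` and `madd` are made-up names, with `nat` standing in for a semiring carrier):

```
From Coq Require Import List Arith Extraction.
Import ListNotations.

(* Elementwise combination, truncating to the shorter list. *)
Fixpoint map2 {A B C : Type} (f : A -> B -> C)
    (xs : list A) (ys : list B) : list C :=
  match xs, ys with
  | x :: xs', y :: ys' => f x y :: map2 f xs' ys'
  | _, _ => []
  end.

(* Matrix addition as nested elementwise addition. *)
Definition madd (a b : list (list nat)) : list (list nat) :=
  map2 (map2 Nat.add) a b.

(* Prints the corresponding OCaml code. *)
Extraction Language OCaml.
Recursive Extraction madd.
```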

CoqEAL's refinement stuff sounds way better than what I had been trying till now (`Vector.t A n`).

I can get familiar with mathcomp notations, I think. But I guess the style in which proofs are done needs some time to get familiar with.

I'm wondering how `\matrix` can be refined down to a vector form.

I guess it first needs to be converted to a representation in terms of `seq`?

`M := \matrix_(i, j) A i j` is used as a function, so you can just init a `seq` using a comprehension.

```
From mathcomp Require Import all_ssreflect all_algebra.
Section MxEx.
Variable (R : zmodType).
Open Scope ring_scope.
Definition a n (A : 'cV[R]_n) : n.-tuple R :=
  [tuple A i 0 | i < n].
End MxEx.
```

Julin Shaji said:

I'm wondering how `\matrix` can be refined down to a vector form.

I have written one such implementation to move back and forth between ssreflect and Coq [1].

[1] https://github.com/mukeshtiwari/secure-e-voting-with-coq/blob/master/Groups/Matrix.v

Isn't it easier to go from MathComp matrices to lists to vectors (or the other way around)? https://github.com/coq-community/coqeal/blob/master/refinements/seqmx.v

Last updated: Jun 22 2024 at 15:01 UTC