Hi all, I have a question on the use of bigop and the direct sum of modules.

I'm working on building a library which extends the ssralg library, in particular lmodType (the type of left modules over a ring). At the moment I'm looking into how to express arbitrary direct sums. My approach is to use the built-in pair_lmodType type, which equips the Cartesian product of two modules with the appropriate module structure, and to generalise this to lists of modules and then to finite functions of modules.

However, as I don't wish to reinvent the wheel whenever I need a wheel, I thought about using the bigop library to extend pair_lmodType to finite collections of modules instead. The idea is that pair_lmodType is now a binary operation on the type of all lmodTypes (over a fixed ring), which forms a monoid. Unfortunately, the Cartesian product is only a monoid up to module isomorphism, so as a hack I've declared an axiom, which I call CosetIrrelevance, that identifies two modules whenever there is an isomorphism between them.
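Roughly, the axiom looks something like this (a sketch of my setup; the exact phrasing of the isomorphism may differ in my actual code):

```
From mathcomp Require Import all_ssreflect all_algebra.

(* Sketch of the hack: identify any two modules over the same ring
   that are related by a bijective linear map.  This is the axiom
   I would rather avoid. *)
Axiom CosetIrrelevance :
  forall (R : ringType) (M N : lmodType R),
    (exists f : {linear M -> N}, bijective f) -> M = N.
```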

From a maths point of view I can't foresee any problems with this, as we frequently ignore the distinction between equality and isomorphism; however, I imagine this is considered bad practice from a programming perspective (for good reasons).

So anyway (sorry to ramble), my questions are: is this definitely a bad approach? And if so, what would be a better alternative? I've looked a little at setoids, but I'm not sure how to incorporate them into bigop. Has anyone else tackled this problem?

Many thanks,

Daniel

Hi, I'm not too familiar with modules, so I cannot comment on that part. However, I have run across a similar problem when dealing with graphs and their isomorphisms. In my case, the axiom that graph isomorphism implies equality would be inconsistent. I use finite types for states, and those are inherently ordered (via `enum`). Hence, one can have two isomorphic graphs that can be distinguished by the degree of the "smallest" vertex. I wouldn't be surprised if a similar argument were to apply to modules.
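To illustrate the argument (the encoding below is ad hoc, just for this message; our library represents graphs differently):

```
From mathcomp Require Import all_ssreflect.

(* Graphs on the fixed vertex type 'I_3, as symmetric edge relations:
   gA has the single edge 0-1, gB the single edge 1-2.  They are
   isomorphic (swap vertices 0 and 2), yet the first vertex of
   enum 'I_3 has degree 1 in gA and degree 0 in gB, so gA <> gB,
   and "isomorphic implies equal" would let us derive False. *)
Definition gA := [rel x y : 'I_3 | val x + val y == 1].
Definition gB := [rel x y : 'I_3 | val x + val y == 3].

Definition deg (g : rel 'I_3) (x : 'I_3) : nat := #|[pred y | g x y]|.
```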

In my case, I ended up porting the generic part of the `bigop` library to setoids: https://github.com/coq-community/graph-theory/blob/master/theories/setoid_bigop.v
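The core idea is small: keep bigop's fold, but state all the monoid laws only up to an equivalence relation. Schematically (a simplification, not the actual interface of setoid_bigop.v):

```
From mathcomp Require Import all_ssreflect.

Section SetoidBigop.
Variables (T : Type) (eqv : T -> T -> Prop) (op : T -> T -> T) (idx : T).

(* The monoid laws hold only up to eqv, not up to Leibniz equality. *)
Hypothesis eqv_refl  : forall x, eqv x x.
Hypothesis eqv_trans : forall x y z, eqv x y -> eqv y z -> eqv x z.
Hypothesis op_eqv :
  forall x x' y y', eqv x x' -> eqv y y' -> eqv (op x y) (op x' y').
Hypothesis opA  : forall x y z, eqv (op x (op y z)) (op (op x y) z).
Hypothesis op1x : forall x, eqv (op idx x) x.

(* The big operator itself is the usual fold; only the lemmas about it
   (associativity, splitting, reindexing, ...) change: they are stated
   with eqv in place of =. *)
Definition sbig (I : Type) (r : seq I) (F : I -> T) : T :=
  foldr (fun i acc => op (F i) acc) idx r.

End SetoidBigop.
```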

There is a mathcomp issue for this, but it's currently dormant because it's not entirely clear what the best approach is: https://github.com/math-comp/math-comp/issues/407

If you think that this would be useful for you, I'd be willing to help clean this up.

Hi Christian, that sounds like it would be a big help. Combining setoids with bigop seems like something that would be very useful for the community in general, and I'd be happy to help work on this. I'll check out your project and get up to speed with setoids (something I have needed to do for a while now). I'll be sure to contact you for any guidance I need. Thank you for your help.

Hi @Daniel Kirk, I do not have much time to give a complete answer, but I will give some pointers:

- Turning isomorphisms into equalities is the point of UF/HoTT (Univalent Foundations and Homotopy Type Theory), but it requires a bit of infrastructure to be sound and practical. In particular, you might need to keep track of the construction of the isomorphisms beneath your compositions of equalities, which can be technical without UA (the Univalence Axiom) and support for eliminating it (such as Equivalences for Free!).
- A setoid is not even enough, because you would need the relation to be in `Type`, and I am not sure how much this is supported in Coq right now.
- The traditional mathcomp approach to this is to make everything live in one big module: the Cartesian product becomes a construction on sets inside this big module, and direct products are no longer guaranteed by construction. Moreover, if you have an infinite number of generators, you might need to assume classical axioms to represent sets conveniently and to decide whether intersections are empty or not. (But I suspect this would not be a problem, since we already do it in mathcomp/analysis.)

Cyril Cohen said:

- A setoid is not even enough, because you would need the relation to be in `Type`, and I am not sure how much this is supported in Coq right now.

A priori, this is not a problem. The `rewrite` tactic does support rewriting with relations in `Type`. What gets cumbersome is if your context lemmas are dependent (i.e., if the isomorphism you use to rewrite occurs, as a function applied to some points, in the goal after rewriting). In our graph library, this occurs when rewriting with an isomorphism underneath the `add_edge` operation. That is, we have "context lemmas" of the following form (where `G + [x,u,y]` is `G` with a `u`-labeled edge from `x` to `y`):

```
forall (i : G ~ H), G + [x,u,y] ~ H + [i x, u, i y]
```

Note that the result is in `Type` (and computes).

@Christian Doczkal I am not sure I understand how your example shows that rewrite supports relations in `Type`, since you use `Equivalence` in your code (which characterizes relations in `Prop`). Or did you redefine `Equivalence`?

@Cyril Cohen The file I pointed to earlier and my last message are mostly unrelated. In other parts of the development we do use relations in `Type`. At some point in the development process, we were using setoid rewriting for this relation. However, this has mostly been eliminated. The reason is that we're usually in one of two cases:

- Constructing a new isomorphism whose computational behavior actually matters. In that case we prefer to explicitly give the computational parts and then verify the construction.
- Rewriting with some isomorphism underneath a graph operation that requires a dependent morphism lemma (e.g., the one I mentioned above). Setoid rewriting is not able to insert these automatically. Hence, most of our "rewrites" are at the top level. Here we use transitivity and `apply`, because this is more robust (e.g., it works modulo conversion) and a lot faster.

So, the problem is not that Coq does not support rewriting with relations in `Type`. The problem is that it does not work well if you actually care about the computational content of your relation and/or if you need dependent morphisms.

The connection between the bigops over setoids and isomorphism is that we use the bigops for an algebra whose axioms only hold up to an equivalence relation. The reason is that we instantiate said algebra with graphs up to (propositional) isomorphism (i.e., using `inhabited (iso F G)` as the equivalence).
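Concretely, the squashing step is just this (illustrative; `graph` and `iso` stand for the corresponding notions in our library):

```
Section SquashIso.
Variables (graph : Type) (iso : graph -> graph -> Type).

(* iso F G is the Type-valued isomorphism; inhabited squashes it to
   Prop, yielding an equivalence relation that the setoid bigops can
   use while forgetting the computational content. *)
Definition iso_prop (F G : graph) : Prop := inhabited (iso F G).

End SquashIso.
```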

Thank you @Cyril Cohen and @Christian Doczkal for your replies. I'm getting my code into a workable state before I put it online. I think the use of setoids/bigop for direct sums is a much bigger challenge, so I'm going for something more ad hoc, though I do think it's worth doing eventually. I'm looking into the best way to handle infinite dimensions. I'll study what Cyril suggested in analysis.

Cyril Cohen said:

- The traditional mathcomp approach to this is to make everything live in one big module: the Cartesian product becomes a construction on sets inside this big module, and direct products are no longer guaranteed by construction. Moreover, if you have an infinite number of generators, you might need to assume classical axioms to represent sets conveniently and to decide whether intersections are empty or not. (But I suspect this would not be a problem, since we already do it in mathcomp/analysis.)

Hi @Cyril Cohen, could you please direct me to where this approach is used? I'd like to see it in action. Many thanks

@Daniel Kirk sorry for the lag. Here is a pointer: in the file fingroup.v there is a "universe" `gT : finGroupType` for which some sets (of type `{set gT}`, finite sets over `gT`) actually are groups.
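For example (a minimal sketch of the pattern, not a quote from fingroup.v):

```
From mathcomp Require Import all_ssreflect fingroup.

Section Universe.
Variable gT : finGroupType.
Variables A B : {group gT}.

(* Subgroups of the universe gT are just sets: {group gT} coerces to
   {set gT}, and constructions such as the set-level product A * B
   stay inside the same type {set gT}.  No new carrier type is built,
   so no "equality up to isomorphism" issue arises. *)
Check ((A * B)%g : {set gT}).

End Universe.
```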

Last updated: Jul 25 2024 at 15:02 UTC