I saw old PRs on working with filters, but they seem dormant. Is someone still working on them?

I have given some thought to filters, sequences and series lately, but things seem to be changing pretty fast in mc-analysis right now so I probably need to discuss with someone in the know.

Sure, what do you want to know?

@Cyril Cohen Is someone actively working on them? Your PRs are pretty old... And I have ideas and sample code I'd like to discuss, to see whether there are interesting ideas in there in spite of my being a newbie.

@Julien Puydt Indeed, I'm not working on them anymore because every attempt I made at satisfying all the requirements failed. I now think the only fully satisfactory approach goes through a big refactoring of the design patterns in use, and that it cannot work without HB giving the right abstraction level so that everything else does not break. If you have suggestions to improve things I'd be glad to read them.

My ideas were precisely about refactoring; can I send you some code?

Sure

I sent you a .tar.bz2 (8.4K); most of it doesn't compile (so use `make -k`).

Ok, I received it.

What would help me would be to understand which problem you are trying to solve with this?

The biggest problem I have is that topology.v is big, many thousands of lines; I tried to make much smaller chunks.

And then I wanted most results to be first results about filters, and only then deduce results on limits.

I was indeed going to propose to cut it into 3 files: topology / pseudometric / uniform

For example, in theories/filters/generalities, you'll find the notion of separated filtered set, a definition on filters. And from this notion (same file) I get uniqueness of limits. It's in theories/filters/reals.v that I show (or: will show...) that the considered structure is separated.

Julien Puydt said:

And then I wanted most results to be first results about filters, and only then deduce results on limits.

Most things are already done this way; for the last remaining bits that were not, I'm preparing a PR to generalize them, cf. math-comp/analysis#772

(I'm reading)

Julien Puydt said:

For example, in theories/filters/generalities, you'll find the notion of separated filtered set, a definition on filters. And from this notion (same file) I get uniqueness of limits. It's in theories/filters/reals.v that I show (or: will show...) that the considered structure is separated.

Mmmh I'm not entirely sure what this notion of separated filtered set is? How is it related to ultrafilters?

In MCA we currently get uniqueness of limits through https://github.com/math-comp/analysis/blob/cd8f5e1e04a503c5236efe00e98eae8e0eab9baf/theories/topology.v#L2630

BTW the purpose of the old PRs on filters has nothing to do with the logical properties of filters, most of which we already have in mathcomp/analysis, although we can always think of further generalizing many structures, or completing existing theories.

I define separated filtered set in generalities.v as the notion needed to get unique limits; there's also a sequential filter notion to get a sequential characterization (I didn't manage to finish the proof yet - not good enough with Coq+MC yet)

The purpose of these old PRs was to address extra-logical properties, such as which inference of Coq to trigger, to solve a maximum number of problems automatically, and to give a consistent set of notations to filters (which is today very confusing), without losing the current expressivity. (Naming points instead of filters, automatically inferring that something is a filter, etc.)

Yes, I saw topology.v had much material, and things with coercions to help smooth writing proofs (all things I'm not really able to do yet).

Do you have an ETA for this rework? Notice that my code is much more comments than code: I listed many properties which I thought would be helpful at the CPGE level...

Ah, `cvg_unique` is the uniqueness of limits, and uses `Hypothesis sep` a few lines above. And `normed_hausdorff` gives separation on complex numbers and `ereal_hausdorff` gives it for real numbers (and their infinities)?

Julien Puydt said:

Do you have an ETA for this rework? Notice that my code is much more comments than code: I listed many properties which I thought would be helpful at the CPGE level...

Which rework? The one from math-comp/analysis#772 ? Or the one from my old PRs on filters?

Is there a sequential characterization of limits somewhere? It's pretty important to use the dominated convergence theorem to prove continuous limit results (continuity of parameter integrals for example).

And `normed_hausdorff` gives separation on complex numbers

For any normed vector space, including any `R : numFieldType` (seen as a 1-dimensional vector space over itself), which encompasses reals and complex numbers.

`ereal_hausdorff` gives it for real numbers (and their infinities)?

This is specific to `\bar R`, which has no algebraic structure (except a canonical ordering, as of today)

The fact that `\bar R` lacks structure is a bit annoying (see comments in theories/filters/reals.v); of course it can't have proper algebraic structures since +oo + -oo doesn't make sense, but when it works, it should work (not really clear, but I still mean it).

Julien Puydt said:

The fact that `\bar R` lacks structure is a bit annoying (see comments in theories/filters/reals.v); of course it can't have proper algebraic structures since +oo + -oo doesn't make sense, but when it works, it should work (not really clear, but I still mean it).

I know, it might be solved with the granularity HB will be able to introduce though...

Note that `filter_tends_to` in MCA is `F --> l`

How are partial functions handled? I put `(A : set X)` and `x \in A `&` ...` everywhere to cope with it, because it's not clear how it works in MCA.

Note that the squeeze theorem is under generalization here:

https://github.com/math-comp/analysis/blob/082f934ba6b92679d4fa2bad0da39a94a984099d/theories/normedtype.v#L4176-L4185

Julien Puydt said:

How are partial functions handled? I put `(A : set X)` and `x \in A `&` ...` everywhere to cope with it, because it's not clear how it works in MCA.

Partial functions are handled using the `function` library https://github.com/math-comp/analysis/blob/master/classical/functions.v

and how is what I call "filter_is_adherent" handled?

`cluster F A` I think

It can be related to the usual notion of cluster points for sequences ("valeurs d'adhérence" in French): `u` is a sequence, `cluster (u @ \oo)` is the set of cluster points of the sequence `u`.

Hmmm... that's only about points, isn't it? How does it make it easy to consider the limit of the mathematical sequence $(u_n)_{n\geqslant10}$, but not of $(u_n)_{n\leqslant10}$ ?

Julien Puydt said:

Hmmm... that's only about points, isn't it? How does it make it easy to consider the limit of the mathematical sequence $(u_n)_{n\geqslant10}$, but not of $(u_n)_{n\leqslant10}$ ?

I didn't understand your question

I meant I don't see where `cvg_to` gets limited to reasonable filters, i.e. those clustering to the definition domain of the function.

In the topology.v from MCA 0.5.4, `cvg_to` appears at line 652 while `cluster` is at line 2557, so obviously there's something I don't get: in my experiments I had to define `filter_is_adherent` before I could turn to `filter_tends_to`...

(notice my comment before the definition of `filter_tends_to`: FIXME: I'm pretty clueless how to work with partial functions in Coq/MC -- I knew there was a catch and it was whooshing way over my head)

Oh I see, I misread your definition of `filter_is_adherent`; it has nothing to do with clustering, does it?

Since we allow filters to be non-proper we do not really care in most cases. When a filter converges, we know it's proper

I use the word 'adherent' to make the link with topological adherence; it's clustering indeed.

Ah, the fact that I force proper filters might explain why I have to add this condition. But I don't see why a converging filter needs to be proper.

`filter_tends_to f F G` is denoted `f @ F --> G` in MC. `f @ F` is the image filter by `f`; it need not be proper. `H --> G` is reverse inclusion `G `<=` H`
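Stripped of notation, the convergence relation is just set inclusion between filters; a minimal sketch of this reading (simplified, primed name illustrative, not the literal MCA definition):

```
(* A filter on T is modeled as a set of sets of T.  "F tends to G"
   means every set of G already belongs to F: F is finer than G. *)
Definition cvg_to' {T : Type} (F G : set (set T)) : Prop := G `<=` F.
```

With `F := f @ nbhs x` and `G := nbhs l`, this unfolds to "`f` eventually lands in every neighborhood of `l` as its argument approaches `x`".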

Julien Puydt said:

I use the word 'adherent' to make the link with topological adherence; it's clustering indeed.

Ah, the fact that I force proper filters might explain why I have to add this condition. But I don't see why a converging filter needs to be proper.

you are right... it's not convergence that makes it proper, I got confused. It's because most of the time we input proper filters, as in `f @ x` (neighborhood filters are proper filters, hence image filters are too)

Neighborhood filters are proper, but when restricted to a subset, that can disappear... unless the filter is "adherent"/"clustering"

Oh I see where you're going...

We have this definition of subspace : https://github.com/math-comp/analysis/blob/cd8f5e1e04a503c5236efe00e98eae8e0eab9baf/theories/topology.v#L6042

Yes, what prevents me from trying a limit at 2 (proper filter) for a function defined on $[0;1]$?

Nothing will prevent you... you will just obtain a result that makes no sense.

Hmmmm... and this subspace makes me wonder: what if I have a function on $]0;1]$ and want to try a limit at 0? It's not in the subspace, but there's no issue with it per se.

if you have `f : subspace `[0, 1] -> T` and try `f @ 2`, `nbhs 2` will be interpreted as the filter generated by `[set 2]`; it will be proper.

Same with your example: if you have `f : subspace `]0, 1] -> T` and try `f @ 0`, `nbhs 0` will be interpreted as the trivial filter generated by `[set 0]`; it will be proper too...

Oops, I made a mistake, I'm going to correct it.

The first objection is fair (though students usually don't mind computing limits far outside the domain), but the second looks pretty annoying.

For what it's worth, I've done some work with things like `subspace `]0,1]` and it's not a problem (for local properties). That `0` is a limit point of the domain in `R` never comes up, because all the theorems about `subspace` let you aggressively intersect with ``]0,1]` as you work.

For things like compactness, second countability, simple connectedness, and other global properties, our `subspace` topology is quite annoying.

Well, I tend to consider the domain of definition of a function to be more on the side of brutally tyrannical than just aggressive...

Yeah, that's fair. It tends to feel like you have to specify the domain of your function twice: first the underlying `Type`, then the underlying `Set`. In typical textbook topology, there is no such distinction.

We could add an additional test in `nbhs_subspace` to detect whether the restriction still keeps the filter proper, and return the restriction if appropriate.

What do you mean? I think `nbhs_subspace` always returns a proper filter.

But it's true that `nbhs_subspace `]0,1] 0` should return a filter provably equal to `at_right 0`.

It currently returns the principal filter of `0`. Isn't that what you want? `0` should be totally separated from `]0,1]` in this topology.

I think it will be useful for theorems about convergence/divergence of integrals, e.g. of `x^-1` near 0.

Even though `0` is not in bounds, we want to consider the neighborhoods of `0`, in the same way that for `[1, +oo[` we want to consider the neighborhoods of `+oo` (and that's why we have a special definition for it); but in the case of `]0, 1]` we could make it so that `nbhs 0 = at_right 0`.

Definitely that, yes

The fully closed subspace `nbhs_subspace `[0,1] 0` is also `at_right 0`, though.

`at_right` is ambiguous - is it pointed or not?

in the current definition it is an unpointed one.

(depending on the convention, a function can be continuous at a point iff it has the same finite limit at right and at left)

So for `nbhs_subspace `[0,1] 0` it would be the generalization of `at_right 0` to a pointed version.

It sounds like the filter you'd want for `x^-1` is `within `]0,1] (nbhs 0)`.

Zachary Stone said:

It sounds like the filter you'd want for `x^-1` is `within `]0,1] (nbhs 0)`.

yes indeed, but isn't it equal to `at_right 0`?

Oh, yeah that's right. I wonder if the filters `fun x => within A (nbhs x)` even form a topology.

I have a feeling it's just `subspace (closure A)`, although I don't have a proof off the top of my head.

In my experiments, I defined:

```
Definition filter_tends_to
  {X Y : Type}
  (A : set X) (f : X -> Y)
  (F : Filter X)
  (G : Filter Y) :=
  filter_is_adherent F A
  -> forall W, G W
  -> exists V, F V /\ forall x, x \in A `&` V -> (f x) \in W.
```

but I could also have defined it by asking `G` to be finer than the image by `f` of the restriction of `F` to `A`.
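That alternative phrasing could look like the following sketch (hypothetical primed name; it assumes MCA-style conventions where restricting `F` to `A` keeps the sets that contain `F`-sets on `A`, and `@` takes the image filter):

```
(* G is coarser than the image by f of F restricted to A: every W in G
   is eventually hit by f along F, within A.  Unfolding the image of
   the restricted filter gives the inclusion below. *)
Definition filter_tends_to' {X Y : Type}
    (A : set X) (f : X -> Y)
    (F : set (set X)) (G : set (set Y)) : Prop :=
  G `<=` [set W | F [set x | A x -> W (f x)]].
```

This is essentially `G `<=` f @ (within A F)` spelled out, and it drops the explicit adherence hypothesis of the first version.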

@Zachary Stone just so you know, I'm doing a big refactor across topology, normedtype and sequences, to perform some generalizations and fill in some missing lemmas.

Excellent, thank you. Let me know if you need any help/review on it.

A couple observations to make sure I'm understanding that definition:

- When `X = Y` and `f` is the identity, I think this is the same as `G `=>` within A F`
- If `F` is `nbhs x`, and `G` is `nbhs (f x)`, this says that `f` is continuous at `x` from the subspace `A` (excluding the awkward boundary condition issues above)

Are these accurate?

Yes; notice I defined my filters to be proper and added `filter_is_adherent` conditions everywhere to enforce the domains of definition of functions. It worries me a bit that MCA seems not to enforce this very clearly.

@Cyril Cohen you didn't comment on sequential characterization, did you?

Julien Puydt said:

Cyril Cohen you didn't comment on sequential characterization, did you?

No, I didn't... actually I'm not sure anyone did it so far. But it should be very easy.

You are right that domains are weird in MCA. They are often not enforced. It has the obvious downside that it's possible (e.g. dangerously easy) to write well-typed statements with garbage semantic content. From a software engineering perspective, this delays error reporting until theorem application time. Then you'll get a subgoal like `0 \in `]0,1]`. Delayed errors are usually indicative of a deeper problem. (I'm a software engineer in my day job, so this stuff pains me to see.)

On the other hand, I totally understand why it's like this. Even the definition of `filter_tends_to` doesn't prevent `f` from taking values far from `A`. I believe this whole class of issues needs a solution. The `subspace` topology was designed as a compromise, but certainly doesn't work everywhere.

By the way, the issue with the sigma type `{x : R | x \in `[0,1]}` not inheriting algebra is a bit tragic. I recently updated `realfun.v` to use `{within `[0,1], continuous f}` instead of `{in `[0,1], continuous f}`, which fixes issues at the boundary points. The proofs involved a lot of arithmetic in the domain. It's a _huge_ effort saver to have separate subgoals for `x + eps/2 \in `[0,1]` and `f (x + eps/2) ... `, which wouldn't work if `f` had `{x : R | x \in `[0,1]}` as its domain.

For sequential characterization, does having a "countable filterbase" suffice?

@Zachary Stone In my experimental code, I defined a "sequential filter" as one obtained through a non-increasing sequence of non-empty subsets, and tried to prove a sequential characterisation result from that (I failed because I'm not good enough with Coq+MC), so I guess the answer is yes.

The statement I was aiming for:

```
Lemma sequential_characterisation :
  filter_tends_to A f F G
  <-> forall (u : nat -> X),
        (forall n, u n \in A)
        -> filter_tends_to setT u inftyN F
        -> filter_tends_to setT (f \o u) inftyN G.
```

I recently did some closely related stuff to prove that uniform spaces with a countably-based entourage are metrizable. See the section `countable_uniform`; it takes a similar approach:

```
Context {T : uniformType} (f_ : nat -> set (T * T)).
Hypothesis countableBase : forall A, entourage A -> exists N, f_ N `<=` A.
```

I guess you need some kind of AC at some point?

Very much yes. There's also some heavy `constructive_indefinite_description`.

But yeah, I'd take the approach of:

1. Defining filter base
2. Using `cardinality.v` to define "countably based"
3. Proving that if `F` tends to `G`, and `B` is a filter base for `F`, then `f @ B` forms a filter base for `G`. (I think this is true?)

I think 1-3 are true, but I don't see how they give the characterization

If you've got a countable filter base `{U_n, n : nat}`, then `U_1, U_1 `&` U_2, ...` gives a non-increasing sequence of non-empty subsets inside the filter. Is that strong enough? I guess a follow-up question might be: what results are you trying to prove about sequential convergence that aren't true for general filter convergence?
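The finite-intersection trick can be written down directly; a sketch (the name `V` and the indexing style are illustrative, under the assumption that `U n` ranges over a countable base):

```
(* Given a countable base U, V n intersects the first n+1 base sets.
   The resulting sequence is non-increasing, stays inside the filter
   (filters are closed under finite intersections), and generates the
   same filter as the original base. *)
Definition V {T : Type} (U : nat -> set T) (n : nat) : set T :=
  \bigcap_(i in [set i | (i <= n)%N]) U i.
```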

@Zachary Stone To prove the continuity theorem for integrals with a parameter, you need to be able to convert this continuous problem to a discrete one using the sequential characterization; then you apply the dominated convergence theorem and you're done.

Ah, I see. Then the existence of this countable basis + AC for picking a point from each set should be good enough to build a sequence of points. You'll need to work in a first-countable space, that is, one where neighborhood filters have a countable basis. Metric spaces are easily first-countable.

Yes. But first I have to figure out how to play with the negation of complex propositions...

I figured out how to play with the negation of complex propositions; so now I'm on to my next issue: is there some place where a countable choice axiom is used that I could study?

Excellent. I've used `cid`, or "constructive_indefinite_description", for this. There are lots of examples of its application in the code. It turns statements like

```
F := forall x : X , exists y : Y, P x y
```

to functions

```
f := forall x, { y | P x y}
```

So that

```
(projT1 \o f) : X -> Y
```

This mirrors the "products of non-empty sets are non-empty" representation of choice. If someone else knows of a more elegant way to achieve this, I'd be eager to know.

@Zachary Stone Thanks, with this information I should be able to go further. For the moment, I found how to get from 7 to what I want... now I should be able to turn that into a `nat -> X` with the right property...

Ok, I have modified my proof using 7 to a proof using a parametric n and obtained:

```
v_by_exists : forall n : nat, exists vn : X, P vn
```

and I want to turn it into:

```
exists (v: nat -> X), forall n, P (v n)
```

but I don't see how `constructive_indefinite_description` does that.

I am no expert in this, but it seems that it would work.

Applying `constructive_indefinite_description` to `v_by_exists` should give

```
fn : nat -> {vn : X | P vn}
```

Then, you should have

```
projT1 \o fn : nat -> X
projT2 \o fn : forall n : nat, P (projT1 (fn n))
```

Providing the witness `projT1 \o fn`, you can prove the statement by `projT2 \o fn`.

Is there a reason for not using `functional_choice`, in the same library `Coq.Logic.IndefiniteDescription` as `constructive_indefinite_description`?

The "philosophy" of Analysis is to use its `boolp.v` file, not the standard library.

@abab9579 `constructive_indefinite_description` will turn an `exists x, P x` into a `{x | P x}`, but that's not exactly what I want.

The best I managed was:

```
pose raw := fun n => constructive_indefinite_description (v_by_exists n).
pose v := fun n => projT1 (raw n).
```

but it gave `raw : forall n, {x : X | P x}`, which isn't the `exists v : nat -> X, forall n, P (v n)` I'm looking for! And `v` has a type `forall nat, [eta [eta crazy_stuff` which doesn't cut it.

The `raw` is the `fn : (forall n : nat, {x : X | P x})` one (the type is identical to `nat -> {vn : X | P vn}`).

Interesting how applying `projT1` blew the type up. Maybe it is from a kind mismatch; `proj1_sig` might work to give `nat -> X` or `forall n : nat, X`. Otherwise, I would suspect `raw` might not have the intended type.

`[eta f]` is also the eta-expanded version of `f`; ideally something like `simpl` should simplify the type.

@abab9579 Perfect! I got the result I wanted to prove!

Here is how it works on a simpler case, for reference:

```
From mathcomp Require Import ssreflect.
From mathcomp.analysis Require Import boolp.
Section Foo.
Variables X Y : Type.
Variable P : Y -> Prop.
Lemma foo : (forall x : X, exists y, P y) -> exists f : X -> Y, forall x, P (f x).
Proof.
move=> H.
pose raw := fun x => cid (H x).
pose v := fun x => proj1_sig (raw x).
exists v.
move=> x.
by apply: proj2_sig (raw x).
Qed.
End Foo.
```

@Cyril Cohen I'm quite annoyed that uniqueness of limits comes this high in the concept hierarchy; it really should come much lower. Students encounter uniqueness of limits with sequences in December, having met mathematics for the first time in September -- so it's really a core principle!

Last updated: Jun 25 2024 at 19:01 UTC