I would like to have a lemma with approximately the following statement:

```
Lemma continuous_near (f : R -> R) (y : R) (P : R -> Prop) :
{for y, continuous f} ->
(\forall x \near y, P (f x)) -> (\forall x \near (f y), P x).
Proof.
Admitted.
```

Does it feel right to you? Is there an easy proof? I first thought I would have an equivalence, but that is probably wrong.

If `f` is a constant function, the premise tells you something about the one point `c=f(?)` only. I don't see how one could derive something in a neighborhood of `c=f(?)` from that.

(But I am not familiar with the math-comp analysis syntax yet, so I am just guessing what it means).

@Michael Soegtrop , you are right, this rules out the right to left direction.

sure! `f @ nbhs y --> nbhs (f y)` (continuity) and `\forall x \near y, P (f x) = \forall x \near f @ y, P x` definitionally, so it should hold

I am confused: doesn't my argument rule out the left to right direction? My gut feeling interpretation of the left side is "for all x in a neighborhood of y, P (f(x)) holds", so if `f=c`, this gives `P c`, which is weaker than the right side.
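This can be made concrete with a sketch (hypothetical lemma name, and assuming the usual mathcomp-analysis imports): take `f` constant equal to `0` and `P := fun x => x = 0`. The left-hand side of the original statement then holds trivially, but the right-hand side would force a whole neighborhood of `0` to satisfy `P`, which fails in `R`.

```
(* Hypothetical sketch of the constant-function counterexample:
   the left conjunct holds because (fun _ => 0) x reduces to 0,
   while the right conjunct fails since any neighborhood of 0 in R
   contains points other than 0. *)
Lemma constant_counterexample (y : R) :
  (\forall x \near y, (fun _ : R => 0) x = 0) /\
  ~ (\forall x \near (0 : R), x = 0).
Proof.
Admitted.
```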

you are right, it's the other way around

```
Lemma continuous_near {T U : topologicalType}
(f : T -> U) (y : T) (P : U -> Prop) :
{for y, continuous f} ->
(\forall x \near f y, P x) -> \forall x \near y, P (f x).
Proof. exact. Qed.
```

However, I guess that if `f` is continuous at `y` and locally injective near `y`, then its local inverse is also continuous at `f y` and you can get the other direction.

(which rules out the counter-example with the constant function of @Michael Soegtrop )

```
Lemma continuous_inj_near {R : realType} (f : R -> R) (y : R) (P : R -> Prop) :
{for y, continuous f} -> {near y, injective f} ->
(\forall x \near f y, P x) = (\forall x \near y, P (f x)).
Proof.
Admitted.
```

(Fixed after @Sebastien Gouezel post below)

Cyril Cohen said:

`Lemma continuous_near {T U : topologicalType} (f : T -> U) (y : T) (P : U -> Prop) : {for y, continuous f} -> (\forall x \near f y, P x) -> \forall x \near y, P (f x). Proof. exact. Qed.`

(@Yves Bertot note that you do not need to state it since `{for y, continuous f}` is def eq to `forall P, (\forall x \near f y, P x) -> \forall x \near y, P (f x)`. :wink: )

The fact that a continuous bijective function has a continuous inverse is false in general (take the identity of the reals, with the discrete topology in the source space and the usual topology in the target space, for instance). However, this holds for functions on $\mathbb{R}^n$ with its usual topology, but this is a difficult theorem.

Thank you all. Yes, now I understand Michael's argument: `P (f x)` only gives information about very specific values, while `P x` has to cover a wider space. I have to avoid thinking only in terms of points.

My use case is with f instantiated by the opposite function, it has all the nice properties. I wanted to use this as an intermediate step to prove that every monotone surjective function is continuous, itself an intermediate step to prove that the inverse of an injective continuous function is continuous.
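The instantiation mentioned here can be sketched as follows (hedged: the exact statement and name of `near_opp` in the development may differ). Since the opposite function `-%R` is continuous and injective, both directions of the equivalence should hold.

```
(* A guessed shape for near_opp, i.e. the earlier equivalence with
   f instantiated by the opposite function: P holds near the opposite
   of y iff P of the opposite holds near y. *)
Lemma near_opp (y : R) (P : R -> Prop) :
  (\forall x \near (- y), P x) = (\forall x \near y, P (- x)).
Proof.
Admitted.
```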

I already have a proof for the latter fact, but I thought it was too long and could be shortened.

Sebastien Gouezel said:

The fact that a continuous bijective function has a continuous inverse is false in general (take the identity of the reals, with the discrete topology in the source space and the usual topology in the target space, for instance). However, this holds for functions on $\mathbb{R}^n$ with its usual topology, but this is a difficult theorem.

Indeed I had $\mathbb{R}$ in mind (with the usual topology) in the initial example of Yves and generalized badly

PR https://github.com/math-comp/analysis/pull/385 now contains a few lemmas to prove the continuity of inverse functions. Sometimes continuity of the initial function is required; sometimes only monotonicity of both the function and its inverse is required. The lemmas probably deserve renaming and fine-tuning of their interfaces; this can be discussed in the PR.

What I like about math-comp analysis compared to the standard library is that the lemma statements in @Yves Bertot's PR are intuitively readable, even knowing nothing about math-comp-analysis. Lemma statements about continuity in the standard library are rather hard to digest. I think this is a very important feature.

I just wonder to what extent this improved readability makes it harder to write things; frequently these two goals are somewhat contradictory. E.g. advanced notations easily lead to rather obscure error messages or unexpected effects if, say, a certain "notation product space" is incomplete. I would be interested in experience in this area.

The notations make for readable statements. The difficulty lies in finding lemmas to make progress. I wanted to follow the abstraction barriers that are suggested by notations and existing lemmas, but I sometimes hit a difficulty. Personally, I would not have come up with using the `{mono f : x y / x <= y}` notation; that was suggested to me when @Laurent Théry proofread one of my first attempts. He also suggested a more systematic use of `ltrgtP` to reason by cases on comparisons between values. This kind of lemma is likely to fly under the radar of newcomers, but it is equipped to simplify many sub-expressions of your goal as you go. The tactic remains a simple `case` tactic, but the lemma packs a host of automatic behavior. I find this kind of advanced proof programming is an extra layer of knowledge that is under-documented.

Anyway, these proofs are still in draft state.

mono and ltrgtP are not mathcomp-analysis specific, right?

Reynald Affeldt said:

mono and ltrgtP are not mathcomp-analysis specific, right?

Nope, they are not

It is worth clarifying, but what I meant is that I often do not know how to make the best use of math-comp or math-comp analysis features.

When proving lemma `near_opp`, it was the first time I broke the abstraction barrier of `(\forall _ \near _, _)`. Until then, I did not know that this was "only" a notation for an existential statement with an epsilon and a ball of that radius (I guess this is related to the exact topology currently in use).

To summarize, you are saying that the main difficulty is that, despite the nice and intuitive notations, you have to see through them and know what they really mean, going down at least a few layers of abstraction, to actually work with them?

Do you think this is so because lemmas are missing at the higher levels of abstraction, so that one frequently needs to go into lower layers, or do you think this is more a fundamental problem which cannot be overcome by better "closing" (I am searching for a better word here ...) each layer of abstraction?

Last updated: Apr 21 2024 at 01:02 UTC