## Stream: math-comp analysis

### Topic: Seeking better way of definition to prevent expansion

#### abab9579 (Aug 31 2022 at 02:17):

Hello, I have the following definition for "n times differentiable functions":

```
Fixpoint CPow (m: nat): set (U -> V) :=
  match m with
  | 0 => [set f: U -> V | continuous f]
  | S m' => [set f | exists f' : U -> U -> V,
      (forall x, is_diff x f (fun v => f' v x)) /\
      (forall v, CPow m' (f' v))]
  end.
```

The problem is that it sometimes unfolds in an ugly way (that is, into the anonymous fixpoint), and then I cannot fold `CPow` back.
For example, I get

```
Cg : forall v : U,
       (fix CPow (m : nat) : set (U -> R) :=
          match m with
          | 0%N => [set f | continuous f]
          | m'.+1 =>
              [set f | (exists f' : U -> U -> R,
                 (forall x : U, is_diff x f (f'^~ x)) /\
                 (forall v0 : U, CPow m' (f' v0)))]
          end) m (g' v)
```

How can I prevent this?

Possibly related: in this case,
I have `f: fct_ringType U (numFieldNormedType.normedMod_ringType R)` instead of the conventional `f: U -> R`.

Also, as an alternative, is it possible to have an idiomatic inductive proposition in ssreflect? How do I go about that?
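(A common Coq idiom, not mentioned in the thread, addresses the unfolding problem: stop `simpl`/`cbn` from exposing the anonymous `fix`, and fold an already-unfolded occurrence back by hand. A sketch, assuming the `CPow` definition above:)

```
(* Sketch: forbid automatic reduction from unfolding CPow into its fix form. *)
Arguments CPow : simpl never.

(* If a hypothesis already displays the unfolded fix, it can usually be
   folded back to the named constant, e.g.: *)
(* rewrite -/CPow in Cg. *)
```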

#### abab9579 (Sep 01 2022 at 05:00):

Can I cross-post this over in #math-comp users? Traffic seems relatively low here, and this could be reformulated into a general question on inductive types & ssreflect.

#### Karl Palmskog (Sep 01 2022 at 05:57):

please don't cross-post in the Zulip.

#### abab9579 (Sep 01 2022 at 06:53):

Then, can I delete this one and post it over there?

#### Karl Palmskog (Sep 01 2022 at 07:04):

it looks like your post asks about definitions too specific to MathComp Analysis. This means it belongs here, and we work actively to move topics to the most relevant stream.

#### Karl Palmskog (Sep 01 2022 at 07:05):

so if you delete this post and ask the same elsewhere, I think it's obvious what will happen

#### abab9579 (Sep 01 2022 at 07:06):

It will not be in the same form, as I will focus on how to use inductive definitions in ssreflect.

#### Karl Palmskog (Sep 01 2022 at 07:08):

feel free to ask new general MathComp questions in the math-comp users stream. But I suggest you don't use Analysis-specific code there.

#### abab9579 (Sep 01 2022 at 07:08):

Actually, it should be distinct enough for another thread. I have several definitions that would benefit from using inductive propositions, and they should be fairly independent. I will not use analysis-specific code, including anything from `classical_sets`.

#### Cyril Cohen (Sep 01 2022 at 15:43):

abab9579 said:

Hello, I have the following definition for "n times differentiable functions":

```
Fixpoint CPow (m: nat): set (U -> V) :=
  match m with
  | 0 => [set f: U -> V | continuous f]
  | S m' => [set f | exists f' : U -> U -> V,
      (forall x, is_diff x f (fun v => f' v x)) /\
      (forall v, CPow m' (f' v))]
  end.
```

Hi! From a non-technical point of view (that is, setting aside the issues related to inductives), I wouldn't define this inductively; I would go for something like:

```
Definition CPow (m : nat) : set (U -> V) :=
  [set f | continuous ('d^`(m) f) /\
           forall n, n < m -> forall x, differentiable ('d^`(n) f) x].
```

for a suitable notion of $n$-th Fréchet derivative ``'d^`(n) f`` (currently missing from the library) that you are leaving implicit in your recursion...

#### abab9579 (Sep 01 2022 at 23:23):

Thank you! I do not know how it would work, though. I would need to define tensors or multilinear functions, and then `continuous` should work when the image is a multilinear function space.

#### Zachary Stone (Sep 02 2022 at 01:20):

I'd love to see more linear algebra. We have several function space topologies available already, so I have some hope for this approach. What issues are you anticipating?

#### abab9579 (Sep 02 2022 at 01:34):

Just that I am somewhat daunted by this, as I am still a novice in Coq, and I expect tensors are quite a chore to implement.
By the way, would function space topologies work well in this case? I just looked up the Fréchet differential again; it is well-defined for bounded linear maps.

#### abab9579 (Sep 02 2022 at 10:27):

Constructing the tensor product of general vector spaces requires quotient types. How can I do that?
Also, is there a reason why the linear type is not given a zmodType instance?

#### Pierre Roux (Sep 02 2022 at 11:21):

What do you call the linear type, precisely? (In which file is it defined, for instance?)

#### abab9579 (Sep 02 2022 at 11:24):

`{linear U -> V}`, originally defined in `ssralg`. Is it because defining an instance in another module is intrusive?

#### Pierre Roux (Sep 02 2022 at 13:07):

It looks like `opp_fun` (notation `\- f` for `fun x => - (f x)`) was added only recently, so I guess nobody thought about it, but we could probably declare a zmodType on functions, with `\+` (`add_fun`) as addition and `\-` (`opp_fun`) as opposite. This would allow using `addrA` to rewrite `f \+ g \+ h` into `f \+ (g \+ h)`, for instance; not sure if that is super useful. Note that linearity is already stable under `\+` and `\-` (cf. `add_fun_additive` and `opp_fun_additive`).
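(A minimal sketch of such a zmodType on functions, with the pointwise operations spelled out locally; MathComp's own `add_fun`/`opp_fun` could be used instead. It assumes a classical `funext`, as in mathcomp-analysis's `boolp`, to prove equality of functions; declaring the actual `ZmodType` additionally requires a choiceType structure on `T -> M`:)

```
Section FunZmod.
Variables (T : Type) (M : zmodType).

Definition null_fun : T -> M := fun _ => 0.
Definition opp_fun (f : T -> M) := fun x => - f x.       (* \- f *)
Definition add_fun (f g : T -> M) := fun x => f x + g x. (* f \+ g *)

(* The zmodType axioms hold pointwise. *)
Fact add_funA : associative add_fun.
Proof. by move=> f g h; apply/funext => x; rewrite addrA. Qed.

Fact add_funC : commutative add_fun.
Proof. by move=> f g; apply/funext => x; rewrite addrC. Qed.

Fact add_fun0 : left_id null_fun add_fun.
Proof. by move=> f; apply/funext => x; rewrite add0r. Qed.

Fact add_funN : left_inverse null_fun opp_fun add_fun.
Proof. by move=> f; apply/funext => x; rewrite addNr. Qed.

Definition fun_zmodMixin := ZmodMixin add_funA add_funC add_fun0 add_funN.

End FunZmod.
```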

#### abab9579 (Sep 02 2022 at 13:11):

I see. I was curious because I am having problems defining an Lmodule structure on a subtype of linear functions.

#### Pierre Roux (Sep 02 2022 at 13:14):

I'm not sure what a zmodType structure on linear would mean. `linear` is itself a structure, rather than a type (like `int`, `rat`, ...) to be equipped with structures.

#### Pierre Roux (Sep 02 2022 at 13:16):

Maybe you mean the subtype of functions that are linear?

#### abab9579 (Sep 02 2022 at 13:17):

I am defining a type which is a subtype of `{linear U -> V}`, that is, a type of functions which are linear and satisfy additional conditions.

#### abab9579 (Sep 02 2022 at 13:18):

For instance, bounded linear operators.

#### Pierre Roux (Sep 02 2022 at 13:26):

Then indeed, having a zmodType on functions would make sense for you. You could then derive a sub-zmodType on your subtype of functions that are linear + an additional condition.

#### Pierre Roux (Sep 02 2022 at 13:29):

Do you want help defining the zmodType on functions? Anyway, if you do it, we would certainly welcome a pull request adding it to MathComp.

#### abab9579 (Sep 02 2022 at 13:35):

By now, I have added zmodType and lmodType structures on `{linear U -> V}` and derived these for subtypes as well. I was asking since I was afraid it might not be idiomatic. (It felt like I might be introducing inconsistent orphan instances.)

#### Pierre Roux (Sep 02 2022 at 14:07):

I'd indeed be curious to have a quick look at your code if you don't mind.

#### Pierre Roux (Sep 02 2022 at 14:07):

(but the principle seems quite reasonable)

#### abab9579 (Sep 03 2022 at 01:09):

I am not good at programming, so I ended up with this.

```
Section LinearZmod.
Context {R : ringType} (U V : lmodType R).
Implicit Types (u v : {linear U -> V}).

Definition linear_zero : {linear U -> V} := [linear of 0].
Definition linear_neg u := [linear of 0 \- u].
Definition linear_add u v := [linear of u \+ v].

Lemma linear_eq u v : (u : U -> V) = v -> u = v.
Proof.
case: u v => [u +] [v +] /= Eq.
rewrite -{}Eq => Hu Hu'. by have ->: Hu = Hu'.
Qed.

(* The statements of the next four facts were missing in the archived
   message; they are reconstructed here to match the proof scripts and
   the arguments expected by ZmodMixin. *)
Fact linear_addA : associative linear_add.
Proof.
move=> u v w. apply/linear_eq.
case: u v w => [u _] [v _] [w _] /=. apply/addrA.
Qed.

Fact linear_addC : commutative linear_add.
Proof.
move=> u v. apply/linear_eq.
case: u v => [u _] [v _] /=. apply/addrC.
Qed.

Fact linear_add0r : left_id linear_zero linear_add.
Proof.
move=> u. apply/linear_eq.
case: u => [u _] /=. apply/add0r.
Qed.

Fact linear_addNr : left_inverse linear_zero linear_neg linear_add.
Proof.
move=> u. apply/linear_eq.
case: u => [u _] /=. have ->: 0 \- u = \- u by apply/sub0r. apply/addNr.
Qed.

Definition linear_zmodMixin :=
  ZmodMixin linear_addA linear_addC linear_add0r linear_addNr.
Canonical linear_zmodType := ZmodType {linear U -> V} linear_zmodMixin.

End LinearZmod.

Section LinearLmod.
Context {R : comRingType} (U V : lmodType R).
Implicit Types (u v : {linear U -> V}).

Fact scale_linear (c : R) u : linear (c \*: u).
Proof.
move=> b s t /=. rewrite linearP !scalerDr !scalerA.
by rewrite mulrC.
Qed.

Definition linear_scale (c : R) u := Linear (scale_linear c u).

Lemma linear_scalerA a b v :
  linear_scale a (linear_scale b v) = linear_scale (a * b) v.
Proof. apply/linear_eq. case: v => [v _] /=. apply/scalerA. Qed.

Lemma linear_scale1r : left_id 1 linear_scale.
Proof.
move=> v. apply/linear_eq.
case: v => [v _] /=. apply/scale1r.
Qed.

Lemma linear_scalerDr : right_distributive linear_scale +%R.
Proof.
move=> c u v. apply/linear_eq.
case: u v => [u _] [v _] /=. apply/scalerDr.
Qed.

Lemma linear_scalerDl v : {morph linear_scale^~ v : a b / a + b}.
Proof.
move=> a b. apply/linear_eq.
case: v => [v _] /=. apply/scalerDl.
Qed.

Definition linear_lmodMixin :=
  LmodMixin linear_scalerA linear_scale1r linear_scalerDr linear_scalerDl.
Canonical linear_lmodType := LmodType R {linear U -> V} linear_lmodMixin.

End LinearLmod.
```
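(As a quick sanity check of the canonical structures above, a sketch assuming the section compiles as given: generic `zmodType`/`lmodType` lemmas should now apply directly to `{linear U -> V}`. `linear_example` is a hypothetical name for illustration.)

```
Section LinearExample.
Context {R : comRingType} (U V : lmodType R).

(* addrA and scalerDl are the generic MathComp lemmas; they now see the
   canonical zmodType/lmodType structures on {linear U -> V}. *)
Lemma linear_example (a b : R) (f g : {linear U -> V}) :
  a *: f + (b *: f + g) = (a + b) *: f + g.
Proof. by rewrite addrA -scalerDl. Qed.

End LinearExample.
```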

#### Pierre Roux (Sep 05 2022 at 12:28):

This code seems perfectly reasonable to me. Note that this would IMHO be a nice addition to mathcomp-analysis (not to mathcomp proper, because it relies on a classical result); I am not sure which file it should go in, though. @Reynald Affeldt or @Cyril Cohen, any idea?

#### abab9579 (Sep 08 2022 at 04:17):

So I tried the definition via the Fréchet derivative: it does not require any tensors, and it does seem to work out.
However, it does not suit my specific use case of also working on functions `U -> R`.

In detail, I need to use multiplication to describe derivations, and using the Fréchet derivative makes proving the relevant facts quite hard.
(E.g. the product rule becomes `'d^() (f * g) x = f x *: 'd^() g x + g x *: 'd^() f x`, and then I need to deal with scalings, which complicates things.)

#### Zachary Stone (Sep 08 2022 at 21:11):

If I'm understanding the setting correctly, does the `*:` unfold to the normal `*` on `R`? It's not ideal, but some lemmas to support rewrites may help. A separate product rule on `R` that uses the more idiomatic `*` would be nice either way.

On the opposite extreme, I'm eager to define Banach algebras in HB, which should improve the ergonomics of this kind of multiplication a bit.

#### abab9579 (Sep 09 2022 at 11:46):

Differentiation of a function `U -> V` gives `U -> { bounded U -> V }`.
For multiplication with `V := R`: for `f : U -> R` and `g : U -> R` we have e.g. `df : U -> { bounded U -> R }`, so it has to be `d(f * g) x = (f x) *: (dg x) + (g x) *: (df x)`. This cannot be changed.
To get the induction step going, you need to differentiate such `*:` expressions. I am worried it might blow up at higher orders.

Hmm, how would Banach algebras improve the ergonomics of this multiplication? I am a bit lost.

#### Zachary Stone (Sep 10 2022 at 03:08):

Ah, I see. I'm looking at the `deriveM` lemma, where a `rewrite /GRing.scale /=` helps. What context is this coming up in? I'm mostly curious about what kind of derivatives you need.

The Banach algebra point is that this formulation of the product rule basically cheats. It _only_ works for `R`, and exploits scalar multiplication in a weird way. Ideally you'd write `d (f * g) x = df * g + f * dg`, for some sensible notion of `*`. Apparently there's a better generalization in terms of bilinear forms, so maybe I should do some more homework before I try to generalize this...

#### abab9579 (Sep 10 2022 at 04:47):

I just need to prove that a product of smooth functions is smooth. This is crucial for expressing the product rule for derivations. (Ideally I should define germs and work with those, but that would take more effort, and it also needs multiplication.)
So I need to deal with "infinitely differentiable" under multiplication here.

I do not think Banach algebras would help with this. Inevitably, even if we do not restrict `R` to be a `realType`, we still have `d(f * g) x = f x *: dg x + g x *: df x`. Here `dg x : { bounded U -> R }` while `f x : R`: the types just do not match, and no kind of multiplication would bridge that difference.
I think this is just a quirk of Fréchet differentiation.

#### abab9579 (Sep 12 2022 at 02:21):

(deleted)

Last updated: Jun 25 2024 at 17:02 UTC