I was recently made aware of the EU's Cyber Resilience Act, which seemingly is set to override any nonliability clauses in open source software licenses (making it possible to sue OSS authors in Europe for security issues). Has anyone looked at what this might mean for Coq, which at least has the Coq Consortium "commercial" wing?
Also: https://blog.opensource.org/what-is-the-cyber-resilience-act-and-why-its-important-for-open-source/
A worst case scenario from a comment to the Eclipse Foundation blog post:
As written, the CRA could spell the end of free and open source software in Europe, not only at Eclipse but also at countless projects that are backed by small businesses and will not be able to afford any of this level of ceremony: the road to hell is paved with good intentions. As it is written now, it could IMHO trigger a massive FOSSXIT out of Europe.
@Karl Palmskog : I am citing from the proposal page 15, number 10:
In order not to hamper innovation or research, free and open-source software developed or supplied outside the course of a commercial activity should not be covered by this Regulation. This is in particular the case for software, including its source code and modified versions, that is openly shared and freely accessible, usable, modifiable and redistributable. In the context of software, a commercial activity might be characterized not only by charging a price for a product, but also by charging a price for technical support services, by providing a software platform through which the manufacturer monetises other services, or by the use of personal data for reasons other than exclusively for improving the security, compatibility or interoperability of the software.
I'm well aware of clauses like this, which is why I brought up the Coq Consortium (which arguably is engaged in "commercial activity"). OSI also said:
The term “commercial” has always led to legal uncertainty for software and is a term which should not be applied in the context of open source as specific commercial uses of open source projects by some users are frequently disconnected from the motivations and potential compensation of the wider community of maintainers. The software itself is thus independent of its later commercial application. The problem is not the lack of a taxonomy of “commercial”, it is the very act of making “commercial” the qualification rather than, for example, “deployment for trade”. Thus adding a taxonomy of commerciality is not a solution. OSI would be pleased to collaborate over better approaches to qualifying an exception.
I don't think the Coq Consortium is considered commercial in this sense, since its objective is not to generate money. Besides, I would think that Coq development lives up to the standards requested in this proposal. It is essentially about forwarding known security fixes to users in a reasonable way. It explicitly says that, with the state of the art, it is not possible to create bug-free software, but that one should at least live up to reasonable standards in providing security fixes.
according to my local expert, it doesn't matter if the goal is making money / profit or not: as soon as you do anything related to any EU market, like providing a service for a fee, it's commercial
as to "providing security fixes", see the Eclipse Foundation analysis, which says they are barely able to reach the standard they are proposing in CRA. People/organizations doing OSS inside EU with fewer resources are doomed, if they [Eclipse Foundation] are correct.
if GitHub comes to be considered a distributor of commercial software according to the CRA, the likely outcome is that GitHub blocks the EU as a whole rather than assume liability
I think the point I cited makes it clear that this is explicitly not the intention of the regulation. I expect that the final result lives up to that. But indeed we should do some lobbying to make sure that it does.
to go with the classic analogy, if I write a recipe for traditional omelette, I can still claim that it is my intention that no eggs are broken by anyone following this recipe. But how credible is that claim?
OTOH if I write an omelette recipe and someone serves it to people allergic to eggs I hope I'm not the one who is liable for it
also, another interesting reading of the clause Michael cited: apparently the only reason they will make this exception for FOSS is to "not hamper innovation or research". In other words, FOSS that doesn't produce, and is not connected to, innovation/research (e.g., regular free expression or entertainment) deserves no special protection.
https://blog.opensource.org/the-ultimate-list-of-reactions-to-the-cyber-resilience-act/
OK, you convinced me to write a letter to my responsible EU parliamentarian. Up to now it always went the way I wanted in the (few) cases I felt inclined to do so.
I guess I'd write a letter too if not all "my" parliamentarians were corrupt to the bone. Possibly-unpopular view: the language in the proposal actually makes me think this is a hit job against OSS from "big proprietary EU software" [which can easily afford all the compliance theater]
Could be relevant: https://fosdem.org/2023/schedule/event/cyber_resilience/ (I did not attend this presentation and did not watch the recording yet.)
@Karl Palmskog : the outcome of the EU software patents legislation gives some hope.
some very worrying information (for Coq development) from the Apache Software Foundation on the Cyber Resilience Act here: https://news.apache.org/foundation/entry/save-open-source-the-impending-tragedy-of-the-cyber-resilience-act
Here are some relevant quotes:
there has been a lot of focus by open source foundations to help refine the current wording of the CRA to make open source software “exempt” [...] By and large, these efforts have not been successful.
the policy makers have made it crystal clear to the ASF [Apache Software Foundation] that they intend to have the CRA apply to open source foundations. The current exceptions for open source are either for pure hobbyists, code that is not used in real life, or for things such as mirrors and package repositories like NPM or Maven Central. The way they do this is a presumption of commercial intent if the software is used anywhere in a commercial environment.
The CRA would regulate open source projects unless they have “a fully decentralized development model.” However, a project where a “corporate” employee has commit rights would not be exempt (regardless, potentially, of the upstream collaboration having little or nothing to do with their employer’s commercial product). [...] the lack of a transactional connection between those contributors and the commercial employers is problematic. For example, the developer could be an airline pilot employed by a commercial airline (i.e. a commercial entity) – who contributes to open source in their spare time: this part of the policy would make that contribution ‘commercial’.
Some of the obligations are virtually impossible to meet: for example there is an obligation to “deliver a product without known exploitable vulnerabilities”. [compare Coq's list of critical bugs, not all of which are fixed]
A possibly relevant earlier breakdown of the FOSS liability/responsibility issue: https://eclipse-foundation.blog/2023/03/10/product-liability-directive-more-bad-news-for-open-source/
Imagine a scenario where a year ago or so a consumer in Europe lost data as a result of using the Wizbang product from BigCo GmbH. [...] the defective open source component is (say) the Eclipse Modeling Framework (EMF), so the component manufacturer is the Eclipse Foundation AISBL, a European-based open source foundation.
My read of the PLD [legislative "part" of CRA] is that as the European manufacturers of the Wizbang product and the EMF component, BigCo GmbH and the Eclipse Foundation would both be jointly and severally liable to consumers of the defective product. If I am correct, this scenario puts European open source projects, communities, and foundations at a disadvantage relative to their international peers.
Isn't it so that in this statement of the PLD proposal:
(13) In order not to hamper innovation or research, this Directive should not apply to free and open-source software developed or supplied outside the course of a commercial activity. This is in particular the case for software, including its source code and modified versions, that is openly shared and freely accessible, usable, modifiable and redistributable. However where software is supplied in exchange for a price or personal data is used other than exclusively for improving the security, compatibility or interoperability of the software, and is therefore supplied in the course of a commercial activity, the Directive should apply.
the sentence after "This is in particular the case for software" legally defines a subset of "developed or supplied outside the course of a commercial activity"?
Are there arguments that Coq wouldn't match this definition?
Whether this definition applies to all the libraries we use is a different but irrelevant question, since the PLD regulates whether anybody can be held liable for defects in Coq. If that can be denied, the question of who would be liable for the libraries we are using is irrelevant.
Things will get tricky around CompCert, though.
In the example above, what is really the "commercial activity" of something like the Eclipse Modeling Framework? It looks like this comes precisely from "a presumption of commercial intent if the software is used anywhere in a commercial environment". Wouldn't you call CompCert and also stuff like this a commercial environment?
for example, a bug in CompCert causing damages may (theoretically) be due to an unfixed critical bug in Coq (theoretically) making AbsInt and Coq devs/Inria jointly and severally liable to consumers of the defective product under PLD/CRA
if the directive doesn't apply to coq (per the quoted bit) why would coq be liable?
if you read the Apache blog post, they make the case that it's nearly impossible for large open source projects to fall outside the CRA. Specifically, they [CRA authors] "presume commercial intent" unless it's pure hobbyist code or code that is not used in real life.
there is on one hand the "current" text of the CRA, but the Apache people actually met representatives from the European Commission, who apparently said they want open source foundations to be covered by the CRA (regardless of whether the foundation's software is seemingly produced outside the "course of commercial activity")
But isn't the PLD the more relevant part? (The posts you shared seem to agree with that.) As far as I understand this, the CRA defines what the state of the art is, and the PLD defines in which situations you are liable if you don't follow the state of the art. So if the PLD says you are not liable, the CRA as a whole doesn't apply.
my understanding is that it goes the other way: CRA says you are responsible, then PLD says that if you are responsible then you are liable
OK, but then in what way does responsibility do harm if it doesn't have legal consequences?
my understanding is that if the PLD gets added into the national law of EU countries, suddenly all the responsibility has legal consequences (publish some open source software, get sued regardless of nonliability clauses)
anyway, I don't pretend to understand all the CRA/PLD details, but my overall argument is that if Apache and Eclipse are worried, maybe "we" should be too
I mean yes, the CRA says that the Coq team is responsible for keeping Coq sane, but the PLD is the law which defines the consequences if you don't.
Karl Palmskog said:
all the responsibility has legal consequences (publish some open source software, get sued regardless of nonliability clauses)
Note that this is already the case under French law. You cannot waive away liability of any kind. In particular, most licenses that state something like "but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE" (quoted from the GPL) are meaningless in France. That is the reason why the CeCILL license was written in the first place.
@Michael Soegtrop the argument seems to be that almost nothing would qualify as "non-commercial"
@Paolo Giarrusso : I believe I cited the legal definition of "non-commercial" in the sense of the PLD above - and this definition is quite clear.
There are arguments that any contribution by non-hobbyists to software makes it commercial, but I can't quite see how this is rooted in the proposals.
If this were to apply to Coq, what would be the consequences? Or rather, what could we do to mitigate the bad consequences?
as a first approximation: not publish any list of unfixed bugs, since CRA would require them to be patched
so close every github issue? that doesn't sound like a good way to work
You cannot be serious. If we willingly hide the existence of a bug and this bug happens to cause damages, then this becomes a criminal offense.
CRA requires to "deliver a product without known exploitable vulnerabilities". If the exploitable vulnerabilities are not "known", then it can't apply. They will have some central system for listing "known" vulnerabilities.
If the meaning of the law were that non-commercial is defined as "hobbyists only", open source would be dead in Europe, and all one could do about it is make this clear to one's EU parliamentarian.
I think to move forward we should scrutinise the wording of the proposals and see if we find anything in that direction. I am not going to my EU parliamentarian saying that I am worried because Apache is worried. But if there is a solid argument, I definitely will.
I believe I cited the legal definition of "non-commercial" in the sense of the PLD above - and this definition is quite clear.
That seems to be just some "prelude". Can you find matching text in the actual Articles (starting at Chapter 1)?
but more specifically, I meant that CRA enforcers are likely to zoom in on this list. That's just a tiny example, though. Probably more drastic stuff, like assigning copyright to a legal entity, is needed.
Karl Palmskog said:
CRA requires to "deliver a product without known exploitable vulnerabilities". If the exploitable vulnerabilities are not "known", then it can't apply. They will have some central system for listing "known" vulnerabilities.
That is not what the directive says. The directive says you have to prove that "the objective state ... was not such that the defectiveness could be discovered". Hiding the list of bugs does not change anything. On the contrary...
@Karl Palmskog the Apache post says disclosure of (security) vulnerabilities is compulsory within hours
but also, "producers hide defects" is almost a trope by now, and neither authorities nor customers like it
I still think it's going to be hard to sue people for willingly staying ignorant [of exploits and the like]. That's not really "hiding" per se
I guess one also needs to find out what a "vulnerability" in this sense is. In a web server, many bugs lead to vulnerabilities. In Coq, bugs will only lead to vulnerabilities if a proof of False can be exploited to cause damage, say if Coq is used to check blockchain contracts or the like. If Coq is used for verification that is also manually reviewed, I would say a proof of False is very unlikely to invalidate a verification and cause a vulnerability.
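To make that concrete, here is a minimal sketch (the hypothesis `bad : False` stands in for a term that an exploited critical bug would hypothetically produce) of why a proof of False voids every guarantee:

```coq
(* Minimal sketch: if a critical bug ever let us construct a closed
   term of type False, every statement would become "provable",
   including any bogus correctness theorem about, say, a contract. *)
Section HypotheticalExploit.
  (* Stand-in for what an exploited soundness bug would yield. *)
  Hypothesis bad : False.

  (* Any claim whatsoever follows. *)
  Theorem any_spec_holds : forall P : Prop, P.
  Proof. intros P. destruct bad. Qed.
End HypotheticalExploit.
```

So the only Coq bugs that would matter under this reading are soundness bugs, and even then only when nobody reviews the proof artifacts downstream.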
So I guess we should define what proper use of Coq is and what not.
Anyway, I will read the full text of the PLD and CRA (thanks @Paolo Giarrusso for pointing out my mistake).
@Karl Palmskog Even if the law had a loophole, I'd suggest that conversation should happen with lawyers. Authorities are terrible at prosecuting such crimes, but discussing ways to get away with crimes at some point is also a crime, of which you're providing evidence.
sorry, let me be more precise: IANAL so I don't know exactly what becomes a crime, but if yes this discussion would be evidence.
at least in my current jurisdiction, discussing how one can get around something that may be signed into law at some future point is not a crime. Anyway, this is just a variation of due diligence: if knowing something is a liability, then stop knowing about it.
that seems like it'd apply to tobacco, oil, pharma, any safety-critical industry
I have now read the full normative part of the PLD proposal, that is, the part after "HAVE ADOPTED THIS DIRECTIVE". A few observations:
Also reading, and disclosure seems irrelevant to the PLD: it ensures "no-fault liability" for any defect (known or unknown), unless the defect could not be discovered (see Art. 10.1.(e) — phrasing is complicated, but not reassuring).
Re open source, the only possible exclusion seems to be that Art. 4.(9) requires that "making available on the market" only counts if happening "in the course of a commercial activity" — as the Eclipse blog post says — but if I submit a patch as part of my employment, it seems it'd count
‘making available on the market’ means any supply of a product for distribution, consumption or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge;
And yes, submitting a patch for free is clearly "making available on the market".
@Paolo Giarrusso : regarding "not part of a product": I refer to Article 4, whose description of "component" leaves for Coq only "related service", which "means a digital service that is integrated into, or inter-connected with, a product in such a way that its absence would prevent the product from performing one or more of its functions"
I was reading the same, but this _might_ include "inter-connected intangible item", which is pretty vague:
‘component’ means any item, whether tangible or intangible, or any related service, that is integrated into, or inter-connected with, a product by the manufacturer of that product or within that manufacturer’s control;
still, I'd argue Coq should be treated as factory equipment, say for quality assurance — that's not part of the product, but it can contribute to damage, so how's that regulated?
naively, if a car manufacturer is liable, _they_ pay the customer, and then possibly sue their own supplier — not sure if the same law would apply
presumably not because bugs per se don't count as "damage" under Art. 4
I guess I'll classify my FOSS projects as modern poetry and abstract art installations.
if you don't have commercial contributors you should be safe
@Paolo Giarrusso : I still think that "component" without "related service" means that it is somehow a part of the product - say that it contains compiled binaries of the software. For FV this would not be the case, but for CompCert it would.
Reading the CRA, one noteworthy point is:
1. This Regulation applies to products with digital elements whose intended or reasonably foreseeable use includes a direct or indirect logical or physical data connection to a device or network.
if Coq is used for extracting code that goes into a product (or an AST inside Coq is pretty-printed), then it's hard to argue Coq is not somehow part of the actual product, like a compiler generating the machine code
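To illustrate the pipeline being discussed, a minimal sketch (the definition `double` and the file name are made up; `Extraction` is Coq's standard extraction command):

```coq
(* Verified Gallina code is extracted to OCaml source, which can then
   be compiled into a shipped binary; at that point Coq sits in the
   toolchain much like a compiler generating machine code. *)
Require Extraction.

Definition double (n : nat) : nat := n + n.

(* Prints the OCaml translation of [double];
   [Extraction "double.ml" double.] would write it to a file instead. *)
Extraction double.
```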
Michael: the text is a bit more ambiguous than I'd like, but I essentially agree
@Karl Palmskog : I am not sure if a compiler is a component of a product in the sense of the PLD either - it doesn't speak explicitly about tools. Would an NC lathe be part of a car?
not sure, I guess lawyers will figure that out
At least runtime libraries are likely components, and CompCert has some (not sure about extraction?), but they seem less likely to be defective
I know GCC inserts its runtime library stuff into compiled code; that's why we need: https://www.gnu.org/licenses/gcc-exception-3.1.en.html
I was looking for a definition of "safety", which is the fundamental notion they use all over the place nowadays and not just in that document, and I couldn't find it anywhere... anybody knows where it is? OTOH, they say "A product shall be considered defective when it does not provide the safety...", which is a complete misunderstanding and distortion of the technical meaning of "defect" and, with that, of the meaning of "quality" as the degree of correspondence to the intended requirements (i.e. correctness, not "safety"), and, with that, of all the most fundamental notions of software engineering, and indeed of engineering in general, starting with Murphy's Law and what it means. We are even made responsible for "defects" in any third-party software we use, and since the very operating systems and development frameworks nowadays are systematically flawed, good luck bringing e.g. MS to court to prove the responsibility is theirs and not yours. But of course this is just the tip of the iceberg of a general situation that is every day more insane at all levels...
I did a "diagonal read" of the CRA. My conclusion for Coq and Coq Platform is:
For the PLD we should individually take political action to ensure that the intention (13) cited above does make it into the normative text.
I did find the Annex I, just a minute
https://eur-lex.europa.eu/resource.html?uri=cellar:864f472b-34e9-11ed-9c68-01aa75ed71a1.0001.02/DOC_2&format=PDF
(from https://digital-strategy.ec.europa.eu/en/library/cyber-resilience-act -> English PDF text -> "Link to document 2")
Btw.: I didn't want to stop the discussion with my post - I wanted to start it :-)
Julio Di Egidio said:
I was looking for a definition of "safety", which is the fundamental notion they use all over the place nowadays and not just in that document, and I couldn't find it anywhere... anybody knows where it is?
There is only one definition of "safety" that the politicians truly care to protect. Pardon my rant.
aren't they using "safety" as a sort of synonym for, or expansion of, "security"? The "safe" software never goes wrong (functional correctness) and has no "exploitable vulnerabilities" (what is usually meant by security: nonfunctional correctness)
functional correctness sounds much stricter, honestly
technical meaning of "defect"
The jargon they consider relevant is consumer protection law, not software engineering — PLD is about the software having done enough damage that the producer is liable to the customer, whether they're at fault or not (EDIT: AFAICT)
think of the CE safety certification on electronics, AFAICT — they also refer to existing legislation (such as "Regulation 2018/1139 [high uniform level of civil aviation safety]")
or car safety
Regulation (EU) 2019/2144 of the European Parliament and of the Council of 27 November 2019 on type-approval requirements for motor vehicles and their trailers, and systems, components and separate technical units intended for such vehicles, as regards their general safety and the protection of vehicle occupants and vulnerable road users
when describing the old version of the PLD (85/374/EEC), the text implicitly defines safe: something is safe if it ensures it cannot cause damage _beyond_ the product — safe cars in a crash needn't remain intact
[The old PLD] establishes the principle that the manufacturer of a product is liable for damages caused by a lack of safety in their product irrespective of fault (‘strict liability’). Where such a lack of safety consists in a lack of security updates after placing the product on the market, and this causes damage, the liability of the manufacturer could be triggered.
The relevant technical definitions are indeed the engineering ones, not the mathematical ones: this is about the proverbial real production of real software for real applications by real people. In particular, "correctness" in that technical sense is relative to an intended semantics, which is always the run-time semantics (note: irrespective of how much of it we manage to delegate to compile-time / static / formal constructs), since the only semantics that matters in that technical sense is what the program actually does / is expected to do.

Under the same rubric, "safety" too has a technical definition, as the validation of pre- and post-conditions (which includes validation of return values from sub-calls, i.e. the whole boundary surface, with obligations on the caller as well as on the callee).

The two definitions mostly overlap but should not be reduced to one another: while correctness as satisfaction of an intended semantics can indeed be mapped to a set of pre- and post-conditions, whence the notion of correctness by safety, our ability to validate, if not to express, the needed conditions is often limited, by practical considerations at least. So a naive approach ("full safety in each and every place", so we don't even trust a call to our own safe code) is neither needed, nor wanted, nor generally feasible: we / the EU should rather be reasoning about components and the boundaries of components, and eventually about modularity, of functionality as well as of responsibility... while these regulations seem to confirm and consolidate a trend in the exact opposite direction.
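To sketch what I mean in Coq terms (a made-up toy example, not from any of the proposals): the precondition is a hypothesis and the postcondition lives in the result type, so the contract at the component boundary is checked by the type checker rather than validated at run time:

```coq
(* Toy sketch: a predecessor function whose boundary contract is
   explicit. The caller must discharge the precondition [n <> 0];
   the callee must deliver the postcondition [S m = n]. *)
Definition safe_pred (n : nat) : n <> 0 -> {m : nat | S m = n} :=
  match n as n0 return n0 <> 0 -> {m : nat | S m = n0} with
  | 0 => fun H => False_rect _ (H eq_refl)  (* precondition violated: unreachable *)
  | S m => fun _ => exist _ m eq_refl       (* postcondition holds by reflexivity *)
  end.
```

The point is the boundary, not the checking mechanism: the same contract could equally be validated at run time at a component interface.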
We're talking of two separate laws, and at least for the PLD, "modularity of responsibility" seems too vague or downright dangerous.
If you buy a car, and the car hurts you, the producer (or importer) is _liable_ to you — full stop; that's the idea of strict liability. That's essential so you can get the money quickly. They might not be responsible, but they can sue the faulty party and spend tons of time and money to prove who is or is not at fault. And modularity can be relevant to the latter.
Without strict liability, the damaged consumer would need to prove who's at fault, which is an excessive burden for consumers.
The new PLD just extends this to software causing damages — most bugs don't qualify, but conventional harms and loss of data do. What is in question is extending this to open source.
"full safety in each and every place", so we don't even trust a call to our own safe code
Do you have a source?
Paolo Giarrusso
"modularity of responsibility" seems too vague or downright dangerous.
I was neither vague, nor have I said anything really unorthodox as far as engineering and production are concerned. What is dangerous is not taking your engineers seriously. :)
If you buy a car, and the car hurts you, the producer (or importer) is _liable_ to you — full stop;
No, especially in the case of "incidents", it's not as simple as a "full stop": that is the very problem with that directive...
Anyway, my 2c.