The IETF community has long used natural language to describe and specify protocols [...] In many ways this is clearly a successful approach. After all, the Internet exists as an interoperable, global, multi-vendor network in large part due to the protocol standards the IETF develops.
https://csperkins.org/research/protocol-standards/2022-11-30-usable-formal-methods/
I guess since natural language specs didn't lead to complete disaster all the time, we'll have to keep doing them...
That's why we've been working on hacspec. It now has both a Coq and an SSProve backend.
yeah, I hope you can sell hacspec to the IETF, but it still seems a bit backwards to me (a subset of an unspecified language that outputs something well-specified)
but I think they could at least have investigated: how much did all the ambiguity and misunderstandings due to natural-language specs actually cost? The fact that they made some infrastructure work with them at all seems like a low bar for declaring success.
Hacspec has an independent operational semantics.
but as I think we discussed before, the evolution of the semantics is likely going to be constrained, if not outright dictated, by whatever people check in to the Rust compiler repository on GitHub...
or has anyone said: "Hacspec compatibility with Rust is not a high priority anymore"?
Didn't the IETF require multiple interoperable implementations, or was that another standards body? That doesn't seem like a purely "natural language" spec.
Also, the link seems to argue the opposite of that quote, since they are exploring more formal methods :grinning:
from the minutes:
What are formal methods and why should IETFers care?
why should we care then if it’s all difficult?
I think the chances of losing this game are overwhelming: whatever FM techniques you introduce or push for need to show positive ROI up front compared to whatever old methods were in use, which themselves never got any cost/benefit analysis
so essentially I believe they are setting up this whole "exploration" of FM to fail, and it will be a net negative for ITP and friends
I have the same sense; I further think the skepticism we are reading here is a well-deserved consequence of the PL tradition of zealously pushing for design choices that are better seen as academic simplifications.
for what it's worth I think the RISC-V formal specification competition/process was more of a step in the right direction for real-world formal methods (if we assume an ISA is a protocol, which it sort of is). At least, it ensured almost every popular ITP has a RISC-V ISA spec.
and seemingly few unwilling people were subjected to PL design choices along the way
saw this in the latest IETF formal methods research working group agenda/minutes from March 2023, regarding FM in general:
Why aren't these things already usable?
In my view, this is completely counterproductive, adverse signaling to the FM research community.
Looking at the linked talk:
Verifying a mechanical proof is correct can take many hours and cost hundreds or even thousands of dollars.
Uhm?
A successful analysis may find no issues, i.e. that the protocol meets its goals.
Such results are very hard to get published because they are not “novel”, even though they may be valuable to the community.
What community is that?
Naively, I'd also have expected at least lightweight formal methods like model checkers/TLA+ to be already relatively practical here?
I think we have plenty of workshops/conferences that welcome FM case studies that "find no issues". Of course, the workshops are not tier 1, maybe not even tier 2, but it gets added to the scholarly record. Was surprised when even TACAS (1.5 tier?) "allowed" this.
I gave a presentation there, https://datatracker.ietf.org/meeting/116/materials/slides-116-ufmrg-foundational-end-to-end-verification-of-high-speed-cryptography-00
which I believe was well received. These people will not be writing Coq code, but there's some adoption of hacspec.