Following on from yesterday's call, I came across this SO thread: https://stackoverflow.com/questions/16891659/how-does-concurrency-using-immutable-persistent-types-and-data-structures-work
Is this basically what PMP's "dirty" labeling is doing? By keeping a mutable node on the mostly-immutable ugraph, we can track divergence in the data structure. My understanding from the SO thread is that "doing the same work twice" is typically more efficient than coming up with fancy ways to merge diverging graphs.
I could also be completely wrong, but I would like to learn a bit more about this.
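To make the idea from the SO thread concrete, here is a minimal sketch of a persistent (structurally shared) data structure in Rust. This is a generic cons list, not PMP's actual graph representation or its dirty labeling; it just shows the mechanism the thread describes: two "diverging" versions are built from the same immutable base, each redoing its own work, while the unchanged tail is shared rather than merged.

```rust
use std::sync::Arc;

// A minimal persistent cons list: "updating" it builds a new head
// while sharing the unchanged tail behind an Arc.
#[derive(Debug)]
enum List {
    Cons(i32, Arc<List>),
    Nil,
}

use List::{Cons, Nil};

// O(1) "update": allocates one new node and shares the old tail.
fn push(head: i32, tail: &Arc<List>) -> Arc<List> {
    Arc::new(Cons(head, Arc::clone(tail)))
}

fn main() {
    let base = push(1, &Arc::new(Nil));

    // Two lines of work diverge from the same base; neither disturbs
    // the other, so the base can be read without any locking.
    let version_a = push(2, &base);
    let version_b = push(3, &base);

    // Both versions alias the exact same tail allocation:
    if let (Cons(_, ta), Cons(_, tb)) = (&*version_a, &*version_b) {
        assert!(Arc::ptr_eq(ta, tb));
    }

    println!("a = {:?}", version_a);
    println!("b = {:?}", version_b);
}
```

The "same work twice" point is that if `version_a` and `version_b` both need the base's update, each simply recomputes its own new spine; that duplicated O(1)-per-node work is usually cheaper than a clever merge procedure.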
There is a wealth of lore / design patterns around this topic in the Rust community at large (including research work like Iris).
I find the Rustonomicon to have interesting nuggets of wisdom: https://doc.rust-lang.org/nomicon/intro.html
(aka "unsafeness in an otherwise semantically sound model")
Seems to me the answer on SO is describing spinlocking and lock-free / fine-grained approaches. In general this stuff is very tricky to get right, which is why there is a whole field of concurrency verification (of which concurrent separation logics like Iris/FCSL are one approach). From the app programmer's point of view, the general consensus is to avoid lock-free code and just use system-provided locks, unless you have some very specific scenario. Typically that means situations where you need some kind of fair throughput — roughly speaking, all your threads have to proceed at the same pace — or where you really have to manually minimize the number of context switches between threads. I think a good high-level introduction to the theory is
https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.469.3698 Herlihy & Shavit [2011], "On the Nature of Progress"
Last updated: Oct 13 2024 at 01:02 UTC