

Memory Consistency Models: A Tutorial
17 February 2016

The cause of, and solution to, all your multicore performance problems.

Only two hard things in computer science: cache invalidation, naming things, and off-by-one errors.

Seeing things in order is a challenge for the ages. Lurking amongst the tall weeds of computer science: memory consistency, which is the problem of defining how multiple threads (or workers, or nodes, or replicas, etc.) can interact through shared memory. There are many resources on memory consistency; this post tries only to build intuition and motivate why memory consistency is an issue for multicore systems. For the details, you should certainly consult these other excellent sources.

Consider a simple program with two threads and two shared variables, A and B, both initially 0. The first thread (1) writes a 1 to A and then (2) prints B; the second thread (3) writes a 1 to B and then (4) prints A. To understand what this program can output, we should think about the order in which its events can happen. Intuitively, there are two obvious orders in which this program could run:

(1) → (2) → (3) → (4): The first thread runs both its events before the second thread, and so the program prints 01.
(3) → (4) → (1) → (2): The second thread runs both its events before the first thread.

There are also some less obvious orders, where the instructions are interleaved with each other:

(1) → (3) → (2) → (4): The first instruction in each thread runs before the second instruction in either thread, printing 11.
(1) → (3) → (4) → (2): The first instruction from the first thread runs, then both instructions from the second thread, then the second instruction from the first thread.
…and a few others that have the same effect.

Intuitively, it shouldn't be possible for this program to print 00. For line (2) to print 0, we have to print B before line (3) writes a 1 to it. We can represent this graphically with an edge: an edge from operation x to operation y says that x must happen before y to get the behavior we're interested in.

Similarly, for line (4) to print 0, that print must happen before line (1) writes a 1 to A, so let's add that to the graph.

And finally, of course, each thread's events should happen in order - (1) before (2), and (3) before (4) - because that's what we expect from an imperative program. So let's add those edges too.

But now we have a problem. Remember that the edges are saying which events must happen before other events. If we start at (1), and follow the edges - to (2), then (3), then (4), then… (1) again! So if we start at (1), and end up back at (1) again, the graph is saying that (1) must happen before itself! Barring a very concerning advance in physics, this is unlikely to be possible. Think of it as a proof by contradiction: suppose this program could print 00. Then all the ordering rules we just showed must hold. But those rules lead to a contradiction (event (1) happening before itself). Since this execution would require time-warping, we can conclude that this program can't print 00.

Sequential consistency: an intuitive model of parallelism

Architects and programming language designers believe the rules we just explored to be intuitive to programmers. The idea is that multiple threads running in parallel are manipulating a single main memory, and so everything must happen in order. There's no notion that two events can occur "at the same time", because they are all accessing a single main memory. Note that this rule says nothing about what order the events happen in, just that they happen in some order.
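The litmus test discussed here ((1) A = 1, (2) print B, (3) B = 1, (4) print A) can be checked by brute force. This is a minimal Python sketch of my own (the function name and structure are not from the post): it enumerates every ordering of the four events that respects each thread's program order and collects the possible outputs. The output 00 never appears.

```python
from itertools import permutations

# Events: (1) A = 1   (2) print(B)   (3) B = 1   (4) print(A)
# Program order: (1) before (2), and (3) before (4).

def outputs_under_sequential_consistency():
    results = set()
    for order in permutations([1, 2, 3, 4]):
        # Keep only interleavings that respect each thread's program order.
        if order.index(1) > order.index(2) or order.index(3) > order.index(4):
            continue
        a = b = 0
        printed = {}
        for event in order:
            if event == 1:
                a = 1
            elif event == 2:
                printed[2] = b  # thread 1 prints B
            elif event == 3:
                b = 1
            elif event == 4:
                printed[4] = a  # thread 2 prints A
        results.add(f"{printed[2]}{printed[4]}")
    return results

print(sorted(outputs_under_sequential_consistency()))  # ['01', '10', '11']
```

Only six of the twenty-four permutations survive the program-order filter, and every one of them prints at least one 1, matching the intuition that 00 is impossible when all events happen in some single global order.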

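The happens-before argument can also be checked mechanically. The sketch below is my own construction, not code from the post: it encodes the four edges that the output 00 would require and confirms that they form a cycle, i.e. that some event would have to happen before itself.

```python
# Happens-before edges required for the program to print 00:
#   1 -> 2 and 3 -> 4: program order within each thread.
#   2 -> 3: for (2) to print 0, B must be printed before (3) writes 1 to it.
#   4 -> 1: for (4) to print 0, A must be printed before (1) writes 1 to it.
EDGES = {1: [2], 2: [3], 3: [4], 4: [1]}

def has_cycle(edges):
    # Depth-first search for a path that revisits an event, i.e. an
    # event that the graph says must happen before itself.
    def visit(node, path):
        if node in path:
            return True
        return any(visit(nxt, path | {node}) for nxt in edges.get(node, []))
    return any(visit(start, set()) for start in edges)

print(has_cycle(EDGES))  # True: printing 00 requires (1) to happen before itself
```

Dropping any one of the four edges breaks the cycle, which is exactly why relaxing any of the ordering rules (as real hardware does) can make 00 observable.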