Concurrency and isolation levels

Transactions would be the term. Maybe it helps to look at this article to understand how Fauna does optimistic calculations: Consistent Backends and UX: How Do New Algorithms Help? (CSS-Tricks). Towards the end there is a diagram.

In your case, if I’m not mistaken:

  • Transactions will be ordered and optimistically calculated, so let’s say they are ordered nicely and t1 comes before t2.
  • When applying transactions on all nodes, each node will come to the same conclusion: t1 goes through since its data has not been modified in the meantime; t2 does not go through since its data has been modified, so t2 is added to the ordered log again and therefore recalculated and rescheduled.
  • All nodes now accept t2 since its data has not been modified this time around.

That’s not correct: since the second transaction will be recalculated due to the selections having changed, it will contain the correct data and there will be two appends.

That’s what locking is. Fauna does not do locking but rather optimistically calculates and then verifies whether it was too optimistic when each node applies the calculated value. This is possible since everything is deterministic. The end result is the same, though; you could call it ‘optimistic locking’.
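To make the ‘optimistic locking’ idea concrete, here is a minimal in-memory sketch in TypeScript. It is only an analogy for the calculate-then-verify flow, not Fauna’s actual implementation; the `OptimisticStore` type and its version counter are purely illustrative:

```ts
// Minimal analogy for "optimistic locking": read a version, compute the new
// value, and only commit if the version is still unchanged. Purely
// illustrative; not Fauna's actual implementation.
type Versioned<T> = { version: number; value: T };

class OptimisticStore<T> {
  private state: Versioned<T>;

  constructor(initial: T) {
    this.state = { version: 0, value: initial };
  }

  read(): Versioned<T> {
    return { ...this.state };
  }

  // The commit succeeds only if nobody else committed since our read.
  commit(readVersion: number, newValue: T): boolean {
    if (readVersion !== this.state.version) return false; // too optimistic: recalculate and retry
    this.state = { version: readVersion + 1, value: newValue };
    return true;
  }
}

// t1 and t2 both read version 0; t1 commits first, so t2's first commit fails,
// is recalculated against the new state, and then succeeds with both appends.
const store = new OptimisticStore<string[]>([]);
const t1 = store.read();
const t2 = store.read();
store.commit(t1.version, [...t1.value, "selection from t1"]); // true
store.commit(t2.version, [...t2.value, "selection from t2"]); // false
const t2Retry = store.read();
store.commit(t2Retry.version, [...t2Retry.value, "selection from t2"]); // true
```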

You don’t have to; your logic should be correct, however it might not be an ideal way to model your problem.

There is a limit to the number of times a transaction is going to be retried, and if you hit that limit you will get an error along the lines of: “Transaction was aborted due to detection of concurrent modification.” The way you have modelled it has two downsides:

  • You are replacing the complete array of selections each time.
  • When there are multiple concurrent writes, there will sometimes be these concurrent modifications, which might trigger the error if it happens too often. You could of course always retry yourself (see the sketch after this list).
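If you do decide to retry at the application level, a sketch with the JavaScript driver could look roughly like this. The check on the error message, the retry count, and the ‘polls’ collection / document id are all assumptions for illustration:

```ts
import faunadb, { query as q } from "faunadb";

const client = new faunadb.Client({ secret: process.env.FAUNA_SECRET! });

// Retry a query a few times when it is aborted because of concurrent
// modification. Matching on the error message is an assumption; adjust it to
// however your driver version surfaces contention errors.
async function withRetry<T>(run: () => Promise<T>, attempts = 3): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await run();
    } catch (err) {
      const message = err instanceof Error ? err.message : String(err);
      const contended = message.includes("concurrent modification");
      if (!contended || attempt >= attempts) throw err;
    }
  }
}

// Example: replace the whole selections array, as in the model discussed above.
// The 'polls' collection and document id are placeholders.
async function saveSelections(selections: string[]) {
  return withRetry(() =>
    client.query(
      q.Update(q.Ref(q.Collection("polls"), "101"), {
        data: { selections },
      })
    )
  );
}
```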

The positive point is:

  • Querying your document will be fast since your selections are readily available in the document; you don’t need to follow a reference or use an index (see the read sketch below).
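On the read side that means a single document fetch. Again, the collection name and document id are placeholders, and the client setup mirrors the earlier sketch:

```ts
import faunadb, { query as q } from "faunadb";

const client = new faunadb.Client({ secret: process.env.FAUNA_SECRET! });

// One Get returns the whole document; the selections array is embedded in it,
// so no index or extra reference lookup is needed. Names are placeholders.
async function getSelections(): Promise<string[]> {
  const doc: any = await client.query(
    q.Get(q.Ref(q.Collection("polls"), "101"))
  );
  return doc.data.selections;
}
```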

Therefore, it’s a tradeoff; ask yourself whether you want to optimize for writes or reads (or both?). There is some Fauna modelling advice that might interest you in this three-part series: Modernizing from PostgreSQL to Serverless with Fauna, Part 1
