Transaction was aborted due to detection of concurrent modification

Hey @pier

You said you were running 10 concurrent requests. When Fauna encounters contending transactions it will retry 5 times and then abort with a 409 error. Some of the queries execute quickly enough to get through, but overall too many transactions are trying to gain write access to the same document.

Every request in Fauna is a transaction, and read-write transactions are strictly serialized (docs). Here is a forums discussion I thought was helpful for understanding that: Concurrency and isolation levels

Your case is a bit more straightforward than the original topic. You are trying to read+write a single document in many transactions. The database has to do something like this (similar steps from this discussion on aggregation):

  1. 10 requests to update the Document come in at roughly the same time.
  2. They all eagerly start running their transaction
  3. Each proceeds with updating the Document
  4. The earliest transaction (as determined by the Calvin algorithm) succeeds in updating the Document and claims ownership of that document in the transaction log. The 9 later transactions no longer have the latest version of the Document and have to retry.
  5. The first transaction completes. 9 requests to update the Document are pending.
  6. They all eagerly start running their transaction
  7. … and the process repeats until the transactions succeed or retry 5 times.
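The rounds above can be sketched with a toy simulation. This is a hypothetical illustration, not Fauna's actual implementation: `Document` and `cas_update` are made-up names standing in for an optimistic, version-checked write, and `MAX_RETRIES = 5` mirrors the retry limit mentioned earlier.

```python
# Hypothetical sketch of the retry rounds described above (not Fauna's API).
class Document:
    def __init__(self, value=0):
        self.value = value
        self.version = 0

    def cas_update(self, expected_version, new_value):
        """Compare-and-swap: apply the write only if nobody else has
        written since we read. Returns True on success."""
        if self.version != expected_version:
            return False
        self.value = new_value
        self.version += 1
        return True

MAX_RETRIES = 5

def run_round_robin(doc, n_txns):
    """Simulate n_txns concurrent increments. Each round, every pending
    transaction reads the same snapshot, then they all try to write;
    only one CAS per round can succeed and the rest must retry."""
    pending = [0] * n_txns          # retry count per pending transaction
    succeeded, aborted = 0, 0
    while pending:
        snapshot = doc.version      # everyone reads the same version...
        value = doc.value
        still_pending = []
        for retries in pending:
            if doc.cas_update(snapshot, value + 1):
                succeeded += 1      # ...but only the first writer wins
            elif retries + 1 > MAX_RETRIES:
                aborted += 1        # surfaces to the client as a 409
            else:
                still_pending.append(retries + 1)
        pending = still_pending
    return succeeded, aborted

print(run_round_robin(Document(), 10))  # → (6, 4)
```

With 10 contenders and 5 retries, only 6 transactions ever get a turn; the other 4 exhaust their retries and abort, which is exactly the 409 behavior you are seeing.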

Note how I worded #4, “The 9 later transactions no longer have the latest version of the Document”. You have to first read the latest version of the document before you write to it. This places additional constraints on your transaction. You could, for example, not require a strictly-serialized read, and let your transactions be happy with whatever the value was when the transaction started. That would make for a poor counter, though!
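Here is why a relaxed read makes for a poor counter, as a tiny hypothetical sketch: if two transactions both write based on the value at the time they started, one increment is silently lost.

```python
# Hypothetical sketch: lost update with a non-serialized read.
doc = {"count": 0}

def increment_from_snapshot(snapshot):
    # each transaction writes snapshot + 1, ignoring concurrent writers
    doc["count"] = snapshot + 1

snapshot = doc["count"]          # both transactions read before either writes
increment_from_snapshot(snapshot)
increment_from_snapshot(snapshot)
print(doc["count"])              # → 1, not 2: one increment was lost
```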

Regarding the original topic, the same kind of contention can happen with Indexes (any Set really) when you try to “read the latest” and then write something.

We are working on optimizing for counter-like operations, but until we can deploy that, straight counters like this are not a good pattern for Fauna. The best way to do this is to make a separate collection to log an event and then run a separate process to aggregate the events. I already linked to a separate discussion about aggregation, but I’ll repeat it, if only to highlight the complexity that even a counter can take on. We understand this is a pain and hope we can make changes soon.
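The event-log pattern looks roughly like this, as a hypothetical sketch (the `events` list stands in for a separate collection, and `log_increment`/`aggregate` are illustrative names, not Fauna functions):

```python
# Hypothetical sketch of the event-log + aggregation pattern.
events = []  # stands in for a separate "counter_events" collection

def log_increment(counter_id, amount=1):
    # appending a brand-new document never contends with an existing
    # one, so concurrent requests don't fight over a single counter
    events.append({"counter": counter_id, "amount": amount})

def aggregate(counter_id):
    # run later (e.g. on a schedule) to fold the events into a total
    return sum(e["amount"] for e in events if e["counter"] == counter_id)

for _ in range(10):
    log_increment("page-views")
print(aggregate("page-views"))  # → 10
```

The trade-off is that the total is eventually consistent: it is only as fresh as the last aggregation run.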
