Transaction was aborted due to detection of concurrent modification

Hi,

I have a Node.js sync job that I need to run every day. It does parallel upserts, one per document, using the following generic function:

const faunadb = require("faunadb")
const q = faunadb.query

// One shared client for the whole job (the secret comes from the environment)
const client = new faunadb.Client({ secret: process.env.FAUNA_SECRET })

const upsertDoc = async (doc, coll, index, terms) => {
  return client.query(
    q.Let(
      {
        // Look the document up by the given index and search terms
        m: q.Match(q.Index(index), terms),
      },
      q.If(
        // If a match exists, replace the existing document...
        q.IsNonEmpty(q.Var("m")),
        q.Replace(q.Select("ref", q.Get(q.Var("m"))), { data: doc }),
        // ...otherwise create a new one
        q.Create(q.Collection(coll), { data: doc })
      )
    )
  )
}

Every upsert is applied to a different document; I'm not manipulating the same document concurrently.
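
Roughly, the job fans the upserts out like this (a simplified sketch; the collection, index, and field names here are placeholders):

const runJob = async (docs) => {
  const parallelism = 4
  // Process the documents in batches of 4 concurrent upserts
  for (let i = 0; i < docs.length; i += parallelism) {
    const batch = docs.slice(i, i + parallelism)
    await Promise.all(
      batch.map((doc) =>
        // "docs" and "docs_by_external_id" are placeholder names
        upsertDoc(doc, "docs", "docs_by_external_id", doc.external_id)
      )
    )
  }
}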

The problem is that I've started to get the "Transaction was aborted due to detection of concurrent modification" error at a parallelism level of 4. I'm really confused, because this used to work, and I can't think of anything I've changed that might cause it. But obviously something did.

I've also tried creating the client inside the upsertDoc function. That didn't help, though I don't know what the best practice is here.

I’d appreciate any clue that might help.


Hi @fred,

Does the index return just one document (i.e., is it a unique index)?

Luigi

Hi @Luigi_Servini

One of the indexes is, indeed. And while trying to debug this, I also noticed that changing the index to not be unique solved the issue.

Although I couldn't understand why, because I wasn't doing anything that would break the uniqueness constraint.
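
For reference, the unique index involved is defined roughly like this (collection and field names are placeholders):

client.query(
  q.CreateIndex({
    name: "docs_by_external_id", // placeholder name
    source: q.Collection("docs"), // placeholder collection
    terms: [{ field: ["data", "external_id"] }],
    unique: true, // enforces uniqueness over the term
  })
)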

We are dealing with the same error message in a use case that is very similar, except that we don’t modify documents at all. We basically do:

If(
    Exists(Match(Index("some_unique_index"))),
    false,
    Create(...)
)

As soon as there is some concurrency, we get (on the JVM):

com.faunadb.client.errors.UnknownException: contended transaction: Transaction was aborted due to detection of concurrent modification.

That’s indeed quite similar.

One fact I just realised I'd missed: yes, the index I'm using in the upsert code is unique, but the other index, the one I changed to unique: false, was a different index defined on the same collection.

So you might want to test other indexes as well.

@fred @Felix We retry the transaction 4 times, with an exponential delay between each retry, before surfacing this contended-transaction exception with error code 409. You could also retry from the client side.
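
For example, a minimal client-side retry wrapper might look like this (a sketch; it assumes the JS driver, where the HTTP status is exposed on the error's requestResult):

const withRetries = async (fn, attempts = 4) => {
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn()
    } catch (err) {
      // Contended transactions are reported with HTTP status 409
      const status = err.requestResult && err.requestResult.statusCode
      if (status !== 409 || i === attempts - 1) throw err
      // Exponential backoff: 100 ms, 200 ms, 400 ms, ...
      await new Promise((resolve) => setTimeout(resolve, 100 * 2 ** i))
    }
  }
}

// Usage: withRetries(() => upsertDoc(doc, coll, index, terms))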

I see. So if I understand correctly, this basically puts a limit on how much a unique index can scale, write-throughput-wise, am I right? I wonder if there are any workarounds; retrying contended transactions will stop working once the number of concurrent writes to the unique index passes some threshold.

Thanks @Jay-Fauna

Though I thought the unit of transactional contention in Fauna was the document. In this case I'm not changing any single document concurrently; I'm adding or replacing different documents, am I not?