`UnknownError: contended transaction` for a read-only query?

Hi! I’m running the following simple query to read from an index:

    q.Get(q.Match(q.Index('appByHostname'), [appHostname]))

Here’s how the index was created:

export default q.CreateIndex({
  name: 'appByHostname',
  source: q.Collection('apps'),
  terms: [
    { field: ['data', 'hostnames'] },
  ],
  serialized: true,
  unique: true,
})

And here’s what a typical app document might contain:

{
  id: "some-id",
  hostnames: ["some-hostname.com", "some-other-hostname.com"],
}

A few moments ago, the query randomly started failing with the following error:

2022-04-29T00:54:54Z app[c00a0d01] sjc [info]UnknownError: contended transaction
2022-04-29T00:54:54Z app[c00a0d01] sjc [info]    at Function.FaunaHTTPError.raiseForStatusCode (https://cdn.skypack.dev/-/faunadb@v4.5.2-3grtqHoapaCgnBEQKCqj/dist=es2019,mode=imports/optimized/faunadb.js:877:15)
2022-04-29T00:54:54Z app[c00a0d01] sjc [info]    at Client._handleRequestResult (https://cdn.skypack.dev/-/faunadb@v4.5.2-3grtqHoapaCgnBEQKCqj/dist=es2019,mode=imports/optimized/faunadb.js:3349:25)
2022-04-29T00:54:54Z app[c00a0d01] sjc [info]    at https://cdn.skypack.dev/-/faunadb@v4.5.2-3grtqHoapaCgnBEQKCqj/dist=es2019,mode=imports/optimized/faunadb.js:3334:11
2022-04-29T00:54:54Z app[c00a0d01] sjc [info]    at async Wrapper.func (file:///app/main.js:242:15)
2022-04-29T00:54:54Z app[c00a0d01] sjc [info]    at async Wrapper.wrapFunction (https://cdn.skypack.dev/-/async-cache-dedupe@v1.2.2-UrZ57DlOVSkWziAvXRMv/dist=es2019,mode=imports/optimized/async-cache-dedupe.js:889:20)

This persisted for about a minute and then recovered on its own.

I’m curious what this error means in the context of a read-only query (I’ve seen a few posts about it that involve writes, which doesn’t sound like it would apply here), and, more importantly, what I can do to prevent it from happening in the future.
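In case it's useful context, here's the kind of client-side stopgap I've been considering: a small retry wrapper around the query. This is just a sketch based on my assumption that "contended transaction" surfaces as an HTTP 409 via `err.requestResult.statusCode` in the faunadb JS driver; `queryWithRetry` is a hypothetical helper, not anything from the driver itself.

```javascript
// Hypothetical retry helper for transient contention errors.
// Assumption: contended transactions come back as HTTP 409, exposed on
// err.requestResult.statusCode by the faunadb JS driver's FaunaHTTPError.
async function queryWithRetry(client, expr, retries = 3) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await client.query(expr);
    } catch (err) {
      const status = err.requestResult && err.requestResult.statusCode;
      // Re-throw anything that isn't contention, or once retries are exhausted.
      if (status !== 409 || attempt >= retries) throw err;
      // Back off 100ms, 200ms, 400ms, ... before trying again.
      await new Promise((resolve) => setTimeout(resolve, 100 * 2 ** attempt));
    }
  }
}
```

It papers over short contention spikes like the one-minute window above, but obviously doesn't address the root cause.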

I have a slight suspicion that it might have something to do with the serialized: true property of the index, but I haven’t found enough documentation on what it actually does to be sure. The only “explanation” I was able to find was here:

Optional - If true, writes to this index are serialized with concurrent reads and writes. The default is true.

But as you can see it’s a rather… tautological description and isn’t all that helpful for understanding. :slight_smile:

Of course, that suspicion could very well be completely off-base. :sweat_smile:

Any insights from folks with more expertise would be appreciated. Thanks!

Can you confirm that this transaction is not performing any writes?

Read-write queries (regardless of what they are writing) can contend when reading a resource that is concurrently being written by other transactions.

Docs and Serialization/Isolation levels

“serialized” is a common term for databases, so we probably take that for granted when describing the Index’s serialized property. You can read more about serialization levels here:

Does the Isolation levels page help?

I’ve created an internal ticket to review the description of the serialized field.

Hi @lewisl

We are investigating an issue that is causing an elevated level of 409 responses for some customers in all Region Groups. This could be related.

If you can share more about your query (e.g. is it Read-write?), that may still be helpful.

You can subscribe to updates at status.fauna.com

Thanks for the link! That page definitely does a much better job explaining what it means.

Re: the transaction itself, the snippet I posted is the entirety of the query, so unless there’s some way those commands are performing hidden writes, it should be a read-only transaction, as far as I can tell.

Would it be possible to investigate further using the timestamp in the logs I posted (UTC, I believe)? The database name is reflame-prod, let me know if there’s any other info I can provide to aid with investigations.

That is definitely unexpected – you should not receive 409 errors for read-only queries – and strongly suggests that the errors are related to the ongoing issue.

We have identified the root cause of the issue that is causing the elevated level of 409 responses and are in the process of deploying a fix.

Thank you for your patience and your feedback.

