I would like to understand how 'uniqueness' is determined in the context of 'document is not unique.' errors

Hi Fauna Community,

Could someone explain how ‘uniqueness’ is determined in the context of ‘document is not unique.’ errors? I have documents that I would like to replace in a specific collection, and for some reason I receive the above message repeatedly. In each case, the document ref matches that of the document to be replaced, and the documents are not identical (hence my frustration), yet I keep getting this message.

Is uniqueness assessed in the context of the database’s GraphQL schema? If so, that could be where my issue is, as some of the fields in the incumbent document are no longer referenced in the schema. For the record, I use a simple FQL whole-of-document upload, as in the query below (Python driver):

    def upsertDocument(document, collection):
        # Replace the existing document when a ref is present.
        if document.get('ref') is not None:
            client.query(
                q.replace(
                    q.ref(q.collection(collection), document['ref']),
                    {"data": document}
                )
            )

Either way, there isn’t much information available on this error in documentation.

Thanks in advance!

Uniqueness is determined by the index definitions that cover the document(s) you are modifying. Specifically, when an index has unique: true, any query that attempts to create or update a document with the same terms or values as an already-indexed document generates the error that you are seeing.

If your GraphQL schema uses the @unique directive, an index with unique: true is automatically created.
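One way to audit for this is to fetch every index definition and check its unique flag. Below is a minimal sketch: the filtering helper works on plain dicts, while the commented-out driver calls and the index-document field names ("name", "unique") are assumptions based on the Fauna index metadata docs, not verified against your database.

```python
def unique_index_names(index_docs):
    """Return the names of index definitions that enforce uniqueness.

    `index_docs` is a list of index documents as plain dicts; the
    "name"/"unique" field names are assumed from Fauna's index metadata.
    """
    return [doc.get("name") for doc in index_docs if doc.get("unique") is True]

# Hypothetical usage with the Python driver (requires a live client):
# result = client.query(
#     q.map_(q.lambda_("ref", q.get(q.var("ref"))),
#            q.paginate(q.indexes()))
# )
# print(unique_index_names(result["data"]))
```

If that list comes back empty for the indexes covering your collection, the error is unlikely to be caused by a unique index.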

Thanks Ewan,

I had initially thought that this was the issue, but I have been through all indexes covering this collection and checked that none has the unique: true setting. I have also deleted all documents in the collection and attempted to start uploading documents from scratch again. In case it was a concern, all indexes are clear of any references to old documents, though I have over 40 active indexes over the collection (hence the hesitation to delete the collection and start again). Also, each document is 400-500 kB if stored as a JSON file and about 30,000-35,000 lines long; seemingly too long/large to enter using the dashboard.

Even a simple create on documents in an almost-empty collection (4 newly created documents) yields the same issue, using the following:

    client.query(
        q.create(
            q.collection(collection),
            {"data": document}
        )
    )
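When debugging this, it may help to look at the machine-readable error code rather than just the message text. A hedged sketch follows: the helper parses a plain error-body dict, whose {"errors": [{"code": ..., "description": ...}]} shape is an assumption based on Fauna’s HTTP API docs, and the commented driver usage (BadRequest, err.code, err.description) is likewise assumed from the Python driver docs.

```python
def error_codes(error_body):
    """Extract (code, description) pairs from a Fauna HTTP error body.

    The {"errors": [{"code": ..., "description": ...}]} shape is an
    assumption based on the Fauna HTTP API documentation.
    """
    return [(e.get("code"), e.get("description"))
            for e in error_body.get("errors", [])]

# Hypothetical usage with the Python driver (names assumed):
# from faunadb.errors import BadRequest
# try:
#     client.query(q.create(q.collection(collection), {"data": document}))
# except BadRequest as exc:
#     for err in exc.errors:
#         print(err.code, err.description)
```

Seeing the exact code (and which query position it points at) can narrow down which constraint is actually being violated.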

Is there anything else you can think of? I would be available for a call if that could help resolve this sooner. This is our last step before production, and we are hoping to ship on April 18.

I can’t think of anything else that would cause this issue. You should probably contact support@fauna.com and have someone investigate your database to see what the problem might be (I don’t have access to the service logs).

Thanks for the follow up @ewan, I’ll send an email to support now.

I would like to keep this thread active, as I’m not sure that my Fauna subscription level permits service through the customer support portal. The team have been kind enough to look into it, though it has been several (weekend) days since last contact and I’d like to find a solution as soon as possible.

If anyone has found this issue with their deployments and figured out the cause beyond the issue @ewan warned against, I would really appreciate your input. I can confirm that my issue is not due to uniqueness constraints from indexes referencing the collection.

I am at a standstill here pre-production. There are no documentation or help-forum threads relating to this topic, and there are no more-verbose error-logging options that I am aware of.

Thanks all

Do you see the problem only during GraphQL mutations, or do you also see it with just an FQL query? Does the error appear regularly, or only sporadically?

If you can reproduce the problem with only 4 documents in a collection, can you share those documents, the definition of all covering indexes, and the subset of code that demonstrates the problem?

Until we can reproduce the problem, we might not make any headway in solving it. Feel free to DM me if you’d rather not share the details in this thread.