FaunaDB architecture design

I’m thinking about using FaunaDB as the base for a new project, but I’m struggling a bit with the application architecture.

I want to use Next.js for the frontend, Strapi for the static content, and FaunaDB for the dynamic part.

For the backend, I’m torn between .NET Core with HotChocolate and Node.js. The latter would make the most sense: Next.js supports API functions and Netlify Functions run on Node.js, so everything would be streamlined in the same language (I would only need one extra developer instead of two).

I’m unsure about the identity provider. I would love one that supports passwordless authentication using the Android/iOS fingerprint reader to discourage account sharing, but I haven’t found one yet.

A lot of the documentation talks about querying FaunaDB directly from the frontend, but:

  1. I have some additional server-side requirements like creating objects from ZIP-files and XML, in-depth mathematical calculations, and AI functionality.

  2. I want to restrict the queries an authenticated user can execute, so I was thinking about adding persisted queries.

  3. I need to comply with the GDPR, so all queries should be logged to, for example, Seq.

Does this mean I need to put a Node.js Apollo Server layer on top of every database query?

So I presume this is the best architecture?

Client-side React code -> Apollo Server on Netlify Functions (AWS Lambda) with persisted queries + logging to Seq -> FaunaDB.
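For concreteness, this is roughly the middle layer I have in mind: Apollo Server on a Netlify/AWS Lambda function, with a plugin that forwards every operation to Seq before the resolver hits FaunaDB. The schema, resolver, index name, auth header, and the `logToSeq` stub below are only placeholders to illustrate the idea.

```js
const { ApolloServer, gql } = require('apollo-server-lambda')
const faunadb = require('faunadb')
const q = faunadb.query

const fauna = new faunadb.Client({ secret: process.env.FAUNA_SERVER_KEY })

// Placeholder: swap in a real Seq client (e.g. its HTTP ingestion API).
async function logToSeq(entry) {
  console.log('audit', entry)
}

const typeDefs = gql`
  type Order {
    id: ID!
    total: Float
  }
  type Query {
    myOrders: [Order]
  }
`

const resolvers = {
  Query: {
    // Only the operations exposed here (or registered as persisted queries) can run.
    myOrders: async (_root, _args, { accountId }) => {
      const page = await fauna.query(
        q.Map(
          q.Paginate(q.Match(q.Index('orders_by_account'), accountId)),
          q.Lambda('ref', q.Get(q.Var('ref')))
        )
      )
      return page.data.map((doc) => ({ id: doc.ref.id, total: doc.data.total }))
    }
  }
}

const server = new ApolloServer({
  typeDefs,
  resolvers,
  // Placeholder auth: in reality you would verify a session/JWT here.
  context: ({ event }) => ({ accountId: event.headers['x-account-id'] }),
  plugins: [
    {
      // GDPR audit trail: log every incoming operation before it runs.
      async requestDidStart({ request }) {
        await logToSeq({ operation: request.operationName, query: request.query })
      }
    }
  ]
})

exports.handler = server.createHandler()
```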

This added middleware proxy will introduce some complexity and code redundancy, especially once subscriptions come into play (if FaunaDB adds support for them), but I think it’s currently the only way to restrict queries and add external logging to FaunaDB?

That last assumption isn’t correct in my opinion; see below.

True, you definitely need a backend in such an approach. But having a backend doesn’t mean that parts of your frontend can’t still talk directly to FaunaDB. For example, this example, for which a tutorial series is underway, uses a backend to store refresh tokens in httpOnly cookies (for which you do need a backend). The rest of the frontend then retrieves the data directly from FaunaDB via a short-lived token.
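To make that concrete, here is a very condensed sketch of that split, not the actual code from the tutorial: an Express-style login handler that puts a long-lived secret in an httpOnly cookie and hands the browser a short-lived Fauna token it can use to query FaunaDB directly. The `accounts_by_email` index is assumed to exist, and the real series covers considerably more (refresh-token rotation, logout, and so on).

```js
const faunadb = require('faunadb')
const q = faunadb.query

const server = new faunadb.Client({ secret: process.env.FAUNA_SERVER_KEY })

async function login(req, res) {
  const { email, password } = req.body

  // Verify the password and obtain a token for this account.
  const session = await server.query(
    q.Login(q.Match(q.Index('accounts_by_email'), email), { password })
  )

  // The long-lived secret only ever lives in an httpOnly cookie,
  // which is exactly why a backend is needed for this part.
  res.cookie('refreshToken', session.secret, { httpOnly: true, secure: true })

  // A short-lived token the browser can use to talk to FaunaDB directly.
  const shortLived = await server.query(
    q.Create(q.Tokens(), {
      instance: session.instance,
      ttl: q.TimeAdd(q.Now(), 10, 'minutes')
    })
  )

  res.json({ secret: shortLived.secret })
}

module.exports = { login }
```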

What you are mentioning here (ZIP/XML processing, heavy calculations, AI) are probably candidates for asynchronous, long-running tasks, given how slow they can be. That means you could use FaunaDB to store the task and its progress, launch the work in the backend (e.g. by polling for new tasks; once we have collection streaming, you could even use that), and update the frontend when the task completes (directly from FaunaDB, for example, with the upcoming streaming features). The idea is that instead of database → backend → frontend, each of these can talk to the others directly rather than having to pass through the backend for everything.
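As a rough sketch of that “tasks as documents” idea (the `tasks` collection, the `pending_tasks` index, and the worker below are all made up for illustration):

```js
const faunadb = require('faunadb')
const q = faunadb.query

const client = new faunadb.Client({ secret: process.env.FAUNA_SERVER_KEY })

// The frontend (or a thin API route) just records that work needs to happen.
function createTask(payload) {
  return client.query(
    q.Create(q.Collection('tasks'), {
      data: { status: 'pending', payload, createdAt: q.Now() }
    })
  )
}

// A backend worker polls for pending tasks (collection streaming could
// replace this polling once it is available) and records the results.
async function processPendingTasks(handler) {
  const page = await client.query(
    q.Map(
      q.Paginate(q.Match(q.Index('pending_tasks'))),
      q.Lambda('ref', q.Get(q.Var('ref')))
    )
  )

  for (const task of page.data) {
    // ZIP/XML parsing, heavy math, AI calls, ... happen here.
    const result = await handler(task.data.payload)
    await client.query(q.Update(task.ref, { data: { status: 'done', result } }))
  }
}
```

The frontend can then watch the task document (by polling today, by streaming later) instead of waiting on one long-running HTTP request.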

You can use User Defined Functions (UDFs) and write ABAC roles that only allow a user to call that exact function.
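For example, something along these lines; the collection, index, function, and role names are invented, and `q.CurrentIdentity()` is `q.Identity()` on older driver versions:

```js
const faunadb = require('faunadb')
const q = faunadb.query

const admin = new faunadb.Client({ secret: process.env.FAUNA_ADMIN_KEY })

async function setup() {
  // A UDF that encapsulates the only query this kind of user may run.
  await admin.query(
    q.CreateFunction({
      name: 'get_own_orders',
      body: q.Query(
        q.Lambda(
          [],
          q.Map(
            q.Paginate(q.Match(q.Index('orders_by_owner'), q.CurrentIdentity())),
            q.Lambda('ref', q.Get(q.Var('ref')))
          )
        )
      )
    })
  )

  // An ABAC role that grants call access to that exact function and nothing else.
  await admin.query(
    q.CreateRole({
      name: 'order_reader',
      membership: [{ resource: q.Collection('accounts') }],
      privileges: [
        { resource: q.Function('get_own_orders'), actions: { call: true } }
      ]
    })
  )
}
```

A client logged in with an `accounts` token can then run `q.Call(q.Function('get_own_orders'))`, and nothing else.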

This can also be achieved with User Defined Functions. The nice thing is that FQL excels at queries that do multiple things and is so composable that you can easily insert such specific logic into your queries by wrapping them in a function. For example, I have added an example here of a function that adds rate limiting to a query (by simply writing logs to a collection, much like you need to do), and you can see here how I use it by just wrapping the query in the function I created. That example is in JavaScript, but the same can easily be done in all Fauna drivers.

Of course, if the frontend were able to change that query, this wouldn’t work, since it could simply remove the rate limiting; that’s where User Defined Functions come in. The query is actually set up as a UDF, as you can see here, which allows me to write ABAC roles that give a user access to either the whole function or nothing. So if they call that UDF, rate limiting will always apply, and you could take a similar approach for logging for GDPR compliance.
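To give an idea of what such a wrapper can look like, here is a simplified sketch rather than the exact code from the linked example; the `query_logs` collection and `products_by_name` index are made up:

```js
const faunadb = require('faunadb')
const q = faunadb.query

const admin = new faunadb.Client({ secret: process.env.FAUNA_ADMIN_KEY })

// Wraps any FQL expression with an audit-log write; because FQL is composable,
// the log entry and the query itself run together in one transaction.
function withQueryLog(action, queryExpr) {
  return q.Do(
    q.Create(q.Collection('query_logs'), {
      data: { action, identity: q.CurrentIdentity(), at: q.Now() }
    }),
    queryExpr
  )
}

// Registered as a UDF so the frontend can only call it as a whole
// and cannot strip out the logging (or rate-limiting) part.
admin.query(
  q.CreateFunction({
    name: 'search_products',
    body: q.Query(
      q.Lambda(
        ['term'],
        withQueryLog(
          'search_products',
          q.Paginate(q.Match(q.Index('products_by_name'), q.Var('term')))
        )
      )
    )
  })
)
```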