I’d like to know more about the use cases for Faunadb’s multi-tenancy.
When the Fauna documentation talks about multi-tenancy for clients, is it referring to multiple, different client projects, or to segmenting customers within a single product (the same schema in multiple child databases)?
For example, let’s say we have two verticals / bounded contexts / microservices. Having a user’s token restricted to their company’s child database would make the security model much easier (no more policies in code checking arguments against foreign keys, like a Pundit policy in Rails).
Is Faunadb’s vision of client databases applicable to something like this?
Yes. However, you won’t be able to perform a query in one transaction over multiple child databases, which is a limitation to keep in mind with this setup: it might not be convenient when you need to join customers with customer data. Implementing transactions over child databases is technically feasible, but we don’t have that yet. I don’t think we have a feature request open for it either; feel free to add one on the forum for people to vote on.
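To make that limitation concrete, here is a sketch in plain JavaScript with hypothetical in-memory objects standing in for Fauna child databases (none of this is Fauna API): each child database commits independently, so a failure between two writes leaves partial state that the application has to detect and repair itself.

```javascript
// Hypothetical in-memory stand-in for a Fauna child database.
function makeChildDb() {
  return { docs: [] };
}

function write(db, doc) {
  db.docs.push(doc);
}

// Writing the "same" logical change to two child databases is two
// separate transactions; if the second one throws, the first is NOT
// rolled back -- there is no cross-database transaction to undo it.
function writeToBoth(dbA, dbB, doc, failSecond = false) {
  write(dbA, doc);                              // transaction 1: commits on its own
  if (failSecond) throw new Error("second write failed");
  write(dbB, doc);                              // transaction 2: independent commit
}

const a = makeChildDb();
const b = makeChildDb();
try {
  writeToBoth(a, b, { id: 1 }, true);
} catch (e) {
  // a now holds the doc, b does not: partial state the application
  // must reconcile itself (retry, compensate, or flag for repair).
}
```

The same shape applies to reads: a "join" across two child databases is two separate queries glued together in application code.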
The number of databases you can create is, to my knowledge, effectively unlimited.
I don’t think that’s possible. If that’s a requirement, I would opt for a single collection of documents that carry a User ID, and filter on that.
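A quick sketch of that alternative shape, using a plain in-memory array instead of a real Fauna collection (in Fauna you would typically back this with an index that has the user ID as a term; the names here are illustrative):

```javascript
// One shared collection; every document carries the tenant/user key.
const orders = [
  { userId: "u1", total: 10 },
  { userId: "u2", total: 25 },
  { userId: "u1", total: 5 },
];

// Per-user reads become a filter on the userId field instead of a
// query against a per-user child database.
function ordersForUser(docs, userId) {
  return docs.filter((d) => d.userId === userId);
}
```

The trade-off versus child databases is that isolation is now enforced by queries and access rules rather than by database boundaries, but cross-user queries stay in one transaction.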
No, this would not be feasible either, since you can’t run a query over multiple databases.
I’ll try to get some guidance on the multi-tenant story and get back to you. We’ll also add writing a comprehensive article about this to our list.
Does the scenario I am describing make sense / seem valuable?
For #1, on reads, something like Dataloader can pull lookups / foreign keys from any data source. Is there an equivalent to that in FaunaDB? I’m guessing the token permissions might get interesting there.
For #2, we could call a lambda to reduce the values into the set and publish them to an admin database, so I don’t think there’s a blocker there.
Thanks for any more details or advice you can offer.
There isn’t something like Dataloader for FaunaDB, but you could probably hook Dataloader up to FaunaDB; I haven’t tried it, so I’m not certain. You could indeed call a lambda to reduce the values into the set and publish them in another database, but the question to ask yourself is whether that extra complexity is worth keeping your data logically separated in different databases. If so, you could also stream data to another database, or aggregate on the fly when necessary by taking advantage of temporality.
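Since there is no built-in equivalent, here is a minimal Dataloader-style batcher in dependency-free JavaScript, to show the shape of what you would hook up: keys requested within one tick are collected and handed to a single batch function, which in Fauna’s case could issue one query per child database the keys belong to. `makeLoader` and the batch function are assumptions of this sketch, not a Fauna or Dataloader API.

```javascript
// Collect all load(key) calls made in the same tick into one batch.
function makeLoader(batchFn) {
  let queue = [];
  let scheduled = false;
  return function load(key) {
    return new Promise((resolve, reject) => {
      queue.push({ key, resolve, reject });
      if (!scheduled) {
        scheduled = true;
        // Flush once the current synchronous work finishes.
        queueMicrotask(async () => {
          const batch = queue;
          queue = [];
          scheduled = false;
          try {
            // batchFn receives every key from this tick at once and
            // must return results in the same order.
            const results = await batchFn(batch.map((item) => item.key));
            batch.forEach((item, i) => item.resolve(results[i]));
          } catch (err) {
            batch.forEach((item) => item.reject(err));
          }
        });
      }
    });
  };
}

// Stand-in batch function; a real one would query FaunaDB here,
// using whichever client/secret matches each key's tenant.
const load = makeLoader(async (keys) => keys.map((k) => ({ id: k })));
```

As you guessed, the interesting part with per-tenant tokens is that one batch may span tenants, so the batch function would need to group keys by tenant and use the matching client for each group.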
The main trade-off here for these solutions is whether you are willing to:
Deal with eventual consistency yourself when it comes to joins over multiple databases (an aggregate that comes from a Lambda will be an eventually consistent aggregate; that often doesn’t matter, but sometimes it does: think of the total financial balance of an account that might be the input for a security rule).
Deal with some extra complexity to set it up.
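The eventual-consistency point can be shown in a few lines of plain JavaScript (all names are hypothetical; the in-memory variables stand in for a tenant database and an admin database): the aggregate a lambda publishes lags behind the source events, so a security rule reading it can act on a stale value.

```javascript
const events = [];          // source of truth, in the tenant database
let publishedBalance = 0;   // aggregate published to the admin database

function recordDeposit(amount) {
  events.push({ type: "deposit", amount });
}

// The "lambda": recomputes and publishes the aggregate after the fact.
function publishAggregate() {
  publishedBalance = events.reduce((sum, e) => sum + e.amount, 0);
}

// Security rule reads the published aggregate, not the live events.
function canWithdraw(amount) {
  return publishedBalance >= amount;
}

recordDeposit(100);
// Before the lambda runs, the rule sees a stale balance of 0:
const staleDecision = canWithdraw(50);
publishAggregate();
// Once the aggregate catches up, the same check passes:
const freshDecision = canWithdraw(50);
```

Whether that window matters depends on the use case; for a display total it is usually fine, for a balance feeding an authorization decision it may not be.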
If that’s the case, then these solutions are fine and can be very elegant. I’m afraid I can’t give any further advice, since these choices require a deep knowledge of your specific application.