Query Response Cache

It would be really nice if there were an easy way to cache responses to queries.

For example, if we have a collection of rooms that is not frequently updated, it would be helpful if the response could be cached so we don’t have to incur read/lookup/compute operations every time we want to fetch a list of the rooms and their names.

Since there are no webhooks, we cannot use any cache mechanism/gateway over Fauna, as we don’t have a way of knowing when the cache should be invalidated.

One potential approach would be to add a TTL property on mutations/functions, so we could at least cache responses for an arbitrary duration. But that's far from optimal. 🙂
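To illustrate, the TTL idea can be approximated in application code today without any server-side support. This is just a sketch: `fetchRooms` is a hypothetical stand-in for the actual Fauna query, and the duration is arbitrary.

```javascript
// Minimal TTL response cache (sketch). `fetchFn` is any async function
// that hits the database; its result is reused until the TTL expires.
function makeTtlCache(fetchFn, ttlMs) {
  let value = null;
  let expiresAt = 0;
  return async function get() {
    const now = Date.now();
    if (value !== null && now < expiresAt) {
      return value; // serve the cached response, no DB read
    }
    value = await fetchFn(); // cache miss: hit the database once
    expiresAt = now + ttlMs;
    return value;
  };
}

// Usage (fetchRooms is hypothetical):
// const getRooms = makeTtlCache(fetchRooms, 60_000);
// const rooms = await getRooms(); // at most one DB read per minute
```

The obvious drawback is the one mentioned above: the cache can be stale for up to the full TTL, because nothing tells us when the data actually changed.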

Hi @benevr! You should check out Fauna’s temporality features.

It’s possible to poll the collection for only those documents with history after a certain time. This would let you poll every n seconds/minutes for created/updated/deleted documents without having to scan the whole collection.
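That incremental polling loop might look roughly like this. `fetchChangesSince(ts)` is a hypothetical wrapper around the temporality query (the real query would use Fauna's events/history features); the sketch only shows the cursor bookkeeping.

```javascript
// Incremental polling sketch: only ask for documents changed since the
// last poll, tracked by a timestamp cursor. `fetchChangesSince` and
// `applyChange` are hypothetical application-supplied functions.
function makePoller(fetchChangesSince, applyChange) {
  let lastSeen = 0; // timestamp of the most recent change we've applied
  return async function pollOnce() {
    const changes = await fetchChangesSince(lastSeen);
    for (const change of changes) {
      applyChange(change); // e.g. update a local cache entry
      if (change.ts > lastSeen) lastSeen = change.ts;
    }
    return changes.length; // number of changes applied this round
  };
}

// Usage: call pollOnce() on a timer, e.g. setInterval(pollOnce, 30_000).
```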

The problem is not the size of the dataset being polled, but the fact that polling is required at all. As long as Fauna bills its users for every hit to the database (reads/writes/compute/etc.), users will want to reduce those hit counts, and one way of doing that is by putting a cache in front of the database.

Hi Benevr,

We have a streaming capability, which allows you to subscribe to a document in Fauna and get notifications when that document is mutated. We plan to expand streaming this year to cover sets of documents and updates to entries in an index. However, it’s possible that you could use the current document streaming capability for your use case. If you are modeling a fixed set of rooms in something like a single hotel property, you could keep the state of all of these rooms as a list in a single document, which you then subscribe to with our streaming capability. A change to any room’s state within that single document would then trigger an event notification to your application, enabling you to refresh your cache.

Of course, I am making a lot of presumptions about your data model, and the pattern of putting multiple objects into a single document should not be taken to an extreme. However, I have worked with folks who successfully modeled their schema in this fashion before. Please take a look at our documentation on Document Streaming and let me know if this works for you. Ping me if you’d like to talk more about future improvements to the streaming feature and I’ll be happy to follow up.
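The stream-driven invalidation described above could be sketched like this. `subscribeToDocument` is a hypothetical adapter over the driver's document-streaming API, not its actual signature; the point is only that a change event drops the cache, and the next read refills it.

```javascript
// Sketch of stream-driven cache invalidation. `fetchFn` queries the
// database; `subscribeToDocument` registers a callback that fires
// whenever the subscribed "rooms" document changes (hypothetical adapter).
function makeStreamInvalidatedCache(fetchFn, subscribeToDocument) {
  let cached = null;
  subscribeToDocument(() => {
    cached = null; // drop the cache on any change event
  });
  return async function get() {
    if (cached === null) {
      cached = await fetchFn(); // refill on demand after invalidation
    }
    return cached;
  };
}
```

Unlike a plain TTL, this never serves stale data longer than the stream's delivery latency, at the cost of holding the stream connection open.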


Your solution will only work when there’s a local cache. If we have 100 users all fetching a list of available rooms, we would have that many hits on the DB for data that is “static” at this point in time. To save those hits, we would need to proxy the DB behind a caching mechanism, which defeats the purpose of having a serverless DB with an integrated API.

And on top of that, we would need a webhook (a feature I’ve also requested) to know when to invalidate our local cache, so we can fetch the updated list of rooms.

Using streams, BTW, is very inefficient, as we’re keeping an open connection (and paying for the uptime) when 95% of the time there’s no activity on the stream.