Just looking for some clarification for the usage figures I’m seeing in the dashboard here.
The main “users” collection in this DB comprises around 220 documents, and I’m very confused as to how I’ve allegedly amassed such a large number of read ops on such a small collection. Checking the JSON file I exported from MongoDB Atlas, that collection is about 52 KB.
Is this from creating indexes? From simply clicking on a collection in the dashboard? Also, I rarely use the shell, but when I do, my queries against those indexes each come back as 1 read op, and the figure for the actual back end doing the querying seems about right.
Is this an expected overhead when setting up a project? I’ve basically just started testing this system out with a small personal project, with a view to moving an absolutely massive project over to it if it pans out, but I’m quite concerned with these figures and would love some clarification before I look at moving on to a paid tier.
Whenever a query involves an index, index entries need to be read. Fauna typically partitions indexes to improve performance, and one read operation is charged per 4 KB read (or portion thereof) per partition.
A collection index (one with no terms or values defined), as well as the internal index used by the Documents function, has 8 partitions. So reading a single page of index entries incurs 8 read operations.
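The billing rule described above can be sketched as a small calculation (this is an illustrative model of the stated rule only; the function name and `page_size` constant are my own, not part of any Fauna API):

```python
import math

def index_read_ops(bytes_read_per_partition: int, partitions: int = 8,
                   page_size: int = 4096) -> int:
    """Estimate read ops for reading one page of index entries:
    one read op per 4 KB (or portion thereof) per partition."""
    per_partition = max(1, math.ceil(bytes_read_per_partition / page_size))
    return per_partition * partitions

# Even a tiny page of entries touches all 8 partitions of a collection index:
print(index_read_ops(200))   # -> 8
```

This is why a single page read against a small collection still shows up as 8 read ops rather than 1.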
Note that the greyed-out metrics in your screenshot are background Dashboard queries.
However, we recently identified a bug involving background queries that ran whenever the Dashboard lost or gained focus. We just published an update that should notably reduce those metrics.
Let us know if you still see read op usage that seems high over the next week or so.
@ewan thanks for the reply, could you please explain what you mean by background queries? I’m not able to find anything searching for this term.
Does simply having the dashboard open cause any of these read ops to occur?
If I click on “indexes” but don’t perform any search, are any “background queries” occurring?
I presume that if I click on “collections”, since it automatically shows the first entry, some read op occurs here, but an index doesn’t appear to do anything unless I actually enter a search term.
This really seems like a ridiculous amount of read ops for such a tiny amount of data, so I’d really like to know what to not click on in the dashboard, because clearly something is causing this to occur.
I’m making a lot of queries from my code, and that shows a tiny number of read ops compared to the dashboard/shell, where I run very few queries, so I’m pretty confused as to what’s happening here. Are these “background queries” somehow related to the queries I’m sending from my code?
We have released a fix that we hope has resolved this. While dashboard interactions do still accrue some read ops, we hope that it is minor. Please let me know if that does not appear to be the case.
“Background queries” refers to the Dashboard’s attempts to show up-to-date views while navigating its interface. The bug that was recently fixed caused a view refresh whenever the Dashboard page in the browser lost or gained focus. That could significantly boost the number of read operations for desktops that have an “autofocus” feature enabled, or if you cycle browser tabs frequently.
I’ve had a browser tab open on the Dashboard for several hours now, and even though I have modified documents in the currently-displayed collection multiple times (via fauna-shell, not the Dashboard), the view has not updated.
You should now only incur Dashboard read ops by interacting directly with the Dashboard, e.g. clicking on the Collections, Indexes, or Functions tabs, clicking on the detailed view for a document, etc.
If you continue to notice unexpected read/write operation counts, do let us know.
If you prefer, you could use fauna-shell to interact manually with your database. It’s a command-line shell for executing queries, and there are no implicit queries (just the ones you write).
I am having the same issue and almost left Fauna after praising the platform for a long time. Please fix this as I already told my team to look for a new database.
I was racking up read ops by the thousands just from checking to see how functions work, or just from creating a key.
Not good at all, as we had spent hours looking for other options before I came to this forum.
The fix that we deployed prevents the dashboard from running queries to update the current view whenever it loses or gains focus. No adjustments have been made to already-counted metrics.
Are you still incurring read ops at the previous rate?
@ewan I’m seeing a similar issue to the one described in this thread; it occurred just yesterday. I decided to spin up a database to follow @databrecht’s Cloudflare Workers tutorial (Getting started with Fauna and Cloudflare Workers). I’ve created a single collection with a single product document. As you can see, I’ve done 6 queries against it; however, the Dashboard is showing 73x that many read ops (438), which is over 98% of the total read ops.
Surely this can’t be correct. If customers are billed for Dashboard activity, then a significant percentage of billing will be just checking to see the actual DB metrics. I understand being billed for shell activity as those are user actions against the DB. But we shouldn’t be billed for the dashboard just to see the actual usage metrics.
Appreciate any help we can get to look further into this.
When you view the Dashboard, to see the metrics, the Dashboard has to query to find the list of databases associated with your account, as you may have created or deleted databases since the last time the list of databases was queried. Whenever you click on a database, queries to determine the current Collections, Indexes, Functions, etc. are run, so that you can see the “current” state of your database.
Whether the Dashboard activity is a significant percentage of the billable metrics depends entirely on how much you use the Dashboard.
Your screenshot shows 400+ read operations accrued by Dashboard activity, whereas your database shows 6. The number of read operations will vary with the number of databases in your account, and there is a result cache that avoids unnecessary read operations over short time spans. So it is not possible to provide a formula that gives the number of read operations per metrics-page view.
However, I can tell you that a single page view of the metrics does not require hundreds of read operations. Additional Dashboard activity would be required to achieve that total.
However, the bug that was fixed caused the metrics to be reloaded every time that the Dashboard lost or regained focus. That did incur thousands of needless read ops.
My own account currently contains 13 databases, and I’ve been using the Dashboard recently to reproduce a number of user-reported issues. My Dashboard totals look like this:
My activity involved a few dozen views of the metrics, plus multiple iterations of modifying documents, indexes, and roles, plus importing GraphQL schemas and running mutations. Not a lot of activity, but the Dashboard metrics that I have accrued seem completely reasonable to me.
Thanks @ewan for the detailed reply and showing your dashboard metrics. This has definitely eased my concerns around dashboard use. Appreciate your prompt response!
I imagine that once my production database grows to thousands of documents, I could easily deplete 1 million read ops per month just by using the dashboard, and that’s just me. What would happen if you have a team? And what if they all work full-time? I can see 10 million read ops per month being a piece of cake to accumulate.
We need to continue charging for webshell and playground usage, because there are ways that developers could take advantage if we didn’t. We do plan to stop charging for dashboard interactions like clicking around and listing objects (databases, collections, documents, etc), but it hasn’t been as high a priority as some other projects because the number of read ops accrued by the dashboard should be negligible compared to the free plan limits.
I imagine that once my production database gets to thousands of documents, I can easily deplete 1 million read-ops per month just by using the dashboard… I can see that 10 million read-ops per month would be a piece of cake to accumulate.
The math doesn’t really work that way, and I’ve never seen this happen. If someone did manage to legitimately rack up that much usage from normal dashboard interactions, that would be a very serious problem and we would do whatever we could to remedy the situation immediately.
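As a back-of-the-envelope check (every figure here is an illustrative assumption, not an official billing number), even heavy dashboard use by a whole team falls well short of those totals:

```python
# Rough estimate of monthly dashboard read ops under assumed usage.
READ_OPS_PER_VIEW = 10          # assumed cost per page view, e.g. one
                                # 8-partition index page plus a few reads
views_per_person_per_day = 200  # deliberately heavy dashboard use
team_size = 5
work_days_per_month = 22

monthly = (READ_OPS_PER_VIEW * views_per_person_per_day
           * team_size * work_days_per_month)
print(monthly)  # -> 220000
```

Even with these deliberately pessimistic inputs, the estimate lands around 220k read ops per month, a fraction of a 1M read-op allowance.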
That said, you might be experiencing an issue that we cannot reproduce. We would like to get to the bottom of this, so please email me to set up a call: product at fauna dot com