TL;DR: This is a known issue with the way that Fauna handles arbitrary sets. See: Known issues | Fauna Documentation
A more complete answer: As a shared resource, Fauna avoids full table scans wherever possible: a full scan could block subsequent queries until it completes, and could consume far more resources than necessary. That’s why Fauna’s indexes store their entries in sorted order. When you query a range of values covered by an index, the result can be computed quickly and efficiently.
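For example, a sorted index makes a range read cheap because only the entries between the bounds are touched. A sketch in FQL (the index name `products_by_price` and the bounds are illustrative, not from your question):

```fql
// Hypothetical index whose values lead with a price field.
// Range() reads only the sorted entries between the bounds.
Paginate(
  Range(Match(Index("products_by_price")), 10, 100)
)
```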
When you combine `Join` and/or `Intersection` with indexes, the resulting set is, effectively, unbounded.
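To illustrate what “unbounded” means here, consider a `Join` like the following (the index names and the author reference are hypothetical): each entry in the source set is mapped through a second index, so the size of the result cannot be known without actually reading the underlying entries.

```fql
// For every post by the author, look up its comments via a
// second index. The result set's size is unknown up front.
Paginate(
  Join(
    Match(Index("posts_by_author"), Ref(Collection("authors"), "1")),
    Index("comments_by_post")
  )
)
```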
In these scenarios, Fauna uses an estimator to determine how many reads to perform for set operations (rather than performing a full table scan). The estimator uses the current `Paginate` page size, plus a small multiplier, to inform its read prediction. When the estimator’s prediction doesn’t reach far enough into the source set to gather all results, that’s when entries appear to be “missing”.
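Concretely, with the default page size of 64, a sparse match can produce a short page. The query below is a hypothetical example of the symptom (index names are illustrative):

```fql
// With the default page size (64), the estimator reads only a
// bounded prefix of each source set. If matching entries are
// sparse, the returned page can be short even though more
// matches exist further along in the indexes.
Paginate(
  Intersection(
    Match(Index("users_by_country"), "US"),
    Match(Index("users_by_status"), "active")
  )
)
```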
The simplest workaround is to increase the page size in your `Paginate` call. Add `{ size: 1000 }` as the last parameter to `Paginate` to see if that makes a difference. Note that the maximum page size is 100,000. If that’s not large enough to produce the correct result, you might need to adjust your strategy: write intermediate results to a collection, and then operate on those.
We are working toward a solution to this problem for a future release.