On what size index should I expect Range to time out?

The docs for Range give this scary warning: “If you use Range on a collection containing many documents, there is a chance that evaluating the range could exceed the transaction time limit of 30 seconds.”

What order of magnitude is “many” in this context?

I’m trying to decide whether it’s alright to filter docs in an index by their ts field using Range, or whether another approach would avoid timeouts as the number of docs in the index grows.

I’d say the number of docs that Range would be filtering in my app would grow by no more than 100,000 per year.

At what point would I expect to have timeout problems while filtering them with Range? What would be some steps to mitigate that? Should I use a different function like Filter to get all documents after a given timestamp?
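For reference, here’s roughly the kind of query I have in mind. The docs_by_ts index (with values [ts, ref]) and the env var are made up for illustration; this is just a sketch of the Range-on-ts approach, not production code:

```typescript
import { Client, query as q } from 'faunadb'

const client = new Client({ secret: process.env.FAUNA_SECRET! })

// Hypothetical index "docs_by_ts" whose values are [ts, ref], sorted by ts.
// Range slices the index by its value tuple, so only entries whose ts falls
// within the bounds are read.
const docsSince = (cutoff: string) =>
  client.query(
    q.Paginate(
      q.Range(
        q.Match(q.Index('docs_by_ts')),
        [q.ToMicros(q.Time(cutoff))], // lower bound: ts >= cutoff (ts is microseconds)
        []                            // empty array = no upper bound
      ),
      { size: 100 }
    )
  )
```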

I think I’d honestly need to do some testing to get exact numbers, but one thing to be clear on: whether Range times out or not depends on the size of the slice bounded by the range, not on the size of the collection being sliced. That is to say: picking 100 records with Range should be fast whether those records sit in a partition of 1,000 records or 10,000,000 records.
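For illustration, a rough sketch (reusing the same hypothetical docs_by_ts index as above): even if the range bounds match a very large slice, paginating with a fixed page size keeps each individual query’s work bounded, and you resume from the page’s after cursor.

```typescript
import { Client, query as q } from 'faunadb'

const client = new Client({ secret: process.env.FAUNA_SECRET! })

// Walk a (possibly large) range slice in fixed-size pages. Each query only
// reads one page, so no single transaction has to touch the whole slice.
async function refsSince(cutoffMicros: number): Promise<any[]> {
  const refs: any[] = []
  let after: any // pagination cursor returned by the previous page

  do {
    const page = (await client.query(
      q.Paginate(
        q.Range(q.Match(q.Index('docs_by_ts')), [cutoffMicros], []),
        after ? { size: 100, after } : { size: 100 }
      )
    )) as { data: any[]; after?: any[] }

    refs.push(...page.data)
    after = page.after
  } while (after)

  return refs
}
```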


Thanks for that feedback @liamdanielduffy, I’ll file a ticket to clarify that in the docs since that seems to be misleading at the moment.
