Hi, I tried to run an FQL query like logs.all() but I got this error:
429: limit_exceeded
Rate limit exceeded
Any advice? Does this rate limit reset the next day, or is there a set time before it unlocks?
Also, my Python query below fails even though the time range is short and the data set is small enough that I should be able to fetch it:
query = fql(
    '''
    logs.where(.ts >= Time("2024-09-01T00:00:00Z") && .ts <= Time("2024-09-02T00:00:00Z"))
        .pageSize(16000)
    '''
)
Hi @Gilberto_Tunon! Throughput limits are based on operations per second. Trying to read a single page of 16,000 documents costs at least 16,000 Read Ops, which is well over the throughput limits for most plans. See the docs for more information about the throughput limits for your plan: Plan details - Fauna Docs
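As a rough back-of-the-envelope check (assuming 1 Read Op per document, which only holds for small documents — Fauna meters roughly one Read Op per 4 KB read, rounded up):

```python
# Hypothetical numbers for illustration: the actual per-document cost
# depends on document size (roughly 1 Read Op per 4 KB read).
DOCS = 16_000
READ_OPS_PER_DOC = 1  # assumption: small documents

# A single page of 16000 documents concentrates all the Read Ops
# into one request, blowing past a per-second throughput limit:
single_page_cost = DOCS * READ_OPS_PER_DOC

# Paging 100 at a time spreads the same total cost over many requests:
page_size = 100
requests_needed = -(-DOCS // page_size)  # ceiling division

print(single_page_cost)  # 16000 Read Ops in one request
print(requests_needed)   # 160 requests of ~100 Read Ops each
```

The total work is the same either way; paging just spreads it out so no single second exceeds your plan's limit.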
If logs.all() with the default page size of 16 is also causing 429 throttling errors, then your documents may be very large, costing many Read Ops per document fetched, or you could be issuing too many queries concurrently.
You can use an index to improve the performance of your query within a time range. Our documentation has more information here: Work with dates and times - Fauna Docs
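For example, a covering index on the timestamp field lets Fauna range-scan matching entries instead of filtering every document. A rough sketch in Fauna's schema language — the index and field names here are illustrative, so check the docs above for the exact syntax:

```
collection logs {
  index byTs {
    values [.ts]
  }
}
```

The query could then call the index with a range, e.g. logs.byTs({ from: Time("2024-09-01T00:00:00Z"), to: Time("2024-09-02T00:00:00Z") }), reading only the entries inside the window.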
You will still likely need to reduce the page size of your query to spread out the workload and avoid 429 errors. The Python driver has a Client.paginate() method to assist with loading multiple pages of data: Python client driver - Fauna Docs
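The pagination pattern looks roughly like the sketch below. Because the real driver needs a database and secret, a stub stands in for fauna.client.Client here; with the actual driver you would build the query with fql(...) and iterate client.paginate(query) the same way.

```python
# Stub standing in for fauna.client.Client so the pattern runs offline.
# The real driver's Client.paginate() follows the page cursor for you;
# this stub mimics that by slicing a local list into pages.
class StubClient:
    def __init__(self, docs, page_size):
        self.docs = docs
        self.page_size = page_size

    def paginate(self, query):
        # Yield one page (a list of documents) per simulated request.
        for i in range(0, len(self.docs), self.page_size):
            yield self.docs[i:i + self.page_size]

client = StubClient(docs=list(range(16_000)), page_size=100)
query = "logs.where(...).pageSize(100)"  # placeholder for the fql(...) query

pages = 0
fetched = []
for page in client.paginate(query):  # with the real driver: client.paginate(fql_query)
    fetched.extend(page)
    pages += 1

print(pages)         # 160 requests of 100 docs instead of one 16000-doc read
print(len(fetched))  # 16000 documents total
```

Each iteration of the loop is one request, so the Read Op cost is spread across many seconds instead of landing in one.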