Intermittently getting rate limited applying FSL to a new child database

Cross-posting here from Discord, as I didn't get an answer there.

I've started to see intermittent rate limits when applying FSL to a new child database. I'm not sure how to work around this: the push is done through fauna-shell, and I'm not aware of any way to partially apply the schema to avoid the rate limit. Is this expected behavior?

$ fauna schema diff --no-input --secret $FAUNA_SECRET_KEY --dir $CI_PROJECT_DIR/fauna_schema
Differences from the remote schema to the local schema:
* Adding collection `Account` to account.fsl:3:1:
  * Indexes:
  + add index `by_account_id`
  + add index `by_jurisdiction_id`
  * Constraints:
  + add unique constraint on [.account_id]
  + add unique constraint on [.jurisdiction_id]
* Adding collection `Customer` to customer.fsl:1:1:
  * Indexes:
  + add index `by_customer_id`
  + add index `by_alarm_user_id`
  * Constraints:
  + add unique constraint on [.customer_id]
  + add unique constraint on [.alarm_user_id]
* Adding collection `FalseAlarmFeeScheduleItem` to false_alarm_fee_schedule.fsl:1:1:
  * Indexes:
  + add index `by_account_and_sequence_number_and_alarm_type_sorted_by_effective_at_ascending`
  * Constraints:
  + add check constraint `sequence_number_is_positive`
  + add check constraint `fee_is_positive_or_zero`
  + add check constraint `retired_after_effective`
  + add check constraint `unique_within_currently_effective`
* Adding collection `FalseAlarmSchedulePeriod` to false_alarm_schedule_period.fsl:1:1:
  * Indexes:
  + add index `by_account_and_alarm_type_sorted_by_effective_at_ascending`
  * Constraints:
  + add check constraint `interval_is_positive`
  + add check constraint `retired_after_effective`
  + add check constraint `unique_within_currently_effective`
* Adding collection `LateFeeScheduleItem` to late_fee_schedule.fsl:1:1:
  * Indexes:
  + add index `by_account_sorted_by_effective_at_ascending`
  * Constraints:
  + add check constraint `days_until_due_is_positive`
  + add check constraint `late_fee_is_positive`
  + add check constraint `interest_configuration_percentage_valid`
  + add check constraint `compounding_valid`
  + add check constraint `retired_after_effective`
  + add check constraint `unique_within_currently_effective`
* Adding collection `PermitFeeScheduleItem` to permit_fee_schedule.fsl:1:1:
  * Indexes:
  + add index `by_account_and_permit_type_sorted_by_effective_at_ascending`
  * Constraints:
  + add check constraint `initial_fee_is_positive`
  + add check constraint `renewal_fee_is_positive`
  + add check constraint `retired_after_effective`
  + add check constraint `unique_within_currently_effective`
* Adding collection `fluctuate_migrations` to legacy.fsl:3:1:
  * Indexes:
  + add index `by_name_and_namespace`
  * Constraints:
  + add unique constraint on [.name, .namespace]
* Adding collection `unique_keys_by_token` to legacy.fsl:12:1:
  * Indexes:
  + add index `by_token`
  + add index `by_key`
  * Constraints:
  + add unique constraint on [.token]
  + add unique constraint on [.key]
* Adding function `delete_account_association` to account.fsl:96:1
* Adding function `delete_customer_association` to customer.fsl:81:1
* Adding function `get_account_association_by_account` to account.fsl:92:1
* Adding function `get_account_association_by_jurisdiction` to account.fsl:88:1
* Adding function `get_customer_association_by_alarm_user_id` to customer.fsl:22:1
* Adding function `get_customer_association_by_customer_id` to customer.fsl:26:1
* Adding function `record_branding_update_time` to account.fsl:147:1
* Adding function `record_customer_update_time` to customer.fsl:132:1
* Adding function `should_update_branding` to account.fsl:169:1
* Adding function `should_update_customer` to customer.fsl:154:1
* Adding function `store_account_association` to account.fsl:39:1
* Adding function `store_customer_association` to customer.fsl:30:1
$ if fauna schema push --no-input --secret $FAUNA_SECRET_KEY --dir $CI_PROJECT_DIR/fauna_schema; then echo "Schema pushed successfully."; else echo "Could not push new schema. Abandoning any staged schema."; fauna schema abandon --no-input --secret $FAUNA_SECRET_KEY --dir $CI_PROJECT_DIR/fauna_schema; exit 1; fi
 ›   Error: Rate limit for read exceeded
Could not push new schema. Abandoning any staged schema.
 ›   Error: There is no staged schema to abandon
Cleaning up project directory and file based variables 00:00
ERROR: Job failed: exit code 1

Hi @rcausey. Do you think it would be valuable to have an option to upload only certain files? Something like a --files option. We are working on a new, major release of the CLI, and this looks like something we could include if it would be helpful to folks. It's easy to imagine how partial file uploads could lead to unexpected behavior depending on how you structure things, so any feedback from you or others on how you'd expect something like this to work is appreciated.

Potential new command option

fauna schema push --dir $CI_PROJECT_DIR/fauna_schema --files customer.fsl account.fsl

Workaround with Schema API

In the meantime, you can use the schema API directly to apply specific files:

Docs: Fauna Core API (Fauna Docs)

curl -X POST "https://db.fauna.com/schema/1/update?staged=false" \
  -H "Authorization: Bearer <FAUNA_SECRET>" \
  -F "collections.fsl=@./fauna_schema/account.fsl"
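To apply several files in one request, you could add one multipart form part per file. A rough sketch (the build_parts helper is hypothetical, and the assumption that the part name becomes the schema file name is based on the single-file example above):

```shell
#!/bin/sh
# Hypothetical helper: emit one "-F name=@path" curl argument pair per FSL
# file, using the file's base name as the multipart part name (assumption).
build_parts() {
  for f in "$@"; do
    printf -- '-F %s=@%s\n' "$(basename "$f")" "$f"
  done
}

# Usage sketch (not run here; requires a real secret and schema directory):
# curl -X POST "https://db.fauna.com/schema/1/update?staged=false" \
#   -H "Authorization: Bearer $FAUNA_SECRET" \
#   $(build_parts ./fauna_schema/account.fsl ./fauna_schema/customer.fsl)
```

Note that with -F, curl sets the multipart/form-data Content-Type header (including the boundary) itself, so you don't need to pass it explicitly.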

No, I don't think a --files option would be useful for avoiding rate limits, though it may be useful for other use cases.

I think it's surprising behavior to get rate limited when applying schema to a database; I'd normally consider that kind of operation separate from regular DB query operations, especially coming from other RDBMSs, which treat schema updates differently from ordinary queries.

I think it would be better for fauna-shell to handle rate-limit errors transparently, or for the rate-limiting logic to be reworked so that schema can be staged or applied without hitting a rate limit. Either way, the end result should be that a single fauna schema push stages all schema differences.
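In the meantime, the best I can do on my side is retry the whole push from CI with backoff. A sketch of what I mean (retry_push is a hypothetical wrapper; it assumes the CLI exits non-zero on a rate-limit error, which matches the job log above):

```shell
#!/bin/sh
# Hypothetical CI wrapper: run a command, retrying with exponential backoff
# on failure. Assumes the wrapped command (e.g. "fauna schema push") exits
# non-zero when it hits a rate limit.
retry_push() {
  attempts=${1:-5}   # max number of attempts
  shift
  delay=2            # initial backoff in seconds
  n=1
  while true; do
    if "$@"; then
      return 0
    fi
    if [ "$n" -ge "$attempts" ]; then
      echo "Giving up after $n attempts" >&2
      return 1
    fi
    echo "Attempt $n failed; retrying in ${delay}s" >&2
    sleep "$delay"
    delay=$((delay * 2))
    n=$((n + 1))
  done
}

# Usage sketch:
# retry_push 5 fauna schema push --no-input --secret "$FAUNA_SECRET_KEY" \
#   --dir "$CI_PROJECT_DIR/fauna_schema"
```

This is only a band-aid, though; it still burns read ops on each retry, which is why I'd prefer the CLI or the rate limiter to handle this.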