SQL RDBMS vs FaunaDB - Some Nice To Haves As An Engineer

I’ve been tinkering with FaunaDB for a couple of days now. I’m considering it for an upcoming serverless project. Looking at the FaunaDB offering, kudos on what you guys have done so far. The GraphQL interface is great.

I wanted to contrast some must-haves in my workflow that I think could help with developer adoption.

My current workflow:

  • Traditional SQL, these days typically Postgres
  • Database migrations for quick schema development and production schema updates - I use “knex.js”
  • GraphQL at times, but not necessarily

Looking at FaunaDB:

  • FQL
    • I read somewhere online that SQL support was a priority, but just a few moments ago I read that interest in a recent survey was low. I think that’s a shame, because a familiar interface to the database, even a subset of SQL, would help with the learning curve that is inherent in FQL.
  • Database Migration
    • It looks like there is an unofficial/unsupported migration tool similar to “knex.js” called “faunadb-migrate”. It helped me quite a bit while tinkering, as with “knex.js” I’m used to making database changes and importing seed data as I develop, and then, when it’s time to go to production, simply migrating the latest schema into the production database. I think “faunadb-migrate” should be a supported, full-fledged tool! Right now it has a number of bugs that need to be fixed, but if it were brought onto the product radar it could become a super polished tool that every engineer would use.
    • When using “faunadb-migrate”, if I rollback and then try to migrate forward immediately, I get the error below and have to wait the 60 seconds before migrating forward. If I could rollback and migrate as I need to in development, it would prevent me from burning 60 seconds each time I want to try a different tweak:
      "failures": [
        {
          "field": ["name"],
          "code": "duplicate value",
          "description": "Value is cached. Please wait at least 60 seconds after creating or renaming a collection or index before reusing its name."
        }
      ]
      
  • GraphQL
    • It would be great if GraphQL schema upload could be incorporated into “faunadb-migrate”. Since only one schema is in force at a time, when migrating, only the latest GraphQL schema can be uploaded to the GraphQL endpoint on FaunaDB.
    • It would be nice if we could take the schema generated by FaunaDB, modify it, and upload it back. Essentially, if we could use the “Page” types (that are generated for each of our collections, etc.) as part of the GraphQL schema we upload, we might have greater control.
    • It would be nice if we could control or remove autogenerated GraphQL queries. Some of them are not useful, especially when we have a mildly complicated schema with references to other collections, etc.
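To make the “Page” point concrete, this is roughly the shape involved (simplified and written from memory, so the exact generated names may differ):

```graphql
# Schema we upload (example type):
type User {
  handle: String!
}

# Roughly what FaunaDB generates around it, which we currently cannot edit:
type UserPage {
  data: [User]!
  after: String
  before: String
}

type Query {
  findUserByID(id: ID!): User
}
```

If we could reference or override types like UserPage in the schema we upload, we’d have that greater control.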

I was encouraged to post by whoever is manning the @fauna twitter account so you can thank them for this brain dump of thoughts!

Thanks,
Mashaal

We are getting that feedback quite a lot lately. At this point, there are no immediate plans to support SQL though. If we keep getting that feedback from many clients, maybe that’ll change in the future.

It has come up already a few times as well. This is definitely something we need, or at least guidance on how to do migrations. We do currently have FDM, but it’s in Java and doesn’t fit into our users’ workflows. Strong support for migrations/data management is definitely on our minds.

Fyi, Knex looks cool - I didn’t know it. It indeed cannot get around the caching limitations in FaunaDB. If the whole database has to be reset, I personally choose to set it up completely from scratch, which is faster than waiting 60 seconds. Of course, my scripts do not use Knex, and Knex might not have that option. If you are only migrating back a few steps, then there is currently no other option than to wait 60 seconds. Our engineering team does know this is really cumbersome, though, and there might be solutions for it in the future.

Great feedback, I think some of them are already logged as feature requests but I think there might be one or two new ideas here. I’ll go through the tickets and make sure to add them if they don’t exist yet. Thanks!

Concerning the GraphQL schema: it might (too early to tell, at the moment it’s just an idea) be possible that GraphQL and FQL evolve towards sharing a common schema in the future, which would work towards the goal of migrating both FQL and GraphQL at the same time :slight_smile:.

FDM looks like a great option for point-in-time backups of a database and a full migration of a database schema from one database to another. So this is fantastic.

The way Knex.js fits into my workflow, during development and ongoing development, is the ability to make incremental modifications to a database schema in a development database, and then to apply those incremental changes to a production database. faunadb-migrate in your GitHub repo is a great start, but it is “unofficial” and so “not supported” by you folks. Essentially, a record is kept of which migration files have been applied to a database, so in a production environment only the new parts of the migration are applied. It looks like faunadb-migrate might have been inspired by knex.js, as it works in a very similar way!

I’m typically just using the rollback and migrate commands, so rollback removes everything from the database and migrate applies all the changes back. If there were some way to turn off that cache and message per database in the dashboard for a development database, it would make using the tool sooooo much nicer.

Awesome, no worries, and my pleasure.

I’m not sure if I fully understand what you mean by “sharing a common schema between FQL and GraphQL”. And it might be too early for you to explain it to me as you said! However, my initial thoughts are:

  • Take a look at the way that knex and faunadb-migrate structure migrations into multiple files. Because SQL and FQL have language constructs for creating and deleting things like tables/collections, you’d want FQL migrations to be applied incrementally. For instance, in an earlier migration you may create a table/collection, and in a later migration you might remove it.
  • But since a GraphQL schema is essentially a snapshot of the entire schema at a point in time, I don’t know if it’d make sense to have FQL mixed in (except maybe for resolvers)?
  • I’d imagine that leaving them separate makes the most sense, because there may be some edge cases that make combining them a problem. But I think having a GraphQL overwrite be part of an FQL migration using faunadb-migrate would fit really well into the developer workflow.

I tried to upload some files to illustrate a simple, contrived migration example, but the system only allows images to be uploaded! Oh well. :man_shrugging:t2:

No worries, way too early yes :slight_smile:

I typically just nuke the database and create it again to get around the cache in the case where everything has to be reset. It might not work for your flow, but here is how it works:

  • Create an admin key.
  • Use that admin key to create a child database. We create child databases since deleting the top-level database would also invalidate our admin key each time, which is cumbersome.
  • On reset: delete the child database, recreate it, and apply all manipulations.

I do something like that in Fwitter: https://css-tricks.com/rethinking-twitter-as-a-serverless-app/
That setup script is, however, written for my convenience, not as a guideline. I’m thinking of writing a migration setup repository, but there are many other things to write first :slight_smile:

That obviously doesn’t work if you ever need real up/down migrations for a production system, but I would argue that you don’t really bump into cache issues that often there, since you will rarely do a down and then an up again within 60 seconds.