How can I create a data pipeline into a warehouse?

Hi team,

I’m looking to pipe data from Fauna into a data warehouse, or even a Postgres instance, for ultimate consumption via a BI tool, e.g. Mode or Looker. There don’t seem to be many integrations out there, such as Fivetran connectors, so I’m wondering what people have been doing to accomplish this.


Hi @akang10 and welcome!

Official tools

There are indeed no official tools or direct connectors to Fauna, though we are thinking of ways we can support folks who need these types of tools:


We recently created an integration for Retool, which could be used to proxy Fauna requests to other Retool integrations.

Other ideas

The idea definitely comes up, so you are not alone. Here is a possibly related discussion.

To you and others, please share if you have more ideas, or if you can share more about the use case you are trying to solve. Details like that can help inform our product roadmap for the future.


Hi @akang10,

Many of our users have indeed asked for tooling to help pipe data from Fauna into data warehouses and OLAP datastores. We understand how important this is and are planning to address these use cases in our upcoming roadmap. Our plan is to provide a data export capability, following the release of our Backup and Restore feature coming this winter. As we currently plan it, this export capability would allow you to export an entire database snapshot to cloud storage, such as AWS S3. You would then be able to seed your OLAP instance from this exported snapshot. We also plan to expand the capabilities of our Streaming feature so that you can keep your downstream datastore up to date with changes occurring in your Fauna databases.
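In the meantime, seeding a warehouse from exported documents generally means flattening Fauna's document shape into rows. Here is a minimal sketch of that transformation, assuming documents carry Fauna's usual `ref`, `ts`, and `data` fields; the column-naming convention (`data_` prefix) is an illustration, not an official export format:

```python
def flatten_document(doc):
    """Flatten a Fauna-style document into a row dict for a warehouse table.

    Fauna documents carry a `ref` (identity), a `ts` (transaction
    timestamp in microseconds), and a user `data` payload.
    """
    row = {
        "id": doc["ref"]["id"],
        "ts": doc["ts"],
    }
    # Prefix payload keys so they map predictably onto warehouse columns.
    for key, value in doc["data"].items():
        row[f"data_{key}"] = value
    return row


if __name__ == "__main__":
    sample = {
        "ref": {"id": "101", "collection": "orders"},
        "ts": 1629300000000000,
        "data": {"customer": "acme", "total": 42.5},
    }
    print(flatten_document(sample))
    # {'id': '101', 'ts': 1629300000000000, 'data_customer': 'acme', 'data_total': 42.5}
```

Rows in this shape can then be bulk-loaded into Postgres or an OLAP store with whatever loader your pipeline already uses.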

Further out in the roadmap, we are planning to add a SQL query interface to Fauna that would allow you to connect BI tools such as Qlik, MicroStrategy, and Tableau directly to Fauna. In general, Fauna is better suited for operational workloads, so using these BI tools through the SQL query interface isn’t intended to be a full replacement for a data warehouse. However, connecting BI tools through the SQL interface gives you the option to serve some reporting or aggregation workloads without the need to move data.

I am eager to answer any questions or comments you have on these forthcoming features. So, please feel free to ping me anytime.


Thanks Bryan! Sounds like a great feature. Approximately when is the first release of the export functionality planned?

Hi @akang10, I expect we’ll be in beta with import/export early next calendar year.


I have been looking into this lately and love to hear that we will have snapshots coming. Do you know if this will be built into the Fauna tooling, or will we need to add some coding/connections of our own into our infrastructure?

Hi Jonathan, in the first release of Backup and Restore, snapshots will be generated on a scheduled, daily basis. You need only to enable snapshot generation for a given database via the UI. We’re looking to add an “on-demand” snapshot generation capability which you can trigger via an API request. This would serve users who are interested in creating snapshots in sync with a commit to their code repository, and who intend to use the snapshot for automated testing in their CI/CD pipelines, or to seed data for a development environment. I’d be interested to know, is this how you might like to use this feature?

Hey @Bryan_Fauna, any update on the timeline for Backup & Restore and/or import/export?

Hi @akang10, work on Backup and Restore is in progress. We expect to go beta shortly with the v1 functionality, which includes snapshot generation, storage, and restore. Work on data import/export will follow GA of the v1 release. We don’t have a firm ETA just yet, but we’d like to have import/export later in the year.