Seeded Dev Container for E2E Tests. Best Practices?

Hey fauna community,

we’re currently trying to figure out how we can develop E2E tests for our app that uses a Fauna database. For the tests to run in a GitHub CI process we need a database with some basic data seeded. I’d like to know if there is someone in the community who could give us some guidance on what the best approach could be. Basically we have two options:

  1. Create custom image with data already seeded (doesn’t seem to work)
  2. Use the base image fauna/faunadb and seed data on every test run (slow)

Approach 1: Create a custom image with data already seeded

This approach would be nice but does not seem to work. These were the (Dockerfile) steps for building a custom image; seed.sh would wait for Fauna to be started, create the database, upload the schema, and create indexes, functions and documents.
Problem here: the build never finishes because there is no way to shut faunadb down again.

FROM fauna/faunadb:latest

RUN faunadb

COPY ./schema.gql schema.gql
COPY ./seed.sh seed.sh
RUN chmod +x seed.sh

RUN /bin/bash ./seed.sh
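
For illustration, a seed.sh along those lines could look roughly like this (a hypothetical sketch, not our actual script; the fauna CLI and curl would also have to be installed into the image first, since the fauna/faunadb base image does not ship them):

#!/bin/bash
# Hypothetical sketch of seed.sh
set -e

# Wait until the local Fauna endpoint responds.
until curl -s -o /dev/null http://localhost:8443/ping; do
  echo "waiting for faunadb..."
  sleep 1
done

fauna add-endpoint http://localhost:8443/ --alias localhost --key secret
fauna create-database MY_DATA --endpoint localhost

# ...upload the GraphQL schema and create indexes, functions and documents here.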

Approach 2: Seed the base container on every test run

The steps on a build agent would be (see the sketch after this list):

  • start fauna/faunadb container with ports mapped to 8443 & 8084
  • install fauna-cli
  • create endpoint
  • run seeding script on build agent (custom fauna cli installation)
    • Question: which secret to use? ‘secret’?
  • run E2E tests
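
As a rough shell sketch of those steps (hypothetical; script names like seed.sh and the test command are placeholders for whatever your project uses):

# Hypothetical build-agent sketch
docker run -d --rm --name faunadb -p 8443:8443 -p 8084:8084 fauna/faunadb

npm install -g fauna-shell
fauna add-endpoint http://localhost:8443/ --alias localhost --key secret

# Seed: create the database, upload the schema, create indexes/functions/documents.
./seed.sh

# Run the E2E tests against the seeded container.
npm run test:e2e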

The image would only differ in that the seed script is run together with the fauna startup command:

FROM fauna/faunadb:latest

COPY ./schema.gql schema.gql
COPY ./seed.sh seed.sh
RUN chmod +x seed.sh

CMD /bin/bash ./seed.sh & faunadb

Problem here is that the container is always seeded for each test run. That makes the runs unnecessarily slow.

Problems with Approach 2

If you’ve read this far, then thank you! 🙂

We’re currently working on Approach 2, which does not work either (the schema upload is failing), but we just wanted to double-check with the community whether we missed something.

This is the error

443
UPLOADING SCHEMA (mode=override): ./schema.gql
RESPONSE:
Invalid database secret.

Expected behaviour

The schema can be uploaded.

This is what we did:

docker run -d --rm --name faunadb -p 8443:8443 -p 8084:8084 fauna/faunadb

fauna add-endpoint http://localhost:8443/ --alias localhost-e2e --key secret

fauna create-database MY_DATA --endpoint=localhost
fauna upload-graphql-schema ./schema.gql --mode override --secret=secret --graphqlHost localhost --graphqlPort=8084 --scheme=http --timeout 5 --domain localhost --port 8443

Hi @doc, it looks like you are trying to upload the GraphQL schema to the root database, which is not allowed. You will need to create a key for your MY_DATA database, or you should be able to use a “scoped” key by providing the following as the secret:

secret:MY_DATA:admin
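
For example, keeping the rest of your command the same (same flags as before, just the scoped secret swapped in):

fauna upload-graphql-schema ./schema.gql --mode override --secret secret:MY_DATA:admin --graphqlHost localhost --graphqlPort 8084 --scheme http --timeout 5 --domain localhost --port 8443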

That said, doing my own test, this failed for me too with Invalid database secret. We are investigating to see if there is an issue.

This is actually a known limitation with using GraphQL with the docker container: you cannot use "secret" as a secret for GraphQL. We have a task to document this behavior.

Workaround

You need to create a key for the new database and supply that key:

> fauna create-database MY_DATA --endpoint localhost

creating database MY_DATA

  created database MY_DATA

  To start a shell with your new database, run:

  fauna shell MY_DATA

  Or, to create an application key for your database, run:

  fauna create-key MY_DATA

> fauna create-key MY_DATA admin --endpoint localhost

creating key for database 'MY_DATA' with role 'admin'

  created key for database 'MY_DATA' with role 'admin'.
  secret: fnAE8YNco_ACAEAO9yWrY_DtOpdWoZZnUfPXmxDR

  To access 'MY_DATA' with this key, create a client using
  the driver library for your language of choice using
  the above secret.
> fauna upload-graphql-schema ./schema.gql --mode override --secret fnAE8YNco_ACAEAO9yWrY_DtOpdWoZZnUfPXmxDR --graphqlHost localhost --graphqlPort 8084 --scheme http --timeout 5 --domain localhost --port 8443

UPLOADING SCHEMA (mode=override): ./schema.gql
RESPONSE:
Schema imported successfully.
Use the following HTTP header to connect to the FaunaDB GraphQL API:
{ "Authorization": "Bearer fnAE8YNco_ACAEAO9yWrY_DtOpdWoZZnUfPXmxDR" }

The output format of create-database is meant to be human readable, not for machines. You can get the JSON response by running eval instead:

> fauna eval --endpoint localhost MY_DATA 'CreateKey({ role: "admin" })'

{"ref":{"@ref":{"id":"356211021705118208","collection":{"@ref":{"id":"keys"}}}},"ts":1675968152640000,"role":"admin","secret":"fnAE8YQJddACAJD9uN40Q92ki45NBIa9f4ZKuErT","hashed_secret":"$2a$05$XDVSHQD6lAAK/vHitHy5nOL7WdHMDXeYN9PFN0NF5uFcvsjbH7OA6"}

or

> fauna eval --endpoint localhost MY_DATA 'CreateKey({ role: "admin" })' | jq '.secret' -r

fnAE8YRD92ACANvK9Yx5xhnyQfn21Ir7KcGiM47a
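
In a CI script you could capture that directly into a variable and reuse it for the rest of the seeding, for example (a hypothetical sketch):

# Create an admin key for MY_DATA and keep its secret in a shell variable.
FAUNA_SECRET="$(fauna eval --endpoint localhost MY_DATA 'CreateKey({ role: "admin" })' | jq -r '.secret')"
export FAUNA_SECRET

fauna upload-graphql-schema ./schema.gql --mode override --secret "$FAUNA_SECRET" --graphqlHost localhost --graphqlPort 8084 --scheme http --domain localhost --port 8443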

Hey Paul, thanks for caring!

I’ll give it a try asap. The thing I’m now wondering about is whether this secret is only required to upload the GraphQL schema? We wanted to use the container only for E2E testing, and it’s very convenient that it comes with the predefined secret ‘secret’. Otherwise we would have to transport this newly created secret to our other applications, which are also running in separate containers, and that would make the CI build process a bit more complicated.

“secret” is still a valid secret for operations directly on the database with Fauna Dev. That includes scoped secrets like secret:MY_DATABASE_NAME:MY_ROLE
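
So for FQL-based seeding and queries you can keep relying on it, for example (assuming a MY_DATA child database and a fauna-shell version that accepts --secret on eval):

fauna eval --endpoint localhost --secret secret:MY_DATA:admin 'Paginate(Collections())'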

