Unable to run Fauna locally

I have been running a local instance of Fauna successfully. Following a reboot, it will no longer start with the command:

docker run --rm --name faunadb -p 8443:8443 -p 8084:8084 -v ~/projects/fauna/faunadb:/var/lib/faunadb -v ~/projects/fauna/faunadb:/var/log/faunadb fauna/faunadb

The error log follows:

Starting…
Loaded configuration from /etc/faunadb.yml…
Trace sampling enabled without exporters?
Feature Flags File: /etc/feature-flag-periodic.d/feature-flags.json
Data path: /var/lib/faunadb
Temp path: /var/lib/faunadb/tmp

================================================================================
New fauna version available 5.0.0-beta → 1.5.3-SNAPSHOT
Changelog: https://github.com/fauna/faunadb-jvm/blob/main/CHANGELOG.txt

SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See SLF4J Error Codes for further details.
HTTP Server started at port 8084.
Identified as ab68b6c7-1f53-4970-8652-ed89dfab4cff at 172.17.0.2.
Uncaught Exception on Main Thread. Terminating:
fauna.tx.log.LogChecksumException: TX(1740614) in /var/lib/faunadb/system/log_topology.binlog.3 does not match checksum.
at fauna.tx.log.BinaryLogFile.$anonfun$runVerify$2(BinaryLogFile.scala:624)
at fauna.tx.log.BinaryLogFile.$anonfun$runVerify$2$adapted(BinaryLogFile.scala:614)
at scala.collection.IterableOnceOps.foreach(IterableOnce.scala:575)
at scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:573)
at fauna.tx.log.AbstractEntriesIterator.foreach(EntriesIterator.scala:116)
at fauna.tx.log.BinaryLogFile.$anonfun$runVerify$1(BinaryLogFile.scala:614)
at fauna.tx.log.BinaryLogFile.$anonfun$runVerify$1$adapted(BinaryLogFile.scala:614)
at fauna.tx.log.BinaryLogFile.runVerify(BinaryLogFile.scala:613)
at fauna.tx.log.BinaryLogFile$.open(BinaryLogFile.scala:244)
at fauna.tx.log.BinaryLogFile$.open(BinaryLogFile.scala:221)
at fauna.tx.log.BinaryLogStore.$anonfun$new$2(BinaryLogStore.scala:148)
at fauna.tx.log.BinaryLogStore.$anonfun$new$2$adapted(BinaryLogStore.scala:147)
at scala.collection.StrictOptimizedIterableOps.map(StrictOptimizedIterableOps.scala:100)
at scala.collection.StrictOptimizedIterableOps.map$(StrictOptimizedIterableOps.scala:87)
at scala.collection.immutable.NumericRange.map(NumericRange.scala:40)
at fauna.tx.log.BinaryLogStore.&lt;init&gt;(BinaryLogStore.scala:147)
at fauna.tx.log.CBORBinaryLogStore.&lt;init&gt;(BinaryLogStore.scala:93)
at fauna.tx.log.BinaryLogStore$.open(BinaryLogStore.scala:56)
at fauna.tx.consensus.ReplicatedLog$Builder.open(ReplicatedLog.scala:314)
at fauna.cluster.ClusterService$LogAndStateBuilder.makeLogAndState(ClusterService.scala:72)
at fauna.cluster.topology.LogTopology$.apply(LogTopology.scala:121)
at fauna.repo.cassandra.CassandraService.&lt;init&gt;(CassandraService.scala:458)
at fauna.repo.cassandra.CassandraService$.initializeAndStart(CassandraService.scala:246)
at fauna.repo.cassandra.CassandraService$.start(CassandraService.scala:192)
at fauna.repo.cassandra.CassandraService$.$anonfun$maybeStart$1(CassandraService.scala:179)
at fauna.repo.cassandra.CassandraService$.$anonfun$maybeStart$1$adapted(CassandraService.scala:179)
at scala.Option.map(Option.scala:242)
at fauna.repo.cassandra.CassandraService$.maybeStart(CassandraService.scala:179)
at fauna.api.APIServer.setupCassandraService(FaunaApp.scala:689)
at fauna.api.APIServer.$anonfun$$init$$2(FaunaApp.scala:415)
at fauna.api.FaunaApp.$anonfun$main$3(FaunaApp.scala:217)
at fauna.api.FaunaApp.$anonfun$main$3$adapted(FaunaApp.scala:217)
at scala.collection.IterableOnceOps.foreach(IterableOnce.scala:575)
at scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:573)
at scala.collection.AbstractIterable.foreach(Iterable.scala:933)
at fauna.api.FaunaApp.main(FaunaApp.scala:217)
at fauna.core.Enterprise.main(Enterprise.scala)

Hi @psionman. I see you are using volumes. Can you delete the data and try starting from a clean image?
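For example, a rough sketch of what I mean, assuming the host path from your original command (adjust if yours differs):

# stop and remove any existing container (harmless if it is already gone)
docker stop faunadb
docker rm faunadb

# clear the bind-mounted data on the host (this deletes all persisted Fauna data)
rm -rf ~/projects/fauna/faunadb/*

# start again with the same ports, but without the volumes
docker run --rm --name faunadb -p 8443:8443 -p 8084:8084 fauna/faunadb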

Out of curiosity, can you share your use case for persisting data between runs? If clearing the volumes works, then I would recommend running without them if you don’t need them. But if you do, then we would love your feedback about it.

Also, what version of Fauna Dev are you using? Is the version the same before and after the issue? I can’t tell the version from the log snippet you shared.

Thanks.

Version: FaunaDB Enterprise Edition 23.04.05-71cb8a1

I have not changed the version

Deleting and recreating the data works

I’m sorry, I don’t understand what you mean by volumes :thinking: I’ve arrived at a working environment after web searches. What should I be doing differently? I’m also getting quite significant memory leaks in Fauna. Could that be related?

Glad to hear you are back up and running!

I meant the image version, noted on Docker Hub, but that helps! We have made some recent changes that should help the case where you are persisting data.

You can get the latest image by running docker pull fauna/faunadb:latest

Docker volumes. These link folders on your machine to folders inside the container; they are what you create when you use the -v option.
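As a rough illustration (generic Docker usage, not specific to Fauna), -v accepts either a host path or a named volume, and both keep data outside the container’s own filesystem:

# bind mount: a directory on your machine appears inside the container
docker run -v ~/projects/fauna/faunadb:/var/lib/faunadb fauna/faunadb

# named volume: Docker manages the storage location for you
docker volume create fauna-data
docker run -v fauna-data:/var/lib/faunadb fauna/faunadb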

Sorry for the delay in replying; I’ve been on holiday.

I am developing an app that relies on Fauna for holding data on users and, most significantly, for streaming, so that a user gets a notification when a document changes. In testing, I need to retain data between sessions, and the -v option is the only way I know to do that.

I have tried

docker run --name faunadb -p 8443:8443 -p 8084:8084 fauna/faunadb

But if I stop the server I have to do

docker stop faunadb
docker rm faunadb

and my data is lost.

Is there another way to run Fauna locally and retain data between sessions?

As I mentioned before, I get significant memory leaks in Fauna. It can gobble 5 to 10 GB, at which point I need to stop and restart the server, several times a day.

Fauna Dev does require a lot of memory. We document that a minimum of 8 GB RAM is required. Please note that the Docker image is not just running a database, but a whole serverless database-as-a-service; a lot gets packed into the image. Fauna is designed to spread work across multiple machines, but with Fauna Dev you are running everything on one.
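If you want to see or bound what the container is actually using, standard Docker tooling can help. This is only a sketch, not an official recommendation, and the 8g figure below is simply the documented minimum:

# show live memory and CPU usage for the running container
docker stats faunadb

# optionally cap the container's memory; note the process inside may be
# killed if it exceeds the limit
docker run --rm --name faunadb --memory=8g -p 8443:8443 -p 8084:8084 fauna/faunadb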

We do not support Fauna Dev for production use. It is best suited to short-lived unit tests. Long-running tests, tests that rely on data persisted over time, or tests attempting to measure real-world performance should use the cloud database.

That said, we have made changes since your original post that should help the case where you are persisting data. You can get the latest image by running docker pull fauna/faunadb:latest

Have you had a chance to try your original docker command (including volumes with the -v option) with the latest image? Persisting data should be more stable now, and we would appreciate your feedback on the latest updates.
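Concretely, something like the following, reusing the paths from your first post (note it mounts the same host directory for both data and logs, as your original command did; separate host directories may be clearer):

docker pull fauna/faunadb:latest
docker run --rm --name faunadb -p 8443:8443 -p 8084:8084 \
  -v ~/projects/fauna/faunadb:/var/lib/faunadb \
  -v ~/projects/fauna/faunadb:/var/log/faunadb \
  fauna/faunadb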
