Add internal & external triggers/hooks

On record actions (insert, update, delete), I’d like the following to be called automatically:

  1. a UDF
  2. a serverless function

For both, I’d like the payload to contain:
a) collection name
b) full record

I can provide use cases for both scenarios if that helps, let me know. Thanks.
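As a sketch, a trigger payload carrying those two fields might look like the following. The function name and payload shape are purely illustrative, not an existing Fauna API:

```javascript
// Hypothetical payload a Fauna trigger could deliver on each record
// action (insert / update / delete). Shape is illustrative only.
function buildTriggerPayload(action, collection, record) {
  return {
    action,       // "insert" | "update" | "delete"
    collection,   // a) collection name
    record,       // b) full record (ref, ts, data)
  };
}

// A UDF or serverless function would then receive something like:
const payload = buildTriggerPayload("insert", "order_stream", {
  ref: "282842364888744451",
  ts: 1605999005510000,
  data: { eventName: "OrderPlaced", orderId: "guid_123" },
});
```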

Hi @RobertBright,

Yes, having more details would help us quite a lot.

Thanks,

Luigi

In general, publishing events from FaunaDB would let us integrate better with the AWS ecosystem, in the same way DynamoDB does. A specific use case is building Event Sourced solutions on top of FaunaDB.

Given a log of events:

{
  "ref": Ref(Collection("order_stream"), "282842364888744451"),
  "ts": 1605999005510000,
  "data": {
    "eventName": "OrderPlaced",
    "orderId": "guid_123",
    "userId": "user_123",
    "amount": 100,
    "productId": "product_123",
    "stripeToken": "stripe_123"
  }
}

{
  "ref": Ref(Collection("order_stream"), "282843030393717251"),
  "ts": 1605998983670000,
  "data": {
    "eventName": "OrderChargeSucceeded",
    "orderId": "guid_123",
    "stripeData": {
      "fieldA": "field a",
      "fieldB": "field b"
    }
  }
}

{
  "ref": Ref(Collection("order_stream"), "282843519356240384"),
  "ts": 1605999450000000,
  "data": {
    "eventName": "OrderPlaced",
    "orderId": "guid_456",
    "userId": "user_123",
    "amount": 200,
    "productId": "product_456",
    "stripeToken": "stripe_456"
  }
}

{
  "ref": Ref(Collection("order_stream"), "282850645452521986"),
  "ts": 1606006246020000,
  "data": {
    "eventName": "OrderSucceeded",
    "orderId": "guid_123",
    "successField": "some value"
  }
}

and a trigger to fold the results:

Reduce(
  Lambda((acc, value) => Merge(acc, Select("data", Get(value)))),
  {},
  Paginate(Match(Index("order_events_by_order_id"), "guid_123"))
)

we can project for storage in another collection:

{
  data: [
    {
      eventName: "OrderSucceeded",
      successField: "some value",
      orderId: "guid_123",
      amount: 100,
      stripeData: {
        fieldA: "field a",
        fieldB: "field b"
      },
      productId: "product_123",
      stripeToken: "stripe_123",
      userId: "user_123"
    }
  ]
}
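The fold that the Reduce/Merge query performs can be sketched in plain JavaScript: filter the log down to one order id, then merge each event’s data left to right, with later events overwriting earlier fields (sample events abbreviated from the log above):

```javascript
// Plain-JS sketch of the Reduce/Merge roll-up: merge the data of
// every event for one order, later events overwriting earlier ones.
function foldOrderEvents(log, orderId) {
  return log
    .filter((e) => e.data.orderId === orderId)
    .reduce((acc, e) => ({ ...acc, ...e.data }), {});
}

const log = [
  { data: { eventName: "OrderPlaced", orderId: "guid_123", amount: 100 } },
  { data: { eventName: "OrderChargeSucceeded", orderId: "guid_123",
            stripeData: { fieldA: "field a", fieldB: "field b" } } },
  { data: { eventName: "OrderPlaced", orderId: "guid_456", amount: 200 } },
  { data: { eventName: "OrderSucceeded", orderId: "guid_123",
            successField: "some value" } },
];

const rolledUp = foldOrderEvents(log, "guid_123");
```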

Raising events to Lambda will allow things like websocket responses through API Gateway as data moves through the pipeline of events. It also allows higher level integrations with other systems outside of FaunaDB.

FaunaDB is a great candidate for ES systems because the event log lives in a queryable database. We don’t need consumers to replay entire logs sequentially; we can just query for the data we want to roll up.

There are more technical details I’d be happy to go over if the team is interested, but hopefully this provides a few useful scenarios for building better serverless apps with Fauna.


+1 This would be a huge bonus for Fauna.

When a serverless function sends a request to Fauna, it either needs to wait for a response (incurring CPU time costs) or tell Fauna to notify another serverless function with the response (currently not possible).

Streaming/subscriptions are nice, but they are really hard to use in a serverless environment. Looking forward to webhooks!


Is there any update on this feature?

We would also love this feature, especially as Fauna already has excellent fine-grained access controls that allow it to be safely invoked by client applications (e.g., a webapp frontend).

Our intended use case for triggers would be to allow the (untrustworthy, unreliable) client application to communicate solely (or at least predominantly) with Fauna. The application logic would then be driven by triggers arising from changes in the database.

Example without triggers:

  • Webapp creates a new user in Fauna
  • Webapp invokes a backend function to send a welcome email

Problem - if the user closes their browser tab, or loses connectivity, or their browser crashes, etc, you have an inconsistent state. (A user has been created but has not received a welcome email.) This is a simple example with arguably acceptable consequences, but you can easily imagine more complex situations where the consequences would not be acceptable.

Problem - it requires the webapp to be aware of the backend application logic requirements. If you decide that changing a password should trigger a safety email to be sent to the client, you need to update the webapp. It would be much cleaner for the webapp not to have to care about such things.

Example with triggers:

  • Webapp creates a new user in Fauna
  • Fauna detects the change, and invokes the registered function which sends a welcome email

Now, regardless of what happens in the client app, the required business logic will still execute. (Assuming that Fauna and our backend are much more reliable than a client’s browser and internet connection.)
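The registered function in that flow could be as small as the following sketch. The handler shape is hypothetical, and `sendEmail` is a stand-in for whatever mail provider the backend uses:

```javascript
// Hypothetical handler a trigger could invoke on user creation.
// The payload shape and sendEmail helper are stand-ins, not a real API.
function onUserCreated(payload, sendEmail) {
  const { email, name } = payload.record.data;
  return sendEmail({
    to: email,
    subject: "Welcome!",
    body: `Hi ${name}, thanks for signing up.`,
  });
}

// Exercising the handler with a fake mail sender:
const sent = [];
onUserCreated(
  { record: { data: { email: "ada@example.com", name: "Ada" } } },
  (message) => sent.push(message)
);
```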

The Webapp now interacts with Fauna for all reads and writes, safe in the knowledge that if anything else is supposed to happen as a result, then it will happen. Compare that to the undesirable workaround below.

Undesirable workaround

  • Webapp doesn’t talk directly to Fauna, but invokes a backend function to create a new user
  • Backend function creates a new user in Fauna
  • Backend function sends a welcome email, etc

The reason this is undesirable is that it essentially requires a new backend function for every change to the database, because you don’t want the webapp to sometimes mutate directly and sometimes have to go through a backend function. You lose the benefits of the single API endpoint provided by FQL / GraphQL.

Even if you have a guideline like “reads can be made directly to Fauna, but writes must go through our own API just in case they have side effects”, you still end up with a proliferation of API endpoints “just in case” you ever need to add business logic over and above the database update.

Additional notes/suggestions

When creating an event trigger, you should be able to specify whether to wait for a result. Sometimes you will want to wait for a result, eg if the function is going to modify the database further and you want the client to have that latest data, and sometimes you don’t, eg if the function just queues an email to be sent to the client in due course.
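A registration API along those lines might look like the following. This is entirely hypothetical (Fauna has no such call today); it only sketches how a sync/async flag could fit in:

```javascript
// Hypothetical trigger registration illustrating a waitForResult flag.
// Nothing here is a real Fauna API; it sketches the proposal only.
function createTrigger({ collection, actions, target, waitForResult }) {
  return {
    collection,     // which collection to watch
    actions,        // e.g. ["insert", "update", "delete"]
    target,         // UDF name, or serverless function identifier
    waitForResult,  // true: block the write until the hook returns
  };
}

// Fire-and-forget: the welcome email just gets queued, so the
// client's write doesn't need to wait for it.
const welcomeEmailTrigger = createTrigger({
  collection: "users",
  actions: ["insert"],
  target: "sendWelcomeEmail",
  waitForResult: false,
});
```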


Any updates or thoughts from the FaunaDB product team on the prioritization of a fauna-native webhooks api? Agreed this would significantly bump the value of fauna for building out API-first platforms capable of serving more than frontend clients.

We are still tracking this feature on our roadmap. In the meantime, please take a look at our Streaming functionality or reach out to our team with your questions.


I’d benefit from this as well. I want to add a way to automatically manage document deletion after a set amount of time alongside a way to automatically modify/flag documents that lose their parent relation. As it stands I foresee that if I don’t carefully manage my data I could end up “losing” documents unless I specifically create indexes to check regularly. Also with user creation, I have to add a predicate condition when I really just want to have the server react to the user creation request from a new user.

With FQL v10 there is no streaming available. Is this planned to become the replacement? Or is there something else in the works?

Streaming for v10 is being worked on, with some feature improvements appropriate for FQL v10. If you have additional questions regarding streaming for v10, let’s move that discussion to a different topic.