Handling Blob Storage - Best Practices?

I’m wondering how people are generally incorporating blob storage into their Fauna databases?

My main questions are around how you link Fauna documents to blobs (one user in the Slack community suggested keeping S3 keys as a field on a document) and how you handle authorisation for accessing these files. E.g. How do you use your Fauna ABAC to control who can access which files, or how do you ensure users can only access files that they created?


That’s a really interesting question! I wonder how people are doing that. If I were tasked with building something like this (in the serverless style), I’d probably have an AWS Lambda that checks permissions against ABAC and generates a signed S3 URL to hand back to the client, so that they have a time-limited credential to fetch the blob themselves.
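A minimal sketch of that flow, assuming hypothetical names (`canAccessBlob`, `handleBlobRequest`, an `owner` field on the blob document are all illustrative, not from the original post). In a real Lambda you’d load the blob’s Fauna document using the caller’s credentials and mint the URL with something like `getSignedUrl` from `@aws-sdk/s3-request-presigner`; here the ownership check is kept as a pure function and the signing step is injected:

```javascript
// Pure ownership check: mirrors an ABAC "owner only" predicate.
// `blobDoc` stands in for the Fauna document's data; `callerId` for the
// identity behind the caller's secret.
function canAccessBlob(blobDoc, callerId) {
  return blobDoc != null && blobDoc.owner === callerId;
}

// Hypothetical handler shape: deny, or return a short-lived signed URL.
// `presign` would wrap an AWS SDK call such as
//   getSignedUrl(s3Client, new GetObjectCommand({ Bucket, Key }), { expiresIn: 300 })
function handleBlobRequest(blobDoc, callerId, presign) {
  if (!canAccessBlob(blobDoc, callerId)) {
    return { statusCode: 403, body: "forbidden" };
  }
  return { statusCode: 200, body: presign(blobDoc.s3Key) };
}
```

The client then downloads the blob directly from S3 with the returned URL; the Lambda never proxies the bytes.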

I would indeed keep S3 keys as a field on a document. I did the same for images in the Fwitter example: https://github.com/fauna-brecht/fwitter/blob/master/src/fauna/queries/fweets.js

CreateFweet has an ‘asset’ field there, which is simply some Cloudinary details used to fetch the actual image.

About roles:
It would probably simplify your life if you ‘collectionify’ that information: for example, a dedicated ‘blobs’ collection. You could write your roles against that collection and let your app add additional information to each blob document (the name of the file, etc., which a user might have provided in your UI).

Docs in that blob collection will then, of course, contain a simple link to the actual blob (an S3 key, a Cloudinary URL, whatever your blob storage is).
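As a sketch, the metadata document your app stores in that ‘blobs’ collection might look like this (the field names `owner`, `filename`, and `s3Key` are my assumptions, not anything prescribed by Fauna); a role over the collection could then use a privilege predicate comparing the caller’s identity to `data.owner`:

```javascript
// Build the data for a document in a hypothetical 'blobs' collection.
// A Fauna ABAC role predicate over this collection could grant read access
// only when the caller's identity matches `owner`.
function makeBlobDoc(ownerId, filename, s3Key) {
  return {
    owner: ownerId,     // who uploaded it; what the role predicate checks
    filename: filename, // user-provided display name from your UI
    s3Key: s3Key,       // pointer into your blob store (S3 key, Cloudinary link, ...)
    createdAt: Date.now(),
  };
}
```

Your upload handler would create this document right after putting the object into storage, so the Fauna side stays the single place where authorization decisions are made.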