How are write ops measured?

I’m looking to enter data into a database: I poll multiple items every x minutes. Can I group this data and enter it as one write op, or would each item’s update be counted as its own write op? I’m curious how pricing would scale.

Hope this was clear, and thanks for reading.

@deltoid I guess you meant read ops rather than write ops, since polling would result in reads. Nonetheless, this link explains how read and write ops are calculated and used for billing: https://docs.fauna.com/fauna/current/concepts/billing. Happy to provide more information if you need it.

That depends on what you mean by grouping.

Option one: batch queries with Do or Map.

You could place multiple Create() statements in a Do() function. The link (https://docs.fauna.com/fauna/current/concepts/billing) that Jay shared is the best resource to verify which operations cost write operations. You will see there that one Create costs one write operation, each time it is executed. The Do(Create(), Create(), …) approach therefore still executes multiple writes. Even when wrapping the Create in a Map, although you only write the word Create once, it executes Create once per element and hence costs you multiple write ops.
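For illustration, both batching shapes could look like this in FQL (the collection name "readings" and the field names are made up for this sketch; either way, each Create bills one write op):

```
// One request, but still one write op per document created
Do(
  Create(Collection("readings"), { data: { item: "a", value: 1 } }),
  Create(Collection("readings"), { data: { item: "b", value: 2 } })
)

// Same cost with Map: Create runs once per array element
Map(
  [{ item: "a", value: 1 }, { item: "b", value: 2 }],
  Lambda("reading", Create(Collection("readings"), { data: Var("reading") }))
)
```

Batching like this does reduce the number of round trips to the database, just not the number of write ops.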

Option two: everything in one document.

If you place all this information in one document and write that document with a single Create, you will only be charged one write operation. I would not advise you to do that, as it is probably not going to be easy to work with in the long run, and there is a request size limit (1 MB). It’s also not really what an OLTP database like FaunaDB is made for :slight_smile:. That said, it’s possible :slight_smile:
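As a sketch (with hypothetical collection and field names), the single-document variant might look like:

```
// One write op total: the whole minute's batch becomes a single document
Create(Collection("readings"), {
  data: {
    ts: Now(),
    items: [
      { item: "item-1", value: 1 },
      { item: "item-2", value: 2 }
    ]
  }
})
```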

Thanks for the replies, much appreciated. Writing a batch of data seems like the only suitable option right now. For more context: I want to take data from an API every minute for multiple different items, then store it in FaunaDB. Say 75 items, multiplied by every minute of the day (1,440 minutes), gives me 108k write ops per day, which seems too high for such a small part of a hobby app. Grouping this data into 10-minute chunks would make things a lot more manageable.

Would you mind expanding on what makes it difficult to work with? I don’t think document sizes will get close to 1 MB, so that part shouldn’t be an issue.

Depends on your use cases. If you try to update one specific element in there, it might be easier to overwrite the whole document than to write code that maps over that document to find the specific element and update its attribute :).

The same reasons that working with an inline array can be harder than working with relations.
E.g., for an inline array you would have to cut the array in two, update the element you need, and glue it back together, or map over the whole array with an If test to check which object needs to be updated (very inefficient). With relations, every element gets its own document, so you just grab the reference and update that specific small document.
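To illustrate the relational side (the collection name and document ID below are hypothetical), updating one item is then a single small Update on its own reference:

```
// One document per item: grab the ref and update only the changed attribute
Update(
  Ref(Collection("readings"), "1234567890"), // hypothetical document ID
  { data: { value: 42 } }
)
```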

Both are useful in certain use cases; it’s a careful consideration :slight_smile:. I would say, try out what works for you.

But!

Is this always new data? If this data is often the same data that you are going to overwrite, then it’s also a bit of a brute-force approach, right? I would strongly recommend calculating a diff if those are the same elements and you want to push down the costs. Also, FaunaDB will store versions for each of those writes (even if you set versioning very low, it will, at this point, always store a few versions), so I don’t think it’s a good idea to overwrite these docs each minute. FYI, we are working on a feature to respect that versioning setting exactly.
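One way to sketch that diff idea in FQL (names and values are hypothetical): read the stored value first and only pay for the write when something actually changed.

```
Let(
  { doc: Get(Ref(Collection("readings"), "1234567890")) }, // hypothetical ref
  If(
    // skip the write (and its versioning churn) when nothing changed
    Equals(Select(["data", "value"], Var("doc")), 42),
    "unchanged",
    Update(Select("ref", Var("doc")), { data: { value: 42 } })
  )
)
```

The Get does cost a read op, but reads are billed cheaper than writes, so this trades a cheap op for an expensive one whenever the data is unchanged.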

If 75 new elements arrive each minute, then 108k write ops is just the nature of your application, and given that you get 100k ops for free, it’s actually still ridiculously cheap, no? :slight_smile:

Thank you, that was very helpful. I think simply being more efficient is the best solution; I’ll think carefully about checking for matching elements, and a few other ideas I now have. The versioning features sound very interesting too. FaunaDB always seems to have really great ideas in the works, which makes it exciting to follow :slight_smile:

Thanks again for the great support

Hi @deltoid!

The Difference function can be very helpful here, for efficiency, if not simplicity :upside_down_face:
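For example (with illustrative values), Difference returns the elements of the first array that do not appear in the others, which is handy for isolating only the items that actually changed before writing:

```
// Keep only the elements of the first array missing from the second
Difference(["a", "b", "c"], ["b", "c"])  // returns ["a"]
```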