New pricing disappoints me immensely

Personally, I find the ops pricing makes sense. It’s easy to get an idea of how expensive a query is by looking at the metrics shown in the dashboard’s shell.

After a while it becomes more intuitive and you plan ahead with efficiency in mind.

Cloudflare already has a distributed KV database and I’m certain it will release a more complete distributed DB offering, probably this year.

As much as I love Cloudflare, I’m skeptical it will have as many features as Fauna. I could be wrong but I doubt they will have a query language or authorization layer as powerful.

One problem I’ve experienced with Workers KV is that unless you have tons of traffic, most of your requests will not come from the edge. Cloudflare has so many PoPs (points of presence) around the world that two users who are somewhat close geographically (e.g. Paris and London) will not hit the same PoP. I think this is solved by their Argo smart routing, which can fetch data from a nearby edge location instead of going to the origin (or the central location, in KV’s case), but then requests become much more expensive.
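
For what it’s worth, the one knob KV itself exposes for this is the cacheTtl option on reads: once a value has been pulled into a PoP, a longer cacheTtl keeps it cached there. A minimal service-worker sketch (TOPICS is an illustrative KV namespace binding, not anything from this thread):

    addEventListener("fetch", (event) => {
      event.respondWith(handle(event.request));
    });

    async function handle(request) {
      // Repeat reads from users near this PoP can be served from the edge
      // cache for up to a day instead of going back to KV's central storage.
      const value = await TOPICS.get("config", { cacheTtl: 86400 });
      return new Response(value ?? "not found");
    }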

👍

One problem I’ve experienced with Workers KV is that unless you have tons of traffic, most of your requests will not come from the edge

Getting a bit off topic, but they’ve stated that Durable Objects, not KV, would be the primitive used for a future DB, for consistency reasons. D.O.s are stored in the Cloudflare node nearest to where they’re most frequently used. Agreed it wouldn’t be as feature-complete as Fauna initially, but it’d have the basic CRUD most people need, and it will likely be Fauna’s biggest competitor for a geographically-distributed, serverless-first database. Guaranteed it will be aggressively priced, with a free tier and probably usage-based with a $5 minimum like Workers Bundled. Not as feature-complete… probably. But sufficient… we’ll see.
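
For anyone who hasn’t tried them, “basic CRUD” on a Durable Object today looks roughly like the sketch below. This is a minimal illustration against the current Workers API, not anything Cloudflare has announced for a database; the Document class and routing are made up for the example:

    // A minimal Durable Object exposing CRUD over its built-in transactional
    // storage. Class name, key scheme, and routes are illustrative only.
    export class Document {
      constructor(state, env) {
        this.state = state;
      }

      async fetch(request) {
        const key = new URL(request.url).pathname.slice(1);

        switch (request.method) {
          case "GET": {
            // storage.get resolves to undefined when the key is absent
            const value = await this.state.storage.get(key);
            return new Response(JSON.stringify(value ?? null));
          }
          case "PUT":
            await this.state.storage.put(key, await request.json());
            return new Response("ok");
          case "DELETE":
            await this.state.storage.delete(key);
            return new Response("ok");
          default:
            return new Response("method not allowed", { status: 405 });
        }
      }
    }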

Why not just do that ($5 minimum, usage-based), and maybe keep discounts for very high usage if needed? I think it’d make lots of people happy.

They could also consider $300/mo for enterprise support, SSO, or something. IDK, I’m a one-man shop, not an enterprise, so I don’t know what entices them. Pricing is hard, and we can’t fault Fauna for iterating on it, but hopefully they keep in mind that early users need certainty they won’t be priced out or feature-gated after building on the platform, for peace of mind.

Best of luck y’all.


Durable Objects are exciting, and there might indeed be something built upon them. There is, however, a huge development gap between a durable object and a database. Cloudflare mentions in one of their blogs that they see the possibility of this serving as the foundation of a database, not necessarily built by Cloudflare itself. However, I don’t see a company (except Cloudflare itself) spending millions to construct a database on top of it and be tied to one specific technology (Durable Objects) and one execution environment (Cloudflare Workers), if I’m not mistaken. What I might see happening is that either the KV store or Durable Objects becomes an underlying storage layer for a database that allows you to switch storage layers.

I wouldn’t say this is necessarily competition. Fauna + Durable Objects can make a lot of sense for caching in-use data in a worker, or (although you already have real-time objects with Fauna) for making a specific object truly real-time once you have fetched it (e.g. because you know that the users who will collaborate on this object are colocated in the same region, and you are using Workers already). You can’t really say that Durable Objects are geographically distributed; rather, they are geographically moving, which defeats the purpose for people who want clients from different regions to interact with each other. I don’t know what CF is planning, so who knows, they might be working on a more complete database solution. Still, while Durable Objects might seem appealing to many new users (and they are appealing, though sometimes for the wrong reasons), the step up to a real distributed database is huge. My understanding of them is limited, but I see them as a clever way of sharing context between different Cloudflare Workers, keeping an object’s state as close to the function as possible. Going beyond that requires relations, indexes, query composition, transactions over multiple objects, conflict resolution, disaster recovery, a sane security layer, and all the (UI) tooling a database requires.

What I learn from this: the task of selecting the right database post-2020 is going to be incredibly hard, since it’s becoming almost impossible to learn what each database (and/or storage/caching/CMS mechanism) is doing behind the scenes. The landscape is more diverse than it ever was 🙂. Which is a good thing, but we’ll all need to do even more reading 🙃 🤕

I’m just a small fish, OK to be bait for bigger fish. I will die of old age before I ever make enough money that $25 per month looks affordable. But I’d be happy to pay $10 a month if I exceed the free plan.


Fauna should perhaps create a bunch of new plans, rather than a four-tier system with three-foot-tall height differences between payments ($0 – $25 – $150 – $500). I would personally love to see a $10 plan.


I was surprised to see that the Free plan’s 100k read ops is not that much. I’m working on a hobby/community project. Just by adding the master/initial data for my website, I have already used 70k read ops. And I don’t have that much data yet, roughly 400 documents.

I’m willing to spend $10 a month on some hobby/community projects, but $22.50 a month is too much.

Hopefully the Fauna team will revise their pricing plans. Otherwise I will have to migrate to another DB solution, which would be a pity, because I really like FaunaDB.

@Tuan, when did most of your read ops happen?

There was a bug in the dashboard that blew up a lot of folks’ read ops. The fix went up 13 days ago. So if your read ops hit before then, it might be better now.

Of course, you might have an app that likes to read the data a lot 🤷‍♂️

Hi @ptpaterson,

I see this graph in the Fauna dashboard: [graph screenshot]

FYI: when I executed the following query:

    // q is the FaunaDB JS driver's query namespace,
    // e.g. const q = require("faunadb").query
    q.Let(
      {
        // Page through every topic and keep only its id and name
        nodes: q.Select(
          ["data"],
          q.Map(
            q.Paginate(q.Match(q.Index("all_topics")), { size: 10000 }),
            q.Lambda(
              "topicRef",
              q.Let(
                { topicDetails: q.Get(q.Var("topicRef")) },
                {
                  id: q.Select("id", q.Var("topicRef")),
                  name: q.Select(["data", "name"], q.Var("topicDetails")),
                }
              )
            )
          )
        ),
        // Page through every relationship and resolve both endpoints to ids
        links: q.Select(
          ["data"],
          q.Map(
            q.Paginate(q.Match(q.Index("all_topic_relationships")), { size: 10000 }),
            q.Lambda(
              "relationshipRef",
              q.Let(
                {
                  relationshipDetails: q.Get(q.Var("relationshipRef")),
                  fromRef: q.Select(["data", "from"], q.Var("relationshipDetails")),
                  toRef: q.Select(["data", "to"], q.Var("relationshipDetails")),
                },
                {
                  source: q.Select("id", q.Var("fromRef")),
                  target: q.Select("id", q.Var("toRef")),
                  value: 1,
                }
              )
            )
          )
        ),
      },
      {
        data: {
          nodes: q.Var("nodes"),
          links: q.Var("links"),
        },
      }
    )

Then I got the following metrics:

{"queryBytesIn":954,"queryBytesOut":9145,"queryTime":187,"storageBytesRead":60637,"storageBytesWrite":0,"transactionRetries":0,"byteReadOps":163,"byteWriteOps":0,"computeOps":6}

When I checked the dashboard, I saw that 989 read ops had been consumed. all_topics contains about 100 documents and all_topic_relationships contains about 250 documents.

Is 989 read ops correct? Or is there something wrong with my query?

@Tuan, every time you read a document using Get(), you generate a read op.

To prevent this, you can cache values using indexes. Reading an index page only counts as a single read op. Of course you will now consume more storage space, so you have to evaluate which approach works better for your use case.

Correction:

Since the change in pricing, reading an index no longer counts as a single read op. The basic idea is still true, though: reading an index will count as fewer reads than reading each and every indexed document.
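
For example, the per-document Get() calls in the graph query earlier in the thread can be avoided by storing the topic name in the index itself, so pagination returns the data directly. A sketch, assuming a topics collection; the index name is illustrative:

    // Store each topic's ref and name as index values, so a single
    // Paginate returns [ref, name] tuples with no Get() per document.
    q.CreateIndex({
      name: "all_topics_with_names",
      source: q.Collection("topics"),
      values: [{ field: ["ref"] }, { field: ["data", "name"] }],
    })

    // Each page entry is now [ref, name]
    q.Paginate(q.Match(q.Index("all_topics_with_names")), { size: 10000 })

As noted above, this trades storage space for read ops.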


@pier Thanks for the tip. I will try caching values in indexes as suggested.


@Tuan If you still have questions that aren’t specifically about the new pricing scheme, would you please create a separate post?


Understood. I have created a new post. Please let me know if I need to remove my question from this post.

(edit: posts cleaned by admin)

Wrong information.
Reading an index page doesn’t count as a single read op anymore since the change in pricing.
My most-used query used to count for 13 ops. It’s 519 ops now, and will grow exponentially.
You wrote amazing articles for Fauna (they are now part of the official docs!) and are held in high regard in this community, so it surprises me that you wrote several suggestions and opinions in this thread without fully knowing the changes.

Source: Billing | Fauna Documentation.


Thanks for your comment @alfrenerd.

You’re right, but to be honest I have been focused on things other than Fauna for the past couple of months.


+1 for the $5 plan.

I just got my first bill, one month into production: it’s $20.83. At the bottom of the receipt, this is what’s included:

Individual plan (Apr 6 - Apr 30)
Read-Ops: 14,505
Write-Ops: 1,365
Compute-Ops: 3,047
Storage: 66.71 KB

Now I’m just wondering: should I really be paying $20 for such a small number of ops? Should I have stayed on the free plan for a while before moving to a paid plan? Because there’s no paid plan for such a small volume of ops; before the $20 plan, there’s only the free plan.
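
(For what it’s worth, $20.83 works out to exactly a $25/mo plan prorated over Apr 6–30: $25 × 25/30 ≈ $20.83. So this looks like the flat plan fee rather than a metered charge for the ops themselves.)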


I spent a lot of time researching FaunaDB for my use case. I was really stoked to see I could use third-party authentication and skip having a backend. But then I saw that it’s not available on the Team plan, which is really disappointing, because if I start with the free plan and go beyond its limits, I can’t really imagine paying $150/mo and breaking even with my app by that point. That’s a big spend when bootstrapping a startup off the ground. In the end I think I have to go with Firebase or some other solution.


Almost the exact same situation here. The lack of third-party auth on the Individual plan basically means I have to choose between the security of my users (I know enough about security to have serious reservations about hand-rolling an auth solution on my own) and using Fauna. It’s an easy choice for me at this point.

I’m quite intrigued; may I ask what sort of things you are worried about when hand-crafting your own auth? Because I hand-crafted mine with my serverless REST API.

It really depends on the sensitivity/value of the customer data you’re dealing with (i.e. your threat model).

My app deals with customer source code and production deploys, so if I were to roll my own auth and session management, I would need to obsess over every attack vector imaginable today and keep up with the state of the art in session security (the folks at supertokens have a pretty good write-up of common attacks and mitigations: All you need to know about user session security), run a bounty program and manage payouts, etc. Or I could pay someone a tiny fraction of my monthly price per customer to obsess over all of that for me.

Your mileage will vary if you’re dealing with less sensitive customer data that makes a lower-value target for attackers. But the choice for what I’m building is pretty clear.

What would be the “high-value customer data” that you would store in your database? Because auth is really just login credentials; as long as you are sure those won’t leak, I think you won’t have any problems at all.

  • Man-in-the-middle attack – ALWAYS use HTTPS.
  • OAuth token theft – I don’t really use OAuth, but according to the article you linked: “If an application provides access/refresh tokens to other apps via OAuth, then there is a risk of the main app’s auth tokens being stolen if the other app’s servers are compromised.” If Google or Facebook OAuth tokens are compromised, that’s kind of hard to deal with, but I believe Google and Facebook are very secure and the chance of that happening is really low.
  • XSS – The good thing is that you don’t need to worry about anything like SQL injection with FaunaDB, because of the way FaunaDB and FQL work. If you are sanitizing user input, what else are you worried about?
  • CSRF – If you implement authentication correctly, you won’t even need to worry about this.
  • Database/filesystem access – If you’re worried about this, then I think ⚠️ FaunaDB has some explaining to do.
  • Session fixation – Personally, I don’t like sessions, so I don’t use them; I use JWTs instead.
  • Brute force attack – Implement rate limiting (see the sketch after this list) or enforce strong passwords.
  • Social engineering / physical access – Unfortunately there is no cure for this; it’s not like your app can magically detect that it’s being used by a different person or that your customer is being socially engineered.
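
On the brute-force point, a fixed-window rate limiter is only a few lines. A minimal sketch; it’s in-memory, so it assumes a single server process (use Redis or similar when distributed):

    // Track login attempts per IP in a fixed window.
    const attempts = new Map();
    const WINDOW_MS = 15 * 60 * 1000; // 15-minute window
    const MAX_ATTEMPTS = 5;

    function isRateLimited(ip) {
      const now = Date.now();
      const entry = attempts.get(ip);
      // Start a fresh window on first sight or after the window expires
      if (!entry || now - entry.windowStart > WINDOW_MS) {
        attempts.set(ip, { windowStart: now, count: 1 });
        return false;
      }
      entry.count += 1;
      return entry.count > MAX_ATTEMPTS;
    }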

You should take note that even if you use Auth0 or any other authentication service, your app is not protected against any of the attacks above:

  • Auth0 cannot protect you from a man-in-the-middle attack. I suppose it kind of does, since it requires HTTPS as far as I know, but if your app itself is not using HTTPS then that’s useless.
  • Auth0 cannot protect you from OAuth token theft. Auth0 itself uses JWTs, and it uses refresh tokens as well; both can be stolen/compromised should an attacker gain access to the customer’s device, or sniffed if you are not using HTTPS.
  • Auth0 cannot protect you from XSS, obviously.
  • Auth0 cannot protect you against database/filesystem access; it can’t protect your database, that’s not what it’s for.
  • Auth0 cannot protect you against CSRF, obviously.
  • Auth0 cannot protect you against session fixation; it’s your responsibility to ensure guests can only access guest stuff.
  • Auth0 cannot protect you against social engineering / physical access, obviously.

IMO, Auth0 does not do anything special. I use it on one of the apps I maintain for a client, and darn, it’s a waste of money.

Correct me if I’m wrong; I’m all ears for how else I could hack my own app, because honestly I’ve done all that I could think of. If I can’t get access to the token, I can’t do anything, and the only way to get access to that token is via social engineering, which I really can’t protect against. Although there is a flow I thought of that could protect against this: implementing strict IP-based tokens, so tokens are associated with IP addresses, and when a token is used from an IP address it is not associated with, the request is simply denied and the app displays the login screen for the user to log in again (sketched below). Though I did not implement it, as I don’t find it necessary…
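
For what it’s worth, the IP-bound token flow described above is only a few lines with JWTs. A hedged sketch using the jsonwebtoken package; the obvious caveat is that users on mobile or NAT’d networks change IPs often and would be forced to log in again:

    const jwt = require("jsonwebtoken");

    function issueToken(userId, ip, secret) {
      // Embed the client's IP as a custom claim at login time
      return jwt.sign({ sub: userId, ip }, secret, { expiresIn: "1h" });
    }

    function verifyToken(token, requestIp, secret) {
      const payload = jwt.verify(token, secret); // throws if invalid or expired
      if (payload.ip !== requestIp) {
        // Token presented from an IP it isn't associated with: deny
        throw new Error("token presented from an unassociated IP");
      }
      return payload;
    }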
