I'm currently assessing FaunaDB for a new project we are embarking on. One requirement is a pretty classic one: users will have to search a product catalogue given different sets of filters. Some information on the data:
- There are around 10,000 products
- Each product has around 20-30 attributes that will be filterable
- Products are generally uniform, so most of the time they will have the same sets of attributes
- Once populated, the product collection will be pretty much stable (few writes, lots of reads)
The first straightforward/simple solution that comes to mind is to create an index for each of those properties and then, when searching, intersect the matches from those indexes. A typical query could involve ~10 indexes.
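To make the idea concrete, here is a rough FQL sketch of what I have in mind. The index names and attribute names (`products_by_color`, `products_by_size`, `products_by_brand`, and the filter values) are hypothetical placeholders, not the real schema:

```fql
// One term index per filterable attribute
CreateIndex({
  name: "products_by_color",
  source: Collection("products"),
  terms: [{ field: ["data", "color"] }]
})

// At query time, intersect the matches from each filter the user selected
Paginate(
  Intersection(
    Match(Index("products_by_color"), "red"),
    Match(Index("products_by_size"), "L"),
    Match(Index("products_by_brand"), "acme")
  )
)
```

With ~10 active filters, the `Intersection` would span ~10 `Match` expressions built dynamically from the user's selection.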
I was wondering about the feasibility and performance implications of this solution. Are there any obvious downsides? Or would you advise an alternative approach?