Hi @Kimfucious and welcome!
**Paginate then Filter**
The simplest approach would probably be to fetch a page of items by tag, and then filter them. This fetches `SIZE` documents, then filters and returns only those that match. The `Filter` runs on at most `SIZE` documents, so you can easily predict performance and cost.
```js
q.Filter(
  q.Map(
    q.Paginate(q.Match(q.Index("searchItemsByTag"), TAG), { size: SIZE }),
    ref => q.Get(ref)
  ),
  doc => q.ContainsStr(
    // I recommend Casefold on both the stored data and your search term
    q.Casefold(q.Select(["data", "title"], doc)),
    q.Casefold(PARTIAL_TITLE)
  )
)
```
**Filter then Paginate**
You can put the `Filter` inside of `Paginate`, but in this case `Filter` will keep running against as many documents in the Set as necessary to fulfill `SIZE`, potentially the entire Set. You get a more predictable number of results, but less predictable performance and cost. In the worst case, a query with just a couple of results could cost as many read ops as there are documents in the collection. That may be fine when the Set you are filtering is small (here, the Set is `q.Match(q.Index("searchItemsByTag"), TAG)`).
```js
q.Map(
  q.Paginate(
    q.Filter(
      q.Match(q.Index("searchItemsByTag"), TAG),
      ref => q.ContainsStr(
        q.Casefold(q.Select(["data", "title"], q.Get(ref))),
        q.Casefold(PARTIAL_TITLE)
      )
    ),
    { size: SIZE }
  ),
  ref => q.Get(ref)
)
```
**Use Index bindings**
Here’s a previous topic that I hope can help: Search for substring (need performant approach)
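For completeness, here is a rough sketch of the binding approach from that topic. This is illustrative only: the index name, collection name, and NGram size are assumptions, and `NGram` is not an officially documented FQL function. The idea is to do the `Casefold` and substring work at write time, in an index binding, so searching becomes an indexed `Match` instead of a read-heavy `Filter` scan:

```js
// Illustrative sketch, assuming a collection named "items".
// The binding computes trigrams of the casefolded title when the
// document is indexed, and the index is searchable by those trigrams.
q.CreateIndex({
  name: "itemsByTitleNgram",
  source: {
    collection: q.Collection("items"),
    fields: {
      wordparts: q.Query(
        q.Lambda(
          "doc",
          q.NGram(
            q.Casefold(q.Select(["data", "title"], q.Var("doc"))),
            3, // min ngram length (assumed)
            3  // max ngram length (assumed)
          )
        )
      )
    }
  },
  terms: [{ binding: "wordparts" }]
})
```

With an index like this, matching a 3-character search term is a single `Match`; longer search terms generally require intersecting the matches of each of their trigrams. See the linked topic for the full treatment of that pattern and its trade-offs.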