Pagination and filtering questions

I have a Node.js API using Express. It has the following endpoint, which returns the comments for a post with a given id.

Now, I would like to implement two features:

  • pagination via limit and offset
  • data filtering (e.g. /comments?title=my%20title) would only return comments that have the requested title. For this one, I’m wondering how to create an index that works with dynamic parameters.

Here is my attempt at handling limit and offset, following this post: Recommended way of doing a page offset? - #2 by ben.


// GET comments by post

// Comment model
// commentAuthor: ref
// targetPost: ref
// title: string
// content: string

// comments_by_post is an index that has data.targetPost as term and ref as value on the comments collection

router.get('/:id/comments', async (req, res, next) => {
  try {
    const limit = req.query.limit ? parseInt(req.query.limit, 10) : 0;
    const offset = req.query.offset ? parseInt(req.query.offset, 10) : 0;

    const { data } = await db.get().query(
      Map(
        Drop(
          Paginate(Match(Index('comments_by_post'), req.params.id), {
            size: limit,
          }),
          offset
        ),
        Lambda((comment) => Get(comment))
      )
    );
    res.send(data);
  } catch (e) {
    next(e);
  }
});

However, I run into the following error, I guess because Drop expects an Integer as its first parameter and I’m giving it a Page.

Integer expected, Page provided.

How could I refactor my code to implement these features?

Thanks in advance for any help!

@theolavaux Looks like you are passing the arguments in reverse order to Drop. It should be Drop( num, array or Page ) and not Drop(Page, num ).
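To make the argument order concrete, here is a rough plain-JavaScript model of what Drop and Take do (a sketch only; in FQL these run server-side on Pages and arrays):

```javascript
// Plain-JS sketch of FQL's Drop/Take argument order: the count comes
// first and the array (or Page) second. Passing them reversed is what
// produces "Integer expected, Page provided".
const drop = (num, array) => array.slice(num);
const take = (num, array) => array.slice(0, num);

const page = ['c1', 'c2', 'c3', 'c4', 'c5'];
console.log(drop(2, page)); // ['c3', 'c4', 'c5']
console.log(take(2, page)); // ['c1', 'c2']
```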

1 Like

Just an additional fix to default size to undefined instead of 0, and it’s working, thanks man!

Do you have examples of filtering, aside from the Fauna docs?

I read this, but I would like to find a solution that works with a variable number of parameters. I don’t really see how to utilise indexes in this case.

Variable number of parameters sounds like a job for Intersection and/or other Set operations.

Map(
  Paginate(
    Intersection(
      Match(Index("all_Planets_by_type"), "TERRESTRIAL"),
      Match(Index("all_Planets_by_size"), "HUGE"),
      Match(Index("all_Planets_by_color"), "BLUE")
    )
  ),
  Lambda("planetRef", Get(Var("planetRef")))
)

If you are making a UDF, you can pass null into the index terms that aren’t used (okay for Union), or use logic to build a list of indexes to Match.

This article goes into a lot of detail, including combining indexes.

1 Like

Thanks a lot for your article, it helped me a lot to grasp some additional concepts that might be vague in the docs regarding pagination, filtering and sorting :grin:

I chose the solution to group indexes using Intersection for now. What I would like to do is

  1. Gather all comments that have a certain post
  2. Filter the results using different indexes

All of that, while still supporting limit/offset.

I modified my query to reflect those changes; however, something is still off.

Join's second parameter is an Index Ref, but Intersection returns a SetRef.
I hence get the following error: Lambda or Index Ref expected, Set provided.

How could I correct this query?

    const limit = req.query.limit ? parseInt(req.query.limit, 10) : undefined;
    const offset = req.query.offset ? parseInt(req.query.offset, 10) : 0;

    const { commentAuthor, targetPost, title, content } = req.query;

    const { data } = await db.get().query(
      Map(
        Drop(
          offset,
          Paginate(
            Join(
              Match(Index('comments_by_post'), req.params.id),
              Intersection(
                Match(Index('all_comments_by_comment_author'), commentAuthor),
                Match(Index('all_comments_by_target_post'), targetPost),
                Match(Index('all_comments_by_title'), title),
                Match(Index('all_comments_by_content'), content)
              )
            ),
            {
              size: limit,
            }
          )
        ),
        Lambda((comment) => Get(comment))
      )
    );

Some questions to help figure out what you are attempting:

Is the post ID (req.params.id) a field on the comments? Then it could be included in the intersection. From the names, and without actual definitions, it looks like Index('all_comments_by_target_post') is the same as Index('comments_by_post').

The intersection will get all comments with the provided Author AND targetPost AND title AND content. That’s a lot of things to “AND”. Is that what you want, or should these be “OR” operations? Union would be used for that.

If your intention is to “OR” all of the search parameters, but “AND” only with the post, i.e.: (post AND (author OR target OR title OR content)) then you could do the following.

Paginate(
  Intersection(
    Match(Index('comments_by_post'), req.params.id),
    Union(
      Match(Index('all_comments_by_comment_author'), commentAuthor),
      Match(Index('all_comments_by_target_post'), targetPost),
      Match(Index('all_comments_by_title'), title),
      Match(Index('all_comments_by_content'), content)
    )
  )
)

Also, you already referenced another post which recommends NOT using Drop for pagination, but instead using the built-in pagination. Is there a reason why you are using Drop rather than before and after cursors? Size is not how many are returned if you use Drop, it’s how many are fetched. Eventually, as you get to later pages using Drop, you will be fetching many, many more documents than needed and dropping most of them.

Maybe this other post will help?

Using the size and after cursor, you can get later pages like this:

Paginate(
  Intersection(
    Match(Index('comments_by_post'), req.params.id),
    Union(
      Match(Index('all_comments_by_comment_author'), commentAuthor),
      Match(Index('all_comments_by_target_post'), targetPost),
      Match(Index('all_comments_by_title'), title),
      Match(Index('all_comments_by_content'), content)
    )
  ),
  {
    size: 20,
    after: [ q.Ref(q.Collection('Comments'), '123456789') ], // <-- the comment that was at the end of the list last page
  }
)
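To illustrate the cost difference, here is a plain-JavaScript sketch (not FQL; the array stands in for the matched index entries): offset paging reads offset + limit entries per request, while cursor paging reads only limit.

```javascript
// Plain-JS sketch contrasting Drop-based offsets with after-cursors.
// `all` stands in for the entries matched by the index.
const all = Array.from({ length: 100 }, (_, i) => `comment-${i}`);

// Drop-based page: Paginate fetches offset + limit entries,
// then Drop discards the first `offset` of them.
function pageByOffset(limit, offset) {
  const fetched = all.slice(0, offset + limit);
  return { returned: fetched.slice(offset), fetchedCount: fetched.length };
}

// Cursor-based page: fetch only `limit` entries, starting right after the cursor.
function pageByCursor(limit, afterId) {
  const start = afterId == null ? 0 : all.indexOf(afterId) + 1;
  const returned = all.slice(start, start + limit);
  return { returned, fetchedCount: returned.length };
}

console.log(pageByOffset(5, 90).fetchedCount);           // 95 entries read for 5 results
console.log(pageByCursor(5, 'comment-89').fetchedCount); // 5 entries read for the same page
```

Both calls return the same five comments; only the amount of data fetched differs, and the gap grows with every page.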

You were right about this: these indexes were the same, so I deleted the duplicates and used the ones that I had previously created.

About this part, an AND is indeed what I would like to achieve.
I will give a little bit more context here; these are the different kinds of queries that I want to support with this endpoint.

https://my.api.com/posts/:id/comments
Simply returns all the comments for a post

https://my.api.com/posts/:id/comments?limit=5&offset=3
Returns 5 comments for a post, starting at comment n°3 (so 3, 4, 5, 6, 7). In the case where there are only 4 comments, returns only 3 and 4.

https://my.api.com/posts/:id/comments?limit=5&offset=3&title=test%20title
Same query as before, but also matches the comment’s title

https://my.api.com/posts/:id/comments?limit=5&offset=3&title=test%20title&description=test%20description
Same query as before, but matches title AND description

I didn’t use the after cursor because I didn’t know how to dynamically link it to the offset parameter that I receive. If I understand it correctly, I would need to pass "comment number ‘offset’" to after instead of passing offset to Drop?

Does it seem clearer now?

PS: Just for reference, here is what I have now with corrected indexes.

    const { data } = await db.get().query(
      Map(
        Drop(
          offset,
          Paginate(
            Intersection(
              Match(Index('comments_by_post'), req.params.id),
              Match(Index('comments_by_user'), commentAuthor),
              Match(Index('comments_by_title'), title),
              Match(Index('comments_by_content'), content)
            ),
            {
              size: limit,
            }
          )
        ),
        Lambda((comment) => Get(comment))
      )
    );

OK, thanks for the details! I enjoy seeing others’ use cases :slight_smile: and it helps me understand.

Remember that because of Intersection, if any of those terms is null, then the whole result will be empty. You’ll need logic to leave out the indexes whose parameters aren’t given.
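To see why, here is a plain-JavaScript sketch of the set semantics (not actual FQL): a Match on a missing term behaves like an empty set, and intersecting anything with an empty set empties the whole result.

```javascript
// Plain-JS sketch: intersecting sets with an empty set yields an empty
// set, which is what happens when one Intersection term is null/undefined.
const intersect = (sets) =>
  sets.reduce((acc, s) => acc.filter((x) => s.includes(x)));

const byPost = ['c1', 'c2', 'c3']; // e.g. Match(Index('comments_by_post'), id)
const byTitle = ['c2', 'c3'];      // e.g. Match(Index('comments_by_title'), title)
const byMissing = [];              // a Match whose term was never provided

console.log(intersect([byPost, byTitle]));            // ['c2', 'c3']
console.log(intersect([byPost, byTitle, byMissing])); // []
```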

And size will need to be size: limit + offset, since you need to fetch all of them and get rid of the ones you don’t want.

From the docs: “The cursor has a structure identical to the associated index’s values field”. Looks like your indexes are returning Refs, so you would need to send the Ref (or the id, and rebuild the Ref server side) of the last comment to the server. I expect it would be easiest to send the id in the query params. This only lets you jump one page at a time, a good UX for a “load more comments” type of setup. But if you are jumping pages arbitrarily, then you’ll need to fetch everything and drop what you don’t want, like you are now.

1 Like

I got it, thanks for the clarification. By the way, how could I conditionally construct this Intersection? Is there a Fauna way to do so?

I would expect that undefined will throw an error. Have you tried this? Does it work?

The Set operations (Intersection, Union, Difference, etc.) can accept a single array of sets. That allows you to build an array with logic and provide that.

You of course have many more options if you use javascript to build a query, but it can all be done with FQL, too. If your variables are javascript vars, I recommend using javascript to append the relevant indexes to an array. But if you want to build this into a UDF, then you can pass things in as arguments and use only FQL.

Side note: you don’t need to use Lambda if you are using arrow functions.

JS example

const { commentAuthor, title, content } = req.query;

const matches = [Match(Index('comments_by_post'), req.params.id)];
if (commentAuthor) matches.push(Match(Index('comments_by_user'), commentAuthor));
if (title) matches.push(Match(Index('comments_by_title'), title));
if (content) matches.push(Match(Index('comments_by_content'), content));

const { data } = await db.get().query(
    Map(
       Drop(
         offset,
         Paginate(
           Intersection(matches),
           { size: limit + offset }
         )
       ),
       (comment) => Get(comment)
     )
);

UDF example

Create the function

CreateFunction({
  name: 'search_comments',
  body: Query(
    Lambda(
      ['post', 'author', 'title', 'content', 'limit', 'offset'],
      Let(
        {
          matches0: [Match(Index('comments_by_post'), Var('post'))],
          matches1: If(
            IsNull(Var('author')),
            Var('matches0'),
            Append(
              [Match(Index('comments_by_user'), Var('author'))],
              Var('matches0')
            )
          ),
          matches2: If(
            IsNull(Var('title')),
            Var('matches1'),
            Append(
              [Match(Index('comments_by_title'), Var('title'))],
              Var('matches1')
            )
          ),
          matches3: If(
            IsNull(Var('content')),
            Var('matches2'),
            Append(
              [Match(Index('comments_by_content'), Var('content'))],
              Var('matches2')
            )
          )
        },
        Map(
          Drop(
            Var('offset'),
            Paginate(Intersection(Var('matches3')), {
              size: Add(Var('limit'), Var('offset'))
            })
          ),
          Lambda('comment', Get(Var('comment')))
        )
      )
    )
  )
})

EDIT: I removed the arrow function and showed using straight FQL Lambda.

call from server

const { commentAuthor, title, content } = req.query;
const limit = req.query.limit ? parseInt(req.query.limit, 10) : 0;
const offset = req.query.offset ? parseInt(req.query.offset, 10) : 0;

const { data } = await db.get().query(
  Call(
    Function('search_comments'),
    req.params.id,
    commentAuthor ?? null, // pass null, not undefined, so IsNull works
    title ?? null,
    content ?? null,
    limit,
    offset
  )
);
1 Like

Thanks man, I finally got it to work as expected :v:

Here’s the full function for reference if other people find it useful.

// GET comments by post
router.get('/:id/comments', async (req, res, next) => {
  try {
    const limit = parseInt(req.query.limit, 10);
    const offset = parseInt(req.query.offset, 10) || 0;

    const { commentAuthor, title, content } = req.query;

    let size;
    if (!limit) {
      size = undefined;
    } else if (!offset) {
      size = limit;
    } else {
      size = limit + offset;
    }

    const matches = [];

    matches.push(Match(Index('comments_by_post'), req.params.id));
    if (commentAuthor) {
      matches.push(Match(Index('comments_by_user'), commentAuthor));
    }
    if (title) {
      matches.push(Match(Index('comments_by_title'), title));
    }
    if (content) {
      matches.push(Match(Index('comments_by_content'), content));
    }

    const { data } = await db.get().query(
      Map(
        Drop(
          offset,
          Paginate(Intersection(matches), {
            size,
          })
        ),
        (comment) => Get(comment)
      )
    );
    res.send(data);
  } catch (e) {
    next(e);
  }
});
2 Likes