GraphQL non-nullable array of items


When I define a query in my schema like this:

allJobs: [Job!]

the schema hosted on Fauna incorrectly ends up with a nullable array, and when I use GraphQL codegen this gives me the wrong types.
These are the types in the hosted schema on Fauna:

type Query {
  allJobs(
    # The number of items to return per page.
    _size: Int

    # The pagination cursor.
    _cursor: String
  ): JobPage!
}

# The pagination object for elements of type 'Job'.
type JobPage {
  # The elements of type 'Job' in this page.
  data: [Job]!

  # A cursor for elements coming after the current page.
  after: String

  # A cursor for elements coming before the current page.
  before: String
}

Notice the missing “!” on the data field of JobPage, which gives me the wrong type in TypeScript: all the elements in the array should be of type Job, but now they are Job, null, or undefined.
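To illustrate, here is a sketch of what typical graphql-codegen output looks like for the hosted schema’s data: [Job] field, plus a type guard that narrows the nulls away (the Maybe alias and type names are assumed from common codegen output, not copied from real generated files):

```typescript
// Sketch of codegen output for the hosted schema's `data: [Job]` field.
// `Maybe<T>` is graphql-codegen's usual alias for T | null | undefined
// (names assumed for illustration).
type Job = { title: string };
type Maybe<T> = T | null | undefined;
type JobPage = { data: Maybe<Job>[] };

// Workaround: narrow the nullable entries away with a type guard.
function nonNullJobs(page: JobPage): Job[] {
  return page.data.filter((j): j is Job => j != null);
}

const page: JobPage = { data: [{ title: "Dev" }, null] };
const jobs = nonNullJobs(page); // typed as Job[], nulls removed
```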

Any help or workaround is appreciated!

Thank you


References in Fauna can point to Documents that don’t exist. Consider the case where you have a relationship and then subsequently delete the connected Document. Or perhaps you created the reference with a bad ID. The GraphQL API is not aware of these things, so when you upload your schema, the “Page” types enforce the fact that the results can be null. This is done intentionally so that you handle the case where a data entry could be null.


Thanks! I am not sure I understand why it’s impossible to create this when I just query the documents in a collection, though; maybe I misunderstand your point. If someone adds the ! (required) marker to the return type, couldn’t there automatically be a check in the resolver that filters out any null values? As it stands, there is basically no difference between writing

allJobs: [Job!]

and

allJobs: [Job]

which to me is very confusing, since the GraphQL specification says the first one should return an array whose elements are non-nullable.

The schema you upload is not the schema that is hosted. It is a simplified version that gets parsed and used to generate the hosted schema.

You specify

allJobs: [Job!]

which is shorthand for

allJobs: [Job!] @index(name: "allJobs")

which generates a new Index for you called allJobs. The schema generation knows how to create “Page” types for Indexes, and you end up with a GraphQL field like this

allJobs: JobPage!

When you make a GraphQL query like this

{
  allJobs {
    data {
      _id
    }
  }
}

It gets compiled to something like this in FQL

Let(
  { page: Paginate(Match(Index("allJobs"))) },
  {
    allJobs: {
      data: Select("data", Map(
        Var("page"),
        job_ref => If(
          Exists(job_ref), // always checks for safety!
          Get(job_ref),    // only Get if it exists
          null             // returns null if it does not exist
        )
      ))
    }
  }
)
In principle, any Index can have entries with References to non-existent Documents, so the way your GraphQL queries are compiled always checks if Refs exist for safety.

We know that this particular Index should never return References to non-existing Documents, but the GraphQL API cannot make that assumption for you. Therefore, when the hosted schema is generated it changes the type to nullable.

Also, since Paginate always returns at least an empty array, the GraphQL type is updated to a non-nullable list, even if you didn’t specify it as such.

So it is [Job]! rather than [Job!]! because the API makes the conservative assumption that an entry could be null, and it is [Job]! rather than [Job] because the API knows that it will always return at least an empty array (which is not null).


Thanks! I know the schema is just used to generate the hosted schema. I am just wondering if it wouldn’t be more precise (maybe it’s not a bug then but a feature request) to change the FQL resolver when the user schema specifies allJobs: [Job!], perhaps adding a filter step like the one you wrote above to drop all the null values, since that is what the user schema is specifically asking for.

I am not very proficient with FQL, but I do know there is a Filter function that can remove null values from an array. See: Filter | Fauna Documentation
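To make the suggestion concrete, here is the resolver pipeline modeled in TypeScript rather than FQL (an in-memory sketch with made-up data, not Fauna’s actual implementation): map each index entry to its document or null, then apply the extra filter step that [Job!] could imply:

```typescript
// In-memory model of the generated resolver pipeline (hypothetical data).
type Ref = string;
type Job = { id: Ref; title: string };

const documents = new Map<Ref, Job>([["101", { id: "101", title: "Dev" }]]);
const indexEntries: Ref[] = ["101", "102"]; // "102" points at a deleted document

// What the resolver does today: Get the document if it exists, else null.
const pageData = indexEntries.map((ref) => documents.get(ref) ?? null);

// The proposed extra step: drop nulls before the page is returned,
// which would make a `data: [Job!]` type honest.
const filteredData = pageData.filter((job): job is Job => job !== null);
```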

That’s fair. While the schema generation is working as intended, there are clearly reasons you (and I am sure others) would prefer that a null value produce an error.

I think the default behavior would be to return the null value, which would cause the GraphQL server to send an error. Filtering would be an additional operation, which might be objectionable to other folks :sweat_smile: I don’t think we can make the call for everyone to add a filter step. If you want to filter results, you can create a custom resolver to do so.

It’s still worth considering, and we appreciate the feedback! If you feel strongly about it, feel free to start a new Topic here under Feature Requests. It’s always helpful to know a bit more about the use cases you think a FR is best suited for, as well as some examples of how it should work.

Another thing to consider is that the same Page type is reused for paginated queries that return the same type.

So for this uploaded schema

type Query {
  jobsByTitle(title: String): [Thing]!
  jobsByLocation(location: String): [Thing]!
  jobsBySalary(salary: String): [Thing]!
}

all of those fields transform to return the same Page type

type Query {
  jobsByTitle(title: String, _size: Int, _cursor: String): ThingPage!
  jobsByLocation(location: String, _size: Int, _cursor: String): ThingPage!
  jobsBySalary(salary: String, _size: Int, _cursor: String): ThingPage!
}

To accommodate nullable and non-nullable types, we would probably need different Page types. For example:

  • [Job]! => JobPage
  • [Job!]! => JobPageNonNull

If you think that is a reasonable tradeoff, then highlight that in a feature request.

NOTE: When considering different Page types, I did not address the fact that the list type is always changed to non-nullable. I already explained that the Paginate function always returns at least an empty array. If we were to change anything, IMO, it would be to make it illegal to specify a paginated result type as a nullable list, e.g. [Job]. Though I don’t feel strongly that we should do that.
