What would be the best way to iterate down a tree/graph structure with graphql and FaunaDB?

Hi everyone! Thanks for taking the time to read this. I’m working on a project to be able to create an ETF-like structure out of nonprofits and ultimately hope to donate to a basket of such nonprofits following a custom weight distribution.

My current schema looks something like this -

    type Cause {
      name: String!
      connections: [CauseWeights] @relation
      charityRef: Charity
      owner: User!
      description: String
      image: String
    }

    type Charity {
      name: String! @unique
      tags: [String!]
      taxInfo: taxInfo
      causeNode: Cause!
    }

    type CauseWeights {
      parent: Cause! @relation
      child: Cause!
      weight: Float!
    }

[Image: sample GraphQL query and result]

I was able to get this working in the GraphQL playground and create connections between Causes, but I’m not sure how to iterate through such a tree structure in a UDF. GraphQL works great for requesting data when you know how deep the structure goes, but the confusion is in how to make it dynamic.
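To illustrate the fixed-depth limitation, here is a hypothetical query against the generated schema (field names like `findCauseByID` and the `data` wrapper follow Fauna’s GraphQL conventions; the ID is made up). Each level of nesting has to be spelled out by hand:

```graphql
query {
  findCauseByID(id: "3003") {
    name
    connections {
      data {
        weight
        child {
          name
          # a third level would require repeating this block again
          connections {
            data {
              weight
              child { name }
            }
          }
        }
      }
    }
  }
}
```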

Ideally, I’d like a graphql query to start from a specific cause reference and return the entire tree underneath or, to be more specific, return a list of underlying charities and their ultimate weight.

For example, a cause with two connections:

1. 50% to a sub-cause (which is itself 50% charity A and 50% charity B)
2. 50% directly to charity B

should return (A: 0.25, B: 0.75).
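The weight arithmetic in that example can be sketched in plain JavaScript (hypothetical in-memory shapes, not Fauna code): multiply weights down each path and sum per charity.

```javascript
// Hypothetical in-memory model: a cause either points at a charity
// or at weighted child causes (or both).
function flattenWeights(cause, factor = 1, out = {}) {
  if (cause.charity) {
    out[cause.charity] = (out[cause.charity] || 0) + factor;
  }
  for (const { child, weight } of cause.connections || []) {
    flattenWeights(child, factor * weight, out);
  }
  return out;
}

const A = { charity: 'A' };
const B = { charity: 'B' };
const sub = { connections: [{ child: A, weight: 0.5 }, { child: B, weight: 0.5 }] };
const root = { connections: [{ child: sub, weight: 0.5 }, { child: B, weight: 0.5 }] };

console.log(flattenWeights(root)); // { A: 0.25, B: 0.75 }
```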

Would greatly appreciate any advice or feedback on better ways to accomplish this goal. Thanks a lot!

Hi @Rohanisburg! This is a neat problem.

`Let` is one of your best friends when dealing with nested queries in FQL. Check out this gist, which roughly replicates how a nested GraphQL query can be mimicked.

The idea is that Fauna lets you return JS-like objects. So you can build an object which has a field defined by a query, which returns an object which has a field defined by a query, which returns an object which has a field…
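A minimal sketch of that pattern, in Fauna Shell syntax (illustrative only; it assumes an index `causeWeights_parent_by_cause` that returns CauseWeights refs for a given parent Cause ref):

```
Let(
  {
    ref: Ref(Collection('Cause'), '3003'),
    cause: Get(Var('ref')),
    // each connection value is itself produced by a nested Let
    connections: Map(
      Paginate(Match(Index('causeWeights_parent_by_cause'), Var('ref'))),
      Lambda(
        'cwRef',
        Let(
          { cw: Get(Var('cwRef')) },
          // ...this object could again contain fields defined by queries...
          { weight: Select(['data', 'weight'], Var('cw')) }
        )
      )
    )
  },
  { cause: Var('cause'), connections: Var('connections') }
)
```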

For your case, I think you should also check out these other FQL functions.

I’ve had a stab at what it might look like (not tested). I definitely suggest breaking this out into helper functions in whatever driver you’re using. It would then be possible to build nested queries more programmatically, which makes it easier to deal with arbitrary depths.

    // Sketch only (not tested). Assumes an index `causeWeights_parent_by_cause`
    // whose terms are the parent Cause ref and whose values are CauseWeights refs.
    Let(
      {
        ref: Ref(Collection('Cause'), '3003'),
        instance: Get(Var('ref')),
        // many-to-many with link table
        connections: Paginate(Match(Index('causeWeights_parent_by_cause'), Var('ref'))),
        connectionSummaries: Map(
          Var('connections'),
          Lambda(
            'causeWeightRef',
            Let(
              {
                causeWeight: Get(Var('causeWeightRef')),
                childRef: Select(['data', 'child'], Var('causeWeight')),
                child: Get(Var('childRef')),
                childCharityRef: Select(['data', 'charityRef'], Var('child')),
                childCharity: Get(Var('childCharityRef')),

                // childSummaryArray, e.g. [['A', .5]]
                childSummaryArray: [
                  [
                    Select(['data', 'name'], Var('childCharity')),
                    Select(['data', 'weight'], Var('causeWeight'))
                  ]
                ],
                // childSummary, e.g. { A: .5 }
                childSummary: ToObject(Var('childSummaryArray')),

                // additional levels?
                // definitions would be the same as in the outer `Let`,
                // but would also need to account for appending this child summary if it exists
                connections: Paginate(Match(Index('causeWeights_parent_by_cause'), Var('childRef'))),
                connectionSummaries: null /* ... */,
                // append Var('childSummary') before combining. Not shown. Use `Append`.
                combinedSummary: null /* ... */,
                total: null /* ... */,
                normalizedSummary: null /* ... */
              },
              // final summary in the form of { A: .5, B: .5 }
              Var('normalizedSummary')
            )
          )
        ),
        // connectionSummaries looks like:
        //   [{ A: .5, B: .5 }, { B: .5 }]
        combinedSummary: Reduce(
          Lambda(
            ['acc', 'val'],
            Merge(
              Var('acc'),
              Var('val'),
              // custom merge operation!!!
              // { A: .5, B: .5 } + { B: .5 } = { A: .5, B: 1.0 }
              Lambda(['name', 'a', 'b'], Add(Var('a'), Var('b')))
            )
          ),
          {},
          Var('connectionSummaries')
        ),
        total: null /* Reduce operation to sum all weights */,
        normalizedSummary: null /* ToArray -> Map -> ToObject operation to normalize each value */
      },
      Var('normalizedSummary')
    )

This assumes that the top level Cause has no charity of its own. Probably missed some other things. I hope that this helps!
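The helper-function idea can be sketched generically in plain JavaScript (names hypothetical; with the real driver, `level` would wrap its argument in the `Let(...)` structure shown earlier, while plain objects stand in here so the shape is visible):

```javascript
// Build an arbitrarily deep query by composing one level-builder N times.
function nest(level, depth, leaf) {
  return depth === 0 ? leaf : level(nest(level, depth - 1, leaf));
}

// One "level": a cause whose connection's child is the inner level.
// In real code this would return a q.Let(...) expression instead.
const level = (inner) => ({ connections: [{ weight: 0.5, child: inner }] });

const threeDeep = nest(level, 3, { charity: 'A' });
console.log(JSON.stringify(threeDeep));
```

Because the depth is just a number passed to `nest`, the driver-side code can generate a query for any (bounded) depth without hand-writing each level.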


@ptpaterson Thank you for such a thorough and well documented answer! I really appreciate you taking the time to write all of that out and for commenting the steps in the process.

This will definitely be many steps in the right direction. I’ll keep this saved and will comment if I run into any further difficulty. In general, though, I’d love to know what your testing process and workflow would be if you had to create something like this from scratch. Do you start in Fauna Shell and create composable UDFs, or do you tend to stick to the native code side?

Thanks again,

Awesome @Rohanisburg! I’m glad this appears helpful.

I do tons of playing around in the dashboard when I am working through ideas. Once things start to get too complicated, for example big, complicated UDFs, I like to start breaking things down into pieces in code and then update the UDFs from a script. I am a strong proponent of building and maintaining all of the schema in code, and of client applications not using too much FQL directly: they should rely on pre-defined, available UDFs. (There are also permissions benefits when users can only call UDFs rather than having direct access to Collections and Indexes.)
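For example, the “update UDFs from a script” step might look like this in Fauna Shell syntax (the function name and parameter are hypothetical):

```
Update(Function('getCauseSummary'), {
  body: Query(
    Lambda(
      'causeRef',
      /* the nested Let query from earlier, parameterized by Var('causeRef') */
      Var('causeRef')
    )
  )
})
```

A client with a role that only grants `call` on this Function can then invoke it with `Call(Function('getCauseSummary'), someRef)` without needing read access to the underlying Collections.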

I’ve been trying out all sorts of ways to make managing complex schemas easy on the code side. I’ve taken a lot of inspiration from examples like the “fwitter” Twitter clone and the blueprints now in fauna-labs. There are some ideas for unit-test-like workflows there, too: spin up a child DB, add some docs, run some queries, and make sure the results are as expected. I’ve also seen others test their code/schemas against the Docker instance.

There have been all sorts of projects, like fauna-lib and the biota framework. fauna-schema-migrate and fauna-gql-upload have bold ideas about how to structure projects. I’ve made faunadb-graphql-schema-loader. There’s also a repo to document “awesome” Fauna stuff!