API Architecture · May 2026

Deep Modules in GraphQL

Most GraphQL APIs do not fail because GraphQL is hard. They fail because GraphQL hands you a question that REST answers for you by default: where does each rule live? This is a working theory of the answer, drawn from the architecture of a production read API.

Boyan Balev · 15 min read · Stack: Apollo · Postgres · DataLoader · Zod
The whole article in ten seconds

The SDL is the public contract. Zod owns input syntax. Services orchestrate one use case at a time. Repositories own SQL and visibility. Resolvers are wiring. Cross-entity fields go through per-request DataLoaders that live in one registry. The interface is small. The interior is large. Every change starts at the schema and ends in SQL, and nothing in the middle gets clever. The result is an API that a new engineer can navigate by reading file paths.

Part I

The Question GraphQL Hands You

REST answers a question for you by structure. One URL, one resource, one verb. The interface forces the architecture: if you want a filtered list of products by collection, you build GET /collections/:id/products. The URL itself is a placement decision. The router routes. The handler handles. Nothing crosses.

GraphQL does not force that. GraphQL gives you a graph and asks you to pick the shape. Every field is a function. Every type is a record. Clients compose their own queries. The protocol does not care where the rule lives. The protocol cares that the rule produces the right answer.

This is the trap. The first time you build a GraphQL API, you write a few resolvers, hook them to a database, and ship. It works. The second time, a buyer asks for a filter, and you add a tiny SQL call inside the resolver. It still works. The third time, a nested field starts hydrating related entities. The fourth time, a repository starts returning unpublished rows and the resolver "remembers" to filter them out. None of these decisions look dangerous. Each one is the smallest possible change that delivers the feature.

Two years later, you have a 4000-line resolver file. A new engineer cannot tell where a published-only rule is enforced. A buyer query that should issue three SQL statements issues fifty. The integration tests pass and the production logs do not. The schema, the original contract, has deformed into a documentation of accidents.

GraphQL did not do this. The absence of an opinion did.

This is an essay about supplying that opinion. There is nothing novel in it. The pattern is older than GraphQL. It is sometimes called "deep modules," sometimes "thin resolvers," sometimes "hexagonal architecture" with a tilt. What matters is not the name. What matters is that the pattern answers one question consistently at every layer of the API: who owns this rule?

The examples below use a generic store. Products, collections, reviews, customers, orders, line items. If you have ever queried a Shopify-shaped data set, you already know the domain. The same patterns apply to any read-heavy API over a relational store: a CMS, a catalog, a directory, a ticketing system, a B2B feed. The data shape changes. The architecture does not.

Part II

Deep Modules, Thin Interfaces

A deep module has a small surface and a large interior. The outside looks simple. The inside does real work. A shallow module exposes its complexity to every caller. It looks like a wrapper, because it is one.

In a GraphQL API, the public surface is the SDL. A client types:

client → /graphql
query {
  products(page: 1, filter: { collectionId: "summer-25" }) {
    items {
      id
      title
      price
      reviews { rating, body, author { name } }
    }
    total
    hasNext
  }
}

That is the whole interface. Forty tokens, a stable shape, a precise return. Behind it, the module is doing argument validation, pagination math, SQL filter construction, count parity, foreign-key normalization, batched related-entity lookups, null coalescing, error mapping. None of that complexity is visible to the buyer. None of it should be.

Shallow vs Deep Module
SHALLOW MODULE. Surface area equals interior; every caller sees everything: validate args, build SQL filter, page math, run query, map rows, filter unpublished, load reviews, format response. The caller knows about all of it. A new field changes ten places.

DEEP MODULE. Small surface, large hidden interior. Outside: products(...). Inside: validate, paginate, query, filter, count, map, batch-load relations, normalize FKs, handle errors, shape the page. A new field changes one layer.

The temptation, especially in GraphQL, is to give every layer a slightly broader job. A "convenience helper" emerges in a service. A resolver "just adds" a small condition. A repository returns the full row in case someone needs the rest of it. Each of these is a tiny act of generosity. Each one moves a small piece of complexity outward, into the surface, where the next engineer will trip over it.

The discipline is unromantic: every layer answers one kind of question, and only that kind. When you change the rule, you change exactly one layer. The other layers are unchanged because the question they answer has not changed.

Part III

Where Each Rule Lives

This is the matrix that decides whether the codebase will be readable in two years.

| Question | Owner | Why |
| --- | --- | --- |
| What can clients ask for? | src/schemas/*.graphql | The SDL is the public contract and the documentation surface. |
| Is this input syntactically valid? | src/validations/*.ts | Zod keeps runtime checks and TypeScript types aligned. |
| How do page args become offset/limit? | src/services/*.ts | Services orchestrate one use case. They do not own SQL. |
| Which rows are public? | src/repositories/*.ts | Visibility is enforced before rows leave Postgres. |
| How does a field load relations? | resolver → ctx.loaders | Resolvers wire fields. DataLoaders batch the lookups. |
| How do modules meet? | src/loaders/registry.ts | One intentional cross-module integration point. |

The matrix is not arbitrary. It follows a single principle: rules belong where they are easiest to find when you forget. If a buyer can ask for product.reviews, where do you look when reviews stop appearing? The SDL tells you the field exists. The resolver tells you it uses a loader. The loader tells you which repository call backs it. The repository tells you the SQL that runs. You can debug this without grep.

Notice what is missing. There is no "controller." There is no "manager." There is no "facade." Each of those names is an invitation to put behavior in a place where the next engineer will not think to look. A name that does not answer a question does not belong in the architecture.

The placement test. Before writing a line of code, name the question your change answers. Then look at the matrix. If the answer to "where does this go?" is "it depends," the change is two changes glued together. Split them.
Part IV

Acquire the Root, Reuse the Parent

Most GraphQL writing focuses on a single request path: HTTP enters, a resolver runs, a response leaves. That is half of the picture. A query that traverses any depth has two flows, not one, and treating them the same is the source of the worst performance problems in GraphQL.

Take the query from Part II: a page of products, each with its reviews, each review with its author. The naive mental model is "one resolver per field, just call the database from each." That mental model works for a flat query of one product. It collapses for a list.

The flows are these:

Acquire and Reuse: The Two-Phase Request
PHASE 1 · ACQUIRE. One service call. One SQL statement. Query.products(args) → listProducts(ctx, args) → findProducts(filter) → SELECT ... WHERE status='active' → 25 product rows, whose parent.id values (× 25) feed phase 2.

PHASE 2 · REUSE. Many field resolvers. One batched query each. Product.reviews(parent) → loader. Product.collections(parent) → loader. Review.author(parent) → loader. Each loader collects its batch keys, fires one set query per relationship kind, and returns rows grouped by parent ID.

Conquer the root. Reuse the parent.

Services own use cases: "Which products are on this page?" One question. One call.

Loaders own expansion: "For these 25 IDs, fetch reviews." Many fields. One batch each.

Two phases. Two ownership boundaries. The resolver is the wire between them.

The architectural mantra is two sentences. Conquer the root, reuse the parent. Services own root acquisition. Loaders own child expansion. Resolvers are the wiring between them, and resolvers do almost nothing.

Why does this split matter? Because the two phases have different cost models. Acquisition is one SQL statement no matter what. Expansion is one SQL statement per relationship type, no matter how many parents. A page of 100 products with reviews, collections, and authors costs four queries with this design and four hundred without it. The difference is not a micro-optimization. It is the difference between a system that survives a popular query and one that hangs.

There is a corollary the architecture quietly enforces: services never know about loaders, and loaders never call services. The service is allowed to call one repository directly because it owns one use case. A field resolver is allowed to call a loader because it owns one relationship edge. These two privileges do not compose. If a service starts using a loader, you have hidden a batched call behind a use case, and the next engineer will not know it batches. If a loader starts calling a service, you have invited recursive expansion, and the next engineer will not know the call stack depth.

Each layer knows about exactly one neighbor. No more.
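The wiring layer is easiest to see in code. The sketch below is illustrative, not the article's actual source: the Ctx shape, the Loader type, and the stub listProducts stand in for the real service and loader registry described above.

```typescript
// A sketch of the resolver layer as pure wiring. Resolvers call exactly
// one neighbor: the root resolver calls a service, field resolvers call
// a loader. The types and the stub service are illustrative assumptions.
type Loader<K, V> = { load(key: K): Promise<V> };

type Ctx = {
  loaders: {
    reviewsByProductId: Loader<string, { rating: number }[]>;
    authorById: Loader<string, { name: string } | null>;
  };
};

// Stand-in for the real service call (root acquisition).
const listProducts = async (_ctx: Ctx, args: unknown) => ({
  items: [] as { id: string }[],
  total: 0,
  echoedArgs: args,
});

export const resolvers = {
  Query: {
    // Root acquisition: one service call, one use case.
    products: (_p: unknown, args: unknown, ctx: Ctx) => listProducts(ctx, args),
  },
  Product: {
    // Child expansion: one relationship edge, one loader. No SQL here.
    reviews: (parent: { id: string }, _a: unknown, ctx: Ctx) =>
      ctx.loaders.reviewsByProductId.load(parent.id),
  },
  Review: {
    author: (parent: { authorId: string }, _a: unknown, ctx: Ctx) =>
      ctx.loaders.authorById.load(parent.authorId),
  },
};
```

Note that no resolver contains a conditional, a filter, or a query. That absence is the design.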

Part V

The Schema Is Not Your Database

This is the most common failure mode in API design, and it gets worse every year as tools that auto-generate types from database schemas get better. The story usually goes: introspect the database, generate types, expose them as GraphQL. It is fast. It is also wrong.

The contract you ship is shaped like your storage. Buyers see your join tables. They see soft-delete columns. They see audit fields. They see the difference between price and price_cents because the codegen surfaced both. They see your migration history through column names that hint at past refactors. When you change your schema, you break their queries. When you cannot change your schema because their queries depend on its shape, your storage is now their problem.

A well-designed GraphQL contract is a translation layer. Inside, you might have a CMS-managed Postgres with implementation noise: relationship tables suffixed with _rels, status columns named _status, foreign keys that allow null because Postgres requires it, numeric strings because the ORM returns them. Outside, buyers see Product, Collection, Review, Author. The translation happens in two places: the repository (row mapping) and the SDL (field shape).

Storage-shaped:

type ProductsCollectionsRels {
  collection_id: ID!
  product_id: ID!
  _status: String!
}

Buyer-shaped:

type Product {
  collections: [Collection!]!
}

Storage-shaped:

price_cents: String
currency_code: String
discount_applied_flag: Int

Buyer-shaped:

price: Money!
# { amount: Float!, currency: String! }

Storage-shaped:

products: [Product]
products_count: String
products_page: Int
products_per_page: Int

Buyer-shaped:

products: ProductPage!
# consistent envelope, used everywhere

Two rules make this stick. The first is that nullability is honest. If a foreign key can be null in the database and that nullability has product meaning (a product with no manufacturer is genuinely a product without a manufacturer), the field is nullable in the SDL. If it cannot be null (a product always has a price), the field is non-null. Buyers learn the meaning of null in your API by reading the schema, not by trial and error.

The second is that pagination is consistent. Every list returns the same envelope. items, page, pageSize, total, totalPages, hasNext, hasPrev. Buyers learn the shape once and use it everywhere. Do not invent a one-off envelope for a single module because "this list is special." This list is not special. Buyers do not want six pagination dialects.

This sounds like style. It is not style. It is a contract about cognitive load. The buyer's mental budget is small, and you are competing with their actual job for it. Every inconsistency you ship spends some of that budget.
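One way to make the envelope impossible to get wrong is to build it in exactly one place. The article's Part IX service calls a buildPage helper; the sketch below is one plausible shape for it, under the assumption that page is 1-based and the field math is as shown.

```typescript
// A single envelope builder used by every list. A sketch of the buildPage
// helper referenced later; the exact field math is an assumption.
type Page<T> = {
  items: T[];
  page: number;
  pageSize: number;
  total: number;
  totalPages: number;
  hasNext: boolean;
  hasPrev: boolean;
};

export const buildPage = <T>(args: {
  items: T[];
  total: number;
  page: number; // 1-based
  pageSize: number;
}): Page<T> => {
  // An empty result still has one (empty) page, so totalPages >= 1.
  const totalPages = Math.max(1, Math.ceil(args.total / args.pageSize));
  return {
    items: args.items,
    page: args.page,
    pageSize: args.pageSize,
    total: args.total,
    totalPages,
    hasNext: args.page < totalPages,
    hasPrev: args.page > 1,
  };
};
```

Because every list goes through this one function, "six pagination dialects" cannot happen by accident.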

Part VI

SQL Owns the Truth

The repository is where SQL lives. It is also where visibility lives. These two facts are inseparable, and treating them as separable is the source of most data leaks.

If a product is draft, it should never reach a resolver. If a review is unmoderated, it should never reach a resolver. The temptation is to fetch broadly and filter in JavaScript: pull the rows, then apply .filter(r => r.status === 'active'). This is wrong for two reasons. The shallow reason is that it scales badly. You fetch rows you will not return. The deep reason is that it leaks invariants. Any caller can forget the filter. The next caller usually does.

The discipline: visibility is enforced in the WHERE clause. The repository is the only place where data leaves the database, so it is the only place that can guarantee what leaves. There is no opt-out. There is no "internal" mode that skips the filter. The filter is part of the function.

src/repositories/products.ts
const PUBLISHED = eq(products.status, 'active');

const buildFilterSql = (filter: ProductsFilter) => {
  const clauses = [PUBLISHED];                // always. no exceptions.
  if (filter.titleContains) {
    clauses.push(ilike(products.title, `%${filter.titleContains}%`));
  }
  if (filter.collectionId) {
    clauses.push(sql`EXISTS (
      SELECT 1 FROM product_collections pc
      WHERE pc.product_id = products.id
        AND pc.collection_id = ${filter.collectionId}
        AND pc.status = 'active'
    )`);
  }
  return and(...clauses);
};

// list and count share one predicate — no drift, no lying totals.
const where = buildFilterSql(filter);
const [items, [{ count }]] = await Promise.all([
  db.select(...).from(products).where(where).orderBy(...).limit(limit).offset(offset),
  db.select({ count: sql`COUNT(*)::int` }).from(products).where(where),
]);

A few things in that snippet are worth naming explicitly, because they are easy to skip past:

- The PUBLISHED clause is unconditional. No code path builds a filter without it.
- The join table's own status is checked (pc.status = 'active'), not just the product's. Relationship state is visibility too.
- The list query and the count query share one predicate, so the pagination total cannot drift from the rows returned.

The repository is also where you handle database quirks once. If the ORM returns numeric strings, convert them. If foreign keys are nullable, normalize them. If a timestamp is stored as text, parse it. The rest of the codebase sees clean, typed rows. There is one place to fix a quirk, and one place to look for one.
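As a sketch of what "handle quirks once" can look like, the mapper below normalizes a raw row at the repository boundary. The raw column names echo the storage-shaped examples from Part V; the exact shapes and the 'USD' fallback are illustrative assumptions, not the article's actual code.

```typescript
// Raw row as an ORM might return it. Column names are illustrative,
// echoing the storage-shaped examples earlier in the article.
type RawProductRow = {
  id: string;
  title: string;
  price_cents: string;          // numeric string from the ORM
  currency_code: string | null; // nullable in storage
  manufacturer_id: string | null;
};

// Clean, typed row: the only shape the rest of the codebase ever sees.
type ProductRow = {
  id: string;
  title: string;
  price: { amount: number; currency: string };
  manufacturerId: string | null; // null has product meaning: no manufacturer
};

// Normalize once, at the only place rows leave the database.
export const toProductRow = (raw: RawProductRow): ProductRow => ({
  id: raw.id,
  title: raw.title,
  price: {
    amount: Number(raw.price_cents) / 100,
    currency: raw.currency_code ?? 'USD', // assumed default
  },
  manufacturerId: raw.manufacturer_id,
});
```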

The repository checklist for every query:

- Visibility filter applied?
- Relationship state checked?
- Stable ordering with a tiebreaker?
- Count predicate equal to list predicate?
- Numeric strings and nullable FKs normalized?
- Set queries return either one row per key or a row with a sourceParentId for grouping?

If you cannot answer all six, the query is not done.
Part VII

The Quiet Engine

DataLoader is a small library and a large idea. It is two hundred lines of code and an entire design discipline. The idea: within a single request, batch all calls to the same loader. If three resolvers each call loaders.reviewsByProductId.load(...) with different IDs in the same tick of the event loop, DataLoader collects all the IDs, fires one query, and returns the results to each caller.

Mechanically it works by deferring. When you call .load(id), you get a promise. DataLoader adds the ID to an internal queue and schedules the actual batch fetch for the end of the current tick. Any other .load(id) calls in the same tick join the same batch. When the tick ends, the batch fires once. The returned rows are distributed to the waiting promises.
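The deferral mechanic can be illustrated in a few lines of standard-library TypeScript. This is a teaching sketch, not the real dataloader package: it omits per-key caching and error handling, and keeps only the core trick of collecting same-tick calls into one batch.

```typescript
// A deliberately tiny loader: .load() defers via a microtask, so every
// .load() call in the same tick joins a single batch. Illustrative only.
class MiniLoader<K, V> {
  private queue: { key: K; resolve: (v: V) => void }[] = [];

  constructor(private batchFn: (keys: K[]) => Promise<V[]>) {}

  load(key: K): Promise<V> {
    return new Promise<V>((resolve) => {
      if (this.queue.length === 0) {
        // First call this tick: schedule the batch for end of tick.
        queueMicrotask(() => void this.flush());
      }
      this.queue.push({ key, resolve });
    });
  }

  private async flush(): Promise<void> {
    const batch = this.queue;
    this.queue = [];
    // One call to the batch function, however many loads queued up.
    const values = await this.batchFn(batch.map((e) => e.key));
    batch.forEach((e, i) => e.resolve(values[i]));
  }
}
```

Three synchronous .load() calls against this class produce exactly one invocation of the batch function, which is the whole point.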

N+1 Before and After DataLoader
WITHOUT DATALOADER: 25 products × 1 reviews query per product = 26 queries. The products query runs, then product 1, product 2, ... through product 25 each SELECT their reviews. 26 SQL statements.

WITH DATALOADER: 25 .load() calls collapse into 1 batched query. The products query runs; .load(p1.id) through .load(p25.id) queue in the same tick; the batch fires one set query, grouped by product ID. 2 SQL statements total.

BEFORE:
reviews: (p) => fetchReviewsFor(p.id)                              // looks innocent. fires N queries.

AFTER:
reviews: (p, _, ctx) => ctx.loaders.reviewsByProductId.load(p.id)  // same shape. 1 query.

DataLoader solves N+1, but it also creates a discipline: every cross-entity field resolver must use a loader. Not "should." Must. The moment you have a field resolver that calls a repository or a service directly, and a client requests that field inside a list, you have N+1. The fix is the same every time. The rule is the same every time. Make it the only path.

Three details make DataLoader work in production:

- Loaders are created per request, in one registry. A request-scoped cache cannot leak rows between users, and every resolver finds its loaders in the same place: ctx.loaders.
- Batch size is capped (maxBatchSize: 500), so a large page cannot generate an unbounded IN clause.
- Results come back in the input key order, one entry per key, so callers can rely on positional correspondence.

There is a fourth discipline that is less obvious: loaders return shapes, not entities. A "reviews by product ID" loader returns Review[][] in the same order as the input keys. A "manager by ID" loader returns Manager | null for each key. The shape encodes the cardinality of the relationship. The resolver does not have to think about it. This sounds pedantic until you have a resolver that expects a single value and gets an array, and the error surfaces three layers away from the cause.

A worked example, in code, is worth more than another paragraph:

src/loaders/registry.ts
import DataLoader from 'dataloader';

export const createLoaders = (db: Database) => ({
  // many-to-one parent → returns Review[] per productId, in input order
  reviewsByProductId: new DataLoader<string, Review[]>(
    async (productIds) => {
      const rows = await findReviewsForProductIds(db, productIds);
      const byProduct = Map.groupBy(rows, r => r.productId); // ES2024; any Map-returning group-by works
      return productIds.map(id => byProduct.get(id) ?? []);
    },
    { maxBatchSize: 500 }
  ),

  // one-to-one parent → Author | null per id
  authorById: new DataLoader<string, Author | null>(
    async (ids) => {
      const rows = await findAuthorsByIds(db, ids);
      const byId = new Map(rows.map(r => [r.id, r]));
      return ids.map(id => byId.get(id) ?? null);
    },
    { maxBatchSize: 500 }
  ),
});

Notice that the loader does almost no business logic. It calls a repository function, groups the result, and returns it in the input order. The repository function does the SQL and the visibility. The loader does the batching. Each layer answers one question.

Part VIII

Operational Guardrails

A schema is not an API. An API is a schema plus the operational behavior that makes it survivable. Clients are not always benign. Networks fail. Databases hiccup. The graph permits queries that the database cannot answer in finite time. None of this is in the SDL. All of it is part of the contract.

The minimum set of guardrails, in rough order of how often they save you:

Max query depth: 8. Reject anything deeper. Stops exponential fan-out.
Max page size: 100. Never trust the client's pageSize argument.
Max DataLoader batch size: 500. Chunks large IN clauses automatically.
Postgres statement_timeout: 5s. A hung query cannot hang the API.

These four numbers cap the worst-case work a single request can cause. They are not optional. A GraphQL server without them is a server waiting for one curious client to wedge it.
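Two of these caps are simple enough to sketch as plain functions. The selection-tree type below is a simplification (a real server would walk the GraphQL document AST, typically via a validation rule); the numbers match the ones above.

```typescript
// Sketches of two guardrails. The Selection shape is a simplified stand-in
// for a parsed GraphQL selection set; the caps match the article's numbers.
type Selection = { name: string; children?: Selection[] };

const MAX_DEPTH = 8;
const MAX_BATCH_SIZE = 500;

// Depth of a query's selection tree.
export const depthOf = (sels: Selection[]): number =>
  sels.length === 0
    ? 0
    : 1 + Math.max(...sels.map((s) => depthOf(s.children ?? [])));

// Reject anything deeper than the cap before any resolver runs.
export const assertDepth = (sels: Selection[]): void => {
  if (depthOf(sels) > MAX_DEPTH) throw new Error('query too deep');
};

// Chunk large key sets so no single IN clause exceeds the batch cap.
export const chunkKeys = <K>(keys: readonly K[]): K[][] => {
  const chunks: K[][] = [];
  for (let i = 0; i < keys.length; i += MAX_BATCH_SIZE) {
    chunks.push(keys.slice(i, i + MAX_BATCH_SIZE));
  }
  return chunks;
};
```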

The other operational rules are less numeric but equally load-bearing:

- Error mapping: internal failures are translated into stable, buyer-safe errors at the boundary. SQL and stack traces never reach a client.
- Contract tests over implementation tests: tests assert the shape and semantics of the public surface, so interior refactors do not churn the suite.
- Per-request context: the loader registry and any request-scoped caches are built fresh for each request, never shared across users.

None of these are GraphQL features. All of them are part of running a GraphQL API. The schema and the guardrails are inseparable, and the schema is not finished until the guardrails are.

There is one more category of guardrail worth naming honestly because most teams skip it: query cost accounting and rate limiting. Depth and page size cap individual requests. They do not cap a client's total work across a window. If you serve public traffic at any scale, you need both, and you need them at the edge, not in your application code. This article does not solve that. It just refuses to pretend the schema solves it.

Part IX

A Filter, End to End

The whole architecture is theory until you run a change through it. Here is one. A buyer asks for products(filter: { collectionId }). They want every active product in a given collection, paginated, with the usual nested fields.

The wrong instinct is to add a JavaScript filter. Load all products, then drop the ones not in the collection. This loses on every dimension: it fetches rows you will not return, it leaks the rule into the resolver, and it breaks count parity. Pagination metadata becomes a lie because total reflects the unfiltered query.

The disciplined path is small and local: four files in play, three of them changed. Each layer answers the question it owns.

01 · Start at the contract

The SDL is the public surface. The change starts there because if it does not start there, the rest of the work is invisible to buyers.

src/schemas/products.graphql
input ProductsFilter {
  titleContains: String
  collectionId: ID      # new — public surface grows by one optional field
}

02 · Mirror it in Zod

SDL types are not runtime checks. They are removed at parse time. Zod is the runtime gate. It also generates the TypeScript types the service uses internally, which keeps the contract honest end to end.

src/validations/products.ts
export const ProductsFilter = z.object({
  titleContains: z.string().min(1).max(120).optional(),
  collectionId: z.string().uuid().optional(),  // new — runtime gate matches the SDL
}).strict();

03 · The service does not change

This is the test of the architecture. A new filter changes the contract, the validation, and the SQL. The orchestrator should not change. If it does, the architecture has leaked, and the next filter will leak again.

src/services/products.ts · unchanged
export const listProducts = async (ctx: Ctx, args: unknown) => {
  const parsed = ProductsArgs.safeParse(args);
  if (!parsed.success) throw badInput(parsed.error);
  const { page, pageSize, filter } = clampPage(parsed.data);
  const { items, total } = await findProducts(ctx.db, { filter, page, pageSize });
  return buildPage({ items, total, page, pageSize });
};
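The clampPage helper the service calls above is not shown in the article. One plausible sketch, with the page-size cap from Part VIII and assumed defaults, looks like this:

```typescript
// A sketch of clampPage: normalize and bound pagination arguments before
// they reach SQL. The defaults (page 1, pageSize 25) are assumptions;
// the cap of 100 matches the article's guardrails.
type PageArgs<F> = { page?: number; pageSize?: number; filter?: F };

export const clampPage = <F>(args: PageArgs<F>) => ({
  page: Math.max(1, Math.floor(args.page ?? 1)),
  pageSize: Math.min(Math.max(1, Math.floor(args.pageSize ?? 25)), 100),
  filter: args.filter ?? ({} as F),
});
```

Clamping here, in one place, is what lets the repository trust its offset/limit math unconditionally.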

04 · Put the rule in SQL

An EXISTS clause against the join table, with the relationship state checked. The visibility filter on products stays where it is. The two predicates compose.

src/repositories/products.ts · +3 lines
const buildFilterSql = (filter: ProductsFilter) => {
  const clauses = [PUBLISHED];
  if (filter.titleContains) {
    clauses.push(ilike(products.title, `%${filter.titleContains}%`));
  }
  if (filter.collectionId) {                                // new
    clauses.push(sql`EXISTS (
      SELECT 1 FROM product_collections pc
      WHERE pc.product_id = products.id
        AND pc.collection_id = ${filter.collectionId}
        AND pc.status = 'active'
    )`);                                                  // new
  }                                                          // new
  return and(...clauses);
};

That is the entire change. Three files changed. The contract grew by one optional input. The validation grew by one optional field. The repository gained one clause. The service is unchanged. The resolver is unchanged. The loader registry is unchanged. The tests that need updating are the contract tests: one new test that asserts the filter works, one that asserts pagination is still consistent when the filter is applied.

What did not happen. No new abstraction. No "filter engine." No generic where builder that handles every domain. Three lines of conditional SQL. The architecture's job is not to make the change clever. Its job is to make the change small.

If a future filter does justify abstraction (suppose you have eight filters with similar EXISTS patterns), build it then, on top of working code, with the real shape in hand. Premature abstraction is the cousin of the shallow module. It expands surface area in anticipation of a future that may not arrive.

Three similar implementations are better than a premature abstraction.

Part X

The Working Checklist

The architecture compresses into a few questions you can run before any pull request. These are not theoretical. They are what stops a small mistake from becoming a structural one.

- Did the change start at the SDL, and does the Zod schema mirror it?
- Is every visibility rule in a WHERE clause, not in JavaScript?
- Does every cross-entity field resolve through ctx.loaders?
- Do the list predicate and the count predicate match?
- Did exactly one layer change for each question the change answers?

When you start a new module, the SOP is similar and short:

- Write the SDL types first, reusing the standard pagination envelope.
- Mirror the inputs in Zod.
- Write the repository: visibility in the WHERE clause, stable ordering, count parity, quirks normalized.
- Write the service as one use case calling one repository.
- Wire thin resolvers, add a loader for every cross-entity edge, and register it in the registry.

None of this is exciting. It is not supposed to be. The interesting work happens in product surfaces, performance frontiers, and data modeling. The architecture's job is to get out of the way of that work. When the architecture is invisible to the engineer adding a feature, it is doing its job.

Why This Holds Up

The pattern in this essay is not the only way to build a GraphQL API. It is the way that has held up across iterations of a production read API serving real traffic. The tradeoffs are concrete and worth naming honestly.

You pay for this architecture in file count. Adding a single field touches the SDL, sometimes a validation, sometimes a service, often a repository, occasionally a loader. A junior engineer's first reaction is reasonable: this is a lot of files for a small change. The answer is that the files are the point. Each one answers a question. When the question changes, exactly one file changes. The diff is local. The review is fast. The blast radius is small.

You also pay in generated noise. If your storage is managed by a CMS or any tool that produces opinionated table names, the repository has to translate. There is no way to make this disappear. The alternative is to expose those names to buyers, and that alternative is much worse than translation.

You give up some framework magic. There is no resolver decorator that auto-generates from a database table. There is no "fastify-style" plugin that wires everything for you. Every connection is a function call from one file to another. This is the boring tradeoff, and it is the one that pays off the longest. Boring code is reviewable code. Reviewable code is fixable code.

What you get in return is a system where the next engineer can find a rule by reading the architecture. Where a regression has a known place to look. Where a performance problem has a known place to start. Where a new feature has a known shape. The architecture is not novel. It is not the point that it is novel. The point is that it is consistent. Every layer answers one question.

A GraphQL API is the surface of a contract between you and every client you have not met yet. That contract should be small, stable, and honest. The architecture that produces it should be the same.

The interface is small. The interior is large. The schema is the contract. The SQL is the truth. Conquer the root and reuse the parent.

A buyer should see a clean graph. An engineer should see clear ownership. The code should answer one question at every layer: who owns this rule? When you can read your own architecture without grep, you have arrived.

The minimum bar. Schema-first contract. Zod-validated input. Service-orchestrated use cases. Repository-owned visibility. DataLoader-batched expansion. Per-request loader registry. Capped depth, page size, batch size, and statement timeout. Contract tests over implementation tests. If your GraphQL API has all of these, it will survive most of what production throws at it. If it is missing any of them, you already know which incident is coming.