
Context! Context! Context! Part 1 of Beyond the Code: Designing Services That Stand the Test of Time

As software engineers, it’s easy to get lost in the excitement of crafting clever business logic: the algorithms, the workflows, the elegant domain models. However, the success or failure of a service rarely hinges on its core logic alone. What really separates a fragile prototype from a resilient, scalable and maintainable system is everything else that happens around that logic: the invisible scaffolding that shapes how a service behaves, communicates, and recovers when things go wrong.

In Part 1 of Beyond the Code: Designing Services That Stand the Test of Time, I’ll explore project layout and how the physical structure of a codebase affects engineers’ ability to understand, navigate, and maintain the system. Two ideas sit at the heart of this:

  • Cognitive load - the mental effort required to figure out how pieces of a system fit together

  • Cohesion - the principle that things which work closely together should live close together in the project.

Reducing cognitive load (a bit like loosening coupling) and increasing cohesion both make a codebase feel predictable rather than puzzling, which is why thinking intentionally about where code goes is just as important as thinking about what the code does.

These ideas first began to take shape during my time working with Java and later C#, but they have matured significantly through my recent experience with Node.js and TypeScript. While I believe the concepts can be applied across a range of languages and ecosystems, they should be viewed primarily through the lens of Node.js, TypeScript and Express.js; they are unlikely to fit more opinionated languages such as Go or Rust. While I strongly believe this approach represents good practice, particularly for Express.js projects, which are at least loosely based on the Model View Controller pattern, this article is ultimately an opinion piece. It should be treated as a set of guidelines rather than rules.

I’ve called this part Context! Context! Context! because you should always consider the context in which the code you’re writing will be used when deciding where to put it in the project, as well as what to call it.

From MVC to N-Tier

My recent experience with Express.js projects suggests that their developers typically follow, or are at least loosely familiar with, the Model-View-Controller (MVC) architectural pattern. While MVC is commonly used for building user interfaces, it can also be effectively applied to service design.
  • Model - represents the business logic by defining the domain entities, their interactions, and the rules that govern their behavior and data.
     
  • View - the graphical user interface for human users or the external interface through which other services and applications interact.

  • Controller - serves as the mediator between the Model and the View, managing the flow of data and coordinating updates between them.

Project structure often follows this pattern: one directory for models, another for views, another for controllers. It’s one of a few possible starting points, but it’s also rigid. It doesn’t account for shared code, or the fact that models are often persisted through repositories rather than persisting themselves. It also ignores that business logic frequently lives outside both models and repositories, or that a service may need to talk to other services, and so on.

Another possible starting point is a Vertical Slice Architecture, where the code for each feature is grouped together in its own directory, including views, models, and repositories, instead of separating concerns into horizontal layers. When each feature has its own directory, the design encourages isolation. While this improves cohesion within a feature, it also means that shared code may be reimplemented in multiple slices, leading to duplication. In contrast, a layered architecture like MVC centralizes these concerns in shared layers, making reuse straightforward.

To have the interface of the service defined in one place, and to maximise reuse, I like to keep the parts of the service that speak to other systems and services (the views, controllers and repositories) in horizontal layers, and have the models implemented as vertical slices that emphasise the features they implement.

To get a better handle on how these kinds of projects should be structured, we can look at an N-tier architecture, but before we get there, we need to understand how views fit into services.

Routes: The views in services

In the context of the Model View Controller, the view is usually a user interface: edit boxes, buttons and other graphical elements a human interacts with to do stuff. A service doesn’t have views like this, as it interacts with other services or applications, usually via an HTTP interface.

For our purposes, in a service, the view can be considered, conceptually at least, as the router that directs HTTP requests to the right controller. For example, if I want to retrieve an entity, I make an HTTP GET call to the service’s URL for that entity, and the router sends that request to the controller, which retrieves and returns the entity to me. If I want to update the entity, I make an HTTP PUT call, and the router sends me to a different part of the same controller, and so on.
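
As a minimal sketch, assuming an Express-style router and a hypothetical user controller module (the paths and function names are illustrative), the router’s only job is to map HTTP verbs and paths onto controller functions:

// routers/user-router.ts - hypothetical example; names are illustrative
import { Router } from 'express';
import { getUser, updateUser } from './controllers/user-controller';

export const userRouter = Router();

// GET retrieves the entity via the controller
userRouter.get('/users/:id', getUser);

// PUT updates the same entity via a different part of the same controller
userRouter.put('/users/:id', updateUser);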

Not all HTTP libraries separate routes from controllers. In Node.js, Express defines routes (and their middleware) explicitly, so there is a very clear separation. Java’s Spring Boot and the equivalents in C#, for example, do not; they use annotations to map routes onto controllers. It doesn’t matter which implementation you use, the principles are the same, you just might not see the separation as clearly.

N-Tier Architecture

Now that we understand the Model View Controller pattern, its limitations for project structure, and how views map onto services as routers, let's look at how an N-Tier architecture can be a stepping stone to a better project structure.

N-Tier architecture is a design pattern that divides an application, service, or even a mobile app into multiple logical layers, where each layer communicates only with the layers adjacent to it, to maintain separation of concerns and modularity. Indeed, some consider the Model View Controller pattern itself to be a simple N-Tier architecture, with the view, controller and model as its tiers.

In the architecture shown above, HTTP requests come into the service through the router. The router passes the request to the appropriate controller. The controllers use the services to execute any business logic needed to satisfy the request, and the services use the repositories to fetch the models they operate on. However, the models aren’t really a tier of the architecture any more; they’re just data structures, so they can be removed. Also, there may be no business logic required beyond retrieving a model, in which case a controller wants to use a repository directly, even though the two aren’t adjacent in the strict layering. This means we can consolidate the architecture:
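
As a minimal sketch of that consolidated flow, assuming an Express-style setup (the module and function names are illustrative), a controller calls a service when there is business logic to run and goes straight to the repository when there isn’t:

// controllers/user.ts - hypothetical example
import { Request, Response } from 'express';
import { findUserById } from '../repositories/user-repo';
import { registerUser } from '../services/register-user-service';

// No business logic needed: the controller uses the repository directly
export const getUser = async (req: Request, res: Response) => {
  const user = await findUserById(req.params.id);
  return user ? res.json(user) : res.sendStatus(404);
};

// Business logic required: the controller delegates to a service
export const createUser = async (req: Request, res: Response) => {
  const user = await registerUser(req.body);
  res.status(201).json(user);
};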

Does this give us our initial project structure? It does seem it could be:

- controllers
- repositories
- routers
- services


However, we haven’t considered context! Routers and controllers are closely associated: you can’t have a router without an associated controller, and vice versa. Routers definitely deserve a top level directory, as they provide the interface to the service. Controllers, though, are only used by routers, so they only exist within the context of a router. They don’t need a top level directory of their own, and moving controllers into a subdirectory of routers keeps them in context:

- repositories
- routers
  - controllers
- services

There is an exception: if you’re using a framework which does not have separate routers, then the routers directory can be dropped altogether and controllers becomes the top level entry point for the service:

- controllers
- repositories
- services

Controllers aren’t the only entry point

Up to now, I’ve talked a lot about controllers because they’re the most common way for a service to receive messages: an HTTP request comes in, the router points at a controller, and the controller orchestrates the logic. However, controllers aren’t the only thing that can sit at that architectural tier.

Plenty of services aren’t driven primarily, or even at all, by HTTP messages. Maybe your service consumes messages from an SQS queue. Maybe it processes events from Kafka, reads from a stream, or handles scheduled jobs. These entry points play the exact same role as controllers: they are where messages enter the system, and they coordinate the services and repositories to process the message.

In the same way that controllers sit alongside routers, listeners (or handlers, subscribers, workers, whatever the naming is in your stack) sit at the same architectural tier as controllers. They’re peers. They are simply alternative ways the system is invoked.

Which means they belong alongside controllers in the project structure. Not buried inside services, not merged with repositories, and not hidden away in some miscellaneous directory.
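
To make that concrete, here’s a minimal sketch of a queue listener (the message shape and service name are hypothetical); it plays exactly the same role as a controller, translating the inbound message and then delegating to the services:

// listeners/order-created-listener.ts - hypothetical example
import { fulfilOrder } from '../services/fulfil-order-service';

// Called by whatever queue-polling mechanism your stack provides
export const onOrderCreated = async (rawMessage: string): Promise<void> => {
  const event = JSON.parse(rawMessage) as { orderId: string };

  // Same shape as a controller: translate the inbound message,
  // then delegate to the services to do the real work.
  await fulfilOrder(event.orderId);
};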
 
 
For example, a project structure like this:

- listeners
- repositories
- routers
  - controllers
- services

leanly expresses the architecture:
  • routers/controllers handle synchronous HTTP requests
  • listeners handle asynchronous or event-driven workloads
  • both call into the same services and repositories
  • both sit on the boundary of the system
  • neither depends on the other

Once again, this is all about context. Anything that acts as an entry point into the service, whether through HTTP, a queue, a stream, a cron job or a file drop, sits at the interface of the architecture. And anything that represents internal business logic or data access sits below that interface.

Keeping those boundaries clear makes the system easier to navigate and reason about. You can open the project and immediately see: “Ah, here are all the ways messages enter the service, and here’s everything those entry points depend on.”

That’s exactly what a good project structure should communicate.

Collaboration implies Cohesion

When you look at a project, one of the easiest ways to spot unnecessary cognitive load is to ask a simple question:

“How far do I have to travel through this codebase to understand how two things work together?”

If two pieces of code collaborate tightly (models and repositories, repositories and services, controllers and routes), then forcing the reader to jump across the project to understand that collaboration is just friction, and friction compounds over time.

So a good rule of thumb is this: 

Things that collaborate should live close together.

A classic example is models and repositories. A repository only exists to store, retrieve and manipulate a model. The model often only becomes useful when something persists or retrieves it. They’re inseparable. Splitting them into completely different top-level directories (e.g., models and repositories) might look tidy, but it disconnects two things that are conceptually glued together.

Ideally, they should even be declared in the same file as that’s the tightest possible context. When you see the model, you see its repository right next to it. No hunting, no mental juggling.
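
As a sketch of what that might look like in TypeScript, with an invented User shape and an in-memory Map standing in for real persistence:

// user.ts - hypothetical example: model and repository in one file
export interface User {
  id: string;
  name: string;
  email: string;
}

// The repository lives right next to the model it exists to persist
export class UserRepository {
  private readonly users = new Map<string, User>();

  async save(user: User): Promise<void> {
    this.users.set(user.id, user);
  }

  async findById(id: string): Promise<User | undefined> {
    return this.users.get(id);
  }
}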

However, languages sometimes have opinions about this. Java, for example, insists on one public class per file. If that’s the world you're in, you can still preserve the spirit of the rule. Put the model in a top level models directory and put its repository in a subdirectory inside it, or invert it if you think the repository is the primary entry point. The point isn’t the exact nesting; it’s reducing the distance between pieces of code that must be understood together.

There are also cases where collaborators can’t realistically share a file. For example, a User model may contain an Address model, and each may have its own repository. In this situation, the models themselves are closely related, so grouping them together, such as in a shared directory, keeps the cognitive distance small while still respecting their individual boundaries. The same applies to their repositories: the physical layout should mirror the conceptual relationships.

- repositories
  - models
    - address.ts
    - user.ts
  - address-repo.ts
  - user-repo.ts

Whatever structure you choose, the guiding principle doesn’t change:

Always consider the context of how the code is used, and organise it so that the person reading it doesn’t have to go on a scavenger hunt to understand the relationship.

That’s how you keep a project navigable, predictable and human readable. 

When collaboration is hidden by design

There’s another scenario where thinking about context becomes even more crucial: when collaboration is deliberately internal.

Imagine you’re writing a client library for a third-party API. The raw API responses aren’t part of your public interface. You don’t want consumers dealing with the API’s weird field names, inconsistent formatting, or whatever chaos it contains. Instead, the client immediately transforms the response into something clean and consistent before returning anything.

In this situation you have:
  • API response models
  • conversion logic
  • outbound client-facing models
  • the client method performing the translation
all collaborating extremely closely, but only inside the client library.

These pieces don’t need to be exposed to the rest of the service. They don’t even need to be findable by an engineer who isn’t working on the client internals. They form a small ecosystem whose boundary is the client itself.

So the structures of the API responses belong inside the client’s own small context bubble, not in any project wide models directory. If the public never sees those structures, the rest of the project shouldn’t have to know they exist. 

This is called encapsulation.

You might have something like:

- payment-api
  - payment-api-client.ts
  - responses
    - create-charge-response.ts
  - converters
    - charge-converter.ts
  - models
    - charge.ts


Here, everything that collaborates lives together. If you’re working on the payment API client, you have everything you need right in front of you. If you’re not, this entire block might as well not exist. Context is preserved exactly where it matters, and nowhere else.
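
A sketch of the client itself might look like this (the endpoint, response shape and field names are invented for illustration); the raw API structure never escapes the payment-api directory:

// payment-api/payment-api-client.ts - hypothetical example
import { CreateChargeResponse } from './responses/create-charge-response';
import { toCharge } from './converters/charge-converter';
import { Charge } from './models/charge';

export const createCharge = async (amountInPence: number): Promise<Charge> => {
  const response = await fetch('https://example.com/charges', {
    method: 'POST',
    body: JSON.stringify({ amt_p: amountInPence }),
  });

  // The raw, awkwardly named API structure stays inside this directory...
  const raw = (await response.json()) as CreateChargeResponse;

  // ...and only the clean, consumer-facing model is returned
  return toCharge(raw);
};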

Code Beyond the Domain

Not all the code we write fits neatly into the domain or the tiers I’ve been describing. Some code isn’t an entry point, a service, or a repository. It doesn’t represent business logic, persistence, or message flow. Instead, it’s the code that makes everything else work. This code is still part of the architecture, and because it’s often widely used, it can be expensive to change. That’s why it deserves the same level of thought as anything else. Where you put the code and how you structure it will affect how easy your project is to understand and evolve.

Broadly, this kind of code falls into two categories, which, for ease of explanation, I’m calling shared code and library code:

Shared code

This is code used by specific parts of the code base, but not the whole system, and therefore isn’t global. For example, you might have mappers that convert a domain model into the structure your HTTP response expects. Controllers use them, but services and repositories don’t. They’re shared, but only by a specific subset of the code base.

Shared code should not sit at the top level, floating ambiguously above everything else. It should live close to the code that actually uses it. If a mapper is only used by controllers, put it near the controllers. For example:

- controllers
  - mappers
    - to-address-mapper.ts
    - to-user-mapper.ts
  - address-controller.ts
  - user-controller.ts
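
A mapper like this is usually tiny. A hypothetical to-user-mapper.ts might do nothing more than reshape the domain model into the structure the HTTP response expects:

// controllers/mappers/to-user-mapper.ts - hypothetical example
import { User } from '../../repositories/models/user';

// The shape the HTTP response expects, which need not match the domain model
export interface UserResponse {
  id: string;
  displayName: string;
}

export const toUserResponse = (user: User): UserResponse => ({
  id: user.id,
  displayName: user.name,
});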

However, it’s important to remember to only separate code from where it’s used if it’s absolutely necessary. For example, if a mapper is only used by one controller, there’s no need to move it to a separate file from that controller. There are two obvious exceptions. They apply to shared code generally, but let’s keep the mapper example for simplicity.

If there are multiple mappers and some of them are used by more than one controller, then it may make sense to group all of the mappers together, rather than have some in controller files and some grouped.

If you need a unit test for a mapper and you can’t easily triangulate* it via the controller tests, then move it to a separate file, but keep it near the controller. 

Sometimes you have to keep pushing the shared code up a level until it’s in the right place. It’s important to place shared code at a level that balances cohesion with the code which uses it, while minimizing unnecessary cognitive load. Let’s look at an example.

Imagine you have a service that manages orders and another that manages subscriptions. Both contain their own business logic, and both need to perform a common piece of behaviour: calculating whether a customer qualifies for a loyalty discount.

The discount rule isn’t owned by the order logic, and it isn’t owned by the subscription logic. It’s a business rule that spans both, so neither service should implement it alone and you don’t want two slightly different versions diverging. So, you push the loyalty discount code up:

- services
  - loyalty-discount.ts
  - orders
    - create-order-service.ts
    - cancel-order-service.ts
  - subscriptions
    - renew-subscription-service.ts
    - upgrade-subscription-service.ts
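
A sketch of what loyalty-discount.ts might export (the qualification rule is invented for illustration) shows why it sits above both feature directories: it knows about customers, not about orders or subscriptions:

// services/loyalty-discount.ts - hypothetical example
export interface Customer {
  yearsActive: number;
  lifetimeSpendInPence: number;
}

// One shared rule, used by both the order and subscription services
export const qualifiesForLoyaltyDiscount = (customer: Customer): boolean =>
  customer.yearsActive >= 2 && customer.lifetimeSpendInPence >= 50_000;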


The principle is the same as before:

If two things collaborate, keep them close.

Shared does not mean global. Shared means shared by these parts of the code, so place it where that relationship is visible.

Library code

Then there’s the code that truly is global. Code that can be used anywhere and everywhere. UUID generators. String formatters. Small, self-contained pieces of code which give you the time now, the time in an hour, or format the time consistently as a string.
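
For instance, a hypothetical lib/date/now.ts might be nothing more than a few thin, consistently named wrappers:

// lib/date/now.ts - hypothetical example of context-free library code
export const now = (): Date => new Date();

export const oneHourFromNow = (): Date =>
  new Date(Date.now() + 60 * 60 * 1000);

// Format consistently as an ISO-8601 string everywhere in the project
export const toIsoString = (date: Date): string => date.toISOString();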

This kind of library code has no real context within the architecture. It’s not tied to a particular model, controller, or repository. It doesn’t express business logic. It doesn’t reveal anything about how the system behaves. It’s simply reusable code with no specific context.

Library code belongs right at the top level of the project: clearly visible, clearly isolated, clearly generic. When something can be used everywhere, it shouldn’t live inside any particular architectural tier, otherwise you give it a false sense of belonging and mislead the reader.

Library code should be grouped by context and pushed to the top level of the project, but you don’t want to end up with a wide and confusing top level. For example:

- date
  - now.ts
- repositories
- routers
  - controllers
- services
- string
  - to-lower.ts
  - to-upper.ts
- uuid.ts

Group the library code at the edge of the project in an appropriately named directory:

- lib
  - date
    - now.ts
  - string
    - to-lower.ts
    - to-upper.ts
  - uuid.ts
- repositories
- routers
  - controllers
- services

I’m not a fan of generic, less meaningful names like ‘lib’, but here it’s a good compromise. We’ll talk more about naming things later.

A simple rule for shared and library code is:
  • Shared code lives near the code that shares it.
  • Library code lives at the top level, because it belongs to no other code.

Where to put the Tests

Different languages and ecosystems support different conventions for test placement. In NodeJs, for example, you’ll often see one of three approaches:

  • A separate test folder alongside src  
  • A __test__ directory inside src  
  • Test files placed directly next to the code they test  

I favour a separate test folder alongside src because it’s cleaner and less noisy. The tests don’t get in the way, but they are no less important. The bigger question is whether tests should live beside the code they exercise, or in their own space.

Placing test files next to the code they test has some drawbacks:

  • They clutter the project layout, making it harder to navigate the actual implementation.  

  • They need to be filtered out when building for release, adding friction to the build pipeline.  

  • Most importantly, they subtly encourage a mindset of testing the code rather than testing the feature.  

Unit testing is valuable, but it can be restrictive if it becomes the only lens through which you view quality. A project that only tests individual functions in isolation risks missing the bigger picture: how those functions collaborate to deliver a feature.

I first learned the discipline of testing software end‑to‑end, from the outside in, when I read Growing Object‑Oriented Software, Guided by Tests by Steve Freeman and Nat Pryce. Their approach emphasises starting with tests at the system’s public interface, treating the software as a black box, and only introducing unit tests when triangulating behaviour through that interface becomes difficult or cumbersome. 

This philosophy aligns perfectly with the idea of context and cohesion: tests should live where they make the relationships between features and their verification obvious, not scattered in ways that obscure intent. By beginning at the boundary of the system and only drilling inward when necessary, you keep the cognitive load low, preserve clear architectural edges, and ensure that your test suite reflects the same navigable, predictable structure as the code itself.

A healthier approach is to think of tests in terms of features and behaviours rather than files and functions. Tests should answer questions like:

  • Does the order creation flow work end-to-end?  
  • Does the subscription renewal logic handle edge cases correctly? 
  • Does the API return the right response when given invalid input?  

These are questions about the system’s behaviour, not about whether a particular method returns the right value. Organising tests around features makes it easier to see what the system should do and whether it does it.

To reflect this philosophy, tests should generally live in their own top level directory, structured by feature rather than by file. For example:

- src
  - controllers
  - lib
  - services
  - repositories
- test
  - orders
    - create-order.test.ts
    - cancel-order.test.ts
  - subscriptions
    - renew-subscription.test.ts
    - upgrade-subscription.test.ts

Tests remain close enough to the code they exercise to preserve context, but not so close that they clutter or mislead. You can open the project and immediately see: here’s the implementation of orders, and here’s how we test orders. The relationship is visible without being noisy.
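
A feature-oriented test then exercises the service through its public interface. Here’s a sketch assuming jest and supertest against an exported Express app (the route, payload and import path are illustrative):

// test/orders/create-order.test.ts - hypothetical example using jest and supertest
import request from 'supertest';
import { app } from '../../src/app';

describe('creating an order', () => {
  it('returns 201 and the created order for a valid request', async () => {
    const response = await request(app)
      .post('/orders')
      .send({ productId: 'abc-123', quantity: 2 });

    expect(response.status).toBe(201);
    expect(response.body.productId).toBe('abc-123');
  });
});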

Unit tests still have their place, especially for complex algorithms or critical edge cases. But they should be balanced with integration and feature tests that validate behaviour across boundaries.

Just as thoughtful test placement reduces noise and clarifies intent, clear naming does the same for code: it shapes how we understand and navigate a system. Let’s have a look at how we might name things.

Naming

Kevlin Henney tells us that “One of the hardest things in software development is naming. Naming of products, of paradigms, and of parts of your code….” He goes on to explain that the compiler understands context from how and where code is used, and that we should do the same, using that context to remove unnecessary verbosity from names:

If your programming language denotes abstract classes with a keyword such as abstract, don’t repeat yourself by putting Abstract in the name or by telling the reader that its necessary use is as base class by naming it Base. If a class is a concrete class then, by definition, it is an implementation class, so don’t repeat yourself by putting Impl in the name. If your compiler and IDE can tell an integer from a string, don’t repeat yourself by encoding that detail in a variable name, as popularised by the once-popular Hungarian encryption scheme. If your testing framework has you mark your tests with a Test annotation, attribute or macro - and your test appears inside a class, file or folder named Test - don’t repeat yourself by including Test in the name of your test. Use the bandwidth of a name to tell the reader things they need to know rather than repeating what is already known. Use your bandwidth for signal rather than noise - and use less bandwidth.

-- Exceptional Naming, Kevlin Henney

When naming files, apply the same principle of signal over noise. The name should communicate the file’s purpose in its immediate context without redundant prefixes or suffixes. For example, a controller that handles user-related operations should simply be named user-controller.ts. If the file is inside a controllers directory, you don’t need to repeat ‘controller’ in the name unless it adds clarity. user.ts inside controllers is often sufficient because the directory already conveys the role.

Context matters: the directory structure provides part of the meaning, so avoid duplicating that meaning in the filename.

This guideline should be applied thoughtfully rather than rigidly, though. In most cases, avoiding redundant prefixes or suffixes keeps filenames clean and meaningful within their directory context. However, when searching across a large codebase, filenames often appear without their full path, which can lead to ambiguity. In those edge cases, adding a clarifying term, such as user-controller.ts instead of just user.ts, may improve discoverability without significantly muddying the naming convention.

If a file starts to accumulate too much responsibility (for example, multiple controllers, complex orchestration, or just too much code), it’s time to split it into smaller, focused files. The guiding principles are to preserve cohesion, keeping tightly coupled code together in a well named directory so the reader understands the relationship without hunting across the project, and to avoid fragmentation: don’t scatter related logic into distant directories. Instead, create a subdirectory that reflects the shared context.

For example, if user-controller.ts grows to handle multiple distinct flows, such as registration, profile updates and authentication, you might restructure it like this:

- controllers
  - user
    - auth-controller.ts
    - profile-controller.ts
    - register-controller.ts

This approach keeps the cognitive load low. Everything related to user controllers is in one place, and each file has a clear, single responsibility.

Listeners, models, repositories, services, shared code, library code, tests and everything else that grows beyond a single file should follow the same pattern. Group them by context in subdirectories, and name the files according to their specific role. This keeps the project navigable and predictable, reducing cognitive load while preserving clear boundaries.

Why “helpers,” “utils,” and “lib” Are Bad Names

Directories named helpers, utils, lib, Extensions, etc. are a common source of frustration, and for good reason. These names are vague, interchangeable, and often scattered inconsistently across a project. Worse, they mislead! So-called “helpers” aren’t helping; they’re doing real things. These generic labels add cognitive noise instead of clarity.

As we’ve discussed already, meaningful directory names are always better. A name should tell you what the code does or what domain it belongs to, not that it’s vaguely useful. For example, if the code formats dates, put it in a date directory. If it handles string manipulation, use string. Each name should reflect the actual responsibility of the code.

When you truly have code that is global and context free, small, reusable bits of code that don’t belong to any specific domain, then, and only then, use a single, consistent directory name like lib. This is the least bad option because it signals generic library code without pretending to describe a purpose it doesn’t have.

Don’t forget to watch as things grow and change. Don’t miss when the domain emerges and needs refactoring.

Finally

Designing a service that stands the test of time isn’t just about writing clean code, it’s about creating a structure that makes sense to the people who will live with it. Architecture and naming are silent guides for every future decision, shaping how easily others can navigate, extend, and trust the system. 

While N‑tier architectures offer clear separation of concerns, they’re not without trade‑offs. Adding too many layers without a clear purpose can lead to architectural bloat, slowing development and increasing cognitive overhead. Each tier can introduce complexity in communication and testing, so it’s important to balance modularity with simplicity. Layers should exist because they solve a real problem, not because the pattern says they should.

The principles outlined here aren’t rigid rules; they’re lenses for thinking. Every project has quirks, every team has constraints, but the goal remains the same: clarity over chaos. By grouping related code, naming with intent, and resisting the temptation of vague catch-all directories, you create a system that communicates its design without explanation. That’s the hallmark of a codebase built for longevity.

Ultimately, good structure is an investment. It pays dividends in reduced friction, faster onboarding, and fewer bugs, incidents and surprises when change inevitably comes. If you take one thing away, let it be this: code doesn’t live in isolation. Its context matters, today, tomorrow, and for every engineer who touches it next.

Acknowledgments

Thank you to the following for your reviews and comments and helping to expand my ideas:

Dom Davis
Laurent Douchy
Alex Leslie
Jon Moore
Chris Oldwood
Kevin Richards
Nayana Solanki

References

Growing Object-Oriented Software, Guided by Tests
Steve Freeman and Nat Pryce
ISBN: 9780321503626

Exceptional Naming, Kevlin Henney

 

 * Triangulation means writing enough tests against the public interface to constrain and validate all essential parts of the internal implementation without directly testing private details.
