
Deep Imports Considered Harmful

Keep It Loose

Deep down we all know it’s important to componentise software systems. It allows different parts of the system - the components - to change and evolve over time with minimal effect on the rest of the system. Designed and implemented correctly, components are loosely coupled and highly cohesive.

In practice this means that components can be changed, replaced, or evolved independently, without causing widespread change throughout the system. Responsibilities that naturally change together are grouped into the same component.

Components have abstract interfaces and concrete implementations. Interfaces describe the features provided by a component and hide the concrete implementation. Users of components - usually other components - depend on the interface rather than the concrete implementation.

If clients come to depend on concrete implementations - the internal details of a component - the benefits of componentisation are lost. The component can no longer change, be replaced, or evolve independently of its clients. Internal design decisions leak across component boundaries, increasing coupling, slowing development, and making the system harder to maintain and reason about.

Let’s take a look at a particularly bad example of dependency on concrete implementation in Node.js.

The Deep Import Problem

Tight coupling happens when clients depend on a component’s concrete implementation rather than its interface. In the Node.js ecosystem, this architectural mistake most commonly shows up as what’s known as a deep import.

The Node.js ecosystem has what I consider a fantastic default package manager: npm - the Node Package Manager. I came to it from a C++ background in the early noughties, when package managers weren’t widespread, so it’s easy to see why I appreciate it so much. That said, it certainly has its critics.

The publish feature is particularly helpful. It allows you to publish a package to the central npm registry where it is publicly available for others to use. Alternatively, you can create private registries - which you have to pay for of course - and control who can access your package.

A published package is Node.js code and configuration that you want to share and use in other Node.js packages. At a conceptual level, it is a component: it has an abstract interface that clients are meant to depend on, and a concrete implementation that should be free to change.
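As a minimal sketch - the package layout and names here are hypothetical - the entry module can act as that abstract interface, re-exporting only what clients are meant to see:

// index.js - the package’s public interface
// Re-export only what clients are meant to depend on.
export { Button } from './lib/Button.js';

// lib/Button.js - the concrete implementation, free to move or change
export function Button(label) {
  return { type: 'button', label };
}

Everything under lib/ remains an implementation detail; only the names surfaced through index.js form the contract.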

Unfortunately npm does not enforce an encapsulated concrete implementation. Nothing technically prevents a client from reaching past a package’s public interface and directly referencing its internal file structure, tightly coupling itself to the concrete implementation.

This means internal design decisions leak across component boundaries, and clients become coupled to things that should be encapsulated. The result is what is known as the Deep Import Problem.

Microsoft's No Deep Imports guidance describes the term "deep import" (or "subpath import"). A deep import is any import that bypasses the package’s intended public interface and instead targets internal modules or directories explicitly. For example:

import { Button } from 'some-pkg/lib/Button'
import { privateUtil } from 'some-pkg/lib/top/secret/internals'


Contrast these with imports through the package’s declared interface and stable contract:

import { Button } from 'some-pkg/Button' 

A deep import turns an internal convenience into a public promise, without the package author’s consent.

When a client uses deep imports, they are no longer depending on an abstraction. They are depending on directory layout, file names, and internal, refactor-sensitive details. The moment an internal folder is renamed, code is reorganised, or an implementation is replaced, downstream code breaks, even though nothing about the conceptual capability of the component has changed.

While npm does not enforce an encapsulated concrete implementation, it does provide a mechanism for explicitly declaring a package’s public interface. This means that you can define, and constrain, the interface for the package and the things a client can or should depend on.
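In modern Node.js, that mechanism is the exports field in package.json. Once exports is declared, only the listed entry points can be resolved by clients; attempts to deep import anything else fail with ERR_PACKAGE_PATH_NOT_EXPORTED. A minimal sketch, reusing the hypothetical some-pkg layout from above:

{
  "name": "some-pkg",
  "exports": {
    ".": "./index.js",
    "./Button": "./lib/Button.js"
  }
}

With this in place, import { Button } from 'some-pkg/Button' resolves to ./lib/Button.js, while import { privateUtil } from 'some-pkg/lib/top/secret/internals' is rejected at resolution time - before an internal convenience can ever become a public promise.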

Used correctly, this restores the key benefits of componentisation: loose coupling, independent evolution, and components that can change internally without creating unexpected consequences across the system.

Deep imports do not only violate componentisation. Let’s see how they violate SOLID too. 

Deep Imports Through the Lens of SOLID

The SOLID principles are a set of five object‑oriented design principles defined by Robert C. Martin to help software systems remain understandable, flexible, and maintainable over time. 

Although they were originally articulated in an object-oriented context, the underlying ideas apply more broadly to software design in general: managing dependencies, defining clear interfaces, and maintaining well-defined responsibility boundaries between parts of a system.

They focus on clear responsibility boundaries, well-designed abstractions, and minimal coupling between parts of a system. Taken together, they provide guidance for building systems that are easier to change without causing widespread breakage.

While the SOLID principles are widely respected, Open-Closed and Dependency Inversion are often considered controversial because they are easy to overapply, leading to premature abstraction, unnecessary indirection, and complexity that can outweigh their intended benefits. Even so, SOLID applies particularly cleanly to deep imports. Deep imports are not a subtle or theoretical edge case; they are a concrete example of depending on implementation details, leaking responsibilities, and bypassing intentional interfaces - the very problems SOLID was designed to expose.

Let’s take a look at those violations more closely.

Dependency Inversion Principle

Dependency Inversion is violated the moment a client reaches for a deep import. Robert C. Martin defines the principle succinctly: 

High-level modules should not depend on low-level modules. Both should depend on abstractions.

Deep imports do the opposite. High-level code ends up depending directly on low-level details. Instead of both sides depending on a stable abstraction, the client hard-codes knowledge of the implementation. The dependency points in exactly the wrong direction, and internal modules become de facto public APIs.

Interface Segregation Principle

Deep imports also cut across the Interface Segregation Principle, which Martin summarises as: 

Clients should not be forced to depend upon interfaces that they do not use.

When clients bypass a package’s abstract interface and couple themselves to the concrete implementation, they are no longer depending on a small, intentional interface designed for their needs. Instead, they assemble ad-hoc interfaces that are neither cohesive, intentional, nor stable. When the internals inevitably change, consumers break - not because their requirements changed, but because they chose not to depend on the intended abstract interface.
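To make that concrete, using the same hypothetical paths as earlier, a client can stitch together its own ad-hoc interface from scattered internals:

import { privateUtil } from 'some-pkg/lib/top/secret/internals'
import { Button } from 'some-pkg/lib/Button'

Nobody designed this surface. It exists only because the files happen to be reachable, and it survives only as long as the internal layout does.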

Single Responsibility Principle

Deep imports undermine the Single Responsibility Principle, defined by Martin as:

A module should have one, and only one, reason to change.

Internals that were once free to change for purely local reasons now carry an additional, implicit responsibility: not breaking the downstream consumers who depend on them. Routine refactors become breaking changes. Components stop having a single reason to change and instead accumulate many: internal improvements, performance work, structural cleanup, and compatibility concerns for unintended clients. Deep imports quietly turn internal implementation details into long-term obligations, and in doing so erase the very boundaries componentisation is meant to create.
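To illustrate - with the same hypothetical package as before - consider a purely internal cleanup:

// Inside some-pkg: lib/top/secret/internals.js is renamed to lib/internal/utils.js.
// Nothing about the package’s public interface changes, yet this client breaks:
import { privateUtil } from 'some-pkg/lib/top/secret/internals'
// Fails at resolution time: Cannot find module 'some-pkg/lib/top/secret/internals'

A change with one legitimate, local reason behind it now has a second, unwanted one: every consumer that reached into the internals.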

Finally

Deep imports feel convenient, but they exact a long-term cost. They bypass abstraction, expose internals, and transform private design decisions into public contracts. Over time, this leads to increased coupling, fragile systems, and slower development.

Good component design relies on intentional abstract interfaces and enforced boundaries. npm may not enforce encapsulation by default, but it provides the tools to do so. Using them, and resisting deep imports, preserves the very benefits that make modular systems viable: independent evolution, safe refactoring, and change without fear.

Deep imports are not just a stylistic issue. They are an architectural failure. DON’T DO IT.
