Deep Imports Are Not Faster

I wrote Deep Imports Considered Harmful for two reasons. The obvious one was to persuade people not to use deep imports, but I also wanted to reinforce that components should have an abstract interface and an encapsulated concrete implementation.

I was expecting some pushback, but the only objection I received was one I should have seen coming and covered in the original piece. This follow-up covers it.

Every time you tell someone that deep imports are a bad idea, there’s always one reply waiting in the wings: “But deep imports perform better.”

It sounds plausible. It feels intuitive. But it’s wrong.

To be fair, this myth didn’t come from nowhere. In the past, some libraries really did ship poorly structured entry points: giant index.js barrels full of side effects, no support for tree‑shaking (the build‑time optimisation that removes code you never use from your final JavaScript bundle), and no clear separation between public API and internal implementation. In those cases, deep imports sometimes felt like a workaround for bad library design rather than a choice.

Even if the claim were true, the performance improvement would have to be massive to justify architectural decisions that hamper change.

Runtime performance doesn’t care about your import path

Once a module is loaded, the runtime cost is identical whether you imported it from the package root or via a deep import. JavaScript engines don’t reward you for bypassing the public API. There’s no fast lane for deep imports. There’s just the same module, loaded the same way, doing the same work.

The only thing you’ve optimised is your ability to break when the package author reorganises their folders.

Build performance gains are imaginary or microscopic

Modern bundlers already know how to tree‑shake, dedupe, and optimise module graphs. They don’t need your help. Importing from the package root doesn’t force them to pull in the entire universe; they’re smarter than that.

In fact, today’s tooling increasingly assumes the opposite. Conditional exports, exports maps, and pre‑optimised entry points exist specifically so package authors can define stable, efficient public APIs. Deep imports don’t help these tools; they often bypass the very optimisations designed to make builds faster and more reliable.
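As a sketch of what that looks like, here is an exports map in a hypothetical package’s package.json. The "exports" field is real Node.js package.json syntax; the package name and file paths are invented:

```json
{
  "name": "some-ui-lib",
  "exports": {
    ".": {
      "import": "./dist/index.mjs",
      "require": "./dist/index.cjs"
    },
    "./button": "./dist/button.js"
  }
}
```

Once a package declares an exports map, any specifier not listed in it — such as a deep import into an internal dist folder — simply fails to resolve. The map doesn’t just document the public API; it enforces it.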

At best, deep imports save you a few milliseconds. At worst, they bypass pre‑optimised entry points and make your build slower. Either way, the difference is so small that it’s not worth the architectural debt you incur by coupling your code to someone else’s directory structure.

The trade‑off isn’t even close. You’re not choosing between “fast but fragile” and “slow but safe.” You’re choosing between “fragile for no reason” and “safe with no measurable downside.”

Deep imports don’t perform better. They just break more easily.