Saturday, 9 January 2016

Dependency Injection: Don't Overdose!

I first encountered dependency injection, a pattern that implements inversion of control for resolving dependencies, when I was investigating the web application frameworks available for Java and decided to go with Spring (of course J2EE has dependency injection as well). When you start off with a golden hammer like dependency injection, suddenly every one of your software development challenges looks like a nail. And, as you would expect, all the books and documentation encourage this, although I suspect not consciously.

Dependency injection is a fantastic tool if you have multiple components you want to bring together to form a myriad of different applications. The perfect example is of course the Spring framework, because it is exactly that and dependency injection works very well in that context (no pun intended).

However, dependency injection should not be your default pattern for every project, and even in a project that uses it, you should apply it only where it's really needed. Don't load every object in your application into the container; you don't need to.

Dependency injection is great for reducing the coupling between components in your application and for allowing you to plug custom components into frameworks, among many other things, but one way or another it adds a further level of complexity to your application. Most dependency injection frameworks build the container at runtime either from one or more configuration files (typically XML) or declaratively, via annotations placed directly on classes.

Each method has its own drawbacks. With XML files you have a second place, other than your code, to understand and potentially to debug. With the declarative method you have no easy overview of exactly what is going into the container at runtime and you have to be careful about objects being instantiated and injected unintentionally.
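
For illustration, this is the kind of XML bean definition the first method involves (the bean names and classes here are hypothetical). The wiring lives in this file, away from the code, which is exactly the "second place to debug":

```xml
<!-- hypothetical beans.xml: the wiring is defined here, not in the code -->
<beans>
    <bean id="orderService" class="com.example.OrderService">
        <constructor-arg ref="orderRepository"/>
    </bean>
    <bean id="orderRepository" class="com.example.JdbcOrderRepository"/>
</beans>
```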

And of course an overfull container can result in very slow application startup and slow tests. If you have to fire up the dependency injection runtime and load objects into it every time you run a test, your tests become slower and more complicated. Then you won't run them or maintain them.

So what’s the solution? As the ‘Effective’ series of programming books is so fond of saying, be judicious. When you’re tempted to put an object into your container, first consider whether it really needs to be there. It might be better served, for example, by being orchestrated with other objects behind a facade, which is itself loaded into the container if necessary. That way you’ll find yourself with simpler code which is easier to maintain and test, and it may even perform a bit better at runtime.
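
The facade suggestion can be sketched in plain Java; all class names here are hypothetical:

```java
// Hypothetical example: the parser and formatter are plain objects
// wired by hand inside the facade rather than loaded into a container.
class ReportParser {
    String parse(String raw) { return raw.trim(); }
}

class ReportFormatter {
    String format(String parsed) { return "[report] " + parsed; }
}

// Only this facade is a candidate for the DI container; the objects
// it orchestrates stay out of it entirely.
class ReportFacade {
    private final ReportParser parser = new ReportParser();
    private final ReportFormatter formatter = new ReportFormatter();

    String run(String raw) {
        return formatter.format(parser.parse(raw));
    }
}
```

Tests can then instantiate `ReportFacade` directly, with no container startup at all.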

1 comment:

  1. I think you're mixing up two separate aspects here: Dependency Injection as a design principle and the usage of container frameworks that help you get code written following this principle to run. The latter is a totally optional thing to do.

    Dependency injection is not about creating pluggable components: it's about writing testable code. To achieve that, code needs to expose its dependencies (collaborators) publicly, which in practice means exposing constructor arguments. This technique moves the responsibility of providing these collaborators to the entity that creates instances of your classes and basically implements the separation of concerns (SoC) principle, as the class's responsibility gets reduced to only using the collaborators, not obtaining them.
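
    A minimal sketch of that idea, with hypothetical names: the class only uses its collaborator, so a test can hand in a stub with no container involved.

    ```java
    // The collaborator is expressed as an interface...
    interface Mailer {
        void send(String to, String body);
    }

    // ...and exposed as a constructor argument: Greeter uses the
    // Mailer but is not responsible for obtaining it.
    class Greeter {
        private final Mailer mailer;

        Greeter(Mailer mailer) {
            this.mailer = mailer;
        }

        void greet(String user) {
            mailer.send(user, "Hello, " + user + "!");
        }
    }
    ```

    In a unit test you construct `new Greeter(stub)` directly, passing a recording stub for `Mailer`.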

    This approach basically externalizes the responsibility of creating instances of your classes, and you now face the choice of writing this creating code yourself, which is perfectly fine and actually the way you'd do it anyway in unit tests, or of outsourcing that responsibility to a third party, as the general mechanism of wiring objects is a rather technical, repetitive aspect that usually results in code not core to the domain you're actually tackling.
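
    Such hand-written wiring code can be as small as this (all names hypothetical); object creation is concentrated in one composition root and no framework is involved:

    ```java
    interface Clock {
        long now();
    }

    class SystemClock implements Clock {
        public long now() { return System.currentTimeMillis(); }
    }

    class Stopwatch {
        private final Clock clock;

        Stopwatch(Clock clock) { this.clock = clock; }

        long elapsedSince(long start) { return clock.now() - start; }
    }

    class CompositionRoot {
        // The "creating code" written by hand: the one place that
        // knows which concrete implementations to wire together.
        static Stopwatch buildStopwatch() {
            return new Stopwatch(new SystemClock());
        }
    }
    ```

    A test simply bypasses the composition root and wires a fixed `Clock` itself.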

    Different DI frameworks have different histories and philosophies in how they interact with the codebase they manage and thus also shape the design quality of the code users write when working with them. According to the theoretical foundations described above, a good DI container has to provide two fundamental means:

    1. Means to define which classes shall be managed by the container.
    2. Means to define how those classes express dependencies to other components.

    With the current Spring Framework generation, the first is usually achieved by defining a set of base packages to scan for classes carrying an annotation. That aspect can also be replaced by type matches or naming conventions, so that the managed code doesn't even have to know about the container managing it.

    The second is currently achieved by annotating a constructor with either @Inject or @Autowired. Spring 4.3 will be able to detect a canonical constructor in case only one is available anyway.
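
    As a sketch (assuming Spring 4.3+ on the classpath; the class names are hypothetical):

    ```java
    // Picked up by classpath scanning of the configured base packages.
    @Component
    class InvoiceService {

        private final InvoiceRepository repository;

        // With a single constructor, Spring 4.3+ injects it without any
        // annotation; earlier versions need @Autowired or @Inject here.
        InvoiceService(InvoiceRepository repository) {
            this.repository = repository;
        }
    }
    ```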

    What I am basically trying to get at is that a good DI container will not trick you into some container-specific way of designing components but actually leverages well-designed code (immutable classes that expose their dependencies as constructor arguments) and can work with it. So it's the class's design that matters, no matter whether we talk about entities, value objects or service classes that will end up being managed by a container.