Decoupling: Electronics vs. Software

While trying to work out how you would model dependency inversion and how it would look in UML, I came across an image of a person soldering wires directly into a wall socket. This was an interesting way to describe how dependency inversion decouples software components, essentially making them pluggable.


One of my main concerns regarding dependency inversion stems from something I read in Code Complete 2 by Steve McConnell: the more flexible you make your design, the more complex and unreadable it becomes.

Having looked at a few examples of dependency inversion, it has become pretty clear that the code grows more complex and the implementation of some methods becomes less obvious – sometimes even unclear. And these examples are only trivial pieces of code.
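To make the trade-off concrete, here is a minimal sketch in Python of the same trivial task written both ways. All the names (`ReportDirect`, `Writer`, `FileWriter`, `MemoryWriter`, `ReportInverted`) are hypothetical, invented purely for illustration – but the pattern shows how inverting the dependency multiplies the number of moving parts even in a toy example:

```python
from abc import ABC, abstractmethod

# Direct coupling: one class, and the behaviour is obvious at a glance.
class ReportDirect:
    def save(self, text: str, path: str) -> None:
        with open(path, "w") as f:
            f.write(text)

# Dependency inversion: the report depends on an abstraction,
# and the concrete writer is plugged in from outside.
class Writer(ABC):
    @abstractmethod
    def write(self, text: str) -> None: ...

class FileWriter(Writer):
    def __init__(self, path: str):
        self.path = path

    def write(self, text: str) -> None:
        with open(self.path, "w") as f:
            f.write(text)

class MemoryWriter(Writer):
    """An alternative 'plug' – here, one that just records output."""
    def __init__(self):
        self.contents = []

    def write(self, text: str) -> None:
        self.contents.append(text)

class ReportInverted:
    def __init__(self, writer: Writer):
        self.writer = writer

    def save(self, text: str) -> None:
        self.writer.write(text)
```

The inverted version is pluggable – `ReportInverted(MemoryWriter())` swaps the destination without touching the report – but one class with one obvious method has become four classes, and the actual behaviour of `save` is no longer visible from the call site.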

After seeing this analogy of a person soldering directly to a mains outlet, my mind started conjuring up other circuit analogies. In electronics, how would you decouple your low-level components?

The answer is the breadboard: a prototyping board that allows you to plug and unplug components to test out different circuit designs. And, like highly decoupled software designs, the problem with breadboards is that once you get beyond trivial circuits they can quickly become an incomprehensible mess that is difficult to understand just by looking at it.

Yes, they are highly decoupled. But they are ugly. If we follow the analogy a little further, then the next stage of evolution in electronic circuit design would be to design and create a printed circuit board. As the design becomes stable, the components are then soldered permanently onto a clean, well-laid-out circuit board. The complete circuit is then shelved for later application.

Here I find software and electronics tend to diverge. In software, these highly decoupled components are left at this ‘breadboard’ stage and merely locked away in a ‘black box’ that exposes only inputs and outputs. Application components are then shuffled and reconfigured on a whim to meet the changing demands of the client or business.
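The ‘black box’ idea above can be sketched as a component whose callers see only its input/output contract, reconfigured at a single wiring point. This is a minimal illustration under assumed names (`PaymentGateway`, `LiveGateway`, `TestGateway`, `Checkout` are all hypothetical), not a prescription:

```python
# The 'black box' contract: callers see only inputs and outputs.
class PaymentGateway:
    def charge(self, amount: float) -> bool:
        raise NotImplementedError

class LiveGateway(PaymentGateway):
    def charge(self, amount: float) -> bool:
        # Real network call would go here; stubbed for the sketch.
        return amount > 0

class TestGateway(PaymentGateway):
    def charge(self, amount: float) -> bool:
        return True  # always succeeds, e.g. for testing

class Checkout:
    """Depends only on the PaymentGateway contract, never a concrete box."""
    def __init__(self, gateway: PaymentGateway):
        self.gateway = gateway

    def purchase(self, amount: float) -> bool:
        return self.gateway.charge(amount)

# The 'shuffling on a whim' happens at one composition point:
checkout = Checkout(TestGateway())  # swap in LiveGateway() and nothing else changes
```

Which concrete box sits behind the contract can be changed at that one line, which is exactly the pluggability decoupling buys – and also exactly why it is hard to tell, just by reading `Checkout`, what actually happens when `purchase` is called.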

My query is: if we follow this analogy, would there be any benefit in moving software design towards more stable components? That is, moving away from highly decoupled systems towards cleaner, more understandable, yet stable designs (and possibly larger components)?

It appears to be human nature to follow opposing ideas to extremes. In our industry the current line of thinking is that coupling is bad. But in my observation, a balance of opposing ideas is usually what's required. So far the pendulum swings towards heavy decoupling, but at what point is there too much decoupling? At what point is it more beneficial to introduce controlled coupling?

It sometimes feels as though developers are blindly following (and chanting) the mantra ‘decoupling is good’. Whereas my thoughts wander: when is decoupling good, and at what point is it bad? Should decoupling be used more responsibly, applied with less broad strokes and more precision? Should decoupling be used in context, for specific situations, and with a more specific purpose? Is there a good balance between dependency inversion and plain old coupling?

To play my own devil’s advocate, I realize that you can do things in software design that cannot be achieved in electronics. However, the question still stands: are there any benefits to using more tightly coupled, more easily understandable configurations in stable designs? Or do we continue to follow the push towards indirection and decoupling just because we assume it’s better?

My final query is: is there an easy way to move from highly coupled systems to highly decoupled systems, and back again? If so, would this be a good way of shifting from highly decoupled designs to clearer ones?

With the introduction of version control systems such as CVS, and branching, would it be better to keep two versions of the same system: a highly decoupled one, and a more tightly coupled version that is easier to reuse?

Here the analogy starts to fall apart; after all, electronic circuits are physical devices, and printed circuit boards bring physical advantages. However, another perspective is to consider software development through time. If you have a complex, highly decoupled software system that is difficult to comprehend, how useful will it be – and how easily could it be modified – six months down the road?

Would it be beneficial to move towards a more coupled, more understandable piece of code for long-term storage reasons? How difficult is it really to move from a highly decoupled system to understandable code and back again? Is the price of moving from one to the other worth the reward? Can we make the change from one to the other cheaper?

One argument would be simply to become more proficient at understanding highly decoupled code – but isn’t it our role as engineers and software developers to reduce complexity, not introduce it?

