What is premature optimization?

by Max Galkin

Several generations of software folklore capture the negative attitude towards premature optimization. But what is “premature” optimization exactly? I share some thoughts on the subject today.

“Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.” – Donald Knuth

“More computing sins are committed in the name of efficiency (without necessarily achieving it) than for any other single reason — including blind stupidity.” — W.A. Wulf

“The First Rule of Program Optimization: Don’t do it. The Second Rule of Program Optimization (for experts only!): Don’t do it yet.” — Michael A. Jackson

I think one missing piece in all the premature optimization talk is that it really isn’t about code optimization per se, but about software design strategy in general. Someone claiming that a code optimization is “premature” implies they can show another design strategy that is “mature” in contrast. I like to think about design as a search problem: you are at some point A in a multi-dimensional design space, and you need to get to some area B, determined by the new user stories you must support and the acceptable ranges of the myriad characteristics your solution has to satisfy. Your design documents describe the path you’re taking through that multi-dimensional space to get from A to B (or at least to some point C that is closer to B than A was). Changes you commit to the repository are the actual moves in the design space, and the amount of time you spend on each change is the actual “cost” of that move.
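
To make the search analogy concrete, here is a minimal toy sketch in Python. Everything in it is invented for illustration: the “states” stand for design snapshots, the edges for possible changes, and the weights for the effort each change would cost.

```python
import heapq

# Toy model of design-as-search. States, moves, and costs are all
# hypothetical: "A" is the current design, "B" the goal area, edges
# are possible changes, and weights are the effort each change costs.
moves = {
    "A":          {"quick_hack": 2, "refactor": 5},
    "quick_hack": {"B": 20},   # the hack makes the remaining work expensive
    "refactor":   {"B": 3},    # refactoring lowers the cost of the next move
    "B":          {},
}

def cheapest_path(start, goal):
    """Dijkstra's algorithm: the lowest-total-effort sequence of changes."""
    frontier = [(0, start, [start])]
    visited = set()
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return cost, path
        if state in visited:
            continue
        visited.add(state)
        for nxt, effort in moves[state].items():
            if nxt not in visited:
                heapq.heappush(frontier, (cost + effort, nxt, path + [nxt]))
    return None

print(cheapest_path("A", "B"))  # (8, ['A', 'refactor', 'B'])
```

Note that the locally cheapest first move (the quick hack, cost 2) is not on the cheapest overall path; that observation is the heart of the “greedy algorithm” point below.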

Now the tricky part is that the design space is not static: every move you make (and every bond you break) can potentially change the cost of any other move in the space. The extent of this potential change is determined by the nature of the move and by the “interconnectedness” characteristics of your current design, such as the coupling between modules and the cohesion within them. Naturally, every move also takes you to a different point in the space, so the distances from your position to other points in the space change. To put it simply, developing software is like trying to solve a billion-sided Rubik’s cube, where the stickers change their colors after each rotation… But, I digress.

What, then, is the difference between a “premature” optimization and a non-premature one? A premature optimization is an optimization that doesn’t bring you closer to your desired area B. It is either a move in the direction opposite from your real goal, or a move that changes the costs of other moves so unfavorably that the distance to the real goal increases. Two examples illustrate these cases:

1. Ms Piggy decides to optimize database access performance: she noticed that some extra queries are being sent by the intermediate abstraction layer, so she decides to work around this layer and send the database queries directly from several places in the code. She spends two weeks finishing this work, and her tests show that the scenario is now faster. She doesn’t realize that the performance of this scenario is of low importance to users, because it runs nightly as part of an automatically scheduled DB maintenance job; in fact, users are more concerned about the frequent connection failures. A few weeks later, someone has to undo Ms Piggy’s change in order to update the intermediate layer API with improved logging and connection reliability diagnostics. Ms Piggy’s optimization was “premature” because it was going in the direction opposite from the overall design goal and had to be undone.

2. Kermit decides to implement his module “with performance in mind”, so he avoids calling existing methods from other modules and instead writes his own, “more efficient” versions of those methods. Most of the code in the module would not have caused performance issues even if it had used the existing methods. However, for the next few years, it increases the cost of every change touching this module. Designers, unaware of the module’s bespoke implementation, underestimate the costs of design changes; coders, equally unaware, introduce subtle and costly defects. Slowly the module is updated to call into the shared libraries, and the amount of duplicated logic decreases. Kermit’s optimization increased the cost of the changes that followed it without justification, and thus the optimization was “premature”. (A sketch of this anti-pattern follows below.)
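
For a minimal sketch of Kermit’s anti-pattern, consider the following; all names are invented for illustration. The private copy is “faster” only in Kermit’s mind, yet it quietly diverges from the shared behavior and doubles the number of places the rule lives:

```python
# Hypothetical sketch of Kermit's anti-pattern; all names are invented.

def normalize_username(name: str) -> str:
    """The shared, well-tested helper the rest of the codebase uses."""
    return name.strip().lower()

# Kermit's private "more efficient" copy of the same logic. It skips
# strip() to "save time", so it silently diverges from the shared rule:
# callers who assume the shared behavior get subtle defects, and every
# later change to the rule has two edit sites instead of one.
def _fast_normalize(name: str) -> str:
    return name.lower()

def register_user(name: str) -> str:
    return _fast_normalize(name)       # the costly private copy
    # return normalize_username(name)  # the cheaper shared call
```

Every future change to the normalization rule now costs two edits and a bug hunt instead of one.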

“All of this reinforces the key rule that first you need to make your program clear, well factored, and nicely modular. Only when you’ve done that should you optimize.” — Martin Fowler, Yet Another Optimization Article

“…the strategy is definitely: first make it work, then make it right, and, finally, make it fast.” — Stephen C. Johnson and Brian W. Kernighan, http://c2.com/cgi/wiki?MakeItWorkMakeItRightMakeItFast

Really, the “no premature optimization” rule is merely a heuristic for the search for an optimal path in the design space: “do not use a greedy algorithm to optimize design”. Often, to get to the desired state, you have to make multiple moves of a “refactoring” nature, which do not “improve” any observable program behavior at all; all they do is decrease the costs of the further moves towards the goal.
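
As a small, hypothetical illustration of such a move, in the spirit of the quotes above: extracting duplicated logic changes nothing a user can observe, but it reduces the cost of the optimization that follows to a single edit site.

```python
# Before the refactoring move: the same parsing logic is pasted into
# two loaders, so optimizing it means finding and editing every copy.
def load_orders(lines):
    return [tuple(line.strip().split(",")) for line in lines]

def load_refunds(lines):
    return [tuple(line.strip().split(",")) for line in lines]

# The refactoring move: extract the shared logic. Observable behavior
# is unchanged, but the *next* move just got cheaper.
def parse_record(line):
    return tuple(line.strip().split(","))

def load_orders_refactored(lines):
    return [parse_record(line) for line in lines]

def load_refunds_refactored(lines):
    return [parse_record(line) for line in lines]

# The optimization move, now a one-line change in one place, e.g.:
# parse_record = functools.lru_cache(maxsize=None)(parse_record)
```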

Finally, among the many reasons people make wrong design moves, I think miscommunication, or insufficient communication, is a major one. This situation is related to the “inner-outer world” dilemma, and premature optimization is one example of that tree falling in the forest when no one is around to hear it. To avoid wrong moves, designers and coders need a clear, shared vision of the customers’ needs.

“Users are more interested in tangible program characteristics than they are in code quality. Sometimes users are interested in raw performance, but only when it affects their work. Users tend to be more interested in program throughput than raw performance. Delivering software on time, providing a clean user interface, and avoiding downtime are often more significant.” — Steve McConnell, Code Complete, 2nd ed.