Tuesday, March 28, 2006


When should code be tuned for performance? I have seen far too many applications built with little or no consideration for performance, or at best with performance left as an afterthought. Even the phrase 'performance tune' is problematic, because it assumes that performance problems can simply be 'tuned' away, and that they are trivial (which is far from the reality). Tuning code alone is rarely enough, as the fundamental design is usually the source of the problem. Should you design for performance instead? I've seen that case too, and it usually means a complicated design and poor usability (sorry, we can't do that, it would be too slow). So when do we worry about performance?

If "Premature optimization is the root of all evil", then what does premature mean, and when does it make sense to optimize? Most performance problems stem from a naive approach to design (it works for 10 records, so it's good) and from ignoring performance until it becomes a problem. I've seen applications that used XML to pass data between every object in the system. I've seen applications where every object is a COM object (I built an app like this once). What I also see is developers making excuses for these poor design decisions, and I hear sentences like "It's too late to fix that now, that would require a full rewrite". I've been involved in far too many last-minute performance 'blitzes', hacking away at code and patching up anything that can easily be fixed. I propose that optimization should be as constant an activity as any other in an Agile development cycle.

Enter RGRPT. I propose an addition to Kent Beck's famous Red-Green-Refactor mantra: make optimization part of the refactoring process, so that the enhanced mantra goes something like Red-Green-Refactor-Profile-Tune. So, what does this mean? Here's how it works. Follow the same rules as TDD. Write a test, define the desired interface, and write code to satisfy the test. Create a naive initial design; just make the test pass. Now define a performance goal and write a test for that too. The performance tests should probably run against a well-sized database with indicative data (a real customer database is ideal here). Run your performance test against the large database and, if it fails, profile it. I highly recommend the JetBrains profiler. It is important to use a profiler, as assumptions about where code is slow are very often incorrect. Tune the code, fix the design, do whatever is needed to get that test green.
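To make the idea concrete, here is a minimal sketch of a performance goal expressed as an ordinary test. This is my own illustration, not code from any real project: the `find_customers_by_region` function, the 100,000-record dataset, and the 0.5-second budget are all assumptions chosen for the example.

```python
import time
import unittest

# Hypothetical query under test: the naive "make the test pass" implementation,
# a linear scan over an in-memory list standing in for a real data store.
def find_customers_by_region(customers, region):
    return [c for c in customers if c["region"] == region]

class CustomerQueryPerformanceTest(unittest.TestCase):
    """A performance goal written as a plain test, per the RGRPT idea."""

    def setUp(self):
        # Indicative data volume: a well-sized dataset, not 10 records.
        self.customers = [
            {"id": i, "region": "EU" if i % 2 else "US"}
            for i in range(100_000)
        ]

    def test_query_meets_performance_goal(self):
        start = time.perf_counter()
        result = find_customers_by_region(self.customers, "EU")
        elapsed = time.perf_counter() - start

        # Correctness still matters: half the generated records are "EU".
        self.assertEqual(len(result), 50_000)

        # The performance goal: complete within an agreed 0.5-second budget.
        # When this goes red, profile before tuning; don't guess at hot spots.
        self.assertLess(elapsed, 0.5)
```

Run it with any test runner (for example `python -m unittest`), ideally in the same cycle as the rest of your red-green-refactor loop, so a failing performance goal shows up immediately rather than in a last-minute blitz.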

Using TDD (Test Driven Development) in the context of the Agile 'Attitude' and my proposed RGRPT pattern ensures that the entire development process is followed in small, rapid cycles. Leaving performance tuning to the end is like leaving testing to the end, or writing the installer at the end. Ken Schwaber compares Agile development to 'Sashimi', an 'all at once' software engineering methodology. Performance tuning is as much a part of that as testing, documentation, and any other product development activity.
