Wednesday, November 9

Profilers and The Perils of Micro-Optimization

Ian Griffiths writes about the perils of optimising code that doesn't need it. He correctly points out that most parts of a typical application are never perceived as too slow by users, and so should be optimised for reliability and maintainability rather than speed.

The parts that are too slow are usually slow because of infrastructure issues; the code itself is responsible in only a small number of cases.

Ian criticises profilers because they encourage developers to chase performance gains where performance is not a serious problem. However, I'd still recommend running code through a profiler occasionally as a learning experience. Early in my programming career, I had some code with performance problems. Running it through a profiler, I found one loop was slow. This was because I had written it in the form:

    for (int i = 0; i < strlen(s); i++) { s[i] = 'A'; }

I didn't realise that strlen() was re-evaluated on every iteration of the loop, and since strlen() itself takes time proportional to the length of the string, the loop as a whole was quadratic in the string length. The profiler showed this was the cause of the bottleneck, and I learned from the experience to move the call to strlen() outside the loop.
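The fix is simply to hoist the length calculation out of the loop. A minimal sketch of the corrected version (fill_with_a is just an illustrative name, not from the original code):

    #include <string.h>

    /* Fill a string with 'A' characters. */
    void fill_with_a(char *s)
    {
        size_t len = strlen(s);   /* length computed once, not on every iteration */
        for (size_t i = 0; i < len; i++)
            s[i] = 'A';
    }

With the call hoisted, the loop does an amount of work proportional to the string length rather than to its square.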

For a production web site, I'd recommend working out an acceptable load time for each page. This should be based on how real users make use of each page, not on what you think might be technically possible.

You should then record the time taken by a random sample of requests. If the recorded times exceed the acceptable level, an alert should be sent automatically to the developer. There may be many causes of slowness, e.g. unusually high load, databases not being properly indexed, and it's not possible to anticipate them all, but this approach ensures you spend your time tackling issues that cause users genuine problems, and minimises the number of users affected by them.
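As a rough sketch of what I mean, here is a hypothetical record_request_time hook called once per request; the sample rate and acceptable time are placeholders to be chosen per page, and in a real system the alert would go to email or a monitoring tool rather than standard output:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Placeholder values: the acceptable time should be chosen per page,
       based on how real users actually use it. */
    #define SAMPLE_RATE   0.01    /* record roughly 1% of requests */
    #define ACCEPTABLE_MS 500.0   /* acceptable load time for this page */

    /* Called once per request with the measured load time in milliseconds. */
    void record_request_time(const char *page, double millis)
    {
        /* Only a random sample of requests is recorded. */
        if ((double)rand() / RAND_MAX >= SAMPLE_RATE)
            return;

        printf("%s took %.0f ms\n", page, millis);

        /* Flag anything over the acceptable level. */
        if (millis > ACCEPTABLE_MS)
            printf("ALERT: %s exceeded %.0f ms\n", page, ACCEPTABLE_MS);
    }

    int main(void)
    {
        srand((unsigned)time(NULL));
        record_request_time("/checkout", 742.0);   /* hypothetical page and timing */
        return 0;
    }

Sampling rather than logging every request keeps the monitoring overhead low while still catching slowness that affects real users.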
