Saturday, September 23, 2006

Attaching Debugger takes Forever!

Why does it take so long for Visual Studio to start my web app in debug mode?


Check your Symbol Server


If you have a symbol server set up via the _NT_SYMBOL_PATH environment variable, your debugger may be retrieving symbols from a slow store. After some experiments (like uninstalling and reinstalling VS add-ins), I removed the _NT_SYMBOL_PATH variable from my system and voila! Attaching the debugger is fast again.


Investigation Required


Some analysis is required here. My symbol path is quite long:


SRV*C:\data\symbols\OsSymbols*http://msdl.microsoft.com/download/symbols;c:\data\symbols\ProductionSymbols;C:\Program Files\Microsoft Visual Studio .NET 2003\SDK\v1.1\symbols;C:\winnt\system32

My plan was to remove entries from the path one at a time until performance returned. It appears that VS caches the symbol path, however (blah), making the investigation slow. I am suspicious of the Microsoft web address.


The Culprit


It appears that removing the Microsoft web address has fixed the problem. It's not really necessary to check for those symbols over and over again anyway, so I have removed it from the _NT_SYMBOL_PATH variable and will put it back when I need it.
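For the record, the trimmed variable (the path above minus the Microsoft web address) looks like this; the local store paths are of course specific to my machine:

```
_NT_SYMBOL_PATH=SRV*C:\data\symbols\OsSymbols;c:\data\symbols\ProductionSymbols;C:\Program Files\Microsoft Visual Studio .NET 2003\SDK\v1.1\symbols;C:\winnt\system32
```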

Sunday, September 17, 2006

Less Up Front Design == More Supple Design?

Does reducing the amount of up front design encourage a cleaner, more supple application design?


One Case


My little (ongoing) project, the car maintenance reminder app, is a good example of how designing for one feature at a time encourages a highly extensible code base. Since the amount of time I can spend on it is highly variable, I am not able to plan sprints, so I merely tackle each item on the backlog as it comes. Once I'm happy with a feature, I release it. In a more typical agile development environment it is fairly inefficient to release software each time a feature is completed; there is usually a fair amount of process involved. In my case, however, not having paying customers (or customers at all, for that matter) means I'm pretty much free to take risks. I don't need the release rigor.


Anyway, back to suppleness. I noticed that adding a new feature meant fully re-examining the system design as a whole. Talk about reducing the cognitive load! I only had to ensure that one new feature could be integrated with the current model. If it couldn't, I would look at what needed to be added or changed to make it work. Re-examining the design helped me look at awkward areas and forced me to think about them over and over. In the beginning, the design refactoring took longer than the changes to add the feature, but I swear that this ratio reversed as the model built up. At the moment the model is extremely extensible, and I have a collection of 'patterns' used throughout the system that I can draw upon for new features. I have not created frameworks; I have common code and patterns, but no frameworks!


Getting Real


My experience with carcarecalendar.com does not match my experience on real projects with budgets and ROI. Corporations are very concerned about risk these days, and planning is still the favourite tool for risk mitigation. I think planning is good as a communication tool. It is very considerate to inform people, in advance and preferably with a time frame, that they will be needed to do some work for you. I dislike 'emergency' panic situations that come out of bad planning, and it is not that type of planning I question. It is up-front system architecture and design. My aging brain is having more and more difficulty remembering and grasping massive, intricate systems (or I just don't care so much anymore). I am more capable and successful at designing for a handful of needs than for a multitude. Using 100 requirements to accurately design a system is just not possible.


But lack of planning means risk, doesn't it? If we can plan, we must plan, and system design is planning. I would suggest that the only 'planning' exercise worth pursuing is proof-of-concept work, where you're breaking new ground. Everything else is just project manager CYA.

But UML?


UML should be used to describe a design as it exists at a certain point in time: not as a way to plan the creation of software, but to tell the story of already functioning code. By the way, if you like UML, try StarUML; it's much better than Rose or Visio (it doesn't beat a whiteboard, though).


Design as You Go


So dispense with the docs and pictures, and keep things thin. Nobody wants to read that stuff, and software never turns out as designed anyway. Stop wasting time doing stuff you hate and start building; your software will be softer, your customers will be happier, and you will be too.

Wednesday, September 06, 2006

Agile Methodologies - Anti-Reusability?

If you subscribe to the Agile notions of YAGNI (you aren't gonna need it) and DTSTTCPW (do the simplest thing that could possibly work), are you potentially writing code with limited reusability?


The Purpose of a Routine


In Code Complete, Steve McConnell identifies many reasons to create a routine. Avoiding duplicate code is the most popular, but there are others too; one of them is 'promoting code reuse'. In the Agile age, then, is it still prudent to try to make code reusable? Or should developers simply abstract when the need arises, developing for the immediate need and not for some uncertain future?


If we apply the Agile XP practices in their purest form, it could be argued that new routines should be created only when needed. Perhaps that's not quite right; perhaps we should create a function or method when a refactoring requires it. Either way, this means blocking out any thoughts of reusability. Making a method more 'reusable' than necessary still contradicts the spirit of agile.


For example, if my code always multiplies a number x by y, and y is always 2, my function should look something like this:


public int multiplyXAxisBy2(int x)
{
    return x * 2;
}

If my multiply method took two arguments, x and y, and multiplied them, it could be argued that I am 'building for the future'. Alright, I'll admit this is a contrived and extreme example. But I have seen developers argue for hard-coded values in their methods on the grounds that exposing those values as parameters violates the YAGNI principle.
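For contrast, here is what the two-argument, 'built for the future' version would look like (a sketch in Java syntax, which reads nearly the same as the C# above; the class name is my own invention):

```java
class Calculator {
    // General form: exposes both operands instead of hard-coding y = 2.
    public int multiply(int x, int y) {
        return x * y;
    }

    // The YAGNI version from the post becomes a special case of it.
    public int multiplyXAxisBy2(int x) {
        return multiply(x, 2);
    }
}
```

Whether that extra parameter is 'building for the future' or just an honest interface is exactly the judgement call the rest of this post is about.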


Balance


Like most things in software, there is no simple answer; the best you can hope for is 'it depends'. In the case of agile design and reuse, the 'it depends' postulate seems to hold. YAGNI is perhaps a reaction to the 'model the world' design dreams of the past. It is a way to pull back on the programmer's reins and say, 'Hey, the customer needs something real, today! Stop dreaming and get on track.'


Achieving balance between YAGNI and reuse means looking at your method interface and asking, 'Does it stand on its own? Does it make sense?' Constantly changing method names through refactorings is probably an indication of a poor interface. The method names should hardly change at all, so make them specific and understandable. The parameters should jibe with the method name. For example, a method like SaveAttachment() should take a parameter like an Attachment object. It should not take parameters that leave the caller trying to understand the internals of the method. Something like SaveAttachment(UserLogin, Attachment, UrlLink, AttachmentType) is probably a sign of bad object design (some of these parameters should probably be contained in the object itself).
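A minimal sketch of that refactoring, again in Java-style syntax (the field names, the AttachmentStore class, and the describe method are all hypothetical; only SaveAttachment and its stray parameters come from the example above). The extra parameters become state of the Attachment itself, so the caller passes one self-contained object:

```java
class Attachment {
    private final String ownerLogin; // was the UserLogin parameter
    private final String urlLink;    // was the UrlLink parameter
    private final String type;       // was the AttachmentType parameter

    public Attachment(String ownerLogin, String urlLink, String type) {
        this.ownerLogin = ownerLogin;
        this.urlLink = urlLink;
        this.type = type;
    }

    public String describe() {
        return type + " attachment at " + urlLink + " owned by " + ownerLogin;
    }
}

class AttachmentStore {
    // The interface now stands on its own: one parameter that matches the name.
    public String saveAttachment(Attachment attachment) {
        // (persistence details omitted in this sketch)
        return "saved: " + attachment.describe();
    }
}
```

The caller no longer needs to know which of the four values the save routine actually uses internally.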


So, in sum: use your judgement, try to look at your interfaces in isolation rather than as interconnected pieces, and hopefully you will create reusable code without straying too far from YAGNI.