Friday, May 26, 2006

Handling the Browser 'Refresh' Button

When the user hits the 'refresh' button, the page resends the previous request to the server, which usually results in unexpected behavior* for web applications (*bugs).

One of the difficulties with building Web Applications is the fact that they are hosted within a 'browser'. The browser contains features that allow the user to customize their Internet browsing experience. On this note, it is EXTREMELY annoying when an application attempts to mess around with browser settings, if not threatening (in a securityish sort of way). So, in my opinion, disabling the refresh button is not an option! Besides, the user can always hit ctrl-r to get a browser refresh (and yes, you could probably catch ctrl-r with javascript, but that's not the point).


My current preferred method of dealing with the refresh button is to redirect back to the current page for all data modifying postback events. Something like the following,
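A minimal sketch (the handler and save method names are mine, for illustration):

```csharp
private void btnSave_Click(object sender, System.EventArgs e)
{
    SaveModelChanges(); // hypothetical data-modifying work

    // Redirect back to the current page so a refresh re-issues
    // a harmless GET instead of re-firing the postback
    Response.Redirect("Model.aspx");
}
```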


where Model.aspx is the current page. It might not be a great idea to hardcode the page name (in case you want to change it). I've seen code where each page exposes a Url property. In this case the code would look like the following.
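With a Url property, the redirect might look like this (the property and class names are my assumption):

```csharp
// each page exposes its own address,
// e.g. public static string Url = "Model.aspx"; on the ModelPage class
Response.Redirect(ModelPage.Url);
```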


Placing these redirects in the event handlers ensures that if the user hits 'refresh' immediately after a data changing postback the event will not be re-fired. The refresh merely calls the redirect again (essentially) and reloads the page (as expected! how wonderful and easy too!).


While the redirect method works, it is not without its difficulties. Remember, it is like a fresh navigation to the page, so it refires your "if( !Page.IsPostBack )" code. This can be a problem. Usually the "!IsPostBack" code populates list controls and re-running this code will cause current selections to be lost (very irritating for the user). Normally the ASP.NET Viewstate mechanism ensures the current selections on list controls are maintained through postbacks. My solution to the list selection problem is to store the current selection in Session and set the list selection manually.
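A sketch of the Session approach (control and key names are mine):

```csharp
// In the data-changing event handler, before redirecting:
Session["ModelSelection"] = lstModels.SelectedValue;
Response.Redirect("Model.aspx");

// Later, in Page_Load inside if( !Page.IsPostBack ),
// after repopulating the list:
if( Session["ModelSelection"] != null )
    lstModels.SelectedValue = (string)Session["ModelSelection"];
```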

Besides the extra management of Session variables, there is also the performance issue to consider. Each postback now causes 2 hits to the server, potentially calling into the database for data already displayed on the page. This seems like a high price to pay to deal with the 'refresh' button. Output caching may be a solution here...

Friday, May 19, 2006

Cost of Calling Methods in C#

What does a method call cost in C#? Should I use temps to reduce method calls?

In university, so many years ago, we learned that one of the most performance intensive operations was the method call. The professor explained the work required to create a stack frame, and move values into it, etc. Well... that was then, when maybe compilers weren't quite so efficient. Is this still true? What is the overhead when calling a method with a modern language like C#?

Why do I care about this?

Martin Fowler in his book Refactoring: Improving the Design of Existing Code, (which I am currently re-reading) describes several method creation refactorings, one of which is driven by the desire to remove temporary variables from a method (Replace Temp with Query). The motivation behind this refactoring is based on the idea that temporary variables encourage large methods and make refactoring difficult. Upon reading this refactoring I recalled the university lecture where we discussed the cost of calling methods and use of the C++ inline keyword (inline is a C++ compiler hint to not actually create a function and call it, but to generate inline code instead). In C++ 'inline' exists because function calls can be expensive. So I created a simple test to measure the overhead with method calls.

Method Call Overhead Results

Here is the code I wrote to measure the method call overhead: one method that simply does a calculation inline, and another that calls a function to do the calculation.

public class MethodCallCost
{
    private int _iterations;
    private int _valueA;
    private int _valueB;

    public MethodCallCost(int iterations, int valueA, int valueB)
    {
        _iterations = iterations;
        _valueA = valueA;
        _valueB = valueB;
    }

    public void MethodCall()
    {
        double temp = 0;
        for( int i = 1; i < _iterations; i++ )
            temp = CalculateAmount(_valueA, _valueB, i);
    }

    private double CalculateAmount(int valueA, int valueB, int divisor)
    {
        int result = valueA * valueB / divisor;
        return result;
    }

    public void Inline()
    {
        double temp = 0;
        for( int i = 1; i < _iterations; i++ )
            temp = _valueA * _valueB / i;
    }
}
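A timing harness for the comparison might look like the following Stopwatch sketch (the constructor arguments are arbitrary; it assumes the MethodCallCost class above):

```csharp
using System;
using System.Diagnostics;

public class MethodCallCostHarness
{
    public static void Main()
    {
        MethodCallCost cost = new MethodCallCost(3000000, 7, 13);

        // Time the version that calls a method each iteration
        Stopwatch watch = Stopwatch.StartNew();
        cost.MethodCall();
        watch.Stop();
        Console.WriteLine("Method Call: {0} ms", watch.ElapsedMilliseconds);

        // Time the version that does the calculation inline
        watch = Stopwatch.StartNew();
        cost.Inline();
        watch.Stop();
        Console.WriteLine("Inline: {0} ms", watch.ElapsedMilliseconds);
    }
}
```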

I created the MethodCallCost object to run 3,000,000 iterations. Here are some results,

Method Call (ms) | Inline (ms)

So, my conclusion (with this simple and perhaps insufficient test) is that method calls are cheap. The benefits of 'Replace Temp with Query' are probably worth it. Now of course you may have a complex method that calls a web service, or into a database, and in that case it probably makes sense to store the result instead of re-querying. There are still judgment calls to be made, but go ahead and create method calls, just remember to profile and tune.

Tuesday, May 16, 2006

UrlReferrer - Handle with Care

What page was the user on before this one? Hey! Request.UrlReferrer seems to have that information.

Ok, I'll admit I'm a little scared of this feature. It seems to undermine the atomicity of web requests. If I could trust it though, UrlReferrer would be extremely handy for a web app with sophisticated navigation. Imagine multiple ways to get to a screen (as any good app should allow), and the user hits the 'cancel' button, and is magically returned to the previous screen. After all, the user's natural expectation is to be returned to the screen they were just on, isn't it? Here's some code that does just that.

private void btnCancel_Click(object sender, System.EventArgs e)
{
    if(Request.UrlReferrer != null)
        Response.Redirect(Request.UrlReferrer.ToString());
    else
        Response.Redirect("Default.aspx"); // default page name is illustrative
}

This code checks that the UrlReferrer HTTP header is set and, if it is, redirects the browser to the referrer page. Otherwise the user is sent to a default page. Note: NUnitAsp doesn't set the UrlReferrer header, hence my null check. But...


If the page posts back, the UrlReferrer is set to the current page, and your navigation is now broken. You either have to store the referrer in the Page_Load event for later use, or not put any postbacks in the page (danger! danger! you will probably forget about this and break your redirection). I don't know about you, but my pages post back a lot, especially in their immature state.


You can't use Response.Redirect to navigate to pages that reference the UrlReferrer. Response.Redirect causes the browser to send a GET request for the new page. The subsequent requests, such as a POST to go to another page, now have the UrlReferrer of the current page because of the GET request made by Response.Redirect (or something like that, trace the packets and you'll see what I mean).

The more I write about it, the more convinced I am to avoid UrlReferrer. The design limitations to 'make it work' are extremely constraining and easy to forget. It doesn't work with NUnitAsp (important for me, anyway). I am also unsure which browsers even support this header.


You can potentially change your navigation strategy and use the bread crumb trail approach. Just save a navigation tree in the user's Session. I have also had some success passing navigation information as URL Query parameters (e.g. page.aspx?PreviousPage=Main.aspx). UrlReferrer is a fragile construct, try to find another way.
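A sketch of the query parameter approach (page and parameter names are illustrative, and this is a fragment, not a complete page):

```csharp
// Navigating to a detail page, carrying the origin along:
Response.Redirect("Detail.aspx?PreviousPage=Main.aspx");

// On Detail.aspx, the cancel handler reads it back:
private void btnCancel_Click(object sender, System.EventArgs e)
{
    string previous = Request.QueryString["PreviousPage"];
    // a real app should validate this value before redirecting
    Response.Redirect(previous != null ? previous : "Main.aspx");
}
```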

Friday, May 12, 2006

Protected or Private?

As of late, I've been setting the access level on class properties to protected.

I was creating a class, and the protected keyword came up in intellisense, and I paused to think. Maybe if a class inherits from this class it would be useful for it to have access to this class's member data. The same goes for private methods, why not make them protected?

So, right now, I am generally going with 'protected' for all internal class stuff. Classes that I know will not be extended get private members, and I make sure to seal those classes.

I imagine that a truly carefully designed class has a mix of private, protected and public access levels, and that setting everything internal to 'protected' is a bit naive. But for now...
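A sketch of what that mix might look like (the class names are mine): protected members in a base class meant to be extended, private members in a sealed class.

```csharp
public class Report
{
    protected string _title; // protected: subclasses may need this

    public Report(string title)
    {
        _title = title;
    }

    protected virtual string FormatTitle() // overridable formatting hook
    {
        return _title.ToUpper();
    }
}

public sealed class CsvReport : Report // sealed: no one extends this
{
    private string _separator = ","; // private: no subclasses exist

    public CsvReport(string title) : base(title)
    {
    }

    public string Header()
    {
        return FormatTitle() + _separator;
    }
}
```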

Wednesday, May 10, 2006

OleDbCommand Parameters - Order Matters

Having trouble with your MS Access Update statement? Check your parameter order.

So I've written a small parameterized Update statement to create an OleDbCommand.
I was surprised to find that my Update command was returning 0 rows updated. Everything looks fine on inspection, so I pull the command into MS Access, replace the parameters with the specified values, and it works fine. My code looks like this,

string sql = "UPDATE tblMemberVehicle SET MemberId=@MemberId, VehicleTypeId=@VehicleTypeId, Identifier=@Identifier WHERE Id=@Id";

OleDbCommand dbCommand = new OleDbCommand(sql, conn);
dbCommand.Parameters.Add("@Id", OleDbType.Integer).Value = vehicle.Id;
dbCommand.Parameters.Add("@MemberId", OleDbType.Integer).Value = vehicle.User.Id;
dbCommand.Parameters.Add("@VehicleTypeId", OleDbType.Integer).Value = vehicle.VehicleType.Id;
dbCommand.Parameters.Add("@Identifier", OleDbType.VarChar).Value = vehicle.Identifier;

Parameter names all match nicely, no exceptions from the database. I check the rows updated after running my ExecuteNonQuery statement, and always - 0 rows updated.

I have another update statement which is working, so I take a quick look at it. Lo and behold, the @Id was the last parameter added, and it corresponds with the parameter order as it appears in the SQL statement. Could it be that OleDbCommand works just like OdbcCommand and requires parameters in the order they appear in the SQL? I begin to suspect that the parameter names are actually meaningless and conduct a small experiment: I change the parameter names to nonsensical names and leave the SQL parameter names alone.

string sql = "UPDATE tblMemberVehicle SET MemberId=@MemberId, VehicleTypeId=@VehicleTypeId, Identifier=@Identifier WHERE Id=@Id";

OleDbCommand dbCommand = new OleDbCommand(sql, conn);
dbCommand.Parameters.Add("@Foo", OleDbType.Integer).Value = vehicle.User.Id;
dbCommand.Parameters.Add("@Bar", OleDbType.Integer).Value = vehicle.VehicleType.Id;
dbCommand.Parameters.Add("@Try", OleDbType.VarChar).Value = vehicle.Identifier;
dbCommand.Parameters.Add("@This", OleDbType.Integer).Value = vehicle.Id;

Any guesses as to what happens? Surprise, surprise, this code works. The parameter names are truly meaningless, well, against MS Access anyway. I believe SQL Server respects these parameter names and actually uses them.

And all this time I've been sooo careful about my parameter names, sigh.

Monday, May 08, 2006

Returning Null Objects

What does it 'mean' when a method call returns a Null object?

I believe you must define interfaces very carefully. Firstly, because once an interface is in use it is difficult to change later. Second, because every method name, parameter and output is part of the description of what the interface does. The interface 'expresses' a mental model to the developer using it: while the code only deals with inputs and outputs, we developers use interface I/O to fabricate an understanding of what's happening under the covers. It is therefore important to specify an interface carefully. So, with that in mind, I propose a few instances where it makes sense to return a Null object from a method.


Error Structures

If you are not using exceptions to relay errors or unexpected system conditions, and are instead setting a global or passed-in error structure, your method should return a Null object reference. You don't want processing to continue as if nothing was wrong; in the case of an error, the application should proceed into recovery mode. Oh look, I got a Null, something bad must have happened. I can either check the error structure and avoid using the Null object, or I can ignore the error structure and get a null reference exception (No! Bad programmer!).
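A sketch of the error-structure style (all the names here are mine, for illustration): on failure the method records the error and returns null rather than a half-built object.

```csharp
using System.Collections.Generic;

public class ErrorInfo
{
    public List<string> Messages = new List<string>();
}

public class AccountStore
{
    private Dictionary<int, string> _accounts = new Dictionary<int, string>();

    public AccountStore()
    {
        _accounts[1] = "Alice"; // canned data in place of a real database
    }

    // Records an error and returns null instead of throwing.
    public string LoadAccount(int id, ErrorInfo errors)
    {
        if( !_accounts.ContainsKey(id) )
        {
            errors.Messages.Add("Account " + id + " not found");
            return null; // caller must check errors (or the null) before continuing
        }
        return _accounts[id];
    }
}
```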

Object not found

In many cases I have returned Null from a 'find' or 'get' method when it was unable to retrieve a requested object. While it is necessary to check the return value for Null in these cases, the implementation is simple. A Null object is sometimes even acceptable, as it forms an input to another method which allows a null parameter. I am still in favor of this use of Null object references. After all, the database supports the concept of Null too.

Null Object Pattern

The Null Object Pattern entails essentially 'stubbing' out methods and creating an object indistinguishable from the real object. The client object can use the Null object as it would the real thing, and no unpleasant Null checks are required.
I believe the Null Object Pattern is only useful for stubbing out code where the object's behavior is not important to the client object. Maybe I can stub out the application logging object, for example. If I haven't configured my logging, the logging system returns a Null logging object that doesn't fail on the log call, but does nothing. I can't, however, stub out the Math object from which I am expecting performance of important calculations. I would be at risk of introducing difficult-to-find bugs; I would rather get an 'object reference not set' exception than a series of zeroes displayed in a report.
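A sketch of the logging example (the names are mine): the Null logger is indistinguishable from a real logger to the caller, but does nothing, so callers never need a null check before logging.

```csharp
public interface ILogger
{
    void Log(string message);
}

public class NullLogger : ILogger
{
    public void Log(string message)
    {
        // deliberately does nothing
    }
}

public class LogFactory
{
    private static ILogger _configured = null; // set when logging is configured

    public static ILogger GetLogger()
    {
        // Callers always get a usable logger, never null.
        return _configured != null ? _configured : new NullLogger();
    }
}
```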

Empty List or Null

If my method returns a list of objects, and there are no objects to return, it makes sense to me to return an empty list. See Tor Norbye's blog post on this. You certainly can't return either Null or an empty list and ascribe the same meaning to both; they're two different things. An empty list is easy: no objects found. Null? That just means an error to me, like the object wasn't properly initialized or something.
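A minimal sketch of the convention (the class is mine): no matches means an empty list, never null, so callers can foreach over the result without checking.

```csharp
using System.Collections.Generic;

public class ProjectFinder
{
    private List<string> _projects = new List<string>();

    public void Add(string name)
    {
        _projects.Add(name);
    }

    // Returns an empty list when nothing matches, never null.
    public List<string> FindByPrefix(string prefix)
    {
        List<string> matches = new List<string>();
        foreach( string name in _projects )
        {
            if( name.StartsWith(prefix) )
                matches.Add(name);
        }
        return matches;
    }
}
```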

Tuesday, May 02, 2006

Repository Create Pattern

I'm working on creating a new object creation pattern. This new pattern is an elegant solution that fits within the Repository pattern described in Domain-Driven Design by Eric Evans. I'm trying to ensure the following constraints in my application,

  • Objects are valid at all times

  • An Entity object without an identifier (db key) is invalid

  • Don't want to have to use GUIDs as identifiers

  • Objects that cannot be saved (because of their state) are invalid

Essentially, if an object exists I must be able to save it without getting constraint errors from the database.

Introducing 'Repository Create'

The Repository Create pattern works a lot like the factory patterns. You create objects through the repository. 'Entity' objects (objects that must be persisted) must all be created by a repository. That repository ensures uniqueness of business keys (if there are any) and applies an id to the new object. Any errors creating this new object and 'no object for you!'. The application never has partially formed, or duplicate objects floating around.

What about updates? Model objects should not permit updating of their keys. Simple. But what if I modify a property and thus make it a duplicate? You shouldn't be able to do this. Properties that are part of the uniqueness constraint must be modified through the repository, by a repository Update method.

An Example

Let's say I'm writing a project management application and I have a 'Project' object. The Project object must have a unique name so users can identify it, but that name can change too.

I simply create my project objects by calling 'create' and passing the project name to my project repository.

Project p = projectRepository.Create("Project 1");
If the name 'Project 1' is a duplicate I get an exception from the repository. If not, I get a new Project object with a valid database Id, and I am assured the name is not duplicated. Updating the name would look something like this,
projectRepository.Update(p, "Project One");
Access to the project name has to be restricted either by using the C# internal keyword or Interface casting* (*A slippery way to implement 'friends' in C#).
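A sketch of how I imagine the repository working (an in-memory id counter and dictionary stand in for the database; the internal constructor and Rename method are how I restrict access to the name):

```csharp
using System;
using System.Collections.Generic;

public class Project
{
    private int _id;
    private string _name;

    // internal: only the repository can construct or rename projects
    internal Project(int id, string name)
    {
        _id = id;
        _name = name;
    }

    public int Id { get { return _id; } }
    public string Name { get { return _name; } }

    internal void Rename(string name) { _name = name; }
}

public class ProjectRepository
{
    private int _nextId = 1; // stands in for a database-generated key
    private Dictionary<string, Project> _byName = new Dictionary<string, Project>();

    // Create enforces name uniqueness and assigns the id.
    public Project Create(string name)
    {
        if( _byName.ContainsKey(name) )
            throw new ArgumentException("Duplicate project name: " + name);
        Project p = new Project(_nextId++, name);
        _byName[name] = p;
        return p;
    }

    // Updates to uniqueness-constrained properties go through the repository.
    public void Update(Project p, string newName)
    {
        if( newName != p.Name && _byName.ContainsKey(newName) )
            throw new ArgumentException("Duplicate project name: " + newName);
        _byName.Remove(p.Name);
        p.Rename(newName);
        _byName[newName] = p;
    }
}
```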

This is the Repository Create pattern in a nutshell. It seems to be working fairly well so far, of course my requirements have also been fairly simple and I haven't had to do much optimization. More to come on this pattern...