Friday, January 30, 2009

Tool Review: RockScroll

Doing my fluffy-Friday reading I found this post on Scott Hanselman's blog about a tool named RockScroll that turns your VS2005/2008 scroll bar into a thumbnail view of the open document (when working with .cs files, anyway). It also has a nifty feature: if you double-click a word in the source code, say a variable name, it will highlight all occurrences of that word in a lovely lilac color (and red in the thumbnail scroll bar - which is the useful bit, IMO).

Anyway, I'm digging this new tool and hope you might also find it useful.

Thursday, January 22, 2009


I found a new tool yesterday that saved me from having to type up some sample XML. In preparing for an XSLT demo I wanted to use some sample data returned from a stored procedure in our database. I was able to take those results and get them into a .CSV file but needed to then get it into an XML format. The idea is that the data may start out like this:

Homer,Simpson,Springfield Nuclear Power Plant
Kermit,Frog,Muppet Theatre

but I want it to end up like this:

<Employer>Springfield Nuclear Power Plant</Employer>
<Employer>Muppet Theatre</Employer>

A quick search on Google brought me to this post about a tool called XmlCsvReader. It seemed to be just what I needed, but the link to download the tool was dead (still pointing to a site that has been defunct for a while). Armed with the name of the project, though, I Googled again and found another post about it with a download that worked (three cheers to Andrew for not having a dead end!).

Syntax is easy, and the only snags I hit were the result of Query Analyzer, excuse me, SQL Server Management Studio 2005, not being able to save sproc results to a CSV file correctly. I had to type in my column headers and wrap text values in double quotes if they had commas.
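XmlCsvReader handles the conversion directly, but for a sense of what the transformation amounts to, here's a minimal hand-rolled sketch using LINQ to XML. The element names and output shape are my own illustration, not XmlCsvReader's actual output format, and the naive Split(',') is exactly why quoted fields (mentioned below) need a real CSV parser:

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

class CsvToXmlSketch
{
    static void Main()
    {
        // The sample rows from above; a real run would read these from the .CSV file.
        string[] csvLines =
        {
            "Homer,Simpson,Springfield Nuclear Power Plant",
            "Kermit,Frog,Muppet Theatre"
        };

        // Build one <Employee> element per row; column order matches the sample data.
        XElement root = new XElement("Employees",
            csvLines.Select(line =>
            {
                string[] fields = line.Split(',');
                return new XElement("Employee",
                    new XElement("FirstName", fields[0]),
                    new XElement("LastName", fields[1]),
                    new XElement("Employer", fields[2]));
            }));

        Console.WriteLine(root);
    }
}
```

Note that Split(',') breaks on commas inside quoted values, which is where a dedicated tool like XmlCsvReader earns its keep.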

Wednesday, January 21, 2009

Keeping Current

Recently my manager asked me for examples of things I do outside of work to stay current in the development world. It seemed like a good topic for a blog post, so here goes. Essentially I do three things to stay current: participate in the local .NET user group, attend the regional Code Camp (when possible), and read blogs/books.

Local .NET User Group
I’ve been involved in the local VTdotNET user group for about 4 years now. It is held monthly and covers a variety of topics related to .NET development. In the past I presented some “newbie” sessions for the user group. I found the exercise of putting a presentation together to be a good way to force myself to learn a topic in more depth. As a side note, when I was preparing a presentation I would always do a dry run of it in-house for my co-workers as a tech chat or lunch-and-learn. Doing that led to questions I hadn’t considered and resulted in a more thorough presentation.

Another benefit to participating in our local .NET user group has been the number of books I’ve won as door prizes. The books I have which have familiarized me with WCF and LINQ were door prizes. The book I’m currently skimming is “Programming ASP.NET 3.5” put out by Microsoft Press.

I should also mention one of my new co-workers runs the VT SQL Server user group and I've committed to him that I would begin attending more regularly once they get going again.

Code Camp
Twice a year there is a regional Code Camp held in Waltham, MA. Code camps are free events, held outside of work hours, that are run by the development community for the development community (here’s a link to the Code Camp Manifesto). I’ve been attending these for about 4 years and they are marvelous opportunities to learn. Although the event itself is free the fact it’s held in Waltham, MA means most people need to cover travel expenses. I grew up in that area, though, so I stay with family which makes it an easy decision for me. Here are links to the last presentation schedules for Code Camp 8, Code Camp 9 and Code Camp 10 if you want a sense of the topics covered.

Finally, I read to stay current. In addition to the books I mentioned earlier, there are a few blogs I follow via RSS. Chris Bowen is our regional Microsoft developer evangelist; his blog is how I learn about upcoming code camps and MSDN road shows. Julie Lerman runs our local .NET user group. Lately her blog has focused mainly on Entity Framework (EF), but that’s because she’s in the midst of writing a book on the topic for O’Reilly. While I’m not that interested in EF, she also links to other topics (plus following her blog gives us something to talk about socially at the local user group meetings). I also follow the Software by Rob blog. It’s not really a technical blog, but he writes on our industry in general. While Ted Dziuba’s blog is technical, his topics rarely align with what I do. He has an extremely funny writing style, though, and the nuggets of information I pick up are usually pretty good.

There are other things I do to stay current, such as viewing webcasts from Microsoft, the PolymorphicPodcast or other sources, but it’s difficult to find dedicated time for watching/listening. Having said that, the new job is going to require me to get back up to speed on ASP.NET programming, and these are resources I’m sure I’ll rely on more heavily than I have the past couple of years.

Friday, January 16, 2009

Fun with progress bars

You could be forgiven for spending too much time on this site:

Wednesday, January 14, 2009

Speed up your string comparisons

One of the first software engineery things I was tasked with at the new job was to do a code review for a peer. It was a good opportunity to look at some of our code. One of the things that stood out to me, however, was the way string comparisons were being performed. I saw a lot of this:

if(stringVariable1.ToUpper() == stringVariable2.ToUpper()) { ... }

One of my comments back on the code review was it might be better to do this:

if(stringVariable1.Equals( stringVariable2, StringComparison.OrdinalIgnoreCase)) { ... }
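Speed aside, there's a safety angle worth noting: the .ToUpper() approach throws a NullReferenceException if either variable is null, and so does the instance .Equals call when the left-hand variable is null. The static string.Equals overload sidesteps both. A quick sketch (variable names are mine):

```csharp
using System;

class NullSafeCompare
{
    static void Main()
    {
        string stringVariable1 = null;
        string stringVariable2 = "HELLO";

        // stringVariable1.Equals(...) would throw here because the instance is null;
        // the static overload simply returns false.
        bool areEqual = string.Equals(stringVariable1, stringVariable2,
                                      StringComparison.OrdinalIgnoreCase);
        Console.WriteLine(areEqual); // prints False
    }
}
```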

My peer agreed it might be a good idea but said the coding standard in the shop was the first approach. Not having a second software engineery thing to look at yet, I figured I would do a quick benchmark between the two approaches. I also decided to include a comparison using .ToLower(), just to be fair, since that was in the code also.

To perform the benchmark test I used the SimpleTimer class Bill Wert outlined on his blog.
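I won't reproduce Bill's SimpleTimer here, but if you can't get at his post, a minimal equivalent built on System.Diagnostics.Stopwatch might look like this (the class name and output format are my own approximation, not his actual code):

```csharp
using System;
using System.Diagnostics;

class SimpleTimerSketch
{
    static void Main()
    {
        const int iterations = 8000;
        string a = Guid.NewGuid().ToString();
        string b = a.ToUpper();

        // Time the comparison loop with the high-resolution Stopwatch.
        Stopwatch timer = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            bool unused = a.Equals(b, StringComparison.OrdinalIgnoreCase);
        }
        timer.Stop();

        double seconds = timer.Elapsed.TotalSeconds;
        Console.WriteLine(
            "{0} iterations took {1:F3} seconds, resulting in {2:F3} iterations per second",
            iterations, seconds, iterations / seconds);
    }
}
```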

The test itself was just a console application that builds a List&lt;string&gt; with the number of entries identified by the person running the test, anywhere from 1 up to long.MaxValue. I fill that List with the appropriate number of Guid.NewGuid().ToString() values. Then I time how long it takes to invoke an if using the different comparisons.

The code for all this is below, but I'm all about getting to the results...

The Results
I started small with a sample size of only 8,000:

8000 iterations took 0.008 seconds, resulting in 1012736.624 iterations per second

8000 iterations took 0.007 seconds, resulting in 1199852.721 iterations per second

8000 iterations took 0.001 seconds, resulting in 9669591.016 iterations per second

Not bad. Already we see the .Equals method performs faster than changing the case and doing the == thing. I decided to skip going for a medium size test and go right for big.

Here's the results using a sample size of 8,000,000:

8000000 iterations took 4.608 seconds, resulting in 1736263.497 iterations per second

8000000 iterations took 4.187 seconds, resulting in 1910494.970 iterations per second

8000000 iterations took 0.318 seconds, resulting in 25181978.355 iterations per second

Oh sure, the test takes longer - but I think it was worth it. In case you missed it - 3 tenths of a second is less than 4.6 seconds.

I'm going to see if we can get that coding standard changed...

The Code
Sorry the syntax highlighting is missing, but the blog editor isn't good about that. You'll want to paste this into the IDE of your choice if you want the whole enchilada.
using System;
using System.Collections.Generic;

namespace StringComparisonPerformance
{
    class Program
    {
        static void Main(string[] args)
        {
            const long defaultSampleSize = 10;
            long testRecordCount = defaultSampleSize;

            Console.WriteLine("Enter sample size");
            string inputValue = Console.ReadLine();

            if (string.IsNullOrEmpty(inputValue))
            {
                Console.WriteLine("No value provided, using default sample size of {0}", testRecordCount);
            }
            else if (!long.TryParse(inputValue, out testRecordCount))
            {
                testRecordCount = defaultSampleSize;
                Console.WriteLine("Unrecognized sample size provided. Using default sample size of {0}", testRecordCount);
            }

            List<string> testValues = GetTestValues(testRecordCount);
            string comparisonValue = testValues[1].ToUpper(); // some value

            SimpleTimer timer = new SimpleTimer(); // Bill Wert's timing class; see the link above

            ToUpperTest(comparisonValue, testValues);

            // Now recast all the test values to upper case to account for the fact
            // GetTestValues returns lower case values. This just ensures the test
            // is fair between ToUpper and ToLower.
            for (int i = 0; i < testValues.Count; i++)
                testValues[i] = testValues[i].ToUpper();

            ToLowerTest(comparisonValue, testValues);

            EqualsTest(comparisonValue, testValues);
        }

        static List<string> GetTestValues(long testLength)
        {
            List<string> testValues = new List<string>();

            // Fill the list with GUID strings (Guid.ToString() returns lower case values)
            for (long i = 0; i < testLength; i++)
                testValues.Add(Guid.NewGuid().ToString());

            return testValues;
        }

        static void ToUpperTest(string comparisonValue, List<string> testValues)
        {
            foreach (string testValue in testValues)
                if (testValue.ToUpper() == comparisonValue.ToUpper()) { }
        }

        static void ToLowerTest(string comparisonValue, List<string> testValues)
        {
            foreach (string testValue in testValues)
                if (testValue.ToLower() == comparisonValue.ToLower()) { }
        }

        static void EqualsTest(string comparisonValue, List<string> testValues)
        {
            foreach (string testValue in testValues)
                if (testValue.Equals(comparisonValue, StringComparison.OrdinalIgnoreCase)) { }
        }
    }
}

Tuesday, January 13, 2009

VTdotNET January 2009 Summary

The January VTdotNET meeting was heavily attended (38 people by my count) and with good reason. The presentation by Mario Cardinal was "Best Practices to Design a Modular Architecture". The main thrust of his presentation was that we as developers and architects need to do a better job of designing our solutions to be compositions of solution models. The idea in his mind is to not have our applications be one "big ball of mud" but a series of "smaller, loosely coupled balls of mud."

Nothing revolutionary there, I admit. But he did a better job than most of defining his terms and offering some best practices.

Mario started by describing his view of architecture as the process of abstracting solutions and experiences into different models. The intent is to identify the various areas of an application which can be their own module. The goal is always to simplify the application design. The example he gave: if a builder has to meet a requirement that a person standing in a room should be able to view the outside, the preferred design would be to put a window in the wall (simple and meets the need). A bad design would be to introduce a series of gears and supports which would hoist the wall up like a garage door. Sure, it meets the need, but it is overly complex.

So what, in his mind, is a module? He identifies a module as something which has certain attributes:
  • Role - this describes the responsibility it performs in the system
  • Seam - the visible or public interface of the module. This is more than just the API, however, as I describe later.
  • Body - the hidden design parameters and implementation
  • Test Bed - this determines how well the module works in an autonomous way without running the whole system. Based on what I saw it consists of the unit tests and various mock objects required by those unit tests.
Mario's point is we as an industry overly emphasize the Body and need to focus more precisely on the Role, Seam and Test Bed of a module. To illustrate, he discussed that in his view a module could be a System, Layer or Class.

The point Mario wanted us to take away from this discussion was that all serious mistakes are made the first day:
  • The most dangerous assumptions are the unstated ones
  • The design must ensure a module has only one reason to change - the Single Responsibility Principle
  • We need to group elements that are strongly related to each other while separating elements that are unrelated or have a conflict of interest
He was asked how to identify a module's responsibility. He suggested starting with the business logic layer and stripping away concerns of infrastructure such as persistence, logging and the like.

As I previously mentioned, Mario's discussion of modules was intended to apply equally to a System, Layer or Class. He defined each as:
  • System - An autonomous processing unit which defines a business boundary. Services or applications are the usual examples of a system.
  • Layer - the parts of a service or application
  • Class - the units of programming which comprise a layer
A major aspect of understanding a module, therefore, is the concept of a Seam. A Seam has three parts:
  • List of operations - the API
  • Expected behaviors - what it's supposed to do. These become the core of understanding what tests can/should be written for the module. Use examples rather than formal statements (such as UML expressions). It is important to keep in mind we're talking about tests for the module so if the module is a layer the scope and nature of the testing will be different than testing for a class.
  • Constraints - logic or requirements that define the prerequisites. Examples of these would be preconditions such as input validation or other guard conditions.
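To put Mario's terms in concrete (if oversimplified) C# form, here's a sketch of my own devising, not one of his examples. The interface plus its documented expectations and constraints is the Seam; the hidden implementation is the Body; exercising the module in isolation is the Test Bed:

```csharp
using System;

// Seam: the list of operations, plus (in comments) the expected behaviors
// and constraints a caller can rely on.
public interface IEmployeeDirectory
{
    // Expected behavior: returns the employer name for a known employee.
    // Constraint (precondition): lastName must be non-empty.
    string GetEmployer(string lastName);
}

// Body: the hidden design parameters and implementation. Callers bind to
// the Seam, so this class has only one reason to change (how lookups work).
public class InMemoryEmployeeDirectory : IEmployeeDirectory
{
    public string GetEmployer(string lastName)
    {
        if (string.IsNullOrEmpty(lastName))
            throw new ArgumentException("lastName must be non-empty");

        // A hard-coded lookup stands in for a real data source.
        return lastName == "Simpson" ? "Springfield Nuclear Power Plant"
                                     : "Unknown";
    }
}

class Program
{
    static void Main()
    {
        // Test Bed: exercising the module through its Seam, without the
        // rest of the system running.
        IEmployeeDirectory directory = new InMemoryEmployeeDirectory();
        Console.WriteLine(directory.GetEmployer("Simpson"));
    }
}
```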
Best Practice Recommendations
The best practices which Mario offered centered around doing the hard part first. So what's the hard part? Getting the interfaces correct. He believes the greatest leverage in architecting is at the interfaces. So the suggestion is to focus on the Seams to ensure the interactions work properly and clearly. This focus allows the natural boundaries between modules to become self evident. Then the "big ball of mud" can be divided up naturally - much like a log is split along the grain.

Mario advocates starting with the System Seam. For applications this means the UI and for services this would be contract-first development. Ensuring the System Seam is correct is the best way to ensure the user ends up with the experience they desire. Because the system is being written to satisfy the user's need it's the natural place to start.

The process he recommends following for defining the System Seam is decidedly low-tech but aligns with the Agile approach. You start with defining the User Stories/Use Cases/User Scenarios. From there you create what he called low fidelity mock ups - what everyone else would call screen layouts on paper. Nothing is cheaper than drawing on paper or a white board and asking the user, "Is this what you mean?" Once the low fidelity mock ups are solid (after iterative reviews and revisions) you then build high fidelity mock ups - actual screens with just enough behavior implemented to define the user experience but nothing real beneath it.

Once the high fidelity mock up is defined and agreed upon, Mario claims the natural seam between the UI and the business layer will be revealed.

He makes two further recommendations that apply if the user identifies further changes once the high fidelity mock up is started. First, we should welcome those changes. It may feel like our hard work is being dismissed or put aside, but they are an opportunity to get closer to what the user actually wants and needs. Second, any changes should be defined and clarified by restarting with the low fidelity mock ups. For example, don't start moving buttons on a form and compiling - draw a picture of your understanding of the requested change. Remember, paper is cheap.

Other Observations
Part of the meeting got sidetracked because Mario was asserting it is foolish to commit to delivering a finished system on a particular date. He believes the software industry needs to get to a place where we are making rational commitments. He believes we should only commit to the next step in the development process. For example, he believes we can only commit to having mock ups by a certain date. Once the mock ups are defined and agreed upon we can begin discussing when the next stage of module delivery might be accomplished. Some of us tried to bring the discussion back to reality - pointing out, for example, that sales folks can't sell screen mock ups - but Mario was insistent the issue is our industry needs to change expectations. He likened it to someone saying, "Here's a bunch of money to cure cancer. When will you have that done?"

I don't think I'll hold my breath for that, though.

Monday, January 12, 2009

Clone Detective for Visual Studio

A friend of mine recommended this tool. I've not played with it yet but didn't want to lose the link.

He says it's a little utility delivered as a non-intrusive VS2008 add-in that helps identify duplicate code with an intuitive UI. He finds it handy for refactoring code to be more concise and uses it to remove the “copy/paste” pattern he finds in the legacy product on which he works.

Additional C# Code Snippets for VS2005

Something you might find useful when coding in C# is the way code snippets can increase your productivity and provide guidance/reminders on syntax. If you like snippets, you might want to grab the additional snippets published here.
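If you've never cracked open a .snippet file, it's just XML in the Visual Studio code snippet schema. Here's a minimal example of my own (not one of the published snippets) to give a sense of the format; the title, shortcut and code are made up:

```xml
<?xml version="1.0" encoding="utf-8"?>
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <CodeSnippet Format="1.0.0">
    <Header>
      <Title>Console WriteLine</Title>
      <Shortcut>cwl</Shortcut>
    </Header>
    <Snippet>
      <Declarations>
        <Literal>
          <ID>message</ID>
          <Default>"Hello"</Default>
        </Literal>
      </Declarations>
      <Code Language="csharp">
        <![CDATA[Console.WriteLine($message$);]]>
      </Code>
    </Snippet>
  </CodeSnippet>
</CodeSnippets>
```

Type the shortcut in the editor and hit Tab twice, and the snippet expands with the $message$ placeholder ready to overtype.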

Monday, January 5, 2009

Day 1 of the new adventure

So today is my first day at the new job. There hasn't been much structure to my day, simply getting my account information and installing software. It made me think about which tools I simply MUST have as a developer. There are 3 that I installed right off the bat which I figured might be worth sharing.

7-Zip: This is the zip utility I use. 7-Zip is an open source project so it's free (always nice). The UI is a simple integration with the context menus in Windows Explorer.

CmdHere PowerToy: This is a Microsoft power toy that adds a menu item to the context menu that is presented in Windows Explorer when you right click on a folder. The menu item allows you to quickly open a command window with a working directory of the folder on which you invoked the context menu. (I'll update this with a link once I find one).

Notepad++: Sometimes I just want to open a source code file and review its contents without spinning up an IDE with lots of overhead (I'm looking at you, Visual Studio). In the past I've used editors like TextPad, which I like, but it costs money. A little over a year ago I was turned onto an open source editor that I like well enough to use instead: Notepad++. It is fast, gives me the syntax highlighting I want and has some nice tools baked into it.

There are other tools I like, and I continue to find new ones. But these were the first 3 to get installed and that made them seem noteworthy.