Wednesday, December 2, 2009

Wiring FxCop up to our build process

Earlier this week we decided to finally pull the trigger to have our automated build fail when the code violates the FxCop rules we use for code analysis. Previously we ran FxCop but rule violations didn't break the build. The code is a legacy application and so lots of rules were broken pretty much everywhere.

But yesterday I set all existing violations as exceptions so FxCop will basically ignore them. Today's effort was to figure out how to have the build break and still have the FxCop results appear in CruiseControl.NET (CC.NET). That second goal was a bit trickier than I thought it would be.

There were two changes that had to be made to enable this. The first was to have the NAnt script fail when FxCop finds a problem. This wasn't a big deal because FxCop only produces the XML output when there are violations. All I had to do was add a FAIL task that runs after the code analysis: if the output file exists, we fail. This is what that task looks like:

<fail if="${file::exists(fxcop.output.path)}">FxCop rules violated, see FxCop Report for details.</fail>
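
For context, here is a minimal sketch of how that fail task might sit inside a code analysis target. Treat it as illustrative only - the FxCopCmd path, project file name and target name are placeholders, not our actual build script:

<target name="code.analysis">
<!-- clear any stale report so the file's existence reliably means new violations -->
<delete file="${fxcop.output.path}" if="${file::exists(fxcop.output.path)}" />
<!-- FxCop writes the XML report only when rules are violated -->
<exec program="C:\Program Files\Microsoft FxCop 1.36\FxCopCmd.exe"
commandline="/project:CodeAnalysis.fxcop /out:${fxcop.output.path}"
failonerror="false" />
<fail if="${file::exists(fxcop.output.path)}">FxCop rules violated, see FxCop Report for details.</fail>
</target>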

Now code analysis violations can break the build - awesome!

But doing this caused the FxCop Report in CC.NET to not display any results. This blog post by Leifw gave me the hint I needed to resolve the problem. Previously I had been doing my File Merge to get the FxCop results as part of the TASKS node of my CC.NET project configuration block. It turns out, though, that File Merges done in the TASKS node won't occur if the build fails. I moved the merge to a PUBLISHERS node in the project and it all works ducky, because the PUBLISHERS node is processed even if the build fails. That looks like this:

<project name="MyWebGrocer.Gsa Trunk">
...
<publishers>
<merge>
<files>
<file>C:\BuildArtifacts\Project_Trunk\*.xml</file>
</files>
</merge>
<xmllogger />
</publishers>
</project>

Now if someone commits code that does something unforgivable - like neglecting to make a method static if it doesn't use any instance members - the build will break and they can check what the offense was via the CC.NET dashboard.

It's still a legacy application - but we won't be committing any more legacy code.

Wednesday, September 16, 2009

VT Code Camp 1: Summary

It's been nice to read some of the posts people have made about the VT code camp. The ones I've seen are by Bradley Holt, Chris Bowen, Dave Burke and Jim O'Neil. But I thought I would post my own summary of the first VT code camp.

First, I want to reiterate my appreciation to the many donors, speakers and volunteers who made the event the wild success that it was. You folks rock!

Code camp setup began around 7:30AM. This was only my second time at Kalkin Hall and I was just as impressed with the location as I was when I had seen it the day before. It is just a wonderful space that suited our event to a "T." Again, many thanks to the UVM School of Business Administration for opening their doors to us. The setup team included Margot Schips, Julie Lerman, Laura Blood, Carl Lorentson, Bradley Holt and Dan Russell. (If I'm forgetting someone I apologize. Things moved so fast and furious during the day I neglected to stop and take notes to do this properly.)

We got our only crisis out of the way early on Saturday: at around 7:35AM we found out one of the rooms we had been told we would be using was hosting a different all-day event. After some frantic calls made by Margot we got it straightened out and were able to use an alternate room, which worked just as well.

After that it was smooth sailing. We had a marvelous breakfast spread provided by Green Mountain Coffee Roasters. Lots of coffee, lots of pastries, lots of bagels and even lots of fruit. It was all yummy (and I speak from experience... lots of experience).

In typical VT style we began the welcoming remarks a few minutes behind schedule (what? you got someplace to be?). After thanking the donors, orienting everyone to the space and reviewing the day's schedule we turned our attendees loose. Plenty of positive energy and familiar faces.

The first two time slots went off without any projectors exploding (although it took a little bit to figure one of them out). Laura Blood was super-generous with her time and watched the registration desk. After that we broke for a lunch provided by MyWebGrocer. Lots of pizza and soda. Filled me right up. By this point my schedule started to even out. We had the swag organized, the registrations had started to die down and Julie and I had done the pizza run (complete with expert parking job). So I made the most of it and enjoyed the lunch break. It was great to spend a few minutes chatting with folks I knew and meeting folks I hadn't - plus the surreal experience of meeting in person folks I only knew from online.

After lunch we dove back into sessions. We moved the registration desk downstairs where it was manned by Dave Burke. Dave's one of the first people I met when I started attending the .NET user groups and I always enjoy talking with him. Plus, when he adjusted my name tag for me it was the most action I had had all day.

Soon it was time for another snack break - this time courtesy of Microsoft. Sodas, brownies, and other tasty treats. Noshing and networking... good times.

We held the raffle during the last break. We gathered in one of the session rooms and used a random number generator to identify the winners. Julie also took a moment to extend a special thank you to Steve Andrews and Alison Gianotto who were the two speakers who travelled the farthest. After all the support our donors provided we ended up with nearly 30 prizes to give away - so we ran a little longer than expected. The final sessions started a little late, but everyone was still going strong. We volunteers set about cleaning up during the last session so we could get an early exit to a social gathering at the Windjammer. Special thanks to Chris Bowen and Microsoft for treating the speakers and volunteers so kindly.

I could keep writing about all the people I met and reconnected with. I might even do a post about the lessons learned that I hope to apply to the next code camp (yes, we intend to do this again). The whole experience was exciting, maddening and gratifying. I'll keep searching for posts, tweets and pictures tagged with VTCODECAMP to see what people thought. I hope you do, too. And join us at the next VT Code Camp!

Friday, September 11, 2009

Twas the night before code camp...

No more messing around - tomorrow is the first VT code camp. We've got everything as done as we can at this point. Excellent speakers/sessions lined up, a kickin' venue, a good team of volunteers and lots-o-swag. We've got over 100 people who have registered to attend, which far exceeds my expectations. I think I'm probably on the record somewhere stating that for our first code camp 50-70 would be a great turnout but talk of 100 was fantasy.

Looks like I was wrong. Happily, happily wrong.

Seriously, for this event to have that many people register is a huge testament to the development community in VT and the northeast in general (we've got participation from as far away as PA and NYC).

I'm super pleased with the support we've received from our donors, too. The UVM School of Business Administration has opened their doors to us for the venue. Green Mountain Coffee Roasters is covering breakfast, MyWebGrocer (my employer) is buying pizza and soda for lunch and Microsoft is providing an afternoon snack.

Chris Pels and the http://www.thedevcommunity.org/ site helped us by managing our registration and speaker abstract submissions which was HUGE.

We've a pile of swag to raffle and give away, too. Donors are:
It takes a lot of people to get an event like this off the ground and we have a great team. Super big thanks go to Bradley Holt, Carl Lorentson, Julie Lerman, Laura Blood, Margot Schips, Martin Stevanof, Matthew Weier O’Phinney and Rob Rohr who did a lot of heavy lifting to get us poised for a successful event tomorrow.

I also got some wonderful advice about organizing a code camp from Chris Bowen and Jim O'Neil, our regional Microsoft developer evangelists, and also from Dennis Perlot and Supriyo "SB" Chatterje from the CT code camp. These four guys provided some good input and are all class acts.

Finally, I wouldn't even be able to be involved in this kind of community development if it weren't for my wife, Sue, who will be taking care of the kids while I'm at the code camp. So she gets a thank you, too.

So that's it. Everything is printed, the alarm is set and I'm really very, very excited to see how we do.

Wednesday, August 12, 2009

WebKit border radius and cascading styles don't always mix

Since starting the new job in January I've had to learn more about developing for the web than I ever have. One of the fascinating (and frustrating) aspects has been the way different browsers render the same code. The only web content I'd written prior to this job was a very basic set of static pages for the Burlington Irish Heritage Festival and for an intranet web application where the company dictated which browser was to be used. But with the new gig I'm in the real world. That means every change I make needs to be tested against 4 or 5 browsers. And that's just on my development box; once it goes into QA the application is tested against over 20 different browser and OS combinations.

So I've started to familiarize myself with the various rendering engines used by the major browsers. One of the big ones is the WebKit open source engine. It serves as the rendering engine for some of the browsers I need to support: Safari on Windows and Google's Chrome. Recently we encountered a little WebKit specific gotcha that I wouldn't have expected. It has to do with a WebKit specific CSS attribute, -webkit-border-radius. It's a wonderful attribute that, when set, instructs the WebKit browsers to round the corners of an element. This post has a very good summary of the -webkit-border-radius attribute (and its Mozilla equivalent, -moz-border-radius). The important point for this discussion is that -webkit-border-radius is a shorthand declaration.

The problem we encountered had to do with how the -webkit-border-radius attribute behaved when used in a cascading style. The application on which I work has one style sheet defined for application wide default styles and then uses additional files to adjust the styles as needed. So, for example, the application default stylesheet may define a class like this:

.someRoundedThing { -webkit-border-radius: 4px; }

which will cause a WebKit browser to round the corners of an element decorated with that class. Another style sheet could then declare the following:

.someRoundedThing { -webkit-border-radius: 8px; }

If a page pulls in the application style sheet and then the alternate style sheet, the elements on that page which are decorated with the "someRoundedThing" class should be twice as rounded (again, only in WebKit browsers) as on a page where only the application default styles are included. But this wasn't working as expected.

In order for the page-defined style to work we had to declare the alternate -webkit-border-radius value using the more verbose, four-value declaration. Like this:

.someRoundedThing { -webkit-border-radius: 8px 8px 8px 8px; }

Declaring each corner's radius individually also worked:
.someRoundedThing {
-webkit-border-top-left-radius: 8px;
-webkit-border-top-right-radius: 8px;
-webkit-border-bottom-left-radius: 8px;
-webkit-border-bottom-right-radius: 8px;
}

While I'm not certain why this is going on, I assume it is related to the differences in implementation outlined in this post.

So while I like using the shorthand declarations for their terseness, it appears they can lead to problems if you're not careful.

Wednesday, July 15, 2009

Browser differences and jQuery (oh, yes... they exist)

I've been doing more with jQuery lately and I can't tell you how much I'm enjoying it. The other day I was able to take an old JavaScript function that was over 20 lines long and refactor it down to 4 lines... sweet! And one of the nice benefits of using jQuery has been not having to worry about coding to specific browser variations. I did come across something recently, though, that is inconsistent between the browsers, which I thought I would share.

It has to do with the way CSS attributes are retrieved for an element. Sometimes the .css() method will return different values for the same style definition (or no value at all) depending on the browser. I'm probably expecting too much of jQuery in terms of its browser agnostic implementation - but I was honestly surprised. Consider the following HTML page:

<html>
<head>
<style type="text/css">
.ugly
{
background-color: #3A9F0E;
border: solid 1px #FFD800;
color: #fff;
-moz-border-radius: 10px;
blur: 1;
}
</style>
</head>
<body>
<h1 class="ugly">test</h1>
<div id="dvDisplay"></div>
</body>
</html>

A very basic but easy to grasp example. Ugly, but easy to grasp. There is an H1 tag which has the "ugly" class applied. The ugly class has some style definitions (some valid/some not... more on that shortly). There's an empty DIV tag, too. We're going to use that - just watch.

Now let's add a little JavaScript to illustrate my jQuery concerns. It's going in the head section and looks like this:

<script src="js/jquery-1.3.2.min.js" type="text/javascript"></script>
<script type="text/javascript">
$(function() {
var message = "";
var property = ["border-color",
"border-bottom-color",
"-moz-border-radius",
"-moz-border-radius-bottomleft",
"background-color",
"font-weight",
"blur"];
property.sort();

for (var i = 0; i < property.length; i++) {
message += property[i];
message += " = '";
message += $(".ugly").css(property[i]);
message += "' <br />";
}

$("#dvDisplay").html(message);
});
</script>

The script is going to run when the document.ready event fires, build an array of some CSS properties, sort them (because I'm too lazy to put them in the right order) and then loop over those properties to see how the jQuery .css method returns them. Note that the first and second pairs of properties (border-color | border-bottom-color and -moz-border-radius | -moz-border-radius-bottomleft) are going after similar values - it's just that one is more specific. Note, also, how the style declarations for the ugly class define each of these. Anyway, each property is appended to the output message which is then displayed in that DIV we left open (see, I told you we would use it).

So what does the output look like? Well, despite the browser agnostic behavior of jQuery, it depends which browser you're in. Here are the results I saw (all browsers running on XP Professional):

Firefox (3.0.11)
-moz-border-radius = ''
-moz-border-radius-bottomleft = '10px'
background-color = 'rgb(58, 159, 14)'
blur = ''
border-bottom-color = 'rgb(255, 216, 0)'
border-color = ''
font-weight = 'bold'

Internet Explorer 8 (both browser modes and all document modes)
-moz-border-radius = '10px'
-moz-border-radius-bottomleft = 'undefined'
background-color = '#3a9f0e'
blur = '1'
border-bottom-color = '#ffd800'
border-color = '#ffd800'
font-weight = '700'

Google Chrome (2.0.172.33)
-moz-border-radius = 'null'
-moz-border-radius-bottomleft = 'null'
background-color = 'rgb(58, 159, 14)'
blur = 'null'
border-bottom-color = 'rgb(255, 216, 0)'
border-color = ''
font-weight = 'bold'

Safari (3.2.2)
-moz-border-radius = 'null'
-moz-border-radius-bottomleft = 'null'
background-color = 'rgb(58, 159, 14)'
blur = 'null'
border-bottom-color = 'rgb(255, 216, 0)'
border-color = ''
font-weight = 'bold'

So, where are the differences? What strikes me is that even though the -moz-border-radius property is a Mozilla specific style, the output in Firefox is an empty string. Only the specific corner (-moz-border-radius-bottomleft) has a value. I can only assume the style definition I'm using is a shorthand like the border definition. That would explain why Firefox, Chrome and Safari all return an empty string when checking border-color but can return border-bottom-color. IE will give me either.

The other interesting thing is that while IE8 doesn't know squat about the -moz-border-radius property, it can tell me what I wanted the value to be (but unsurprisingly, can't provide the specific corner value). So IE8 seems to be able to access the style declarations even if it doesn't do anything with them. This brings me to the declaration of the blur property. I wanted to know if it was possible to use CSS styles to store values to be retrieved later by jQuery and applied to a drop shadow effect (rather than hard coding them in the script). It appears this approach would work with IE8 but not the others - which is how I identified the different behaviors of the browsers in jQuery.

So, time for one last assumption: I really believe jQuery is doing its best to get the style attributes but the browsers must be preventing it from doing so. This does not diminish my appreciation of jQuery, but it makes me recognize that browser differences continue to plague web development.

Thursday, July 9, 2009

VT Code Camp - Call for Speakers and Registration: Now Open!

The first VT code camp is going to be Sept. 12 at the UVM School of Business Administration. We've just opened up the web site for registration and the call for speakers. Get started at http://www.vtdotnet.org/codecamp/

You can follow the VT code camp on Twitter using #vtcodecamp

Wednesday, June 24, 2009

CT Code Camp Recap

I attended the second CT code camp on June 13th at the New Horizons center in Hartford, CT. Ostensibly it was a research trip for the code camp we're organizing for VT this fall, Sept. 12. However, I love going to code camps because of the educational value and networking opportunities they present. As I've previously posted, I've been to a few of the code camps held in Waltham, MA - so I was excited to have a slightly different experience.

The venue for this code camp was a well appointed learning center with plenty of rooms of varying capacity. Each room had a projector and lots of tables and chairs. At the Waltham camp most of the seating lacks a table, so it was nice to have a place to put my notebook (rather than balancing it on my knee). One thing I noticed by its absence was the wide hallway at the Waltham location. The hallways at the New Horizons center seemed more narrow. Maybe it was just the 150 or so people moving through them, though.

I encountered the registration area as soon as I entered the building. It was nice to see Dennis Perlot's smiling face at the desk. I met Dennis at the New England User Group Leadership Summit in May. He and S.B. Chatterje run the CT .NET user group and this code camp. The registration packet included the session schedule, list of donors and a copy of Windows 7 Release Candidate.

The entire event was extremely well run. My only quibble was that the lunchtime activities, focused on job-fair-like sessions such as resume writing and networking skills, went on longer than I would have liked. I'm sure this is a result of my being comfortable with those topics - I just would have preferred to have at least one technical session going on concurrently. But as I say, that's a quibble. The entire experience was well worth the trip and I hope to make additional CT code camps.

I've created separate posts for 3 of the 4 sessions I attended. I'm not bothering to post about the 4th session because I had to leave early and my notes aren't complete. My session specific posts are:

.NET Troubleshooting in a Production Environment - Polina Cherkasova

jQuery – Why You Want It & How to Use it in ASP.NET - Dave Bush

Scrum 3 Roles, 3 Ceremonies, 3 Artifacts, 3 Best Practices - Dan Mezick

CT Code Camp Recap: Part 1

.NET Troubleshooting in a Production Environment - Polina Cherkasova

This session was a review of different strategies for identifying problems that are discovered after software has been deployed to the production environment. Polina categorized the severity of issues from unexpected behaviors (e.g. code working as designed but not as the user believes it should) through complete application failure (i.e. smoking crater where the server used to be).

One of the strategies to diagnose a problem was to have the production environment run in debug mode. However, this approach results in an application that is not identical to the production build and can skew the results. Besides, the problem may not be something which can be reproduced at will.

Polina spent a fair amount of time advocating for using an on-the-fly debugger. The debugger she was using was AVIcode ART (www.art4dotnet.com). It was about 3/4 of the way through the presentation before I realized Polina worked for AVIcode, so her advocacy made sense in that light. Her demo was compelling, though. I liked the fact you just configure the tool to monitor the application (without having to code additional instrumentation into the application) and it can report on a variety of conditions with varying thresholds of sensitivity.

Another strategy is application logging using tools such as log4net, Enterprise Library, the Event Log, NLog, etc. This approach allows for information to be retrieved at run time but requires development effort and usually only deals with handled exceptions. Missing from her analysis was coding robust event logging into the application. Doing this provides the person diagnosing a problem with additional information for identifying what led to the problem being researched. However, like error logging, this requires development effort and getting the signal to noise ratio right can be a challenge (so only events which are truly helpful are captured).

CT Code Camp Recap: Part 2

jQuery – Why You Want It & How to Use it in ASP.NET - Dave Bush

With the release we're currently developing at work we've been using jQuery more and more. Put me down in the fan column. I really, really like it - but I'm still a newbie to it in so many ways. So when I saw there were going to be two sessions about using it I figured attending one would be a good idea. I'm glad I made Dave's talk. It was chock full of demos. I'll admit that even being a relative newbie it took a few samples before we got to examples I hadn't seen, though.

The best parts were some of the best practices he provided. For example, Dave suggested linking to the jQuery library hosted on Google's servers rather than serving it up on our own machines. Many sites do this and it leverages the browser's caching to improve performance.
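
In practice that's just a script tag pointing at Google's copy rather than your own server (the version in the path is whatever your site targets):

<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js" type="text/javascript"></script>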

An interesting point Dave made was that there is a difference between jQuery's document.ready event and the document.onLoad event. The document.ready event fires as soon as the page is parsed enough that all JavaScript can be executed, while document.onLoad fires when the page is done rendering. Knowing the difference between the two can be instructive when trying to decide which event should be referenced when code needs to run.
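
A quick way to see the difference for yourself (my own toy illustration, not Dave's demo):

<script type="text/javascript">
// fires as soon as the DOM is parsed and scriptable - usually the earlier of the two
$(document).ready(function() { alert("DOM is ready"); });
// fires only after the whole page, images and all, has finished loading
window.onload = function() { alert("page is loaded"); };
</script>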

Dave suggested making sure we apply VS2008 SP1 so we can have IntelliSense when coding jQuery statements. Step by step instructions on how to do this are available on Scott Guthrie's blog here.

Another revelation, and one which I will surely be sharing with my peers at work, was a throwaway line Dave had about a JavaScript library that forces IE6 to behave like IE7. I wrote that down with the intention to research it because of our requirement for continued support of IE6. Anyway, a quick Google search turned up this blog post about the IE7.js library. It's still just a lead at this point... but I intend to follow up on it soon.
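
I haven't tried it yet, but the usual pattern for including a shim like this is an IE-only conditional comment, something along these lines (the src path is a placeholder - check the project's page for the real location):

<!--[if lt IE 7]>
<script src="/js/IE7.js" type="text/javascript"></script>
<![endif]-->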

Dave's blog is http://blog.dmbcllc.com

CT Code Camp Recap: Part 3

Scrum 3 Roles, 3 Ceremonies, 3 Artifacts, 3 Best Practices - Dan Mezick

I'm not sure how to write this session up. This was by far the most compelling session I attended. Dan is a dynamic speaker, his points are sound and I found the approach to development he described very appealing. There was so much content presented during the session I'm not sure any blog post I can write could do it justice.

The session was an overview of Agile and Scrum. I've never worked in a development group that embraced the agile approach. Many of the teams I've been on have conducted what they called "Scrum" meetings, but those meetings didn't match what Dan was describing - let alone the process that was supposed to accompany it!

What was fascinating about Dan's presentation was that it was more about team dynamics and interpersonal relationships than about the development process per se. The same techniques he described could apply to any team trying to complete a number of tasks in support of an end goal. The whole process is very empirical in how a project's progress is tracked. One example of this is to hold off making a decision until the "last responsible moment." I LOVE this concept. The idea is that as more information comes in we can make a more informed decision. If a decision is made early on it is human nature to not reverse that decision - even in the face of additional evidence. By delaying the decision to the last responsible moment we're increasing the odds that the decision being made is as fully informed as possible. I'm hoping to adopt this in my daily work as application designs are considered.

Another example of having a project managed by evidence had to do with project schedules. The idea of backing into a release date, for example, appears to be completely abhorrent in this system. By identifying the release date early the team exposes itself to "inattentional blindness," a phenomenon where people don't see something that's there because they are paying much more attention to something else. In the case of the release date, the team becomes so focused on the scheduled release date that they are unable to perceive changes or events that, had they been perceived, would lead to a change in the release (either schedule or scope).

The 3 Roles

Dan described an appropriately organized team as consisting of 3 roles: the product owner, the Scrum master and the team.
  • The product owner is the individual responsible for collecting and elaborating the project requirements and putting them into the product backlog (see the 3 artifacts). This individual is the interface with all the stakeholders who are interested in the resulting product (e.g. management, sales, marketing, users, etc.).
  • The Scrum master is the individual (and it is an individual - one person, singular) responsible for the product backlog - they own and prioritize it. The Scrum master is not a traditional project manager. Their job is to "patrol" the product boundary to ensure no requests come into the product backlog that are not appropriate for that product. By having this be an individual, the product backlog prioritization is not subject to the conflicting whims of multiple individuals.
  • The team is self-organized. The team participates in the product design. They decide on a subset of the product backlog that will be addressed in a particular sprint. A good rule of thumb is to have +/- 5 to 7 people on a team. No one who is allocated less than 50% to the project is considered a team member. "Committed people" are >= 50% allocated, everyone else is merely involved.

The 3 Ceremonies

The process Dan described is punctuated by a few types of meetings (there are always meetings, right?); Sprint planning meetings, daily Scrum meetings and project retrospective meetings.
  • Once the Product Owner has put all the requirements, requests and bugs into the product backlog, and after the Scrum master has prioritized the backlog, it's time for the team to meet. The purpose of the meeting is to decide what can be accomplished in the next development period. These development periods are 2 - 6 weeks long and are called "Sprints". In the sprint planning meeting the team identifies which subset of the product backlog can be accomplished in the sprint. The team must pick off the top of the product backlog (ensuring priority work is being done) but they decide how many of the items realistically can be completed. Other stakeholders can attend the sprint planning, but it's the team's meeting. Also, remember before I mentioned the team is self-organized? This is part of what that means. It's worth noting that by making the decision about what to work on this way, the team is delaying the decision until the last responsible moment. The product backlog is as mature as it can be, so the team is committing to the truest priorities possible.
  • Once the sprint is underway it's important for the team to meet daily to provide an update on progress. This is the daily Scrum meeting. Each team member reports what they accomplished the prior day, what they intend to accomplish that day and what (if any) obstacles are in the way. The reason the meeting is daily is to prevent "cognitive dissipation." The idea is if I say I'm going to do feature 1 today, and nothing is in my way, the team is going to notice if feature 1 isn't finished (or at least hasn't progressed) tomorrow. If the scrum were held weekly, well, who remembers what they worked on a week ago let alone what commitments a peer made? Having the ongoing, consistent knowledge might prompt one of my teammates to find out what problems I might be having. Again, this is the team's meeting. Other stakeholders can attend - but they better stay out of the way...
  • Once the project is complete it's time to reflect on what worked well, and what needed improvement. That's the project retrospective meeting. These meetings might expose some hurt feelings and raw emotions. On a mature team these can be worked out. The goal is to improve the process for everyone so the team, product owner and scrum master can do better the next time around.

The 3 Artifacts

There are three documents:
  • By now you probably have a good idea of what the product backlog is; it's a prioritized list of each feature, enhancement request and bug that is to be incorporated into the product. Normally the list is dynamic. Requests are made, bugs reported and the product owner adds all that to the product backlog. The Scrum master prioritizes the product backlog and the team provides estimates on level of effort for each item on the product backlog. However, once the sprint is underway the product backlog is static (at least as far as priority goes). If requests/bugs come in they can be added to the bottom of the stack, but nothing gets prioritized to the top. That's because while the sprint is underway the team is working off the sprint backlog.
  • The sprint backlog is the result of the sprint planning meeting. It's a mini-product backlog of what the team will be delivering in the sprint. Because this is what the team has committed to deliver, and that commitment was made at the last responsible moment, it is unacceptable to change the priorities of the sprint once it has begun. That's why the product backlog is static during a sprint.
  • A burn-down chart is a graph that shows work to be done on one axis and time remaining on the other. Dan showed some examples. Google can help you find examples, too.

The 3 Best Practices

Unfortunately, by the time we got to talking about the best practices, Dan was out of time. He threw up a few slides and pictures to communicate the ideas, but I'm just going to provide some links which will cover the topic more completely.
  • User Stories are a way to capture user requirements.
  • Planning Poker is a technique teams can use to provide estimated levels of effort for items on the product backlog.
  • Scrum Board (a.k.a. task board) is a tool used to reinforce the work being done. This is the only thing I've been able to implement in the immediate aftermath of the code camp. My whiteboard has been converted to track the tasks on which I'm currently working. One of my teammates likes it, too, and has started using one. I expect we'll use one for our team in short order.

Final Thoughts...

Dan's blog is at http://www.newtechusa.com/agile/blog/

Wednesday, June 10, 2009

An Excellent Post About JQuery Selectors and ASP.NET

I had to share this blog post by Dave Ward about optimizing performance when trying to get jQuery to select the correct element as you work with ASP.NET.

Tuesday, June 9, 2009

I'm So Excited (and I just can't hide it)

I'm one of a handful of people trying to launch a code camp in the Burlington, VT area. We just received confirmation on the venue and can now start asking people to hold the date. It's going to be on Saturday, Sept. 12, at the UVM Business School.

There's still a lot of work to make it happen but I can't tell you how thrilled I am that we have a venue and date.

Small reminder for JavaScript development

Sometimes we need reminders about the basics. In reviewing some JavaScript code this morning I came across a function that builds some HTML on the fly by concatenating values together. What people have to remember is the smaller the JS file, the faster it will download. So rather than doing this:

scratch = scratch + "<a href....";


you need to use the built in += operator, thusly:

scratch += "<a href....";


The function in question had the first approach 12 times. Using the built in += operator I was able to reduce each statement by 9 characters. That reduces the function by 108 characters overall.

So the lesson is use built in functionality because little things can add up.

Wednesday, May 6, 2009

Notes from the New England User Group Leadership Summit

This past Saturday I had the opportunity to attend the 2009 New England User Group Leadership Summit. Hosted by Microsoft and O'Reilly Media it was an event dedicated to helping user group leaders connect, share ideas and experiences. It had nothing to do with the technologies these folks are usually discussing. Rather, we spent time talking about how to build community, publicize events, manage event logistics and things of that nature. Attendees came from all over the northeast - primarily New England but there were also folks from Pennsylvania and New York (which is not part of New England for you folks who don't know better).

The summit was held at the Microsoft New England Research and Development Center (a.k.a. the N.E.R.D. center) in Cambridge, MA. This was a really groovy facility with excellent meeting spaces.

A wiki was set up for all the session notes, available at http://neugsummit2009.pbworks.com/FrontPage/ Take a look. Even if you're not involved in a technical user group, many of the topics relate to any community group. I was commenting to Chris Bowen that one reason I was really happy to have attended was that, in addition to what I learned being applicable to my involvement in the VT.NET user group, much of what we discussed applies to my other community involvement, the Burlington Irish Heritage Festival.

One of the attendees, Rachel Ford James, took lots of photos and has made them available on Flickr. She really took some stunning photos that captured the spirit of the day. As you view the photos you may wonder why there are a number of shots with mixers that appear to be smoking. There was a break between sessions which was conducted as a team building exercise where groups of attendees selected ingredients for an impromptu ice cream flavor. The ice cream was made using mixers and liquid nitrogen. It was great entertainment with lots of noise, visuals and (of course) taste. Being from the land of Ben & Jerry's, though, I was terribly jaded about the results.

Many thanks to O'Reilly and Microsoft for hosting this event.

Wednesday, April 29, 2009

I don't know what it's called, but I like it

I've recently started using JetBrains' ReSharper. Today I was reviewing the code inspection rules and came across a few that had to do with an operator with which I was unfamiliar. So I took a moment to learn about the ?? operator (link), which must have come out with the .NET 2.0 framework because it has to do with nullable types.

What it does is kind of like the old IsNull method from VB (you remember VB don't you?). Here's an example in two lines:

int? x = null;

int y = (null != x) ? x.Value : -1;

The first line declares a nullable int variable named x which is assigned null.
The second line declares a non-nullable int variable named y. Because it's not nullable we've got to ensure a null value isn't being assigned (otherwise we'll raise an exception). To do this we're using the ternary operator. What the ?? operator does is allow us to write the second line like this:

int y = x ?? -1;

So if x is null y is set to -1. Nice, right? Especially if you replace x and y with more meaningful variable names, such as this:

int? someMeaningfulName = null;

int whatYouReallyWant = someMeaningfulName ?? -1;

Or, more commonly in my current job, getting values from a web form or querystring (Request.Form returns a string, hence the parse):

int desiredFormId = 0;

if (null != Request.Form["Activate_FormId"]) {
desiredFormId = int.Parse(Request.Form["Activate_FormId"]);
}

can become:

int desiredFormId = int.Parse(Request.Form["Activate_FormId"] ?? "0");

Now if I could just figure out how to pronounce this operator I can tell people about it.

Monday, April 27, 2009

Mitigating web.config security vulnerabilities through scripting

After reading this post, .Net and Business Intelligence: Application Security Vulnerabilities in Web.config File, what occurred to me was all this could be mitigated by employing a strategy I refer to as composition scripting.

While I've done a bit with build automation I've also created NAnt scripts that I refer to as composition scripts. The purpose of a composition script is to automate all the tasks required to prepare an application for deployment. I've used them to create ClickOnce deployments, but for web applications a composition script generally grabs all the pages and binaries needed. But in addition to that, and this is the relevant bit, I make use of the XmlPeek and XmlPoke tasks to swap out web.config tags to use configuration values appropriate for the environment being composed.

You see, my script accepts a parameter called Target.Environment. The acceptable values for that parameter are QA, UAT and PROD - quality assurance, user acceptance testing and production, respectively. (Where's development, you may ask? Well, that's the default state of the web.config in the source code repository.) Alongside the web.config I have a few files named web.config.qa, web.config.uat and web.config.prod. These are not repeats of the entire web.config file, though, but rather contain only the configuration values that need to change from environment to environment. These are the values swapped into the config using XmlPeek and XmlPoke.
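
To make that concrete, here's a hedged sketch of one such swap - the XPath, the 'Main' connection string name and the compose.dir property are hypothetical, not lifted from my actual script:

<!-- read the environment-specific value out of web.config.qa / .uat / .prod -->
<xmlpeek file="web.config.${string::to-lower(Target.Environment)}"
xpath="/configuration/connectionStrings/add[@name='Main']/@connectionString"
property="env.connectionstring" />
<!-- poke it into the web.config being composed for deployment -->
<xmlpoke file="${compose.dir}/web.config"
xpath="/configuration/connectionStrings/add[@name='Main']/@connectionString"
value="${env.connectionstring}" />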

Note: I believe a composition script should not recompile the application. It should compose the application deployment using the same pages and binaries as the application progresses from QA testing to UA testing and into production. This ensures the application being deployed is the same application which underwent testing.

So using composition scripts it's easy to mitigate the 10 security risks identified in the article.

Tuesday, April 14, 2009

Note to self... always read the instructions

I'm in the process of setting up a build machine at work. I firmly believe any shop doing production code with multiple developers needs to have both a source code repository (I like Subversion) and a build machine (sometimes called an integration server). I'm setting our build machine up to use CruiseControl.NET for our integration server. I used it at my last place, it's free and I like it. Since I'm the one doing the set up I get to decide.

It's good to be the king.

Anyway, I didn't install the OS, framework and all that on this box, so I was getting frustrated that I couldn't get the CC.NET web dashboard working. It turns out that had I only read the FAQ I would have seen the first item talks about what to do if IIS gets installed after the .NET framework. Running aspnet_regiis.exe as that document suggests fixed my problem. That's an hour or so of my life I would like back.
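
For reference, that's this one-liner, run from the framework directory (shown assuming the 2.0 framework; adjust the version folder to match your install):

C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -i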

RTFM, indeed.

Tuesday, March 24, 2009

Ada Lovelace Day: Julie Lerman

Apparently today is Ada Lovelace Day (more information here). It's an opportunity for bloggers to write about women in technology who have inspired us. That makes this the perfect time to write a few sentences about one of the programmers who has been an inspiration and friend for around 5 years now - Julie Lerman.

I originally met Julie because she runs our local .NET user group. She works tirelessly for this group. She ensures the meetings are scheduled, maintains the web site, arranges for speakers (getting little gifts for them) and presents her own topics when speakers are hard to come by.

Her technical skills, energy and humor are wonderful. But that's not why I wanted to write about her on this day.

It is Julie's dedication to community which is inspiring. She continually works to foster a real sense of community among the people who attend the user group. There's the usual call for people to raise their hands if they are looking for work or looking to hire (what I call the "Love Connection" portion of the meeting). She encourages people to get up and do their own presentations - both at our meetings and at regional code camps (we don't have a local one... yet). At our last meeting she took a moment to lead a discussion to brainstorm how we as members of the same community (professional and regional) might support one another in this period of economic uncertainty. Julie is sure to introduce you to people "you've got to meet."

Julie's dedication to community transcends our small state, by the way. She travels the world speaking at conferences and user groups. She participates on "women in technology" panels (perhaps inspiring an upcoming Ada Lovelace). She blogs and has written a book. Julie is one of those people that wants to see others succeed and will do a lot to try and help.

She's a true leader and an inspiration.

Wednesday, March 11, 2009

Debugging tips from the MSDN Roadshow

Yesterday I attended the VT leg of the MSDN Roadshow. While I was unable to stay for all the presentations I picked up some good tips during Jim O'Neil's overview of debugging with Visual Studio 2008 (with a preview of VS2010 debugging). As is my habit, I'm putting my notes from the session here so I don't lose them and so others might benefit from them.

But if you write bug free code you can stop reading here.

One thing that used to bother me was that if I had a line of code that chained or nested several method calls I would have to step into/out of all the methods until I got into the one I wanted. Apparently Visual Studio supports the ability to step into a specific function. I saw Jim do it yesterday, I can see the web sites describe it, but I can't find it in my environment. So it's something to research.

There is another thing I saw and can read about but can't find in my environment. Apparently there is a setting under Tools | Options | Debugging which allows you to step over property and operator calls by default. Again, something to research.

One thing I do see in my environment, and which I believe I can find use for, is the HitCount property of breakpoints. The breakpoint window displays this property for each breakpoint and should prove useful when trying to identify which iteration within a loop is causing a problem (for example).

Finally, Jim reminded me of the DebuggerDisplay attribute that we can use to decorate our classes to make them more user friendly in the debugger. Here's one overview of that attribute and another from Scott Hanselman.
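
If you haven't seen it, using the attribute is as simple as decorating a class. This is my own toy example, not one from the session:

using System.Collections.Generic;
using System.Diagnostics;

// Hovering over an Order in the debugger now shows something like
// "Order 42 (3 items)" instead of the bare type name.
[DebuggerDisplay("Order {Id} ({Items.Count} items)")]
public class Order
{
public int Id { get; set; }
public List<string> Items { get; set; }
}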

Love the free training...

Wednesday, March 4, 2009

I don't care what they say... 6 != 4

When writing a SQL query today (against SQL Server 2005) I came to realize I can't trust the len function. Well, ok, I can trust it but not if the value being evaluated has trailing white space. Take this code:

Declare @value char(10)
Set @value = ' 123  '
Print Len(@value)

Notice I set @value to ' 123  ' - that's [space][1][2][3][space][space]. Now even if I count like my 6 year old that's 6 characters. But what does the code above say? 4. Four?!?!? Lies!!!

And it lies to me whether I declare @value as a char, nchar, varchar or nvarchar data type. I can see it for varchar because it's a variable length data type. However, char is a fixed length data type so I was expecting either a length of 6 (I did, after all, assign a string literal with 6 characters) or 10, because @value was declared as a char(10).
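
For what it's worth, DATALENGTH is the more literal yardstick here - it reports the storage size in bytes, trailing spaces and all:

Declare @value char(10)
Set @value = ' 123  '
Print Len(@value) -- prints 4: trailing white space is ignored
Print DataLength(@value) -- prints 10: char(10) is padded to its full width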

So I don't know what to believe anymore.

Sunday, March 1, 2009

Ah-ha! I knew I never liked that feature

Part of the new job is re-familiarizing myself with ASP.NET development. I've done this before, obviously, notably at my last job writing web services. But it's been a few years since I've done web pages. So I'm reading Dino Esposito's book "Programming Microsoft ASP.NET 3.5". It's a thorough book; or at least it is so far - I'm only into chapter 2. But what a find I just had in chapter 2.

One of the things the developers at my last shop used to "debate" was whether to develop web applications and do development testing against IIS or the embedded local web server available to Visual Studio 2008. I was always an advocate for developing so the various web services I needed would be available via my box's IIS - I liked the fact I could have a web service available to me without having to include it in my solution. Other developers liked the fact they didn't have to maintain their IIS setup. Once we discovered a project can be set up to allow each developer to work as they prefer we stopped having these "discussions" (because updating the project file from source control no longer messed you up).

But I just read the definitive reason why I would advise people not to use the embedded web server. When your code runs using that server it assumes the credentials of the Windows account you're signed in as. For far too many of us that means our web code would be running as administrator on the box. This rocks for us as developers because we don't have to worry about all those pesky security issues we would otherwise encounter.

But we're really just kicking that can down the road. For me it comes down to addressing those security issues sooner rather than later. Because it's not like those security concerns aren't going to be raised when the code is put on the test server, or production, and by then it's more expensive to fix (not to mention embarrassing).

So I say:
<FrankensteinVoice>
Embedded web server -baaaaddddd
</FrankensteinVoice>

I hope you do, too.

Friday, February 20, 2009

Yes, I'm still here

I haven't posted anything in a little while because I've been caught up trying to settle into the new gig and establish myself here. So no great revelation of technology to write up. I did want to share this link of 10 things that bother developers. Funny and accurate.

As for the new gig - I've come to the conclusion that there must be many successful companies which have code bases which could benefit from a bit of cleanup. Luckily, I'm at one of those companies that recognizes that fact and is getting us positioned to do it.

I'm expecting to publish a couple of posts in the future which I originally wrote for an internal blog at my previous employer. I just need to scrub some of the identifiable information from them so as to protect the innocent.

Friday, January 30, 2009

Tool Review: RockScroll

Doing my fluffy-Friday reading I found this post on Scott Hanselman's blog about a tool named RockScroll that turns your VS2005/2008 scroll bar into a thumbnail view of the open document (when working with .cs files, anyway). It also has a nifty feature where if you double click a word in the source code, say a variable name, it will highlight all occurrences of that word in a lovely lilac color (and red in the thumbnail scroll bar - which is the useful bit, IMO).

Anyway, I'm digging this new tool and hope you might also find it useful.

Thursday, January 22, 2009

XmlCsvReader

I found a new tool yesterday that saved me from having to type up some sample XML. In preparing for an XSLT demo I wanted to use some sample data returned from a stored procedure in our database. I was able to take those results and get them into a .CSV file but needed to then get it into an XML format. The idea is the data may start out as this:

FirstName,LastName,Employer
Homer,Simpson,Springfield Nuclear Power Plant
Kermit,Frog,Muppet Theatre

but I want it to end up like this:

<People>
<Person>
<FirstName>Homer</FirstName>
<LastName>Simpson</LastName>
<Employer>Springfield Nuclear Power Plant</Employer>
</Person>
<Person>
<FirstName>Kermit</FirstName>
<LastName>Frog</LastName>
<Employer>Muppet Theatre</Employer>
</Person>
</People>


A quick search on Google brought me to this post about a tool called XmlCsvReader. It seemed to be just what I needed, but the link to download the tool was dead (still pointing to GotDotNet.com, which has been defunct for a while). Armed with the name of the project, though, I Googled again and found another post about it with a download that worked (three cheers to Andrew for not having a dead end!).

Syntax is easy and the only snags I hit were the result of QueryAnalyzer, excuse me, SQL Server Management Studio 2005, not being able to save sproc results to a CSV file correctly. I had to type in my column headers and wrap text values in double quotes if they had commas.

Wednesday, January 21, 2009

Keeping Current

Recently my manager asked me for examples of things I do outside of work to stay current in the development world. It seemed like a good topic for a blog post - so here goes. Essentially I do three things to stay current: participate in the local .NET user group, attend the regional Code Camp (when possible) and read blogs/books.

Local .NET User Group
I’ve been involved in the local VTdotNET user group for about 4 years now. It is held monthly and covers a variety of topics related to .NET development. In the past I presented some “newbie” sessions for the user group. I found the exercise of putting a presentation together to be a good way to force myself to learn a topic in more depth. As a side note, when I would be preparing a presentation I would always do a dry run of it in-house for my co-workers as a tech chat or lunch and learn. Doing that led to questions I hadn’t considered and resulted in a more thorough presentation.

Another benefit to participating in our local .NET user group has been the number of books I’ve won as door prizes. The books I have which have familiarized me with WCF and LINQ were door prizes. The book I’m currently skimming is “Programming Microsoft ASP.NET 3.5” put out by Microsoft Press.

I should also mention one of my new co-workers runs the VT SQL Server user group and I've committed to him that I would begin attending more regularly once they get going again.

Code Camp
Twice a year there is a regional Code Camp held in Waltham, MA. Code camps are free events, held outside of work hours, that are run by the development community for the development community (here’s a link to the Code Camp Manifesto). I’ve been attending these for about 4 years and they are marvelous opportunities to learn. Although the event itself is free the fact it’s held in Waltham, MA means most people need to cover travel expenses. I grew up in that area, though, so I stay with family which makes it an easy decision for me. Here are links to the last presentation schedules for Code Camp 8, Code Camp 9 and Code Camp 10 if you want a sense of the topics covered.

Reading
Finally I read to stay current. In addition to the books I’ve mentioned earlier there are a few blogs I follow via RSS. Chris Bowen is our regional Microsoft developer evangelist. His blog is how I learn about upcoming code camps and MSDN road shows. Julie Lerman runs our local .NET user group. Lately her blog has focused mainly on Entity Framework (EF), but that’s because she’s in the midst of writing a book on the topic for O’Reilly. While I’m not that interested in EF she also links to other topics (plus following her blog gives us something to talk about socially at the local user group meeting). I also follow the Software by Rob blog. It’s not really a technical blog but he writes on our industry in general. While Ted Dziuba’s blog is technical his topics rarely align with what I do. He has an extremely funny writing style, though, and the nuggets of information I get are usually pretty good.

Other
There are other things I do to stay current such as viewing web casts from Microsoft, the PolymorphicPodcast or other sources but it’s difficult to find dedicated time for watching/listening. Having said that, however, the new job is going to require me to get back up to speed on ASP.NET programming and these are resources which I'm sure I'll rely on more heavily than the past couple of years.

Friday, January 16, 2009

Fun with progress bars

You could be forgiven for spending too much time on this site:
http://www.prettyloaded.com/

Wednesday, January 14, 2009

Speed up your string comparisons

One of the first software engineery things I was tasked with at the new job was to do a code review for a peer. It was a good opportunity to look at some of our code. One of the things that stood out to me, however, was the way string comparisons were being performed. I saw a lot of this:

if(stringVariable1.ToUpper() == stringVariable2.ToUpper()) { ... }

One of my comments back on the code review was it might be better to do this:

if(stringVariable1.Equals( stringVariable2, StringComparison.OrdinalIgnoreCase)) { ... }

My peer agreed it might be a good idea but said the coding standard in the shop was the first approach. Not having a second software engineery thing to look at yet, I figured I would do a quick benchmark between the two approaches. I also decided to include a comparison using .ToLower() just to be fair, since that was in the code also.

To perform the benchmark test I used the SimpleTimer class Bill Wert outlined on his blog.

The test itself was just a console application that builds a List<string> with the number of entries identified by the person running the test. So the tester can enter 1 to whatever long.MaxValue is. I fill that List with the appropriate number of Guid.NewGuid().ToString() values. Then I time how long it takes to invoke an if using the different comparisons.

The code for all this is below, but I'm all about getting to the results...

The Results
I started small with a sample size of only 8,000:

ToUpper
8000 iterations took 0.008 seconds, resulting in 1012736.624 iterations per second

ToLower
8000 iterations took 0.007 seconds, resulting in 1199852.721 iterations per second

Equals
8000 iterations took 0.001 seconds, resulting in 9669591.016 iterations per second

Not bad. Already we see the .Equals method performs faster than changing the case and doing the == thing. I decide to skip going for a medium size test and go right for big.

Here are the results using a sample size of 8,000,000:

ToUpper
8000000 iterations took 4.608 seconds, resulting in 1736263.497 iterations per second

ToLower
8000000 iterations took 4.187 seconds, resulting in 1910494.970 iterations per second

Equals
8000000 iterations took 0.318 seconds, resulting in 25181978.355 iterations per second

Oh sure, the test takes longer - but I think it was worth it. In case you missed it - 3 tenths of a second is less than 4.6 seconds.

I'm going to see if we can get that coding standard changed...

The Code
Sorry the syntax highlighting is missing but the blog editor isn't good about that. I've changed the color on the comments but you'll want to paste this into the IDE of your choice if you want the whole enchilada.
using System;
using System.Collections.Generic;

namespace StringComparisonPerformance
{
    class Program
    {
        static void Main(string[] args)
        {
            const long defaultSampleSize = 10;
            long testRecordCount = defaultSampleSize;

            Console.WriteLine("Enter sample size");
            string inputValue = Console.ReadLine();

            if (string.IsNullOrEmpty(inputValue))
            {
                Console.WriteLine("No value provided, using default sample size of {0}", testRecordCount.ToString());
            }
            else
            {
                if (!long.TryParse(inputValue, out testRecordCount))
                {
                    testRecordCount = defaultSampleSize;
                    Console.WriteLine("Unrecognized sample size provided. Using default sample size of {0}", testRecordCount.ToString());
                }
            }

            List<string> testValues = GetTestValues(testRecordCount);
            string comparisonValue = testValues[0].ToUpper(); //some value (index 0 so a sample size of 1 still works)

            SimpleTimer timer = new SimpleTimer();

            Console.WriteLine();
            Console.WriteLine();
            Console.WriteLine("ToUpper");
            timer.StartTimer();
            ToUpperTest(comparisonValue, testValues);
            timer.StopTimer();
            timer.Result(testRecordCount);

            //now recast all the test values to upper case to account for the fact
            //GetTestValues returns lower case values. This just ensures the test
            //is fair between ToUpper and ToLower
            for (int i = 0; i < testValues.Count; i++)
                testValues[i] = testValues[i].ToUpper();

            Console.WriteLine();
            Console.WriteLine();
            Console.WriteLine("ToLower");
            timer.StartTimer();
            ToLowerTest(comparisonValue, testValues);
            timer.StopTimer();
            timer.Result(testRecordCount);

            Console.WriteLine();
            Console.WriteLine();
            Console.WriteLine("Equals");
            timer.StartTimer();
            EqualsTest(comparisonValue, testValues);
            timer.StopTimer();
            timer.Result(testRecordCount);

            Console.ReadLine();
        }

        //builds the list of test values; Guid.ToString() produces lower case strings
        static List<string> GetTestValues(long testLength)
        {
            List<string> testValues = new List<string>();

            for (long i = 0; i < testLength; i++)
                testValues.Add(Guid.NewGuid().ToString());

            return testValues;
        }

        static void ToUpperTest(string comparisonValue, List<string> testValues)
        {
            foreach (string testValue in testValues)
                if (testValue.ToUpper() == comparisonValue.ToUpper()) { }
        }

        static void ToLowerTest(string comparisonValue, List<string> testValues)
        {
            foreach (string testValue in testValues)
                if (testValue.ToLower() == comparisonValue.ToLower()) { }
        }

        static void EqualsTest(string comparisonValue, List<string> testValues)
        {
            foreach (string testValue in testValues)
                if (testValue.Equals(comparisonValue, StringComparison.OrdinalIgnoreCase)) { }
        }
    }
}
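One note on running this: the SimpleTimer class comes from Bill Wert's post linked above, so it isn't reproduced here. If you just want something that compiles, a minimal stand-in built on System.Diagnostics.Stopwatch - my own sketch, which only assumes the StartTimer/StopTimer/Result member names used above, not Bill's actual implementation - could look like this:

using System;
using System.Diagnostics;

namespace StringComparisonPerformance
{
    //minimal stand-in for Bill Wert's SimpleTimer; only mimics the members used above
    class SimpleTimer
    {
        private readonly Stopwatch stopwatch = new Stopwatch();

        public void StartTimer()
        {
            stopwatch.Reset();
            stopwatch.Start();
        }

        public void StopTimer()
        {
            stopwatch.Stop();
        }

        //report elapsed time and the iterations-per-second rate for the given count
        public void Result(long iterations)
        {
            double seconds = stopwatch.Elapsed.TotalSeconds;
            Console.WriteLine("{0} iterations took {1:F3} seconds, resulting in {2:F3} iterations per second",
                iterations, seconds, iterations / seconds);
        }
    }
}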

Tuesday, January 13, 2009

VTdotNET January 2009 Summary

The January VTdotNET meeting was heavily attended (38 people by my count) and with good reason. The presentation by Mario Cardinal was "Best Practices to Design a Modular Architecture". The main thrust of his presentation was that we as developers and architects need to do a better job of designing our solutions to be compositions of solution models. The idea in his mind is to not have our applications be one "big ball of mud" but a series of "smaller, loosely coupled balls of mud."

Nothing revolutionary there, I admit. But he did a better job than most of defining his terms and offering some best practices.

Terms
Mario started by describing his view of architecture as the process of abstracting solutions and experiences into different models. The intent is to identify the various areas of an application which can be their own module. The goal is always to simplify the application design. The example he gave was that if a builder has to meet a requirement that a person standing in a room should be able to view the outside, the preferred design would be to put a window in the wall (simple and meets the need). A bad design would be to introduce a series of gears and supports which would hoist the wall up like a garage door. Sure, it meets the need, but it is overly complex.

So what, in his mind, is a module? He identifies a module as something which has certain attributes:
  • Role - this describes the responsibility it performs in the system
  • Seam - the visible or public interface of the module. This is more than just the API, however, as I describe later.
  • Body - the hidden design parameters and implementation
  • Test Bed - this determines how well the module works in an autonomous way without running the whole system. Based on what I saw it consists of the unit tests and various mock objects required by those unit tests.
Mario's point is we as an industry overly emphasize the Body and need to focus more precisely on the Role, Seam and Test Bed of a module. To illustrate he discussed that in his view a module could be a System, Layer or Class.

The point Mario wanted us to take away from this discussion was that all serious mistakes are made the first day:
  • The most dangerous assumptions are the unstated ones
  • The design must ensure a module has only one reason to change - the Single Responsibility Principle
  • We need to group elements that are strongly related to each other while separating elements that are unrelated or have a conflict of interest
He was asked how to identify a module's responsibility. He suggested starting with the business logic layer and stripping away concerns of infrastructure such as persistence, logging and the like.

As I previously mentioned, Mario's discussion of modules were intended to apply equally to a System, Layer or Class. He defined each as:
  • System - An autonomous processing unit which defines a business boundary. Services or applications are the usual examples of a system.
  • Layer - the parts of a service or application
  • Class - the units of programming which comprise a layer
A major aspect of understanding a module, therefore, is the concept of a Seam. A Seam has three parts (a small code sketch follows the list):
  • List of operations - the API
  • Expected behaviors - what it's supposed to do. These become the core of understanding what tests can/should be written for the module. Use examples rather than formal statements (such as UML expressions). It is important to keep in mind we're talking about tests for the module so if the module is a layer the scope and nature of the testing will be different than testing for a class.
  • Constraints - logic or requirements that define the prerequisites. Examples of these would be preconditions such as input validation or other guard conditions.
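To make those three parts concrete, here is my own small C# sketch - an illustration I'm adding, not something from Mario's slides. The interface is the list of operations, the comments capture an expected behavior, and the guard clauses enforce the constraints; everything else is Body hidden behind the Seam.

using System;
using System.Collections.Generic;

//the Seam of a hypothetical inventory module (names are mine, purely illustrative)
public interface IInventory
{
    //list of operations: the API that callers see
    //expected behavior: reserving more than is on hand must fail rather than oversell
    void Reserve(string sku, int quantity);
    int OnHand(string sku);
}

//the Body: implementation details hidden behind the Seam
public class Inventory : IInventory
{
    private readonly Dictionary<string, int> stock = new Dictionary<string, int>();

    public void Reserve(string sku, int quantity)
    {
        //constraints: guard conditions that define the prerequisites
        if (string.IsNullOrEmpty(sku)) throw new ArgumentException("sku is required", "sku");
        if (quantity <= 0) throw new ArgumentOutOfRangeException("quantity");
        if (OnHand(sku) < quantity) throw new InvalidOperationException("Insufficient stock");

        stock[sku] -= quantity;
    }

    public int OnHand(string sku)
    {
        int quantity;
        return stock.TryGetValue(sku, out quantity) ? quantity : 0;
    }
}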
Best Practice Recommendations
The best practices which Mario offered centered around doing the hard part first. So what's the hard part? Getting the interfaces correct. He believes the greatest leverage in architecting is at the interfaces. So the suggestion is to focus on the Seams to ensure the interactions work properly and clearly. This focus allows the natural boundaries between modules to become self evident. Then the "big ball of mud" can be divided up naturally - much like a log is split along the grain.

Mario advocates starting with the System Seam. For applications this means the UI and for services this would be contract-first development. Ensuring the System Seam is correct is the best way to ensure the user ends up with the experience they desire. Because the system is being written to satisfy the user's need it's the natural place to start.

The process he recommends following for defining the System Seam is decidedly low-tech but aligns with the Agile approach. You start with defining the User Stories/Use Cases/User Scenarios. From there you create what he called low fidelity mock ups - what everyone else would call screen layouts on paper. Nothing is cheaper than drawing on paper or a white board and asking the user, "Is this what you mean?" Once the low fidelity mock ups are solid (after iterative reviews and revisions) you then build high fidelity mock ups - actual screens with just enough behavior implemented to define the user experience but nothing real beneath it.

Once the high fidelity mock up is defined and agreed upon, Mario claims the natural seam between the UI and the business layer will be revealed.

He makes two further recommendations that apply if the user identifies further changes once the high fidelity mock up is started. First, we should welcome those changes. It may feel like our hard work is being dismissed or put aside, but they are an opportunity to get closer to what the user actually wants and needs. Second, any changes should be defined and clarified by restarting with the low fidelity mock ups. For example, don't start moving buttons on a form and compiling - draw a picture of your understanding of the requested change. Remember, paper is cheap.

Other Observations
Part of the meeting got sidetracked because Mario was asserting it is foolish to commit to delivering a finished system on a particular date. He believes the software industry needs to get to a place where we are making rational commitments, so we should only commit to the next step in the development process. For example, he believes we can only commit to having mock ups by a certain date. Once the mock ups are defined and agreed upon, we can begin discussing when the next stage of module delivery might be accomplished. Some of us tried to bring the discussion back to reality - pointing out, for example, that sales folks can't sell screen mock ups - but Mario was insistent that the issue is our industry needs to change expectations. He likened it to someone saying, "Here's a bunch of money to cure cancer. When will you have that done?"

I don't think I'll hold my breath for that, though.

Monday, January 12, 2009

Clone Detective for Visual Studio

A friend of mine recommended this tool. I've not played with it yet but didn't want to lose the link - it's http://www.codeplex.com/CloneDetectiveVS

He says it's a little utility delivered as a non-intrusive VS 2008 add-in that helps identify duplicate code with an intuitive UI. He finds it handy for refactoring code to be more concise and uses it to remove the “copy/paste” pattern he finds in the legacy product on which he works.

Additional C# Code Snippets for VS2005

Something you might find useful when coding in C# is the way code snippets can increase your productivity and provide guidance/reminders on syntax. If you like snippets, you might want to grab the additional snippets published here.

Monday, January 5, 2009

Day 1 of the new adventure

So today is my first day at the new job. There hasn't been much structure to my day, simply getting my account information and installing software. It made me think about the tools I simply MUST have as a developer. There are 3 that I installed right off the bat which I figured might be worth sharing.

7-Zip: This is the zip utility I use. 7-Zip is an open source project so it's free (always nice). The UI is a simple integration with the context menus in Windows Explorer.

CmdHere PowerToy: This is a Microsoft power toy that adds a menu item to the context menu that is presented in Windows Explorer when you right click on a folder. The menu item allows you to quickly open a command window with a working directory of the folder on which you invoked the context menu. (I'll update this with a link once I find one).

Notepad++: Sometimes I just want to open a source code file and review its contents without spinning up an IDE with lots of overhead (I'm looking at you, Visual Studio). In the past I've used editors like TextPad, which I like, but it costs money. A little over a year ago I was turned on to an open source editor that I like well enough to start using, Notepad++. It is fast, gives me the syntax highlighting I want and has some nice tools baked into it.

There are other tools I like, and I continue to find new ones. But these were the first 3 to get installed and that made them seem noteworthy.