Friday, October 30, 2009

Design and Code Review Checklist - scoping

Here is my scoping/access modifier cheat sheet.


  • Public class - access is not restricted.
  • Private class - only valid for nested classes.
  • Internal class - only visible within the containing assembly.


  • Private member - only accessible to the containing type.
  • Protected member - accessible to the containing type and anything derived from it.
  • Internal member - accessible within the current assembly.
  • Protected internal member - accessible within the current assembly or from anything derived from the containing type.
  • Public member - access is not restricted.

Valid Member Access Modifiers:

  • Enum - members are always public (no access modifier allowed)
  • Interface - members are always public (no access modifier allowed)
  • Class - public, private, internal, protected, protected internal
  • Struct - public, private, internal
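A tiny runnable sketch may make the class-level rules concrete; the Outer and Hidden names here are invented for the example. It shows that a private class is only legal as a nested type, while the containing type can still use it freely:

```csharp
using System;

public class Outer
{
    // Private classes are only valid as nested types; Hidden is
    // invisible to any code outside of Outer.
    private class Hidden
    {
        public int Value = 42;
    }

    // Outer itself can use Hidden and expose the result publicly.
    public static int GetValue()
    {
        return new Hidden().Value;
    }
}

public static class Program
{
    public static void Main()
    {
        // Outer.Hidden is not accessible here; only Outer's public surface is.
        Console.WriteLine(Outer.GetValue()); // prints 42
    }
}
```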


  • Abstract class - cannot be instantiated. May have abstract and non-abstract methods. A derived class must implement all abstract methods.
  • Sealed class - cannot be inherited.
  • Virtual method - may be overridden.
  • Abstract method - no implementation; must be overridden in a derived class.
  • Sealed method - cannot be overridden.
  • Sealed override method - an override that may no longer be overridden further, e.g. public sealed override void MyMeth().
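These modifiers are easiest to see in code. Here is a minimal sketch (the Shape and Circle types are invented for illustration):

```csharp
using System;

public abstract class Shape
{
    // Abstract method: no implementation; a concrete derived class must override it.
    public abstract string Describe();

    // Virtual method: has an implementation, but derived classes may override it.
    public virtual string Name() { return "shape"; }
}

public class Circle : Shape
{
    // A sealed override satisfies the abstract contract and prevents
    // classes derived from Circle from overriding Describe again.
    public sealed override string Describe() { return "a round " + Name(); }

    public override string Name() { return "circle"; }
}

public static class Program
{
    public static void Main()
    {
        // new Shape() would not compile: abstract classes cannot be instantiated.
        Shape s = new Circle();
        Console.WriteLine(s.Describe()); // prints "a round circle"
    }
}
```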



Design and Code Review – Look at the code.

Many things need to be considered when we do a code review.

Is the code logically correct?

Are we following best practices?

Are we considering security?

Did we run code analysis? If you have a Team or Enterprise developer edition, code analysis can be found in the Analyze menu.


Is the code maintainable?

Is our code unnecessarily complex? I always favor simplicity until forced to do otherwise.

Is the error handling effective?

Will our code perform? I tend to assume it will until proven otherwise.

Are our objects loosely coupled?

Did we follow our coding standards? IDesign has a pretty good C# and WCF coding standard freely available on its website (right-hand side).

Wednesday, October 21, 2009

Intertech R&D: Share your thoughts!

James White, who oversees R&D at Intertech, is conducting a survey to help define our 2010 R&D plan. If you complete the survey, you'll be eligible to save $250 on any Intertech course.  Intertech R&D Survey.

I have been an Intertech employee for just over a year, but I have been an Intertech customer since about 1998.  I started by learning COM + ATL, most recently completed the WPF course, and have taken about 7 other classes in between.  I have been writing software professionally since 1993, and I still take every chance I get to step into the classroom and keep my skills sharp.  The courses are terrific and the instructors are the best I have ever seen.  I highly recommend Intertech training.

Tuesday, October 20, 2009

A look at test planning with Microsoft Test and Lab Manager. Part 3, Executing manual tests.

In part three of my series on using Microsoft Test and Lab Manager we are going to take a look at running and analyzing test results.


Let’s first look at the right hand side of the screen again. 

Notice that we have to run a test for each test case and configuration combination, so even though we defined 2 test cases, we need to execute 5 tests because each test case must be run on more than one configuration.

The tests are grouped by results; in our case we have one test at the ready state, 3 completed tests, and 1 test that has been marked as not ready. 

There is a cool visual indicator at the top of the grid so we can quickly assess the state of the test plan.

Finally we have some quick tools that allow us to block, unblock, and reset a test to a ready state.  This would be useful if we determine a test is flawed and we want to block the execution of the test until we have a chance to fix it.


If we click on the analyze test runs section we get the following screen.  There is a summary section at the top, the middle is an overview, and the bottom is a fully editable grid view of the test results.


By clicking on the my bugs section we get a list of TFS work items that are assigned to us.


Moving on to the track tab we see the screen where we can assign the build.   When we assign the build we get a list of work items that have been fixed and need to be verified.  


If we move on to the recommended tests section we can select a build and look at the impact the code changes have on testing. We will discuss this in the test impact analysis section.


A look at test planning with Microsoft Test and Lab Manager. Part 5, Test Impact Analysis

In part five of my series on using Microsoft Test and Lab Manager we are going to take a look at test impact analysis.

I find the test impact analysis feature to be one of the most exciting new features of VSTS.  As you read through this blog entry, keep in mind that it is based on Beta 1 (Beta 2 has since been released to MSDN subscribers).

From a high level, following are the steps we need to take to enable test impact analysis.

• Add the test project to source control.

• Create the build definition.

• Make sure “Analyze Test Impacts” in the Process tab is set to “True”.

• Queue a new build.

• Make sure the “Test Impact Collector” option is set in test settings file.

• Run the tests in the test project (this collects the test impact data)

• Change the code.

• Run a new build. (the test impact analysis runs)

• Test manager – Assign the new build

• Test Manager - Track/Recommended tasks

Here is a very good walk through:

Once we get things lined up there are a couple places we can see the recommended tests:

1) In the test and lab manager we can see the impacted tests in the Testing Center area - Track tab under the recommended tests section.  By comparing the build in use to the previous build, VSTS recommends what tests to run based on the code changes.


2) In Visual Studio the developer can use the test impact view to get a list of tests that will be impacted by a code change before the change is checked in.  The idea here is rather than running all unit tests, the developer can run a subset (the impacted tests) of the tests and be confident that the change is safe.


3) The build report will include an interactive list of tests that were impacted by the code changes in the build.  Anybody who gets the build report can see a list of all tests impacted by the changes in the build.


The new test impact analysis is a must-use feature.  As a former development manager, I can’t tell you how many times I was forced to decide how to hot fix a bug while minimizing the possibility of breaking anything, with nothing to go on but gut feel and experience.  This new feature gives the developer a way to fully understand the potential impact of a change before making it.  It also gives a clear set of tests to perform to ensure that the released code is of the highest quality possible.

Saturday, October 10, 2009

A couple basic WPF databinding examples.

The databinding options available to us in WPF are numerous and powerful.  We can bind a UI element to an object, a list, another UI element, a collection, an ADO.Net object and the list goes on and on.  You can bind declaratively or programmatically.  You can bind one way, two way, or one time. 

Let’s take a look at just a couple of these databinding options.  Say we are creating a user preference page and from this page we want to allow the user to specify the foreground color of a textbox by typing an ARGB number in a textbox.  Further, we want the color of the text in the textbox to represent the color that was typed into the textbox.  We are trying to bind the foreground property of the textbox to the value of the textbox’s Text property.

Here is what our app looks like:


Here is the code that will pull that off:

Code Snippet

<Window x:Class="WpfSelfBinding.Window1"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    Title="Window1" Height="300" Width="300">
    <Grid>
        <Label Height="28" Margin="12,47,0,0" Name="label1" VerticalAlignment="Top"
               HorizontalAlignment="Left" Width="100">Text Color</Label>
        <TextBox x:Name="tbColor" Height="23" Margin="110,49,48,0" VerticalAlignment="Top"
                 Foreground="{Binding Text,RelativeSource={RelativeSource Self}}" >
        </TextBox>
    </Grid>
</Window>

The key bit of XAML we need to focus on is the textbox’s Foreground property:

Foreground="{Binding Text,RelativeSource={RelativeSource Self}}"

This binding tells WPF to bind the TextBox’s Foreground property to the value of its own Text property.  If you run the app you will notice that as you change the value of the textbox, the color of the text also changes.

Now, I can’t get even close to the color I want by typing ARGB values, so let’s make the user experience a little more pleasant by giving the user the option of selecting the color from a color picker.  WPF does not have a color picker dialog, so we will have to use the Windows Forms library.  We’ll modify the app a little bit and add a button that will launch the color picker and populate the textbox with the result.

Here is the new UI:


The new XAML:

Code Snippet

<Window x:Class="WpfSelfBinding.Window1"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    Title="Window1" Height="300" Width="300">
    <Grid>
        <Label Height="28" Margin="12,47,0,0" Name="label1" VerticalAlignment="Top"
               HorizontalAlignment="Left" Width="100">Text Color</Label>
        <TextBox x:Name="tbColor" Height="23" Margin="110,49,48,0" VerticalAlignment="Top"
                 Foreground="{Binding Text,RelativeSource={RelativeSource Self}}" >
        </TextBox>
        <Button x:Name="btnColor" Height="23" HorizontalAlignment="Right" Margin="0,49,24,0"
                VerticalAlignment="Top" Width="18" Click="btnColor_Click">...</Button>
    </Grid>
</Window>

And the code behind:

Code Snippet

using System.Windows;
using System.Windows.Media;

namespace WpfSelfBinding
{
    /// <summary>
    /// Interaction logic for Window1.xaml
    /// </summary>
    public partial class Window1 : Window
    {
        public Window1()
        {
            InitializeComponent();
        }

        private void btnColor_Click(object sender, RoutedEventArgs e)
        {
            System.Windows.Forms.ColorDialog colorDialog = new System.Windows.Forms.ColorDialog();
            System.Windows.Forms.DialogResult result = colorDialog.ShowDialog();
            if (result == System.Windows.Forms.DialogResult.OK)
            {
                tbColor.Text = SysDrawColorWinMediaColor(colorDialog.Color).ToString();
            }
        }

        // Convert a System.Drawing.Color to a System.Windows.Media.Color
        // by unpacking the packed ARGB integer.
        private Color SysDrawColorWinMediaColor(System.Drawing.Color color)
        {
            int i = color.ToArgb();
            byte a = (byte)((i >> 24) & 255);
            byte r = (byte)((i >> 16) & 255);
            byte g = (byte)((i >> 8) & 255);
            byte b = (byte)(i & 255);
            return Color.FromArgb(a, r, g, b);
        }
    }
}

Two things are worth noting in the code.  Rather than importing the System.Windows.Forms namespace, we just use the fully qualified class name.  This is because there are several conflicts between WPF classes and Windows Forms classes in the System.Windows.Forms namespace.  The second thing we have to do is convert the color returned by the color picker (System.Drawing.Color) to a System.Windows.Media color.  We use bit shifting and a bit mask to pull this off in the SysDrawColorWinMediaColor function.
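The bit arithmetic is easy to verify outside of WPF.  This standalone sketch (ArgbDemo is a made-up helper, not part of the app) unpacks a packed 32-bit ARGB value of the kind ToArgb returns:

```csharp
using System;

public static class ArgbDemo
{
    // Unpack a packed 32-bit ARGB integer (the layout System.Drawing.Color.ToArgb
    // produces: alpha in the high byte, then red, green, blue).
    public static byte[] Unpack(int argb)
    {
        return new byte[]
        {
            (byte)((argb >> 24) & 255), // A
            (byte)((argb >> 16) & 255), // R
            (byte)((argb >> 8) & 255),  // G
            (byte)(argb & 255)          // B
        };
    }

    public static void Main()
    {
        // Opaque red is 0xFFFF0000: A=255, R=255, G=0, B=0.
        byte[] c = Unpack(unchecked((int)0xFFFF0000));
        Console.WriteLine(c[0] + "," + c[1] + "," + c[2] + "," + c[3]); // prints 255,255,0,0
    }
}
```

Note that System.Drawing.Color also exposes A, R, G, and B byte properties directly, so Color.FromArgb(color.A, color.R, color.G, color.B) would avoid the shifting entirely.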

Rather than using the color dialog result to directly set the Text property of the textbox, let’s say we want to populate an object (so we can persist the selection to a database or isolated storage).  It would be cool if we could bind directly to this class.

The UI looks exactly the same but here is our new class

Code Snippet

using System.ComponentModel;

namespace WpfSelfBinding
{
    class BindableColorClass : INotifyPropertyChanged
    {
        private string textColor = "#FF0F0FF0";

        public event PropertyChangedEventHandler PropertyChanged;

        protected void OnPropertyChanged(string name)
        {
            PropertyChangedEventHandler handler = PropertyChanged;
            if (handler != null)
            {
                handler(this, new PropertyChangedEventArgs(name));
            }
        }

        public string TextColor
        {
            get { return textColor; }
            set
            {
                if (textColor != value)
                {
                    textColor = value;
                    OnPropertyChanged("TextColor");
                }
            }
        }
    }
}

The key here is that our class implements INotifyPropertyChanged.  We raise the PropertyChanged event when the TextColor property changes.  This event is what wires our class into WPF’s Data Change Notification mechanism.
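The notification mechanism can be exercised without any WPF involved.  This sketch (NotifyDemo and the subscriber code are hypothetical) uses the same pattern as BindableColorClass and shows that the guard in the setter suppresses redundant notifications:

```csharp
using System;
using System.ComponentModel;

// Same pattern as BindableColorClass: raise PropertyChanged only
// when the value actually changes.
public class NotifyDemo : INotifyPropertyChanged
{
    private string textColor = "#FF0F0FF0";

    public event PropertyChangedEventHandler PropertyChanged;

    public string TextColor
    {
        get { return textColor; }
        set
        {
            if (textColor != value)
            {
                textColor = value;
                PropertyChangedEventHandler handler = PropertyChanged;
                if (handler != null)
                {
                    handler(this, new PropertyChangedEventArgs("TextColor"));
                }
            }
        }
    }
}

public static class Program
{
    public static void Main()
    {
        NotifyDemo demo = new NotifyDemo();
        int raised = 0;
        demo.PropertyChanged += (s, e) => { raised++; };

        demo.TextColor = "#FF000000"; // value changed -> event fires
        demo.TextColor = "#FF000000"; // same value -> guard suppresses the event
        Console.WriteLine(raised); // prints 1
    }
}
```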

Here is the new Window1 code:

Code Snippet

using System.Windows;
using System.Windows.Media;

namespace WpfSelfBinding
{
    /// <summary>
    /// Interaction logic for Window1.xaml
    /// </summary>
    public partial class Window1 : Window
    {
        BindableColorClass colorClass = new BindableColorClass();

        public Window1()
        {
            InitializeComponent();
            tbColor.DataContext = this.colorClass;
        }

        private void btnColor_Click(object sender, RoutedEventArgs e)
        {
            System.Windows.Forms.ColorDialog colorDialog = new System.Windows.Forms.ColorDialog();
            System.Windows.Forms.DialogResult result = colorDialog.ShowDialog();
            if (result == System.Windows.Forms.DialogResult.OK)
            {
                colorClass.TextColor = SysDrawColorWinMediaColor(colorDialog.Color).ToString();
            }
        }

        // Convert a System.Drawing.Color to a System.Windows.Media.Color
        // by unpacking the packed ARGB integer.
        private Color SysDrawColorWinMediaColor(System.Drawing.Color color)
        {
            int i = color.ToArgb();
            byte a = (byte)((i >> 24) & 255);
            byte r = (byte)((i >> 16) & 255);
            byte g = (byte)((i >> 8) & 255);
            byte b = (byte)(i & 255);
            return Color.FromArgb(a, r, g, b);
        }
    }
}

There are a couple of things to pay particular attention to here.  First, we have a class variable referencing an instance of our BindableColorClass.  Next, notice that in our btnColor_Click handler we update the TextColor of the BindableColorClass, NOT the textbox’s Text value.  Finally, notice that in the constructor we set the textbox’s DataContext to our instance of BindableColorClass.

Here is the new XAML:

Code Snippet

<Window x:Class="WpfSelfBinding.Window1"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    Title="Window1" Height="300" Width="300">
    <Grid>
        <Label Height="28" Margin="12,47,0,0" Name="label1" VerticalAlignment="Top"
               HorizontalAlignment="Left" Width="100">Text Color</Label>
        <TextBox x:Name="tbColor" Height="23" Margin="110,49,48,0" VerticalAlignment="Top"
                 Foreground="{Binding Text,RelativeSource={RelativeSource Self}}" >
            <TextBox.Text>
                <Binding Path="TextColor" Mode="TwoWay"/>
            </TextBox.Text>
        </TextBox>
        <Button x:Name="btnColor" Height="23" HorizontalAlignment="Right" Margin="0,49,24,0"
                VerticalAlignment="Top" Width="18" Click="btnColor_Click">...</Button>
    </Grid>
</Window>

The XAML we want to focus on:

    <Binding Path="TextColor" Mode="TwoWay"/>

This tells WPF to bind the TextBox.Text property to the “TextColor” property.  The “TextColor” property of what??  Our Foreground binding uses a RelativeSource to tell WPF which element’s Text property to bind to, but we didn’t specify a source here.  Since we didn’t set the Source property, WPF is going to look for a DataContext, which we DO set in our code-behind.

tbColor.DataContext = this.colorClass;

WPF knows to bind the textbox’s Text property to the TextColor property of the window’s colorClass instance.

The intention is to persist the BindableColorClass eventually, so we need the TextColor property to be changeable either from the color picker or by typing the ARGB color into the TextBox.  To pull this off, all we have to do is set the binding Mode to TwoWay.  Set a breakpoint on the TextColor getter and setter to see when they fire.  If you are typing the ARGB color into the textbox by hand, you will need to tab out of the textbox for the change to take effect.  This is the default textbox binding behavior.
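If you would rather have the source update on every keystroke instead of on focus loss, WPF’s UpdateSourceTrigger setting covers this; a sketch of the modified binding:

```xml
<TextBox.Text>
    <Binding Path="TextColor" Mode="TwoWay" UpdateSourceTrigger="PropertyChanged"/>
</TextBox.Text>
```

With this in place the TextColor setter fires as you type rather than when the textbox loses focus.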

Saturday, October 3, 2009

A look at test planning with Microsoft Test and Lab Manager. Part 2, test planning.

In part two of my series on using Microsoft Test and Lab Manager we are going to take a look at test planning. 

The Testing Center Area

From a high level the testing center is where we create and execute our test plans, track our testing progress, assign builds, and determine our recommended tests.

Plan Tab


In the contents view of the plan tab we have a split screen.  The left hand side is used for managing and organizing requirements.  The right hand side is for managing test cases.  Let’s drill into some other plan concepts before we explore the plan contents screen.

Plan - Properties


Each test plan has some properties associated with it.  These properties include the test plan name and description, the TFS project it is associated with, some default test settings and configurations, build information, and some tracking info such as owner, state, and dates.

If we drill into the test settings we see the following:


The general tab is used to name and describe the settings as well as set some options regarding how the tests will be run.


The data and diagnostics tab is where we configure what we want to collect during a test run.

Action recording and log is used to record and play back the test as well as capture a textual description of what is happening during the test.  Have a look at this log; the descriptions are good.  We could record an exploratory-type test and then copy and paste entries from this log into the steps of a more specific test case.

ASP.Net profiler is used to collect performance data during a test run.

Collecting code coverage data allows us to track how much of the code we are covering during our test runs.

Diagnostic trace collector captures events and call stack information for the historical debugger.

When we collect the event log information, any new event log entry created during a test run is attached to the test run.

Network emulation is used to emulate different network conditions such as T1, DSL, and dial-up.

By collecting the system information with the test run we are attaching the OS version, amount of RAM, CPU speed, and a handful of other details to our test results.

The test impact is a really cool feature.  If we collect this information during a test run, TFS can tie the test to the code paths that are being executed.  This information is later used during a build to allow TFS to actually suggest which tests to run, or at development time to tell developers which tests will be impacted by changes to code.  Important: we may not collect test impact data at the same time as we collect code coverage data.

Collecting the video recording will attach a video of exactly what the tester is doing during a test run.  Developers can later replay this video to see exactly how a bug was produced.

Plan – Set Plan Context


This is the screen we use to manage our list of test plans and set our plan context.  The list of plans in this screen is limited to all plans associated to the TFS project displayed in the title bar.  Select the plan we want to work with and click the set context icon.

Plan - Contents


We find ourselves back at the plan contents screen.  Remember, the context of this screen is determined by the plan we selected in the last step.  We’ll explore the left side of the screen first.  This is where we select and organize user stories (requirements) and ultimately associate them with the test cases on the right hand side.  I want to mention that Microsoft has introduced the notion of a suite.  A suite is a generic bucket used to build a hierarchy any way we want.  Think of suites as folders in Windows Explorer; they are just used to group and associate our user stories.

To start things out we need some user stories to exist in TFS.  If we bounce over to Team Explorer we can see an example of the default user story template.  The QA team is not typically responsible for entering user stories.


Once the user stories exist in TFS, we can click the add requirement button in the test and lab manager and we will get a list of the user stories associated to the team project.


If we select Import we can copy the user stories and test cases from an existing test plan.  This copy may include individual user stories, suites (groups of user stories), or all of the stories in the entire test plan.  There is even a nice feature that allows us to configure a query-based suite.  In a query-based suite, the test case list is dynamic: as test cases matching the query parameters are added to TFS, the list of test cases in the suite changes automatically.


Now let’s take a look at the right hand side of the plan contents screen.  This is the test case area.


From here we can add an existing test case.   The following screen is used to search our TFS project for existing test cases.


We can also add new test cases from the plan contents screen.  Following is the default test case work item template.  Notice at the bottom of the test case I have configured 3 steps.  From here I can also configure variables for data binding and insert shared steps.


If we click on the configurations button in the plan contents screen we get the following screen.  Here we configure which tests run against which configuration.  In my example I have 2 test cases.  The first is configured to run on a Vista machine with version 3.5 of the .NET Framework installed and on a Vista machine with version 4.0 of the .NET Framework installed.  The second runs against both of those configurations and additionally needs to run on a machine with Vista and IE7 installed.  The cool thing about this is that the configurations have a multiplier effect: when it comes time to execute these two test cases, 5 tests will be created, one for each test case and configuration combination.


The last screen I want to look at in this post is the “Assign Testers” screen.  To get here we click the assign button from the test contents screen.  This is where we assign TFS users to each individual test (a combination of a test case + a configuration).


Thursday, October 1, 2009

YAGNI – You ain’t gonna need it

The YAGNI principle states that programmers should not implement functionality until it is necessary.  It is a pretty simple principle but oddly enough it is a difficult principle to follow.

I’ll take a crack at what I think the upside of breaking this rule is.

  • It seems to me the biggest benefit to breaking this principle is that we can architect and design our code by taking into account both current and future features thus implementing a more robust design.
  • Another big argument is that by implementing this feature our software will be more useful.  If our software is more useful we may sell more licenses or we may be more productive.

Let’s look at the most obvious negatives of implementing a feature before you need it. This is a longer list.

  • If we don’t need it yet, maybe we should be spending our time and money implementing something we do need now.
  • If we implement a feature we should have some corresponding documentation describing the feature – this takes time.
  • If we implement something, we need to test it - every release - and this takes a lot of time.
  • When we add a feature in an agile/TDD shop, we need to write unit tests for the feature so we can automatically test it every release.  Writing the tests can take as long as or longer than writing the feature and anyone who has to run unit tests before a check in will probably agree that adding time to this automated process is not good.
  • When we add a feature we need to train the support crew and user base on how to use the feature.
  • A feature is hard enough to implement correctly when it is needed, how do we expect to implement it optimally when we are still guessing whether or not we need it?

A quick summary of the negatives:  We are spending time writing code for something we might need.  We are writing the feature the way we think we will need it in the future.  We are spending time and money documenting and testing a feature that we are guessing we will need and by the way we are also guessing how to best implement it.

If all of those (mostly financial) arguments aren’t enough to convince you, let’s take a look at some down sides from an architect or developer’s point of view.

If code is in a release you have to assume it is being used.  When it comes time to change the code you think you needed, you are going to spend time figuring out how to change it without breaking the (imagined?) user base.  This might involve migrating data, configurations, backup procedures, reports, integration workflows, etc.  When it comes time to change the code you really do need, you must consider all of the code.  This includes the code you don’t realize that you don’t need - if it is in production, how can you be confident in saying “oh, we really don’t need that code”?  This really stifles our ability to modernize our code.  Even if you do find a way to refactor the bloated code base, not only will you have to change the existing code but you will have to change the tests, documentation, and training materials.

Let the business analysts, user base, and marketing folks decide what features your product needs and when it needs it.  I would rather architect and design a system based on things that are known.  I don’t ever want to tell somebody that the system is the way it is because I thought we needed some functionality that turns out to be useless.  When we do a good job of designing and reviewing our system we can be confident that we can refactor our code to implement major functional changes when they are understood, needed, and the highest priority.
