WiX project type in Visual Studio “Rosario”

I am very much looking forward to the release of Visual Studio Rosario. The list of major features is available here. On the list is the integration of WiX. Rob Mensching posted about this back in November. This really validates the work that Rob and his team have done (congratulations to them), and reinforces the position many of us adopted when we took the risk of developing our commercial products’ installers with the toolset.

Rob mentioned that code changes made by the Visual Studio team would be checked back into the WiX trunk. I was curious about the progress of WiX in Rosario, so I downloaded the November CTP VPC. I was a bit disappointed to see that, at the moment, it is just Votive installed as in Visual Studio 2008, but these are very early days.

I am hoping that the Visual Studio team will take WiX the same way they did FxCop and Code Analysis. I still think it should be called WiX in Visual Studio, but I would like it to be a first-class component rather than a latecomer tack-on. The difference between the integrated version and the SourceForge-hosted version should be very simple: the WiX project types should appear under Other Project Types > Setup and Deployment, and the product/upgrade GUIDs should differ.

Christopher Painter said there is a huge gap in the authoring/designer/editing tools for WiX, which is completely correct. Microsoft purchasing an existing tool and integrating it might be a good way to go, but I think Microsoft already has code, mostly from the current Visual Studio Setup projects, for the critical features that are required.

1. Automatic file reference generation based on project outputs

Microsoft already has the ability to add project outputs to the current Setup Project. Generating the WiX XML for these file outputs should be a comparably small task with a huge gain. With web projects, manually maintaining the file list is the biggest complaint I get about WiX from the development team. I know automating this has issues with the component rules, but there was talk a while ago of a solution that could be implemented using a component catalog database. This should also include the automatic Detected Dependencies.

2. Forms designer

The current WPF forms designer is a live dual view: design surface and XAML. Add the Windows Installer dialog controls to the toolbox and modify the XML generation engine.

3. Bootstrapper integration

This is already available via the GenerateBootstrapper MSBuild task. Add the interface, as in a Setup Project or ClickOnce, for configuration and prerequisite selection. It would be a nice addition to both WiX and VS Setup projects if selecting bootstrapper prerequisites also set the corresponding launch conditions on the MSI.

4. File System, Registry, File Types, User Interface Sequence, Custom Actions and Launch Conditions Editors

All of these existing editors could be made to support WiX generation. This would be a huge step forward for WiX becoming mainstream. Even though developers are not afraid to modify XML such as WiX source, doing advanced things correctly still comes down to understanding Windows Installer, whether or not you have nice designers and authoring tools. However, the current set of editors has allowed many developers to author satisfactory installers without needing to know what is going on underneath.

As a developer, I understand it is easy for someone on the outside to look in and say, "it should be quick and easy", but that is not often the reality. This is just my wish list. I do realise these are very early stages, and only more good will come of WiX with some full-time developers on it. Exciting times ahead; other installation tool companies had better watch out!


WiX is Free… almost

Christopher Painter recently posted an interesting WiX post, WiX: Forced to use Beta Software, and now has a follow-up cleverly titled Pay No Attention To The Bugs Behind the Curtain. These are very good arguments that have been brought up, but I think there is another aspect being slightly overlooked: WiX is free open source software.

My experiences with open source projects have generally come with the issues that Christopher has been raising: lack of support, unknown release quality, and unknown version compatibility. If Microsoft itself, and not just its employees on their own time, built WiX, many of these concerns might go away.

Being open source is also its strength. It is completely free, if you have the time. If WiX were a commercial product, it would have great difficulty competing against the current major MSI authoring tools. If you come across any showstopper bugs, you can fix them yourself, although this can be a time-consuming, expensive process. If it does not suit, you can make it suit. You just need to look at what SharpDevelop did to the WiX MSBuild target for WiX v2 to support fragments in their IDE. The difference is that the developer remains in control. Whenever you use a third-party product, you are depending on them to support and fix any issues you find in a timely manner. This does not always happen and can leave you stranded.

Being forced to use beta software is a bit of an embellishment. I completely agree that as a setup developer you are stuck if you want to use Votive. How do I get around this? I do not use it. WiX v2 is great; Votive v2 is not. As long as you can hit a button and create a ready-to-ship MSI, that is all that matters. I simply have an empty C# project that contains the list of WiX files. In the MSBuild script for the project, I run the command line tools to generate my installer with WiX. I keep the WiX release I used to develop that installer checked into source control and referenced by the MSBuild task. You do not need to be concerned about which WiX release you are using, as long as it creates the MSI you need for your project.
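As a sketch of that setup (the target name, file names and tool paths here are hypothetical; candle.exe and light.exe are the WiX compiler and linker), the MSBuild script can simply invoke the command line tools from the copy of WiX checked into source control:

```xml
<!-- Hypothetical MSBuild target: build the MSI with the WiX command
     line tools kept in source control alongside the project. -->
<Target Name="BuildInstaller">
  <!-- Compile the WiX source files into object files. -->
  <Exec Command="..\tools\wix\candle.exe Product.wxs UI.wxs" />
  <!-- Link the object files into the final MSI. -->
  <Exec Command="..\tools\wix\light.exe -out Product.msi Product.wixobj UI.wixobj" />
</Target>
```

Because the tool paths point into the repository rather than an installed SDK, every developer and build machine uses exactly the same WiX release.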

This only becomes an issue when you want to take advantage of new features or you come across a bug that you cannot work around. Here you have two options: see if it has been fixed in a later release, or fix it yourself. If you use a later release, then you have to put the MSI you have created through a complete quality assurance process. This is like any commercial product. If you fix it yourself, it will cost you time, but hopefully you will be able to estimate the timeframe required. This is not like any commercial product. For example, we have recently come across a showstopper bug in the .NET Framework 2.0 SP1. This is preventing us from moving forward to .NET 3.5. The fix is known and has been raised with Microsoft, but we need to wait for their process to get a fix. (A recent update informs us there will not be a hotfix release because a workaround is available, which may not help us, because the code is in a third-party control.)

I am using WiX v2 and do not intend to move to WiX v3 until the schema is stabilised and it is given the go-ahead. It would be too expensive for me to develop my installers in v3 and have to modify them heavily to work in later releases of v3. There are already plenty of modifications required to move from v2 to v3, so I only want to have to do this once. Having all the developers focused on v3 is an issue, since support for v2 falls short. The mailing list alleviates this, although many responses are "you can do this in v3", which is of no help. I would like to move to v3 for the great new features and the integration with Visual Studio, but this is not something I can do while the business depends on it. It was a risk moving to WiX v2 while it was still unfinished. I had to choose a weekly release and work with that, until I hit a bug that was fixed in a later weekly release.

Now that WiX (Votive specifically) is being developed within the Visual Studio Rosario team, hopefully releases will be better supported, of higher quality, and more version compatible. It is unlikely that I will move to WiX v3 for any commercial project before a release candidate of Rosario is available.


Performance Improvements

During initial development it is often fruitless to take performance into consideration. Focusing on good design, readability and maintainability is far more important. When performance improvements are looked at too early, time is usually spent in areas where little actual difference is made. Obviously inefficient code should always be avoided, but running performance testing at the end of the development cycle is a much more effective way of making gains.

Profiling tools are what make this approach to performance improvement possible. I was a little disappointed in the Visual Studio 2008 Profiler. It is an alright start, but just knowing which functions are taking the most time doesn’t always make it obvious what is slowing things down, especially when framework classes are included in the analysis. ANTS Profiler felt light, but had the features that made the task very simple. It can be set to profile only classes you have code for, and gives you the number of times each line is run and the time spent on each line. With this information it is very easy to see the offending code. Armed with the right tools, there were three performance improvement techniques that aided me in my last bottleneck hunt.

Firstly, combining loops. Avoid iterating over the same list twice. This one can sometimes be hard to see if the code is not well maintained. Surprisingly, it is also often hard to refactor without introducing errors, and does not give a great return on investment. This is certainly an inefficiency that should be avoided during initial development.
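As a minimal sketch of this first technique (the list of order totals here is hypothetical), two passes over the same list can usually be merged into one:

```csharp
using System;
using System.Collections.Generic;

class LoopCombining
{
    static void Main()
    {
        // Hypothetical data: a list of order totals.
        List<double> totals = new List<double> { 10.0, 25.0, 5.0 };

        // Before: two separate passes over the same list.
        double sum = 0;
        foreach (double total in totals) { sum += total; }
        double max = double.MinValue;
        foreach (double total in totals) { if (total > max) max = total; }

        // After: one combined pass does both jobs.
        double combinedSum = 0, combinedMax = double.MinValue;
        foreach (double total in totals)
        {
            combinedSum += total;
            if (total > combinedMax) combinedMax = total;
        }

        // Both approaches give the same result.
        Console.WriteLine(combinedSum == sum && combinedMax == max);
    }
}
```

Whether the saving matters depends on the cost of the loop body and the size of the list; for cheap bodies the readability cost of merging may outweigh the gain, which is why this is best left to a profiler-driven pass.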

Secondly, avoid unnecessary exceptions. If you know a certain piece of code may throw an exception, test for that condition first if possible. Exception handling is far more expensive than doing a conditional check. This is one that you should not be concerned about while coding; exception handling is usually a more elegant way to write error handling code, especially if the exception is not expected to happen often. Avoid using try/catch blocks within loops. The particular instance I found with the profiler was inside a nested loop over rows and columns, throwing and catching an exception on almost every iteration. Removing it improved the performance greatly.
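A small illustration of the difference (the cell values here are hypothetical): testing the condition first with int.TryParse, instead of catching the FormatException thrown by int.Parse, avoids raising an exception on almost every iteration:

```csharp
using System;

class AvoidExceptionsInLoops
{
    static void Main()
    {
        // Hypothetical cell values: some numeric, some not.
        string[] cells = { "12", "n/a", "7", "", "3" };

        // Slow: an exception is thrown and caught for every bad cell.
        int slowSum = 0;
        foreach (string cell in cells)
        {
            try { slowSum += int.Parse(cell); }
            catch (FormatException) { /* skip non-numeric cells */ }
        }

        // Fast: test the condition first; no exceptions are thrown.
        int fastSum = 0;
        foreach (string cell in cells)
        {
            int value;
            if (int.TryParse(cell, out value))
            {
                fastSum += value;
            }
        }

        // Same result, without the exception cost.
        Console.WriteLine(fastSum == slowSum);
    }
}
```

The throwing version is dramatically slower when bad values are common, because constructing and unwinding an exception costs orders of magnitude more than a boolean check.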

Thirdly, cache expensive calls. This is another improvement that is often not seen during development but, when it causes an issue, is highlighted by a profiler and easy to make. Calls to some functions or properties may do more work than you expect, like reading from the database. If these calls are made within a loop, make the call once outside the loop and store the result in a local variable. If the call is dependent on the index of the loop, this can be more difficult. A simple way around this is to use a dictionary. For example, consider the code:

foreach (Student student in students)
{
    Console.WriteLine(RoomName(student.RoomId));
}

The profiler shows the function RoomName taking a long time, since it actually performs a database query. Replacing it with a dictionary lookup results in:

Dictionary<int, string> roomsLookup = new Dictionary<int, string>();
foreach (Student student in students)
{
    if (!roomsLookup.ContainsKey(student.RoomId))
    {
        roomsLookup.Add(student.RoomId, RoomName(student.RoomId));
    }
    Console.WriteLine(roomsLookup[student.RoomId]);
}

This ensures that the expensive function is only called the minimum required number of times, and is otherwise replaced with a quick dictionary lookup. For more advanced performance improvements, read the Microsoft patterns & practices guides.


Use your Desktop as a Desktop

I just read Confessions of a Desktop Neat Freak over on Channel 10, which highlights the feature in Windows where you can hide the icons on the desktop. This is all well and good, but as Larry said himself in the post, it is just like sweeping the mess under the carpet. Although this makes it look clean, so that you can see your nice desktop background picture, what real benefit does it give? It hasn’t really done anything to help you keep the desktop neat. Potentially it has made it worse by allowing the mess to grow bigger with an out-of-sight, out-of-mind attitude.

I propose to use your Windows desktop as you would your physical desktop. Typically, you don’t leave piles of paper right in front of you on your desk. You file papers away, either into a filing cabinet, piles on your desk, or bin. Accessible if required, but not interfering with your current task. Things you do leave on your desk are items you use all the time, like a pen and phone. These are equivalent to your desktop shortcuts to commonly used applications. In Windows though, pinning these to the start menu can be more effective, since they can be seen quickly without minimising your current work.

While you are working on a task, it often requires a set of documents, reference material, drafts and other random things being brought together. In development, when building a new feature or debugging some code, I often have a few code files, test scripts, saved web pages or PDFs for reference, and other miscellaneous files that are all useful for that task. These files I just dump straight to the desktop, since it is a clear and ready workspace. Once I have finished that particular task, I sort the files. Some get deleted, others get filed safely away.

The desktop is a very convenient workspace if it is kept clean for that purpose. Files can be grouped into sections easily to help better tackle the task. BumpTop takes this further to really make the computer desktop like a real desktop. I work so that if there are any files on my desktop, it means I am in the middle of a task. I do the same with my email: anything in the inbox needs action taken against it. If an email has been dealt with, it is filed or deleted. This all helps me stay neat, organised and effective at my tasks. I understand that some people’s physical desktops get more out of hand than their Windows desktop, and in the end, what’s wrong with a messy desk?


WiX Installation for Excel Add-In

Using WiX for installation development provides a simple way to quickly build installers, while maintaining the power to extend to the most difficult deployment scenarios. For the deployment requirements of an Excel add-in, you should read Deploying Application-Level Add-ins on the MSDN site.

Firstly, you need to register your COM add-in. This can be done from the command line using the RegSvr32 executable. In WiX, all that is required is:

<File Id="Addin_Dll" Name="Addin.dll" Source="Addin.dll" KeyPath="yes" >
    <Class Id="{CLSID UUID}" Context="InprocServer32" Description="Addin" ThreadingModel="apartment" >
        <ProgId Id="Addin.Connect" Description="Connect Class" />
    </Class>
</File>

Addin.dll here is a C++ COM component; a .NET assembly exposed via COM interop can be registered similarly. The required registry keys are very simple to add with WiX:

<Registry Root="HKLM" Key="Software\Microsoft\Office\Excel\Addins\Addin.Connect" 
          Name="Description" Value="Description for Addin.Connect" Type="string" />
<Registry Root="HKLM" Key="Software\Microsoft\Office\Excel\Addins\Addin.Connect" 
          Name="FriendlyName" Value="Addin.Connect Friendly Name" Type="string" />
<Registry Root="HKLM" Key="Software\Microsoft\Office\Excel\Addins\Addin.Connect" 
          Name="LoadBehavior" Value="3" Type="integer" />

To register the add-in for just the current user on the computer, simply change the Root value to HKCU. It is also prudent to add a pre-installation condition that Excel is installed on the target machine:

<Property Id="P_EXCEL11INSTALLED">
    <RegistrySearch Id="SearchExcel11" Type="raw"
        Root="HKLM" Key="SOFTWARE\Microsoft\Office\11.0\Excel\InstallRoot" Name="Path" />
</Property>
<Condition Message="You must have Microsoft Office Excel 2003 installed to use this product.">
    P_EXCEL11INSTALLED
</Condition>

Adding these few blocks to your standard installation is all that is required in WiX to deploy an Office Excel Add-In.

VB.NET New Line Character in a String

Normally I work with C#, but recently I have had to do some VB.NET development. Today I came upon the issue of a new line character in a string. In C# this is no issue:

string.Format("Line 1: {0}\nLine 2: {1}", str1, str2);

In VB.NET this becomes a little awkward, as you have many options for a new line character:

VB Constants: vbNewLine, vbCrLf
Character Function: Chr(13)
VB Control Chars: ControlChars.NewLine, ControlChars.CrLf
Environment Variable: Environment.NewLine

Using Environment.NewLine would be the recommended practice, as it returns an environment-specific string. In practice, however, it comes down to constants versus variables. As suggested on various forums, you can replace the \n in your string with one of the options above. This results in either:

String.Format("Line 1: {0}" + Environment.NewLine + "Line 2: {1}", str1, str2)  
String.Format("Line 1: {0}{1}Line 2: {2}", str1, Environment.NewLine, str2)

Neither of these options satisfied me. Since I wanted to use the string in a resource file, that ruled out the first way, and the second is clumsy and error prone. There is a very simple solution to this issue, though: in the resource editor, while editing the string, press Shift+Enter to insert a new line.

If you don’t have a resource file setup in your project, make sure you have Refactor! for VB.NET installed. This is free as I mentioned in my last post. Place your cursor on the string and press the Refactor! Shortcut Key Ctrl+~. Select Extract String to Resource. This will move the string literal to the resource file and replace it in the code with a call to the resource. Name the resource string appropriately, and adjust your string in the resource editor. You can now use Shift+Enter to add a new line in your string, while also using a better programming practice with resource files.


Refactoring Bad Code

The code I have been refactoring has been causing me a bit of pain, as I hinted in my last post. I have refactored plenty of good and bad code before. This time however, I headed off in the wrong direction too quickly. Before long, I had myself tangled and had to revert to the original code and start again. Before I get into my approach, let’s review the tools.

Refactoring is not something that should be done by hand, as there are very good tools available. I usually make heavy use of the stock ones in Visual Studio Team System – Software Developer Edition. I am using VS 2008, but not much in the refactoring seems to have changed since VS 2005. Unfortunately, these are not available for VB.NET, but Microsoft does recommend using Refactor! for VB.NET, which is available for free. I am fortunate to have access to the full versions of Refactor! Pro and ReSharper. Even so, I have actually uninstalled both of them and just use the VS built-in refactorings. Whenever you see a smart tag, Shift+Alt+F10 is your friend.

Refactor! Pro I find has a very clean interface and interaction. Their principle of no modal dialogs works very well to avoid jumping to the mouse. ReSharper has dynamic compilation as you type and appears to have much smarter refactorings. On the negatives, Refactor! Pro has a bit of a delay before giving you the context menu of available refactorings, and nothing feels as advanced as ReSharper. ReSharper, though, is buggy, crashes often (itself and VS), uses huge amounts of memory and slows down VS greatly. I feel it attempts to do a little too much, hijacking IntelliSense, and it cripples VS when uninstalled. Additionally, due to the dependence you develop on these tools, I find myself feeling crippled when I have to work on another machine without them. I do have Refactor! for VB.NET installed, since there is no VS alternative for VB and it is free, so I can install it on any machine I am working on. Although, if I have a choice, I would not opt for programming in VB. John Papa has an old but still relevant comparison if you want to read more.

Armed with Refactor! for VB.NET, my approach was what I normally do, and it usually works well: find obvious blocks of code that can easily be pulled out using Extract Method. Doing this aids in understanding the code, and well-named methods become self-commenting code. In this function there were a few For Each loops, some repeated. Extracting the contents of a loop into methods, so that its start and finish can be seen on the one screen, can often reveal possible optimisations that are not easily seen otherwise. While extracting a method in the initial state of this code, I had seven parameters, half of them being refs and the rest being outs. This makes for an extremely error-prone function and makes the code more of a mess than it already is. Extracting methods initially failed in this code due to the complete misuse of local variables.

The next Refactor! command to come to the rescue in this case was Move Declaration Near Reference. At the top of this function all the variables used (and not used) were declared. Not only that, but after a variable was used, instead of creating a new variable, it was just reassigned a value not dependent on its previous value. This creates an unnecessary dependency, requiring an Extract Method attempt to return the value even though it is not really used after the method. As in the example below, extracting the first loop requires iRow and iSum to be returned.

Dim iRow As Integer
Dim iSum As Integer

For Each iRow In Table.Rows
    iSum += 1
Next
iSum = 0
For Each iRow In Table.Rows
    iSum += 2
Next

Reducing the scope of the variables to the smallest required segments these two tasks. Before performing an Extract Method, this code should be refactored to:

Dim iSum As Integer
For Each iRow As Integer In Table.Rows
    iSum += 1
Next
Dim iSum As Integer = 0
For Each iRow As Integer In Table.Rows
    iSum += 2
Next

This will raise a warning that Local variable ‘iSum’ is already declared in the current block. This will be fine once the code blocks have been extracted to methods. Declaring the variable again allows the refactoring tools to know that the variable is not required after the method. Meticulously performing this a great number of times to reduce the scope of the variables, and to determine when a new declaration was required, enabled one particular method extraction to go from having a return value, four ref and three out parameters, to being a subroutine with no return value and no parameters.

In the example above, you can also see the possibility of combining the two loops. This could also be performed on the code I was working on, but was not immediately obvious until the contents of the loops were safely extracted. When refactoring, be careful not to automatically assume the previous programmer did not know what they were doing. There must be a reason why certain things were done. Do not just delete a block of code because at first appearance it seems unnecessary; it is likely you do not fully understand what it is doing or attempting to do. Reviewing code after it is written can reveal a great number of improvements that cannot easily be seen while you are in the process of writing it. It may just be an instance where the code built up over many iterations, accumulating technical debt, and was never reviewed as a whole, allowing for great improvements with a little refactoring.


Buyers Beware

In the build versus buy debate, which has many different considerations for software development companies, I generally lean towards the buy option. The cost to develop, support and maintain your own component code is often far greater than buying the component if available. Purchasing the code for a complete application with the intent to extend has completely different arguments and implications.

One of the first tasks was to write a new installer. The reason for the new installer was to implement it using WiX so that we can have full control over the installation and bring it in line with the rest of our application installers. I have built many installers and once I had the information I needed, this task went smoothly. I will talk more about installation creation with WiX soon.

This particular application is an Excel add-in, written in VB.NET. One of the issues raised has been the poor performance of the add-in: doing a refresh from the data source takes almost 10 seconds. The consultant who raised the issue wrote some VBA code to attempt to determine whether the bottleneck was the add-in or the data source. The VBA code returned the same data set almost instantaneously. Of course, the add-in does more on a refresh than the VBA code did, but the difference is unacceptable.

It was time to dive into the code. Running performance analysis over the application quickly pointed to the offending method. Unfortunately, the method was over 700 functional lines long! Before I could make performance improvements, I needed to understand what the code was doing. Before I could understand what the code was doing, I needed to refactor the method into manageable-sized methods. With the wide scope and reuse of variables, and the lack of unit tests, this became a problematic task. Performing a complexity analysis over the method gave it the lowest maintainability index possible. However, it has ended well today, with the refresh now taking under 2 seconds!

In following posts, I will be talking about the Installation Creation, Refactoring and Performance Improvements undertaken on this code.

Parse TimeSpan String

On one of my side projects, I needed users to be able to enter a time estimate. The TimeSpan.Parse method is of limited usefulness for converting a string to a TimeSpan due to the strict requirements on the string format. From the documentation:

The s parameter contains a time interval specification of the form:

[ws][-]{ d | [d.]hh:mm[:ss[.ff]] }[ws]

Items in square brackets ([ and ]) are optional; one selection from the list of alternatives enclosed in braces ({ and }) and separated by vertical bars (|) is required; colons and periods (: and .) are literal characters and required; other items are as follows.

ws: optional white space
"-": optional minus sign indicating a negative TimeSpan
d: days, ranging from 0 to 10675199
hh: hours, ranging from 0 to 23
mm: minutes, ranging from 0 to 59
ss: optional seconds, ranging from 0 to 59
ff: optional fractional seconds, consisting of 1 to 7 decimal digits

The components of s must collectively specify a time interval greater than or equal to MinValue and less than or equal to MaxValue.

This is fine, but not easy to train a user to use. What I require is for a user to be able to enter an estimated time in a simple, free-form way that makes sense to them. I would like automatic conversion between units, so that if the user enters 180 minutes, it is parsed to 3 hours. I would like to be able to configure whether 1 day is equal to 24 hours or an 8 hour work day, and to configure the default unit used if none is specified by the user. Input should be of the format:

\s*(?<quantity>\d+)\s*(?<unit>((d(ays?)?)|(h((ours?)|(rs?))?)|(m((inutes?)|(ins?))?)|(s((econds?)|(ecs?))?)|\Z))+

Using values (removing milliseconds) from the TimeSpan Parse examples:

String to Parse                            TimeSpan
0                                          00:00:00
1h2m3s                                     01:02:03
180mins                                    03:00:00
10 days 20 hours 30 minutes 40 seconds     10.20:30:40
99 d 23 h 59 m 59 s                        99.23:59:59
23hrs59mins59secs                          23:59:59
24 hours                                   1.00:00:00
60 min                                     01:00:00
60 sec                                     00:01:00
10                                         10:00:00 (if hours is default unit)

If .NET 3.5 extension methods supported static extension methods, I would add the method public static TimeSpan ParseFreeForm(static TimeSpan timeSpan, string s) to the TimeSpan class. This would allow TimeSpan.ParseFreeForm to be seen in IntelliSense next to the built-in Parse method, which I think is a logical place with higher visibility than a utility class. There are obviously arguments for and against this, but I’m not going to get into that now. Since extension methods only allow new instance methods, it does not make sense to create a new instance of a TimeSpan just to parse a string and return a new TimeSpan. Therefore I created the utility method:

public static TimeSpan ParseTimeSpan(string s)
{
    const string Quantity = "quantity";
    const string Unit = "unit";

    const string Days = @"(d(ays?)?)";
    const string Hours = @"(h((ours?)|(rs?))?)";
    const string Minutes = @"(m((inutes?)|(ins?))?)";
    const string Seconds = @"(s((econds?)|(ecs?))?)";

    Regex timeSpanRegex = new Regex(
        string.Format(@"\s*(?<{0}>\d+)\s*(?<{1}>({2}|{3}|{4}|{5}|\Z))",
                      Quantity, Unit, Days, Hours, Minutes, Seconds), 
                      RegexOptions.IgnoreCase);
    MatchCollection matches = timeSpanRegex.Matches(s);

    TimeSpan ts = new TimeSpan();
    foreach (Match match in matches)
    {
        if (Regex.IsMatch(match.Groups[Unit].Value, @"\A" + Days))
        {
            ts = ts.Add(TimeSpan.FromDays(double.Parse(match.Groups[Quantity].Value)));
        }
        else if (Regex.IsMatch(match.Groups[Unit].Value, Hours))
        {
            ts = ts.Add(TimeSpan.FromHours(double.Parse(match.Groups[Quantity].Value)));
        }
        else if (Regex.IsMatch(match.Groups[Unit].Value, Minutes))
        {
            ts = ts.Add(TimeSpan.FromMinutes(double.Parse(match.Groups[Quantity].Value)));
        }
        else if (Regex.IsMatch(match.Groups[Unit].Value, Seconds))
        {
            ts = ts.Add(TimeSpan.FromSeconds(double.Parse(match.Groups[Quantity].Value)));
        }
        else
        {
            // Quantity given but no unit, default to Hours
            ts = ts.Add(TimeSpan.FromHours(double.Parse(match.Groups[Quantity].Value)));
        }
    }
    return ts;
}

To modify the hours in a day, when a match is made on the Day unit, TimeSpan.FromHours(quantity * hoursInDay) is all that is required. The hoursInDay value could be passed as a parameter or set as an application configuration value. The structure of this solution also makes it easy to extend to other units, such as weeks.
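A minimal sketch of that modification, using a hypothetical ParseDays helper (the hoursInDay parameter stands in for the configuration value; only the day-matching branch is shown):

```csharp
using System;
using System.Text.RegularExpressions;

class WorkDayTimeSpan
{
    // Sketch: treat a "day" as a configurable number of hours
    // (e.g. an 8 hour work day instead of 24). The hoursInDay
    // parameter is hypothetical; it could come from configuration.
    public static TimeSpan ParseDays(string s, double hoursInDay)
    {
        Match match = Regex.Match(s, @"\s*(?<quantity>\d+)\s*d(ays?)?",
                                  RegexOptions.IgnoreCase);
        if (!match.Success)
        {
            return TimeSpan.Zero;
        }
        double quantity = double.Parse(match.Groups["quantity"].Value);
        // Convert days through hours rather than TimeSpan.FromDays,
        // so the length of a "day" is configurable.
        return TimeSpan.FromHours(quantity * hoursInDay);
    }

    static void Main()
    {
        Console.WriteLine(ParseDays("2 days", 8));  // 16:00:00
        Console.WriteLine(ParseDays("2 days", 24)); // 2.00:00:00
    }
}
```

The same substitution slots into the Days branch of ParseTimeSpan above: replace TimeSpan.FromDays(quantity) with TimeSpan.FromHours(quantity * hoursInDay).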