git tfs pull command exited with error code: 128

Argh!! Another very unhelpful error message:


What am I supposed to do with that? Today I found out what: run the command again with -d for debug.


And there is the message that should have been included with the error code:

fatal: Unable to create 'C:/Code/Main/.git\tfs\default\index.lock': File exists. If no other git process is currently running, this probably means a git process crashed in this repository earlier. Make sure no other git process is running and remove the file manually to continue.

I can work with that. There were no other git processes running, but the file .git\tfs\default\index.lock did exist. Deleting that file got us further, this time with a very helpful error message:


Run the command: git tfs cleanup-workspaces


Looks good. Run the pull again and we are back in business.
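Scripted, the recovery amounts to the steps above. Here is a rough sketch (the lock path matches the error message; only delete the lock when you are sure no other git process is running):

```shell
#!/bin/sh
# Remove a stale git-tfs index.lock left behind by a crashed process.
# Only safe when no other git process is using the repository.
remove_stale_lock() {
    lock="$1/.git/tfs/default/index.lock"
    if [ -f "$lock" ]; then
        rm -f "$lock"
        echo "removed stale lock"
    else
        echo "no lock present"
    fi
}

# Then tidy up git-tfs's TFS workspaces and retry the pull:
#   git tfs cleanup-workspaces
#   git tfs pull
```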


TFS Local Workspace Limit

One of my colleagues hit this last week:

TF401190: The local workspace XXXXXX;XXXXXX has 112202 items in it,
which exceeds the recommended limit of 100000 items. To improve
performance, either reduce the number of items in the workspace,
or convert the workspace to a server workspace.

Fortunately he had a few older release branches he could remove. I didn’t know there was a recommended limit. I found this overview of local workspaces:

Local workspaces have scalability limitations due to their use of the local workspace scanner which checks for edited items. Local workspaces are recommended for most of our customers, because most workspaces fit into the “small” or “medium” category in our view – that is, they have fewer than 50,000 files and folders. If your workspace has more than 50,000 items, you may experience performance problems or TF400030 errors as operations exceed 45 seconds in duration. In this case, splitting your workspace up into multiple smaller workspaces (perhaps one workspace per branch), or switching to server workspaces is recommended.

Server workspaces are optimized for performance at large scale and can support workspaces with up to 10,000,000 items (provided your SQL Server hardware is sufficient).

It seems performance for local workspaces has been improved, or maybe the limit has been adjusted as average hardware has improved. I can tell you, though, that our SQL Server hardware is not sufficient, largely because TFS is using too much disk space.
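A quick way to see how close a workspace is to the limit is to count the items on disk. A rough sketch: it counts every file and folder under the given root, which is approximately what TFS counts:

```shell
#!/bin/sh
# Count files and folders under a workspace root to compare against
# the 100,000-item guidance for local workspaces.
count_items() {
    find "$1" | wc -l | tr -d ' '
}
```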


Increase TFS 2013 task board work item limit

Today one of our task boards hit this:

Board exceeded number of items allowed

No worries, we'll just follow the link and increase the limit. That link, however, takes you to Customize the Task Board Page for Visual Studio 2012. From previous experience I knew this would not apply to TFS 2013: the command there exports the agile process config, and in 2013 this has all been combined into the one process config. Looking through my process config I could not find the IterationBacklog element, so I ran the command anyway and got:

Warning: This command is obsolete. Use 'witadmin exportprocessconfig' instead.

In the 2013 process config, although there is no IterationBacklog element, there are PortfolioBacklog, RequirementBacklog and TaskBacklog elements. The same workItemCountLimit attribute still applies, and it goes on the TaskBacklog element. The details can be found in Configure and customize Agile planning tools for a team project.

The steps however are very simple:

  1. Export your process config
    witadmin exportprocessconfig /collection:http://tfs:8080/tfs/DefaultCollection /p:TeamProject /f:ProcessConfiguration.xml
  2. Add workItemCountLimit attribute to your TaskBacklog element

    <TaskBacklog category="Microsoft.TaskCategory" parent="Microsoft.RequirementCategory" pluralName="Tasks" singularName="Task" workItemCountLimit="800">
  3. Import your modified process config

    witadmin importprocessconfig /collection:http://tfs:8080/tfs/DefaultCollection /p:TeamProject /f:ProcessConfiguration.xml

Note that the default is 500 and the maximum allowed is 1500.
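Between steps 2 and 3, a quick grep can confirm the attribute actually landed on the TaskBacklog element before you import (file name from step 1):

```shell
#!/bin/sh
# Print the TaskBacklog opening tag so the workItemCountLimit
# attribute can be eyeballed before importing.
check_limit() {
    grep -o '<TaskBacklog[^>]*workItemCountLimit="[0-9]*"' "$1"
}
```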


Need Help: TFS tbl_Content Table and Database growth out of control

Recently our TFS database size has peaked at over 570 GB. Granted, we do have a lot of people working against it and use it fully for source control, work items and builds. We used to have this problem with tens of GB being added each week. The cause then, on TFS 2010, was the test attachments, and a run of the Test Attachment Cleaner would clean it up. Kinda. We found after a while that although the tables were reporting smaller, we needed two SQL Server hotfixes to allow the space to actually be freed. After that, though, hundreds of GB flowed free and things were good. These details are covered well in a post by Anutthara Bharadwaj.

We continued running the tool and then upgraded to TFS 2012 and were told (TFS 2010 SP1 Hotfix) the problem had now gone away. We stopped running the test attachment cleaner and later upgraded to where we are now on TFS 2013.


This year, however, our system administrator noticed we were running out of space again. Looking at the Disk Usage By Top Tables report, the tbl_Attachment table was not the problem this time. It was the tbl_Content table.



Grant Holliday's post tells us that the Content table holds the versioned files. In the forums there is this:

“If you have binary files, the deltafication of the files will add size to the table. For example, you might have 15 binary files and 1000 changes to the files – all that data needs to be stored somewhere.”

This got me to check out our source into a clean workspace and run Space Sniffer against it to spot whether anything big had been added. Our entire source is about 50 GB, so the total size isn't too far off. But Main is only 820 MB and the whole team is working in there. We have been doing lots, and 4 months ago it was 730 MB. We have many branches, but each should be less than a GB, and stored as deltas they should add next to nothing. Checking the tbl_Content table itself showed that the biggest rows were years old and no new large binaries have been added.

I then came across the comprehensive post by Terje Sandstrom. It also contained some queries for TFS 2012 to determine the attachment content size. Here's where it doesn't make sense: the table size from the disk usage report does not match up, whatsoever, with what these queries returned. And the query from Grant Holliday showing monthly usage again shows huge amounts of data (60 GB per month) in a table that is 680 MB. Which is correct, the SQL Server reports or the table queries?
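For the table-size side, a standard sp_spaceused check is one way to size the table directly (this is a generic approach, not the exact queries from those posts, and the collection database name here is an assumption):

```sql
-- Report rows, reserved, data and index size for the versioned file table.
USE Tfs_DefaultCollection;
EXEC sp_spaceused 'tbl_Content';
```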



I then ran the Test Attachment Cleaner in preview mode, and sure enough it said it would clean up GBs of files. So I ran it in delete mode against the TFS database. While the cleaner was running, the queries were showing the size in those tables dropping, and the disk usage report was showing drops in the tbl_Attachment table, albeit at a much, much smaller scale. The total database size, however, was unchanged, and the available space was getting smaller! On completion it said: Total size of attachments: 206030.5 MB!



The size reported by the database properties when I began was:


After cleaning apparently 200GB it is now:


Another suggestion was to delete old workspaces, which we have done. To my surprise this released about 5 GB from the content table. Git TFS can create a lot of workspaces.


Hence the problem. We are growing at up to about 5 GB per day. Our system administrator is doing an awesome job keeping up, but we need to know whether this growth is expected to continue, or whether there is something that can tell us how to use less space.

Update 3 April 2014: I have put the question on the forums.

Update 4 April 2014: Something has happened overnight. I'm assuming the workspace clean caused it. We now have 115 GB of space available! What's odd, though, is that the tbl_Content size has dropped by that amount. What does that table have to do with workspaces? Some insight into how this works, so we can manage our systems, would be appreciated.



Update 7 April 2014: More space has flowed free over the weekend without doing anything else. A shrink is in progress. Pretty crazy that 34% of the content size was local workspaces.




Update 8 April 2014: Shrink complete. Looking much better now. Notice, however, that the tbl_Content table has gone up a few gigabytes in a day, which is quite concerning.



Update 11 Apr 2014: By the end of the week, we have consumed around 5 GB in 3 days. I don't see any significant number of new workspaces created either, unless the build server running as TFS Service is creating them. I'm going to clean up TFS Service's workspaces, at the risk of breaking some builds, so that I can monitor them carefully. I now have 51 server workspaces for TFS Service, which does seem like a lot, but we do have 18 build agents and many build definitions.



Update 14 Apr 2014: I don't know what happened Friday and over the weekend. The content has grown 2 GB, but the database expanded a massive 60 GB, even though autogrowth is set to 1%. So a shrink is in progress.




Update 14 Apr 2014 #2: After the shrink it is back to the 2 GB growth for Friday and the weekend.


Update 16 Apr 2014: A couple of days growth.



Update 22 Apr 2014: Over the long weekend it has done exactly the same thing: expanded by another 60 GB. I'm not going to shrink this time. There are now 54 server workspaces for TFS Service, only 3 more than 11 days ago, so nothing extreme there. The data in tbl_Content has grown from 282,901,304 KB on 7 April to 301,318,640 KB: 17.5 gigabytes. Taking out weekends and public holidays, that is almost 2 GB a day.
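As a sanity check on that figure, the arithmetic from the report's raw numbers:

```shell
# Growth of tbl_Content data between 7 April and 22 April, in KB,
# then in whole GB (integer division truncates 17.56 down to 17).
echo $(( 301318640 - 282901304 ))
echo $(( (301318640 - 282901304) / 1048576 ))
```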



Update 12 May 2014: Usage has gone up consistently over the last couple of weeks. We have been shrinking regularly as it keeps expanding inappropriately. Here’s the current state:



I then disabled the CodeIndex and ran the delete commands as Anthony Diaz suggested in the comments:



It has removed about 500,000 records from the content table but only about 2 GB of data. We’ll see how it does for daily growth.


TF30063: You are not authorized to access Microsoft-IIS/7.5

This is the second time we have hit this error, so this time it was a quick fix. But if you don't know what it is, you might be led astray by the message. When we have received this error, it has had nothing to do with permissions. It typically occurs on a build failure:


The real reason is found in the Event Viewer on the TFS Web Server in all these Errors:

Event Viewer Errors

If you scroll through the jumbled Web Request Details within the event details you will see:

Exception Message: There is not enough space on the disk.
 (type IOException)
Exception Stack Trace:    at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
   at System.IO.FileStream.WriteCore(Byte[] buffer, Int32 offset, Int32 count)
   at Microsoft.TeamFoundation.Framework.Server.TeamFoundationFileService.InternalCopyTo(Stream source, Stream destination, Int32 bufferSize)
   at Microsoft.TeamFoundation.Framework.Server.TeamFoundationFileService.CopyStreamToTempStream(TeamFoundationRequestContext requestContext, Stream stream, Int64 compressedLength, CompressionType& compressionType, Boolean compressOutput, Boolean useFileStream)
   at Microsoft.TeamFoundation.Framework.Server.TeamFoundationFileService.RetrieveFile(TeamFoundationRequestContext requestContext, Int32 fileId, Boolean compressOutput, Byte[]& hashValue, Int64& contentLength, CompressionType& compressionType, String& fileName, Boolean failOnDeletedFile, Boolean returnNullIfDelta, Boolean forceFileStream, Boolean readIncompleteData)
   at Microsoft.TeamFoundation.Framework.Server.TeamFoundationFileService.RetrieveFile(TeamFoundationRequestContext requestContext, Int32 fileId, Boolean compressOutput, Byte[]& hashValue, Int64& contentLength, CompressionType& compressionType)
   at Microsoft.TeamFoundation.Server.Core.MidTierDownloadState.CacheMiss(FileCacheService fileCacheService, FileInformation fileInfo, Boolean compressOutput)
   at Microsoft.TeamFoundation.Server.Core.FileCacheService.RetrieveFileFromDatabase(TeamFoundationRequestContext requestContext, FileInformation fileInfo, IDownloadState downloadState, Boolean compressOutput, Stream databaseStream)
   at Microsoft.TeamFoundation.Server.Core.GenericDownloadHandler.DownloadFile(TeamFoundationRequestContext requestContext, DownloadContext downloadContext, HttpRequest request, HttpResponse response, HandleErrorDelegate errorDelegate)

The reason we ran out of disk space is IIS logging. Giant log files were being recorded for all requests to the TFS web services. Archiving (deleting, really ;)) these restored our space quickly.
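If you want to watch for this, summing the log files is easy to script. A sketch (the default IIS log location is %SystemDrive%\inetpub\logs\LogFiles; the path you pass in is up to you):

```shell
#!/bin/sh
# Total bytes of *.log files under a directory, to spot runaway IIS logs.
# e.g. log_bytes /c/inetpub/logs/LogFiles
log_bytes() {
    find "$1" -name '*.log' -exec cat {} + | wc -c | tr -d ' '
}
```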


New Work Item State on TFS Kanban Board

Continuing our Quest, we currently have the states To Do, In Progress and Done. We need to add another for when work goes into testing, so we will call that Boss Battle.

First we need to add the new state to the work item type. Export the Quest work item type:

witadmin exportwitd /collection:http://vsalm:8080/tfs/FabrikamFiberCollection /p:FabrikamFiber /n:Quest /f:%userprofile%\desktop\QuestWitd.xml

Add a new state under witd > WORKITEMTYPE > WORKFLOW > STATES:

<STATE value="Boss Battle"/>

Modify the transitions to go through the new state:

<TRANSITION from="In Progress" to="Done">
<TRANSITION from="In Progress" to="Boss Battle">

<TRANSITION from="Done" to="In Progress">
<TRANSITION from="Done" to="Boss Battle">
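Each TRANSITION element in the work item type definition also needs a REASONS child to validate. A minimal well-formed version of one of the transitions above (the reason value here is just an illustration):

```xml
<TRANSITION from="In Progress" to="Boss Battle">
  <REASONS>
    <DEFAULTREASON value="Ready for testing" />
  </REASONS>
</TRANSITION>
```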

In practice you might want to add more transitions around the new state; this is much easier using the TFS Power Tools Process Template Editor. The workflow now has the new state in place, but when that state is selected the work item will not appear on the board.

Export the process configuration definition:

witadmin exportprocessconfig /collection:http://vsalm:8080/tfs/FabrikamFiberCollection /p:FabrikamFiber /f:%userprofile%\desktop\ProcessConfiguration.xml

Under the TaskBacklog add a new InProgress state:

  <State type="Proposed" value="To Do" />
  <State type="InProgress" value="In Progress" />
  <State type="InProgress" value="Boss Battle" />
  <State type="Complete" value="Done" />

Import the process configuration definition:

witadmin importprocessconfig /collection:http://vsalm:8080/tfs/FabrikamFiberCollection /p:FabrikamFiber /f:%userprofile%\desktop\ProcessConfiguration.xml

You now have a new column on your iteration board:

New Column on Board


Rename Category for TFS 2013 Agile Portfolio Management

We originally came from a TFS 2008 CMMI work item template and have upgraded through 2010 and 2012. When we upgraded to TFS 2013 we were keen to use the Agile Portfolio Management features. After enabling them we ended up with the categories Features and Requirements. We find these category names are often used interchangeably, so having them mean distinctly different things was confusing.

Agile Portfolio Management Categories

By default, if you are on the Scrum template you get something far less confusing: Features and Backlog items. Initiatives can be added by following the Agile Portfolio Management: Using TFS to support backlogs across multiple teams guide.

Scrum Template Categories

So when it came time to use agile portfolio management, the hierarchy was not clear. Fortunately this is easily fixed. I assume the TFS team did not want to end up with another area like Team Projects that cannot be renamed. To begin, open the Developer Command Prompt, which has the paths configured for witadmin.
Developer Command Prompt for VS2013

The examples below were done on the Visual Studio 2013 ALM Virtual Machine provided by Brian Keller. I recommend anyone who is a TFS admin keep the appropriate Visual Studio ALM virtual machines for testing changes on, especially if you are managing a small team, since you are unlikely to have a staging environment and setting one up is too much work. Consider these your staging and test environments. With the error I got below, no one was affected while I worked out the solution.

Renaming Categories

  1. Export the process configuration definition to an xml file: 
    witadmin exportprocessconfig /collection:http://vsalm:8080/tfs/FabrikamFiberCollection /p:FabrikamFiber /f:%userprofile%\desktop\ProcessConfiguration.xml 
  2. Modify the export xml setting the name to something more appropriate:

    <!-- Feature to Saga -->
    <PortfolioBacklog category="Microsoft.FeatureCategory" pluralName="Features" singularName="Feature">
    <PortfolioBacklog category="Microsoft.FeatureCategory" pluralName="Sagas" singularName="Saga">
    <!-- Backlog Item to Journey -->
    <RequirementBacklog category="Microsoft.RequirementCategory" parent="Microsoft.FeatureCategory" pluralName="Backlog items" singularName="Product Backlog Item">
    <RequirementBacklog category="Microsoft.RequirementCategory" parent="Microsoft.FeatureCategory" pluralName="Journeys" singularName="Journey">
    <!-- Task to Quest -->
    <TaskBacklog category="Microsoft.TaskCategory" parent="Microsoft.RequirementCategory" pluralName="Tasks" singularName="Task">
    <TaskBacklog category="Microsoft.TaskCategory" parent="Microsoft.RequirementCategory" pluralName="Quests" singularName="Quest">
  3. Import the process configuration definition file: 
    witadmin importprocessconfig /collection:http://vsalm:8080/tfs/FabrikamFiberCollection /p:FabrikamFiber /f:%userprofile%\desktop\ProcessConfiguration.xml 

Refreshing the browser after renaming the categories, my URL was wrong since the category no longer exists, but I got this very good message:

Most comforting error message ever.

After messing with the process configuration, getting an error message containing the line “Don’t worry, the system is not broken” is very comforting.

Renaming Work Item Types

This appears easy, with just one command.

Rename Feature to Saga:

witadmin renamewitd /collection:http://vsalm:8080/tfs/FabrikamFiberCollection /p:FabrikamFiber /n:Feature /new:Saga

A confirmation prompt appears for each rename (this one is from the Task rename below):

Are you sure you want to rename the work item type Task to the new name of Quest? (Yes/No)

Rename Task to Quest:

witadmin renamewitd /collection:http://vsalm:8080/tfs/FabrikamFiberCollection /p:FabrikamFiber /n:Task /new:Quest

However, after doing this you may get this nasty error message when viewing the backlog, essentially telling you that this time your system is broken, with no really useful information on how to fix it:

Broken Configuration Error message

If you export and then import the process configuration definition it will tell you a reason why this could be.


Correcting this color reference in the xml as follows, however, still gives you an error.

<!-- Update Task to Quest for the new work item type name -->
<WorkItemColor primary="FFF2CB1D" secondary="FFF6F5D2" name="Task" />
<WorkItemColor primary="FFF2CB1D" secondary="FFF6F5D2" name="Quest" />


Regardless, neither of these is the real error. The issue is that an application pool recycle of TFS is required (for example with iisreset on the TFS application tier, or by recycling the TFS application pool in IIS Manager). After the recycle, the backlog will be working again.

Now you can import your modified process configuration definition, correcting the work item colors.


Now I have a much more interesting hierarchy:

Final Hierarchy



Visual Studio 2013 Project Load Error – The parameter is incorrect or Unspecified error

Here we have a nice obscure error message from Visual Studio 2013. I get it for almost all the projects within my solution, and at 23 of 67 that's no fun.


If a project is reloading due to a background update and you click Reload, you might get this other unhelpful message.


The strange part is that it would go away and then come back. What we discovered is that it is a bug caused by the source control bindings in the solution mismatching the connection settings in Team Explorer. In our case we have multiple URLs that resolve to our TFS server. In our solution file we have http://tfs:8080/tfs/<projectcollection> yet in my connection I have https://tfs.<domain>.com/tfs/<projectcollection>. After going through all the error dialogs, if you Save All to save the solution file and do a diff, you will see all the URLs updated to match your connection.
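For reference, the binding lives in the solution file in a section like this (the values here are illustrative; the SccTeamFoundationServer value is what gets rewritten when the connection differs):

```
GlobalSection(TeamFoundationVersionControl) = preSolution
	SccNumberOfProjects = 24
	SccTeamFoundationServer = http://tfs:8080/tfs/DefaultCollection
EndGlobalSection
```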


After making sure all the source control paths are the same in all your solution files, ensure everyone on the team updates their TFS connection, by creating a new connection to the desired address.


You don’t need to remove the existing connection, although it is advisable, so that you are sure you connect to the correct address.

Hello WordPress

I have now completed the move to WordPress from Windows Live Spaces. It was a good kick to be given and I am already enjoying the great statistics provided by WordPress. I do still want to adjust the Theme and get to know more of the way WordPress works but there are higher priority things I am interested in.

I am really motivated at the moment by Software by Rob to get going with my own software products. For anyone wishing to write their own software, this is a must read. Rob freely shares his start-up and marketing experiences, which are invaluable. So, following Why you should start Marketing the day you start coding, I have bought a domain name and some hosting and activated it last night. In the next week I plan to put up my landing page. I will be detailing here how things go along the way, as a personal log and for anyone who is interested. This will be my first time trying to sell a product, after two "successful" free products, TFS Working On and Bluetooth Auto Lock Gadget. I have no idea how well I will be able to sell it, but once I have a little more information I will be setting small financial and user targets. This project won’t be so much about making money as about the experience. I want to understand first hand what is involved. It is a product in a domain I am very comfortable with, and it will help me out regardless of whether anyone else finds it worthwhile.

The product, Search TFS.


TFS 2010 Upgrade

Recently I took the opportunity, safely between product releases, to upgrade our TFS 2008 server to TFS 2010. I performed a migration upgrade to new virtual servers, with a server for each tier: Application, SQL Database, Analysis Services, Reporting Services and Build Server. Overall the upgrade was very smooth and error free. Very impressive work by Microsoft to simplify this complex operation. I had absolutely no issues with version control or work items, so downtime for my team was minimal once they installed the forward compatibility update. I did, however, have some issues, which I will outline along with the solutions I found.

Backing up our existing TFS databases, transferring them across our (slow) network and restoring them on the new SQL Server, was the longest task. The actual upgrade by the TFS 2010 administration of the TFS 2008 database (about 25GB) took around an hour. Not bad.

The data warehouse did not update right away; I had to manually trigger a rebuild. Using the cube afterwards, we found it was not fully populated. This was very odd and gave a very bad impression on first use of the new cube. Over the week, however, I monitored the warehouse views and database size and saw them steadily increase until all the data was in there. We have our own BI reporting tool, so all our existing reports were completely broken by the cube schema changes. The jury is still out on whether the cube is easier to use or not, but we are now powering away rebuilding all our reports.

Most of the pain was getting the new build server up and running. This is partly due just to getting our prerequisites in order, but there were a few issues from Team Build 2010 itself, since we were not upgrading to Visual Studio 2010 just yet (we will soon).

Firstly, MSBuild 4 failed to detect dependencies. This caused the projects in the solutions to build out of order and subsequently fail. The solution is to update the dependencies manually in the solution file; the fix is found here.

Secondly, it failed to find the Bootstrapper SDK path. A reinstall of the Windows SDK may have helped this, but the easy solution is to just edit the registry and add the path. Solution found here and here.
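A registry sketch of that edit, with the key name and SDK path as assumptions from my own notes (verify them against the linked posts; on 64-bit Windows the key sits under Wow6432Node):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\GenericBootstrapper\4.0]
"Path"="C:\\Program Files\\Microsoft SDKs\\Windows\\v7.0A\\Bootstrapper\\"
```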

Thirdly, most painfully, and still not resolved: the test results are not published, and the error message is TF270015: ‘MSTest.exe’ returned an unexpected exit code. Expected ‘0’; actual ‘1’. This means the results are not available from the build details, the drop folder or the cube; I can only see them by looking through the MSBuild text log file. This issue has been resolved but the fix is not yet available. I assume it will only arrive after we have moved to VS 2010 anyway, which manages unit tests much better by removing the need for the horrible vsmdi file.

Lastly, we still get numerous TF237086: The work item cannot be saved because at least one field contains a value that is not allowed. I do not know why we sometimes get this message, especially since the work item quoted has been updated and associated with the build, and it is not in an invalid state. One theory is that multiple builds update the same work items and a build does not refresh the work item just before updating it.

The Reports node in Team Explorer had a red cross on it, and I was eager to see the new reports. I found that the new instance of Reporting Services did not have the correct permissions for the users. After fixing that, the Team Explorer reports node came good and the old TFS 2008 reports all worked against the new data warehouse and cube. Nice. The new reports, however, were not there. Importing them from the exported CMMI process template did not map them to the data sources correctly. Manually fixing that got them working, but I had imported them into the wrong folder structure, so the links did not work. Later I found this page, which outlines a process to import the new reports. I am most interested in the Excel reports, but I have not connected to SharePoint yet, since I will wait until we upgrade to SharePoint 2010, so I was quite disappointed I could not get them without SharePoint. We use our own reporting tool anyway, so this was more out of interest, and replicating the Reporting Services reports in our BI tool from the cube was very easy.

Once the builds were working I set off to enable Test Case Management, which is a big reason for us to upgrade and something we wanted to get started with right away. Following the instructions here was easy, although there were errors; the user comments are helpful for getting around them. I had already quite heavily customized our work item layout, so I did not have to follow many of those modifications. It was a very simple task with just these steps:

  1. Download the Process Template
  2. Import Link Types (Shared Steps and Tested By)
  3. Import the Work Item Types (Test Case and Shared Steps)
  4. Import Categories
  5. Modify the Bug template, just to add a couple of fields. I did not modify my layouts.
  6. Specify the Bug type and the bugfieldmappings for Test Manager
  7. Grant permissions

I did this for two projects and the second time was very quick. There were, however, issues at steps 3, 5 and 6. Firstly, step 3: the downloaded work item template has the new names for existing fields. This caused the error TF212018: Work item tracking schema validation error: TF26177: The field System.IterationId cannot be renamed from ‘IterationID’ to ‘Iteration ID’. Fixing it just involved editing the work item type xml. For me, it was:

TestCase: Area ID –> AreaID
SharedSteps: Area ID –> AreaID; Iteration ID –> IterationID
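In the downloaded type definition that means changing the friendly name back on the FIELD element, for example (the field type shown is what I'd expect; check it against your export):

```xml
<!-- was: name="Iteration ID" -->
<FIELD name="IterationID" refname="System.IterationId" type="Integer" />
```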

Second, step 6, which caused me to go back to step 5 later. This was painful! Bug field mappings: contrary to the article and the name bug MAPPINGS, you cannot specify another field for the Test Runner to populate. I tried to get it to use the existing Microsoft.VSTS.CMMI.StepsToReproduce field to no avail. MVP Ed Blankenship has confirmed that the values are hard coded! So, thirdly, I went back to step 5. To resolve it I had to:

  1. Add the new ReproSteps field.
  2. Set it to copy StepsToReproduce.
  3. Update all Bugs work items in the project by adding a comment in the history. (Use Excel, the new bulk edit in the web access is slow!)
  4. Edit the form to display ReproSteps where it was StepsToReproduce
  5. Remove the Required on StepsToReproduce and add it to ReproSteps

Now, though, the Test Runner populates our Bug templates nicely, and performing these steps on the second project was quite quick. The upgrade could have been much smoother if the mappings worked; it felt like Microsoft had disregarded existing installs in this area. Next steps: powering ahead with the new test case management, which seems nice, though the Test Management interface is certainly version 1, to put it nicely; upgrading our projects to Visual Studio 2010 to get more benefit from TFS 2010 with the client designed for it; and planning the next iteration with the hierarchical work items.
