Thursday, 20 September 2012

Benevolent dictator

The company that I work for is rapidly expanding, so much so that we're actually knocking down walls to fit us all in! This is all happening because Product A is about to be re-branded and released to a global audience. These are good times!

One aspect of my role is to continually improve our software development practices; as such, I'm always on the lookout for potential risks and mitigation strategies.

Our ongoing expansion presents a moderate risk of lowering our productivity. We are in the midst of a big recruitment drive, and the fruit of this is that new team members are arriving almost weekly, whilst the existing team is busy delivering and maintaining a new product.

In terms of exposure to risk, we're not so exposed structurally; we should have little issue scaling up the number of features or the supporting infrastructure. The risks we face are more likely to be issues surrounding integration and training; activities that will absorb a lot of the team's energy, most likely lowering our momentum.

The process maturity work that we have been conducting over the past year has given us the structural alterations required to handle higher workloads. In this sense, we're now in a much better position to absorb an influx of new team members with little risk of bottlenecks emerging.

The investment in our build processes has yielded a productivity gain of about 3,700%. We've reorganised our branching strategy, introduced deployment automation and switched to Kanban.
On the release side we've introduced a platform for providing an unlimited number of disposable virtual environments. Coupled with automated deployment scripts we have achieved a modest 10,000% improvement. 

Whereas before it would cost us £130 for a bean-to-cup deployment, it now costs £1.23. Given that we're soon to have more people, publishing more code more frequently, this is good news.
Still, I think where we're lacking is in our ability to support these new developers. We're in the middle of a major release, with an expectation that post-release we'll be able to pop out features on an industrial scale.

All of which raises some interesting questions:
  • Where is the time to support our new colleagues going to come from?
  • How will we integrate them into our team?
  • What impact will this have on the team productivity?
  • How can we protect our code base against accidental damage?
  • How can we mitigate these risks? 

Moving to defcon 4

Our code base already has reasonably good levels of protection against accidental damage with the existing stack of analysis tools:
  1. We have unit testing
  2. Code coverage analysis with minimum coverage failing
  3. Toxicity checking with minimum acceptable thresholds
  4. StyleCop checking with violation failing
The above measures are, however, largely "reactive"; a negative "red-flag" event after something has already gone wrong. Trial and error is a good way to learn, but wouldn't it be better if we could provide proactive assistance to our new colleagues rather than just waiting for them to fall through a trap-door?

The best approach would be to pair up, but this would lead to one of two possible outcomes:
  1. We've got new developers doing absolutely nothing: bad.
  2. We've got existing team members impeded: also bad.
Either way, our productivity suffers and our feature delivery rate will take a hit.

Pairing will happen; our platform is huge and our new team members are going to be quite disorientated in the early weeks. No amount of documentation alone is ever going to orientate people. You can't beat the human touch in this respect.

However, we should try to reserve pairing-time for when it's most valuable, and not spend it on mundane activities. If we can improve our self-help and guidance offerings then this should be achievable.

Therefore what we'd like to achieve is:
  1. Reduce the time spent supporting new colleagues
  2. Reduce barriers to entry
  3. Shorten the integration period

Moving to defcon 3

There are two specific tools provided by Microsoft that seem to be feared and loathed in equal measure; the mere mention of FxCop or StyleCop normally elicits a reaction of shock, pain or disgust, often all three at once.

I believe however that the reputation of these two tools is largely due to legendary tales of misuse. If you're going to activate every rule, you can expect productivity and morale to suffer as a result. Not every rule is appropriate to your situation, and the cost of implementing some rules just can't be justified. We have been very careful to choose a pragmatic set of rules, considering practicality and affordability.

FxCop and StyleCop can be used to lay traps for developers to fall into, but more positively, they also provide the very guidance we're seeking to give our new colleagues. Both integrate into the IDE, both are entirely customisable, and both provide new developers with the assistance they need from day #1 without impeding the existing team members.

Code reviews are an obvious alternative, but such meetings are often lengthy, unstructured, inconsistent and patchy, and they quickly lose focus. The outcomes of any such review often go undocumented and are only partially implemented. Code reviews are the solution that everyone agrees would sort things out, if only they had the time to do them. But they so rarely live up to the expectation.

Coding standards documentation is another obvious alternative, but let's face it, who actually reads a company coding standards document after day #1? And why would you?
  1. Typically written by one person, so they don't reflect the views of the team
  2. Often ignored by the rest of the team (see point 1)
  3. Typically written by the one person in the team that everyone else dislikes (see point 2)
  4. They take a long time to produce
  5. They very quickly become obsolete
  6. Difficult to use, badly organised, badly indexed, inconsistent.
Every single time I've been handed a standards document, I've quickly scan-read it and thought: this is impressive, but does anyone actually implement it? The answer is always "a bit".

Experience has taught me that coding standards docs are little more than a well-intentioned statement of intent: "this is who we'd like to be".

Moving to defcon 2

We're about to jump fully aboard the StyleCop and FxCop bandwagon. Our team leaders, architects and developers are working on a set of rules for both tools. These rules will be integrated into all of our IDEs.

When our new colleagues join us, their IDE will be able to guide them on issues of styling and coding standards without any help required from the rest of the team. It might not be able to help with the architecture of our platform, but it will at least ensure a smoother transitional period for everyone involved.

Tailor made

We've configured both Cops to work on a per-project basis, so that rules can be tailored to every individual project. A one-size-fits-all approach rarely works, and trying to find a set of rules that could apply to an entire platform is either going to lead to an ineffectively diluted set of rules, or to rules that cause pressure and friction in unintended ways.
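To give a flavour of what per-project tailoring looks like: StyleCop reads a Settings.StyleCop file from the project directory, and settings merge down from parent directories, so one project can opt out of a rule the rest of the platform keeps. The rule picked here (and the Version attribute, which varies with the StyleCop release) is purely illustrative:

```xml
<StyleCopSettings Version="105">
  <Analyzers>
    <Analyzer AnalyzerId="StyleCop.CSharp.DocumentationRules">
      <Rules>
        <!-- Example only: this project opts out of mandatory element documentation -->
        <Rule Name="ElementsMustBeDocumented">
          <RuleSettings>
            <BooleanProperty Name="Enabled">False</BooleanProperty>
          </RuleSettings>
        </Rule>
      </Rules>
    </Analyzer>
  </Analyzers>
</StyleCopSettings>
```

Dropping one of these alongside a project overrides only the named rules; everything else continues to merge down from the platform defaults.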

Focussed code-reviews

We're implementing FxCop to stimulate a conversation about coding standards, when necessary. Rather than holding routine code-reviews, which quickly come to be seen as a burden, we're using FxCop to bring specific issues to our attention.

Thanks to IDE integration, this should happen before check-in, where a team leader can validate and approve an exception. However, as a fail-safe, FxCop will also be able to fail builds if violations are found. In this event a quick and focussed code-review will be triggered in order to fix the build.

This approach ensures that team leaders only need to perform code-reviews as and when they are genuinely needed, and focussed on a specific issue. 
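As a sketch of the build-side fail-safe: the build step can run FxCopCmd over our ruleset and fail when the report contains issues. The install path, project name and report name below are hypothetical; adjust for your own build agents.

```powershell
# Hypothetical paths; FxCopCmd ships with the standalone FxCop install.
$fxCop  = "C:\Program Files (x86)\Microsoft FxCop 10.0\FxCopCmd.exe"
$report = "FxCopReport.xml"

# Analyse the assemblies listed in the FxCop project file
& $fxCop /project:"Platform.FxCop" /out:$report /summary

# Fail the build if the report contains any Issue elements
[xml]$results = Get-Content $report
$issues = $results.SelectNodes("//Issue")
if ($issues.Count -gt 0) {
    Write-Error ("FxCop found {0} violation(s); triggering a focussed code-review." -f $issues.Count)
    exit 1
}
```

The non-zero exit code is what our build server keys off to mark the build as failed.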

Barriers to entry

FxCop and StyleCop are easily integrated into our IDEs, which effectively embeds our "coding standards" into our development environment. Line by line, the IDE nudges our progress in accordance with our standards.

As our product evolves, as we improve, as Microsoft update, our StyleCop and FxCop based rules will adapt, assisting us with every line of code we'll ever write. It might not be able to explain the platform, but it can at least promote our standards.

FxCop therefore lowers barriers to entry, and eases the burden that new developers place upon the team.

Evolving documentation

One of the biggest issues with coding standards documentation is how quickly it becomes obsolete. The energy and enthusiasm that went into the document's creation doesn't last. As such, these documents become worthless in a very short period of time, unless future resources are allocated to keeping them relevant.

With FxCop and StyleCop, once you've updated the rulesets you've effectively updated your coding standards, and they become instantly available to every developer. Microsoft provide new rules as their framework evolves, and we can choose which rules we would like to implement as we move forward; it is even possible to create our own.

The flip-side of knowing the rules is knowing when to break them. There will always be exceptions. FxCop handles exceptions very well, and we've embedded our FxCop projects in our source repository to take full advantage of its exception handling.

A team leader is asked to perform a code review (triggered perhaps by a failed build), examine the code and the circumstances that raised the exception, then make a decision. If the team leader decides that an exception can be made in this particular case, this can be recorded within the FxCop project. This exempts this rule violation from future checks, and we now have a journalled exception. 
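For completeness, FxCop also supports recording an exception at the source rather than in the project file, via the SuppressMessage attribute. The rule, type and justification below are illustrative only; note the attribute is only honoured when the CODE_ANALYSIS symbol is defined.

```csharp
using System.Diagnostics.CodeAnalysis;

public static class OrderImporter
{
    // Illustrative suppression: the check-id and justification are examples.
    // Keeping the Justification text filled in gives us the same journalled
    // record as an exclusion in the FxCop project, right next to the code.
    [SuppressMessage("Microsoft.Design",
        "CA1062:ValidateArgumentsOfPublicMethods",
        Justification = "Input is validated upstream at the service boundary.")]
    public static void Import(string payload)
    {
        // ... code and stuff
    }
}
```

Which of the two homes an exception lives in (source attribute versus project file) is a team-leader call; the important part is that the decision is recorded either way.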

Moving to defcon 1: Bring it!

With FxCop integrated into our pipeline and IDEs we'll be in a much better position to integrate our new colleagues. The tools provided by Microsoft should ensure that the disruptions faced by the existing team members are fewer, and our new colleagues will spend less time either waiting for help or having to look things up.

Our coding standards are becoming embedded within the IDE and we are able to offer succinct code reviews, and keep a record of any exceptions made. 

With the help of FxCop, we can be much more confident that our code follows our standards and that this will be maintained over time.

Came here looking for actual code?

I think I can help...

Invoking FxCop from PowerShell and consuming the results

Tuesday, 18 September 2012

Deployment on a schedule

The automated deployment that we're currently using has only ever been invoked by an end-user with an open PowerShell prompt.

The commands we invoke manually to publish our platform are:

Get-LatestBuild | Publish-Platform -environment "uat" -all -cleanFirst

Using Windows Task Scheduler

In order to provide our test teams with an up-to-date test environment each day, we've created an early morning scheduled deployment to the UAT environment.

The task's action command is relatively simple; it just packages the above commands into a command-line call.

powershell.exe -command {Set-Location D:\; Get-LatestBuild | Publish-Platform -environment "uat" -all -clean -goldData}

Took a little while to understand and get the syntax just right, so I thought I'd share it. 

Be aware of "Start Location"

You might set the Start Location of the scheduled task to where you want it to be, but powershell.exe doesn't care; once opened, your PowerShell host will start at its default location.

For this reason, I added an additional step to set the PowerShell location first:
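For reference, the task itself can also be created from the command line rather than through the Task Scheduler UI. The task name and start time below are made up; the fiddly part is quoting the inner -command payload:

```powershell
# Register the early-morning UAT deployment (task name and time are examples).
# Single quotes keep the inner double-quoted payload intact for schtasks.
schtasks.exe /Create /TN "UAT Morning Deploy" /SC DAILY /ST 05:30 `
  /TR 'powershell.exe -command "Set-Location D:\; Get-LatestBuild | Publish-Platform -environment uat -all -clean -goldData"'
```

Scripting the task creation means the schedule itself can live in source control alongside the deployment scripts.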

Hope this helps,

Monday, 17 September 2012

Blogger Font Change

After much bemoaning from several of my work colleagues, I've gone for a simpler font.

Can't say it does anything for me, but those of us who like to read blogs on a Windows 8 RC VM on MacOS - this is for you! ;)

Saturday, 15 September 2012

Kanban and software production

A colleague of mine suggested this week that we attend a SyncNorwich event on Kanban.
We all really enjoyed the evening, although some of us found the round-robin introductions a little uncomfortable.

As a team, we had introduced the Kanban approach back in February this year, and it seemed to be going well, but we were curious to learn how far we might have strayed from the rest of the herd. Before the presentation at the Kings Centre, in the company of our MD and Principal Architect, I was feeling a little excited, and a lot nervous. We were about to find out how esoteric our implementation had become (yes, I do mean bad).

For the uninitiated, Kanban is of the "lean" school of thought. Of all the methodologies it is the least prescriptive, in that more or less anything goes. The essence of Kanban is the "signal cards" and the rules that we attach to their transition. Kanban is extremely easy to pick up, and has more than a thing or two in common with the kind of card-based drinking games you might have played at Uni, or in my case, hostelling.

I picked up the Kanban bug 2 years back whilst I was working at RBI. There was one particular team that struggled to implement Scrum due to the unpredictability of their workload, and decided to try Kanban instead. This team was widely known within RBI for being a bit grumpy, and a kind of fatalistic culture had taken hold. Scrum wasn't working; if anything it was actually damaging the team. Eventually, Mr Rami Hatoum was brought in as a consultant and he introduced the team to Kanban. What surprised everyone, including the team in question, was how very quickly this beleaguered team transformed its opinion of itself, and how this new wave of positivity was felt throughout the business. Just before I left RBI, I went and spent a few hours with that team's principal developer, Mr Rob Callaghan, who gave me a very detailed walkthrough of their implementation and how it had gone for them. I still refer to those notes even today! Thanks Rob ;)

I'm not suggesting for a second that Kanban is a panacea, or a bandwagon for everyone to jump aboard. The purpose of my anecdote is to underline that on this occasion it was the right tool for the job. There was a team who perceived themselves as failing principally because Scrum said so. In reality, the nature of their work didn't fit the Scrum mould, their customers couldn't be marshalled into the discipline required by Scrum, and it was the team that felt the adverse effects of this.

The thing I really like about Kanban is that it's so flexible, so non-prescriptive. You are positively encouraged to "make it up" as you go along, to evolve your practices over time. Of course, this is more professionally termed "Continuous Improvement". The business of CI is crucial to the success of a Kanban implementation, and should be conducted as scientifically as possible, with every alteration subjected to some form of peer review and conclusions supported by empirical evidence. You need something more substantial than mere "opinion" as to whether something worked better or worse than before. Without concrete evidence to support a claimed improvement, reputation and morale may suffer.

But the key to Kanban is not to obsess over it; it's just cards and simple transitional rules. Most Kanban implementations, ours included, beg, borrow and steal from more prescriptive methodologies like Scrum. We have taken only what we need to get things done. The rules? Literally made up as we go along! Or, to sound more professional, the rules evolve over time using a process of Continuous Improvement (Kaizen). We borrowed from Scrum the "retrospective", a fortnightly team forum to discuss and evaluate our practices. We also borrowed the daily stand-up, another quick forum that helps the team to solve, or better yet avoid, production problems.

So when it comes to Kanban, just throw some columns up on a board or a wall. You can expect it to change radically in the first few hours, days and weeks, gradually evolving into a well-honed workflow.

Our Kanban board is now 8 months old, and we've not made any changes to its structure in a good long while, but in the early days it was tremendously fluid. Is it perfect? Of course not! I think we need to review our WIP limits and do more to overcome the work-push mindset that still endures, but, everything in time.

So, back to Benjamin Mitchell's presentation on Thursday. Ben's presentation was funny, thoughtful, engaging and even a little self-deprecating. I particularly liked the impromptu Kanban board he threw up on the wall to track the progression of his presentation. If that didn't hammer home the simplicity of Kanban to the audience, I'm not sure what else would!

If I hadn't heard of Kanban, Ben's presentation would surely have tempted me to try it out. I particularly liked how, throughout, he put the emphasis on people: how something affected them, how it helped them. All too often this aspect is overlooked when thinking about how people should work. As a survivor of my own nervous breakdown, I can't welcome Ben's humanist approach enough.

One thing I'd really hoped for at the talk was to learn more about how to deal with the politics of change, and Ben certainly has a lot of knowledge and experience, through his consultancy roles, of implementing radical change into a resistant culture.

Sadly, Ben's talk didn't quite get that involved in the psychological aspects, due to the strong audience participation during his introduction into the mechanics of Kanban. After a rapid exchange of tweets, Ben provided me with plenty of material on the psychological aspects of implementing Lean and Agile methodologies.

Ben provided me with a link to this video: LSSC12: What comes after visualising the work? Conversations for double loop changes in mindset - Benjamin Mitchell.

I need to watch this video before Monday, as I feel a big push is coming.

Apologies if you'd read an earlier version of this post, I thought I'd merely saved it for later! :)

Thursday, 13 September 2012

Passing arrays with ArgumentList

I was just working with a collection of objects that were being passed into a remotely hosted Invoke-Command, when an old curiosity of PowerShell came back and caused me a few minutes' head scratching.
I then searched on Google for

  • Truncated array
  • Passing array into Invoke-Command
  • Only getting the first item in an array when passed as parameter
It was at this last item that I realised what I was doing! 

So, let's take you through it:

#A collection of objects
$privateHosts = @($serviceMap | where {$_.key -match "_private"})

Invoke-Command -session $msmq_session -ScriptBlock {
    param ([object]$privateHostApplications)
    .... code and stuff

} -ArgumentList $privateHosts

Then the magic started... my original array of $privateHosts, which contained 28 objects, turned into only 1 after it had entered the remote Invoke-Command.

The reason for this is that -ArgumentList is a strange parameter that accepts an "array of parameters". It was receiving my array of host entries and treating each entry as a distinct parameter value. My parameter declaration was only set up to receive one such object, and thus I was left with only one.

The solution is simple: encapsulate $privateHosts in an array itself, using the unary comma operator.

} -ArgumentList ,$privateHosts

Now, the first and only parameter in my Invoke-Command call, is an array of objects.
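The behaviour is easy to reproduce locally, since Invoke-Command will happily run a script block without a session:

```powershell
$items = 1..5

# Without the comma: each element becomes its own positional argument,
# so $x is bound to the first element only.
Invoke-Command -ScriptBlock { param($x) $x } -ArgumentList $items        # outputs 1

# With the unary comma: a one-element wrapper array is unrolled instead,
# so $x receives the whole five-element array.
Invoke-Command -ScriptBlock { param($x) $x.Count } -ArgumentList ,$items # outputs 5
```

The same unrolling applies to any cmdlet parameter typed as object[], so the leading-comma trick is worth keeping in your back pocket.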

Hope this helps!

Tuesday, 11 September 2012

Working with INI files

Whilst hunting around today for a quick solution for modifying an INI file, I came across tonnes of examples, many half-baked.

Found this on Stack Overflow, and it did the trick nicely.

Function Get-IniFile ($file) {
   $iniConfiguration = @{}

   # Split each "key=value" line and load it into the hashtable
   Get-Content $file | foreach {
      $line = $_.split("=")
      $iniConfiguration[$line[0].Trim()] = $line[1].Trim()
   }

   return $iniConfiguration
}

Function Set-IniFile ($iniConfig, $file) {
   # Recreate the file, then write each key=value pair back out
   Write-Host (New-Item -ItemType file $file -force)

   $iniConfig.Keys | % {
      $fileLine = ("{0}={1}" -f $_, $iniConfig[$_])
      Add-Content $file $fileLine
   }
}

# Load the INI file
$iniConfiguration = Get-IniFile $myINIFile

# Modify the properties
$iniConfiguration.port = "80"
$iniConfiguration.hostname = ""

# Save them back to disk
Set-IniFile $iniConfiguration $myINIFile

Made very easy work of modifying Apache configuration files.

Thought I'd share.


Saturday, 8 September 2012

Orphan Annie

It seems I owe AppFabric an apology of sorts.

In a recent post about AppFabric, I gave it a good slating for just being plain rubbish, but it seems it wasn't entirely AppFabric's fault! In fact, it wasn't AppFabric's fault at all.

I would still maintain that it has some fairly opaque nuances, but on this occasion my general AppFabric ignorance and an aversion to reading documentation may have contributed greatly to my mid-week psychosis.

To recap

I was having a very bad day, trying to automate the installation, configuration and tear-down of an AppFabric cluster.
The principal aggravation was not being able to register/un-register cache host instances. AppFabric was insistent that I could not do this as there was already one running, even though I thought I'd stopped it and de-registered it from the cluster.
Not understanding just how AppFabric works was the cause of my trouble.

What I learnt about AppFabric?

I now understand just how AppFabric works, and it's actually very, very simple. So simple, in fact, that my failure was in not grasping how simple it really is.
I had assumed that the cluster was some kind of super-controlling service, that reached out across the network and exerted total control over its cache-hosts. 
Therefore, I assumed...
Stopping the cluster, would stop the hosts... and, yes, it does.
Removing the cluster, would remove the hosts.... no, it does not.
In actual fact, the cache hosts appear to be completely self organising. 
The cluster appears to be nothing more than a text file (in our XML provider model), from which the cache hosts can learn their configuration. Hence why AppFabric needs to use a UNC path for the cache hosts.
There are some PowerShell cmdlets that promote the idea that the cluster is an entity in its own right: New-CacheCluster, Remove-CacheCluster and Start/Stop-CacheCluster. In reality, however, all these cmdlets actually do is create, delete and iterate over the XML file on the UNC path. Perhaps these cluster cmdlets encapsulate the knowledge for correctly configuring this text file, but that's it.
There is no intelligent cluster host, no cluster service or cluster controller. Just a shared text file that enables the disparate cache hosts to form a team.

Orphaned hosts!

The conceptual, but non-existent cluster is what gave rise to my troubles. 
I thought I'd removed the cluster and all its dependents. In actual fact, I'd only removed the shared text file. The hosts were still running under the configuration they'd previously accepted.
And this is why I was having so much trouble with my scripts.
I had hosts that were still running, but I had no record of this because I'd removed the cache-cluster configuration. Or, as it should be referred to, the shared config file. 
I couldn't deregister the hosts, because they had no entry in the new shared config file.
I couldn't create a new host, or register it, because the original host was still running and blocking this process.

So to recap, my mistake was:

  • I didn't realise the cluster was just a shared text configuration file.
  • I removed the configuration file which effectively orphaned the cache hosts.
  • When I recreated a new config file
    • The orphaned cache hosts couldn't be de-registered
    • The orphaned cache hosts were blocking any attempts to add new cache hosts and register them with the shared configuration file.

The steps I am now taking are quite simple:

  • Remove all caches from the cluster (makes the next steps quicker)
  • Stop the cache cluster
  • Visit each cluster host using a PSSession and 
    • de-register the host from the cluster
    • stop the cache host
    • remove the cache host
  • Kill the now empty cluster
  • Recreate the cluster
  • Add and register a cache host on the cluster host machine
  • Add all the caches
  • Visit each host using a PSSession and
    • Add the cache host
    • Register the cache host with the cluster
    • Add the cache admin feature
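The sequence above, sketched in script form. I'm quoting the cmdlet names from the AppFabric caching modules from memory, and I've elided the provider/connection-string parameters throughout, so treat this as an outline to verify against Get-Command rather than gospel:

```powershell
# Outline only: provider/connection-string parameters omitted, and $cacheHosts
# is assumed to hold your cache host machine names.
Import-Module DistributedCacheAdministration
Import-Module DistributedCacheConfiguration

Use-CacheCluster
Get-Cache | ForEach-Object { Remove-Cache $_.CacheName }   # empty the cluster first
Stop-CacheCluster

foreach ($cacheHost in $cacheHosts) {
    Invoke-Command -ComputerName $cacheHost -ScriptBlock {
        Import-Module DistributedCacheConfiguration
        # de-register, stop and remove the local cache host here
    }
}

Remove-CacheCluster   # kill the now-empty cluster (i.e. the shared config file)
New-CacheCluster      # recreate it
# ...then revisit each host to add and register its cache host,
# recreate the caches, and add the cache admin feature.
```

The crucial ordering lesson: the hosts must be torn down before the shared configuration file is removed, otherwise you orphan them all over again.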

In summary

  • AppFabric was working exactly as it was designed to.
  • It is a series of stand-alone cache instances that can arrange themselves into a team (a cluster) by using a shared configuration file.
  • There is no entity that actually is a cluster.
  • As our cousins across the water may say: my bad.
AppFabric, Microsoft. Sorry.

Wednesday, 5 September 2012

The joy of the web farm framework

Project A rumbles ever closer to its release date, and now more than ever the team is pulling together, all the loose threads are starting to tighten, and a very classy looking product is emerging from the dust.

This frenetic pace however leaves little time for my primary projects at the moment, both the distributed testing and the integrated workspace projects are effectively on hold until we've gone live in October.

But, there's still plenty to talk about in the automagical world of build and deployment.
Today, the Microsoft Web Farm Framework really pulled it out of the bag for us.

Our new front-of-house website is based on SiteFinity, which, like any CMS, can run like a bit of a dog until you enable caching and throw some more hardware at it. Today, using WFF2, we scaled out SiteFinity over two hosts.

What was remarkable is how incredibly easily this happened. After my recent experiences with AppFabric I was less than keen to get involved with WFF, but it turns out my fears were baseless.

In a matter of seconds, with the aid of the wizard, we:
  1. Created a new server farm
  2. Added the two machines that were to play host to SiteFinity
  3. Configured the load balancing profile
  4. Configured the client affinity
  5. Added a simple rule to match the HTTP_HOST to the new Server Farm
  6. Sat back to watch the management statistics
I thought about posting snapshots, but it really isn't required. WFF just works! :)

Where's the automation?

Good point; using a wizard doesn't really count.

The good news is, yes, this is scripted. In fact, the WFF/ARR services have been scripted for quite some time as Project A relies heavily on both WFF and ARR to deliver much of its traffic to the right endpoints.

Today however, was the first time that WFF had been used for something more than a simple routing container, today it became a load balancer!

I need to genericise my scripts a little before I post them online, as they perhaps give away a little too much information about how Project A is structured, and that won't do. Equally, these are scripts in the truest sense: very long, very procedural, and not very pretty, so before I show them to the world a little bit of a code-tidy is required, I think :)

This is definitely something I'd like to share. WFF is extremely neat, and I'd like to hear that more people are using it.

What about the application itself?

Well, that's a good question; setting up a load balancer is one thing, but the application itself needs to be installed in two separate locations too.

The good news here is that our existing deployment scripts are configured in such a way that they permit a package to be deployed to multiple hosts, so this part of the work was trivial.

   $packageName = "SiteFinityWebsite"
   $deployment = @{
      servers = "Host-A","Host-D"
      httpBindings = @{ipBinding = "*"; hostHeader = "sitefinity.{host}"}
   }

This is a severely pared-back example of our platform configuration object; there are almost a hundred of these package definitions. However, all that was required to publish this package to two servers was to add the new destination to the package's servers property.
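Purely to illustrate the mechanics: a deployment routine only has to fan the package out over whatever the servers property lists. Publish-Package and the {host} token handling below are hypothetical stand-ins for our real scripts, not the actual implementation:

```powershell
# Illustrative configuration, mirroring the pared-back example above
$packageName = "SiteFinityWebsite"
$deployment  = @{
    servers      = "Host-A","Host-D"
    httpBindings = @{ipBinding = "*"; hostHeader = "sitefinity.{host}"}
}

foreach ($server in $deployment.servers) {
    # Expand the {host} token into this server's own host header
    $hostHeader = $deployment.httpBindings.hostHeader -replace "\{host\}", $server
    Write-Host ("Deploying {0} to {1} with host header {2}" -f $packageName, $server, $hostHeader)
    # Publish-Package -name $packageName -server $server -hostHeader $hostHeader   # hypothetical
}
```

Because the loop is driven entirely by configuration, adding a third host really is a one-line change to the servers list.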

Thanks for reading, and I promise, there will be code posted.

Update #1: Notes to myself

WFF2 was actually a complete nightmare to install. I seem to recall that it existed in some kind of twilight zone where, in the end, I could only install it with WebPICmd: it was too new for the Web Installer, too old for the previous incarnation, and it could not be installed outside of the WPI.

I've got this little gem scripted in my provisioning scripts, so I'll be sure to add that to the next update of this post.

Tuesday, 4 September 2012

Arrested Development


Our current build and deployment pipeline doesn't really align particularly well with our desired development practices. Whilst the pipeline has assisted the adoption of feature branching as it was intended, our current build and release practices are still disruptive.

What do I mean, disruptive?

They say that "technology is at its absolute best when it's invisible"; to me this means it's so seamlessly integrated with a given activity that it goes unnoticed. This should be the goal of every technology and process: it should be invisible.

And this is what I mean by disruptive. Whilst as a team we're busy developing new features, we're continually disrupted by routine operations such as branching, building, deploying and cleaning up. The kind of thing that will normally be labelled "process": the stuff we have to do in order to get other stuff done.

So whilst our newly adopted practices and processes have really taken us a long way from where we were, we've still got some way to go before they can be forgotten about. 

Integrated workspaces

I briefly mentioned in the synopsis, some of the routine actions that disrupt the flow of development.
  • Creating new features
  • Creating build definitions and managing builds
  • Building environments
  • Deploying packages
  • Running tests
No-one would argue that these actions aren't worth the effort; they ultimately save us a lot of time. But they are disruptive. They break the flow of concentration and feature development, and it's features that drive the company forward and pay our wages.

I've developed a plan to eradicate the disruptions I listed above, a plan that should more or less be invisible to the developers and the wider business participants.

I want to introduce the concept of integrated workspaces; that is, until I can think of a better name!

Arrested development

In order to explain how I believe my plan will work, I first need to tell you why.

At the moment, we have 7 internal test environments that we refer to as Systest. The systest environments are single-host environments that are used to assist in feature development. Our SOA platform needs to be hosted in order to facilitate continued development, therefore developers are continually deploying the latest versions of their features.

Compared to 12 months ago, the act of provisioning environments and deploying code is now 100% automated. And whilst these automated processes have helped to eliminate a whole raft of productivity-sapping and morale-crushing activities, they are still a disruptive influence on the flow of development.

Consider a typical feature development cycle:
A developer takes on a new feature,
  1. A branch and build definition is created for the feature
  2. The developer works on a feature
  3. The feature is checked-in, and tested
  4. The developer then pushes the latest build to an environment
  5. The developer continues to develop the feature
  6. Steps 2-5 continue until by iteration, the feature is considered complete
  7. The feature is eventually completed, and re-integrated into Main
  8. The testing team deploy the new trunk to UAT and begin testing
  9. Steps 2-8 are repeated as bugs are found and resolved
  10. Eventually the developer removes the feature branch and builds
In this typical scenario, up to 50 deployments may take place. And whilst, as I say repeatedly, the automated processes in place make this a completely dependable operation, it's still a disruption; it's still getting in the way of what the developer wants to achieve.

My idea for integrated workspaces is in part, to make these disruptions go away. 
However, there are also other reasons...

The human touch

Fundamentally, we still see the world in terms of “packages”, “deployments” and “hosts”, and all of these entities are accessible to anyone in the development team. Essentially, the inner workings are still laid bare, which developers, who are very clever people, are able to take advantage of.

Dirty environments

On a number of occasions we've gotten into a situation where developers have performed partial deployments (segments of the platform) with packages of varying ages. In some cases, packages from different code-bases (features) have even been deployed into the same environment.

The result was test environments hosting a collage of different versions of our platform.

And lastly, there are our ad-hoc configurations, which lead to configuration drift. The code running on any environment is supposed to be derived entirely from the code base. However, everyone, from time to time, is overcome by the temptation to make ad-hoc changes to the run-time environment, normally to "just see" if something will work. We all do it. And the problem is, each of these innocent little changes potentially increases the distance between the environment's run-time and the source it supposedly came from.

This is very damaging as it leads to an environment that is in an unknown and unrepeatable state, which we then continue to further develop against. 

Our tests validate our code: a series of actions performed against a carefully contrived application state, engineered to give us confidence that our code works. How can we have confidence that our tests are giving us accurate results when they're being run against an environment that no longer has the state or configuration we expected? What was the point in writing these tests if they become non-deterministic? Worse still, how much further down the development cycle will we get before false-positive results are finally discovered? Now that's a big, slow and costly disruption that could so easily have been avoided.


The best way to avoid dirty environments is to record state and configuration only in the source code, and then deploy the changes up to the test environment. But to any developer this is a major frustration, as their concentration and momentum are disrupted.
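We could at least detect drift automatically. As an illustrative sketch (the manifest format and paths here are my own assumptions, not part of our current tooling), a deployment could record a manifest of file hashes, which we later compare against what's actually sitting on the environment:

```powershell
# Illustrative sketch: compare deployed files against the hashes recorded
# in a manifest at deployment time. Anything that differs has drifted.

function Get-DriftedFiles {
    param(
        [hashtable]$Manifest,  # relative path -> MD5 hash captured at deploy time
        [string]$Root          # root folder of the deployed environment
    )
    $md5 = [System.Security.Cryptography.MD5]::Create()
    foreach ($path in $Manifest.Keys) {
        $full = Join-Path $Root $path
        if (-not (Test-Path $full)) { $path; continue }  # a deleted file counts as drift
        $bytes = [System.IO.File]::ReadAllBytes($full)
        $hash  = [System.BitConverter]::ToString($md5.ComputeHash($bytes))
        if ($hash -ne $Manifest[$path]) { $path }        # changed since deployment
    }
}
```

Run on a schedule against each systest environment, anything this reports is an ad-hoc change that never made it back into source control.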

No-one should be surprised therefore that in order to "get things done", disruptive practices and processes will be circumvented.

Integrated workspaces

The notion of an integrated workspace was something I came up with whilst trying to work out how I could make my processes invisible to the developer. I wanted to remove all the things that disrupt the developer's momentum, whilst paradoxically ensuring that none of my processes and practices could be circumvented.

I want to create a flow that carries the developer along, so I began thinking about where all this begins, with a new feature. 

My thoughts have been through almost countless iterations, but I arrived at the following example, which I think is best represented in the Powershell code in which it will ultimately be implemented.

.\Create-Feature "feature-x"
.\Update-Feature "feature-x"      (performed by TFS after build)
.\Complete-Feature "feature-x"
At no point here do I give any indication of the underlying actions.


Create-Feature

Create-Feature "feature-name"

would be the only Powershell command the developer needs to use in order to begin work. Behind the scenes it would:
  1. Create the branch
  2. Create the build definition
  3. Trigger a build
  4. Provision an environment on AWS EC2 specifically for this feature
  5. Update the DNS on AD to make the run-time available
  6. Deploy the new packages to the environment
  7. Update the local hosts file so that the code branch knows where to find the run-time
In a short while, the developer would be all set to begin work with:
  1. A new feature branch
  2. Targeted to run against feature-name.mydomain.local
  3. And with  feature-name.mydomain.local provisioned and deployed
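The bootstrap described above could be sketched roughly as follows. The New-*, Set-* and Publish-* helper commands are hypothetical stand-ins for our existing build and deployment automation, not real cmdlets:

```powershell
# Rough sketch only -- every helper command below is a hypothetical
# placeholder for an existing build or deployment script.

function Get-FeatureHostName {
    param([string]$FeatureName)
    # Each feature gets its own DNS name on the internal domain.
    "$($FeatureName.ToLower()).mydomain.local"
}

function Create-Feature {
    param([Parameter(Mandatory = $true)][string]$FeatureName)

    $hostName = Get-FeatureHostName $FeatureName

    New-TfsBranch          -Source "Main" -Target "Features/$FeatureName"  # 1. branch
    New-TfsBuildDefinition -Branch "Features/$FeatureName"                 # 2. build definition
    Start-TfsBuild         -Definition $FeatureName                        # 3. trigger a build
    New-Ec2Environment     -Name $FeatureName                              # 4. provision on AWS EC2
    Set-AdDnsRecord        -HostName $hostName                             # 5. publish the run-time in DNS
    Publish-Packages       -Target $hostName                               # 6. deploy the packages
    Add-HostsFileEntry     -HostName $hostName                             # 7. point the branch at the run-time
}
```

The point isn't the individual steps, all of which already exist as automation; it's that the developer only ever sees the single command.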

Update-Feature (TFS only)

Each time the developer performs a check-in, TFS will invisibly build, compile, package and re-deploy the new code to the provisioned host.

A check-in is something the developer has to do, but the rest is invisible. 
When the build is complete, the developer can simply resume development. The code being developed in Visual Studio will already be running against the latest version.

Almost entirely free of disruption for the developer.
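The post-build step that TFS would run might look something like this; Invoke-Deployment is a hypothetical stand-in for our existing automated deployment script, and the drop-location parameter is illustrative:

```powershell
# Hypothetical sketch of the check-in trigger. Invoke-Deployment stands in
# for our real automated deployment script.

function Update-Feature {
    param(
        [Parameter(Mandatory = $true)][string]$FeatureName,
        [Parameter(Mandatory = $true)][string]$DropLocation  # TFS build output folder
    )
    # Re-deploy the freshly built packages to the feature's own host.
    # The developer never runs this; TFS does, after every successful build.
    $hostName = "$($FeatureName.ToLower()).mydomain.local"
    Invoke-Deployment -PackagePath $DropLocation -Target $hostName
}
```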

Some could argue that the "build" stage is itself disruptive, and that's true, but unless we can introduce real-time compilation there's not an awful lot I can do about that.

All I can do is continue with projects that drive down the build times, such as distributed unit testing and the like.


Complete-Feature

Complete-Feature "feature-name"

is the final Powershell command the developer will need to know.

This step will be a tidy-up exercise: removing the now-redundant builds from TFS, removing the build definition, and removing the branch. The environment that was provisioned specifically for this feature can be marked for deletion, and torn down later by a scheduled task.
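As a sketch of that tidy-up, again with the Remove-* and Set-* helpers as hypothetical placeholders for our existing TFS and AWS automation:

```powershell
# Hypothetical sketch of the tidy-up step. Every helper command is a
# placeholder for existing TFS or AWS automation.

function Complete-Feature {
    param([Parameter(Mandatory = $true)][string]$FeatureName)

    Remove-TfsBuilds          -Definition $FeatureName         # now-redundant builds
    Remove-TfsBuildDefinition -Definition $FeatureName         # the build definition
    Remove-TfsBranch          -Branch "Features/$FeatureName"  # the feature branch

    # The environment is only flagged here; a scheduled task tears down
    # anything marked for deletion later.
    Set-Ec2EnvironmentTag -Name $FeatureName -Tag "MarkedForDeletion"
}
```

Deferring the actual tear-down to a scheduled task keeps even the clean-up invisible, and gives us a grace period if the feature has to be re-opened.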

In summary

The integrated workspaces project isn't going to be terribly difficult to implement; it will entirely reuse the current build and deployment pipelines. The major difference is that the entire process will be more or less invisible to the developer.

The most significant chunk of work will be AWS provisioning, and configuring the code in the branch to run against the new environment. 

The proposal for this project is currently circulating around the development team via Sharepoint, so, with a few adjustments I'm sure, work will hopefully commence shortly.

Measuring success

Our Kanban process already provides us with very detailed metrics for cycle times and a cumulative state-flow. It should therefore be very easy to compare not only the overall cycle times, but the per-state cycle times too.
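For example, given each card's start and finish timestamps exported from the board (the field names here are illustrative), the overall comparison is a one-liner:

```powershell
# Illustrative: compute the mean cycle time, in days, for a set of Kanban
# cards. Each card is assumed to carry Started and Done timestamps.

function Get-MeanCycleTimeDays {
    param([object[]]$Cards)
    ($Cards | ForEach-Object { ($_.Done - $_.Started).TotalDays } |
        Measure-Object -Average).Average
}
```

Running it against the cards completed before and after integrated workspaces go live should show whether the disruptions really have gone.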

Powershell, nested queries

Just a quick one.

In a previous post, I mentioned I love using hash-tables, usually organised into complex collections.

Today I had to perform a "nested query" on a hash-table that contained nested hash-tables.

What I wanted was a filtered collection of top-level nodes, based upon matches found by a sub-query of each node's audiences child node.

In English, I wanted all the configs where they had support for private developer environments.

Once I had my list, I wanted the one with the lowest cost.

Here it is...

$configs = @(
    @{
        name = "aws"
        audiences = @(
            @{ name = "dev";  limit = 999; scope = @("private") },
            @{ name = "test"; limit = 999; scope = @("private") },
            @{ name = "uat";  limit = 999; scope = @("public", "private") }
        )
        cost = 100
    },
    @{
        name = "inhouse"
        audiences = @(
            @{ name = "dev";  limit = 2; scope = @("public", "private") },
            @{ name = "test"; limit = 2; scope = @("private") }
        )
        cost = 500
    }
)

$sample = $configs | where { $_.audiences | where { $_.name -eq "dev" -and $_.scope -contains "private" } }

$sample | Sort-Object { $_.cost } | Select-Object -First 1


Just a quick rant. I'm using, or at least trying to use, Google AdSense.

But even though I'm using Google's own Blogger sites, tied to my Google Enterprise account, along with my Google+ account, they keep refusing my application?!?!

It seems they are most perturbed by the "Redirects" I've configured to my own domain. A feature that is actually promoted and setup by Blogger!

So, it seems you can't use AdSense if you want your Blogger pages to have your own DNS name.