Building Blocks


Before I continue with my distributed tests project, I thought I'd pause for a moment and better explain how my scripts are composed: how I manage parallel job execution and gather the results of completed jobs.

Most scripts of this nature (i.e. asynchronous PSJobs, multiple hosts, etc.) are composed in a fairly standard fashion.

So, here I go. First, the basics.

Using hash-tables to store and share complex data structures.

I pass a lot of information around in my scripts. The flow is bi-directional and multi-hop. Information may pass between two or three scripts, spanning several hosts, with the resulting feedback enriched at every stage.

My preferred mechanism for sharing and representing complex data structures, or aggregated sets of information, is to use a Powershell Hash-table object.

The hash-table is a thing of great beauty; its declaration and usage are elegant, readable and compact. There are other well-documented techniques for creating custom objects in Powershell, but in most cases I default to the hash-table.

I generally lean towards 'chunky' interfaces over verbose ones, namely aggregating related sets of information into hash-tables to be passed as parameters rather than as a long serial list of parameters. I find this improves the readability and maintainability of the scripts. It's much simpler to add a property to a hash-table than to be continually modifying parameter lists.

Take this example for instance....

Create-Database.ps1
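Something along these lines, as a minimal sketch; the parameter and property names ($Config, Name, Server, Collation) are hypothetical:

    param(
        # One 'chunky' parameter carrying everything the script needs
        [Parameter(Mandatory = $true)]
        [hashtable] $Config
    )

    # Unpack the values the script cares about
    $name      = $Config.Name
    $server    = $Config.Server
    $collation = $Config.Collation

    Write-Host "Creating database '$name' on '$server' with collation '$collation'"
    # ... the actual database creation work would go here ...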

usage
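Calling it then looks something like this (the values are just placeholders):

    # Everything the script needs, gathered into one hash-table
    $dbConfig = @{
        Name      = 'AdventureWorks'
        Server    = 'SQL01'
        Collation = 'Latin1_General_CI_AS'
    }

    .\Create-Database.ps1 -Config $dbConfig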
Or, you could be very granular
usage
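Here the script would instead declare each value as its own parameter (again, hypothetical names):

    .\Create-Database.ps1 -Name 'AdventureWorks' -Server 'SQL01' -Collation 'Latin1_General_CI_AS'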

You can choose which you'd rather have, but I will always go for hash-tables.

PSJobs, Queues and Abstraction

Let's take a simple hash-table declared as $myJob, and use it to create a new PSJob.
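A minimal sketch, with a hypothetical handler script and parameters:

    # A single job, described entirely by a hash-table
    $myJob = @{
        Name       = 'MyFirstJob'
        Script     = 'C:\Scripts\Do-Something.ps1'
        Parameters = @{ Message = 'Hello from a PSJob' }
    }

    # Spawn the job from the hash-table's properties
    Start-Job -Name $myJob.Name `
              -FilePath $myJob.Script `
              -ArgumentList $myJob.Parameters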

Did you see that? Not sure what I'm getting at? Ok, what if I expand a little further?
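Something like this; the single hash-table becomes a collection of them (the names and scripts are made up for the sake of the example):

    $jobs = @(
        @{
            Name       = 'JobOne'
            Script     = 'C:\Scripts\Do-Something.ps1'
            Parameters = @{ Message = 'First job' }
        },
        @{
            Name       = 'JobTwo'
            Script     = 'C:\Scripts\Do-SomethingElse.ps1'
            Parameters = @{ Retries = 3 }
        }
    )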

Now do you see it??? No?!? Ok...

What you're seeing here is a collection of objects, each representing a specific job. Each object specifies the name of the job, the handler script, and the parameters needed by that handler script.

Then, with a very simple foreach loop, that collection can be used to spawn a multitude of asynchronous remote PSJobs.
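A sketch of that loop, using Invoke-Command -AsJob to run each handler on a remote host (the host name is an assumption; credentials and error handling are left out):

    foreach ($job in $jobs)
    {
        Invoke-Command -ComputerName 'TARGETHOST01' `
                       -FilePath $job.Script `
                       -ArgumentList $job.Parameters `
                       -AsJob `
                       -JobName $job.Name
    }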


Still not with me? Ok, consider this more real world example.
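A sketch of what that might look like; the package names, paths and handler scripts are all hypothetical:

    $deploymentJobs = @(
        @{
            Name       = 'DeployWebsite'
            Script     = 'C:\Deploy\DeployWebsite.ps1'
            Parameters = @{ PackagePath = '\\buildserver\drops\Website.zip'
                            SiteName    = 'MainSite' }
        },
        @{
            Name       = 'DeployDatabase'
            Script     = 'C:\Deploy\DeployDatabase.ps1'
            Parameters = @{ DacPac = '\\buildserver\drops\Database.dacpac'
                            Server = 'SQL01' }
        },
        @{
            Name       = 'DeployService'
            Script     = 'C:\Deploy\DeployService.ps1'
            Parameters = @{ PackagePath = '\\buildserver\drops\Service.zip'
                            ServiceName = 'MessageProcessor' }
        }
    )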


Three jobs, defining deployments of three very different types of package, handled by three different scripts, each with a unique set of parameter requirements. 

It gets even more powerful when you consider that $deploymentJobs could be built dynamically. It just so happens that a mechanism not too dissimilar to this is what enables me to push Platform-A in its entirety. And it's not much more complicated than the example above!

Getting results

The problem with asynchronous jobs is not so much knowing when they're done, but knowing what they did whilst doing it.

Getting the results is much simpler than you might think.

Remember the hash-table? What about this then...

DeployWebsite.ps1
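A sketch of the end of the handler: it simply emits a hash-table describing what it did, and that becomes the job's output (the property names are hypothetical):

    param(
        [hashtable] $Parameters
    )

    # ... do the actual website deployment here ...

    # Emit a hash-table as the final output of the script; this is what
    # the calling script will get back from Receive-Job.
    @{
        JobName   = 'DeployWebsite'
        Succeeded = $true
        Message   = "Deployed $($Parameters.PackagePath) to $($Parameters.SiteName)"
    }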


And back in the script that created the job..
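Something along these lines; Wait-Job blocks until the job finishes, and Receive-Job hands back whatever the handler emitted, our results hash-table included:

    $result = Get-Job -Name 'DeployWebsite' | Wait-Job | Receive-Job

    if ($result.Succeeded)
    {
        Write-Host "$($result.JobName): $($result.Message)"
    }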

How hard is that?! Vive la Powershell!

Coming up? Scaling up! 

With the basic blocks covered, in my next post we'll be ready to tackle some bigger questions:
  • How to manage a list of pending jobs
  • How to capture the results of these jobs as they complete
  • How to display the results back to the user's console host

