
rwandering.net

The blogged wandering of Robert W. Anderson

Archive for Grid Computing

Microsoft Windows Azure

Microsoft’s long-awaited cloud platform has finally been unveiled here at PDC 2008. Late to the Internet, Microsoft hit it hard. Late to the cloud, Microsoft is doing the same with Windows Azure. Happily, this will put an end to all the guessing about what Zurich, Red Dog, biztalk.net, SSDS, Live Mesh, etc., actually are.

Of course, now begins the discussion of how all these pieces fit together.  

This is not a simple approach like Amazon’s EC2 or Google App Engine.  Not to trivialize either, but they are certainly easier to understand.  Try explaining them to the proverbial grandmother — no problem, especially if you leave out virtualization and pythons 😉  (preemptive comment: I know AWS is much more than EC2 and that bigger and better things are coming from Google).

Regardless, Microsoft Azure is multi-faceted. In typical Microsoft fashion, there is a lot for a developer to choose from:

  • Azure Storage, Management, and Compute. Run WCF/ASP.NET-based services, with work queues and data storage (a toy sketch of this queue-plus-compute pattern follows the list).
  • Microsoft .NET Services, née biztalk.net (which I wrote about here). This gives you an Internet Service Bus, Access Control, and Workflow Services: messages and workflow in the cloud, connecting other cloud and enterprise offerings. Very big deal.
  • Microsoft SQL Services, née SQL Server Data Services or SSDS. Eventually a relational model in the sky; currently not too different from Azure Storage.
  • Live Services: Not much detail on this today, but this is clearly what was “Live Mesh”: a rich synchronization framework and “live operating environment” for writing applications that run across the Web and on users’ devices.
  • Windows Live (Live Office, Live SharePoint, Live Dynamics CRM, etc.). In-cloud applications extensible by partners and users with in-cloud and on-premises solutions.
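To make that first piece concrete, here is a runnable toy of the queue-plus-compute shape in C#. A plain in-memory Queue stands in for the durable cloud queue, and the job strings stand in for real work items; this sketches the shape of the pattern, not the actual Azure Storage API.

using System;
using System.Collections.Generic;

class QueueWorkerSketch {
    static void Main() {
        // stand-in for the durable cloud work queue
        Queue<string> queue = new Queue<string>();
        queue.Enqueue("job-1");
        queue.Enqueue("job-2");

        // the compute role: drain the queue and do the app-specific work
        while (queue.Count > 0) {
            string item = queue.Dequeue();
            Console.WriteLine("processed " + item);
        }
    }
}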

It all does fit together, and will be of immediate value to developers.  As Marc Jacobs of Lab49 said to me afterward,

We could make use of all of these services today.

Damned straight. It is the openness of this platform, the ability of developers to mix and match the different components between the cloud and on-premises solutions, that makes this such a winner.

This last point is an important one.  Microsoft is in a unique position to help enterprise IT bridge to the cloud.  While I don’t think Amazon and Google will cede that market to Microsoft, their current offerings aren’t a natural fit. 

Taking this all together — not forgetting Microsoft’s leading developer productivity story — it looks like a home run to me.


Some thoughts on Chrome


Google releases a new browser. The world declares “browser war” with some apprehension and some relish. Web developers cringe, because browser compatibility is already a major source of effort, cost, and frustration.

Q. Why would Google do this to us?  Just to take away Microsoft browser share? 

A. No.

Q. Are they doing this to extend the “Google OS” to the desktop in a way they control?

A. Probably, but that isn’t even their first concern.

Q. So, what is going on?

A. Well, I’m glad you asked.

Google is working to make their JavaScript view of the Web as powerful as possible. This makes sense given their enormous investments in JavaScript and in their own application suite.

In contrast to the approaches of Microsoft and Adobe with their Rich Internet Application (RIA) frameworks, Google has focused on JavaScript. Where Microsoft and Adobe are building a better user experience inside of a container, Google is creating a better user experience through dynamic HTML and AJAX techniques.

Their developer model includes building out tooling to make it easier to author AJAX applications, notably the Google Web Toolkit (GWT), which brings modern IDE tooling to AJAX development. GWT lets developers build maintainable object-oriented applications (in Java) that are compiled and optimized down to JavaScript. Plus, it promises cross-browser compatibility.

On the client side, they have Google Gears to enable local storage, improved caching support, and offline mode.

Q. So what have they been missing?  A browser? 

A. Not exactly.  They’ve been missing a JavaScript client runtime engine.

Google has made great advances in AJAX application development and tooling, but they have had to rely on others to provide reliability, responsiveness, performance, etc.

And that is what Chrome is about: taking control of the runtime engine for Google applications, which makes those applications far more compelling. As Google says, they would love it if other browsers adopted the engine too. I buy that.

Of course, by that time Chrome will be differentiated from its JavaScript engine.  By then Chrome will be about the Google OS.


WordPress 2.6 should be 3.5

Version 2.6 of WordPress came out the other day.  From the announcement (WordPress › Blog » WordPress 2.6):

Version 2.6 “Tyner,” named for jazz pianist McCoy Tyner, contains a number of new features that make WordPress a more powerful CMS: you can now track changes to every post and page and easily post from wherever you are on the web, plus there are dozens of incremental improvements to the features introduced in version 2.5.

These feature changes are actually pretty big.  Revision tracking?  Support for Google Gears?  Full support of SSL (finally)?  Theme previews?  Really cool “Press This” button?  Big.

This feels to me like a major release.  Probably not as major as the 2.5 release, but still pretty major.

In my book, 2.5 should have been version 3.0 and this one should have been 3.5.

Does the version number matter?  Yeah, it does.  It isn’t just about marketing.  It signals something about the maturity of the product.

Disclaimer: I am not immune to such version number mistakes.  After all, the Digipede Network 2.1 should have been version 2.5.


Cloud Services Continuum

I have found myself talking about cloud services a lot recently.  We have been talking about them here — there is an obvious synergy between what we do at Digipede and cloud services.  And I’ve been talking about them externally too: at the recent CloudCamp, on the Gillmor Gang, and in all sorts of other interesting contexts. 

Note that I refer to cloud services, not to the cloud.  I am not interested in defining cloud as a term, because I don’t think it very useful.  For those of us in the distributed computing space, cloud is the latest buzzword to compete with the word grid in terms of utter ambiguity.  I think the ship has already sailed on this one and I’m not going to try to call it back.

So, everyone is talking about cloud services and much of the conversation centers on understanding them and how they are changing the landscape.  Of course, cloud services are not one thing.  I find it helpful to think about them as parts of a continuum.  This seems useful regardless of the technical level of the people with whom I’m speaking.

The diagram to the right shows this continuum from infrastructure to platform to software. Brief definitions of these parts are:

  • Infrastructure includes provisioning of hardware or virtual computers on which one generally has control over the OS, thereby allowing the execution of arbitrary software.
  • Platform indicates a higher-level environment for which developers write custom applications. Generally the developer accepts some restrictions on the type of software they can write in exchange for built-in application scalability.
  • Software (as a Service) indicates special-purpose software made available through the Internet.

I have indicated several companies that play at different parts of this stack. This list is not comprehensive, nor does it attempt to represent motion across the stack.

One scenario in which I find myself talking about the continuum is when people equate Amazon EC2 with Google App Engine. EC2 is a flexible, scalable virtual hosting service with provisioning APIs. It allows you to dynamically scale the number of instances of your OS (i.e., Linux); what you do with those instances is up to you. Google App Engine operates at a much higher level in the stack: it is a new software platform with specific APIs, and it requires developers to build for that specific platform. Yes, they are both in the cloud, but they are very different services.

Another scenario in which the continuum is useful is in thinking about what vendors and new entrants might be up to. The continuum makes one thing even more clear: many vendors that operate higher in the stack are relying on their own internal lower-level infrastructure or platform. This raises some questions: which vendors will expose lower-level interfaces? And of course, which vendors will move up the stack?

  • Salesforce is already moving down with their PaaS offering.
  • Any chance Google will expose its infrastructure stack?  I doubt it, but I do expect them to move down a little. 
  • Some of the readers of this blog probably know better than I where Amazon and Microsoft are planning to go.

Yet another way it is useful is in comparing vendors within a particular category. Maybe I’ll write more on that later.

Is the continuum obvious?  Using the definition of obvious from patent law, yes, but I think it a useful paradigm.


Digipede + Velocity

Last week Microsoft released the first CTP of the Microsoft Distributed Cache (code-named Velocity).

I am definitely excited about this release. While Microsoft is not breaking new ground here, a distributed cache is a great addition to the .NET platform. Certainly there are competing technologies, but Velocity will be a very simple choice for developers and ISVs because we’ll be able to count on its availability.

This ISV is interested, so we tried it out.

We have many customers who use our Executive pattern to load and cache job-specific data for compute-intensive jobs on the Digipede Network. These data are often fetched through WS calls or directly from SQL databases, typically in the Executive.Start method. Before Velocity, the code might look like this:

protected override void Start() {
    // read the CBOData object from the database on every Executive
    _cboData = ReadCboData(JobTemplate.Parameters["CBODataStore"].Value);
}

Including Velocity in this example is really easy.  The following snippet adds use of the Velocity cache:

protected override void Start() {
    // get a handle to the named cache
    CacheFactory factory = new CacheFactory();
    Cache cache = factory.GetCache("CBOCache");
    // see if our CBOData object is already there
    string key = JobTemplate.Parameters["CBODataKey"].Value;
    _cboData = (CBOData)cache.Get(key);
    // if not, read it from the database . . .
    if (_cboData == null) {
        _cboData = ReadCboData(JobTemplate.Parameters["CBODataStore"].Value);
        // . . . and store it in the cache for later use
        cache.Put(key, _cboData);
    }
}

With a few lines of code, we reduce the load on the database server and the network and spend more time computing. (This simple code assumes that the Executives don’t all start at once; that assumption can be dropped by seeding the cache from a master application, as sketched below.)
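Here is a minimal sketch of that seeding, reusing the names from the snippets above and assuming the master application holds (as jobTemplate) the same JobTemplate it is about to submit:

CacheFactory factory = new CacheFactory();
Cache cache = factory.GetCache("CBOCache");
// one database read, up front . . .
CBOData cboData = ReadCboData(jobTemplate.Parameters["CBODataStore"].Value);
// . . . cached under the key every Executive's Start() will look for
cache.Put(jobTemplate.Parameters["CBODataKey"].Value, cboData);
// now submit the job: the Executives find a warm cache instead of hitting the database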

Of course, these are simple examples, but there are many other use cases. For example:

  • Digipede-enabled applications can share results;
  • master applications can load the cache with job-specific data; and
  • there are other scenarios, where baking Velocity deeply into the Digipede Network starts looking pretty interesting.

I have seen many posts on “must-haves” for a Velocity RTM. I mostly agree with the lists I have seen, and I’ll post a list of my own, mostly from the ISV perspective.

Cool stuff.


Digipede Network 2.1 Out the Door

Dan has a write-up of some of the enhancements in this release here. He said we probably should have called it 3.0, but it is really more of a 2.5. We’ll be hosting webcasts soon to go over the new features.

Thanks to the team for all the hard work in getting this out the door.

Follow http://twitter.com/010111011010111 for Digipede announcements.


Come see Digipede at the Microsoft launch event

Heroes who happen by our booth at the Server 2008, Visual Studio 2008, and SQL Server 2008 launch will get a chance to win an Xbox 360. OK, you don’t have to be a hero, but you do have to be spotted wearing a Digipede sticker sporting our mascot, Deatle.


Come on by and see us.

BTW: I won’t be at this event, but I’ll be at the one in SF on March 13th. No Digipede booth or giveaway there.


Narrowing the Semantic Gap

Last week, PowerShell Architect Jeffrey Snover wrote an excellent post titled The Semantic Gap. He writes about the gap as . . .

. . . 2 worlds:

  1. The world as we think about it.
  2. The world as we can manipulate it.

The difference between these two is what is called the semantic gap

This is a great working definition. 

Jeff writes about this specifically regarding PowerShell and instrumentation providers and asks the question,

So why do instrumentation providers close or not close the semantic gap?

Yes, some do, and some don’t. This isn’t just about a hierarchy of needs, but also about prioritization. How important to the provider is a narrow semantic gap for product X when used through interface Y?

In the case of X := Digipede Network and Y := PowerShell, we thought it pretty important.

But how do you decide whether narrowing the gap is worth it? Engineering costs aside, understanding what your interface could look like in PowerShell can help you decide. Internally, we answered these questions:

  1. What would a PowerShell script look like using just our .NET or COM APIs?
  2. What could it look like with Cmdlets?
  3. Would those Cmdlets support how we think about the Digipede Network (i.e., a small gap)?

I already said the answer to #3 turned out to be yes. In a previous post, Why a SnapIn for the Command-Line?, I gave an example of the gap for a common operation on the Digipede Network: getting the description of a pool of resources.
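As a rough illustration of question #2 (not the actual Digipede SnapIn; DigipedeClient and Pool are hypothetical stand-ins, while PSCmdlet and WriteObject are the real PowerShell SDK), a Cmdlet for that operation might look like this:

using System.Management.Automation;

[Cmdlet(VerbsCommon.Get, "Pool")]
public class GetPoolCommand : PSCmdlet {
    // optional pool name to filter on
    [Parameter(Position = 0)]
    public string Name { get; set; }

    protected override void ProcessRecord() {
        DigipedeClient client = new DigipedeClient();  // hypothetical stand-in
        foreach (Pool pool in client.GetPools()) {     // hypothetical stand-in
            if (Name == null || pool.Name == Name)
                WriteObject(pool);  // emit objects, not text; that is what narrows the gap
        }
    }
}

The payoff is that Get-Pool "MyPool" then reads the way we actually think about the system.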

If you are thinking about supporting PowerShell in your product, take a look at my post.

I hope this helps you decide.


Digipede Network for Microsoft MVPs

Today, we announced a program to provide free licenses of the award-winning Digipede Network to Microsoft MVPs. For more details and to request your license, go here.

Thanks to MVP Marco Shaw for the idea. 


Certified for Windows Server 2008

We just received our certification for Windows Server 2008. Or we are about to — it probably isn’t “official” yet. Anyway, congratulations to the Digipede team, and thanks to everyone at Microsoft and VeriTest who helped us through the process.

Getting the logo was arduous. This has less to do with the technical logo requirements and more to do with the complexities of the process itself. Some of the complexity is inherent in any such process, but much was due to the program itself being a sort of “work in progress”. But hey, that’s why we early certifiers got the testing fees waived. I think those who begin the process now will find the test requirements and tools better written and more robust.

As I said, passing the technical requirements was not arduous for us (we were already very close), but passing the tests did motivate some minor improvements to the Digipede Network:

  • Support for User Account Control (UAC).
  • More useful logging on the Digipede Server and during installations.
  • Improved user messaging and event logging during error conditions between server components and the database (a generic sketch follows this list).
  • Improvements to the Installation Guide including new sections on Custom Actions, installation artifacts, and more.
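As a generic illustration of that kind of error-condition event logging (not Digipede’s actual code; the source name is hypothetical), here is the shape in C#:

using System.Diagnostics;

class ServerErrorReporter {
    const string Source = "MyGridServer";  // hypothetical event source name

    public static void ReportDatabaseError(System.Exception ex) {
        // register the source on first use (requires elevated rights)
        if (!EventLog.SourceExists(Source))
            EventLog.CreateEventSource(Source, "Application");
        // surface the failure where an administrator will look for it
        EventLog.WriteEntry(Source, "Database error: " + ex.Message,
                            EventLogEntryType.Error);
    }
}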

Some of these changes have already made their way into the shipping product, though others won’t be available until Digipede Network 2.1 (which, while a minor upgrade, contains many features beyond the improvements mentioned above — I think the feature set will be announced soon).

So now we’re ready for the big launch of Server 2008, Visual Studio 2008, and SQL Server 2008 in Los Angeles on February 27th.  If you are going to be there, come see us at the Partner Pavilion.  I’m pushing for some kind of Digipede swag — but I’m not in marketing ;).

