BlueLock Blog

Part 2: 15 Tips for Software Companies, Understanding Cloud Computing

February 2, 2010 by Brian Wolff
In my last post, I tackled tips 1-5. This week I’d like to take a look at the next five tips from Adam Stone’s "Making sense of the cloud: 15 tips for software CEOs" and provide you with the BlueLock perspective on what companies looking to migrate to cloud computing should be thinking about.

Tip #6: To avoid vendor lock-in, stick to open standards.
This one makes a lot of sense to me: in the end, you need to make sure that whatever you put in the cloud, you can get back easily and intact. While some may argue that deploying VMware technology locks you into VMware’s virtualization platform, I would argue that VMware is the de facto standard for enterprise virtualization, by virtue of its large market share. Deploying VMware gives clients the flexibility to move a server to another VMware host if they wish. We even have cases where companies want to protect themselves from something happening to BlueLock as a cloud provider. In that instance, we replicate the virtual machines in their entirety to a neutral third party, Iron Mountain. If a triggering event were to occur, the company simply contacts Iron Mountain and receives immediate access to the virtual machines, which can be loaded right away onto servers running VMware. That’s just one straightforward example of how “portable” the environment is when it runs on a VMware-based virtualization platform.
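That portability is easy to see in concrete terms: if a workload can be exported to the open OVF format, it can be re-imported on any VMware (or other OVF-compatible) host. Here’s a minimal sketch of such an export in Python, assuming VMware’s ovftool utility is installed; the vCenter locator and output path are hypothetical placeholders, not our actual process.

    # Export a VMware VM to the open OVF format via VMware's ovftool.
    # The vCenter locator and output path are hypothetical examples.
    import subprocess

    source_vm = "vi://admin@vcenter.example.com/Datacenter/vm/app-server-01"
    target_ovf = "/backups/app-server-01.ovf"

    # ovftool converts between VMware's native formats and OVF/OVA packages
    subprocess.run(["ovftool", source_vm, target_ovf], check=True)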

Tip #7: Location, Location, Location.

Yes, indeed, it’s difficult to bend the laws of physics and the speed of light. This tip touches on two real issues: the first is latency, and the second is the set of laws that govern the jurisdiction where the data center sits. In both cases, BlueLock has engineered solutions to address our clients’ specific challenges. We have clients that need their data closer to them than our data centers in Indianapolis, IN or Salt Lake City, UT, whether for speed or for data privacy reasons. For these clients, we introduced our version of a private data center, called the BlueLock Box, in October 2007. This private cloud solution entails installing an HP C3000 blade chassis with redundant SAN shelves behind the client’s firewall. It provides the same benefits as BlueLock’s public cloud, such as fault tolerance and scalability, but puts the data closer to the client for speed and/or privacy.
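If you’re weighing whether a workload needs to sit closer to its users, the latency question is measurable rather than theoretical. A quick illustrative sketch: time a TCP handshake to each candidate data center. The hostnames below are hypothetical placeholders, not real BlueLock endpoints.

    # Compare round-trip connect time to candidate data center endpoints.
    # Hostnames are hypothetical placeholders.
    import socket
    import time

    endpoints = {
        "Indianapolis": ("dc1.example.com", 443),
        "Salt Lake City": ("dc2.example.com", 443),
    }

    for name, (host, port) in endpoints.items():
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"{name}: {elapsed_ms:.1f} ms")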

Tip #8:  Consider using a middleman. 
I agree with Adam: there is a huge opportunity for cloud brokers, companies with the expertise to help clients make thoughtful decisions about what can and/or should go into the cloud, and then to actually architect and deliver the cloud solution. We’ve worked closely with several partners who have trusted-advisor relationships with large Fortune 1000 clients that have chosen BlueLock as their cloud solution. In fact, we’ve been asked to present next week in VMware’s Partner Exchange keynote on how partners can work with a cloud provider to deliver real value to their clients. I will be sharing the stage with Carl Eschenbach, EVP of Worldwide Field Operations, and Casey Watson, VP of Business Development for Apparatus, to talk about how BlueLock and Apparatus have built a sizable business delivering cloud integration services for large clients.

Tip #9: Monitoring uptime isn’t enough; you need an action plan.

We couldn’t agree more with Adam on this point. From day one, we’ve had a resolution-based 99.99% uptime SLA in place for our clients. This means that not only will we respond quickly to an issue, but we promise resolution of that issue. On top of that, we’ve also patented a portal we call “the VITAL signs portal,” which gives our clients an overall view of the health of their environment, as well as the ability to drill into each aspect of it to see what’s actually happening. We have also built capabilities into the portal to send alerts and alarms when something goes wrong or when the environment has reached a pre-determined limit on resources like CPU, RAM, and storage. And if those measures aren’t enough, we’ve built tailored metrics for clients that wish to monitor additional key indicators in their environment.
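To make the alerting idea concrete, here’s a minimal sketch of threshold-based checks on CPU, RAM, and storage, in the spirit of the portal alarms described above. It is not our actual implementation: the limits are arbitrary example values, and it assumes the third-party psutil package is installed.

    # Threshold alerting sketch: flag any resource above its example limit.
    # Requires the third-party psutil package; limits are arbitrary examples.
    import psutil

    LIMITS = {"cpu": 85.0, "ram": 90.0, "disk": 80.0}  # percent

    readings = {
        "cpu": psutil.cpu_percent(interval=1),
        "ram": psutil.virtual_memory().percent,
        "disk": psutil.disk_usage("/").percent,
    }

    for metric, value in readings.items():
        if value >= LIMITS[metric]:
            # A real system would page an operator or raise a portal alarm here
            print(f"ALERT: {metric} at {value:.0f}% (limit {LIMITS[metric]:.0f}%)")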

Tip #10: A clause may look good in the contract, but be useless in the real world.

Adam’s tip in this area covered a “useless” escrow agreement. In tip number six, I shared how we’ve put an escrow agreement in place that can be tested and actually works. Having said that, I agree that empty legal promises are not the way to make sure you’re protected. Testing the system is the best way to ensure that what’s being set aside actually works. In addition to the escrow agreement, we also have numerous disaster recovery clients that have performed successful tests of our geographic failover disaster recovery service. In the end, you want the “promise” in writing, but then you want to run a test to make sure it performs as expected. Reminds me of an old Reaganism: “trust but verify.”
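Verification can be as simple as a scheduled script that exercises the standby copy. A toy sketch, assuming a hypothetical health-check URL on the failover site; the endpoint is a placeholder, not a real service.

    # "Trust but verify": confirm the failover copy actually serves traffic.
    # The URL is a hypothetical placeholder for a DR health-check endpoint.
    import urllib.request

    DR_URL = "https://dr.example.com/healthcheck"

    with urllib.request.urlopen(DR_URL, timeout=10) as resp:
        assert resp.status == 200, f"failover site unhealthy: HTTP {resp.status}"
        print("Failover site responded OK")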

Next week, I’ll take us down the homestretch and walk through the final five tips for migrating successfully to the cloud:

Tip #11:  Set financial penalties for downtime
Tip #12:  It takes time to see ROI on SaaS development
Tip #13:  Savings are not in the cloud, but in headcount
Tip #14:  Follow the cloud into new markets
Tip #15:  Let the cloud lead you to new innovations

If you'd like to read the original post by Adam Stone, go here.
