My Future Cloud: The Bare-Metal PaaS

November 15, 2010 by John Ellis

First off, I wanted to thank VMware for giving my blog post a home last week. I appreciate the feedback we received on the article – we’re looking forward to working with VMware on an upcoming blog series!

I focused a lot on the pragmatic last week; now my brain has bounced the other way and started to wonder what managed cloud hosting will look like in the future. Today there is a huge amount of enterprise interest in leveraging Infrastructure as a Service to run a more agile datacenter without the capital expense – but what will be the next big thing?

Currently bubbling up from the cloud crowd is a lot of interest in upcoming Platform as a Service offerings that are increasingly vendor-agnostic, with a wide range of supported languages and frameworks. CloudBees RUN@cloud promises to be the PaaS to watch in 2011, especially considering how well the team has implemented continuous integration services on cloud technology. VMware’s OpenPaaS initiative might be interesting as well, judging by their announcement at RubyConf. Still, I have to wonder if building PaaS as a layer on top of IaaS is the most efficient way to architect a Platform as a Service solution.

Bare-metal hypervisors like VMware’s ESXi provide hardware and even processor-architecture abstraction to the virtual machines they host. Traditionally a VMware BIOS bootstraps a guest operating system within a virtual machine, and we then install a software stack into that guest operating system. For example, we may create a virtual machine within ESXi and install Red Hat Enterprise Linux 6. After the machine is built we may install our runtime environment – perhaps the frameworks and libraries to host a vFabric application. Then we install our own application running on tc Server, tweak configuration files and hook it up to our deployment scripts. True, we can do this setup once and save the result as a vApp for future use, but we consume a lot of memory, compute and (especially) disk space for the guest OS, runtime environment and framework libraries.

What if we removed the guest OS setup, BIOS and runtime environment installation? What if we ran our runtime environment directly within the hypervisor, bypassing the OS and going straight to the cloud computing infrastructure itself? The Java runtime environment itself runs within a self-contained virtual machine of its own – does it need an OS to add another layer of abstraction?

Work already done by IBM, BEA and Sun has demonstrated that a Java runtime interacting directly with the hypervisor can yield significant gains. BEA (and now Oracle) provides a way for the JRockit JVM to skip the OS entirely with LiquidVM and WebLogic Suite. IBM took a slightly different route with Libra, creating an isolated execution environment that could host a Java virtual machine. In my mind the ultimate solution was Sun’s Project Guest VM, which lets the JVM sit entirely on the hypervisor with no operating system at all… only a microkernel augments the Java virtual environment. The VM itself is written entirely in Java, allowing for a highly optimized Java runtime residing within a cloud computing infrastructure.

Imagine if you no longer deployed just "Windows" or "Linux" virtual machines with your cloud technology – but could deploy "Windows," "Linux," "Java 6 EE," ".NET Runtime" or "Python" as stand-alone VMs. No OS tweaks, no hacking of open file handle limits, but instead a very thin virtual machine instance that is 100% dedicated to running your application. Managed cloud hosting providers could then augment this stack with their own specialized tools for automated deployment, monitoring and security policies.
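To make the idea concrete, here is a sketch of what such a deployment descriptor might look like. Every field name here is invented for illustration – no 2010-era hypervisor exposes a "runtime" field like this – but it shows the shift from describing a guest OS to describing only the runtime:

```yaml
# Conventional VM: the runtime rides on top of a full guest OS
vm:
  name: billing-app
  guest_os: rhel6          # full OS image, multi-GB disk
  disk_gb: 8
  memory_mb: 2048
  stack:                   # installed after the OS boots
    - vfabric-tc-server
    - billing-app.war
---
# Hypothetical bare-metal PaaS VM: the runtime IS the guest
vm:
  name: billing-app
  runtime: java6-ee        # JVM plus microkernel shim, no guest OS
  disk_mb: 256
  memory_mb: 1024
  artifact: billing-app.war
```

The hosting provider's deployment and monitoring tooling would then target the `runtime` field directly, rather than managing an OS it never needed.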

The possibilities for such a cloud computing platform go beyond making applications easier to deploy and more efficient to execute. We can stop worrying about hardware drivers and chipsets… our hypervisor and the new virtual machines worry about that for us. And once we’ve sufficiently abstracted away the underlying physical infrastructure, we can change the hardware architecture completely. Why not ditch conventional CPUs and accelerate our code with hardware built for more efficient vector processing? NVIDIA is already creating "GPU clusters" so that one can have a cloud-based supercomputing instance, allowing certain types of algorithms and applications to reach previously unthinkable levels of performance and power efficiency. Why not tune the Java or .NET runtime environment to take advantage of GPU clusters as well, letting cryptographic or streaming operations run at insane speeds?

With this type of bare-metal PaaS you can save massive amounts of disk storage, since you no longer need an entire operating system just to host an application. VMs that once needed 8 GB disks could now live inside 256 MB – possibly less if you leverage SAN de-duplication or virtual disk technologies like linked clones. Memory overhead would shrink too, since you only need enough memory to keep your application running. Compute could not only be augmented to crazy-fast levels, it could also run with far less power consumption. And application developers would have less infrastructure to deal with, letting them focus on the application rather than on supporting the layers it runs on top of.
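To put rough numbers on the storage claim – the 8 GB and 256 MB figures come from the paragraph above, while the fleet size of 100 VMs is an assumed example – the savings add up quickly:

```python
# Back-of-the-envelope disk savings from dropping the guest OS.
# Per-VM figures are from the post: ~8 GB for a conventional VM
# with a full guest OS vs ~256 MB for a runtime-only VM.
# The fleet size is an arbitrary assumption for illustration.
GB = 1024  # MB per GB

vms = 100
conventional_mb = vms * 8 * GB   # 100 VMs, each with a full guest OS
bare_metal_mb = vms * 256        # 100 runtime-only VMs

saved_gb = (conventional_mb - bare_metal_mb) / GB
pct = 100 * (conventional_mb - bare_metal_mb) / conventional_mb

print(f"Saved {saved_gb:.0f} GB ({pct:.1f}% of the original footprint)")
# → Saved 775 GB (96.9% of the original footprint)
```

And that is before SAN de-duplication or linked clones shave the 256 MB down further.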

If 2011 shapes up to be the year of the PaaS, I can only hope 2012 is the year we blur the lines between IaaS and PaaS. With bare-metal PaaS, cloud technology could give every hosted application its own supercomputer.

  • Michael Neale

    I like it!

    I think having bare-metal "applications" would be far more efficient. However, with the tools we have now, it is much easier to build things on top of the IaaS "layer" – exchanging platform efficiency for time to market and developer efficiency (usually a trade-off that works, at least for a while, to prove the concepts).

    Also, even the JVM makes a whole lot of assumptions about the existence of an OS around it, so there is still work to be done there.

  • John Ellis

    I definitely agree – PaaS is going to be sitting on top of IaaS for at least the next 2-3 years. I was hoping to see if Microsoft could do more of a bare-metal .NET runtime with Azure, but that seems to be running headless Win2k8v2 installs instead.

    You’re right that the JVM will always need some kind of OS – even if it’s just a shim to the hypervisor. The neat thing about Sun’s Project Guest VM was that even this shim was Java, meaning you got an optimized JVM stack all the way down.

    Ultimately you can do just a thin kernel, kinda like what Android did – largely using Linux so they could have a ready-to-go driver model. The big advantage either way is that you can afford to have a monolithic, highly-optimized kernel and a bare-bones filesystem that eschews a hardware abstraction layer and ties directly into a very specific hardware profile.

    By the way – we’re excited about CloudBees and appreciate the work with Drools!

  • Massimo Re Ferre’ (VMware)

    Now this is an interesting topic. I have been thinking about it for a while.
    I believe that building a PaaS on top of IaaS is the simplest/quickest approach, because you (the one building the PaaS stack) can leverage a lot of what IaaS already provides in terms of workload management and security boundaries. (Most would say it’s more secure to have two customers sharing the same hardware in two different VMs than sharing the same piece of software with security boundaries built into it.)

    As you point out, this may not be the most efficient model because of the duplication of resources being used, but it’s the easiest way to implement this. There are good examples of organizations that went down this path: Microsoft with Azure, Cloud Foundry (the SpringSource PaaS solution they had in beta when VMware bought them) and perhaps many others.

    There are other organizations (say, Google) that are taking a different path. I am not an expert on AppEngine, but I understand they don’t use any sort of hardware virtualization techniques, so all the magic happens in their platform software. This means (as far as I understand) that they are sharing an "OS" – certainly a very peculiar OS; don’t imagine an out-of-the-box Red Hat setup here, but something very efficient that includes both hardware interfaces southbound and application frameworks northbound. And because it’s Google, imagine a "virtual platform" stretched across a number of physical servers, so to speak. This "software platform" is shared among a number of tenants, each of which may be running more than one application. There is no doubt that this is a more efficient approach (it avoids the overhead of VMs and the duplication of identical resources), but it certainly adds a lot of complexity in building such a platform. Someone may also argue that it is a bit more inflexible and insecure, simply because wrapping your workload in a VM with well-defined boundaries seems more secure and more flexible than creating this "wrap" out of software boundaries that decouple the shared platform from the application.

    Long story short, I think the method you are describing may be the best of both worlds: getting rid of the overhead associated with VMs while keeping the security/flexibility/manageability advantages they provide.

    Sorry, this is more like a blog post than a comment.

    Massimo.

  • John Ellis

    Massimo – it’s great blog fodder, isn’t it!?!

    I agree that while an AppEngine approach may make scalability easier (since you’re deploying a given task to a compute farm), it does make the boundaries between applications blurry. The concept of "the runtime is the VM" still gives you the logical boundaries while giving the hardware provider ways to innovate, such as GPU-based infrastructures.

    "The runtime is the VM" approach does mean you still have to worry about vertical and horizontal scalability; however, I think you may have more vertical headroom with this approach, since mutex contention, thread swapping and page swapping from garbage collection may be handled more elegantly.