I attended an all-day seminar on Utility Computing. There were a couple of interesting presentations; the day was opened by Jim Baty and closed by Bill Vass. Frankly, it's been a patchy day, and I'm still not sure how firms that need competitive advantage from their IT can leverage utility offerings, because the suppliers will need a degree of homogeneity. It was definitely interesting and heartening to hear that several customers who have the expertise to build the complex grids on which today's utility offerings are to be based are still coming to Sun; they want to get the systems off their books, reduce their capital expenditure and increase their return on assets. (This is obviously of particular interest to companies with high capital utilisation, where IT competes with more traditional capital goods for the Capex budget.) A further area of interest is companies undertaking "Development" projects where the Capex costs would be prohibitive for the start-up phase. This is alleviated by a funding model in which cost and activity scale in harmony, so the risk of over-provisioning is negligible and the project can proceed.

Bill Vass's presentation was very interesting. He illustrated a number of applications above and beyond the limited (in number, not criticality) stateless, parallelisable apps at which Sun's current $1/CPU/hour offering is aimed. He showed how a single data centre architecture can supply and host these new-age applications, and he set new goals in my mind for utility solutions and data centre architectures. Interestingly, because the presentation was first written last summer, it also anticipates the ideas now expressed in Andy Ingram's presentation to Sun's Analyst Summit, "Workload-based Systems Design: A New Approach". Because of the network-centric approach he takes to application delivery, he's also very interesting on security, and even takes on nomadic issues for those last places in the developed world to get network connections, i.e. trains and planes; he's working on those too. I don't know whether his visions would have saved me my unnecessary trip (see here) to town last weekend, and I doubt that the battery of a wireless laptop Sun Ray would last a complete week, but I may look and see what it offers me; unlike my home systems, I don't need games on a laptop.
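
To make the pricing concrete, here is a minimal sketch of my own (not Sun's calculator; the function name and figures are illustrative) of why stateless, parallelisable work suits a $1/CPU/hour model: the bill depends only on total CPU-hours consumed, so spreading the same work over more CPUs shortens the elapsed time without changing the cost.

```python
# Illustrative only: cost of a stateless, parallelisable job under $1/CPU/hour utility pricing.
RATE_PER_CPU_HOUR = 1.00  # Sun's published utility rate at the time

def utility_cost(total_cpu_hours: float, rate: float = RATE_PER_CPU_HOUR) -> float:
    """Cost depends only on CPU-hours consumed, not on how many CPUs run in parallel."""
    return total_cpu_hours * rate

# A 1,000 CPU-hour workload costs the same whether it runs
# on 10 CPUs for 100 hours or on 1,000 CPUs for 1 hour.
for cpus in (10, 100, 1000):
    elapsed = 1000 / cpus
    print(f"{cpus:>4} CPUs x {elapsed:>6.1f} h = ${utility_cost(1000):,.2f}")
```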

Jim Gately, who runs Sun's Scalable Systems Group's computing facility, also spoke to us. His team runs one of the largest grids in the world and had some useful tips. He stated that none of the available grid management solutions is efficient enough to scale beyond about 1,000 nodes, so they've written their own. When asked about re-purposing his systems, he said that he didn't, but that any job running on his grid needs to acquire any additional resources it requires as part of the job (and decommit them afterwards). The average node is quite large (4–8 way) and his capacity planners understand the applications' performance and resource-consumption behaviour well. They've built a theoretical model around job management throughput and queuing, which he described as two-dimensional; its factors are, strangely, flow, resource and priority. (Strange because, now that I am writing about it, I am describing three factors.)
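
As a thought experiment only (the real scheduler Gately described is Sun's own and I have not seen it; the class and job names below are hypothetical), the acquire-and-decommit discipline he mentioned might look something like this, with each job responsible for claiming and releasing its own resources and a priority queue ordering the work:

```python
# Hypothetical sketch of the "acquire resources as part of the job, then decommit" pattern.
import heapq
from contextlib import contextmanager

class ResourcePool:
    def __init__(self, cpus: int):
        self.free = cpus

    @contextmanager
    def acquire(self, cpus: int):
        if cpus > self.free:
            raise RuntimeError("insufficient free CPUs")
        self.free -= cpus            # commit resources at job start
        try:
            yield
        finally:
            self.free += cpus        # decommit them when the job ends

def run(pool: ResourcePool, jobs):
    # Jobs are (priority, name, cpus); lower number = higher priority.
    heapq.heapify(jobs)
    while jobs:
        priority, name, cpus = heapq.heappop(jobs)
        with pool.acquire(cpus):
            print(f"running {name} (priority {priority}) on {cpus} CPUs; {pool.free} free")

run(ResourcePool(32), [(2, "render", 8), (1, "risk-calc", 16), (3, "batch-report", 4)])
```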

ooOOOoo

Originally posted on my sun/oracle blog, and republished here in Feb 2016


2 thoughts on "Utility Computing"

  • 7th February 2016 at 11:28 pm

    A couple of years later, I proposed that one distributed computing architecture could support utility, high-scale web apps and HPC. This article followed the linguistic fashions of the time and referred to the common superset as utility computing. The architecture became cloud, utility became an accounting and charging solution, and I am not sure that we have successfully merged HPC and multi-tenanted clouds. DFL

  • Pingback: Throughput Computing – davelevy.info
