Wednesday, August 13, 2008

The Future of Cloud Computing - An Army of Monkeys?

http://virtualization.sys-con.com/node/631658

I don't care if my cloud computing architecture is powered by a grid, a mainframe, my neighbor's desktop or an army of monkeys

By: Sam Johnston
Aug. 13, 2008 01:45 PM
Sam Johnston's Blog: http://samj.net/
There's been a good deal of confusion of late between the general concept of cloud computing, which I define as "Internet ('Cloud') based development and use of computer technology ('Computing')", and its various components (autonomic, grid & utility computing, SaaS, etc.). Some of this confusion is understandable, given that issues get complex quickly once you start peeling back the layers; much of it, however, comes from the very same opportunistic hardware and software vendors who somehow convinced us years ago that clusters had become grids. These same people are now trying to convince us that grids have become clouds in order to sell us their version of a 'private cloud' (which is apparently any large, intelligent and/or reliable cluster).


Let's not forget that much of the value of The Cloud (remember, like the Highlander, "there can be only one") comes from characteristics that simply cannot be replicated internally, like not having to engineer for peak loads and being able to build on top of the ecosystem's existing services. Yes, you can build a cloud computing architecture with large, intelligent clusters that are second-rate citizens or 'clients' of The Cloud (as most of these 'private clouds' will be), but calling them 'clouds' is a stretch at best and deceptive at worst - let's call a spade a spade, shall we?




The Cloud is what the Grid could have been




The term 'Grid' was coined by the likes of Ian Foster in the 1990s to define technologies that would 'allow consumers to obtain computing power on demand', following on from John McCarthy's 1961 prediction that 'computation may someday be organized as a public utility'. While it is true that much of the existing cloud infrastructure is powered by large clusters (what these vendors call grids), there are some solid, successful counterexamples, including:


  • BitTorrent, which shares files among a 'cloud' of clients

  • SETI@home, which distributes computational tasks among volunteers

  • Skype, which has minimal centralised infrastructure for tasks like account creation and authentication, delegating what it can to 'supernode' clients

By focusing on batched computational workloads and maximizing peak processing performance rather than efficiently servicing individual requests, grid computing has painted itself into a corner (or at least solves a different set of problems), creating a void for The Cloud to fill.


The Cloud is like the electricity network, only photons are more convenient than electrons, so the emergence of a single global provider is a possibility - some would say a threat.


Perhaps Thomas J. Watson Sr. (then president of IBM) was right when he was famously [mis]quoted as predicting a worldwide market for five computers back in 1943. On one hand, without the physical constraints of electrons (e.g. attenuation, crosstalk), it is conceivable that our needs could be serviced by photons channelled over optical fiber to one massive, centralised computing fabric. We don't have national water grids simply because water is too heavy, and even electrons get unmanageable at this scale (though many problems were solved by standardising and moving to alternating current), but weightless photons have no such limitation. At the other end of the scale we distribute the load across relatively tiny devices which may well outnumber their masters (pun intended). The reality will almost certainly fall somewhere in between, perhaps not too far from what we have today: a handful of global providers, scores of regional outfits and then an army of niche players. The forces of globalization, unusually free of geographic constraints, will also certainly affect how this plays out by drawing in providers from emerging economies.


The Cloud equivalent of an electron could be a standardized workload consisting of a small bundle of (encrypted) data and the code required to perform an action upon it.



Much of the infrastructure is already in place, but in order to better approximate the electricity grid we need a 'commodity' analogous to the electron. Today we transfer relatively large workloads (e.g. virtual machines, application bundles, data sets) to our providers, who run them for a relatively long time (days, weeks, months); however, it's possible to conceive of far more granular alternatives for many applications. These could be processed by networked computing resources in much the same way as the Cell processors that power the PlayStation 3 handle apulets.
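To make the idea concrete, here is a purely illustrative sketch of such a granular workload - a small, self-describing bundle of data plus the code to act upon it. The class and field names are hypothetical (no real cloud API is being described), and the encryption mentioned above is omitted for brevity; a real provider would also sandbox execution rather than trust the bundled code.

```python
import json
import zlib


class Workload:
    """A hypothetical 'electron of the cloud': a tiny, self-contained
    unit bundling the data to act on and the code to run against it."""

    def __init__(self, code, data):
        self.code = code   # source defining a function named run(data)
        self.data = data   # small payload the code operates on

    def serialize(self):
        """Pack the bundle for transfer to any provider on the market."""
        blob = json.dumps({"code": self.code, "data": self.data})
        return zlib.compress(blob.encode("utf-8"))

    @staticmethod
    def execute(packed):
        """What a provider (data centre or idle neighbourly PC) would do:
        unpack the bundle and run its code against its data.
        (Toy example only - real code would be sandboxed and encrypted.)"""
        blob = json.loads(zlib.decompress(packed).decode("utf-8"))
        scope = {}
        exec(blob["code"], scope)   # define run() from the bundled source
        return scope["run"](blob["data"])


# A trivial workload: sum a list of numbers somewhere in The Cloud.
w = Workload(code="def run(data):\n    return sum(data)", data=[1, 2, 3, 4])
result = Workload.execute(w.serialize())  # 10
```

The point is not the implementation but the granularity: millions of such bundles could be scheduled across whatever resources the marketplace offers, rather than parking one large virtual machine with one provider for months.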



These resources could be anything from massive centralised data centers to their modular counterparts, or indeed your neighbour's idle computer (which would pump 'computing resources' into the cloud in the same way as enterprising individuals can receive rebates for negative net consumption of electricity). Assuming you were to be billed at all, it would likely be per unit (e.g. MIPS-time and RAM-time rather than kWh) and at prices set by a marketplace not unlike the existing electricity markets. There may be more service specifications than voltage and frequency (e.g. security, speed, latency), and compliance with the service contract(s) would be constantly validated by the marketplace. In any case, given Moore's law and rapid advances in computing hardware (particularly massively parallel processing), it is impossible to accurately predict more than a few years out how these resources and marketplaces will look, but we need to start thinking outside the box now.
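Per-unit billing of this kind is easy to sketch. The function and rates below are entirely invented for illustration - the point is simply that compute-time and memory-time could be metered separately, at prices set by the marketplace, just as kWh are today:

```python
# Hypothetical per-unit metering: charge for processing and memory
# time separately, at marketplace-set rates (all figures invented).

def bill(mips_seconds, ram_gb_seconds, price_per_mips_s, price_per_gb_s):
    """Return the charge for a workload, analogous to kWh billing."""
    return mips_seconds * price_per_mips_s + ram_gb_seconds * price_per_gb_s


# A workload that consumed 500 MIPS for 120 s and 2 GB of RAM for 120 s,
# at illustrative rates of $1e-7 per MIPS-second and $1e-5 per GB-second.
charge = bill(500 * 120, 2 * 120, 1e-7, 1e-5)  # $0.0084
```

In practice the rates would float with supply and demand, and premium service specifications (security, speed, latency) would presumably command premium prices.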



For those who are looking for more background information, or a more formal comparison between the different components, check out Wikipedia's cloud computing article, which I have been giving a much-needed overhaul:




Cloud computing is often confused with grid computing (a form of distributed computing whereby a "super and virtual computer" is composed of a cluster of networked, loosely coupled computers acting in concert to perform very large tasks), utility computing (the packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility such as electricity) and autonomic computing (computer systems capable of self-management)[4]. Indeed, many cloud computing deployments are today powered by grids, have autonomic characteristics and are billed like utilities, but cloud computing is rather a natural next step from the grid-utility model[5]. Some successful cloud architectures have little or no centralised infrastructure or billing systems whatsoever, including peer-to-peer networks like BitTorrent and Skype and volunteer computing projects like SETI@home.


© 2008 SYS-CON Media Inc.
