Unless you have been living under a rock in the Atacama Desert with only a llama for company, you will have come across the term cloud computing. It is thrown about wildly on the internet, in corporate buzzword speeches and even in some of the stranger pubs in Dublin.
Believe it or not, the cloud is nothing new. Its evolution has seen it meander from the mainframe-terminal relationships of the corporations and universities of the 1950s to the complex array of services provided today. This can make it quite difficult to find a single definition for cloud computing. Yes, its roots lie in computing concepts of a bygone era, but it is only recent advances in network technologies that have allowed the cloud to emerge as a potential successor to the personal computer. With this in mind, do any of us really understand what cloud computing is or where it came from?
Defining the undefinable
In researching what cloud computing is, it is easy to become flummoxed by the vast differences in the definitions found in a simple Google search or in the various books on the topic. For example:
- Gartner defines cloud computing as “a style of computing in which scalable and elastic IT-enabled capabilities are delivered as a service using Internet technologies”, and it is important to note that Gartner refers to this definition as “official”.
- The National Institute of Standards and Technology (NIST) defines cloud computing as “a pay-per-use model for enabling available, convenient, on-demand network access to a shared pool of configurable computing resources (e.g. networks, servers, storage, applications, services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”
- Alternatively, there is the economist’s view that “Cloud computing is essentially the next phase of innovation and adoption of a platform for computing, networking, and storage technologies designed to provide rapid time to market and drastic cost reductions.”
Many of these definitions don’t describe what cloud computing is, merely the usage and features of the cloud. The fact appears to be that cloud computing is such a multitude of things that these definitions, amongst others, are perfectly correct, but for the most part they fall into the corporate buzzword trap, with a continuous regurgitation of “scalable” and “elastic” without really giving any insight into what either term means in relation to the cloud.
A brief use case
John Smith is the owner of an online novelty Christmas jumper store founded in 2010. Three seasons on and coming into the fourth, John has been delivering jumpers worldwide and has seen a serious increase in sales. When John initially set up the business he purchased a small server to run the company software and host the website; unfortunately, this hardware investment was made without the foresight of such amazing growth. John now has hardware that is incapable of supporting his business. During his busy period last year he received daily complaints about the performance of his website. John’s company has scaled, and his hardware and software both need to do the same.
Although John Smith’s novelty Christmas jumper store is a fictional creation, it is a perfect example of how the cloud can benefit scaling companies. Rather than requiring a heavy upfront investment in IT, the cloud allows your hardware and software solutions to scale as your business grows. Another point to note is the major busy period at Christmas: companies no longer have to purchase IT equipment to deal with the worst-case scenario. The elasticity of the cloud allows businesses to add and subtract virtual machines and disk space with just a few lines of code. We are now in an era of service-based, requirement-ready computing. The next question is: how did we get here?
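To make the elasticity idea concrete, here is a toy sketch (not any real cloud provider’s API — the function name, per-VM capacity and traffic figures are all invented for illustration) of the kind of autoscaling rule a store like John’s might use: given the current request load, decide how many virtual machines to pay for.

```python
import math

def machines_needed(requests_per_sec: float,
                    capacity_per_vm: float = 100.0,
                    min_vms: int = 1) -> int:
    """Toy elasticity rule: run just enough VMs to cover the current
    load, but never fewer than a minimum floor."""
    return max(min_vms, math.ceil(requests_per_sec / capacity_per_vm))

# A quiet month versus the Christmas rush (figures are hypothetical).
quiet = machines_needed(150)    # ordinary traffic
rush = machines_needed(2500)    # December peak
print(quiet, rush)              # prints "2 25"
```

The point of the sketch is the contrast with John’s fixed server: the fleet grows to 25 machines for December and shrinks back to 2 in January, and he pays only for what each period actually needs.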
An ideological birth
The concept of cloud computing is largely attributed to John McCarthy, who in 1961 pondered that “computation may someday be organized as a public utility.” McCarthy likened his vision of the future of computing to the telephone service, in which all the real work is done out of view and the user is only required to lift the receiver and dial a number. His idea was termed “utility computing” and provides much of the backbone on which the cloud computing paradigm is based.
Whilst John McCarthy is associated with the concept, J.C.R. Licklider, an American computer scientist, is more widely credited with proactively developing the technologies that created the first networks on which a cloud computing system could exist. Licklider outlined his idea of an “Intergalactic Computer Network” in a series of memos in the early 1960s. His postulation gained traction and led to the Advanced Research Projects Agency Network (ARPAnet) in 1969, which is commonly seen as the precursor to the Internet.
Despite the cloud computing concept being attributed to McCarthy and/or Licklider, in hindsight it may be seen as simply the next logical step in computing systems. Back in the 1960s and early 1970s, “there was no such thing as a personal computer; instead, there were large mainframes the size of small refrigerators which sat in a specially-designed room and could be accessed by users who shared its processing power, memory, software and so forth.” Access to the mainframe was via a terminal. These terminals had virtually no capabilities other than data input and output. Early mainframe systems were purpose-built, expensive and generally only manufactured by large companies such as IBM or developed by universities. This centralised approach to computing had many advantages, for example:
- Mainframes run multiple sessions with high reliability. Companies can run their IT operations for years with minimal downtime and few interruptions.
- Administration is very easy because all application layers are monitored on one machine.
- Management and administrative costs are reduced whilst scalability and reliability are much improved.
It’s nothing personal
Despite these advantages, the biggest problem and ultimate downfall of the mainframe was the networking technology of the era. Whilst mainframes were scalable, this scalability was normally confined to a single building. The only network around was ARPAnet, which in the early 1970s was adding nodes at a rate of only one per month. This lack of connectivity, coupled with the invention of the microprocessor, led to the development and popularisation of the personal computer.
Personal-sized computers can be traced as far back as the 1970s, the French-made Micral N being the earliest example of what we would nowadays consider a personal computer. Conceived in France by François Gernelle and introduced in 1973, powered by Intel’s 8008 chip, it was the first commercial non-kit computer based on a microprocessor. It took until the 1980s for the personal computer to be popularised, largely because of the expense involved in creating a “one size fits all” computer. An early example of this style of computer was the Amstrad CPC range, launched in 1984. The early 1990s, thanks in no small part to Microsoft, saw sales of the personal computer explode: approximately 24 million units were sold between 1985 and 1990, and in the following five years to 1995 sales more than doubled to 58 million. This growth in the personal computer market brought a new need for network connectivity, and once again the universities came to the rescue.
Networking is good
As the personal computer evolved, so too did the network technologies that brought the internet into existence. ARPAnet, originally designed as a way to allow the American Air Force to maintain command and control after a nuclear attack, was now beginning to connect universities across America. ARPAnet’s abilities were limited: a user could only connect to a remote computer, print to a remote computer and transfer files between computers. These limited abilities were nonetheless revolutionary at the time, and the Transmission Control Protocol (TCP) that grew out of the ARPAnet project quickly became the network industry standard. Another major milestone in the development of the internet was Ethernet, created by Bob Metcalfe in 1973. Ethernet quickly coupled with TCP and allowed the widespread development of local area networks in the 1980s, in turn allowing the nascent Internet to flourish.
In 1989 Tim Berners-Lee proposed the World Wide Web (WWW) as an information management system designed to share ideas and knowledge with his counterparts across the world. By the end of 1990 he had implemented the first successful communication between a Hypertext Transfer Protocol (HTTP) client and server via the Internet, and by 1991 the first server outside Europe went live in the United States of America. The web developed and grew into the mammoth it is today, as did the means of connectivity. Broadband internet connections are now commonplace in homes in every corner of the globe. Free wireless connections are offered in coffee shops across all major cities and towns. Mobile phones have evolved into mobile devices, or smartphones. This ability to always be connected to the web has proved to be the step needed to reverse the devolution that the personal computer inevitably was.
One step back, two steps forward
In a sense we have evolved back to the cloud. The difference is that the mainframes and terminals are no longer restricted to single buildings but instead span the globe in client-server relationships. This has allowed for a new view of software design and hardware infrastructure.
In 1999 Salesforce.com showed the world that software could be bought as a service. This “as a Service” ideology caught on quickly, and in 2002 the online retailer Amazon got involved, at first offering not software but storage as a service. By 2006 Amazon had developed a full suite of services which could be used on the cloud. One example, Amazon Web Services’ EC2 (Elastic Compute Cloud), allows its consumers to create and destroy custom-sized, web-based virtual machines at a whim, fulfilling the “elasticity” criterion of the cloud.
Having looked at some of the services provided by cloud computing, it is easy to see the “why” behind its current popularity. For customers, it is the ease of scalability with little upfront capital investment. For providers, it is the ability to open up to new customers and markets. Some estimates have valued the cloud market in Europe alone at 250 billion by 2020.
Cloud computing is essentially completing the natural life cycle of the computing age. Much like electricity, computing is moving from its innovation stage to its service stage. Electricity, though known about for hundreds of years, was not really commoditised until 1821, when Michael Faraday invented the first electric motor. This sparked an innovative period that was maintained through the early and mid-19th century. In 1890, with much help from Nikola Tesla, the pioneer of alternating current (AC), George Westinghouse installed the first long-distance power transmission lines: 14 miles between Willamette Falls and Portland, Oregon. In less than 80 years electricity went from innovation to service, and homes all across America, and in time the world, began to plug in and pay for it. The similarities to cloud computing are uncanny.
In truth cloud computing itself has no “real” history; like electricity, it is the culmination of a century of innovations. If not for early computers the size of large rooms, if not for the transition to the transistor, if not for ARPAnet and the Internet, if not for the personal computer and the mobile phone, if not for all the things that have been instrumental in building a world where technology is commonplace and connectivity is a necessity, there would be no cloud and there would be no cloud computing. Looking to the future, it is easy to see that connectivity will be king, and the cloud, well, that will be the kingdom.