Sometimes things just happen. But rarely do they happen without many antecedents. And rarely do we see the antecedents until after they happen.
I believe that Cloud Computing has followed this pattern. The obvious antecedents are Moore’s Law (http://en.wikipedia.org/wiki/Moore%27s_law), the rapid drop in disk prices, the proliferation of virtualization, and the emergence of large, efficient datacenters. Much has been made of all of these factors.
One antecedent that isn’t mentioned is the network capacity required to move huge amounts of data from customers to datacenters and between datacenters. These networks have long lead times to install, and they require vast sums of money. Think about digging very long trenches and laying fiber optic cables between cities, and then hooking up each of those cities with fiber. This is outrageously expensive, especially when you consider that rights of way and approvals need to be acquired as well.
Given that the lead time for installing these networks was decades, and we didn’t know that cloud computing was going to be a key application, how did they get installed in time to be there when we needed them?
I think the best answer is bad business decisions. Wait, did I just say that? Cloud computing is a key technology for the future, so how can it rest on bad business decisions? At the time vast quantities of network infrastructure were being put in, the Internet was in its infancy and doubling in size every 90 days. Companies (like WorldCom, Global Crossing, and MCI) decided to install capacity at a feverish pace. Then the bubble burst in 2001, and those companies were left with a ton of stranded capacity; many went out of business. But the capacity remained, and at a lower cost basis once acquired out of bankruptcy. Sometimes decisions made for one reason in one era have massively positive consequences in another.