When thoughts turn to the Internet, those thoughts tend to be of software and content. Dreams of search, e-commerce, photographs, music, and all those social applications twittering away. But I’m an old hardware guy, so my thoughts also turn to all the iron, copper, and silicon needed to spew those software and content bits out into the world.
If that sounds boring, it’s not. Taken to their logical conclusion, the changes that are already happening in how we all compute, consume, and communicate will inevitably be mirrored by equally dramatic shifts in the machines that make it all possible.
For the past 25 years or so, we’ve been living in an era of distributed computing during which computers have increasingly migrated from the “glass house” of IT out to the workgroups, small offices, and desktops on the periphery. Even before Intel Corp. (Nasdaq: INTC) and Microsoft Corp. (Nasdaq: MSFT) became early catalysts for this trend’s exponential growth, computers had been dispersing to some degree for essentially the entire history of computing. The minicomputer and Unix revolutions were among the earlier waves headed in the same general direction.
In one sense, our current era is experiencing just another wave of the same. Let’s call it the mobile wave, or the pervasive wave. Computers are everywhere, from cellphones to MP3 players and refrigerators. But there’s a difference from the Wintel wave that ushered in truly widespread distributed computing.
Yes, today computers are ubiquitous, but they’re increasingly devices for interacting with information generated and stored elsewhere. Or they’re autonomous actors handling low-level tasks unbidden. In short, they’re conceptually more like terminals -- albeit compact, sophisticated, mobile ones -- than the personal computers of the last wave, which gave users full control not just over some processing power, but also over their data and over when and how they connected and interacted with others. The intelligence is in the network or, more precisely, in the vast server back-end that feeds all these devices.
One face of the Internet’s evolution may be the social application running on the cellphone. But the other is the mega-data-center pulling massive power from the hydroelectric dams on the Columbia River. More and more cycles and more and more stored bits are moving online. Consumer services from Google (Nasdaq: GOOG) to Flickr are in the vanguard, but software as a service, à la Salesforce.com Inc., is making steady inroads in business as well.
There are significant economies of scale to this re-centralized computing. Perhaps we don’t end up with five “computers” (in the networked-collection-of-computers sense) as Sun Microsystems Inc. CTO Greg Papadopoulos hyperbolically suggested. But we could well end up with hundreds -- or perhaps tens -- of computers that would handle most of our processing, storing, and communicating.
There are significant security and privacy concerns accompanying all this. The trend is particularly worrisome to folks who embraced distributed computing, not just as a good technical approach to leveraging the power of cheap microprocessors, but as a good social architecture because it puts the user in control of his information and data. But there are also profound implications for how computers will be built and who will build them -- in short, the structure of the whole industry.
Suppose there were just a dozen or half-dozen mega-service providers running mega-data-centers located where a communications nexus coincides with cheap power. These organizations might have names such as Google or Microsoft. No doubt some countries would get in the game as well. Now here’s the question. How would such entities relate to the vendor landscape of today? Would they be merely large consumers of processors, servers, and software in much the same vein that they are today?
I wonder if we wouldn’t see some fundamental structural changes. As the largest of these service providers seek competitive differentiation and advantage, it will be very tempting for them to explore custom software and hardware angles that leave them looking more and more like today’s sophisticated hardware systems companies.
We’re already seeing harbingers of such a trend. Google doesn’t do silicon, but it does source special motherboards from Intel that it uses to build many of its own servers. And Google’s come under high-profile criticism of late for the degree to which it modifies open-source software for in-house use. In short, Google already intensely customizes “off-the-shelf” components to its own purposes.
Today’s IT is a world of specialization, and no one can “go it alone” to the same degree as the early mainframers or minicomputer makers, which built literally everything from silicon to application software. It’s a question of modern complexity and associated economies of scale. Even if Google found it could benefit from using some custom “Google search processor,” it would almost certainly have it designed and fabricated by someone in the business of doing such things.
But the new scale of recentralized computing brings different needs, which will, in turn, drive different decisions about building and buying. It’s inevitable that Google and its ilk will do many things -- whether in-house or through contractors -- that independent hardware and software vendors have long been accustomed to considering their purview. For all intents, the biggest service providers could well become system makers (and ones in the old, proprietary, vertically integrated mold) in all but name. The only difference is that they’ll take the computing power directly to their customers rather than bothering with the old messy intermediate process of shipping computers that have to be installed, loaded, and configured.
— Gordon Haff, Senior analyst at Illuminata on grids/supercomputing