The tech industry and its users come up with new jargon and other terminology on a seemingly daily basis. In spite of this, in the course of my writing I regularly come across things and concepts that don’t have crisp monikers to describe them. I find myself stuck with the choice of using convoluted and awkward descriptions, or using a term that is deliberately incomplete or even inaccurate.
Some terms admittedly verge on technical pedantry. For example, if I use RISC (Reduced Instruction Set Computing) as shorthand for the class of mostly high-end server processors that aren’t x86, I’m making a generalization that was more accurate historically than it is today. Neither the Itanium processor used by Hewlett-Packard Co. (NYSE: HPQ) in its Integrity servers nor the resurgent mainframes from IBM Corp. (NYSE: IBM) are actually RISC designs, although both are important in that segment of the market. There’s no neat umbrella term.
Is this just geekery that doesn’t matter in most circumstances? Perhaps. But consider that just saying “RISC” is the equivalent of using “V-8” as a synonym for automobile engine -- and imagine what doing so might say to a reader about your technical depth.
Here’s another word I need. I want to refer to x86 computers running general-purpose operating systems (primarily Windows, Linux, or OS X) in a desktop or notebook form factor. The term “PC” doesn’t really work, because it has come to suggest Windows. “Desktop” doesn’t work either -- although it probably comes closest -- because it excludes notebooks.
Nor do we have a good collective term for the software we use to access applications over the Internet -- e.g., browsers, AJAX, plug-ins, Adobe AIR applications, and so forth -- rather than full-blown C/C++ fat client apps.
Probably most troublesome, though, is when the lack of the mot juste leads to phrasing that isn’t just sloppy but actively misleading. Case in point: the way we describe things like x86 processors and servers and many of the operating systems -- especially Linux and Windows -- that run on them.
“Commodity” is one term we often see applied to, say, a processor from Intel. But what is a commodity? It’s a little simplistic, but this definition from Wikipedia captures the basics: “A commodity is something for which there is demand, but which is supplied without qualitative differentiation across a market. It is a product that is the same no matter who produces it, such as petroleum, notebook paper, or milk.”
Contrast this with, say, an Intel Corp. (Nasdaq: INTC) Nehalem processor. It’s arguably one of the most complex pieces of machinery on earth, and it’s differentiated in many, many ways from other Intel processors and from its competitors from Advanced Micro Devices Inc. (NYSE: AMD). It’s true that Intel and AMD x86 processors can often be substituted for each other because they’re mostly compatible from a software perspective. But to use that as a definition of commodity is like saying all cars are compatible because you can drive them from one place to another on a road.
“Industry standard” is even more pernicious, given that it implies adherence to open interfaces and protocols that anyone can write to and use. TCP/IP is indeed an industry-standard communications protocol. However, the aforementioned Nehalem and Microsoft Windows are not. Just because a single vendor sells a lot of something doesn’t somehow morph it into a “standard.”
I suspect that as cloud computing continues to grow in importance, we’ll hear a lot about “industry-standard” forms of it -- whether they’re really standards or not.
In practice, I tend to use “volume,” as in “volume processors” and “volume operating systems,” for products like Nehalem and Windows. This at least glancingly captures the dynamic, in that high volumes tend to lead to a virtuous cycle of lower costs, although it’s not perfect.
There are plenty of other terms that likewise need to be reexamined. I’m sure any technology-oriented reader can think of several.
— Gordon Haff, Senior Analyst at Illuminata Inc., covering grids/supercomputing