You don’t need petabytes of information to play in the big data league. The low end of the threshold is more like 10 TB, and in fact “big” doesn’t tell the whole story. The many types of data and the speed at which data changes are, along with sheer volume, daunting challenges for businesses struggling to make sense of it all. Volume, variety, velocity – they’re the hallmarks of the big data era we’re now in.
Variety comes in the form of Web logs, wirelessly connected RFID sensors, unstructured text from social networks, and myriad other data types. Velocity breeds velocity: fast-changing data drives demand for deep analytic insights delivered in hours, minutes, or, in extreme cases, seconds, rather than the weekly or monthly reports that once sufficed.
How are IT organizations coming to grips with data volume, variety, and velocity? Specialized databases and data warehouse appliances are part of the answer. Less heralded but also essential are information management tools and techniques for extracting, transforming, integrating, sorting, and manipulating data.
IT shops often break new ground with big data projects, experimenting with ways to combine emerging data sources and put them to use. Database and data management tools are evolving quickly to meet these needs, and some are blurring the line between row-oriented and column-oriented databases.
Even so, available products don’t fill all the gaps companies encounter in managing big data. IT can’t always turn to commercial products or established best practices to solve big data problems. But pioneers are proving resourceful, figuring out how and when to apply different tools, from database appliances to NoSQL frameworks and other emerging information management techniques. The goal is to cope with data volume, velocity, and variety not only to keep storage costs under control but, more importantly, to deliver better insights faster.
— Doug Henschen (firstname.lastname@example.org)