Are log files the beginning of Big Data?

Log files contain huge amounts of rapidly changing data that hold hidden treasures for developers and IT administrators. Is this the first instance of Big Data in the data center?

Kord Campbell, CEO of Loggly, and I had a long, rambling conversation about the origin of the idea of Big Data. In his view, the log data kept by every operating system, every application framework, every virtualization solution and every database on every intelligent device could be viewed as a rich source of operational data and the very first instance of what we're now calling Big Data.

Loggly, Campbell pointed out, has developed powerful technology that makes it possible for developers and IT administrators to learn a great deal about what is working in their IT infrastructure, what is not working, and to tease out the root cause when something fails.

Many suppliers, I pointed out, have been offering tools that wade through log files to gather this type of operational data, evaluate it and offer some level of insight to developers and operations staff. Campbell agreed, but said that those suppliers typically require installing monitoring software everywhere or deploying appliance servers.

Loggly, Campbell said, has a better idea: make it possible to collect service logs, stack traces, system metrics and the like, send them to a cloud-based application, and then deliver both historical and real-time data directly to the developer's or administrator's Web browser.
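To make that workflow concrete, here is a minimal sketch in Python of what shipping a structured log event to a hosted collector might look like. The endpoint URL, token and field names are hypothetical illustrations, not Loggly's documented API.

```python
import json
import requests  # third-party HTTP client: pip install requests

# Hypothetical ingestion endpoint and customer token -- placeholders,
# not Loggly's actual documented interface.
INGEST_URL = "https://logs.example-cloud-service.com/inputs/YOUR_TOKEN"


def ship_event(service, level, message, metrics=None):
    """Send a single structured log event to the cloud-based collector."""
    event = {
        "service": service,
        "level": level,
        "message": message,
        "metrics": metrics or {},
    }
    resp = requests.post(
        INGEST_URL,
        data=json.dumps(event),
        headers={"Content-Type": "application/json"},
        timeout=5,
    )
    resp.raise_for_status()  # surface HTTP errors instead of dropping events silently


# Example: report an application error along with a simple metric
ship_event("checkout-api", "ERROR",
           "NullPointerException in OrderHandler",
           metrics={"latency_ms": 412})
```

The appeal of this approach is that nothing beyond a small HTTP call has to be installed on the monitored systems.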

Loggly offers representational state transfer (REST) application programming interfaces that developers can use to gather data and send it to Loggly's Hadoop cluster in the cloud. The analytical results are available quickly.
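A developer would then pull those results back over the same kind of REST interface. The sketch below assumes a hypothetical search endpoint, query parameters and credentials; it illustrates the pattern rather than a documented contract.

```python
import requests

# Hypothetical search endpoint; parameter names and authentication are assumptions.
SEARCH_URL = "https://api.example-cloud-service.com/v1/search"


def recent_errors(query="level:ERROR", minutes=15):
    """Fetch matching log events from the hosted, Hadoop-backed index."""
    resp = requests.get(
        SEARCH_URL,
        params={"q": query, "from": f"-{minutes}m", "until": "now"},
        auth=("api_user", "api_password"),  # placeholder credentials
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("events", [])


for event in recent_errors():
    print(event.get("timestamp"), event.get("message"))
```

Because the heavy lifting happens in the provider's cluster, the client side stays this simple: a query goes out, analyzed results come back to the browser or script.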

Loggly appears to be offering an interesting take on Big Data, along with the ability to really know what is happening in a complex application environment.