Log management - the next hot meme!
Hmm... maybe not...
Well, at any rate, an article from Jon Oltsik caught my eye.
In The invisible log data explosion, Jon reports on ESG research regarding how much log data is being processed by large companies.
This lends weight to Tim Bray's Wide Finder as more than a mere thought experiment. Many large companies are currently processing more than 1TB of log data per month, and 10TB per month is within sight.
As Tim mentions, the most common techniques for processing log data do not cleanly scale onto the impending multi-core system architectures.
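The Wide Finder idea is easy to sketch. Here's a minimal, hypothetical illustration (not Tim's actual code; the pattern and sample lines are made up) of splitting a log scan across CPU cores with Python's multiprocessing module, instead of the usual single-threaded pass:

```python
import re
from multiprocessing import Pool

# Hypothetical pattern in the spirit of the Wide Finder exercise:
# count fetches of articles under /ongoing/When/.
PATTERN = re.compile(r'GET /ongoing/When/\S+')

def count_matches(lines):
    """Count pattern hits in one chunk of log lines."""
    return sum(1 for line in lines if PATTERN.search(line))

def parallel_count(lines, workers=4):
    """Split the log into chunks and fan the scan out across cores."""
    chunk = max(1, len(lines) // workers)
    chunks = [lines[i:i + chunk] for i in range(0, len(lines), chunk)]
    with Pool(workers) as pool:
        return sum(pool.map(count_matches, chunks))

if __name__ == "__main__":
    sample = [
        '1.2.3.4 - - "GET /ongoing/When/200x/2007/09/20/Wide-Finder HTTP/1.1" 200',
        '1.2.3.4 - - "GET /favicon.ico HTTP/1.1" 200',
    ] * 1000
    print(parallel_count(sample))  # 1000
```

The splitting itself is trivial; the hard parts Tim is probing at are I/O, chunking files that don't fit in memory, and doing all this without making the code unreadable.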
Jon reiterates the growing need for a serious look at how log data is processed.
My contention is that soon we will be talking about log management architecture and log management services the same way we discuss SOA and business intelligence today. Regardless of whether it attains the venderprise status of SOA and BI, I sincerely hope we'll see more interest in fresh approaches to improving the current state of log management.
Beyond the matters of scale discussed by Tim and Jon, there are several other aspects to log management I find troublesome.
- The feedback loops provided by log data tend to be glacial.
- The actions prompted by log events tend to involve humans. So-called self-healing systems have been created, but many suffer from narrow applicability or an excess of buzzwords.
- The current models for log management severely inhibit the amount of data logged, reducing the potential value of log data.
- Log management is the second-class citizen of second-class citizens, relegated to an operational tool used to victimize on-call personnel, gather security events, and summarize hard-drive failures. Ok, I'm overstating somewhat, but the point stands.
Logging 2.0, baby! Yeah!!