Log analysis


In computer log management and intelligence, log analysis is an art and science seeking to make sense out of computer-generated records. The process of creating such records is called data logging.
Typical reasons why people perform log analysis include compliance with security policies and audit regulations, system troubleshooting, forensics, security incident response, and understanding online user behavior.
Logs are emitted by network devices, operating systems, applications and all manner of intelligent or programmable devices. A stream of messages in time sequence often comprises a log. Logs may be directed to files and stored on disk, or directed as a network stream to a log collector.
Log messages must usually be interpreted with respect to the internal state of their source, and they announce security-relevant or operations-relevant events.
Logs are often created by software developers to aid in debugging the operation of an application or in understanding how users are interacting with a system, such as a search engine. The syntax and semantics of data within log messages are usually application- or vendor-specific. Terminology may also vary; for example, the authentication of a user to an application may be described as a login, a logon, a user connection or an authentication event. Hence, log analysis must interpret messages within the context of an application, vendor, system or configuration in order to make useful comparisons to messages from different log sources.
Log message format or content may not always be fully documented. A task of the log analyst is to induce the system to emit the full range of messages in order to understand the complete domain from which the messages must be interpreted.
A log analyst may map varying terminology from different log sources into a uniform, normalized terminology so that reports and statistics can be derived from a heterogeneous environment. For example, log messages from Windows, Unix, network firewalls, and databases may be aggregated into a "normalized" report for the auditor. Different systems may signal different message priorities with a different vocabulary, such as "error" and "warning" vs. "err", "warn", and "critical".
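A minimal sketch of such terminology mapping, assuming hypothetical source vocabularies (the severity words below are illustrative, not drawn from any real vendor specification):

```python
# Sketch: mapping vendor-specific severity terms onto one normalized scale.
# The vocabularies below are illustrative assumptions, not real vendor specs.
SEVERITY_MAP = {
    "error": "ERROR", "err": "ERROR", "critical": "ERROR",
    "warning": "WARNING", "warn": "WARNING",
    "informational": "INFO", "info": "INFO", "notice": "INFO",
}

def normalize_severity(raw: str) -> str:
    """Translate a source-specific severity word into the shared vocabulary."""
    return SEVERITY_MAP.get(raw.lower(), "UNKNOWN")

print(normalize_severity("err"))      # ERROR
print(normalize_severity("Warning"))  # WARNING
```

With every source translated into one vocabulary, counts and reports can be computed across the heterogeneous environment as if the logs came from a single system.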
Hence, log analysis practices exist on the continuum from text retrieval to reverse engineering of software.

Functions and technologies

Pattern recognition is the function of selecting incoming messages and comparing them with a pattern book in order to filter them or handle them in different ways.
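One way to sketch this, assuming a hypothetical pattern book of regular expressions paired with handling decisions (the patterns and actions here are illustrative assumptions):

```python
import re

# Sketch: a "pattern book" of compiled regexes, each paired with an action.
# Messages matching no known pattern are kept for human review.
PATTERN_BOOK = [
    (re.compile(r"authentication failure"), "alert"),
    (re.compile(r"connection (opened|closed)"), "keep"),
    (re.compile(r"health check OK"), "drop"),
]

def handle(message: str) -> str:
    """Return the action for the first matching pattern; unknowns go to review."""
    for pattern, action in PATTERN_BOOK:
        if pattern.search(message):
            return action
    return "review"

print(handle("sshd: authentication failure for root"))  # alert
print(handle("health check OK"))                        # drop
```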
Normalization is the function of converting message parts to the same format, for example converting timestamps or addresses from different sources into a single common representation.
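A small sketch of timestamp normalization, assuming two hypothetical source formats (one web-server style, one syslog style) converted to ISO 8601:

```python
from datetime import datetime

# Sketch: converting timestamps from two assumed source formats into ISO 8601.
FORMATS = ["%d/%b/%Y:%H:%M:%S", "%b %d %H:%M:%S %Y"]

def normalize_timestamp(raw: str) -> str:
    """Try each known source format and return an ISO 8601 string."""
    for fmt in FORMATS:
        try:
            return datetime.strptime(raw, fmt).isoformat()
        except ValueError:
            continue
    raise ValueError(f"unrecognized timestamp: {raw!r}")

print(normalize_timestamp("10/Oct/2000:13:55:36"))  # 2000-10-10T13:55:36
print(normalize_timestamp("Oct 10 13:55:36 2000"))  # 2000-10-10T13:55:36
```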
Classification and tagging is the ordering of messages into different classes, or the tagging of them with keywords for later use.
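A minimal sketch of keyword tagging, assuming an illustrative rule table (the tag names and trigger words are assumptions for the example):

```python
# Sketch: tagging messages with keywords so they can be grouped later.
TAG_RULES = {
    "auth": ("login", "logon", "authentication"),
    "network": ("connection", "socket", "timeout"),
    "storage": ("disk", "volume", "inode"),
}

def tag(message: str) -> set[str]:
    """Attach every tag whose trigger words appear in the message."""
    lowered = message.lower()
    return {name for name, words in TAG_RULES.items()
            if any(word in lowered for word in words)}

print(sorted(tag("User login from 10.0.0.5 after connection retry")))
# ['auth', 'network']
```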
Correlation analysis is a technique for collecting messages from different systems and finding all the messages belonging to a single event. It is usually connected with alerting systems.
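A simple way to sketch correlation, assuming each parsed record carries a shared request identifier (the sources, IDs and messages below are hypothetical):

```python
from collections import defaultdict

# Sketch: grouping messages from different systems by a shared request ID,
# assuming each parsed record is (source, request_id, text).
records = [
    ("firewall", "req-42", "allowed 10.0.0.5 -> 10.0.0.9:443"),
    ("webserver", "req-42", "GET /admin 403"),
    ("database", "req-17", "slow query: 2.3s"),
    ("ids", "req-42", "signature match: admin probe"),
]

events = defaultdict(list)
for source, request_id, text in records:
    events[request_id].append((source, text))

# All messages belonging to one event, ready to hand to an alerting system.
for source, text in events["req-42"]:
    print(source, "-", text)
```

In practice, when no explicit identifier exists, correlation often falls back on shared fields such as source address, user name, or a time window.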
Artificial ignorance is a type of machine learning that discards log entries known to be uninteresting; it is a method of detecting anomalies in a working system. In log analysis, this means recognizing and ignoring the regular, common log messages that result from the normal operation of the system and are therefore not interesting. However, new messages that have not appeared in the logs before can signal important events and should therefore be investigated. In addition to anomalies, the approach can identify common events that did not occur, for example a system update that runs every week but has failed to run.
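A sketch of artificial ignorance under these assumptions: a list of known-boring patterns, a list of expected recurring events, and hypothetical log lines (all illustrative, not from a real system):

```python
import re

# Sketch of artificial ignorance: discard entries matching known-boring
# patterns, surface everything else, and flag expected events that are absent.
BORING = [
    re.compile(r"session (opened|closed) for user \w+"),
    re.compile(r"health check OK"),
]
EXPECTED = [re.compile(r"weekly update completed")]

def analyze(lines: list[str]) -> list[str]:
    """Return unfamiliar lines plus warnings for expected events that never ran."""
    findings = [line for line in lines
                if not any(p.search(line) for p in BORING)]
    for pattern in EXPECTED:
        if not any(pattern.search(line) for line in lines):
            findings.append(f"MISSING expected event: {pattern.pattern}")
    return findings

logs = ["session opened for user alice", "disk error on /dev/sda1"]
print(analyze(logs))
# ['disk error on /dev/sda1', 'MISSING expected event: weekly update completed']
```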
Log analysis is often compared to other analytics tools such as application performance management (APM) and error monitoring. While much of their functionality overlaps, the difference is rooted in process: APM emphasizes performance and is used mostly in production, while error monitoring is driven by developers rather than operations and integrates into code in exception-handling blocks.