Applications that have been deployed to production must be monitored. One of the best ways to monitor application behavior is by emitting, saving, and indexing log data. Logs can be sent to a variety of applications for indexing, where they can then be searched when problems arise.

Knowing which tools to use, and how to write logging code, makes logging far more effective in monitoring applications and diagnosing failures.

The Twelve-Factor App methodology has gained popularity as a set of guidelines for building modern software-as-a-service applications. One of the twelve factors is logging.

According to the twelve-factor app methodology, log data should be treated as an event stream. Streams of data are sent as a sort of “broadcast” to listeners, without regard for what will happen to the data that is received. According to the twelve-factor logging method:

“A twelve-factor app never concerns itself with routing or storage of its output stream.”

The purpose of this recommendation is to separate the concerns of application function from those of log gathering and indexing. The twelve-factor method goes as far as to recommend that all log data be sent to stdout (also known as the ‘console’). This is one of several ways to keep the application distinct from its logging. Regardless of the method used, separating these concerns simplifies application code: developers can focus on what data they want to log, without having to worry about where the log data goes or how it is managed.

Popular logging frameworks for the .NET framework assist developers in maintaining this separation of concerns. They also provide configuration options to modify logging levels and output targets, so that logging can be modified in any environment, from Development to Production, without having to update code.

Logging frameworks

Logging frameworks typically support features including:

  • Logging levels
  • Logging targets
  • Structured (also called “semantic”) logging

NLog

NLog is one of the most popular, and one of the best-performing, logging frameworks for .NET. Setting up NLog is fairly simple: developers can use NuGet to download the dependency, then edit the NLog.config file to set up targets. Targets are like receivers for log data. NLog can target the console, which is the twelve-factor method. Other targets include File and Mail. Wrappers modify and enhance target behaviors. AsyncWrapper, for example, improves performance by sending logs asynchronously. Target configuration can be modified by updating the NLog.config file, and does not require a code change or recompile.
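
To make the pieces concrete, here is a minimal sketch of the same idea expressed with NLog’s programmatic configuration API, assuming the NLog package is installed. The target names and rule are arbitrary, and the equivalent setup is more commonly declared in NLog.config:

    using NLog;
    using NLog.Config;
    using NLog.Targets;
    using NLog.Targets.Wrappers;

    public static class LoggingSetup
    {
        public static void Configure()
        {
            var config = new LoggingConfiguration();

            // Console target: log entries go to stdout, in line with the twelve-factor method
            var console = new ConsoleTarget("console");

            // AsyncWrapper sends log entries on a background thread to improve performance
            var asyncConsole = new AsyncTargetWrapper(console);

            // Route INFO and above to the wrapped console target
            config.AddRule(LogLevel.Info, LogLevel.Fatal, asyncConsole);

            LogManager.Configuration = config;
        }
    }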

NLog supports the following logging levels:

  • DEBUG: Additional information about application behavior for cases when that information is necessary to diagnose problems
  • INFO: Application events for general purposes
  • WARN: Application events that may be an indication of a problem
  • ERROR: Typically logged in the catch block of a try/catch block, includes the exception and contextual data
  • FATAL: A critical error that results in the termination of an application
  • TRACE: Used to mark the entry and exit of functions, for purposes of performance profiling

Logging levels are used to filter log data. A typical production environment may be configured to log only ERROR and FATAL levels. If problems arise, the logging verbosity can be increased to include DEBUG and WARN events. The additional context provided by these logs can help diagnose failures.
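
As a small sketch of how that adjustment might look with NLog (the GlobalThreshold property is one way to do it; in practice the same change is usually made by editing the rules in NLog.config rather than recompiling code):

    using NLog;

    // Production default: suppress everything below ERROR
    LogManager.GlobalThreshold = LogLevel.Error;

    // While diagnosing a problem: include DEBUG and WARN events as well
    LogManager.GlobalThreshold = LogLevel.Debug;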

Here’s an example of some code that logs using NLog:

    using System;

    namespace MyNamespace
    {
        public class MyClass
        {
            // NLog recommends using a static variable for the logger object
            private static readonly NLog.Logger logger = NLog.LogManager.GetCurrentClassLogger();

            // The logging calls need to live inside a method; an example method wraps them here
            public void DoSomething()
            {
                // NLog supports several logging levels, including INFO
                logger.Info("Hello {0}", "Earth");

                try
                {
                    // Do something
                }
                catch (Exception ex)
                {
                    // Exceptions are typically logged at the ERROR level
                    logger.Error(ex, "Something bad happened");
                }
            }
        }
    }

Log4NET

Log4NET is a port of the popular and powerful Log4J logging framework for Java. Setup and configuration of Log4NET is similar to NLog, where a configuration file contains settings that determine how and where Log4NET sends log data. The configuration can be set to automatically reload settings if the file is changed.
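
For example, one common way to wire this up (assuming a separate log4net.config file; the file name and location are configurable) is the assembly-level XmlConfigurator attribute, where Watch = true tells Log4NET to reload the settings whenever the file changes:

    using log4net.Config;

    // Placed once per assembly, typically in AssemblyInfo.cs or any source file.
    // Watch = true reloads the configuration automatically when the file is edited.
    [assembly: XmlConfigurator(ConfigFile = "log4net.config", Watch = true)]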

Log4NET uses appenders to send log data to a variety of targets. Multiple appenders can be configured to send log data to multiple data targets. Appenders can be combined with configuration to set the verbosity, or amount of data output, by logging level. Log4NET supports the same set of logging levels as NLog, except that it does not have a built-in TRACE level.

Logging syntax in Log4NET is also similar to NLog:

    private static readonly ILog log = LogManager.GetLogger(typeof(MyApp));

    log.Info("Starting application.");
    log.Debug("DoTheThing method returned X");

ELMAH (Error Logging Modules and Handlers)

ELMAH is specifically designed for ASP.NET applications. It is fairly easy to set up and includes a dashboard application that can be used to view errors. ELMAH is popular and has been around for a long time; however, it doesn’t really follow the twelve-factor method. ELMAH saves data to databases, including MySQL, SQL Server, Postgres, and others, which mixes the concerns of logging with the concerns of log persistence. Log data ends up in a relational database, which is not the optimal storage method for logs (I’ll talk about a better option in a moment).

What and how to log

Separating the concerns of log management from logging simplifies application code. However, developers must still write logging code. Effective logging can make an application highly supportable. Poor logging can make it a nightmare for operations teams. It’s important for developers to know what data to log, and to use patterns for logging that can be enforced across an application.

Log levels

Generally speaking, the more you log, the better off you will be when problems arise. Logging everything is good for troubleshooting, but it can lead to disk usage issues, and large logs can be difficult to search. Logging levels are used to filter logging output, tailoring the amount of data output to the situation at hand.

Each logging level is associated with the type of data logged. DEBUG, INFO, and TRACE events are typically non-error conditions that report on the behavior of an application.

Depending on how they are used, these levels are usually enabled in non-production environments and disabled in production. Production environments typically have ERROR and WARN levels enabled to report problems. This limits production logging to critical, time-sensitive data that impacts application availability.

Developers have some freedom in precisely how to use each logging level provided by a given framework. Development teams should establish a consistent pattern for what is logged, and at what level. This may vary from one application to another but should be consistent within a single application.

Logging best practices

Debug logs typically report application events that are useful when diagnosing a problem. Investigations into application failures need the “W” words: Who, What, When, Where and Why:

  • Who was using the system when it failed?
  • Where in the code did the application fail?
  • What was the system doing when it failed?
  • When did the failure occur?
  • Why did the application fail?

Answering “why did the application fail” is the outcome of a failure investigation, and the reason for logging in the first place. Logging targets typically handle the “when” by adding timestamps to log entries. The rest of the “Ws” come from logging statements added to the code.

There are two practices that will help make logging more effective: logging context and structured logging.

Logging context means adding the “Ws” to log entries. Without context, it can be difficult to relate application failures to logs.

This is a common log statement:

    try
    {
        // Do something
    }
    catch (Exception ex)
    {
        logger.Error(ex);
        throw;
    }

The exception object in this example is sent as a log entry to the logging target(s). This is needed, but there’s zero context. Which method was executing (where did the application fail)? What was the application doing? Who was using the application?

Both NLog and Log4NET have features that help add this contextual information to logs, making them far more useful. The contextual data is added as metadata to the log entries.
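
Here’s a sketch of what that can look like with NLog (the ScopeContext API shown is available in recent NLog versions; the class, method, and property names are illustrative, and Log4NET offers similar context properties):

    using System;
    using NLog;

    public class OrderProcessor
    {
        private static readonly Logger logger = LogManager.GetCurrentClassLogger();

        public void ProcessOrder(string userName, int orderId)
        {
            // Push the "who" and "what" onto the logging scope so that configured
            // layouts can include these properties in every entry written inside the scope
            using (ScopeContext.PushProperty("UserName", userName))
            using (ScopeContext.PushProperty("OrderId", orderId))
            {
                try
                {
                    // Do something
                }
                catch (Exception ex)
                {
                    // The exception supplies the "where"; the message and scope supply the "what" and "who"
                    logger.Error(ex, "Failed to process order {OrderId} for user {UserName}", orderId, userName);
                    throw;
                }
            }
        }
    }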

Structured logging means formatting the data that is logged in a consistent way. NLog and Log4NET both support layouts. These are classes and other utilities that format logs and serialize objects into a common format. Structured logs can be indexed much more effectively, making them easier to search.
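
For instance, NLog’s message templates (supported in recent NLog versions; the class and property names below are illustrative) capture named properties as structured data instead of flattening them into a string, so a JSON-style layout or log indexer can store them as separate, searchable fields:

    using NLog;

    public class CheckoutService
    {
        private static readonly Logger logger = LogManager.GetCurrentClassLogger();

        public void Checkout(int orderId, decimal total)
        {
            // Unstructured: the values are baked into the message text and are hard to query later
            logger.Info("Order " + orderId + " completed with total " + total);

            // Structured: OrderId and Total are captured as named properties on the log event
            logger.Info("Order {OrderId} completed with total {Total}", orderId, total);
        }
    }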

Where to send log data

Earlier, I mentioned that relational databases are not the best place to send log data. Time-series databases (TSDBs) are much more efficient at storing log data: they require less disk space to store events that arrive in time-ordered fashion, as log events do. Open-source TSDBs such as InfluxDB are much better suited to storing log data than relational databases.

The ELK stack is a popular solution for log aggregation. ELK is an acronym that stands for Elasticsearch, Logstash, and Kibana.

  • Elasticsearch is a fast search engine that is used to find data in large datasets.
  • Logstash is a data pipeline platform that will collect log data from many sources and feed it to a single persistence target.
  • Kibana is a web-based data visualizer and search engine that integrates with Elasticsearch.

These and other open-source and paid solutions allow developers to gather all logging to a central system. Once stored, it’s important to be able to search logs for the information needed to resolve outages and monitor application behavior. Logging can produce a lot of data, so speed is important in a search function.

Crash reporting vs. error logging

Logging and crash reporting tools are different, and both should be part of a debugging workflow.

Dedicated logging tools give you a running history of events that have happened in your application. When a user reports a specific issue, however, logs alone can be unhelpful, because you have to search through the files manually to find the relevant entries.

Dedicated error and crash reporting tools, like Raygun, focus on the issues users face that occur when your app is in production. They record the diagnostic details surrounding the problem that happened to the user, so you can fix it quickly with minimum disruption.

Final thoughts on logging best practices

Effective logging makes a major difference in the supportability of an application. It’s important not to mix the concerns of logging and log storage. Popular logging frameworks for .NET provide this separation of concerns. They also provide features that make it easier to log consistently and to filter log data.

Open source and paid solutions for log aggregation are important tools, especially if you have many applications logging a large volume of data. With the right tools and skills, your developers can produce applications that they, and your operations teams, can support with relative ease.

Logging can be time-consuming and inaccurate when you need to solve a problem fast. Take your logging practices to the next level by using Raygun Crash Reporting.