Because logging in ASP.NET Core is so simple to use, it’s also easy to overlook crucial diagnostic capabilities or to misuse these powerful logging tools. In this tutorial, I’ll walk you through the fundamentals of ASP.NET Core structured logging, both with and without Serilog, so you can easily grasp the underlying ideas. Let’s start by exploring these 4 structured logging techniques in ASP.NET Core.

ASP.NET Core Basic Logging

As I mentioned before, the default logging framework that ships with ASP.NET Core is already set up when you create the web application builder. You don’t need to do anything: the default configuration adds the Console, Debug, EventSource, and EventLog providers, as described in the documentation.

Using a logging provider (use case: Serilog)

Let’s discuss providers before getting into the meat of this piece, since we’ll cover both Serilog and the standard ASP.NET Core logging system. Third-party logging providers can be plugged into ASP.NET Core to improve your logging, and you should take advantage of the robust features they offer. In this article, we will use Serilog as our provider.

Adding Serilog to your project

Serilog configuration is simple. Add the “Serilog.AspNetCore” NuGet package to your project. Once the package is installed, Serilog must be configured. Use a code fragment like the following in your Program.cs (.NET 6+) to add console logging, make Serilog the default provider, and load configuration from the appsettings file.
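A minimal sketch of that Program.cs wiring, using the Serilog.AspNetCore package:

```csharp
using Serilog;

var builder = WebApplication.CreateBuilder(args);

// Replace the default providers with Serilog, reading its settings
// from appsettings.json and writing to the console.
builder.Host.UseSerilog((context, loggerConfiguration) =>
    loggerConfiguration
        .ReadFrom.Configuration(context.Configuration)
        .WriteTo.Console());

var app = builder.Build();
app.Run();
```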

Following this, we can expand the configuration options in the appsettings.json file. For instance, you could use the following to change the minimum log level in Serilog:
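A minimal sketch of such an appsettings.json fragment, using Serilog’s “MinimumLevel” section:

```json
{
  "Serilog": {
    "MinimumLevel": {
      "Default": "Information"
    }
  }
}
```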

You might use the following to limit a log level to a certain logging context or namespace. Keep in mind that this is highly configurable; I’ll include links at the end of this article where you can learn more about these settings.
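For example, per-namespace overrides can be expressed with Serilog’s “Override” section (the “MyApp.Services” namespace here is hypothetical):

```json
{
  "Serilog": {
    "MinimumLevel": {
      "Default": "Information",
      "Override": {
        "Microsoft.AspNetCore": "Warning",
        "MyApp.Services": "Debug"
      }
    }
  }
}
```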

1. Use scopes to correctly delimit your logs

What is a logging scope? I’ll use Microsoft’s explanation: “A series of logical operations can be grouped into a scope. This grouping can be used to attach the same data to each log that is produced as part of the set.”

Here, we’ll add scopes that cover whole HTTP requests, as well as a user-id or client-id scope, allowing us to identify which user or client of our API is executing which action at any given time. This eliminates the need to repeat the user id or client id in each log, and it clearly delimits the logs, even the inner-level logs of the framework. It also results in cleaner code. Here’s how we go about it:

NB: Scopes are wrapped in using statements because BeginScope returns an IDisposable, and disposing it ends the scope and frees its resources.

  • We must intercept HTTP requests at the start of the pipeline and determine whether they are authenticated. If so, we open a scope containing the user’s ID or the API client’s ID; if not, the request flow simply continues. We’ll need a middleware for this.
  • Our middleware must be registered in the pipeline. Note that it must be added after the authentication middleware; otherwise the request will not yet be authenticated when our middleware runs, and that’s not what we want.
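The two steps above can be sketched as a middleware like the following; the class name and the “sub” claim used for the user id are assumptions for illustration:

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;
using System.Threading.Tasks;

// Hypothetical middleware that opens a logging scope per authenticated request.
public class UserScopeLoggingMiddleware
{
    private readonly RequestDelegate _next;
    private readonly ILogger<UserScopeLoggingMiddleware> _logger;

    public UserScopeLoggingMiddleware(RequestDelegate next,
        ILogger<UserScopeLoggingMiddleware> logger)
    {
        _next = next;
        _logger = logger;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        var userId = context.User?.Identity?.IsAuthenticated == true
            ? context.User.FindFirst("sub")?.Value
            : null;

        if (userId is null)
        {
            // Unauthenticated: continue the pipeline without a scope.
            await _next(context);
            return;
        }

        // BeginScope returns an IDisposable, hence the using statement;
        // every log written inside it carries the UserId property.
        using (_logger.BeginScope("UserId:{UserId}", userId))
        {
            await _next(context);
        }
    }
}
```

It would then be registered after authentication, e.g. `app.UseAuthentication();` followed by `app.UseMiddleware<UserScopeLoggingMiddleware>();`.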

Once this is done, your logs will be properly delimited, which will make diagnosis easier.

Here is a fantastic post I wrote about CQRS Command Validation with MediatR in ASP.NET Core, in case you use CQRS. This post, in my opinion, will be very helpful for all developers, but especially for those who incorporate validation logic into their Command and Query business logic.

2. Make use of structured logging features: Logging in JSON Format

Here is the second ASP.NET Core structured logging technique. Let’s define structured logging first: it is the practice of logging application events in a well-organized, consistent format, making the logs easier to read, search, and interpret. This is superior to simply logging plain strings to an output. With that in mind, structuring our logs is the most effective way to log, and what format could be more suitable than JSON?

Configuring JSON format for your logs in ASP.NET Core

It is simple to configure the output format of your log entries in ASP.NET Core. You can do this with the “FormatterName” and “FormatterOptions” settings found in the appsettings.json file. Here’s an illustration:
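A sketch of that appsettings.json fragment, using the built-in JSON console formatter’s options:

```json
{
  "Logging": {
    "Console": {
      "FormatterName": "json",
      "FormatterOptions": {
        "IncludeScopes": true,
        "JsonWriterOptions": {
          "Indented": true
        }
      }
    }
  }
}
```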

In the example above, I specify that each console log should be indented, contain scopes, and be in JSON format. With this configuration in place, your logs will appear in the console in JSON format.

Setting up Serilog’s JSON format for your logs

Naturally, Serilog lets you give your logs a specific structure. In our example, we’ll output our console logs as JSON. Remember the code above that configures Serilog? Simply make a small modification to it:

.WriteTo.Console(formatter: new JsonFormatter()) // JsonFormatter lives in Serilog.Formatting.Json

The output template can then be modified to either log JSON only or to include more information in the log output. Here’s how to log the JSON message, followed by a newline and any exception:
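One way to do that is with the Console sink’s outputTemplate parameter; the `:j` format specifier renders the message, including its structured data, as JSON:

```csharp
// Logs only the JSON-rendered message, a newline, and any exception.
.WriteTo.Console(outputTemplate: "{Message:j}{NewLine}{Exception}")
```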

3. How to properly use logging parameters and destructure them

It is crucial to pass parameters when logging, and doing so is extremely simple: enclose the parameter name in curly braces and pass the value to the logging method. You don’t need to prefix your strings with a “$” sign; the logging mechanism already takes care of the substitution. The code below, for instance, logs the user’s age.
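A minimal sketch, assuming an injected ILogger instance named `_logger`:

```csharp
// {Age} is a named message-template property, not C# string
// interpolation: note there is no "$" prefix on the string.
var userAge = 30;
_logger.LogInformation("User age is {Age}", userAge);
```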

If structured JSON logging was enabled when this parameter was supplied, the output is clearly structured, making it simple to query the logs by the age parameter I passed. Part of the possible output is shown below:
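The exact shape depends on the formatter, but with Serilog’s JsonFormatter the entry would contain roughly:

```json
{
  "MessageTemplate": "User age is {Age}",
  "Properties": {
    "Age": 30
  }
}
```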

What about passing complex objects as logging parameters? This is where the default ASP.NET Core logger shows its limitations: it simply calls the object’s “ToString()” method to convert it to a string, so no JSON objects will appear in your logs. Destructuring is the process of converting these parameters into JSON objects, and it is supported by logging providers like Serilog.

In Serilog, use “@” to destructure a complex parameter into JSON, and “$” to force stringification, merely logging the parameter’s ToString() representation. Here’s an illustration.
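A sketch, assuming an injected logger named `_logger` and a hypothetical UserInfo class:

```csharp
// Hypothetical type; note it does not override ToString().
public class UserInfo
{
    public string Name { get; set; }
    public int Age { get; set; }
}

var info = new UserInfo { Name = "Ada", Age = 30 };

// "@" tells Serilog to destructure the object into a JSON structure
// instead of calling ToString() on it.
_logger.LogInformation("Fetched user {@Info}", info);
```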

The code above will generate a log entry with the Info parameter rendered as JSON. With structured logging turned on, it will appear as follows:
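Part of the possible output (property names taken from the hypothetical object above):

```json
{
  "Properties": {
    "Info": {
      "Name": "Ada",
      "Age": 30
    }
  }
}
```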

Now, if I stringify using the $ sign instead, as displayed below:
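Assuming the same kind of complex object as before:

```csharp
// "$" forces stringification: Serilog calls ToString() on the object.
_logger.LogInformation("Fetched user {$Info}", info);
```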

I get the following output:
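Assuming the object’s class does not override ToString(), the default implementation returns the type’s full name (the “MyApp” namespace here is hypothetical), so part of the output would be:

```json
{
  "Properties": {
    "Info": "MyApp.UserInfo"
  }
}
```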

With the help of a free, Serilog-compatible open-source library, you can fully customize how your parameters are destructured. It lets you hide sensitive data when destructuring objects, for example masking a password with “***”.

You can add custom configuration to your appsettings.json file to control destructuring. Here is an illustration of the kind of configuration I included. I’ve added links at the end of this article where you can learn more about this feature.
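As one example, Serilog’s own built-in destructuring limits can be configured from appsettings.json via the “Destructure” section (a masking library would add its own options on top of this):

```json
{
  "Serilog": {
    "Destructure": [
      { "Name": "ToMaximumDepth", "Args": { "maximumDestructuringDepth": 4 } },
      { "Name": "ToMaximumStringLength", "Args": { "maximumStringLength": 100 } },
      { "Name": "ToMaximumCollectionCount", "Args": { "maximumCollectionCount": 10 } }
    ]
  }
}
```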

4. Don’t forget to choose the appropriate log level and logging category

This is the easiest structured logging technique in ASP.NET Core. You only need to inject an ILogger&lt;T&gt; into your class, where “T” is the class itself and determines the log category. This is covered in the documentation I cited earlier. Once T is defined, every log entry will carry the fully qualified name of that class as its category, so if I log a message from such a class, the output will contain the category.
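A minimal sketch of this; the class and namespace names are hypothetical:

```csharp
using Microsoft.Extensions.Logging;

public class OrderService
{
    private readonly ILogger<OrderService> _logger;

    // "T" is OrderService, so every entry from this logger is tagged
    // with the fully qualified class name as its category.
    public OrderService(ILogger<OrderService> logger)
    {
        _logger = logger;
    }

    public void PlaceOrder()
    {
        _logger.LogInformation("Order placed");
        // With the default console formatter, the output is roughly:
        // info: MyApp.Services.OrderService[0]
        //       Order placed
    }
}
```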

This is quite beneficial when conducting diagnostics: when an error happens, you want to know where your logs are coming from.

It is simple to specify the log level, and the documentation does an excellent job of outlining how to do it. However, what log level should you pick? The various log levels and their definitions are listed below.

  • Trace: The lowest log level, used to record as much information as possible. It is packed with detail about the system’s inner workings. Write logs at this level if you wish to include very specific details about how your software works internally. Because of the sheer amount of information it produces, this level should NEVER be enabled by default in production; turn it on only when attempting to resolve intricate problems that have arisen in your software.
  • Debug: The next level up. As the name suggests, it is used to log debugging information, and it also produces a lot of output. It should normally only be enabled in test environments. You might temporarily activate it in production to determine when something went wrong, but be sure to switch back to a higher level afterwards.
  • Information: Used to record general information about how the software is being used.
  • Warning: Used when your software exhibits unexpected behavior that will not cause your system to fail; for instance, when a user’s search returns no results.
  • Error: Use this log level when a failure occurs that is specific to a particular user or operation. These errors should not prevent the system as a whole from functioning properly.
  • Critical: Use this log level for failures that require immediate attention and leave the system unable to function as intended, for example running out of disk space or losing the database connection.
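Log levels can also be set per category in appsettings.json; a minimal sketch (the “MyApp.Services” category is hypothetical):

```json
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning",
      "MyApp.Services": "Debug"
    }
  }
}
```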


What do you think of these 4 ASP.NET Core structured logging techniques? In my opinion, these are the key ideas you need to understand before adding logging functionality to your ASP.NET Core application. With them, you can log like an expert and cut down the time you spend on troubleshooting and diagnostics. I hope this is useful to you as you pursue software development.
