Logging in Xamarin with Serilog

This article briefly introduces the design of logging infrastructure in general. Although parts of the article are written in the context of Xamarin with MvvmCross, Serilog is available across .NET Framework and .NET Standard, so most of the following content applies to the whole .NET ecosystem.

Logging Infrastructure

Below are the common components involved when you do logging.

  • Logger This is the object your logic code holds to interact with the logging infrastructure. The object is normally an ILogger (for .NET Standard) or an MvxLog (for MvvmCross).
    It normally has the following methods: Trace/Verbose, Debug, Info, Warning, Error, Fatal. Those methods are actually shorthand for a common Log method with a corresponding Log Level parameter (more on Log Level in the next subsection).
    Application logic never creates a logger by itself. Instead, it gets a logger created by a Log Provider, usually via dependency injection (see the sketch after this list).
  • Log Sink A sink indicates the actual destination of the logs (e.g. debug output, console, file, etc.). It is usually configured in the Log Provider instead of embedded inside the logger logic. This separation of Sinks from the Logger is extremely powerful, allowing you to change your log destination easily by swapping (or adding) Sinks without ever touching your Logger. You can even set up different filters and limits for each sink (see the example below).
  • Log Provider This component connects everything together based on the requested configuration and creates an appropriate logger object. It is usually configured at application startup.
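As a rough illustration, a view model might receive its logger through constructor injection and call the level-specific shorthand methods. This is a minimal sketch using Microsoft.Extensions.Logging; the view model and the message are made up for the example:

    using Microsoft.Extensions.Logging;

    public class CheckoutViewModel
    {
        private readonly ILogger<CheckoutViewModel> _log;

        // The logger is created by the Log Provider and supplied via dependency injection.
        public CheckoutViewModel(ILogger<CheckoutViewModel> log)
        {
            _log = log;
        }

        public void Submit(string orderId)
        {
            // LogDebug is shorthand for Log(LogLevel.Debug, ...).
            _log.LogDebug("Submitting order {OrderId}", orderId);
        }
    }

MvvmCross's MvxLog exposes the same set of level methods (Trace, Debug, Info, Warn, Error, Fatal).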

Log Level

Log Level is a way for developers to indicate the criticality of each log entry, allowing an appropriate minimum-level filter for each sink. Below are my own conventions for using Log Level.

  • Trace/Verbose This level is usually used to trace each step of your logic implementation. I rarely log at this level, and usually reserve this for logs from the underlying platform.
  • Debug This level can be used as a replacement for any Debug.WriteLine() or Console.WriteLine() in your code. Keep in mind that this level is usually used for logs that should be excluded from production/release build.
  • Information This level is similar to the Debug level, except that these logs should still show up in production/release builds. These are the logs that should help you troubleshoot production issues.
  • Warning I usually use this level to indicate a recoverable problem, for example a networking issue that will be retried later.
  • Error This is usually used to indicate a more critical problem, especially one that needs user intervention, e.g. database corruption. You can think of this level as something that should automatically send an email to the support team.
  • Fatal This should be only used to indicate a crash.

In general, lower log levels give you more in-depth details of the application logic; however, they can also generate a large number of log entries. So take care with your filters so as not to exhaust your sinks.

Structured Logs

If you are logging formatted strings like the one below, you might not be fully utilizing the capability of your logging framework:
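The two snippets below are illustrative sketches (log is assumed to be a Microsoft.Extensions.Logging ILogger, and userId/ipAddress are made-up variables):

    // Pre-formatted: the logger only ever sees the final string.
    log.LogInformation($"User {userId} logged in from {ipAddress}");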

Instead, try to do this:
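    // Structured: the message template and the parameter values are passed separately.
    log.LogInformation("User {UserId} logged in from {IpAddress}", userId, ipAddress);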

Basically you don’t pre-format your log, and instead give the template and all the parameters to the logger directly.

The logger can create the same formatted string on its own, so you don't lose any features. In addition, it passes all the parameters to your sinks. If you set up a sink that supports structured logs, you can easily filter your logs later using those parameter values (e.g. userId) instead of having to run a regex over your formatted strings.

Most logging frameworks nowadays support structured logging, so there is really no reason not to use it.

Serilog

Serilog provides us with a Log Provider that works well with both ASP.NET Core for your server and MvvmCross for your Xamarin app. Serilog also has a large number of sinks available as NuGet packages.

Serilog also has some additional facilities that you should learn about and utilize:

  • Enricher: adds context information to each log entry
  • Filter: fine-tunes which log entries go to each sink

Below is an example configuring Serilog for MvvmCross:
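This is a minimal sketch along the lines of the MvvmCross documentation; the Setup base class, the sinks (Serilog.Sinks.Xamarin and Serilog.Sinks.File here) and the property values are assumptions you should adjust for your own platform:

    using MvvmCross.Forms.Platforms.Android.Core; // pick the Setup base class for your platform
    using MvvmCross.Logging;
    using Serilog;
    using Serilog.Events;

    public class Setup : MvxFormsAndroidSetup<Core.App, UI.App>
    {
        // Tell MvvmCross to wrap Serilog as its Log Provider.
        protected override MvxLogProviderType GetDefaultLogProviderType() => MvxLogProviderType.Serilog;

        protected override IMvxLogProvider CreateLogProvider()
        {
            Log.Logger = new LoggerConfiguration()
                .MinimumLevel.Debug()
                .Enrich.WithProperty("AppVersion", "1.2.3")               // Enricher: context on every entry
                .WriteTo.AndroidLog()                                     // Serilog.Sinks.Xamarin
                .WriteTo.File("log.txt",                                  // use a writable app-data path on device
                    rollingInterval: RollingInterval.Day,
                    restrictedToMinimumLevel: LogEventLevel.Information)  // per-sink minimum level
                .CreateLogger();

            return base.CreateLogProvider();
        }
    }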

More examples can be found in the reference links at the end.

Some Recommendations

  • Log Level and log Filter allow you to control how much log output is actually stored without touching your logging calls. Hence, you should not hesitate to log frequently in the app. Just remember to set an appropriate log level and use structured logs instead of formatted strings.
  • If you are building a Xamarin/mobile app without Internet access to logging services such as Azure Event Hubs or Application Insights, you can write your logs to a file (which should be rolling, to prevent filling up storage) and provide a button in the app to automatically zip up the logs, open a new email composer and attach the zip file. You might also automatically fill in appropriate information for the user, such as the support email address, subject and app version. This should also happen after the app is restarted from a crash.
  • If you are writing a mobile app, do NOT write logs into your application database, even for error logs. On a bad day, a bad bug in the clients (or even a surge of users) may generate a huge amount of logs and affect your server. If you have ample disk space, you can consider using the Rolling File sink.
    An even better choice is to use an external service on a separate server/container for your log sink, especially if you want to analyze your low-level logs. One very good service that can be installed on-premises is https://www.splunk.com.
  • Filter level switching is another useful feature of Serilog: https://nblumhardt.com/2017/10/logging-filter-switch/. It allows turning on more detailed logging as and when needed, without restarting the application (see the sketch after this list).
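To illustrate the switching idea, here is a sketch using Serilog's LoggingLevelSwitch (the linked post covers filter switches, which work in the same spirit); the sink and level values are just examples:

    using Serilog;
    using Serilog.Core;
    using Serilog.Events;

    // Keep a reference to the switch; it can live for the lifetime of the app.
    var levelSwitch = new LoggingLevelSwitch(LogEventLevel.Information);

    Log.Logger = new LoggerConfiguration()
        .MinimumLevel.ControlledBy(levelSwitch)
        .WriteTo.Console()
        .CreateLogger();

    // Later, e.g. from a hidden debug setting, turn on verbose logging without restarting.
    levelSwitch.MinimumLevel = LogEventLevel.Verbose;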

Links

React & ASP.NET Core

Update 08 Sep 2019: since Microsoft has officially announced the deprecation of the aspnet-webpack package in .NET Core 3.0, you might not want to use the code in this blog post directly in production. However, you might still be able to learn a thing or two from it. I will write a new blog post on restoring all the following features on the new React template in Visual Studio.

React is getting very popular for building rich web interfaces, and it is one of the supported project templates in ASP.NET Core. In .NET Core, you can quickly create a new React project by running the following command (or using the new project dialog):
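    dotnet new react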

The template immediately gives you a working project with both React and ASP.NET Core. However, I don't feel very happy with the latest template in .NET Core 2.2 and 3.0, so I created a new one based on the template from an older version of .NET Core for my own projects.

Read More

What I’ve learned moving from Azure WebJobs to Azure Function Apps

Updated: some of the content in this post is no longer applicable once you upgrade Microsoft.NET.Sdk.Functions to version 1.0.26, so I have added some notes and sections about that.

If you have been using Azure for some time, you might know about Azure WebJobs, a great way to run your code in the background on triggers and integrate with other services via bindings. Recently, Azure introduced a similar offering: Azure Function Apps. A Function App is based on a very similar idea, where you can run your code in any supported language on certain triggers, such as a queue message, an HTTP request or a timer.

We have been using an Azure WebJob for a couple of years to process our email/push notifications and to clean up our database. The WebJob served that purpose perfectly until our app's growth revealed a big limitation: the WebJob consumes the resources of the web app itself. In theory, a WebJob should be a separate background process and should not interfere with the web app. However, since it lives inside the App Service, it shares the same CPU/memory constraints of the App Service Plan. We started getting downtime alerts from our APIs because the WebJob was running too heavily, defeating the purpose of splitting some processing out into a separate process in the first place.

We looked around for solutions. First of all, we tried scaling out the App Service Plan based on CPU and memory consumption. However, it was hard to set a proper rule to scale up and down, and the cost multiplied quite quickly. We then started looking at moving to an Azure Function App on the consumption plan. On paper it looks great because we can completely separate our background processing from our API. We also do not have to waste computing resources on the WebJobs dashboard, as telemetry is now sent to Application Insights instead of Azure Storage (I heard the WebJobs dashboard consumed the App Service Plan's CPU to perform certain indexing).

Unfortunately, the migration was not as smooth as we thought. We soon ran into many problems with our C# implementation:

  • If you are using a Timer trigger in a Function on the consumption plan, remember that you have to use the CRON schedule format (0 0 */8 * * *), not the Timer format (08:00:00). (See the sketch after this list.)
  • (This no longer applies for Microsoft.NET.Sdk.Functions 1.0.26) Using QueueTrigger, you'll get an amazingly cryptic error message on the Azure Dashboard: The binding type(s) ‘queueTrigger’ are not registered. Please ensure the type is correct and the binding extension is installed. Visual Studio generated a functions.json with the queueTrigger binding, which can be seen on the Azure portal. We also added a queues section to the host.json file, and we installed Microsoft.Azure.WebJobs.Extensions.Storage to enable QueueTrigger in code. So what did we miss? It turns out you also have to add the
    Microsoft.Azure.WebJobs.Script.ExtensionsMetadataGenerator NuGet package, so that Visual Studio will generate extensions.json.
    This is mentioned nowhere in the docs, only on a small page on GitHub, which is very irritating.
  • If you are on Microsoft.NET.Sdk.Functions 1.0.26 and still use the above workaround, extensions.json might not be generated in the correct place, and you will suddenly get the same issue with unregistered binding types again when testing locally. You can try removing Microsoft.Azure.WebJobs.Script.ExtensionsMetadataGenerator and letting the SDK handle the JSON generation.
  • ILogger has been supported by the new WebJobs SDK since the migration to .NET Core, and also by the Functions SDK v2, which is great since our Functions can finally share code with ASP.NET Core. Unfortunately, you might get the error “Cannot bind parameter ‘log’ to type ILogger. Make sure the parameter Type is supported by the binding.” It turns out you need to use the exact version of ILogger referenced by the SDK (e.g. SDK 1.0.19 uses 2.1.0) (stackoverflow), which would be a big problem if one of your dependencies updates to a higher version.
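For reference, here is a minimal sketch of the two trigger shapes discussed above; the function and queue names are made up, and it assumes Microsoft.NET.Sdk.Functions plus Microsoft.Azure.WebJobs.Extensions.Storage for the queue trigger:

    using System;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Extensions.Logging;

    public static class BackgroundFunctions
    {
        // On the consumption plan the schedule must be a CRON expression, not a Timer value.
        [FunctionName("NightlyCleanup")]
        public static void NightlyCleanup(
            [TimerTrigger("0 0 */8 * * *")] TimerInfo timer,
            ILogger log)
        {
            log.LogInformation("Cleanup triggered at {Now}", DateTime.UtcNow);
        }

        // Queue trigger: needs Microsoft.Azure.WebJobs.Extensions.Storage
        // (and, before SDK 1.0.26, the ExtensionsMetadataGenerator package).
        [FunctionName("ProcessNotification")]
        public static void ProcessNotification(
            [QueueTrigger("notifications")] string message,
            ILogger log)
        {
            log.LogInformation("Processing queue message {Message}", message);
        }
    }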

There are also several limitations, probably due to Function Apps being a fairly young platform in Azure:

  • It seems that Functions are actually based on WebJobs behind the scenes, so the feature set of the Functions SDK in C# is only a subset of that of WebJobs:
    • (This no longer applies for Microsoft.NET.Sdk.Functions 1.0.26) WebJobs recently moved to .NET Standard and got the extremely convenient dependency injection. Meanwhile, there is no DI story in Functions. This might be OK for a small function, but ours contain a lot of logic and we would like to have DI so we can write proper tests.
    • If you are using Microsoft.NET.Sdk.Functions 1.0.26, you can check out this blog post on the new DI support https://platform.deloitte.com.au/articles/performing-constructor-injections-on-azure-functions-v2.
    • A small extra note: if you were previously using SetBasePath(context.FunctionAppDirectory) to read configuration from local.settings.json, you can now use SetBasePath(Directory.GetCurrentDirectory()) in your startup class (see the sketch after this list).
  • The current dashboard experience in the Azure portal lacks the ability to replay an execution. This is not really critical for applications that have been in production for a while, as you should already have some retry mechanisms. However, it is quite convenient for development and quick testing. Hopefully this will come in the future.
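Here is a rough sketch of what such a startup class might look like with the newer DI support (assuming the Microsoft.Azure.Functions.Extensions package; the namespace and the registered service are just examples):

    using System.IO;
    using Microsoft.Azure.Functions.Extensions.DependencyInjection;
    using Microsoft.Extensions.Configuration;
    using Microsoft.Extensions.DependencyInjection;

    [assembly: FunctionsStartup(typeof(MyFunctions.Startup))]

    namespace MyFunctions
    {
        public class Startup : FunctionsStartup
        {
            public override void Configure(IFunctionsHostBuilder builder)
            {
                // Read local.settings.json relative to the current directory
                // instead of context.FunctionAppDirectory.
                var configuration = new ConfigurationBuilder()
                    .SetBasePath(Directory.GetCurrentDirectory())
                    .AddJsonFile("local.settings.json", optional: true, reloadOnChange: true)
                    .AddEnvironmentVariables()
                    .Build();

                builder.Services.AddSingleton<IConfiguration>(configuration);
                // Register your own services here so function classes can take them as
                // constructor parameters, e.g.:
                // builder.Services.AddSingleton<IEmailSender, EmailSender>();
            }
        }
    }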

If you have any better solutions for those problems, please feel free to let me know in the comments.

Azure Notification Hub SDK on .NET Standard and WNS Raw notification

Update (3 Oct 2018): Thanks to the beloved-by-community Jon Galloway, I can finally connect with the engineers working on the library. Hopefully, the team can push out a fix soon.

Update (13 Oct 2018): The Microsoft team has quickly released version 2.0.1 with a proper fix for the issue. Bravo to the team!

I have been using Azure Notification Hub for a very long time to send push notifications to my UWP apps on Windows 10. Since the introduction of .NET Core 1.0, I have gradually been moving my projects from .NET Framework to .NET Core. However, Microsoft.Azure.NotificationHubs was always the blocker due to its lack of support for .NET Standard.

Tl;dr version:

  • 2.0.0-preview1 and preview2 have an issue sending WNS Raw messages.
  • After 8 months, 2.0.0 was released with the same issue.
  • There was no place to report the issue.
  • There was no source code.
  • The fix is only 1 LOC, but I spent much more time making the decompiled code compile again.
  • And I still have no way to report the issue or contribute the fix.
Read More