06 May Using dotnet-dump to Analyze a Memory Leak in a Docker Container
I had a feeling that we were not doing things right in our code, so I started to search for “pitfalls”. If you are like me – having an app running inside Kubernetes – you might also have questions such as “is my app behaving well?”. It’s impossible to cover everything related to a well-performing app, but this post will at least give you some guidance. We also tried various different ways to reproduce the problem in our development environment.
- Make sure scoped services are scoped and singletons are singletons.
- So I was wondering that maybe Kubernetes is not passing in any memory limit, and the .NET process thinks that the machine has a lot of available memory.
- Remember that Kubernetes is a very resource-limited environment: many containers run on the same nodes and share physical memory and CPU.
- There’s a lot more to learn about Serilog in ASP.NET Core 3.
- The runtime truncates the CPU limit, and the truncated value becomes your Environment.ProcessorCount.
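The last two points can be checked from inside the container itself. The sketch below is a minimal console snippet (not from the original article) that prints what the runtime believes about its CPU budget; inside a container with a fractional CPU limit, the reported processor count is the truncated limit, and the ThreadPool's starting size is derived from it.

```csharp
using System;
using System.Threading;

class CpuInfo
{
    static void Main()
    {
        // Inside a container started with e.g. --cpus=1.5, the runtime
        // truncates the limit, so this would typically report 1.
        Console.WriteLine($"ProcessorCount: {Environment.ProcessorCount}");

        // The default minimum worker-thread count is derived from
        // ProcessorCount, so a low CPU limit also shrinks the ThreadPool.
        ThreadPool.GetMinThreads(out int workers, out int io);
        Console.WriteLine($"Min worker threads: {workers}, min IO threads: {io}");
    }
}
```

Running this both locally and inside the limited container makes the difference in defaults visible at a glance.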
There’s one more thing we had to look at in my case, since we made a lot of external API calls and used the network heavily. I did not have metrics for everything here, but I tried one thing after another – checking that the application behaved well or better – and continued. Unfortunately I don’t have updated graphs in-between each step I took.
Analyzing the Dump File
After you click Save and Build, the image build will start on the machine you provided. After the command above has executed successfully and you have refreshed your Docker Cloud tab, you should see your newly created node. After the deployment succeeds, we will need to open some ports on that VM so the Docker Cloud self-discovery service can work.
This ends up being a bad idea in Docker containers because anyone who can run an inspect can see the secrets. Docker Swarm introduced secrets in version 1.13, which enables you to share secrets across the cluster securely, and only with the containers that need access to them. The secrets are encrypted in transit and at rest, which makes them a great way to distribute connection strings, passwords, certs or any other sensitive information. You can also set environment variables and link your service to other existing services in Docker Cloud.
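As a quick illustration of the workflow just described, here is a hedged sketch using the real `docker secret` CLI; the secret name, connection string, and image name are made up for the example, and Swarm mode must already be enabled:

```shell
# Create a secret from stdin (requires an initialized Swarm)
echo "Server=db;User Id=sa;Password=S3cret!" | docker secret create db_conn -

# Grant a service access to it; the value appears inside the container
# as the in-memory file /run/secrets/db_conn
docker service create --name api --secret db_conn myregistry/myapi:latest
```

Only services created with `--secret` can see the file, which is the access-control property that makes this preferable to environment variables.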
All we’re now dependent upon is having an environment variable, SEQ_URL in this case, set correctly in production. This doesn’t scale well to large amounts of configuration (that’s why Microsoft.Extensions.Configuration, and appsettings.json, exist), but for the handful of settings needed to get logging bootstrapped, it’s perfect. Serilog is an alternative logging implementation that plugs into ASP.NET Core. It supports the same structured logging APIs, and receives log events from the ASP.NET Core framework class libraries, but adds a stack of features that make it a more appealing choice for some kinds of apps and environments. The act of publishing your application is relatively simple due to the magic of PowerShell. You will be expected to have a certificate installed locally, and this credential needs to be specified in the Cloud.xml settings file in the Service Fabric project.
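A minimal sketch of that bootstrap, assuming the Serilog and Serilog.Sinks.Seq packages are referenced (the helper name and the fallback behavior are my own choices, not from the original):

```csharp
using System;
using Serilog;

public static class LogBootstrap
{
    public static ILogger Create()
    {
        // SEQ_URL is expected to be set in the container's environment;
        // fall back to console-only logging when it is absent.
        var seqUrl = Environment.GetEnvironmentVariable("SEQ_URL");

        var config = new LoggerConfiguration()
            .Enrich.FromLogContext()
            .WriteTo.Console();

        if (!string.IsNullOrEmpty(seqUrl))
            config = config.WriteTo.Seq(seqUrl); // Serilog.Sinks.Seq package

        return config.CreateLogger();
    }
}
```

Keeping the fallback means a missing variable degrades to console logs instead of crashing the app during startup.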
You need to play detective to find out what might be wrong. Notice that the project has two databases that will be deployed along with their respective Entity Framework migrations. This is a nice feature because you can deploy multiple databases at the same time, such as an identity database and a product database. Once you have all the tools installed and the source code for your project, open the solution for your API project and run a build to make sure everything is working as expected.
How Should Architects Collaborate With Development Teams?
Be careful how you configure your dependency injection container. Make sure scoped services are scoped and singletons are singletons. Otherwise unnecessary objects might be created as your traffic increases.
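To make the advice concrete, here is a small sketch of the two lifetimes; the service names are hypothetical and exist only for the example:

```csharp
using Microsoft.Extensions.DependencyInjection;

// Hypothetical services used to illustrate the lifetimes.
public interface ICache { }
public class MemoryCacheService : ICache { }

public interface IOrderService { }
public class OrderService : IOrderService { }

public static class ServiceRegistration
{
    public static void Configure(IServiceCollection services)
    {
        // One instance for the whole application lifetime: only safe
        // for thread-safe services, but avoids repeated allocations.
        services.AddSingleton<ICache, MemoryCacheService>();

        // One instance per request: registering this as a singleton by
        // mistake can leak request state; registering a heavy service
        // as transient creates garbage on every resolve under load.
        services.AddScoped<IOrderService, OrderService>();
    }
}
```

The failure mode described above usually comes from picking the wrong call here, so it is worth auditing these registrations when memory grows with traffic.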
I do believe there are some practices that are generally more accepted, but it’s really hard to know the “best” configuration values for the rest. Some APIs are very dependent on other APIs and some are very CPU-intensive. They all behave slightly differently, and I think this is why it’s hard to find a single “best” configuration for the ThreadPool, for example. Remember you can always scale horizontally by increasing the number of replicas to handle more traffic. Now I’ve gone through some detail regarding memory and the ThreadPool.
Every container is built upon an image that is composed of the application itself and its dependencies. If you already have a repo with an application you want to use, you can use that; however, I will create a new repo and clone it on my computer. The resulting logs are much quieter, and because important properties like the request path, response status code, and timing information are on the same event, it’s much easier to do log analysis. We’ll be a bit tactical about where we add Serilog into the pipeline.
The suggestion was to switch from Server GC to Workstation GC, which optimizes for lower memory usage. This led me to remove all in-memory caches and to read about how garbage collection works. Workstation GC made the application much more conservative in terms of memory usage, and decreased the average from ~600 MB to ~ MB.
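The switch itself is a one-line project change. This is a standard MSBuild fragment, not the article's exact configuration:

```xml
<!-- In the .csproj: switch from Server GC (the default for ASP.NET Core)
     to Workstation GC, trading some throughput for a smaller heap. -->
<PropertyGroup>
  <ServerGarbageCollection>false</ServerGarbageCollection>
  <ConcurrentGarbageCollection>true</ConcurrentGarbageCollection>
</PropertyGroup>
```

The same settings can also be applied via `System.GC.Server` in runtimeconfig.json, which avoids a rebuild when experimenting.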
Let’s face it – you wouldn’t choose Service Fabric as a container orchestrator unless you had to. It feels like you are running applications in an environment that was not explicitly designed to host containers. Sure, it works, but getting there takes more effort than it ought to. Choose a server name and fill out the rest of the inputs; make sure you use strong credentials for your server configuration. Your configuration for SQL Server and the databases should now be ready to be created.
We noticed that the process of a new API was consuming more memory compared to other processes. At first, we didn’t think much of it and we assumed it was normal because this API receives a lot of requests. At the end of the day, the API almost tripled its memory consumption and at this time we started thinking that we had a memory leak.
When you’re working in a .NET environment this usually means looking at the metrics in Application Insights. You can even create alerts to be notified when there are abnormalities. This checks to see if the directory exists and, if it does, checks for a file with a given key. Finally, if that key exists as a file, it reads the value. In the Get method, the Include method explicitly tells Entity Framework Core to load the user’s posts along with their other details. Entity Framework Core is smart enough to understand that the UserId field on the Post model represents a foreign key relationship between Users and Posts.
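The directory-then-file lookup described above can be sketched as follows; the class name and default path are my own (though `/run/secrets` is where Docker mounts Swarm secrets by default):

```csharp
using System.IO;

public static class FileConfigReader
{
    // Steps as described: check the directory exists, check for a file
    // named after the key, and read its contents as the value.
    public static string Read(string key, string directory = "/run/secrets")
    {
        if (!Directory.Exists(directory))
            return null;

        var path = Path.Combine(directory, key);
        return File.Exists(path) ? File.ReadAllText(path).Trim() : null;
    }
}
```

Returning null for a missing key lets the caller decide whether to fall back to an environment variable or fail fast at startup.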
We automatically thought that our APIs had memory leaks, and spent quite a lot of time investigating the issue, checking the allocations with Visual Studio and creating memory dumps, but couldn’t find anything. After the users and posts are asynchronously retrieved from the database, the array is projected into a response model that includes only the fields you need in the response. It is not production-ready, as it does not have any testing workflow in place and the application is rather simple. So far we have created a very simple ASP.NET Core application and run it locally inside Docker. We haven’t used the GitHub repo, Docker Hub, Docker Cloud or Azure just yet. This command started our container, so Docker must have executed dotnet run inside the container and the application should have started. In the folder that was just created from cloning the repository, execute dotnet new to create a new .NET Core application.
These are all decisions that the runtime will make no matter what; to improve them we must first understand them. We have an ASP.NET Core 3.1 web application in k8s running with 3 pods in total. Normally each pod has had a memory limit of 300 MB, which had been working well for two months, until all of a sudden we saw spikes in CPU usage and response times. The memory usage didn’t increase indefinitely any more, but it capped at around 600 MB, and this number seemed to be pretty consistent between different container instances and restarts.
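For reference, a limit like the one above is declared in the pod spec; this is a generic fragment with illustrative values, not our actual manifest:

```yaml
# Container spec fragment: the memory limit Kubernetes passes down to
# the container runtime, which the .NET GC in turn uses to size its heap.
resources:
  requests:
    memory: "200Mi"
    cpu: "250m"
  limits:
    memory: "300Mi"
    cpu: "500m"
```

If the `limits` block is omitted, the process sees the node's full memory, which explains GC behavior that looks far too greedy for the pod.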
Install Docker for Windows by running the installation file downloaded from the stable channel and following all the default prompts in the Docker installation wizard. Once the settings are done, continue with the next steps of the installation wizard and finish the install. The call stack of this thread matches the 300-plus other threads: through DiagScenarioController.DeadlockFunc it calls Monitor.Enter, trying to lock an object. While this leak was obvious, I still felt a little bit overwhelmed with all the data.
Track Active File In Goland
Once the service is running you can use rolling upgrades to push new versions without downtime, though be aware that upgrading requires you to change the version number on your container image tag. If you tag your images with “latest” then Service Fabric does not download them when you run an update. Kubernetes runs the applications in Docker images, and with Docker the container receives the memory limit through the --memory flag of the docker run command. So I was wondering that maybe Kubernetes is not passing in any memory limit, and the .NET process thinks that the machine has a lot of available memory. The fastest way to look into a memory leak is to create a dump file of the process in production. There’s no need to try to reproduce the problem because you can access all the data you need.
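Capturing and opening that dump is a short session with the dotnet-dump CLI. The commands are real, but the PID and dump filename below are illustrative (PID 1 is typical for the app process inside a container):

```shell
# Install the global tool, then capture a dump of the running process
dotnet tool install --global dotnet-dump
dotnet-dump collect -p 1

# Open the dump and inspect the managed heap
dotnet-dump analyze core_20200506_120000
> dumpheap -stat        # object counts and total sizes grouped by type
> gcroot <address>      # find what is keeping a suspicious object alive
```

Sorting the `dumpheap -stat` output by total size usually points straight at the type whose instances are accumulating.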
Running The Container Application
You can provision a Service Fabric cluster in Azure, but be aware that you will be charged by the hour for all the VMs, storage and network resources that you use. The cheapest test cluster will still require three VMs to be running in a virtual machine scale set. De-allocating the set of VMs stops the clock ticking on VM billing, but it effectively resets the cluster, forcing you to redeploy everything when it comes back up. This has the effect of embedding configuration detail about the orchestrator into your service code.
The second day this happened again, and it was worse: the API with the memory leak was consuming almost 4 GB, which is up to 5 times more resources compared to other APIs. I will leave it to the official documentation to describe exactly how all this works, but when you give a service access to a secret you essentially give it access to an in-memory file. This means your application needs to know how to read the secret from that file to be able to use it.
ASP.NET Core 3.1 Response Time and Memory Spikes in Kubernetes
Service Fabric projects typically use services hosted by the Service Fabric runtime and are dependent on APIs defined in the Service Fabric SDK. There are two GC modes, “Workstation mode” and “Server mode”. Server mode is the default for ASP.NET and assumes your process is the most demanding one running on the machine (which clearly isn’t a good assumption in k8s, where there can be hundreds of containers running).
Running docker images should show you the newly created image containing your application and its dependencies. The first thing you’ll notice about the log output we’ve seen so far is that it’s very verbose. The ASP.NET Core framework writes events that describe its internal processes when routing and handling a request. We don’t usually want this level of detail for application diagnostics.
Deploying ASP.NET Core and EF Core to Docker on Azure Using Visual Studio 2017
In this tutorial you use the dotnet diagnostic tools – dotnet-trace, dotnet-counters, and dotnet-dump – to find and troubleshoot a misbehaving process. This article will walk through the basics of reading that file from an ASP.NET Core application. The basic steps would be the same for ASP.NET 4.6 or any other language. I’ve used the InMemory provider to rapidly prototype APIs and test ideas, and my favorite part is the ability to switch one line of code to connect to a live database like SQL Server.
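Before reaching for a full dump, the lighter-weight tools named above can narrow the search. These are real commands; the PID and duration are placeholders:

```shell
# Watch GC, ThreadPool, and allocation counters live for a process
dotnet-counters monitor -p 1 System.Runtime

# Record a 30-second trace of the same process for offline analysis
dotnet-trace collect -p 1 --duration 00:00:30
```

If `dotnet-counters` shows the heap growing across GC cycles, a dump with `dotnet-dump` is the natural next step; if it shows ThreadPool starvation instead, the problem is scheduling rather than a leak.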
We will create a service based on the image we just created. Because we set up the image based on the GitHub repository and checked the AUTOREDEPLOY option, every time we push to the master branch of the repository the entire system will update itself. Let’s slightly modify the application and push the modifications: I just changed the message the application responds with and pushed to the master branch. This should trigger the auto build and auto redeploy of the container, and without us doing anything the modifications should be live.
Ideally you should avoid adding this kind of run-time implementation detail as it undermines the portability of the service. Generate some deadlocked threads and memory leaks through the interface provided in the example. Over the same period as the memory graph above, we can see that memory usage is now much better and well-suited to fit into k8s. The green line is the average ASP.NET response time. This web application handles roughly 1k requests per minute, which is very low compared to what ASP.NET Core is benchmarked against. On the other hand, it’s a web application that is very dependent on other APIs, as it doesn’t have its own storage, so one incoming request results in 1-5 outgoing dependency calls to other APIs.
We didn’t want a dozen log events per request, but chances are, we’ll need to know what requests the app is handling in order to do most diagnostic analysis. Logs are an important interface to your application; they’re the “developer interface”, alongside the user interface, “data interface”, or programming interface. Just as we strive to create beautiful and functional pages, SQL schemas, or APIs, we should take ownership of our application’s log output and ensure it’s both usable and efficient.