The ORM profiling tools Entity Framework Profiler and NHibernate Profiler have been popular among developers for over a decade. To this day, our development team is still actively working on them, keeping them up to date and adding new features to make them even more useful.
The latest major update, version 6.0, shipped in December 2020. But before getting into that, a quick recap of what the profilers do.
The Profilers in a Nutshell
When you use tools to automate object-relational mapping between your application and database, inefficiencies can creep in that bring things to a grinding halt at scale: select N+1, complex joins, unbounded result sets, and so on. The profilers facilitate SQL query optimization, an essential step when working with an ORM, helping you find and fix these bottlenecks before you discover them in production. They point you to the problematic code directly in the stack trace and even suggest solutions.
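To make the select N+1 problem concrete, here is a minimal, ORM-agnostic sketch in Python using an in-memory SQLite database (the schema and the `run` query counter are purely illustrative, not profiler code). Lazy loading issues one query for the parents and then one more per parent, where a single join would do:

```python
import sqlite3

# In-memory database with a classic parent/child schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE blogs (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, blog_id INTEGER, title TEXT);
    INSERT INTO blogs VALUES (1, 'First'), (2, 'Second'), (3, 'Third');
    INSERT INTO posts VALUES (1, 1, 'a'), (2, 1, 'b'), (3, 2, 'c'), (4, 3, 'd');
""")

query_count = 0
def run(sql, args=()):
    """Execute a query and count the round trip, like a profiler would."""
    global query_count
    query_count += 1
    return conn.execute(sql, args).fetchall()

# Select N+1: one query for the parents, then one more per parent --
# exactly the lazy-loading pattern an ORM can generate behind your back.
blogs = run("SELECT id, title FROM blogs")
for blog_id, _ in blogs:
    run("SELECT id, title FROM posts WHERE blog_id = ?", (blog_id,))
print(query_count)  # 4 queries for 3 blogs (1 + N)

# The usual fix: fetch everything in a single round trip.
query_count = 0
rows = run("""SELECT b.id, b.title, p.id, p.title
              FROM blogs b LEFT JOIN posts p ON p.blog_id = b.id""")
print(query_count)  # 1 query
```

With 3 parents the difference is trivial; with 3,000 it is the difference between one round trip and 3,001, which is exactly the kind of pattern the profilers flag.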
What Does Version 6.0 Bring?
This latest version brings quite a lot to the table. We applied lessons learned over the years to optimize the performance of the profilers, so they can process events faster and more efficiently. We also fully integrated them with the now-ubiquitous async/await model. For Entity Framework users, we now support all versions of EF, running on everything from .NET Framework 4.7 to .NET 5 and .NET Core.
The crown jewel of this release, however, is the new Azure integration feature. We initially built it so you could profile serverless code such as Azure Functions, but it turned out to be useful far beyond that scenario.
The Profiler Azure Integration lets you set up an Azure storage container that accepts the profiled output from your application. Pretty simple, right? The profiler application monitors the container and shows you the events as they are written, so even if you don’t have direct access to your code (very common in serverless / Azure scenarios), you can still profile it. This also helps a lot when running inside containers, Kubernetes, etc. The usual way the profiler and the application communicate is over a TCP channel, but given the myriad network topologies now in common use, that can get complex. By routing everything through Azure storage, we sidestep the networking problem entirely and give you a seamless way to profile your code.
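The core idea can be sketched in a few lines of Python. This is a hypothetical illustration of the pattern only, not the product's API: a local directory stands in for the Azure storage container, and the file names and event shape are invented for the example. The application writes event files; the profiler side polls and picks up anything new:

```python
import json
import tempfile
from pathlib import Path

def poll_for_events(container: Path, seen: set) -> list:
    """One polling pass: read any event files not yet processed,
    in the order they were written (akin to listing blobs by name)."""
    new_events = []
    for blob in sorted(container.glob("events-*.json")):
        if blob.name not in seen:
            seen.add(blob.name)
            new_events.extend(json.loads(blob.read_text()))
    return new_events

# The profiled application appends its output as it runs; a local
# directory stands in for the Azure storage container here.
container = Path(tempfile.mkdtemp())
(container / "events-0001.json").write_text(json.dumps(
    [{"sql": "SELECT * FROM users", "duration_ms": 12}]))

# On the other side, the profiler polls the container -- no direct TCP
# channel between the two processes is ever needed.
seen: set = set()
events = poll_for_events(container, seen)
print(len(events))  # 1
```

Because the only shared dependency is the storage container, it makes no difference whether the application runs in a function host, a container, or behind several layers of NAT.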
On-demand profiling of your code is just one of the new features. You can now do continuous profiling of your system. Because we throw the events into an Azure blob container, and given that the data is compressed and cheap to store, you can now afford to simply record all the queries in your application. You can then return to it a few days or weeks later to look into interesting behavior.
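To see why recording everything is affordable, consider how repetitive a query stream is. A quick sketch (the sample log below is invented for illustration, not real profiler output) shows how well such data compresses with ordinary gzip:

```python
import gzip

# A recorded query stream is highly repetitive: the same statement
# shapes recur over and over. Simulate that with a repeated line.
log = "SELECT id, name, email FROM users WHERE tenant_id = 42\n" * 5000
raw = log.encode()
packed = gzip.compress(raw)
ratio = len(raw) / len(packed)
print(f"{len(raw)} bytes -> {len(packed)} bytes ({ratio:.0f}x smaller)")
```

Real workloads vary more than a single repeated line, but the point stands: compressed query logs are cheap enough to keep around for weeks.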
That is also a large part of why we worked on improving our performance; we expect to be dealing with a lot of data.
You can also specify a timeframe for this kind of profiling, as well as whether we should remove the data after you have looked at it or retain it. The new profiling feature gives you a way to answer: “What queries ran in production on Monday between 9:17 and 10:22 AM?” This comes with all of the usual benefits of the profiler, meaning that:
- You can correlate each query to the exact line of code that triggered it.
- The profiler analyzes the database interaction and raises alerts on bad behaviors.
The last part is done not on a single query but with access to the full state of the system. That lets it find hot spots in your program: patterns of data access that lead to much higher latency and cost a lot more money.
You can try the profilers for free by clicking the links here: