Getting Started

Using the Cosmos DB Profiler is easy. First, we need to make the application that we profile aware of the profiler. Then, just start the profiler.

Preparing an application to be profiled

Install the CosmosDBProfiler NuGet package (or CosmosDBProfiler.Appender to install only the profiler's appender):

Install-Package CosmosDBProfiler -IncludePrerelease

In the application startup (Application_Start in web applications, Program.Main in console applications, or the App constructor for WPF applications), make the following call:

HibernatingRhinos.Profiler.Appender.CosmosDB.CosmosDBProfiler.Initialize(cosmosClient); // cosmosClient is CosmosClient or DocumentClient instance

ASP.NET Core app

In an ASP.NET Core application, please use the UseCosmosDBProfiler() extension method available for .NET Core projects to initialize profiling (you'll find it in the HibernatingRhinos.Profiler.Appender.CosmosDB namespace):

using HibernatingRhinos.Profiler.Appender.CosmosDB;

public void ConfigureServices(IServiceCollection services)
{
    services.UseCosmosDBProfiler();
}

Under the covers, in addition to initializing the profiling by calling CosmosDBProfiler.Initialize(cosmosClient), it also registers a default implementation of the IHttpContextAccessor helper service by calling services.AddHttpContextAccessor(). This provides access to the current HttpContext in ASP.NET Core applications, which allows the profiler to correlate outgoing requests to CosmosDB with the URL of the controller method that sent them.

Introduction Video

Learning the CosmosDB Profiler Cloud Cost Optimization Tool

In this video, RavenDB CEO Oren Eini unveils our CosmosDB Profiler tool, which dissects how your application communicates with your database and isolates all the ways you can save time and money by merely refactoring a few data queries.

You will learn:

  • What is the CosmosDB Profiler Tool?
  • How to find out the number of calls to the database your application makes
  • Where to find recommendations to optimize your code for easier data retrieval
  • How much money you can save


Why do we need a Profiling tool?

How the scope and composition of data have changed, along with the environment they operate in, requiring a new way to analyze how the database is working with your application.

The Pricing Model of CosmosDB

The cost and resource usage implications of a simple data request. How much cloud resources am I consuming by rendering a single page for a single user?

How do you handle usage patterns, like what happens at 9AM when 70% of your users want to see a page at the same time?

What is the Profiling Tool for?

There are no open source projects such as this one. What is the Profiler doing? Where inside an application is it looking?

Live use of the CosmosDB Cloud Cost Optimization Tool

Scanning the code base of a project to search for how it is interacting with its database. Demonstrating the system and its UI.

The one piece of the CosmosDB system that impacts all your queries

Leveraging things that are completely transparent on your server to get you top notch information on how you can enhance application internals to your advantage.

Uses for the Architect: Integrating with CI systems

Profiler shortcut Keys

Beyond the standard operating system shortcut keys, the profiler supports the following shortcut keys:

S     Focus on Requests tab header
T     Focus on Operations tab header
D     Focus on Details tab header
F     Focus on Statistics tab header
←/→   Move to the next/previous tab
↑/↓   Move to the next/previous request/operation

Programmatic Access to the Profiler

One of the DLLs that ships with the profiler is HibernatingRhinos.Profiler.Integration. This DLL gives you programmatic access to the profiler object model; on NuGet it is available as the CosmosDBProfiler.Integration package. For example, you may use it inside your unit tests to assert on the behavior of your code.

    var captureProfilerOutput = new CaptureProfilerOutput(@"/path/to/CosmosDBProf.exe");
    captureProfilerOutput.StartListening();
    // run code that uses Cosmos DB
    Report report = captureProfilerOutput.StopAndReturnReport();
    // Assert / inspect on the report instance

  1. Create a new CaptureProfilerOutput instance and pass it the path to the CosmosDBProf.exe executable.
  2. Call StartListening to start listening to your application's behavior.
  3. Execute your code.
  4. Call StopAndReturnReport to get the report and assert / inspect it.
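The steps above can be wrapped in a unit test. A minimal sketch, assuming xUnit, an illustrative executable path, and a hypothetical RunMyCosmosDbCode helper that exercises Cosmos DB:

```csharp
using HibernatingRhinos.Profiler.Integration;
using Xunit;

public class ProfilerReportTests
{
    [Fact]
    public void Profiled_code_produces_a_report()
    {
        // The path is a placeholder - adjust it for your environment
        var capture = new CaptureProfilerOutput(@"C:\tools\CosmosDBProf.exe");
        capture.StartListening();

        RunMyCosmosDbCode(); // hypothetical helper that uses Cosmos DB

        Report report = capture.StopAndReturnReport();
        Assert.NotNull(report);
    }

    private void RunMyCosmosDbCode()
    {
        // your Cosmos DB access code goes here
    }
}
```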

You can also use this DLL to get programmatic access to the XML report files that are generated during the continuous integration build. This is done using the following code:

var xml = new XmlSerializer(typeof(Report));
using (var reader = File.OpenText(reportFile))
{
    Report result = (Report)xml.Deserialize(reader);
}

Using the profiler with Continuous Integration

The profiler supports the following command line interface, which allows you to execute the profiler as part of your continuous integration builds:

/CmdLineMode[+|-]         (short form /C)
/File:<string>            (short form /F)
/ReportFormat:{Json|Html|Xml}  (short form /R)
/Port:<int>               (short form /P)
/InputFile:<string>       (short form /I)
/Shutdown[+|-]            (short form /S)

Starting up with no options will result in the UI showing up.

Here is how we can get a report from our continuous integration build:

CosmosDBProf.exe /CmdLineMode /File:report-output /ReportFormat:Json /ReportFormat:Html # starts listening to applications
xunit.console.exe Northwind.IntegrationTests.dll
CosmosDBProf.exe /Shutdown # stop listening to applications and output the report

This approach should make it simple to integrate into your CI process. The JSON and XML output gives you programmatic access to the report, while the HTML version is human readable. You can generate JSON, XML, and HTML reports by specifying the ReportFormat option multiple times.

One thing you should be aware of: the report file is written asynchronously, so the shutdown command may return before the file has been written. If you need to process the file as part of your build process, you need to wait until the first profiler instance has completed. Using PowerShell, this is done like this:

CosmosDBProf.exe /CmdLineMode /File:report-output /ReportFormat:Json /ReportFormat:Html
xunit.console.exe Northwind.IntegrationTests.dll
CosmosDBProf.exe /Shutdown
get-process CosmosDBProf | where {$_.WaitForExit() } # wait until the report export is completed


Please note that from a licensing perspective, the CI mode is the same as the normal GUI mode. On one hand, it means that you don't need to do anything if you have the profiler already and want to run the CI build on your machine. On the other, if you want to run it on a CI machine, you would need an additional license for that.


Aggregate related sessions under scopes

Cosmos DB Profiler can aggregate related sessions under the same scope. By default, in web applications, sessions opened in the same web page are aggregated under the same scope. We also provide an API that you can use to aggregate related sessions in your application using a different strategy.

You can use

ProfilerIntegration.StartScope();

in order to start a new scope, and dispose the returned object in order to end the scope. You can also give a name to the scope using:

ProfilerIntegration.StartScope("Task #" + taskId);

In addition, you can override ProfilerIntegration.GetCurrentScopeName in order to derive the current scope name from your application-specific details:

ProfilerIntegration.GetCurrentScopeName = () => { return "Scope ID"; };
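Putting it together, a minimal sketch of scoping a background task (the taskId and the work inside the scope are illustrative assumptions, and the ProfilerIntegration type is assumed to come from the appender assembly):

```csharp
using HibernatingRhinos.Profiler.Appender;

int taskId = 42; // hypothetical task identifier

// All sessions opened inside the using block are aggregated
// under the "Task #42" scope; disposing the object ends the scope.
using (ProfilerIntegration.StartScope("Task #" + taskId))
{
    // run code that uses Cosmos DB
}
```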

Azure Storage Channel

By default, the profiler logs are sent from an application via TCP. Sometimes, however, your application might not be able to reach the profiler over the network. Using the Azure Storage Channel feature, you can configure the appender to send the logs as blobs to an Azure storage container and analyze them in CosmosDB Profiler. This way you can profile your CosmosDB queries from Azure Functions.

How it works

The idea of the Azure Storage Channel is that profiler logs are uploaded as blobs to an Azure container. In Azure, you can configure a subscription so that on each BlobCreated event an appropriate message is created in the Azure storage queue.

The Cosmos DB Profiler listens to those queue messages, downloads the blobs, analyzes them, and displays your queries in real time. You can also load existing profiler logs later on.

Setup on Azure

In order to make the uploaded logs available to Cosmos DB Profiler, you need to execute the following steps:

  • create a storage account,
  • create a container,
  • create a storage queue,
  • create a subscription to enqueue messages about blobs creation.

You can define it using the Azure portal as shown here, or use an Azure CLI script (Bash, PowerShell).
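For reference, a minimal Azure CLI sketch of those steps. All resource names and the resource group are placeholder assumptions; adjust them for your subscription:

```shell
# Create a storage account (names are placeholders)
az storage account create --name profilerlogs --resource-group my-rg \
  --location westus --sku Standard_LRS

# Create the blob container that will receive the profiler logs
az storage container create --name profiler-logs --account-name profilerlogs

# Create the storage queue the profiler will listen to
az storage queue create --name profiler-queue --account-name profilerlogs

# Subscribe to BlobCreated events, enqueueing one message per created blob
storageid=$(az storage account show --name profilerlogs --resource-group my-rg \
  --query id --output tsv)
az eventgrid event-subscription create \
  --name profiler-blob-created \
  --source-resource-id "$storageid" \
  --endpoint-type storagequeue \
  --endpoint "$storageid/queueservices/default/queues/profiler-queue" \
  --included-event-types Microsoft.Storage.BlobCreated
```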

Appender configuration

When initializing the appender, you need to set the AzureStorageToLogTo property, providing the connection string and the name of the container the profiler log blobs will be sent to.

HibernatingRhinos.Profiler.Appender.CosmosDB.CosmosDBProfiler.Initialize(new CosmosDBAppenderConfiguration
{
    AzureStorageToLogTo = new AzureStorageConfiguration("azure-storage-connection-string", "container-name")
});

Connecting to Azure storage from Profiler

In order to be able to load the logs, you need to connect to Azure using Options > Azure Storage Channel.


Once you're connected to the Azure channel, the profiler will start showing data from the profiled application that is configured to send logs to the Azure container. You can choose whether the logs should be deleted from the container after processing or left there. If the container already contains profiling data, you can also load it into the profiler by providing a date range.

Ignore / Resume profiling

You can instruct the profiler to ignore some parts of your application, and not publish any events generated from those sections.

This can be achieved in one of the following ways:

using (ProfilerIntegration.IgnoreAll())
{
    // Ignore all events generated from here
}

ProfilerIntegration.IgnoreAll();
// Ignore all events generated from here
ProfilerIntegration.ResumeProfiling();

You'll want to use this feature when you have a massive number of operations that you don't need to profile. A typical scenario is building a huge data set as part of the test process, which can be time-consuming to profile.

What firewall permissions does the profiler need?

While using the profiler you may be prompted by your firewall to allow the profiler to continue working.

The profiler works by listening on a TCP socket for events from the profiler appender watching your Cosmos DB client instance. In order to do that, it needs to listen on a socket, an action that sometimes triggers an alert in firewalls.

Denying permission to listen on a socket will disable the profiler's ability to do live profiling, so it is strongly recommended that you allow it.

The profiler also checks for an updated version at each startup. In the case of a profiler error, the profiler will ask you whether it is allowed to submit the error details so the problem can be fixed.

In addition to that, the profiler will periodically report (anonymously) which features are being used in the application, which allows us to focus on commonly used parts of the product.


By Per Seat Do You Mean I Need One License For Home And One For Work?

If you bought the license yourself, you may use the profiler wherever you want, as long as only you are using it.

If your company bought the license, then you would require a separate license if you want to use it at home. Per seat means the number of people in an organization using it, not the number of machines it is used on.

Note that there is a corollary to this: a single machine used by two developers, both of them using Cosmos DB Profiler, requires two licenses, one for each user of the profiler.

Resolving A Permission Error When Trying To Save License

You might get an error about access to the license file being denied when first trying to use the Profiler. This is usually a result of Windows locking out the Profiler because it was downloaded from the Internet.

To resolve this issue, go to the folder where you extracted the Profiler, right click on CosmosDBProf.exe and select Properties.

Next, click on the Unblock button to grant the Profiler standard application permissions and resolve the issue.


CONTAINS Function Usage

Using the CONTAINS() function prevents the query from utilizing an index. If you need full-text search functionality, consider the Azure Cognitive Search service.
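For instance, a filter like the following (the container and property names are illustrative) would fall back to scanning documents instead of using the index:

```sql
SELECT * FROM Families f WHERE CONTAINS(f.Address.City, 'eat')
```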

Entity too Large (HTTP Status Code - 413)

The document size in the request exceeded the maximum allowed size for a request (2 MB). Modify your requests to use smaller document sizes to avoid this issue.

Favor Queries with Partition Key

Queries with the partition key value in the filter clause result in low latency. The following list shows query types ordered from most to least efficient (lowest to highest RU usage):

  • query with a filter clause on a single partition key
  • query with an equality or range filter clause on any property
  • query without filters

Queries that need to consult multiple partitions cause higher latency and can consume more RUs. You can provide the PartitionKey explicitly via query request options:

var resultSet = container.GetItemQueryIterator<Family>(query, requestOptions: new QueryRequestOptions
{
    PartitionKey = new PartitionKey("Anderson")
});

You can make queries that span partitions faster by using the parallelism options.
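For example, a cross-partition query can be parallelized via the MaxConcurrency option of QueryRequestOptions. A sketch, assuming an existing container instance and an illustrative Family type and query:

```csharp
using Microsoft.Azure.Cosmos;

FeedIterator<Family> iterator = container.GetItemQueryIterator<Family>(
    new QueryDefinition("SELECT * FROM Families f WHERE f.LastName = @name")
        .WithParameter("@name", "Anderson"),
    requestOptions: new QueryRequestOptions
    {
        // Scan up to 4 partition key ranges in parallel;
        // -1 lets the SDK choose the degree of parallelism.
        MaxConcurrency = 4,
        MaxItemCount = 100 // page size
    });

while (iterator.HasMoreResults)
{
    FeedResponse<Family> page = await iterator.ReadNextAsync();
    // process the page of results...
}
```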

Low Index Utilization

Index utilization is the ratio of the number of documents matched by the filter to the number of documents loaded during the query. When it is low, it means that the Cosmos DB indexes have not been used efficiently to serve the query results.

This might impact the query performance and the request charge of a given query. You should consider improving the indexing of your Cosmos DB database by reviewing the indexing policy defined in your container.
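The indexing policy can be customized when creating the container. A minimal sketch using the .NET SDK; the container name, partition key, and paths are illustrative assumptions:

```csharp
using Microsoft.Azure.Cosmos;

var properties = new ContainerProperties("Families", "/LastName")
{
    IndexingPolicy = new IndexingPolicy
    {
        IndexingMode = IndexingMode.Consistent,
        Automatic = true
    }
};

// Index only the paths that queries actually filter on,
// and exclude everything else to reduce RU charges on writes.
properties.IndexingPolicy.IncludedPaths.Add(new IncludedPath { Path = "/Address/City/?" });
properties.IndexingPolicy.ExcludedPaths.Add(new ExcludedPath { Path = "/*" });

Container container = await database.CreateContainerIfNotExistsAsync(properties);
```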

Mathematical System Function That Does Not Utilize Index

The query's filter uses a mathematical system function that prevents it from utilizing an index. If you need to frequently compute a value in a query, consider storing this value as a property in the document. The list of functions: ACOS, ASIN, ATAN, ATN2, COS, COT, DEGREES, LOG, LOG10, RADIANS, RAND, SIGN, SIN, SQRT, SQUARE, TAN.

No Index Utilization

If the index utilization ratio is 0, it means that the query was not served by the index, and that many documents were loaded during the query before the result set was returned. Please read about indexes in Cosmos DB and consider customizing the indexing policy in your container to better suit your requirements.

Provide Partition Key Explicitly

In order to extract the partition key, the SDK needs to parse the item and find the correct attribute. Providing it explicitly improves overall performance, especially when you perform a large number of operations.

The Cosmos DB API allows you to specify the partition key by passing a PartitionKey object. For example:

await container.UpsertItemAsync(upserted, partitionKey: new PartitionKey(upserted.AccountNumber));

Request Rate too Large (HTTP Status Code - 429)

The actual throughput exceeded the provisioned throughput. The server preemptively ends the request with RequestRateTooLarge (HTTP status code 429) and returns the x-ms-retry-after-ms header indicating the amount of time, in milliseconds, that you must wait before reattempting the request.

The Cosmos DB .NET SDK will retry the request automatically. However, if the default retry count does not suffice, the Cosmos client will throw the exception to the application. The default retry count and retry wait time for rate-throttled requests can be changed in the Cosmos client options:

new CosmosClient("connection-string", new CosmosClientOptions
{
    MaxRetryAttemptsOnRateLimitedRequests = ...,
    MaxRetryWaitTimeOnRateLimitedRequests = ...
});

This retry behavior means that RequestRateTooLarge errors won't interrupt most applications, although client-side latency will spike if requests become silently throttled.
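When the retries are exhausted, the SDK surfaces a CosmosException that carries the retry-after hint. A hedged sketch of handling it; the container and item used inside the try block are assumptions:

```csharp
using System;
using System.Net;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

try
{
    // hypothetical operation that may get rate limited
    await container.UpsertItemAsync(item);
}
catch (CosmosException ex) when (ex.StatusCode == HttpStatusCode.TooManyRequests)
{
    // RetryAfter mirrors the x-ms-retry-after-ms response header
    TimeSpan wait = ex.RetryAfter ?? TimeSpan.FromSeconds(1);
    await Task.Delay(wait);
    // retry the operation or surface the failure to the caller...
}
```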

You should monitor the request charge of your operations using the Profiler and ensure that they operate below the reserved request rate.

Request Timeout (HTTP Status Code - 408)

The request did not complete in time. This code is returned when a stored procedure, trigger or a user-defined function (within a query) does not complete execution within the maximum execution time.

Select N+1

Select N+1 is a data access anti-pattern where the database is accessed in a suboptimal way. Take a look at this code sample: say you want to show the user all comments from all posts so that they can delete all of the nasty comments. The implementation could be something like:

foreach (Post post in postsContainer.GetItemLinqQueryable<Post>(allowSynchronousQueryExecution: true).ToList())
{
    QueryDefinition query = new QueryDefinition("SELECT * FROM Comments c WHERE c.PostId = @postId")
        .WithParameter("@postId", post.Id);

    FeedIterator<Comment> commentsQueryIterator = commentsContainer.GetItemQueryIterator<Comment>(query);

    while (commentsQueryIterator.HasMoreResults)
    {
        FeedResponse<Comment> comments = await commentsQueryIterator.ReadNextAsync();

        // print comments...
    }
}

In this example, we load a list of posts (using postsContainer) and then traverse the object graph. We go to the CosmosDB database and bring the results back one row at a time, issuing a separate query (SELECT * FROM Comments ...) for each post. So our one query turns into N+1 queries to return all related documents. This is inefficient, and the CosmosDB Profiler will generate a warning whenever it encounters such a case.

The solution for this example could be to model the entities differently and store comments as an array field in the post documents.
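With that remodeling, a Post document might look like this sketch (field names are illustrative):

```json
{
  "id": "post-1",
  "title": "My first post",
  "comments": [
    { "author": "jane", "text": "Nice post!" },
    { "author": "bob", "text": "Thanks for sharing." }
  ]
}
```

A single query against the Posts container then returns each post together with its comments, eliminating the per-post query.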

Note: this is the classical appearance of the problem. It can also surface in other scenarios, such as calling the database in a loop or in more complex object graph traversals. In those cases, it is generally much harder to see what is causing the issue.

Having said that, the CosmosDB Profiler will detect those scenarios just as well, and give you the exact line in the source code that caused the query to be generated.

String System Function That Does Not Utilize Index

The query's filter uses a string system function that prevents it from utilizing an index. The list of functions: CONCAT, ENDSWITH, LENGTH, LTRIM, REPLACE, REPLICATE, REVERSE, RIGHT, TRIM, RTRIM, StringToArray, StringToBoolean, StringToNull, StringToNumber.

Transient write error (HTTP Status Code - 449)

The write operation encountered a transient error. It is safe to retry it.

Unbounded result set

An unbounded result set is where a query is performed without explicitly limiting the number of returned results. Usually, this means that the application assumes a query will always return only a few records. That works well in development and in testing, but it is a time bomb waiting to explode in production.

The query may suddenly start returning thousands upon thousands of documents, and in some cases, it may return millions of documents. This leads to more load on the database server, the application server, and the network. In many cases, it can grind the entire system to a halt, usually ending with the application servers crashing with out of memory errors.

Here is one example of a query that will trigger the unbounded result set warning:

SELECT * FROM Families

We are going to load all documents from the Families collection, which is probably not what was intended. This can be fixed fairly easily by using the OFFSET LIMIT clause:

SELECT * FROM Families f OFFSET 0 LIMIT 100

Now we are assured that we only need to handle a predictable, small result set.

Also, filtering with a WHERE ... IN clause will limit the number of results:

SELECT * FROM Families f WHERE f.id IN ('AndersonFamily', 'WakefieldFamily')

Another option is to use the TOP keyword in order to get the first N query results:

SELECT TOP 2 * FROM Families f

Unsuccessful Response

The request ended with a non-successful status code. Please refer to the Cosmos DB status codes documentation for details.

UPPER / LOWER Functions Usage

The usage of the UPPER() and LOWER() functions prevents the query from utilizing an index. Consider normalizing the casing upon insertion.

A query like:

SELECT * FROM Families f WHERE LOWER(f.Address.City) = 'seattle'

can then be rewritten as:

SELECT * FROM Families f WHERE f.Address.City = 'seattle'

Write Conflict (HTTP Status Code - 409)

The write operation resulted in a conflict: the ID provided for the document is already taken by an existing document. For partitioned collections, the ID must be unique within the partition.

Modify your requests to ensure that IDs are unique for each partition.