Providing data to Mobile Apps with Cosmos DB

Bridging the gap between a new mobile app and a legacy in-house system.

Highly Available Data for Mobile App

I am using Cosmos DB as a highly available cached data store to remove any dependency of the mobile app on an in-house stock management system and database.

A full refresh of the data set will be regularly sent from the internal system to an Azure Function that is part of an Azure Static Web App (SWA) - this is the backend of the mobile Progressive Web App (PWA).

The Azure Function will then update the Cosmos DB container with the latest data.

App availability and responsiveness should be much better than if the app queried the internal system and database directly.
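
As a sketch of how the Function gets hold of the container it writes to (the connection setting, database and container names below are placeholders, and I'm assuming the isolated worker model that the function code further down suggests), the Cosmos DB Container can be registered once at start-up and injected into the function class:

```csharp
// Program.cs - a minimal sketch, not the app's actual start-up code.
// Assumes a "CosmosDbConnection" app setting; the database and
// container names are placeholders.
using System;
using Microsoft.Azure.Cosmos;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

var host = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults()
    .ConfigureServices(services =>
    {
        // Register a single Container instance; the function class receives it
        // via constructor injection and uses it as its itemsContainer field.
        services.AddSingleton(_ =>
        {
            var client = new CosmosClient(
                Environment.GetEnvironmentVariable("CosmosDbConnection"));
            return client.GetContainer("StockCacheDb", "Items");
        });
    })
    .Build();

host.Run();
```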

Using the Azure Cosmos DB lifetime free tier

This was the first time I used Cosmos DB in a live application.

I documented the decision to use Cosmos DB in this application within an Architecture Decision Record, saved as a file within the project and therefore version controlled with the source code:

## Decision

Table Storage was considered first as I was already using it in the prototype employee app.

Table Storage does not easily cater for types with nested children - specifically, 
it does not support IEnumerable or ICollection type properties.

Azure Cosmos DB uses the same API and SDK as Table Storage and does allow for complex types.
There is an "Always Free Quantity" switch available for one Cosmos DB account per subscription.
The free quantity would cover the requirements of this app.
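
To illustrate the limitation the ADR refers to, here is a hypothetical entity (the type and property names are illustrative, not the app's real model). Table Storage entities only support simple scalar property types, so the collection property cannot be persisted there directly, whereas Cosmos DB stores it as a nested JSON array on the document:

```csharp
using System.Collections.Generic;

// Illustrative type only - not the real data model from the app.
public class StockLocation
{
    public string Id { get; set; }
    public string WarehouseCode { get; set; }

    // Table Storage only allows simple property types (string, numeric,
    // bool, DateTime, Guid, byte[]), so this IEnumerable/ICollection-style
    // property cannot be stored there as-is. Cosmos DB persists it as a
    // nested array within the JSON document.
    public List<BinQuantity> Bins { get; set; } = new();
}

public class BinQuantity
{
    public string BinCode { get; set; }
    public decimal Quantity { get; set; }
}
```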

Keeping the cached data fresh

Each warehousing message from the mobile app was picked up from the Azure Service Bus Message Queue by the in-house stock management system.

After the message was processed, the affected stock levels were recalculated by the system and the new location stocks were posted to the mobile app’s API.

This API was responsible for issuing an UpsertItemAsync call to update or insert records in the Cosmos DB container.
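
A minimal sketch of what that endpoint might look like (the function name and payload shape are illustrative, and itemsContainer is the injected Cosmos DB Container):

```csharp
// Illustrative single-item endpoint - not the production handler.
[Function("UpdateItem")]
public async Task UpdateItem([HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req)
{
    string body = await new StreamReader(req.Body).ReadToEndAsync();
    Item item = JsonSerializer.Deserialize<Item>(body);

    // UpsertItemAsync inserts the document if it does not exist yet, or
    // replaces it if an item with the same id is already in that partition.
    await itemsContainer.UpsertItemAsync<Item>(
        item: item,
        partitionKey: new PartitionKey((double)item.PropertyUsedAsKey));
}
```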

As a precautionary measure, I scheduled an early-hours full refresh of the data stored in the cache - this was not as easy as it sounded!

Upserting each record would take way too long - I needed to clear out the existing cached data and insert all of the new records.

Truncating Data

In the world of SQL, TRUNCATE is a fast and efficient way of deleting all the records in a table.

Currently, there is no such thing in Cosmos DB. However…

There is a preview only feature that comes close to it: “Delete items by partition key value”.

To enable it is a two-stage process:

Enable the feature for a Cosmos DB Account

Head over to this Microsoft Learn page, where you can use the Cloud Shell to add the capability to the Cosmos DB account.

The SDK method that exposes the feature is called DeleteAllItemsByPartitionKeyStreamAsync.
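
The Cloud Shell step boils down to an az cosmosdb update call along these lines - check the Learn page for the exact capability name, as it may change while the feature is in preview:

```bash
# Assumed capability name - verify it against the Learn article before running.
# Note: --capabilities replaces the account's full capability list, so include
# any capabilities the account already has alongside the new one.
az cosmosdb update \
    --resource-group <resource-group> \
    --name <cosmos-account-name> \
    --capabilities DeleteAllItemsByPartitionKey
```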

Install the Preview Cosmos DB package

Browse to the Microsoft.Azure.Cosmos package on the NuGet website.

I picked the most recent preview version of the package, copied the .NET CLI command and ran it in the VS Code terminal.
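
For reference, the command was of this form (the version shown is the one I ended up using, mentioned below):

```bash
dotnet add package Microsoft.Azure.Cosmos --version 3.44.0-preview.0
```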

How long is a preview

The Learn article specifies version 3.25.0-preview, which was last updated on 18/02/2022.

I used the most recent preview version, 3.44.0-preview.0 - last updated 04/09/2024.

So this feature has been in preview mode for over 2 years!

The TRUNCATE table effect

To delete all the data in the container, I use LINQ within the Azure Function to select all of the distinct partition key values and then iterate over them, calling DeleteAllItemsByPartitionKeyStreamAsync for each one.

```csharp
[Function(nameof(UpdateAllItems))]
public async Task UpdateAllItems([HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequest req)
{
    // Deserialize the full data set posted by the internal stock system.
    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    List<Item> items = JsonSerializer.Deserialize<List<Item>>(requestBody);

    // The "TRUNCATE": kick off a delete of every logical partition present in the payload.
    // Note: List<T>.ForEach takes an Action, so these async lambdas run fire-and-forget.
    items.Select(stk => (double)stk.PropertyUsedAsKey).Distinct().ToList()
        .ForEach(async key => await itemsContainer.DeleteAllItemsByPartitionKeyStreamAsync(new PartitionKey(key)));

    // Crude pause to give the partition deletes time to finish before re-inserting.
    await Task.Delay(30000);

    // Re-insert the fresh records; each upsert is delayed by 5 seconds and,
    // like the deletes above, runs fire-and-forget rather than being awaited.
    items.ForEach(async item =>
    {
        item.Timestamp = DateTimeOffset.Now;
        item.id = $"{item.PropOneID}-{item.PropTwoID}";
        item.RowKey = item.id.ToString();
        await Task.Delay(5000).ContinueWith(async t => await itemsContainer.UpsertItemAsync<Item>(item: item));
    });
}
```

Burst Capacity

Sending every location stock at once to be inserted into Cosmos DB resulted in a breach of the provisioned throughput limit of the free tier.

This resulted in a flood of 429 status codes, representing “Request rate too large” exceptions.
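
The .NET SDK does have built-in retry behaviour for rate-limited requests, and the two CosmosClientOptions settings below control how hard it tries before a 429 bubbles up to your code (the values shown are illustrative, not what this app uses) - but client-side retries only smear the spike out rather than remove it.

```csharp
// Illustrative client-side mitigation, not the fix adopted for this app.
var client = new CosmosClient(
    Environment.GetEnvironmentVariable("CosmosDbConnection"),
    new CosmosClientOptions
    {
        // How many times a rate-limited request is retried before the 429
        // surfaces to the caller.
        MaxRetryAttemptsOnRateLimitedRequests = 15,
        // Total time the SDK may spend waiting across those retries.
        MaxRetryWaitTimeOnRateLimitedRequests = TimeSpan.FromSeconds(60)
    });
```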

The Burst capacity feature is an ideal solution to this issue.

It “carries over” idle throughput capacity to handle spikes of traffic - ideal for the app’s once-a-day morning rush hour.