Cosmos DB
Azure Cosmos DB is Microsoft's proprietary globally distributed, multi-model database service "for managing data at planet-scale", launched in May 2017. It is schema-agnostic, horizontally scalable and generally classified as a NoSQL database.
Data model
Internally, Cosmos DB stores "items" in "containers", with these two concepts being surfaced differently depending on the API used. Containers are grouped in "databases", which are analogous to namespaces above containers. Containers are schema-agnostic, which means that no schema is enforced when adding items.
By default, every field in each item is automatically indexed, generally providing good performance without tuning to specific query patterns. These defaults can be modified by setting an indexing policy which can specify, for each field, the index type and precision desired. Cosmos DB offers two types of indexes:
- range, supporting range and ORDER BY queries,
- spatial, supporting spatial queries from points, polygons and line strings encoded in standard GeoJSON fragments.
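As a sketch of what overriding the indexing defaults looks like, the dictionary below follows the legacy indexing-policy shape (explicit index kinds and precision, matching the range and spatial types above); the container paths such as `/location/?` and `/rawPayload/*` are hypothetical examples:

```python
# A sketch of a Cosmos DB indexing policy (legacy format with explicit
# index kinds and precision). The document paths are hypothetical.
indexing_policy = {
    "indexingMode": "consistent",
    "includedPaths": [
        {
            "path": "/location/?",
            "indexes": [
                # Spatial index for GeoJSON points, polygons, line strings
                {"kind": "Spatial", "dataType": "Point"},
            ],
        },
        {
            "path": "/*",
            "indexes": [
                # Range indexes support range and ORDER BY queries;
                # a precision of -1 means maximum precision
                {"kind": "Range", "dataType": "Number", "precision": -1},
                {"kind": "Range", "dataType": "String", "precision": -1},
            ],
        },
    ],
    # Exclude a write-heavy field that is never queried, to save RUs
    "excludedPaths": [{"path": "/rawPayload/*"}],
}
```

Excluding paths that are never queried reduces the indexing work performed on every write.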
Each Cosmos DB container exposes a change feed, which clients can subscribe to in order to get notified of new items being added or updated in the container. Item deletions are currently not exposed by the change feed. Changes are persisted by Cosmos DB, which makes it possible to request changes from any point in time since the creation of the container.
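The change feed thus behaves like an ordered, persisted log of inserts and updates from which deletions are absent, readable from any past point. A minimal in-memory sketch of that contract (not SDK code; the class and token scheme are illustrative):

```python
class ChangeFeedSketch:
    """In-memory sketch of a container's change feed: an append-only log
    of inserts/updates. Deletions are not recorded, mirroring Cosmos DB."""

    def __init__(self):
        self._log = []  # (sequence_number, item) pairs, oldest first

    def upsert(self, item):
        # Every insert or update appends a new entry to the feed
        self._log.append((len(self._log) + 1, dict(item)))

    def delete(self, item_id):
        pass  # item deletions are not surfaced by the change feed

    def read_changes(self, continuation=0):
        """Return all changes after `continuation`, plus a new token
        that a client can store to resume reading later."""
        changes = [item for seq, item in self._log if seq > continuation]
        new_token = self._log[-1][0] if self._log else continuation
        return changes, new_token


feed = ChangeFeedSketch()
feed.upsert({"id": "1", "city": "Paris"})
feed.upsert({"id": "2", "city": "Oslo"})
changes, token = feed.read_changes()               # read from the beginning
later, _ = feed.read_changes(continuation=token)   # nothing new yet
```

A client that persists its continuation token can stop and resume consumption without missing changes, which is the basis for change-feed-driven processing patterns.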
A "Time to Live" can be specified at the container level to let Cosmos DB automatically delete items after a certain amount of time expressed in seconds. This countdown starts after the last update of the item. If needed, the TTL can also be overloaded at the item level.
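The expiry rule can be sketched as a predicate: an item expires `ttl` seconds after its last update (Cosmos DB records the last-modified time in the system field `_ts`), with an item-level `ttl` overriding the container default. The helper itself is illustrative:

```python
def is_expired(item, container_default_ttl, now):
    """Sketch of Cosmos DB TTL semantics: an item expires `ttl` seconds
    after its last update (`_ts`). An item-level `ttl` overrides the
    container default; a ttl of None or -1 means the item never expires."""
    ttl = item.get("ttl", container_default_ttl)
    if ttl is None or ttl == -1:
        return False
    return now - item["_ts"] > ttl


item = {"id": "1", "_ts": 1_000}           # last updated at t = 1000 s
fresh = is_expired(item, container_default_ttl=60, now=1_030)   # 30 s old
stale = is_expired(item, container_default_ttl=60, now=1_090)   # 90 s old
# Item-level ttl of 10 s overrides the 60 s container default
override = {"id": "2", "_ts": 1_000, "ttl": 10}
early = is_expired(override, container_default_ttl=60, now=1_030)
```

Because the countdown restarts from `_ts`, any update to an item resets its remaining lifetime.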
Multi-model APIs
The internal data model described in the previous section is exposed through:- a proprietary SQL API
- five different compatibility APIs, exposing endpoints that are partially compatible with the wire protocols of MongoDB, Gremlin, Cassandra, Azure Table Storage, and etcd; these compatibility APIs make it possible for any compatible application to connect to and use Cosmos DB through standard drivers or SDKs, while also benefiting from Cosmos DB's core features like partitioning and global distribution.
SQL API
- Stored procedures. Functions that bundle an arbitrarily complex set of operations and logic into an ACID-compliant transaction. They are isolated from changes made while the stored procedure is executing, and either all write operations succeed or they all fail, leaving the database in a consistent state. Stored procedures are executed in a single partition, so the caller must provide a partition key when calling into a partitioned collection. Stored procedures can be used to make up for missing functionality; for instance, the lack of aggregation capability is addressed by the implementation of an OLAP cube as a stored procedure in the open-source documentdb-lumenize project.
- Triggers. Functions that are executed before or after specific operations and that can either alter the operation or cancel it. Triggers are not run automatically; they must be explicitly requested with each operation.
- User-defined functions. Functions that can be called from, and augment, the SQL query language, making up for its limited feature set.
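Real stored procedures are written in JavaScript and run inside the database, but their transactional contract can be sketched in a few lines: all writes within one partition succeed or fail together, isolated from concurrent changes. The in-memory model below is illustrative only:

```python
def execute_stored_procedure(store, partition_key, operations):
    """Sketch of stored-procedure semantics: every operation is applied
    atomically to a single partition. Any failure rolls back everything,
    leaving the committed state unchanged (all-or-nothing)."""
    partition = store.setdefault(partition_key, {})
    snapshot = dict(partition)   # isolation: work on a private copy
    try:
        for op in operations:
            op(snapshot)         # each op mutates the snapshot
    except Exception:
        return False             # discard all work on any failure
    store[partition_key] = snapshot   # commit atomically
    return True


store = {}
ok = execute_stored_procedure(
    store, "user-42",
    [lambda p: p.update(a=1), lambda p: p.update(b=2)],
)

def failing(p):
    p.update(c=3)                # this write will be rolled back
    raise RuntimeError("constraint violated")

bad = execute_stored_procedure(store, "user-42", [failing])
```

Note that the partition key is part of the call, mirroring the requirement that a stored procedure executes within a single partition.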
Partitioning
Cosmos DB added automatic partitioning capability in 2016 with the introduction of partitioned containers. Behind the scenes, partitioned containers span multiple physical partitions with items distributed by a client-supplied partition key. Cosmos DB automatically decides how many partitions to spread data across depending on the size and throughput needs. When partitions are added or removed, the operation is performed without any downtime so data remains available while it is re-balanced across the new or remaining partitions.
Before partitioned containers were available, it was common to write custom code to partition data, and some of the Cosmos DB SDKs explicitly supported several different partitioning schemes. That mode is still available but only recommended when storage and throughput requirements do not exceed the capacity of one container, or when the built-in partitioning capability does not otherwise meet the application's needs.
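Conceptually, a partitioned container routes each item to a physical partition by hashing its partition-key value. Cosmos DB's actual scheme (hash ranges that split as data grows) is internal; the sketch below only illustrates the routing idea, with MD5 standing in for the real hash:

```python
import hashlib


def route_to_partition(partition_key_value, physical_partition_count):
    """Illustrative partition routing: hash the partition-key value and
    map it onto one of the physical partitions. The key property is that
    the same key value always lands on the same partition."""
    digest = hashlib.md5(str(partition_key_value).encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % physical_partition_count


# Items sharing a partition key land on one partition, which is why
# single-partition operations (like stored procedures) can be transactional.
p1 = route_to_partition("tenant-17", 8)
p2 = route_to_partition("tenant-17", 8)
```

Choosing a partition key with many distinct, evenly accessed values is what lets throughput scale across partitions.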
Tunable throughput
Developers can specify the desired throughput to match the application's expected load. Cosmos DB reserves resources to guarantee the requested throughput while maintaining request latency below 10 ms for both reads and writes at the 99th percentile. Throughput is specified in Request Units (RUs) per second. The cost to read a 1 KB item is 1 Request Unit. Reads of a single item by its 'id' consume fewer RUs than Delete, Update, and Insert operations on the same document. Large queries and stored procedure executions can consume hundreds to thousands of RUs depending on the complexity of the operations involved. The minimum billing granularity is one hour.
Throughput can be provisioned at either the container or the database level. When provisioned at the database level, the throughput is shared across all the containers within that database, with the additional ability to dedicate throughput to specific containers. Throughput provisioned on an Azure Cosmos container is exclusively reserved for that container. The default maximum throughput that can be provisioned per database and per container is 1,000,000 RUs, but customers can have this limit increased by contacting customer support.
Using a single-region instance, counting 1,000,000 records of 1 KB each within 5 s requires 1,000,000 RUs; at $0.008/hour, this comes to $800. Two regions double the cost.
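The arithmetic behind figures like these can be sketched as follows; the helper functions are illustrative, not Azure's billing formulas:

```python
def total_request_units(item_count, ru_per_item):
    """Total RUs consumed by touching `item_count` items at
    `ru_per_item` RUs each (reading a 1 KB item costs 1 RU)."""
    return item_count * ru_per_item


def required_throughput(total_rus, window_seconds):
    """Provisioned throughput (RU/s) needed to spend `total_rus`
    within `window_seconds`."""
    return total_rus / window_seconds


# 1,000,000 reads of 1 KB items consume 1,000,000 RUs in total;
# finishing them within 5 s needs 200,000 RU/s of provisioned throughput.
total = total_request_units(1_000_000, 1)
rus_per_second = required_throughput(total, 5)

# Replicating to a second region doubles the provisioned-throughput cost.
two_region_factor = 2
```

Since billing is driven by provisioned RU/s and the number of regions, spreading the same work over a longer window lowers the throughput that must be reserved.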
Global distribution
Cosmos DB databases can be configured to be available in any of the Microsoft Azure regions, letting application developers place their data closer to where their users are. Each container's data is transparently replicated across all configured regions. Adding or removing regions is performed without any downtime or impact on performance. By leveraging Cosmos DB's multi-homing API, applications don't have to be updated or redeployed when regions are added or removed, as Cosmos DB automatically routes requests to the available region closest to their location.
Consistency levels
Consistency is configurable on Cosmos DB, letting application developers choose among five different levels:
- Eventual does not guarantee any ordering and only ensures that replicas will eventually converge
- Consistent prefix adds ordering guarantees on top of eventual
- Session is scoped to a single client connection and ensures read-your-own-writes consistency for each client; it is the default consistency level
- Bounded staleness augments consistent prefix by ensuring that reads won't lag beyond x versions of an item or some specified time window
- Strong consistency ensures that clients always read the latest globally committed write
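The bounded staleness level above can be sketched as a predicate: a replica may serve a version that lags the globally newest write by at most k versions and at most t seconds. The function and its parameter names are illustrative:

```python
def bounded_staleness_ok(latest_version, seen_version,
                         latest_ts, seen_ts,
                         max_versions, max_seconds):
    """Sketch of the bounded staleness consistency level: the version a
    replica serves may lag the newest committed write by at most
    `max_versions` versions and at most `max_seconds` seconds."""
    version_lag = latest_version - seen_version
    time_lag = latest_ts - seen_ts
    return version_lag <= max_versions and time_lag <= max_seconds


# A read 2 versions and 1.5 s behind is within a (5 versions, 2 s) bound.
ok = bounded_staleness_ok(105, 103, 60.0, 58.5,
                          max_versions=5, max_seconds=2)
# A read 15 versions behind violates the version bound.
stale = bounded_staleness_ok(105, 90, 60.0, 20.0,
                             max_versions=5, max_seconds=2)
```

Strong consistency is the limiting case where no lag is tolerated, while eventual consistency places no bound at all.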
Multi-master
Cosmos DB's original distribution model involves a single write region, with all other regions acting as read-only replicas. In March 2018, a new multi-master capability was announced, enabling multiple regions to act as write replicas within a global deployment. Merge conflicts that arise when different write regions issue concurrent, conflicting writes can be resolved either by the default Last Write Wins policy or by a custom JavaScript function.
Analytical Store
Announced in May 2020, the analytical store is a fully isolated column store that enables large-scale analytics against operational data in Azure Cosmos DB without impacting transactional workloads. It addresses the complexity and latency of the traditional ETL pipelines otherwise required to maintain a data repository optimized for online analytical processing: operational data is automatically synced into a separate column store suited to large-scale analytical queries, improving the latency of such queries. Using Azure Synapse Link for Cosmos DB, it is possible to build no-ETL hybrid transactional/analytical processing solutions by linking directly to the Cosmos DB analytical store from Synapse Analytics, enabling near-real-time, large-scale analytics directly on operational data.
Reception
Gartner Research positioned Microsoft as the leader in its 2016 Magic Quadrant for Operational Database Management Systems and called out the unique capabilities of Cosmos DB in its write-up.
Real-world use cases
These Microsoft services use Cosmos DB: Microsoft Office, Skype, Active Directory, Xbox, and MSN. To build more globally resilient applications and systems, Cosmos DB can be combined with other Azure services, such as Azure App Service and Azure Traffic Manager.
Cosmos DB Profiler
The Cosmos DB Profiler cloud cost optimization tool detects inefficient data queries in the interactions between an application and its Cosmos DB database. The profiler alerts users to wasted performance and excessive cloud expenditure, and recommends how to resolve them by isolating and analyzing the offending code and directing users to its exact location.
Limitations, criticism and cautions
- Limited backup/restore features. Whilst automated backups are taken, they are limited in duration. Restoration of backups can only be achieved by raising a support ticket and awaiting the Microsoft support team's assistance. Furthermore, whilst the backup facility does protect against accidental deletion of databases and whole collections, it offers very little protection against document-level corruption, because there is no "point-in-time" restore option. These limiting factors mean that Cosmos DB may not satisfy the long-term data retention policies and requirements of many organizations.
- Triggers must be explicitly specified for each operation on which you wish them to run, which renders them ineffective as a mechanism for maintaining business-logic consistency unless you can be certain that the correct triggers are specified for every operation.
- .NET LINQ language integrated queries are not fully supported. More and more LINQ support has been added over time, but developers are often confused when the LINQ code that they use on other systems fails to work as expected on Cosmos DB as evidenced by the large number of StackOverflow questions containing both tags.
- Transactions are not currently supported at the API level; for example, Cosmos DB does not participate in the .NET TransactionScope pattern. Transactions are currently supported only from within JavaScript stored procedures.
- For the local development experience, a local emulator is available, but only for Microsoft Windows. This emulator can be accessed either as a program that runs in the background or as a Docker for Windows container image. The emulator is limited in features compared to the Cosmos DB service on Azure and is intended for development only.
- SQL support is very limited. Aggregations are limited to the COUNT, SUM, MIN, MAX, and AVG functions, with no support for GROUP BY or other aggregation functionality found in other database systems. However, stored procedures can be used to implement in-database aggregation capability.
- "Collection" means something different in Cosmos DB: it is simply a bucket of documents. There is a tendency to equate collections to tables, where each collection would hold only a single type of document, but this is not recommended with Cosmos DB. Rather, developers are encouraged to distinguish document types with a "type" field, or by adding an "isTypeA = true" field to all documents of type A, "isTypeB = true" to all documents of type B, and so on. This is especially confusing to developers coming from MongoDB, which has a "collection" entity that is intended to be used in a very different way.
- A lack of query plan visibility, which makes it difficult to understand and tune query performance.
- Support only for pure JSON data types. Most notably, Cosmos DB lacks support for date-time data requiring that you store this data using the available data types. For instance, it can be stored as an ISO-8601 string or epoch integer. MongoDB, the database to which Cosmos DB is most often compared, extended JSON in their BSON binary serialization specification to cover date-time data as well as traditional number types, regular expressions, and Undefined. However, many argue that Cosmos DB's choice of pure JSON is actually an advantage as it's a better fit for JSON-based REST APIs and the JavaScript engine built into the database.
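The "type" field pattern from the collection bullet above can be sketched as follows: heterogeneous documents live in one collection and queries filter on the discriminator. The documents and helper are illustrative:

```python
# One Cosmos DB collection holding several document types, distinguished
# by a "type" field rather than by separate per-type collections.
documents = [
    {"id": "1", "type": "customer", "name": "Contoso"},
    {"id": "2", "type": "order", "customerId": "1", "total": 12.5},
    {"id": "3", "type": "order", "customerId": "1", "total": 99.0},
]


def of_type(docs, doc_type):
    """In-memory equivalent of:
    SELECT * FROM c WHERE c.type = @docType"""
    return [d for d in docs if d.get("type") == doc_type]


orders = of_type(documents, "order")
customers = of_type(documents, "customer")
```

Filtering on the discriminator in every query is the cost of this pattern; its benefit is that related document types can share a collection and its provisioned throughput.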
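The two date-time workarounds named in the pure-JSON bullet above can be sketched like this; the helper names are illustrative. ISO-8601 strings have the useful property that lexicographic order matches chronological order, so range queries on them still behave correctly:

```python
from datetime import datetime, timezone


def to_iso8601(dt):
    """Store a datetime as an ISO-8601 UTC string; string order matches
    chronological order, so range filters and ORDER BY still work."""
    return dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")


def to_epoch(dt):
    """Store a datetime as an integer of seconds since the Unix epoch."""
    return int(dt.timestamp())


a = datetime(2020, 5, 1, 12, 0, tzinfo=timezone.utc)
b = datetime(2021, 1, 1, 0, 0, tzinfo=timezone.utc)
iso_a = to_iso8601(a)
# Lexicographic comparison of the strings agrees with time order:
iso_sorted = to_iso8601(a) < to_iso8601(b)
```

Epoch integers are more compact and index efficiently as numbers, while ISO-8601 strings stay human-readable; either works with Cosmos DB's pure-JSON types.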