Implement an update by using a patch operation
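
A partial update (patch) changes only the paths listed in the operation, so the client does not have to read and replace the whole document. A minimal sketch with the azure-cosmos Python SDK, using illustrative account, database, container, and item names:

```python
from azure.cosmos import CosmosClient

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("retail").get_container_client("products")

# Each patch operation targets a JSON path; only those paths are modified.
container.patch_item(
    item="product-001",              # document id
    partition_key="electronics",     # partition key value of that document
    patch_operations=[
        {"op": "replace", "path": "/price", "value": 19.99},
        {"op": "add", "path": "/tags/-", "value": "sale"},   # append to an array
        {"op": "incr", "path": "/stock", "value": -1},       # atomic increment
    ],
)
```
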
Write data back to the transactional store from Spark
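
A minimal sketch of writing a Spark DataFrame back into the transactional (OLTP) store from an Azure Synapse Spark notebook, assuming a Synapse linked service named CosmosDbNoSql, a products container, the notebook's built-in spark session, and the connector options shown in the Synapse Link samples:

```python
updated_df = spark.createDataFrame(
    [("product-001", "electronics", 19.99)],
    ["id", "category", "price"],
)

(updated_df.write
    .format("cosmos.oltp")                                   # transactional (OLTP) store
    .option("spark.synapse.linkedService", "CosmosDbNoSql")  # Synapse linked service name
    .option("spark.cosmos.container", "products")
    .option("spark.cosmos.write.upsertEnabled", "true")      # upsert instead of insert-only
    .mode("append")
    .save())
```
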
Choose a partitioning strategy based on a specific workload
Monitor distribution of data across partitions
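
Storage skew across logical partition keys can be checked from diagnostic logs. A minimal sketch with the azure-monitor-query package, assuming logs are routed to a Log Analytics workspace in resource-specific mode (CDBPartitionKeyStatistics table with a SizeKb column) and a placeholder workspace ID:

```python
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

logs_client = LogsQueryClient(DefaultAzureCredential())

# Largest logical partition keys by stored size over the last day.
query = """
CDBPartitionKeyStatistics
| summarize arg_max(TimeGenerated, SizeKb) by PartitionKey
| top 10 by SizeKb desc
"""

response = logs_client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",
    query=query,
    timespan=timedelta(days=1),
)
for row in response.tables[0].rows:
    print(row)
```
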
Implement point operations that create, update, and delete documents
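
A minimal sketch of the point operations with the azure-cosmos Python SDK, again using illustrative names; each call addresses a single document by id and partition key:

```python
from azure.cosmos import CosmosClient

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("retail").get_container_client("products")

item = {"id": "product-001", "category": "electronics", "name": "Headset", "price": 24.99}

container.create_item(body=item)                                              # create
read = container.read_item(item="product-001", partition_key="electronics")  # point read
read["price"] = 19.99
container.replace_item(item="product-001", body=read)                        # full replace
container.delete_item(item="product-001", partition_key="electronics")       # delete
```
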
Perform a query against the transactional store from Spark
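
A minimal sketch of reading the transactional store from Azure Synapse Spark, assuming the same CosmosDbNoSql linked service and products container; the filter is applied to the resulting DataFrame:

```python
products_df = (spark.read
    .format("cosmos.oltp")                                   # transactional store
    .option("spark.synapse.linkedService", "CosmosDbNoSql")
    .option("spark.cosmos.container", "products")
    .load())

products_df.filter("category = 'electronics'").show(10)
```
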
Specify a default TTL on a container for a transactional store
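
A minimal sketch of setting a default TTL at container creation with the azure-cosmos Python SDK; the database, container, and partition key path are illustrative. A value of -1 turns TTL on without expiring documents unless they set their own ttl property, while a positive value is the default lifetime in seconds:

```python
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
database = client.get_database_client("retail")

database.create_container_if_not_exists(
    id="sessions",
    partition_key=PartitionKey(path="/userId"),
    default_ttl=3600,   # documents expire one hour after their last write by default
)
```
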
Monitor throughput across partitions
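
Normalized RU consumption can be split per partition key range to spot hot partitions. A minimal sketch with the azure-monitor-query package, assuming the NormalizedRUConsumption metric, its PartitionKeyRangeId dimension, and a placeholder account resource ID:

```python
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

metrics_client = MetricsQueryClient(DefaultAzureCredential())
resource_id = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/"
    "Microsoft.DocumentDB/databaseAccounts/<account>"
)

result = metrics_client.query_resource(
    resource_id,
    metric_names=["NormalizedRUConsumption"],
    timespan=timedelta(hours=1),
    granularity=timedelta(minutes=5),
    aggregations=[MetricAggregationType.MAXIMUM],
    filter="PartitionKeyRangeId eq '*'",   # split the series per partition key range
)

for metric in result.metrics:
    for series in metric.timeseries:
        label = {m.name: m.value for m in series.metadata_values}
        peak = max((p.maximum for p in series.data if p.maximum is not None), default=None)
        print(label, peak)
```
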
Choose when to use a point operation versus a query operation
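
The sketch below contrasts the two for the same document, assuming the azure-cosmos Python SDK; the RU charge is read from the last response headers kept on the client connection, as in the SDK samples. When both the id and the partition key are known, the point read is normally the cheaper choice:

```python
from azure.cosmos import CosmosClient

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("retail").get_container_client("products")

# Point operation: id and partition key are both known.
item = container.read_item(item="product-001", partition_key="electronics")
print("point read RU:",
      container.client_connection.last_response_headers["x-ms-request-charge"])

# Query operation: needed when filtering on anything other than id + partition key.
results = list(container.query_items(
    query="SELECT * FROM c WHERE c.name = @name",
    parameters=[{"name": "@name", "value": "Headset"}],
    enable_cross_partition_query=True,
))
print("query RU:",
      container.client_connection.last_response_headers["x-ms-request-charge"])
```
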
Identify data and associated access patterns
Enable a connection to an analytical store and query from Azure Synapse Spark or Azure Synapse SQL
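
A minimal sketch of querying the analytical store over Synapse Link from an Azure Synapse Spark pool, assuming the CosmosDbNoSql linked service and a products container that has the analytical store enabled; from serverless SQL the same data can be reached with OPENROWSET against the Cosmos DB account:

```python
analytical_df = (spark.read
    .format("cosmos.olap")                                   # analytical (column) store
    .option("spark.synapse.linkedService", "CosmosDbNoSql")
    .option("spark.cosmos.container", "products")
    .load())

analytical_df.createOrReplaceTempView("products")
spark.sql(
    "SELECT category, AVG(price) AS avg_price FROM products GROUP BY category"
).show()
```
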
Implement and query Azure Cosmos DB logs
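
Once diagnostic settings route Cosmos DB logs to a Log Analytics workspace in resource-specific mode, they can be queried with KQL. A minimal sketch with the azure-monitor-query package, assuming the CDBDataPlaneRequests table and a placeholder workspace ID:

```python
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

logs_client = LogsQueryClient(DefaultAzureCredential())

# Most expensive operations by total RU charge over the last 24 hours.
query = """
CDBDataPlaneRequests
| summarize TotalRUs = sum(todouble(RequestCharge)) by OperationName, CollectionName
| top 10 by TotalRUs desc
"""

response = logs_client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",
    query=query,
    timespan=timedelta(days=1),
)
for row in response.tables[0].rows:
    print(row)
```
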
Implement queries based on variable data
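
Documents in a container do not have to share a schema, so queries often have to tolerate missing properties. A minimal sketch with the azure-cosmos Python SDK, assuming a products container where only some documents carry discount or discontinued properties:

```python
from azure.cosmos import CosmosClient

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("retail").get_container_client("products")

# IS_DEFINED handles properties that exist on only some documents.
query = """
SELECT c.id, c.price,
       (IS_DEFINED(c.discount) ? c.price * (1 - c.discount) : c.price) AS finalPrice
FROM c
WHERE NOT IS_DEFINED(c.discontinued) OR c.discontinued = false
"""

for item in container.query_items(query=query, enable_cross_partition_query=True):
    print(item["id"], item["finalPrice"])
```
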
Identify primary and unique keys
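
Once candidate keys are identified, a unique key policy can enforce them within each logical partition. A minimal sketch at container creation with the azure-cosmos Python SDK, assuming an illustrative /email path:

```python
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
database = client.get_database_client("retail")

# id + partition key form the primary key; the unique key policy additionally
# rejects two documents with the same /email value in the same logical partition.
database.create_container_if_not_exists(
    id="customers",
    partition_key=PartitionKey(path="/customerId"),
    unique_key_policy={"uniqueKeys": [{"paths": ["/email"]}]},
)
```
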
Enable the analytical store on a container
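
A minimal sketch of enabling the analytical store when a container is created, assuming the azure-cosmos Python SDK exposes it through the analytical_storage_ttl parameter; Synapse Link must also be enabled on the account:

```python
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
database = client.get_database_client("retail")

database.create_container_if_not_exists(
    id="orders",
    partition_key=PartitionKey(path="/customerId"),
    analytical_storage_ttl=-1,   # -1 = analytical store on, records retained indefinitely
)
```
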
Configure Azure Monitor alerts for Azure Cosmos DB
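
A minimal sketch of creating a metric alert rule with the azure-mgmt-monitor package, assuming the TotalRequestUnits metric and placeholder subscription, resource group, and account names; the threshold and windows are illustrative:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import (
    MetricAlertResource,
    MetricAlertSingleResourceMultipleMetricCriteria,
    MetricCriteria,
)

client = MonitorManagementClient(DefaultAzureCredential(), "<subscription-id>")
cosmos_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/"
    "Microsoft.DocumentDB/databaseAccounts/<account>"
)

client.metric_alerts.create_or_update(
    resource_group_name="<rg>",
    rule_name="cosmos-high-ru",
    parameters=MetricAlertResource(
        location="global",
        description="Alert when RU consumption is high",
        severity=2,
        enabled=True,
        scopes=[cosmos_id],
        evaluation_frequency="PT5M",   # evaluate every 5 minutes
        window_size="PT5M",            # over a 5-minute window
        criteria=MetricAlertSingleResourceMultipleMetricCriteria(
            all_of=[
                MetricCriteria(
                    name="HighRU",
                    metric_name="TotalRequestUnits",
                    operator="GreaterThan",
                    threshold=100000,
                    time_aggregation="Total",
                )
            ]
        ),
    ),
)
```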