How to efficiently manage Azure Cosmos DB throughput for multiple collections in a shared database?
I'm sure I'm missing something obvious here. I'm using Azure Cosmos DB for a multi-tenant application, with multiple collections in a single database to segregate tenant data. However, I'm struggling to manage provisioned throughput effectively.

Each collection is provisioned with 1000 RU/s of manual throughput, but during peak hours I exceed it and get "request rate too large" (429) errors. I've tried enabling autoscale on the collections, but I'm still seeing throttling because autoscale doesn't seem to react quickly enough to sudden traffic spikes. I've also considered partitioning the collections further, but that would significantly complicate my application logic and introduce overhead.

Here's a snippet of my current configuration when I set up the collections:

```csharp
// ThroughputProperties has no public constructor; use the static factory
var throughputProperties = ThroughputProperties.CreateManualThroughput(1000);

// ContainerBuilder takes the throughput in CreateAsync rather than a
// WithThroughput call
await cosmosClient.GetDatabase("MyDatabase")
    .DefineContainer("MyCollection1", "/tenantId")
    .CreateAsync(throughputProperties);
```

I've also looked into shared throughput mode, but I'm not entirely sure how that would work in my scenario, where each collection serves a different tenant.

Can anyone suggest best practices or strategies for managing throughput more efficiently without hitting the throttling limits? Are there access patterns that might help optimize database usage for my tenants? Any insights would be greatly appreciated!
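For reference, this is roughly how I enabled autoscale on an existing container (the 4000 RU/s maximum is just the value I picked, and `container` lookup assumes the database/container names above):

```csharp
// Switch an existing container from manual to autoscale throughput.
// Autoscale then varies between 10% and 100% of the configured maximum,
// i.e. 400-4000 RU/s here.
Container container = cosmosClient.GetContainer("MyDatabase", "MyCollection1");
await container.ReplaceThroughputAsync(
    ThroughputProperties.CreateAutoscaleThroughput(4000));
```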
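In case it clarifies the shared-throughput part of my question, my (untested) understanding is that I'd provision RU/s at the database level and then create the per-tenant containers without their own throughput, something like:

```csharp
// Provision 4000 RU/s at the database level, shared by all containers
// that are created without dedicated throughput of their own.
DatabaseResponse response = await cosmosClient.CreateDatabaseIfNotExistsAsync(
    "MyDatabase",
    ThroughputProperties.CreateManualThroughput(4000));

// No throughput passed here, so this container draws from the shared pool.
await response.Database.DefineContainer("MyCollection1", "/tenantId")
    .CreateIfNotExistsAsync();
```

What I don't understand is whether one noisy tenant's collection could then starve the others, since they'd all compete for the same shared RU/s.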