Azure Cosmos DB throughput scaling issues during peak operations in Node.js application
I need some guidance on throughput scaling in Azure Cosmos DB when my Node.js application is under heavy load. I have set up a container with a provisioned throughput of 400 RU/s, but during peak times I'm seeing a significant increase in `RequestRateTooLarge` (HTTP 429) errors. My application uses the `@azure/cosmos` SDK version 3.9.2. I've attempted to implement exponential backoff retry logic to handle these errors, but it seems to be insufficient, as the application continues to hit the throughput limit. Here's the relevant portion of my code:

```javascript
const { CosmosClient } = require('@azure/cosmos');

const client = new CosmosClient({
  endpoint: process.env.COSMOS_ENDPOINT,
  key: process.env.COSMOS_KEY
});

async function fetchData(attempt = 0) {
  try {
    const { resources: items } = await client
      .database('myDatabase')
      .container('myContainer')
      .items.readAll()
      .fetchAll();
    return items;
  } catch (error) {
    // 429 = throttled request ("RequestRateTooLarge" / Too Many Requests)
    if (error.code === 429 && attempt < 5) {
      console.error('Rate limit exceeded, backing off...');
      // Exponential backoff with jitter: ~1s, 2s, 4s, ... plus up to 1s of randomness
      const delay = Math.pow(2, attempt) * 1000 + Math.random() * 1000;
      await new Promise(resolve => setTimeout(resolve, delay));
      return fetchData(attempt + 1); // Retry
    }
    throw error;
  }
}
```

I'm also considering increasing the provisioned throughput to 1000 RU/s, but I'm concerned about the cost implications, since I might not need that level of throughput consistently. Is there a best practice for dynamically adjusting throughput based on usage in Cosmos DB? (I've included a sketch of what I had in mind at the end of this post.) I have also looked into the Autoscale feature, but I'm unsure whether it would fit my use case, especially with my current SDK version. Any insights or suggestions would be greatly appreciated! I'm working in a Debian environment. Thanks in advance!
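
For reference, this is roughly what I had in mind for adjusting throughput from code, based on my reading of the `client.offers` API in the v3 SDK. I haven't verified that this works on 3.9.2, and the 1000 RU/s value in the usage example is just a placeholder:

```javascript
const { CosmosClient } = require('@azure/cosmos');

const client = new CosmosClient({
  endpoint: process.env.COSMOS_ENDPOINT,
  key: process.env.COSMOS_KEY
});

// Bump a container's provisioned throughput to a new RU/s value.
// NOTE: this is a sketch based on my reading of the offers API docs;
// I haven't confirmed the exact shape of the offer body on SDK 3.9.2.
async function setThroughput(databaseId, containerId, newRUs) {
  // Offers are keyed by the container's internal resource id (_rid),
  // so read the container definition first.
  const { resource: containerDef } = await client
    .database(databaseId)
    .container(containerId)
    .read();

  // Find the offer that belongs to this container.
  const { resources: offers } = await client.offers.readAll().fetchAll();
  const offer = offers.find(o => o.offerResourceId === containerDef._rid);
  if (!offer) {
    throw new Error(`No offer found for container ${containerId}`);
  }

  // Replace the offer with the new throughput value.
  offer.content.offerThroughput = newRUs;
  await client.offer(offer.id).replace(offer);
}

// Example: scale up before a known peak window, scale back down afterwards.
// await setThroughput('myDatabase', 'myContainer', 1000);
```

Is something like this reasonable, or is Autoscale the better option here?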