AWS Lambda Function with DynamoDB Trigger Not Processing Events After Scaling Down

I've set up an AWS Lambda function that is triggered by DynamoDB Streams. Everything worked fine initially, but after I scaled down the provisioned throughput on the DynamoDB table to save costs, the function has stopped processing events entirely. It is configured for 100 concurrent executions, yet it no longer appears to be triggered at all: in the Lambda console, the monitoring metrics show the invocation count has been zero since the scale-down.

This is the relevant part of my function's code (Python 3.11):

```python
import json

def lambda_handler(event, context):
    print("Received event:", json.dumps(event))
    # Process each DynamoDB stream record
    for record in event['Records']:
        # Further processing...
        pass
```

I also confirmed that the DynamoDB stream is enabled and configured to capture both new and old images. Here's the configuration I have set for the stream:

- Stream View Type: NEW_AND_OLD_IMAGES
- Stream Status: ENABLING (previously it was ENABLED)

I've tried re-enabling the stream, updating the Lambda function configuration to use a different IAM role, and even redeploying the function, but the problem persists. I don't see any errors in the Lambda monitoring; the function simply isn't being invoked since the throughput change. (The boto3 snippet I've been using to check the event source mapping and stream status is at the end of this post.)

Does anyone have insights into why this might be happening, or how I could troubleshoot it further? Is there a simpler solution I'm overlooking? Pointers to the relevant documentation would also be appreciated. Thanks in advance!
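For completeness, here's the diagnostic check mentioned above. It's a minimal boto3 sketch; the function name and stream ARN are placeholders for my actual resources:

```python
import boto3

# Placeholders -- substitute your own function name and stream ARN.
FUNCTION_NAME = "my-stream-processor"
STREAM_ARN = (
    "arn:aws:dynamodb:us-east-1:123456789012:table/MyTable"
    "/stream/2024-01-01T00:00:00.000"
)

lambda_client = boto3.client("lambda")
streams_client = boto3.client("dynamodbstreams")

# Check the event source mapping between the stream and the function.
# A State other than "Enabled" (e.g. "Disabled" or "Updating") would explain
# the missing invocations; StateTransitionReason often says why.
resp = lambda_client.list_event_source_mappings(FunctionName=FUNCTION_NAME)
for mapping in resp["EventSourceMappings"]:
    print(mapping["EventSourceArn"])
    print("  State:", mapping["State"])
    print("  Reason:", mapping.get("StateTransitionReason", "n/a"))

# Check the stream itself -- it should report ENABLED, not ENABLING.
desc = streams_client.describe_stream(StreamArn=STREAM_ARN)
print("Stream status:", desc["StreamDescription"]["StreamStatus"])
```

If the mapping `State` after a throughput change is expected to be something other than "Enabled", I'd appreciate knowing what to look for.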
I'm collaborating on a project where I'm optimizing some code but I've set up an AWS Lambda function that gets triggered by DynamoDB streams. Initially, everything was working fine, but after I scaled down the provisioned throughput on the DynamoDB table to save costs, I'm no longer seeing any events being processed by the Lambda function. The Lambda function is configured to handle 100 concurrent executions, but it seems like it is not being triggered at all now. In the Lambda console, I checked the monitoring metrics and noticed that the invocation count has dropped to zero since the scaling down. This is the relevant part of my function's code: ```python import json def lambda_handler(event, context): print("Received event: ", json.dumps(event)) # Process records here for record in event['Records']: # Further processing... pass ``` I also confirmed that the DynamoDB stream is enabled and configured to view both new and old images. Hereβs the configuration I have set for the stream: - Stream View Type: NEW_AND_OLD_IMAGES - Stream Status: ENABLING (previously it was ACTIVE) Iβve tried re-enabling the stream, updating the Lambda function configuration to use a different IAM role, and even redeploying the function, but the question continues. I don't see any errors in the Lambda monitoring, but it just seems like it's not being invoked at all after the throughput change. Does anyone have any insights on why this might be happening or how I could troubleshoot this scenario further? My development environment is Windows. Thanks in advance! The stack includes Python and several other technologies. Could someone point me to the right documentation? I'm using Python 3.11 in this project. Is there a simpler solution I'm overlooking? I'm on macOS using the latest version of Python. Thanks for taking the time to read this!