OCI Functions: How to Handle Timeouts and Retries for Long-Running Tasks in the Python FDK
I'm developing a serverless application on Oracle Cloud Infrastructure (OCI) Functions using the Python FDK. One function processes large batches of data and can take longer than the default execution timeout of 60 seconds. I need the function to be retried automatically after a timeout without losing any data.

I raised the function's timeout to 120 seconds in the OCI Console, but I still see this error:

```
Error: Function execution timed out after 60 seconds
```

To work around it, I added retry logic inside the function code itself:

```python
import io
import time

RETRY_LIMIT = 3

def process_data(data):
    # Simulate long processing time
    time.sleep(70)  # change this to simulate longer processing
    return 'Processed: ' + str(data)

def handler(ctx, data: io.BytesIO = None):
    retries = 0
    while retries < RETRY_LIMIT:
        try:
            return process_data(data)
        except Exception as e:
            print(f'Error: {e}')
            retries += 1
            time.sleep(5)  # wait before retrying
    return 'Failed after retries'
```

However, the function is still killed before the retry loop ever gets a chance to run, so it never completes.

Is there a recommended approach for handling long-running tasks in OCI Functions? Should I use the OCI Events service to trigger retries, or break the work into smaller units that each finish well within the timeout?

Environment: Linux. I've been using Python for about a year, so best-practice pointers for this scenario would be greatly appreciated.
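For reference, here is a minimal sketch of the "smaller units" idea I'm considering: split the batch so each invocation processes one chunk and stays well under the timeout. This is plain Python with no OCI calls; the function names (`chunk_batch`, `process_chunk`) are hypothetical, not part of any OCI API.

```python
# Hypothetical sketch: process one chunk per invocation instead of the
# whole batch, so each run stays well under the function timeout.

def chunk_batch(items, chunk_size):
    """Split a batch into fixed-size chunks; the last chunk may be shorter."""
    return [items[i:i + chunk_size] for i in range(0, len(items), chunk_size)]

def process_chunk(chunk):
    # Placeholder for the real per-record work.
    return ['Processed: ' + str(item) for item in chunk]

if __name__ == '__main__':
    batch = list(range(10))
    for c in chunk_batch(batch, 4):
        print(process_chunk(c))
```

Each chunk could then be dispatched as its own function invocation (for example via a queue or event trigger), so a timeout on one chunk only loses that chunk's work, not the whole batch.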