CodexBloom - Programming Q&A Platform

Handling Concurrent Writes to SQLite with FastAPI and SQLAlchemy

👀 Views: 48 💬 Answers: 1 📅 Created: 2025-06-06
fastapi sqlalchemy sqlite concurrency python

This might be a silly question, but I've searched everywhere and can't find a clear answer. I'm facing a challenge with concurrent writes to an SQLite database using FastAPI and SQLAlchemy. When multiple requests attempt to write to the database at the same time, I get the error `sqlite3.OperationalError: database is locked`. I understand that SQLite has limitations with concurrent write access, but I need to support high concurrency in my application.

I've tried implementing a simple retry mechanism using a decorator that catches the locking error and retries the operation, but it doesn't resolve the issue reliably. Here's a simplified version of my code:

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker, Session
import functools
import time

DATABASE_URL = "sqlite:///./test.db"

# check_same_thread=False is required because FastAPI may run the endpoint
# on a different thread than the one that created the connection.
engine = create_engine(DATABASE_URL, connect_args={"check_same_thread": False})
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
Base = declarative_base()


class Item(Base):
    __tablename__ = "items"

    id = Column(Integer, primary_key=True, index=True)
    name = Column(String, index=True)


Base.metadata.create_all(bind=engine)

app = FastAPI()


class ItemCreate(BaseModel):
    # Request body schema -- FastAPI can't accept the ORM model directly.
    name: str


def retry_on_locked(func):
    # functools.wraps preserves the original signature so FastAPI still
    # sees the request-body parameter of the wrapped endpoint.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        for _ in range(5):  # retry up to 5 times
            try:
                return func(*args, **kwargs)
            except Exception as e:
                if "database is locked" in str(e):
                    time.sleep(0.5)  # wait before retrying
                else:
                    raise
        raise HTTPException(status_code=500, detail="Database is locked")
    return wrapper


@app.post("/items/")
@retry_on_locked
def create_item(payload: ItemCreate):
    db: Session = SessionLocal()
    try:
        item = Item(name=payload.name)
        db.add(item)
        db.commit()
        db.refresh(item)
        # Return plain data to avoid serialization issues with the ORM object.
        return {"id": item.id, "name": item.name}
    finally:
        db.close()
```

While this approach seems logical, it hasn't completely solved the problem. I've also tried tweaking the `timeout` parameter in the `create_engine` call (see the snippet at the end of this post), but it doesn't seem to help much.

I'm looking for suggestions on how to effectively manage concurrent writes in this setup, or any best practices that could help mitigate SQLite's locking issues under FastAPI. This is part of a larger API I'm building, and the issue appeared after updating to Python LTS. Any help would be greatly appreciated!
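For reference, this is roughly how I was passing the `timeout` when I experimented with it. The exact value below is just an example of what I tried, not something I've settled on:

```python
from sqlalchemy import create_engine

DATABASE_URL = "sqlite:///./test.db"

# "timeout" is forwarded to sqlite3.connect(); it controls how many seconds a
# connection waits for the lock to clear before raising "database is locked".
engine = create_engine(
    DATABASE_URL,
    connect_args={
        "check_same_thread": False,
        "timeout": 30,  # seconds; an arbitrary value I experimented with
    },
)
```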