Slow performance when using SQLAlchemy with large datasets in Python 3.10
I'm relatively new to SQLAlchemy, so bear with me. I'm working on a Python 3.10 project that queries a PostgreSQL database through SQLAlchemy, and the application slows down badly when fetching a large number of records (around 100,000 rows) from a single table.

I've written a simple query like this:

```python
from sqlalchemy import create_engine, select
from sqlalchemy.orm import sessionmaker
from mymodels import MyTable

engine = create_engine('postgresql://user:password@localhost/dbname')
Session = sessionmaker(bind=engine)
session = Session()

query = select(MyTable)
results = session.execute(query).scalars().all()  # Loads all ~100,000 rows into memory at once
```

This query takes a long time to execute, and I'm concerned it isn't optimized. I tried using `yield_per()` to fetch results in chunks, but it didn't seem to help much:

```python
results = session.execute(query).scalars().yield_per(1000)
for row in results:
    process(row)  # Some processing function
```

I also tried adding indexes to the relevant columns in the database, but that didn't provide a noticeable improvement either, which makes sense in hindsight since the query reads every row anyway.

I'm wondering if there are any best practices or design patterns I'm missing for working with result sets this large in SQLAlchemy. For context, I'm running Python 3.10 on Linux. Below are two approaches I'm considering trying next, based on my reading of the docs; I'd appreciate feedback on whether either is on the right track.
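First, as I understand the SQLAlchemy 2.0 docs, passing `yield_per` as an execution option on the statement (rather than calling it on the result afterwards) should also enable `stream_results`, so PostgreSQL can use a server-side cursor instead of sending all 100,000 rows up front. Here's a rough sketch of what I mean; `process()` is just a placeholder for my real per-row work:

```python
from sqlalchemy import create_engine, select
from sqlalchemy.orm import Session

from mymodels import MyTable  # my existing model


def process(obj):
    """Placeholder for my real per-row processing."""
    pass


engine = create_engine('postgresql://user:password@localhost/dbname')

with Session(engine) as session:
    # yield_per as an execution option; per my reading of the docs this also
    # turns on stream_results, so rows should be fetched in batches of 1000
    # via a server-side cursor rather than all at once
    stmt = select(MyTable).execution_options(yield_per=1000)
    for obj in session.execute(stmt).scalars():
        process(obj)
```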
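Second, since I don't strictly need full ORM objects for this processing, I'm considering dropping down to Core and selecting only the columns I actually use. The column names here (`id`, `name`) are just stand-ins for my real ones:

```python
from sqlalchemy import create_engine, select

from mymodels import MyTable


def process(row):
    """Placeholder for my real per-row processing."""
    pass


engine = create_engine('postgresql://user:password@localhost/dbname')

# 'id' and 'name' stand in for the handful of columns I actually need
stmt = select(MyTable.id, MyTable.name)

with engine.connect() as conn:
    result = conn.execution_options(
        stream_results=True,  # server-side cursor on PostgreSQL
        max_row_buffer=1000,  # upper bound on rows buffered at a time
    ).execute(stmt)
    for row in result:  # plain Row tuples, no ORM objects constructed
        process(row)
```

Am I heading in the right direction with either of these, or is there a better-established pattern for result sets this large? Any suggestions would be greatly appreciated!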