Slow PostgreSQL query with large JOINs despite indexes
I'm experiencing poor performance with a complex query in PostgreSQL that involves multiple JOINs across large tables. My query looks something like this:

```sql
SELECT a.id, a.name, b.amount, c.date
FROM accounts a
JOIN transactions b ON a.id = b.account_id
JOIN logs c ON b.id = c.transaction_id
WHERE b.amount > 1000
  AND c.date BETWEEN '2023-01-01' AND '2023-12-31';
```

The `accounts` table has about 1 million rows, `transactions` has about 10 million, and `logs` has around 20 million. I've added indexes on the `account_id` and `transaction_id` columns, but execution times still often exceed 10 seconds.

Running `EXPLAIN ANALYZE` shows that the planner is choosing sequential scans on the `transactions` and `logs` tables, which I suspect is the main reason for the poor performance. Here's a snippet of the output:

```
Seq Scan on transactions b  (cost=0.00..345000.00 rows=10000000 width=12)
  Filter: (amount > 1000)
```

I've also tried reordering my WHERE clauses and rewriting the query with CTEs, but neither yielded any improvement.

I'm running PostgreSQL 14.5 on Ubuntu, and this query is on the hot path of a service. Are there any specific indexing strategies or query-rewriting techniques I could apply to optimize this further? Any advice would be greatly appreciated!
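For reference, the indexes I mentioned are plain B-tree indexes on the join columns only (the index names here are just what I happened to use):

```sql
-- Single-column B-tree indexes on the join keys
CREATE INDEX idx_transactions_account_id ON transactions (account_id);
CREATE INDEX idx_logs_transaction_id ON logs (transaction_id);

-- Note: I have NOT indexed the columns used in the WHERE clause,
-- i.e. nothing like the following exists yet:
-- CREATE INDEX idx_transactions_amount ON transactions (amount);
-- CREATE INDEX idx_logs_date ON logs (date);
```

I ran `ANALYZE` on all three tables after creating these, so the planner statistics should be up to date.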