CodexBloom - Programming Q&A Platform

Analyzing slow query performance in a legacy PostgreSQL application

πŸ‘€ Views: 347 πŸ’¬ Answers: 1 πŸ“… Created: 2025-10-17
postgresql performance optimization legacy-code sql

I recently started working on a legacy application that uses PostgreSQL 12. We've been getting reports of slow query performance, particularly with a specific report query that joins multiple tables. The report pulls data from a `users` table, an `orders` table, and a `products` table, and the execution time has increased significantly over the past few months. Here's the main part of the SQL query:

```sql
SELECT u.name, p.product_name, COUNT(o.id) AS order_count
FROM users u
JOIN orders o ON u.id = o.user_id
JOIN products p ON o.product_id = p.id
WHERE o.created_at > NOW() - INTERVAL '30 days'
GROUP BY u.name, p.product_name
ORDER BY order_count DESC;
```

I ran `EXPLAIN ANALYZE` on the query and noticed it wasn't using an index on the `created_at` column of the `orders` table, so I created one:

```sql
CREATE INDEX idx_orders_created_at ON orders(created_at);
```

After adding the index, the execution time improved, but it's still not great: around 4 seconds for a dataset of about 50,000 orders.

I also looked into caching results for frequent queries, but I need to understand the trade-offs better. Additionally, I'm considering whether restructuring the database or breaking down the query might help.

Any suggestions on further optimizations? Are there any best practices for dealing with legacy systems like this that could help us improve performance without extensive refactoring? For context, this is PostgreSQL 12 running on Windows 11. I appreciate any insights!
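One idea I've been toying with (not sure whether it's the right move) is replacing the single-column index with a covering index, so the planner can satisfy the date filter and pick up the join columns without extra heap fetches. This is just my guess at the shape; the index name and column choices are mine, and I haven't verified it against `EXPLAIN ANALYZE` yet:

```sql
-- Hypothetical covering index (my assumption, not yet tested):
-- created_at drives the 30-day range filter, and INCLUDE makes the join
-- columns available directly from the index (PostgreSQL 11+ syntax).
CREATE INDEX idx_orders_created_at_covering
    ON orders (created_at)
    INCLUDE (user_id, product_id);
```

Would that be a reasonable next step, or is a plain composite index on `(created_at, product_id)` usually the better starting point?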
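On the caching side, the concrete option I was considering is a materialized view refreshed on a schedule rather than an application-level cache, since the report can tolerate slightly stale numbers. Rough sketch of what I had in mind (object names are placeholders, and the refresh would be driven by cron/pg_cron or the app):

```sql
-- Rough sketch of a materialized view acting as a result cache.
-- NOW() is evaluated at refresh time, so the data is "as of" the last refresh.
CREATE MATERIALIZED VIEW mv_recent_order_counts AS
SELECT u.name, p.product_name, COUNT(o.id) AS order_count
FROM users u
JOIN orders o ON u.id = o.user_id
JOIN products p ON o.product_id = p.id
WHERE o.created_at > NOW() - INTERVAL '30 days'
GROUP BY u.name, p.product_name;

-- Unique index over the grouped columns so the view can be refreshed
-- CONCURRENTLY (i.e., without blocking readers).
CREATE UNIQUE INDEX idx_mv_recent_order_counts
    ON mv_recent_order_counts (name, product_name);

-- Periodic refresh, e.g. every few minutes from a scheduler.
REFRESH MATERIALIZED VIEW CONCURRENTLY mv_recent_order_counts;
```

My main worry is the staleness/consistency trade-off for anything downstream that expects live numbers, plus the extra storage and refresh cost. Is that the usual way people cache report queries like this, or is there a lighter-weight pattern?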