PostgreSQL query performance degradation after adding new index on large table
I recently added an index on a large table in my PostgreSQL database to improve query performance. The table contains over 1 million rows, and the index is on a column that is frequently used in `WHERE` clauses. After creating the index I expected better performance, but my query times have actually increased significantly. For example, the following query previously took about 200 ms and now takes over 1 second:

```sql
SELECT * FROM orders WHERE customer_id = 12345;
```

Before adding the index, `EXPLAIN ANALYZE` showed a sequential scan:

```sql
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 12345;
```

I created the index with:

```sql
CREATE INDEX idx_customer_id ON orders(customer_id);
```

I expected the execution plan to switch to an index scan, but it still shows a sequential scan:

```sql
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 12345;
```

The output indicates that the planner is still opting for the sequential scan despite the new index being available. I have run `VACUUM` and `ANALYZE` to make sure the statistics are up to date, but performance hasn't improved. I also checked for any filters or constraints that might affect the planner's decision.

I am running PostgreSQL 13.3. Could there be an issue with the way the index was created, or is there something else I'm overlooking? Any guidance would be appreciated!
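In case the output helps with diagnosis, one check I could run (sketched below, scoped to the current session only so other queries are unaffected) is to temporarily discourage sequential scans and re-run the plan, to see whether the planner is able to use the new index at all and what cost it assigns to the index scan:

```sql
-- Discourage sequential scans for this session only; the planner
-- will then prefer the index if it is usable for this query.
SET enable_seqscan = off;

EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 12345;

-- Restore the default planner behavior.
RESET enable_seqscan;
```

My understanding is that if the plan still shows a sequential scan even with `enable_seqscan` off, the index cannot be used for this query (e.g. a type mismatch on `customer_id`), whereas if it switches to an index scan with a higher estimated cost, the planner simply considers the sequential scan cheaper. Is that a reasonable way to narrow this down?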