SQLite: Poor performance of window functions on large datasets in data analysis
I'm stuck on something that should probably be simple. I'm working with SQLite 3.36.0, analyzing a dataset of several million rows, and I've hit significant performance degradation when using window functions for running totals and ranking. My query looks something like this:

```sql
SELECT
    id,
    amount,
    SUM(amount) OVER (ORDER BY id) AS running_total,
    RANK() OVER (ORDER BY amount DESC) AS rank
FROM transactions;
```

While this query returns the expected results, it takes an unusually long time to execute, and the time grows quickly as the row count increases. I've tried adding indexes on the `amount` and `id` columns, but that doesn't seem to help much. On a smaller subset of the data (a few thousand rows), the execution time is quite reasonable. I also tried storing intermediate results in a temporary table, but that didn't perform any better.

I suspect SQLite's implementation of window functions may not be optimized for large datasets; I've read that it can struggle here compared to other RDBMSs such as PostgreSQL or MySQL.

Is there a known best practice for optimizing window functions in SQLite on large datasets? Any alternative approaches that achieve the same results more efficiently would be greatly appreciated.

For context: I'm running SQLite on Linux.
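One alternative I've been experimenting with (sketched below in Python with the stdlib `sqlite3` module, against a tiny hypothetical `transactions` table standing in for the real one) is to avoid the window functions entirely: stream the rows in `id` order and accumulate the running total in application code, then derive `RANK()` semantics from a second `ORDER BY amount DESC` scan. Both passes are plain ordered scans that an index can satisfy, so SQLite never has to evaluate a window frame per row. Not sure this is the canonical answer, but it illustrates the kind of restructuring I mean:

```python
import sqlite3

# Hypothetical in-memory stand-in for the real multi-million-row table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany(
    "INSERT INTO transactions (id, amount) VALUES (?, ?)",
    [(1, 50.0), (2, 30.0), (3, 50.0), (4, 20.0)],
)

# Pass 1: stream rows in id order (satisfied by the primary-key index)
# and accumulate the running total in application code.
running = 0.0
totals = {}
for row_id, amount in conn.execute(
    "SELECT id, amount FROM transactions ORDER BY id"
):
    running += amount
    totals[row_id] = running

# Pass 2: reproduce RANK() semantics (ties share a rank, then a gap)
# from a single ORDER BY amount DESC scan.
ranks = {}
prev_amount, rank = None, 0
for pos, (row_id, amount) in enumerate(
    conn.execute("SELECT id, amount FROM transactions ORDER BY amount DESC"),
    start=1,
):
    if amount != prev_amount:
        rank = pos
        prev_amount = amount
    ranks[row_id] = rank

print(totals)  # {1: 50.0, 2: 80.0, 3: 130.0, 4: 150.0}
print(ranks)   # {1: 1, 3: 1, 2: 3, 4: 4}
```

Both passes are O(n) over an index-ordered cursor, and nothing is materialized beyond the two result dicts, so memory stays bounded by the output size rather than by window-frame bookkeeping.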