Incorrectly Calculating Longest Common Subsequence in Python - Returns Suboptimal Result
I'm working on a project and hit a roadblock, and I haven't been able to find a clear answer. I'm trying to implement the Longest Common Subsequence (LCS) algorithm in Python, but the output seems incorrect for certain test cases. Specifically, when I test it with the strings 'AGGTAB' and 'GXTXAYB', it returns the length of the LCS as 4, while I believe it should be 5 ('GTAB'). I've implemented a dynamic programming solution, but I suspect there's an issue in how I'm updating my 2D DP table.

Here's the code I've written:

```python
def lcs(X, Y):
    m = len(X)
    n = len(Y)
    # Create a table to store lengths of longest common subsequence.
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        for j in range(n + 1):
            if i == 0 or j == 0:
                L[i][j] = 0
            elif X[i - 1] == Y[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    return L[m][n]


X = 'AGGTAB'
Y = 'GXTXAYB'
print(lcs(X, Y))  # Expected output: 5
```

I've double-checked the logic, but I can't seem to find where I might be going wrong. My environment is Python 3.9. When I run the code with shorter strings like 'ABC' and 'AC', it returns 2 as expected, so I believe the basic logic is sound. However, I'm confused about why it fails for longer strings.

Any insights on what could be wrong, or on a specific edge case I might be missing, would be greatly appreciated! This is part of a larger web app I'm building.
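In case it helps narrow this down, here's a small debugging sketch I put together (the `lcs_backtrack` helper is just something I added for this post, not part of my actual code): it rebuilds the same table as `lcs()` and then backtracks through it to see which characters end up being matched.

```python
def lcs_backtrack(X, Y):
    """Debugging helper: build the same DP table as lcs() and then
    walk it backwards to recover one longest common subsequence."""
    m, n = len(X), len(Y)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])

    # Walk from the bottom-right corner back toward the top-left,
    # collecting the characters that were matched along the way.
    result = []
    i, j = m, n
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            result.append(X[i - 1])
            i -= 1
            j -= 1
        elif L[i - 1][j] >= L[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return ''.join(reversed(result))


print(lcs_backtrack('AGGTAB', 'GXTXAYB'))
```

For what it's worth, this prints `GTAB` for me, which is only four characters, so I'd also appreciate a sanity check on what the correct LCS length for these two strings actually is.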