OpenCV: Distorted Disparity Map in Stereo Matching for 3D Object Recognition on High-Resolution Images
Hey everyone, I'm running into an issue that's driving me crazy. I'm coming from a different tech stack and still fairly new to Python and OpenCV, and I'm working on an application that recognizes 3D objects using stereo matching in OpenCV 4.5.1. I have set up my stereo camera and calibrated it successfully using `stereoCalibrate` and `stereoRectify`. However, when I process high-resolution images (around 4000x3000 pixels), I see significant distortion in the disparity map, which leads to incorrect depth results. I've tried both `StereoBM` and `StereoSGBM` to compute the disparity, but the issue persists.

My disparity calculation looks like this:

```python
import cv2
import numpy as np

# Load images
imgL = cv2.imread('left_image.jpg')
imgR = cv2.imread('right_image.jpg')

# Create the SGBM matcher; P1/P2 follow the usual
# 8 * channels * blockSize**2 and 32 * channels * blockSize**2 heuristic
stereo = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=16 * 5,
    blockSize=5,
    P1=8 * 3 * 5**2,
    P2=32 * 3 * 5**2,
    disp12MaxDiff=1,
    uniquenessRatio=15,
    speckleWindowSize=0,
    speckleRange=0
)

# compute() returns fixed-point disparities (int16, scaled by 16)
disparity = stereo.compute(imgL, imgR).astype(np.float32) / 16.0

# Normalize the disparity map to [0, 255] for visualization
normalized_disparity = cv2.normalize(disparity, None, 0, 255,
                                     cv2.NORM_MINMAX).astype(np.uint8)

# Display disparity map
cv2.imshow('Disparity Map', normalized_disparity)
cv2.waitKey(0)
cv2.destroyAllWindows()
```

Despite adjusting parameters like `numDisparities` and `blockSize`, the resulting disparity map appears smeared and lacks the expected depth accuracy. I suspect the problem is related to either the calibration or the conditions under which the images are taken, although the images are well lit and the objects are clearly visible. For reference, my rectification step and a downscaling idea I'm considering are sketched below.

Any insights on how to improve the disparity map quality, or common pitfalls to avoid, would be greatly appreciated.
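In case it's relevant, here's roughly how I go from the calibration outputs to rectified images before matching. This is a simplified sketch: the `stereo_calib.npz` file and the array names (`K1`, `D1`, `R_L`, `P_L`, and so on) are placeholders for the values my calibration script saves from `stereoCalibrate`/`stereoRectify`.

```python
import cv2
import numpy as np

# Placeholder file holding the outputs of stereoCalibrate/stereoRectify:
#   K1, D1 / K2, D2 : per-camera intrinsics and distortion coefficients
#   R_L, R_R        : rectification rotations (R1, R2 from stereoRectify)
#   P_L, P_R        : rectified projection matrices (P1, P2 from stereoRectify)
calib = np.load('stereo_calib.npz')

image_size = (4000, 3000)  # (width, height), same size used during calibration

# Build the undistort + rectify lookup maps once per camera ...
mapLx, mapLy = cv2.initUndistortRectifyMap(
    calib['K1'], calib['D1'], calib['R_L'], calib['P_L'],
    image_size, cv2.CV_32FC1)
mapRx, mapRy = cv2.initUndistortRectifyMap(
    calib['K2'], calib['D2'], calib['R_R'], calib['P_R'],
    image_size, cv2.CV_32FC1)

# ... then remap every raw capture before it goes into the matcher
imgL = cv2.imread('left_image.jpg')
imgR = cv2.imread('right_image.jpg')
rectL = cv2.remap(imgL, mapLx, mapLy, cv2.INTER_LINEAR)
rectR = cv2.remap(imgR, mapRx, mapRy, cv2.INTER_LINEAR)
```

After the remap, corresponding points should sit on the same image row, which is easy to spot-check by overlaying a few horizontal lines on the rectified pair.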
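One thing I haven't ruled out is that the disparity search range is simply too small at full resolution: `numDisparities=16*5` only covers 80 pixels, and on a 4000-pixel-wide image the true disparity of nearby objects can easily exceed that, which would produce exactly this kind of smearing. Here's a minimal sketch of the downscale-then-match idea I'm considering (the 0.25 factor is just an example; since disparity is measured in pixels, the result has to be divided by the scale factor afterwards):

```python
import cv2
import numpy as np

# rectL, rectR and the SGBM matcher `stereo` come from the snippets above
scale = 0.25  # example factor: 4000x3000 -> 1000x750

smallL = cv2.resize(rectL, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
smallR = cv2.resize(rectR, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)

# Match at reduced resolution, where the 80-pixel search range goes further
disparity_small = stereo.compute(smallL, smallR).astype(np.float32) / 16.0

# Disparity is in pixels, so convert back to full-resolution pixel units
# before turning it into depth
disparity_full = disparity_small / scale
```

Does that scaling logic make sense, or is there a more standard way to handle stereo pairs this large?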