MATLAB 'fminunc' gradient computation leading to incorrect convergence in R2023b
I'm running into an issue with the `fminunc` function in MATLAB R2023b. When I minimize a custom nonlinear function, the solver reports convergence, but the returned function value doesn't match the expected minimum. I've traced the problem to the gradient computation: a numerical-differentiation check of the gradient doesn't align with the analytical derivative I supply.

Here's a simplified version of the function I'm trying to minimize:

```matlab
function [f, grad] = myObjective(x)
    f = (x(1) - 2)^2 + (x(2) - 3)^2;  % simple quadratic with minimum at [2; 3]
    if nargout > 1                    % gradient requested
        grad = [2 * (x(1) - 2); 2 * (x(2) - 3)];
    end
end
```

I'm calling `fminunc` as follows (`GradObj` is the legacy name for what is now `SpecifyObjectiveGradient`):

```matlab
options = optimoptions('fminunc', 'Algorithm', 'quasi-newton', ...
    'GradObj', 'on', 'Display', 'iter');
[xOpt, fval] = fminunc(@myObjective, [0, 0], options);
```

The solver appears to converge near `[2, 3]`, but the returned function value is unexpectedly high, e.g. `fval = 10` instead of `0`. Adjusting the `TolFun` and `MaxIter` options didn't resolve the issue. I also printed the computed gradients during the optimization, and they fluctuate wildly, which I suspect is making the optimizer behave erratically.

Is there something I'm overlooking in how MATLAB handles user-supplied gradients in `fminunc`, or is there a common pitfall I'm falling into? I'm on Ubuntu 22.04 with MATLAB R2023b. Any insights or suggestions would be appreciated.
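For completeness, here's roughly the central-difference check I mentioned above (a minimal sketch; the test point `x0` and step `h` are arbitrary choices of mine):

```matlab
% Central-difference check of the analytical gradient at a test point
x0 = [0; 1];                          % arbitrary test point (column vector)
[~, gradAnalytic] = myObjective(x0);  % analytical gradient from my function
h = 1e-6;                             % finite-difference step
gradNumeric = zeros(numel(x0), 1);
for k = 1:numel(x0)
    e = zeros(numel(x0), 1);
    e(k) = h;                         % perturb one coordinate at a time
    gradNumeric(k) = (myObjective(x0 + e) - myObjective(x0 - e)) / (2 * h);
end
fprintf('max gradient mismatch: %g\n', max(abs(gradNumeric - gradAnalytic)));
```

For a smooth quadratic like this I'd expect the mismatch to be tiny (near rounding error), which is why the disagreement I'm seeing confuses me.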
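And this is how I've been printing the gradients during the run: a small output function (`gradLogger` is my own helper, attached through the standard `OutputFcn` option):

```matlab
function stop = gradLogger(x, optimValues, state)
% Log f and the gradient norm at each iteration of fminunc
    stop = false;                     % never request early termination
    if strcmp(state, 'iter')
        [~, g] = myObjective(x);      % re-evaluate my analytical gradient
        fprintf('iter %3d: f = %g, ||grad|| = %g\n', ...
            optimValues.iteration, optimValues.fval, norm(g));
    end
end
```

I hook it in with `options = optimoptions(options, 'OutputFcn', @gradLogger);` before calling `fminunc`.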
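One thing I haven't tried yet: if I'm reading the R2023b docs correctly, `fminunc` can compare my gradient against its own finite-difference estimate via the `CheckGradients` option (the docs also mention a newer `checkGradients` function). With the current option names, I believe the call would look something like this:

```matlab
% Current (non-legacy) option names; CheckGradients asks fminunc to
% compare my analytical gradient with a finite-difference estimate
options = optimoptions('fminunc', ...
    'Algorithm', 'quasi-newton', ...
    'SpecifyObjectiveGradient', true, ...
    'CheckGradients', true, ...
    'Display', 'iter');
[xOpt, fval, exitflag, output] = fminunc(@myObjective, [0, 0], options);
```

Would that catch the kind of mismatch I'm seeing, and is capturing `exitflag` and `output` the right way to find out why the solver actually stopped?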