Inconsistent results when calculating the mean of a masked array in NumPy
I'm trying to compute the mean of the values in a 2D NumPy array where certain entries are masked. I'm using `numpy.ma.masked_array` to mask out the unwanted values, but after hours of debugging I still get output I can't explain.

My array looks like this:

```python
import numpy as np

data = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
masked_data = np.ma.masked_array(data, mask=[[0, 0, 0], [0, 1, 0], [0, 0, 1]])
```

When I calculate the mean with `numpy.ma.mean`, I expect it to disregard the masked values (5 and 9). However, when I run:

```python
mean_value = np.ma.mean(masked_data)
print(mean_value)
```

I get `5.0`, which is the mean of all nine values (45 / 9). The mean of just the unmasked values `[1, 2, 3, 4, 6, 7, 8]` should be 31 / 7 ≈ 4.43, so it looks as though the mask is being ignored.

After checking the documentation, I also tried specifying the `axis` parameter:

```python
mean_value_axis0 = np.ma.mean(masked_data, axis=0)
mean_value_axis1 = np.ma.mean(masked_data, axis=1)
print(mean_value_axis0, mean_value_axis1)
```

The per-axis results still didn't match what I anticipated.

I'm on NumPy 1.21.0 and Python 3.10. For context, this is part of a production web app, and I'm new to Python, coming from a different tech stack. Is there something I'm missing in the setup, or an edge case with masked arrays that I should be aware of? Any help would be greatly appreciated!
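To help narrow things down, here is the sanity check I've been using. As I understand it, `compressed()` returns just the unmasked entries as a flat array, so its mean should match what `np.ma.mean` reports:

```python
import numpy as np

data = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
masked_data = np.ma.masked_array(data, mask=[[0, 0, 0], [0, 1, 0], [0, 0, 1]])

# compressed() drops the masked entries and returns a flat array
print(masked_data.compressed())         # expecting [1 2 3 4 6 7 8]

# Mean of the unmasked values, computed directly for comparison
print(masked_data.compressed().mean())  # expecting 31 / 7 ≈ 4.4286

# The masked mean should agree with the line above
print(np.ma.mean(masked_data))          # this is where I see 5.0 instead
```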