Difficulty Using tf.keras.layers.experimental.preprocessing.RandomFlip with Custom Datasets in TensorFlow 2.12
I'm relatively new to this, so bear with me. I'm trying to use `tf.keras.layers.experimental.preprocessing.RandomFlip` to augment the images of a custom dataset while training a model in TensorFlow 2.12. However, the images don't seem to be flipped as expected: they appear unchanged after the augmentation layer is applied. The dataset is created with `tf.data.Dataset.from_tensor_slices`, and I apply the augmentation both in a `map` step and in the model definition. Here's the relevant part of my code:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Assuming images is a NumPy array of shape (num_samples, height, width, channels)
images = ...  # Your image data
labels = ...  # Corresponding labels

# Create a custom dataset
dataset = tf.data.Dataset.from_tensor_slices((images, labels))

def preprocess(image, label):
    image = layers.experimental.preprocessing.RandomFlip('horizontal')(image)
    return image, label

dataset = dataset.map(preprocess)

# Create model
model = models.Sequential([
    layers.Input(shape=(height, width, channels)),
    layers.experimental.preprocessing.RandomFlip('horizontal'),
    layers.Conv2D(32, (3, 3), activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(num_classes, activation='softmax')
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train
model.fit(dataset.batch(32), epochs=5)
```

When I run this, I expect the images to be randomly flipped during training. However, the output shows that the images are not being changed at all; they look exactly the same as the input images. I've confirmed that the dataset is set up correctly and that the preprocessing function is being called, but the flip never seems to apply. I've also tried moving the `RandomFlip` layer out of the preprocessing function and applying it only in the model definition, but the results are the same.
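To sanity-check the layer in isolation, I also ran the small snippet below on a dummy batch. It uses the non-experimental `tf.keras.layers.RandomFlip` alias (which I understand is the same layer) and passes `training=True` explicitly, since my reading of the docs is that augmentation layers are no-ops at inference time:

```python
import numpy as np
import tensorflow as tf

# Dummy batch: 16 distinct 4x4 single-channel images (NHWC), so flips are detectable
batch = tf.reshape(tf.range(16 * 16, dtype=tf.float32), (16, 4, 4, 1))

# Non-experimental alias of the same layer
flip = tf.keras.layers.RandomFlip('horizontal', seed=42)

# Force training mode explicitly; augmentation layers pass inputs through otherwise
out = flip(batch, training=True).numpy()
orig = batch.numpy()

# Every output image should be either untouched or flipped left-right (width axis)
for i in range(len(orig)):
    same = np.array_equal(out[i], orig[i])
    flipped = np.array_equal(out[i], orig[i, :, ::-1, :])
    assert same or flipped

print('any image flipped:',
      any(not np.array_equal(out[i], orig[i]) for i in range(len(orig))))
```

With `training=True` this behaves as I'd expect on my machine, which makes me suspect the problem is how or when the layer is being called in my pipeline rather than the layer itself.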
Could this be an issue with the input data format, or a specific behavior of the augmentation layer? For context, I'm using Python on Debian. Has anyone dealt with something similar, and what's the best practice here? Any help would be greatly appreciated!