Kubernetes ConfigMap Update Not Reflected in Running Spring Boot Pods (Helm)
I'm sure I'm missing something obvious here, but updates to a Kubernetes ConfigMap are not being reflected in my running Spring Boot application pods. I have a ConfigMap that holds some application properties, but when I update it and redeploy my Helm chart, the changes don't take effect. I'm using Helm 3.5.0 and Kubernetes v1.21.0.

Here's how the ConfigMap is defined in the Helm chart:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config
data:
  application.properties: |
    spring.datasource.url=jdbc:mysql://db:3306/mydb
    spring.datasource.username=root
    spring.datasource.password=secret
```

I've changed the `application.properties` content inside the ConfigMap, but after running `helm upgrade my-app ./my-chart`, the application still uses the old values. I confirmed that the ConfigMap itself is updated in Kubernetes by running `kubectl get configmap my-app-config -o yaml` and seeing the new values there.

My deployment (pod template excerpted) looks like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
        - name: my-app
          image: my-app:latest
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: "production"
          volumeMounts:
            - name: config-volume
              mountPath: /config
      volumes:
        - name: config-volume
          configMap:
            name: my-app-config
```

I also tried forcing the pods to restart with `kubectl rollout restart deployment my-app`, but the old properties are still being used. I've confirmed there are no caching layers in my application that would prevent it from picking up the changes. The application is on Spring Boot 2.5.4.

Does anyone know what could be causing this? Is there something specific I might be missing in the Helm or Kubernetes configuration that prevents the new ConfigMap values from being used by the running pods?
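In case it helps narrow things down, I can also dump what Helm actually applied for this release and compare it against the chart on disk (same release name and chart path as above):

```bash
# Show the manifests Helm stored for the current release revision
helm get manifest my-app

# Render the chart locally without applying it, to compare against the above
helm template my-app ./my-chart
```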
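And this is roughly what I'd run to check whether the file mounted into a running pod actually contains the new values (the `app=my-app` label selector is just a guess at how my pods are labelled, and the pod name below is a placeholder):

```bash
# Find a pod belonging to the deployment (label selector is an assumption)
kubectl get pods -l app=my-app

# Print the mounted properties file from inside the container
kubectl exec <pod-name> -- cat /config/application.properties
```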