CodexBloom - Programming Q&A Platform

Kubernetes Pod IP addresses not updating after node drain in v1.28

👀 Views: 94 💬 Answers: 1 📅 Created: 2025-06-14
kubernetes pod-management networking yaml

I'm working through a tutorial and I'm running into a scenario where the IP address of a Kubernetes pod does not update after I drain a node in my cluster running version 1.28. I have a deployment with a few replicas, and when I drain the node the pods are rescheduled onto a different node, but they appear to retain their original IP addresses. This is causing problems with my service, which relies on those IPs for health checks. I've tried manually terminating the pods to see whether they would receive new IPs on restart, but the same IPs are reused.

This is my deployment configuration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-container
          image: my-image:latest
          ports:
            - containerPort: 80
```

I also verified that I don't have a static IP assigned in the pod spec and that my `kubelet` configuration does not enforce IP retention. The drain command I used was:

```bash
kubectl drain my-node --ignore-daemonsets --delete-local-data
```

I expected the pods to be recreated with new IPs on the available nodes, but they still show the old IPs in the output of `kubectl get pods -o wide`. In addition, the pod logs show them trying to connect to services using the old IPs, which leads to failures. Any insight into why this is happening, or how I can force new IPs for my pods after a node drain, would be greatly appreciated. This is my first time working with YAML.
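For reference, I haven't pasted my Service manifest, but the shape I'm assuming is a plain selector-based Service along these lines (the name and port here are placeholders, not my exact config):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc          # placeholder name, not my real Service
spec:
  selector:
    app: my-app             # matches the Deployment's pod template labels
  ports:
    - port: 80
      targetPort: 80        # same port the container exposes
```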
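And these are the kinds of commands I've been running to inspect the pod IPs and the Service endpoints after the drain (again, `my-app-svc` is a placeholder for my actual Service name):

```bash
# List the pods with their node assignment and current pod IP
kubectl get pods -l app=my-app -o wide

# Print just the pod names and the IPs reported in .status.podIP
kubectl get pods -l app=my-app \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.podIP}{"\n"}{end}'

# Check which addresses the Service's Endpoints object currently lists
kubectl get endpoints my-app-svc -o yaml
```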