Kubernetes Pod Termination Grace Period Not Respecting Pre-Stop Hook for Cleanup Tasks
I'm working with a scenario where my Kubernetes pods are not respecting the termination grace period when using pre-stop hooks for cleanup tasks. I've defined a pre-stop hook in my Deployment configuration to run a script that cleans up temporary files, but the pod appears to be forcefully terminated before the script has a chance to finish.

The configuration for my Deployment looks like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      terminationGracePeriodSeconds: 30
      containers:
        - name: app-container
          image: my-app-image:latest
          lifecycle:
            preStop:
              exec:
                command: ["/bin/sh", "-c", "sleep 5; /app/cleanup.sh"]
```

When I delete the Deployment with `kubectl delete deployment example-app`, I expect the pre-stop hook to sleep for 5 seconds and then run the cleanup script before the pod is terminated. Instead, the logs show that the cleanup script is interrupted, and I see the following message:

```
context deadline exceeded
```

I have verified that the script executes correctly when run manually and that it's located in the right directory within the container. I've also tried increasing `terminationGracePeriodSeconds` to 60 seconds, but the behavior remains the same.

Is there something I'm missing about how the pre-stop hook interacts with the termination process? Any insights on how to ensure the pre-stop hook completes before the pod is forcefully terminated would be appreciated! I'm running this in Docker containers on Windows 10 and I'm fairly new to Kubernetes YAML, so pointers on the recommended way to handle this would help.
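For context, `/app/cleanup.sh` just removes the app's temporary files. A simplified stand-in (the `/tmp/example-app` path and the log lines are illustrative, not my exact script) looks like this:

```shell
#!/bin/sh
# Simplified stand-in for /app/cleanup.sh (path and contents are illustrative).
# Removes the app's temporary directory and logs progress, so an interrupted
# run is visible in the container logs.
TMP_DIR="${TMP_DIR:-/tmp/example-app}"
mkdir -p "$TMP_DIR"              # ensure the directory exists so rm has work to do
echo "cleanup: removing $TMP_DIR"
rm -rf "$TMP_DIR"
echo "cleanup: done"
```

Run by hand inside the container it finishes in well under a second, so the 30-second grace period should be far more than enough.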