CodexBloom - Programming Q&A Platform

Azure Kubernetes Service not scaling pods as expected despite HPA configuration

👀 Views: 117 đŸ’Ŧ Answers: 1 📅 Created: 2025-06-10
azure kubernetes hpa yaml

I'm stuck on something that should probably be simple. I'm running an application on Azure Kubernetes Service (AKS) and have configured a Horizontal Pod Autoscaler (HPA) to scale pods based on CPU utilization. However, the pods are not scaling as expected even when CPU usage spikes above the defined threshold. I've set the target CPU utilization to 70% and deployed the following HPA configuration:

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

I've checked the metrics server and it seems to be running correctly. The HPA status shows that it is monitoring the deployment, but the number of replicas stays constant at the minimum value. I tried manually scaling the deployment and confirmed that the application can handle more replicas without issues. The metrics-server logs don't show any errors, but I'm still puzzled why the HPA isn't triggering any scaling actions.

Here are the details of my setup:

- AKS version: 1.22.6
- Metrics server version: v0.5.0
- Resource requests/limits for my deployment: 200m CPU requests and 400m CPU limits

I would appreciate any insights into what might be preventing the HPA from working correctly, or any additional configuration I should consider. For context, I'm working on Windows and am coming from a different tech stack, so I'm still learning Kubernetes YAML. How would you solve this?
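In case it matters, this is roughly how the requests and limits are declared in the Deployment's pod template (a minimal sketch; the container name and image here are placeholders, not my real manifest). My understanding is that the HPA computes CPU utilization as a percentage of `resources.requests.cpu`, so I've made sure the request is set:

```yaml
# Hypothetical excerpt from the Deployment spec; names/images are placeholders.
spec:
  template:
    spec:
      containers:
      - name: my-app          # placeholder container name
        image: my-app:latest  # placeholder image
        resources:
          requests:
            cpu: 200m   # HPA utilization is measured against this request
          limits:
            cpu: 400m
```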