Kubernetes continues to evolve and mature as the preferred orchestration platform for containerized applications and microservices. Kubernetes 1.35, released in mid-December 2025, promotes several enhancements to general availability, bringing added reliability and efficiency to workloads running on Kubernetes clusters.
This article focuses on five major enhancements that are generally available in Kubernetes 1.35.
In-place pod resource updates
In-place pod resource update is the standout GA feature of the latest Kubernetes release. In prior versions, modifying a container’s CPU or memory requests and limits forced a pod restart, disrupting stateful workloads and long-running processes. With in-place pod vertical scaling, you can resize a running pod’s containers seamlessly, without killing the pod.
Using this capability feels much like editing an existing pod’s spec. With kubectl patch or kubectl edit, you can change the CPU and memory requests directly. For controllers such as Deployments and StatefulSets, change the pod template in the spec and apply it to make the new values persistent. Remember that this capability covers only CPU and memory; changing other resources, such as ephemeral storage, still forces a pod restart.
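As a minimal sketch, the pod spec below opts its container into restart-free resizes via the resizePolicy field (the pod and container names are hypothetical, chosen for illustration):

```yaml
# Pod spec fragment: resizePolicy controls whether resizing a given
# resource restarts the container or happens in place.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app            # hypothetical name
spec:
  containers:
  - name: app
    image: nginx:1.27
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired   # resize CPU in place, no restart
    - resourceName: memory
      restartPolicy: NotRequired   # resize memory in place, no restart
    resources:
      requests:
        cpu: 500m
        memory: 256Mi
      limits:
        cpu: "1"
        memory: 512Mi
```

A resize is then applied through the pod’s resize subresource, for example `kubectl patch pod demo-app --subresource resize --type merge -p '{"spec":{"containers":[{"name":"app","resources":{"requests":{"cpu":"750m"}}}]}}'` (the `--subresource` flag assumes a recent kubectl; verify against your client version).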
This feature has a significant impact on pod vertical autoscaling. When integrated with Vertical Pod Autoscaling (VPA), a pod can be vertically scaled based on real-time metrics from sources such as the Metrics Server or Prometheus. It’s especially useful for managing stateful services, where restarts trigger data rebalancing or failovers. For AI and machine learning (ML) workloads, it helps preserve in-memory caches and model checkpoints, minimizing interruptions during training, fine-tuning, and inference.
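A hedged sketch of the VPA side of this integration follows; the InPlaceOrRecreate update mode name is taken from the upstream autoscaler project and is an assumption here, so verify it against your installed VPA version (object and container names are hypothetical):

```yaml
# VerticalPodAutoscaler that prefers in-place resizes and falls back
# to recreating pods only when an in-place update is not possible.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: demo-app-vpa          # hypothetical name
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-app            # hypothetical target workload
  updatePolicy:
    updateMode: "InPlaceOrRecreate"  # assumption: requires a VPA release with in-place support
  resourcePolicy:
    containerPolicies:
    - containerName: app
      minAllowed:
        cpu: 250m
        memory: 128Mi
      maxAllowed:
        cpu: "2"
        memory: 1Gi
```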
DevOps professionals should be aware of this feature’s limitations. The host node must have sufficient free resources before a pod’s CPU or memory allocation can be scaled up, and overcommitting resources may lead to out-of-memory (OOM) kills or pod eviction. Use commands like kubectl describe pod to check the status of a resize, and pair the feature with the Cluster Autoscaler for proactive node provisioning.
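The checks above can be sketched as the following commands against a live cluster (the pod name is hypothetical; the exact condition types reported may vary by Kubernetes version):

```shell
# Inspect the outcome of a resize request.
kubectl describe pod demo-app   # look for resize-related conditions and events

# Query the pod's status conditions directly, e.g. for a pending or
# in-progress resize.
kubectl get pod demo-app -o jsonpath='{.status.conditions}'
```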
Fine-grained supplemental group control
Security in shared Kubernetes clusters depends on effective management of group permissions for file access. The supplementalGroupsPolicy field, now GA, gives precise control over how supplemental Unix groups are attached to the container processes in a pod.
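As a sketch, the pod below uses the Strict policy so that only the groups declared in the pod spec are attached to container processes, rather than also merging group memberships defined in the image’s /etc/group (the pod name and image are hypothetical; Strict also requires a container runtime that supports this policy):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: strict-groups-demo     # hypothetical name
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    supplementalGroups: [4000]
    supplementalGroupsPolicy: Strict   # only the groups above are attached;
                                       # the default, Merge, also merges groups
                                       # from /etc/group in the image
  containers:
  - name: app
    image: busybox:1.36                # hypothetical image
    command: ["sh", "-c", "id; sleep 3600"]
```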