Kubernetes has released version 1.35, named "Timbernetes" after the World Tree. This release brings several long-awaited features that are now stable and ready for production. It also introduces powerful new capabilities in alpha for those exploring what's next. Alongside this, continued infrastructure cleanup shows how the ecosystem is maturing to better support modern workloads and everyday operations.

This release introduces changes across stable graduations, beta features and new alpha capabilities. It also includes a set of deprecations and removals that highlight how the Kubernetes foundations continue to evolve for future-ready platform behaviour.

In this post, we'll explore the most impactful changes in Kubernetes 1.35, what they enable for clusters and workloads, and what operators and developers should consider before upgrading.

Important Features in Kubernetes 1.35

1. In-Place Pod Resource Resize Reaches Stable (GA)

In-place pod resource resizing is now stable. In previous Kubernetes versions, adjusting the CPU or memory allocated to a running container required replacing the Pod. This "kill and recreate" approach often disrupted stateful services, long-running batch jobs, and applications that maintain internal state or warm caches. With 1.35, you can modify these resources without replacing the Pod itself, which is a fundamental improvement to how clusters handle resource changes.

Suppose a Pod named webapp needs more memory based on observed usage. You can update it directly:

```bash
kubectl patch pod webapp --subresource=resize -p \
  '{"spec": {"containers": [{"name": "app", "resources": {"requests": {"cpu": "500m", "memory": "1Gi"}, "limits": {"cpu": "1", "memory": "2Gi"}}}]}}'
```

With this command, the kubelet updates the cgroups for the container without terminating it. This approach significantly improves uptime and responsiveness for production traffic. The following are some benefits of in-place pod resource resize:

- Applications stay online during resizing.
- Resource management becomes more flexible and adaptive.
- Less need to tolerate overprovisioning.
- Improved cluster utilization and cost savings.

2. OCI Image Volumes Graduate to Stable

The ability to mount an OCI image as a volume inside a Pod is now stable. Traditionally, mounting data packaged in an OCI image required either rebuilding your application image or using init containers to fetch files before the main container started. This stable feature simplifies workflows such as delivering static assets, configuration bundles or AI models directly as volume content.

The following is a basic manifest showing how to mount an OCI image volume:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-image-volume
spec:
  containers:
  - name: app
    image: my-app:latest
    volumeMounts:
    - name: data-volume
      mountPath: /data        # where the image content appears inside the container
  volumes:
  - name: data-volume
    image:
      reference: registry.example.com/my-data-image:latest
```

This approach decouples data packaging from your application image, reduces complexity and avoids init container overhead. This feature can be helpful in the following ways:

- Simplifies delivery of static data into Pods.
- Avoids redundant image rebuilds.
- Supports cleaner CI/CD pipelines.
- Helps with independent versioning of data and application logic.

3. Dramatic Improvements for AI and Batch Workloads

Version 1.35 also enhances the Dynamic Resource Allocation (DRA) ecosystem and adds scheduling capabilities useful for AI and batch workloads. Earlier, this space was handled by device plugins, but DRA now allows making structured claims on hardware resources, such as requesting GPUs with specific attributes or even fractional slices of a device; a minimal example follows the list below.

This capability matters for heterogeneous and high-performance compute environments where:

- GPU hardware may vary in memory size, cache configuration or accelerators
- AI workloads require precise control over hardware access
- Edge and mixed hardware environments serve advanced inference or training logic
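To make this concrete, here is a minimal sketch of a DRA claim. It assumes the beta resource.k8s.io/v1beta1 API and a hypothetical gpu.example.com DeviceClass provided by a vendor DRA driver; the exact API version, class names and device attributes depend on your cluster version and the driver you install.

```yaml
# Sketch only: a ResourceClaimTemplate requesting one device from a
# hypothetical DeviceClass, consumed by a Pod through resourceClaims.
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaimTemplate
metadata:
  name: single-gpu
spec:
  spec:
    devices:
      requests:
      - name: gpu
        deviceClassName: gpu.example.com   # assumed DeviceClass from your DRA driver
---
apiVersion: v1
kind: Pod
metadata:
  name: inference
spec:
  resourceClaims:
  - name: gpu
    resourceClaimTemplateName: single-gpu
  containers:
  - name: app
    image: inference-app:latest            # placeholder image
    resources:
      claims:
      - name: gpu                          # ties the container to the claim above
```

With a template like this, each Pod created from it gets its own ResourceClaim, and the scheduler works together with the DRA driver to pick a device that satisfies the request, rather than relying on opaque device-plugin counters.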
4. PreferSameZone Traffic Distribution

This release improves how Services distribute traffic across endpoints by making traffic locality explicit. The older PreferClose option was vague in meaning, so 1.35 replaces it with a clearer PreferSameZone policy. This setting ensures that requests tend to stay within the same zone when possible, which reduces cross-zone hops that increase latency and network cost.

This feature benefits applications deployed across multiple zones where latency, cost and network traffic matter. Legacy PreferClose behaviour remains supported as an alias, but the recommendation is to move to PreferSameZone for clarity and consistency.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  selector:
    app: webapp          # assumes backend Pods carry the label app: webapp
  ports:
  - port: 80
    targetPort: 8080
  trafficDistribution: PreferSameZone
```

Kubernetes will try to route traffic from a client Pod to backend Pods within the same availability zone. It will fall back to other zones only if no healthy endpoints are available locally.

5. Modernization of Interactive Commands via WebSockets

Kubernetes 1.35 replaces the older SPDY protocol used for interactive operations such as kubectl exec, attach and port-forward with WebSockets. This update aligns Kubernetes with modern networking stacks and improves interoperability with proxies, load balancers and security tools.

To support interactive sessions, you now need to grant explicit create permissions on the corresponding subresources in your RBAC policies. Failing to update RBAC rules may block users from performing these common tasks.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-interactive
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["pods/exec", "pods/attach", "pods/portforward"]
  verbs: ["create"]
```

This Role grants a user the ability to initiate interactive sessions (exec, attach and port-forward) on Pods.
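The Role above only defines permissions; users receive them through a binding. The following sketch binds it to a hypothetical dev-team group, which is a placeholder for whatever subjects exist in your cluster:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-interactive-binding
  namespace: default
subjects:
- kind: Group
  name: dev-team                 # placeholder subject
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-interactive          # the Role defined above
  apiGroup: rbac.authorization.k8s.io
```

After applying both objects, an affected user can confirm access with kubectl auth can-i create pods --subresource=exec --namespace default, which should return yes.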
6. Gang Scheduling Support (Alpha)

This release introduces gang scheduling as an alpha feature aimed at workloads that require multiple Pods to start simultaneously. Traditional scheduling evaluates each Pod independently, which can result in partial allocations that make distributed jobs wait indefinitely for full resources. Gang scheduling lets the scheduler treat a group of Pods as a single unit, ensuring that either all of them are scheduled together or none are.

This is particularly relevant for AI training jobs, MPI workloads and other distributed tasks where partial resource allocation is not helpful. Instead of launching subsets of Pods and waiting inefficiently for capacity, gang scheduling improves resource planning and reduces wasted scheduling cycles.

Since this feature is alpha, it must be enabled using feature gates and tested thoroughly before use in production.

```yaml
# PodGroup API group, field names and grouping labels vary between
# gang-scheduling implementations; adjust them for the scheduler you deploy.
apiVersion: scheduling.sigs.k8s.io/v1alpha1
kind: PodGroup
metadata:
  name: mpi-training
spec:
  minAvailable: 2
---
apiVersion: v1
kind: Pod
metadata:
  name: mpi-training-worker-1
  labels:
    pod-group: mpi-training      # associates the Pod with its PodGroup
spec:
  schedulerName: volcano
  containers:
  - name: worker
    image: mpi-training:latest
    resources:
      requests:
        memory: "4Gi"
---
apiVersion: v1
kind: Pod
metadata:
  name: mpi-training-worker-2
  labels:
    pod-group: mpi-training
spec:
  schedulerName: volcano
  containers:
  - name: worker
    image: mpi-training:latest
    resources:
      requests:
        memory: "4Gi"
```

In this example, the PodGroup defines the gang of Pods, with minAvailable: 2 ensuring that both Pods must be scheduled together. The schedulerName: volcano field directs Kubernetes to use the gang-aware scheduler, so the Pods will wait until sufficient resources are available to schedule the entire group simultaneously.

Deprecations and Breaking Changes in Kubernetes 1.35

The following are the most important deprecations and breaking changes you should understand before moving to Kubernetes 1.35.

1. cgroup v1 Support Has Been Removed

Support for cgroup v1 has been removed in Kubernetes 1.35. All nodes running Kubernetes 1.35 must use cgroup v2.

cgroups are a Linux kernel feature that Kubernetes relies on to enforce CPU, memory and resource limits. For many years, Kubernetes supported both cgroup v1 and cgroup v2 to accommodate older Linux distributions and container runtimes.

With Kubernetes 1.35, nodes that boot with only cgroup v1 enabled will fail to start the kubelet. This means clusters running on older operating systems or outdated container runtimes may experience node failures during upgrade.

The following are some reasons for the removal of cgroup v1:

- cgroup v2 provides better memory accounting and resource isolation.
- Advanced features such as in-place pod resizing depend on cgroup v2 behaviour.

2. IPVS Mode in kube-proxy Is Deprecated

IPVS mode in kube-proxy is deprecated. You should migrate to nftables-based implementations.

Historically, kube-proxy supported multiple backend implementations such as iptables and IPVS. Linux networking has evolved significantly, and nftables has emerged as the modern replacement for both iptables and IPVS. The following are some benefits of nftables:

- nftables provides better performance and simpler rule management.
- Maintaining multiple proxy backends increases complexity and maintenance cost.
- Modern Kubernetes networking solutions increasingly rely on eBPF and nftables.

3. Final Support for containerd 1.x

Kubernetes 1.35 is the final release to support containerd 1.x. You should plan to migrate to containerd 2.0 or later.

Historically, Kubernetes has relied on the containerd 1.x series (including the long-term supported 1.7 release) as the default container runtime. As the Kubernetes and Linux ecosystems have evolved, the container runtime interface and node architecture have advanced, and containerd 2.0 has emerged as the modern, long-term supported runtime. The following are some benefits of moving to containerd 2.0:

- containerd 2.0 provides improved architecture, better performance, and stronger security isolation with native alignment to cgroup v2 and modern kernel capabilities.
- Future Kubernetes releases will require containerd 2.0+, making early migration essential to avoid upgrade blocks and to stay aligned with the Kubernetes node and runtime roadmap.

Major Considerations Before Upgrading to Kubernetes 1.35

The following are several broader considerations that should guide your upgrade strategy:

1. Validate Node and OS Compatibility

This release tightens system requirements, so it is critical to ensure all nodes meet kernel, operating system and runtime expectations. This includes cgroup v2 support, container runtime versions and kernel networking features. A quick way to check the most important prerequisites is shown below.
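As a rough sketch of that validation, the following commands check the cgroup filesystem on a node and the container runtime version reported by each node; adjust them to your own node access patterns and tooling.

```bash
# Run on a node: "cgroup2fs" means cgroup v2 is in use, "tmpfs" indicates cgroup v1
stat -fc %T /sys/fs/cgroup/

# Run from kubectl: list each node's container runtime version (look for containerd 2.x)
kubectl get nodes -o custom-columns=NAME:.metadata.name,RUNTIME:.status.nodeInfo.containerRuntimeVersion
```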
2. Review RBAC Changes for Interactive Commands

Interactive operations such as exec, attach and port-forward have been modernised by moving to WebSocket-based communication. This change introduces stricter permission checks. Clusters with tightly scoped RBAC policies may see users unexpectedly lose access to these commands unless permissions are updated accordingly.

3. Test Resource Management Changes Thoroughly

Features such as in-place pod resizing introduce new behaviour at the kubelet and container runtime level. While this feature is stable, workloads that rely on strict resource assumptions should be tested carefully. Pay particular attention to stateful applications, JVM-based workloads and memory-sensitive services.

4. Upgrade in Phases and Monitor Closely

Finally, treat the upgrade to Kubernetes 1.35 as a phased rollout rather than a single event. Begin with development and staging clusters, then move gradually toward production. Monitor node health, workload behaviour and control plane metrics closely at each stage. Clear rollback plans, strong observability and well-defined success criteria make the upgrade process far more predictable.

Kubernetes 1.35 represents a meaningful step forward in the platform's evolution, combining long-awaited stability improvements with forward-looking capabilities for modern, large-scale and AI-driven workloads. Features such as in-place Pod resizing, OCI image volumes, improved traffic locality and the move to WebSockets modernize core operations, while advancements in scheduling and resource allocation reflect Kubernetes' growing role as a foundation for high-performance and distributed systems.

At the same time, the deprecations and removals in this release underscore a clear direction toward a cleaner and more future-ready stack. The transition to cgroup v2, the deprecation of legacy networking backends, and the final support for containerd 1.x all signal the need for proactive planning and disciplined upgrades. By validating infrastructure readiness and approaching the rollout in phases with strong observability and rollback strategies, teams can adopt Kubernetes 1.35 with confidence and fully realize the benefits of this World Tree release (Timbernetes).

If you found this post helpful and enjoy reading about AWS architecture, DevOps, Containers and infrastructure automation, feel free to connect with me and follow my work on other platforms.