Finally create Kubernetes clusters and deploy workloads in a single Terraform apply
reddit.com·21h·
Discuss: r/kubernetes

The problem: You can’t create a Kubernetes cluster and then add resources to it in the same apply. Providers are configured at the root before resources exist, so you can’t use dynamic outputs (like a cluster endpoint) as provider config.
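The failure mode looks like this (a minimal sketch; the IAM role reference and resource names are illustrative):

```hcl
resource "aws_eks_cluster" "main" {
  name     = "main"
  role_arn = aws_iam_role.cluster.arn # illustrative
  # ...
}

# Terraform accepts these references syntactically, but for a
# brand-new cluster the values are unknown at plan time, and the
# kubernetes provider cannot plan resources against an unknown
# endpoint in the same apply that creates the cluster.
provider "kubernetes" {
  host                   = aws_eks_cluster.main.endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.main.certificate_authority[0].data)
}
```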

The workarounds all suck:

- Two separate Terraform stacks (pain passing values across the boundary)
- null_resource with local-exec kubectl hacks (no state tracking, no drift detection)
- Manual two-phase applies (wait for the cluster, then apply workloads)
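The local-exec hack mentioned above typically looks something like this (illustrative only; the manifests path is a placeholder, and note that nothing applied this way is tracked in state or checked for drift):

```hcl
resource "null_resource" "deploy_workloads" {
  depends_on = [aws_eks_cluster.main]

  provisioner "local-exec" {
    command = <<-EOT
      aws eks update-kubeconfig --name ${aws_eks_cluster.main.name}
      kubectl apply -f ./manifests/
    EOT
  }
}
```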

After years of fighting this, I realized what we needed was inline per-resource connections that sidestep Terraform’s provider model entirely.

So I built a Terraform provider (k8sconnect) that does exactly that:

```hcl
# Create cluster
resource "aws_eks_cluster" "main" {
  # ...
}
```
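An inline per-resource connection then lets each workload resource carry its own cluster credentials, referencing outputs of the cluster created above. A hypothetical sketch of the idea (the `k8sconnect_manifest` name and `cluster_connection` block are assumptions for illustration; consult the provider's registry docs for the actual schema):

```hcl
# Hypothetical shape: no root-level kubernetes provider block needed,
# because connection details live on the resource itself and can
# reference attributes of the cluster created in the same apply.
resource "k8sconnect_manifest" "app" {
  yaml_body = file("${path.module}/app.yaml") # illustrative manifest

  cluster_connection {
    host                   = aws_eks_cluster.main.endpoint
    cluster_ca_certificate = base64decode(aws_eks_cluster.main.certificate_authority[0].data)
  }
}
```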
