Published 6 minutes ago
Ayush Pande is a PC hardware and gaming writer. When he’s not working on a new article, you can find him with his head stuck inside a PC or tinkering with a server operating system. Besides computing, his interests include spending hours in long RPGs, yelling at his friends in co-op games, and practicing guitar.
Besides housing my self-hosted arsenal, Proxmox serves as a solid testing ground for tinkering with VMs and training my DevOps skills. After all, it’s light enough to run on practically any old hardware, while including advanced features such as clustering and SDN stacks without charging a dime. In fact, I often use Proxmox with different automation services just to get the hang of managing nodes via configuration files.
Terraform is one such DevOps tool, though it’s also pretty useful for provisioning fully-configured virtual guests with the press of a button. Or should I say, the click of a mouse, as I use it with a web UI to automatically deploy test environments on multiple PVE nodes.
I typically use Terraform to provision VMs on Proxmox
It works with cloud-init as well as custom templates
When it comes to creating a Terraform file for Proxmox, you need a couple of things. Leaving the Proxmox credentials aside for a moment, you’ll require a template to base the virtual machine on. I’ve played with both cloud-init configurations and custom templates that I designed from previously-deployed virtual machines, and I prefer using the latter. While cloud-init templates are better for lightweight and disposable VMs, I often use virtual machines with pre-configured desktop environments, network settings, and existing packages to make my projects simpler to build (and troubleshoot when things go wrong).
I’ve gone with a simple Debian VM in the screenshots, but I also keep different Terraform configs in separate directories to cater to the other distributions I love to tinker with. Creating this template was straightforward: I provisioned a VM the traditional way, installed Debian using the Console interface, ejected the boot ISO from the Hardware tab, and hit the Convert to template button after installing Emacs and a couple of other important packages.
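If you’d rather script those last two steps than click through the web UI, the same prep can be done with qm on the Proxmox host itself. The VM ID (9000) and drive slot (ide2) below are placeholders, not the ones from my lab:

```shell
# Eject the installer ISO from the VM's CD-ROM drive (here, VM 9000 on ide2)
qm set 9000 --ide2 none,media=cdrom

# Mark the VM as a template so Terraform can clone it later
qm template 9000
```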
As for the Proxmox node details, I needed an API token, which I created from the API Tokens section within the Datacenter view. With that, I created a .tf file and added the following code to it:
terraform {
  required_providers {
    proxmox = {
      source  = "telmate/proxmox"
      version = "3.0.2-rc04"
    }
  }
}

provider "proxmox" {
  pm_api_url          = "https://IP_address_of_Proxmox_node:8006/api2/json"
  pm_api_token_id     = "Token_name"
  pm_api_token_secret = "Token_secret"
  pm_tls_insecure     = true
}

resource "proxmox_vm_qemu" "linux" {
  name        = "VM_name"
  target_node = "PVE_node"
  clone       = "template_name"
  full_clone  = true
  boot        = "order=scsi0"
  cores       = 2
  sockets     = 1
  memory      = 2048

  disks {
    scsi {
      scsi0 {
        disk {
          size    = "20G"
          storage = "local-lvm"
          discard = true
        }
      }
    }
  }

  network {
    id       = 0
    model    = "virtio"
    bridge   = "vmbr0"
    firewall = false
  }
}
The latest version of the Terraform provider for Proxmox is actually 3.0.2-rc07, but I went with 3.0.2-rc04, as it works well with every Proxmox node in my home lab. Cloud-init virtual machines have a couple of extra arguments, like IP address and password, as these are meant to be configured before spinning up a virtual machine.
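For reference, a cloud-init clone layers those extra arguments on top of the same proxmox_vm_qemu resource. Here’s a minimal sketch with placeholder values, assuming a cloud-init-enabled template and the same Telmate provider:

```hcl
resource "proxmox_vm_qemu" "cloudinit_vm" {
  name        = "VM_name"
  target_node = "PVE_node"
  clone       = "cloudinit_template_name"

  # Cloud-init-specific arguments, configured before first boot
  os_type    = "cloud-init"
  ciuser     = "debian"
  cipassword = "a_throwaway_password"
  ipconfig0  = "ip=192.168.0.40/24,gw=192.168.0.1"
  sshkeys    = file("~/.ssh/id_ed25519.pub")
}
```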
I prefer relying on Semaphore to execute the Terraform scripts
But it’s also possible to run them via terminal commands
I started my automation journey with VS Code and CLI commands, but I ended up ditching the latter for a web UI. Specifically, I use Semaphore, which runs as an LXC on a separate Proxmox node. Sure, terminal commands are ideal for hardcore users, but Semaphore lets me control automation tasks and organize configuration settings from its handy interface.
Terraform is slightly easier to use on Semaphore than Ansible, since it doesn’t need an army of artifacts to deploy virtual guests on Proxmox. Once I pasted the contents of the Terraform config from my VS Code app into a .tf file on Semaphore, I added the directory containing this file as the Repository on Semaphore. I’d already enabled Terraform Configs on Semaphore, so I simply added the config’s name as a new Task, chose the Repository I created earlier, and used the Run button to execute it on my Proxmox node.
But if you’re a CLI purist, you can run the terraform init, terraform plan, and terraform apply commands (in that order) to provision virtual guests from .tf configs via the terminal.
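That terminal workflow, run from the directory holding the .tf file, boils down to three commands:

```shell
# Download the telmate/proxmox provider declared in the config
terraform init

# Preview the resources Terraform would create on the PVE node
terraform plan

# Provision the guests; -auto-approve skips the interactive confirmation
terraform apply -auto-approve
```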
Terraform can also provision LXCs
With a different set of arguments, of course
Although I typically use my Proxmox + Terraform combo to spin up virtual machines, LXCs are fair game, too. But as you may have guessed, the Telmate provider uses entirely different attribute names for containers, and some arguments aren’t available at all. For example, the network section doesn’t accept a model argument and requires a name, IP address, gateway, and other parameters instead. Likewise, the container resource takes hostname and password fields in place of the VM resource’s name.
Anyway, I created another .tf file and stored it in a separate directory on my Semaphore instance to avoid configuration conflicts. Here’s what it looks like:
terraform {
  required_providers {
    proxmox = {
      source  = "telmate/proxmox"
      version = "3.0.2-rc04"
    }
  }
}

provider "proxmox" {
  pm_api_url          = "https://IP_address_of_Proxmox_node:8006/api2/json"
  pm_api_token_id     = "token_name"
  pm_api_token_secret = "token_secret"
  pm_tls_insecure     = true
}

resource "proxmox_lxc" "debian" {
  hostname    = "name_of_LXC_host"
  password    = "an_8_letter_string"
  target_node = "PVE_node_name"
  ostemplate  = "local:vztmpl/template_name.tar.zst"
  cores       = 2
  memory      = 2048

  rootfs {
    storage = "local-lvm"
    size    = "25G"
  }

  network {
    name     = "eth1"
    bridge   = "vmbr0"
    firewall = false
    ip       = "192.168.0.39/24"
    gw       = "192.168.0.1"
  }
}
The real fun begins once you add Ansible to the equation
Up until now, I’ve only talked about provisioning virtual guests on Proxmox, and Terraform is great for this task. However, Ansible clearly has the lead when it comes to configuring the LXCs and VMs once Terraform has finished provisioning them. And as someone with more than a dozen Ansible playbooks at the ready for different distros, few things are as satisfying as watching freshly set up virtual guests get armed with new users, proper network settings, and additional packages from a single Ansible playbook.
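As a rough sketch of that handoff, the sequence looks something like this; the inventory and playbook names are hypothetical placeholders, not files from my setup:

```shell
# Provision the guests first...
terraform apply -auto-approve

# ...then hand the freshly created hosts to Ansible for configuration
# (inventory.ini and site.yml are placeholder names)
ansible-playbook -i inventory.ini site.yml
```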