Overview
This article provides instructions on how to use Terraform to build virtual machines on VMware vSphere.
Preparation
I use templates to build virtual machines and keep several of them so I can build systems quickly. I mainly have a single template for each type of system I want to test against:
- CentOS 7
- CentOS 8
- Debian 10
- Red Hat Enterprise Linux 8
- Rocky Linux 8
- SUSE 15
- Ubuntu 20
Each of these template images is updated or replaced periodically, has my personal account and my service account on it, and is configured so the service account can run Ansible playbooks against any machine that gets built.
For the build process I'm using, I've created a unique template for each environment, named with the environment prefix (bldr0, cabo0, and so on). In addition, to speed up builds, I keep a duplicate in each environment for each of my R720XD servers (Monkey, Morgan, and Slash); this cuts provisioning time from around 6 minutes per VM to about 2 minutes. If you have shared storage, a single template would work fine. Each template is configured with a base template IP address (192.168.101.42, and so on) so the Terraform script can reach and reconfigure the new machine. The templates also carry some standard configuration such as the service account, ssh keys, sudoers access, and the necessary standard groups (sysadmin and tmproot).
Process
I'm a learn-by-example person and tend to look for a page where someone has documented a successful build process. The problem I find with official documentation is that it has to address every possibility, so it's a lot more complicated to parse and apply. That's one of the reasons for this write-up: it covers just what's needed to build the environment for my purposes. Even the best docs don't explain everything; for example, Gary's docs don't explain the example shell script he lists at the end.
Per the Terraform docs, they don't recommend using a provisioner to reconfigure your new virtual machine. The suggestion is to mount a virtual CD, similar to a cloud-init image, to configure the VM automatically. The reasoning makes sense: with the provisioner approach you're logging into the new VM, so you have to keep the credentials in the Terraform script plus maintain a separate shell script with whatever commands you want to run to reconfigure the new VM.
Personally, in a homelab environment, I don't have a problem with this, and it's easier than building a CD image along with a process for mounting it and reconfiguring the VM.
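For reference, the CD-style alternative would look something along these lines using cloud-init's guestinfo data; I haven't used it here, it requires cloud-init with the VMware/guestinfo datasource inside the template, and the user-data.yaml file name is just a placeholder:

# Hypothetical alternative: pass cloud-init data through guestinfo
# instead of logging in with remote-exec.
resource "vsphere_virtual_machine" "example" {
  # ... the same settings shown in the resource later in this article ...
  extra_config = {
    "guestinfo.userdata"          = base64encode(file("user-data.yaml"))
    "guestinfo.userdata.encoding" = "base64"
  }
}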
Configuration
The first part sets the variables used to build the virtual machine. I would suggest using HashiCorp Vault to manage credentials. Note that in my KVM Terraform installation I use a variables file and fill in a module when building systems. I plan to do the same here, so this document will be updated (again) when I figure that out.
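As a rough sketch of what that variables file might eventually look like (the variable names here are placeholders I chose, not something already in this configuration):

variable "vsphere_user" {
  description = "vCenter login"
  type        = string
}

variable "vsphere_password" {
  description = "vCenter password"
  type        = string
  sensitive   = true
}

variable "vsphere_server" {
  description = "vCenter hostname"
  type        = string
}

The provider block below would then reference var.vsphere_user, var.vsphere_password, and var.vsphere_server instead of hard-coded values.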
The configuration below contains my information, since I'm using this document to rebuild my environment if needed; update it with your own. I'll hide the credentials, of course.
I’m using a pretty basic vCenter configuration. I don’t have centralized storage and am not using centrally managed network configurations. It’s just easier and less complicated.
About the only thing I haven’t figured out yet is how to place the new VM in the correct folder. If I figure it out, I’ll update this document.
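For what it's worth, the vsphere_virtual_machine resource does document a folder argument, with the path relative to the datacenter's VM folder; I haven't tested it in this setup, and the path shown here is only a guess:

resource "vsphere_virtual_machine" "cabo-02" {
  # ... other settings ...
  # Hypothetical folder path, relative to the datacenter's VM folder
  folder = "QA/kubernetes"
}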
provider "vsphere" {
# If you use a domain, set your login like this "Domain\\User"
user = "administrator@vcenter.local"
password = "[password]"
vsphere_server = "lnmt1cuomvcenter.internal.pri"
# If you have a self-signed cert
allow_unverified_ssl = true
}
data "vsphere_datacenter" "dc" {
name = "Colorado"
}
# If you don't have any resource pools, put "/Resources" after cluster name
data "vsphere_resource_pool" "pool" {
name = "192.168.1.15/Resources"
datacenter_id = data.vsphere_datacenter.dc.id
}
# Retrieve datastore information on vsphere
data "vsphere_datastore" "datastore" {
name = "NikiVMs"
datacenter_id = data.vsphere_datacenter.dc.id
}
# Retrieve network information on vsphere
data "vsphere_network" "network" {
name = "QA Network"
datacenter_id = data.vsphere_datacenter.dc.id
}
# Retrieve template information on vsphere
data "vsphere_virtual_machine" "template" {
name = "cabo0cuomcentos7_monkey"
datacenter_id = data.vsphere_datacenter.dc.id
}
System Build
The second section defines the build for Terraform and creates the VM. At the end, the remote-exec provisioner pushes up the shell script, which reconfigures the new VM.
The resource name, 'cabo-02', gives me a unique name in the Terraform state file so that I can manage different machines. This one is for cabo0cuomkube1; the cabo0cuomkube2 system will be 'cabo-03', and so on.
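Once applied, that resource name is what shows up in the state; the listing looks roughly like this (abridged):

$ terraform state list
data.vsphere_virtual_machine.template
vsphere_virtual_machine.cabo-02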
We define the number of CPUs and the amount of RAM in case they differ from the main template (2 CPUs and 4 GB of RAM).
In the disk block, define the disk, using the template's disk name as the label. The clone block then builds the system from the template image.
The remote-exec provisioner provides the credentials used to upload and run the script.sh shell script. The host line is the template IP address, which is the same for every VM created in this environment (cabo).
# Set vm parameters
resource "vsphere_virtual_machine" "cabo-02" {
  name             = "cabo0cuomkube1"
  num_cpus         = 4
  memory           = 8192
  datastore_id     = data.vsphere_datastore.datastore.id
  resource_pool_id = data.vsphere_resource_pool.pool.id
  guest_id         = data.vsphere_virtual_machine.template.guest_id
  scsi_type        = data.vsphere_virtual_machine.template.scsi_type

  # Set network parameters
  network_interface {
    network_id = data.vsphere_network.network.id
  }

  # Use a predefined vmware template as main disk
  disk {
    label = "cabo0cuomcentos7_monkey.vmdk"
    size  = "100"
  }

  # Create the VM from a template
  clone {
    template_uuid = data.vsphere_virtual_machine.template.id
  }

  # Execute script on remote vm after creation
  provisioner "remote-exec" {
    script = "script.sh"

    connection {
      type     = "ssh"
      user     = "root"
      password = "[password]"
      host     = "192.168.102.42"
    }
  }
}
Shell Script
Finally, the shell script simply contains the commands used to reconfigure the VM.
#!/bin/bash
hostnamectl set-hostname cabo0cuomkube1.qa.internal.pri
# /24 prefix assumed for the QA network (192.168.102.0/24)
nmcli con mod ens192 ipv4.method manual \
  ipv4.addresses 192.168.102.160/24 \
  ipv4.gateway 192.168.102.254 \
  ipv4.dns 192.168.1.254 \
  ipv4.dns-search 'qa.internal.pri schelin.org'
shutdown -r now
Because the shutdown command reboots the machine and drops the SSH connection, the script exits with a non-zero value, so Terraform will report an error even though the build succeeded.
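If that error message is a nuisance, Terraform provisioners accept an on_failure setting that tells the apply to keep going; a minimal sketch, untested in this setup:

  provisioner "remote-exec" {
    script     = "script.sh"
    # Don't mark the apply as failed when the reboot drops the connection
    on_failure = continue

    connection {
      type     = "ssh"
      user     = "root"
      password = "[password]"
      host     = "192.168.102.42"
    }
  }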
And that’s it. The VM should be created and accessible via the new IP address after it reboots.
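For completeness, the usual workflow to apply a configuration like this is:

terraform init    # downloads the vsphere provider
terraform plan    # shows what will be created
terraform apply   # builds the VM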
References
- Gary Flynn’s Documentation – This was key in building my scripts.
- Terraform Provider Documentation – This has a lot more information.