Automating VergeOS with Ansible
I recently came across a Verge.io LinkedIn post highlighting their new Ansible integration, and it immediately sparked my curiosity. As someone who lives and breathes infrastructure automation, I decided to jump on it, dig in, and put it to the test!
In this post, I'll walk you through my journey of setting up the VergeOS Ansible Collection, from basic installation to building a robust, production-ready VM deployment workflow that handles complex legacy imports automatically.
TL;DR
I successfully automated VergeOS VM management using their Ansible collection. In under an hour, I went from zero to deploying complex production VMs that auto-remediate legacy drive configurations. The collection is mature and solves real-world integration challenges effectively.
1. Getting Started
Installation was refreshingly simple: no complex dependencies, just Ansible itself plus a quick build of the collection from the Git repo.
# Install Ansible
sudo apt-get install ansible
# Build and install the VergeOS Collection from source
git clone https://github.com/verge-io/ansible-collection-vergeos.git
cd ansible-collection-vergeos
ansible-galaxy collection build
ansible-galaxy collection install vergeio-vergeos-*.tar.gz
1.1 Testing Connectivity
I created a simple playbook to verify I could talk to the API.
File: test-connection.yml
---
- name: Test VergeOS Connectivity
  hosts: localhost
  gather_facts: false

  vars:
    vergeos_host: "192.168.1.111"
    vergeos_username: "admin"
    vergeos_password: "YOUR_PASSWORD"
    vergeos_insecure: true

  tasks:
    - name: Get Cluster Info
      vergeio.vergeos.cluster_info:
        host: "{{ vergeos_host }}"
        username: "{{ vergeos_username }}"
        password: "{{ vergeos_password }}"
        insecure: "{{ vergeos_insecure }}"
      register: cluster_info

    - name: Display Info
      debug:
        msg: "Connected to VergeOS v{{ cluster_info.version }}"
Result: Connected instantly. We were off to the races!
2. The Real Challenge: Legacy OVAs in a Modern World
Creating a blank VM is easy, but in the real world, we often need to import existing appliances or cloud images provided as OVAs.
I downloaded the Ubuntu Noble (24.04) cloud image OVA and tried to import it. This is where I hit an interesting snag that required some advanced Ansible logic.
The Problem
- Modern Standards: I wanted to run the VM with the modern Q35 machine type for UEFI support and better PCIe performance.
- Legacy Baggage: The OVA was built with legacy defaults, specifically using an IDE controller for the CD-ROM.
- The Conflict: VergeOS (correctly) strictly prohibits IDE drives on Q35 machines. When I tried to boot, I got:
"It is using a Q35 machine type along with IDE drive(s). Please change all IDE drives to SATA (AHCI)."
I didn't want to fix this manually every time. I wanted Ansible to handle it.
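The rule VergeOS is enforcing here is easy to reason about once you see it as a validation check. Here is a minimal Python sketch of that logic; the function and field names are hypothetical illustrations, not the platform's actual code:

```python
# Hypothetical sketch of the Q35/IDE compatibility rule VergeOS enforces.
# Field names ("media", "interface") are illustrative, not the real schema.

def incompatible_drives(machine_type, drives):
    """Return drives whose interface is not allowed on this machine type."""
    if machine_type.lower() != "q35":
        return []  # legacy machine types still accept IDE
    return [d for d in drives if d.get("interface") == "ide"]

drives = [
    {"name": "drive_0", "media": "cdrom", "interface": "ide"},
    {"name": "drive_1", "media": "disk", "interface": "virtio-scsi"},
]

print([d["name"] for d in incompatible_drives("q35", drives)])  # ['drive_0']
print(incompatible_drives("pc", drives))                        # []
```

The fix, then, is simply to find every drive that trips this check and either delete it (the CD-ROM) or move it to a compatible interface (the disks).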
3. The Solution: The "Production Pattern" Playbook
I developed a robust playbook that acts as an intelligent bridge. It imports the VM, detects the legacy configuration, and auto-remediates the drive interfaces before the VM ever powers on.
Key Technical Hurdle: VM ID vs. Machine ID
This was the "gotcha" moment.
- The VM ID (e.g., 41) is what you use for high-level operations like power control.
- The Machine ID (e.g., 61) is the internal ID used for hardware components like drives.
If you try to filter drives using the VM ID, you won't find them! My playbook performs a lookup to resolve the correct Machine ID dynamically.
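A plain-Python sketch makes the gotcha concrete. The payload shapes below mirror the API responses the playbook works with (`$key`, `machine`); the numbers themselves are made up:

```python
# Illustrative sketch of the VM ID vs. Machine ID lookup.
# Shapes mirror the playbook's API responses; values are invented.

vm_record = {"$key": 41, "name": "prod-web-01", "machine": 61}  # GET /api/v4/vms/41

machine_drives = [  # GET /api/v4/machine_drives
    {"$key": 7, "machine": 61, "media": "cdrom", "interface": "ide"},
    {"$key": 8, "machine": 61, "media": "disk", "interface": "ide"},
]

vm_id = vm_record["$key"]
machine_id = vm_record["machine"]

# Filtering on the VM ID finds nothing -- drives hang off the machine record.
assert [d for d in machine_drives if d["machine"] == vm_id] == []

# Filtering on the resolved Machine ID finds both drives.
mine = [d for d in machine_drives if d["machine"] == machine_id]
print(len(mine))  # 2
```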
The Complete Playbook
Here is the final, production-ready workflow. I've simplified the credentials for readability.
File: deploy-production.yml
---
- name: Deploy Production VM (Auto-Remediate Legacy Drives)
  hosts: localhost
  gather_facts: false

  vars:
    vm_name: "prod-web-01"
    target_machine_type: "q35"  # Modern UEFI machine type

  tasks:
    # 1. Import the OVA (legacy settings imported as-is)
    - name: Import VM
      vergeio.vergeos.vm_import:
        name: "{{ vm_name }}"
        ova_file_name: "noble-server-cloudimg-amd64.ova"
        # ... credentials ...
      register: import_result

    # 2. Resolve the correct Machine ID for hardware access
    - name: Get Machine ID
      uri:
        url: "https://{{ vergeos_host }}/api/v4/vms/{{ import_result.vm_id }}"
        method: GET
        # ... credentials ...
      register: vm_details

    - set_fact:
        machine_id: "{{ vm_details.json.machine }}"

    # 3. Upgrade logic: machine type
    - name: Upgrade to Q35
      uri:
        url: "https://{{ vergeos_host }}/api/v4/vms/{{ import_result.vm_id }}"
        method: PUT
        body_format: json
        body:
          machine_type: "{{ target_machine_type }}"
        # ... credentials ...

    # 4. Fetch drives using the Machine ID
    - name: Get All Drives
      uri:
        url: "https://{{ vergeos_host }}/api/v4/machine_drives?machine={{ machine_id }}"
        method: GET
        # ... credentials ...
      register: all_drives

    # 5. Remediation: remove incompatible IDE CD-ROMs
    - name: Remove Legacy IDE CD-ROMs
      uri:
        url: "https://{{ vergeos_host }}/api/v4/machine_drives/{{ item['$key'] }}"
        method: DELETE
        # ... credentials ...
      # Filter for CD-ROMs belonging to this machine
      loop: "{{ all_drives.json | selectattr('machine', 'equalto', machine_id | int) | selectattr('media', 'equalto', 'cdrom') | list }}"

    # 6. Remediation: upgrade disks to high-performance VirtIO-SCSI
    - name: Upgrade Disks to VirtIO-SCSI
      uri:
        url: "https://{{ vergeos_host }}/api/v4/machine_drives/{{ item['$key'] }}"
        method: PUT
        body_format: json
        body:
          interface: "virtio-scsi"
        # ... credentials ...
      # Filter for disks belonging to this machine
      loop: "{{ all_drives.json | selectattr('machine', 'equalto', machine_id | int) | selectattr('media', 'equalto', 'disk') | list }}"

    # 7. Final configuration & boot
    - name: Configure Network & Cloud-Init
      vergeio.vergeos.cloud_init:
        vm_name: "{{ vm_name }}"
        datasource: nocloud
        # ... network settings ...

    - name: Power On
      vergeio.vergeos.vm:
        name: "{{ vm_name }}"
        state: running
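The two loop filters above deserve a closer look. `set_fact` renders its Jinja expression to a string by default, so `machine_id` arrives as `"61"` while the drives' `machine` field is the integer `61`; the `| int` cast is what makes the `equalto` comparison actually match. The same partition in plain Python (sample records invented to mirror the playbook's field names):

```python
# Python equivalent of the two `selectattr` loop filters above.
# Field names mirror the playbook's API payloads; values are made up.
all_drives = [
    {"$key": 7, "machine": 61, "media": "cdrom", "interface": "ide"},
    {"$key": 8, "machine": 61, "media": "disk", "interface": "ide"},
    {"$key": 9, "machine": 99, "media": "disk", "interface": "virtio-scsi"},
]

machine_id = "61"  # set_fact stores the templated value as a string -- hence `| int`

cdroms_to_delete = [d for d in all_drives
                    if d["machine"] == int(machine_id) and d["media"] == "cdrom"]
disks_to_upgrade = [d for d in all_drives
                    if d["machine"] == int(machine_id) and d["media"] == "disk"]

print([d["$key"] for d in cdroms_to_delete])  # [7]
print([d["$key"] for d in disks_to_upgrade])  # [8]
```

Without the cast, both filters would silently match nothing and the playbook would "succeed" while leaving the legacy drives in place.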
4. The Result
When I ran this playbook, the output was incredibly satisfying:
TASK [Remove Legacy IDE CD-ROMs] ********************************
changed: [localhost] => (item=Removing drive_0 (cdrom))
TASK [Upgrade Disks to VirtIO-SCSI] *****************************
changed: [localhost] => (item=Updating drive_1 to VirtIO-SCSI)
TASK [Final Status] *********************************************
VM Deployed: prod-web-01
Machine Type: q35
Drives: All IDE CDROMs removed, Disks set to VirtIO-SCSI.
Status: Running
It worked perfectly! I now have a repeatable process that takes any legacy OVA and transforms it into a modern, high-performance VM on VergeOS automatically.
Conclusion
The VergeOS Ansible integration is powerful. While the core modules like vm and vm_import handle 90% of the work, the API is flexible enough to handle that last 10% of complex logic (like drive remediation) right inside the playbook.
This experience proved that VergeOS isn't just a GUI-driven platform—it's fully ready for serious Infrastructure as Code.
Resources: