
OpenStack 2025.2 Flamingo: Deployment Guide


Introduction

This guide documents a complete OpenStack 2025.2 Flamingo deployment built for TSB. Every command, every error, and every fix is captured here because real learning happens when things break.

What we built:

  • A 5-node virtual OpenStack cluster on a single AMD Ryzen 9 9950X workstation
  • Kolla-Ansible as the deployment engine
  • OpenStack 2025.2 Flamingo, the latest release at time of writing
  • Full stack: Compute (Nova), Networking (Neutron/OVS), Block Storage (Cinder), Identity (Keystone), Image (Glance), Orchestration (Heat)

Time to expect: End-to-end including VM creation, OS preparation, Kolla-Ansible installation, configuration, and first instance takes roughly 4 hours in a lab environment. The actual Kolla-Ansible install and deploy steps alone take around 1.5 hours. Your mileage will vary depending on internet speed and how many errors you hit.

Philosophy: OpenStack on VMs is not recommended for production. Physical nodes give you real performance and proper isolation. However, for learning the internals, a virtualised lab is perfectly valid and that is exactly what this is.


Host Hardware

| Component             | Spec                                     |
|-----------------------|------------------------------------------|
| CPU                   | AMD Ryzen 9 9950X (16 cores, 32 threads) |
| RAM                   | ~198GB                                   |
| Nested Virtualisation | Enabled (KVM/AMD)                        |

Verify nested virt is enabled before starting:

Bash
cat /sys/module/kvm_amd/parameters/nested
# Must return: 1
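If the check returns 0, nested virtualisation can usually be switched on without a host reboot. A minimal sketch for AMD hosts, assuming no VMs are currently running (the module cannot be unloaded while guests are live):

```bash
# Persist the setting so it survives reboots
echo "options kvm_amd nested=1" | sudo tee /etc/modprobe.d/kvm-nested.conf
# Reload the module to apply it now (fails if any VM is running)
sudo modprobe -r kvm_amd && sudo modprobe kvm_amd
cat /sys/module/kvm_amd/parameters/nested   # should now print 1
```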

VM Architecture

| Node          | vCPUs | RAM  | Disk               | Role |
|---------------|-------|------|--------------------|------|
| os-controller | 6     | 24GB | 100GB              | Control plane: Keystone, Glance, Nova API, Neutron API, Cinder API, Horizon, Heat |
| os-compute01  | 6     | 16GB | 50GB               | Nova compute |
| os-compute02  | 6     | 16GB | 50GB               | Nova compute |
| os-storage    | 4     | 8GB  | 50GB boot + 3x40GB | Cinder block storage |
| os-network    | 4     | 8GB  | 30GB               | Neutron, OVS |

IP addressing:

| Node          | IP            |
|---------------|---------------|
| os-controller | 192.168.1.160 |
| os-compute01  | 192.168.1.161 |
| os-compute02  | 192.168.1.162 |
| os-storage    | 192.168.1.163 |
| os-network    | 192.168.1.164 |

VIP (Virtual IP): 192.168.1.160 (pointed at controller for single-node control plane)


OS Preparation

All nodes run Ubuntu 24.04 LTS, cloned from a base image with cloud-init.

Network Interface Naming

Ubuntu 24.04 uses predictable network interface naming by default (e.g. enp6s19). Force legacy ethX naming for consistency across all nodes. This is critical because Kolla-Ansible references interface names by string.

Bash
sudo nano /etc/default/grub

Change:

Plain Text
GRUB_CMDLINE_LINUX=""

To:

Plain Text
GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"
Bash
sudo update-grub && sudo reboot

Second Network Interface (eth1)

All nodes except os-storage need a second NIC (eth1) for Neutron's external interface. Do not assign an IP to eth1 -- Neutron/OVS takes it over completely as a raw bridge port.

Create a netplan config for eth1 on all applicable nodes:

Bash
sudo tee /etc/netplan/eth1.yaml << EOF
network:
  version: 2
  ethernets:
    eth1:
      dhcp4: false
      dhcp6: false
      optional: true
EOF
sudo chmod 600 /etc/netplan/eth1.yaml
sudo netplan apply

No output from netplan apply means success. If you see a warning about the file permissions being too open, run the chmod step first, then apply again.

Why optional: true? Without it, systemd-networkd-wait-online will wait 2 minutes on every boot for eth1 to come online. Since eth1 has no IP, systemd considers it "not ready" and times out. optional: true tells systemd this interface does not block boot.
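To confirm the fix took effect, check what the network wait actually cost on the last boot. A quick sanity check, assuming systemd and a netplan version new enough to support netplan get (0.99+):

```bash
# Should print "true" if eth1 is correctly marked optional
netplan get ethernets.eth1.optional 2>/dev/null || echo "netplan get unavailable"
# Show whether wait-online still appears among the slowest boot units
systemd-analyze blame 2>/dev/null | grep -i wait-online || echo "wait-online not a factor on last boot"
```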

Cloud-Init: Persistent /etc/hosts

Cloud-init overwrites /etc/hosts on every reboot by default. Fix this permanently by removing the update_etc_hosts module:

Bash
sudo sed -i '/- update_etc_hosts/d' /etc/cloud/cloud.cfg

Then add your hosts entries:

Bash
sudo tee -a /etc/hosts << EOF
192.168.1.160 os-controller
192.168.1.161 os-compute01
192.168.1.162 os-compute02
192.168.1.163 os-storage
192.168.1.164 os-network
EOF

Why edit cloud.cfg directly? The drop-in directory /etc/cloud/cloud.cfg.d/ with manage_etc_hosts: false only controls the template renderer. The update_etc_hosts module still runs regardless. You must remove the module from the run list itself.

Do this on all 5 nodes and reboot to verify it survives.
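A quick way to verify from the controller after the reboot -- a sketch assuming key-based SSH as the ubuntu user is already in place:

```bash
# Each node should report 5: one /etc/hosts entry per cluster member
for node in os-controller os-compute01 os-compute02 os-storage os-network; do
    echo -n "$node: "
    ssh ubuntu@$node "grep -c '^192\.168\.1\.16' /etc/hosts" || echo "unreachable"
done
```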

Baseline Verification

Run on all 5 nodes to confirm everything is correct before proceeding:

Bash
hostname && ip a | grep 192.168 && free -h | grep Mem && nproc && lsblk

Node Connectivity Test

From the controller:

Bash
for node in os-controller os-compute01 os-compute02 os-storage os-network; do
    echo -n "Testing $node: "
    ping -c 1 $node | grep -q "1 received" && echo "OK" || echo "FAIL"
done

Expected output:

Plain Text
Testing os-controller: OK
Testing os-compute01: OK
Testing os-compute02: OK
Testing os-storage: OK
Testing os-network: OK

All 5 must show OK before proceeding.

LVM Volume Group for Cinder

On os-storage only, create the LVM volume group that Kolla-Ansible expects:

Bash
sudo pvcreate /dev/sdb /dev/sdc /dev/sdd
sudo vgcreate cinder-volumes /dev/sdb /dev/sdc /dev/sdd
sudo vgs

Expected output from vgs:

Plain Text
  VG             #PV #LV #SN Attr   VSize    VFree
  cinder-volumes   3   0   0 wz--n- <119.99g <119.99g

Kolla-Ansible's precheck will fail with Volume group "cinder-volumes" not found if this is missing. The name cinder-volumes is hardcoded and must match exactly.
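The reported size doubles as a sanity check on the disks themselves -- three 40GB physical volumes should land just under 120GB once LVM metadata takes its cut:

```bash
# Expected raw capacity before LVM overhead
echo "Expected: $((3 * 40))GB across 3 PVs"
# List only the physical volumes backing this VG
sudo pvs -S vgname=cinder-volumes
```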

Kolla-Ansible Installation

All Kolla-Ansible work is done from os-controller only.

Estimated time: 10-15 minutes

Install Prerequisites

Bash
sudo apt install -y python3-dev libffi-dev gcc libssl-dev python3-pip python3-venv git

Create Virtual Environment

Bash
sudo mkdir -p /opt/kolla-venv
sudo chown $USER:$USER /opt/kolla-venv
python3 -m venv /opt/kolla-venv
source /opt/kolla-venv/bin/activate

Optionally, add to .bashrc so the venv auto-activates on login:

Bash
echo "source /opt/kolla-venv/bin/activate" >> ~/.bashrc

You should see (kolla-venv) prepended to your shell prompt:

Plain Text
(kolla-venv) ubuntu@os-controller:~$

Install Kolla-Ansible

Bash
pip install -U pip
pip install kolla-ansible

Expected output (final lines):

Plain Text
Successfully installed ansible-core-2.19.6 kolla-ansible-21.0.0 ...

Kolla-Ansible 21.0.0 (Flamingo) will pull in ansible-core 2.19.6 and manage its own dependencies.

Install Ansible Galaxy Dependencies

Bash
kolla-ansible install-deps

Expected output:

Plain Text
Starting galaxy collection install process
Process install dependency map
...
kolla.kolla (21.0.0) was installed successfully

Setup Configuration Directory

Bash
sudo mkdir -p /etc/kolla
sudo chown $USER:$USER /etc/kolla
cp -r /opt/kolla-venv/share/kolla-ansible/etc_examples/kolla/* /etc/kolla/
cp /opt/kolla-venv/share/kolla-ansible/ansible/inventory/multinode /etc/kolla/multinode

Configuration

Generate Passwords

Bash
kolla-genpwd

No output means success. This populates /etc/kolla/passwords.yml with secure random passwords for every OpenStack service. Never edit this file by hand.
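A spot-check that the file was actually populated -- read it, never write it (keystone_admin_password is the credential Horizon and admin-openrc.sh will use):

```bash
# Non-zero count confirms the file is populated
grep -c 'password' /etc/kolla/passwords.yml
# The admin credential for Horizon and the CLI
grep '^keystone_admin_password:' /etc/kolla/passwords.yml
```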

Configure Inventory

Edit /etc/kolla/multinode and replace the top section:

INI
[control]
os-controller

[network]
os-network

[compute]
os-compute01
os-compute02

[monitoring]
os-controller

[storage]
os-storage

[deployment]
localhost       ansible_connection=local

Leave everything below [common:children] untouched -- those sections inherit from the groups above.

Configure globals.yml

Edit /etc/kolla/globals.yml (use Ctrl+W to search in nano):

YAML
kolla_base_distro: "ubuntu"
openstack_release: "2025.2"
kolla_internal_vip_address: "192.168.1.160"
network_interface: "eth0"
neutron_external_interface: "eth1"
enable_haproxy: "no"
enable_cinder: "yes"
enable_cinder_backend_lvm: "yes"

Why VIP = controller IP? With a single controller, HAProxy has nothing to balance. Pointing the VIP at the controller's actual IP avoids MariaDB connection failures that occur when Kolla tries to reach a VIP that nothing is listening on.
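A quick pre-deploy guard worth running on the controller -- with HAProxy off, the VIP only works if it is an address this node actually holds:

```bash
# Succeeds only if 192.168.1.160 is bound to a local interface
ip -4 addr | grep -q '192\.168\.1\.160' \
    && echo "VIP is local -- OK" \
    || echo "VIP is NOT local -- deploy will fail at the MariaDB step"
```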

Verify active settings (strips commented lines):

Bash
grep -E 'kolla_base_distro|openstack_release|vip_address|network_interface|neutron_external|haproxy|cinder' /etc/kolla/globals.yml | grep -v '^#'

Expected output:

Plain Text
kolla_base_distro: "ubuntu"
openstack_release: "2025.2"
kolla_internal_vip_address: "192.168.1.160"
network_interface: "eth0"
neutron_external_interface: "eth1"
enable_haproxy: "no"
enable_cinder: "yes"
enable_cinder_backend_lvm: "yes"

SSH Key Setup

Kolla-Ansible SSHes from the controller to all other nodes. The controller also needs to SSH to itself by hostname.

Generate a key on the controller:

Bash
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ""

Add the controller's public key to its own authorized_keys:

Bash
cat ~/.ssh/id_ed25519.pub >> ~/.ssh/authorized_keys

Distribute to all other nodes from your laptop:

Bash
CONTROLLER_KEY=$(ssh ubuntu@192.168.1.160 cat ~/.ssh/id_ed25519.pub)
for ip in 192.168.1.161 192.168.1.162 192.168.1.163 192.168.1.164; do
    ssh ubuntu@$ip "echo '$CONTROLLER_KEY' >> ~/.ssh/authorized_keys"
done

Verify from the controller:

Bash
for node in os-controller os-compute01 os-compute02 os-storage os-network; do
    echo -n "Testing $node: "
    ssh -o ConnectTimeout=5 ubuntu@$node hostname
done

Expected output:

Plain Text
Testing os-controller: os-controller
Testing os-compute01: os-compute01
Testing os-compute02: os-compute02
Testing os-storage: os-storage
Testing os-network: os-network
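One optional extra that prevents the most common bootstrap failure: pre-accept every node's host key so Ansible never hits an interactive prompt. This appends hashed entries to the default known_hosts:

```bash
for node in os-controller os-compute01 os-compute02 os-storage os-network; do
    ssh-keyscan -H "$node" >> ~/.ssh/known_hosts 2>/dev/null || true
done
```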

Deployment

Bootstrap

Installs Docker and all prerequisites on all 5 nodes.

Estimated time: 5-10 minutes

Bash
kolla-ansible bootstrap-servers -i /etc/kolla/multinode

Expected final output:

Plain Text
PLAY RECAP *******************************
localhost    : ok=1  changed=0  failed=0
os-compute01 : ok=44 changed=16 failed=0
os-compute02 : ok=44 changed=16 failed=0
os-controller: ok=44 changed=16 failed=0
os-network   : ok=44 changed=16 failed=0
os-storage   : ok=44 changed=16 failed=0

All nodes must show failed=0 before proceeding.

Common bootstrap errors:

| Error | Cause | Fix |
|-------|-------|-----|
| Host key verification failed on os-controller | Controller has not accepted its own SSH host key | ssh ubuntu@os-controller to accept the key, then add the public key to authorized_keys |
| Interface not found on os-controller | Cascade failure from the SSH error above | Fix the SSH error first |

Pre-flight Checks

Estimated time: 2-3 minutes

Bash
kolla-ansible prechecks -i /etc/kolla/multinode

Expected final output:

Plain Text
PLAY RECAP *******************************
os-controller: ok=87 changed=0  failed=0
os-compute01 : ok=51 changed=0  failed=0
os-compute02 : ok=51 changed=0  failed=0
os-network   : ok=51 changed=0  failed=0
os-storage   : ok=32 changed=0  failed=0

Common precheck errors:

| Error | Cause | Fix |
|-------|-------|-----|
| Volume group "cinder-volumes" not found on os-storage | LVM VG not created | pvcreate /dev/sdb /dev/sdc /dev/sdd && vgcreate cinder-volumes /dev/sdb /dev/sdc /dev/sdd |

To debug a single failing node:

Bash
kolla-ansible prechecks -i /etc/kolla/multinode --limit os-storage -vv 2>&1 | grep -A 10 "FAILED\|fatal"

Deploy

This is the main event. Kolla-Ansible pulls all Docker images and deploys OpenStack across the cluster.

Estimated time: 30-40 minutes (varies significantly with internet speed for image pulls)

Bash
kolla-ansible deploy -i /etc/kolla/multinode

Watch progress on the controller in a separate terminal while deploy runs:

Bash
watch -n 2 'sudo docker ps --format "table {{.Names}}\t{{.Status}}"'

You will see containers appearing and starting in dependency order. After a successful deploy, expect over 30 containers running on the controller alone.

Expected final output from deploy:

Plain Text
PLAY RECAP *******************************
os-controller: ok=293 changed=164 failed=0
os-compute01 : ok=95  changed=45  failed=0
os-compute02 : ok=83  changed=44  failed=0
os-network   : ok=94  changed=39  failed=0
os-storage   : ok=46  changed=15  failed=0

Total tasks across all nodes: approximately 611. All must show failed=0.

Common deploy errors:

| Error | Cause | Fix |
|-------|-------|-----|
| Can't connect to MySQL server on '192.168.1.170' (No route to host) | VIP not reachable with HAProxy disabled | Change kolla_internal_vip_address to the controller's actual IP |
| eth1 interface issues during networking tasks | eth1 not configured in netplan | Create eth1.yaml and netplan apply on affected nodes |

Kolla-Ansible is idempotent. If deploy fails, fix the issue and re-run kolla-ansible deploy. It will resume without destroying existing containers.

Post-Deploy

Estimated time: 2-3 minutes

Bash
kolla-ansible post-deploy -i /etc/kolla/multinode

Source the admin credentials:

Bash
source /etc/kolla/admin-openrc.sh

Install the OpenStack CLI client inside the venv (not with snap or apt):

Bash
pip install python-openstackclient

Why not snap? Snap packages run in isolation and conflict with the virtual environment. The snap version of the OpenStack client is also typically several releases behind.

Verify the deployment:

Bash
openstack compute service list

Expected output:

Plain Text
+------+----------------+--------------+----------+-------+
| ID   | Binary         | Host         | Zone     | State |
+------+----------------+--------------+----------+-------+
| ...  | nova-scheduler | os-controller| internal | up    |
| ...  | nova-conductor | os-controller| internal | up    |
| ...  | nova-compute   | os-compute01 | nova     | up    |
| ...  | nova-compute   | os-compute02 | nova     | up    |
+------+----------------+--------------+----------+-------+
Bash
openstack network agent list

All agents should show :-) in the Alive column.


Network Setup

External Network

Bash
openstack network create \
    --provider-network-type flat \
    --provider-physical-network physnet1 \
    --external \
    --share \
    ext-net

openstack subnet create \
    --network ext-net \
    --subnet-range 192.168.1.0/24 \
    --gateway 192.168.1.1 \
    --dns-nameserver 8.8.8.8 \
    --dns-nameserver 1.1.1.1 \
    --allocation-pool start=192.168.1.210,end=192.168.1.220 \
    --no-dhcp \
    ext-subnet

Floating IP pool: Carve out a range that your DHCP server does not use. In this lab pfSense serves 192.168.1.30-200, so 210-220 is safely reserved for floating IPs.

--no-dhcp: Critical -- OpenStack must not run DHCP on the external network. Your existing DHCP server owns that responsibility.
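If you script your network setup, a hypothetical guard like this catches the mistake before it bites (assumes the openstack client is installed and admin credentials are sourced):

```bash
# Fail loudly if DHCP ever ends up enabled on the external subnet
dhcp=$(openstack subnet show ext-subnet -c enable_dhcp -f value)
if [ "$dhcp" = "False" ]; then
    echo "ext-subnet: DHCP correctly disabled"
else
    echo "WARNING: DHCP enabled on ext-subnet -- it will fight your existing DHCP server" >&2
fi
```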

Internal Network

Bash
openstack network create int-net

openstack subnet create \
    --network int-net \
    --subnet-range 10.0.0.0/24 \
    --gateway 10.0.0.1 \
    --dns-nameserver 8.8.8.8 \
    --dns-nameserver 1.1.1.1 \
    int-subnet

Note the MTU difference: ext-net uses 1500 (flat/physical), int-net uses 1450 (VXLAN encapsulation adds 50 bytes of overhead).
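The 50 bytes break down as outer IP (20) + UDP (8) + VXLAN header (8) + inner Ethernet (14). You can confirm Neutron applied the reduction (assumes the openstack client and sourced credentials):

```bash
# Live value as Neutron sees it
openstack network show int-net -c mtu -f value 2>/dev/null
# What it should be: physical MTU minus VXLAN encapsulation overhead
physical_mtu=1500
vxlan_overhead=50   # 20 IP + 8 UDP + 8 VXLAN + 14 inner Ethernet
echo "Expected: $((physical_mtu - vxlan_overhead))"
```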

Virtual Router

Bash
openstack router create router1
openstack router set router1 --external-gateway ext-net
openstack router add subnet router1 int-subnet

Verify router wiring:

Bash
openstack router show router1

external_gateway_info should show ext-net with an IP from your floating pool. interfaces_info should show 10.0.0.1.

Horizon showing "Something went wrong" after creating networks? This is a known issue caused by a stale memcached session: Horizon cached your login token before the networks existed and now cannot resolve them. Fix it with:

Bash
sudo docker restart memcached horizon
Then log out of Horizon completely, clear your browser session, and log back in fresh.

Hello World

Prerequisites

Flavor:

Bash
openstack flavor create \
    --vcpus 1 \
    --ram 512 \
    --disk 10 \
    m1.tiny

Upload Ubuntu 24.04 cloud image:

Bash
wget https://cloud-images.ubuntu.com/minimal/releases/noble/release/ubuntu-24.04-minimal-cloudimg-amd64.img

openstack image create \
    --container-format bare \
    --disk-format qcow2 \
    --file ubuntu-24.04-minimal-cloudimg-amd64.img \
    --public \
    ubuntu-24.04

Upload CirrOS (lightweight test image, 12MB, boots in seconds):

Bash
wget http://download.cirros-cloud.net/0.6.2/cirros-0.6.2-x86_64-disk.img

openstack image create \
    --container-format bare \
    --disk-format qcow2 \
    --file cirros-0.6.2-x86_64-disk.img \
    --public \
    cirros-0.6.2

CirrOS default credentials: user cirros, password gocubsgo (changed in newer images). CirrOS does not use keypair auth by default -- use the Horizon console (Compute > Instances > Console) or SSH with password authentication.

SSH Keypair:

Bash
openstack keypair create mykey > ~/.ssh/mykey.pem
chmod 600 ~/.ssh/mykey.pem

Security note: A single keypair deployed to all instances is convenient for a lab but is not suitable for production. If the private key is compromised, every instance is exposed. For production environments a Zero Trust architecture should be applied -- short-lived certificates issued by a CA (such as HashiCorp Vault SSH CA), per-role keypairs, a hardened bastion host as the single SSH entry point, and just-in-time access with full audit logging. No static keys, no permanent access.

Security Group:

Bash
openstack security group create default-web-access

openstack security group rule create --protocol icmp default-web-access

openstack security group rule create \
    --protocol tcp \
    --dst-port 22 \
    default-web-access

openstack security group rule create \
    --protocol tcp \
    --dst-port 80 \
    default-web-access

openstack security group rule create \
    --protocol tcp \
    --dst-port 443 \
    default-web-access

Launch Instance

Bash
openstack server create \
    --flavor m1.tiny \
    --image ubuntu-24.04 \
    --network int-net \
    --security-group default-web-access \
    --key-name mykey \
    hello-world

Watch status:

Bash
watch -n 2 'openstack server show hello-world | grep -E "status|task_state|addresses|host"'

You will see the instance progress through BUILD / spawning then BUILD / networking and finally reach ACTIVE. The Ubuntu 24.04 cloud image typically takes 30-60 seconds.

Expected final state:

Plain Text
| OS-EXT-SRV-ATTR:host                | os-compute01     |
| OS-EXT-SRV-ATTR:hypervisor_hostname | os-compute01     |
| addresses                           | int-net=10.0.0.6 |
| status                              | ACTIVE           |

Assign Floating IP

Bash
openstack floating ip create ext-net
openstack server add floating ip hello-world <floating-ip>

Verify:

Bash
openstack server show hello-world | grep addresses
# int-net=10.0.0.6, 192.168.1.214

SSH Access

From the controller:

Bash
ssh -i ~/.ssh/mykey.pem ubuntu@<floating-ip>

From your laptop (after copying the key):

Bash
scp ubuntu@192.168.1.160:~/.ssh/mykey.pem ~/.ssh/mykey.pem
chmod 600 ~/.ssh/mykey.pem
ssh -i ~/.ssh/mykey.pem ubuntu@<floating-ip>

Optionally, add your own key for passwordless access going forward (replace with whatever .pub file you use):

Bash
ssh-copy-id -i ~/.ssh/id_ed25519.pub -o "IdentityFile ~/.ssh/mykey.pem" ubuntu@<floating-ip>

Useful Commands

List all instances:

Bash
openstack server list

Check service health:

Bash
openstack compute service list
openstack network agent list
openstack volume service list

Watch containers on a node:

Bash
watch -n 2 'sudo docker ps --format "table {{.Names}}\t{{.Status}}"'

Follow a container log:

Bash
sudo docker logs -f <container-name>

Connect to MariaDB:

Bash
DB_PASS=$(sudo grep '^database_password:' /etc/kolla/passwords.yml | awk '{print $2}')
sudo docker exec -it mariadb mariadb -u root -p"$DB_PASS"
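From there, a read-only query is a safe first step -- for example, listing the per-service databases (assumes the container is named mariadb, as Kolla deploys it):

```bash
# Non-interactive variant: run one statement and exit
sudo docker exec mariadb mariadb -u root -p"$DB_PASS" -e "SHOW DATABASES;"
```

You should see one schema per deployed service -- nova, neutron, keystone, glance, cinder, heat and friends.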

Redeploy after config change:

Bash
kolla-ansible deploy -i /etc/kolla/multinode

Run prechecks on single node:

Bash
kolla-ansible prechecks -i /etc/kolla/multinode --limit os-storage -vv

Lessons Learned

| # | Issue | Root Cause | Fix |
|---|-------|------------|-----|
| 1 | /etc/hosts wiped on reboot | Cloud-init update_etc_hosts module runs on every boot | Remove module from cloud.cfg run list |
| 2 | MariaDB "No route to host" on VIP | kolla_internal_vip_address pointed at unused VIP with HAProxy disabled | Set VIP to controller's actual IP |
| 3 | Precheck fails on os-storage | cinder-volumes LVM volume group not created | pvcreate + vgcreate cinder-volumes on storage node |
| 4 | Bootstrap SSH fails on os-controller | Controller had not accepted its own host key | SSH to self once + add public key to authorized_keys |
| 5 | eth1 interfaces DOWN | No netplan config for eth1 | Create eth1.yaml with optional: true |
| 6 | 2-minute boot delay | systemd-networkd-wait-online waiting for eth1 | Add optional: true to eth1 netplan config |
| 7 | Horizon 500 error after network creation | Stale memcached session from before networks existed | Restart memcached + clear browser session |
| 8 | Instance fails on compute02 | KVM/nested virt not enabled on compute02 VM | Enable CPU passthrough in hypervisor settings for compute02 |
| 9 | Floating IP unreachable | eth1 interfaces DOWN, OVS bridge not connected to physical network | Bring eth1 UP via netplan |
| 10 | Netplan permissions warning | File permissions too open | chmod 600 /etc/netplan/eth1.yaml |

Architecture Overview

Plain Text
Your Laptop
    | SSH / HTTP
    v
192.168.1.210-220 (Floating IPs on ext-net)
    | NAT/SNAT through virtual router
    v
10.0.0.0/24 (int-net, instance private network)
    | VXLAN tunnel between compute nodes
    v
os-compute01 / os-compute02 (KVM/libvirt via nova_libvirt)
    | iSCSI (for Cinder volumes)
    v
os-storage (Cinder LVM, cinder-volumes VG)
    | All API calls
    v
os-controller (Keystone, Nova, Neutron, Glance, Cinder, Heat APIs)
    | All network traffic
    v
os-network (Neutron L3, DHCP, Metadata, OVS)

What's Next

  • Breaking session -- kill a compute node, observe, recover
  • Zantu Cloud deployment -- real SaaS workload on the cluster
  • Juju / Charmed OpenStack -- Day 2 operations deep dive
  • Masakari -- automatic instance HA
  • HashiCorp Vault SSH CA -- Zero Trust access

Built for TSB
OpenStack 2025.2 Flamingo | Kolla-Ansible 21.0.0
February 2026
