Deploy OPNsense in Azure

Recently, I wanted to add an OPNsense firewall to my Azure environment to act as an NVA (Network Virtual Appliance) for a production network, giving me proper stateful inspection and NAT capabilities rather than relying solely on Azure’s built-in networking features. I came across the excellent dmauser/opnazure project on GitHub, which automates most of the heavy lifting, but since I wanted full control over my network layout I chose to deploy it manually.

In this post I’ll walk you through the entire process, including the VM deployment, OPNsense configuration, NSG rules, DNS, routing, and VNet peering.

Network Layout

Before jumping into commands, it helps to have a clear picture of the subnets involved. In my setup I have two subnets inside the OPNsense VNet:

| Subnet | CIDR | Purpose |
|---|---|---|
| vnet-02_snet_1 (untrusted) | 172.16.2.0/29 | WAN-facing NIC |
| vnet-02_snet_2 (trusted) | 172.16.2.8/29 | LAN-facing NIC |

Address assignments:

  • 172.16.2.4: WAN NIC of OPNsense (vm-fw-01_nic_wan)
  • 172.16.2.12: LAN NIC of OPNsense (vm-fw-01_nic_lan)

The production networks (your workload VNets) will route their Internet traffic through 172.16.2.12 (the trusted/LAN IP).

Prerequisites

Before starting, make sure you have:

  • Azure CLI installed and logged in
  • An existing resource group and VNet with your two subnets created
  • Accepted the FreeBSD marketplace image terms (required once per subscription):
az vm image terms accept --urn thefreebsdfoundation:freebsd-14_1:14_1-release-amd64-gen2-zfs:14.1.0

Step 1: Create the Network Interfaces

OPNsense needs two NICs – one for WAN (untrusted) and one for LAN (trusted). IP forwarding must be enabled on both so Azure doesn’t drop routed traffic.

az network nic create `
    --resource-group rg-prod-01 `
    --location swedencentral `
    --name vm-fw-01_nic_wan `
    --vnet-name vnet-02 `
    --subnet vnet-02_snet_1 `
    --ip-forwarding true

az network nic create `
    --resource-group rg-prod-01 `
    --location swedencentral `
    --name vm-fw-01_nic_lan `
    --vnet-name vnet-02 `
    --subnet vnet-02_snet_2 `
    --ip-forwarding true

Make sure to change the IP address assignment to static afterwards, or add --private-ip-address <ip> to the commands.
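If you created the NICs without --private-ip-address, you can switch the default IP configuration to static afterwards. A sketch using the addresses from this post (ipconfig1 is the default configuration name Azure assigns):

```powershell
az network nic ip-config update `
    --resource-group rg-prod-01 `
    --nic-name vm-fw-01_nic_wan `
    --name ipconfig1 `
    --private-ip-address 172.16.2.4

az network nic ip-config update `
    --resource-group rg-prod-01 `
    --nic-name vm-fw-01_nic_lan `
    --name ipconfig1 `
    --private-ip-address 172.16.2.12
```

Specifying an explicit private IP on the ip-config automatically flips the allocation method to static.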

Step 2: Deploy the FreeBSD VM

The opnazure project deploys OPNsense on top of a FreeBSD 14.1 image from The FreeBSD Foundation. The WAN NIC must be listed first in --nics – OPNsense will treat the first attached interface as WAN (vtnet0).

az vm create `
    --resource-group rg-prod-01 `
    --location swedencentral `
    --name vm-fw-01 `
    --image thefreebsdfoundation:freebsd-14_1:14_1-release-amd64-gen2-zfs:14.1.0 `
    --size Standard_B1ms `
    --nics vm-fw-01_nic_wan vm-fw-01_nic_lan `
    --admin-username azureuser `
    --admin-password "YourStrongPasswordHere" `
    --no-wait

Standard_B1ms is sufficient for lab or light production use. For heavier traffic consider Standard_B2s or larger.

Step 3: Install OPNsense via Custom Script Extension

Once the VM is running, the opnazure project provides a shell script and a base config.xml that bootstraps OPNsense silently. On your local computer, create a file called settings.json with the following content:

{
  "fileUris": [
    "https://raw.githubusercontent.com/dmauser/opnazure/master/scripts/configureopnsense.sh",
    "https://raw.githubusercontent.com/dmauser/opnazure/master/scripts/config.xml"
  ],
  "commandToExecute": "sh configureopnsense.sh https://raw.githubusercontent.com/dmauser/opnazure/master/scripts/ 26.1 2.15.0.1 TwoNics 172.16.2.8/29 1.1.1.1/32"
}

A quick breakdown of the commandToExecute arguments:

| Argument | Value | Description |
|---|---|---|
| Script URI base | https://raw.githubusercontent.com/dmauser/opnazure/master/scripts/ | Where to pull additional resources from |
| OPNsense version | 26.1 | OPNsense version to install |
| WAAgent version | 2.15.0.1 | Azure Linux Agent version |
| Scenario | TwoNics | Two-NIC deployment (WAN + LAN) |
| Trusted subnet | 172.16.2.8/29 | Your LAN subnet CIDR |
| Windows subnet | 1.1.1.1/32 | Placeholder (not used in this setup, but a value is required) |

Now apply the extension:

az vm extension set `
    --resource-group rg-prod-01 `
    --vm-name vm-fw-01 `
    --name CustomScriptForLinux `
    --publisher Microsoft.OSTCExtensions `
    --version 1.5 `
    --% --settings @settings.json

The installation takes around 10 minutes. You can monitor progress by checking whether port 443 becomes available on the VM’s public IP. Once it’s up, browse to https://<PublicIP> and log in with the credentials you configured during VM creation.
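If you'd rather not refresh the browser manually, you can poll the port from your workstation. A small sketch in PowerShell (Test-NetConnection is a built-in Windows PowerShell cmdlet):

```powershell
# Look up the VM's public IP, then wait until the web UI answers on 443
$ip = az vm show -d --resource-group rg-prod-01 --name vm-fw-01 --query publicIps -o tsv
while (-not (Test-NetConnection $ip -Port 443 -WarningAction SilentlyContinue).TcpTestSucceeded) {
    Start-Sleep -Seconds 30
}
Write-Host "OPNsense web UI is reachable at https://$ip"
```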

Step 4: Configure DNS in OPNsense

With OPNsense up, the first thing to configure is DNS so that both OPNsense itself and any clients routing through it can resolve names correctly.

WAN DNS (for OPNsense’s own lookups):

Go to System → Settings → General and set the DNS servers. I use OpenDNS (208.67.222.222 and 208.67.220.220) bound to the WAN gateway so that upstream lookups go out through the WAN interface.

Unbound DNS (for LAN clients):

Go to Services → Unbound DNS → General and enable the service. Make sure it is listening on the LAN interface only – you don’t want it exposed on WAN.

Query forwarding for production domains:

Go to Services → Unbound DNS → Query Forwarding. Disable Use System Nameservers and add a custom forwarder for your internal/production domains pointing to 168.63.129.16 (Azure’s internal DNS resolver). This ensures that any DNS queries for resources in your Azure VNets (e.g., private DNS zones or internal hostnames) are resolved correctly via Azure’s DNS, while all other queries continue to go to OpenDNS.
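Once Unbound is up, you can sanity-check the split from a client behind OPNsense. A quick sketch with dig (myvm.internal.contoso.com is a placeholder for one of your internal names):

```shell
# Internal name - should be resolved via Azure's 168.63.129.16
dig @172.16.2.12 myvm.internal.contoso.com

# Public name - should be forwarded to OpenDNS
dig @172.16.2.12 opnsense.org
```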

Step 5: Add Static Routes

OPNsense needs to know how to reach your production networks via the LAN gateway. Go to System → Routes → Configuration and add a static route for each production network CIDR, with the LAN gateway (172.16.2.9) as the next hop.

Important: Do not add a route for the OPNsense VNet’s own subnets (172.16.2.0/29 and 172.16.2.8/29) – those are directly connected and adding a static route for them will cause issues.

Step 6: Configure NAT

To allow production network clients to reach the Internet through OPNsense, you need outbound NAT, and the reason is rooted in how Azure handles public IPs. In a typical on-premises setup your firewall owns its public IP natively and can route return traffic freely. In Azure, the public IP is not assigned to the VM's NIC directly – the Azure fabric maps it to the WAN NIC's private IP (172.16.2.4).

When a production VM sends a packet out through OPNsense without NAT, the packet leaves the WAN interface with the production VM's private IP as its source. The fabric has no public IP mapping for that address, so return traffic cannot find its way back – the private source is simply not routable from the Internet's perspective. By translating the source address of outbound packets to OPNsense's WAN IP, we ensure that all return traffic arrives at the WAN NIC, where OPNsense can forward it to the originating production VM via its LAN interface.

Go to Firewall → NAT → Outbound, set the mode to Hybrid Outbound NAT, and add a rule that matches traffic sourced from your production networks (e.g., 10.0.0.0/8 or whatever your production CIDR is), leaving via the WAN interface, and translating the source to the WAN address. This ensures production traffic egresses with OPNsense’s public IP.

Step 7: Add Firewall Rules

Go to Firewall → Rules → LAN and add a rule to allow traffic from your production networks to any destination. OPNsense is stateful, so return traffic will be permitted automatically. You can tighten this rule later once everything is working.

Step 8: VNet Peering

To connect your production VNet to the OPNsense VNet, you need to set up VNet peering. Note that subnet-scoped peering is not fully supported in Azure yet, so the peering must be created at the VNet level. Create a bidirectional peering between the OPNsense VNet (vnet-02) and each of your production VNets. Make sure that Allow forwarded traffic is enabled on both sides – in particular, the production VNet must accept return traffic forwarded by OPNsense, or replies will be dropped.
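The peering itself can be scripted. A sketch for one production VNet (vnet-01 is a placeholder name, and both VNets are assumed to live in the same resource group):

```powershell
az network vnet peering create `
    --resource-group rg-prod-01 `
    --name vnet-02-to-vnet-01 `
    --vnet-name vnet-02 `
    --remote-vnet vnet-01 `
    --allow-vnet-access `
    --allow-forwarded-traffic

az network vnet peering create `
    --resource-group rg-prod-01 `
    --name vnet-01-to-vnet-02 `
    --vnet-name vnet-01 `
    --remote-vnet vnet-02 `
    --allow-vnet-access `
    --allow-forwarded-traffic
```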

Step 9: Configure User-Defined Routes (UDR) in Production VNets

For traffic from your production VMs to actually flow through OPNsense, you need to override Azure’s default routing. Create a Route Table and add a route:

  • Address prefix: 0.0.0.0/0
  • Next hop type: Virtual appliance
  • Next hop IP address: 172.16.2.12 (the OPNsense LAN IP)

Associate this Route Table with every subnet in your production VNets that should route through OPNsense.
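In CLI form, the whole step looks roughly like this (rt-prod-01, vnet-01, and its subnet name are placeholders for your own resources):

```powershell
# Create the route table
az network route-table create `
    --resource-group rg-prod-01 `
    --location swedencentral `
    --name rt-prod-01

# Send all Internet-bound traffic to the OPNsense LAN IP
az network route-table route create `
    --resource-group rg-prod-01 `
    --route-table-name rt-prod-01 `
    --name default-via-opnsense `
    --address-prefix 0.0.0.0/0 `
    --next-hop-type VirtualAppliance `
    --next-hop-ip-address 172.16.2.12

# Associate the route table with a production subnet
az network vnet subnet update `
    --resource-group rg-prod-01 `
    --vnet-name vnet-01 `
    --name vnet-01_snet_1 `
    --route-table rt-prod-01
```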

Step 10: Configure Network Security Groups

NSG configuration is where things get a little nuanced. You need NSGs on both the OPNsense VNet and the production VNets, and they need to be crafted carefully to avoid asymmetric routing.

OPNsense VNet NSG

Inbound rules:

| Priority | Source | Destination | Port | Action | Description |
|---|---|---|---|---|---|
| 100 | Internet | 172.16.2.4 (WAN IP) | Any | Allow | Allow inbound internet traffic – OPNsense filters this |
| 110 | VirtualNetwork | 172.16.2.12 (LAN IP) | Any | Allow | Allow management traffic from peered VNets |
| 200 | VirtualNetwork | Internet | Any | Allow | Allow packets arriving at the NSG that have Internet as their destination tag |
| 4096 | Any | Any | Any | Deny | Deny everything else |

Why the third rule? This one is counterintuitive. When traffic from a production VM is routed to OPNsense via the UDR, the packet arrives at the NSG on the LAN NIC. Azure evaluates the destination of the original packet (e.g., a public IP on the Internet), which it classifies as Internet. Without this rule, the NSG would block it before OPNsense even sees it.

Outbound rules:

| Priority | Source | Destination | Port | Action | Description |
|---|---|---|---|---|---|
| 100 | 172.16.2.4 (WAN IP) | Internet | Any | Allow | Allow outbound traffic from WAN to internet |
| 110 | 172.16.2.12 (LAN IP) | VirtualNetwork | Any | Allow | Allow LAN to VNets (if needed) |
| 200 | Any | Any | Any | Deny | Deny everything else – prevents asymmetric routing |

The explicit outbound deny is important. Without it, traffic could potentially exit through an unexpected path, breaking connection state in OPNsense.
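As an example of how these rules translate to the CLI, here is the counterintuitive inbound rule 200 (nsg-vnet-02 is a placeholder for whatever your NSG is called):

```powershell
# Allow packets routed to OPNsense via UDR whose original destination is Internet
az network nsg rule create `
    --resource-group rg-prod-01 `
    --nsg-name nsg-vnet-02 `
    --name allow-routed-internet-inbound `
    --priority 200 `
    --direction Inbound `
    --access Allow `
    --protocol "*" `
    --source-address-prefixes VirtualNetwork `
    --destination-address-prefixes Internet `
    --destination-port-ranges "*"
```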

Production VNets NSG

Inbound rules:

| Priority | Source | Destination | Port | Action | Description |
|---|---|---|---|---|---|
| 100 | 172.16.2.12 (LAN IP) | VirtualNetwork | Any | Allow | Allow LAN to VNets (if needed) |
| 4096 | Any | Any | Any | Deny | Deny everything else |

Outbound rules:

| Priority | Source | Destination | Port | Action | Description |
|---|---|---|---|---|---|
| 100 | VirtualNetwork | 172.16.2.12 (LAN IP) | Any | Allow | Allow traffic toward OPNsense for management |
| 110 | VirtualNetwork | Internet | Any | Allow | Allow outbound – this traffic will be routed to OPNsense via UDR |
| 4096 | Any | Any | Any | Deny | Deny everything else |

Wrapping Up

Once all of this is in place, your production VMs should be able to reach the Internet through OPNsense, with all traffic inspected and NATed via the WAN interface. You can verify end-to-end by SSH-ing into a production VM and running curl https://ifconfig.me – the returned IP should be OPNsense's public IP.

From here, you can take advantage of OPNsense’s rich feature set – IDS/IPS via Suricata, traffic shaping, VPN (WireGuard or OpenVPN), and much more.

Happy firewalling!
