Ingress, Gateway Nodes & DDNS
Expose workloads to the internet with gateway nodes, ingress rules, HTTPS, and dynamic DNS
Overview
Most self-hosted installations sit behind NAT with a dynamic public IP. Exposing a workload to the internet traditionally means configuring port forwarding, setting up a reverse proxy, obtaining TLS certificates, and keeping DNS in sync when your IP changes.
PodWarden handles all of this:
- Gateway nodes — designate a host as the public entry point for traffic
- Ingress rules — map domains and paths to workloads or manual backends
- DDNS — automatically update DNS records when your public IP changes
- Hub subdomains — get a public URL from PodWarden Hub with one click
Gateway Nodes
A gateway node is any host in your fleet that has a public IP (or is port-forwarded from your router). For K8s clusters, PodWarden creates standard Kubernetes Ingress resources that the cluster's ingress controller (e.g. Traefik in K3s) picks up automatically.
Enabling a Gateway
- Go to Hosts and select the host
- Toggle Enable as Gateway Node in the gateway section
- PodWarden detects the host's public IP automatically (via `curl ifconfig.me` over SSH)
- You can also set the public IP manually if auto-detection doesn't work (e.g. behind a load balancer)
The gateway role adds "gateway" to the host's roles and stores the detected public IP.
Requirements
- The host must be reachable from the internet on ports 80 and 443 (for Let's Encrypt and HTTPS)
- If behind a router, configure port forwarding for 80/443 to the gateway host
- For K3s clusters: Traefik (built-in ingress controller) and ServiceLB handle traffic routing automatically
How It Works (K3s)
When you apply an ingress rule, PodWarden creates Kubernetes resources:
- Ingress — tells Traefik to route traffic for the domain to a Service
- Service — ClusterIP service pointing to the workload (or a headless Service + Endpoints for manual backends)
Traefik automatically:
- Obtains and renews Let's Encrypt certificates
- Terminates TLS
- Redirects HTTP to HTTPS
- Reverse-proxies requests to the correct backend
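As a concrete sketch, the generated pair of resources for a managed backend might look roughly like this (resource names, labels, and the certresolver name are illustrative assumptions, not PodWarden's actual conventions):

```yaml
# Illustrative sketch only — names and labels are assumptions.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-example-com
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls.certresolver: letsencrypt  # assumed resolver name
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-example-com
                port:
                  number: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: app-example-com
spec:
  type: ClusterIP
  selector:
    app: my-workload        # matches the workload's pod labels
  ports:
    - port: 3000
      targetPort: 3000
```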
Ingress Rules
Ingress rules map a domain (and optional path) to a workload or manual backend running in your fleet.
Creating an Ingress Rule
- Go to Ingress in the sidebar
- Click New Rule
- Fill in:
  - Domain — the FQDN (e.g. `swift17.vxloc.com` or `app.example.com`). Automatically normalized to lowercase.
  - Path — the URL path prefix (default `/`). Must start with `/`. Use paths to route different parts of a domain to different backends (see Multi-Path Routing).
  - Backend Type — `Managed (K8s workload)` or `Manual (IP:port)`
  - Deployment — which running workload to route to (managed type only)
  - Backend Address — address of the target service (manual type only). Accepts either an IP or hostname, with optional port: `192.168.1.50:8080`, `myhost:8080`, or `myhost` (uses the backend port). Hostnames are resolved automatically — see Hostname Resolution.
  - Backend Port — the container port to forward to (managed type)
  - Gateway Host — which gateway node handles this rule
  - TLS — enabled by default (Let's Encrypt)
  - Backend uses HTTPS — enable if the backend serves HTTPS instead of HTTP (see HTTPS Backends)
  - Backend Timeout — optional custom timeout in seconds for slow backends
  - Notes — optional freeform notes
- Click Create
The combination of domain + path must be unique across all rules.
Backend Types
Managed (K8s Workload)
Routes traffic to a containerized workload running in your K8s cluster. PodWarden creates a ClusterIP Service that selects the workload's pods. Use this when the backend is a PodWarden-deployed container.
Manual (IP:Port)
Routes traffic to any reachable IP address or hostname — VMs, bare-metal servers, other services on your network. PodWarden creates a headless Service and Endpoints object pointing to the external address. Use this for anything not managed by PodWarden, such as:
- A web app running on a VM (`192.168.1.50:8080`)
- A NAS web interface (`nas.local:8080`)
- A service on another host in your Tailscale mesh (`myserver:3000`)
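For a manual backend, the headless Service and Endpoints pair might look roughly like this (names are illustrative; the Endpoints object must use the resolved IP and share the Service's name):

```yaml
# Illustrative sketch — names are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: nas-backend
spec:
  clusterIP: None          # headless: no selector, endpoints supplied manually
  ports:
    - port: 8080
---
apiVersion: v1
kind: Endpoints
metadata:
  name: nas-backend        # must match the Service name
subsets:
  - addresses:
      - ip: 192.168.1.50   # resolved IP of the manual backend
    ports:
      - port: 8080
```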
HTTPS Backends
Some self-hosted applications serve HTTPS-only on their container ports — they have no HTTP fallback. By default, Traefik connects to backends via plain HTTP, which causes a 502 Bad Gateway when the backend expects HTTPS.
Enable the Backend uses HTTPS toggle when creating or editing an ingress rule to tell Traefik to connect to the backend via HTTPS.
When enabled, PodWarden:
- Adds the `traefik.ingress.kubernetes.io/service.serversscheme: https` annotation to the K8s Service
- Creates a Traefik ServersTransport CRD with `insecureSkipVerify: true` (since backend certs are typically self-signed)
- References the ServersTransport from the Service annotation
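A sketch of what those objects could look like (resource names are illustrative; on older Traefik releases the CRD group is `traefik.containo.us` rather than `traefik.io`):

```yaml
# Illustrative sketch — names are assumptions; annotation keys are Traefik's.
apiVersion: traefik.io/v1alpha1
kind: ServersTransport
metadata:
  name: skip-verify
  namespace: default
spec:
  insecureSkipVerify: true   # backend certs are typically self-signed
---
apiVersion: v1
kind: Service
metadata:
  name: kasm-backend
  annotations:
    traefik.ingress.kubernetes.io/service.serversscheme: https
    # reference format is <namespace>-<name>@kubernetescrd
    traefik.ingress.kubernetes.io/service.serverstransport: default-skip-verify@kubernetescrd
spec:
  type: ClusterIP
  selector:
    app: kasm
  ports:
    - port: 443
      targetPort: 443
```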
Common apps that require this setting:
| Application | Port | Notes |
|---|---|---|
| Kasm Workspaces | 3000, 443 | Both ports serve HTTPS-only |
| Portainer | 9443 | HTTPS management port |
| Proxmox VE | 8006 | Web UI is HTTPS-only |
| UniFi Controller | 8443 | HTTPS management interface |
| Synology DSM | 5001 | HTTPS port |
| TrueNAS | 443 | Web UI when HTTPS is enforced |
How to tell if you need it: If your ingress rule deploys successfully but HTTP checks return 502, and the backend pod is running and healthy, the backend likely serves HTTPS-only. Enable "Backend uses HTTPS" and re-deploy the rule.
In the ingress rules table, rules with backend HTTPS enabled show a BE label next to the TLS icon.
Hostname Resolution
Manual backend addresses can use hostnames instead of raw IPs. PodWarden resolves them in two steps:
- Local DNS — standard system DNS resolution
- SSH fallback — if local DNS fails, PodWarden SSHes to the gateway host and runs `getent hosts <hostname>` there
The SSH fallback handles names that only resolve on the gateway's network, such as:
- Tailscale MagicDNS names (e.g. `myserver.h`)
- Entries in the gateway host's `/etc/hosts`
- Split-horizon DNS records
Kubernetes Endpoints require IP addresses, so hostnames are always resolved to IPs before creating K8s resources.
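A rough shell equivalent of the two-step lookup (a sketch of the assumed logic; `gateway` is a placeholder SSH alias for your gateway host):

```shell
# Sketch: resolve a backend hostname to an IP, locally first, then via the gateway.
resolve_backend() {
  host="$1"
  # Step 1: local DNS / NSS lookup
  ip=$(getent hosts "$host" | awk '{ print $1; exit }')
  # Step 2: fall back to resolving on the gateway host over SSH
  if [ -z "$ip" ]; then
    ip=$(ssh -o BatchMode=yes -o ConnectTimeout=3 gateway \
          getent hosts "$host" | awk '{ print $1; exit }')
  fi
  printf '%s\n' "$ip"
}

resolve_backend localhost
```

For `localhost`, step 1 succeeds and the SSH fallback never runs; a Tailscale-only name would fail locally and be retried on the gateway.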
Multi-Path Routing
You can route different URL paths of the same domain to different backends by creating multiple ingress rules with different paths:
| Domain | Path | Backend |
|---|---|---|
| app.example.com | /api | API server on port 8000 |
| app.example.com | / | Frontend on port 3000 |
Traefik matches the most specific path first, so /api takes priority over / for requests starting with /api.
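Since the priority rule reduces to longest-prefix-first, the routing decision can be illustrated with a toy matcher (backend names taken from the table above; the `case` ordering stands in for Traefik's specificity sort):

```shell
# Toy illustration of prefix matching: the more specific path wins.
pick_backend() {
  case "$1" in
    /api*) echo "api-server:8000" ;;   # listed first, like the longer prefix
    /*)    echo "frontend:3000" ;;
  esac
}

pick_backend /api/users    # api-server:8000
pick_backend /index.html   # frontend:3000
```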
Deploying Ingress Rules
After creating rules, you need to deploy them to create the actual Kubernetes resources:
- Deploy (play button) — deploys a single rule, creating its Ingress, Service, and Endpoints resources in the cluster
- Deploy All — deploys all non-disabled rules for a gateway host at once. Reports how many succeeded vs failed.
On successful deploy, the rule's status is set to active and the deploy timestamp is recorded.
Health Checks
PodWarden provides three types of health checks to verify the full chain from DNS to backend. Each can be run individually per rule, or all at once via Test All.
DNS Check
Verifies that the domain's DNS A record points to your gateway's public IP.
- Click the magnifying glass button on any ingress rule
- PodWarden resolves the domain and compares it to the gateway's public IP
- Status updates to:
- Active — DNS resolves to the gateway's public IP
- DNS Mismatch — domain resolves to a different IP (amber warning)
- Error — DNS resolution failed entirely
- Pending — gateway host has no public IP configured yet
DNS checks run automatically when a rule is first created. They are advisory — a mismatch warns you about a problem but doesn't block deploying.
Cloudflare-proxied domains: If your domain uses Cloudflare's proxy (orange cloud), DNS will resolve to Cloudflare's edge IPs (e.g. `104.21.x.x`) instead of your gateway's IP. This causes `dns_mismatch` status, but traffic still works because Cloudflare forwards it to your origin. See Cloudflare-Proxied Domains for details.
Automatic DNS Record Creation
When you create an ingress rule for a domain that doesn't have a DNS record yet, PodWarden can automatically create the A record in Cloudflare:
- PodWarden checks if the domain resolves
- If it doesn't resolve, PodWarden looks up the domain's parent zone in the Domains page
- If the zone is Cloudflare-managed (has a zone ID and API token), PodWarden creates an A record pointing to the gateway's public IP
- DNS is re-checked after a brief propagation delay
This eliminates the manual step of creating DNS records before setting up ingress. The auto-created records are:
- Type: A record
- Name: the full domain from the ingress rule
- Content: the gateway host's public IP
- TTL: Auto
- Proxied: No (direct, not through Cloudflare proxy)
Auto-creation only works for domains whose parent zone is registered in PodWarden's Domains page with valid Cloudflare credentials. For domains managed outside Cloudflare, create the A record manually.
HTTP Health Check
Verifies that the domain is actually serving traffic end-to-end.
- Click the globe button on any ingress rule
- PodWarden makes an HTTPS request (or HTTP if TLS is disabled) to `https://<domain>/<path>` and reports:
  - Status code (e.g. 200, 301, 404, 502)
  - Response time in milliseconds
- Results appear inline — green badge for success (< 400), red for server errors (>= 500)
The HTTP check follows redirects and skips TLS certificate verification, so it works even during initial Let's Encrypt provisioning. Timeout is 10 seconds.
This verifies the full chain: DNS resolution → network routing → gateway → Traefik → backend service.
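If you want to reproduce the check by hand, a curl invocation with the same described behavior looks like this (a sketch; the exact flags PodWarden uses are not specified):

```shell
# Manual equivalent of the HTTP health check:
# -s silent, -k skip TLS verification, -L follow redirects, 10s timeout.
check_http() {
  curl -skL -o /dev/null --max-time 10 \
    -w 'status=%{http_code} time=%{time_total}s\n' "$1"
}

# Example: check_http https://app.example.com/
```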
TLS Certificate Check
Verifies that the domain has a valid TLS certificate and shows its details.
- Click the lock button on any TLS-enabled ingress rule
- PodWarden connects to the domain on port 443 over TLS and inspects the certificate
- Results show:
- Issuer — who issued the certificate (e.g. "Let's Encrypt", "Google Trust Services")
- Expiry — when the certificate expires
- Days remaining — shown as a badge (green if > 30 days, red if expiring soon)
This is useful for:
- Confirming Let's Encrypt certificates have been issued after initial deploy
- Monitoring certificate expiry across all your domains
- Identifying domains still using old or wrong certificates
The lock button only appears for rules with TLS enabled.
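The same details can be pulled manually with openssl (a sketch of the equivalent check; `tls_info` is just a local helper name):

```shell
# Manual equivalent of the TLS check: show issuer and expiry for a domain.
tls_info() {
  host="$1"; port="${2:-443}"
  echo | openssl s_client -connect "$host:$port" -servername "$host" 2>/dev/null \
    | openssl x509 -noout -issuer -enddate
}

# Example: tls_info app.example.com
```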
Test All
The Test All button in the header bar runs DNS, HTTP, and TLS checks on all active, TLS-enabled rules in one click:
- Click Test All at the top of the Ingress page
- PodWarden runs all three checks (DNS, HTTP, TLS) for every non-disabled rule
- Checks run in sequence per rule but continue even if individual rules fail
- The table refreshes when all checks are complete
Use Test All:
- After deploying multiple rules at once
- After changing your public IP or DNS configuration
- As a periodic health check across all your domains
Ingress Statuses
| Status | Badge | Meaning | Common Causes |
|---|---|---|---|
| active | Green | Domain resolves correctly, ingress deployed and working | DNS check passed — domain points to gateway |
| pending | Gray | Rule created but not yet verified | New rule; gateway has no public IP set; DNS not checked yet |
| dns_mismatch | Amber | Domain resolves to a different IP than the gateway | DNS not updated yet; domain points to old server; Cloudflare proxy enabled (expected) |
| error | Red | Deploy or DNS check failed | Backend unreachable; hostname can't resolve; K8s apply failed; DNS lookup failed |
| disabled | Default | Rule is inactive | Manually disabled by user; stale backend taken offline |
Status is updated by DNS checks, not by deploy. A rule can be successfully deployed but still show pending if DNS hasn't been checked yet.
Cloudflare-Proxied Domains
If your domain uses Cloudflare as a DNS proxy (the orange cloud icon in Cloudflare dashboard), traffic flows through Cloudflare's edge network before reaching your gateway:
Browser → Cloudflare Edge → Your Gateway → Backend

This has a few implications for PodWarden:
DNS Check shows dns_mismatch — This is expected. The domain resolves to Cloudflare IPs (e.g. 104.21.x.x, 172.67.x.x) rather than your gateway's IP. Traffic still works because Cloudflare forwards it to your origin server. You can safely ignore this status for Cloudflare-proxied domains.
TLS Check shows Cloudflare's issuer — The certificate PodWarden sees is Cloudflare's edge certificate (typically issued by "Google Trust Services"), not the Let's Encrypt certificate on your gateway. Traefik still obtains a Let's Encrypt cert for the origin connection between Cloudflare and your gateway.
HTTP Health Check works normally — The request goes through Cloudflare just like real user traffic, so this is still a valid end-to-end check.
Tip: If you don't need Cloudflare's CDN or DDoS protection for a domain, switch it to DNS-only mode (gray cloud) in Cloudflare. This lets Let's Encrypt validate directly and gives you accurate DNS check results in PodWarden.
Dynamic DNS (DDNS)
DDNS automatically updates DNS records when your public IP changes. PodWarden checks your IP every 5 minutes and updates all configured providers.
Supported Providers
| Provider | Use Case |
|---|---|
| Cloudflare | Domains managed via Cloudflare DNS (most common) |
| DuckDNS | Free dynamic DNS service |
| Webhook | Custom HTTP endpoint for any DNS provider |
| Hub | PodWarden Hub-managed subdomains (easiest setup) |
Setting Up DDNS
- Go to Settings → DDNS
- The current public IP is displayed at the top
- Click Add Config to create a DDNS configuration
Cloudflare
- Select Cloudflare as the provider
- Enter your Zone ID (from Cloudflare dashboard → domain → Overview → API section)
- Enter a Cloudflare API Token with `Zone:DNS:Edit` permissions
- Specify the domain(s) to update
- Important: Set DNS records to DNS-only mode (not proxied) in Cloudflare so Let's Encrypt validation works
DuckDNS
- Select DuckDNS as the provider
- Enter your DuckDNS token
- Specify your DuckDNS subdomain(s)
Webhook
- Select Webhook as the provider
- Configure the URL, HTTP method, headers, and body template
- Use `{{ip}}` as a placeholder in the body template for the current IP
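As an illustration, expanding `{{ip}}` into a JSON body template might look like this (the endpoint, record name, and field names are hypothetical):

```shell
# Hypothetical webhook config: {{ip}} in the body template is replaced
# with the detected public IP before the request is sent.
TEMPLATE='{"record":"home.example.com","value":"{{ip}}"}'
IP="203.0.113.7"   # example public IP (documentation range)
BODY=$(printf '%s' "$TEMPLATE" | sed "s/{{ip}}/$IP/g")
printf '%s\n' "$BODY"
# Then something like:
# curl -X POST -H 'Content-Type: application/json' -d "$BODY" <webhook URL>
```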
Hub (Managed Subdomains)
The easiest option — no DNS provider setup needed. See Hub DDNS Subdomains.
- Connect to PodWarden Hub (Settings → Hub)
- Allocate a subdomain from the Hub DDNS section
- PodWarden automatically keeps the IP updated
How the Update Loop Works
- Every 5 minutes, PodWarden detects your current public IP
- If the IP has changed since the last check, it updates all enabled DDNS configs
- Each config's status shows:
- Active — last update succeeded
- Error — last update failed (check `last_error` for details)
- Pending — not yet updated
- Disabled — manually disabled
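The per-cycle decision reduces to a simple IP comparison, roughly (a sketch of the assumed logic, not the actual implementation):

```shell
# One iteration of the assumed DDNS loop: update providers only when
# the detected public IP differs from the last seen value.
ddns_tick() {
  current="$1"; last="$2"
  if [ -n "$current" ] && [ "$current" != "$last" ]; then
    echo "update"    # push the new IP to all enabled DDNS configs
  else
    echo "skip"      # nothing changed; wait for the next 5-minute cycle
  fi
}

ddns_tick 203.0.113.7 203.0.113.6   # update
ddns_tick 203.0.113.7 203.0.113.7   # skip
```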
Linking DDNS to a Gateway
Each DDNS config can optionally be linked to a specific gateway host. This is useful when you have multiple public IPs (e.g. different ISPs or locations).
Hub DDNS Subdomains
PodWarden Hub provides managed DDNS subdomains — the fastest way to get a public URL. See Hub DDNS Subdomains for the full guide.
Quick overview:
- Connect to Hub (Settings → Hub)
- In the Hub DDNS section, click Allocate
- Pick a domain (e.g. `vxloc.com`) and optionally enter a custom slug
- Get a URL like `swift17.vxloc.com` immediately
- Create an ingress rule pointing to your workload
- PodWarden keeps the DNS updated automatically
Tier Limits
| Feature | Free | Pro | Business | Enterprise |
|---|---|---|---|---|
| DDNS subdomains | 1 | 50 | Unlimited | Unlimited |
| Custom slugs | No | Yes | Yes | Yes |
| Bring your own domain | No | No | Yes | Yes |
End-to-End Example
Here's how to expose a workload to the internet from scratch:
1. Set up a gateway
- Go to Hosts → select your edge server → enable Gateway Node
- Verify the public IP is detected (or set it manually)
- Ensure ports 80/443 are forwarded to this host
2. Get a domain
Option A: Hub subdomain (easiest)
- Go to Settings → Hub → DDNS Subdomains → Allocate
- Pick a domain and get your URL
Option B: Your own domain
- Go to Settings → DDNS → Add Config
- Configure Cloudflare, DuckDNS, or webhook for your domain
- Point your domain's A record to your gateway's public IP
3. Create an ingress rule
- Go to Ingress → New Rule
- Enter your domain, select the deployment (or enter a manual backend address), set the backend port
- Select your gateway host
- If the backend serves HTTPS (e.g. Kasm, Portainer, Proxmox), enable Backend uses HTTPS
- Click Create
4. Deploy and verify
- Click Deploy (play button) to create the Kubernetes Ingress and Service
- Click Check DNS (magnifying glass) to verify the domain resolves to your gateway
- Click Check HTTP (globe) to verify the site is alive and responding
- Click Check TLS (lock) to confirm the Let's Encrypt certificate is issued
- Visit your domain in a browser — you should see your workload with HTTPS
Troubleshooting
DNS Mismatch
Symptom: Status shows dns_mismatch after running a DNS check.
Causes and fixes:
- DNS not updated yet — If you just changed your A record, wait for DNS propagation (can take minutes to hours depending on TTL). Run the DNS check again later.
- DDNS hasn't run yet — The DDNS updater runs every 5 minutes. If your IP just changed, wait for the next cycle.
- Cloudflare proxy enabled — If the domain uses Cloudflare's proxy (orange cloud), DNS resolves to Cloudflare IPs, not your gateway. This is expected — see Cloudflare-Proxied Domains. Traffic still works.
- Wrong DNS record — Check your DNS provider to make sure the A record points to the correct gateway public IP.
- Gateway public IP not set — Go to Hosts → your gateway → verify the public IP field is correct.
502 Bad Gateway
Symptom: HTTP check returns 502 or you see 502 in the browser after deploying.
Causes and fixes:
- Backend serves HTTPS, not HTTP — This is the most common cause for apps like Kasm, Portainer, Proxmox, and UniFi Controller. Traefik connects to backends via HTTP by default — if the backend only speaks HTTPS, it rejects the connection. Fix: Edit the ingress rule, enable Backend uses HTTPS, and re-deploy. See HTTPS Backends.
- Backend service is down — Verify the backend is running and listening on the expected port. For manual backends, check the target host: `curl http://<backend-ip>:<port>`.
- Wrong backend port — Double-check the port number in the ingress rule matches what the backend actually listens on.
- Backend not reachable from the cluster — The K8s node must be able to reach the backend IP. SSH to the gateway host and test connectivity: `curl http://<backend-ip>:<port>`.
- Hostname resolution failed during deploy — If a hostname-based backend address can't be resolved, deploy fails. Check that the hostname is resolvable from either the PodWarden API server or the gateway host.
503 Service Unavailable
Symptom: Traefik returns 503.
Causes and fixes:
- Ingress rule not deployed — Click Deploy to create the K8s resources. The rule may have been created but not yet deployed.
- K8s Endpoints missing or empty — For manual backends, the Endpoints object may not have been created. Redeploy the rule.
- Service port mismatch — The K8s Service port must match the Ingress backend port. Redeploying the rule regenerates both.
Certificate Not Issued
Symptom: Browser shows certificate error or TLS check shows no issuer.
Causes and fixes:
- DNS not pointing to gateway — Let's Encrypt validates via HTTP-01 challenge, which requires the domain to resolve to your gateway. Fix DNS first.
- Ports 80/443 not forwarded — Let's Encrypt needs port 80 for the challenge. Verify port forwarding in your router.
- Rate limited — Let's Encrypt has rate limits. If you see rate limit errors in Traefik logs, wait an hour and try again. Avoid repeatedly deploying/undeploying rules for the same domain.
- Cloudflare proxy blocking challenge — If Cloudflare is proxying the domain, make sure Cloudflare's SSL/TLS mode is set to "Full" or "Full (strict)" so the HTTP-01 challenge can reach Traefik.
- Check Traefik logs — On the K3s control plane: `kubectl logs -n kube-system -l app.kubernetes.io/name=traefik --tail=100` to see ACME errors.
Hostname Resolution Failures
Symptom: Deploy fails with "Cannot resolve hostname 'myhost' to an IP address".
Causes and fixes:
- Hostname not in DNS — The hostname must be resolvable via standard DNS or from the gateway host's `/etc/hosts`.
- Tailscale/mesh not running on gateway — If using Tailscale MagicDNS names, ensure Tailscale is running on the gateway host so the SSH fallback can resolve the name.
- SSH to gateway failed — PodWarden SSHes to the gateway host for fallback resolution. Verify SSH connectivity to the gateway host from the PodWarden API server.
- Use an IP instead — If hostname resolution is unreliable, use the backend's IP address directly.
Deploy Fails
Symptom: Clicking Deploy shows an error.
Causes and fixes:
- Gateway host not in a K8s cluster — The gateway must be a node in a K8s cluster. Verify the host is joined to a cluster.
- Kubeconfig unavailable — PodWarden fetches the kubeconfig from the cluster's control plane via SSH. Check that the control plane host is reachable.
- K8s API unreachable — The K8s API server must be reachable from where PodWarden runs `kubectl apply`. Check cluster health.
- Namespace doesn't exist — PodWarden deploys to the workload's namespace or the cluster's default namespace. Ensure it exists.
Backend Unreachable After Deploy
Symptom: Rule is deployed and DNS is correct, but the site doesn't load.
Diagnostic steps:
- Check DNS — Run the DNS check to confirm the domain resolves to your gateway IP
- Check HTTP — Run the HTTP health check to see the status code and response time
- Check TLS — Run the TLS check to verify the certificate is valid
- Test from gateway — SSH to the gateway host and curl the backend directly: `curl http://<backend-ip>:<port>`
- Check Traefik — Look at Traefik logs for routing errors: `kubectl logs -n kube-system -l app.kubernetes.io/name=traefik --tail=100`
- Check K8s resources — Verify Ingress and Service exist: `kubectl get ingress,svc -l app.kubernetes.io/managed-by=podwarden`
Stale Rules
If a backend service has been decommissioned, disable the ingress rule rather than leaving it active. Disabled rules are skipped by Deploy All and Test All. This also prevents Traefik from repeatedly trying to obtain certificates for domains with no DNS.
Best Practices
- One gateway per location. If you have servers in multiple locations, designate one gateway per location and create ingress rules pointing to the local gateway.
- Use Hub subdomains for homelabs. No DNS provider setup, no API tokens — just connect to Hub and allocate.
- Set Cloudflare to DNS-only mode for domains where you don't need Cloudflare's CDN. This gives you accurate DNS check results and lets Let's Encrypt validate directly. Keep the proxy enabled only when you want Cloudflare's DDoS protection or CDN features.
- Enable "Backend uses HTTPS" for HTTPS-only apps. Apps like Kasm, Portainer, and Proxmox serve HTTPS on their container ports. Without this setting, you'll get 502 errors because Traefik tries to connect via HTTP. Check the HTTPS Backends table for common apps that need this.
- Run Test All periodically. After any infrastructure changes (IP changes, DNS updates, backend moves), use Test All to quickly verify all domains are healthy.
- Check DNS after IP changes. The DDNS updater runs every 5 minutes, but DNS propagation can take longer. If you see `dns_mismatch` status, wait a few minutes and check again.
- Deploy All after bulk changes. Use the Deploy All button after creating or modifying multiple ingress rules to deploy everything at once.
- Use TLS checks to monitor certificate expiry. Run TLS checks regularly or after deploying new rules to confirm certificates are issued and not close to expiring.
- Disable stale rules. When a backend is decommissioned, disable the ingress rule instead of leaving it active. This prevents unnecessary certificate requests and keeps Test All results clean.
- Use IPs for manual backends when possible. IP addresses are more reliable than hostnames since they don't depend on DNS resolution. Use hostnames only when the IP is dynamic or the service is on a mesh network (e.g. Tailscale).