Managing 8 Domains with Terraform and Cloudflare
I run 8 domains on Cloudflare. Portfolio sites, side projects, SaaS products, things I’m playing with. Each one has DNS records, zone settings, and varying combinations of Workers KV, D1 databases, Queues, R2 buckets, Pages projects, and an AI Gateway.
At two domains, the Cloudflare dashboard is fine. Click around, configure things, move on. At eight, I started forgetting what I’d configured where. “Did I set up DMARC on that domain? Is HSTS on? Which KV namespace is FormRecap using again?” The dashboard became a liability — not because it’s bad, but because my memory is.
Terraform fixed that. Everything is in code, version controlled, and recoverable. If I get hit by a bus (or more realistically, forget what I did three months ago), the repo has the answer.
What’s Managed (And What Isn’t)
The Terraform repo handles account-level infrastructure — the stuff that sits underneath application deployments:
| Resource | Count |
|---|---|
| Zone settings (SSL, TLS, HSTS, Brotli, HTTP/3) | 98 |
| DNS records (A, CNAME, MX, TXT, DKIM, DMARC) | 31 |
| KV namespaces | 5 |
| D1 databases | 2 |
| Worker queues | 3 |
| R2 buckets | 3 |
| Pages projects | 3 |
| Vectorize indexes | 1 |
| AI Gateway | 1 |
Worker scripts and secrets are deliberately not managed here. Those live in each project’s wrangler.jsonc and deploy through Cloudflare Build. Terraform provisions the resources. Wrangler deploys the code that uses them.
This boundary exists for a practical reason: Worker deployments happen on every push to main. Infrastructure changes happen rarely and need the plan/apply review cycle. Mix the two and you end up with a deployment pipeline that’s either too slow for code or too fast for infrastructure. Neither is fun.
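Concretely, the split means Terraform creates a resource and the project's wrangler.jsonc binds to it by ID. A sketch of what that binding looks like — the names and IDs here are illustrative, not from the actual repo:

```jsonc
// wrangler.jsonc — illustrative bindings, not the real FormRecap config
{
  "name": "formrecap",
  "main": "src/index.ts",
  "compatibility_date": "2024-11-01",
  "kv_namespaces": [
    // id comes from the Terraform-managed KV namespace resource
    { "binding": "SESSIONS", "id": "<terraform-provisioned-namespace-id>" }
  ],
  "d1_databases": [
    {
      "binding": "DB",
      "database_name": "formrecap",
      "database_id": "<terraform-provisioned-db-id>"
    }
  ]
}
```

Terraform owns the right-hand side of each binding; Wrangler only references it.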
File Organisation
One resource type per file. I’ve seen repos with a 500-line main.tf that manages everything from DNS to databases and I’d rather not.
```
main.tf             → terraform block, providers, R2 backend
variables.tf        → all input variables
zones.tf            → zone settings (applied to all 8 domains)
workers.tf          → KV, D1, queues, vectorize
pages.tf            → Pages projects
r2.tf               → R2 buckets
ai_gateway.tf       → AI Gateway (restful provider)
dns_formrecap.tf    → DNS for formrecap.com
dns_example.tf      → DNS for another domain
dns_jasonmatthew.tf → DNS for jasonmatthew.dev + .me
...
```
Need to update DNS for FormRecap? Open dns_formrecap.tf. Need a new KV namespace? workers.tf. No grep required. It’s boring, and boring is the point.
Zero Secrets
Every credential flows through 1Password CLI. A committed .env.op file maps environment variables to op:// vault references. The Makefile wraps every Terraform command with op run:
```make
OP := op run --env-file=.env.op --

plan:
	$(OP) terraform plan

apply:
	$(OP) terraform apply
```
Secrets are injected into the process environment at runtime and cleaned up when it exits. Nothing on disk, nothing in git history, nothing in shell profiles. I wrote about this approach — and why it matters after the axios breach — in a separate post.
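The .env.op file itself is safe to commit because it contains references, not values — `op run` resolves each `op://` reference at runtime. Something like this, with vault and item names made up for illustration:

```shell
# .env.op — committed to git; these are pointers into 1Password, not secrets
CLOUDFLARE_API_TOKEN="op://Infra/cloudflare-terraform/api-token"
AWS_ACCESS_KEY_ID="op://Infra/r2-state/access-key-id"
AWS_SECRET_ACCESS_KEY="op://Infra/r2-state/secret-access-key"
```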
State lives in a Cloudflare R2 bucket using Terraform’s S3-compatible backend with a custom endpoint. The R2 credentials are also injected via 1Password at make init time. It’s 1Password all the way down.
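The backend block looks roughly like this — bucket name and account ID are placeholders, and the various `skip_*` flags are there because R2 is S3-compatible but not actually S3:

```hcl
terraform {
  backend "s3" {
    bucket = "terraform-state"
    key    = "cloudflare/terraform.tfstate"
    region = "auto"

    # R2's S3-compatible endpoint for the account
    endpoints = {
      s3 = "https://<account-id>.r2.cloudflarestorage.com"
    }

    # Skip AWS-specific validation that R2 doesn't support
    skip_credentials_validation = true
    skip_region_validation      = true
    skip_requester_pays         = true
    skip_metadata_api_check     = true
    skip_s3_checksum            = true
    use_path_style              = true
  }
}
```

The access key pair arrives via `AWS_ACCESS_KEY_ID` / `AWS_SECRET_ACCESS_KEY` environment variables, which is what the 1Password injection at `make init` provides.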
Provider Gaps and Stopgaps
The Cloudflare Terraform provider (v5) is solid for the core resources — zones, DNS, KV, D1, Queues, R2, Pages. But it doesn’t cover everything yet, and pretending otherwise leads to “I’ll just configure that one thing in the dashboard” which is how drift starts.
AI Gateway and Vectorize indexes don’t have official Terraform resources. For these, I use the magodo/restful provider — it makes raw API calls to the Cloudflare REST API and manages the lifecycle through Terraform’s standard plan/apply cycle:
```hcl
resource "restful_resource" "ai_gateway" {
  path = "/accounts/${var.cloudflare_account_id}/ai-gateway/gateways"
  body = jsonencode({
    id            = "my-gateway"
    cache_ttl     = 3600
    rate_limiting = { ... }
  })
}
```
It’s a stopgap, and it’s documented as one. When official provider support lands, migration is a small state operation: remove the restful_resource from state and terraform import the live gateway under its official type (terraform state mv can’t change resource types, so it’s a remove-and-import rather than a rename). The important thing is that it’s in code rather than a manual dashboard click that someone (me) forgets about six months later.
Zone Settings as a Baseline
Every domain gets the same security and performance baseline:
- SSL: full mode, always HTTPS, TLS 1.2 minimum, TLS 1.3 enabled
- Performance: Brotli, Early Hints, HTTP/3, 0-RTT
- Security: browser integrity check, security level medium
- HSTS: enabled on production domains with outbound traffic
This runs in a for_each over all zone IDs. Add a new domain, it inherits the baseline automatically. No checklist, no “did I remember to turn on HTTPS.” It’s just on.
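As a sketch of the pattern — using the v5 provider's per-setting resource, with only two of the baseline settings shown:

```hcl
variable "zone_ids" {
  type = map(string) # domain name → Cloudflare zone ID
}

# Every zone gets TLS 1.2 as a floor
resource "cloudflare_zone_setting" "min_tls_version" {
  for_each   = var.zone_ids
  zone_id    = each.value
  setting_id = "min_tls_version"
  value      = "1.2"
}

# Every zone redirects HTTP to HTTPS
resource "cloudflare_zone_setting" "always_use_https" {
  for_each   = var.zone_ids
  zone_id    = each.value
  setting_id = "always_use_https"
  value      = "on"
}
```

Adding a domain to `zone_ids` fans the whole baseline out to it on the next apply.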
Disaster Recovery
This is the actual reason I did all of this. The whole point of infrastructure-as-code is that “everything breaks” should be boring.
If I needed to rebuild from scratch tomorrow, the recovery is:
- Run Terraform in a new account
- Run D1 migrations from each project
- Set Worker secrets per the documented manifests
- Push each project to trigger Cloudflare Build
That’s not aspirational — it’s how I deployed in the first place. Every resource is in code. Every secret is documented (not stored — documented). Every step is written down. Disaster recovery is just deployment with a worse reason.
Drift Detection
Cloudflare’s cf-terraforming tool can snapshot your live account into .tf and import blocks. The repo has a make extract command that runs this — comparing the snapshot against what’s in the repo catches any manual dashboard changes that bypassed Terraform.
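The target is roughly this — the resource type, zone variable, and output paths are illustrative rather than copied from the repo:

```make
# Makefile sketch — snapshot live DNS records and diff against the repo
extract:
	$(OP) cf-terraforming generate \
		--resource-type cloudflare_dns_record \
		--zone $(ZONE_ID) > extracted/dns.tf
	diff -u dns_formrecap.tf extracted/dns.tf
```

A non-empty diff means someone touched the dashboard directly.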
I’ll be honest: this repo has only been up for a couple of weeks, so I haven’t built the discipline of running this regularly yet. But the tooling is there for when I inevitably click something in the dashboard at 11pm and forget about it.
Is This Overkill?
Probably. For two domains it absolutely would be. But at eight and growing, with resources shared across projects and a genuine need to know “what did I configure and why,” the alternative is a spreadsheet I’d never maintain or a dashboard I’d never fully remember.
And honestly — I use Terraform at work. Our cloud engineering team manages Cloudflare infrastructure for production systems. Staying sharp on it through personal infrastructure is the same reason I build side projects. The patterns transfer. When I’m reviewing a PR from my cloud eng team, I want to know what good Terraform looks like because I’ve written it recently, not because I read about it two years ago.