After successfully migrating Traefik to a dedicated host (covered in Part 1), the next challenge was updating the DNS infrastructure to point all services to the new location. What started as a straightforward update turned into a comprehensive DNS cleanup operation.
The Scope of the DNS Update
Moving Traefik from 192.168.50.20 to 192.168.50.21 meant every single subdomain pointing to the old host needed to be updated. In my case, this involved 31 active services under the internal.domain domain, each with its own DNS A record in Cloudflare.
The services included everything from media servers and monitoring tools to home automation and development environments. Each one was critical to my daily homelab operations, making accuracy paramount.
Working with the Cloudflare API
While Cloudflare's web interface is user-friendly, updating 31 records manually would be tedious and error-prone. Instead, I opted to use the Cloudflare API with PowerShell for bulk operations. This approach provided several advantages:
- Automation: Script once, run multiple times with different parameters
- Audit Trail: Maintain a record of exactly what was changed and when
- Repeatability: Easy to verify changes or roll back if needed
- Speed: Bulk operations complete in seconds rather than minutes
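To give a feel for the pattern, here is a minimal inventory sketch of the kind of script involved. The API token, zone ID, and variable names are placeholders for illustration, not my actual values:

```powershell
# Placeholders - use a scoped API token and the zone ID for internal.domain
$token  = $env:CF_API_TOKEN
$zoneId = "YOUR_ZONE_ID"
$oldIp  = "192.168.50.20"

$headers = @{ Authorization = "Bearer $token" }

# List the zone's A records, then keep only those still pointing at the old Traefik host
$records = (Invoke-RestMethod -Method Get -Headers $headers `
    -Uri "https://api.cloudflare.com/client/v4/zones/$zoneId/dns_records?type=A&per_page=100").result

$records | Where-Object { $_.content -eq $oldIp } |
    Select-Object name, content, id
```

Running the inventory as a read-only query first also doubles as the audit trail: the filtered output is exactly the list of records the later update pass will touch.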
The Update Process
The DNS update operation followed a careful, methodical approach:
- Inventory Current Records: Listed all existing A records for internal.domain to identify what needed updating
- Identify Conflicts: Found several CNAME records that would conflict with the new A records
- Plan the Changes: Created a prioritized list of updates, starting with the most critical services
- Execute Updates: Used the Cloudflare API to update each A record to point to 192.168.50.21 (sketched below)
- Verify Changes: Confirmed DNS propagation using nslookup and tested service accessibility
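The execute step builds directly on the inventory above. A hedged sketch, assuming the same placeholder token and zone ID and reusing each record's existing TTL and proxy setting:

```powershell
$newIp = "192.168.50.21"

foreach ($record in ($records | Where-Object { $_.content -eq $oldIp })) {
    $body = @{
        type    = "A"
        name    = $record.name
        content = $newIp
        ttl     = $record.ttl
        proxied = $record.proxied
    } | ConvertTo-Json

    # PUT replaces the record in place, keeping its ID and hostname
    Invoke-RestMethod -Method Put -Headers $headers -ContentType "application/json" `
        -Uri "https://api.cloudflare.com/client/v4/zones/$zoneId/dns_records/$($record.id)" `
        -Body $body

    Write-Host "Updated $($record.name) -> $newIp"
}
```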
Handling CNAME Conflicts
During the inventory phase, I discovered several CNAME records that were pointing to the old infrastructure. CNAMEs can't coexist with A records for the same hostname, so these needed to be resolved first.
For example, filezilla.internal.domain had a CNAME pointing to another service, but I needed it to be an A record pointing to the new Traefik host. The solution was to delete the CNAME and create a new A record in its place.
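That swap is a two-call operation against the API: delete the CNAME, then create the replacement A record. A rough sketch of the pattern, using the filezilla.internal.domain example and the same placeholder zone ID:

```powershell
# Look up the conflicting CNAME by name
$cname = (Invoke-RestMethod -Method Get -Headers $headers `
    -Uri "https://api.cloudflare.com/client/v4/zones/$zoneId/dns_records?type=CNAME&name=filezilla.internal.domain").result[0]

# Remove the CNAME...
Invoke-RestMethod -Method Delete -Headers $headers `
    -Uri "https://api.cloudflare.com/client/v4/zones/$zoneId/dns_records/$($cname.id)"

# ...and create an A record in its place, pointing at the new Traefik host
$newRecord = @{
    type    = "A"
    name    = "filezilla.internal.domain"
    content = "192.168.50.21"
    ttl     = 1        # 1 = automatic TTL in Cloudflare
    proxied = $false
} | ConvertTo-Json

Invoke-RestMethod -Method Post -Headers $headers -ContentType "application/json" `
    -Uri "https://api.cloudflare.com/client/v4/zones/$zoneId/dns_records" -Body $newRecord
```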
The Great Legacy Cleanup
While working in Cloudflare, I noticed something interesting: 35 A records from an old domain (hailhydra.org) that were no longer in use. This was a legacy domain from a previous homelab iteration that I'd forgotten to clean up.
These records were:
- Consuming unnecessary space in my DNS zone
- Creating confusion when searching for active records
- Potentially exposing information about my old infrastructure
- Costing (albeit minimal) money for zones I wasn't using
I took this opportunity to purge all 35 obsolete records, finally retiring the hailhydra.org domain from my active infrastructure. It felt satisfying to clear out this digital clutter.
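The purge itself is the same list-then-delete pattern from earlier, just pointed at the legacy zone. A small sketch, assuming the old domain had its own zone ID (placeholder below):

```powershell
# Placeholder zone ID for the retired hailhydra.org zone
$legacyZoneId = "LEGACY_ZONE_ID"

$legacy = (Invoke-RestMethod -Method Get -Headers $headers `
    -Uri "https://api.cloudflare.com/client/v4/zones/$legacyZoneId/dns_records?type=A&per_page=100").result

foreach ($record in $legacy) {
    Invoke-RestMethod -Method Delete -Headers $headers `
        -Uri "https://api.cloudflare.com/client/v4/zones/$legacyZoneId/dns_records/$($record.id)"
    Write-Host "Deleted $($record.name)"
}
```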
System-Wide Cleanup
With the DNS migration complete, I turned attention to the Docker hosts themselves. The migration had left several unused containers and images:
Container Removal
Several services were no longer needed on the media server:
- satisfactory-server - Game server I no longer used
- technitium-dns - Replaced by AdGuard Home
- wizarr - Media invitation service that never saw adoption
Docker System Prune
After removing unused containers, I ran docker system prune -f on both Docker hosts. This operation cleaned up:
- Stopped containers
- Unused networks
- Dangling images
- Build cache
The result? A satisfying 9.6GB of reclaimed disk space across both hosts.
Preventing Future Log Bloat
While performing the cleanup, I noticed that Docker container logs were consuming more space than expected. To prevent this from becoming an issue in the future, I implemented global logging limits.
I added the following configuration to /etc/docker/daemon.json on both hosts:
Log Configuration:
- max-size: 15m - Individual log files capped at 15MB
- max-file: 3 - Keep only the 3 most recent log files
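In daemon.json form, those settings work out to something like the sketch below (merge with any options already present in the file):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "15m",
    "max-file": "3"
  }
}
```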
This limits each container's logs to a maximum of 45MB, preventing runaway log growth.
After updating the configuration, I restarted the Docker daemon on both hosts to apply the new settings. Note that these daemon-level defaults only apply to containers created after the change; existing containers keep the logging options they were started with until they're recreated.
Verification and Testing
With all changes complete, comprehensive testing was essential:
- DNS Resolution: Verified all 31 subdomains resolved to the correct IP using nslookup
- Service Accessibility: Tested each service endpoint to confirm Traefik was routing correctly
- Certificate Validation: Confirmed HTTPS certificates were being served properly
- Performance Check: Monitored both hosts for any performance degradation
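The DNS and accessibility checks lend themselves to a quick loop. A sketch of the idea in PowerShell (the hostname list and expected IP are illustrative; in practice the list came from the earlier inventory):

```powershell
# Hypothetical subset of the migrated hostnames
$hosts    = @("filezilla.internal.domain", "adguardhome.internal.domain")
$expected = "192.168.50.21"

foreach ($name in $hosts) {
    # Does the name resolve to the new Traefik host?
    $resolved = (Resolve-DnsName -Name $name -Type A -ErrorAction SilentlyContinue).IPAddress
    $dnsOk = $resolved -contains $expected

    # A successful HTTPS response means Traefik routed the request; certificate problems surface here too
    try {
        $status = (Invoke-WebRequest -Uri "https://$name" -UseBasicParsing -TimeoutSec 5).StatusCode
    } catch {
        $status = "FAIL"
    }

    Write-Host ("{0,-40} DNS: {1,-6} HTTP: {2}" -f $name, $dnsOk, $status)
}
```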
Lessons Learned
This DNS management operation reinforced several important practices:
- Automate When Possible: Using the API saved hours compared to manual updates
- Regular Cleanup: Don't let legacy infrastructure accumulate—schedule regular cleanup reviews
- Plan for Logs: Implement log rotation and size limits from the start, not after disk space becomes an issue
- Document Changes: Keeping a log of API commands and changes made verification much easier
- Verify Everything: DNS can cache unexpectedly—always test after making changes
What's Next
Despite careful planning, no migration is perfect. The next day revealed a redirect loop issue with one of my services, adguardhome.internal.domain, that required additional troubleshooting. Part 3 covers this follow-up work and the lessons learned from debugging the new setup, including the multi-part fix involving DNS changes, Traefik middleware, and transport configuration.
Tools & Workflow
DNS management and cleanup operations showcase the power of CLI-based AI workflows:
- Gemini CLI & CodeX: Automated DNS record updates via Cloudflare API, Docker cleanup across multiple hosts, and system inventory operations. Generated PowerShell and bash scripts on-demand for bulk operations.
- GEMINI.md Documentation: The operations log section of GEMINI.md captured exact API syntax that worked, specific curl commands for validation, and the sequence of changes. When issues arose days later, I could reference this file to understand exactly what was configured and how.
- Claude: Created this blog and website to document the project. The infrastructure work itself was performed entirely through CLI tools.
This workflow reduced hours of manual work to minutes while maintaining perfect documentation. The .md file became both instruction manual and changelog—any engineer (or AI) reading it would understand the complete infrastructure state and recent modifications.