How to create a resilient DNS framework
Telephones used to have a dial. Television viewers used to have to get up to change the channel. Internet connections used to run at 56 kbit/s. And, not so long ago, organizations could run their services from a single data center. Their DNS servers were placed inside it with no contingency plan. After all, if the data center went down, the DNS servers were useless anyway.
But time and technology march on, and a single data center is now the exception rather than the norm. Enterprises run multiple data centers, sometimes in multiple countries, not to mention cloud regions and highly distributed networks. Consequently, your DNS needs to be just as highly distributed as your content. What good is a disaster recovery site if you have no way to direct your users to it?
That is why today’s leading DNS providers offer extremely resilient networks with multiple anycast groups and hundreds of servers spread out around the world. However, the hard reality is that impairments, outages and massive Distributed Denial of Service (DDoS) attacks can and do happen. To truly bulletproof your distributed infrastructure against the scenario in which your users cannot resolve your domain, you might very well consider hosting your DNS records with two providers.
This is a good idea in theory, but it comes with some troublesome details. Prior to today’s next-generation DNS solutions, you basically had three choices:
- Run one DNS provider as the primary and the second as a replicated secondary
- Run two DNS providers, both as primary, and (carefully) make your record changes in each
- Run two DNS providers, both as primary, and write your own middleware application that understands a requested DNS change and pushes it to each provider’s unique API
Option one deprives you of the RUM-based telemetry, traffic management features and powerful geographic routing that some top-tier providers offer. Relying on zone transfer (XFR) technology condemns you to the most basic, plain-vanilla DNS records: only standard record types survive the transfer, so any provider-specific traffic-management configuration stays behind on the primary.
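To make that limitation concrete, here is a minimal sketch of what option one looks like on the wire, using the dnspython library. The primary server’s IP address and the zone name are hypothetical placeholders:

```python
# Minimal AXFR sketch (option one) using dnspython.
# 198.51.100.10 and example.com are hypothetical placeholders.
import dns.query
import dns.zone

# Pull the full zone from the primary via AXFR. Only standard record
# types come across; vendor-specific traffic-management metadata does not.
zone = dns.zone.from_xfr(dns.query.xfr("198.51.100.10", "example.com"))

for name, node in zone.nodes.items():
    for rdataset in node.rdatasets:
        print(name, rdataset)
```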
Option two opens a Pandora’s box of potential human error. If you don’t painstakingly keep two different providers in perfect sync, you will end up with traffic routing problems that are shockingly difficult to troubleshoot.
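A periodic drift check can at least catch these mismatches early. Below is a hedged sketch that queries the same record through each provider’s nameservers and flags any difference; the nameserver IPs and record name are hypothetical placeholders, and real checks would cover every record type in the zone:

```python
# Simplified drift check for option two: compare the answers each
# provider's nameservers return for the same record.
import dns.resolver

def answers(nameserver: str, qname: str, rdtype: str = "A") -> set[str]:
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [nameserver]  # query this provider directly
    return {rr.to_text() for rr in resolver.resolve(qname, rdtype)}

provider_a = answers("198.51.100.1", "www.example.com")  # hypothetical IPs
provider_b = answers("203.0.113.1", "www.example.com")

if provider_a != provider_b:
    print("Providers out of sync:", provider_a ^ provider_b)
```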
Option three requires substantial time and effort to write your own DNS management software, with in-depth integration for each of your DNS providers. You lose all the advantages of your providers’ portals and dashboards, and you will have to roll your own interpretation layer to keep one provider’s advanced features in approximate synchronization with the next provider’s.
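To give a sense of the burden, here is a rough sketch of what such middleware involves. The endpoints, authentication headers and payload shapes are entirely hypothetical; each real provider has its own API that you would have to study and map onto a common model:

```python
# Rough middleware sketch for option three. All URLs, tokens and payload
# shapes below are hypothetical, not any real provider's API.
import requests

PROVIDERS = [
    {"url": "https://api.dns-provider-a.example/v1/zones/example.com/records",
     "headers": {"Authorization": "Bearer TOKEN_A"}},
    {"url": "https://api.dns-provider-b.example/v2/domains/example.com/rrsets",
     "headers": {"X-Api-Key": "TOKEN_B"}},
]

def push_record(name: str, rtype: str, answers: list[str]) -> None:
    """Push one record change to every provider in turn."""
    for provider in PROVIDERS:
        payload = {"name": name, "type": rtype, "answers": answers}
        resp = requests.post(provider["url"], json=payload,
                             headers=provider["headers"], timeout=10)
        # If one provider rejects the change, you are now out of sync.
        resp.raise_for_status()

push_record("www.example.com", "A", ["203.0.113.10"])
```

Note that this sketch handles only the happy path; retries, rollbacks and translating advanced traffic-management features between providers are where the real engineering cost lies.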
Other possibilities exist today -- thank goodness. Dedicated DNS solutions allow you to place real or virtual servers anywhere you want them: in your office, in your data centers, inside your DMZs, behind your firewalls -- literally anywhere that makes sense for your infrastructure. You can then install a DNS software stack on them and turn them into fully managed DNS delivery nodes that are dedicated to you. Through the same portal and API you use right now to manage your DNS on a worldwide anycast managed DNS platform, you can choose which domains to also serve from your dedicated DNS nodes.
What you end up with is a framework that enables you to benefit from the resilience of two DNS providers with the ease of management through a single portal and API. All your advanced traffic management and intelligent Filter Chain configurations work exactly the same, too. And if something were to happen to any part of the managed DNS infrastructure, your dedicated DNS nodes would be unaffected and would continue to happily serve DNS. Once they re-established contact with the "mothership," they would push their queued query statistics upstream and apply any pending record changes.
Dedicated DNS nodes are not only authoritative DNS servers; they also support recursion, so you can point all your DNS clients (laptops, servers, EC2 instances, etc.) at them. All your DNS needs are then met in one place, and queries for your own domains and records resolve in single-digit milliseconds. You can also leverage advanced Filter Chain capabilities to intelligently direct traffic within your own data centers and achieve greater performance, failover and resiliency between server or application tiers.
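In practice, "pointing clients at the nodes" just means using them as the resolver, whether via /etc/resolv.conf, DHCP options or application configuration. A small sketch with dnspython, where the node address is a hypothetical placeholder on your internal network:

```python
# Resolve through a dedicated DNS node instead of the system resolver.
# 10.0.0.53 and the query name are hypothetical placeholders.
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["10.0.0.53"]  # your dedicated DNS node

answer = resolver.resolve("app.internal.example.com", "A")
for rr in answer:
    print(rr.to_text())
```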
With the speed at which technology is moving, you can’t afford to sit around or to jump in before evaluating your DNS options. The first choice places you in the realm of the dinosaurs; the second can lead to headaches at best and server downtime at worst. Fortunately, it’s possible today to use the powerful combination of managed DNS and dedicated DNS solutions to get the ease and performance you need.
Carl J. Levine is the senior technical evangelist for NS1. Carl is an established and time-tested product manager with the unique ability to iterate use cases, bring understanding to those seeking to explore complicated technical concepts and increase revenue across diverse sales channels.
Published under license from ITProPortal.com, a Future plc Publication. All rights reserved.