Sometimes I run into the same incorrect opinions so often that I just want to vent and do a post like this. I don’t know if anyone reads this stuff, but it is therapy for me. DNS and Active Directory is one of those subjects. So here we go…….
I will break these myths into two groups: “Configuration” and “DNS Vendor and Placement”.
I will assume for the purposes of this post that this is in a split-horizon DNS context. That encompasses how most companies with Active Directory manage DNS.
A Domain Controller should always point to itself first for DNS
This one is supposedly proposed in the name of efficiency. There were even best-practice articles from Microsoft indicating this was correct. The efficiency argument is nonsense. A DC only uses its DNS server entries to register its records in DNS and to do lookups for replication. Everything else uses the internal server and its own files. If you don’t believe me, install DNS on a server, give it the DNS server address of another DC, and then try to promote it. It will not see the Active Directory DNS zone on the external server and the promotion will fail. That is why we ‘bootstrap’ DNS when a DC is being promoted. So there is no efficiency gained in pointing a DC to itself.
What are the issues with doing this? The ‘DNS Island’ issue can result from pointing a DC to itself for DNS. DNS and replication involve a ‘chicken-and-egg’ process: DNS is replicated if it is AD-integrated, but replication depends on DNS convergence for the servers to ‘find’ each other. If the servers point to themselves, and therefore register their records only on themselves, convergence is never achieved and replication fails. This is especially true when DC IP addresses change.
There are several opinions on the best configuration. Consensus seems to be that every DC should point to another DC first (preferably in the same site) and either a third DC or its own IP second. 127.0.0.1 should only be used as a third DNS server address. I use three servers with the DC’s own IP last, and I never use 127.0.0.1. Best practice beyond this depends on site configuration and DC geographic placement, which is too complex for discussion here.
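The ordering above can be sketched in a few lines. This is just an illustration of the rule of thumb, not a provisioning script; the IPs and the site map are made up.

```python
# Sketch of the recommended DNS client list for a DC: another DC first
# (same site preferred), a second DC next, and the DC's own IP last.
# The addresses and site assignments below are purely illustrative.

def dc_dns_order(own_ip, peer_dcs, site_of):
    """Order a DC's DNS server list: same-site peers, other peers, itself last."""
    same_site = [ip for ip in peer_dcs if site_of[ip] == site_of[own_ip]]
    other_site = [ip for ip in peer_dcs if site_of[ip] != site_of[own_ip]]
    return (same_site + other_site)[:2] + [own_ip]

site_of = {"10.0.1.10": "HQ", "10.0.1.11": "HQ", "10.0.2.10": "Branch"}
print(dc_dns_order("10.0.1.10", ["10.0.2.10", "10.0.1.11"], site_of))
# ['10.0.1.11', '10.0.2.10', '10.0.1.10']
```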
Only Port 53 UDP needs to be open in the firewall for DNS lookups.
DNS lookups use port 53 UDP ‘datagrams’ by default, but the RFC allows them to ‘fail over’ to TCP 53 if the response exceeds the maximum size of a DNS datagram (512 bytes for plain DNS, 4096 for EDNS). What actually happens is that the response is returned with the TC bit (truncation indicator) set, and the client responds by retrying over TCP 53.
In addition, some firewalls still block DNS UDP packets over 512 bytes. So even if you turn on EDNS, query responses over 512 bytes fail anyway.
TCP 53 creates a socket so multiple messages can be sent to handle the size. In the past this didn’t happen very often, but it did happen on a few queries that returned a long list of A records, prompting client questions like “why can’t I send mail to domain.com but can to everyone else?” Now, however, things like IPv6 and DNSSEC cause the switch to TCP to happen more often. Just open both TCP and UDP 53 and avoid the problem.
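The TC-bit check the client performs is easy to see in the wire format. A minimal sketch: in the 12-byte DNS header, the flags are the second 16-bit word and TC is bit 9 (mask 0x0200). The transport helpers in the comment are hypothetical; only the header parsing is shown.

```python
import struct

def is_truncated(response: bytes) -> bool:
    """Check the TC (truncated) bit in a raw DNS response header.
    Flags are the second 16-bit word of the header; TC is 0x0200."""
    (flags,) = struct.unpack("!H", response[2:4])
    return bool(flags & 0x0200)

# A client honoring the behavior described above would do roughly:
#   reply = send_udp(query)        # hypothetical transport helper
#   if is_truncated(reply):
#       reply = send_tcp(query)    # TCP framing adds a 2-byte length prefix
#
# Minimal 12-byte header with only the TC bit set, for illustration:
hdr = struct.pack("!HHHHHH", 0x1234, 0x0200, 1, 0, 0, 0)
print(is_truncated(hdr))  # True
```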
Clients should have one public DNS server in case AD is down.
The theory is that only if all internal servers are not responding will the public server be used. Well, true. And that when the internal servers come back online, the client will use them again. That is false. Here is how the client works. The scenario: the DNS server settings are DC1, DC2, and a public server, say 8.8.8.8. The client tries DC1. If it does not respond, the client moves on to DC2. Assume for this discussion that DC2 is also not responding (note: not answering a query is not the same as — NOT RESPONDING AT ALL). It moves on to 8.8.8.8. That server works, but 8.8.8.8 knows naught of Active Directory, so nothing AD-related works. DC1 comes back online. What happens? Nothing. The DNS client only changes servers when the one it is using fails to respond, i.e. goes offline. The client continues using 8.8.8.8, assuming no other changes, until the DNS Client service restarts or the client is rebooted.
In fact, another myth is that the client always starts at the beginning of the list. Also false. It tries the ‘next’ server from wherever in the list it currently is. With three servers, the order is always 1, 2, 3, 1, 2, 3, and it only moves when the current server does not respond at all.
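The sticky behavior described above can be simulated in a few lines. This is a toy model of the selection logic only (using 8.8.8.8 as a stand-in public server), not the actual Windows resolver:

```python
class DnsClientSim:
    """Toy model of the DNS client selection described above: it sticks
    with its current server and only advances to the *next* one (wrapping
    around the list) when that server stops responding entirely."""

    def __init__(self, servers):
        self.servers = servers
        self.current = 0  # index of the server currently in use

    def resolve(self, responding):
        """responding: set of servers currently answering at all."""
        for _ in range(len(self.servers)):
            server = self.servers[self.current]
            if server in responding:
                return server  # stays on this server for future queries
            # No response at all: advance to the next server, never restart
            self.current = (self.current + 1) % len(self.servers)
        raise TimeoutError("no DNS server responded")

c = DnsClientSim(["DC1", "DC2", "8.8.8.8"])
print(c.resolve({"DC1", "DC2", "8.8.8.8"}))  # DC1
print(c.resolve({"8.8.8.8"}))                # both DCs down -> 8.8.8.8
print(c.resolve({"DC1", "DC2", "8.8.8.8"}))  # DCs back, but still 8.8.8.8
```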
Internal DNS servers should always forward to a public caching server.
This one is not strictly false, but it is not strictly true either. It can, however, certainly cause issues.
Here is the dilemma: you host a zone for “mydomain.com” internally, but you also use a domain “myotherdomain.com” as an email domain, and that domain is only hosted on public DNS. The “myotherdomain.com” zone contains a CNAME record that points to a record in “mydomain.com”. When a lookup for the CNAME in “myotherdomain.com” is initiated internally, it is forwarded to the public DNS server. Because ‘do referrals’ is on in the query, the public server looks up the CNAME target in its public “mydomain.com” zone and returns the PUBLIC IP for that record, so the user gets the PUBLIC rather than the PRIVATE IP. Anti-spoofing is likely on, and the access fails.
This is a common scenario for Exchange (which is why I used it as the example), where multiple domains are hosted but only the Exchange server’s own domain is in internal DNS. It breaks Autodiscover. If you use root hints instead, the CNAME target is resolved against the internal zone and the IP returned is the PRIVATE one.
This does not have to be Exchange. It happens anytime a zone you don’t host internally contains CNAME records pointing to records in a zone you do host internally. Exchange just happens to do this a lot.
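The split-horizon trap above can be modeled with two tiny zone tables. The zone contents and IPs here are made up; the point is only that the forwarded path chases the CNAME in the *public* copy of the zone:

```python
# Toy split-horizon model of the CNAME problem described above.
# All names and addresses are illustrative.

INTERNAL = {"mail.mydomain.com": "10.0.0.25"}                 # private A record
PUBLIC = {
    "mail.mydomain.com": "203.0.113.25",                      # public A record
    "autodiscover.myotherdomain.com": "mail.mydomain.com",    # CNAME, public only
}

def resolve(name, forward_to_public):
    def chase(n, zone):
        # Follow CNAME chains within one server's view of the zones
        v = zone.get(n)
        return chase(v, zone) if v in zone else v
    if name in INTERNAL:
        return INTERNAL[name]
    if forward_to_public:
        # The public server resolves the CNAME target in ITS copy of
        # mydomain.com, so the client gets the public IP back.
        return chase(name, PUBLIC)
    return None  # root-hints path not modeled here

print(resolve("autodiscover.myotherdomain.com", True))  # 203.0.113.25 (public!)
print(resolve("mail.mydomain.com", False))              # 10.0.0.25 (private)
```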
So why do this at all? Good question, since the primary justification is efficiency. But since the internal DNS servers run their own cache, the efficiency gained is minimal: the server only looks records up when the TTL expires, and that lookup takes just a little longer via root hints. Weigh that small saving against the risk of getting an incorrect IP. Further, it is better to protect your own DNS against pollution yourself; forwarding to public DNS counts on the provider doing it for you. I always prefer the former.
Vendor and Placement Myths:
Each Active Directory Domain must host DNS for its Active Directory zone.
Each domain needs a DNS zone, but it does not have to be hosted on DCs in that domain.
Each Forest must host the zones for its Domains.
As above, each domain needs a zone, but it does not have to be hosted on DCs in that forest. There are limitations in multi-forest implementations: specifically, without a trust, secure dynamic updates from a domain in one forest to DNS hosted in a domain of another forest are not possible.
Active Directory must use Microsoft DNS.
As per all the above, each domain needs a DNS zone, but it does not need to be Microsoft DNS. Active Directory’s requirements for DNS, going back to the very beginning, are simply that the DNS server support dynamic updates and SRV records. It can even get by without dynamic updates, but that is difficult to live with.
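To make the SRV requirement concrete, here is a sketch of a few of the well-known SRV record names a DC registers dynamically for its domain (the list is a sample, not exhaustive, and the domain name is made up). Any DNS server that accepts these records, Microsoft or not, will do:

```python
# A sample of the SRV records a DC registers for its AD domain.
def ad_srv_names(domain):
    return [
        f"_ldap._tcp.{domain}",              # LDAP on any DC
        f"_kerberos._tcp.{domain}",          # Kerberos KDC
        f"_ldap._tcp.dc._msdcs.{domain}",    # DC locator record
    ]

for name in ad_srv_names("corp.example.com"):
    print(name)
```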
I have personally worked with clients running third-party DNS. In my case it was UNIX-based. Not my first choice, but it worked. The Microsoft Press book ‘Building Enterprise Active Directory Services – Notes from the Field’, published in 2000, references similar implementations.
So what does this mean for DNS placement? Should it be in multiple forests\domains or not? There is a case to be made for simplicity of design and manageability. Correct DNS placement design takes this into consideration along with other factors such as forest\domain design and geographic placement. I have been involved in and designed DNS multiple ways to accommodate these variables and produce an optimum result. The key here is that you are NOT locked into any specific choice for DNS. That is the myth. Optimum design gets complex enough that I have covered at least the high-level issues in another post, ‘DNS Design – Centralize or Decentralize?’.