January 2nd 18, 01:40 AM, posted to alt.windows7.general, microsoft.public.windowsxp.general
Brian Gregory[_2_]
Windows DNS cache

On 01/01/2018 03:38, VanguardLH wrote:
Brian Gregory wrote:

Note: I'm not going to reconstruct the attribution lines that Mayayana
discards in his replies. So I only quote Brian's post in my reply in
this subthread.

You don't need it if your LAN has its own DNS cache, but I guess it
might be worth the 12MB of RAM it uses to save doing unnecessary DNS
lookups over the Internet.


Pages nowadays have resources across many far-flung sites. The
content of a page can have ad resources on CDNs (content delivery
networks), scripts on tertiary domains (same or different owner than the
visited domain), CSS files on other servers, etc. All those resources
require DNS lookups. With some pages having hundreds of externally
linked resources, there can be hundreds of such DNS requests in just one
page. Rarely do sites use IP addresses for their external resources.
Some resources may be relatively pathed (i.e., under the same domain as
visited) but many sites incorporate off-site or external resources.
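
To get a feel for how many distinct hostnames (and hence DNS lookups) a
single page can drag in, here is a rough Python sketch using only the
standard library; the URL is just a placeholder for whatever page you
want to inspect:

    import urllib.request
    from html.parser import HTMLParser
    from urllib.parse import urlparse

    class RefCollector(HTMLParser):
        """Collect the hostnames referenced by src/href attributes."""
        def __init__(self):
            super().__init__()
            self.hosts = set()

        def handle_starttag(self, tag, attrs):
            for name, value in attrs:
                if name in ("src", "href") and value:
                    host = urlparse(value).hostname
                    if host:                 # relative URLs have no host
                        self.hosts.add(host)

    url = "https://www.example.com/"         # placeholder page
    with urllib.request.urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="replace")

    parser = RefCollector()
    parser.feed(html)
    print(len(parser.hosts), "distinct hostnames referenced:")
    for host in sorted(parser.hosts):
        print(" ", host)

Each distinct hostname in that list is a separate DNS lookup the first
time the page is rendered.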

Having a local cache that shortcuts the DNS lookups, by finding the IP
address for a previously visited site, will speed up all those DNS
lookups. The positive lookups (those that succeeded) are cached for
only a day, by default. The negative lookups (those that failed) are
cached for only 15 minutes, by default. Registry entries can be used
to alter those retention intervals. If the local DNS caching client is
disabled, ALL those hostnames (even those on the same domain) have to
be looked up by issuing DNS requests out over the network, out to the
Internet, to the configured DNS server (which the user can specify, or
use the one assigned by whatever upstream DHCP server they use, often
their ISP's). All that DNS network traffic takes time.

The time for hundreds of DNS lookup requests, and the wait to get back
each response (the IP address for an external resource on the page), is
fairly short. Whether or not you use a DNS caching client, speeding up
the lookups will not alter the time it takes for those external
resources to deliver their content for that page. That's why many
users use adblockers: to eliminate the time spent downloading the
unwanted content.
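
If you want to see the effect on your own machine, a quick Python
sketch like this times name resolution through the normal OS resolver
path ('hosts' file, local DNS cache if enabled, then the network); the
hostname is just a placeholder, and the second call is typically
answered from the cache when the DNS Client service is running:

    import socket
    import time

    def time_lookup(host):
        """Time one name resolution through the OS resolver stack."""
        start = time.perf_counter()
        socket.getaddrinfo(host, None)
        return (time.perf_counter() - start) * 1000.0   # milliseconds

    host = "www.example.com"   # placeholder hostname
    print("first lookup : %.1f ms" % time_lookup(host))  # likely hits the network
    print("second lookup: %.1f ms" % time_lookup(host))  # likely served from cache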

You can use GRC's DNSbench to see the request and response times for
DNS lookups. Different DNS servers will have differing response times,
which also reflect the hops between your endpoint and the targeted DNS
server.

https://www.grc.com/dns/benchmark.htm

I would suggest editing its install-time list of DNS servers, as there
are *many* that are of no use to you or will never be considered for
use. This tool will also indicate which servers redirect failed
lookups to their "help" redirection site (for which they get
clickthrough revenue), and which will break some webcentric apps that
actually expect a negative (failed) DNS lookup to return an error code
rather than a success code for the redirection page. Some DNS servers
include some filtering, like eliminating or blocking known malicious
sites (but there are always a few false positives in those
blacklists). I configured the IP protocols on my PC to use the
following DNS servers in the following order: Google DNS (8.8.8.8 and
2001:4860:4860::8888), OpenDNS (208.67.222.222 and 2620:0:ccc::2), and
my router's internal DNS server (10.0.0.1 and 0:0:0:0:0:ffff:a00:1).
This is the preference or fallback order: first to last.
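
A crude version of what DNSbench measures can be scripted, too. This
sketch assumes the third-party dnspython package (dnspython 2.x, where
the call is resolve()); it queries each server directly, bypassing the
local cache, and example.com is just a probe name:

    # pip install dnspython
    import time
    import dns.resolver

    SERVERS = {
        "Google DNS": "8.8.8.8",
        "OpenDNS":    "208.67.222.222",
        "Router":     "10.0.0.1",   # the router address above; yours may differ
    }

    for name, ip in SERVERS.items():
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [ip]          # query this server only
        start = time.perf_counter()
        try:
            resolver.resolve("example.com", "A", lifetime=3)
            ms = (time.perf_counter() - start) * 1000.0
            print("%-10s %-15s %6.1f ms" % (name, ip, ms))
        except Exception as exc:
            print("%-10s %-15s failed: %s" % (name, ip, exc))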

OpenDNS includes a malicious site filter that you cannot disable
(unless you enlist as a reporter with them). However, I found them
(according to DNSbench) to be a tad slower overall than Google's. My
router's internal DNS server is not really a server: it is a
transparent proxy that merely passes all DNS requests up to its
upstream DNS server. The router is configured to use DHCP, which means
the router will use my ISP's DNS server; however, that is only reached
if the DNS servers earlier in the preference order are unreachable (the
fallback order tries the router last). Remember to do the static DNS
server config for both IPv4 and IPv6 addressing.
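
For the static DNS config itself, something along these lines should
work from an elevated prompt; the sketch drives netsh, and the
interface name "Ethernet" is an assumption (check yours with "netsh
interface show interface"):

    import subprocess

    IFACE = "Ethernet"   # assumption: adjust to your adapter's name

    commands = [
        # IPv4: Google DNS as primary, OpenDNS as second choice
        'netsh interface ipv4 set dnsservers name="%s" source=static '
        'address=8.8.8.8 register=primary' % IFACE,
        'netsh interface ipv4 add dnsservers name="%s" '
        'address=208.67.222.222 index=2' % IFACE,
        # IPv6 equivalents
        'netsh interface ipv6 set dnsservers name="%s" source=static '
        'address=2001:4860:4860::8888 register=primary' % IFACE,
        'netsh interface ipv6 add dnsservers name="%s" '
        'address=2620:0:ccc::2 index=2' % IFACE,
    ]

    for cmd in commands:           # must run as administrator
        subprocess.run(cmd, shell=True, check=True)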

The only time it is recommended to disable the DNS Client service (the
local DNS cache) is when using pre-compiled and HUGE 'hosts' files.
The 'hosts' file entries are consulted before the local DNS cache. In
9x-based Windows, it was noticed that the DNS Client could add overhead
when using a huge 'hosts' file (I'm talking about thousands of entries
versus the few dozen to a couple hundred that this text file, opened on
every DNS lookup and read sequentially line by line, was designed for).
However, those huge pre-compiled 'hosts' files (used for ad and
tracking blocking) add more overhead than does the DNS Client's
caching. Those pre-compiled 'hosts' files are huge: the one from MVPS
is over *14 THOUSAND* lines long. The 'hosts' file is not cached into
memory. It is opened (a file I/O API system call) and read one line at
a time to sequentially scan the text file for a matching entry on a
hostname. It only works on hostnames, not whole domains, which is why
there are dozens and dozens of entries for just one resource (e.g., 117
for doubleclick in the MVPS pre-compiled 'hosts' file).
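
In other words, the scan behaves roughly like this sketch: exact
hostname matches only, no wildcarding a whole domain, which is why a
blocklist needs a separate line for every subdomain:

    def lookup_in_hosts(hostname,
                        path=r"C:\Windows\System32\drivers\etc\hosts"):
        """Sequentially scan the 'hosts' file for an exact hostname match."""
        with open(path) as f:                         # opened on every lookup
            for line in f:                            # read line by line
                line = line.split("#", 1)[0].strip()  # drop comments/blanks
                if not line:
                    continue
                fields = line.split()
                ip, names = fields[0], fields[1:]
                if hostname in names:  # "ads.example.com" matches only itself,
                    return ip          # never an "example.com" entry
        return None                    # no match: fall through to DNS

    print(lookup_in_hosts("localhost"))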

I don't believe the DNS Client has incurred that overhead on 'hosts'
file lookups for a long time in NT-based Windows. As with any process,
the DNS Client service will consume resources (CPU and RAM), but few
users are still using such ancient processors with tiny system RAM and
a slow data bus on the mobo. However, the user might wish to tweak the
DNS Client's settings in the registry to immediately flush negative
(failed) DNS lookups. The default is 900 seconds (15 minutes). The
site may fix a problem, but the user will continue to get failed
lookups while the local DNS cache still lists a negative result for
that host; still, 15 minutes isn't very long. Caching negative results
keeps you (or the external resource links in a delivered web page) from
wasting time querying a DNS server only to get back yet another failed
result. See Microsoft's KB 318803 (http://tinyurl.com/ybjwbc37).
86400 seconds (24 hours) is the default cache time for positive
results. If you often visit flaky or unreliable sites, or the type
that move around a lot, you might want to shorten this to, say, 4
hours, which is probably longer than your web sessions in your HTTP
client. Because these registry tweaks are under the HKEY_LOCAL_MACHINE
hive, changes there affect all user accounts in that instance of
Windows. If the settings are absent from the registry, the defaults
get used.
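
As a sketch, the tweak looks like this; the value names (MaxCacheTtl
and MaxNegativeCacheTtl, both DWORD seconds) are the ones KB 318803
documents for XP-era Windows, and the script must run elevated, with a
restart of the DNS Client service (or a reboot) afterward:

    import winreg

    PATH = r"SYSTEM\CurrentControlSet\Services\Dnscache\Parameters"

    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, PATH, 0,
                            winreg.KEY_SET_VALUE) as key:
        # positive results: 4 hours instead of the 86400-second default
        winreg.SetValueEx(key, "MaxCacheTtl", 0,
                          winreg.REG_DWORD, 4 * 3600)
        # negative results: flush at once instead of the 900-second default
        winreg.SetValueEx(key, "MaxNegativeCacheTtl", 0,
                          winreg.REG_DWORD, 0)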

I've left the positive cache set to the 24-hour expiration. I don't
leave the web browser open all day but I may load it several times per
day and often revisit the same sites (or different sites often access
the same off-domain resources; e.g., the Google site for jquery). Since
I'm using the defaults, negative results are cached for 15 minutes. I
don't visit sites by hostname that move around that often, and if I get
a negative DNS result, it is cleared in 15 minutes, which is probably
longer than it takes me to figure out the cause of the problem; the
site is likely faster than that at fixing it anyway.

There is also the issue that many ISPs operate caching DNS servers.
This is to quickly return a positive result for the same lookup request
from hundreds, or more, of their customers. Server-side caching helps,
but you have no control over their positive and negative cache
expirations. The GRC DNSbench tool will measure the difference between
raw (uncached) DNS lookup requests and those returned from server-side
DNS caching: red = cached DNS lookup time, green = uncached DNS lookup
time, blue = dot-com lookup time, since .com is the most widespread TLD
(top-level domain).


I did most of that, but now I have set up a DNS cache in my router
which intercepts all traffic aimed outward at port 53 on any IP address
and queries OpenDNS when entries have expired.

--

Brian Gregory (in England).