Posted May 31st 17, 09:41 PM to microsoft.public.windowsxp.general,
alt.comp.os.windows-xp, comp.os.ms-windows.networking.misc
From: VanguardLH
Subject: Can't connect to Web

Steve Hayes wrote:

> Having read the replies to your post, I think the DNS thing is
> unlikely, but last week Google appeared to be down quite a bit, but
> only in some places. I had to resort to Bing for searches, and was
> quite surprised at how quickly it appeared, much faster than Google,
> probably because it has less traffic. Sites that connect to Google
> were also much slower to load -- they seemed to hang until the Google
> connection timed out.
>
> From people I asked, it seemed that Google's servers in the east of
> South Africa and in New Zealand were down, but not in the UK or USA.


You might also try to find out whether the route you happen to get is
actually usable. Routing is not dynamic: when you can't get through one
route to a target, you don't automatically get assigned a different
route to try. Hosts (nodes in a route) go down, become unresponsive, or
get so busy that they respond too slowly and clients time out. Updating
the routing tables (once the problem has been reported and after someone
takes action) can take about 4 hours (that's usually how long I wait out
routing problems, but they can take a lot longer).

Run a traceroute (tracert) to see if you can reach the target host. For
example, last night I could not get to boatloadpuzzles.com. The web
browser said the site was unresponsive. Not true. A traceroute showed
I wasn't even getting outside my own ISP's network. Something weird was
happening where I kept getting looped back through the same two nodes.
I could get to other sites, like yahoo.com, but not to that one site.
It wasn't a site problem. It was a routing problem, so I wasn't even
reaching the site.
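
For example (the hop output below is illustrative, with made-up
addresses and my annotations in parentheses, just to show what a
routing loop looks like):

tracert boatloadpuzzles.com

  1    <1 ms   <1 ms   <1 ms   192.168.1.1   (your router)
  2    10 ms   11 ms   10 ms   10.0.0.1      (ISP hop A)
  3    12 ms   12 ms   13 ms   10.0.0.2      (ISP hop B)
  4    11 ms   12 ms   11 ms   10.0.0.1      (ISP hop A again -- loop)
  5    13 ms   14 ms   13 ms   10.0.0.2      (ISP hop B again -- loop)

A healthy trace keeps moving through new hops until it reaches the
target. When the same pair of addresses keeps repeating, the packets
are bouncing between two routers inside the ISP's network and never
getting out of it.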

You can use public proxies to reach a site through different routes. I
once couldn't reach creative.com for several days, yet I could get there
fine through a public proxy. Why? The route I got happened to hit one
of the boundary servers (front ends) to their web farm that was down.
The other routes hit different boundary servers, so those got through.
When I presented the various routes (mine that was unusable and others
that worked) to show which boundary host was unresponsive, they fixed
the problem in a day.

Another time I couldn't get to any site on the west coast but could get
everywhere else. It turned out an entire backbone provider (Sprint) had
an outage, so no one could cross most of the Rockies in either
direction. I reported the problem, but they already knew. That was a
short-lived outage, but it doesn't feel short when you're sitting at
your computer wondering why you can't get somewhere.

By the way, and back to the DNS topic, there is a possibility that DNS
will interfere with visiting a site, not because the DNS server is down,
but because the site might've changed its IP address while you're still
using the stale lookup from the local DNS Client's cache. TTL
(time-to-live) entries for successful lookups (positives) last longer in
the DNS Client's cache than those for failed lookups (negatives). Some
users might suggest disabling the DNS Client service, but that means
your end has to do more DNS lookups. Without the cache, every hostname
has to be looked up every time, even if it is the same one. A web page
can have hundreds of references requiring a DNS lookup. Having to send
a request to a DNS server and wait for the response takes time; looking
it up in a local cache is much quicker. By default in Windows, the TTLs
are:

positive TTL = 24 hours
negative TTL = 5 minutes

You can modify those DWORD values in the registry.

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\DNSCache\Parameters

MaxCacheEntryTtlLimit (positive TTL), default = 86400 seconds (24 hours)
NegativeCacheTime (negative TTL), default = 300 seconds (5 minutes)
(if the entries are absent, the defaults get used)
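
If you'd rather not open the registry editor, the reg.exe command-line
tool can set these; for example (a sketch -- 7200 is an arbitrary choice
of 2 hours), to shorten the positive TTL:

reg add HKLM\SYSTEM\CurrentControlSet\Services\DNSCache\Parameters /v MaxCacheEntryTtlLimit /t REG_DWORD /d 7200 /f

The change may not take effect until the DNS Client service is
restarted (net stop dnscache, then net start dnscache) or you reboot.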

Some folks set the negative TTL to zero so that failed DNS lookups are
not cached at all. Their intent is to keep requesting new lookups from
the DNS server until they happen to get one that works, hoping to reach
the site as soon as the DNS records get updated. That seems a bit rude,
but I can see reducing this to one minute. The default positive TTL
seems overly long, since a site could change its IP address inside of a
day. An hour or two seems more appropriate.
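
Note that nslookup queries the DNS server directly rather than reading
the Windows DNS Client cache, so it's a quick way to check whether the
server already has a newer address than the stale one you're using:

nslookup creative.com

If nslookup returns a different IP address than the one your browser is
timing out on, the local cache is stale and flushing it (below) should
get you going again.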

If you suspect there is a problem with the Windows DNS cache, you can
flush it to start over with all new DNS lookups and fresh caching. Run:

ipconfig /flushdns
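
You can also dump the cache's current contents first to confirm a
suspect entry:

ipconfig /displaydns

Each cached record is listed with its remaining time-to-live in
seconds, so you can see how much longer a stale entry would have
lingered.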

Note that web clients can incorporate their own internal DNS cache that
is used instead of the one in Windows, so the registry settings above
don't apply to it. Firefox has its own DNS cache. I suspect that is
partly due to Firefox being cross-platform: they want to rely on their
own DNS cache instead of hoping there is one (and that it's still
functional) back in the OS. I forget why I ran into problems with
Firefox's own DNS cache, but back then I disabled it to resolve whatever
the problem was. I have left the DNS Client enabled in Windows so that
DNS caching is still available, and I don't want (and ran into problems
with) Firefox's own internal DNS cache. In about:config, Firefox's TTL
setting is at:

network.dnsCacheExpiration
default = 3600 (1 hour)
0 (zero) disables Firefox's DNS cache
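
If you'd rather set it from a file than through about:config, the same
pref can go in a user.js file in your Firefox profile folder (a sketch;
user.js prefs are reapplied at every Firefox startup):

// user.js in the Firefox profile folder
// 0 disables Firefox's internal DNS cache (see below)
user_pref("network.dnsCacheExpiration", 0);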

Setting it to zero disables the cache, which also flushes all currently
cached DNS lookups. Apparently Firefox only caches positive results,
not negative ones. You could then reset it back to 3600 or a value of
your choice, or just leave it at zero (and rely on the DNS Client in
Windows to do both positive and negative DNS caching). There are
add-ons for Firefox to flush Firefox's internal DNS cache, like DNS
Flusher, but since I disabled Firefox's internal DNS cache I don't need
an add-on to fix DNS caching problems within Firefox.

In Firefox, I also set network.dns.disablePrefetch = true, but that's
for a different DNS issue: Firefox populating its internal DNS cache for
any resources specified in the currently loaded page. This has Firefox
prefetching IP addresses from the DNS server for resources that you may
never need, like a hyperlink to another site that you won't be visiting,
or ad and tracking sources that can then see your IP address looked
them up. You might use an adblocker, but prefetching in Firefox
partially undermines what the adblocker is trying to do. See:

https://en.wikipedia.org/wiki/Link_p...and_criticisms
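
If you use a user.js file (see the sketch earlier), the prefetch
setting can go there too:

// also in user.js: don't prefetch DNS lookups for page resources
user_pref("network.dns.disablePrefetch", true);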