Can HDTune go slower where there is data, if something's set wrong?



 
 
  #1  March 5th 18, 11:39 AM, posted to alt.windows7.general
J. P. Gilliver (John)

I have a brand-new "1T" (of course thus only 931.5G) drive. (It is now
showing 158 hours on time.)

When I run HDTune, I get a huge gouge from about 11-23%, and some in the
first 3%, about which I'm somewhat concerned: surely a brand-new drive
can't be this bad? (The SMART data actually showed 0 for hours the first
time I looked, and I have stored looks that then showed 4 hours and now
15x; it came in a sealed bag; and the label on the drive itself says
November 2017. I _imagine_ all these things can be faked, though I
imagine the SMART one requires special knowledge/equipment; however,
I've no real reason to suspect the trader I bought it from - and
besides, 1T 2.5" haven't been around _that_ long, have they?)

However, it has just occurred to me that the gouges correspond to where
I actually have data: I have it partitioned into C:, which is 71.6G of
99.9G free, and D:, which is 723G of 831G free. [Is there a setting in
explorer to show used rather than free?]
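
(An aside on that bracketed question: as far as I know Explorer only
offers free-space and total-size columns, but PowerShell reports used
and free side by side. A quick check, assuming drive C::

   Get-PSDrive C

The default output lists Used (GB) and Free (GB) for the drive.)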

Since the gouges correspond rather well to where my data is, I wonder if
that's their reason. Also, the remainder of the HDTune graph is fairly
flat, whereas the ones I've seen people post usually tail down towards
the end, so I'm wondering if I've got HDTune - or some other setting in
the computer (BIOS?) - set at sub-optimum, such that it's (a) limiting
the maximum (the flat line is at about 170 MB/sec - is that low? - and
the "gouge" is at about 90), and (b) making HDTune drop where there's
actually something on the disc. I'm quite happy with the performance of
the computer, and all other tests including "Very long test (~4 min)"
from http://www.softwareok.com/?Microsoft/IsMyHdOK say it's OK. (I
haven't run HDTune's non-quick error scan yet, as that clearly will take
hours; I'll set it going tonight if I remember.)

It's an HGST (I think that's Hitachi/Toshiba combined?) HTS541010B7E610,
according to HDTune [I deliberately went for a "5400" rather than a
"7200"], and (again according to HDTune) apparently supports UDMA Mode 6
(Ultra ATA/133), with Active being UDMA Mode 5 (Ultra ATA/100). The PC
is a Toshiba Portégé R700-1F5.
--
J. P. Gilliver. UMRA: 1960/1985 MB++G()AL-IS-Ch++(p)Ar@T+H+Sh0!:`)DNAf

"I'm a self-made man, thereby demonstrating once again the perils of unskilled
labor..." - Harlan Ellison
  #2  March 5th 18, 12:55 PM, posted to alt.windows7.general
Ed Cryer

J. P. Gilliver (John) wrote:
[]


I think that in your situation I'd run some MS utility before anything else.
Firstly, a simple file-structure check.
Secondly, a pre-boot full scan.
I'd decide what next from those results.
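
Presumably the MS utility in mind is chkdsk; from an elevated prompt
the two steps would look like this (standard flags, nothing exotic):

   chkdsk C:       (read-only check of the file structure)
   chkdsk C: /r    (full scan including a bad-sector search; on the
                    system drive it gets scheduled for the next
                    reboot, i.e. it runs pre-boot)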

If ok, look further in HDTune.
If not, then back to the vendor.

Have you got a good backup?

Ed
  #3  March 5th 18, 07:02 PM, posted to alt.windows7.general
Paul

J. P. Gilliver (John) wrote:
[]


It's possible for a drive to have a "wear" pattern.

The place where say, an OS lives, can cause a lot of reallocations
just in that area. When HDTune runs over it, the reallocations take
longer to read, and cause the visible "gouge" symptoms. I had to
replace a drive here, just because the OS portion (72GB approx)
had become too slow to use. Reallocations still read out as "0".

Benchmarking is a more sensitive test than looking for Reallocation
raw data of 0. You can have a gouge in a drive, the drive can "feel"
slow, and Reallocations still reads 0 and the overall drive health
will use a superlative.

And yes, of course drives leave the factory with defects already
patched. No drive leaves the factory without reallocations already
on there. There is an "acceptance" value for defects. If the drive
is below that limit, "ship it". For example, there might already
be 100,000 reallocations as it slips out the door.

When Reallocations data becomes non-zero, you're looking at some "slice"
of ill health, rather than receiving a totally unbiased report. In the
example here, we only find out about the N reallocations as the
drive is approaching end of life. By the time the value of N is high
enough, the user runs out to the store and buys another. The drive is
acceptable to the factory, anywhere in the "0" area. The
drives I was watching here, the percent life said the
N would span 0..5500 or so as a raw value. Above 5500, one
would guess that visible CRC errors and retired NTFS clusters
would be the result. Once the Service Area (Track -1) is
unreadable, the drive should stop being detected.

|------- 0 -------|------ "N" ------|--- "~dead..." ---
|___________________________________________________________
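
(As an aside, if you want to watch that raw counter yourself,
smartmontools will print it; the device name here is illustrative:

   smartctl -A /dev/sda

The RAW_VALUE column of attribute 5, Reallocated_Sector_Ct, is the
count being discussed here.)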


The first hard drives, the full height drives which were 3.5"
high, they had "factory" and "grown" errors. I used to get
drives, and dump the "factory" list and print it on a piece
of paper (pasted inside the computer chassis). The fact there
was a "factory" list tells you the "factory" knew about them.
And during the testing/verifying phase at the factory, sectors
were already being reallocated. On those drives, you could
reset the "grown" defects and let the reallocation algo
re-detect them. Which worked pretty well (I tested it, because
that's the kind of guy I am). There weren't a lot of grown
that weren't re-detected that way, so the ones it had detected,
really were bad. Modern IDE drives do not support that interface
and once a sector is reallocated on an IDE/SATA drive, it's
like that forever (or until the factory refurbishes the drive
or something).

*******

HGST appeared out of nowhere one day. There was some announcement
of the consumer part of the IBM disk business, being part of the
new venture. With it, went some IBM scientists. The IBM staff
were responsible for the HGST web pages, the ones that explained
how modern drives worked, and what wizmo material science
was being used. I don't know what hand Hitachi had in all
of it. Japanese companies don't typically have those sorts
of web pages, explaining how they do stuff. It was the IBM
people doing that. HGST was later bought by Western Digital.
There was an article on Anandtech in the last couple years,
where it listed who bought who, and it's possible the
only three brands left are WDC, Seagate, and Toshiba.
HGST drives may still keep their branding, but WDC is the
corporate master now. I don't know whether the IBM staff
go back to IBM or what happens to them.

This article actually sums up what happened, in a paragraph or two.
What these articles don't typically cover, is what happened to the people.

https://en.wikipedia.org/wiki/HGST

Paul
  #4  March 6th 18, 02:44 AM, posted to alt.windows7.general
J. P. Gilliver (John)

In message , Paul
writes:
J. P. Gilliver (John) wrote:
I have a brand-new "1T" (of course thus only 931.5G) drive. (It is
now showing 158 hours on time.)
When I run HDTune, I get a huge gouge from about 11-23%, and some in
the first 3%, about which I'm somewhat concerned: surely a brand-new
drive can't be this bad? (The SMART data actually showed 0 for hours

[]
However, it has just occurred to me that the gouges correspond to
where I actually have data: I have it partitioned into C:, which is
71.6G of 99.9G free, and D:, which is 723G of 831G free. [Is there a
setting in explorer to show used rather than free?]

[]
It's possible for a drive to have a "wear" pattern.


I'm not sure I understand what you mean by a wear pattern. (Would you
expect one after only 173 hours from new anyway?)

The place where say, an OS lives, can cause a lot of reallocations
just in that area. When HDTune runs over it, the reallocations take
longer to read, and cause the visible "gouge" symptoms. I had to
replace a drive here, just because the OS portion (72GB approx)
had become too slow to use. Reallocations still read out as "0".


This drive "feels" fine.

Benchmarking is a more sensitive test than looking for Reallocation
raw data of 0. You can have a gouge in a drive, the drive can "feel"
slow, and Reallocations still reads 0 and the overall drive health
will use a superlative.

And yes, of course drives leave the factory with defects already
patched. No drive leaves the factory without reallocations already
on there. There is an "acceptance" value for defects. If the drive
is below that limit, "ship it". For example, there might already
be 100,000 reallocations as it slips out the door.


Yes, I realise that, and that they conceal the first few to prevent
selective returns; and what you've said often about HDTune being a
reasonable way to detect what the SMART won't tell you, by consistent
dips on successive runs. But (a) there seem to be a lot more - i. e.
about 12% or more - than I'd have expected on a new disc, and (b) they
_do_ seem to correspond rather suspiciously to where I actually have
data. [And (c) the HDTune line doesn't tail off at all towards 100%.]

(If you email me, I'll send you the HDTune screenshots.)
[]
The first hard drives, the full height drives which were 3.5"
high, they had "factory" and "grown" errors. I used to get
drives, and dump the "factory" list and print it on a piece
of paper (pasted inside the computer chassis). The fact there


I remember seeing those with a provided list pasted to the drive.
[]
HGST appeared out of nowhere one day. There was some announcement

[]
where it listed who bought who, and it's possible the
only three brands left are WDC, Seagate, and Toshiba.
HGST drives may still keep their branding, but WDC is the
corporate master now. I don't know whether the IBM staff
go back to IBM or what happens to them.

This one says Nov 2017 on the label.

This article actually sums up what happened, in a paragraph or two.
What these articles don't typically cover, is what happened to the people.

https://en.wikipedia.org/wiki/HGST

Paul

Thanks. So the H was originally Hitachi, Toshiba is nowhere in it, and
in practice it's part of WD now.
--
J. P. Gilliver. UMRA: 1960/1985 MB++G()AL-IS-Ch++(p)Ar@T+H+Sh0!:`)DNAf

There's only so much you can do... with gravel.
- Charlie Dimmock, RT 2016/7/9-15
  #5  March 7th 18, 10:51 PM, posted to alt.windows7.general
J. P. Gilliver (John)

In message , "J. P. Gilliver (John)"
writes:
[]
I've now done the non-"Quick Scan" scan (about 155 minutes), and it
didn't find any damaged sectors.

I've also played with the settings for Benchmark. There are two - one
slider for speed versus accuracy, with five levels, and one for block
size. What do these mean? I can see that the speed one affects how fast
the test runs, but it doesn't seem to affect whether the gouges are
there or not. But the block size does! All the tests I'd done so far -
that show gouges (wide dips) near the beginning and at around 12-23%
(corresponding rather suspiciously to where there is data on the disc)
were with a block size of 64 KB, which was how HDTune came up. They gave
- speed 2: flat at about 160 MB/sec, gouge roughly 90 (3 runs). Then I
did a speed 1 (slowest) run with block size of 512 B (smallest): this
gives a flat trace - no gouges! - but at only about 2.75 MB/sec. Then I
did a speed 1 but back at 64 KB block size: gouges are back - flat is
about 140 MB/s, gouge about 90. Then speed 2, 512 B block: flat, no
gouges, about 2.8 MB/s. Then speed 5, 64 KB: gouges, flat ~140M/s, gouge
90ish. Finally speed 2, block size 8 MB: flat way up at about 230
MB/sec, gouge bottoming still at about 90.

In short, a small block size gives a flat line with no gouges, but at a
very low level; a larger one gives a graph with the gouges (which seem
to correspond to where there is data), with varying flat level.

So what does the block size parameter mean/do?
--
J. P. Gilliver. UMRA: 1960/1985 MB++G()AL-IS-Ch++(p)Ar@T+H+Sh0!:`)DNAf

Advertising is legalized lying. - H.G. Wells
  #6  March 7th 18, 11:14 PM, posted to alt.windows7.general
Paul

J. P. Gilliver (John) wrote:
In message , "J. P. Gilliver (John)"
writes:
In message , Paul
writes:

[]
This drive "feels" fine.

Benchmarking is a more sensitive test than looking for Reallocation
raw data of 0. You can have a gouge in a drive, the drive can "feel"
slow, and Reallocations still reads 0 and the overall drive health
will use a superlative.

And yes, of course drives leave the factory with defects already
patched. No drive leaves the factory without reallocations already
on there. There is an "acceptance" value for defects. If the drive
is below that limit, "ship it". For example, there might already
be 100,000 reallocations as it slips out the door.


Yes, I realise that, and that they conceal the first few to prevent
selective returns; and what you've said often about HDTune being a
reasonable way to detect what the SMART won't tell you, by consistent
dips on successive runs. But (a) there seem to be a lot more - i. e.
about 12% or more - than I'd have expected on a new disc, and (b) they
_do_ seem to correspond rather suspiciously to where I actually have
data. [And (c) the HDTune line doesn't tail off at all towards 100%.]

(If you email me, I'll send you the HDTune screenshots.)

[]
I've now done the non-"Quick Scan" scan (about 155 minutes), and it
didn't find and damaged sectors.

I've also played with the settings for Benchmark. There are two - one
slider for speed versus accuracy, with five levels, and one for block
size. What do these mean? I can see that the speed one affects how fast
the test runs, but it doesn't seem to affect whether the gouges are
there or not. But the block size does! All the tests I'd done so far -
that show gouges (wide dips) near the beginning and at around 12-23%
(corresponding rather suspiciously to where there is data on the disc)
were with a block size of 64 KB, which was how HDTune came up. They gave
- speed 2: flat at about 160 MB/sec, gouge roughly 90 (3 runs). Then I
did a speed 1 (slowest) run with block size of 512 B (smallest): this
gives a flat trace - no gouges! - but at only about 2.75 MB/sec. Then I
did a speed 1 but back at 64 KB block size: gouges are back - flat is
about 140 MB/s, gouge about 90. Then speed 2, 512 B block: flat, no
gouges, about 2.8 MB/s. Then speed 5, 64 KB: gouges, flat ~140M/s, gouge
90ish. Finally speed 2, block size 8 MB: flat way up at about 230
MB/sec, gouge bottoming still at about 90.

In short, a small block size gives a flat line with no gouges, but at a
very low level; a larger one gives a graph with the gouges (which seem
to correspond to where there is data), with varying flat level.

So what does the block size parameter mean/do?


I would think a too-small block size choice would result
in not getting any "interesting" information from the hard drive.
To get "gouges" to show up, you need to be asking for at least
a handful of sectors, so the one or more bad sectors show abnormal
response time (or equivalent bandwidth).

If you wanted to know the details (stride and block size actually used),
you could use Sysinternals Process Monitor and log what it is doing.
It doesn't read every sector. It strides along the disk and
takes samples of "blocksize" size. But you can verify that theory
with ProcMon. I don't want to spoil your fun :-)
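
That stride-and-sample theory, as a rough Python sketch (illustrative
only, not HDTune's actual code; the device path, disk size and sample
count are assumptions, raw reads need Administrator rights, and
offsets must stay sector-aligned):

   # Sketch of a "stride and sample" read benchmark.
   import time

   DEVICE    = r"\\.\PhysicalDrive0"   # assumed: the drive under test
   DISK_SIZE = 1_000_202_273_280       # bytes, per the drive status tab
   SAMPLES   = 500                     # points on the benchmark graph
   BLOCK     = 64 * 1024               # the "block size" slider value

   with open(DEVICE, "rb", buffering=0) as disk:
       stride = DISK_SIZE // SAMPLES
       for i in range(SAMPLES):
           pos = (i * stride // 512) * 512   # sector-aligned offset
           disk.seek(pos)
           t0 = time.perf_counter()
           disk.read(BLOCK)                  # one timed sample
           dt = time.perf_counter() - t0
           print(f"{100 * i / SAMPLES:5.1f}%  {BLOCK / dt / 1e6:8.2f} MB/s")

With a 512-byte block each sample is dominated by per-command overhead
rather than media speed, which would fit the low flat line John saw at
small block sizes.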

Paul
  #7  March 7th 18, 11:54 PM, posted to alt.windows7.general
J. P. Gilliver (John)

In message , Paul
writes:
[]

If you wanted to know the details (stride and block size actually used),
you could use Sysinternals Process Monitor and log what it is doing.
It doesn't read every sector. It strides along the disk and
takes samples of "blocksize" size. But you can verify that theory
with ProcMon. I don't want to spoil your fun :-)

Paul

It isn't fun any more!

I've just experimented: the gouge shows up with a block size of 32 KB,
but not with one of 16 KB.

As it "strides along the disc" (presumably at a speed determined by the
speed slider setting), when it takes samples, does it find empty gaps of
that size and use those? If it is looking at a part of the disc where
there is data rather than empty space, what does it do - move aside the
data in order to do its tests? That might take longer for larger block
sizes maybe?
I've put some screenshots in http://255soft.uk/temp/HDTune.zip
(filenames giving speed and blocksize settings).
--
J. P. Gilliver. UMRA: 1960/1985 MB++G()AL-IS-Ch++(p)Ar@T+H+Sh0!:`)DNAf

Someone once said that scientists and prostitutes get paid for doing what they
enjoy. - Prof Stephen Hawking in RT 2013/12/7-13
  #8  March 8th 18, 05:00 AM, posted to alt.windows7.general
Paul

J. P. Gilliver (John) wrote:
[]

It isn't fun any more!

I've just experimented: the gouge shows up with a block size of 32 KB,
but not with one of 16 KB.

As it "strides along the disc" (presumably at a speed determined by the
speed slider setting), when it takes samples, does it find empty gaps of
that size and use those? If it is looking at a part of the disc where
there is data rather than empty space, what does it do - move aside the
data in order to do its tests? That might take longer for larger block
sizes maybe?
I've put some screenshots in http://255soft.uk/temp/HDTune.zip
(filenames giving speed and blocksize settings).


I think I'd pull the drive from the laptop and test
it on another computer. It's a SATA III drive, with
a 128MB cache, 1040Mbit/sec media rate. If we take
the media rate and divide by 10, the disk should
do about 104MB/sec reads. If we take the media rate
and divide by 8 (unlikely), the rate would be about
125MB/sec. Yet, some of your benches show 225MB/sec
(like the drive is on a SATA II port and all the
data is coming from a controller cache).

Media rate is a shabby way of stating performance,
as it requires guys like me to estimate the coding
loss. Sustained read rate is a better measure, as
it requires no "secret knowledge" of the thing.

HDTune should be doing a read-only test that makes no
state changes to the disk whatsoever. It's doing raw disk reads
while ignoring the file system (it's like it is reading /dev/sda).

You could do an error scan while screen recording the bandwidth
recorded as a function of position, if you wanted to continue using
your existing hardware setup for this. Those two numbers
in the corner, can be used to chart disk performance in a
pinch. With the assurance the test is reading the entire
1TB of data from the disk.

https://s10.postimg.org/x1zpk616x/hdtune_error_scan.gif

I've recorded such an HDTune run with a screen recorder, then
applied OCR to the screen dump, but it was far from "fun".
The OCR of course, mixes up 0 and O with the usual comedic
results. The log generated required significant manual
correction.

*******

https://www.hgst.com/sites/default/f..._datasheet.pdf

Your transfer curves are flat. That's unrealistic. HDDs have a declining curve.

Your access time is 2.8ms. That's unrealistic.

One of your scans shows 225MB/sec sustained, when the data sheet
allows at most 125MB/sec. That's unrealistic.

Maybe your machine has a Robson cache (20GB Flash that goes
with certain chipsets). Maybe you've loaded a third-party
caching software that doesn't listen to "Do-Not-Cache"
requests.

But as it stands now, your scans lack a certain amount of
realism. The drive isn't as fast as an SSD. But it's too
fast for an HDD in certain ways.

Paul
  #9  March 8th 18, 02:40 PM, posted to alt.windows7.general
J. P. Gilliver (John)

In message , Paul
writes:
J. P. Gilliver (John) wrote:

[]
I've put some screenshots in http://255soft.uk/temp/HDTune.zip
(filenames giving speed and blocksize settings).


I think I'd pull the drive from the laptop and test
it on another computer. It's a SATA III drive, with
a 128MB cache, 1040Mbit/sec media rate. If we take


Unfortunately, I suspect this machine is SATA II (Toshiba Portégé
R700-1F5); I'm pretty sure any others I have will not be above SATA II,
and I'd almost certainly have to test it via USB2 anyway.

the media rate and divide by 10, the disk should
do about 104MB/sec reads. If we take the media rate
and divide by 8 (unlikely), the rate would be about
125MB/sec. Yet, some of your benches show 225MB/sec
(like the drive is on a SATA II port and all the
data is coming from a controller cache).


Mostly 140 or 170 - only the one with 8 MB "block size" had the 225. But
yes, suspiciously fast.

Media rate is a shabby way of stating performance,
as it requires guys like me to estimate the coding
loss. Sustained read rate is a better measure, as
it requires no "secret knowledge" of the thing.

HDTune should be doing a read-only test that makes no
state changes to the disk whatsoever. It's doing raw disk reads
while ignoring the file system

That I understand, I think
(it's like it is reading /dev/sda).

(that I don't, but I assume it's UNIX/Linuxspeak).

I see (I think) - it's doing reads, but ignoring the data that is
actually read.

You could do an error scan while screen recording the bandwidth
recorded as a function of position, if you wanted to continue using
your existing hardware setup for this. Those two numbers
in the corner, can be used to chart disk performance in a
pinch. With the assurance the test is reading the entire
1TB of data from the disk.

https://s10.postimg.org/x1zpk616x/hdtune_error_scan.gif

I've recorded such an HDTune run with a screen recorder, then
applied OCR to the screen dump, but it was far from "fun".
The OCR of course, mixes up 0 and O with the usual comedic
results. The log generated required significant manual
correction.


Sounds like hard work (-:

*******

https://www.hgst.com/sites/default/f..._datasheet.pdf

Your transfer curves are flat. That's unrealistic. HDDs have a declining curve.


Yes, I thought so.

Your access time is 2.8ms. That's unrealistic.

One of your scans shows 225MB/sec sustained, when the data sheet
allows at most 125MB/sec. That's unrealistic.

Maybe your machine has a Robson cache (20GB Flash that goes
with certain chipsets). Maybe you've loaded a third-party
caching software that doesn't listen to "Do-Not-Cache"
requests.


The cache seems unlikely: I got the machine second-hand over a year ago.
I can't see a manufacturing date on it. It only has 3G of RAM, so a 20G
cache seems unlikely!

But as it stands now, your scans lack a certain amount of
realism. The drive isn't as fast as an SSD. But it's too
fast for an HDD in certain ways.

Paul


Any thoughts on the suspicious similarity between where the gouges are
and where I have data? (I think it's likely to be at the beginning of
the partitions - I've done a few defrags, and only been using the new
drive for a couple of weeks anyway.) And why they only show up over a
certain "block size" (not there at 16 KB or less, there at 32 KB or
more)?

For the moment, all seems to be working fine - I've not been aware of
sudden slowdowns. And it is a brand new drive, which I'd thought
_unlikely_ to have about 10-15% bad sectors. Which I bought for capacity
only; the cache is nice to have, but I didn't go looking for it, and I
specifically went for a "5400" rather than a 7200, feeling that would
probably run cooler and be more reliable.
--
J. P. Gilliver. UMRA: 1960/1985 MB++G()AL-IS-Ch++(p)Ar@T+H+Sh0!:`)DNAf

The average US shareholding lasts 22 seconds. Nobody knows who invented the
fire hydrant: the patent records were destroyed in a fire. Sandcastles kill
more people than sharks. Your brain uses less power than the light in your
fridge. The Statue of Liberty wears size 879 shoes.
- John Lloyd, QI supremo (RT, 2014/9/27-10/3)
  #10  March 8th 18, 07:00 PM, posted to alt.windows7.general
Paul

J. P. Gilliver (John) wrote:

[]


It looks like some cache in your setup is defeating
the "do-not-cache" features that HDTune will use.

The authors of these benchmarks go to a great deal
of trouble to defeat caches, and get raw media
characteristics in a scan. These schemes are
easily defeated by new hardware, requiring the
author of the software to "re-jig" how they
do stuff, to get the declining curve back.

If you simply refuse to remove the drive from the
laptop, that's OK. Get yourself the *trial* version
of the current HDTune, which should be able to
give a more realistic result, as the current
HDTune is maintained by its author. Uninstall
2.55 and give this a try.

http://www.hdtune.com/download.html

"HD Tune Pro 5.70 4 August 2017 hdtunepro_570_trial.exe 2187 KB

Licensing information:

after installation you can try out the program for 15 days.

If you want to you use the program after this period
you have to purchase a serial number.
"

I think your "gouge" pattern is real. But
when the other parameters of the scan look
more realistic, who knows, maybe the pattern
will be as realistic as possible too.

The thing is, no regular computer can cache
the entire 1TB drive. Only an Epyc with $40000 of
DRAM in it, could come close (there are 128GB RDIMMs
now). The benchmark curve reads a limited amount
of data, making it easier to cache. The error scan
reads the entire disk, but, it doesn't have a
graph. Let's hope the trial version sees the
cache problem and does something about it.

Paul
  #11  March 9th 18, 02:11 AM, posted to alt.windows7.general
J. P. Gilliver (John)

In message , Paul
writes:
J. P. Gilliver (John) wrote:

Any thoughts on the suspicious similarity between where the gouges
are and where I have data? (I think it's likely to be at the

[]
It looks like some cache in your setup is defeating
the "do-not-cache" features that HDTune will use.


Seems plausible.

The authors of these benchmarks go to a great deal
of trouble to defeat caches, and get raw media
characteristics in a scan. These schemes are
easily defeated by new hardware, requiring the
author of the software to "re-jig" how they
do stuff, to get the declining curve back.

If you simply refuse to remove the drive from the


Well, it's not so much _refuse_ (though I agree it'd be inconvenient),
just that the only way I'd have to test it would then be via a USB2
interface.

laptop, that's OK. Get yourself the *trial* version
of the current HDTune, which should be able to
give a more realistic result, as the current
HDTune is maintained by its author. Uninstall
2.55 and give this a try.

http://www.hdtune.com/download.html

"HD Tune Pro 5.70 4 August 2017 hdtunepro_570_trial.exe 2187 KB

[]
OK, got it. Though not too encouraged by the other utility that appeared
in the same program group as "HD Tune Pro" - "HD Tune Pro Drive Status"
- which tells me
Capacity: 1000 gB
1,000,202,273,280 bytes
1,953,520,065 sectors

Available Space: 76.0 gB (8%)
, which sounds like a wraparound error ... but still, carrying on:

I think your "gouge" pattern is real. But
when the other parameters of the scan look
more realistic, who knows, maybe the pattern
will be as realistic as possible too.

The thing is, no regular computer can cache
the entire 1TB drive. Only an Epyc with $40000 of
DRAM in it, could come close (there are 128GB RDIMMs


Certainly not in this little old laptop!

now). The benchmark curve reads a limited amount
of data, making it easier to cache. The error scan
reads the entire disk, but, it doesn't have a
graph. Let's hope the trial version sees the
cache problem and does something about it.

Paul


Benchmark curve, using default settings (speed 2, 64K blocks):
http://255soft.uk/temp/2018-3-9%201-...%2064%20KB.png
No real difference - the flat line is still higher than you'd expect,
and the gouge is still there. Doing the error scan now - it's done 7200
MB in nearly 5 minutes, so I'll let it run and upload when finished. Not
showing any Damaged 381MB squares yet (and 8001 MB is beyond the initial
glitches on the Benchmark curve).

Do you _really_ think a new drive would have 10-15% bad sectors? (Even
have that many "swap" ones, for that matter?)

Now got up to "12 gB", no red squares yet.
--
J. P. Gilliver. UMRA: 1960/1985 MB++G()AL-IS-Ch++(p)Ar@T+H+Sh0!:`)DNAf

Beatrix Potter was a bunny boiler.
- Patricia Routledge, on "Today" 2016-1-26
  #12  March 9th 18, 05:55 AM, posted to alt.windows7.general
Paul

J. P. Gilliver (John) wrote:


[]


I take it you've reviewed the "Health" tab in HDTune,
and verified the drive is new that way ? How many power on hours
does it have ? Is the stated number of power on hours
consistent with how long you've been using it ?

Did you check Programs and Features for any third party
cache programs ? Intel used to make a software cache
program, and Samsung may have tried their hand at it
too. I don't keep tabs on stuff like that, so can't
suggest any programs names.

I've had *one* drive here, enter into the "gouge" pattern,
right where the OS was located. And the drive had become so
slow, it was doing maybe 20MB/sec or so and I could *notice* it.
Terrible. That's why I ran HDTune that day, because the drive
was suspiciously slow. And the gouge pretty well aligned
perfectly with the OS. And the reallocated raw data field
was still "zero".

My drive might have had 1000 hours on it at that point. I have
another drive here with 39000 hours on it, and it doesn't have a
mark on it. The benchmark looks as good as the day I bought it.
Very strange.

I've had other drives, with the same model number as "Mr.Gouge"
where the reallocated field went non-zero, and that's the first
thing I knew about health not being what it should be. I retired
those from OS service. But, as scratch drives, I've put many
additional hours on those drives (more than one drive showed
reallocations), and they still haven't failed!

If you cannot improve the quality of the scan, you can:

1) Increase frequency of backups.

2) Simply watch for performance issues by "feel".
If the drive doesn't feel right some day, pull it
and retire it.

I probably have five or six drives in the "retired" pool,
and even the drive that let out a "squeak" one day hasn't
caused any more problems. In the old days, drives had an
anti-static "spring" which contacted the shaft of the
spindle, and that's where a squeak would come from.
Modern drives are not set up that way. And to me, if
a modern drive squeaks, you'd better do (1) immediately.
My rule of thumb is "the more noise a drive makes,
the sooner it will break". The head lock on my Barracuda
32550N which jammed and ground the heads into the platter,
it was plenty noisy before the failure. Every day, when
the heads unlocked, it sounded like the "free game"
relay on a pinball machine. Well, one day, instead of
unlocking, a sound like a giant watch spring was emitted
by the drive, and that was the heads being pushed face-first
into the platter. I don't know of too many heads with
sufficient flying height to "fly over" a channel
cut in the platter :-)

Paul
  #13  March 9th 18, 06:25 AM, posted to alt.windows7.general
J. P. Gilliver (John)

In message , Paul
writes:
J. P. Gilliver (John) wrote:

Benchmark curve, using default settings (speed 2, 64K blocks):
http://255soft.uk/temp/2018-3-9%201-...%2064%20KB.png
No real difference - the flat line is still higher than you'd expect,
and the gouge is still there. Doing the error scan now - it's done 7200
MB in nearly 5 minutes, so I'll let it run and upload when finished. Not
showing any Damaged 381MB squares yet (and 8001 MB is beyond the initial
glitches on the Benchmark curve).
Do you _really_ think a new drive would have 10-15% bad sectors?
(Even
have that many "swap" ones, for that matter?)
Now got up to "12 gB", no red squares yet.


Completed - http://255soft.uk/temp/2018-3-9%205-58.png

I take it you've reviewed the "Health" tab in HDTune,
and verified the drive is new that way ? How many power on hours
does it have ? Is the stated number of power on hours
consistent with how long you've been using it ?


Now showing 248 hours. Yes, that's about right: I bought it a week ago
last Sunday, and fitted it within a couple of days, and haven't turned
it off much since then. And it was in a sealed bag.

Did you check Programs and Features for any third party
cache programs ? Intel used to make a software cache
program, and Samsung may have tried their hand at it
too. I don't keep tabs on stuff like that, so can't
suggest any programs names.


This one is Toshiba. I can't see anything in Programs and Features that
looks like a cacher.

I've had *one* drive here, enter into the "gouge" pattern,
right where the OS was located. And the drive had become so
slow, it was doing maybe 20MB/sec or so and I could *notice* it.
Terrible. That's why I ran HDTune that day, because the drive
was suspiciously slow. And the gouge pretty well aligned
perfectly with the OS. And the reallocated raw data field
was still "zero".


Oh dear. But at least this one is not giving me any cause for concern
other than the HDTune graphs: if it wasn't for those, I'd not be worried
at all.

My drive might have had 1000 hours on it at that point. I have
another drive here with 39000 hours on it, and it doesn't have a
mark on it. The benchmark looks as good as the day I bought it.
Very strange.

I've had other drives, with the same model number as "Mr.Gouge"
where the reallocated field went non-zero, and that's the first
thing I knew about health not being what it should be. I retired
those from OS service. but, as scratch drives, I've put many
additional hours on those drives (more than one drive showed
reallocations), and they still haven't failed!

If you cannot improve the quality of the scan, you can:

1) Increase frequency of backups.

2) Simply watch for performance issues by "feel".
If the drive doesn't feel right some day, pull it
and retire it.


I'll probably do both of those. I have a three-year guarantee
specifically written on the receipt, so I'm OK there, apart from the
inconvenience.


I probably have five or six drives in the "retired" pool,
and even the drive that let out a "squeak" one day hasn't
caused any more problems. In the old days, drives had an
anti-static "spring" which contacted the shaft of the
spindle, and that's where a squeak would come from.
Modern drives are not set up that way. And to me, if
a modern drive squeaks, you'd better do (1) immediately.
My rule of thumb is "the more noise a drive makes,
the sooner it will break". The head lock on my Barracuda


Indeed - I really don't expect _any_ noise from modern drives.

32550N which jammed and ground the heads into the platter,
it was plenty noisy before the failure. Every day, when
the heads unlocked, it sounded like the "free game"
relay on a pinball machine. Well, one day, instead of
unlocking, a sound like a giant watch spring was emitted
by the drive, and that was the heads being pushed face-first
into the platter. I don't know of too many heads with
sufficient flying height to "fly over" a channel
cut in the platter :-)


So far, I've only had one that I _think_ was a contact fault - I think
due to overheating (caused by the PC, not the drive); I couldn't hear it
rotating even with my ear to it. After trying all the internet
suggestions, I gave up and opened it (fortunately I had access to a
positive-pressure clean air facility at work); sure enough the heads
were at a position on the surface rather than parked. I applied slight
rotational torque to the hub (it took the same size driver as the case
screws), and I could feel it unstick, though I could _see_ no damage;
after I put the case back on, I was able to get about 97-98% of the data
off it, though definitely considered that one not even a junk drive as
it had been opened. Since then I've been very aware of temperature,
hence the recent concern when Corsair Link seemed to have turned off my
fan!

Paul

John
--
J. P. Gilliver. UMRA: 1960/1985 MB++G()AL-IS-Ch++(p)Ar@T+H+Sh0!:`)DNAf

"I've got this shocking pain right behind the eyes."
"Have you considered amputation?" - Vila & Avon
  #14  March 9th 18, 09:56 AM, posted to alt.windows7.general
Paul

J. P. Gilliver (John) wrote:


Oh dear. But at least this one is not giving me any cause for concern
other than the HDTune graphs: if it wasn't for those, I'd not be worried
at all.


OK, found another way to do it.

You can add a script to perfmon, and have it record
Disk Read bytes/sec for a selected disk or for the
whole disk pool.

https://www.veritas.com/support/en_US/article.000036737

Using that, you get an instantaneous bandwidth recorder.
Your "script" is stored in the "Counter logs" section.

Now, you could run the error scan again, as a stimulus
for your perfmon recording. You can set the Start and Stop
on your perfmon script to "manual", and then stop it
from the menu in perfmon. Then go to C:\perflogs (it insists
they go there) to find your numbered log. I presume if
a log file gets large enough, it starts a new log or
something.
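
(The same counter log can also be created without the GUI, using
logman from an elevated prompt; the counter instance and the names
here are illustrative:

   logman create counter DiskReadMon -f csv -si 1 -o C:\PerfLogs\DiskReadMon -c "\PhysicalDisk(_Total)\Disk Read Bytes/sec"
   logman start DiskReadMon
   logman stop DiskReadMon
)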

https://s10.postimg.org/zbk5kfqhl/us...r_csv_text.gif

Anyway, that solves the OCR problem I was having.

Another way to generate a stimulus, is with dd.

http://www.chrysocome.net/dd

dd --list # dump the naming convention
# Harddisk2 is my golden disk,
# Partition 0 the entire disk

dd if=\\?\Device\Harddisk2\Partition0 bs=1048576 count=476940 > NUL

That reads my golden drive and throws the data away in NUL (the
Windows equivalent of /dev/null). When you don't specify of=
or of=- , the output defaults to STDOUT; the ">" redirects STDOUT
to NUL. (I did it this way, in case dd.exe doesn't like of=NUL .
I made it so the issue is handled at the shell level via ">" . )

I stopped the dd command with ctrl-C , then stopped the perfmon
counter script from its menu, and this is the test log.

C:\PerfLogs\perftest.log_000001.csv

"03/09/2018 04:43:57.156","0","DIsk Read Monitoring"
"03/09/2018 04:43:58.156","0","DIsk Read Monitoring"
"03/09/2018 04:43:59.156","43093231.386613525","DIsk Read Monitoring"
"03/09/2018 04:44:00.156","123731611.41122638","DIsk Read Monitoring"
"03/09/2018 04:44:01.156","124779021.6261901","DIsk Read Monitoring"
"03/09/2018 04:44:02.156","124779218.15853371","DIsk Read Monitoring"
"03/09/2018 04:44:03.156","123730390.24815789","DIsk Read Monitoring"
"03/09/2018 04:44:04.156","121633710.42662525","DIsk Read Monitoring"
"03/09/2018 04:44:05.156","123730268.35547325","DIsk Read Monitoring"
"03/09/2018 04:44:06.156","124778744.01596877","DIsk Read Monitoring"
"03/09/2018 04:44:07.156","124779738.2623236","DIsk Read Monitoring"
"03/09/2018 04:44:08.156","123730429.52062416","DIsk Read Monitoring"
"03/09/2018 04:44:09.156","118491019.49676122","DIsk Read Monitoring"
"03/09/2018 04:44:10.156","124779564.52027358","DIsk Read Monitoring"
"03/09/2018 04:44:11.156","123731317.9742112","DIsk Read Monitoring"

The title on the end is the name I typed in for the script name.
From one second to the next, there is some variation in the
amount of data charged to that second.

You should be able to plot those in Excel, as the data is CSV.
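
Or, if Excel isn't handy, a few lines of Python will chart it (a
sketch only; the file name is whatever perfmon produced, and the
column layout matches the log above):

   # Plot a perfmon "Disk Read Bytes/sec" CSV log.
   import csv
   from datetime import datetime
   import matplotlib.pyplot as plt

   times, rates = [], []
   with open(r"C:\PerfLogs\perftest.log_000001.csv", newline="") as f:
       for row in csv.reader(f):
           try:
               t = datetime.strptime(row[0], "%m/%d/%Y %H:%M:%S.%f")
               rate = float(row[1]) / 1e6    # bytes/sec -> MB/sec
           except (ValueError, IndexError):
               continue                      # skip header/odd rows
           times.append(t)
           rates.append(rate)

   plt.plot(times, rates)
   plt.ylabel("Disk read (MB/s)")
   plt.show()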

If there is caching or read-ahead or any other distorting
behavior, it's still going to be there. But at least now
there will be no more OCR experiments :-)

Paul
  #15  March 9th 18, 01:54 PM, posted to alt.windows7.general
J. P. Gilliver (John)

In message , Paul
writes:
J. P. Gilliver (John) wrote:

Oh dear. But at least this one is not giving me any cause for
concern other than the HDTune graphs: if it wasn't for those, I'd not
be worried at all.


OK, found another way to do it.


I'm afraid my ability (or, perhaps, willingness) to take in the level of
detail in your instructions isn't what it was even a few years ago. I'll
have to stop asking questions here, as I don't want to put you and
others to the amount of effort you're putting in, only for me to waste
your effort by saying "that's too complicated for me".

For the moment, unless (is anyone else still reading this thread?) a
_simple_ drive health method comes to light (that actually tells me
there _is_ something wrong with my new drive), I'll just perhaps do the
two things you suggested in an earlier post: 1. do backups more often,
2. listen out for any noises.

Sorry to have wasted your time.

You can add a script to perfmon, and have it record
Disk Read bytes/sec for a selected disk or for the
whole disk pool.

https://www.veritas.com/support/en_US/article.000036737


That looks a well-written article - certainly written by people who
understand the subject very well - but is a bit more than I can take in.
(Also, it's titled "How to use Performance Monitoring (Perfmon.msc) to
investigate slow disk performance (DSSU)" - and I'm still not
_convinced_ that I _do_ have slow disc performance, or in fact anything
wrong with the drive at all.)

[]

John
--
J. P. Gilliver. UMRA: 1960/1985 MB++G()AL-IS-Ch++(p)Ar@T+H+Sh0!:`)DNAf

My movies rise below vulgarity. - Mel Brooks, quoted by Barry Norman in RT
2016/11/26-12/2
 



