Old May 21st 18, 11:22 AM posted to alt.comp.os.windows-10
Jimmy Wilkinson Knife
Default USB thumb drives.

On Mon, 21 May 2018 03:37:30 +0100, Paul wrote:

Jimmy Wilkinson Knife wrote:
On Mon, 21 May 2018 00:04:56 +0100, Paul wrote:



I used to get OCZ SSDs (not sure why, maybe price?) - but literally
one in 5 of them failed completely. I now use Crucial, which work
100% and never go wrong (except a couple that needed a firmware
upgrade as they "disappeared"). Looking at current reviews, OCZ are also
the slowest; not sure why they're sold at all.


Currently, Toshiba owns the remains of OCZ. Toshiba invented
NAND flash and is still in the business of making flash chips.
Using OCZ gives them an enthusiast branding to use. And
branding is important, if you want to squeeze a few dollars
more of margin from retail pricing.


Surely every enthusiast knows OCZ is slow and likely to fall apart.

If they sold Toshiba brand (vertically integrated) drives
in the consumer space, nobody would know who that is.


I remember them advertising on TV when I were a lad (30 years ago), but it was a terrible ad:
https://youtu.be/9urXG2TpOio

Toshiba makes more stuff than you can imagine,


I've seen Toshiba parts inside many other makes of LCD TV, even other big names that I would have thought would use their own parts.

but fell
on hard times recently when their nuclear business tanked
and sank.

https://en.wikipedia.org/wiki/Westin...ectric_Company

"On March 24, 2017, parent company Toshiba announced that
Westinghouse Electric Company would file for Chapter 11
bankruptcy"


I'm only familiar with the previous title:
https://en.wikipedia.org/wiki/Westin...ic_Corporation
Can't remember what they made but I'm sure I had something made by them - probably an electric meter or something.
Wikipedia says they dissolved, yet they're still trading under the name "Westinghouse Electric Corporation" - "Westinghouse Electric Corporation Provides Smart Home Appliances To Energy Solutions That Are Cleanly And Safely Powering Us Into The Next Generation.":
http://westinghouse.com

And they had to scramble and shake some piggy banks, to find
enough money to keep going.


*******

If the issue was internal fragmentation, you could back up,
Secure Erase, and restore.


What do you mean "internal fragmentation"?


On Flash, 4KB writes aren't the same size as the smallest
unit of storage inside the Flash chip. If you spray a drive
with 4KB writes (or copy a folder with a lot of 4K files
in it), the drive works in the background to arrange
the data to take up fewer internal units of storage.

This amounts to fragmentation. Write amplification is the need
to write the Flash more than once when adding a file to the
partition, and drives have varied in how good they are at this
stuff. The invention of the
TRIM command, where the OS tells the drive what clusters
no longer have files in them, gives the drive a larger
free pool to work with, and makes it easier to rearrange
data during idle periods.
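A back-of-envelope way to picture write amplification (all sizes
here are made up for illustration, not from any particular drive):

```python
# Illustrative sketch only: write amplification happens when a small
# host write forces the drive to rewrite a whole internal flash page.
# Both sizes below are assumptions for the sake of the arithmetic.
HOST_WRITE = 4 * 1024        # 4 KiB logical write from the OS
FLASH_PAGE = 16 * 1024       # hypothetical internal page size

def write_amplification(host_bytes: int, flash_bytes_written: int) -> float:
    """Ratio of bytes physically written to bytes the host asked for."""
    return flash_bytes_written / host_bytes

# Worst case: each 4 KiB host write rewrites a full 16 KiB page.
print(write_amplification(HOST_WRITE, FLASH_PAGE))  # 4.0
```

With TRIM giving the drive a bigger free pool, it can batch small
writes into fresh pages and keep that ratio closer to 1.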

Back when enthusiast sites were doing random 4KB write
tests on drives, they used to leave the drive powered,
and it might take all night before the performance
level would return to normal the next morning. And
that's because the drive is busy moving stuff around.

The drive has virtual and physical storage. The outside
of the drive is the virtual part. Your files are *not*
stored in order, inside the Flash chip. If you pulled
the Flash chip and looked under a microscope, your
file is spread far and wide through the Flash chip. You
need the "mapping table" to figure out which storage location
on the inside, maps to an LBA on the outside. There's a layer
of indirection you cannot see, and the handling of such
issues can make a drive slow.
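A toy sketch of that indirection, with invented names (a real
drive's mapping table is far more elaborate than a dict):

```python
# Minimal sketch of LBA-to-physical indirection: writes go to the
# next free internal page, log-style, so sequential LBAs are NOT
# adjacent inside the Flash. All names here are made up.
mapping = {}          # LBA -> (block, page) as seen from inside
next_free = [0]       # next physical page to consume

def write_lba(lba):
    """Record where this LBA physically landed; old copies go stale."""
    mapping[lba] = ("block0", next_free[0])
    next_free[0] += 1

for lba in (100, 5, 42):   # arbitrary LBAs arriving in this order
    write_lba(lba)
print(mapping)  # {100: ('block0', 0), 5: ('block0', 1), 42: ('block0', 2)}
```

The point is only that a lookup table sits between the LBA you ask
for and where the bits really are, which is the layer you can't see.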

As can doing error correction with the multi-core CPU
inside the drive. On some drives "weak cells" require
a lot more error correction on a read (every sector
has to be corrected), and this reduces the rate that
the reads come back to you.
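Rough arithmetic on how per-sector correction eats read speed (both
timings below are invented, just to show the shape of the problem):

```python
# Illustration only: if a fraction of sectors are "weak" and need
# extra error-correction work, total read time grows accordingly.
SECTOR_READ_US = 5       # hypothetical raw sector read time
ECC_FIX_US = 15          # hypothetical extra correction time per weak sector

def read_time_us(sectors: int, weak_fraction: float) -> int:
    """Total read time when weak_fraction of sectors need correcting."""
    weak = int(sectors * weak_fraction)
    return sectors * SECTOR_READ_US + weak * ECC_FIX_US

print(read_time_us(1000, 0.0))   # 5000: healthy cells
print(read_time_us(1000, 1.0))   # 20000: every sector corrected, 4x slower
```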


Maybe that's why mine are slowing down - more cells are becoming weak?

Some SSDs have three-core ARM processors for these
activities, rather than the drive being "purely mechanical"
and "filled with wire-speed logic blocks". Hardwired logic
doesn't seem to be the case; instead, the drive relies
on firmware for a lot of the "features".

And surely fragmentation only matters on a mechanical drive, as the
heads have to read data from several different places. This won't slow
down an SSD?


It takes an SSD about 20uS to read a "chunk", even if only
a part of it is being used. While normally some size of
stuff is contiguous, if the usable bits are really small,
the "chunk" overhead becomes significant. It would be
less significant, after the drive has (transparently)
rearranged some blocks inside, where you can't see
what it's doing.
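Back-of-envelope on that fixed chunk cost (the chunk counts below
are invented; only the 20uS figure comes from the text above):

```python
# Illustration: if each internal chunk read costs ~20 us regardless
# of how much of the chunk is useful data, scattered data pays the
# fixed cost many times over for the same amount of usable bytes.
CHUNK_US = 20                   # fixed cost per internal chunk read
chunks_contiguous = 1           # data that happens to sit in one chunk
chunks_scattered = 8            # same data spread thinly over 8 chunks

print(chunks_contiguous * CHUNK_US)   # 20 us
print(chunks_scattered * CHUNK_US)    # 160 us for the same usable data
```

Which is why the drive quietly repacking blocks during idle time
gets the performance back.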


Christ, these things are more complicated than I thought.

The only way you can get a hint as to what the drive
is doing is by watching the power consumption on VCC,
and guessing from the power state whether it is
housecleaning during idle periods.


I take it the drive LED only lights when data is transferring from the controller on the motherboard to the SSD, and not when internal stuff is happening? I.e. the SSD doesn't tell the controller it's doing anything.

The Intel Optane/XPoint stuff doesn't do this. It's
byte-addressable storage, which can write directly
where required.


That's how I thought SSDs should have been made in the first place. Presumably something prevents this being easy or cheap?

And there's less maintenance inside
the drive. Too bad the chips aren't as big as Flash,
and the devices are a lot more expensive at the moment.


It'll get cheaper and bigger....

The question is, will we ever replace mechanical drives completely, or will they keep getting larger at the same rate?

If you happen to have a lot of persistent VSS shadows
running on the system, there can be "slow copy on write",
and the OS Optimize function will actually defragment an SSD
if such a condition is noted. I still haven't located a tech
article to decode how "slow COW" happens or why it happens.
There is some arrangement whereby, when you defragment a drive,
the defragmenter is supposed to limit cluster movements to certain
sizes, to prevent interaction with a Shadow which is already in
place. It's possible shadow copies work at the cluster level.


I thought SSDs should never be defragged, and the data storage position
was handled internally to the drive. I read something about running a
defrag program on an SSD just wore it out.


https://www.hanselman.com/blog/TheRe...YourSSD.aspx

"Actually Scott and Vadim are both wrong. Storage Optimizer will defrag
an SSD once a month if volume snapshots are enabled."

An example of a reason to use a Shadow, might be File History,
or it might be the usage of Incremental or Differential backups.
There are some commands you can use, to list the current
shadows in place. WinXP has VSS, but no persistent shadows
are available, and a reboot should close them all. Later
OSes allow keeping a shadow running between sessions
(for tracking file system state). The maximum number
of shadows is something like 64.

vssadmin list shadows

[vssadmin]
https://technet.microsoft.com/en-us/.../dd348398.aspx


I'm not familiar with shadows. Is that what allows me to use something
like EaseUS Backup to clone a drive while windows is still running?


Yes. The shadow is probably released at the end of the clone run.

Whereas some flavors of backup require the shadow to keep track
of what was backed up and what wasn't. The shadow may
remain in place in such a case.

I've never seen much of anything doing "list shadows", but somebody
must be using them. I only do full backups here, not incremental
ones, so one backup run doesn't need to know anything about
a previous backup run.


I only do fulls as well. I just swap the backup drive I use every so often, so I can go back to older versions of a file.

--
I was doing some remolishments to my house the other day and accidentally defurbished it.