Old May 21st 18, 03:37 AM posted to alt.comp.os.windows-10
Paul[_32_]
Default USB thumb drives.

Jimmy Wilkinson Knife wrote:
On Mon, 21 May 2018 00:04:56 +0100, Paul wrote:



I used to get OCZ SSDs (not sure why, maybe price?) - but I had
literally one in 5 of them fail completely. I now use Crucial, which
100% work and never go wrong (except a couple that needed a firmware
upgrade as they "disappeared"). Looking at current reviews, OCZ are also
the slowest, not sure why they're sold at all.


Currently, Toshiba owns the remains of OCZ. Toshiba invented
NAND flash and is still in the business of making flash chips.
The OCZ name gives them an enthusiast brand to sell under, and
branding is important if you want to squeeze a few dollars
more of margin from retail pricing.

If they sold Toshiba brand (vertically integrated) drives
in the consumer space, nobody would know who that is.

Toshiba makes more stuff than you can imagine, but fell
on hard times recently when their nuclear business tanked
and sank.

https://en.wikipedia.org/wiki/Westin...ectric_Company

"On March 24, 2017, parent company Toshiba announced that
Westinghouse Electric Company would file for Chapter 11
bankruptcy"

And they had to scramble and shake some piggy banks, to find
enough money to keep going.


*******

If the issue was internal fragmentation, you could back up,
Secure Erase, and restore.


What do you mean "internal fragmentation"?


On Flash, a 4KB write doesn't match the smallest unit of
storage inside the Flash chip (pages are programmed in larger
units, and erasure happens in still-larger blocks). If you spray
a drive with 4KB writes (or copy a folder with a lot of 4K files
in it), the drive works in the background to rearrange
the data so it takes up fewer internal units of storage.

This amounts to fragmentation. Write amplification is
the need to write the Flash more than once when adding
a file to the partition, and drives have varied
in how good they are at this stuff. The invention of the
TRIM command, where the OS tells the drive which clusters
no longer have files in them, gives the drive a larger
free pool to work with, and makes it easier to rearrange
data during idle periods.
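To put a rough number on that, here's a back-of-envelope sketch of worst-case versus best-case write amplification. The 16 KiB internal page size is an assumption for illustration only; real page and block sizes vary widely by drive.

```python
# Hypothetical geometry for illustration; real drives vary widely.
PAGE_SIZE = 16 * 1024        # assumed smallest programmable unit (16 KiB page)
HOST_WRITE = 4 * 1024        # the OS issues 4 KiB writes

# Worst case: each 4 KiB host write forces a full-page read-modify-write,
# so the drive writes 16 KiB of Flash for 4 KiB of host data.
worst_case_wa = PAGE_SIZE / HOST_WRITE
print(worst_case_wa)         # 4.0

# Best case: with buffering and idle-time compaction, four 4 KiB host
# writes can be packed into a single page program.
best_case_wa = PAGE_SIZE / (4 * HOST_WRITE)
print(best_case_wa)          # 1.0
```

A larger TRIM-supplied free pool pushes the drive toward the best case, because it can pack incoming writes into erased pages instead of rewriting partly-used ones.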

Back when enthusiast sites were doing random 4KB write
tests on drives, they used to leave the drive powered,
and it might take all night before the performance
level would return to normal the next morning. And
that's because the drive is busy moving stuff around.

The drive has virtual and physical storage. The outside
of the drive is the virtual part. Your files are *not*
stored in order inside the Flash chip. If you pulled
the Flash chip and looked under a microscope, your
file would be spread far and wide through the chip. You
need the "mapping table" to figure out which storage location
on the inside maps to which LBA on the outside. There's a layer
of indirection you cannot see, and the handling of that
indirection can make a drive slow.
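The mapping-table idea can be sketched in a few lines. This toy model is entirely invented for illustration (class name, sizes, and the simple free-list policy are all assumptions); real flash translation layers are vastly more complicated, but the indirection is the same.

```python
# Toy model of the logical-to-physical mapping table described above.
# All names and sizes are hypothetical; a real FTL also does garbage
# collection, wear leveling, and power-loss-safe table persistence.
class ToyFTL:
    def __init__(self, n_physical_pages):
        self.mapping = {}                     # LBA -> physical page number
        self.free = list(range(n_physical_pages))

    def write(self, lba, store):
        # Flash can't overwrite in place: every write goes to a fresh
        # page, and the old page (if any) becomes garbage to reclaim.
        new_page = self.free.pop(0)
        self.mapping[lba] = new_page
        store[new_page] = lba                 # stand-in for real data

    def read(self, lba):
        return self.mapping[lba]              # the hidden indirection

ftl = ToyFTL(8)
store = {}
ftl.write(100, store)
ftl.write(100, store)     # rewrite: lands on a different physical page
print(ftl.read(100))      # prints 1, not 0: the data moved internally
```

Note that rewriting the "same" LBA lands on a different physical page each time, which is exactly why the file bits end up scattered through the chip.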

As can doing error correction with the multi-core CPU
inside the drive. On some drives "weak cells" require
a lot more error correction on a read (every sector
has to be corrected), and this reduces the rate that
the reads come back to you.

Some SSDs have three-core ARM processors for these
activities; the drive isn't "purely mechanical"
or "filled with wire-speed logic blocks". Instead,
it relies on firmware for a lot of its "features".


And surely fragmentation only matters on a mechanical drive, as the
heads have to read data from several different places. This won't slow
down an SSD?


It takes an SSD about 20uS to read a "chunk", even if only
part of it is being used. Normally a fair amount of data
is contiguous, but if the useful pieces are really small,
the per-chunk overhead becomes significant. It becomes
less significant after the drive has (transparently)
rearranged some blocks inside, where you can't see
what it's doing.
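As a rough illustration of that overhead: the 20uS figure comes from above, while the internal chunk size here is an assumption made just for the arithmetic.

```python
# Back-of-envelope for the per-chunk read cost described above.
CHUNK_US = 20              # assumed time to read one internal chunk (from text)
CHUNK_SIZE = 16 * 1024     # hypothetical internal chunk size in bytes

# Contiguous read of 64 KiB: 4 full chunks, every byte useful.
contiguous_us = (64 * 1024 // CHUNK_SIZE) * CHUNK_US      # 4 chunks -> 80 us

# Same 64 KiB scattered as 16 x 4 KiB pieces, each in its own chunk:
# 16 chunk reads for the same amount of useful data.
fragmented_us = 16 * CHUNK_US                             # 320 us

print(fragmented_us / contiguous_us)   # prints 4.0: 4x the read time
```

This is why internal rearrangement during idle periods helps: once small pieces are packed together, fewer chunk reads deliver the same data.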

The only way you can get a hint as to what the drive
is doing is by watching the power consumption on VCC,
and guessing from the power state whether it is house
cleaning during idle periods.

The Intel Optane/XPoint stuff doesn't do this. It's
byte-addressable storage, which can directly write
where required, and there's less maintenance inside
the drive. Too bad the chips aren't as big as Flash,
and the devices are a lot more expensive at the moment.


If you happen to have a lot of persistent VSS shadows
running on the system, there can be "slow copy on write",
and the OS Optimize function will actually defragment an SSD
if such a condition is noted. I still haven't located a tech
article explaining how or why "slow COW" happens.
There is some arrangement where, when you defragment a drive,
the defragmenter is supposed to limit cluster movements to certain
sizes, to prevent interaction with a Shadow which is already in
place. It's possible shadow copies work at the cluster level.


I thought SSDs should never be defragged, and the data storage position
was handled internally to the drive. I read something about running a
defrag program on an SSD just wore it out.


https://www.hanselman.com/blog/TheRe...YourSSD.aspx

"Actually Scott and Vadim are both wrong. Storage Optimizer will defrag
an SSD once a month if volume snapshots are enabled."


An example of a reason to use a Shadow might be File History,
or the usage of Incremental or Differential backups.
There are some commands you can use to list the current
shadows in place. WinXP has VSS, but no persistent shadows
are available, and a reboot should close them all. Later
OSes allow keeping a shadow running between sessions
(for tracking file system state). The maximum number
of shadows is something like 64.

vssadmin list shadows

[vssadmin]
https://technet.microsoft.com/en-us/.../dd348398.aspx


I'm not familiar with shadows. Is that what allows me to use something
like EaseUS Backup to clone a drive while windows is still running?


Yes. The shadow is probably released at the end of the clone run.

Whereas some flavors of backup require the shadow to keep track
of what was backed up and what wasn't. The shadow may
remain in place in such a case.

I've never seen much of anything doing "list shadows", but somebody
must be using them. I only do full backups here, not incremental
ones, so one backup run doesn't need to know anything about
a previous backup run.

Paul