A Windows XP help forum. PCbanter


Allocation of hiberfil.sys



 
 
  #106  
Old November 29th 14, 02:28 PM posted to alt.windows7.general
Spalls Hurgenson
external usenet poster
 
Posts: 123
Default Allocation of hiberfil.sys

On Sat, 29 Nov 2014 00:06:03 -0700, Jeff Barnett
wrote:

Paul wrote, On 11/28/2014 8:18 AM:




The issue, I believe, is caused because the file is virtually never
written; not because it is written too much. The blocks in the file are
tied up for say a year and just sit there. However, other blocks are
recycled and evenly used among themselves. It's possible the SSD
firmware could simply spot the tied up blocks and replace them under the
table. Some stuff I've read seems to imply that could happen but nothing
says it actually does.



If your drive supports static wear leveling (also known as global wear
leveling) then that is exactly what it does. It keeps track of filled
cells that haven't been updated recently and swaps the data to more
heavily-used cells in order to keep the wear and tear evenly
balanced. This feature is built into the firmware and the drive does
this in the background without requiring (or notifying) the OS.

Most modern SSDs support static wear leveling. If you ever bothered to
say the make and model of your SSD, people would be able to give you a
more precise answer, but odds are that you have nothing to worry
about. The SSD has probably noticed your hiberfil.sys isn't being
updated (because you don't hibernate except in rare emergencies) and
has moved its data to more heavily-used cells several times already,
freeing up the cells to which it was originally written.
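The mechanism can be sketched with a toy flash-translation-layer model (illustrative Python only; real firmware is proprietary and considerably more complex, and the class and threshold below are invented for the sketch):

```python
class ToySSD:
    """Toy model of dynamic plus static wear leveling. Illustrative
    only; real SSD firmware is vendor-specific and far more involved."""

    def __init__(self, nblocks, threshold=3):
        self.erase = [0] * nblocks      # erase count per physical block
        self.phys = {}                  # logical page -> physical block
        self.threshold = threshold      # max tolerated wear spread

    def _free(self):
        used = set(self.phys.values())
        return [b for b in range(len(self.erase)) if b not in used]

    def write(self, logical):
        # Dynamic wear leveling: rewrites land on the least-worn free block.
        self.phys.pop(logical, None)    # old block returns to the free pool
        b = min(self._free(), key=lambda blk: self.erase[blk])
        self.erase[b] += 1              # program/erase cycle
        self.phys[logical] = b
        self._static_level()

    def _static_level(self):
        # Static wear leveling: if a block holding cold (rarely rewritten)
        # data lags far behind the most-worn free block, migrate the cold
        # data there so the lightly-worn block rejoins the free pool.
        if not self.phys or not self._free():
            return
        cold = min(self.phys, key=lambda l: self.erase[self.phys[l]])
        hot = max(self._free(), key=lambda blk: self.erase[blk])
        if self.erase[hot] - self.erase[self.phys[cold]] >= self.threshold:
            self.erase[hot] += 1
            self.phys[cold] = hot       # cold data moves; old block freed
```

In this toy, a payload written once and never rewritten still gets migrated onto a worn block whenever the wear spread crosses the threshold, which is why a dormant hiberfil.sys needn't pin its original cells.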


  #107  
Old November 29th 14, 04:41 PM posted to alt.windows7.general
Jeff Barnett[_2_]
external usenet poster
 
Posts: 298
Default Allocation of hiberfil.sys

Mark F wrote, On 11/29/2014 6:21 AM:
On Sat, 29 Nov 2014 03:54:24 -0500, Paul wrote:

Jeff Barnett wrote:
Paul wrote, On 11/28/2014 8:18 AM:
Jeff Barnett wrote:

Thanks a lot! I have some reading to do and I'm looking forward to it.
After our traditional Thanksgiving goose, of course.

It turns out, hibernation is pretty easy on your SSD.

1) When hiberfil.sys is created, it only takes 10 seconds to write it.
Writing out the file's full size on my hard drive would take about
3 minutes if every sector were written. So when first created, the
majority of LBAs are not written. That means 50GB of your flash is not
tied up just after hiberfil.sys is created for the first time.

2) After the file was created, I used "shred" to zero the file in place.
That was to make sure the file was in a defined state (all zeros).
I did this, so I could detect changes to the file.
The syntax used is similar to this.

shred --iterations=0 -z F:\hiberfil.sys

Checking with "sum.exe" gives 00000 for the file; in other
words, a checksum over all the file's contents happens to be zero.

3) I set the hiberfil.sys size to 100%, rather than the default 75%.
Just to see how far it would go. Machine has 16GB of RAM.

4) Hibernation of freshly booted Windows 7 x64 (no programs running).

243,870,208 bytes written, out of 17,115,807,744 byte hiberfil.sys
Shutdown was somewhere in the 7-10 second range (write time).

Now, run GIMP image editor, create a 41000x40000 pixel image.
Use the Noise function to make RGB noise. This effectively uses
a random number generator, to fill the empty image. This is
intended to occupy memory, with something relatively incompressible.
The machine was hibernated, with the 41000x40000 image still on the
screen.

11,779,794,944 bytes written, out of 17,115,807,744 bytes (69%).
It took about 3 minutes 20 seconds to write that out. Presumably
your machine is going to write faster than that :-)

So it looks like 75% for the default hiberfil.sys size isn't a
bad choice.
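As a sanity check, the quoted byte counts do work out to roughly the 69% (and under 75%) figure:

```python
# Verify the percentage implied by the byte counts quoted above.
written = 11_779_794_944   # non-zero bytes found in hiberfil.sys
total = 17_115_807_744     # hiberfil.sys size at the 100% setting
pct = round(100 * written / total)
```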

The hiberfil.sys file contents were analyzed with a "zero block"
compressor I wrote (something I put together several months ago).
I don't know how many bytes were actually written, but the above
amounts were the portions of the file that were non-zero.
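A zero-block scan of that sort can be sketched in a few lines (a guess at the approach, since the actual tool wasn't posted; the block size is arbitrary):

```python
# Count the bytes belonging to non-zero blocks of a file, to estimate
# how much of a mostly-zero file (like a fresh hiberfil.sys) holds
# real data. A sketch of the analysis described above, not Paul's tool.

def nonzero_bytes(path, blocksize=4096):
    zero = bytes(blocksize)            # reference all-zero block
    written = 0
    with open(path, "rb") as f:
        while True:
            block = f.read(blocksize)
            if not block:
                break
            # Any block that differs from all-zeros counts as written.
            if block != zero[:len(block)]:
                written += len(block)
    return written
```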

So if your machine hibernates once a year, and the machine is not
actually doing anything, the impact on the SSD is minimal. If you're
doing something pathologically bad (memory actually filled), then it
does seem to finally write most of the file. I doubt the hiberfil.sys
is ever fully overwritten by a future hibernation attempt, and no TRIM
is going to get issued for the (now unused) LBAs.

If you suspect a large hibernation has been done, you can
use powercfg -h off and powercfg -h on to remove and re-create
the file. I'm guessing that is when proper TRIM would get issued,
to get better freeing up of LBAs. Since creating a new hiberfil.sys
involves minimal writing, there isn't really much of a penalty
evident there. On my hard drive, there was only 7-10 seconds of
write activity at the point in time mine was created.

And the LBAs did not move, after hibernating and recovering. I did
not attempt to use the powercfg -h off method, to move it
somewhere else. And my experiment was to see if it stayed put.
And it stayed at the same starting address (according to nfi.exe)
the whole time. I had to boot Windows 8.1, to run nfi.exe
and get the LBA addresses of the Windows 7 pagefile and hiberfil.

Paul
Interesting analysis.

The issue, I believe, is caused because the file is virtually never
written; not because it is written too much. The blocks in the file are
tied up for say a year and just sit there. However, other blocks are
recycled and evenly used among themselves. It's possible the SSD
firmware could simply spot the tied up blocks and replace them under the
table. Some stuff I've read seems to imply that could happen but nothing
says it actually does.


If you think a significant part of hiberfil.sys actually got
written, you can try this to free up the LBAs. I'm hoping this
unleashes a "giant TRIM" command.

powercfg -h off --- Deletes file
powercfg -h on --- Creates new file, does not write many LBAs
Command takes only a few seconds to run.

I think you are on to something, but I don't think that would make
much difference in practice for the specific issue at hand, which is
reducing the wear and delays caused by a seldom-changed file. Even
with the naive approach, the extra work of preserving the
uninteresting old contents of hiberfil.sys won't make a significant
performance or wear difference, even though the old data will be
copied each time its pages sit in blocks picked for wear leveling.

For an experiment you could:
1. Have an empty partition. Record the SMART data.
2. Do a benchmark with lots of writes over a long time.
The benchmark program should be able to provide a graph of
write speed and read speed versus time.
3. Look at the SMART data, in particular the effective
Write Amplification Factor (WAF) during the time the benchmark
was running.

4. Write a large file with random contents. Record the SMART information.
5. Run the benchmark again.
See if lower performance is seen.
When done, look at the SMART data and examine the
effective WAF over the interval this benchmark ran.

6. Delete the large file with random contents.
Allocate a large file, but don't write to it.
For Windows XP and Windows 7 (and perhaps other versions),
using a command window from an Administrator account,
issue one or more commands of the form:
FSUTIL FILE CREATENEW (unknown) {size}
This will get blocks allocated in the file system,
but the SSD won't think the blocks are in use.
You want to allocate as much space as the original
large file with random contents.
Record the SMART data and confirm that not
much was written to the device under test.

7. Run the benchmark again and look at the results
for differences in performance and in the final
SMART data.

Perhaps you would see faster performance
and a lower WAF than when the random file
was present.

Try other variations, such as writing zeros to the original
file rather than random data.
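The effective WAF in steps 3 and 5 boils down to a ratio of deltas between two counter snapshots. How you read the counters is vendor-specific (host LBAs written appear in a SMART attribute on some drives, while NAND writes usually need a vendor tool), so the helper below just assumes you already have the four numbers:

```python
# Effective write amplification factor between two counter snapshots:
# WAF = (NAND bytes written) / (host bytes written) over the interval.
# Obtaining the raw counters is vendor-specific; values are assumed given.

def waf(host_before, host_after, nand_before, nand_after):
    host = host_after - host_before    # what the OS asked to write
    nand = nand_after - nand_before    # what the flash actually wrote
    if host <= 0:
        raise ValueError("no host writes in the interval")
    return nand / host
```

A WAF of 1.0 means no amplification; garbage collection and wear leveling push it higher.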

Things will vary between SSD models. In particular,
some SSDs may recognize zeroes and have a quick way
of handling wear leveling for blocks whose pages are
all zero.

Once again, I think that although you may see an improvement
in performance and an increase in SSD lifetime by doing things
the optimal way, it won't make a difference unless you
are sending 10,000 machines on a trip to Mars and need long
life.

I can't think of a way to prove how much good that is,
and maybe you can think of a way (using SMART
statistics perhaps).

Perhaps the procedure I outlined above will show the
performance difference.

The original poster wasn't interested in performance
of the actual hibernation, but it now occurs to
me that an interesting tweak to an SSD interface would
be to reserve blocks for writes when realtime performance is
critical. So:
. Ahead of time you tell the device reserve X bytes
for critical times. Also indicate if reservation should be
maintained
. At some point tell the device it is a critical time
. Tell the device it is no longer a critical time
(or power down device)
. If initially requested, the reservation is
renewed.

This would likely result in a larger WAF and probably
shorten the time before performance drops under heavy write
loads, but, based on the Samsung paper with 840 Pro
information, could result in write performance at critical
times being 4 times as fast.
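The proposed interface might behave like this purely hypothetical sketch (no real SSD exposes such controls; every name and number here is invented for illustration):

```python
# Sketch of the hypothetical "reserved fast-write pool" interface
# proposed above. A thought experiment, not a real device API.

class ReservedWriteSSD:
    def __init__(self, capacity, reserve=0, renew=False):
        self.free = capacity - reserve    # blocks for normal writes
        self.reserved = reserve           # pre-erased blocks held back
        self.initial_reserve = reserve
        self.renew = renew                # re-fill pool after critical time
        self.critical = False

    def enter_critical(self):
        self.critical = True

    def exit_critical(self):
        self.critical = False
        if self.renew:
            # Background GC rebuilds the pre-erased pool; this costs
            # extra erases, i.e. the higher WAF noted above.
            deficit = self.initial_reserve - self.reserved
            take = min(deficit, self.free)
            self.free -= take
            self.reserved += take

    def write(self, blocks):
        if self.critical and self.reserved >= blocks:
            self.reserved -= blocks       # fast path: pre-erased blocks
            return "fast"
        self.free -= blocks               # normal path: may stall for GC
        return "normal"
```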

HTH,
Paul


Am I correct in thinking that SMART data are collected per disk and not
per partition? If that is wrong, your idea of keeping a little-used
partition would be even neater: one could compare its SMART and
performance data to the regularly used partition. This would be
interesting in spite of the fact that both partitions share the
same SSD block and page farms.
--
Jeff Barnett
  #108  
Old November 29th 14, 04:50 PM posted to alt.windows7.general
Jeff Barnett[_2_]
external usenet poster
 
Posts: 298
Default Allocation of hiberfil.sys

Spalls Hurgenson wrote, On 11/29/2014 7:28 AM:

[quoted text snipped; see message #106 above]


First off, I said "My SSDs are Samsung 840 PRO 256GB" as the last line
of my original message. Second, I've repeated that information a few
times in this 100+ message thread. However, there is no reason why you
should have read all of those messages, many of which are long, off
topic, and based on things not said, or said and missed.

So let me put my question to you (or anyone else who's still tuned in)
in a better form: Does the Samsung 840 PRO 256GB SSD do static (global)
leveling?

I wish I had known the term "static (global) wear leveling" when I
posted my first message; it might have shortened this thread to 10% of
its current length.

Thank you.
--
Jeff Barnett

 



