A Windows XP help forum. PCbanter


pagefile.sys header



 
 
  #16  
Old November 1st 15, 10:22 PM posted to microsoft.public.windowsxp.general
Bill Cunningham[_2_]
external usenet poster
 
Posts: 441
Default pagefile.sys header


"Ken Blake, MVP" wrote in message
...

1. If you don't have a page file, you can't use all the RAM you have.
That's because Windows preallocates virtual memory in anticipation of
a possible need for it, even though that allocated virtual memory may
never be used. Without a page file, that allocation has to be made in
real memory, thus tying up that memory and preventing it from being
used for any purpose.

2. There is never a benefit in not having a page file. If it isn't
needed, it won't be used. Don't confuse allocated memory with used
memory.


Hmm. Some apps require it; I didn't know that. You are right, of course.

Bill


  #17  
Old November 1st 15, 10:25 PM posted to microsoft.public.windowsxp.general
Bill Cunningham[_2_]
Posts: 441


"Paul" wrote in message
...
Bill Cunningham wrote:
"VanguardLH" wrote in message
...

For example, you can configure Windows to zero out (clear) the pagefile at
shutdown; see https://support.microsoft.com/en-us/kb/314834.


I didn't know that. Well, I guess it's not needed then. This doesn't erase
the file, does it?

Bill


It would overwrite all the clusters of that file.

Deleting the pointer to the file doesn't clear it.
Overwriting all the clusters does clear it.

If the pagefile is large, this will extend the shutdown time.
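
(For reference, the setting that KB article describes is, if memory serves, a single registry value. This .reg fragment should be equivalent to the article's manual steps; set the value back to 0 to disable the clearing again:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management]
"ClearPageFileAtShutdown"=dword:00000001
```

The change takes effect on the next reboot, and the overwrite happens at every shutdown after that.)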


Well, I see now that fsutil behavior doesn't allow pagefile encryption
on XP. I guess that's for the newer Windows OSes. Is there any way to encrypt
it another way on XP? Would it be worth it?
Bill


  #18  
Old November 1st 15, 10:51 PM posted to microsoft.public.windowsxp.general
Paul
Posts: 18,275

Bill Cunningham wrote:
"Paul" wrote in message
...
Bill Cunningham wrote:
"VanguardLH" wrote in message
...

For example, you can configure Windows to zero out (clear) the pagefile at
shutdown; see https://support.microsoft.com/en-us/kb/314834.
I didn't know that. Well, I guess it's not needed then. This doesn't erase
the file, does it?

Bill

It would overwrite all the clusters of that file.

Deleting the pointer to the file doesn't clear it.
Overwriting all the clusters does clear it.

If the pagefile is large, this will extend the shutdown time.


Well, I see now that fsutil behavior doesn't allow pagefile encryption
on XP. I guess that's for the newer Windows OSes. Is there any way to encrypt
it another way on XP? Would it be worth it?
Bill


The only way to encrypt it is with some other kind
of encryption solution.

Truecrypt for the entire partition.

A Seagate Momentus drive with full disk encryption.
(The tough part is not getting the disk, it's getting
some sort of software or BIOS code to work with it
properly. This kind of hardware solution is still
not being marketed to the public, only to
"system integrators" or larger OEM suppliers of
computers to government.)

That sort of idea.

And the thing is, those are comprehensive solutions that
help cover all the other leakage mechanisms in the OS. So
they're desirable from that point of view.

Paul

  #19  
Old November 1st 15, 11:33 PM posted to microsoft.public.windowsxp.general
Paul
Posts: 18,275

Micky wrote:


I'm in over my head here, but doesn't more efficiency in memory
management equate to better use of the swapfile? For example, if I
understand correctly, most programs now read ahead, so that while
you're reading pages 21 to 25 of the file, the program foresees that
you will soon need pages 26 to 30, or maybe even 31 to 35, and in the
background it gets them from the swapfile to RAM so that they're ready
when you get there, even if you page ahead.


Then you haven't seen just how ridiculous this is getting :-)

On the Win10 machine, I have a copy of Macrium. It has
a conversion routine to convert a .mrimg backup file
to a .vhd file.

I open Task Manager and watch.

The operation starts with a lot of prefetch. Around
5GB of memory is actually booked by the operation.

The destination drive is slower than the source drive.
The output file will be about the same size (the .vhd
is about the same size as the .mrimg, if the .mrimg
wasn't compressed).

The booked memory continues to increase.

Soon, 10GB of memory is used, some for read prefetch,
some for write cache.

Eventually, the program is done. I click quit.

I look over at the hard drive status LED. It's
still lit and going full tilt. The Task Manager
memory thing indicates that 5GB of memory is
"draining" and that is what keeps the hard drive
light running. When this caching mechanism gets near
the end, it stops running the disk full tilt. It
"burps" out smaller write operations in pulses.
The write activity at the end, is a declining
write curve.

Eventually, the disk drive settles down, and it
looks like the caching mechanism is now drained.

So if a person was measuring the "time to complete"
the operation, it would be from clicking the button
to start the operation in Macrium, until the last
"write burp" to the drive.

Well, how much does that gain us ?

The operation cannot go faster than the destination
drive is willing to go (in this case). At some point,
either the source disk or the destination disk is
an issue.

*******

When the first desktop computers existed, there
wasn't any overlapping I/O. Certainly, on a dual
floppy drive machine, you could blame having only
one floppy controller and two drives on the cable for
it. But the software was also blocking the operations,
and only allowing one outstanding operation at a time.

+----------+            +----------+
| Read #1  |            | Read #2  |
+----------+-----------+----------+-----------+
           | Write #1  |          | Write #2  |
           +-----------+          +-----------+

Later, OSes like Windows acquired non-blocking
operations, intended to support overlapped I/O.
It was up to the application to make the right calls,
so many programs continued to do it the old way.
The first program I saw here, to do overlapped
I/O, was Robocopy.

x ------ Program running ------- X

+----------+-----------+
| Read #1  | Read #2   |
+----------+-----------+-----------+
           | Write #1  | Write #2  |
           +-----------+-----------+
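
That overlapped pattern can be sketched in Python with a reader thread feeding a writer through a bounded queue (a minimal sketch of the technique, assuming file-like objects; it is not how Robocopy itself is implemented, and the names are illustrative):

```python
import queue
import threading

def overlapped_copy(src, dst, chunk_size=64 * 1024, depth=2):
    """Copy src to dst, overlapping reads with writes.

    The bounded queue lets the reader run at most `depth` chunks
    ahead of the writer, so Read #2 can proceed while Write #1
    is still in flight.
    """
    chunks = queue.Queue(maxsize=depth)

    def reader():
        while True:
            buf = src.read(chunk_size)
            chunks.put(buf)          # blocks if the writer falls behind
            if not buf:              # b"" marks end-of-file
                return

    t = threading.Thread(target=reader)
    t.start()
    while True:
        buf = chunks.get()
        if not buf:
            break
        dst.write(buf)
    t.join()
```

With real files the gain shows up when source and destination are different devices; on the same spindle, the extra seeking can cancel the benefit.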

With the large prefetch and large write buffer case
I've seen just recently, I'm not going to try to do
an ASCII Art diagram of that, but in essence, the
difference is like this. The program running portion
can appear shorter, but some hardware is still huffing
and puffing after the fact.

x - Program running - X

+----------+-----------+
| Read #1  | Read #2   |
+----------+-----------+-----------+
           | Write #1  | Write #2  |
           +-----------+-----------+
                                   X -Cache- X
                                      Drains

I'm having trouble seeing whether this new behavior
is a big win or not. This could be due to the
application using MapViewOfFile(), but I can't really
be sure of that.

So yes, there are instances of prefetch going on.
Even Explorer in Win10 attempts prefetch, as it
affects the appearance of the progress graph during
a file copy.

There are, in fact, a couple of RAM buffering options.
If you read a file, the contents are left in memory.

md5sum file.txt
md5sum file.txt

On the first run, the command gobbles data at 100MB/sec.
It is limited by the disk drive.

On the second run, it gobbles data at 300MB/sec. Why ?
The system file cache (which can use all unallocated
memory), holds a copy of the file. As long as the
file system is convinced the cached copy is the latest,
and nothing has purged the system file cache, you see
a performance speedup. The king at this was Win2K, where
the system file cache was every bit as good as the
competing ones (SunOS or Solaris may have had this
well before any desktop OS; MacOSX has a good system
file cache too). The modern Windows versions find more
excuses not to use it. It's still there, though. For
example, if you defragment, the defragmenter will not
refer to any files contained in the system file
cache. It does read_uncached() instead, for "safety".
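
The md5sum experiment translates into a short Python sketch (hashing a scratch file twice; on most systems the second pass is served from the system file cache, although nothing below measures or guarantees that):

```python
import hashlib
import os
import tempfile

def md5_of(path, chunk=1024 * 1024):
    """Hash a file in streaming fashion, like md5sum does."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

# Create a 4MB scratch file and hash it twice.  The first pass reads
# from disk; the second typically hits the OS file cache instead.
path = os.path.join(tempfile.gettempdir(), "cache_demo.bin")
with open(path, "wb") as f:
    f.write(os.urandom(4 * 1024 * 1024))

first = md5_of(path)
second = md5_of(path)
assert first == second   # same bytes either way; only the speed differs
os.remove(path)
```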

This is separate from the MapViewOfFile (or similar) concept.
The memory in that case is "charged to the system",
and you can see the activity in Task Manager, whereas
the system file cache has no visual representation.
So some MapViewOfFile activity, as it acquires RAM
and is charged for it, could be purging a portion
of the system file cache.

In short, there is lots going on behind the scenes.
More than I can keep track of. And some of it
is downright silly. It distorts progress bars (when
a file copy pre-fetches part of the copy from the
system file cache) and also makes dangerous situations
(from the user perspective), when 5GB of write cache memory
drains to disk and takes a whole minute to do it.
Any buffering on writes should be short enough
that the bad battery on my UPS isn't an issue on
a power failure (i.e., the power drops before the
5GB of writes are done).

Most of the time, on a modern OS, when I look
at the file transfer graph, my mouth is open
and I have that "WTF" look on my face. Because
the numbers in the graph are nonsense, and the
usage of RAM for stuff is a root cause. But many
times, things can't go any faster than the slowest
hard drive, so it's all a merry joke.

And if you ever see a drive deliver only half
of what you were expecting, check the "alignment".
I had a 4K sector hard drive, where I had to
realign it, to get the damn thing to run at
the proper speed.
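
The alignment check itself is simple arithmetic; a sketch (the two offsets in the comments are the well-known XP and Vista partition-start defaults):

```python
def is_aligned(partition_offset_bytes, physical_sector=4096):
    """True if a partition starts on a physical-sector boundary.

    On 4K-sector ("Advanced Format") drives, a misaligned partition
    forces a read-modify-write of two physical sectors per logical
    write, which can roughly halve throughput.
    """
    return partition_offset_bytes % physical_sector == 0

# XP's classic start: sector 63, i.e. 63 * 512 = 32256 bytes -> misaligned.
# Vista-and-later default: 1 MiB = 1048576 bytes -> aligned.
```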

Paul
  #20  
Old November 2nd 15, 08:47 PM posted to microsoft.public.windowsxp.general
Bill Cunningham[_2_]
Posts: 441


"VanguardLH" wrote in message
...
mike wrote:

On 10/31/2015 10:30 PM, VanguardLH wrote:

....

Well, those were nice links. I have my pagefile.sys set at 1341, and I guess
that is MB. It never seems to get any larger, so I set it at that size. I
will defrag it, and I don't think it will need it again.

What about encrypting it with an outside encryption program? I don't
know if NTFS's encryption would encrypt it. Could XP still use it if it's
encrypted?

Bill


  #21  
Old November 3rd 15, 01:45 AM posted to microsoft.public.windowsxp.general
VanguardLH[_2_]
Posts: 10,881

Bill Cunningham wrote:

What about encrypting it with an outside encryption program? I don't
know if NTFS's encryption would encrypt it. Could XP still use it if it's
encrypted?


Already mentioned: whole-disk encryption. See my other reply that
mentions it.
  #22  
Old November 8th 15, 12:48 PM posted to microsoft.public.windowsxp.general
VanguardLH[_2_]
Posts: 10,881

VanguardLH wrote:

For example, you can configure Windows to zero out (clear); see
https://support.microsoft.com/en-us/kb/314834. If you delete the
pagefile.sys file, it gets recreated on Windows startup.


I wasn't sure how you could delete the pagefile. I suspect it is
protected because it is always in use by Windows. Rather than waiting
a long time to zero out all the pages in the pagefile, I read (but
have not tried) that you can tell Windows to delete the pagefile (and
have it create a new one on its next load) by running:

wmic pagefileset where name="C:\\pagefile.sys" delete

The double backslash is probably needed for parsing (i.e., escaping a
character is done by prefixing it with a backslash, so you use a
backslash to escape a backslash). WMI is Windows Management Instrumentation
(https://en.wikipedia.org/wiki/Window...nstrumentation) and
wmic.exe is the console-mode command to interface with WMI.
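
That escaping can be illustrated in Python (the replace() below just models what the WQL parser does to the typed string; it is not the actual parser):

```python
# What gets typed in the wmic where-clause: two literal backslashes.
typed = r"C:\\pagefile.sys"
assert typed.count("\\") == 2

# The WQL parser consumes one backslash of each escape pair,
# leaving the real single-backslash path to match against.
matched = typed.replace("\\\\", "\\")
assert matched == r"C:\pagefile.sys"
```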

Of course, per your concern, the new pagefile will be allocated from the
available free space in the file system on the next Windows load, so its
sector positioning could change (which is why you might have to use a defrag
tool to move it back to the beginning [outer edge] of the platter).
 






Copyright ©2004-2024 PCbanter.
The comments are property of their posters.