A Windows XP help forum. PCbanter


No memory although more than 1GB free



 
 
  #31  
Old February 6th 04, 06:49 AM
Igor Tandetnik
external usenet poster
 
Posts: n/a
Default No memory although more than 1GB free

"André Pönitz" wrote in message
...
In microsoft.public.vc.stl Igor Tandetnik wrote:
That's why I think there's fragmentation at work. Imagine the
pathological scenario: your whole memory is occupied by a 1K allocated
chunk followed by a 1023K free chunk, and so on. 1024 such pairs will eat
up 1GB of RAM in a way that only 1MB is actually used, but you cannot
allocate another 1MB chunk. It's an artificial example of course, it's
highly unlikely to occur in practice, but it demonstrates the idea of
fragmentation nicely.


Does that mean the allocator does _not_ try to collect chunks of 'almost
the same size' in one place and different sizes in another one? [I am
talking about virtual addresses here]


It may or it may not, I don't know. Whatever strategy a particular
memory allocator employs, it is always possible to construct a
pathological sequence of allocations and deallocations that leads to
fragmentation with this allocator. In my example, I assumed a naive
allocator so that a pathological case would be easy to explain and
understand.
--
With best wishes,
Igor Tandetnik

"For every complex problem, there is a solution that is simple, neat,
and wrong." H.L. Mencken


  #32  
Old February 6th 04, 06:49 AM
Klaus Bonadt
external usenet poster
 
Posts: n/a
Default No memory although more than 1GB free

Does that mean the allocator does _not_ try to collect chunks of 'almost
the same size' in one place and different sizes in another one? [I am
talking about virtual addresses here]


At least it would be worthwhile, in order to reduce fragmentation, to think
about creating different heaps, each responsible for allocating chunks of a
similar size?

If I create two growable heaps, how does the system position these heaps in
the virtual address space? If the second heap starts at the end of the first
heap, the first heap is actually not growable?
On the other hand, the space between heaps will also cause fragmentation.
Any ideas for balancing?

Regards,
Klaus


  #33  
Old February 6th 04, 06:50 AM
Klaus Bonadt
external usenet poster
 
Posts: n/a
Default No memory although more than 1GB free


Which part of "each process has a separate virtual address space" do you
have difficulty understanding? A success or failure of a memory
allocation in one process tells you nothing about the state of the
virtual address space in another. It's like saying: "I have machine A
where memory allocation fails. Then I run a test program on machine B,
and it can successfully allocate plenty of memory. So enough memory
should be available on machine A, right?"


I was not aware of your reply when I wrote this. I got it.
However, could you take a look at my answer to André?
After understanding the concept, I am now interested in figuring out how to
avoid fragmentation.

Regards,
Klaus


  #34  
Old February 6th 04, 06:50 AM
Nick Savoiu
external usenet poster
 
Posts: n/a
Default No memory although more than 1GB free

"André Pönitz" wrote in message
...
In microsoft.public.vc.stl Igor Tandetnik wrote:
That's why I think there's fragmentation at work. Imagine the
pathological scenario: your whole memory is occupied by 1K allocated
chunk followed by 1023K free chunk, and so on. 1024 such pairs will eat
up 1GB of RAM in a way that only 1MB is actually used, but you cannot
allocate another 1MB chunk. It's an artificial example of course, it's
highly unlikely to occur in practice, but it demonstrates the idea of
fragmentation nicely.


Does that mean the allocator does _not_ try to collect chunks of 'almost
the same size' in one place and different sizes in another one? [I am
talking about virtual addresses here]


It could, but there isn't a perfect solution. Most of the time one won't run
into this problem. If one does, one is also better positioned to fix it,
since one knows more than the OS about how things are and will be allocated.
MS tries to alleviate this by providing:

http://msdn.microsoft.com/library/de...ation_heap.asp

Nick


  #35  
Old February 6th 04, 06:50 AM
André Pönitz
external usenet poster
 
Posts: n/a
Default No memory although more than 1GB free

In microsoft.public.vc.stl Igor Tandetnik wrote:
That's why I think there's fragmentation at work. Imagine the
pathological scenario: your whole memory is occupied by 1K allocated
chunk followed by 1023K free chunk, and so on. 1024 such pairs will eat
up 1GB of RAM in a way that only 1MB is actually used, but you cannot
allocate another 1MB chunk. It's an artificial example of course, it's
highly unlikely to occur in practice, but it demonstrates the idea of
fragmentation nicely.


Does that mean the allocator does _not_ try to collect chunks of 'almost
the same size' in one place and different sizes in another one? [I am
talking about virtual addresses here]

Andre'
  #36  
Old February 6th 04, 06:50 AM
Klaus Bonadt
external usenet poster
 
Posts: n/a
Default No memory although more than 1GB free

Not a problem. We're all here to hopefully learn something. Try re-reading
Igor's posting. He explains quite well what goes on with virtual/real
memory and address spaces.


I wrote my answer to you before reading Igor's reply. Anyway, thanks a lot.

Regards,
Klaus


  #37  
Old February 6th 04, 06:51 AM
Klaus Bonadt
external usenet poster
 
Posts: n/a
Default No memory although more than 1GB free

By the way, with Windows virtual memory management, you cannot fragment
physical memory at all, you can only fragment virtual address space. The
way it works, all physical memory is broken into pages 4KB (on some
systems 8KB) large. So is virtual memory. When a region of virtual memory
is allocated, each page of virtual memory is backed by a page of
physical memory. Now, virtual addresses within this region need to be
consecutive (because your program expects the address arithmetic to
work), but physical RAM pages don't need to be. The system just picks
free RAM pages lying around and maps them to virtual pages. Moreover, if
the system runs out of RAM, it picks a physical page (mapped to some
virtual page in some process A), saves its contents to disk and reuses
it for another virtual page, possibly in a different process B. If
process A later needs to refer to that virtual page, some other physical
page may be assigned to it and the contents read back from disk.


Thanks a lot, Igor, I got it!


  #38  
Old February 6th 04, 06:51 AM
Igor Tandetnik
external usenet poster
 
Posts: n/a
Default No memory although more than 1GB free

"Klaus Bonadt" wrote in message
...
But anyway, as I have mentioned above there must be at least 2 - 1.8 = 0.2GB
unfragmented memory available, otherwise my test program could not allocate
1.2GB until the whole virtual memory is occupied.


Each process has a separate virtual address space so fragmentation in one
would not affect another.


In my current situation, my application runs out of memory due to a memory
claim of 0.5MB RAM.
Let us assume this is due to fragmented memory, i.e. there is no contiguous
range with more than 0.5MB RAM in the lower 2GB of virtual address space.
The application stops by popping up a message box.
Now I start another application, which allocates chunks of memory in a loop,
each chunk 1MB. The application does this up to 1.2GB RAM.


Which part of "each process has a separate virtual address space" do you
have difficulty understanding? A success or failure of a memory
allocation in one process tells you nothing about the state of the
virtual address space in another. It's like saying: "I have machine A
where memory allocation fails. Then I run a test program on machine B,
and it can successfully allocate plenty of memory. So enough memory
should be available on machine A, right?"
--
With best wishes,
Igor Tandetnik

"For every complex problem, there is a solution that is simple, neat,
and wrong." H.L. Mencken


  #42  
Old February 6th 04, 06:51 AM
Klaus Bonadt
external usenet poster
 
Posts: n/a
Default No memory although more than 1GB free

I mean what I said - address space. Each process is allocated a flat 4GB
address space - that's as much as you can address with a 32-bit pointer.
Upper 2GB are reserved for the system (note - this does not mean the
system uses up 2GB of RAM, it just means that you cannot allocate memory
with an address in the upper 2GB). The bottom 2GB are partially occupied
by your executable code and any DLLs it may load. The rest is available
for allocation.


Thus, each process is able to allocate 2GB at maximum. Indeed, I can almost
allocate 2GB with my test program, allocating chunks of 1MB.
However, when my crucial application runs out of memory, Task Manager says
that there is only 1.6GB in use (see my first mail). Furthermore, starting
my test program at this point in time, this program is able to allocate a
further 1.2GB until my whole virtual memory is allocated. Thus, all other
processes (the system and my application) share 3GB - 1.2GB = 1.8GB. One
more indication that my crucial application was still not able to allocate
nearly 2GB.

My question is: why could my application allocate only 1.4GB (this is what
Process Viewer (Dev Studio 6 tools) says) although it should be able to
allocate 2GB?

Now, with very large amounts of RAM, it may so happen that the process
runs out of addresses before it runs out of physical memory (in fact, if
you have more than 2GB of RAM you simply cannot address it all as a flat
space). That's the primary motivation for moving to 64bit processors.


I have the AMD 64 processor, but I need a special 64-bit XP system, which is
not yet available for AMD, correct?

Even if you have enough address space, it may be fragmented. That is,
there are many small stretches of unused address space, but none large
enough to accommodate your allocation request.


What are PVIEW and Task Manager showing, the sum of heap sizes (including
unused space due to fragmentation) or the sum of actually allocated memory
(HeapAlloc) which was not freed afterwards?
I guess the first case, which means the sum of memory which is reserved for
the process even if the process is not able to allocate it due to
fragmentation?
But anyway, as I have mentioned above there must be at least 2 - 1.8 = 0.2GB
of unfragmented memory available, otherwise my test program could not
allocate 1.2GB until the whole virtual memory is occupied.

Does it mean something like "page handles"? Maybe this number is restricted?

Maybe there is another limiting resource, for instance the number of
allocations. I just wrote another test program to clarify this. The test
program allocates only 2 bytes with HeapAlloc() in a loop. It was able to
allocate 133,164,202 * 2 bytes = 266,328,404 bytes, which is 254MB. However,
Process Viewer shows for this process a heap usage of 2,080,848KB, which is
almost 2GB.
It seems that for every HeapAlloc there are 8 bytes of additional cost in
terms of memory, but the number of allocations does not seem to be limited.

Regards,
Klaus


  #44  
Old February 6th 04, 06:52 AM
Nick Savoiu
external usenet poster
 
Posts: n/a
Default No memory although more than 1GB free

"Klaus Bonadt" wrote in message
...
I have the AMD 64 processor, but I need a special 64-bit XP system, which
is not yet available for AMD, correct?


You can download a preview version for free from the MS site.

But anyway, as I have mentioned above there must be at least 2 - 1.8 = 0.2GB
unfragmented memory available, otherwise my test program could not allocate
1.2GB until the whole virtual memory is occupied.


Each process has a separate virtual address space so fragmentation in one
would not affect another.

It seems that for every HeapAlloc there are 8 bytes of additional cost in
terms of memory, but the number of allocations does not seem to be limited.


Of course. The OS needs to do some bookkeeping for each allocation.

Nick


 



