February 6th 04, 07:13 AM
Klaus Bonadt
Default No memory although more than 1GB free

> I mean what I said - address space. Each process is allocated a flat 4GB
> address space - that's as much as you can address with a 32-bit pointer.
> The upper 2GB are reserved for the system (note - this does not mean the
> system uses up 2GB of RAM, it just means that you cannot allocate memory
> with an address in the upper 2GB). The bottom 2GB are partially occupied
> by your executable code and any DLLs it may load. The rest is available
> for allocation.


Thus, each process can allocate at most 2GB. Indeed, my test program can
allocate almost 2GB in chunks of 1MB.
However, when my crucial application runs out of memory, Task Manager reports
only 1.6GB in use (see my first mail). Furthermore, if I start my test program
at that point, it can still allocate a further 1.2GB before the whole virtual
memory is exhausted. Thus, all other processes (the system and my application)
share 3GB - 1.2GB = 1.8GB. One more indication that my crucial application was
still not able to allocate nearly 2GB.

My question is: why could my application allocate only 1.4GB (this is what
Process Viewer (Dev Studio 6 tools) reports) although it should be able to
allocate 2GB?

> Now, with very large amounts of RAM, it may so happen that the process
> runs out of addresses before it runs out of physical memory (in fact, if
> you have more than 2GB of RAM you simply cannot address it all as a flat
> space). That's the primary motivation for moving to 64-bit processors.


I have the AMD 64 processor, but I need a special 64-bit XP system, which is
not yet available for AMD, correct?

> Even if you have enough address space, it may be fragmented. That is,
> there are many small stretches of unused address space, but none large
> enough to accommodate your allocation request.


What are PVIEW and Task Manager showing - the sum of heap sizes (including
unused space due to fragmentation), or the sum of memory actually allocated
with HeapAlloc and not freed afterwards?
I guess the first, which would mean the sum of memory reserved for the
process, including memory the process cannot allocate from due to fragmentation.
But anyway, as I mentioned above, there must be at least 2GB - 1.8GB = 0.2GB
of unfragmented memory available, otherwise my test program could not have
allocated 1.2GB before the whole virtual memory was occupied.

> Does it mean something like "page handles"? Maybe this number is
> restricted?

Maybe there is another limiting resource, for instance the number of
allocations. I wrote another test program to clarify this: it allocates only
2 bytes with HeapAlloc() in a loop. It was able to allocate
133,164,202 * 2 bytes = 266,328,404 bytes, which is 254MB. However, Process
Viewer shows a heap usage of 2,080,848KB for this process, which is almost
2GB.
It seems that every 2-byte HeapAlloc consumes about 16 bytes of heap in
total - roughly 14 bytes of overhead per allocation - but the number of
allocations does not seem to be limited.

Regards,
Klaus

