A Windows XP help forum. PCbanter



CPU generation question



 
 
  #1  
Old July 26th 19, 11:31 PM posted to alt.comp.os.windows-10
T
external usenet poster
 
Posts: 4,600
Default CPU generation question

Hi All,

I got talking to a guy yesterday whilst handing out cards.
He started expounding on how he built his own computer
and from what I saw, he did a pretty good job. He was
able to move 3D graphics in real time.

The thing he was the most proud of was the "generation"
of the processors he picks. I presume he means Intel's
processors.

Now, to me the generation of the processor does not mean a
lot. When building a customer's computer, I first find the
motherboard I want and then look at the specs to see what
processor it takes. Then I check my supplier's stock to see
what is in stock and what is the best value for what is
needed. This is usually the current generation and one back.

As far as generations of processors go, the higher the generation,
the lower the power consumption. I haven't seen more than four
cores making any practical difference with Windows. And
multi-threading doesn't seem to matter on Windows after
four real cores (on Linux it does make a big difference).

As far as performance goes, the big bottleneck is the hard
drive. I adore using NVMe drives and they make a YUGE difference.
Next would be the memory bus speed. Last of all would be
the generation of the processor.

I go for the motherboard that meets the customer's needs.
To me the generation of the processor is whatever fits on the
motherboard.

Am I missing something? Does the "generation" of the processor
really make that much difference?

-T

  #2  
Old July 27th 19, 12:21 AM posted to alt.comp.os.windows-10
Paul[_32_]
external usenet poster
 
Posts: 11,873
Default CPU generation question

T wrote:
Am I missing something? Does the "generation" of the processor
really make that much difference?

-T


You would need to be keeping careful notes for
the "generation number" to make a difference.

Intel and Moore's Law and brick walls and all.

TSMC claims to be working on 3nm right now, but of
course that "dimension thing" isn't exactly all that
honest, and I fully expect someone to claim
their geometry is "zero" any day now...
(Zero, plus or minus a 14nm error bar.)

Imagine how long it's going to take to do lithography
at 3nm. Chip manufacture takes around 3 months as it is.
(Ninety days, for sixty to seventy process steps.)
And that's why, when the power failed the last time
at the fab, they lost 3 months worth of production.

For all the "power saving" these chips provide,
the top of the line keeps setting records (like 400W).

The best way to compare generations is to try a single-threaded
benchmark on Passmark. That takes core count
out of the mix, and should simplify the math to bring
them all to a common clock.

https://www.cpubenchmark.net/singleThread.html

Three times the clock gives seven times the performance,
so the IPC seems to have increased. You really need
details about the benchmark itself, to determine whether
it's excessively tied to memory or cache bandwidth.
Some of the processors in that chart only had 300MB/sec
memory bandwidth. That's a significant impediment if the CPU cache
isn't big enough.
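That per-clock comparison can be sketched numerically. The scores and clocks below are made-up illustrative numbers, not real Passmark data:

```python
# Comparing generations on a per-clock basis: divide a single-thread
# benchmark score by clock speed to get a rough "points per GHz"
# figure. All numbers here are hypothetical, for illustration only.

def per_clock_score(score: float, clock_ghz: float) -> float:
    """Normalize a single-thread benchmark score by clock speed."""
    return score / clock_ghz

# Hypothetical old CPU: 1.0 GHz, 900 points.
# Hypothetical new CPU: 3.0 GHz, 6300 points (3x clock, 7x score).
old_per_ghz = per_clock_score(900.0, 1.0)     # 900 points/GHz
new_per_ghz = per_clock_score(6300.0, 3.0)    # 2100 points/GHz

# 7x the score on only 3x the clock => ~2.33x per-clock throughput,
# i.e. the per-clock work (IPC) appears to have increased too.
ipc_gain = new_per_ghz / old_per_ghz
```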

Paul
  #3  
Old July 27th 19, 12:45 AM posted to alt.comp.os.windows-10
T
external usenet poster
 
Posts: 4,600
Default CPU generation question

On 7/26/19 4:21 PM, Paul wrote:
The best way to compare generations is to try a single-threaded
benchmark on Passmark. That takes core count out of the mix,
and should simplify the math to bring them all to a common clock.

https://www.cpubenchmark.net/singleThread.html


Yeah, that is what I thought.

Thank you!

  #4  
Old July 27th 19, 12:49 AM posted to alt.comp.os.windows-10
VanguardLH[_2_]
external usenet poster
 
Posts: 10,881
Default CPU generation question

T wrote:

Am I missing something? Does the "generation" of the processor
really make that much difference?

-T


Some Windows requirements call for a minimum generation of
Intel processor. See:

https://docs.microsoft.com/en-us/win...r-requirements

Once you know what software (OS and apps) you want to run on the
platform, you'll know the hardware requirements for that
software. In addition, when I looked at motherboards that met my
hardware specifications, the candidates kept requiring 8th generation
Intel CPUs at a minimum. Once a mobo met my
hardware criteria, it only supported 8th or 9th generation Intels.

Then there is the converse: you need to know the minimum version
of Windows that supports a given generation of Intel i3/i5/i7 processors.
Windows 10 is the only version of Windows that supports Intel 7th
generation processors.

Your premise of getting the latest generation, or maybe one back, means
you'll be paying a super-high price premium for the latest generation.
I couldn't afford the 9th generation CPUs, so I got an 8th generation
(i7-8700 non-K). It is highly unlikely that I would notice any
performance boost of the 9th over the 8th generation. I also picked the
non-overclockable version to reduce power consumption: 65W for the
i7-8700 versus 95W for either the i7-8700K
(overclockable) or any of the i7-9xxx. This wasn't to reduce the size of
the PSU (which I size way over the required VA) but to reduce heat while
keeping the fan RPMs low for reduced noise.

Most customers have no clue what their OS or hardware needs are. They
can only give you some vague description of their expected usage
and perhaps a list of software that is critical to them. Based on that,
the last 2 generations of Intels (or AMDs) are rarely needed to satisfy
those customers' needs, and skipping them significantly reduces the cost
of overbuilding the platform. Unless they are planning for an 8-year
lifespan for the computer, there is little need to go with the latest
generations. Most users replace their computers a lot sooner, so they
would never achieve ROI on an overbuilt platform.
  #5  
Old July 27th 19, 02:14 AM posted to alt.comp.os.windows-10
T
external usenet poster
 
Posts: 4,600
Default CPU generation question

On 7/26/19 4:49 PM, VanguardLH wrote:
Your premise of getting the latest generation, or maybe one back, means
you'll be paying a super high price premium for the latest generation.


You misunderstood me. I pick the motherboard first, then look and
see what processors go on it. Usually there are two. You mistook
me saying the latest that goes on the motherboard for the latest
generation that is sold.

And you are correct: the latest sold is way too expensive and, as
far as I can tell, gives no discernible improvement in
performance. Your take too?

Excellent write up, by the way. Thank you.

The guy was just being condescending. "I built these computers,
why would I need a computer consultant?" So you are good at
assembling pop beads. It is only a tiny part of I.T. If
he ever calls me, and I think hell will freeze over first,
I will just make a polite excuse about being too busy at the moment.

  #6  
Old July 27th 19, 05:01 AM posted to alt.comp.os.windows-10
lonelydad
external usenet poster
 
Posts: 90
Default CPU generation question

T wrote in :


As far as generations of processors go, the higher the generation,
the lower the power consumption. I haven't seen more than four
cores making any practical difference with Windows. And
multi-threading doesn't seem to matter on Windows after
four real cores (on Linux it does make a big difference).

I know I will probably catch some flak for this, but the only reason for
more than four cores would be if the user is going to run a specially
written massively parallel program. An example would be the simulation
programs run at Los Alamos, et al., when they do things like simulate
nuclear explosions, or when NOAA and others are doing weather forecasts.
There are really very few truly parallel processes required in the programs
most run on their desktop PCs.
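The claim above can be quantified with Amdahl's law; the 50%-parallel workload below is an assumed example, not a measurement:

```python
# Amdahl's law: the ideal speedup from n cores when only a fraction p
# of a program's work can run in parallel. It shows why extra cores
# buy little for the mostly-serial programs typical of desktop use.

def amdahl_speedup(p: float, n: int) -> float:
    """Ideal speedup on n cores when fraction p of the work is parallel."""
    return 1.0 / ((1.0 - p) + p / n)

# For an assumed 50%-parallel desktop workload:
#   4 cores  -> 1.6x
#   16 cores -> ~1.88x
# and no core count can ever beat the 1/(1-p) = 2x ceiling.
```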
  #7  
Old July 27th 19, 05:08 AM posted to alt.comp.os.windows-10
T
external usenet poster
 
Posts: 4,600
Default CPU generation question

On 7/26/19 9:01 PM, lonelydad wrote:
T wrote in :


As far as generations of processors go, the higher the generation,
the lower the power consumption. I haven't seen more than four
cores making any practical difference with Windows. And
multi-threading doesn't seem to matter on Windows after
four real cores (on Linux it does make a big difference).

I know I will probably catch some flak for this, but the only reason for
more than four cores would be if the user is going to run a specially
written massively parallel program. An example would be the simulation
programs run at Los Alamos, et al., when they do things like simulate
nuclear explosions, or when NOAA and others are doing weather forecasts.
There are really very few truly parallel processes required in the programs
most run on their desktop PCs.


I have to agree. So no flak from here. And I have not seen
Windows being able to take advantage of more than four real
cores either. Linux does, but that is a totally different
technology.
  #8  
Old July 27th 19, 05:16 AM posted to alt.comp.os.windows-10
Paul[_32_]
external usenet poster
 
Posts: 11,873
Default CPU generation question

lonelydad wrote:
T wrote in :

As far as generations of processors go, the higher the generation,
the lower the power consumption. I haven't seen more than four
cores making any practical difference with Windows. And
multi-threading doesn't seem to matter on Windows after
four real cores (on Linux it does make a big difference).

I know I will probably catch some flak for this, but the only reason for
more than four cores would be if the user is going to run a specially
written massively parallel program. An example would be the simulation
programs run at Los Alamos, et al., when they do things like simulate
nuclear explosions, or when NOAA and others are doing weather forecasts.
There are really very few truly parallel processes required in the programs
most run on their desktop PCs.


How are we to play Fritz chess and make a decent opponent? :-)

At some point on high core count systems, you have to learn
how to fork multiple jobs, and that's a way to get some
usage from the excess cores.

7ZIP scales to some extent, but it really needs better memory
bandwidth to use a lot of cores. I would think the best core-count
machines available today could keep up with the bandwidth
offered by an eight-year-old hard drive. I don't know whether
spanning 7ZIP across two sockets works as well as it might
(like, say, 128 cores).
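The "fork multiple jobs" pattern above can be sketched generically: compress independent chunks on separate threads (zlib releases the GIL while compressing, so the threads really do run in parallel). This only illustrates the pattern; it is not how 7-Zip itself splits its work:

```python
# Spread compression of independent chunks across cores. Each chunk
# becomes a separate zlib stream, so decompression must also happen
# chunk-by-chunk. Illustrative sketch only, not 7-Zip's actual scheme.

import zlib
from concurrent.futures import ThreadPoolExecutor

def compress_chunks(data: bytes, n_chunks: int = 4) -> list:
    """Split data into n_chunks pieces and compress them in parallel."""
    size = max(1, -(-len(data) // n_chunks))   # ceiling division
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=n_chunks) as pool:
        return list(pool.map(zlib.compress, chunks))

def decompress_chunks(blobs) -> bytes:
    """Reassemble the original data from the per-chunk streams."""
    return b"".join(zlib.decompress(b) for b in blobs)
```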

Here's a machine at Microsoft, with a terabyte of memory and
a good number of cores. I think there is another picture with
maybe 192 cores on the machine. They need stuff like that to
prove the limits of the OS actually work (that it can manage
that many cores).

https://d2rormqr1qwzpz.cloudfront.ne...y_3e37e697.png

The other technique is the "heat map" which shows core usage.

https://d2rormqr1qwzpz.cloudfront.ne...ger_teaser.jpg

Paul
  #9  
Old July 27th 19, 05:58 AM posted to alt.comp.os.windows-10
VanguardLH[_2_]
external usenet poster
 
Posts: 10,881
Default CPU generation question

lonelydad wrote:

I know I will probably catch some flak for this, but the only reason for
more than four cores would be if the user is going to run a specially
written massively parallel program. An example would be the simulation
programs run at Los Alamos, et al., when they do things like simulate
nuclear explosions, or when NOAA and others are doing weather forecasts.
There are really very few truly parallel processes required in the programs
most run on their desktop PCs.


Well, one use is to set aside one core for a virtual machine, like using
VirtualBox or VMware Player and letting a guest OS running in a VM have
its own CPU, which might make it more responsive.

You can also set a program's CPU affinity to use a core that isn't much
used rather than trying to share a busy core with other programs. See
https://en.wikipedia.org/wiki/Processor_affinity.
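The affinity idea can be sketched with the Linux API (os.sched_setaffinity is Linux-only; on Windows the analogue is SetProcessAffinityMask, or Task Manager's "Set affinity" menu entry):

```python
# Pin the current process to a subset of cores so it stops competing
# for the busy ones. os.sched_setaffinity exists only on Linux;
# Windows exposes the same idea via SetProcessAffinityMask.

import os

def pin_to_cores(cores):
    """Restrict the calling process (pid 0) to the given core numbers."""
    os.sched_setaffinity(0, cores)
    return os.sched_getaffinity(0)   # read back what the OS applied
```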

Although I don't have any, I've seen some games that can benefit from
using more processors (cores).

Users love to see more "cores" available even if they are virtual via
hyperthreading. I forget which of the nasty CPU vulnerabilities are
affected, but I remember reading that disabling hyperthreading eliminated
one of them. Might've been ZombieLoad, a Spectre-like
variant; see:

https://www.pcworld.com/article/3395...u-exploit.html

Your 8-logical-core machine would go down to its 4 real cores. I
haven't bothered to disable hyperthreading, which could reduce the
performance of any apps that use threads across [logical] cores.

http://techgenix.com/intels-hyper-threading-technology/

That exploits exist doesn't mean you are guaranteed to get hit by any of
them. That the Sun could destroy Earth by a solar superflare ("Knowing"
movie) doesn't mean it must happen.

Back to the topic: there has always been an advantage to having more
processors. Do you still own and use a single-processor (with just one
core) machine? Sounds like yours has 1 CPU with 4 cores, yet you
argue that more isn't better even though you already have more than one.
Using multiple cores on the same die eliminates having to pump up the
power to bring the signal off the die and over to another CPU, which
would mean more power consumption and more heat. Consider that what
you're using now used to be considered a server-quality box. As the
hardware went up, so did the OS. As support in the OS went up, the
hardware followed. That your setup doesn't use more than 4 cores
doesn't mean no one else can use more.

It's not just massively parallel computing where more cores are
beneficial. The OS can already use multithreading for itself.
Applications can themselves use multithreading. You can run more than
one multithreaded application. Multithreading is often used in server
or enterprise setups and has been slower to migrate to end-user or
consumer platforms (which is likely what you are asking about), but its
use is growing. E-mail and web browsing apps that are mostly waiting
eons for the user to do something won't benefit much from
multithreading. However, graphics or video editing can greatly benefit
from multithreading, and more cores means distributing the load
across them for higher parallelism. It really depends on how you
use your platform. If you're converting a video from one format to
another, it might not matter whether it takes 2 or 4 hours for just one
conversion you do once in a blue moon, but it does matter to someone
doing it every day. It's also nice to do some graphics in real time
rather than wait for a jerky, piecemeal visual.

Look at some games. Most need rendering of the graphics (display) to
show what is happening at the moment, often at high resolution. Then
there's the computation involved in the artificial intelligence of the
characters in the game. A single core would have to do both, but a
multithreaded game can load-balance across multiple cores. Say you're
doing some video [re]encoding while recompiling some code, and while
waiting you might want to play a game. More apps doing more
multithreading across more cores.

Think about the Internet speeds you have nowadays compared to 30 years
ago with dial-up, and the increased resolution of monitors, and the
higher clock rates of computers, and so on. You get used to power, and
going back is not really an option. Speed is addictive. Sure, you
could go back to limiting how many concurrent CPU-hungry apps you run
and waiting longer for them to complete. Do you really want to? You'd be
like those turtles in the Comcast ad that don't want speed and want
everything to be tortoise-paced.

Sounds like you don't have much beyond the OS that can make use of
multithreading. I have a few apps that make use of multithreading. I'm
not much of a gamer (stealth is so elusive in the vast majority of
games), but there are lots of gamers that play multithreaded games.
There are graphic designers, AutoCAD users, video encoders, and folks
doing a hell of a lot more on their hardware than I am. However, I tend to design my
builds for many years of use where even after many years the platform is
still quite usable. I like having potential rather than too soon
hitting a wall because I didn't build robust enough.

I could buy a dinky smartcar if the only task needed by my car was to
commute between home and work. But I also need to carry stuff for home
repair in my car, and tote groceries, haul the family around, go
offroading, get through snow, tow a boat or trailer, and so on. The
money spent on a dinky smartcar would be wasted, as I soon would need my
car to do more than just commute. Do I want more than 4 cupholders? Hell
yeah, because they hold my phone(s), change, and other stuff, and even
cups. I'd like more cubbies to keep all my stuff organized. The more
cubbies there are, the more stuff I could put in the car.

Of course, you're bringing up this topic to the wrong audience.
"Programs most run" doesn't really apply to this audience. Yeah, most
users buy a computer, use it until they need something better, and then
buy another, better computer. If you were to inventory the software this
audience uses, it's likely outside of, or larger than, the "most
programs" used by most users. Those users wouldn't have a clue what
we're talking about here regarding generations of CPUs.

  #10  
Old July 27th 19, 06:07 AM posted to alt.comp.os.windows-10
T
external usenet poster
 
Posts: 4,600
Default CPU generation question

On 7/26/19 9:58 PM, VanguardLH wrote:
Well one use is to set aside one core for a virtual machine


Hi Vanguard,

That is a misconception I also had. A developer on
the KVM project straightened me out. EVERYTHING
in a virtual machine is fake. Commands get in line
with everything else to run on the native cores like
any other native program.

To the virtual machine it looks like you have x cores.
And you can have far more cores, if your heart desires,
than the actual machine has. The hypervisor takes care
of turning fake into real.

This also applies to memory. You have x fake amount
of RAM.

Love virtual machines.

:-)

-T

Here is one of his comments:

"It's how it works, think of kvm as an application. If you
tell that it should use 4 core it's like if you tell it
to use 4 processes or threads. The host operating system
will decide which processor core or thread to use ..."
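The quoted point, that a guest's "cores" are just ordinary host threads/processes for the host scheduler to place, can be illustrated: nothing stops you from running more CPU-bound threads than physical cores; the host simply time-slices them.

```python
# A guest's vCPUs are just host threads: you can start more CPU-bound
# workers than there are physical cores and every one still completes,
# because the host scheduler time-slices them onto the real cores.

import os
import threading

def run_oversubscribed(n_workers: int) -> int:
    """Run n_workers busy threads (possibly > cpu_count) to completion."""
    results = []
    lock = threading.Lock()

    def work() -> None:
        total = sum(range(50_000))           # small CPU-bound task
        with lock:
            results.append(total)

    threads = [threading.Thread(target=work) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return len(results)

# e.g. run_oversubscribed((os.cpu_count() or 1) * 4) still returns the
# full worker count: the "extra" cores are an illusion the scheduler
# maintains, exactly as with KVM vCPUs.
```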
  #11  
Old July 27th 19, 07:26 AM posted to alt.comp.os.windows-10
Paul[_32_]
external usenet poster
 
Posts: 11,873
Default CPU generation question

T wrote:
On 7/26/19 9:01 PM, lonelydad wrote:
T wrote in :


As far as generations of processors go, the higher the generation,
the lower the power consumption. I haven't seen more than four
cores making any practical difference with Windows. And
multi-threading doesn't seem to matter on Windows after
four real cores (on Linux it does make a big difference).

I know I will probably catch some flak for this, but the only reason for
more than four cores would be if the user is going to run a specially
written massively parallel program. An example would be the simulation
programs run at Los Alamos, et al., when they do things like simulate
nuclear explosions, or when NOAA and others are doing weather forecasts.
There are really very few truly parallel processes required in the
programs most run on their desktop PCs.


I have to agree. So no flak from here. And I have not seen
Windows being able to take advantage of more than four real
cores either. Linux does, but that is a totally different
technology.


What does your statement mean exactly?

Here's a quick 7ZIP compression run.

All cores in use.

https://i.postimg.cc/852B2nd2/compression.gif

Paul

  #12  
Old July 27th 19, 07:43 AM posted to alt.comp.os.windows-10
VanguardLH[_2_]
external usenet poster
 
Posts: 10,881
Default CPU generation question

T wrote:

VanguardLH wrote:

Well one use is to set aside one core for a virtual machine


That is a misconception I also had. A developer on
the KVM project straightened me out. EVERYTHING
in a virtual machine is fake. Commands get in line
with everything else to run on the native cores like
any other native program.


You're confusing an emulator (where all hardware is emulated) with a
virtual machine (where all hardware is emulated except the CPU). The
"virtual" cores inside the VM are sending their code to the host CPU,
where it gets executed. In an emulator, the guest OS is having its
code interpreted by software (which then runs on the host CPU). In
fact, with VMs, there are pass-through drivers that can let the guest OS
have semi-direct access to the other real hardware, too. That's one
way, for example, to improve the video performance inside the VM in the
guest OS. There's a difference between the guest OS and its apps running
their code on the real core(s) versus some CPU emulator interpreting all
that code to insulate it from the real CPU.

Virtual machines use CPU self-virtualization (VT-x) to provide a
virtualized interface to the *real* hardware. Emulators emulate the
hardware without reliance on the CPU being able to run guest's code
directly. VMs let you run guest OS(es) that use the hardware you
actually have. Emulators can let you run guest OS(es) or their apps on
emulated hardware that differs from the real hardware. VMs are slower
than the host OS but are faster than emulated hardware (which may not
match the real hardware).

A hypervisor (or VMM) does not emulate protected access. It mediates
protected access with the real hardware. An emulator lets you run code
on a hardware platform that you don't have because the CPU is fake.
What the emulator needs for its host OS is disconnected from the CPU it
fakes to the virtual container.

Virtual machines provide an isolated environment; they do not present a
fake environment. Emulators reproduce the behavior of some hardware
platform you want to target, which may be the same as or different from
the real hardware. Emulators cannot use virtualization because that
won't reproduce any quirks inherent to the targeted hardware platform
being emulated, and virtualization is a bit more "leaky" than emulation,
but virtualization is an interface to the real CPU and hence faster.

To the virtual machine it looks like you have x cores.
And you can have far more cores, if your heart desires,
than the actual machine has. The hypervisor takes care
of turning fake into real.


All that means is swapping between the cores, not that you have any
magical ones suddenly appearing. Can't see the point of specifying more
cores than there are physically available unless you are trying to fake
out some program running inside the VM. The result is you *degrade* the
performance of the VM by specifying more cores to the guest OS than
could possibly be assigned to the VM from the host OS.

Inside the VM, yes, the guest OS and apps running under it see a virtual
CPU. I'm talking about how many cores to assign to the VM itself; i.e.,
how many the VMM (virtual machine manager) gets to use, so it can
allocate which core(s) to which VM. Virtualbox, for example, can have
its VMM present up to 32 *virtual* cores to the VM. Yet obviously the
VM won't get more cores than are physically available hence threads
having to wait until they actually get a core for them to run making the
multithreaded application run even slower (since everything inside a VM
is going to run slower than on real hardware).

https://www.virtualbox.org/manual/ch03.html
3.5.2 Processor Tab
"You should not configure virtual machines to use more CPU cores than
are available physically. This includes real cores, with no
hyperthreads."

You gain nothing by allocating more virtual cores inside the VM than are
available for the VMM to allocate to any VM. I'm not sure why you would
want to lie to the VM about how many cores are actually available. In
fact, you should not allocate more than N-1 cores to a VM, since you
should leave 1 core to the host OS under which the VMM is running.
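That guidance condenses into a tiny helper; the name suggest_vm_cores is made up for this sketch and is not a VirtualBox API:

```python
# Cap a VM's vCPU count per the VirtualBox guidance above: never more
# vCPUs than physical cores, and leave one core for the host OS.
# suggest_vm_cores is a hypothetical helper name, not a real API.

import os

def suggest_vm_cores(requested: int, physical_cores: int = 0) -> int:
    """Return requested vCPUs capped at physical cores minus one (min 1)."""
    if physical_cores <= 0:
        physical_cores = os.cpu_count() or 1
    return max(1, min(requested, physical_cores - 1))
```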

Once you specify a number of cores for multithreading that exceeds
the number of threads the host OS can handle, performance will degrade.
Exceed the physical thread count and the host cannot communicate
with the guest OS. There is just too much context switching. Hell, the
mouse inside the guest OS may not even move, or may take a long time to
move. Because of all the pending threads and context switching, the
guest OS can get so slow as to be unusable.

VMs are already slow, and you want them even slower? Ever try to run
more VMs (with 1 core apiece) than there are physical cores? Yawn. If
you don't mind your VMs running super slow, then go ahead and allocate
more virtual resources than exist physically. I assumed you actually
want to do something inside the guest OS(es), not push the VMs so hard
that you end up with a guest OS that is unusably slow or even appears
dead. The host OS still has the highest priority, but the guests are
going to be slow, and worse as you continue to allocate resources inside
their VMs that just are not available through the VMM from the host OS.

Sure, VirtualBox can assign 32 virtual cores in a VM for a guest OS to
use. That works just fine on a 64-core host. Know a lot of your
customers that can afford the 72-core Intel Knights Landing (Xeon Phi)
CPU?

This also applies to memory. You have x fake amount of RAM.


Well, obviously all that over-allocated memory in the VM has to come
from somewhere other than the real system RAM. You already know how
that's done in the host OS (aka paging and what storage media is used
for that).
  #13  
Old July 27th 19, 08:23 AM posted to alt.comp.os.windows-10
T
external usenet poster
 
Posts: 4,600
Default CPU generation question

On 7/26/19 11:26 PM, Paul wrote:
T wrote:
On 7/26/19 9:01 PM, lonelydad wrote:
T wrote in :


As far a generation of processors goes, the higher the generation,
the better the power consumption.Â* I haven't seen more than four
cores making any practical difference with Windows.Â* And
multi-threading doesn't seem to matter on Windows after
four real cores (Linux does make a big difference).

I know I will probably catch some flack for this, but the only reason
for more than four cores would be if the user is going to run a
specially written massively parallel program. An example would be the
simulation programs run at Los Alamos, et al., when they do things like
simulate nuclear explosions, or when NOAA and others are doing weather
forecasts. There are really very few truly parallel processes required
in the programs most run on their desktop PCs.


I have to agree. So no flack from here. And I have not seen
Windows being able to take advantage of more than four real
cores either. Linux does, but that is a totally different
technology.


What does your statement mean exactly ?

Here's a quick 7ZIP compression run.

All cores in use.

https://i.postimg.cc/852B2nd2/compression.gif

   Paul


I mean just in "observing" how fast things run, I am not observing any
improvement over 4 real cores. Well, in Windows.
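And the "no improvement past 4 cores" observation is roughly what Amdahl's law predicts for everyday workloads. A back-of-the-envelope sketch (the 40% serial fraction is an assumed number for illustration, not a measurement):

```python
def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    # Amdahl's law: speedup = 1 / (s + (1 - s) / n)
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# With 40% of the work serial, 16 cores barely beat 4:
for n in (1, 2, 4, 8, 16):
    print(f"{n:2d} cores: {amdahl_speedup(0.4, n):.2f}x")
```

The curve flattens fast: under that assumption, going from 4 cores to 16 gains you only about 25%, which is easy to miss by eyeballing how fast things run.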

  #14  
Old July 27th 19, 10:50 AM posted to alt.comp.os.windows-10
Eric Stevens
external usenet poster
 
Posts: 911
Default CPU generation question

On Sat, 27 Jul 2019 04:01:04 GMT, lonelydad
wrote:

T wrote in :


As far as generation of processors goes, the higher the generation,
the better the power consumption. I haven't seen more than four
cores making any practical difference with Windows. And
multi-threading doesn't seem to matter on Windows after
four real cores (Linux does make a big difference).

I know I will probably catch some flack for this, but the only reason for
more than four cores would be if the user is going to run a specially
written massively parallel program. An example would be the simulation
programs run at Los Alamos, et al., when they do things like simulate
nuclear explosions, or when NOAA and others are doing weather forecasts.
There are really very few truly parallel processes required in the programs
most run on their desktop PCs.


Well, I've got an i7-2600 with 16GB. It was way overkill when I bought
it, but hey, that was nearly 10 years ago. It's gone happily from XP to
W10 1903.

My present main machine is an i7-6800K with 32GB and will probably
remain competitive for longer than I will. :-)
  #15  
Old July 27th 19, 11:15 AM posted to alt.comp.os.windows-10
T
external usenet poster
 
Posts: 4,600
Default CPU generation question

On 7/26/19 11:43 PM, VanguardLH wrote:
T wrote:

VanguardLH wrote:

Well one use is to set aside one core for a virtual machine


That is a misconception I also had. A developer on
the KLM project straightened me out. EVERYTHING
in a virtual machine is fake. Commands get in line
with everything else to run on the native cores like
any other native program.


You're confusing an emulator (where all hardware is emulated) with a
virtual machine (where all hardware is emulated except the CPU). The
"virtual" cores inside the VM are sending their code to the host CPU
where they get executed. In an emulator, the guest OS is having its
code interpreted by software (which then runs on the host CPU). In
fact, with VMs, there are pass-through drivers that can let the guest OS
have semi-direct access to the other real hardware, too. That's one
way, for example, to up the video performance inside the VM in the guest
OS. There's a difference between the guest OS and its apps running
their code on the real core(s) versus some CPU emulator interpreting all
that code to insulate from the real CPU.

Virtual machines use CPU self-virtualization (VT-x) to provide a
virtualized interface to the *real* hardware. Emulators emulate the
hardware without reliance on the CPU being able to run guest's code
directly. VMs let you run guest OS(es) that use the hardware you
actually have. Emulators can let you run guest OS(es) or their apps on
emulated hardware that differs from the real hardware. VMs are slower
than the host OS but are faster than emulated hardware (which may not
match the real hardware).

A hypervisor (or VMM) does not emulate protected access. It mediates
protected access with the real hardware. An emulator lets you run code
on a hardware platform that you don't have because the CPU is fake.
What the emulator needs for its host OS is disconnected from the CPU it
fakes to the virtual container.

Virtual machines provide an isolated environment, not present a fake
environment. Emulators reproduce the behavior of some hardware platform
you want to target which may be the same or different than the real
hardware. Emulators cannot use virtualization because that won't
reproduce any quirks inherent to the targeted hardware platform being
emulated. Virtualization is a bit more "leaky" than emulation, but it
is an interface to the real CPU, hence faster.

To the virtual machine it looks like you have x cores.
And you can have far more cores, if your heart desires,
than the actual machine has. The hypervisor takes care
of turning fake into real.


All that means is swapping between the cores, not that you have any
magical ones suddenly appearing. Can't see the point of specifying more
cores than there are physically available unless you are trying to fake
out some program running inside the VM. The result is you *degrade* the
performance of the VM by specifying more cores to the guest OS than
could possibly be assigned to the VM from the host OS.

Inside the VM, yes, the guest OS and apps running under it see a virtual
CPU. I'm talking about how many cores to assign to the VM itself; i.e.,
how many the VMM (virtual machine manager) gets to use, so it can
allocate which core(s) to which VM. Virtualbox, for example, can have
its VMM present up to 32 *virtual* cores to the VM. Yet obviously the
VM won't get more cores than are physically available, so threads have
to wait until they actually get a core to run on, making a
multithreaded application run even slower (since everything inside a VM
is going to run slower than on real hardware).

https://www.virtualbox.org/manual/ch03.html
3.5.2 Processor Tab
"You should not configure virtual machines to use more CPU cores than
are available physically. This includes real cores, with no
hyperthreads."

You gain nothing by allocating more virtual cores inside the VM than are
available to the VMM to allocate to any VM. Not sure why you want to
lie to the VM as to how many cores are actually available. In fact, you
should not allocate more than N-1 cores to a VM since you should leave 1
core to the host OS under which the VMM is running.

Once you specify a number of cores to use multithreading that exceeds
the number of threads the host OS can handle, performance will degrade.
Exceed the physical thread max count and the host cannot communicate
with the guest OS. There is just too much context switching. Hell, the
mouse inside the guest OS may not even move, or may take a long time to
move. Because of all the pending threads and context switching, the
guest OS can get so slow as to be unusable.

VMs are already slow, and you want them even slower? Ever try to run
more VMs (with 1 core apiece) than there are physical cores? Yawn. If
you don't mind your VMs running super slow then go ahead and allocate
more virtual resources than exist physically. I assumed you actually
want to do something inside the guest OS(es), not push the VMs so badly
that you end up with a guest OS that is unusably slow or even appears
dead. The host OS still has highest priority but the guests are going
to be slow and worse as you continue to allocate resources inside their
VMs that just are not available through the VMM from the host OS.

Sure, VirtualBox can assign 32 virtual cores in a VM for a guest OS to
use. That works just fine on a 64-core host. Know a lot of your
customers that can afford the 72-core Intel Knights Landing (Xeon Phi)
CPU?

This also applies to memory. You have x fake amount of RAM.


Well, obviously all that over-allocated memory in the VM has to come
from somewhere other than the real system RAM. You already know how
that's done in the host OS (aka paging and what storage media is used
for that).


Hi Vanguard,

I think we are talking at cross purposes a bit.

The holy mother of VM's is Red Hat's KVM, which is the one I use.
The VM is built directly into the kernel, giving it near
bare metal performance. It is almost as fast as native.

When I say "fake", you say "virtualized". So we are talking at
cross purposes. The misunderstanding a lot of people have is
that they think if they call out two cores, they get two
of the actual cores dedicated only to the VM. Same with
memory: they think they get xxx amount of RAM specifically
dedicated to the VM. Everything is fake (virtualized).

Under KVM, you get access directly to the hardware, but through
KVM, which is just another program. And KVM shares resources like
any other program.

KVM description of itself:
https://www.linux-kvm.org/page/Main_Page

KVM (for Kernel-based Virtual Machine) is a full virtualization
solution for Linux on x86 hardware containing virtualization
extensions (Intel VT or AMD-V). It consists of a loadable kernel
module, kvm.ko, that provides the core virtualization
infrastructure and a processor specific module, kvm-intel.ko or
kvm-amd.ko.

Using KVM, one can run multiple virtual machines running
unmodified Linux or Windows images. Each virtual machine has
private virtualized hardware: a network card, disk, graphics
adapter, etc.

https://www.redhat.com/en/topics/vir...on/what-is-KVM

How does KVM work?

KVM converts Linux into a type-1 (bare-metal) hypervisor.
All hypervisors need some operating system-level components—such
as a memory manager, process scheduler, input/output (I/O)
stack, device drivers, security manager, a network stack, and
more—to run VMs. KVM has all these components because it’s part
of the Linux kernel. Every VM is implemented as a regular
Linux process, scheduled by the standard Linux scheduler, with
dedicated virtual hardware like a network card, graphics adapter,
CPU(s), memory, and disks.

If you are having trouble with VM's and the slows, it is time for you
to lie down with the big dogs, make Linux your host (Fedora works best
with KVM), and fire up qemu-kvm. Windows stinks for hosting VM's.
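Before reaching for qemu-kvm, a quick sanity check that the CPU exposes the extensions and that the modules are loaded saves a lot of head scratching (Linux host assumed; the flag and module names come from the KVM page quoted above):

```shell
#!/bin/sh
# Intel advertises VT-x as "vmx", AMD advertises AMD-V as "svm".
if grep -Eq 'vmx|svm' /proc/cpuinfo 2>/dev/null; then
    echo "CPU virtualization extensions: present"
else
    echo "CPU virtualization extensions: not found"
fi

# kvm plus kvm_intel or kvm_amd should be listed once the modules load.
lsmod 2>/dev/null | grep '^kvm' || echo "kvm modules not loaded"
```

If the flags are missing, check the BIOS first; virtualization is often disabled there by default.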

-T

 



