Outboard graphics card
  #1  
Old December 31st 18, 10:16 PM posted to alt.comp.os.windows-10
Tim[_10_]

I have a home-built system that uses an AMD A10-5800 APU. If I add an
outboard graphics card (I am thinking of the MSI Gaming GeForce GTX 1050)
how does the CPU know to send the graphics operations to the MSI instead of
its onboard GPU?
  #2  
Old December 31st 18, 10:31 PM posted to alt.comp.os.windows-10
Andy Burns[_6_]

Tim wrote:

I have a home-built system that uses an AMD A10-5800 APU. If I add an
outboard graphics card (I am thinking of the MSI Gaming GeForce GTX 1050)
how does the CPU know to send the graphics operations to the MSI instead of
its onboard GPU?


Usually there's a BIOS setting to determine the priority of
IGD vs. PEG (Integrated Graphics Device vs. PCI Express Graphics).

If there's no such setting, it'll probably use the PCI Express card
if one is present, otherwise the onboard graphics.
  #3  
Old December 31st 18, 10:47 PM posted to alt.comp.os.windows-10
Tim[_10_]

Tim wrote:

I have a home-built system that uses an AMD A10-5800 APU. If I add an
outboard graphics card (I am thinking of the MSI Gaming GeForce GTX
1050) how does the CPU know to send the graphics operations to the MSI
instead of its onboard GPU?

Can someone explain HDCP to me? Is there something special I need
in my monitors? And can I turn it off on the video card?
Monitors:
AOC 2243
AOC 2050
  #4  
Old January 1st 19, 02:16 AM posted to alt.comp.os.windows-10
Paul[_32_]

Tim wrote:
Tim wrote:

I have a home-built system that uses an AMD A10-5800 APU. If I add an
outboard graphics card (I am thinking of the MSI Gaming GeForce GTX
1050) how does the CPU know to send the graphics operations to the MSI
instead of its onboard GPU?

Can someone explain HDCP to me in terms of is there something special I
have to have in my monitors? And can I turn it off on the video card?
Monitors:
AOC 2243
AOC 2050


AOC may have multiple models with the same model
number root string. Other manufacturers do this
too, and sometimes the "connector board" that
accepts input connections is part of a different
feature set. So these might not be accurate portrayals.

2243: (e2243Fw)

Video Format 1080p (Full HD)
Type DVI-D, VGA
Features HDCP === got it

2050:

Native Resolution 1600 x 900 at 60 Hz
Type VGA

We can say for sure the 2050 doesn't have HDCP, as it's VGA-only,
and VGA is analog - no digital encryption methods are available on VGA.

VGA (being analog) naturally degrades at high resolutions,
due to the less-than-ideal connector design (transmission-line
reflections). The studios are less worried about people
making "exact" copies of Hollywood content over VGA.
The crossover is around 1600x1200 or so; if you want to handle
resolutions higher than that, HDMI or DVI start to look better.
If you wanted a 4K signal, then VGA would be a definite
"forget it". The DAC bandwidth on VGA never went
past 400MHz (which is roughly enough for 2048x2048, or
2560-wide modes or so). Back when dual-head video cards first
came out, DAC bandwidth was still limited, and high-res choices
were not available because of it. (Even if the card had
sufficient memory to make a "big" frame buffer, the DAC
couldn't "draw" that fast.)
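
To put a rough number on that 400MHz figure: the pixel clock a DAC has to
supply is roughly the active pixel count, times the refresh rate, times a
blanking overhead of about 1.3. A throwaway sketch in a bash-style shell
(the 1.3 factor is an assumption, not something measured here):

# Rough pixel-clock estimate in Hz: active pixels x refresh x ~1.3 blanking overhead
echo $(( 2560 * 2048 * 60 * 13 / 10 ))    # ~409 MHz, right at a 400MHz DAC's ceiling
echo $(( 1600 * 1200 * 85 * 13 / 10 ))    # ~212 MHz, comfortable for VGA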

This is how they should have made the VGA connector:
the RGB signals are actual coaxial contacts right in the connector itself.
The VGA cable uses coax inside the cable part (that's good),
but the way the VGA connector pins were done was cheap
and not too clever.

https://en.wikipedia.org/wiki/DB13W3

Paul
  #5  
Old January 1st 19, 05:34 AM posted to alt.comp.os.windows-10
Paul[_32_]

Wolf K wrote:
On 2018-12-31 17:16, Tim wrote:
I have a home-built system that uses an AMD A10-5800 APU. If I add an
outboard graphics card (I am thinking of the MSI Gaming GeForce GTX 1050)
how does the CPU know to send the graphics operations to the MSI
instead of
its onboard GPU?


AFAIK, the driver for the card takes care of that. If the card is
plug'n'play, then the system will automatically use it instead of the
integrated graphics.

BTW, talk to someone who has experience with this card. It may not give
you as much of a performance boost as you would like. Graphics
subsystems are tricky. They are in effect dedicated computers that
handle just the graphics tasks, which means that communication between
the graphics card and the motherboard is a crucial parameter. AIUI, the
mobo must have a bus fast enough to take advantage of the card's speed.
(As always, correction/clarification requested).

Good luck,


Usually PCIe x4 is sufficient. Or AGP 8X is good.

PCIe x4  Rev1.1   4*250 = 1000MB/sec   === sufficient
         Rev2     4*500 = 2000MB/sec
         Rev3     4*985 = 3940MB/sec

AGP 8X            8*266 = 2128MB/sec

You might lose 10% of max performance by using PCIe x4,
and that was with the Rev1.1 version. With modern
PCIe rev3, even one lane of that (985MB/sec) is sufficient
communication bandwidth to get some usage from it.

But they no longer make x1 lane video cards. They
made some at one time.

If you have to run a modern x16 video card in an x4-wired slot,
it really isn't the end of the world. You can tell how a slot is
wired by counting the ceramic capacitors located next to the
slot - that's the hint.

Only the old PCI bus, at 100MB/sec or so, is "too crusty for comfort".

The coin-miner motherboards use a PCIe x1 interface
for each video card; coin mining didn't need the bandwidth.
The video card is very busy (gets hot) but doesn't need a
lot of comms while doing so. The motherboard below supports
18 video cards, and you need adapter cables to run
from the motherboard up to the shelf full of standalone
video cards and big power supplies.

https://www.anandtech.com/show/13747...-hardware-drop

Video games, by comparison, need at least x4 lanes to work
decently, in which case those 18 slots would not be used and
only the "big" x16 slot would be a candidate.

The protocols themselves are a bit deceptive.

The whizzy transfer rates are available during "burst transfer".
If you transfer a huge texture file to the video card,
the bus runs at (closer to) the rated speed.

However, some operations write single register locations.
When that happens, the aggregate bandwidth is actually
pretty pathetic. Nobody talks about that, because it
would be embarrassing. On AGP, the rate actually drops
for PCI-like transactions, whereas PCIe keeps the clock
running at the same rate all the time. The inefficiency
in PCIe is the overhead per packet: if you want
to write a register, the write must be encapsulated in a
packet, and that's a fair amount of overhead.
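
To put approximate numbers on that (from memory of the PCIe spec, not from
this thread): a 4-byte register write rides in a TLP with a 12-16 byte
header plus several more bytes of framing, sequence number and LCRC, so
only a small fraction of what crosses the wire is payload. A quick sketch
with those assumed overhead figures:

# Assumed ~12-byte TLP header and ~8 bytes of link-layer framing per 4-byte write
awk 'BEGIN { printf "payload efficiency ~ %.0f%%\n", 100 * 4 / (4 + 12 + 8) }'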

Paul

  #6  
Old January 1st 19, 08:01 AM posted to alt.comp.os.windows-10
Tim[_10_]

Wolf K wrote:

On 2018-12-31 17:16, Tim wrote:
I have a home-built system that uses an AMD A10-5800 APU. If I add an
outboard graphics card (I am thinking of the MSI Gaming GeForce GTX
1050) how does the CPU know to send the graphics operations to the
MSI instead of its onboard GPU?


AFAIK, the driver for the card takes care of that. If the card is
plug'n'play, then the system will automatically use it instead of the
integrated graphics.

BTW, talk to someone who has experience with this card. It may not
give you as much of a performance boost as you would like. Graphics
subsystems are tricky. They are in effect dedicated computers that
handle just the graphics tasks, which means that communication between
the graphics card and the motherboard is a crucial parameter. AIUI,
the mobo must have a bus fast enough to take advantage of the card's
speed. (As always, correction/clarification requested).

Good luck,

Well, since the card is PCIe x16, and my mobo supports that, I don't think
that is going to be a problem. And since what I am mainly looking for is
better performance in transcoding, I can't see really high traffic on the
bus anyway.
  #7  
Old January 1st 19, 08:07 AM posted to alt.comp.os.windows-10
Tim[_10_]

Tim wrote:

Wolf K wrote:

On 2018-12-31 17:16, Tim wrote:
I have a home-built system that uses an AMD A10-5800 APU. If I add
an outboard graphics card (I am thinking of the MSI Gaming GeForce
GTX 1050) how does the CPU know to send the graphics operations to
the MSI instead of its onboard GPU?


AFAIK, the driver for the card takes care of that. If the card is
plug'n'play, then the system will automatically use it instead of the
integrated graphics.

BTW, talk to someone who has experience with this card. It may not
give you as much of a performance boost as you would like. Graphics
subsystems are tricky. They are in effect dedicated computers that
handle just the graphics tasks, which means that communication
between the graphics card and the motherboard is a crucial parameter.
AIUI, the mobo must have a bus fast enough to take advantage of the
card's speed. (As always, correction/clarification requested).

Good luck,

Well, since the card is PCIE x16, and my MOBO supports that, I don't
think that is going to be a problem. And since what I am mainly
looking for is better performance in transcoding, I can't see really
high traffic on the buss anyway.

Additional info: my mobo has three PCIe 2.0 x16-length slots. One is a full
x16, one runs at x8, and one at x4. I don't know if the last one does more
than x4 electrically, but since I have a full x16 slot available I'm not
worried about it.
  #8  
Old January 1st 19, 09:01 AM posted to alt.comp.os.windows-10
Paul[_32_]

Tim wrote:
Wolf K wrote:

On 2018-12-31 17:16, Tim wrote:
I have a home-built system that uses an AMD A10-5800 APU. If I add an
outboard graphics card (I am thinking of the MSI Gaming GeForce GTX
1050) how does the CPU know to send the graphics operations to the
MSI instead of its onboard GPU?

AFAIK, the driver for the card takes care of that. If the card is
plug'n'play, then the system will automatically use it instead of the
integrated graphics.

BTW, talk to someone who has experience with this card. It may not
give you as much of a performance boost as you would like. Graphics
subsystems are tricky. They are in effect dedicated computers that
handle just the graphics tasks, which means that communication between
the graphics card and the motherboard is a crucial parameter. AIUI,
the mobo must have a bus fast enough to take advantage of the card's
speed. (As always, correction/clarification requested).

Good luck,

Well, since the card is PCIE x16, and my MOBO supports that, I don't think
that is going to be a problem. And since what I am mainly looking for is
better performance in transcoding, I can't see really high traffic on the
buss anyway.


Do you have a benchmark on the transcoding with
the prospective new video card?

https://video.stackexchange.com/ques...encoding-speed

"The NVENC engine does have licensing limitations when implemented
on a consumer level NVIDIA card: only 2 video transcoding threads can
be run simultaneously, even if you have multiple cards.
"

https://forums.plex.tv/t/best-video-...scoding/208929

"Nvidia consumer cards can only accelerate 2 streams at once, while
AMD cards have no hard limit.

So an RX460 would probably be a great low cost option.
"

Sounds like some real numbers would help. Nobody ever considers
the possibility a license may prevent them from using the
card for anything :-/

I just ran into the above by accident. I wasn't looking for that.

Paul
  #9  
Old January 1st 19, 01:35 PM posted to alt.comp.os.windows-10
Tim[_10_]

Paul wrote:

Tim wrote:
Wolf K wrote:

On 2018-12-31 17:16, Tim wrote:
I have a home-built system that uses an AMD A10-5800 APU. If I add
an outboard graphics card (I am thinking of the MSI Gaming GeForce
GTX 1050) how does the CPU know to send the graphics operations to
the MSI instead of its onboard GPU?

AFAIK, the driver for the card takes care of that. If the card is
plug'n'play, then the system will automatically use it instead of
the integrated graphics.

BTW, talk to someone who has experience with this card. It may not
give you as much of a performance boost as you would like. Graphics
subsystems are tricky. They are in effect dedicated computers that
handle just the graphics tasks, which means that communication
between the graphics card and the motherboard is a crucial
parameter. AIUI, the mobo must have a bus fast enough to take
advantage of the card's speed. (As always, correction/clarification
requested).

Good luck,

Well, since the card is PCIE x16, and my MOBO supports that, I don't
think that is going to be a problem. And since what I am mainly
looking for is better performance in transcoding, I can't see really
high traffic on the buss anyway.


Do you have a benchmark on the transcoding with
the prospective new video card ?

https://video.stackexchange.com/ques...hics-card-feat
ures-effect-nvidia-nvenc-hardware-encoding-speed

"The NVENC engine does have licensing limitations when implemented
on a consumer level NVIDIA card: only 2 video transcoding threads
can be run simultaneously, even if you have multiple cards.
"

https://forums.plex.tv/t/best-video-...scoding/208929

"Nvidia consumer cards can only accelerate 2 streams at once, while
AMD cards have no hard limit.

So an RX460 would probably be a great low cost option.
"

Sounds like some real numbers would help. Nobody ever considers
the possibility a license may prevent them from using the
card for anything :-/

I just ran into the above by accident. I wasn't looking for that.

Paul

Maybe I'm using the wrong terminology here. What I want to do is use
VideoProc (Digiarty) to convert a single H.264 mp4 file to H.265. I only
do one file at a time, so does the Nvidia limit apply? As an example, an
average 2GB mp4 file takes about 20 hrs to convert to H.265. I would
really like that to take a lot less time.
  #10  
Old January 1st 19, 03:27 PM posted to alt.comp.os.windows-10
Paul[_32_]

Tim wrote:
Paul wrote:

Tim wrote:
Wolf K wrote:

On 2018-12-31 17:16, Tim wrote:
I have a home-built system that uses an AMD A10-5800 APU. If I add
an outboard graphics card (I am thinking of the MSI Gaming GeForce
GTX 1050) how does the CPU know to send the graphics operations to
the MSI instead of its onboard GPU?

AFAIK, the driver for the card takes care of that. If the card is
plug'n'play, then the system will automatically use it instead of
the integrated graphics.

BTW, talk to someone who has experience with this card. It may not
give you as much of a performance boost as you would like. Graphics
subsystems are tricky. They are in effect dedicated computers that
handle just the graphics tasks, which means that communication
between the graphics card and the motherboard is a crucial
parameter. AIUI, the mobo must have a bus fast enough to take
advantage of the card's speed. (As always, correction/clarification
requested).

Good luck,

Well, since the card is PCIE x16, and my MOBO supports that, I don't
think that is going to be a problem. And since what I am mainly
looking for is better performance in transcoding, I can't see really
high traffic on the buss anyway.

Do you have a benchmark on the transcoding with
the prospective new video card ?

https://video.stackexchange.com/ques...hics-card-feat
ures-effect-nvidia-nvenc-hardware-encoding-speed

"The NVENC engine does have licensing limitations when implemented
on a consumer level NVIDIA card: only 2 video transcoding threads
can be run simultaneously, even if you have multiple cards.
"

https://forums.plex.tv/t/best-video-...scoding/208929

"Nvidia consumer cards can only accelerate 2 streams at once, while
AMD cards have no hard limit.

So an RX460 would probably be a great low cost option.
"

Sounds like some real numbers would help. Nobody ever considers
the possibility a license may prevent them from using the
card for anything :-/

I just ran into the above by accident. I wasn't looking for that.

Paul

Maybe I'm using the wrong terminology here. What I want to do is use
VideoProc (Digarty) to convert a single H.264 mp4 file to H.265. I only
do one file at a time, so does the Nvidia limit apply? As an example, an
average 2gb mp4 file takes about 20 hrs to convert to H.265. I would
really like to take a lot less time.


As long as the NVenc entry in Wikipedia covers the conversion
case you propose, you can do two of them simultaneously.

The 1050 is Pascal.

https://www.guru3d.com/articles_page..._review,4.html

https://en.wikipedia.org/wiki/Nvidia_NVENC

"Fourth generation NVENC implements HEVC Main10 10-bit hardware encoding.
It also doubles the encoding performance of 4K H.264 & HEVC when compared
to previous generation NVENC. It supports HEVC 8K, 4:4:4 chroma subsampling,
lossless encoding, and sample adaptive offset (SAO).

Nvidia Video Codec SDK 8 added Pascal exclusive Weighted Prediction
feature (CUDA based). Weighted prediction is not supported if the encode
session is configured with B frames (H.264).

There is no B-Frame support for HEVC encoding, and the maximum CU size is 32×32.

The NVIDIA GT 1030 and the Mobile Quadro P500 are GP108 chips that
don't support the NVENC encoder.
"

https://download.cnet.com/mac/digiar...6290866-1.html

I downloaded "videoproc.exe" 49,349,880 bytes and gave it a trial.
Both the NVidia 417 video driver plus the 2GB CUDA package were loaded
on the system. Since the trial only processes 5 minutes of video,
it's not a good thrashing.

CPU - 60% used, 9 minutes 30 seconds for 5 minute trial video

GPU - 44% encoder, 20% decoder, 30 seconds for 5 minute trial video

That's 19x faster, for whatever profile and pass count
the tool uses. Probably single pass for both.

High quality engine was not selected.
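
Back-of-envelope only, using the two figures already in this thread
(Tim's roughly 20-hour CPU conversion and the ~19x ratio above), and
assuming the ratio held for a full-length movie:

# ~20 hours of CPU-only encoding divided by the observed ~19x hardware speedup
awk 'BEGIN { printf "%.1f hours\n", 20 / 19 }'    # roughly an hour, if it scales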

Input video H.264 = 185,871,948 bytes

NVenc 81,330,000 bytes

CPU 17,284,000 bytes

Obviously, the profiles are not the same, invalidating
the test. If the CPU was made to do a sloppy job,
it likely would have finished sooner.

I don't know whether there is any way to see detail on the
profiles or not.

Paul
  #11  
Old January 1st 19, 06:43 PM posted to alt.comp.os.windows-10
Paul[_32_]

Paul wrote:
Tim wrote:
Paul wrote:

Tim wrote:
Wolf K wrote:
On 2018-12-31 17:16, Tim wrote:
I have a home-built system that uses an AMD A10-5800 APU. If I add
an outboard graphics card (I am thinking of the MSI Gaming GeForce
GTX 1050) how does the CPU know to send the graphics operations to
the MSI instead of its onboard GPU?

AFAIK, the driver for the card takes care of that. If the card is
plug'n'play, then the system will automatically use it instead of
the integrated graphics.

BTW, talk to someone who has experience with this card. It may not
give you as much of a performance boost as you would like. Graphics
subsystems are tricky. They are in effect dedicated computers that
handle just the graphics tasks, which means that communication
between the graphics card and the motherboard is a crucial
parameter. AIUI, the mobo must have a bus fast enough to take
advantage of the card's speed. (As always, correction/clarification
requested).
Good luck,

Well, since the card is PCIE x16, and my MOBO supports that, I don't
think that is going to be a problem. And since what I am mainly
looking for is better performance in transcoding, I can't see really
high traffic on the buss anyway.
Do you have a benchmark on the transcoding with
the prospective new video card ?

https://video.stackexchange.com/ques...hics-card-feat
ures-effect-nvidia-nvenc-hardware-encoding-speed
"The NVENC engine does have licensing limitations when implemented
on a consumer level NVIDIA card: only 2 video transcoding threads
can be run simultaneously, even if you have multiple cards.
"

https://forums.plex.tv/t/best-video-...scoding/208929

"Nvidia consumer cards can only accelerate 2 streams at once, while
AMD cards have no hard limit.

So an RX460 would probably be a great low cost option.
"

Sounds like some real numbers would help. Nobody ever considers
the possibility a license may prevent them from using the
card for anything :-/

I just ran into the above by accident. I wasn't looking for that.

Paul

Maybe I'm using the wrong terminology here. What I want to do is use
VideoProc (Digarty) to convert a single H.264 mp4 file to H.265. I
only do one file at a time, so does the Nvidia limit apply? As an
example, an average 2gb mp4 file takes about 20 hrs to convert to
H.265. I would really like to take a lot less time.


As long as the NVenc entry in Wikipedia covers the conversion
case you propose, you can do two of them simultaneously.

The 1050 is Pascal.


https://developer.nvidia.com/video-e...support-matrix

It turns out the higher-end cards can encode two movies at once.
The 1070 and 1080 have two encoders.
The 1050 and 1060 have one encoder.
The GT1030 has no encoder, only a decoder (for playback).

The Quadro P2000 looks to be similar to the GTX1050: it has one
encoder, but unlimited sessions. I presume that means it can
process more than one movie via some sort of timesharing.
It's "only" priced at 5x to 7x the price of the GTX1050 (because
it has "certified" drivers for CAD work), so it's not a serious
alternative.

I think the 1050 will "give you a taste".

In the Geforce 10 family, it looks like the blocks are
relatively similar. The 1080 has two blocks for encoding.
(A Titan has three blocks for encoding.) For a person
who wants to process one video, one block will suffice.

https://devtalk.nvidia.com/default/t...ed-comparison/

card        NVDEC   NVENC H264   NVENC H265   CUDA DEINTERLACE
GTX 1060:   2600    2600         1800         4000
GTX 1070:   2600    2600         1800         5000
GTX 1080:   2600    5200*        2600*        10000

Maybe the 1050 will be the same as the 1060.

The numbers quoted are FPS or frames per second. When
I did my test run, my 1280x720 movie processed at 320FPS
or only a fraction of the larger numbers above. The table
above is normalized to DVD resolution 720x576. My results
are still slower than they should be, according to that
table.
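
As a sanity check, scale the table's 720x576-normalized HEVC figure up to
the 1280x720 test clip (and assume, as guessed above, that the 1050 matches
the 1060's 1800 number); a bash-style sketch:

# Expected FPS at 1280x720 if the 720x576-normalized rate of 1800 FPS applied
echo $(( 1800 * 720 * 576 / (1280 * 720) ))    # ~810 FPS expected, versus ~320 FPS observed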

CUDA Deinterlace would be a function done by shaders. The other
columns would seem to be a dedicated encoder block.

I would be happier with this test run if I were comparing
apples to apples. The CPU-produced (5x smaller) file was junk;
it's really not fit to use. If the bandwidth limit were set higher
on whatever that preset is, the CPU might even finish faster
for all I know. I've tried checking the settings, and
the settings look identical whether hardware support is
switched on or off, but the output is quite different.

The NVENC H265 only gives you a moderate improvement
in file size, with whatever the defaults are. For some
reason, the settings on mine said the GOP was set to
250 frames, when a normal movie would use 12 or 15 frames
(about half a second's worth).

Input H264 185,871,948 bytes
NVenc H265 Main 81,330,000 bytes (in 30 seconds)

Paul


  #12  
Old January 1st 19, 08:46 PM posted to alt.comp.os.windows-10
Tim[_10_]

Paul wrote:

Paul, once again you are a voice leading the way. I am in awe of your
knowledge of these matters. I consider myself a reasonably decent
PC/Windows hack, but as you can tell, I don't have a lot of depth. Thank
you for taking the time to take me by the hand and lead me through the
swamp.
  #13  
Old January 2nd 19, 04:39 AM posted to alt.comp.os.windows-10
Paul[_32_]

Wolf K wrote:
On 2019-01-01 15:46, Tim wrote:
Paul wrote:

Paul, once again you are a voice leading the way. I am in awe of your
knowledge of these matters. I consider myself a reasonably decent
PC/Windows hack, but as you can tell, I don't have a lot of depth. Thank
you for taking the time to take me by the hand and lead me through the
swamp.


+1

And thank you, Tim, for raising a question some here (well, me anyway)
didn't know they wanted an answer to.


Well, I'm lucky I had the toy to play with :-)

The tricky part will be getting the quality out of it.
It's a bit like operating a meat grinder.

Paul
  #14  
Old January 2nd 19, 07:07 PM posted to alt.comp.os.windows-10
Tim[_10_]

Paul wrote:

Wolf K wrote:
On 2019-01-01 15:46, Tim wrote:
Paul wrote:

Paul, once again you are a voice leading the way. I am in awe of
your knowledge of these matters. I consider myself a reasonably
decent PC/Windows hack, but as you can tell, I don't have a lot of
depth. Thank you for taking the time to take me by the hand and lead
me through the swamp.


+1

And thank you, Tim, for raising a question some here (well, me
anyway) didn't know they wanted an answer to.


Well, I'm lucky I had the toy to play with :-)

The trick part will be getting the quality out of it.
It's a bit like operating a meat grinder.

Paul

What I'm expecting is that a two-year-old outboard card with all the
latest bells and whistles has to be a lot faster than a seven-year-old
GPU that doesn't have, or can't use, the latest software/hardware.

In one of the reviews I read, the reviewer had downloaded the free version
of VideoProc, which only allows a five-minute clip, and it transcoded the
clip in 30 seconds. That's a whole lot better than 22 hours for a two-hour
mp4. Wish me luck.
  #15  
Old January 2nd 19, 08:27 PM posted to alt.comp.os.windows-10
Paul[_32_]

Tim wrote:
Paul wrote:

Wolf K wrote:
On 2019-01-01 15:46, Tim wrote:
Paul wrote:

Paul, once again you are a voice leading the way. I am in awe of
your knowledge of these matters. I consider myself a reasonably
decent PC/Windows hack, but as you can tell, I don't have a lot of
depth. Thank you for taking the time to take me by the hand and lead
me through the swamp.

+1

And thank you, Tim, for raising a question some here (well, me
anyway) didn't know they wanted an answer to.

Well, I'm lucky I had the toy to play with :-)

The trick part will be getting the quality out of it.
It's a bit like operating a meat grinder.

Paul

What I'm expecting is that a two year old outboard card with all the
latest bells and whistles has to be a lot faster than a seven year old
GPU that doesn't have/isn't able to use the latest software/hardware.

One of the reviews I read the reviewer had downloaded the free version of
VideoProc that only allows a five minute clip, and it transcoded the clip
in 30 seconds. That's a whole lot better than 22 hours for a two hour
mp4. Wish me luck.


I have one more test result for you.

I tried HandBrake (which is also free); it uses NVEnc but
for some reason doesn't use the NVDec decoder. Converting my test
movie drew 220W of electricity, with the CPU running at 80%,
apparently doing the decode of the H264 input.

I next tried FFMPEG (which is the core of a lot of tools,
including VideoProc). VideoProc successfully uses both NVenc
and NVDec to make its output, and FFMPEG managed
to do the same thing. I kept an eye on the power there,
and the FFMPEG run (which uses only the video card) drew
160W, a savings of 60W. The CPU usage during the movie
conversion was 3-4% or so.

This isn't the perfect command (it switched the order of
the streams, and I need to move the audio copy stage forward
in the command just a little bit), but it at least gave
a movie that plays (an adjusted version is sketched a bit further
down). I had an FFMPEG version 4 or 4.1 build for this test.
The output -c:v should probably be "hevc_nvenc", as the older
"nvenc_hevc" spelling is deprecated. The audio stream is copied
straight from one movie to the other.

ffmpeg -hwaccel cuvid -c:v h264_cuvid -i KEY01.mp4 -c:v nvenc_hevc -preset slow -c:a copy output.mp4

The "h264_cuvid" is the hardware decoder hint, and relies on
the user determination it is h264. If you feed it a divx,
it would probably error out. If you don't give an input
hint, as in that special example, ffmpeg knows how to
use the software decoder to automatically figure out
the format. But when specifying hardware decoding acceleration,
the user is responsible for the accelerator spec.

A 6GB movie in gave a 2.7GB movie out in that case,
and I was not limited by the VideoProc trial's 5-minute limit
when using FFMPEG.
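
Folding in the caveats above (the preferred "hevc_nvenc" encoder name, and
explicit stream mapping so the video stays first), the command would look
something like the sketch below. It's an untested rearrangement of the line
above, assuming an FFMPEG 4.x build with NVENC/NVDEC support; the filenames
are placeholders.

# Decode H264 on the GPU (cuvid/NVDEC), encode HEVC on the GPU (NVENC), copy audio as-is.
# -map 0:v -map 0:a keeps the video stream ahead of the audio in the output.
ffmpeg -hwaccel cuvid -c:v h264_cuvid -i input_h264.mp4 -map 0:v -map 0:a -c:v hevc_nvenc -preset slow -c:a copy output_h265.mp4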

You can get a static build of FFMPEG here. Static means
all the libraries are linked into the EXE, for maximum portability.
Usually the release build works better than the nightly - not
that long ago, the nightly was missing key DLLs in the build.

https://ffmpeg.zeranoe.com/builds/

When you get your video card, install the NVidia driver
(from the website), plus download the 2GB CUDA package,
as I would swear NVenc was not detected until the CUDA kit
was present. The CUDA kit includes stuff to integrate with
Visual Studio, and it will moan a little bit if it does
not detect Visual Studio, but the important bits for your
project should then get installed.
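
If you end up driving it with FFMPEG as well, a quick way to confirm the
hardware paths are visible to the build you downloaded (a sketch; the
exact listing varies from build to build):

# Look for hevc_nvenc / h264_nvenc in the encoder list, and h264_cuvid among the decoders
ffmpeg -hide_banner -encoders
ffmpeg -hide_banner -decoders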

This is also a useful tool for when you want to know what
temperature the GPU is at. During my movie encode, the
video card was reported to draw 55W, and the temp was 44C or so.
In other words, because the shaders weren't running, the
card is loafing along. The clock is boosted, VCC is slightly
greater than 1V, and it's in VREL mode (the clock rate
is as high as the hardware will support while using
that increased voltage).

https://www.techspot.com/downloads/4452-gpu-z.html
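
If you'd rather watch the same numbers from a command prompt, nvidia-smi
(installed along with the NVIDIA driver) can poll them while an encode
runs; a small sketch:

# Temperature, board power and overall GPU load, refreshed every 2 seconds
nvidia-smi --query-gpu=temperature.gpu,power.draw,utilization.gpu --format=csv -l 2

# dmon adds separate encoder (enc) and decoder (dec) utilization columns
nvidia-smi dmon -s u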

If you run Furmark on your new card as a torture test,
the voltage drops to maybe 0.85V or 0.9V and the card is
power limited. The frequency decreases until the card
hits max_power exactly. But when transcoding video, the
frequency automatically rises higher, the voltage goes
up to the maximum needed to support that frequency, and the
power is not maxed out, so the card status is VREL instead
of POWER_LIMITED. GPU-Z reads all of that out for your
entertainment.

The two operating points are demonstrated in these pictures.

The first run is VREL (highly clocked) limited. The
second run is POWER limited, and the voltage and
frequency are not allowed to go as high. Your card
will have similar behaviors, but with a different
max power value. Furmark should still cause the voltage
to drop and the frequency to be lowered.

https://i.postimg.cc/GhvnCqFw/Smoke-Particles2.jpg

https://i.postimg.cc/85cZzPxf/furmark.jpg

Paul
 



