PC Enthusiast-Fi (PC Gaming/Hardware/Software/Overclocking)
May 6, 2016 at 5:52 PM Post #8,956 of 9,120
My R9 290X was less prone to crashing/glitching than my SLI 980 Ti setup, that has to be said, and AMD allows native 10-bit colour on its consumer-level cards, unlike Nvidia, where the Quadro gets it natively and the GeForce has it as a crude afterthought.
 
May 6, 2016 at 6:16 PM Post #8,957 of 9,120
My R9 290X was less prone to crashing/glitching than my SLI 980 Ti setup, that has to be said, and AMD allows native 10-bit colour on its consumer-level cards, unlike Nvidia, where the Quadro gets it natively and the GeForce has it as a crude afterthought.

 
Oh. 

I did not know that. 
 
I seriously hope AMD will release something worthy of consideration then. 
 
Like an MXM-compatible card. There is only one maker of socketed laptops so far, and that is Clevo. I really want a socketed laptop, as having one with everything soldered can mean problems down the line. 
 
I would prefer to be able to change the CPU or GPU at will rather than having to throw out the entire mobo and pay for a new one. 
 
Also, a 4K 99% Adobe RGB panel is a sweet thing for a laptop to have, as the P870 does. 
 
May 6, 2016 at 10:24 PM Post #8,958 of 9,120
Hmm, not really sure what the "native 10-bit color" thing means, really. Maybe you can teach me, but as far as I know, the main deciding factor is whether your display actually supports it. With the monitors I've used, you can easily run a GeForce card in 10-bit color mode as long as the output port can handle the bandwidth. Besides, 10-bit color doesn't do anything for you unless the programs/workflow you use actually make use of it, and/or you do critical print work. And since consumer monitors are only 8-bit at the moment, it doesn't make much sense to produce digitally in 10-bit just for other people to view it in 8-bit. 
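To put rough numbers on the bandwidth point, here's a back-of-the-envelope sketch (it ignores blanking intervals and link encoding overhead, so real requirements are a bit higher than these figures):

```cpp
#include <cstdio>

// Rough estimate of the uncompressed video bandwidth a display link must carry.
// Blanking intervals and link encoding overhead are ignored, so real-world
// requirements are somewhat higher than these figures.
double video_bandwidth_gbps(int width, int height, int refresh_hz, int bits_per_channel) {
    const double bits_per_pixel = 3.0 * bits_per_channel;  // R, G and B channels
    return width * height * static_cast<double>(refresh_hz) * bits_per_pixel / 1e9;
}

int main() {
    std::printf("4K60,  8-bit: ~%.1f Gbit/s\n", video_bandwidth_gbps(3840, 2160, 60, 8));
    std::printf("4K60, 10-bit: ~%.1f Gbit/s\n", video_bandwidth_gbps(3840, 2160, 60, 10));
    // DisplayPort 1.2 carries roughly 17.3 Gbit/s of payload, so 4K60 at 10-bit
    // fits, but only just.
    return 0;
}
```

In other words, 4K60 at 10-bit already eats most of a DisplayPort 1.2 link, which is why the port matters as much as the card.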

 
I liked AMD/ATI cards a lot for their value, but since their 79XX series, their power efficiency has fallen far behind Nvidia's, with the cards being more power hungry and noisier/hotter as well. Most games/programs tend to be optimized for Nvidia graphics anyway and run better on them than on AMD hardware of similar performance. Nvidia (by popularity) also arguably has better driver support and more frequent releases.
 
For software features, I highly prefer Nvidia's ShadowPlay capture as well, since it runs with very little performance impact and is faster than AMD's Raptr counterpart. AMD's Mantle API was promising, but it never really picked up with developers and died out; it's nowhere to be seen anymore.

 
Of course, these are just my biased opinions from experience, hehe. I would really love for AMD's Polaris cards to step up and challenge Nvidia again so that we could finally have lower prices and bargain choices on both sides. 

 
 
 
Edit: Pascal GeForce MSRP pricing and availability officially confirmed:

- GTX 1080 @ $599 (Nvidia's livestream announced it as being roughly 2x faster than the Titan X)
- GTX 1070 @ $379 (Nvidia's livestream announced it as still being faster than the Titan X)
- Available May 27th; aftermarket designs expected in early June
 
May 7, 2016 at 6:14 AM Post #8,959 of 9,120
Hmm, not really sure what the "native 10-bit color" thing means, really. Maybe you can teach me, but as far as I know, the main deciding factor is whether your display actually supports it. With the monitors I've used, you can easily run a GeForce card in 10-bit color mode as long as the output port can handle the bandwidth. Besides, 10-bit color doesn't do anything for you unless the programs/workflow you use actually make use of it, and/or you do critical print work. And since consumer monitors are only 8-bit at the moment, it doesn't make much sense to produce digitally in 10-bit just for other people to view it in 8-bit.

I liked AMD/ATI cards a lot for their value, but since their 79XX series, their power efficiency has fallen far behind Nvidia's, with the cards being more power hungry and noisier/hotter as well. Most games/programs tend to be optimized for Nvidia graphics anyway and run better on them than on AMD hardware of similar performance. Nvidia (by popularity) also arguably has better driver support and more frequent releases.

For software features, I highly prefer Nvidia's ShadowPlay capture as well, since it runs with very little performance impact and is faster than AMD's Raptr counterpart. AMD's Mantle API was promising, but it never really picked up with developers and died out; it's nowhere to be seen anymore.

Of course, these are just my biased opinions from experience, hehe. I would really love for AMD's Polaris cards to step up and challenge Nvidia again so that we could finally have lower prices and bargain choices on both sides.

Edit: Pascal GeForce MSRP pricing and availability officially confirmed:

- GTX 1080 @ $599 (Nvidia's livestream announced it as being roughly 2x faster than the Titan X)
- GTX 1070 @ $379 (Nvidia's livestream announced it as still being faster than the Titan X)
- Available May 27th; aftermarket designs expected in early June

 
 
That "2x faster than the Titan X" figure is for VR, and only for very specialized games that are supposed to run their own optimizations, on an Intel Extreme CPU with all overclocking enabled. 
 
May 7, 2016 at 6:19 AM Post #8,960 of 9,120
I have an LG 31MU97 that supports all kinds of formats (my preferred is DCI-P3 Sim) and is native 10-bit. It makes a difference for editing, but I'm more of a gamer than anything else, so it was either stick with AMD (which I could've done, but when I saw what the 980 Ti could do, I was hooked) or move to the dark side...

There is an option for 10-bit (or even 12-bit) output in the Nvidia control panel, but from what I understand it isn't native (or even active) the way it is on the Quadros and AMD cards...
 
May 7, 2016 at 8:30 AM Post #8,961 of 9,120
   
 
That "2x faster than the Titan X" figure is for VR, and only for very specialized games that are supposed to run their own optimizations, on an Intel Extreme CPU with all overclocking enabled. 

 
Really? That's still what they listed during the pricing reveal, whether they meant VR or not, but they did say at the beginning of the reveal that the "GTX 1080 is faster than 2 GTX 980s in SLI" and is "3 times more power efficient than the Titan X."
 
It's a bit contradictory without a solid reference point, because two GTX 980s in SLI would be faster than a single Titan X, yet they said the GTX 1080 is twice as fast as both setups.
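To illustrate why those two claims don't pin down one number, here's a quick sketch; the Titan-X-vs-980 ratio and the SLI scaling factor are placeholder assumptions, not benchmark results:

```cpp
#include <cstdio>

// Why the two marketing claims don't pin down a single number.
// Both ratios below are placeholder assumptions, not benchmark results.
int main() {
    const double titan_x_vs_980 = 1.5;  // assumed: Titan X ~1.5x a single GTX 980
    const double sli_scaling    = 1.7;  // assumed: 980 SLI ~1.7x a single GTX 980

    // Claim A: "faster than 2x GTX 980 in SLI" -> a lower bound, in Titan X terms
    const double claim_a_lower_bound = sli_scaling / titan_x_vs_980;
    // Claim B: "2x faster than the Titan X"
    const double claim_b = 2.0;

    std::printf("Claim A implies  > %.2fx the Titan X\n", claim_a_lower_bound);
    std::printf("Claim B implies ~= %.2fx the Titan X\n", claim_b);
    // The gap between ~1.13x and 2x is exactly why a solid reference point matters.
    return 0;
}
```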
 
Best not to make assumptions until official review samples come out for benchmarks. 
 
 
 
 
I have an LG 31MU97 that supports all kinds of formats (my preferred is DCI-P3 Sim) and is native 10-bit. It makes a difference for editing, but I'm more of a gamer than anything else, so it was either stick with AMD (which I could've done, but when I saw what the 980 Ti could do, I was hooked) or move to the dark side...

There is an option for 10-bit (or even 12-bit) output in the Nvidia control panel, but from what I understand it isn't native (or even active) the way it is on the Quadros and AMD cards...

 
I finally found an answer; I was wondering about it too: http://nvidia.custhelp.com/app/answers/detail/a_id/3011/~/10-bit-per-color-support-on-nvidia-geforce-gpus
 
NVIDIA GeForce graphics cards have offered 10-bit per color out to a full screen DirectX surface since the GeForce 200 series GPUs. Due to the way most applications use traditional Windows API functions to create the application UI and viewport display, this method is not used for professional applications such as Adobe Premiere Pro and Adobe Photoshop. These programs use OpenGL 10-bit per color buffers which require an NVIDIA Quadro GPU with DisplayPort connector.

 
So I guess when I switched my 4K DP monitor to 10-bit/30-bit mode it did absolutely nothing. At least nothing until I play a DirectX game or application with 10-bit color support. 
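For anyone curious what that "full screen DirectX surface" path looks like in practice, here's a minimal, untested sketch (the helper name is just for illustration) of asking Direct3D 11 for a 10-bit-per-channel back buffer via DXGI_FORMAT_R10G10B10A2_UNORM; whether the panel actually receives 10 bits still depends on the monitor, the port, and the driver's output settings:

```cpp
#include <windows.h>
#include <d3d11.h>
#pragma comment(lib, "d3d11.lib")

// Untested sketch, error handling omitted: request a 10-bit-per-channel
// back buffer from Direct3D 11. This is the full-screen DirectX path that
// NVIDIA's note describes; the OpenGL 10-bit buffer path is the one that
// needs a Quadro.
bool create_10bit_swap_chain(HWND hwnd, IDXGISwapChain** swap_chain,
                             ID3D11Device** device, ID3D11DeviceContext** context) {
    DXGI_SWAP_CHAIN_DESC desc = {};
    desc.BufferCount = 2;
    desc.BufferDesc.Width = 3840;
    desc.BufferDesc.Height = 2160;
    desc.BufferDesc.Format = DXGI_FORMAT_R10G10B10A2_UNORM;  // 10 bits per color channel
    desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    desc.OutputWindow = hwnd;
    desc.SampleDesc.Count = 1;
    desc.Windowed = FALSE;  // NVIDIA's note applies to full-screen surfaces

    const HRESULT hr = D3D11CreateDeviceAndSwapChain(
        nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
        nullptr, 0, D3D11_SDK_VERSION,
        &desc, swap_chain, device, nullptr, context);
    return SUCCEEDED(hr);
}
```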
 
 
On the other hand, 10/30-bit doesn't really do too much except smooth out color transitions and reduce the likelihood of banding. So unless your monitor has a Rec. 2020 gamut or something wider than Adobe RGB, 10/30-bit color might not benefit you... at least not as much as something like calibration or having a wide-gamut display to begin with.
 
May 7, 2016 at 11:49 AM Post #8,962 of 9,120
Well, in the past they said the GTX 980 would be 2x faster than the 780 Ti.
 
It wasn't.
I think they probably meant DX12 performance, but Pascal doesn't have async shaders.
 
 
I liked AMD/ATI cards a lot for their value, but since their 79XX series, their power efficiency has fallen far behind Nvidia's, with the cards being more power hungry and noisier/hotter as well. Most games/programs tend to be optimized for Nvidia graphics anyway and run better on them than on AMD hardware of similar performance. Nvidia (by popularity) also arguably has better driver support and more frequent releases.

For software features, I highly prefer Nvidia's ShadowPlay capture as well, since it runs with very little performance impact and is faster than AMD's Raptr counterpart. AMD's Mantle API was promising, but it never really picked up with developers and died out; it's nowhere to be seen anymore.

Of course, these are just my biased opinions from experience, hehe. I would really love for AMD's Polaris cards to step up and challenge Nvidia again so that we could finally have lower prices and bargain choices on both sides.

Edit: Pascal GeForce MSRP pricing and availability officially confirmed:

- GTX 1080 @ $599 (Nvidia's livestream announced it as being roughly 2x faster than the Titan X)
- GTX 1070 @ $379 (Nvidia's livestream announced it as still being faster than the Titan X)
- Available May 27th; aftermarket designs expected in early June
 

The Fury X and 980 Ti are neck and neck in power consumption, but where the Fury X is a bit slower in DX11, it absolutely destroys the 980 Ti in DX12.
The 380X is a tad more power hungry than the 960, but its power consumption isn't too far off. And it's faster.
 
The Fury X, 380 and 380X are on a newer GCN revision (GCN 1.2, third-gen). 
 
Uh, Mantle was killed off by AMD themselves before the new games ever came out, because they contributed its IP to Vulkan.
 
Oh god, the *70 class will be nearly $600 again... (at least here).
 
May 7, 2016 at 2:12 PM Post #8,963 of 9,120
Too bad the Fury X is not nearly as widely available as the 980 Ti, and don't forget the limited 4GB of VRAM compared to the Ti's 6GB, which makes the Fury a rather poor choice for 4K. Also, that's comparing HBM to GDDR5, lol.

Not sure I can count 30-35% more power used by the 380/380X over the GTX 960 as "similar power consumption". It would be nice if it were also close to 30% faster than the 960, but it only manages about 5-10% at best.

Yeeeeah, that Vulkan API... I'm happy that we finally have a popular cross-platform interface going, but Vulkan and DirectX 12 are definitely going to wage war when the time comes.
 
May 7, 2016 at 6:37 PM Post #8,965 of 9,120
I guess my overclocked 980 Ti SLI setup shouldn't be too worried this generation... I'll start saving some funds for the 1180 Ti.
 
May 7, 2016 at 10:11 PM Post #8,966 of 9,120
Too bad the Fury X is not nearly as widely available as the 980 Ti, and don't forget the limited 4GB of VRAM compared to the Ti's 6GB, which makes the Fury a rather poor choice for 4K. Also, that's comparing HBM to GDDR5, lol.

Not sure I can count 30-35% more power used by the 380/380X over the GTX 960 as "similar power consumption". It would be nice if it were also close to 30% faster than the 960, but it only manages about 5-10% at best.

Yeeeeah, that Vulkan API... I'm happy that we finally have a popular cross-platform interface going, but Vulkan and DirectX 12 are definitely going to wage war when the time comes.

The GTX 960 is 120W; the 380 and 380X are less than 150W in reality. And yes, the 380X is easily 15% faster.
 
And in certain situations the 980 Ti draws more power than the Fury X.
 
May 7, 2016 at 11:44 PM Post #8,967 of 9,120
The GTX 960 is 120W; the 380 and 380X are less than 150W in reality. And yes, the 380X is easily 15% faster.
 
And in certain situations the 980 Ti draws more power than the Fury X.
 

 
It's just what I've read in reviews anyway. The power consumption of the 380X is a lot more in line with the GTX 970, which actually averages around the same or even slightly fewer watts than the 380X (though the 380X is obviously not as fast as the GTX 970). The Tonga chip used in the 380/380X isn't much of an improvement, performance- or power-wise, over the original 2012 Tahiti 7970/7950. That's somewhat expected, since it's only the first new revision after three generations of refreshing/rebranding. 

 
 
 
[chart: GPU power consumption comparison]

 
 
Likewise, its performance per watt is much closer to Nvidia's Kepler 7XX cards (vastly beaten by Maxwell), and the performance per dollar spent is not fantastic either.
 
 

 
May 11, 2016 at 7:44 AM Post #8,968 of 9,120
   
It's just what I've read in reviews anyway. The power consumption of the 380X is a lot more in line with the GTX 970, which actually averages around the same or even slightly fewer watts than the 380X (though the 380X is obviously not as fast as the GTX 970). The Tonga chip used in the 380/380X isn't much of an improvement, performance- or power-wise, over the original 2012 Tahiti 7970/7950. That's somewhat expected, since it's only the first new revision after three generations of refreshing/rebranding.

[chart: GPU power consumption comparison]

Likewise, its performance per watt is much closer to Nvidia's Kepler 7XX cards (vastly beaten by Maxwell), and the performance per dollar spent is not fantastic either.
 
 

Measured power consumption can be a bit misleading because GPUs often vary from one sample to another, like audio devices do.
Oh, and the R9 380 is definitely faster, the 4GB version is cheaper, and the 380 sometimes has better frametimes. Last but not least... the GTX 970 is rated at 145W TDP and the R9 380 at 190W; if it's closer to the GTX 970, then either AMD is damn bad at estimating or Nvidia is lying. Which they have done before; would you like 3.5GB?
And the difference is so minor; this is not a 290/290X vs. the 970 situation.
 
I'm no AMD fanboy, but that is just bollocks.
 
May 12, 2016 at 9:32 AM Post #8,970 of 9,120
Measured power consumption can be a bit misleading because GPUs often vary from one sample to another, like audio devices do.
Oh, and the R9 380 is definitely faster, the 4GB version is cheaper, and the 380 sometimes has better frametimes. Last but not least... the GTX 970 is rated at 145W TDP and the R9 380 at 190W; if it's closer to the GTX 970, then either AMD is damn bad at estimating or Nvidia is lying. Which they have done before; would you like 3.5GB?
And the difference is so minor; this is not a 290/290X vs. the 970 situation.
 
I'm no AMD fanboy, but that is just bollocks.

 
Don't worry about it. Maxwell was built specifically with efficiency in mind, so it's not surprising to see these numbers in comparison. 

 
There are obviously going to be small variances in power draw between the same card from different vendors because of different components, but more often than not, results like these are mirrored quite closely across credible sites that run these benches at reference clocks and voltages.
 
Even if the absolute wattage differs because of a different test setup, the different cards' power draws are still a similar percentage apart between reviews. Guru3D (the benchmark above with the blue bars) monitors a live baseline of system power draw and subtracts it while a full typical load is placed only on the GPU. TechPowerUp is even more accurate, measuring the power draw directly at the PCI-E power connectors plus the PCI-E bus slot, bypassing all other system components and factors.
 
The one condition where the GTX 970 draws slightly more power than the 380X is under peak, unrealistic loads (because of its higher power ceiling allowance, which is very useful for overclockers), such as when stressed with FurMark. For everything else, it's often the other way around in power consumption. And it's been shown that the 970 is 20-30% faster, which, at similar power draw, would also make the 970 around 20-30% more power efficient than the 380X.
 
Similarly, the 380 is not 20% faster nor is the 380X 30% faster than the GTX 960, so there's still a large power efficiency disparity there.
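To make the percentage reasoning above concrete, here's a tiny sketch; the ratios are placeholders for the sake of illustration, not measured figures:

```cpp
#include <cstdio>

// Illustration of the percentage reasoning: performance per watt scales directly
// with performance when power draw is (roughly) equal. Ratios are placeholders,
// not measurements.
double perf_per_watt_ratio(double perf_ratio, double power_ratio) {
    // perf_ratio  = card A performance / card B performance
    // power_ratio = card A power draw  / card B power draw
    return perf_ratio / power_ratio;  // > 1.0 means card A is the more efficient one
}

int main() {
    // A card that is 25% faster at the same power draw:
    std::printf("25%% faster, same power      -> %.2fx perf/watt\n", perf_per_watt_ratio(1.25, 1.00));
    // A card that is 10% faster but draws 30% more power:
    std::printf("10%% faster, 30%% more power -> %.2fx perf/watt\n", perf_per_watt_ratio(1.10, 1.30));
    return 0;
}
```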
 
However, it must be said that, pricing-wise, the GTX 970 is currently about 25% more expensive than the 380X, which makes them quite even in value by pure performance-per-dollar standards (while still having roughly the same power consumption). And as you said, the 3.5GB versus 4GB issue may swing potential buyers toward AMD, at least those who know about the "Nvidia scandal". (I personally won't forgive Nvidia for that technically false 4GB advertising either.)
 
 
