PC Enthusiast-Fi (PC Gaming/Hardware/Software/Overclocking)
Mar 14, 2016 at 1:16 AM Post #8,851 of 9,120
   
Acer Nitro VN7-791G
i7-4710HQ
GTX 860M 2GB
16GB RAM
1TB HDD + M.2 256GB SSD
17" FullHD IPS LG Display LP173WF4-SPF1
 
A Clevo P870 with a full config costs about 3X the price of my current setup, and performs about 350% better on GPU (desktop GTX 980) and about 130-150% better on CPU (desktop i7-6700). Also, it comes with a 4K AUO display offering 100% Adobe RGB coverage, which is, I think, the best part. The Hyperdimension Neptunia Rebirth series and Dota 2 never asked too much of my GPU.
 
Another thing I am wondering is whether I should go with the Acer Predator instead, because it offers reversible airflow for anti-dust protection, or whether that is a marketing gimmick. But the Acer comes only with a GTX 980M, and I doubt that will manage decent performance on a 4K display.
 
I have no idea how much better Pascal can be, whether it is worth the wait, or whether Clevo will offer swappable GPU cards with Pascal modules (the desktop GTX 980 comes on a replaceable MXM card; it is not soldered).

I honestly don't know what to tell you; laptops are almost the last platform to get new-generation high-end graphics.
 
How about building a desktop PC with a budget GPU like a 960 or 950, then upgrading to Pascal when it is released?
 
Mar 14, 2016 at 3:38 AM Post #8,852 of 9,120
:frowning2:
 
Not sure why people would upgrade now instead of waiting. Pascal is literally just around the corner. The cards are being announced next month and they will be available to buy as early as May (if things go as planned).
 
Under the best possible conditions with professional workloads, Nvidia has already claimed Pascal is 10 times faster than current-generation cards. 
In applications such as gaming, that translates to about 2 times current performance.
 
That said, however, this is with HBM. The new GTX 1080 will be using GDDR5X, which is still about 30% faster than GDDR5.
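As a rough back-of-the-envelope sketch (the per-pin rates below are illustrative assumptions, not confirmed GTX 1080 specs), peak memory bandwidth is just bus width times per-pin data rate; depending on the clocks you assume, GDDR5X lands roughly 30-45% ahead:
 
[code]
# Peak theoretical bandwidth (GB/s) = bus width (bits) / 8 * per-pin data rate (Gbps)
def peak_bandwidth_gbps(bus_width_bits, data_rate_gbps):
    return bus_width_bits * data_rate_gbps / 8.0

gddr5  = peak_bandwidth_gbps(256, 7.0)   # e.g. GTX 980: 256-bit @ 7 Gbps -> 224 GB/s
gddr5x = peak_bandwidth_gbps(256, 10.0)  # assumed 256-bit @ 10 Gbps -> 320 GB/s

print("GDDR5:  %.0f GB/s" % gddr5)
print("GDDR5X: %.0f GB/s (+%.0f%%)" % (gddr5x, (gddr5x / gddr5 - 1) * 100))
[/code]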
 
If you want HBM2, that will have to wait a little longer and might be reserved for the 1080Ti and/or the next Titan.
 
Mar 14, 2016 at 1:54 PM Post #8,853 of 9,120
Quote: "How about building a desktop PC with a budget GPU like a 960 or 950, then upgrading to Pascal when it is released?"
 
No, a PC won't do.
 
I am a student, and I travel a lot between home, work, and the student dorms. In this respect, even the P870 from Clevo does not seem like the best idea, as it will be a 7 kg setup after all :eek: (5.5 kg laptop + 1.3 kg power brick + external HDD + mouse + FiiO X5II).
 
The weight I train with is 10 kg per arm, so this is going to be pretty heavy for me to carry.
 
 
 
Quote: "Not sure why people would upgrade now instead of waiting. Pascal is literally just around the corner. [...]"

 
 
Yeah... but if I might ask: why would VRAM matter at all if the GPU core is too slow to take advantage of it? (I've actually never understood this.)
 
About waiting for Pascal: you, as a PC owner, must wait, let's say, 2 months, but I, as a laptop owner, must wait at least another year or more until a full desktop Pascal card is made into an MXM module and sold for laptops.
 
I would not expect an M version of Pascal to be faster than the MXM desktop version of the GTX 980 (that would be like the GTX 980M beating the GTX 780, which AFAIK is not the case yet).
 
I think it took Nvidia a year or more to bring the desktop 980 to MXM, counting from the release of the GTX 980 and GTX 980M.
 
Now I think it will take even longer to bring a 1080 to MXM, as a 1080M would probably be slower than a desktop 980.
 
Waiting for more than a year while using a GTX 860M is probably out of the question, as I need the power for work-related projects, not for gaming. :D

 
Mar 15, 2016 at 4:55 AM Post #8,854 of 9,120
   
Quote: "Why would VRAM matter at all if the GPU core is too slow to take advantage of it? [...] Waiting for more than a year while using a GTX 860M is probably out of the question."

 
Unfortunately, VRAM does matter in this era of GPUs, because GDDR5 is in fact becoming a technical bottleneck for many reasons. Examples include gradually larger PCBs (bigger cards), higher power consumption (less battery life), more heat (more fan noise), the need for very high memory clock speeds (less stability), and the high access latency between the GPU and the GDDR5 (which worsens frametimes and microstuttering).
 
Faced with the 4K-resolution marketing juggernaut, the 2GB of VRAM on many modern cards is simply not enough. In fact, several games are known to gobble more than 2GB of VRAM at standard 1080p. Even 4GB of VRAM is very iffy for 4K on something like the GTX 980. All the more so if you're an avid photographer working with high-resolution RAW files; add to that the fact that you can't simply combine the VRAM of two or more cards in SLI or Crossfire.
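To get a feel for the arithmetic, here's a minimal sketch; the render-target counts are hypothetical and real engines use compression, but it shows how quickly 4K targets plus textures eat into a 2GB budget:
 
[code]
# Size of one uncompressed render target, in MiB (RGBA8 = 4 bytes/pixel)
def buffer_mib(width, height, bytes_per_pixel=4):
    return width * height * bytes_per_pixel / 2.0 ** 20

color   = buffer_mib(3840, 2160)      # ~31.6 MiB for one 4K color buffer
gbuffer = 4 * buffer_mib(3840, 2160)  # ~126.6 MiB for a hypothetical 4-target G-buffer

print("One 4K RGBA8 buffer: %.1f MiB" % color)
print("4K G-buffer (4 targets): %.1f MiB" % gbuffer)
# The buffers themselves are modest; the real gigabytes come from the
# high-resolution textures streamed in alongside them.
[/code]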
 
In order to get more VRAM, we have to spam more GDDR5 chips, which multiplies the problems mentioned above. By stacking HBM2, we can solve all of them in one neat package. It's just not yet affordable for mass production at this time... :(

 
As you can probably tell from how long it's been out (since 2008, on the ATI Radeon 4870), GDDR5 is a very mature technology and we've reached much of its limits... if you have a moment, give this a read: http://motherboard.vice.com/read/what-high-bandwidth-memory-is-and-why-you-should-care
 
 
Also, you can't really tell when companies like Nvidia will release their mobile GPU counterparts either; sometimes earlier, sometimes later. The mobile GPU market has been a shady one at best, and not even AMD is exempt.
 
If you recall, the Maxwell/Kepler mobile GTX 8xx series was released a little over half a year before the desktop Maxwell GTX 9xx series. But here's where it gets confusing: some of the 8xx series used Maxwell chips and some used rebranded Kepler chips.
 
For example, your current GTX 860M could be based on either the newer Maxwell GM107 or the older Kepler GK104. They perform quite differently, the Maxwell being about 10% faster than the Kepler while also drawing less power, and the Kepler version obviously lacks Maxwell's newest features. To top it off, both can come in either 2GB or 4GB VRAM versions depending on the OEM configuration. Regardless of all this, Nvidia is apparently evil and still brands both GPUs as the exact same GTX 860M even though they are very different. I'd advise using GPU-Z to check, if you don't already know which architecture your 860M is.
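For what it's worth, here's a hedged sketch of a similar check without GPU-Z, assuming a Windows machine with the stock wmic tool; the DEV_xxxx code in the output is the PCI device ID that identifies the actual silicon behind the marketing name:
 
[code]
# Windows-only sketch: list GPUs with their PCI IDs.
# Look for PCI\VEN_10DE&DEV_xxxx in the output; VEN_10DE is Nvidia, and the
# DEV_xxxx code differs between the Maxwell and Kepler variants of the 860M.
import subprocess

out = subprocess.check_output(
    ["wmic", "path", "win32_VideoController", "get", "Name,PNPDeviceID"],
    universal_newlines=True)
print(out)
[/code]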
Ultimately, we will have to wait for the announcements in April to know more about Pascal's release for desktop and mobile.
 
 
Mar 15, 2016 at 11:30 AM Post #8,855 of 9,120
   
Quote: "Unfortunately, VRAM does matter in this era of GPUs, because GDDR5 is in fact becoming a technical bottleneck [...]"
 

The question now is not just about the capacity of VRAM but also about its bandwidth/speed. With the massive amounts of textures and the high framerates we are trying to push now, the bandwidth of GDDR5 is reaching its limits, as the Vice article mentions. HBM circumvents that with extremely high bandwidth, which allows very fast texture access and rewriting on the VRAM itself, while resources not in use are swapped out to system RAM. It is a similar concept to RAM and the page file on a typical computer (though Windows 10 now handles this slightly differently). We are reaching the end of GDDR5, especially with the latest GPUs, and HBM is going to fill the gap between GPU processing power and the demands of games/gamers, along with the professional market, where almost anything can happen.
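To put a rough number on that swapping cost (peak figures; real-world behaviour is messier), compare the local memory bus with the PCIe link that spilled resources travel over:
 
[code]
# Why spilling out of VRAM stutters: a resource swapped to system RAM must
# come back over the PCIe link instead of the local memory bus.
vram_bandwidth = 224.0  # GB/s, e.g. GTX 980 GDDR5 (256-bit @ 7 Gbps)
pcie3_x16      = 15.75  # GB/s, PCIe 3.0 x16 (~985 MB/s per lane)

print("Local VRAM is ~%.0fx faster than fetching over PCIe"
      % (vram_bandwidth / pcie3_x16))  # ~14x
[/code]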
 
I didn't know the mobile chips were also affected by the stupid architecture swapping; I recall the GTX 7xx chips had something like that.
 
Mar 15, 2016 at 12:42 PM Post #8,857 of 9,120
Quote: "The question now is not just about the capacity of VRAM but also about its bandwidth/speed [...]"

 
Indeed. ;)
 
 
This is the very reason GDDR5 memory clocks are being pushed so high, and why they are more likely to be unstable. With a wider bus providing the bandwidth, there's room to dial the frequencies down, as evidenced by the slow 500MHz memory clock on the HBM1-based AMD Fury X.
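The arithmetic behind that, using the published bus widths and effective rates, shows how a 500MHz clock still wins:
 
[code]
# Peak theoretical bandwidth (GB/s) = bus width (bits) / 8 * per-pin rate (Gbps)
def bandwidth_gbps(bus_width_bits, data_rate_gbps):
    return bus_width_bits * data_rate_gbps / 8.0

gddr5 = bandwidth_gbps(256, 7.0)   # GTX 980: narrow bus, 7 Gbps per pin -> 224 GB/s
hbm1  = bandwidth_gbps(4096, 1.0)  # Fury X: 4096-bit bus, 500 MHz DDR = 1 Gbps -> 512 GB/s

print("GDDR5 (GTX 980): %.0f GB/s" % gddr5)
print("HBM1  (Fury X):  %.0f GB/s" % hbm1)
[/code]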
 
It's analogous to how the CPU clock-speed wars were resolved by introducing multiple cores and threads instead of pushing frequencies through the roof on a single core.
 
Unlike graphics memory, system RAM hasn't quite reached the same limits yet. In fact, real-world tests are inconclusive, with DDR4- and DDR3-based systems trading blows with one another. RAM is often not a deciding factor or a bottleneck as long as you have enough of it to avoid hard-drive/storage thrashing.
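The same kind of peak-bandwidth arithmetic applied to dual-channel system RAM shows why: DDR4 is well ahead on paper, yet few desktop workloads ever saturate either, so real-world results trade blows:
 
[code]
# Peak theoretical bandwidth for dual-channel, 64-bit (8-byte) DDR channels
def ram_bandwidth_gbps(data_rate_mts, channels=2, bus_bytes=8):
    return data_rate_mts * bus_bytes * channels / 1000.0

ddr3 = ram_bandwidth_gbps(1600)  # dual-channel DDR3-1600 -> 25.6 GB/s
ddr4 = ram_bandwidth_gbps(2400)  # dual-channel DDR4-2400 -> 38.4 GB/s

print("DDR3-1600: %.1f GB/s, DDR4-2400: %.1f GB/s" % (ddr3, ddr4))
[/code]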
 
 
The mobile GPU market, since it's mostly OEM-based, sees no light or truth behind corporate barriers, as cliche as that might sound... Nvidia rebrands its mobile chips all the time from generation to generation and sells them cheap to OEMs, meaning you really don't know what you're going to get, either in the architecture itself or in the amount of VRAM the OEM decided to implement. This is why you should only buy laptops from reputable brands that can answer these questions and assure you won't get ripped off. :P

 
About 3 years ago, I bought my first laptop for school and it had a GT 740M. I played some games and ran some benchmarks, but I decided to return it anyway because it only had a 1366x768 resolution. The next laptop I tried had very similar specs, with the same GT 740M listed.
 
However, as I benchmarked this one and played some games at the same resolution and under the same testing conditions (same OS, AC plugged in, power plan set to "High Performance"), it averaged about 20FPS lower than the laptop I had just returned. Whereas the returned laptop would do about 50FPS, this one only managed 30-35FPS. After some research I concluded that they used different chips but were both still sold as the GT 740M.
So Nvidia can call two different chips that perform 60% apart from one another a GT 740M. That was the moment I realised I could never trust laptop GPUs again. :P

 
And yes, GDDR5X will most definitely be the next logical and economical step for AMD's Polaris and the bulk of Nvidia's Pascal.
 
Mar 15, 2016 at 1:03 PM Post #8,858 of 9,120
   
Quote: "This is the very reason GDDR5 memory clocks are being pushed so high [...] That was the moment I realised I could never trust laptop GPUs again."

 
I would trust Clevo. They are pushing a 200W desktop GPU chip in a laptop.
 
About different chips, it is totally true. But nowadays there is Notebookcheck, who test in depth and tell you exactly which chips are where.
 
Look: http://www.notebookcheck.net/Schenker-XMG-U726-Clevo-P870DM-Notebook-Review.153136.0.html
 
And a link to my laptop, which, after your advice, I just confirmed with GPU-Z has exactly that chip.
 
http://www.notebookcheck.net/Acer-Aspire-V17-Nitro-VN7-791G-759Q-Notebook-Review.126701.0.html
 
Mar 15, 2016 at 1:22 PM Post #8,859 of 9,120
   
Quote: "Unfortunately, VRAM does matter in this era of GPUs [...] Ultimately, we will have to wait for the announcements in April to know more about Pascal's release for desktop and mobile."
 

 
 
The read was interesting. Very, actually. 
 
Now my question remains. 
 
Should I actually wait? I mean, look at the reasons not to wait :D

 
I mean, if it takes 1-1.5 years until I can get a laptop with a desktop 1080 and HBM, I think I can safely buy a Clevo with a desktop 980 now. The GPU module alone costs around 1500 EUR, and I think one could safely sell it later for 900-1000 EUR.
 
Yes, I would take a considerable hit, but I would also enjoy a 350% better GPU, a 150% better CPU, and a much better display than I have now. And since MXM 3.0b is a standard, if the 1080 ever comes to laptops with HBM, it should arrive as an MXM 3.0b card too.
 
Mar 15, 2016 at 1:53 PM Post #8,860 of 9,120
I would wait at least one more month to see the news before deciding, because we can't be certain about many things right now (I'm curious about AMD's Polaris too). Unless you depend on a new laptop right away.

 
Technology evolves either way, and sometimes you have to buy out of need. Other times you can take your pick of great deals on used or retired flagships, or buy new on a generational leap with massive performance improvements that will last you longer.
 
If Pascal performs as well as it's hyped, this will be a generational leap bigger than the last 2 or even 3 generations combined. :)
 
 
Mar 15, 2016 at 2:17 PM Post #8,861 of 9,120
   
Quote: "This is the very reason GDDR5 memory clocks are being pushed so high [...] RAM is often not a deciding factor or a bottleneck."

The thing with DDR4 vs. DDR3 is that system RAM is never really pushed to its bandwidth or speed limits; it's usually capacity that causes issues, because there are always bottlenecks elsewhere in the chain limiting how much RAM bandwidth gets used. HDDs, SSDs and the CPU are some of those limits. The only time clock speed has made a difference is in extreme CPU overclocking, where RAM clocks and timings can matter.
 
Interesting analogy, never thought of it like that...
 
 
 
Quote: "About 3 years ago, I bought my first laptop for school and it had a GT 740M. [...] That was the moment I realised I could never trust laptop GPUs again."

That's dumb as screw.
 
 
Quote: "I would wait at least one more month to see the news before deciding [...] If Pascal performs as well as it's hyped, this will be a generational leap bigger than the last 2 or even 3 generations combined."
 

http://arstechnica.co.uk/gaming/2016/03/amd-gpu-vega-navi-revealed/
 
Aww shiz.
 
Mar 15, 2016 at 2:52 PM Post #8,862 of 9,120
Quote: "I would wait at least one more month to see the news before deciding [...]"
 

 
I suspected as much. Well, I will try to hold my horses as much as I can, for sure. :D Maybe the release of Polaris will lower prices for actual desktop-grade GPUs in laptops even further.
 
It will probably go the same way as with my current laptop: paid 1250 EUR for an i7-4710HQ, GTX 860M 2GB, 16GB RAM, 1TB HDD + 256GB SSD, IPS FHD, and I am still trying to sell it for 930 EUR.
 
Mar 15, 2016 at 2:55 PM Post #8,863 of 9,120
Quote: "The thing with DDR4 vs. DDR3 is that system RAM is never really pushed to its bandwidth or speed limits [...] Aww shiz."

 
omg. That article was posted today lol. AMD Vega? Must be Vegeta's twin...

 
I just know that RAM overclocking is not worth it in my experience. :( It takes too much time to get stable and literally doesn't make a difference in any benchmark. I don't even bother trying to raise the clock speed, because I know many RAM sticks don't do well, or even POST, when set higher than their rated frequency, even with overvolting. You're more likely to get better results keeping the native clock speed and lowering the timings, which is what I do... but still, it seems to be the biggest waste of time for no gain. :P
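A quick sketch of the latency math behind that: first-word latency in nanoseconds barely moves when you raise the clock and the CAS latency together, which is why tightening timings at the native clock is the better bet:
 
[code]
# First-word latency (ns): CL cycles * cycle time, where one memory-clock
# cycle is 2000 / data_rate ns (DDR transfers twice per clock).
def first_word_latency_ns(cas_latency, data_rate_mts):
    return cas_latency * 2000.0 / data_rate_mts

print(first_word_latency_ns(9, 1600))   # DDR3-1600 CL9  -> 11.25 ns
print(first_word_latency_ns(11, 2133))  # DDR3-2133 CL11 -> ~10.3 ns
print(first_word_latency_ns(8, 1600))   # same clock, tighter CL -> 10.00 ns
[/code]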

 
Oh yeah, and Intel Kaby Lake is also releasing before the end of the year... so maybe a double CPU/GPU upgrade :cool:
 
 
Mar 15, 2016 at 4:03 PM Post #8,864 of 9,120
If I recall correctly, RAM OCing is only worth something for extreme CPU OCing: not just "oh I want 20% more powah", but the people who push it over the freaking edge. I think it makes a difference there; I've never tested it myself, so I only know what I've read in forums, articles and YouTube.
 
Mar 15, 2016 at 4:04 PM Post #8,865 of 9,120
   
Quote: "omg. That article was posted today lol. [...] Oh yeah, and Intel Kaby Lake is also releasing before the end of the year..."
 

 
OffT: Do people who OC their hardware do it only to get higher benchmark scores, or do they actually run the equipment overclocked day to day? I mean, would one endanger their CPU or GPU to get 5% or 10% more out of it?
 
On: How often does one need to upgrade a motherboard? Theoretically, the Clevo laptop I'm eyeing accepts a desktop-size CPU and an MXM GPU, so as long as LGA1151 CPUs and MXM GPUs are still made, I should be free to upgrade, given that power and cooling are sufficient. The display already has 100% Adobe RGB, so that is enough. The motherboard could age, though, and it is not replaceable, so how does a motherboard age? :D Or how often should it be changed?
 
I was pleased with an i5-450 + ATI 5740 from late 2010 all the way to mid 2015, so I guess I am not a performance addict?
 
