23 Comments
StealthX32 - Friday, April 5, 2013
I haven't tested OpenCL for CS6, but for non-HTML5 video, Flash still pegs my CPU and pushes it over 100 °C. Thanks a lot, Adobe.

thesavvymage - Friday, April 5, 2013
Yeah, if your CPU is going over 100 °C, that's not Flash, that's your CPU. It should never, under any circumstances, pass 100 °C. You need an upgrade, or to clean the dust off your fan.

Flunk - Friday, April 5, 2013
Yes, if your CPU is heating up to 100 °C, that's a hardware issue. There is no reason a CPU shouldn't be able to run at 100% load constantly (if it's actually processing something).

StealthX32 - Friday, April 5, 2013
It's a 2.26 GHz Core 2 Duo in a 13" MacBook Pro (2009 era). I thought this was standard operating procedure for Flash on OS X.

Death666Angel - Friday, April 5, 2013
Be mad at Apple then. A program should use as much CPU time as it can get to finish its task as fast as possible. If the PC or laptop cannot handle the stress, that's the manufacturer's fault. It sounds like the cooling solution doesn't hold up to prolonged load. Get someone to replace the thermal paste and clean the heatsink for you.

Tegeril - Friday, April 5, 2013
It's not. Unless you're viewing video on a player written over four years ago, the entire video rendering pipeline in Flash is hardware accelerated.

AndreElijah - Friday, April 5, 2013
Dude, don't blame Adobe; that's ALL Apple. Furthermore, it has NOTHING to do with Adobe's pro video offerings like Premiere, which is what this article covers. I work professionally as a film editor and have owned 7 MacBook Pros in the last 7 years. Apple has replaced them all due to severe overheating. I have a 17" 2011 MacBook Pro that regularly overheats past 100 degrees Celsius, with graphical artifacting, whenever it renders something on the dedicated GPU. When you put a hot processor in a chassis less than an inch thick, that's what happens. My next machine will be an HP 8770w or Dell Precision 6700 (the Haswell equivalents), because they are both powerful and properly cooled, something Apple does not provide.

Oberoth - Sunday, April 7, 2013
Are we past the days of QuickSync not working if you have another GPU in your PC? Back in the Sandy Bridge days, QuickSync only worked when you had a monitor attached to one of the integrated GPU's outputs. Is that still the case?

I will be buying a Haswell setup when it's out, but I'll also be using a dedicated GPU. Will the GPU on the CPU still function? Will software like HandBrake and Premiere Pro still have access to QuickSync?
Also, there are often concerns over quality with GPU-assisted encoding; is this still the case with OpenCL? Which would be the better option (mainly for quality, but also speed): a new high-ish-end AMD or Nvidia card, sticking with QuickSync, or disabling all GPU acceleration and encoding purely on the x86 cores?
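Quality questions like this one are usually settled by measuring the encoder output against the source rather than by eyeballing it, most simply with PSNR. A minimal pure-Python sketch; the frame values and which encoder produced which output are made-up assumptions for illustration:

```python
import math

def psnr(reference, encoded, max_value=255):
    """Peak signal-to-noise ratio between two equally sized sequences
    of pixel values; a higher score means closer to the reference."""
    if len(reference) != len(encoded):
        raise ValueError("frames must be the same size")
    mse = sum((r - e) ** 2 for r, e in zip(reference, encoded)) / len(reference)
    if mse == 0:
        return float("inf")  # identical frames
    return 10 * math.log10(max_value ** 2 / mse)

# Two hypothetical encodes of the same source frame:
source = [16, 32, 64, 128, 200, 255]
encode_a = [15, 33, 64, 127, 201, 254]  # e.g. one encoder's output
encode_b = [10, 40, 60, 120, 210, 250]  # e.g. another encoder's output
print(psnr(source, encode_a) > psnr(source, encode_b))  # True: A is closer
```

Run over many frames of a real clip, a comparison like this (or SSIM, which tracks perception better) would answer the "quality from GPU-assisted encoding" question objectively for any given encoder pair.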
shnurov - Friday, April 5, 2013
I don't understand the part where they say that 'lower GPU usage is better.' I mean, if 100% of the GPU is available, I'd love to see Premiere use 100% and complete the render ASAP, not sit at 35% usage. I hope support will be extended to the non-FireGL cards as well, like the 7970/6970/etc.
Death666Angel - Friday, April 5, 2013
The way I read it, they are talking about efficiency: with the two cards doing the same work, the AMD cards finish faster while using fewer resources.

mayankleoboy1 - Friday, April 5, 2013
Why not make it even faster, then, and use more power?

geniusloci - Saturday, April 6, 2013
mayankleoboy1: are you really this stupid?

CiccioB - Wednesday, April 17, 2013
Your answer is stupid, not his question. You should explain why it is not possible to get faster results by using more of the available GPU computational power. If it is there, why not exploit it?

But I suppose you can't come up with a sensible answer, which is why you just swallow all of AMD's PR blah-blah without understanding anything about its (ir)relevance.
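The utilization-versus-efficiency argument in this sub-thread comes down to simple arithmetic: what a comparison should count is GPU-seconds consumed per job, not the utilization percentage alone. A sketch with made-up timings:

```python
def gpu_seconds(render_time_s, avg_utilization):
    """Total GPU time consumed by a render: wall-clock seconds
    multiplied by average utilization (0.0 to 1.0)."""
    return render_time_s * avg_utilization

# Hypothetical numbers: card A finishes in 60 s at 35% load,
# card B takes 80 s at 100% load.
card_a = gpu_seconds(60, 0.35)  # ~21 GPU-seconds
card_b = gpu_seconds(80, 1.00)  # 80 GPU-seconds

# Card A is both faster (lower wall clock) and more efficient
# (fewer GPU-seconds). Pushing its utilization higher would only cut
# wall-clock time further if the GPU is actually the bottleneck,
# rather than PCIe transfers, CPU-side decode, or memory bandwidth.
print(card_a < card_b)  # True
```

Low utilization alongside a fast finish can therefore mean the job is bottlenecked elsewhere in the pipeline, which is consistent with both readings above.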
mayankleoboy1 - Friday, April 5, 2013
Even on OpenCL, NV and AMD have different implementations, so a single, fully optimized common code base is not possible. Externally it may look like there is one code path for both; internally, the code does a hardware check to see which GPU is in use and takes a different code path for each manufacturer.

HighTech4US - Saturday, April 6, 2013
Ryan: Can we get a head-to-head review with both AMD and Nvidia products? Test AMD with OpenCL and Nvidia with both OpenCL and CUDA.
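mayankleoboy1's point above, one outwardly common OpenCL front end that internally branches per vendor, can be sketched as a dispatch on the device's vendor string. The vendor strings below mirror what OpenCL's `clGetDeviceInfo(CL_DEVICE_VENDOR)` query typically returns, but the kernel-variant names are hypothetical:

```python
def pick_kernel_variant(device_vendor):
    """Select a tuned kernel variant from the vendor string an OpenCL
    device reports. The variant names are illustrative only."""
    vendor = device_vendor.lower()
    if "advanced micro devices" in vendor or "amd" in vendor:
        # GCN-tuned path: e.g. 64-wide wavefronts, LDS-heavy tiling
        return "scale_frame_gcn"
    if "nvidia" in vendor:
        # Kepler-tuned path: e.g. 32-wide (warp-sized) work-groups
        return "scale_frame_kepler"
    if "intel" in vendor:
        return "scale_frame_generic_intel"
    return "scale_frame_reference"  # safe fallback for unknown hardware

print(pick_kernel_variant("Advanced Micro Devices, Inc."))  # scale_frame_gcn
print(pick_kernel_variant("NVIDIA Corporation"))            # scale_frame_kepler
```

This is also why "OpenCL vs. OpenCL" numbers between vendors can hide very different amounts of per-vendor tuning inside the same application.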
geniusloci - Sunday, April 7, 2013
There's absolutely zero need for that. AMD destroys Nvidia in OpenCL. And the 6xx-series Nvidia cards suck at CUDA as well.
HighTech4US - Sunday, April 7, 2013
And we should all bow down to you, the all-knowing "AMD Fanboi God"?

Not going to happen.
What you seem to be saying is: please, please, Mr. Ryan, don't do one-on-one tests against Nvidia, for you may find proof that AMD's constant OpenCL PR doesn't live up to the hype.
Just as the "CrossFire is Broken" problems AMD is currently having were only found by testing AMD's CrossFire against Nvidia's SLI:
http://www.pcper.com/reviews/Graphics-Cards/Frame-...
AMD fanbois (like yourself) seem very afraid of direct testing of open standards like OpenCL. That alone should be reason enough to run tests between AMD and Nvidia.
TheJian - Monday, April 8, 2013
Yes, I'd like to see AMD tested against NV with each running its respective fastest implementation. For AMD that means OpenCL only (no CUDA, though it could also be OpenGL), and for NV I'm guessing CUDA will run quicker (again, OpenGL could be faster, but my guess is always CUDA where CUDA is an option, which it is for all the major rendering/content-creation apps). They need to be compared. Tom's always shows OpenCL but acts like CUDA doesn't exist.

Reality is, OpenCL isn't very useful so far except for free stuff that means nothing. Nobody makes money on Folding@home... LOL. So few people do bitcoin mining that it's pointless to even benchmark it. The Civ5 benchmark doesn't tell us much about compute either; it's just an example of something that COULD exist if more game makers followed that path.

I'm not sure why sites like AnandTech refuse to benchmark things like Blender, Adobe (take your pick of the apps in the CS suite), Vegas, SolidWorks, Pro/E, CADDS 5; the list of apps with OpenGL/CUDA options is long. OpenCL is slowly getting added to things, but ignoring all the apps pros use daily just because they haven't caught the OpenCL bug yet is ridiculous and disingenuous to your readers. When was the last time AnandTech did a CUDA test against anything else? Heck, I wouldn't even care if they did it with the fastest app per card, like rendering something in Vegas for AMD and Premiere for NV (whatever, insert the best app for each). If you can rip or render something much faster in one app than another, you could test whatever you find to be the best app/API combo per GPU maker. If I had an NV card at home and wanted to do some movie making, I'd seek out the best CUDA-enabled app for whatever I was trying to do, and I'd do the same if I owned AMD at home (which I currently do).
As long as quality and results were comparable, I'd choose the fastest one I could get for my GPU.

If you're going to say something like "but at a minimum it looks like OpenCL is coming to parity with CUDA (or close enough)," you should have to prove it with benchmarks of one versus the other. I see Ryan's love affair with AMD coming through again, as there is no proof AMD's implementation is at parity (or even close) with CUDA. At best this is an AMD-fanboy comment based on an AMD press release, with next to nothing backing the statements from AMD or Ryan. Let me know when you test it, or when Adobe shows benchmarks of AMD/OpenCL blowing away the same app tested with CUDA/NV. This should already be part of your GPU benchmarking here anyway, instead of the Folding@home/bitcoin stuff nobody runs. Only 163,000 users have Folding@home installed, out of what, 352 million computers sold each year? Why bother benching that? Even less can be said for bitcoin mining. Both run up your electric bill and get you nothing (botnets make money on mining these days, not single home users).

http://folding.stanford.edu/

No love for folding, as most of us aren't stupid enough to bloat our electric bills for the next great pill or cancer cure that company X, not us, will get rich from. :) I get an expensive warm and fuzzy feeling, NOTHING more.
Also, NV can always turn on support for all its cards if it wants to stop losses on the consumer side. Though I'd argue they probably don't care, as any pro or company will cough up the money for a REAL card with ECC and backed drivers for professional content work. You don't set up your Pixar rendering department for your next movie on Radeon 7970s, for example; you buy Teslas, Quadros, FireGLs, etc. There's a reason pro cards exist and cost what they do. But you don't need a pro card to benchmark Adobe CS on AMD/NV, and the same can be said of many apps that already support OpenGL and CUDA. CUDA works with all Nvidia GPUs from the G8x series onward, including the GeForce, Quadro, and Tesla lines. They own 65% of the discrete market, so why act like it doesn't exist?
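The methodology TheJian is proposing, benchmark each vendor with its fastest app/API combination and then compare the winners, reduces to a small selection step. A sketch with made-up timings (the app names and numbers are illustrative, not measurements):

```python
def best_combo(timings):
    """Given {(app, api): seconds} measured on one GPU for the same
    project, return the fastest (app, api) pair and its time."""
    return min(timings.items(), key=lambda item: item[1])

# Hypothetical render times, in seconds, for the same project:
nvidia_results = {("Premiere", "CUDA"): 95, ("Premiere", "OpenCL"): 120}
amd_results = {("Premiere", "OpenCL"): 100, ("Vegas", "OpenCL"): 90}

print(best_combo(nvidia_results))  # (('Premiere', 'CUDA'), 95)
print(best_combo(amd_results))     # (('Vegas', 'OpenCL'), 90)
```

The comparison between the two winners is then apples-to-apples in the sense that matters to a buyer: each GPU gets its best available software path, provided output quality is verified to be comparable first.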
CiccioB - Wednesday, April 17, 2013
It all depends on the type of work you are doing. If you run one of those silly single-operation benchmarks, you can come up with results that are much faster on AMD than on Kepler. If you run real, complex work, where the architecture as a whole is stressed, Nvidia's solutions will most probably surpass AMD's while using less power.

GK104 is the most efficient GPU for integer and single-precision calculations (in fact they made a Tesla card out of it as well, clearly showing there's a market interested in integer and/or SP work beyond DP), and GK110 is the most efficient solution for double-precision calculations. That's what the professional HPC benchmarks show.

You may continue to believe whatever makes you feel better, though; even that this limited OpenCL support, arriving a couple of years after the competition's, means anything.
B3an - Sunday, April 7, 2013
I'd hope they also bring OpenCL support to After Effects. That can be slow as **** even on the very fastest, overclocked hardware.

Solidstate89 - Wednesday, April 10, 2013
I don't have anything against OpenCL, but if they were going to add GPU compute on Windows, why wouldn't they just use the DirectCompute API, which is available on every GPU that runs Windows?

lmcd - Saturday, April 13, 2013
Question: is the OpenCL support for APUs limited to the expensive "pro" APUs only? It wasn't really mentioned whether consumer GPUs benefit.

CiccioB - Wednesday, April 17, 2013
AMD needs some PR facts to show during its next quarterly results presentation, as the income numbers will be quite low and minus signs will be everywhere.

So after announcing that they have lost 20% of CPU revenue, that the GPU division is still not making money, that they have lost yet more market share, and that they have nothing new to bring to market until the end of the year for either CPUs or GPUs, they can finally say: but we got OpenCL support in Adobe Premiere Pro.
This quarter AMD's trump cards are the 7990, which will be announced as the fastest video card, ignoring that it is completely outside the PCIe spec at almost 400 W (yes, put two inefficient GPUs together and you get a doubly inefficient card, however uselessly fast), and OpenCL support in niche markets, ignoring the fact that the competition holds more than 80% of the professional market with NO OpenCL support in the applications those customers use (meaning OpenCL is not a credible solution for professional usage).
They obviously won't mention that they are more than a year late relative to the competition.

But, well, that's what you get when you think low prices that bring no revenue are a good thing.