RE: I just wonder why it runs slow

Quote:
I just wonder why it runs slow on the HD6990 since the HD6990 is 7 times faster than the GTX560 on Milkyway tasks.

Simple: the implementation is not yet fully optimized for AMD's GPU architecture.

Quote:
Also, I have to run the 955BE at 50% CPUs to allow two cores to service the HD6990... otherwise it chokes. This is similar to Moo! Wrapper. Very heavy CPU use... one full core per GPU. You can see the difference in CPU times on the two systems... ATI uses a ton more CPU...

Please see my previous posts: that's most likely due to a known bug in the AMD driver.

Oliver
Tex1954: What driver are you running?
Tex1954
What driver are you running on the ATI??
They say there is a bug in newer ATI drivers that consumes 100% CPU.
Look at my tasks.
http://albertathome.org/host/1353/tasks
I have a CPU load between 15-25%
I have an ATI 5850 and use Catalyst 11.9
Perhaps you can try the 11.9 drivers; they may work with the 69xx as well.
I think it is worth a try.
Nice card, BTW, and it looks, as you say, as if the card has 2 GPUs which need 2 CPU cores to feed them.
RE: RE: I just wonder why it runs slow
Thank you. I didn't know about an AMD bug since I only received the 6990 recently and it's the first ATI/AMD GPU product I have ever had. Started with 3DFX Voodoo's back when and been Nvidia since... figured I would try the fastest card in the world to see how it would crunch...
And BTW, the new AMD FX processors don't crunch well at all... at least the FX-8120 stinks...
http://www.overclock.net/t/1143670/fx-8120-boinc-benchmarks-real-world-stuff
8-)
RE: Tex1954: What driver are you running?
I have the 11.9 drivers as well running on Win7-64b... I don't know about that bug, but it does in fact require 2 cores out of 4 to keep it fed...
8-)
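A side note on the "50% CPUs" workaround mentioned above: later BOINC clients (7.0.40 and newer, if I recall correctly) let you reserve a full CPU core per GPU task via an app_config.xml file placed in the project's directory, instead of capping overall CPU usage. This is only a sketch; the app name below is a placeholder, so check client_state.xml for the real application name before using it:

```xml
<!-- app_config.xml: place in the project directory under BOINC/projects/ -->
<app_config>
  <app>
    <!-- Placeholder name; look up the actual app name in client_state.xml -->
    <name>einstein_opencl</name>
    <gpu_versions>
      <!-- One task per GPU, and reserve one full CPU core to feed it -->
      <gpu_usage>1.0</gpu_usage>
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

With <cpu_usage>1.0</cpu_usage>, the BOINC scheduler counts one whole core as busy per GPU task, so it starts one fewer CPU task rather than oversubscribing the cores.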
Tex1954: Hmm...
Tex1954,
Hmm...
You are using 11.9, the same as me.
I use a 5xxx, you a 6xxx.
I use 1 GPU, you 2 GPUs.
I use 32-bit, you 64-bit.
My guess is that the ATI drivers, multi-GPU, and OpenCL together make the CPU core go to 100%.
Albert uses 2 feeding cores to supply your 2 GPUs;
whatever the CPU load is here at Albert, it still requires 1 free CPU core to feed each GPU.
I guess if you try running Milkyway and Collatz your CPU usage will be much lower.
I hope ATI can work with the OpenCL programmers to fix this bug.
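The one-full-core-per-GPU behaviour is consistent with a driver that busy-waits (spin-polls) on GPU completion instead of blocking or sleeping between checks. This is only a sketch of the mechanism, not AMD's actual driver code: a worker thread stands in for the GPU, and the two wait strategies are compared by the CPU time they burn.

```python
import threading
import time

def gpu_stub(event, seconds=0.2):
    """Stand-in for the GPU: 'finishes' after some wall-clock time."""
    time.sleep(seconds)   # no CPU consumed while 'the GPU' is working
    event.set()

def wait_for(event, spin):
    """Return CPU seconds consumed while waiting for `event` to be set."""
    start = time.process_time()
    if spin:
        while not event.is_set():
            pass              # busy-wait: pegs a core for the whole run
    else:
        while not event.is_set():
            time.sleep(0.001)  # sleep-poll: ~1 ms latency, near-zero CPU
    return time.process_time() - start

def measure(spin):
    """Run the GPU stub once and measure the CPU cost of waiting for it."""
    ev = threading.Event()
    t = threading.Thread(target=gpu_stub, args=(ev,))
    t.start()
    cpu = wait_for(ev, spin)
    t.join()
    return cpu

if __name__ == "__main__":
    print(f"busy-wait CPU time:  {measure(spin=True):.3f} s")
    print(f"sleep-poll CPU time: {measure(spin=False):.3f} s")
```

On a 0.2 s "GPU task", the busy-wait burns roughly 0.2 s of CPU (a whole core for the duration), while the sleep-poll burns almost none. If the driver's internal wait loop works like the first variant, every GPU needs a dedicated CPU core just to wait on it, which matches what people are seeing here.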
RE: Tex1954: Hmm...
That is exactly correct and true for the HD6990. In contrast, the Nvidia CUDA tasks use very little CPU resources.
ATI/AMD seems a bit behind CUDA in their APIs, for sure. Let's just hope they progress more quickly and perhaps get their Stream API working more easily. OpenCL may be nice for games or whatever, but linking against generalized rather than optimized libraries has to be part of the problem, especially considering the massive 2 GB of memory available to each GPU on the HD6990.
It certainly seems something is lacking in whatever SDK is being used...
:)
RE: And: I don't think
Well, if OpenCL applied to Nvidia GPUs slows them down, I am against it! LOL!
But, I have no real idea what is really required...
However, I suspect we all are on the same track and wish the same goals... that being the most efficient and speedy code that can be achieved.
I'm happy to be able to crunch Einstein tasks on ATI at all! That's a great leap ahead already! All good on the developers so far!
8-)
I'm using the ATI 11.11 drivers
I'm using the ATI 11.11 drivers with the 2.5 SDK installed. On my i7 2600K @ 4.7 GHz, I typically see 3-4% of the total CPU usage going to the Albert OpenCL app... I have an ATI 6970 and am running Windows 8 x64. My typical GPU load is around 80%.
RE: I'm happy to be able to crunch Einstein tasks on ATI at all!
Thanks, that's exactly our focus right now. I mean, we not only had to implement the OpenCL version of our algorithm but also update the server backend as well as the client requirements, since we are the first project deploying a "real" (fully BOINC-compliant) OpenCL app on all three major platforms (Windows, Mac, Linux). This is not straightforward, in particular when AMD's drivers come into play... Again, we first aim at supporting AMD GPUs at all; then we'll start optimizing the app. And hey, our test rigs show a performance drop of only 20% compared to our CUDA app - not bad for a first release (though you can't directly compare those results anyway)...
Best,
Oliver