As some of you may know, I’m currently working on converting physical simulations to run on a GPU using CUDA. A very interesting task, which leads me to read interesting papers and get into research areas I would normally never have looked at. One thing I noticed, though, when it comes to GPGPU, is that almost everything you read on the subject sounds like some kind of infomercial. “Your GPU slices and dices, sends Christmas cards to your friends and keeps your feet warm at night.” The GPU is often seen as some sort of magic bullet. Does your algorithm lack performance? Throw it onto the GPU and it will run 100 times faster… Once you start programming with it, however, you notice that in practice it is not that easy. I found three very interesting sites/blogs with a slightly more down-to-earth view on GPGPU, and even though I don’t agree with everything they say, they’re quite a good read:
Perhaps not too surprising from a vendor who’s trying to push multi-core processors employing ray tracing to replace rasterization-based graphics. But it still makes some valid points.
One of the better reviews I’ve read. A must read!!
“People with the expertise, persistence, and bloody-mindedness to keep slogging away will undoubtedly see phenomenal speedups for some application kernels.”
That must be me, I guess…
A good “checklist” to keep in mind when thinking about porting your work to the GPU. Especially point number 3.
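To make the “not that easy” point a bit more concrete, here is a minimal sketch of the kind of thing that trips people up: a SAXPY-style kernel written with the CUDA runtime API. The computation is trivial (one multiply-add per element), so the time spent copying data across the PCIe bus usually dwarfs the kernel itself. The array size and launch configuration here are just illustrative choices, not anything from the posts above.

```cuda
// Sketch only: a memory-bound SAXPY where the host<->device copies
// typically cost more than the actual computation.
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];  // one multiply-add per 12 bytes of traffic
}

int main(void)
{
    const int n = 1 << 20;              // 1M elements, an arbitrary example size
    const size_t bytes = n * sizeof(float);

    float *x = (float *)malloc(bytes);
    float *y = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);

    // For a kernel this cheap, these two copies (plus the copy back)
    // are often where most of the wall-clock time goes.
    cudaMemcpy(dx, x, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, y, bytes, cudaMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

    cudaMemcpy(y, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", y[0]);

    cudaFree(dx);
    cudaFree(dy);
    free(x);
    free(y);
    return 0;
}
```

Unless you can keep the data resident on the device across many kernel launches, or the arithmetic intensity is much higher than this, the “100x speedup” simply never materializes.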
I still think there is a lot of potential for GPGPU work and research, but it needs to be applied correctly and with careful consideration. Okay, back to my GPU for some further experimentation. Perhaps I’ll write a blog post about my experiences one of these days. Fun stuff… really.