As some of you may know, I’m currently working on converting physical simulations to run on a GPU using CUDA. It’s a very interesting task that leads me to read interesting papers and dig into research areas I would normally never even have looked at. One thing I’ve noticed, though, is that when it comes to GPGPU, almost everything you read on the subject sounds like some kind of infomercial. “Your GPU slices and dices, sends Christmas cards to your friends and keeps your feet warm at night”. The GPU is often seen as some sort of magic bullet: does your algorithm lack performance? Throw it onto the GPU and it will run 100 times faster… Once you start programming with it, however, you notice that in practice it is not that easy. I found three very interesting sites/blogs that take a slightly more down-to-earth view of GPGPU, and even though I don’t agree with everything they say, they make for quite a good read: