A bunch of great papers related to Larrabee from Siggraph 08. Great perspective on parallel computing in general.

My read is that the GPU guys don’t have a clue. A lot of their talent has moved on. They’re focused on maximizing flops without realizing that it isn’t always the flops that matter – it’s how you spend them. There’s for sure a market for teraflops of raw power, but LRB’s feature set maps so beautifully to rendering that I think it’s going to more than make up for the lower raw throughput.

The trend in rendering is towards branch-heavy shaders doing random memory access. GPUs are fast when you’re running branchless shaders with linear access patterns – but the trend is forcing them into the realm that Intel has dominated for thirty years.

I guess we’ll see where things are in five years. But you can see which way I want things to go. ;)

Author: Ben Garney

See @bengarney for details.