Supercomputing for the Masses

Supercomputers are used for computationally intensive work such as massive data crunching. When you think of one, the image that comes to mind is a supercooled, room-sized machine managed by men in lab coats with clipboards in their hands. Sadly, because of the exorbitant cost of building a supercomputer and the manpower needed to maintain it, smaller organizations that could benefit from this computing power are priced out.

Is there any way smaller organizations can benefit from supercomputing? Yes, there is. It is entirely possible to build a respectably fast computing environment from a mix of GPUs, CPUs and FPGAs without spending a lot of money. Another alternative is to run applications on the cloud. Both approaches, however, raise two tricky issues. The first is how to make the most of the resources at your disposal. The second is determining the best mix of resources for an application in terms of price and performance.
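The second problem, choosing a mix of hardware by price and performance, can be framed as a simple optimization. The sketch below is purely illustrative: the device mixes, throughput figures and prices are hypothetical, not real benchmarks, and `best_mix` is a name invented for this example.

```python
# Hypothetical sketch: ranking candidate hardware mixes by
# price-performance. All names and numbers are illustrative.

def best_mix(configs):
    """Return the configuration with the highest GFLOPS per dollar."""
    return max(configs, key=lambda c: c["gflops"] / c["price_usd"])

configs = [
    {"name": "4x GPU",        "gflops": 40000, "price_usd": 6000},
    {"name": "32-core CPU",   "gflops": 2000,  "price_usd": 3000},
    {"name": "FPGA + 2x GPU", "gflops": 24000, "price_usd": 5000},
]

print(best_mix(configs)["name"])  # the GPU-heavy mix wins on these numbers
```

In practice you would weigh more than raw throughput per dollar, such as power draw, memory capacity and how well your workload maps onto each device, but the same ranking idea applies.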

Some enterprising companies have already started providing supercomputing resources built on the OpenCL platform. These systems can accelerate applications by several orders of magnitude without requiring them to be rewritten. Embedded engineers who need faster computing resources can give these services a try.