Imagining the Next Supercomputers with ‘LittleFe’

Jonathan Senning, right, in his office with student Peter Story ’14 and the model supercomputer they built together.

Last month while attending SC12, an international conference on high performance computing (HPC) in Salt Lake City, Utah, Jonathan Senning, professor of mathematics and computer science, did something he’d wanted to do for a while: he built a hands-on model supercomputer that he can use in his class for the first time next spring. Thanks to the National Science Foundation, Senning and his student Peter Story ’14 also spent the week exploring ideas we’ll probably see in the future. Here’s how he described it:

“High performance computing is everywhere today. Weather forecasting, molecular modeling, mapping the genome, economic modeling, simulation, and visualization are just some of the areas that work with large data sets and need substantial computing power. So when Peter Story, a computer science & mathematics double major, and I were selected to participate in a fully-funded HPC Educators program for faculty and/or students from undergraduate colleges, I knew this was an exciting opportunity for Gordon. 

“We also received an additional grant for a small parallel cluster designed for HPC education and spent most of Monday assembling it as part of a ‘build-out’ event.  The cluster, named ‘LittleFe’ by its designers—a play on ‘big iron,’ a term originally used to describe large mainframe computers—is a model of a modern supercomputer.  It operates in the same way and has essentially all the same parts and programming modes as today’s supercomputers.  It’s just not as fast or as large, and it certainly doesn’t use as much power.

“This spring I’ll be teaching a new course called Parallel and High Performance Computing and will be able to use LittleFe.  The course will explore the three main forms of parallel and distributed processing in use today: shared memory multiprocessing (the dual and quad core processor chips in our phones and laptops), cluster computing (modern supercomputers), and GPGPU (general purpose graphics processing unit) programming.  GPGPU-equipped systems are the current cutting-edge devices.  As of this fall it is possible to buy a GPGPU ‘card’ that fits inside a desktop or server computer, has 2,496 processing cores, and is capable of over one teraflop, that is, a trillion floating-point operations per second.  To get close to these speeds, however, either existing programs must be rewritten or new ones must be developed. Our new LittleFe cluster supports all three main types of parallel architectures, and as a result allows us to explore hybrid approaches, combining various types of parallelism in the solution of a single problem.
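The common thread in all three models is the same: divide a big computation into pieces that run at the same time. A minimal sketch of that idea, not from the course itself, is this short Python script, which splits a large summation across several worker processes (real HPC codes would use tools like OpenMP, MPI, or CUDA instead):

```python
# Illustrative sketch of data-parallel computation: divide a large sum
# into chunks, compute each chunk in a separate worker process, and
# combine the partial results.
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum the integers in the half-open range [lo, hi)."""
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=4):
    """Sum 0..n-1 by splitting the range into roughly equal chunks."""
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)  # last chunk absorbs any remainder
    with Pool(workers) as pool:
        # Each chunk is summed concurrently; the results are combined here.
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    # Matches the serial answer sum(range(1_000_000)).
    print(parallel_sum(1_000_000))
```

The same decomposition idea carries over whether the "workers" are cores sharing one memory, nodes in a cluster passing messages, or thousands of GPU threads.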

“I learn best by ‘tinkering,’ and I’m convinced that many of our students will benefit from having hands-on experiences.  The LittleFe cluster fosters this sort of learning; its open frame, exposed cabling, and blinking lights invite students to be curious about it.”
