Trinity’s Computer Science Department has been selected to become a CUDA (Compute Unified Device Architecture) Teaching Center by NVIDIA, one of the world’s largest manufacturers of computer graphics processing units (GPUs). The new equipment provided to the College by NVIDIA as a result of this selection will enable students and faculty to access new sources of powerful computing for teaching and research.
CUDA is a parallel programming and computing platform that was developed by NVIDIA to allow novice users to more easily program GPUs.
Trinity’s selection as a CUDA Teaching Center is the result of a proposal submitted by Peter Yoon, associate professor of computer science, with the help of funds provided by the Faculty Research Committee. Trinity was one of 22 schools in eight countries that were named CUDA Teaching Centers in 2012.
As part of the selection, NVIDIA donated to Trinity five GeForce GTX480 graphics cards, which will primarily be used to teach CUDA, and one Tesla C2075 card, which offers much greater processing power and will be used for timing and benchmarking in various research projects. These cards will be installed in several high-performance workstations that Yoon plans to build for the Computer Science Department.
Additionally, NVIDIA will provide teaching kits, textbooks, software licenses, NVIDIA CUDA architecture-enabled GPUs for teaching lab computers, and academic discounts for additional hardware.
In turn, Yoon plans to incorporate the CUDA platform into two of his classes—“Introduction to Computer Systems” and “High-Performance Computing”—so that students can learn the language extension. “They should be able to jump right into CUDA programming and use it to solve various real-world problems that require substantial computations,” he said.
The significance of this acquisition lies in how computers as we know them today developed. Personal computing was introduced more than 30 years ago, and early machines had a relatively simple interface between their physical components and their software. These interactions took place in the central processing unit (CPU), the hardware device that interprets and executes program instructions.
Over the years, applications were developed with more complex graphics, videos, and images that consumed more of the CPU’s processing power. Computer manufacturers solved this problem by creating a separate processor dedicated to graphics: the GPU. Today, virtually all digital devices come equipped with a GPU for graphics-intensive applications, such as video games and full-length movies.
As personal computers developed, so too did the technology used for calculations in fields that require high-speed computing, including the sciences, engineering, medicine, and even weather forecasting. In these fields, as with personal computing, a computer with only one CPU was not enough. “Traditionally, researchers turn to supercomputing, or using computers that consist of hundreds of thousands of CPUs,” said Yoon.
There are limitations to supercomputing, though. Supercomputers are not energy-efficient machines, and they consume massive amounts of electricity. They are also very expensive, which creates a barrier to adoption for smaller education and research institutions.
About five years ago, according to Yoon, NVIDIA began developing practical ways to use GPUs for the kind of general-purpose computation that supercomputers perform. “The problem is that you need to be trained in the field of computer graphics in order to program for GPUs, so it was challenging to use one in place of a CPU,” he explained.
To overcome this limitation, NVIDIA developed the CUDA programming interface, which allows anyone who knows a CPU programming language, such as C, to program GPUs using a series of language extensions. As a result, GPUs are becoming more widely used as a research tool for solving complex scientific problems.
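To give a sense of what these language extensions look like, the following is a minimal, illustrative sketch (not drawn from Trinity’s coursework) of a CUDA program that adds two large arrays in parallel on the GPU; the function and variable names are the author’s own examples:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// A CUDA "kernel": ordinary C code marked __global__ so it runs on the GPU.
// Each GPU thread adds a single pair of elements.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element index
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;              // about one million elements
    size_t bytes = n * sizeof(float);

    // Unified (managed) memory is visible to both the CPU and the GPU.
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // The <<<blocks, threads>>> syntax is CUDA's extension to C/C++ for
    // specifying how many parallel threads execute the kernel.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vectorAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();            // wait for the GPU to finish

    printf("c[0] = %f\n", c[0]);        // each element is 1.0 + 2.0 = 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

A C programmer can read nearly all of this directly; the GPU-specific additions are the `__global__` qualifier and the launch configuration, which is what makes the platform approachable for students without a graphics background.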
GPUs not only cost less than supercomputers—Yoon estimates the difference to be thousands versus millions of dollars—they are much more energy efficient. “We used to have a very large supercomputer that sucked up, on average, about three households’ worth of power,” he said. “The new little box we have downstairs with NVIDIA graphics cards is at least five times faster and only uses about 1,200 watts—the same amount as a hairdryer.”
The ability to perform substantial computations is also useful in a number of academic fields outside of computer science. One example is reconstructing medical images from CT scanners, which take thousands of 2D pictures that computers must reconstruct into a 3D image. “In order to do something like that it takes a lot of time—days even—using only one computer. But using this technology we can do it in a matter of minutes,” said Yoon.
Yoon has already begun using the new hardware on projects with students and colleagues across disciplines—including math, engineering, and even philosophy. For example, Yoon and his research students are working with Dan Lloyd, Brownell Professor of Philosophy, on a project that uses the GPU-equipped computers to regenerate audio streams from brain scans of subjects who are listening to music. “We can now process the data more quickly, and the ability to perform calculations at higher speeds is attracting the attention of more and more departments on campus,” said Yoon.