Press

New York Times [Front page]: Hey, Gang, Let's Make Our Own Supercomputer

C|Net: Reprint of nytimes.com article

San Jose Business Journal: Supercomputer flashmob planned for SF

Slashdot: Discussion

NPR: Radio Interview Archive

UK PC Pro: Flash mob to create super computer

USF News: Supercomputing in a Flash

Press release

To set up a briefing or to obtain media credentials for the FlashMob I event, contact Gary McDonald at 415-422-2699.


FlashMob Background

FlashMob I is the brainchild of a group of graduate students at USF studying supercomputing. Our hope at the beginning of the semester was to build a supercomputer that would make the Top 500 list of supercomputers. After some back-of-the-envelope calculations, we concluded that we were about 100 computers short of having a good shot. Someone raised their hand and said: "We could post a message on Craigslist and get a hundred people to just show up." Thus the idea of FlashMob Computing was born.

How is FlashMob Computing different? Today supercomputing can be divided into two categories: Big Iron and Grid Computing.

Big Iron "Big iron" supercomputing dates back to World War II. Historically, supercomputers like the old Cray Supercomputers, or the current reigning champion, Japan's Earth Simulator, are hideously expensive custom machines that use custom parts and are constructed by PhD's to do very specific things. Recently Apple and Virginia Tech made headlines by networking 1100 Apple G5's together and creating the 3rd fastest supercomputer for the low-low price of $5 million (which is an impressively big step down from the estimated $1 B-B-Billion dollars the Earth Simulator cost). But still, unless you have a couple of million lying around, supercomputers of this kind are pretty much still out of reach.

Grid Computing. Then there's Grid Computing, which is based on the idea that most computers are idle most of the time. So, instead of running a screen saver with flying toasters, let's put the computer to work. SETI@Home is the best-known example of a grid computer. Users install a free program that downloads radio telescope data and analyzes it, looking for patterns that might be signs of extraterrestrial life (hence the name: the Search for Extraterrestrial Intelligence, or SETI). SETI@Home is pretty awesome, but you need a lot of computers (SETI@Home currently has roughly half a million computers active at any one time), and grid computing is only good for certain kinds of problems. (More on that later.)
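To make the grid model concrete, here's a minimal sketch in Python of the fetch/compute/report loop a volunteer client runs. The server URL, the work-unit format, and the analyze step are all hypothetical stand-ins for illustration, not SETI@Home's actual protocol:

```python
# Toy volunteer-computing client: fetch a work unit, crunch it locally,
# report the result. The server address and JSON format are made up;
# a real grid client's protocol is more involved.
import json
import urllib.request

SERVER = "http://example.org/grid"  # hypothetical work server

def fetch_work():
    with urllib.request.urlopen(f"{SERVER}/work") as resp:
        return json.load(resp)  # e.g. {"id": 42, "samples": [...]}

def analyze(samples):
    # Stand-in for the real analysis (e.g. scanning radio data for spikes).
    return max(samples) if samples else None

def report(unit_id, result):
    data = json.dumps({"id": unit_id, "result": result}).encode()
    req = urllib.request.Request(f"{SERVER}/result", data=data,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

if __name__ == "__main__":
    unit = fetch_work()
    report(unit["id"], analyze(unit["samples"]))
```

Note that nothing here depends on any other client: each unit of work is independent, which is exactly why this model tolerates slow or vanishing volunteers.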

FlashMob Computing. FlashMob I is something new in the world of supercomputers. FlashMob I is an ad-hoc supercomputer created on the fly from ordinary PCs interconnected via a well-organized LAN. A FlashMob computer has no permanent infrastructure; it's designed to be run in a gymnasium or a warehouse. There are no cooling towers, no expensive T-3 lines to the Internet, no custom hardware. The primary cost of a FlashMob computer is people's time.

Comparing FlashMob Computing to Grid and Big Iron. Before you can compare the three, you have to understand what makes a computer a "supercomputer" and a little about the kinds of problems supercomputers solve.

A supercomputer is a computer that has a lot of CPUs working in parallel on a single problem. The smallest supercomputers typically have more than 64 CPUs; the Earth Simulator has 5,120 CPUs. So your dual-CPU gaming machine does not count as a supercomputer. Sorry. Moreover, if you brought 64 friends over for a LAN party to play Quake, you still wouldn't have a supercomputer, because the 64 machines would not all be working on solving one single problem. Sorry again. It's both the parallelism and the singularity of purpose that define a supercomputer. So what can you do with a supercomputer?

There are essentially two types of problems supercomputers can solve: problems where all the CPUs have to talk to each other occasionally, and problems where the CPUs have to talk to each other all the time. Here are two examples:

Shrek. Computer-animated movies like Shrek are created frame by frame by a computer. The computer has to calculate the color and brightness of each pixel on the screen. There are a lot of pixels in a movie frame and twenty-four frames per second of movie, so a ninety-minute movie like Shrek or Toy Story or Finding Nemo takes a lot of computing power. The good news is that the pixel at the upper left of the screen doesn't have much to do with the pixel at the lower right. So if you wanted to speed up the process of drawing one frame, you could assign one computer to draw the left side of the screen and another to draw the right side; this would cut the time it takes to draw one frame in half. If you cut the screen into four areas you could put four computers on the job, then eight, and so on. Because the computers drawing the various parts of the screen don't have to talk to each other, they can be far apart, and if one of them fails it's no big deal, because the other computers aren't waiting for each other's results. The failed computer's job can just be resubmitted at the end.
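Here's a minimal sketch of that split in Python: each worker process renders its own horizontal strip of a toy frame completely independently, so the workers never talk to each other. The render_strip function is a made-up stand-in for real per-pixel shading:

```python
# Embarrassingly parallel rendering sketch: split one frame into strips
# and hand each strip to a separate process. No strip depends on another,
# so no inter-worker communication is needed.
from multiprocessing import Pool

WIDTH, HEIGHT = 640, 360   # toy frame size

def render_strip(bounds):
    y_start, y_end = bounds
    # Stand-in for real per-pixel shading work.
    return [[(x * y) % 256 for x in range(WIDTH)]
            for y in range(y_start, y_end)]

if __name__ == "__main__":
    workers = 4
    rows = HEIGHT // workers
    strips = [(i * rows, (i + 1) * rows) for i in range(workers)]
    with Pool(workers) as pool:
        frame = [row for strip in pool.map(render_strip, strips)
                 for row in strip]
    print(f"rendered {len(frame)} rows with {workers} independent workers")
```

Doubling the worker count just means cutting the frame into thinner strips; nothing else in the program has to change.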

Billiard Balls. Now imagine you wanted a computer to calculate where all the balls would end up after the initial break in a game of eight-ball. The result depends on how all the balls hit each other and the walls of the pool table. Unlike Shrek, the movement of one ball is very much dependent on the movements of all the other balls. Imagine you assigned one computer to calculate the low balls and one computer to calculate the high balls. As soon as a low ball hit a high ball, the two computers would have to exchange data to determine how the balls' paths change on impact. Since lots of high balls hit lots of low balls on a break, the two computers would have to talk constantly. Moreover, if one computer failed, the other couldn't finish its job because it would be waiting on the failed computer. Consequently, the computers need to be close together so they can talk very quickly, and they have to be very reliable because they're counting on each other. The billiard ball example is simple: there are only fifteen balls. But now imagine there were 1,000 balls, or a million balls, or the balls weren't balls at all but atoms of water flowing in a river. In that case the computers would spend more time talking than computing, so the network is very important, and so is the reliability of the nodes. If one computer is slow or one fails, everyone waits.
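For contrast, here's a toy tightly coupled timestep loop written with MPI via the mpi4py package (an assumption made for illustration; this page doesn't prescribe any particular software). Each rank advances its own balls locally, but every single step it must gather everyone's positions before it can continue, so one slow or dead rank stalls all the others:

```python
# Tightly coupled sketch with MPI: each rank advances its own balls, but
# collision checks need everyone's positions, so every timestep ends with
# an allgather over the network.
# Run with e.g.: mpiexec -n 4 python billiards.py   (requires mpi4py)
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank owns a handful of (position, velocity) pairs in 1-D.
balls = [(float(rank * 10 + i), 1.0) for i in range(3)]
DT = 0.1

for step in range(100):
    # Local work: advance our own balls.
    balls = [(pos + vel * DT, vel) for pos, vel in balls]
    # Communication: gather every rank's balls so we can detect collisions
    # with balls owned by other ranks. This happens every single step.
    everyone = comm.allgather(balls)
    flat = [b for per_rank in everyone for b in per_rank]
    # Stand-in collision response: reverse velocity when another ball is close.
    balls = [(pos, -vel if any(abs(pos - q) < 0.01 and q != pos
                               for q, _ in flat) else vel)
             for pos, vel in balls]

if rank == 0:
    print("finished 100 tightly coupled timesteps")
```

The allgather inside the loop is the whole story: the faster and more reliable the network and nodes, the less time every rank spends waiting at that line.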

Big Iron is good at both. But a lot of the cost of Big Iron goes into the networking, so putting Big Iron to work on Shrek wastes a lot of expensive networking equipment, and it's generally not used this way. Big Iron really shines on problems analogous to the Billiard Balls (like modeling the weather, how air flows around a jet fighter's wings, or how far a nuclear mushroom cloud will billow given the winds).

Grid Computers are very good for problems like Shrek, but not so good for Billiard Ball-type problems, simply because the worst computer on a grid is usually pretty bad and grid nodes fail often, so there's lots of waiting and lots of redone work.

FlashMob computers have an advantage over Big Iron in that they are inexpensive, and an advantage over grid computers in that they can solve Billiard Ball-type problems. Unlike Big Iron, a FlashMob computer can be sized to fit the problem at hand by adding or removing nodes. FlashMob nodes are also dedicated to the job, so individual nodes are much more reliable than grid computer nodes.

