3. SINCE
2001
For 10 years, MIT
Technology Review has
created a list of the 10
most important
technological milestones
reached each year. To
compile the list, our editors
select the technologies we
believe will have the
greatest impact on the
shape of innovation in
years to come.
5. TRACK
RECORD
Over the years, we’ve
identified many
technologies that have
flourished and
become part of our
everyday lives.
We’ve also picked
some technologies
that have not fared as
well – that, because of
market or other forces,
have been delayed or
forgotten.
13. HAS NOT (YET) PROVEN OUT
Grid Computing
Ian Foster & Carl Kesselman
2003
Distributed grid computing
has not seen the expansion
predicted.
17. HAS NOT (YET) PROVEN OUT
Green Concrete
Nikolaos Vlasopoulos
2010
A commercial method of
reducing cement’s carbon
footprint has not been
perfected.
18. Jason Pontin @Jason_Pontin
editor in chief & publisher facebook.com/Jason.Pontin
MIT Technology Review technologyreview.com
@techreview
facebook.com/technologyreview
20. DEEP LEARNING
The Problem:
How can massive amounts of data be
efficiently processed so computers can
recognize objects and translate speech
in real time?
22. DEEP LEARNING
Why It Matters:
Computers would assist humans far
more effectively if they could reliably
recognize patterns and make
inferences about the world.
26. BIG DATA FROM CHEAP
PHONES
The Problem:
How can data mined from even the
most basic cell phones help us
understand how people move about
and behave?
27. BIG DATA
FROM CHEAP
PHONES
The Solution:
Caroline Buckee,
Harvard University
Creating disease-fighting
tools with
cell-phone mobility
data.
28. BIG DATA FROM CHEAP
PHONES
Why It Matters:
Poor countries lack data-gathering
infrastructure; phone data can provide
it.
31. TEMPORARY SOCIAL
MEDIA
Why It Matters:
Messages that quickly self-destruct
could enhance the privacy of online
communication and make people feel
freer to be spontaneous.
37. MEMORY IMPLANTS
Why It Matters:
If the code by which the brain forms
long-term memories can be
deciphered, there is hope for people
whose brains have suffered damage
from age, Alzheimer’s, stroke, or injury.
40. ROBOTIC
MANUFACTURING
Why It Matters:
Smarter, safer new industrial robots
could bring automation to new areas of
manual work and help many U.S.
manufacturers regain a competitive
edge.
43. ADDITIVE
MANUFACTURING
Why It Matters:
Because it can potentially make
complex parts less expensive to
produce, additive manufacturing could
revitalize many advanced
manufacturing sectors.
46. PRENATAL DNA
SEQUENCING
Why It Matters:
Tomorrow’s children could be born with
a complete list of their genetic
strengths and weaknesses.
47. SUPERGRIDS
The Problem:
High-voltage DC could previously be
used only for point-to-point
transmission, not to form the integrated
grid networks needed for a stable
electricity system.
Microfluidics
Rebecca Zacks | Saturday, February 1, 2003
Stephen Quake

The forces of physics move oceans, mountains and galaxies. But applied physicist Stephen Quake uses them to manipulate things on a vastly reduced scale: tiny volumes of fluids thousands of times smaller than a dewdrop. Microfluidics, as Quake's field is called, is a promising new branch of biotechnology. The idea is that once you master fluids at the microscale, you can automate key experiments for genomics and pharmaceutical development, perform instant diagnostic tests, even build implantable drug-delivery devices, all on mass-produced chips. It's a vision so compelling that many industry observers predict microfluidics will do for biotech what the transistor did for electronics. Quake's 11-person lab at Caltech is not the only outfit bent on realizing this vision. Over the past decade or so, scores of researchers have set out to build microscale devices for many of the basic processes of biological research, from sample mixing to DNA sequencing. But many of those groups have run into roadblocks in developing technology that can be generalized to a broad range of applications and would allow several functions, such as sample preparation, DNA extraction and detection of a gene mutation, to be integrated on a single chip. Moreover, some of the manufacturing approaches involved, particularly silicon micromachining, are so expensive that experts in the field question whether products relying on these techniques could ever be economical to manufacture. Quake's group is one of several now working their way around these obstacles. Last spring, the team unveiled a set of microfabricated valves and pumps, a critical first step in developing technology general enough to work for any microfluidics application. And to make microfluidic devices cheaper, Quake and others are casting them out of soft silicone rubber in reusable molds, using a technique called "soft lithography."
The potential payoff of these advances is huge: mass-produced, disposable microfluidic chips that make possible everything from drug discovery on a massive scale to at-home tests for common infections. Because microfluidics is so promising and yet so technically frustrating, expectation and hype have sometimes outpaced the development of viable technology. Yet Quake and his group have consistently turned out elegant devices that actually work. First was a microscale DNA analyzer that operates faster and on different principles than the conventional, full-sized version; then a miniature cell sorter; and most recently, those valves and pumps, described last April in the journal Science. All this while regularly publishing important findings on the basic physics of biological molecules. If Quake seems adept at straddling fields, in this case science and technology, perhaps it's because that's exactly the sort of challenge he has long craved. Even as an undergraduate at Stanford University, where he earned bachelor's and master's degrees simultaneously in only four years, Quake worried that physics was "somewhat finished" as an experimental science, that it was hard to find the field's frontiers. A pioneer at heart, Quake started looking to tackle questions that lay at the boundaries between disciplines. As he recalls: "It was completely obvious, even to an outsider, that biology was going through this period of incredible growth and intellectual excitement, and there were going to be big questions asked and answered, and the frontiers were advancing at a tremendous rate in all directions." After Quake finished his doctorate in theoretical physics at Oxford University, he went back to Stanford as a fellow working on the physics of DNA. When Caltech's applied physics department hired him in 1996, Quake says, "it was an experiment for them": he was the first faculty member in the department with a biological bent.
So far, the experiment seems to be going smoothly; this past summer, at only 31, Quake got tenure. Quake's inventions are also thriving in industry, through a startup called Mycometrix. Founded in 1999 by Quake, two of his college classmates and a consultant, the South San Francisco-based company has licensed all of Quake's microfluidics patents from Caltech. When TR went to press, the company was planning to deliver its first microfluidic devices to selected university researchers and industry partners by the end of 2000, and was hoping for a commercial release by the end of this year or early 2002. The competition will be intense. Several startups and even electronics giants like Hewlett-Packard and Motorola are getting in on the game. But to date, only one of Mycometrix's competitors has brought a microfluidic product to market. Although Quake's work is rapidly flowing into the commercial marketplace, it's still the very early stages of science and technology development that interest him the most. And though he has built quite a reputation as a technologist, he hopes soon to focus more of his attention on some of the most pressing questions in basic biology: How do the proteins that control gene expression work? How can you do studies that cut across the entire genome? "Now that we've got some pretty neat tools," Quake says, "we're going to try and do some science with them." Quake's ability to work in areas from basic research to hot commercial markets makes him a prototypical innovator. And the same versatility makes microfluidics a field to pay close attention to in the next few years.
Personal Genomics
David Cox
Technology Review, February 2004

Three billion. That's the approximate number of DNA "letters" in each person's genome. The Human Genome Project managed a complete, letter-by-letter sequence of a model human, a boon for research. But examining the specific genetic material of each patient in a doctor's office by wading through those three billion letters just isn't practical. So to achieve the dream of personalized medicine, a future in which a simple blood test will determine the best course of treatment based on a patient's genes, many scientists are taking a shortcut: focusing on only the differences between people's genomes. David Cox, chief scientific officer of Perlegen Sciences in Mountain View, CA, is turning that strategy into a practical tool that will enable doctors and drug researchers to quickly determine whether a patient's genetic makeup results in greater vulnerability to a particular disease, or makes him or her a suitable candidate for a specific drug. Such tests could eventually revolutionize the treatment of cancer, Alzheimer's, asthma, almost any disease imaginable. And Cox, working with some of the world's leading pharmaceutical companies, has gotten an aggressive head start in making it happen.
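The shortcut described above, examining only where a patient's sequence differs from a reference, can be sketched in a few lines. This is an illustrative toy with invented sequences and function names, not Perlegen's actual method; real tools compare against catalogues of known variant sites rather than raw strings.

```python
# Toy sketch: record only the positions where a sample differs from a
# reference sequence, instead of handling all three billion letters.
# Sequences and names here are invented for illustration.

def diff_against_reference(reference: str, sample: str) -> dict[int, str]:
    """Return {position: sample_base} for every mismatch."""
    assert len(reference) == len(sample)
    return {i: s for i, (r, s) in enumerate(zip(reference, sample)) if r != s}

reference = "ACGTACGTAC"
patient   = "ACGTTCGTAA"
print(diff_against_reference(reference, patient))  # {4: 'T', 9: 'A'}
```

Storing two variants instead of ten letters is a trivial saving here, but across a whole genome the same idea reduces billions of letters to a few million differences worth examining.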
Data Mining
M. Mitchell Waldrop | Saturday, February 1, 2003
Usama Fayyad

"Hello again, Sidney P. Manyclicks. We have recommendations for you. Customers who bought this title also bought ..." Intrusive? A touch of personal attention in the sterile world of e-shopping? Both, perhaps, but definitely a tour de force of database technology. Conventional databases sort through a few megabytes of structured data to find answers to specific queries. But compiling a simple recommendation list requires a system that can burrow through gigabytes of Web site visitor logs in search of patterns no one can anticipate in advance. Welcome to data mining, also known as knowledge discovery in databases (KDD): the rapidly emerging technology that lies behind the personalized Web and much else besides. The emphasis here is on "emerging," says Usama Fayyad, who should know: data mining didn't exist as a field until he helped pioneer it. In 1987, the Tunisian-born computer scientist was a graduate student at the University of Michigan. He had taken a summer job with General Motors, which was compiling a huge database on car repairs. The idea, he says, was to enable any GM service technician to ask the database a question based on the model of car, the engine capacity, and so on, and get a quick, appropriate response. Sounds straightforward. But, recalls Fayyad, "there were hundreds of millions of records; no human being could go through it all." The pattern recognition algorithm he devised to solve that problem became his 1991 doctoral dissertation, which is still among the most cited publications in the data-mining field. Data mining proved to have surprisingly broad application. Fayyad left Michigan for NASA's Jet Propulsion Laboratory, where he applied his techniques to astronomical research. In particular, his algorithm helped in automatically determining which of some two billion observed celestial objects were stars and which were galaxies.
The tool also helped find volcanoes on Venus from the huge number of radar images being transmitted from space probes. A geologist could retrieve the image of a previously identified volcano; the computer would then examine the picture for patterns and search through other images for similar patterns. That worked so well, Fayyad says, that "pretty soon the military intelligence people were all over us, wanting to use it. And so were doctors, who wanted to do automatic searches of radiology images." In 1995, in response to this widening interest, Fayyad and his colleagues planned a full-scale international conference on KDD. The conference drew about 500 participants, more than double what had been expected. (KDD 2000 drew 950.) By this time, with the Internet gushing information onto everyone's desktop, the urgency for data mining was becoming evident in the corporate world. IBM and other industry giants sensed a market, and wanted in. Microsoft set its sights on Fayyad and enticed him to join the company's research labs. "They suggested that I take a look at databases in the corporate world," says Fayyad. "It was pretty sad. In many companies, the 'data warehouses' were actually 'data tombs': the data went in and were never looked at again." Fayyad joined Microsoft in 1996 and organized a new research group in data mining. "We looked at new algorithms for scaling up to very large databases, gigabytes or larger," he says. By decade's end, Fayyad had caught the entrepreneurial bug sweeping through computer science labs. "I realized that even the organizations that loved the idea of data mining were having trouble just maintaining their data." What they needed, he reasoned, was a company to host their databases for them, and provide data-mining services on top of that. The result was digiMine, a Kirkland, Wash., startup that opened for business in March 2000 with Fayyad as CEO. And the future of data-mining technology?
Wide open, says Fayyad, especially as researchers begin to move beyond the field's original focus on highly structured, relational databases. One very hot area is "text data mining": extracting unexpected relationships from huge collections of free-form text documents. The results are still preliminary, as various labs experiment with natural-language processing, statistical word counts and other techniques. But the University of California at Berkeley's LINDI system, to take one example, has already been used to help geneticists search the biomedical literature and produce plausible hypotheses for the function of newly discovered genes. Another hot area, says Fayyad, is "video mining": using a combination of speech recognition, image understanding and natural-language processing techniques to open up the world's vast video archives to efficient computer searching. For instance, when Carnegie Mellon University's Informedia II system is given an archive of, say, CNN news clips, it produces a computer-searchable index by automatically dividing each clip into individual scenes accompanied by transcripts and headlines. Fayyad hopes that ultimately the techniques of data mining will become so successful and so thoroughly integrated into standard database systems that they will no longer be thought of as exotic. "People will just assume that their database software will do what they need."
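The "customers who bought this title also bought" feature that opens this piece can be approximated with simple co-occurrence counting. The sketch below is a minimal illustration of the idea; real systems mine far larger logs with scalable algorithms, and the data and names here are invented.

```python
# Minimal sketch of co-purchase recommendation: count how often other
# items appear in the same basket as the target item, then recommend
# the most frequent companions. Data and names invented for illustration.
from collections import Counter

def recommend(transactions: list[set[str]], item: str, k: int = 2) -> list[str]:
    co_counts = Counter()
    for basket in transactions:
        if item in basket:
            co_counts.update(basket - {item})  # count every co-purchased item
    return [other for other, _ in co_counts.most_common(k)]

purchase_logs = [
    {"book_a", "book_b", "book_c"},
    {"book_a", "book_b"},
    {"book_a", "book_b"},
    {"book_a", "book_c"},
    {"book_d", "book_e"},
]
print(recommend(purchase_logs, "book_a"))  # ['book_b', 'book_c']
```

The pattern is found without anyone specifying it in advance, which is the essential difference from a conventional database query.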
David Talbot | Monday, March 12, 2007

Arthur Nozik hopes quantum dots will enable the production of more efficient and less expensive solar cells, finally making solar power competitive with other sources of electricity. (Photo: Lance W. Clayton)

This article is one in a series of 10 stories we're running this week covering today's most significant emerging technologies. It's part of our annual "10 Emerging Technologies" report, which appears in the March/April print issue of Technology Review.

No renewable power source has as much theoretical potential as solar energy. But the promise of cheap and abundant solar power remains unmet, largely because today's solar cells are so costly to make. Photovoltaic cells use semiconductors to convert light energy into electrical current. The workhorse photovoltaic material, silicon, performs this conversion fairly efficiently, but silicon cells are relatively expensive to manufacture. Some other semiconductors, which can be deposited as thin films, have reached market, but although they're cheaper, their efficiency doesn't compare to that of silicon. A new solution may be in the offing: some chemists think that quantum dots, tiny crystals of semiconductors just a few nanometers wide, could at last make solar power cost-competitive with electricity from fossil fuels. By dint of their size, quantum dots have unique abilities to interact with light. In silicon, one photon of light frees one electron from its atomic orbit. In the late 1990s, Arthur Nozik, a senior research fellow at the National Renewable Energy Laboratory in Golden, CO, postulated that quantum dots of certain semiconductor materials could release two or more electrons when struck by high-energy photons, such as those found toward the blue and ultraviolet end of the spectrum.
In 2004, Victor Klimov of Los Alamos National Laboratory in New Mexico provided the first experimental proof that Nozik was right; last year he showed that quantum dots of lead selenide could produce up to seven electrons per photon when exposed to high-energy ultraviolet light. Nozik's team soon demonstrated the effect in dots made of other semiconductors, such as lead sulfide and lead telluride. These experiments have not yet produced a material suitable for commercialization, but they do suggest that quantum dots could someday increase the efficiency of converting sunlight into electricity. And since quantum dots can be made using simple chemical reactions, they could also make solar cells far less expensive. Researchers in Nozik's lab, whose results have not been published, recently demonstrated the extra-electron effect in quantum dots made of silicon; these dots would be far less costly to incorporate into solar cells than the large crystalline sheets of silicon used today. To date, the extra-electron effect has been seen only in isolated quantum dots; it was not evident in the first prototype photovoltaic devices to use the dots. The trouble is that in a working solar cell, electrons must travel out of the semiconductor and into an external electrical circuit. Some of the electrons freed in any photovoltaic cell are inevitably "lost," recaptured by positive "holes" in the semiconductor. In quantum dots, this recapture happens far faster than it does in larger pieces of a semiconductor; many of the freed electrons are immediately swallowed up. The Nozik team's best quantum-dot solar cells have managed only about 2 percent efficiency, far less than is needed for a practical device. However, the group hopes to boost the efficiency by modifying the surfaces of the quantum dots or improving electron transport between dots. The project is a gamble, and Nozik readily admits that it might not pay off. Still, the enormous potential of the nanocrystals keeps him going.
Nozik calculates that a photovoltaic device based on quantum dots could have a maximum efficiency of 42 percent, far better than silicon's maximum efficiency of 31 percent. The quantum dots themselves would be cheap to manufacture, and they could do their work in combination with materials like conducting polymers that could also be produced inexpensively. A working quantum dot-polymer cell could eventually place solar electricity on a nearly even economic footing with electricity from coal. "If you could [do this], you would be in Stockholm; it would be revolutionary," says Nozik. A commercial quantum-dot solar cell is many years away, assuming it's even possible. But if it is, it could help put our fossil-fuel days behind us.
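A rough way to see why carrier multiplication matters: energy conservation caps how many electrons a single photon can free at the photon's energy divided by the semiconductor's band gap. The sketch below uses an illustrative band-gap value; real yields, like the seven electrons per photon Klimov measured, depend on the material and sit below this simple bound.

```python
# Upper bound from energy conservation: one photon of energy E can free
# at most floor(E / band_gap) electrons. A conventional silicon cell
# collects one electron per photon regardless of the photon's energy,
# wasting the excess as heat. The band gap below is an assumed,
# illustrative value in the rough range of lead-salt quantum dots.

def max_electrons(photon_ev: float, band_gap_ev: float) -> int:
    """Energy-conservation cap on electrons freed by one photon."""
    return max(1, int(photon_ev // band_gap_ev))

band_gap = 0.9  # eV (assumption, for illustration only)
for photon_ev in (1.0, 2.0, 3.0):
    print(photon_ev, "eV photon ->", max_electrons(photon_ev, band_gap), "electron(s)")
```

High-energy blue and ultraviolet photons carry several times the band-gap energy, which is exactly where the extra-electron effect pays off.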
Distributed Storage
Hari Balakrishnan
Technology Review, February 2004

Whether it's organizing documents, spreadsheets, music, photos, and videos or maintaining regular backup files in case of theft or a crash, taking care of data is one of the biggest hassles facing any computer user. Wouldn't it be better to store data in the nooks and crannies of the Internet, a few keystrokes away from any computer, anywhere? A budding technology known as distributed storage could do just that, transforming data storage for individuals and companies by making digital files easier to maintain and access while eliminating the threat of catastrophes that obliterate information, from blackouts to hard-drive failures.

Hari Balakrishnan is pursuing this dream, working to free important data from dependency on specific computers or systems. Music-sharing services such as KaZaA, which let people download and trade songs from Internet-connected PCs, are basic distributed-storage systems. But Balakrishnan, an MIT computer scientist, is part of a coalition of programmers who want to extend the concept to all types of data. The beauty of such a system, he says, is that it would provide all-purpose protection and convenience without being complicated to use. "You can now move [files] across machines," he says. "You can replicate them, remove them, and the way in which [you] get them is unchanged." With inability to access data sometimes costing companies millions in revenue per hour of downtime, according to Stamford, CT-based Meta Group, a distributed-storage system could dramatically enhance productivity.
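The resilience Balakrishnan describes comes from replication: keep several copies of each file on independent machines, so no single failure loses data. Below is a minimal in-memory sketch with invented class and node names; real systems of this kind add routing, consistency, and automatic repair on top of the same basic idea.

```python
# Toy sketch of replicated distributed storage: each key is written to
# several nodes, chosen deterministically by hashing, so any surviving
# replica can answer a read. Class and node names are invented.
import hashlib

class DistributedStore:
    def __init__(self, node_names, replicas=3):
        self.nodes = {name: {} for name in node_names}
        self.replicas = replicas

    def _pick_nodes(self, key: str):
        # Deterministically spread each key across `replicas` nodes.
        ranked = sorted(self.nodes,
                        key=lambda n: hashlib.sha256((n + key).encode()).hexdigest())
        return ranked[: self.replicas]

    def put(self, key: str, data: bytes):
        for node in self._pick_nodes(key):
            self.nodes[node][key] = data

    def get(self, key: str) -> bytes:
        for node in self._pick_nodes(key):
            if key in self.nodes[node]:
                return self.nodes[node][key]
        raise KeyError(key)

store = DistributedStore(["node-%d" % i for i in range(5)])
store.put("thesis.doc", b"draft 7")
store.nodes["node-0"].clear()  # simulate one machine failing
print(store.get("thesis.doc"))  # still retrievable from a replica
```

With three replicas, the file survives any two simultaneous node failures, and the caller's `get` interface never changes, which is the "unchanged way of getting them" Balakrishnan emphasizes.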
The viewership for live television broadcasts has generally been declining for years. But something surprising is happening: events such as the winter Olympics and the Grammys are drawing more viewers and more buzz. The rebound is happening at least in part because of new viewing habits: while people watch, they are using smart phones or laptops to swap texts, tweets, and status updates about celebrities, characters, and even commercials. Marie-José Montpetit, an invited scientist at MIT's Research Lab for Electronics, has been working for several years on social TV, a way to seamlessly combine the social networks that are boosting TV ratings with the more passive experience of traditional TV viewing. Her goal is to make watching television something that viewers in different places can share and discuss, and to make it easier to find something to watch. Carriers, networks, and content producers hope that making it easier for viewers to link up with friends will help them hold on to their audiences rather than losing them to services like Hulu, which stream shows over the Internet. And opening TV to social networking could make it easier for companies to provide personalized programming. Many developers are working on ways to let people share the viewing experience over broadband connections or through set-top boxes; indeed, cable companies and other broadband video providers have sponsored small trials of various interactive TV services around the world for more than 20 years. But most of the systems were even clumsier than the combination of laptop and large-screen TV that today's viewers have kludged together. Montpetit wants to unite different communication systems, especially cellular and broadband services, to create an elegant user experience.
She's been sharing ideas about that sort of system with BT, which provides broadband connections to 15 million people in the United Kingdom and Ireland, including nearly a half-million digital-TV subscribers. Though BT won't comment on what form its social-TV system might take, Montpetit and her students at the MIT Media Lab demonstrated an intriguing prototype last year. A central database aggregates video from online sources like YouTube, shares user-specified data with social networks, delivers video to the user's TV, and lets users and the people in their networks send comments and ratings back and forth via an iPhone app. It avoids using the TV screen for messages, something that has proved irritating to consumers who don't want clunky text obscuring the pictures on their 52-inch HDTVs. The app also allows the user to tell the network what program to show on his or her set. For instance, if a friend suggests a show and the owner agrees, that show will pop up at the appointed time. In February, Montpetit and her students presented a refined version of this system to BT. Jeff Patmore, who works with Montpetit as head of strategic university research at BT, says such a system could be rolled out this year, although he declines to confirm any plans. But Montpetit anxiously awaits U.S. deployment of social TV: her daughter, with whom she watches certain shows, heads off to college next fall. Engineering and business issues aside, she wants social TV to help friends and family stay connected, even as they move apart.
Stretchable Silicon
By teaching silicon new tricks, John Rogers is reinventing the way we use electronics.
Kate Greene | March/April 2006

This article is the tenth in a series of 10 stories we're running over two weeks, covering today's most significant (and just plain cool) emerging technologies. It's part of our annual "10 Emerging Technologies" report, which appears in the March/April print issue of Technology Review.

These days, most electronic circuitry comes in the form of rigid chips, but devices thin and flexible enough to be rolled up like a newspaper are fast approaching. Already, "smart" credit cards carry bendable microchips, and companies such as Fujitsu, Lucent Technologies, and E Ink are developing "electronic paper": thin, paperlike displays. But most truly flexible circuits are made of organic semiconductors sprayed or stamped onto plastic sheets. Although useful for roll-up displays, organic semiconductors are just too slow for more intense computing tasks. For those jobs, you still need silicon or another high-speed inorganic semiconductor. So John Rogers, a materials scientist at the University of Illinois at Urbana-Champaign, found a way to stretch silicon. If bendable is good, stretchable is even better, says Rogers, especially for high-performance conformable circuits of the sort needed for so-called smart clothes or body armor. "You don't comfortably wear a sheet of plastic," he says. The potential applications of circuitry made from Rogers's stretchable silicon are vast. It could be used in surgeons' gloves to create sensors that would read chemical levels in the blood and alert a surgeon to a problem, without impairing the sense of touch. It could allow a prosthetic limb to use pressure or temperature cues to change its shape. What makes Rogers's work particularly impressive is that he works with single-crystal silicon, the same type of silicon found in microprocessors.
Like any other single crystal, single-crystal silicon doesn't naturally stretch. Indeed, in order for it even to bend, it must be prepared as an ultrathin layer only a few hundred nanometers thick on a bendable surface. Rogers exploits the flexibility of thin silicon, but instead of attaching it to plastic, he affixes it in narrow strips to a stretched-out, rubber-like polymer. When the stretched polymer snaps back, the silicon strips buckle but do not break, forming "waves" that are ready to stretch out again. Rogers's team has fabricated diodes and transistors, the basic building blocks of electronic devices, on the thin ribbons of silicon before bonding them to the polymer; the wavy devices work just as well as conventional rigid versions, Rogers says. In theory, that means complete circuits of the sort found in computers and other electronics would also work properly when rippled. Rogers isn't the first researcher to build stretchable electronics. A couple of years ago, Princeton University's Sigurd Wagner and colleagues began making stretchable circuits after inventing elastic-metal interconnects. Using the stretchable metal, Wagner's group connected together rigid "islands" of silicon transistors. Although the silicon itself couldn't stretch, the entire circuit could. But, Wagner notes, his technique isn't suited to making electrically demanding circuits such as those in a Pentium chip. "The big thing that John has done is use standard, high-performance silicon," says Wagner. Going from simple diodes to the integrated circuits needed to make sensors and other useful microchips could take at least five years, says Rogers. In the meantime, his group is working to make silicon even more flexible. When the silicon is affixed to the rubbery surface in rows, it can stretch only in one direction. By changing the strips' geometry, Rogers hopes to make devices pliable enough to be folded up like a T-shirt.
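The arithmetic behind the "wavy" ribbons is simple: bonding flat silicon to a substrate stretched by some prestrain stores slack in the buckles when the substrate relaxes, and the circuit can later be re-stretched by roughly that same prestrain without straining the silicon itself. A toy calculation with illustrative, invented numbers:

```python
# Toy calculation: silicon bonded to a substrate stretched 10% stores
# about 10% of its length as slack in the buckled "waves" once the
# substrate relaxes. All numbers are illustrative, not measured values.
bonded_length = 10.0   # mm of silicon bonded while the substrate is stretched
prestrain = 0.10       # substrate stretched 10% before bonding (assumption)
relaxed_length = bonded_length / (1 + prestrain)  # footprint after release
slack = bonded_length - relaxed_length            # length stored in the waves
print(round(relaxed_length, 3), "mm footprint,", round(slack, 3), "mm of slack")
```

Pulling the relaxed, wavy ribbon flat again recovers exactly that slack, which is why the rigid crystal can ride out strains that would otherwise crack it.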
That kind of resilience could make silicon's future in electronics stretch out a whole lot further.

OTHER PLAYERS: Stretchable Silicon
Stephanie Lacour – Neuro-electronic prosthesis to repair damage to the nervous system – University of Cambridge, England
Takao Someya – Large-area electronics based on organic transistors – University of Tokyo
Sigurd Wagner – Electronic skin based on thin-film silicon – Princeton University
Grid Computing
Technology Review, February 2003

In the 1980s "internetworking protocols" allowed us to link any two computers, and a vast network of networks called the Internet exploded around the globe. In the 1990s the "hypertext transfer protocol" allowed us to link any two documents, and a vast, online library-cum-shopping mall called the World Wide Web exploded across the Internet. Now, fast-emerging "grid protocols" might allow us to link almost anything else: databases, simulation and visualization tools, even the number-crunching power of the computers themselves. And we might soon find ourselves in the midst of the biggest explosion yet. "We're moving into a future in which the location of [computational] resources doesn't really matter," says Argonne National Laboratory's Ian Foster. Foster and Carl Kesselman of the University of Southern California's Information Sciences Institute pioneered this concept, which they call grid computing in analogy to the electric grid, and built a community to support it. Foster and Kesselman, along with Argonne's Steven Tuecke, have led development of the Globus Toolkit, an open-source implementation of grid protocols that has become the de facto standard. Such protocols promise to give home and office machines the ability to reach into cyberspace, find resources wherever they may be, and assemble them on the fly into whatever applications are needed. Imagine, says Kesselman, that you're the head of an emergency response team that's trying to deal with a major chemical spill. "You'll probably want to know things like, What chemicals are involved? What's the weather forecast, and how will that affect the pattern of dispersal? What's the current traffic situation, and how will that affect the evacuation routes?" If you tried to find answers on today's Internet, says Kesselman, you'd get bogged down in arcane log-in procedures and incompatible software.
But with grid computing it would be easy: the grid protocols provide standard mechanisms for discovering, accessing, and invoking just about any online resource, simultaneously building in all the requisite safeguards for security and authentication. Construction is under way on dozens of distributed grid computers around the world, virtually all of them employing the Globus Toolkit. They'll have unprecedented computing power and applications ranging from genetics to particle physics to earthquake engineering. The $88 million TeraGrid of the U.S. National Science Foundation will be one of the largest. When it's completed later this year, the general-purpose, distributed supercomputer will be capable of some 21 trillion floating-point operations per second, making it one of the fastest computational systems on Earth. And grid computing is experiencing an upsurge of support from industry heavyweights such as IBM, Sun Microsystems, and Microsoft. IBM, which is a primary partner in the TeraGrid and several other grid projects, is beginning to market an enhanced commercial version of the Globus Toolkit. Out of Foster and Kesselman's work on protocols and standards, which began in 1995, "this entire grid movement emerged," says Larry Smarr, director of the California Institute for Telecommunications and Information Technology. What's more, Smarr and others say, Foster and Kesselman have been instrumental in building a community around grid computing and in advocating its integration with two related approaches: peer-to-peer computing, which brings to bear the power of idle desktop computers on big problems in the manner made famous by SETI@home, and Web services, in which access to far-flung computational resources is provided through enhancements to the Web's hypertext protocol. By helping to merge these three powerful movements, Foster and Kesselman are bringing the grid revolution much closer to reality.
And that could mean seamless and ubiquitous access to unfathomable computer power. - M. Mitchell Waldrop

Others in Grid Computing:
• Andrew Chien (Entropia): Peer-to-Peer Working Group
• Andrew Grimshaw (Avaki; U. Virginia): commercial grid software
• Miron Livny (U. Wisconsin, Madison): open-source system to harness idle workstations
• Steven Tuecke (Argonne National Laboratory): Globus Toolkit
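The discover-and-invoke pattern that Kesselman's emergency-response scenario relies on can be sketched in miniature. This is a single-process toy, not the Globus Toolkit API: the registry class, resource names, and handler functions below are all invented for illustration, and real grid protocols add authentication, security, and network transport on top of this idea.

```python
# Toy sketch of grid-style resource discovery and invocation.
# All names here are hypothetical; real grids speak standard protocols
# so that *any* client can find and use *any* advertised resource.

class GridRegistry:
    """Directory where resources advertise themselves for discovery."""
    def __init__(self):
        self._resources = {}

    def register(self, name, capabilities, handler):
        self._resources[name] = {"capabilities": set(capabilities),
                                 "handler": handler}

    def discover(self, capability):
        """Find every registered resource offering a capability."""
        return [name for name, r in self._resources.items()
                if capability in r["capabilities"]]

    def invoke(self, name, *args, **kwargs):
        """Invoke a resource through one uniform interface."""
        return self._resources[name]["handler"](*args, **kwargs)

# The chemical-spill scenario from the article, in miniature:
registry = GridRegistry()
registry.register("weather-sim", ["forecast"],
                  lambda city: f"wind 12 km/h NE over {city}")
registry.register("traffic-db", ["traffic"],
                  lambda city: f"I-5 congested near {city}")

for name in registry.discover("forecast"):
    print(registry.invoke(name, "Seattle"))
```

The point of the sketch is that the client never hard-codes where a resource lives or how to log in to it; it asks for a capability and gets back whatever the grid currently offers.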
M. Mitchell Waldrop, March/April 2008

Much of modern life depends on forecasts: where the next hurricane will make landfall, how the stock market will react to falling home prices, who will win the next primary. While existing computer models predict many things fairly accurately, surprises still crop up, and we probably can't eliminate them. But Eric Horvitz, head of the Adaptive Systems and Interaction group at Microsoft Research, thinks we can at least minimize them, using a technique he calls "surprise modeling."

Horvitz stresses that surprise modeling is not about building a technological crystal ball to predict what the stock market will do tomorrow, or what al-Qaeda might do next month. But, he says, "We think we can apply these methodologies to look at the kinds of things that have surprised us in the past and then model the kinds of things that may surprise us in the future." The result could be enormously useful for decision makers in fields that range from health care to military strategy, politics to financial markets.

Granted, says Horvitz, it's a far-out vision. But it's given rise to a real-world application: SmartPhlow, a traffic-forecasting service that Horvitz's group has been developing and testing at Microsoft since 2003.

SmartPhlow works on both desktop computers and Microsoft PocketPC devices. It depicts traffic conditions in Seattle, using a city map on which backed-up highways appear red and those with smoothly flowing traffic appear green. But that's just the beginning. After all, Horvitz says, "most people in Seattle already know that such-and-such a highway is a bad idea in rush hour." And a machine that constantly tells you what you already know is just irritating. So Horvitz and his team added software that alerts users only to surprises: the times when the traffic develops a bottleneck that most people wouldn't expect, say, or when a chronic choke point becomes magically unclogged.

But how?
To monitor surprises effectively, says Horvitz, the machine has to have both knowledge (a good cognitive model of what humans find surprising) and foresight: some way to predict a surprising event in time for the user to do something about it.

Horvitz's group began with several years of data on the dynamics and status of traffic all through Seattle and added information about anything that could affect such patterns: accidents, weather, holidays, sporting events, even visits by high-profile officials. Then, he says, for dozens of sections of a given road, "we divided the day into 15-minute segments and used the data to compute a probability distribution for the traffic in each situation."

That distribution provided a pretty good model of what knowledgeable drivers expect from the region's traffic, he says. "So then we went back through the data looking for things that people wouldn't expect: the places where the data shows a significant deviation from the averaged model." The result was a large database of surprising traffic fluctuations.

Once the researchers spotted a statistical anomaly, they backtracked 30 minutes, to where the traffic seemed to be moving as expected, and ran machine-learning algorithms to find subtleties in the pattern that would allow them to predict the surprise. The algorithms are based on Bayesian modeling techniques, which calculate the probability, based on prior experience, that something will happen, and allow researchers to subjectively weight the relevance of contributing events (see TR10: "Bayesian Machine Learning," February 2004).
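The first half of that pipeline, building a per-15-minute distribution and flagging significant deviations from it, can be sketched as follows. The speed data, slot granularity, and z-score threshold are invented stand-ins; Microsoft's actual system used richer Bayesian models, not a simple mean-and-deviation test.

```python
# Minimal sketch of the "surprise" detection step described above:
# model each 15-minute slot's expected traffic, then flag readings
# that deviate sharply from that expectation.
from statistics import mean, stdev

def build_model(history):
    """history: {slot: [speeds]} -> {slot: (mean, stdev)} per 15-min slot."""
    return {slot: (mean(v), stdev(v)) for slot, v in history.items()}

def is_surprise(model, slot, speed, z_threshold=3.0):
    """A reading is surprising if it lies far outside the slot's norm."""
    mu, sigma = model[slot]
    return abs(speed - mu) > z_threshold * sigma

history = {"08:00": [22, 25, 24, 23, 26, 24],   # chronic rush-hour crawl
           "14:00": [88, 92, 90, 91, 89, 90]}   # normally free-flowing
model = build_model(history)

print(is_surprise(model, "08:00", 23))  # → False: expected rush-hour speed
print(is_surprise(model, "08:00", 85))  # → True: choke point magically unclogged
print(is_surprise(model, "14:00", 35))  # → True: unexpected afternoon bottleneck
```

The second half, backtracking 30 minutes and learning to predict the anomaly in advance, is where the Bayesian machine-learning step comes in; this sketch covers only the detection of the anomaly itself.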
Universal Memory
Technology Review, May 2005
NANOELECTRONICS: Nanotubes make possible ultradense data storage.
By Gregory T. Huang

Nantero CEO Greg Schmergel holds a circular wafer of silicon, about the size of a compact disc, sealed in an acrylic container. It's a piece of hardware that stores 10 billion bits of digital information, but what's remarkable about it is the way it does it. Each bit is encoded not by the electric charge on a circuit element, as in conventional electronic memory, nor by the direction of a magnetic field, as in hard drives, but by the physical orientation of nanoscale structures. This technology could eventually allow vastly greater amounts of data to be stored on computers and mobile devices. Experts estimate that within 20 years, you may be able to fit the content of all the DVDs ever made on your laptop computer or store a digital file containing every conversation you have ever had on a handheld device.

Nantero's approach is part of a broader effort to develop "universal memory": next-generation memory systems that are ultradense and low power and could replace everything from the flash memory in digital cameras to hard drives. Nantero's technology is based on research that the Woburn, MA, company's chief scientist, Thomas Rueckes, did as a graduate student at Harvard University. Rueckes noted that no existing memory technology seemed likely to prove adequate in the long run. Static and dynamic random-access memory (RAM), used in laptops and PCs, are fast but require too much space and power; flash memory is dense and nonvolatile (it doesn't need power to hold data) but is too slow for computers. "We were thinking of a memory that combines all the advantages," says Rueckes.

The solution: a memory whose cells are each made of carbon nanotubes, each less than one-ten-thousandth the width of a human hair and suspended a few nanometers above an electrode.
This default position, with no electric current flowing between the nanotubes and the electrode, represents a digital 0. When a small voltage is applied to the cell, the nanotubes sag in the middle, touch the electrode, and complete a circuit, storing a digital 1. The nanotubes stay where they are even when the voltage is switched off. That could mean instant-on PCs and possibly the end of flash memory; the technology's high storage density would also bring much larger memory capacities to mobile devices. Nantero claims that the ultimate refinement of the technology, in which each nanotube encodes one bit, would enable storage of trillions of bits per square centimeter, thousands of times denser than what is possible today. (By comparison, a typical DVD holds less than 50 billion bits total.) The company is not yet close to that limit, however; its prototypes store only about 100 million bits per square centimeter.

Nantero has partnered with chip makers such as Milpitas, CA-based LSI Logic to integrate its nanotube memory with silicon circuitry. The memory sits on top of a layer of conventional transistors that read and write data, and the nanotubes are processed so that they don't contaminate the accessing circuits. By late 2006, Schmergel predicts, Nantero's partners should have produced samples of nanotube memory chips. Early applications may come in laptops and PDAs. Ultimately, however, the goal is to replace all memory and disk storage in all computers.

Suspending nanotubes is not the only way to build a universal memory. Other strategies include magnetic random-access memory, which Motorola and IBM are pursuing, and molecular memory, in which Hewlett-Packard is a research leader. But industry experts are watching Nantero's progress with cautious optimism.
"They have a very good approach, and it's further along than any other," says Ahmed Busnaina, professor of electrical engineering at Northeastern University and director of the National Science Foundation-funded Center for High-Rate Nanomanufacturing. If successful, this new kind of memory could put a world of data at your fingertips instantly, wherever you go.
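The bistable cell the article describes can be made concrete with a toy model: a write pulse latches the nanotubes against the electrode or releases them, and the state survives power-off, which is exactly the nonvolatility Nantero is after. The class below is purely illustrative, with no real device physics; the last lines just sanity-check the article's density comparison against its DVD figure.

```python
# Toy model of a nanotube memory cell: bit value is the *mechanical*
# position of the tubes, not a stored charge, so no power is needed
# to retain it. Hypothetical code, not Nantero's design.

class NanotubeCell:
    def __init__(self):
        self.touching = False  # tubes suspended above electrode: bit = 0

    def write(self, bit):
        # A voltage pulse sags the tubes onto the electrode (1)
        # or returns them to the suspended position (0).
        self.touching = bool(bit)

    def power_off(self):
        pass  # position is retained with no power: nonvolatile

    def read(self):
        # Current flows only if the tubes touch the electrode.
        return 1 if self.touching else 0

cell = NanotubeCell()
cell.write(1)
cell.power_off()
print(cell.read())  # → 1: the bit survives without power

# Sanity check on the article's numbers: at the claimed ultimate
# density of trillions of bits per cm², one cm² holds many DVDs.
dvd_bits = 50e9        # a typical DVD: under 50 billion bits
bits_per_cm2 = 1e12    # "trillions of bits per square centimeter"
print(bits_per_cm2 / dvd_bits)  # → 20.0 DVDs' worth per cm²
```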
Erica Naone, March/April 2009

Photo caption: Weekend plans: Adam Cheyer participates in a conversation with the software.

Search is the gateway to the Internet for most people; for many of us, it has become second nature to distill a task into a set of keywords that will lead to the required tools and information. But Adam Cheyer, cofounder of Silicon Valley startup Siri, envisions a new way for people to interact with the services available on the Internet: a "do engine" rather than a search engine. Siri is working on virtual personal-assistant software, which would help users complete tasks rather than just collect information. Cheyer, Siri's vice president of engineering, says that the software takes the user's context into account, making it highly useful and flexible. "In order to get a system that can act and reason, you need to get a system that can interact and understand," he says.

Siri traces its origins to a military-funded artificial-intelligence project called CALO, for "cognitive assistant that learns and organizes," based at the research institute SRI International. The project's leaders, including Cheyer, combined traditionally isolated approaches to artificial intelligence to try to create a personal-assistant program that improves by interacting with its user. Cheyer, while still at SRI, took a team of engineers aside and built a sample consumer version; colleagues finally persuaded him to start a company based on the prototype. Siri licenses its core technology from SRI.

Mindful of the sometimes spectacular failure of previous attempts to create a virtual personal assistant, Siri's founders have set their sights conservatively.
The initial version, to be released this year, will be aimed at mobile users and will perform only specific types of functions, such as helping users make restaurant reservations, check flight status, or plan weekend activities. Users can type or speak commands in casual sentences, and the software deciphers their intent from the context. Siri is connected to multiple online services, so a quick interaction with it can accomplish several small tasks that would normally require visits to a number of websites. For example, a user can ask Siri to find a midpriced Chinese restaurant in a specific part of town and make a reservation there.

Recent improvements in computer processor power have been essential in bringing this level of sophistication to a consumer product, Cheyer says. Many of CALO's abilities still can't be crammed into such products. But the growing power of mobile phones and the increasing speed of networks make it possible to handle some of the processing at Siri's headquarters and pipe the results back to users, allowing the software to take on tasks that just couldn't be done before.

"Search does what search does very well, and that's not going anywhere anytime soon," says Dag Kittlaus, Siri's cofounder and CEO. "[But] we believe that in five years, everyone's going to have a virtual assistant to which they delegate a lot of the menial tasks."

While the software will be intelligent and useful, the company has no aspiration to make it seem human. "We think that we can create an incredible experience that will help you be more efficient in your life, in solving problems and the tasks that you do," Cheyer says. But Siri is always going to be just a tool, not a rival to human intelligence: "We're very practical minded."
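The "do engine" idea, mapping a casual sentence to an intent plus parameters and then dispatching to a service, can be sketched crudely. Siri's real pipeline (speech recognition, context, learned models from CALO) is far richer than this; the keyword rules and service stubs below are invented purely for illustration.

```python
# Hedged sketch of intent parsing and dispatch for a "do engine".
# All rules, intents, and services here are hypothetical.
import re

SERVICES = {
    "restaurant": lambda p: (f"Booking a {p.get('price', 'any-price')} "
                             f"{p.get('cuisine', '')} table").strip(),
    "flight": lambda p: f"Checking status of flight {p.get('number', '?')}",
}

def parse_intent(utterance):
    """Map a casual sentence to (intent, parameters) via crude rules."""
    u = utterance.lower()
    if "reservation" in u or "restaurant" in u:
        params = {}
        if "midpriced" in u or "mid-priced" in u:
            params["price"] = "midpriced"
        m = re.search(r"(chinese|italian|thai)", u)
        if m:
            params["cuisine"] = m.group(1)
        return "restaurant", params
    if "flight" in u:
        m = re.search(r"flight\s+(\w+)", u)
        return "flight", {"number": m.group(1) if m else "?"}
    return None, {}

def do(utterance):
    """Dispatch the parsed intent to the matching online service."""
    intent, params = parse_intent(utterance)
    return SERVICES[intent](params) if intent else "Sorry, no matching service."

print(do("Find a midpriced Chinese restaurant and make a reservation"))
# → Booking a midpriced chinese table
```

The contrast with a search engine is visible even in this toy: the output is an action taken on the user's behalf, not a list of links to act on yourself.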
Making cement for concrete involves heating pulverized limestone, clay, and sand to 1,450 °C with a fuel such as coal or natural gas. The process generates a lot of carbon dioxide: making one metric ton of commonly used Portland cement releases 650 to 920 kilograms of it. The 2.8 billion metric tons of cement produced worldwide in 2009 contributed about 5 percent of all carbon dioxide emissions.

Nikolaos Vlasopoulos, chief scientist at London-based startup Novacem, is trying to eliminate those emissions with a cement that absorbs more carbon dioxide than is released during its manufacture. It locks away as much as 100 kilograms of the greenhouse gas per ton.

Vlasopoulos discovered the recipe for Novacem's cement as a grad student at Imperial College London. "I was investigating cements produced by mixing magnesium oxides with Portland cement," he says. But when he added water to the magnesium compounds without any Portland in the mix, he found he could still make a solid-setting cement that didn't rely on carbon-rich limestone. And as it hardened, atmospheric carbon dioxide reacted with the magnesium to make carbonates that strengthened the cement while trapping the gas. Novacem is now refining the formula so that the product's mechanical performance will equal that of Portland cement. That work, says Vlasopoulos, should be done "within a year."

Other startups are also trying to reduce cement's carbon footprint, including Calera in Los Gatos, CA, which has received about $50 million in venture investment. However, Calera's cements are currently intended to be additives to Portland cement rather than a replacement like Novacem's, says Franz-Josef Ulm, director of the Concrete Sustainability Hub at MIT. Novacem could thus have the edge in reducing emissions, but all the startups face the challenge of scaling their technology up to industrial levels.
Still, Ulm says, this doesn't mean a company must displace billions of tons of Portland cement to be successful; it can begin by exploiting niche areas in specialized construction. If Novacem can produce 500,000 tons a year, Vlasopoulos believes, it can match the price of Portland cement.

Even getting that far will be tough. "They are introducing a very new material to a very conservative industry," says Hamlin Jennings, a professor in the Department of Civil and Environmental Engineering at Northwestern University. "There will be questions." Novacem will start trying to persuade the industry by working with Laing O'Rourke, the largest privately owned construction company in the U.K. In 2011, with $1.5 million in cash from the Royal Society and others, Novacem is scheduled to begin building a new pilot plant to make its newly formulated cement.
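A back-of-the-envelope check puts the article's figures in perspective: Portland cement's 650 to 920 kilograms of CO2 per ton at 2009 production levels, versus a Novacem-style cement absorbing 100 kilograms per ton. The numbers are the article's; only the arithmetic below is added.

```python
# Rough scale comparison using the figures quoted in the article.
production_t = 2.8e9            # metric tons of cement produced, 2009
portland_kg_per_t = (650, 920)  # kg CO2 emitted per ton of Portland
novacem_kg_per_t = 100          # kg CO2 absorbed per ton (Novacem-style)

low, high = (production_t * kg / 1e12 for kg in portland_kg_per_t)
absorbed = production_t * novacem_kg_per_t / 1e12

print(f"Portland: {low:.2f}-{high:.2f} billion tons CO2 emitted")
# → Portland: 1.82-2.58 billion tons CO2 emitted
print(f"Novacem:  {absorbed:.2f} billion tons CO2 absorbed at the same scale")
# → Novacem:  0.28 billion tons CO2 absorbed at the same scale
```

The gap between roughly 2 billion tons emitted and 0.28 billion tons absorbed shows why even partial displacement of Portland cement, starting with the niche markets Ulm mentions, would matter for emissions.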
Why It Matters:
Computers would assist humans far more effectively if they could reliably recognize patterns and make inferences about the world.

Breakthrough:
A method of artificial intelligence that could be generalizable to many kinds of applications.

Key Players:
• Google
• Microsoft
• IBM
• Geoffrey Hinton, University of Toronto
Why It Matters:
Higher efficiency would make solar power more competitive with fossil fuels.

Breakthrough:
Managing light to harness more of sunlight’s energy.

Key Players:
• Harry Atwater, Caltech
• Albert Polman, AMOLF
• Eli Yablonovitch, University of California, Berkeley
• Dow Chemical
Why It Matters:
Poor countries lack data-gathering infrastructure; phone data can provide it.

Breakthrough:
Creating disease-fighting tools with cell-phone mobility data.

Key Players:
• Caroline Buckee, Harvard University
• William Hoffman, World Economic Forum
• Alex Pentland, MIT
• Andy Tatem, University of Southampton
Why It Matters:
Sites such as Facebook and Twitter are becoming permanent records of our interactions.

Breakthrough:
A social-media service that replicates the unrecorded nature of ordinary conversation.

Key Players:
• Snapchat
• Gryphn
• Burn Note
• Wickr
Why It Matters:
Even as computing gets more sophisticated, people want simple and easy-to-use interfaces.

Breakthrough:
Watches that pull selected data from mobile phones so their wearers can absorb information with a mere glance.

Key Players:
• Pebble
• Sony
• Motorola
• MetaWatch
Why It Matters:
Brain damage can cause people to lose the ability to form long-term memories.

Breakthrough:
Animal experiments show it is possible to correct for memory problems with implanted electrodes.

Key Players:
• Theodore Berger, USC
• Sam Deadwyler, Wake Forest
• Greg Gerhardt, University of Kentucky
• DARPA
Key Players:
• Rethink Robotics
• Universal Robots
• Redwood Robotics
• Julie Shah, MIT
Why It Matters:
Because it can potentially make complex parts more cheaply, additive manufacturing could revitalize many advanced manufacturing sectors.

Breakthrough:
GE will use 3-D printing to produce a key metal part for its new jet engines.

Key Players:
• GE Aviation
• EADS
• United Technologies
• Pratt & Whitney
Why It Matters:
Tomorrow’s children could be born with a complete list of their genetic strengths and weaknesses.

Breakthrough:
Sequencing the DNA of a fetus from a pregnant woman’s blood.

Key Players:
• Illumina
• Verinata
• Stanford University
• Jay Shendure, University of Washington
Why It Matters:
DC grids could be far more efficient and make it possible to link widely dispersed wind and solar farms.

Breakthrough:
Practical high-voltage direct-current circuit breakers.

Key Players:
• ABB
• Siemens
• EPRI
• General Atomics