AMERICAN  FEBRUARY 1999  $4.95
The Next Generation
How Limbs Develop
High Blood Pressure in African-Americans
www.sciam.com
Copyright 1999 Scientific American, Inc.

SCIENTIFIC AMERICAN  February 1999  Volume 280  Number 2

FROM THE EDITORS 8
LETTERS TO THE EDITORS 10
50, 100 AND 150 YEARS AGO 14
NEWS AND ANALYSIS
Invading fire ants (page 26)

IN FOCUS
Pretesting tumor therapies remains controversial. 19

SCIENCE AND THE CITIZEN
Microrotors and Maxwell's demon ... Suppressing anti-nuke protesters ... Ants against elephants. 24

PROFILE
Paleoanthropologist Dennis Stanford shreds a mammoth. 36

TECHNOLOGY AND BUSINESS
Why pollution cleanups stall ... Liquid air for young lungs ... RNA vaccines. 39

CYBER VIEW
On-line privacy guarantees pit the U.S. against Europe. 44

THE WAY TO GO IN SPACE
Tim Beardsley, staff writer
Industry, science, exploration and even tourism all have their sights on outer space. The only catch is getting there. Today's launch vehicles and spacecraft are too expensive and limited to enable a gold rush to the stars. Scientific American previews some of the most exciting new concepts in space transport now being planned and tested, with explanations and commentaries by the people behind the spacecraft.
INCLUDES:
Air-Breathing Engines  Charles R. McClinton
Space Tethers 86  Robert L. Forward and Robert P. Hoyt
Highways of Light  Leik N. Myrabo
Light Sails 90  Henry M. Harris
Compact Nuclear Rockets 92  James R. Powell
Reaching for the Stars 94  Stephanie D. Leifer

46 Supersoft X-ray Stars and Supernovae
Peter Kahabka, Edward P. J. van den Heuvel and Saul A. Rappaport
Oddly low-energy x-rays from space can be traced back to stellar systems in which white dwarfs orbit larger, more ordinary stars. The white dwarfs appear to cannibalize their siblings and then, when full to bursting, explode as type Ia supernovae.

56 The Puzzle of Hypertension in African-Americans
Richard S. Cooper, C. N. Rotimi and R. Ward
High blood pressure is the leading cause of health problems among black Americans. Yet inhabitants of western Africa have among the lowest rates of hypertension anywhere. Preconceptions about race distort understanding of this ailment.

64 Cichlids of the Rift Lakes
Melanie L. J. Stiassny and Axel Meyer
These beautiful fish evolve at a dizzying pace: hundreds of species live within just three African lakes, and many of them seem to have emerged almost overnight. But now human use of these environments threatens to exterminate these living laboratories for evolutionary studies.

70 A Multifractal Walk down Wall Street
Benoit B. Mandelbrot
When will the Dow top 10,000? When will it crash? This famous mathematician argues that the complex geometric patterns that describe the shapes of coastlines, ferns and galaxies might model the capriciousness of financial markets better than conventional portfolio theory can.

74 How Limbs Develop
Robert D. Riddle and Clifford J. Tabin
Tiny buds of almost featureless tissue on embryos organize themselves into the complex structures of arms, legs, wings and fins. Cells within these buds orient the growth of digits and bones by establishing trails of signal molecules. These discoveries have implications for both birth defects and cancer.

Scientific American (ISSN 0036-8733), published monthly by Scientific American, Inc., 415 Madison Avenue, New York, N.Y. 10017-1111. Copyright © 1999 by Scientific American, Inc. All rights reserved. No part of this issue may be reproduced by any mechanical, photographic or electronic process, or in the form of a phonographic recording, nor may it be stored in a retrieval system, transmitted or otherwise copied for public or private use without written permission of the publisher. Periodicals postage paid at New York, N.Y., and at additional mailing offices. Canada Post International Publications Mail (Canadian Distribution) Sales Agreement No. 242764. Canadian BN No. 127387652RT; QST No.
Q1015332537. Subscription rates: one year $34.97 (outside U.S. $49). Institutional price: one year $39.95 (outside U.S. $50.95). Postmaster: Send address changes to Scientific American, Box 3187, Harlan, Iowa 51537. Reprints available: write Reprint Department, Scientific American, Inc., 415 Madison Avenue, New York, N.Y. 10017-1111; fax: (212) 355-0408; or send e-mail to sacust@sciam.com Subscription inquiries: U.S. and Canada (800) 333-1199; other (515) 247-7631.

THE AMATEUR SCIENTIST
Capturing the three phases of water in one bottle. 98

MATHEMATICAL RECREATIONS
Origami gets practical. 100

REVIEWS AND COMMENTARIES
Once upon a Number: John Allen Paulos finds the mathematics in entertaining stories, and the stories in entertaining math. 102
The Editors Recommend: Books on robots, extraterrestrial intelligence and quantum physics. 103
Wonders, by the Morrisons: Noah's flood revealed. 105
Connections, by James Burke: From Bordeaux to balloons. 106

WORKING KNOWLEDGE
How construction cranes stay upright. 108

About the Cover: "Multifractal" graphs closely resemble the fluctuations of financial markets. Could they predict real upturns and downturns in stocks? Image by Slim Films.

THE SCIENTIFIC AMERICAN WEB SITE
Explore the DNA of a 1,000-cell animal: www.sciam.com/exhibit/122198worm/index.html

FROM THE EDITORS

Worm Gets the Early Bird

"Yea, the stars are not pure in his sight," reads the Book of Job. "How much less man, that is a worm?" Typical. As Bartlett's Familiar Quotations will attest, worms are the most famously low vermin in literature. People are usually the writers' real targets, but worms take the rhetorical beating. Jonathan Edwards, for instance, invoked them to rail, "A little, wretched, despicable creature; a worm, a mere nothing, and less than nothing; a vile insect that has risen up in contempt against the majesty of Heaven and earth."
Worms are the acme of insignificance. And yet biologists love them. Granted, researchers' affection falls mainly on the roundworm Caenorhabditis elegans, an inoffensive microscopic beastie. As I write this, John E. Sulston of the Sanger Center in England and Robert H. Waterston of Washington University have only just published the complete genetic sequence for C. elegans. For the first time, science knows all the genetic information that makes up a multicellular animal. That brilliant accomplishment foretells the completion of the Human Genome Project just a few years from now, when we will similarly know all the genes of humans.

Bruce Alberts, the president of the National Academy of Sciences, quotably remarked to the New York Times, "In the last 10 years we have come to realize humans are more like worms than we ever imagined." (He meant this genomic work, not the rise of the Jerry Springer Show.) We and the worms share many of the same genes—and why not? By and large, we're made of the same proteinaceous stuff. The differences mostly reflect proportion and organization.

The great mystery is how that DNA directs development, telling one cell how to grow into a well-formed creature of differentiated tissues. C. elegans furthers that pursuit, too, but only so far. Past that, we need to turn to other creatures and other methods. Roundworms are ill equipped, for example, to teach us how limbs develop—and not merely because they don't have feet. Rather, C. elegans lacks even some of the ancient genes that evolution later co-opted for building vertebrate fins, legs, wings and arms. Chick embryos are better choices: they are easily manipulated and anatomical cousins to humans. Robert D. Riddle and Clifford J. Tabin bring us up to date in "How Limbs Develop," beginning on page 74.

CHICK EMBRYO holds clues to development that worms cannot.
JOHN RENNIE, Editor in Chief
editors@sciam.com

SCIENTIFIC AMERICAN  Established 1845

John Rennie, EDITOR IN CHIEF

Board of Editors
Michelle Press, MANAGING EDITOR
Philip M. Yam, NEWS EDITOR
Ricki L. Rusting, SENIOR ASSOCIATE EDITOR
ASSOCIATE EDITORS: Timothy M. Beardsley; Gary Stix
W. Wayt Gibbs, SENIOR WRITER
Kristin Leutwyler, ON-LINE EDITOR
EDITORS: Mark Alpert; Carol Ezzell; Alden M. Hayashi; Madhusree Mukerjee; George Musser; Sasha Nemecek; Glenn Zorpette
CONTRIBUTING EDITORS: Marguerite Holloway; Steve Mirsky; Paul Wallich

Art
Edward Bell, ART DIRECTOR
Jana Brenning, SENIOR ASSOCIATE ART DIRECTOR
Johnny Johnson, ASSISTANT ART DIRECTOR
Bryan Christie, ASSISTANT ART DIRECTOR
Dmitry Krasny, ASSISTANT ART DIRECTOR
Bridget Gerety, PHOTOGRAPHY EDITOR
Richard Hunt, PRODUCTION EDITOR

Copy
Maria-Christina Keller, COPY CHIEF
Molly K. Frances; Daniel C. Schlenoff; Katherine A. Wong; Stephanie J. Arthur; Eugene Raikhel; Myles McDonnell

Administration
Rob Gaines, EDITORIAL ADMINISTRATOR
David Wildermuth

Production
Richard Sasso, ASSOCIATE PUBLISHER/VICE PRESIDENT, PRODUCTION
William Sherman, DIRECTOR, PRODUCTION
Janet Cermak, MANUFACTURING MANAGER
Silvia Di Placido, PREPRESS AND QUALITY MANAGER
Georgina Franco, PRINT PRODUCTION MANAGER
Norma Jones, ASSISTANT PROJECT MANAGER
Madelyn Keyes, CUSTOM PUBLISHING MANAGER
Carl Cherebin, AD TRAFFIC

Circulation
Lorraine Leib Terlecki, ASSOCIATE PUBLISHER/CIRCULATION DIRECTOR
Katherine Robold, CIRCULATION MANAGER
Joanne Guralnick, CIRCULATION PROMOTION MANAGER
Rosa Davis, FULFILLMENT MANAGER

Business Administration
Marie M. Beaumonte, GENERAL MANAGER
Alyson M. Lane, BUSINESS MANAGER
Constance Holmes, MANAGER, ADVERTISING ACCOUNTING AND COORDINATION

Electronic Publishing
Martin O. K. Paul, DIRECTOR

Ancillary Products
Diane McGarvey, DIRECTOR

Chairman and Chief Executive Officer: John J. Hanley
Co-Chairman: Rolf Grisebach
President: Joachim P. Rosler
Vice President: Frances Newburg

Scientific American, Inc.
415 Madison Avenue, New York, NY 10017-1111
(212) 754-0550
PRINTED IN U.S.A.

Scientific American  February 1999

LETTERS TO THE EDITORS

The award for most curious letter of the month goes to Bernard S. Husbands of Camano Island, Wash. After reading "Secrets of the Slime Hag," by Frederic H. Martini [October 1998], Husbands wondered, "How suitable would slime be in fighting fires? Could hagfish be 'milked' for their slime-producing agent?" When consulted for more information, Martini pointed out that because at least 99 percent of the slime is water, "it'd be a lot easier just to pour water on the fire in the first place and skip the part about the hagfish." As to milking hagfish, he says, "because handling the animals is extremely stressful for all involved and a massive sliming leaves the critter moribund if not doomed, I doubt that slime dairies will ever be a growth industry."

The most impassioned letters were in response to the October special report "Computer Security and the Internet." In particular, Carolyn P. Meinel's article "How Hackers Break In ... and How They Are Caught" prompted an array of responses from people throughout the computer security community. Some readers questioned Meinel's qualifications to write the article; others found the piece right on target (below).

HACKERS VERSUS CRACKERS

In the October 1998 special report on computer security, the term "hacker" was used incorrectly. You stated that hackers are malicious computer security burglars, which is not the correct meaning of "hacker" at all. The correct term for such a person is "cracker." Hackers are the expert programmers who engineered the Internet, wrote C and UNIX and made the World Wide Web work. Please show more respect for hackers in the future. Further information about this distinction can be found at the Hacker Anti-Defamation League's site at http://members.xoom.com/jcenters/ on the World Wide Web.
JOSH CENTERS
via e-mail

A CLOSER LOOK: Are you a hacker or a cracker?

Editors' note: We agree that there is indeed a difference between "hacker" and "cracker," but the mainstream media has used "hacker" to encompass both. We did, however, try to draw a distinction by using the term "white-hat hacker." Part of the problem with "cracker" is that the word has been used disparagingly in the past to refer to a poor, white person from the South.

MIXED REVIEWS

As a computer security professional with many years' experience in both public and private industry, I was extremely disturbed to see that you published an article by Carolyn P. Meinel in your October issue ["How Hackers Break In ... and How They Are Caught"]. Meinel has absolutely no credibility in the computer security community. She does not have the technical awareness to be considered knowledgeable, nor is she in any stretch of the imagination considered an expert in the field. Her article probably gave CEOs a fairly good sense of how insecure their networks might be, but I shudder to think that companies looking to jump on the computer security bandwagon will now be using her article as a technical reference.

CHEY COBB
via e-mail

I just wanted to thank you for Meinel's excellent article. It was informative for less technically literate readers but accurate, so as to not curl any fingernails among us geeks. It is a pleasure to see real information about computer security in this day of media-friendly fantasies.

ELIZABETH OLSON
via e-mail

A NEW Y2K BUG

In response to Wendy R. Grossman's Cyber View, "Y2K: The End of the World as We Know It," in the October issue: Perhaps the biggest problem of all will be getting used to writing 2000. I've been doing 19XX my whole life—50 years—and that's going to be a very hard habit to break.

WILLIAM CARLQUIST
Nevada City, Calif.
THE NAME GAME

We have serious concerns about "The Artistry of Microorganisms," by Eshel Ben-Jacob and Herbert Levine [October]. The bacteria pictured on page 84 are not Bacillus subtilis as the authors indicate. We have recently shown that a number of the bacterial strains once thought to be B. subtilis instead belong to a different group of bacilli, which differ significantly in their pattern formation properties. These species have the ability to form complex patterns on very hard agar surfaces, whereas B. subtilis and its close relatives do not. Ben-Jacob provided us with a sample of the bacteria shown in the inset on page 84; we found it to be an unidentified species, which we named B. vortex. The larger picture appearing on that page is yet another species, which we named B. tipchirales. It is perplexing to us that Ben-Jacob is well aware of our recent findings, has confirmed our results but is nonetheless publishing with his colleagues their own characterization of the species.

RIVKA RUDNER
Department of Biology, Hunter College

ERICH D. JARVIS
Department of Neurobiology, Duke University Medical Center

SCIENTIFIC AMERICAN

Kate Dobson, PUBLISHER
212-451-8522  kdobson@sciam.com

NEW YORK
Thomas Potratz, ADVERTISING DIRECTOR
212-451-8561  tpotratz@sciam.com
Timothy W. Whiting, SALES DEVELOPMENT MANAGER
212-451-8228  twhiting@sciam.com
Kevin Gentzel  212-451-8820  kgentzel@sciam.com
Randy James  212-451-8528  rjames@sciam.com
Stuart M. Keating  212-451-8525  skeating@sciam.com
Wanda R. Knox  212-451-8530  wknox@sciam.com

DETROIT
Edward A. Bartley, MIDWEST MANAGER
248-353-4411  fax 248-353-4360  ebartley@sciam.com

CHICAGO
Randy James, CHICAGO REGIONAL MANAGER
312-236-1090  fax 312-236-0893  rjames@sciam.com

LOS ANGELES
Lisa K. Carden, WEST COAST MANAGER
310-477-9299  fax 310-477-9179  lcarden@sciam.com

SAN FRANCISCO
Debra Silver, SAN FRANCISCO MANAGER
415-403-9030  fax 415-403-9033  dsilver@sciam.com

DALLAS
THE GRIFFITH GROUP
972-931-9001  fax 972-931-9074  lowcpm@onramp.net

CANADA
FENN COMPANY, INC.
905-833-6200  fax 905-833-2116  dfenn@canadads.com

EUROPE
Roy Edwards, INTERNATIONAL ADVERTISING DIRECTOR
Thavies Inn House, 3/4, Holborn Circus, London EC1N 2HB, England
+44 171 842-4343  fax +44 171 583-6221  redwards@sciam.com

BENELUX
REGINALD HOE EUROPA S.A.
+32-2/735-2150  fax +32-2/735-7310

MIDDLE EAST
PETER SMITH MEDIA & MARKETING
+44 140 484-1321  fax +44 140 484-1320

JAPAN
NIKKEI INTERNATIONAL LTD.
+813-5259-2690  fax +813-5259-2679

KOREA
PISCOM, INC.
+822 739-7840  fax +822 732-3662

HONG KONG
HUTTON MEDIA LIMITED
+852 2528 9135  fax +852 2528 9281

MARKETING
Laura Salant, MARKETING DIRECTOR  212-451-8590  lsalant@sciam.com
Diane Schube, PROMOTION MANAGER  212-451-8592  dschube@sciam.com
Susan Spirakis, RESEARCH MANAGER  212-451-8529  sspirakis@sciam.com
Nancy Mongelli, PROMOTION DESIGN MANAGER  212-451-8532  nmongelli@sciam.com

NEW YORK ADVERTISING OFFICES
415 MADISON AVENUE, NEW YORK, NY 10017
212-754-0550  fax 212-754-1138

SUBSCRIPTION INQUIRIES
U.S. AND CANADA (800) 333-1199; OTHER (515) 247-7631

OTHER EDITIONS OF SCIENTIFIC AMERICAN

Spektrum der Wissenschaft Verlagsgesellschaft mbH, Vangerowstrasse 20, 69115 Heidelberg, GERMANY, tel: +49-6221-50460, redaktion@spektrum.com

Pour la Science, Éditions Belin, 8, rue Férou, 75006 Paris, FRANCE, tel: +33-1-55-42-84-00

Le Scienze, Piazza della Repubblica 8, 20121 Milano, ITALY, tel: +39-2-29001753, redazione@lescienze.it

Investigacion y Ciencia, Prensa Científica, S.A., Muntaner 339 pral. 1.a, 08021 Barcelona, SPAIN, tel: +34-93-4143344, precisa@abaforum.es

Majallat Al-Oloom, Kuwait Foundation for the Advancement of Sciences, P.O. Box 20856, Safat 13069, KUWAIT, tel: +965-2428186

Świat Nauki, Proszynski i Ska S.A., ul. Garazowa 7, 02-651 Warszawa, POLAND, tel: +48-022-607-76-40, swiatnauki@proszynski.com.pl

Nikkei Science, Inc., 1-9-5 Otemachi, Chiyoda-ku, Tokyo 100-8066, JAPAN, tel: +813-5255-2821

Svit Nauky, Lviv State Medical University, 69 Pekarska Street, 290010 Lviv, UKRAINE, tel: +380-322-755856, zavadka@meduniv.lviv.ua

KeXue, Institute of Scientific and Technical Information of China, P.O. Box 2104, Chongqing, Sichuan, PEOPLE'S REPUBLIC OF CHINA, tel: +86-236-3863170

Ben-Jacob and Levine reply: Although they were isolated from cultures of Bacillus subtilis, certain bacteria shown in our article went unidentified for several years. Only very recently (in fact, after the article was written), physiological and genetic studies carried out by Ben-Jacob and David Gutnick identified these bacteria as members of the new genus Paenibacillus. The researchers named these species P. dendritiformis (shown on the cover and in the large photograph on page 84) and P. vortex (shown in the inset photograph on page 84). Rudner and Jarvis are therefore correct that these colonies are not B. subtilis but wrong in detail as far as identification and attribution are concerned. Clearly, though, none of this affects the focus and conclusions of our article, namely, that microorganisms can engage in sophisticated cooperative and adaptive behavior, leading to intricate and indeed beautiful spatial patterns.

OUNCE OF PREVENTION ...

I read "Designer Estrogens," by V. Craig Jordan [October], with great interest. It is comforting to know that the topic of estrogen replacement therapies for the treatment of osteoporosis, heart disease, and breast and endometrial cancers in women is being so actively and aggressively researched. We should not, however, in our desire to have a cure in the form of a pill, forget the importance of simple things like exercise, calcium intake and diet in the prevention of these problems.

LAUREN SLOANE
Macungie, Pa.
Letters to the editors should be sent by e-mail to editors@sciam.com or by post to Scientific American, 415 Madison Ave., New York, NY 10017. Letters may be edited for length and clarity.

ERRATUM
In the Further Readings for "Evolution and the Origins of Disease" [November 1998], the publisher of Darwinian Psychiatry, by M. T. McGuire and A. Troisi, was misidentified. The correct publisher is Oxford University Press. We regret the error.

50, 100 AND 150 YEARS AGO

FEBRUARY 1949

RESEARCH MONEY—"The Office of Naval Research today is the principal supporter of fundamental research by U.S. scientists. Its 1,131 projects account for nearly 40 percent of the nation's total expenditure in pure science. Most surprising of all has been ONR's ardent and unflagging fidelity to the principle of supporting research of the most fundamental nature, although many of its projects, of course, are likely to lead to more immediate naval applications. The ONR has pioneered so fruitfully in the support of basic science that it stands as a model for the planned National Science Foundation, which is now regarded as 'imminent.'"

ROCKET PLAN—"A new rocket specifically designed for research in the upper atmosphere has been successful in flight tests at the White Sands, N.M., proving ground. Named the Aerobee, it has carried up to 250 pounds of scientific equipment to heights of 70 miles. It is the first large high-altitude rocket of American design, and was developed at Johns Hopkins University under Navy sponsorship to take the place of the dwindling supply of captured German V-2s. Although it does not have the range of the V-2, it is a more practical and less expensive instrument. The Aerobee is nearly 19 feet long and very slender. It has no guiding mechanism; its course is set on the launching platform."
ATOMIC CLOCK—"The first clock in history to be regulated by the spin of a molecule instead of by the sun or stars is now a ticking reality. It was unveiled at the National Bureau of Standards. The clock is controlled by the period of vibration of the nitrogen atom in the ammonia molecule."

FEBRUARY 1899

PANAMA CANAL—"The new Canal project is on a sound engineering and financial footing and is within a calculable distance of completion. The new company decided at the outset to abandon Ferdinand de Lesseps' extravagant idea of a sea-level canal and substitute a system of locks and suitable reservoirs. The canal is at present two-fifths completed, and the cost to complete the work under the new plans will be $87,000,000 over the next eight to ten years."

Norwegian skate sailing [illustration]

VEGETABLE CATERPILLAR—"The grub, the larva of a large moth commonly called 'the night butterfly,' is subject to attacks from a vegetable parasite, or fungi, called Sphaeria Robertsii. The spores of the fungi, germinating in the body of the grub, absorb or assimilate the whole of the animal substance, the fungus growth being an exact replica of the living caterpillar. The fungi, having killed the grub, sends up a shoot or seed stem; its lower portion retains its vitality and sends up another shoot the following year. —C. Fitton, New Zealand"

ADVANCED TOOLS FOR ARCHAEOLOGY—"In a lecture by Flinders Petrie, entitled 'Photography, the Handmaid of Exploration,' he showed to what an enormous extent exploration has been aided by photography. Especially in Egypt the success of photography is very great, owing to the splendid atmospheric conditions and fine sunlight which prevail in that country. With the aid of the camera not only can the actual finds be photographed, but the exact condition of the objects in situ can be recorded. Nowadays all explorers go equipped with the best photographic apparatus which money can purchase."
SKATE SAILING—"The home of skate sailing is Norway, the land of fjords, mountains, and lakes. In order to sail in the Norwegian fashion, two long skates and a sail rigged to a bamboo pole are required [see illustration]. The sail is simple in construction, but requires great dexterity in handling, and is directed by a steering cord in the left hand. On the great fjords of Norway, Sognefjord, for example, 100 kilometers (62 miles) can be covered in a comparatively short time."

FEBRUARY 1849

NEW WHALING GROUND—"We learn that Capt. Royce, an American, of Sag Harbor, L.I., has just arrived with 1,800 barrels of oil which he took in the Arctic Ocean above Behring Straits. He found the seas clear of ice, plenty of Whales, and one a new kind. He found the ocean there very shallow, 14 to 35 fathoms, and he saw Indians crossing in their canoes regularly from Asia to the American continent. There can be no doubt but the two were once united. Some interesting discoveries are yet to be made in that region."

WORLD WIDE WIRE—"Dr. Jones, of this city, proposes to run telegraph wires from St. Louis, Missouri, with a branch to Behring's Straits, where the wires should cross to the Asiatic side, and proceed through Siberia to St. Petersburg, and the principal cities of Europe. In such a project, the governments of Europe, Russia at least, will not be likely to engage—the language of freedom would too often travel along the iron wings to suit the policy of a one man government."

NEWS AND ANALYSIS

24 SCIENCE AND THE CITIZEN
29 ANTI GRAVITY
32 BY THE NUMBERS
36 PROFILE: Dennis Stanford
39 TECHNOLOGY AND BUSINESS
44 CYBER VIEW

IN FOCUS

PRETESTING TUMORS
Long derided, test-tube screening for cancer-drug sensitivity slowly gains acceptance

On January 22, 1997, doctors diagnosed 40-year-old Randy Stein with pancreatic cancer and told him he had three months to live.
Two years later, Stein is working out with a trainer twice a week, planning his next vacation and launching an Internet business to help cancer patients. "I'm doing fabulous," he declares. "It's a miracle." He beat the odds, he says, because his doctor used a test aimed at predicting which drugs would kill his tumor—a test most oncologists don't order.

Conventionally, oncologists rely on clinical trials in choosing chemotherapy regimens. But the statistical results of these population-based studies might not apply to an individual. For many cancers, especially after a relapse, more than one standard treatment exists. "There is rarely a situation where you would get everyone to agree that there's only one form of therapy," says Larry Weisenthal, who runs Weisenthal Cancer Group, a private cancer-drug-testing laboratory in Huntington Beach, Calif. Physicians select drugs based on their personal experience, possible side effects and the patient's condition, among other factors. "The system is overloaded with drugs and underloaded with wisdom and expertise for using them," asserts David S. Alberts, director of prevention and control at the University of Arizona cancer center.

ENJOYING COMPLETE REMISSION, Randy Stein apparently benefited from a controversial chemosensitivity test.

Given Stein's particularly poor prognosis and limited treatment options, his physician decided to look for drugs that might have a better chance of helping him than the "standard" regimens. So surgeons sent a part of his tumor to Weisenthal, who along with other researchers has developed a handful of techniques for assessing cancer "response" in the test tube. They grow tumor cells in the presence of different drugs and assess whether the drugs kill the cells or inhibit their growth. This idea of assaying cancer cells for drug sensitivity has been around since the 1950s.
A 1970s technique sparked considerable enthusiasm until studies revealed numerous problems: fewer than 50 percent of tumors grew even with no drugs present, for example, and it took weeks to generate results. "The rank-and-file oncologists threw out the whole idea after the [1970s] assay proved to be a bust," says Dwight McKee, a medical oncologist in Kalispell, Mont., adding that they equate all cancer-drug response tests with failure.

Researchers have since improved the assays and can now obtain results in several days for many cancers. If a drug allows cancer cells to grow in the test tube, even at exposure levels toxic to humans, chances are very good that it won't thwart the tumor in the body, according to John P. Fruehauf, medical director of Oncotech, another cancer-drug-testing laboratory, in Irvine, Calif. The idea is that physicians could rule out those treatments, and patients could avoid side effects from ineffective agents. "Current ways of treating people are almost barbaric compared with what this test can do," states Robert Fine, director of the experimental therapeutics program at Columbia University.

Such tests also provide information that enables physicians to devise unconventional therapies, emphasize Weisenthal and Robert A. Nagourney, medical director of Rational Therapeutics, a drug-testing company in Long Beach, Calif. In Randy Stein's case, for example, Weisenthal suggested a drug combination not routinely used for pancreatic cancer. In other cases, Weisenthal and Nagourney abandon standard therapies entirely. Several dozen studies, most of which measured tumor shrinkage, have suggested that "patients treated with drugs that killed cells in the assay do better than patients in the overall population and much better than those treated with 'assay-resistant' drugs," Weisenthal says.
But many physicians aren't convinced of the tests' utility, in part because for many cancers, they more accurately predict what won't work rather than what will. Four of the five oncologists Stein consulted advised him against having them done. "They said, 'Things react differently in the human body than they do in the test tube,'" Stein recalls. Indeed, the tests do not mimic many aspects of human biology—drug delivery by the bloodstream, for example. "I'm thrilled for Randy, but what's to say that the assay significantly affected his treatment course or outcome?" points out Lee S. Rosen of the University of California at Los Angeles Jonsson Comprehensive Cancer Center, one of the oncologists who advised Stein against the tests. "Maybe his tumor would have been sensitive to every single drug." Furthermore, some oncologists are wary of replacing therapies that have been tested in clinical trials with those chosen by assays that scientists have not yet thoroughly studied. Still, some physicians are beginning to be swayed. "I was much more skeptical five years ago," says Lawrence Wagman, a surgeon at the City of Hope cancer center near Los Angeles, who removed Stein's tumor sample. "Randy's had a dramatic, unanticipated response with drugs that wouldn't have been chosen without the assay." Although it's not scientific, he remarks, "it forces me to wonder whether the tests might benefit many more patients." A formal answer to that question awaits results from large prospective trials in which survival, not just tumor shrinkage, will be measured. "Unless you have a randomized trial showing that a particular assay is superior to what a clinician can do without it, you have the possibility of taking away standard therapy from someone who might respond," says Daniel D. Von Hoff, an oncologist at the Cancer Therapy and Research Center and the University of Texas Health Science Center at San Antonio. 
Von Hoff spearheaded improvements and clinical tests of the original assays and now relies on them predominantly to identify new drugs worthy of study. Private lab test practitioners claim they have historically lacked sufficient support from national oncology organizations and other institutions to carry out large trials, although recently they and some academic groups have managed to initiate a handful of clinical trials in the U.S., Britain and parts of Europe. Like previous trials, however, the number of patients will be sufficient to detect only large differences in survival.

Although workers in the field say they are eager to participate in such studies, some note that the demand for them by some oncologists is unprecedented for laboratory tests. No one has compared treatment for bacterial diseases based on antibiotic sensitivity tests with treatment administered without the sensitivity knowledge, Alberts says. In fact, most researchers would consider such a trial unethical, because some patients would receive antibiotics not necessarily appropriate for their infections. "Why are we holding the bar higher for [cancer] tests?" he asks.

Even before results come out, two federal administrative law judges in California have given drug prescreening a vote of confidence. A national policy excludes the 1970s version of the test from Medicare reimbursement. But last spring the judges ruled that the contemporary methods are different and were no longer experimental as of the end of 1996. Since that decision, the Medicare intermediary in those cases has denied subsequent claims; Oncotech and Weisenthal are filing appeals. A revised national policy might eventually take the issue out of the hands of Medicare intermediaries. "We're reexamining the current noncoverage policy and are developing a draft policy so we can get comment from the medical community," comments Grant Bagley of the Health Care Financing Administration in Baltimore.
"The existing medical evidence suggests that the tests are not experimental and may be medically reasonable and necessary in at least some situations. The question is under what circumstances we should pay for it." —Evelyn Strauss EVELYN STRAUSS, a Ph.D. biologist turned science writer, freelances from Berkeley, Calif. CANCER CELLS FROM STEIN'S PANCREAS stain red, and dead cells blue. No meaningful effect occurred when the cells were exposed to the drug gemcitabine (top). But adding cisplatin killed many cells and increased the amount of cellular debris (bottom). 22 Scientific American February 1999 News and Analysis Copyright 1999 Scientific American, Inc. SCIENCE and the CITIZEN PHYSICS TAMING MAXWELL'S DEMON Random molecular motions can be put to good use Building a miniature machine is not as simple as scaling down the parts. For one, the inherent chaos of the microworld tends to overwhelm any concerted motion. But what if a motor could work with the disorder, rather than against it? The recent fabrication of nanometer-size wheels brings this vision even closer to fruition. On the face of it, seeking useful power in random molecular motions seems to repeat the mistake of Maxwell's demon, a little device or hypothetical creature that tries to wring regularity out of the randomness by picking and choosing among the motions. One incarnation of the demon, devised by the late Richard Feynman, is a ratcheted gear attached to a microscopic propeller. As fluid molecules buffet the propeller, some push it clockwise, others counterclockwise—a jittering known as Brownian motion. Yet the ratchet allows, say, only clockwise motion. Voilà, a perpetual-motion machine: the heat represented by molecular tumult is turned into consistent clockwise rotation without any loss. (Feynman proposed to use it to lift fleas.) NANOSCALE BROWNIAN MOTOR, recently built as a molecule (inset), applies power to a ratchet and lets random molecular motions turn the rotor.
But no demon or mortal has ever challenged the second law of thermodynamics and won. According to the law, one of the most subtle in physics, any increase in the order of the system—as would occur if the gear turned only one way—must be overcompensated by a decrease in the order of the demon. In the case of the ratcheted gear, the catch is the catch. As Feynman argued, the ratchet mechanism itself is subject to thermal vibrations. Some push up the spring and allow the gear to jiggle out of its locked position. Because the gear teeth are skewed, it takes only a tiny jiggle to go counterclockwise by one tooth, and a larger (and less probable) jiggle to go clockwise. So when the pawl clicks back into place, the wheel is more likely to have shifted counterclockwise. Meanwhile the sudden jerk of the propeller as the ratchet reengages dumps heat back into the fluid. The upshot: no net motion or heat extraction. In 1997 T. Ross Kelly, José Pérez Sestelo and Imanol Tellitu of Boston College synthesized the first molecular ratchet. The propeller has three blades, each a benzene ring, that also act as the gear teeth. A row of four benzene rings—the pawl—sits in between two of the blades, and the propeller cannot turn without pushing it aside. Because of a twist in the pawl, that is easier to do in the clockwise direction than counterclockwise. For another minipropeller, fashioned by James K. Gimzewski of the IBM Zurich Research Laboratory and his colleagues, the asymmetry is provided by the arrangement of neighboring molecules. Yet the researchers see their wheels spinning equally in both directions, as Feynman's analysis predicted. Nevertheless, the basic idea suggests to theorists a new kind of engine. Instead of directly driving a rotor, why not let it jiggle and instead apply power to a ratchet? For example, imagine using tweezers to engage and disengage the microscopic ratchet manually at certain intervals. Then there would be net motion counterclockwise.
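This flashing-ratchet scheme lends itself to a toy Monte Carlo sketch. It illustrates the general principle only, not either group's molecule: the sawtooth geometry, the noise level and the cycle count below are all assumed for illustration. Particles sit in the wells of an asymmetric sawtooth potential, diffuse freely while the potential is switched off, and are recaptured when it is switched back on; because each barrier sits closer to one side of its well than the other, unbiased Brownian kicks are rectified into a net drift.

```python
import numpy as np

# Toy flashing ratchet (illustration only). Wells of an asymmetric
# sawtooth potential sit at integer positions; the barrier to the right
# of each well is only `a` away, while the barrier to the left is
# `1 - a` away.
rng = np.random.default_rng(0)
a = 0.1        # barrier offset: the asymmetry of the sawtooth (period 1)
sigma = 0.3    # diffusive spread while the potential is switched off
n, cycles = 5000, 50

x = np.zeros(n)                      # all particles start in the well at 0
for _ in range(cycles):
    x += rng.normal(0.0, sigma, n)   # potential off: free Brownian spread
    x = np.floor(x - a) + 1.0        # potential on: recapture in nearest well

print(x.mean())   # clearly positive: net drift despite unbiased kicks
```

The mean displacement comes out clearly positive even though every random kick has zero mean; setting a = 0.5, a symmetric potential, kills the drift, in keeping with Feynman's argument that asymmetry alone, without the energy spent switching, extracts no work.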
The second law stays happy because the tweezers must exert energy to push the pawl back into place. In so doing, they restore heat to the fluid. In practice, the ratchet could take the form of an asymmetric electric field turned on or off by light beams or chemical reactions. There is no need to coordinate the moving parts or to exert a net force, as with ordinary motors. (A simulation is at monet.physik.unibas.ch/~elmer/bm on the World Wide Web.) Researchers have increasingly found that nature loves a Brownian motor. In the case of ion pumps, which push charged particles through the membranes of cells, the ratchet may be a protein whose internal electric field is switched on and off by reactions with ATP, the fuel supply of cells. The movement of materials along microtubules in cells, the flailing of bacterial flagella, the contraction of muscle fibers and the transcription of RNA also exploit Brownian motion. To turn his rotor into a motor, Kelly is trying to attach extra atoms to the propeller blades in order to provoke chemical reactions and thereby jam the ratchet at the appropriate points in the cycle. Gimzewski, meanwhile, is using a scanning tunneling microscope to feed in an electric current. Because internal friction is negligible, these motors could use energy with nearly 100 percent efficiency. Unfortunately, that is not as good as it sounds: most of the output is squandered by external friction with the fluid. One potential application is fine sifting, made possible because particles of different sizes are affected by Brownian motion to different degrees. In principle, a system could sort a continuous stream of particles, whereas current methods such as centrifuges or electrophoresis are restricted to discrete batches. Nanoforklifts are also possible: a particle—the forklift—would wriggle forward, encounter a desired molecule and latch onto it.
The composite, being bigger, would experience a different balance of forces and be pushed backward. Brownian motion could even be the basis for a computer, as Charles H. Bennett of IBM argued in the early 1980s. Such a computer would use jiggling to drive signals through—reducing voltages and heat dissipation. Brownian motors are one more example of how scientists and engineers have come to see noise as a friend rather than merely a foe. —George Musser IN BRIEF Worm Genome Project In what is being hailed as a landmark achievement, biologists have announced in Science that they have sequenced the complete genetic code of an organism. The animal, a microscopic roundworm called Caenorhabditis elegans, has some 97 million chemical units and more than 19,000 genes. Having all the information that governs the development and behavior of the worm should shed light on the evolutionary history of multicellular organisms and help geneticists understand the human genome, which will be fully sequenced early next century. C. elegans mapped Violently Forgetting When it comes to pushing soap or glue, it may be best to avoid advertising during kickboxing matches. Brad J. Bushman of Iowa State University tested college students' recall of brand names, message details and product appearance in commercials shown during violent and nonviolent video clips that viewers found equally engaging. He found that those who watched the violent programming (specifically, Karate Kid III) did not recall the advertisers' products as well as those who watched the nonviolent clips (Gorillas in the Mist).
The reason may be that violent shows leave viewers angry; instead of paying attention to the commercial message, they may be trying to calm themselves down. The paper can be found at www.apa.org/journals/xap/xap44291.html Seeing Swirls in Superconductors A major hurdle to applications for high-temperature superconductors—substances that carry electricity without resistance above liquid-nitrogen temperatures—is that they generate magnetic whirlpoollike vortices that block the flow of current. David J. Bishop of Bell Laboratories and his colleagues imaged these vortices—essentially by sprinkling on the superconductor iron filings, which become attracted to the magnetic vortices. The researchers write in Nature that the vortices, like flocks of birds, assume patterns depending on the current. These patterns may hold the clues to maintaining supercurrent flow. More "In Brief" on page 28 ECOLOGY ATTACK OF THE FIRE ANTS The insect has spread, maiming animals and shifting the ecological balance Fire ants, aptly named for their burning stings, have long been an infernal pest in the southern U.S., destroying crops, displacing other insects and terrorizing small mammals and people. The aggressive insects have also invaded the Galapagos Islands and parts of the South Pacific, including New Caledonia and the Solomon Islands. Now scientists fear that one species of the ant—Wasmannia auropunctata—might be wreaking havoc in West Africa, possibly blinding elephants there. Commonly called the little fire ant, Wasmannia is a distant relative of Solenopsis wagneri (formerly invicta), the foreign species that has plagued the southern U.S. It is widely believed that the ants have emigrated from their native Central and South America mainly through human commerce, which is why they are sometimes referred to as tramp ants.
One theory for Wasmannia's recent appearance in Melanesia is that the ants were stowaways on ships transporting heavy logging equipment from South America to other project sites in the Pacific. Fire ants have been compared with weeds: they tolerate a range of conditions and can spread quickly, usurping the local environment. In infested areas in the U.S., S. wagneri can make up 99 percent of the total ant population, according to James K. Wetterer, an entomologist at Florida Atlantic University. Also, once entrenched, fire ants are extremely difficult to dislodge. In the U.S., insecticides such as dieldrin, which is much more toxic than DDT, have failed to eradicate the pest. The Department of Agriculture is currently studying whether to introduce into the country a species of Brazilian fly that is a natural parasite of S. wagneri. Although the ecological ramifications of the migration are not entirely known, early indications have been frightening. On the Galapagos Islands, fire ants eat the hatchlings of tortoises. They have also attacked the eyes and cloacae of the adult reptiles. "It's rather hideous," notes James P. Gibbs, an environmental scientist at the S.U.N.Y. College of Environmental Science and Forestry in Syracuse. In the Solomon Islands, fire ants have reportedly taken over areas where incubator birds lay their eggs, and locals say the insect's venomous stings have blinded dogs. "It's a disaster there.... When these invasive ants come in, they change everything," Wetterer notes. Although the exact range of Wasmannia in West Africa is unknown, one estimate is that the ant has encroached on more than 600 kilometers (375 miles) along the coastline and 250 kilometers inland. Near some of these areas in Gabon, villagers have noticed elephants with white, cloudy eyes behaving strangely, as if they were nearly blind. Peter D.
Walsh, a population ecologist with the Wildlife Conservation Society in Bronx, N.Y., speculates that fire ants might be the culprit, based on the dog problem in the Solomon Islands and his personal experience with several Gabon house cats that lived in a home infested with fire ants and later developed a similar eye malady. Ecologists also fear that the damage could cascade. In New Caledonia, Wasmannia has benefited the population of scale insects, which produce honeydew WASMANNIA AUROPUNCTATA, or "little fire ant," has a big, potent sting. It has spread to the Galapagos, South Pacific and Africa. In Brief, continued from page 26 Muscles from Gene Therapy In a study that relied on mice, researchers from the University of Pennsylvania Medical Center have used gene therapy to treat age-related loss of muscle, which can deteriorate by one third in the elderly. To deliver the gene—an insulinlike growth factor—they used a virus that had its disease-causing abilities removed. The virus delivered the gene to muscle stem cells, which turned into functional muscle tissue. Older mice so treated experienced a 27 percent increase over untreated ones, as described in the December 22, 1998, issue of Proceedings of the National Academy of Sciences USA. Tracking Asteroid Killers Frank T. Kyte of the University of California at Los Angeles may have recovered a piece of one of the deadliest murder weapons ever: the 10-kilometer-wide (six-mile-wide) asteroid that wiped out the dinosaurs 65 million years ago. Kyte found the tiny fossil, about as big as the end of a felt-tip pen, while sifting North Pacific sediments that correspond to the time of the mass extinction. The fossil seems to be related to bodies from the asteroid belt. Meanwhile Peter H. Schultz of Brown University and his colleagues describe in Science glassy, bubble-filled slabs of rock in Argentina that formed in the rapid heat of an impact (photograph).
Schultz thinks a body one kilometer wide struck offshore 3.3 million years ago—just before a sudden ocean cooling and the disappearance of 36 animal genera. Doh! It's Not the Heat... Climatologists have solved the evaporation paradox, in which apparently less water was evaporating globally even though more rain was falling (increased precipitation is an outcome of a warmer earth). Marc B. Parlange of Johns Hopkins University and Wilfried H. Brutsaert of Cornell University say researchers had not taken into account ambient humidity and local moisture when measuring evaporation (determined from pans of water left out on a platform). Once they were worked into calculations, the paradox disappeared. More "In Brief" on page 30 Glassy evidence on plants. The excess honeydew reportedly promotes a fungus that covers the plant leaves, altering their photosynthesis. Especially troubling is that the fire ants appear to have few natural predators in their new habitats. Scientists emphasize, however, that most of the evidence of fire ant damage is anecdotal and that much work needs to NUCLEAR POLICY BLAST FALLOUT The antinuclear movement takes off in South Asia Mahatma Gandhi was murdered twice by Hindu nationalists, remarked an Indian scientist: physically in 1948 and spiritually in 1998. Now, nine months after nuclear blasts in India and Pakistan set people dancing in the streets last May, a dawning awareness of what an atomic bomb signifies—the tangible threat of nuclear holocaust—is muting the fervor. "An evil shadow has been cast on the subcontinent," grimly warns retired admiral L. Ramdas of the Indian navy. Because India and Pakistan share a border, missiles from either would take eight minutes or less to reach major cities—leaving no time to decide whether an incoming device is nuclear or not. The danger of retaliating with a nuclear weapon, and perhaps inadvertently triggering atomic war, is undeniable.
Pervez Hoodbhoy, a physicist at Quaid-E-Azam University in Islamabad, Pakistan, argues that because early-warning systems are untenable, India or Pakistan can protect their command-and-control centers only by distributing nuclear-armed aircraft or missiles over remote regions and providing local commanders with the ability to launch the devices. Such dispersal of authority is a frightening prospect because, as Ramdas points out, "on both sides of the border we have people who are irresponsible enough to start a war." M. V. Ramana, now at the Center for Energy and Environmental Studies at Princeton University, has calculated that a relatively small, 15-kiloton device (like the bomb dropped on Hiroshima) would kill between 150,000 and 800,000 people if it exploded over Bombay. Although such scenarios are dismissed by the governments of both nations, be done before they can draw any conclusions. Meanwhile ecologists warn that irreversible destruction might already be occurring. Says Walsh, who monitors elephants in Gabon: "That's the ironic thing. I've been worried about poaching and deforestation, and what could eventually kill these huge animals might be these tiny ants." —Alden M. Hayashi they are being taken seriously by many South Asians. Right after the blasts, a poll conducted by the newspaper Times of India in several Indian cities found that 91 percent of the respondents approved of the tests. But a similar poll conducted in October by The Hindu newspaper found that 41 percent of the respondents expressed "worry" about the May blasts. On August 6, Hiroshima Day, thousands of antinuclear protesters marched in Indian cities and hundreds in Pakistani ones. A good part of the change is owed to efforts by a few journalists, scientists and others to educate the public about nuclear issues.
Shortly after the blasts, more than 250 Indian scientists signed petitions protesting them; another antinuclear petition was signed by almost 50 retired personnel from the armed forces of India and Pakistan. English-language newspapers in both countries have carried articles pointing out the danger—to the owner—of nuclear weapons (just maintaining a stockpile can be tricky). And some activists have received requests to speak in remote villages, showing that it is not just the elite who are concerned about bombs. "People do listen to us," says A. H. Nayyar of Quaid-E-Azam University. "They come back and ask questions. They see there is sincerity of purpose." The activism can carry a penalty. In Pakistan, those speaking out against nuclear weapons have been denounced as traitors, and in June physicists at the Pakistan-India People's Forum for Peace and Democracy were beaten up by Islamic fundamentalists. In India, peace activists rallying in Bombay have been arrested, and Hindu fundamentalists disrupted an antinuclear conference in Bangalore. One physicist, T. Jayaraman of the Institute of Mathematical Sciences in Madras, was recently threatened with disciplinary action for his writings, which criticize the role played by India's Department of Atomic Energy in pushing for the blasts. A signature campaign organized over ANTI GRAVITY This Is Only a Test Because the vast majority of our readers have some experience with being in high school, we now pay homage to that great tradition that brought sweat to the palms of so many: the pop quiz. If you are one of those amazing devotees of the magazine who know its pages inside and out, the following should be fun for you. If you're a more casual reader, you will still have a good time. And if you picked up this issue by accident at a newsstand, buy it and leave it on your coffee table to impress people.
(Television star Paul Reiser did it in an episode of Mad about You. I do it, too, only I don't have to buy it. [Editors' note: He does now.]) Anyway, the true/false trivia questions that follow are based on material that appeared in Scientific American in 1998. 1. We proudly made it through all of 1998 without once publishing the word "Lewinsky." 2. We published an article that discussed the work of a scientist who had a metal nose. 3. We printed a photograph of a team of horses pulling a boat. 4. We printed a photograph of a boat pulling a team of horses. 5. We ran an x-ray image of a mosquito's knee. 6. We ran an x-ray of a bee's knees. Bonus essay question: Why only six questions? Extra bonus: Why does this quiz on 1998 appear in February rather than January? ANSWERS 1. Regrettably, this is false. (And now we've blown 1999, too.) The word "Lewinsky" appears in the November issue on page 110. So does a picture of Monica, in an article on the history of magnetic recording. Linda Tripp, however, is not pictured, nor does she appear in the August issue's article on lower back pain. 2. True. The article appears on page 116 of the July issue, and the noseless man in question is the great Danish astronomer Tycho Brahe. So how did he smell? Probably pretty bad: daily showers were still a few centuries off, and there was indeed something rotten in Denmark. 3. True. The photograph appears on page 63 of the February issue, in an article on Viking longships. Horses were put to the task of pulling longships over short stretches of land between bodies of water. 4. False. Unless you want to be a real stickler for Newton's third law. In that case, true, same picture. 5. True. The phase-contrast x-ray micrograph appears on page 73 of the December issue, in the article "Making Ultrabright X-rays." 6. False. The bee's knees?
Hey, it's 1999; this magazine no longer employs such antiquated verbiage, although the column "50, 100 and 150 Years Ago" still features vestigial usage such as "23 skidoo" and "Nobel prize for DDT." So, no, there were no bee's knees. In an article in the April issue on the images seen by early microscopists, however, we do publish a view of the head of a louse, on page 52. Another louse appears in the November issue on the bottom of page 107, standing in a car, sporting a silly little mustache and planning world domination. Bonus answer: We ran out of space for anything more. Extra bonus answer: We accuse the Y2K bug, thereby laying claim to being among the first to blame it for something that has actually happened. —Steve Mirsky
GOLD CONTACT PAD QUANTUM CASCADE LASER can produce multiwavelength light. Voltage is applied to the raised surface to generate light. they emit a photon with each jump. The thickness of the layers and barriers controls the energy of the emitted photons, which is related in a straightforward way to their wavelength.
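That "straightforward" relation is simply lambda = h*c / E. As a quick illustration (the 0.15-electron-volt transition energy below is an assumed round number, typical of the mid-infrared band where quantum cascade lasers operate, not a figure from the article):

```python
# Photon wavelength from transition energy: lambda = h*c / E.
# The 0.15 eV energy is an assumed mid-infrared example.
H_EV_S = 4.135667e-15    # Planck's constant in eV*s
C_M_S = 2.99792458e8     # speed of light in m/s

def wavelength_um(energy_ev: float) -> float:
    """Wavelength in micrometers of a photon with the given energy in eV."""
    return H_EV_S * C_M_S / energy_ev * 1e6

print(wavelength_um(0.15))   # about 8.3 micrometers: mid-infrared
```

Thicker wells mean more closely spaced energy levels, hence smaller transition energies and longer wavelengths, which is why layer thickness alone tunes the output color.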
The new multiwavelength version of the quantum cascade laser consists, like the original type, of a tiny chip made of alternating layers of different materials laid down one atomic layer at a time by molecular-beam epitaxy. But the thicknesses of the layers—aluminum indium arsenide four atoms thick and indium gallium arsenide 18 atoms thick—were carefully selected to control which electronic energy transitions could occur. Quantum interactions between the wells of the active material (the indium gallium arsenide) allow the emergence of "minibands"—groups of energy states that electrons can occupy as they cascade through the wells. Capasso's group engineered the material so that electrons moving between two minibands could make either one of two possible state transitions. Each transition produced light at a different wavelength, as expected, even at room temperature. And the group saw a third wavelength emerge as a bonus when the laser was operated at high power and cooled to 80 kelvins. The technical description of the device was published in Nature in the November 26, 1998, issue. The quantum cascade laser and the new multiwavelength version operate at wavelengths that are useful for distinguishing chemicals. Capasso says that with some refinements, he is confident a laser emitting at two distinct wavelengths could be built that would be "a definite plus" for the analytical technique known as lidar (light detection and ranging). In lidar, laser beams of two different wavelengths are sent into a mixture of gases, and the amount of light scattered back is measured. If one of the wavelengths is chosen so that it is absorbed by a chemical in the mixture, the attenuation of that wavelength, relative to the other wavelength, will provide a sensitive measure of the concentration of the chemical.
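The two-wavelength comparison is just the Beer-Lambert absorption law applied over the round trip. Here is a sketch of the arithmetic with made-up numbers; the cross section, range and returned powers are illustrative assumptions, not values from the article:

```python
import math

# Differential-absorption arithmetic behind two-wavelength lidar.
# P_on: returned power at the wavelength the target gas absorbs;
# P_off: returned power at the nearby reference wavelength.
dsigma = 5.0e-22    # differential absorption cross section, cm^2 (assumed)
R = 2.0e5           # range to the scattering volume, cm (here 2 km)
P_on, P_off = 0.62, 1.00   # relative returned powers (assumed)

# Beer-Lambert over the round trip (hence the factor of 2) gives the
# average gas concentration along the path, in molecules per cm^3.
n = math.log(P_off / P_on) / (2.0 * dsigma * R)
print(n)   # roughly 2.4e15 molecules per cm^3
```

Because only the ratio of the two returns enters, fluctuations in laser power, scattering and atmospheric transmission that affect both wavelengths equally cancel out, which is what makes the measurement sensitive.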
Having a single small laser device that could produce both the wavelengths could make for smaller lidar devices and related instruments for monitoring pollution, for controlling industrial processes and for making medical diagnoses. The Bell Labs laser certainly seems to be on the right wavelength. —Tim Beardsley in Washington, D.C. GENETIC MEDICINE INNOVATIVE IMMUNITY A biological trick offers promise for making vaccines from RNA Vaccines are among the most cost-effective medicines. Yet for many serious infectious diseases, vaccines have proved impossible to create. A group of researchers at the University of Vienna has recently demonstrated a technique for making a vaccine that in mice provided potent protection against a viral disease, tick-borne encephalitis. The vaccine represents a novel way of using RNA, a molecule that cells use to transfer genetic information from their nuclei to sites where proteins are assembled. If the idea works for other conditions and in other animals, it could give vaccine manufacturers a powerful weapon. Most vaccines now in use consist of either killed or genetically attenuated microbes. In recent years, however, immunologists have learned that plasmids, tiny loops of "naked" DNA, can by themselves provoke immunity if they incorporate a sequence encoding a pathogen's protein. When the DNA enters an animal's cells, it causes them to manufacture the protein, which in turn stimulates the immune system. Compared with traditional vaccines, DNA vaccines are easy and inexpensive to make, because the process does not require cultivation of bacteria or viruses. But promising as they are, DNA vaccines have not proved as potent in clinical trials as might be desired, because recipient cells produce the pathogen's protein only for as long as the administered DNA remains functional. Looking for a different trick, Christian W.
Mandl and his colleagues synthesized in the laboratory RNA corresponding to almost the whole genome of the virus that causes tick-borne encephalitis. This genome, like that of many other viruses, consists of RNA. Although it is missing part of the viral genome, the synthetic RNA could still replicate and was infectious when put in cells. But it replicated much more slowly than the RNA of the whole viral genome. Mandl and his colleagues then deposited the synthetic attenuated virus RNA onto microscopic gold beads and used a "gene gun" to shoot the beads into the skin of mice. The gene gun has been widely used in studies on DNA vaccines. GENE GUN, widely used in vaccine research, relies on pressurized helium to fire DNA- or RNA-covered gold pellets one micron wide through the skin of animals. As a result of the treatment, the mice developed a strong immunity to tick-borne encephalitis, presumably because the RNA caused a localized attenuated infection that then fired up the animals' immune cells. Remarkably, because the RNA could replicate, the amount needed to produce immunity was about one thousandth the amount of naked DNA typically needed for protection. The results were reported in December 1998 in Nature Medicine.
Most previous vaccine work with RNA viruses has employed them as carriers that produce a protein of a different pathogen in cells. Other efforts have focused simply on short RNA sequences that encode pathogens' genes. But this approach, like DNA vaccines, limits the amount of the pathogen's protein that recipient cells can produce. Synthetic infectious RNA corresponding to the (almost) complete genome of an RNA virus is an original twist, according to Margaret A. Liu of Chiron Technologies in Emeryville, Calif. Chiron holds patents on molecular-biology techniques for making RNA viruses that cause cells to overproduce desired RNA sequences. A practical vaccine based on Mandl's idea might be potent, because it would replicate in the recipient. Moreover, it could have safety advantages. Attenuated viruses cultured in cells occasionally revert to a fully pathogenic form, but there is no obvious way for reversion to occur with synthetic RNA. The chemical is also less infectious than whole virus. And RNA, unlike DNA, cannot integrate itself into an animal's chromosomes, a phenomenon that has been observed in cell cultures with DNA vaccines. On the other hand, points out David B. Weiner of the University of Pennsylvania, RNA breaks down rather easily, so it might be hard to use Mandl's technique to make practical vaccines. But Mandl says his preparation works well even after six months in storage. He thinks the idea might be particularly applicable to yellow fever and polio, which are caused by RNA viruses that operate like tick-borne encephalitis (it would not work against viruses based on DNA or against retroviruses, such as HIV). Mandl acknowledges that RNA of the needed complexity is currently too expensive for wide-scale use. But mass manufacturing can bring prices down, and vaccine developers badly need new ideas. —Tim Beardsley in Washington, D.C.
44 Scientific American February 1999
CYBER VIEW
Private Parts
One of presidential aide Ira Magaziner's last acts before leaving the White House was to hand over a report on cyberspace issues. It recommends greater consumer protection and privacy rights but advises leaving them to industry self-regulation rather than instituting government intervention. The report follows a series of similar recommendations, such as a two-year moratorium on Internet taxation designed to keep the Internet free of regulation while it grows. Regulation has never been popular on the Net, which tends to be most vocally populated by people who dislike authority and welcome freedom. Herein lies the paradox: a chief reason why Netizens want cryptography deregulated is to protect privacy. The Clinton administration, on the other hand, stubbornly clings to regulating cryptography, while saying that allowing the market to regulate itself is the best way to protect privacy—the one area where at least some Netizens are persuaded that regulation is needed. In clinging to self-regulation for privacy, the U.S. is out of step—not just with the Net but with most other countries and with the American public, which in polls cites privacy concerns as a serious deterrent to the growth of electronic commerce. Internet users in the U.S. would be free to sit around and debate all this endlessly if it weren't for one thing: in October the European privacy directive came into force. This legally binding document requires all European member states to pass legislation meeting the directive's minimum standards. The supporting bill in Britain, for example, has already been passed by Parliament and received Royal Assent; no starting date has been announced, but it is presumed to be early in 1999. The kicker in the directive and supporting legislation, as far as the U.S.
is concerned: besides giving European consumers much greater privacy rights, the legislation prohibits member states from transferring data to countries that do not have equivalent protection. Privacy activists have been warning the U.S. for some time that because the U.S. has no such legal protection, it is entirely possible that U.S. companies may find themselves prohibited from transferring personal data for processing, either to business partners or to their own overseas subsidiaries. Nevertheless, the administration still clings to the idea (and the recent report states so clearly) that market pressures will force industries to regulate themselves. A white paper written by the Online Privacy Alliance (OPA), a coalition boasting members such as America Online, Bank of America, Bell Atlantic, IBM, EDS, Equifax and the Direct Marketing Association, outlines the plan. Publicly announced corporate policies and industry codes of conduct would be backed by the enforcement authority of the Federal Trade Commission and state and local agencies and by laws to protect the privacy of specific types of information. They will add up to a "layered approach" that will create what is sometimes referred to as a safe harbor. The OPA insists it will produce the same level of protection as the European directive. As the paper points out, many privacy laws already exist in the U.S., starting with the Fourth Amendment and leading up to the 1998 Children's Online Privacy Protection Act, which directs the FTC to regulate the personal information obtained by commercial sites from anyone younger than 13. No such law is proposed for adult on-line users, who arguably have as much or more to lose, although schemes that stamp Web sites with a seal of approval (from organizations such as TRUSTe or the Better Business Bureau) do exist to try to give the Web some consistent privacy standards.
The paper's conclusion is that the U.S. doesn't need privacy regulation. Simon Davies, director of Privacy International and a visiting fellow at the London School of Economics, disagrees. "When the U.S. government approaches this issue, they approach it as if it were a domestic affair," he says. "Safe harbor is condemned by everybody because it lacks all the primary requirements for effective protection." Under the self-regulatory model, customers must do all the legwork: they have to do the complaining and the investigating and muster the proof that their privacy has been invaded. Any arbitrator is hampered in such a regime, because companies are notoriously reluctant to give third parties access to internal records that may be commercially sensitive. Meanwhile, Davies says, companies are "pathologically unable to punish themselves," so a customer seeking redress is unlikely to find any without that third party. Worse than that, a lack of effective regulation means that even if companies successfully regulate themselves, there are no curbs on government invasions of privacy. That is probably the greater concern, especially because of projects under consideration, such as putting all medical data on-line and asking banks to notify government officials if customers display a change in their banking habits. The U.S. may be in for a shock if Europe, flexing its newly unified muscles in a globally networked world, refuses to budge and companies find themselves unable to trade because of data flow problems. Davies, for one, thinks this scenario is all too likely. "They still think that because they're American they can cut a deal, even though they've been told by every privacy commissioner in Europe that safe harbor is inadequate," he remarks with exasperated amusement.
"They fail to understand that what has happened in Europe is a legal, constitutional thing, and they can no more cut a deal with the Europeans than the Europeans can cut a deal with your First Amendment." —Wendy M. Grossman
WENDY M. GROSSMAN is a freelance writer based in London. She described methods to foil electronic eavesdropping of computer monitors in the December 1998 issue.
Supersoft X-ray Stars and Supernovae
Several years ago astronomers came across a new type of star that spews out unusually low energy x-rays. These so-called supersoft sources are now thought to be white dwarf stars that cannibalize their stellar companions and then, in many cases, explode.
by Peter Kahabka, Edward P. J. van den Heuvel and Saul A. Rappaport
DAVID AND GOLIATH STARS form a symbiotic binary system: a white dwarf and a red giant star in mutual orbit. The dwarf, with its intense gravity, is slurping off the outer layers of the giant.
The pilfered gas goes into an accretion disk around the dwarf and eventually settles onto its surface, whereupon it can ignite nuclear fusion and generate a large quantity of low-energy x-rays.
Since the 1930s astronomers have known that ordinary stars shine because of nuclear fusion deep in their interior. In the core of the sun, for example, 600 million tons of hydrogen fuse into helium every second. This process releases energy in the form of x-rays and gamma rays, which slowly wend their way outward through the thick layers of gas. By the time the radiation reaches the surface of the star, it has degraded into visible light. Recently, however, researchers have discovered a new class of stars in which the nuclear fusion takes place not in the deep interior but in the outer layers just below the surface. These stars appear to be white dwarfs—dense, burned-out stars that have exhausted their nuclear fuel—in orbit around ordinary stars. The dwarfs steal hydrogen gas from their companions, accumulate it on their surface and resume fusion. The result is a torrent of x-rays with a distinctive "soft" range of wavelengths; such stars are known as luminous supersoft x-ray sources. As the dwarfs gain weight, they eventually grow unstable, at which point they can collapse into an even denser neutron star or explode. The disruption of white dwarfs has long been conjectured as the cause of one sort of supernova explosion, called type Ia. With the discovery of the supersoft sources, observers have identified for the first time a class of star system that can detonate in this way. Type Ia supernovae have become important as bright "standard candles" for measuring distances to distant galaxies and thereby the pace of cosmic expansion. Much of the lingering uncertainty in estimates of the age and the expansion rate of the universe is connected to astronomers' ignorance of what gives rise to these supernovae. Supersoft sources may be the long-sought missing link.
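The solar fusion figure above hangs together arithmetically. A minimal sketch, using standard constants and the standard result that hydrogen fusion converts about 0.7 percent of the fuel's mass to energy (a figure the article itself quotes later):

```python
# Sanity check: 600 million metric tons of hydrogen fused per second,
# with ~0.7 percent of the rest mass released as energy (E = m c^2),
# should reproduce the sun's luminosity (~3.8e26 watts).
C = 2.998e8                 # speed of light, m/s
MASS_RATE = 600e6 * 1000    # 600 million tons/s, converted to kg/s
EFFICIENCY = 0.007          # fraction of rest mass released by H -> He fusion

power = EFFICIENCY * MASS_RATE * C**2
print(f"{power:.2e} W")  # ~4e26 W, about one solar luminosity
```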
SOFT AND HARD x-ray sources are distinguished by their spectra (plotted as energy in kiloelectron volts, or equivalently wavelength in nanometers), as measured by the ROSAT orbiting observatory. A typical supersoft source (top) emits x-rays with a fairly low energy, indicative of a comparatively cool temperature of 300,000 degrees Celsius. A typical hard x-ray source (bottom) is 100 times hotter and therefore emits higher-energy x-rays. In both cases, the intrinsic spectrum of the source (red curves) is distorted by the response of the ROSAT detector (gray curves) and by interstellar gas absorption.
X-RAY COLOR IMAGE (left) shows how a nearby minigalaxy, the Large Magellanic Cloud, might appear to someone with x-ray vision. A red color denotes lower-energy (or, equivalently, longer-wavelength) radiation; blue means higher energy (shorter wavelength). Supersoft sources stand out as red or orange dots.
The story of the supersoft sources began with the launch of the German x-ray satellite ROSAT in 1990. This orbiting observatory carried out the first complete survey of the sky in soft x-rays, a form of electromagnetic radiation that straddles ultraviolet light and the better-known "hard" x-rays. Soft x-rays have wavelengths 50 to 1,000 times smaller than those of visible light—which means that the energy of their photons (the unit x-ray astronomers prefer to think in) is between about 0.09 and 2.5 kiloelectron volts (keV). Hard x-rays have energies up to a few hundred keV. With the exception of the National Aeronautics and Space Administration's orbiting Einstein Observatory, which covered the energy range from 0.2 to 4.0 keV, previous satellites had concentrated on the hard x-rays.
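The correspondence between photon energy and wavelength quoted above can be checked with the standard relation E = hc/λ, where hc ≈ 1.23984 keV·nm. A quick sketch, with the visible-light wavelengths as rough illustrative values:

```python
# Photon energy (keV) <-> wavelength (nm) via E = hc / lambda.
HC_KEV_NM = 1.23984  # Planck constant times speed of light, in keV*nm

def wavelength_nm(energy_kev: float) -> float:
    """Wavelength (nm) of a photon with the given energy (keV)."""
    return HC_KEV_NM / energy_kev

# Soft x-ray band quoted in the article: about 0.09 to 2.5 keV
soft_long = wavelength_nm(0.09)   # ~14 nm
soft_short = wavelength_nm(2.5)   # ~0.5 nm

# Compared with visible light (roughly 500-700 nm), soft x-ray
# wavelengths come out about 50 to 1,000 times smaller.
print(700 / soft_long, 500 / soft_short)
```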
Almost immediately the ROSAT team, led by Joachim Trümper of the Max Planck Institute for Extraterrestrial Physics near Munich, noticed some peculiar objects during observations of the Large Magellanic Cloud, a small satellite galaxy of the Milky Way. The objects emitted x-rays at a prodigious rate—some 5,000 to 20,000 times the total energy output of our sun—but had an unexpectedly soft spectrum. Bright x-ray sources generally have hard spectra, with peak energies in the range of 1 to 20 keV, which are produced by gas at temperatures of 10 million to 100 million kelvins. These hard x-ray sources represent neutron stars and black holes in the process of devouring their companion stars [see "X-ray Binaries," by Edward P. J. van den Heuvel and Jan van Paradijs; Scientific American, November 1993]. But the soft spectra of the new stars—with photon energies a hundredth of those in other bright x-ray sources—implied temperatures of only a few hundred thousand kelvins. On an x-ray color picture, the objects appear red, whereas classical, hard x-ray sources look blue [see illustration at bottom left]. In that image the supersoft star CAL 87 seems green because an intervening cloud of hydrogen alters its true color. (Some red dots are actually sunlike stars in the foreground.) The x-ray view is rather different from an ordinary photograph of the same area (right).
The reason the supersoft sources had not been recognized before as a separate class of star is that the earlier x-ray detectors were less sensitive to low energies. In fact, after the ROSAT findings, researchers went back through their archives and realized that two of the sources had been discovered 10 years earlier by Knox S. Long and his colleagues at the Columbia University Astrophysics Laboratory (CAL), using the Einstein Observatory.
These sources, named CAL 83 and CAL 87, had not been classified as distinct from other strong sources in the Large Magellanic Cloud, although the Columbia team did remark that their spectra were unusually soft.
Back of the Envelope
At the time, Anne P. Cowley and her co-workers at Arizona State University surmised that CAL 83 and 87 were accreting black holes, which often have softer spectra than neutron stars do. This suggestion seemed to receive support in the 1980s, when faint stars were found at the locations of both sources. The stars' brightness oscillated, a telltale sign of a binary-star system, in which two stars are in mutual orbit. In 1988 an international observing effort led by Alan P. Smale of University College London found that the brightness of CAL 83 fluctuated with a period of just over one day. A similar project led by Tim Naylor of Keele University in England obtained a period of 11 hours for CAL 87. These visible companion stars are the fuel for the hypothesized black holes. Assuming they have not yet been decimated, the various measurements indicated that they weighed 1.2 to 2.5 times as much as the sun. But the ROSAT observations suddenly made this explanation very unlikely. The sources were much cooler than any known black-hole system. Moreover, their brightness and temperature revealed their size. According to basic physics, each unit area of a star radiates an amount of energy proportional to the fourth power of its temperature. By dividing this energy into the total emission of the star, astronomers can easily calculate its surface area and, assuming it to be spherical, its diameter. It turns out that CAL 83, CAL 87 and the other Magellanic Cloud sources each have a diameter of 10,000 to 20,000 kilometers (6,000 to 12,000 miles), the size of a white dwarf star. They are therefore 500 to 1,000 times as large as a neutron star or the "horizon" at the edge of a stellar-mass black hole.
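The size estimate described above follows from the blackbody relation L = 4πR²σT⁴. A minimal sketch, with a luminosity and temperature chosen as illustrative values within the ranges the article quotes rather than measurements of any particular star:

```python
import math

# Back-of-the-envelope diameter of a supersoft source from its
# luminosity and temperature, assuming blackbody emission:
#   L = 4 * pi * R^2 * sigma * T^4  =>  R = sqrt(L / (4 pi sigma T^4))
SIGMA = 5.670374e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26      # solar luminosity, W

luminosity = 1.0e4 * L_SUN   # ~10,000 times the sun's output (illustrative)
temperature = 5.0e5          # a few hundred thousand kelvins (illustrative)

radius = math.sqrt(luminosity / (4 * math.pi * SIGMA * temperature**4))
diameter_km = 2 * radius / 1000
print(f"diameter ~ {diameter_km:,.0f} km")  # lands in the 10,000-20,000 km range
```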
When Trümper first described the supersoft sources at a conference at the Santa Barbara Institute for Theoretical Physics in January 1991, several audience members quickly made this calculation on the proverbial back of the envelope. Some conference participants, among them Jonathan E. Grindlay of Harvard University, suggested that the sources were white dwarfs that gave off x-rays as gas crashed onto their surface—much as hard x-ray sources result from the accretion of matter onto a neutron star or into a black hole. Others, including Trümper, his colleagues Jochen Greiner and Günther Hasinger, and, independently, Nikolaos D. Kylafis and Kiriaki M. Xilouris of the University of Crete, proposed that the sources were neutron stars that had somehow built up a gaseous blanket some 10,000 kilometers thick. In either case, the ultimate source of the energy would be gravitational. Gravity would pull material toward the dwarf or neutron star, and the energy of motion would be converted to heat and radiation during collisions onto the stellar surface or within the gas. Both models seemed worth detailed study, and two of us (van den Heuvel and Rappaport), collaborating with Dipankar Bhattacharya of the Raman Research Institute in
COMPACT STARS have colossal escape velocities. A typical white dwarf (left) packs the mass of the sun into the volume of a terrestrial planet. To break free of its gravity, an object must travel at some 6,000 kilometers per second. This is also approximately the speed that a body doing the reverse trip—falling onto the dwarf from afar—would have on impact. Denser stars, such as neutron stars with the same mass (center), have an even mightier grip. The densest possible star, a black hole, is defined by a surface, or "horizon," from which the escape velocity equals the speed of light (right).
Bangalore, India, were lucky enough to be able to start such studies immediately. The conference was part of a half-year workshop at Santa Barbara, where several dozen scientists from different countries had the time to work together on problems related to neutron stars. It soon became clear that neither model worked. The supersoft sources emit about the same power as the brightest accreting neutron stars in binaries. Yet gas collisions onto neutron stars are 500 to 1,000 times as forceful as the same process on white dwarfs, because the effect of gravity at the surface of a neutron star is that much greater. (For bodies of the same mass, the available gravitational energy is inversely proportional to the radius of the body.) Thus, for a dwarf to match the output of a neutron star, it would need to sweep up material at 500 to 1,000 times the rate. In such a frenetic accretion flow—equivalent to several Earth-masses a year—the incoming material would be so dense that it would totally absorb any x-rays. Neutron stars with gaseous blankets also ran into trouble. Huge envelopes of gas (huge, that is, with respect to the 10-kilometer radius of the neutron star) would be unstable; they would either collapse or be blown away in a matter of seconds or minutes. Yet CAL 83 and CAL 87 have been shining for at least a decade.
ON/OFF EMISSION of supersoft star RXJ0513.9-6951 is a sign that it is poised between two different modes of behavior. When it shines in visible light (left), its x-ray output is low (right), and vice versa. (The lower x-ray counts are upper limits.) The star is at the border between a pure supersoft source (which would emit only x-rays) and a white dwarf surrounded by thick gas (which would emit only visible light). Slight fluctuations in the rate of gas intake switch the star from one behavior to the other.
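The escape velocities quoted in the compact-stars caption follow from v = √(2GM/R). A quick check, using rough textbook masses and radii (one solar mass in each case) rather than figures taken from the article:

```python
import math

# Escape velocity from a compact star: v_esc = sqrt(2*G*M / R).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
C = 2.998e8          # speed of light, m/s

def escape_velocity(mass_kg: float, radius_m: float) -> float:
    return math.sqrt(2 * G * mass_kg / radius_m)

v_dwarf = escape_velocity(M_SUN, 6.4e6)    # white dwarf, roughly Earth-sized
v_neutron = escape_velocity(M_SUN, 1.0e4)  # neutron star, ~10 km radius

print(f"white dwarf: ~{v_dwarf/1000:,.0f} km/s")   # near the caption's 6,000 km/s
print(f"neutron star: ~{v_neutron/C:.2f} of light speed")
```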
Indeed, the ionized interstellar gas nebula surrounding CAL 83 took many tens of thousands of years to create.
LIFE CYCLE of a supersoft star (sequence at top) begins with an unequal binary-star system and ends with a type Ia supernova explosion. The supersoft phase can take one of three forms, depending on the nature of the companion star. If it is an ordinary star in a tight orbit, it can overflow its Roche lobe and cede control of its outer layers to the white dwarf. This case is depicted in the fifth frame of the sequence (5a). The lower diagrams show the alternatives. If the companion is a red giant star of sufficient size, it also overflows its Roche lobe (5b). Finally, if it is a red giant with a smaller size or a wider orbit, it can power a supersoft source with its strong winds (5c). Not all supersoft sources blow up, but enough do to account for the observed rate of supernovae.
Nuclear Power
After weeks of discussing and evaluating models, none of which worked either, we realized the crucial difference between accretion of material onto neutron stars or black holes and accretion onto white dwarfs. The former generates much more energy than nuclear fusion of the same amount of hydrogen, whereas the latter produces much less energy than fusion. Of the energy inherent in mass (Albert Einstein's famous E = mc²), fusion releases 0.7 percent. Accretion onto a neutron star, however, liberates more than 10 percent; into a black hole, up to 46 percent before the material disappears into it. On the other hand, accretion onto a white dwarf, with its comparatively weak gravity, liberates only about 0.01 percent of the inherent energy. Therefore, on white dwarfs, nuclear fusion is potentially more potent than accretion. If hydrogen accumulated on the surface of a white dwarf and somehow started to "burn" (that is, undergo fusion), only about 0.03 Earth-mass would be needed a year to generate the observed soft x-ray luminosity.
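The efficiency comparison above can be sketched with the standard estimate that accretion releases a fraction GM/(Rc²) of the rest energy. The masses and radii below are illustrative round numbers, so the results land near, not exactly on, the percentages quoted in the text:

```python
# Energy released per unit of accreted mass, as a fraction of the rest
# energy E = m c^2. Accretion liberates roughly G*M / (R * c^2); hydrogen
# fusion liberates about 0.7 percent.
G = 6.674e-11
M_SUN = 1.989e30
C = 2.998e8

FUSION_EFFICIENCY = 0.007  # hydrogen -> helium, ~0.7% of mc^2

def accretion_efficiency(mass_kg: float, radius_m: float) -> float:
    return G * mass_kg / (radius_m * C**2)

eff_wd = accretion_efficiency(M_SUN, 1.0e7)        # white dwarf: ~0.01%
eff_ns = accretion_efficiency(1.4 * M_SUN, 1.0e4)  # neutron star: >10%

print(f"white dwarf accretion: {eff_wd:.3%}")
print(f"neutron star accretion: {eff_ns:.0%}")
# On a white dwarf, fusion wins by well over an order of magnitude:
print(f"fusion / white-dwarf accretion ~ {FUSION_EFFICIENCY / eff_wd:.0f}x")
```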
Because of the lower density of inflowing matter, the x-rays would be able to escape. Stable nuclear burning of inflowing matter would account for the paradoxical brightness of the supersoft sources. But is it really possible? Here we were lucky. Just when we were discussing this issue, Ken'ichi Nomoto of the University of Tokyo arrived in Santa Barbara. He had already been trying to answer the very same question in order to understand another phenomenon, nova explosions—outbursts much less energetic than supernovae that cause a star suddenly to brighten 10,000-fold but do not destroy it. Novae always occur in close binaries that consist of a white dwarf and a sunlike star. Until the discovery of supersoft sources, they were the only known such close binaries [see "The Birth and Death of Nova V1974 Cygni," by Sumner Starrfield and Steven N. Shore; Scientific American, January 1995]. For over a decade, Nomoto and others had been improving on the pioneering simulations by Bohdan Paczyński and Anna Żytkow, both then at the Nicolaus Copernicus Astronomical Center in Warsaw. According to these analyses, hydrogen that has settled onto the surface of a dwarf can indeed burn. The style of burning depends on the rate of accretion. If it is sufficiently low, below 0.003 Earth-mass a year, fusion is spasmodic. The newly acquired hydrogen remains passive, often for thousands of years, until its accumulated mass exceeds a critical value, at which point fusion is abruptly ignited at its base. The ensuing thermonuclear explosion is visible as a nova. If the accretion rate is slightly higher, fusion is cyclic but not explosive. As the rate increases, the interval between burning cycles becomes shorter and shorter, and above a certain threshold value, stable burning sets in.
For white dwarfs of one solar mass, this threshold is about 0.03 Earth-mass a year. In the simulations, fusion generates exactly the soft x-ray luminosity observed in the supersoft sources. If the rate is still higher, above 0.12 Earth-mass a year, the incoming gas does not settle onto the surface but instead forms an extended envelope around the dwarf. Steady burning continues on the surface, but the thick envelope degrades the x-rays into ultraviolet and visible light. Recent calculations have shown that the radiation is so intense that it exerts an outward pressure on gas in the envelope, causing part of it to stream away from the star in a stellar wind. If the accretion rate hovers around 0.12 Earth-mass a year, the system may alternate between x-ray and visible phases. Exactly this type of behavior has been found in the supersoft source known as RXJ0513.9-6951, which was discovered by Stefan G. Schaeidt of the Max Planck Institute. It gives off x-rays for weeks at a time, with breaks of several months. This on/off emission puzzled astronomers until 1996, when Karen A. Southwell and her colleagues at the University of Oxford noticed that the visible counterpart to this star fluctuated, too. When the visible star is faint, the x-ray source is bright, and vice versa [see top illustration on opposite page]. The system also features two high-speed jets of matter flowing out in opposite directions at an estimated 4,000 to 6,000 kilometers per second. Such jets are common where an accretion disk dumps more material on the star than it can absorb.
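The burning regimes described above can be summarized as a toy classifier using the thresholds quoted for a roughly one-solar-mass dwarf. This is a simplification: the real boundaries shift with the dwarf's mass, as the accompanying chart shows.

```python
# Hydrogen-burning regimes on a ~1-solar-mass white dwarf, keyed to the
# approximate accretion-rate thresholds quoted in the article
# (all rates in Earth-masses per year).
def burning_regime(accretion_rate: float) -> str:
    if accretion_rate < 0.003:
        return "spasmodic flashes (classical nova)"
    elif accretion_rate < 0.03:
        return "cyclic but non-explosive burning"
    elif accretion_rate < 0.12:
        return "steady burning (supersoft x-ray source)"
    else:
        return "steady burning inside an extended envelope (visible light)"

for rate in (0.001, 0.01, 0.05, 0.2):
    print(rate, "->", burning_regime(rate))
```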
The excess squirts out in a direction perpendicular to the disk, where there is no inflowing matter to block it. The outflow velocity is expected to be approximately the same as the escape velocity from the surface of the star. In RXJ0513.9-6951 the inferred speed nearly equals the escape velocity from a white dwarf—further confirmation that the supersoft sources are white dwarfs.
LIFE CYCLE sequence, frame by frame: a pair of ordinary stars burn hydrogen in their cores; one exhausts the fuel in its core and becomes a red giant star; the orbit tightens, and the giant envelops the other star; the giant sheds its outer layers and becomes a white dwarf; the dwarf steals gas from the other star and emits soft x-rays; the dwarf reaches critical mass and explodes.
STYLE OF NUCLEAR FUSION on the surface of a white dwarf depends on how massive the dwarf is (horizontal axis, in solar masses) and how fast it is devouring its companion star (vertical axis). If the accretion rate is sufficiently low, fusion (which astronomers, somewhat misleadingly, call "burning") occurs in spurts, either gently or explosively. Otherwise it is continuous. This chart shows that phenomena once thought to be distinct—such as novae and supersoft sources—are closely related.
Soft-Boiled Star
Not every binary system can supply material at the rates required to produce a supersoft source. If the companion star is less massive than the white dwarf, as is typically observed in nova-producing systems, the fastest that material can flow in is 0.0003 Earth-mass a year. This limit is a consequence of the law of conservation of orbital angular momentum. As the small companion star loses mass, its orbit widens and the flow rate stabilizes. For the rates to be higher, the donor star must have a mass larger than that of the dwarf. Then the conservation of angular momentum causes the orbit to shrink as a result of the mass transfer.
The stars come so close that they begin a gravitational tug-of-war for control of the outer layers of the donor. Material within a certain volume called the Roche lobe remains under the sway of the donor's gravity, while material beyond it is stripped off by the dwarf. Perversely, the donor abets its own destruction. While it sheds mass at the surface, the amount of energy generated by fusion in the core remains largely unaffected. The continued heating from below exerts pressure on the outer layers to maintain the original shape of the star. This pressure replenishes the material ripped off the dwarf, much as an overflowing pot of soup on a hot burner will continue to pour scalding water onto the stove. The situation stabilizes only when the effects of mass loss are felt by the core itself. For a star originally of two solar masses, the return to equilibrium—and thus the cessation of supersoft emission—takes seven million years after the onset of plundering. By this time the star has shrunk to a fifth of its initial mass and become the lesser star in the system. The average accretion rate onto the dwarf was about 0.04 Earth-mass a year. Following this reasoning, we predicted in 1991 that many supersoft sources would be white dwarfs in tight orbits (with periods of less than a few days) around a companion star whose original mass was 1.2 to 2.5 solar masses. In fact, CAL 83 and 87 are precisely such systems. Since 1992 orbital periods for four more supersoft sources have been measured; all were less than a few days. The explanation may also apply to a class of novalike binary systems, V Sagittae stars, whose oscillating brightness has perplexed astronomers since the turn of the century. Last year Joseph Patterson of Columbia University and his collaborators, and, independently, Joao E. Steiner and Marcos P. Diaz of the National Astrophysical Laboratory in Itajubá, Brazil, demonstrated that the prototype of this class has the appropriate mass and orbital period. 
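The gravitational tug-of-war described above is usually quantified with Eggleton's fitting formula for the Roche-lobe radius, a standard approximation that is not given in the article itself. A sketch, where q is the donor's mass divided by the dwarf's:

```python
import math

# Eggleton's (1983) approximation for the radius of the donor's Roche
# lobe as a fraction of the orbital separation:
#   R_L / a = 0.49 q^(2/3) / (0.6 q^(2/3) + ln(1 + q^(1/3)))
def roche_lobe_fraction(q: float) -> float:
    q23 = q ** (2.0 / 3.0)
    return 0.49 * q23 / (0.6 * q23 + math.log(1.0 + q ** (1.0 / 3.0)))

# Example: a 2-solar-mass donor paired with a 1-solar-mass white dwarf
# (q = 2) controls a lobe somewhat under half the separation; material
# that swells beyond it is stripped off by the dwarf.
print(f"{roche_lobe_fraction(2.0):.2f}")
```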
There is one other group of star systems that could give rise to supersoft sources: so-called symbiotic binaries, in which the white dwarf is in a wide orbit about a red giant star. Red giants are willing donors. Bloated by age, they have relatively weak surface gravity and already discharge matter in strong stellar winds. In 1994 one of us (Kahabka), Hasinger and Wolfgang Pietsch of the Max Planck Institute discovered a supersoft symbiotic binary in the Small Magellanic Cloud, another satellite galaxy of the Milky Way. Since then, a further half-dozen such sources have been found. Some supersoft sources are harder to recognize because their accretion rate varies with time. One source in our galaxy alternates between x-ray and visible emission on a cycle of 40 years, as seen on archival photographic plates. A few objects, such as Nova Muscae 1983 and Nova Cygni 1992, combine nova behavior with supersoft emission, which can be explained by a years-long period of sedate "afterburning" between eruptions.
The Seeds of Supernovae
The companion masses required of supersoft sources with short orbital periods imply that they are relatively young systems (compared with the age of our galaxy). Stars of the inferred mass live at most a few billion years and are always located in or near the youthful central plane of the galaxy. Unfortunately, that location puts them in the region thick with interstellar clouds, which block soft x-rays. For this reason, the observed population is only the tip of the iceberg. Extrapolating from the known number of supersoft sources, we have estimated that the total number in our galaxy at any one time is several thousand. A few new ones are born every 1,000 years, and a few others die. What happens as they pass away? The fusion of matter received from the companion clearly causes the white dwarf to grow in mass.
It could reach the Chandrasekhar limit of about 1.4 solar masses, the maximum mass a white dwarf can have. Beyond this limit, the quantum forces that hold up the dwarf falter. Depending on the initial composition and mass of the dwarf, there are two possible outcomes: collapse to a neutron star or destruction in a nuclear fireball. Dwarfs that either lack carbon or are initially larger than 1.1 solar masses collapse. A number of theorists—Ramon Canal and Javier Labay of the University of Barcelona, Jordi Isern of the Institute for Space Studies of Catalonia, Stan E. Woosley and Frank Timmes of the University of California at Santa Cruz, Hitoshi Yamaoka of Kyushu University, and Nomoto—have analyzed this fate. White dwarfs that do not meet either of these criteria simply blow up. They may slowly amass helium until they reach the Chandrasekhar limit and explode. Alternatively, the helium layer may reach a critical mass prematurely and ignite itself explosively. In the latter case, shock waves convulse the star and ignite the carbon at its core. And once the carbon burning begins, it becomes a runaway process in the dense, taut material of the dwarf. Within a few seconds the star is converted largely into nickel as well as other elements between silicon and iron. The nickel, dispersed into space, radioactively decays to cobalt and then iron in a few hundred days. As it happens, astronomers had already ascribed a kind of explosion to the death of carbon-rich dwarfs—the supernova type Ia. The spectrum of such a supernova lacks any sign of hydrogen or helium, one of the factors that distinguish it from the other types of supernovae (Ib, Ic and II), which probably result from the implosion and subsequent explosion of massive stars [see "Helium-Rich Supernovas," by J. Craig Wheeler and Robert P. Harkness; Scientific American, November 1987]. Type Ia supernovae are thought to be a major source of iron and related elements in the universe, including on Earth.
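The nickel-to-cobalt-to-iron timescale mentioned above can be sketched with the Bateman solution for a two-step decay chain. The half-lives (about 6 days for nickel-56 and 77 days for cobalt-56) are standard values, not figures taken from the article:

```python
import math

# Two-step radioactive chain Ni-56 -> Co-56 -> Fe-56, starting from
# pure nickel, solved with the Bateman equations.
T_HALF_NI = 6.08   # days (standard value, an assumption here)
T_HALF_CO = 77.2   # days (standard value, an assumption here)
L1 = math.log(2) / T_HALF_NI
L2 = math.log(2) / T_HALF_CO

def fractions(t_days: float):
    """Fractions of the original nickel now present as Ni, Co and Fe."""
    ni = math.exp(-L1 * t_days)
    co = L1 / (L2 - L1) * (math.exp(-L1 * t_days) - math.exp(-L2 * t_days))
    fe = 1.0 - ni - co
    return ni, co, fe

# After "a few hundred days," almost all of the nickel has become iron.
ni, co, fe = fractions(300)
print(f"after 300 days: Ni {ni:.1%}, Co {co:.1%}, Fe {fe:.1%}")
```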
Four occur every 1,000 years on average in a galaxy such as the Milky Way. Before supersoft sources were discovered, astronomers were unsure as to the precise sequence that led to type Ia supernovae. The leading explanations implicated either certain symbiotic stars—in particular, the rare recurrent novae—or mergers of two carbon-rich white dwarfs. But the latter view is now disputed. No double-dwarf system with the necessary mass and orbital period has ever been seen, and recent calculations by Nomoto and his colleague Hideyuki Saio have shown that such a merger would be too gentle to produce a thermonuclear explosion. Supersoft sources and other surface-burning dwarfs may be the solution. Their death rate roughly matches the observed supernova frequency. The concordance makes the luminous supersoft binary x-ray sources the first firmly identified class of objects that can realistically be expected to end their lives as type Ia supernovae. This new realization may improve the accuracy of cosmological measurements that rely on these supernovae to determine distance [see "Surveying Space-time with Supernovae," by Craig J. Hogan, Robert P. Kirshner and Nicholas B. Suntzeff; Scientific American, January]. Subtle variations in brightness can make all the difference between conflicting conclusions concerning the origin and fate of the universe. The worry for cosmologists has always been that slight systematic errors—the product, perhaps, of astronomers' incomplete understanding of the stars that go supernova—could mimic real variations. The implications of the supersoft findings for cosmology, however, have yet to be worked out. When supersoft sources were first detected, nobody expected that the research they provoked would end up uniting so many phenomena into a single coherent theory.
Now it is clear that a once bewildering assortment of variable stars, novae and supernovae are all variants on the same basic system: an ordinary star in orbit around a reanimated white dwarf. The universe seems that much more comprehensible.

The Authors

PETER KAHABKA, EDWARD P. J. VAN DEN HEUVEL and SAUL A. RAPPAPORT say they never thought supersoft sources would be explained by white dwarfs. That insight came about by accident during a workshop that van den Heuvel and Rappaport organized on a different topic: neutron stars. Two years later these veteran astronomers met Kahabka, who had discovered many of the supersoft sources as a member of the ROSAT team. Today Kahabka is a postdoctoral fellow at the Astronomical Institute of the University of Amsterdam. Van den Heuvel is director of the institute and the 1995 recipient of the Spinoza Award, the highest science award in the Netherlands. An amateur archaeologist, he owns an extensive collection of early Stone Age tools. Rappaport is a physics professor at the Massachusetts Institute of Technology. He was one of the pioneers of x-ray astronomy in the 1970s.

Further Reading

Luminous Supersoft X-ray Sources as Progenitors of Type Ia Supernovae. Rosanne Di Stefano in Supersoft X-ray Sources. Edited by Jochen Greiner. Springer-Verlag, 1996. Preprint available at xxx.lanl.gov/abs/astro-ph/9701199 on the World Wide Web.

Luminous Supersoft X-ray Sources. P. Kahabka and E. P. J. van den Heuvel in Annual Review of Astronomy and Astrophysics, Vol. 35, pages 69-100. Annual Reviews, 1997.

SNe Ia: On the Binary Progenitors and Expected Statistics. Pilar Ruiz-Lapuente, Ramon Canal and Andreas Burkert in Thermonuclear Supernovae. Edited by Ramon Canal, Pilar Ruiz-Lapuente and Jordi Isern. Kluwer, 1997. Preprint available at xxx.lanl.gov/abs/astro-ph/9609078 on the World Wide Web.

Type Ia Supernovae: Their Origin and Possible Applications in Cosmology. Ken'ichi Nomoto, Koichi Iwamoto and Nobuhiro Kishimoto in Science, Vol.
276, pages 1378-1382; May 30, 1997. Preprint available at xxx.lanl.gov/abs/astro-ph/9706007 on the World Wide Web.

The Puzzle of Hypertension in African-Americans

by Richard S. Cooper, Charles N. Rotimi and Ryk Ward

Genes are often invoked to account for why high blood pressure is so common among African-Americans. Yet the rates are low in Africans. This discrepancy demonstrates how genes and the environment interact.

Nearly all Americans undergo a steady rise in blood pressure with age. Almost 25 percent cross the line into hypertension, the technical term for chronically high blood pressure. This condition, in turn, can silently contribute to heart disease, stroke and kidney failure and thus plays a part in some 500,000 deaths every year. For black Americans, the situation is even more dire: 35 percent suffer from hypertension. Worse, the ailment is particularly deadly in this population, accounting for 20 percent of deaths among blacks in the U.S.—twice the figure for whites. One popular explanation of this disparity between blacks and whites holds that people of African descent are "intrinsically susceptible" to high blood pressure because of some vaguely defined aspect of their genetic makeup. This conclusion is not satisfying. Indeed, the answer troubles us, for as we will show, it does not reflect the available evidence accurately.
Instead such reasoning appears to follow from the racialized character of much public health research, which at times defaults to reductionist interpretations that emphasize the importance of racial or genetic characteristics. Race becomes the underlying cause for the presence of a disease, rather than being recognized as a proxy for many other variables (along the lines of, say, socioeconomic status) that influence the course of a disorder. We suggest that a more fruitful approach to understanding the high levels of hypertension among African-Americans would begin by abandoning conventional hypotheses about race. It would acknowledge that hypertension arises through many different pathways, involving complex interactions among external factors (such as stress or diet), internal physiology (the biological systems that regulate blood pressure) and the genes involved in controlling blood pressure. Only by teasing out the connections among all three tiers of this model will scientists truly comprehend how high blood pressure develops. This knowledge will then enable researchers to return successfully to the questions of why the disorder is so prevalent among African-Americans and how best to intervene for all patients.

INCIDENCE OF HYPERTENSION, or chronic high blood pressure, was assessed by the authors in Africans as well as in people of African descent in the U.S. and the Caribbean. The rate dropped dramatically from the U.S. across the Atlantic to Africa (graph), and the difference was most pronounced between urban African-Americans and rural Nigerians. The findings suggest that hypertension may largely be a disease of modern life and that genes alone do not account for the high rates of hypertension in African-Americans.

What Pressure Readings Mean

Blood pressure is measured with a sphygmomanometer, which gives a reading of two numbers: systolic and diastolic pressure. The systolic reading indicates the maximum pressure exerted by the blood on the arterial walls; this high point occurs when the left ventricle of the heart contracts, forcing blood through the arteries. Diastolic pressure is a measure of the lowest pressure on the blood vessel walls and happens when the left ventricle relaxes and refills with blood. Healthy blood pressure is considered to be around 120 millimeters of mercury systolic, 80 millimeters of mercury diastolic (usually presented as 120/80). Many people can experience temporary increases in blood pressure, particularly under stressful conditions. When blood pressure is consistently above 140/90, however, physicians diagnose hypertension. The disorder can generally be managed with the help of special diets, exercise regimens and medication. —The Editors

One strategy for clarifying the relative significance of different environmental factors would be to hold constant the genetic background of people in distinct environments and focus on the variations in their living conditions or behavior. This kind of experiment is impossible to do perfectly, particularly when vast numbers of Americans have at least one, and frequently several, of the known behavioral risk factors for developing high blood pressure: being overweight, eating a high-salt diet, suffering long-term psychological stress, being physically inactive and drinking alcohol to excess. In a way, the situation is analogous to trying to identify the causes of lung cancer in a society where everyone smokes; without having nonsmokers for a comparison group, researchers would never know that smoking contributes so profoundly to lung cancer.
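The rule of thumb from the sidebar on pressure readings (about 120/80 is healthy; readings consistently above 140/90 lead physicians to diagnose hypertension) can be sketched as a simple check. The function name is illustrative, and a single reading is of course not a diagnosis:

```python
def reading_suggests_hypertension(systolic, diastolic):
    """Apply the sidebar's threshold: pressures consistently above
    140/90 mm Hg are diagnosed as hypertension. Pressure varies with
    stress, so one elevated reading alone proves nothing."""
    return systolic > 140 or diastolic > 90

# A healthy reading of about 120/80 falls below the threshold:
print(reading_suggests_hypertension(120, 80))   # False
print(reading_suggests_hypertension(150, 95))   # True
```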
Lessons from the Past

Our solution to this dilemma was to turn to Africa. In 1991 we initiated a research project concentrated on the African diaspora, the forced migration of West Africans between the 16th and 19th centuries. In this shameful chapter of world history, European slave traders on the west coast of Africa purchased or captured an estimated 10 million people and transported them to the Caribbean and the Americas, where they gradually mixed with Europeans and Native Americans. Today their descendants live throughout the Western Hemisphere. Scientists have known for some time that the rate of hypertension in rural West Africa is lower than in any other place in the world, except for some parts of the Amazon basin and the South Pacific. People of African descent in the U.S. and the U.K., on the other hand, have among the highest rates of hypertension in the world. This shift suggests that something about the surroundings or way of life of European and American blacks—rather than a genetic factor—was the fundamental cause of their altered susceptibility to high blood pressure. To elucidate what was triggering hypertension among these people, we established research facilities in communities in Nigeria, Cameroon, Zimbabwe, St. Lucia, Barbados, Jamaica and the U.S. As the project progressed, we focused our attention on Nigeria, Jamaica and the U.S. as the three countries that allow us, in a sense, to capture the medical effects of the westward movement of Africans from their native lands. We conducted testing of randomly sampled people at each location to determine the general prevalence of both hypertension and its common risk factors, such as eating a high-salt diet or being obese or physically inactive. As might be expected, the differences between the three societies are vast. The Nigerian community we surveyed, with the help of colleagues at the University of Ibadan Medical School, is a rural one in the district of Igbo-Ora.
Polygamy is a common practice there, so families tend to be complex and large; on average, women raise five children. The residents of Igbo-Ora are typically lean, engage in physically demanding subsistence farming and eat the traditional Nigerian diet of rice, tubers and fruit. Nations in sub-Saharan Africa do not keep formal records on mortality and life expectancy, but based on local studies, we assume that infection, especially malaria, is the major killer. Our research revealed that adults in Igbo-Ora have an annual mortality risk of between 1 and 2 percent—high by any Western standard. Those who do survive to older ages tend to be quite healthy. In particular, blood pressure does not rise with age, and even though hypertension does occur, it is rare. (We were pleased that we could coordinate with the established medical personnel in the region to treat those patients who did suffer from hypertension.) Jamaica, in contrast, is an emerging industrial economy in which the risk of infectious disease is very low but the levels of chronic disease are higher than in Nigeria. The base of operations for our team was Spanish Town, the original colonial capital of Jamaica. A bustling city of 90,000 people, Spanish Town features a cross section of Jamaican society. Investigators at the Tropical Metabolism Research Unit of the University of the West Indies, Mona Campus, led the project. The family structure in Jamaica has evolved away from the patriarchy of Africa. Women head a significant number of households, which are generally small and often fragmented. Chronic unemployment has tended to marginalize men and lower their social position. Farming and other physically demanding occupations are common; residents' diets include a blend of local foodstuffs and modern commercial products.
Despite widespread poverty, life expectancy in Jamaica is six years longer than it is for blacks in the U.S. because of lower rates of cardiovascular disease and cancer. In the metropolitan Chicago area, we worked in the primarily African-American city of Maywood. Many of the older adults in this community were born in the southern U.S., primarily in Mississippi, Alabama or Arkansas. Interestingly, the northern migration seems to have greatly improved both the health and the economic standing of these people. Unionized jobs in heavy industry provide the best opportunities for men, whereas women have been integrated into the workforce across a range of job categories. The prevailing diet is typical American fare: high in fat and salt. The generation now reaching late adulthood has enjoyed substantial increases in life expectancy, although progress has been uneven in the past decade.

Similarities and Differences

Even as we sought out these examples of contrasting cultures, we were careful to make sure the people we studied had similar genetic backgrounds. We found that the American and Jamaican blacks who participated shared, on average, 75 percent of their genetic heritage with the Nigerians. Against this common genetic background, a number of important differences stood out. First, the rates of hypertension: just 7 percent of the group in rural Nigeria had high blood pressure, with increased rates noted in urban areas. Around 26 percent of the black Jamaicans and 33 percent of the black Americans surveyed were either suffering from hypertension or already taking medication to lower their blood pressure. In addition, certain risk factors for high blood pressure became more common as we moved across the Atlantic. Body mass index, a measure of weight relative to height, went up steadily from Africa to Jamaica to the U.S., as did average salt intake.
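Body mass index, mentioned above as a measure of weight relative to height, is by the standard definition weight in kilograms divided by the square of height in meters (the exact formula is not spelled out in the article); the figure caption's cutoff of 25 marks overweight. A minimal sketch:

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def is_overweight(weight_kg, height_m):
    """A BMI over 25 is generally considered a sign of being overweight."""
    return bmi(weight_kg, height_m) > 25

print(round(bmi(70, 1.75), 1))   # 22.9, a lean build
print(is_overweight(85, 1.75))   # True (BMI about 27.8)
```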
Our analysis of these data suggests that being overweight, and the associated lack of exercise and poor diet, explains between 40 and 50 percent of the increased risk for hypertension that African-Americans face compared with Nigerians. Variations in dietary salt intake are likely to contribute to the excess risk as well. The African diaspora has turned out to be a powerful tool for evaluating the effects of a changing society and environment on a relatively stable gene pool. Our study also raises the question of whether rising blood pressure is a nearly unavoidable hazard of modern life for people of all skin colors. The human cardiovascular system evolved in the geographic setting of rural Africa, in which obesity was uncommon, salt intake was moderate, the diet was low in fat, and high levels of physical activity were required. The life of subsistence farmers in Africa today has not, at least in these respects, changed all that much. We see that for people living this way, blood pressure hardly rises with age and atherosclerosis is virtually unknown. As a result, the African farmers provide epidemiologists with a revealing control group that can be compared with populations living in more modernized societies. It is disquieting to recognize that a modest shift from these baseline conditions leads to sizable changes in the risk for hypertension. For instance, blood pressures are substantially higher in the city of Ibadan, Nigeria, than in nearby rural areas, despite small differences in the groups' overall levels of obesity and sodium intake. Other variables, such as psychological stress and lack of physical activity, may help account for this increase.

BODY MASS INDEX, or BMI, measures a person's weight-to-height ratio; a BMI over 25 is generally considered a sign of being overweight. In the authors' study of people of African descent (graph: prevalence of hypertension versus average BMI for Nigeria, rural and urban; Cameroon, rural and urban; Jamaica; St. Lucia; Barbados; and the urban U.S.), a low average BMI in a population corresponded to a low rate of hypertension in that community. As average BMI increased, so did the prevalence of hypertension. The findings support the view that obesity contributes to high blood pressure.

Psychological and social stresses are extremely difficult to measure, especially across cultures. Yet there is little dispute that blacks in North America and Europe face a unique kind of stress—racial discrimination. The long-term effects of racism on blood pressure remain unknown; however, it is worth noting that blacks in certain parts of the Caribbean, including Trinidad, Cuba and rural Puerto Rico, have average blood pressures that are nearly the same as those of other racial groups. Although this is no more than conjecture, perhaps the relationships among races in those societies impose fewer insults on the cardiovascular system than those in the continental U.S. do.

Environment at Work

As epidemiologists, we want to move beyond these descriptive findings of what might increase people's risk for hypertension and examine more closely how environmental and biological risk factors interact to produce the disease. Physiologists have not yet uncovered every detail of how the body regulates blood pressure. Nevertheless, they know that the kidneys play a key role, by controlling the concentration in the bloodstream of sodium ions (derived from table salt—sodium chloride—in the diet), which in turn influences blood volume and blood pressure. Having evolved when the human diet was habitually low in sodium, the kidneys developed an enormous capacity to retain this vital ion.
As these organs filter waste from the blood, they routinely hold on to as much as 98 percent of the sodium that passes through, then eventually return the ion to the bloodstream. When doused with sodium, however, the kidneys will return excessive amounts of the ion to the blood, thereby elevating blood pressure. Too much salt in the kidneys can also harm their internal filtering mechanism, leading to a sustained rise in pressure. As a gauge of how well the organs were modulating the body's sodium balance in our patients, we decided to measure the activity of an important biochemical pathway that helps to regulate blood pressure. Known as the renin-angiotensin-aldosterone system, or RAAS, this intricate series of chemical reactions (named for three of the compounds involved) has the net effect of controlling the amount of the protein angiotensin II present in the bloodstream. Angiotensin II performs a range of functions, such as prompting the constriction of blood vessels, which causes a rise in blood pressure, and triggering the release of another crucial chemical, aldosterone, which induces an increase in the reuptake of sodium by the kidneys. In short, a highly active RAAS pathway should correlate with elevated blood pressure.

The RAAS Pathway

This biochemical pathway, otherwise known as the renin-angiotensin-aldosterone system, influences blood pressure. People with a highly active system typically suffer from high blood pressure. 1. Angiotensinogen is produced continuously by the liver. 2. Renin is released by the kidneys in response to stress—either physiological, such as exercise or changes in diet, or emotional. 3. Angiotensin I results from the reaction of angiotensinogen and renin. When blood carrying angiotensin I passes through the lungs, it reacts with the enzyme ACE. 4. Angiotensin II results from the reaction of angiotensin I and ACE. Angiotensin II has two primary effects: it prompts the adrenal glands to release aldosterone, and it causes smooth muscle in blood vessels to contract, which raises blood pressure. 5. Aldosterone tells the kidney to take up salt and water from the bloodstream, thereby raising blood pressure.

As a convenient method for tracing the activity of RAAS in our patients, we measured the amount of the compound angiotensinogen—one of the chemicals involved in the first step of RAAS [see illustration above]—present in blood samples. One advantage to measuring angiotensinogen is that unlike other, short-lived compounds in the pathway, it circulates at a relatively constant level in the bloodstream. As expected, we found that in general the higher angiotensinogen levels are, the higher blood pressure is likely to be, although this association is not as strong for women (variations in estrogen also appear to affect a woman's blood pressure). Further, the average level of angiotensinogen for each group we studied increased substantially as we moved from Nigeria to Jamaica to the U.S., just as the rate of hypertension did; that pattern was found in both men and women. Our results suggest that some of the risk factors for hypertension might promote the disorder by elevating levels of angiotensinogen in the blood. Obesity, in particular, may contribute to chronic high blood pressure in this way. Excessive body fat, for instance, has been shown to correspond to an elevation in an individual's circulating level of angiotensinogen. And the incidence of obesity rose more or less in parallel with levels of hypertension and angiotensinogen in our study groups. Correlations do not necessarily prove causality, of course, but the collected findings do hint that obesity promotes hypertension at least in part by leading to enhanced angiotensinogen production.
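The five-step cascade described in the RAAS sidebar can be restated as an ordered sequence; this sketch is purely a structured summary of the text, not a physiological model, and the variable names are illustrative:

```python
# The RAAS cascade, step by step, as described in the sidebar.
RAAS_STEPS = [
    ("liver",          "angiotensinogen", "produced continuously"),
    ("kidneys",        "renin",           "released in response to stress"),
    ("bloodstream",    "angiotensin I",   "angiotensinogen reacts with renin"),
    ("lungs",          "angiotensin II",  "angiotensin I reacts with the enzyme ACE"),
    ("adrenal glands", "aldosterone",     "release prompted by angiotensin II"),
]

def net_effect():
    """A highly active pathway ends in two pressure-raising effects."""
    return ["blood vessels constrict", "kidneys take up salt and water"]

for site, compound, how in RAAS_STEPS:
    print(f"{site}: {compound} ({how})")
```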
Clues in the Genes

Genetic findings seem to lend some support to a role for excess angiotensinogen in the development of hypertension. Scientists have found that some people carry certain variations of the gene for producing angiotensinogen (these variations in genes are known as alleles) that give rise to elevated levels of the protein. Intriguingly, people with these alleles tend to have a higher risk of developing high blood pressure. Several years ago researchers at the University of Utah and the Collège de France in Paris reported that two alleles of the angiotensinogen gene, known as 235T and 174M, correlated with high levels of circulating angiotensinogen—as well as with hypertension—among people of European descent. The scientists do not know, however, whether these alleles themselves play a part in controlling angiotensinogen levels or are merely markers inherited along with other alleles that have more of an effect. We must emphasize that identification of a gene associated with greater susceptibility to hypertension is not equivalent to finding the cause of the condition. Nor is it equivalent to saying that certain groups with the gene are fated to become hypertensive. Investigators have determined that genetic factors account for 25 to 40 percent of the variability in blood pressure between people and that many genes—perhaps as many as 10 or 15—can play a part in this variation. Those numbers indicate, then, that an isolated gene contributes only about 2 to 4 percent of the differences in blood pressure among people. And whether genes promote the development of hypertension depends considerably on whether the environmental influences needed to "express" those hypertension-causing traits are present. Our own genetic findings seem to illustrate this point.
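The per-gene figure quoted above follows directly from the two ranges: 25 to 40 percent of blood-pressure variability spread over perhaps 10 to 15 genes works out to roughly 2 to 4 percent per gene. A quick check of that arithmetic:

```python
# Figures from the article: genetic factors explain 25-40 percent of
# blood-pressure variability, spread across perhaps 10-15 genes.
total_genetic_share = (25, 40)   # percent
gene_count = (10, 15)

# Smallest per-gene share: low total spread over many genes;
# largest: high total concentrated in few genes.
low = total_genetic_share[0] / gene_count[1]    # 25/15, about 1.7
high = total_genetic_share[1] / gene_count[0]   # 40/10 = 4.0
print(f"one gene contributes roughly {low:.1f} to {high:.1f} percent")
```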
In a quite perplexing discovery, we found that the 235T allele is twice as common among African-Americans as it is among European-Americans but that blacks with this form of the gene do not seem to be at an increased risk for hypertension compared with other blacks who do not carry the gene. Among the Nigerians in our study, we did see a modest elevation in levels of angiotensinogen in those with the 235T gene variant; again, however, this factor did not translate into a higher risk for hypertension. Furthermore, 90 percent of the Africans we tested carried the 235T allele, yet the rate of hypertension in this community is, as noted earlier, extremely low. (The frequency of the 174M allele was equivalent in all groups.) It may well be that high angiotensinogen levels are not sufficient to trigger hypertension in people of African descent; rather other factors—genetic, physiological or environmental—may also be needed to induce the disorder. Alternatively, this particular allele may not be equally important in the development of hypertension for all ethnic groups.

RATES OF A PARTICULAR GENE VARIANT—235T—and of hypertension in different ethnic groups yield a puzzling picture (chart: frequency of the 235T allele and prevalence of hypertension among Nigerians, African-Americans, Jamaicans and European-Americans). Scientists expected that people who carried 235T would have a high incidence of hypertension. Yet that association has not held true universally. For instance, 235T is very common in Nigerians, in whom high blood pressure is rare. The findings suggest that a single gene cannot control the development of high blood pressure.

Pieces of the Puzzle

Although our results reveal at least one aspect of how nurture may interact with nature to alter a person's physiology and thereby produce hypertension, the findings also highlight the pitfalls of making sweeping generalizations. Clearly, no single allele and no single environmental factor can explain why hypertension occurs and why it is so common in African-Americans. An individual with a given mix of alleles may be susceptible to high blood pressure, but as our research on the African diaspora emphasizes, that person will develop hypertension only in a certain setting. The continuing challenge for researchers is to isolate specific genetic and environmental effects on hypertension and then put the pieces back together to determine the myriad ways these factors can conspire to cause chronic elevations of blood pressure. Hypertension currently accounts for approximately 7 percent of all deaths worldwide, and this figure will no doubt increase as more societies adopt the habits and lifestyle of industrial nations. There is no returning to our evolutionary homeland, so science must lead us forward to another solution. The sanitary revolution was born of the awareness of contagion. Heart disease became a tractable problem when researchers recognized the importance of lifetime dietary habits on cholesterol metabolism. Prevention and treatment of hypertension will require a fuller appreciation of how genes and the environment join forces to disrupt blood pressure regulation. We also believe that to understand hypertension in African-Americans better, the scientific community should reevaluate what the ethnic and racial divisions of our species mean. Many disciplines hold that there is no biological basis to the concept of race; instead they view it as a reflection of societal distinctions rather than readily defined scientific ones. Physical anthropologists, for instance, long ago ceased their attempts to classify Homo sapiens into various races, or subspecies.
The disciplines of medicine and epidemiology, however, continue to ascribe biological meaning to racial designations, arguing that race is useful not only for distinguishing between groups of people but also for explaining the prevalence of certain disorders. Yet the racial classifications they incorporate in their studies are not based on rigorous scientific criteria but instead on bureaucratic categories, such as those used in the U.S. census. As researchers grapple with the scientific import of race, its societal meaning must not be forgotten. We live in a world in which racial designations assume unfortunate significance. The destructive effects of racism complicate any study of how a disease such as hypertension affects minority groups. But as we continue to explore the complex interactions between external risk factors, such as stress and obesity, and the genes associated with the regulation of blood pressure, the results should offer guidance for all of us, regardless of skin color.

The Authors

RICHARD S. COOPER, CHARLES N. ROTIMI and RYK WARD have worked together on hypertension for eight years. Cooper received his medical degree from the University of Arkansas and completed training in clinical cardiology at Montefiore Hospital in the Bronx, N.Y. He has written widely about the significance of race in biomedical research. Cooper and Rotimi are both at the Stritch School of Medicine at Loyola University Chicago. Rotimi studied biochemistry at the University of Benin in his native Nigeria before emigrating to the U.S. He serves as a consultant to the National Human Genome Research Institute and directs the field research program on diabetes and hypertension in Nigeria; the program is run by Loyola and the National Institutes of Health. Ward is professor and head of the Institute of Biological Anthropology at the University of Oxford. He was trained in New Zealand as an anthropologist and a human geneticist.
Further Reading

Familial Aggregation and Genetic Epidemiology of Blood Pressure. Ryk Ward in Hypertension: Pathophysiology, Diagnosis and Management. Edited by J. H. Laragh and B. M. Brenner. Raven Press, 1990.

Molecular Basis of Human Hypertension: Role of Angiotensinogen. X. Jeunemaitre, F. Soubrier, Y. V. Kotelevtsev, R. P. Lifton, C. S. Williams, A. Charu et al. in Cell, Vol. 71, No. 1, pages 169-180; October 1992.

The Slavery Hypothesis for Hypertension among African Americans: The Historical Evidence. Philip D. Curtin in American Journal of Public Health, Vol. 82, No. 12, pages 1681-1686; December 1992.

Hypertension in Populations of West African Origin: Is There a Genetic Predisposition? Richard S. Cooper and Charles N. Rotimi in Journal of Hypertension, Vol. 12, No. 3, pages 215-227; March 1994.

Hypertension Prevalence in Seven Populations of African Origin. Richard S. Cooper, Charles N. Rotimi, Susan L. Ataman, Daniel L. McGee, Babatunde Osotimehin, Solomon Kadiri, Walinjom Muna, Samuel Kingue, Henry Fraser, Terrence Forrester, Franklyn Bennett and Rainford Wilks in American Journal of Public Health, Vol. 87, No. 2, pages 160-168; February 1997.

High Blood Pressure and the Slave Trade

Negroes in the Bilge, engraved by Demi, circa 1835

One frequently cited—but controversial—explanation for the prevalence of chronic high blood pressure among U.S. blacks has to do with the voyage from Africa to America on slave ships, known as the Middle Passage. During such trips, the proposal goes, the slaves were placed in a Darwinian "survival-of-the-fittest" situation, in which staying alive depended on having the right genes—genes that now might confer an increased risk for high blood pressure. Scientists often invoke evolutionary theory to account for why a certain racial or ethnic group appears to be at greater risk for a particular condition.
The argument usually proceeds as follows: The population experienced a so-called selective pressure that favored the survival of some members of the group (and their genes) while eliminating others. If the remaining population did not mix genes with other racial or ethnic groups, certain genetic traits would begin to appear with increasing frequency. Assuming that African-Americans have a genetic predisposition to hypertension, evolutionary theorists ask, what was the unique, extreme selective pressure that led to this harmful trait becoming so common? Some researchers suggest that the brutal voyage in slave ships was exactly this kind of event. Not surprisingly, slaves had extraordinarily high death rates before, during and after coming to American plantations. Many of the deaths were related to what doctors call salt-wasting conditions—diarrhea, dehydration and certain infections. Thus, the ability to retain salt might have had a survival value for the Africans brought to America. Under modern conditions, however, retaining salt would predispose the descendants of those people to hypertension. Despite its immediate appeal, the slavery hypothesis is, in our view, quite problematic and has unfortunately been accepted uncritically. The historical framework for this hypothesis has been questioned by scholars of African history. For instance, there is no strong historical evidence that salt-wasting conditions were, in fact, the leading cause of death on slave ships. Africans on board these ships died for a variety of reasons, among them tuberculosis (not a salt-wasting infection) and violence. The biological basis for the theory is also rather weak. Diarrhea and other salt-wasting diseases, particularly in children, have been among the most deadly killers for every population over humankind's entire evolutionary history. Any resulting selective pressures caused by such conditions would therefore apply to all racial and ethnic groups.
And at least in the Caribbean during the 18th century, whites had survival rates little better than those of the slaves—again indicating that any evolutionary pressure was not limited to Africans. Finally, current data suggest that Africans who have moved to Europe in the past several decades also have higher blood pressure than whites do, pointing to either environmental effects or something general in the African genetic background. Researchers do not yet know enough about the genes for salt sensitivity to test the Middle Passage hypothesis directly. But some indirect evidence is informative. If the Middle Passage functioned as an evolutionary bottleneck, it should have reduced both the size of the population and the genetic variability within it, because only people with a very specific genetic makeup would survive. The data available, however, show a great deal of genetic diversity—not uniformity—among African-Americans. The problem with the slavery hypothesis is that it provides a shortcut to a genetic and racial theory about why blacks have higher rates of hypertension. The responsive chord it strikes among scholars and the general public reflects a willingness to accept genetic explanations about the differences between whites and nonwhites without fully evaluating the evidence available. That attitude is obviously a significant obstacle to sound, unbiased research. As genetic research becomes more objective, with the ability to measure actual variations in DNA sequences, it might force society to abandon racial and ethnic prejudices, or it might offer them new legitimacy. Which outcome occurs will depend on how well scientists interpret the findings within a context that takes into account the complexities of society and history. —R.S.C., C.N.R. and R.W.
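The bottleneck reasoning above is quantitative at heart: a population squeezed through a bottleneck loses heterozygosity at a predictable rate. The sketch below uses the standard population-genetics relation that each generation spent at effective size N retains a fraction 1 - 1/(2N) of heterozygosity; the starting value and bottleneck size are illustrative assumptions, not historical estimates.

```python
# Expected loss of genetic diversity through a population bottleneck.
# Each generation at effective size n retains a fraction (1 - 1/(2n))
# of heterozygosity.  All numbers here are illustrative assumptions.
def heterozygosity_after(h0, n, generations):
    """h0: initial heterozygosity; n: effective population size."""
    retained_per_generation = 1.0 - 1.0 / (2.0 * n)
    return h0 * retained_per_generation ** generations

# A severe one-generation bottleneck of 50 individuals barely dents H:
print(heterozygosity_after(0.5, 50, 1))   # about 0.495
```

Even a drastic single-generation bottleneck leaves most heterozygosity intact, which is why the authors look to measured diversity among African-Americans rather than to bottleneck intuition alone.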
Cichlids of the Rift Lakes

The extraordinary diversity of cichlid fishes challenges entrenched ideas of how quickly new species can arise

by Melanie L. J. Stiassny and Axel Meyer

[Photograph: Female Haplotaxodon tricoti broods her young]

The waters of Lake Tanganyika are clear, dark and deep, but the shallow, sunlit edges are where most of the cichlids live. Brown or green Eretmodus algae scrapers, covered with blue spots, thrive among the breaking waves; the turbulent water pushes their rounded backs onto the rock surfaces instead of pulling them off. These fish nip algae off the rocks with their chisel-like teeth. Their neighbors the Tanganicodus insect pickers also have round backs. But the pointed heads, sharp snouts and long, fine teeth of these cichlids are adapted to plucking insect larvae from within the crevices. In calmer waters, old snail shells are strewn on sandy shelves between the boulders. Inside these live tiny female Lamprologus cichlids, along with their eggs and young. The yellow, green or brown males are too large to enter the abode. Instead they steal shells—sometimes with females inside—from one another, and posture and preen around their harems.

[Caption: LAKE TANGANYIKA'S rocky edges are home to hundreds of species of cichlids, each adapted to an exceedingly narrow ecological niche. A cobra often preys on shell-dwelling cichlids.]

Other algae scrapers, of the genus Tropheus, also hover among sheltering rocks. Sometimes a cluster of boulders is separated from another by a sandbank a few hundred feet wide, far too exposed for a small fish to cross safely. As a result, Tropheus cichlids in scattered rock piles have evolved much like Charles Darwin's finches on islands of the Galapagos: diverging wildly in their isolation. In a certain rock outcrop one might find a black Tropheus with vertical yellow bars; in another, an identical fish but for white and blue bars. In all, researchers have identified almost 100 of these "color morphs."

[Caption: EAST AFRICAN LAKES Tanganyika, Malawi and Victoria contain the greatest diversity of cichlid species. The family is spread, however, over the warm waters of much of the globe.]

All in the Family

The exceptional diversity of the family Cichlidae has elevated it to the status of an icon in textbooks of evolutionary biology. Cichlids are spiny-rayed freshwater fishes that come in a vast assortment of colors, forms and habits. They are indigenous to warm rivers and lakes in Africa, Madagascar, southern India, Sri Lanka and South and Central America—with one species, the Texas cichlid, making it into North America. Most of these regions were part of the ancient southern continent of Gondwana, which fragmented 180 million years ago; the observation suggests an ancient lineage for the family. (Curiously, the fossil record is silent on this issue until the past 30 million years.) Research by one of us (Stiassny) has identified 15 species of cichlids in Madagascar, and three species are known in southern India. These fishes appear to be survivors from the very earliest lineages. (Many such ancient species survive in Madagascar, which their competitors, evolving in Africa, could not reach; India, too, was isolated for millions of years.) The Americas contain approximately 300 species. But by far the most abundant diversity of cichlids occurs in Africa, in particular the great East African lakes of Victoria, Malawi and Tanganyika.

[Caption: CICHLID ANATOMY is astonishingly adaptable. Teeth of Cichlasoma citrinellum can take the form of sharp piercers (a) or flat crushers (b). The radiograph (c) shows the two sets of jaws of a cichlid.]
Geologic data indicate that Lake Victoria, shaped like a saucer the size of Ireland, formed between 250,000 and 750,000 years ago; it contains more than 400 species of cichlids. Lakes Malawi and Tanganyika are narrow and extremely deep, for they fill the rift between the East African and Central African tectonic plates. Malawi is about four million years old and contains 300 to 500 cichlid species, whereas Tanganyika is nine to 12 million years old and has some 200 species. It turns out, however, that despite the advanced age of the cichlid family and of their native lakes, their amazing variety arose only in the past few million years. Several factors are believed to lie behind the diversity of cichlids. One has to do with anatomy. Cichlids possess two sets of jaws: one in the mouth, to suck, scrape or bite off bits of food, and another in the throat, to crush, macerate, slice or pierce the morsel before it is ingested. They are the only freshwater fishes to possess such a modified second set of jaws, which are essentially remodeled gill arches (series of bones that hold the gills). Both sets of jaws are exceedingly manipulable and adaptable: one of us (Meyer) has shown that they can change form even within the lifetime of a single animal. (Even the teeth might transform, so that sharp, pointed piercers become flat, molarlike crushers.) Cichlids that are fed one kind of diet rather than another can turn out to look very different. The two sets of jaws, fine-tuned according to food habits, allow each species to occupy its own very specific ecological niche. In this manner, hundreds of species can coexist without directly competing. If instead these cichlids had tried to exploit the same resources, most would have been driven to extinction. One instance of such feeding specialization relates to the scale eaters. These cichlids, found in all three East African lakes, approach other cichlids from behind and rasp a mouthful of scales from their sides. 
Lake Tanganyika has seven such species, in the genus Perissodus. Michio Hori of Kyoto University discovered that P. microlepis scale eaters exist in two distinct forms, sporting heads and jaws curved either to the right or to the left. The fish not only feed on scales, and only on scales, but are specialized to scrape scales off only one side: the left-handed fish attack the right sides of their victims, and the right-handed ones the left sides. This astonishing asymmetry in morphology even within the same species very likely evolved because a twisted head allows the fish to grasp scales more efficiently. Inside the throat, the scales are stacked like leaves of a book by the second set of jaws before being ingested as packets of protein. (The victims survive, though becoming wary of attackers from either side. If the population of left-handed scale eaters were to exceed that of right-handed scale eaters, however, the fish would become more wary of attacks from the right side. As a result, the right-handed scale eaters would have an advantage, and their population would increase. These forces ensure that the relative populations of left- and right-handed fish remain roughly equal.)

[Caption: FERTILIZATION of Ophthalmotilapia ventralis eggs involves an unusual routine. The female lays an egg and takes it up in her mouth for safekeeping (left); the male then releases sperm at the same site. Yellow spots at the tips of his ventral fins mimic the egg, and the female tries to collect these as well (right). In the process, she inhales the sperm, so that the egg is fertilized in her mouth.]

Another factor that has allowed cichlids to exploit a variety of habitats—and again, diversify—is their reproductive behavior. Nothing sets cichlids apart from other fishes more than the time and energy that they invest in their young. All cichlids care for their broods long after hatching, and the protracted association between parents and offspring involves elaborate communication. Whereas the fertilized eggs can be guarded by a single parent, once the brood becomes mobile both parents are often necessary. And then a fascinating assortment of social systems—monogamy, polyandry, even polygyny—comes into play. One strategy common to many cichlids is to hold fertilized eggs or young in their mouths. In this way, the fishes provide a safe haven into which their offspring can retreat when danger threatens. Moreover, the parent might graze algae or ingest other foods, nourishing the young inside its mouth. Many cichlid species will, like the cuckoo, sneak their fertilized eggs or young in with the broods of other cichlid parents and thereby relieve themselves of the effort required to raise offspring. The mouth brooders lay far fewer eggs than other fishes—sometimes no more than 10—and so invest much time and energy per offspring. Moreover, the total population of a mouth-brooding species is often small, so that a few hundred fish living in one rock outcrop might constitute a species. Any mutation is likely to spread faster through a small population than a large one and to lead to differentiation of a species. Therefore, the limited population sizes favored by mouth brooding may have contributed to the diversification of cichlids.

[Caption: MOUTH BROODING is a strategy by which many cichlids ensure the survival of their young. This female Nimbochromis livingstonii from Lake Malawi retrieves her young when danger threatens.]

In the East African lakes, males of mouth-brooding cichlids do not take care of offspring but vie with one another to fertilize the most eggs. Sometimes they form congregations, called leks, in which they dart and posture to attract females.
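The balance between left- and right-handed scale eaters described above is a textbook case of negative frequency-dependent selection, and its logic can be sketched in a few lines of code. Everything here, from the selection strength to the linear fitness penalty, is an invented illustration, not a model fitted to Hori's data.

```python
# Toy model of negative frequency-dependent selection among Perissodus
# scale eaters: prey guard against the commoner morph, so that morph's
# feeding success (fitness) declines as it spreads.  The selection
# strength s and the linear penalty are illustrative assumptions.
def next_frequency(p_left, s=0.5):
    """One generation; p_left is the frequency of left-handed fish."""
    w_left = 1.0 - s * p_left               # common morph is penalized
    w_right = 1.0 - s * (1.0 - p_left)
    mean_w = p_left * w_left + (1.0 - p_left) * w_right
    return p_left * w_left / mean_w         # standard selection update

p = 0.9                                     # start with left-handers common
for _ in range(200):
    p = next_frequency(p)
print(round(p, 3))                          # settles at the balanced state, 0.5
```

Whichever morph is rarer gains the feeding advantage, so the frequencies are pulled back toward the 50:50 equilibrium the authors describe.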
A lek might consist of 20 to 50 males, but in some species more than 50,000 have been observed. Or the males—such as those of Ophthalmotilapia, with their flashy peacock colors—might construct elaborate bowers over which they display for females. Individuals typically weighing about 10 ounces might move upwards of 25 pounds of sand and gravel in constructing a bower. When a female is enticed to lay a few eggs over his bower (she usually picks the largest), the male quickly fertilizes them; she then takes the eggs into her mouth and swims on, looking for another male. Female cichlids are often a drab gray or brown, but males tend to be brilliantly colored. The diverse hues (such as those of the color morphs described earlier) have probably arisen because of the preferences of the females. In this case, sexual selection, rather than pressure for physical survival, seems to have driven the diversification. The different colors of otherwise identical fish can serve as a barrier separating distinct species, because a female Tropheus, for instance, that prefers yellow males will not mate with a red one.

Secrets in the Genes

Until very recently, biologists did not know how these hundreds of cichlid species were related. Modern molecular techniques have now answered some of these questions and raised many others. Although the genetic research confirms several early hypotheses based on anatomy, it sometimes conflicts spectacularly with entrenched ideas.
As initially suggested by Mutsumi Nishida of Fukui Prefectural University, early lineages of cichlids from West Africa first colonized Lake Tanganyika. The cichlids of this ancient lake are genetically diverse, corresponding to 11 lineages (that is, deriving from 11 ancestral species). Much later some of these fishes left the lake's confines and invaded East African river systems, through which they reached Lakes Victoria and Malawi.

[Caption: DISTANTLY RELATED CICHLIDS from Lakes Tanganyika and Malawi (Lake Tanganyika species: Julidochromis ornatus, Tropheus brichardi, Bathybates ferox, Lobochilotes labiatus; Lake Malawi species: Melanochromis auratus, Pseudotropheus microstoma, Placidochromis milomo) have evolved to become uncannily alike by virtue of occupying similar ecological niches. They demonstrate how morphological resemblance may have little correlation with genetic closeness or evolutionary lineage (phylogenetic relationship). All the cichlids of Lake Malawi are more closely related to one another than to any cichlids in Lake Tanganyika.]

Studies of the genetic material called mitochondrial DNA conducted by one of us (Meyer) and our colleagues show that the cichlids in Lake Victoria are genetically very close to one another—far closer than to morphologically similar cichlids in the other two lakes. They derive almost entirely from a single lineage of mouth brooders. This scenario implies that almost identical evolutionary adaptations can and did evolve many times independently of one another. Cichlids with singular anatomical features—designed to feed on other fish or on eggs and larvae, to nip off fins, scrape algae, tear off scales, crush mollusks or any of myriad other functions—occur in all three lakes. To some of us biologists, such features had seemed so unique and so unlikely to evolve more than once that we had held that fishes with the same specializations should be closely related.
If that were so, the predilection to scrape algae (for instance) would have evolved only once, its practitioners having later dispersed. But algae scrapers in Lake Victoria and Lake Malawi have evolved independently of those in Lake Tanganyika, from an ancestor with more generalized capabilities. The genetic studies thus show that evolution repeatedly discovers the same solutions to the same ecological challenges. It also appears that morphological characteristics can evolve at an incredibly uneven pace, sometimes completely out of step with genetic changes. Some of Lake Tanganyika's species have physically altered very little over time—a number of fossil cichlids, especially tilapias, look very similar to their modern descendants in the lake. And apart from their color, the Tropheus species remained (morphologically) almost unchanged. On the other hand, the cichlids of Lake Victoria—with their diversity in size, pattern and shape—evolved in an extremely short time span. Amazingly, the lake's more than 400 species contain less genetic variation than the single species Homo sapiens. Molecular clocks that are roughly calibrated on the rate of mutations in mitochondrial DNA suggest that the entire assemblage of Lake Victoria cichlids arose within the past 200,000 years. Recent paleoclimatological data from Thomas C. Johnson of the University of Minnesota and his colleagues point to an even more restricted window for the origin of the Victoria cichlid flock: the lake seems to have dried out almost completely less than 14,000 years ago. No more than a small fraction of individual cichlids, let alone species, could have survived such an ordeal. In that case, the speciation rate exhibited by its cichlids is truly remarkable, being unmatched by other vertebrates.
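The molecular-clock estimate above rests on simple arithmetic: two lineages that split t years ago accumulate sequence differences at twice the per-lineage substitution rate, so t is roughly d divided by 2μ. The rate and divergence figures below are illustrative assumptions chosen only to show the shape of the calculation, not the values used in the cichlid studies.

```python
# Back-of-envelope molecular-clock arithmetic.  Two lineages diverge
# at twice the per-lineage substitution rate mu, so the time since
# the split is  t = d / (2 * mu).
# d  : fraction of mtDNA sites differing between the two lineages
# mu : substitutions per site per year (assumed, for illustration)
def divergence_time(d, mu):
    return d / (2.0 * mu)

# e.g. 0.4% divergence at an assumed rate of 1e-8 substitutions/site/year
print(divergence_time(0.004, 1e-8))   # on the order of 200,000 years
```

The actual calibration is far more delicate (rates vary among lineages, and the clock is only "roughly calibrated," as the authors say), but the division above is the core of how such dates are obtained.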
In addition, Lake Nabugabo, a small body of water separated from Lake Victoria by a sandbar that is no more than 4,000 years old, contains five endemic species of cichlids. These fishes are believed to have close relatives in Lake Victoria, from which they differ mainly in the breeding coloration of the males. Even more remarkably, it turns out that the southern end of Lake Malawi was dry only two centuries ago. Yet it is now inhabited by numerous species and color morphs that are found nowhere else. These examples, bolstered by recent DNA data from Lake Tanganyika, suggest a mechanism for the speciation of cichlids: repeated isolation. It appears that successive drops in the level of Lake Tanganyika, by as much as 2,000 feet, facilitated the formation of Tropheus color morphs and all the other rock-dwelling cichlids. Populations that used to exchange genes instead became isolated in small pockets of water. They developed independently, coming into contact once again as the water level rose—but could no longer interbreed.

A Sadder Record

If the speciation rate in Lake Victoria has been record-breaking, so also has been the extinction rate. Half a century ago cichlids made up more than 99 percent of the lake's fish biomass; today they are less than 1 percent. Many of the species are already extinct, and many others are so reduced in population that the chances of their recovery are minimal. The causes of this mass extinction can perhaps be best summarized by the HIPPO acronym: Habitat destruction, Introduced species, Pollution, Population growth and Overexploitation. The "nail in Victoria's coffin" has been a voracious predatory fish, the giant Nile perch. It was introduced into the lake in the 1950s in a misguided attempt to increase fishery yields. By the mid-

[Figure: a unifractal generator (U) and four multifractal generators (M1-M4), plotted against time]
[Figure caption fragments: ...causes the same amount of market activity in a shorter time interval for the first piece of the generator and the same amount in a longer interval for the second piece... Movement of the generator to the left causes market activity to become increasingly volatile.]

Pick the Fake

How do multifractals stand up against actual records of changes in financial prices? To assess their performance, let us compare several historical series of price changes with a few artificial models. The goal of modeling the patterns of real markets is certainly not fulfilled by the first chart, which is extremely monotonous and reduces to a static background of small price changes, analogous to the static noise from a radio. Volatility stays uniform, with no sudden jumps. In a historical record of this kind, daily chapters would vary from one another, but all the monthly chapters would read very much alike. The rather simple second chart is less unrealistic, because it shows many spikes; however, these are isolated against an unchanging background in which the overall variability of prices remains constant. The third chart has interchanged strengths and failings, because it lacks any precipitous jumps. The eye tells us that these three diagrams are unrealistically simple. Let us now reveal the sources. Chart 1 illustrates price fluctuations in a model introduced in 1900 by French mathematician Louis Bachelier. The changes in prices follow a "random walk" that conforms to the bell curve and illustrates the model that underlies modern portfolio theory.
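Bachelier's model is easy to reproduce, which is partly the point: a few lines of code generate a chart with exactly the monotony described above. The step count and volatility below are arbitrary illustrative choices.

```python
import random

# Bachelier's 1900 model: each price change is an independent draw
# from the bell curve, so volatility never clusters and large jumps
# are vanishingly rare.  Parameters are illustrative, not calibrated.
def bachelier_walk(steps=250, sigma=1.0, seed=1):
    rng = random.Random(seed)
    price, path = 0.0, [0.0]
    for _ in range(steps):
        price += rng.gauss(0.0, sigma)   # bell-curve increment
        path.append(price)
    return path

path = bachelier_walk()
changes = [b - a for a, b in zip(path, path[1:])]
print(max(abs(c) for c in changes))      # in a year of such "days", rarely above ~3 sigma
```

Under the bell curve a five-sigma daily move is roughly a once-in-millions event, yet real markets produce such jumps with some regularity; that gap is the failing the multifractal model is meant to repair.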
Charts 2 and 3 are partial improvements on Bachelier's work: a model I proposed in 1963 (based on Levy stable random processes) and one I published in 1965 (based on fractional Brownian motion). These revisions, however, are inadequate, except under certain special market conditions. Of the more important five lower diagrams of the graph, at least one is a real record and at least another is a computer-generated sample of my latest multifractal model. The reader is free to sort those five lines into the appropriate categories. I hope the forgeries will be perceived as surprisingly effective. In fact, only two are real graphs of market activity. Chart 5 refers to the changes in price of IBM stock, and chart 6 shows price fluctuations for the dollar-deutsche mark exchange rate. The remaining charts (4, 7 and 8) bear a strong resemblance to their two real-world predecessors. But they are completely artificial, having been generated through a more refined form of my multifractal model. —B.B.M.

...pieces of the generator. Before each interpolation, the die is thrown, and then the permutation that comes up is selected. What should a corporate treasurer, currency trader or other market strategist conclude from all this? The discrepancies between the pictures painted by modern portfolio theory and the actual movement of prices are obvious. Prices do not vary continuously, and they oscillate wildly at all timescales. Volatility—far from a static entity to be ignored or easily compensated for—is at the very heart of what goes on in financial markets. In the past, money managers embraced the continuity and constrained price movements of modern portfolio theory because of the absence of strong alternatives. But a money manager need no longer accept the current financial models at face value. Instead multifractals can be put to work to "stress-test" a portfolio.
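The randomized interpolation just mentioned (a die is thrown before each interpolation, and the resulting permutation of the generator's pieces is used) can be sketched directly. The three-piece generator below, with turning points at (4/9, 2/3) and (5/9, 1/3), is a commonly cited cartoon of this kind; treat the specific breakpoints and recursion depth as illustrative assumptions rather than the calibrated model discussed in the article.

```python
import random

# Mandelbrot-style "cartoon": a three-piece generator, recursively
# interpolated into its own pieces, with the piece order shuffled
# ("the die is thrown") before each interpolation.  The breakpoints
# (4/9, 2/3) and (5/9, 1/3) are an assumed, commonly cited choice.
GENERATOR = [(4/9, 2/3), (1/9, -1/3), (4/9, 2/3)]  # (dx, dy) of each piece

def cartoon(depth, rng=random):
    """Return the (x, y) path of a randomized multifractal cartoon."""
    segments = [(1.0, 1.0)]            # start with the whole diagonal
    for _ in range(depth):
        refined = []
        for dx, dy in segments:
            pieces = GENERATOR[:]
            rng.shuffle(pieces)         # random permutation per interpolation
            refined.extend((dx * gx, dy * gy) for gx, gy in pieces)
        segments = refined
    x = y = 0.0                         # cumulative sums turn increments
    path = [(0.0, 0.0)]                 # into a path from (0,0) to (1,1)
    for dx, dy in segments:
        x += dx
        y += dy
        path.append((x, y))
    return path

path = cartoon(6)
print(len(path))    # 3**6 + 1 = 730 points
print(path[-1])     # ends at (1.0, 1.0), up to rounding
```

Because each piece's increments sum back to its parent segment's, the path always runs from (0, 0) to (1, 1); the shuffling alone produces the clustered, intermittent volatility that distinguishes the cartoons from Bachelier's uniform noise.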
In this technique the rules underlying multifractals attempt to create the same patterns of variability as do the unknown rules that govern actual markets. Multifractals describe accurately the relation between the shape of the generator and the patterns of up-and-down swings of prices to be found on charts of real market data. On a practical level, this finding suggests that a fractal generator can be developed based on historical market data. The actual model used does not simply inspect what the market did yesterday or last week. It is in fact a more realistic depiction of market fluctuations, called fractional Brownian motion in multifractal trading time. The charts created from the generators produced by this model can simulate alternative scenarios based on previous market activity. These techniques do not come closer to forecasting a price drop or rise on a specific day on the basis of past records. But they provide estimates of the probability of what the market might do and allow one to prepare for inevitable sea changes. The new modeling techniques are designed to cast a light of order into the seemingly impenetrable thicket of the financial markets. They also recognize the mariner's warning that, as recent events demonstrate, deserves to be heeded: On even the calmest sea, a gale may be just over the horizon.

The Author

BENOIT B. MANDELBROT has contributed to numerous fields of science and art. A mathematician by training, he has served since 1987 as Abraham Robinson Professor of Mathematical Sciences at Yale University and IBM Fellow Emeritus (Physics) at the Thomas J. Watson Research Center in Yorktown Heights, N.Y., where he worked from 1958 to 1993. He is a fellow of the American Academy of Arts and Sciences and foreign associate of the U.S. National Academy of Sciences and the Norwegian Academy.
His awards include the 1993 Wolf Prize for physics, the Barnard, Franklin and Steinmetz medals, and the Science for Art, Harvey, Humboldt and Honda prizes.

Further Reading

The Fractal Geometry of Nature. Benoit B. Mandelbrot. W. H. Freeman and Company, 1982.

Fractals and Scaling in Finance: Discontinuity, Concentration, Risk. Benoit B. Mandelbrot. Springer-Verlag, 1997.

The Multifractal Model of Asset Returns. Laurent Calvet, Adlai Fisher and Benoit B. Mandelbrot. Discussion Papers of the Cowles Foundation for Economics, Nos. 1164-1166. Cowles Foundation, Yale University, 1997.

Multifractals and 1/f Noise: Wild Self-Affinity in Physics. Benoit B. Mandelbrot. Springer-Verlag, 1999.

How Limbs Develop

A protein playfully named Sonic hedgehog is one of the long-sought factors that dictate the pattern of limb development

by Robert D. Riddle and Clifford J. Tabin

The waiting was the hardest part. But finally it was time to crack some eggs. A week earlier we had cut a small hole in the eggshell of a developing chick and inserted some genetically engineered cells into one of the two tiny buds that were destined to develop into the embryo's wings. We had engineered the cells to make a protein that we suspected to be one of the major determinants in establishing the overall pattern of wing development. Now was the moment of truth: How had the extra cells affected the formation of the limb? As we peered into the microscope to examine the embryo closely, we saw that our highest hopes had been realized. The transplanted cells had caused a whole new set of digits to form at the wing tip, confirming that we had identified an important factor in limb development. This experiment, which we conducted in the summer of 1993, partially answered a question that biologists had posed early this century: How do the cells in a developing limb "know" left from right, top from bottom, and back from front?
More specifically, what caused the digit that faces forward (anterior) when you hold your arms at your sides to form a thumb, and what caused the digit that faces toward the back (posterior) to form a pinkie? What ensured that the bone of your upper arm formed close (proximal) to your body while your fingers took shape farther away (distal)? And why did only those cells on the bottom (ventral) sides of your hands form creased palms, with the back (dorsal) sides remaining smooth? Experimental embryologists have been trying to answer these questions for decades. Until recently, however, most studies have focused on identifying the cells that are necessary for proper limb development. With the advent of molecular biology techniques, scientists can now analyze the specific genes that direct the formation of limbs. Curiously, many of the genes—and the proteins they make—are closely related to others that control the development of the limbs of fruit flies, even though vertebrates and insects are thought to have evolved from a common ancestor that lacked limbs altogether. Besides satisfying age-old curiosities about the miracle of life, such studies are helping researchers to understand how and why the processes of embryonic development sometimes go wrong, resulting in birth defects. What is more, they are indicating that the same protein that establishes the anterior and posterior sides of developing limbs affects a number of other developmental processes, from the formation of the central nervous system to the growth of cells that can cause a form of skin cancer. One of biology's oldest questions concerns whether all organisms share similar factors or processes that guide embryonic development or if each particular organism or group of organisms develops in a manner unique to it.
It might seem obvious that human arms develop similarly to those of chimpanzees, for instance, but how similar are human and chimpanzee arms to chicken wings? And does the development of mammalian arms have anything in common with the formation of the wings of flies? For years, biologists assumed that the factors that shape the developing legs of a future ballerina and those of a pesky fruit fly are very different. Any similarities between the two were thought to be simply the result of convergent evolution, in which similar structures arise through entirely different means. But two revolutionary ideas have now emerged to change that line of thinking. First, biologists now know that the same or similar genes shape the development of many comparable structures across the spectrum of the animal kingdom, from flies to mice to humans. Nearly every animal has a head on one end of its body and a tail at the other end, for example, because of the activity of a family of genes called the homeobox, or Hox genes [see "The Molecular Architects of Body Design," by William McGinnis and Michael Kuziora; Scientific American, February 1994]. Second, genes that direct the formation of one aspect of development—for instance, the sculpting of limbs—can also play a role in something as different as the formation of the nervous system. Nature, it seems, uses the same toolbox again and again to put together amazingly diverse organisms.

[Caption: HUMAN HAND AND CHICKEN WING are shaped by the same chemical signals during embryonic development. Researchers are now identifying the signals that tell cells in nascent limbs top from bottom, front from back and head from tail.]

Chicken eggs are particularly useful for studying how limbs form. Since the time of Aristotle, scientists have known that to observe how a chicken embryo develops, one simply needs to cut a hole in the shell
of a fertilized egg. For well over 100 years, embryologists have surgically altered chicken embryos through such small holes, then sealed the holes with paraffin wax (or Scotch tape, today) and incubated the eggs until hatching. Through such studies, researchers have observed that limbs form initially from buds that appear along the sides of the developing body. The buds consist of a "jacket" of outer cells, or ectoderm, surrounding a core of other cells called the mesoderm.

[Figure caption: HUMAN ARM AND CHICKEN WING (top left) develop similarly along all three axes: dorsal-ventral, anterior-posterior and proximal-distal. By manipulating particular groups of cells on developing chick limb buds—both those that will become wings and those that will become legs (bottom left, top and bottom right)—researchers have determined which cells are crucial for the three axes to form properly in all vertebrates, including humans. They have also identified some of the proteins produced by the cells that direct normal embryonic limb formation.]
Although early limb buds are not fully organized structures, they contain all the information required to form a limb: removing an early limb bud and transplanting it to another site on an embryo results in the growth of a normal limb in an abnormal location. At these early stages of development, the cells in a limb bud are not "committed" to becoming part of a thumb or a pinkie—they are in the midst of the process of becoming one or the other. Accordingly, they can be poked and prodded to help experimenters understand the rules of limb development.

In every limb bud, there are leaders and there are followers. Through the years, developmental biologists have determined that each axis of a developing limb—anterior-posterior, proximal-distal and dorsal-ventral—is organized by distinct types of cells in distinct locations in the limb bud. These cells are referred to as signaling centers. The ectoderm, for example, will make only a part of the skin of the adult, but it establishes the dorsal-ventral axis that affects the location and formation of each of the muscles and tendons. Scientists have known for years that removing the ectoderm from an early limb bud, rotating it 180 degrees and replacing it causes muscles that normally develop on the dorsal side of a chick wing to end up on the ventral side, and vice versa. The ectoderm accomplishes this by sending chemical signals to the underlying cells that will eventually form the muscles and tendons.

Beginning in the late 1940s, John W. Saunders, Jr., of the State University of New York at Albany and his colleagues observed that a particular clump of ectodermal cells at the tip of each developing chick limb bud—which they termed the apical ectodermal ridge (AER)—is responsible for setting up a limb's proximal-distal pattern. When they removed this ridge of cells, only a stump of a limb developed; when they transplanted an extra clump of the cells onto an otherwise normal limb bud, it developed into a double limb.
Moreover, the timing of the microsurgery determined how much of the limb would form and how far down the first limb the second limb would start growing. This demonstrated that the AER is both necessary and sufficient for limb outgrowth. Furthermore, because the limbs grew proximal structures first and distal structures later, the experiment showed that the AER regulates development along the proximal-distal axis.

Saunders and his co-workers also identified a second clump of cells that dictates the anterior-posterior axis of a budding limb. These cells lie just beneath the ectoderm, along the posterior edge of the limb bud. When the researchers transplanted the cells from the posterior side of one chick limb bud to the anterior side of a second bud, they found that the limb formed an entire set of additional digits—only oriented backward, so that the chicken equivalent of our little finger faced the front. Because the transplanted cells not only induced the anterior cells to form extra digits but also turned, or repolarized, them, Saunders's group labeled the region from which they took the cells the zone of polarizing activity (ZPA).

In the mid-1970s Cheryll Tickle, now at the University of Dundee, found that the ZPA works in a concentration-dependent manner: the more cells transplanted, the more digits duplicated. This evidence suggested that the ZPA functions by secreting a chemical signal called a morphogen that becomes fainter as it diffuses throughout the limb bud. (The idea of a morphogen was first proposed around the turn of the century and greatly expanded on in the late 1960s by Lewis Wolpert, who is now at University College London.) According to the morphogen hypothesis, anterior digits form from cells farthest from the ZPA, which are exposed to the lowest concentrations of the morphogen, and posterior digits form closer to the ZPA, where they are exposed to higher morphogen concentrations.
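The morphogen hypothesis lends itself to a simple quantitative sketch. The toy model below is my own illustration, not from the article: it assumes an exponential fall-off in concentration with distance from the ZPA, and the decay length and digit-fate thresholds are invented numbers chosen only to show how a single graded signal can specify several distinct fates.

```python
import math

# Toy model of the morphogen hypothesis: concentration decays
# exponentially with distance from the ZPA, and cells adopt digit
# fates according to threshold levels. All numbers are illustrative.
DECAY_LENGTH = 100.0  # hypothetical decay length, in micrometers

def concentration(distance_from_zpa, source_level=1.0):
    """Morphogen level at a given distance from the ZPA on the posterior edge."""
    return source_level * math.exp(-distance_from_zpa / DECAY_LENGTH)

def digit_fate(level):
    """Map a morphogen level to a chick-wing digit identity (thresholds invented)."""
    if level > 0.5:
        return "posterior digit IV"
    if level > 0.2:
        return "middle digit III"
    return "anterior digit II"

for distance in (25, 125, 250):  # micrometers from the ZPA
    level = concentration(distance)
    print(f"{distance:3d} um: level {level:.2f} -> {digit_fate(level)}")
```

In this picture, grafting a second ZPA onto the anterior edge amounts to adding a second, mirror-image gradient, which is why the transplants produce a backward-ordered duplicate set of digits.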
But the identity of the morphogen and exactly how it functioned remained a mystery.

Shape of Things to Come

The advent of molecular biology has given researchers the means to identify the genes involved in embryonic processes such as limb development. Once a gene is cloned—that is, once its DNA is isolated—it becomes a powerful research tool. By reading the order of the chemical "letters" that make up the gene's DNA, scientists can predict the structure of the protein it encodes. Pieces of the DNA can also be used as probes to determine where the gene is active in a developing embryo and when it is turned on. Perhaps most important, once a gene is cloned, scientists can alter the gene's expression, switching it on in places where it normally does not function or shutting it off where it normally would be on. By doing so, researchers can begin to explore the function of the gene during normal development.

During the 1980s and 1990s, researchers studying fruit flies discovered that the posterior parts of the various segments that make up the embryonic fruit-fly body plan produce a protein that is vital for fly development. The protein was named hedgehog because fly larvae with mutations in the gene that encodes the protein do not develop normally but instead appear rolled up and bristly, like frightened hedgehogs. To determine whether a similar protein might play a role in the development of vertebrates, we collaborated with researchers led by Andrew P. McMahon of Harvard University and Philip W. Ingham—now at the University of Sheffield—to use probes from the fly hedgehog gene to search for comparable genes in mice, chickens and fish. Between us, we turned up not one but three versions of the gene.
We named them after three types of hedgehog: Desert hedgehog, for a species prevalent in North Africa; Indian hedgehog, after a variety indigenous to the Indian subcontinent; and Sonic hedgehog, for the Sega computer game character found in video arcades worldwide. We discovered that all three genes exist in mice, chickens and fish but that each one has a different function in the development of those organisms. Desert hedgehog, for example, is important in sperm production because male mice with mutations in the gene are sterile. Indian hedgehog is expressed in growing cartilage, where it plays a role in cartilage development. But Sonic hedgehog has a truly remarkable pattern of expression that suggests it functions in the development of other body regions as well. Not only is Sonic hedgehog active in the ZPAs of limb buds, it is also "on" in a region of the developing spinal cord that acts as a signaling center in its own right. In addition, it is known to prompt the growth of extra digits when transplanted onto a budding limb.

Because other scientists had found that the fruit-fly hedgehog protein is secreted, we guessed that Sonic hedgehog might be one of the signals that shapes the growth of vertebrate limbs. To test this idea directly, we spliced the Sonic hedgehog gene into embryonic chick cells grown in the laboratory, causing the cells to produce the Sonic hedgehog protein, and then implanted the cells into the anterior side of a chick limb bud. Just as in Saunders's experiments, the transplanted cells prompted the formation of a duplicate set of digits that were oriented backward.

[Figure caption: EMBRYONIC CHICK limb buds require a protein called Sonic hedgehog (dark shading) to develop into wings and legs with the proper anterior-posterior orientation. The protein is made along the posterior edge of each bud.]
Since our 1993 studies, Tickle and McMahon have found that purified Sonic hedgehog protein has the same effect as the gene-spliced cells, proving that the protein is indeed responsible for establishing the anterior-posterior axis during vertebrate limb development. Moreover, Sonic hedgehog functions just as we would expect a morphogen to: high concentrations produce a full set of extra reverse-ordered digits, whereas low concentrations result in fewer duplicated, backward structures. Scientists are now studying the molecular nature of this concentration effect.

The early 1990s proved to be banner years for developmental biology. At about the same time that we and our collaborators were identifying Sonic hedgehog as the chemical signal that establishes the anterior-posterior axis of a developing limb, others were isolating the factors made by cells in the AER that set up the proximal-distal axis. Working independently, Tickle, Gail R. Martin of the University of California at San Francisco, John F. Fallon of the University of Wisconsin-Madison and their colleagues found that the AER makes several proteins called fibroblast growth factors that tell cells in a budding limb how far from the body to grow. Tickle, Martin and Fallon determined that purified fibroblast growth factors could substitute for transplanted cells from the AER in driving limb outgrowth. They observed that normal limbs grew when they stapled tiny beads soaked in the factors to the tip of a limb bud that had had its AER removed. Such limb buds usually develop only severely shortened limbs.

We now know that the production of Sonic hedgehog and of the fibroblast growth factors is coordinated in a developing limb, allowing growth along the anterior-posterior axis to keep pace with that along the proximal-distal axis. Studies by our teams and by Lee A.
Niswander of Memorial Sloan-Kettering Cancer Center in New York City have demonstrated that removing the fibroblast growth factor-producing AER from a chick limb bud shuts down the ability of that limb bud's ZPA to make Sonic hedgehog. Likewise, cutting out the ZPA prevents the AER from generating the fibroblast growth factors. But adding back the fibroblast growth factors allows Sonic hedgehog to be made. Similarly, reintroducing Sonic hedgehog fosters the production of the fibroblast growth factors.

Philip A. Beachy of the Johns Hopkins University School of Medicine and Heiner Westphal of the National Institute of Child Health and Human Development and their co-workers were the first to show that Sonic hedgehog is necessary for the proper functioning of the AER and ZPA in mice. They deleted the Sonic hedgehog gene to generate so-called knockout mice and saw that the mice developed severely shortened limbs that had failed to develop properly along both the anterior-posterior axis and the proximal-distal axis. Therefore, Sonic hedgehog is necessary and sufficient for normal limb development.

Birth Defects

The knockout mice have also indicated another dramatic role played by the Sonic hedgehog protein: generating the pattern in the brain and spinal cord that determines, for example, whether early neural cells become motor or sensory neurons. Besides developing extremely foreshortened limbs, mice lacking Sonic hedgehog have only one eye and a severe brain defect called holoprosencephaly, in which the forebrain fails to divide into two lobes. Normal motor and sensory neuron development and the formation of two eyes and a bilateral brain depend on the activity of Sonic hedgehog in the neural tube—the precursor of the adult central nervous system—and in the cells beneath the tube. Holoprosencephaly is the most frequent congenital anomaly of the human forebrain.
It can arise sporadically but also runs in families as part of several rare, inherited disorders. The degree of holoprosencephaly varies widely in affected individuals: some have mild cognitive deficits, whereas others have marked impairment accompanied by head and facial skeletal deformities. Maximilian A. Muenke of the Children's Hospital of Philadelphia, Stephen W. Scherer of the Hospital for Sick Children in Toronto and their colleagues reported that mutations that inactivate Sonic hedgehog are responsible for some sporadic and inherited cases of holoprosencephaly. Without Sonic hedgehog to specify correctly the dorsal-ventral axis in the developing forebrain, the forebrain and eye tissues fail to become bilateral structures.

Of Hedgehogs and Cancer

It might not be surprising that a development-regulating gene such as Sonic hedgehog contributes to birth defects, but researchers have also recently uncovered a truly surprising link between the gene and cancer. The protein encoded by Sonic hedgehog signals cells by binding to the same protein on the cell surface that is involved in the skin cancer basal cell carcinoma. Most chemical factors interact with susceptible cells by binding to cell-surface proteins called receptors. The binding of the factors to their specific receptors triggers a cascade of signals within the cell, ultimately leading to genes being turned on or off.

[Figure caption: SONIC HEDGEHOG PROTEIN unlocks the activity of a protein called Patched, which in turn regulates the ability of another protein called Smoothened to send a growth signal to skin cells. In basal cell carcinoma the Patched protein is absent or not functional, allowing skin cells to grow into tumors.]
The receptors to which Sonic hedgehog binds consist of two subunits: one, called Smoothened, is poised to send a signal into the cell, and another, called Patched, keeps the first subunit from sending its signal. When Sonic hedgehog binds to Patched, it causes Patched to unleash Smoothened, the signaling subunit. In cells that harbor mutations that prevent them from making functional Patched proteins, however, the signaling half of the receptor is continuously active, as if Sonic hedgehog were constantly bathing the cells. Exactly how Sonic hedgehog affects normal skin development—and how the aberrant signaling of Smoothened leads to basal cell carcinoma—is now under intensive study.

Basal cell carcinoma is a malignancy of the epidermis or of skin cells lining the hair follicles that often results from mutations caused by overexposure to the ultraviolet radiation in sunlight. In 1996 Allen E. Bale of Yale University and Matthew P. Scott of Stanford University found independently that cancer arises when hair-follicle cells develop mutations in both copies of the Patched gene, the one inherited from the mother and the other handed down by the father. The mutations can occur in both copies of the gene after birth, or individuals can be born with a mutation in one copy, which makes them highly prone to developing multiple basal cell carcinomas if the second copy becomes mutated later.

Basal cell carcinoma is highly treatable but often recurs. If researchers could find small molecules to block the activity of Smoothened, the compounds might be used to prevent the cancers. Because such drugs could be applied directly to the skin, rather than taken orally, they might lack the side effects of systemic chemotherapies. The role of Sonic hedgehog signaling in cancer should not be surprising. Molecular biology has shown in several instances that the processes that dictate development and cancer share many fundamental properties.
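The receptor logic described for Patched and Smoothened reduces to a small truth table. Here is a minimal sketch, my own abstraction rather than anything from the article (the real signaling cascade is far more elaborate): Smoothened signals unless functional Patched restrains it, and Sonic hedgehog relieves that restraint.

```python
def growth_signal(shh_present, patched_functional):
    """True when Smoothened sends its growth signal into the cell.

    Patched restrains Smoothened unless Sonic hedgehog binds Patched;
    a cell with no functional Patched signals constitutively.
    """
    smoothened_restrained = patched_functional and not shh_present
    return not smoothened_restrained

print(growth_signal(shh_present=False, patched_functional=True))   # normal resting cell: no signal
print(growth_signal(shh_present=True,  patched_functional=True))   # Sonic hedgehog bound: signal
print(growth_signal(shh_present=False, patched_functional=False))  # both Patched copies mutated: constant signal
```

The last case captures the basal cell carcinoma situation: with no functional Patched, the output is the same as if Sonic hedgehog were constantly bathing the cell.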
The same factors that regulate cell growth and development in embryos, for example, also do so in adults. When mutations arise in the genes encoding these factors in embryos, birth defects occur; when they take place in adults, tumors can form. Perhaps what is surprising is the degree to which a single factor like Sonic hedgehog can play various roles in the formation and function of an organism. Sonic hedgehog appears to be ancient: both flies and vertebrates have found multiple uses for it and many other embryonic genes. Once a molecular signaling pathway is established, nature often finds ways to use it in many other settings. One of the recurring themes of the symphony of life could be the sound of a Sonic hedgehog.

The Authors

ROBERT D. RIDDLE and CLIFFORD J. TABIN have collaborated on studies of limb development since 1990, when Riddle joined Tabin's laboratory at Harvard Medical School as a postdoctoral fellow. Riddle is now an assistant professor at the University of Pennsylvania School of Medicine. He obtained his Ph.D. in 1990 from Northwestern University. Tabin has been a faculty member at Harvard since 1987. He earned his Ph.D. at the Massachusetts Institute of Technology, where he was instrumental in identifying a mutation in ras, a gene that contributes to a variety of cancers.

Further Reading

Developmental Biology. Fifth edition. Scott F. Gilbert. Sinauer Associates, 1997.
Fossils, Genes and the Evolution of Animal Limbs. Neil Shubin, Cliff Tabin and Sean Carroll in Nature, Vol. 388, pages 639-648; August 14, 1997.
Cells, Embryos and Evolution. John Gerhart and Marc Kirschner. Blackwell Science, 1998.

The Way to Go in Space

To go farther into space, humans will first have to figure out how to get there cheaply and more efficiently. Ideas are not in short supply.
by Tim Beardsley, staff writer

The year 1996 marked a milestone in the history of space transportation. According to a study led by the accounting firm KPMG Peat Marwick, that was when worldwide commercial revenues in space for the first time surpassed governments' spending on space, totaling some $77 billion. Growth continues. Some 150 commercial, civil and military payloads were lofted into orbit in 1997, including 75 commercial payloads, a threefold increase over the number the year before. And the number of payloads reaching orbit in 1998 was set to come close to the 1997 total, according to analyst Jonathan McDowell of Harvard University. Market surveys indicate that commercial launches will multiply for the next several years at least: one estimate holds that 1,200 telecommunications satellites will be completed between 1998 and 2007. In short, a space gold rush is now under way that will leave last century's episode in California in the dust.

Space enthusiasts look to the day when ordinary people, as well as professional astronauts and members of Congress, can leave Earth behind and head for a space station resort, or maybe a base on the moon or Mars. The Space Transportation Association, an industry lobbying group, recently created a division devoted to promoting space tourism, which it sees as a viable way to spur economic development beyond Earth.

[Figure caption: SPACECRAFT DESIGNS decades from now may look very different from today's models. A solar-power station (upper left) beams microwaves down to a lightcraft (lower left) powered by magneto-hydrodynamic forces; an old-style shuttle (lower background) has released a satellite that has been picked up by a rotating tether system (upper right). A single-stage-to-orbit rotary rocket craft deploys another satellite (lower center). Meanwhile a light-sail craft sets out for a remote destination (lower right).]

The great stumbling block in this
road to the stars, however, is the sheer difficulty of getting anywhere in space. Merely achieving orbit is an expensive and risky proposition. Current space propulsion technologies make it a stretch to send probes to distant destinations within the solar system. Spacecraft have to follow multiyear, indirect trajectories that loop around several planets in order to gain velocity from gravity assists. Then the craft lack the energy to come back. Sending spacecraft to other solar systems would take many centuries.

[Figure caption: SOLAR ORBIT TRANSFER VEHICLE is now being built by Boeing. This device utilizes a large reflector to focus the sun's rays onto a block of graphite, which is heated to 2,100 degrees Celsius and vaporizes stored liquid-hydrogen propellant to generate thrust. The vehicle gently lifts payloads from low-Earth orbits to higher orbits over a period of weeks. The lightweight vehicle can launch satellites using smaller rockets than would otherwise be needed.]

Fortunately, engineers have no shortage of inventive plans for new propulsion systems that might someday expand human presence, literally or figuratively, beyond this planet. Some are radical refinements of current rocket or jet technologies. Others harness nuclear energies or would ride on powerful laser beams. Even the equivalents of "space elevators" for hoisting cargoes into orbit are on the drawing board.

"Reach low orbit and you're halfway to anywhere in the Solar System," science-fiction author Robert A. Heinlein memorably wrote. And virtually all analysts agree that inexpensive access to low-Earth orbit is a vital first step, because most scenarios for expanding humankind's reach depend on the orbital assembly of massive spacecraft or other equipment, involving multiple launches. The need for better launch systems is already immediate, driven by private- and public-sector demand.
Most commercial payloads are destined either for the now crowded geostationary orbit, where satellites jostle for elbow room 36,000 kilometers (22,300 miles) above the equator, or for low-Earth orbit, just a few hundred kilometers up. Low-Earth orbit is rapidly becoming a space enterprise zone, because satellites that close can transmit signals to desktop or even handheld receivers. Scientific payloads are also taking off in a big way. More than 50 major observatories and missions to other solar system bodies will lift off within the next decade. The rate of such launches is sure to grow as the National Aeronautics and Space Administration puts into practice its new emphasis on faster, better, cheaper craft: science missions now being developed cost a third of what a typical early-1990s mission did. Furthermore, over its expected 15-year lifetime the International Space Station will need dozens of deliveries of crew, fuel and other cargo, in addition to its 43 planned assembly flights. Scores of Earth-observing spacecraft will also zoom out of the atmosphere in coming years, ranging from secret spy satellites to weather satellites to high-tech platforms monitoring global change.

The pressing demand for launches has even prompted Boeing's commercial space division to team up with RSC-Energia in Moscow and Kvaerner Maritime in Oslo to refurbish an oil rig and create a 34,000-ton displacement semi-submersible launch platform that will be towed to orbitally favorable launch sites.

After the Gold Rush

Even the most sobersided scientists would like to see many more research spacecraft monitoring Earth's environment and exploring the farther reaches of the solar system. The more visionary ones foresee a thriving space industry based on mining minerals from asteroids or planets and extracting gases from their atmospheres for energy and life support. K. R.
Sridhar of the University of Arizona borrows the rhetoric of Mars enthusiasts when he says space pioneers will have to "live off the land": he has developed an electrochemical cell that should be able to generate oxygen from the Martian atmosphere. Already one firm, SpaceDev, has talked about mining minerals from asteroids, earning a complaint from the Securities and Exchange Commission for its incautious enthusiasm. Some dreamers even devote themselves to finding ways of sending probes beyond the sun's domain into the vastness of interstellar space.

The clamor for a ticket to space is all the more remarkable in light of the extremely high cost of getting there. Conventional rockets, most developed by governments, cost around $20,000 per kilogram delivered to low-Earth orbit. The space shuttle, now operated privately by United Space Alliance, a joint venture of Boeing and Lockheed Martin, was intended to be an inexpensive ride to space, but its costs are no less than those of typical expendable rockets. In any event, the shuttle has been unavailable for commercial launches since the Challenger disaster in 1986. If a shuttle were outfitted today to take 50 passengers for a flight, they would have to pay $8.4 million a head for its operator to break even.

Getting into space is expensive today because boosters carry both the oxidizer and the fuel for their short ride and (with the exception of the partly reusable space shuttle) are abandoned to burn in the atmosphere after their few fiery minutes of glory. Engineers have long hoped to slash launch costs by building reusable craft that would need only refueling and some basic checks between flights, like today's commercial airliners. An energetic group of companies dedicated to reducing launch costs has sprung up in recent years, many of them populated with former NASA top brass.
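The cost figures above imply a per-flight number worth making explicit. A back-of-envelope check (the per-flight total is my inference from the break-even ticket price quoted in the text, not a figure the article states directly):

```python
# Launch economics using the figures quoted in the text.
COST_PER_KG_TO_LEO = 20_000     # dollars per kilogram, conventional rockets
PASSENGERS = 50                  # hypothetical shuttle passenger outfit
BREAK_EVEN_TICKET = 8_400_000    # dollars a head, per the text

# 50 seats at $8.4 million each implies the operator's cost per flight.
implied_flight_cost = PASSENGERS * BREAK_EVEN_TICKET
print(f"Implied shuttle flight cost: ${implied_flight_cost / 1e6:.0f} million")

# At $20,000/kg, even a modest one-tonne satellite is a $20-million ride.
print(f"One-tonne satellite to LEO: ${COST_PER_KG_TO_LEO * 1000 / 1e6:.0f} million")
```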
Most are adapting existing technology to gain a commercial edge for launching small payloads into low-Earth orbit. Nobody should underestimate the risks of building rockets, even ones based on conventional designs. The very first Boeing Delta 3, which was the first large booster developed privately in decades, exploded shortly after liftoff from Cape Canaveral last August, setting back Boeing's plans. A U.S. Air Force/Lockheed Martin Titan 4A had detonated over the cape two weeks earlier, and European Arianespace had a costly failure of a new launcher in 1996. In the U.S., disagreements over costs and demand have led to the cancellation of several government-sponsored efforts to develop new expendable rockets in the past decade.

Buck Rogers Rides Again

The entrepreneurs are not easily deterred. One of the farthest along and best financed of this new breed is Kistler Aerospace in Kirkland, Wash., which is building the first two of five planned launchers that will employ Russian-built engines. The first stage of each vehicle would fly back to the launch site; the second would orbit Earth before returning. Both stages would descend by parachute and land on inflatable air bags. The company has raised $440 million and seeks hundreds of millions more; it says that despite world financial turmoil, flights should start this year. Privately financed Beal Aerospace Technologies in Texas is developing a three-stage launcher that is scheduled to fly in the third quarter of 2000. A reusable version may be developed later, says Beal vice president David Spoede.

Several firms plan to increase their advantage by using oxygen in the atmosphere, thereby reducing the amount of it that their rockets have to carry. This can be done most easily with a vehicle that takes off and lands horizontally. Pioneer Rocketplane in Vandenberg, Calif., is developing a lightweight, two-seater vehicle powered by a rocket engine as well as conventional turbofan engines.
The plane, with a payload and attached second stage in its small shuttle-style cargo bay, takes off from a runway with its turbofans and climbs to 6,100 meters (20,000 feet). There it meets a fuel tanker that supplies it with 64,000 kilograms (140,000 pounds) of liquid oxygen. After the two planes separate, the oxygen is used to fire up the smaller plane's rocket engine and take it to Mach 15 and 113 kilometers' altitude, at which point it can release its payload and second stage. A fail-safe mechanism for the cryogenic oxygen transfer is the main technical challenge, says the company's vice president for business development, Charles J. Lauer.

[Figure caption: ROTON VEHICLE is being constructed by Rotary Rocket in Redwood City, Calif. The craft takes off vertically, powered by a lightweight rotary rocket engine. After delivering a payload to low-Earth orbit, the craft comes about and unfolds helicopter blades. It reenters the atmosphere base-first. The helicopter blades rotate passively at first but are spun by small rockets on their tips for the vertical landing. Approximate launch year: 2000. Approximate cost: $100 million. Power source: rotary rocket engine.]

Kelly Space and Technology is also developing a horizontal takeoff plane for satellite launches, but one that can handle larger payloads, up to 32,000 kilograms. Kelly's Astroliner, which looks like a smaller version of the shuttle, has to be towed to 6,100 meters.
At that altitude, its rocket engines are tested, and a decision is made either to zip up to 122,000 meters or to fly back to the launch site. The first two vehicles should cost close to $500 million, and Kelly is now lining up investors. Other companies are being more technologically adventurous. One of the most intriguing is Rotary Rocket in Redwood City, Calif., which is building a crewed rocket that would take off and land vertically.

Air-Breathing Engines

by Charles R. McClinton

For years, engineers have dreamed of building an aircraft that could reach hypersonic speeds, greater than Mach 5, or five times the speed of sound. Propelled by a special type of air-breathing jet engine, a high-performance hypersonic craft might even be able to "fly" into orbit—a possibility first considered more than four decades ago. Recently, as the technology has matured and as the demand for more efficient Earth-to-orbit propulsion grows, scientists have begun seriously considering such systems for access to space.

Air-breathing engines have several advantages over rockets. Because the former use oxygen from the atmosphere, they require less propellant—fuel, but no oxidizer—resulting in lighter, smaller and cheaper launch vehicles. To produce the same thrust, air-breathing engines require less than one seventh the propellant that rockets do. Furthermore, because air-breathing vehicles rely on aerodynamic forces rather than on rocket thrust, they have greater maneuverability, leading to higher safety: flights can be aborted, with the vehicle gliding back to Earth. Missions can also be more flexible. But air-breathing engines for launch vehicles are relatively immature compared with rocket technology, which has continually evolved, with refinements and re-refinements, over the past 40 years. Hypersonic air-breathing propulsion is just now finally coming of age.
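The "one seventh" figure can be rationalized with a little arithmetic (my reasoning, not spelled out in the text): in a hydrogen-oxygen rocket, roughly six kilograms of oxygen burn with each kilogram of hydrogen, so a vehicle that takes its oxygen from the air need carry only the fuel fraction of the propellant.

```python
# Approximate LOX:LH2 mass mixture ratio for a hydrogen-oxygen rocket;
# a rough, assumed figure, though real engines run near this value.
OXIDIZER_TO_FUEL = 6.0

def airbreather_propellant_fraction(mixture_ratio=OXIDIZER_TO_FUEL):
    """Fraction of a rocket's propellant mass an air-breathing vehicle must carry
    (the fuel alone, with the oxidizer taken from the atmosphere)."""
    return 1.0 / (1.0 + mixture_ratio)

print(airbreather_propellant_fraction())  # ~0.14, i.e. about one seventh
```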
Of course, jet engines—which work by compressing atmospheric air, combining it with fuel, burning the mixture and expanding the combustion products to provide thrust—are nothing new. But turbojet engines, such as those found on commercial and fighter aircraft, are limited to Mach 3 or 4, above which the turbine and blades that compress the air suffer damage from overheating. Fortunately, at such high supersonic speeds a turbine is not required if the engine is designed so that the air is "ram"-compressed. Such an engine has an air inlet that has been specially shaped to slow and compress the air when the vehicle is moving rapidly through the atmosphere. Because ramjets cannot work unless the vehicle is traveling at high speeds, they have been integrated in the same engine housing with turbojets, as in the French Griffon II experimental aircraft, which set a speed record of 1,640 kilometers per hour (1,020 miles per hour) around a course in 1959. Ramjets have also been combined with rockets in surface-to-air and air-to-surface missiles. But ramjets are limited to about Mach 6, above which the combustion chamber becomes so hot that the combustion products (water) decompose. To obtain higher speeds, supersonic-combustion ramjets, or scramjets, reduce the compression of the airflow at the inlet so that it is not slowed nearly as much. Because the flow remains supersonic, its temperature does not increase as dramatically as it does in ramjets. Fuel is injected into the supersonic airflow, where it mixes and must burn within a millisecond. The upper speed limit of scramjets has yet to be determined, but theoretically it is above the range required for orbital velocity (Mach 20 to 25).

COMPUTER MODEL of a scramjet reveals locations where heat transfer is at a maximum (orange). The supersonic flow of air underneath the vehicle helps to minimize thermal stresses.
But at such extreme speeds, the benefits of scramjets over rockets become small and possibly moot because of the resulting severe structural stresses. Hypersonic air-breathing engines can operate with a variety of fuel sources, including both hydrogen and hydrocarbons. Liquid hydrogen, which powers the U.S. space shuttle, is the choice for space launch because it can be used to cool the engine and vehicle before being burned. Hydrocarbons cannot be utilized so efficiently and are limited to speeds less than about Mach 8. For a scramjet-powered craft, which must be designed to capture large quantities of air, the distinction between engine and vehicle blurs.

ture of the design, called the Roton, is its engine. Oxidizer and fuel are fed into 96 combustors inside a horizontal disk seven meters in diameter that is spun at 720 revolutions per minute before launch. Centrifugal force provides the pressure for combustion, thereby eliminating the need for massive, expensive turbopumps and allowing the vehicle's single stage to go all the way to orbit. The Roton descends with the aid of foldaway helicopter blades that are spun by tiny rockets on their tips, like a Catherine wheel. Rotary Rocket says it will be able to deliver payloads to low-Earth orbit for a tenth of today's typical launch price. The first orbital flight is scheduled for 2000; the company has already tested individual combustors, and atmospheric flights are supposed to take place this year. The design "has got a lot of challenges," observes Mark R. Oderman, managing director of CSP Associates in Cambridge, Mass., who has surveyed new rocket technologies. Oderman says the Roton has many features "that imply high levels of technical or financial risk." Space Access in Palmdale, Calif., is designing an altogether different but equal-
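The pumping pressure available from the Roton's spinning combustor disk can be estimated from the figures given above (a seven-meter disk spun at 720 revolutions per minute). The propellant density used here is an assumed round number, not from the article:

```python
import math

RPM = 720.0              # spin rate before launch, from the article
DISK_DIAMETER_M = 7.0    # disk diameter, from the article
RHO = 1000.0             # assumed mean propellant density, kg/m^3

omega = RPM * 2.0 * math.pi / 60.0   # angular speed, rad/s
r = DISK_DIAMETER_M / 2.0            # combustors sit near the rim

# Propellant spun from the axis out to radius r picks up pressure per
# dp/dr = rho * omega^2 * r, which integrates to:
delta_p = 0.5 * RHO * omega**2 * r**2   # pascals

delta_p_atm = delta_p / 101_325.0       # a few hundred atmospheres
```

A few hundred atmospheres is comparable to the chamber pressures of conventional turbopump-fed rocket engines, which is the point of the design.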
The oncoming flow is deflected mainly by the underside of the craft, which increases the pressure of the diverted air. Generally, the change is great enough to cause a pressure discontinuity, called a shock wave, which originates at the ship's nose and then propagates through the atmosphere. Most of the compressed air between the bottom of the vehicle and the shock wave is directed into the engine. The air gets hotter as its flow is slowed and as fuel is burned in the combustion region. The end product of the reaction expands through both an internal and an external nozzle, generating thrust. The high pressures on the underside of the vehicle also provide lift. To broaden the scramjet's operating range, engineers have designed vehicles that can fly in either scram or ram mode. The dual-mode operation can be achieved either by constructing a combustor of variable geometry or by shifting the fuel flow between injectors at different locations. Because neither scramjets nor ramjets can operate efficiently when they are traveling below Mach 2 or 3, a third type of propulsion (perhaps turbojet or rocket) is required for takeoff. So-called rocket-based combined-cycle engines, which could be used in a space vehicle, rely on a rocket that is integrated within the scramjet combustor to provide thrust from takeoff through subsonic, low-supersonic and then ramjet speeds. Ramjet operation is then followed by scramjet propulsion to at least Mach 10 or 12, after which the rocket is utilized again to supplement the scramjet thrust. Above Mach 18, the rocket by itself propels the vehicle into orbit and enables it to maneuver in space. The National Aeronautics and Space Administration is currently testing several variations of such a system. First, though, much work remains to validate scramjets.
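The mode sequence of a rocket-based combined-cycle engine described above can be summarized as a simple lookup from flight Mach number to operating mode. This is a sketch: the article gives approximate boundaries ("Mach 2 or 3," "at least Mach 10 or 12"), so the cutoffs chosen here are indicative only:

```python
def rbcc_mode(mach: float) -> str:
    """Operating mode of a rocket-based combined-cycle engine versus
    flight Mach number, following the staging described in the text.
    Boundaries are approximate choices within the article's ranges."""
    if mach < 3.0:
        return "rocket (ejector)"          # takeoff through low supersonic
    if mach < 6.0:
        return "ramjet"                    # ram compression, subsonic burning
    if mach < 12.0:
        return "scramjet"                  # supersonic combustion
    if mach < 18.0:
        return "rocket-assisted scramjet"  # rocket supplements scramjet thrust
    return "rocket"                        # into orbit, on-orbit maneuvering
```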
Sophisticated computational fluid-dynamic and engineering design methods have made it possible to develop a launch vehicle that has a scramjet built into its structure. Challenges remaining include developing lightweight, high-temperature materials, ensuring rapid and efficient fuel mixing and combustion, and minimizing the buildup of undesirable heat. In the 1970s the NASA Langley Research Center demonstrated basic scramjet technology with models of hypersonic vehicles and a wind tunnel. Additional ground tests of prototype engines have been performed elsewhere in the U.S. as well as in England, France, Germany, Russia, Japan and Australia, with other related research under way in countries such as China, Italy and India. Today scientists routinely conduct ground tests of scramjet engines at simulated speeds up to Mach 15. In flight tests the Russians have demonstrated ramjet operation of a dual-mode scramjet up to Mach 6.4. To date, though, no vehicle has flown under scramjet power. But this ultimate test is nearing reality. Through its Hyper-X research program at Langley and Dryden Flight Research Center, NASA is currently building the X-43A, a 3.6-meter-long aircraft that will demonstrate scramjet flight at Mach 7 and Mach 10 within the next three years. If all goes well, the tests will pave the way for future uses of scramjet propulsion, possibly in a vehicle designed for hypersonic flight into space.

SCRAMJETS (top) are designed to capture large quantities of air underneath the craft for burning with a fuel source, such as liquid hydrogen. Dual-mode scramjet engines could be combined with rockets (graph: method of propulsion versus flight Mach number, progressing from rocket or turbojet through ramjet and dual-mode scramjet to rocket-assisted scramjet and finally rocket alone) in a vehicle that would, in essence, "fly" into space.

CHARLES R.
McCLINTON, technology manager of the Hyper-X Program at the NASA Langley Research Center in Hampton, Va., has been intrigued and captivated by the technical challenges of hypersonic air-breathing propulsion since the 1960s.

ly daring craft. Its heavy space plane would take off and land horizontally under the power of a proprietary engine design called an ejector ramjet. This novel engine, which has been tested on the ground, will propel the craft from a standstill to Mach 6, according to Space Access's Ronald K. Rosepink—a performance well beyond anything in service today. Rosepink says the engine is almost 10 times more efficient than existing engines. At Mach 6, the plane will fire up two liquid-hydrogen-fueled rockets. At Mach 9, its nose will open like the jaws of a crocodile to release the second and third stages plus the payload. All the stages have wings and will fly back and land horizontally at the launch strip. Space Access's plane will handle payloads of around 14,000 kilograms, as big as

Space Tethers
by Robert L. Forward and Robert P. Hoyt

When humans begin to inhabit the moon and planets other than Earth, they may not use the modern technology of rockets. Instead space travel and settlement may depend on an ancient technology invented long before recorded history—string. How can mere string propel objects through space? Consider two scenarios. First, a thick strand connecting two satellites can enable one to "throw" the other into a different orbit, much like a hunter casting a stone with a sling. Such a concept could be adapted for transporting payloads to the moon and beyond. Second, if the string is a conductive wire, electricity flowing through it will interact with Earth's magnetic field to generate propulsive forces. The great advantage of both types of tethers—momentum transfer and electrodynamic—is their economical operation.
Instead of consuming huge quantities of propellant, they work by simply draining a little momentum from a body already in orbit or by using electrical energy supplied from solar panels. To date, 17 space missions have involved tethers. Most of these missions have been successful, but the public has heard mainly about two failures. In 1992 a satellite built by the Italian Space Agency was to be released upward, away from Earth, from the space shuttle Atlantis at the end of a long tether made of insulated copper wire. But the spool mechanism jammed, halting the experiment. Four years later the National Aeronautics and Space Administration tried again. In that mission, as the tether approached its full 20-kilometer (12-mile) length, the motion of the shuttle through Earth's magnetic field generated 3,500 volts in the tether.

those carried by the shuttle. Commercial service could start in 2003, Rosepink claims. The most prominent launch vehicle in development, the X-33, is under construction at Lockheed Martin's Skunk Works in Palmdale, Calif., as part of a joint industry-NASA effort to reduce launch costs 10-fold. The X-33 is a roughly half-size experimental craft intended to test a type of rocket engine known as a linear aerospike, as well as various other technologies. On paper the linear aerospike can power a fully reusable, vertical takeoff vehicle to orbit with a single stage of engines that would automatically adapt to changing atmospheric pressure. But the X-33, which will not itself achieve orbit, pushes the limits of current construction techniques. And some observers

Electronic devices on the shuttle and the Italian satellite provided an electrical conduit to the ionosphere, allowing ampere-level currents to flow through the tether.
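The 3,500 volts reported for the 20-kilometer tether is consistent with the motional EMF of a conductor sweeping across Earth's magnetic field, V = vBL. A quick check, in which the orbital speed and field strength are assumed typical values (only the tether length comes from the article):

```python
V_ORBITAL = 7_700.0   # shuttle orbital speed, m/s (assumed, typical for LEO)
B_FIELD = 2.3e-5      # geomagnetic field at shuttle altitude, tesla (assumed)
LENGTH = 20_000.0     # tether length from the article, m

# Motional EMF for a conductor moving across a magnetic field: V = v * B * L
emf = V_ORBITAL * B_FIELD * LENGTH   # close to the 3,500 volts reported
```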
The experiment demonstrated that such electrodynamic tethers can convert shuttle momentum into kilowatts of electrical power, and vice versa. Unfortunately, a flaw in the insulation allowed a high-power electric arc to jump from the tether to the deployment boom, and the arc burned through the tether. But although the break aborted the electrodynamic part of the project, it inadvertently triggered a spectacular display of momentum transfer. At the time, the Italian satellite was 20 kilometers above the shuttle and was being pulled along faster than the orbital speed for that higher altitude. Consequently, when the tether broke, the excess momentum made the satellite soar to seven times the tether length, or 140 kilometers, above the shuttle. Other work has had greater success. In 1993, to test an idea proposed by Joseph A. Carroll of Tether Applications in San Diego, a payload attached to a 20-kilometer tether was deployed downward from a large satellite. Because the speed of the payload was then slower than that required for an object at that reduced orbital altitude, cutting the tether at the right moment caused the package to descend toward a predetermined point on Earth's surface. Tether Applications is now developing a reentry capsule and tether that the International Space Station could use to send urgent deliveries to Earth, including scientific payloads that cannot wait for the next shuttle pickup. In a related mission in 1994, a payload was left hanging at the end of a 20-kilometer tether to see how long the connection—as thick as a kite string—would survive collisions with micrometeors and space debris. The expected lifetime of the tether, which could readily be cut by a particle the size of a sand grain traveling at high speed, was a meager 12 days. As things turned out, it was severed after only four.
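Why does cutting a downward-hanging tether de-orbit the payload? The payload is dragged around at the parent satellite's angular rate, so it moves a few tens of meters per second slower than a free circular orbit at its own altitude; released, it falls onto a lower-perigee ellipse. A sketch of the speed deficit (the parent's altitude is an assumption; only the 20-kilometer tether length is from the article):

```python
import math

MU = 3.986e14        # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.378e6    # Earth radius, m
ALT = 300e3          # assumed altitude of the parent satellite, m
D = 20e3             # tether length, from the article, m

r = R_EARTH + ALT
omega = math.sqrt(MU / r**3)            # angular rate of the whole system

# The hanging payload is forced around at omega, below the center of mass:
tip_speed = omega * (r - D)
# A free circular orbit at that lower altitude would need to go faster:
circular_speed = math.sqrt(MU / (r - D))

deficit = circular_speed - tip_speed    # a few tens of m/s short
```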
The experiment demonstrated the need to make tethers out of many lines, separated so that they cannot all be cut by the same particle yet joined periodically so that when one line fails, the others take up the load. With that in mind, the Naval Research Laboratory (NRL) and the National Reconnaissance Office (NRO) fabricated a 2.5-millimeter-diameter hollow braid of Spectra fiber (a high-strength polymer used in fishing lines) loosely packed with yarn. A four-kilometer length linking two satellites that was launched in June 1996 has remained orbiting in space uncut for almost three years. In a follow-up experiment last October, NRL and NRO tested a tether with a different design: a thin plastic tape three centimeters wide with strong fiber strands running along its length. The six-kilometer tether should survive for many years in space, but the tape makes it heavy. Our company, Tethers Unlimited in Clinton, Wash., is working with Culzean Fabrics and Flemings Textiles, both in Kilmarnock, Scotland, to fabricate multiline tethers with an open, fishnetlike pattern that will weigh less and should last in space for many decades. Other tether demonstrations are scheduled. The Michigan Technic Corporation in Holland, Mich., has plans in 2000 for a shuttle to release two science packages joined by a two-kilometer tether. In addition, the NASA Marshall Space Flight Center is investigating the use of electrodynamic tethers for propellantless space propulsion. In mid-2000 a mission will demonstrate that a conducting tether can lower the orbit of a Delta 2 upper stage. At Tethers Unlimited, we are developing a commercial version of the NASA concept: a small package that would be attached to a satellite or upper stage before launch. When the spacecraft completed its mission or malfunctioned, the conducting tether would unfurl and drag against Earth's magnetic field, causing the craft to lose altitude rapidly until it burned up in the upper atmosphere. 
We will test such a tether de-orbit device in late 2000 on an upper stage built by the Lavochkin Association of Russia. NASA is also considering such electrodynamic tethers for upward propulsion. In the system, solar panels would supply a flow of electricity through the tether to push against Earth's magnetic field. The resulting force could haul payloads around Earth indefinitely. This approach might be used to keep the International Space Station in orbit without refueling. How far can tethers take humankind in the future? We and others have analyzed a system of rapidly cartwheeling, orbiting tethers up to hundreds of kilometers long for delivering payloads to the moon and even farther. The idea is simple—think of Tarzan swinging from one vine to the next. First, a low-Earth-orbit tether picks up a payload from a reusable launch vehicle and hands the delivery to another tether in a more distant elliptical Earth orbit. The second tether then tosses the object to the moon, where it is caught by a Lunavator tether in orbit there. The Lunavator would be cartwheeling around the moon at just the right velocity so that, after catching the payload, it could gently deposit the object onto the lunar surface a half-rotation later. Simultaneously, the tether could pick up a return load. No propellant would be required if the amount of mass being delivered and picked up were balanced. Such a transportation mechanism could become a highway to the moon that might make frequent lunar travel commonplace. Obviously, there are many technological challenges that must be overcome before such a system becomes a reality, but its potential for opening up an economical expressway in space is tremendous. Perhaps someday there will be numerous cartwheeling tethers around many of the planets and their moons, carrying the hustle and bustle of interplanetary commerce. And it all will have begun with a piece of string. ROBERT L. FORWARD and ROBERT P.
HOYT are the founders of Tethers Unlimited, a start-up aerospace company based in Clinton, Wash., that specializes in developing space tether systems for commercial applications.

now doubt whether it will be able to provide NASA with enough information for a promised year 2000 decision on whether the agency should continue to rely on current shuttles until after 2020 or instead phase out those expensive workhorses around 2012. Difficulties in building the engines have delayed the first flight of the X-33 by six months, until the end of this year. And Daniel R. Mulville, NASA's chief engineer, maintains that a further "year or two" of development will most likely be needed after flight tests are completed in late 2000 before a decision on building a full-size single-stage-to-orbit vehicle. (Lockheed Martin, however, which calls its design the VentureStar, says it will be ready to commit by the end of 2000.) One problem: the world does not have a large enough autoclave to cure the Ven-

Highways of Light
by Leik N. Myrabo

Today's spacecraft carry their own source of power. The cost of space travel could be drastically reduced by leaving the fuel and massive components behind and beaming high-intensity laser light or microwave energy to the vehicles. Experiments sponsored over the past year by the National Aeronautics and Space Administration and the U.S. Air Force have demonstrated what I call a lightcraft, which rides along a pulsed infrared laser beam from the ground. Reflective surfaces in the craft focus the beam into a ring, where it heats air to a temperature nearly five times hotter than the surface of the sun, causing the air to expand explosively for thrust. Using an Army 10-kilowatt carbon dioxide laser pulsing 28 times per second, Franklin B. Mead of the U.S.
Air Force Research Laboratory and I have successfully propelled spin-stabilized miniature lightcraft measuring 10 to 15 centimeters (four to six inches) in diameter to altitudes of up to 30 meters (99 feet) in roughly three seconds. We have funding to increase the laser power to 100 kilowatts, which will enable flights up to a 30-kilometer altitude. Although today's models weigh less than 50 grams (two ounces), our five-year goal is to accelerate a one-kilogram microsatellite into low-Earth orbit with a custom-built, one-megawatt ground-based laser—using just a few hundred dollars' worth of electricity. Current lightcraft demonstration vehicles are made of ordinary aircraft-grade aluminum and consist of a forward aeroshell, or covering, an annular (ring-shaped) cowl and an aft part consisting of an optic and expansion nozzle. During atmospheric flight, the forward section compresses the air and directs it to the engine inlet. The annular cowl takes the brunt of the thrust. The aft section serves as a parabolic collection mirror that concentrates the infrared laser light into an annular focus, while providing another surface against which the hot-air exhaust can press. The design offers automatic steering: if the craft starts to move outside the beam, the thrust inclines and pushes the vehicle back. A one-kilogram lightcraft will accelerate this way to about Mach 5 and reach 30 kilometers' altitude, then switch to onboard liquid hydrogen for propellant as air becomes scarce. One kilogram of hydrogen should suffice to take the craft to orbit. A version 1.4 meters in diameter should be able to orbit microsatellites of up to 100 kilograms by riding a 100-megawatt laser beam.

MINIATURE LIGHTCRAFT demonstration vehicle has already flown to a height of 30 meters in tests, powered by a 10-kilowatt laser. Larger designs should be able to accelerate to orbit.
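The flight figures quoted above (a sub-50-gram craft reaching 30 meters in roughly three seconds on a 10-kilowatt beam) imply a thrust-to-beam-power coupling on the order of 100 newtons per megawatt. A rough kinematic sketch, under the simplifying assumption of constant acceleration from rest:

```python
G = 9.81       # m/s^2
MASS = 0.050   # kg, "less than 50 grams" (taken at the upper end)
HEIGHT = 30.0  # m, altitude reached, from the article
TIME = 3.0     # s, "in roughly three seconds"
LASER_POWER = 10e3  # W, from the article

# Constant acceleration from rest: h = (1/2) a t^2
accel = 2.0 * HEIGHT / TIME**2        # about 6.7 m/s^2
thrust = MASS * (accel + G)           # engine must also support the weight

coupling = thrust / LASER_POWER       # thrust per watt of beam power
coupling_n_per_mw = coupling * 1e6    # on the order of 100 N per megawatt
```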
Because the beams we use are pulsed, this power might be achieved fairly easily by combining the output from a group of lasers. Such lasers could launch communications satellites and de-orbit them when their electronics become obsolete. Lightcraft with different geometries can move toward their energy source rather than away from it—or even sideways. These variant vehicles have potential for moving cargo economically around the planet. Lightcraft could also be powered by microwaves. Microwaves cannot achieve such high power densities as lasers, so the vehicles would have to be larger. But microwave sources are considerably less expensive and easier to scale to very high powers. I have also designed more sophisticated beamed-energy craft, operating on a different principle, that could transport passengers. These craft would be better for carrying larger cargoes because they can produce thrust more efficiently. A mirror in the craft focuses some of the incoming beamed energy at a point one vehicle-diameter ahead of the vehicle. The intense heat creates an "air spike" that diverts oncoming air past the vehicle, decreasing drag and reducing the heating of the craft. This craft taps some additional beamed energy to generate powerful electric fields around the rim, which ionize air. It also uses superconducting magnets to create strong magnetic fields in that region. When ionized air moves through electric and magnetic fields in this configuration, magnetohydrodynamic forces come into play that accelerate the slipstream to create thrust. By varying the amount of energy it reflects forward, the lightcraft can control the airflow around the vehicle. I demonstrated reduction of drag by an air spike in April 1995 in a hypersonic shock tunnel at Rensselaer Polytechnic Institute, though with an electrically heated plasma torch rather than with laser power. Tests aimed at generating magnetohydrodynamic thrust, using a 15-centimeter-diameter device, have just begun.
A person-size lightcraft of this type driven by microwaves or by a 1,000-megawatt pulsed laser should be able to operate at altitudes up to 50 kilometers and to accelerate easily to orbital velocities. Lightcraft could revolutionize transportation if they are driven from orbiting solar-power stations. But the cost of assembling the orbital infrastructure eventually must be reduced below a few hundred dollars per kilogram. It now costs about

tureStar's all-composite liquid-hydrogen tank. More effort is also needed on the metallic tiles that will protect the craft from the heat of reentry. The VentureStar was billed as a potential national launch system, notes Marcia S. Smith of the Congressional Research Service. Yet the timing could be awkward, as the first VentureStar would not carry humans. NASA has recently asked industry to study the options for carrying to orbit both human and nonhuman cargo early next century. Some potentially useful tricks are being explored with a smaller experimental vehicle known as the X-34. It will test two-stage-to-orbit technologies, including a new type of reusable ceramic tile, starting this year. Looking beyond X-33 and X-34 technology, the agency recently beefed

$20,000 to put a kilogram of payload in orbit by means of the space shuttle, about 100 times too much. I think we can bridge the gap by making the first orbital power station one that is specialized for enabling cheap access to space. Imagine a one-kilometer-diameter structure built like a giant bicycle wheel and orbiting at an altitude of 500 kilometers. Its mass would be about 1,010 metric tons, and it would slowly spin to gain gyroscopic stability. Besides the structural "spokes," the wheel would have a disk made from 55 large, pie-slice segments of 0.32-millimeter-thick silicon carbide.
Completely covering one side of the silicon carbide would be 30 percent efficient, thin-film solar photovoltaic cells capable of supplying 320 megawatts of electricity. (Such devices are expected within a decade.) On the other side would be 13.2 billion miniature solid-state transmitters, each just 8.5 millimeters across and delivering 1.5 watts of microwave power. Today's heavy-lift chemical rockets could loft this entire structure over about 55 launches, at an affordable cost of perhaps $5.5 billion. The station would be ringed by an energy storage device consisting of two superconducting cables, each with a mass of 100 metric tons, that could be charged up with counterflowing electric currents. (This arrangement would eliminate the titanic magnetic torque that would be produced by a single cable.) During two orbits of Earth, the station would completely charge this system with 1,800 gigajoules of energy. It would then beam down 4.3 gigawatts of microwave power onto a lightcraft at a range of about 1,170 kilometers. Torquing forces produced by shifting small amounts of current from one cable to the other would crudely point the power station, but fine control would come from a beacon mounted on the lightcraft. It would send a signal that would coordinate the individual transmitters on the power station to create a spot 10 meters in diameter at the launch site. The vehicle could reach orbit in less than five minutes, subjecting occupants to no more than three g's of acceleration, about the same that shuttle astronauts experience.

ORBITING solar-power station (upper left) could beam microwave energy to an ascending lightcraft (right) powered by magnetohydrodynamic thrust. The lightcraft focuses the microwave energy to create an "air spike" that deflects oncoming air. Electrodes on the vehicle's rim ionize air and form part of the thrust-generating system.
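The station's energy budget can be sanity-checked from the numbers above (320 megawatts of solar input, 1,800 gigajoules stored, 4.3 gigawatts beamed). This sketch ignores eclipse time, which is one reason the computed charge time comes out somewhat shorter than the two orbits the article quotes:

```python
SOLAR_POWER = 320e6      # W of photovoltaic output, from the article
STORED_ENERGY = 1.8e12   # J (1,800 gigajoules), from the article
BEAM_POWER = 4.3e9       # W beamed to the lightcraft, from the article

# Charging: how long the solar array needs in full sun
charge_time_s = STORED_ENERGY / SOLAR_POWER   # about 5,600 s

# Discharging: how long a full charge sustains the beam
beam_time_s = STORED_ENERGY / BEAM_POWER      # about 420 s

# Roughly seven minutes of beaming comfortably covers the
# sub-five-minute ascent described in the article.
ascent_ok = beam_time_s > 5 * 60
```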
Or the solar-power station could unload all its energy in a 54-second burst that should offer a nearly vertical 20-g boost to geostationary orbit or even to escape velocity. The first orbital solar-power station will pave the way for a whole industry of orbital stations, launched and assembled from specialized lightcraft. Within decades, a fleet of these will make feasible rapid, low-cost travel around the globe, to the moon and beyond.

LEIK N. MYRABO is associate professor of engineering physics at Rensselaer Polytechnic Institute. His research interests focus on advanced propulsion and power technology, energy conversion, hypersonic gas dynamics and directed energy.

up work on hypersonic jet engines, which had taken a back seat since the National Aerospace Plane program was canceled in November 1994. Variants on jet engines called scramjets—which breathe air like conventional jets but can operate at speeds over Mach 6—could help bring the goal of single stage to orbit within reach. Several unpiloted scramjets, designated X-43, will fly at speeds of up to Mach 10 and then crash-land in the Pacific Ocean, starting in the year 2000 [see box on page 84]. The difficulty faced by such efforts, explains NASA's Gary E. Payton, is in slowing the incoming air enough so that fuel can be burned in it for thrust without generating excess heat. In principle, it can be done with a shock wave created at the air inlet. But the process wastes a lot of energy. One potentially pathbreaking launch technology is an air-breathing engine that also operates as a rocket both when at low velocities and when the air becomes too thin to be worth taking in. At that altitude, a vehicle heading for space would most likely be traveling at about Mach 10.
Such rocket-based combined-cycle engines have yet to advance beyond tests in wind tunnels, and they have to be designed as part of the body of a craft to achieve adequate thrust. NASA recently awarded Boeing a cost-shared contract under its new Future-X program to develop an Advanced Technology Vehicle

Light Sails
by Henry M. Harris

Science-fiction dreams of worlds beyond our own solar system have taken on a more realistic aspect since astronomers discovered that the universe contains planets in unexpectedly large numbers. Studying those distant planets might show how special Earth really is and tell us more about our place in the universe. This perspective is prompting the National Aeronautics and Space Administration to turn its gaze toward the stars. Gazing is one thing, but for actual exploration the engineering reality is harsh. It would take tens of thousands of years to reach even the nearest stars with today's technologies. In 1998 I coordinated for NASA a survey of propulsion concepts that might enable an exploratory vehicle to travel to another star fast enough to accomplish its mission within 40 years, the professional lifetime of a scientist. We came up with only three that now seem plausible: fusion [see box on page 94], antimatter and beamed energy. Of these, only beamed energy is understood sufficiently to be part of any realistic near-term research program. It is easy to see why beamed energy is attractive. When you take your car on a long trip, you rely on gas stations for fuel and on mechanics to keep it running. Current spacecraft, in contrast, have to transport all the fuel they will need and must operate without human intervention. But could the engine somehow be kept on Earth, along with the fuel? Besides making in-flight repairs possible, the arrangement would make the spacecraft less massive and therefore easier to accelerate. Beamed energy might offer a way.
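The "tens of thousands of years" figure above, and the speed a 40-year mission demands, both fall out of simple arithmetic. The present-day probe speed assumed below is roughly Voyager-class and is not from the article:

```python
LY = 9.461e15             # meters per light-year
D_ALPHA_CEN = 4.37 * LY   # approximate distance to Alpha Centauri, m
YEAR = 3.156e7            # seconds per year

# Today's technology: an escape-speed probe (assumed ~17 km/s, Voyager-class)
v_today = 17e3
t_today_years = D_ALPHA_CEN / v_today / YEAR   # tens of thousands of years

# Speed needed to arrive within one 40-year professional lifetime:
v_needed = D_ALPHA_CEN / (40 * YEAR)
fraction_of_c = v_needed / 3.0e8               # roughly a tenth of lightspeed
```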
Engineering analyses suggest that the best approach for long-duration spaceflight is to shine a powerful optical laser at a large, thin "sail." This idea was first proposed by Robert L. Forward as long ago as 1984. Lasers can project energy over vast distances, and the large area of a sail allows it to receive a lot of energy in relation to its mass. Other types of beamed energy, such as microwaves, could also be used. Some investigators have even considered beaming charged particles at a spacecraft. The particles, on reaching the craft, would pass through a superconducting magnetic loop, thereby creating a Lorentz force that would provide thrust. But for now, laser light aimed at sails seems to be the most practical option. When a photon from a laser hits a sail, one of two things can happen. It can collide elastically with the electromagnetic field surrounding the atoms in the sail and be reflected. Alternatively, the photon can simply be absorbed by the sail material, a process that heats the sail a minuscule amount. Both processes impart an acceleration, but reflection imparts twice as much as absorption. Thus, the most efficient sail is a reflective one. The acceleration that a laser provides is proportional to the force it transmits to the sail and inversely proportional to the spacecraft's mass. Like other propulsion methods, then, light sails are limited in their performance by the thermal properties and the strength of materials—as well as by our ability to design low-mass structures. The sail designs that have been proposed consist of a polished, thin metal film, most with some kind of backing for structural strength. The power that can be transmitted is constrained by heating of the sail: as the metal surface gets hotter, it becomes less reflective. The temperature a sail attains can be lowered, and so its acceleration increased, by coating its reverse side with materials that efficiently radiate heat. 
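The reflection-versus-absorption argument above can be written down directly: a beam of power P carries momentum flux P/c, so an absorbing sail feels a force P/c while a perfect mirror, reversing the photons, feels 2P/c. A sketch:

```python
C = 3.0e8  # speed of light, m/s

def sail_thrust(beam_power_w: float, reflectivity: float) -> float:
    """Force on a sail: absorbed light contributes P/c, reflected light 2P/c.
    reflectivity is the fraction of incident power that is reflected."""
    return (1.0 + reflectivity) * beam_power_w / C

# A perfectly reflective sail gets exactly twice the thrust of a black one:
f_black = sail_thrust(1e9, 0.0)    # 1 GW fully absorbed
f_mirror = sail_thrust(1e9, 1.0)   # 1 GW fully reflected
```

Even a gigawatt of light yields only newtons of force, which is why sail missions depend on sustaining the push for a long time.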
To reach very high velocities, a spacecraft must sustain its acceleration. The ultimate velocity achievable by a light sail is determined by how long the Earth-bound laser can illuminate its target efficiently. Laser light has an important property known as coherence. It means that the energy it can impart is undiminished by distance, up to a critical value known as the diffraction distance. Beyond it, the power delivered quickly becomes insignificant. The diffraction distance of a laser, and thus the ultimate velocity of a spacecraft it powers, is governed by the size of the laser's aperture. Very powerful lasers would probably consist of hundreds of smaller ones ganged together in an array. The effective aperture size is roughly the diameter of the entire array. Maximum power is transferred when the array is packed as densely as possible. We have a tessellated design that approaches 100 percent packing density. At the Jet Propulsion Laboratory in Pasadena, Calif., my team has studied the trade-offs in mission cost between the power of individual lasers and the size of an array. The aperture size required for an interstellar mission is enormous. A phased laser array we have designed to send a probe in 40 years to the nearby star Alpha Centauri would be 1,000 kilometers (621 miles) in diameter. Fortunately, planetary missions require much smaller apertures. A 46-gigawatt laser illuminating a 50-meter-diameter, gold-plated sail would require only a 15-meter aperture to send a 10-kilogram payload to Mars in 10 days. This system could

90 Scientific American February 1999 The Way to Go in Space Copyright 1999 Scientific American, Inc.

that will test a variety of hypersonic flight technologies. Payton says that "if things go well" flight tests of rocket-based combined-cycle engines could occur between 2004 and 2006.

Beyond Earth

As soon as a vehicle has left the atmosphere and reached orbital velocity, around Mach 25, the engineering challenges change completely.
Large thrusts are no longer needed, because the craft is not fighting Earth's gravity and air resistance. Several new approaches are being explored, including, notably, the ion engine now flying on NASA's Deep Space 1 spacecraft. Ion engines work by accelerating charged atoms (ions) of a propellant with electrical grids charged to high voltage. As the ions leave the engine, they impart thrust. Xenon is the currently favored propellant. Power on Deep Space 1 comes from solar panels, but theoretically any means of generating electricity could be used to drive an ion engine, which can produce almost 10 times more thrust per kilogram of propellant than chemical rockets can. As a result, even though ion engines generate only a few grams of force, they can in principle operate for years nonstop, allowing a spacecraft to reach extremely high velocities. Ion engines could feasibly make long-term exploratory missions to Uranus and send a probe to the boundary between the solar wind and the interstellar medium in three to four years.

Light-sail craft can be designed to follow a beam automatically, so steering can be done from Earth. A sail might even be built incorporating a reflective outer ring that could be detached on reaching the destination. The ring would continue onward as before and reflect laser light back onto the separated central part of the sail, thus propelling it back home. A good deal of work relevant to light sails has already been done. The Department of Defense has developed high-powered lasers and precision-pointing capability as part of its research into ballistic-missile defenses and possible antisatellite weaponry. And saillike structures whose purpose is to reflect sunlight have already been tested. Russian scientists have flown a spinning 20-meter-diameter, polymer solar reflector, Znamya 2, as part of a scheme to provide extra winter illumination in northern Russian cities; a 25-meter-diameter version is scheduled for testing in February.
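The grid acceleration in the ion engine described above can be sketched from energy conservation: an ion of charge q falling through potential V leaves with kinetic energy qV, so its exhaust velocity is sqrt(2qV/m). The grid voltage below is an assumed round number for illustration, not a quoted Deep Space 1 parameter:

```python
# Electrostatic acceleration of a xenon ion through a grid potential V:
# q*V = (1/2)*m*v^2  =>  v = sqrt(2*q*V/m).
from math import sqrt

E_CHARGE = 1.602_176_634e-19          # elementary charge, C
XENON_ION_MASS = 131.29 * 1.660_539e-27  # ~131 atomic mass units, in kg

def exhaust_velocity(grid_voltage_v):
    """Exhaust speed (m/s) of a singly charged xenon ion."""
    return sqrt(2 * E_CHARGE * grid_voltage_v / XENON_ION_MASS)

# An assumed potential on the order of a kilovolt already yields tens of
# kilometers per second, roughly ten times the ~4.5 km/s exhaust of the
# best chemical engines, consistent with the efficiency claim in the text.
print(exhaust_velocity(1300.0) / 1000)  # roughly 44 km/s
```

High exhaust velocity is exactly why the thrust is tiny but the propellant economy is excellent: momentum per ion is large, but the mass flow is minuscule.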
Closer to home, the U.S. National Oceanic and Atmospheric Administration is planning to launch within four years a spacecraft powered by a solar sail. The craft would hover at an orbitally unstable location between Earth and the sun, from where it could provide about an hour's advance warning of particles emanating from solar storms. NASA is now evaluating plans to develop laser light sails as a possible low-cost alternative to conventional rockets. Missions being considered range from a demonstration of a 100-meter-diameter sail in Earth orbit to a journey through the shock wave at the edge of our planetary system. In the immediate future, laboratory tests could measure the properties of candidate laser-sail materials for missions to Mars, the Kuiper belt and the interstellar medium. A military megawatt-class chemical laser at White Sands Missile Range in New Mexico may be used to illuminate sails deployed from spacecraft so that the resulting accelerations can be verified. And planned megawatt-class lasers that can run inexpensively off the power grid could within five years be able to boost light sails between orbits. I estimate that such lasers could power scientific missions to the moon within a decade. We see in light sails a possible glimpse of the future, a vision of rapid, inexpensive access to the remote solar system and beyond. In time they could make travel to distant stars a reality.

HENRY M. HARRIS is a physicist who studies interstellar exploration at the Jet Propulsion Laboratory in Pasadena, Calif. He has also designed space shuttle and other experiments. Harris has worked as a jazz musician and has written a novel about science and spirituality.

THEORIZED LIGHT-SAIL craft (far left) driven from Earth by a laser could one day convey sensors to distant reaches of the solar system and even to other stars. The sail's reflective surface maximizes velocity. The low-mass structure might carry a light payload (near left).
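The hour-or-so storm warning mentioned above can be sanity-checked with a back-of-envelope estimate: a craft stationed sunward of Earth sees solar-wind particles before they arrive here, so the warning time is roughly the sunward offset divided by the wind speed. Both numbers below are typical values assumed for illustration, not figures from the article:

```python
# Warning time ~ (distance the craft sits sunward of Earth) / (wind speed).
SUNWARD_OFFSET_KM = 1_500_000  # on the order of the Earth-sun L1 distance
SOLAR_WIND_KM_S = 450          # a typical solar-wind bulk speed

warning_minutes = SUNWARD_OFFSET_KM / SOLAR_WIND_KM_S / 60
print(warning_minutes)  # ~56 minutes, i.e. about an hour
```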
Compact Nuclear Rockets by James R. Powell

Someday, in exploring the outer planets of our solar system, humankind will want to do more than send diminutive probes that merely fly rapidly by them. In time, we will want to send spacecraft that go into orbit around these gaseous giants, land robots on their moons and even return rock and soil samples back to Earth. Eventually, we will want to send astronauts to their intriguing moons, on at least a couple of which liquid water—the fundamental requirement for life as we know it—is believed to be abundant. For missions such as these, we will need rockets powered by nuclear fission rather than chemical combustion. Chemical rockets have served us well. But the relatively low amount of energy that they can deliver for a given mass of fuel imposes severe restrictions on spacecraft. To reach the outer planets, for example, a chemically powered space vehicle must have very limited mass and make extensive use of planetary gravitational "assists," in which the craft maneuvers close enough to a planet for the planet's gravitational field to act like a slingshot, boosting the speed of the craft. To take advantage of these assists, mission planners must wait for "windows"—short periods within which a craft can be launched toward planets appropriately positioned to speed it on its way to more distant bodies. In technical terms, chemical rockets have a low maximum velocity increment, which means that their exhaust velocities are not high enough to impart very high speeds to the rocket. The best chemical rockets, which are based on the reaction between hydrogen and oxygen, impart a maximum velocity increment of about 10 kilometers (six miles) a second to spacecraft departing from Earth orbit. Nuclear rockets, in contrast, could impart a maximum velocity increment of up to about 22 kilometers a second.
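Velocity increments like these follow from the Tsiolkovsky rocket equation, delta_v = v_e * ln(m0/m1), where v_e is the exhaust velocity and m0/m1 the ratio of full to empty mass. A sketch with ballpark exhaust velocities and an assumed mass ratio; none of these inputs are quoted from the article:

```python
# Tsiolkovsky rocket equation: delta_v = v_e * ln(m0 / m1).
from math import log

def delta_v(exhaust_velocity_km_s, mass_ratio):
    """Ideal velocity increment (km/s) for a given exhaust velocity
    and full-to-empty mass ratio m0/m1."""
    return exhaust_velocity_km_s * log(mass_ratio)

# Textbook ballpark exhaust velocities (assumptions): ~4.5 km/s for a
# hydrogen-oxygen engine, roughly twice that for a nuclear rocket
# heating hydrogen directly. Mass ratio 10 is likewise illustrative.
for label, v_e in [("chemical H2/O2", 4.5), ("nuclear thermal", 9.0)]:
    print(label, round(delta_v(v_e, 10.0), 1), "km/s at mass ratio 10")
```

With these assumed inputs the two cases come out near 10 and 21 km/s, close to the figures the text cites: doubling the exhaust velocity doubles the attainable velocity increment at the same mass ratio.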
Such a high value would make possible a direct path to, say, Saturn, reducing travel time from about seven years to as little as three. A nuclear rocket such as this would be inherently safe and environmentally benign: contrary to popular belief, a nuclear rocket need not be strongly radioactive when launched. The spacecraft, with its nuclear thrusters, would be launched as a payload atop a conventional chemical rocket. Then, once the payload was in high-Earth orbit, above about 800 kilometers, the nuclear reactor would start up. The technology required to build a rocket motor powered by nuclear fission is not far beyond current capabilities. In fact, my colleagues and I have designed a compact nuclear rocket engine, which we call Mitee (deriving the letters loosely from the words "miniature reactor engine"), that could be built in about six or seven years at a cost of $600 million to $800 million—actually quite modest in the context of space launches. In fact, the costs of developing the engine would be offset by savings in future launch costs. The reason is that nuclear spacecraft powered by the engine would not need to haul along a large mass of chemical propellant, meaning that launching it would not require a Titan IV vehicle costing $250 million to $325 million. Instead a lower-priced rocket, such as a Delta or an Atlas in the range of $50 million to $125 million, could be used.

FUEL ELEMENT would be one of 37 in a compact nuclear rocket engine. Liquid hydrogen flowing into the element would convert to a gas and flow through the nuclear fuel roll (light brown). Five of the roll's metal matrix sheet layers are shown in the detail at the left. The superheated gas would then shoot down a center channel and out the bottom of the element, providing thrust. [Diagram labels: hydrogen flow; beryllium pressure tube; metal matrix fuel region; gas flows through holes in sheets; nuclear fuel particles (inside metal matrix); lithium 7 hydride moderator.]
In our design, the reactor's nuclear fuel would be in the form of perforated metal sheets in an annular roll, in a configuration similar to a jelly roll with a hollow center [see illustration below]. A jacket of lithium 7 hydride around the outside of the fuel roll would act as a moderator, reducing the speed of the neutrons emitted by the nuclear fission occurring inside the fuel. The coolant—liquid hydrogen—would flow from the outside of the roll inward, quickly turning into a gas as it heated up and flowed toward the center. The superheated gas, at about 2,700 degrees Celsius (4,900 degrees Fahrenheit), would flow at a high velocity along a channel at the center axis of the roll and then out through a small nozzle at the end. A key attraction of nuclear propulsion is that its propellant—hydrogen—is widely available in gaseous form in the giant planets of the outer solar system and in the water ice of distant moons and planets. Thus, because the nuclear fuel would be relatively long-lasting, a nuclear-powered craft could in theory tour the outer solar system for 10 or 15 years, replenishing its hydrogen propellant as necessary. A vehicle could fly for months in the atmospheres of Jupiter, Saturn, Uranus and Neptune, gathering detailed data on their composition, weather patterns and other characteristics. Alternatively, a craft could fly to Europa, Pluto or Titan to collect rock samples and also accumulate hydrogen, by electrolyzing water from melted ice, for the trip back to Earth. Because its reactor would start up well away from Earth, a nuclear-powered spacecraft could actually be made safer than some deep-space probes that are powered by chemical thrusters. In the outer reaches of the solar system, the sun's rays are too feeble to provide energy for a spacecraft's instruments. So they generally run on plutonium 238 power sources, which are highly radioactive even during launch. In a probe with nuclear thrusters, on the other hand, the instruments would be run off the same reactor that provides thrust. Moreover, the amount of radioactive waste produced would be negligible—amounting to about a gram of fission products for a deep-space mission—and in any event the material would never come back to Earth. Nuclear rockets are not new. Among the U.S. Department of Defense's projects in this area was the Space Nuclear Thermal Propulsion program in the late 1980s. Its goal was to develop a compact, lightweight nuclear engine for defense applications, such as launching heavy payloads into high-Earth orbit. The cornerstone of the design was a particle bed reactor (PBR), in which the fuel consisted of small, packed particles of uranium carbide coated with zirconium carbide. Although the PBR work ended before a full-scale nuclear engine was built, engineers did successfully build and operate low-power reactors based on the concept and demonstrated that high-power densities could be achieved. Indeed, our Mitee engine owes much to the PBR effort, on which my colleagues and I worked for nearly a decade at Brookhaven National Laboratory. In addition to the same basic annular configuration of fuel elements, the Mitee also would use lightweight, thermally stable lithium 7 hydride as a moderator. To be conservative, however, we designed the Mitee's fuel assembly to have a power density of about 10 megawatts per liter instead of the PBR's 30. It is an easily provable fact that with only chemical rockets, our ability to explore the outer planets and their moons is meager. In the near term, only nuclear rockets could give us the kind of power, reliability and flexibility that we would need to improve dramatically our understanding of the still largely mysterious worlds at the far edges of our solar system. JAMES R.
POWELL is president of Plus Ultra Technologies in Shoreham, N.Y., which conceived and designed the Mitee reactor for space propulsion. He worked for Brookhaven National Laboratory from 1956 to 1996 and was head of its reactor systems division. The author wishes to thank his co-workers George Maise and John Paniagua for their help in the preparation of this article. Neptune that would return far more data than the simple flybys that Voyager 2 made in the 1980s, according to James S. Sovey of the NASA Lewis Research Center. Ion engines are not the only futuristic space drive being considered for solar system exploration. Hall thrusters also accelerate ions, but without grids. They employ radial magnetic fields, in part, to direct the ions, and they can deliver larger thrusts: a 50-kilowatt version has been tested, and research models are as propellant-efficient as an ion engine, according to Robert S. Jankovsky of the NASA Lewis center. The devices are attractive for now mainly for near-Earth space applications, although that could change if performance improves. The U.S. government has already flown one on a classified payload, and Teledesic, which plans to offer a broadband, global telecommunications service, will use Hall thrusters on its fleet of satellites. Photovoltaic cells are now used to power almost all satellites in near-Earth orbit. And their performance is expected to improve: NASA has developed advanced designs that incorporate myriad small lenses that focus sunlight on the photovoltaic material. Deep Space 1 is now testing this type. But solar power can be used to provide thrust more directly. The U.S. Air Force has committed $48 million to a four-year program to develop a solar-powered final rocket stage that would move satellites from low-Earth orbit to geostationary orbit at a fraction of the cost of chemical rockets. 
The Solar Orbit Transfer Vehicle uses a lightweight mirror to direct the sun's light onto a graphite block, which reaches 2,100 degrees Celsius (3,800 degrees Fahrenheit) and vaporizes stored liquid hydrogen. The expanding gas provides the thrust. An operational version would take three to eight weeks to boost a typical payload to geostationary orbit, but its light weight means that a satellite will be able to go on a smaller rocket than it would otherwise. The savings amount to tens of millions of dollars for each launch, notes deputy program manager Thomas L. Kessler of Boeing. The sun, however, can only do so much, and it is difficult to exploit solar power for journeys to planets more distant than Jupiter. The Galileo mission to Jupiter and the Cassini mission to Saturn both employed radioisotope thermal generators, which utilize the heat generated by the decay of plutonium 238 to generate modest amounts of electricity. But this technique cannot readily be scaled up to provide larger amounts.

HEAVY SPACE PLANE is being developed by Space Access in Palmdale, Calif. Approximate launch year: 2003. Approximate cost: $4 billion to $6 billion. Power source: air-breathing engines, rockets. The craft will utilize innovative ejector ramjet engines to accelerate to Mach 6, then switch to rocket engines. Separated stages will individually fly back to the launch strip.

Many space buffs believe nuclear reactors designed to operate in space could be the answer. Because operating a reactor generates some radioactive waste, proponents of space nuclear power now envisage designs that would be launched on chemical rockets in an inactive state. They would be energized only after attaining a safe distance from Earth, so they would present no threat in the event of a launch accident.
Some estimates indicate that a nuclear-powered journey to Mars might last just 100 days, about half the estimated trip time for a chemical rocket. A reactor could also be valuable to provide power to support a base on Mars, says Samuel L. Venneri, NASA's chief technologist.

Reaching for the Stars by Stephanie D. Leifer

The notion of traveling to the stars is a concept compelling enough to recur in countless cultural artifacts, from Roman poetry to 20th-century popular music. So ingrained has the concept become that when novelists, poets or lyricists write of reaching for the stars, it is instantly understood as a kind of cultural shorthand for striving for the unattainable. Although interstellar travel remains a glorious if futuristic dream, a small group of engineers and scientists is already exploring concepts and conducting experiments that may lead to technologies capable of propelling spacecraft to speeds high enough to travel far beyond the edge of our solar system. A propulsion system based on nuclear fusion could carry humans to the outer planets and could propel robotic spacecraft thousands of astronomical units into interstellar space (an astronomical unit, at 150 million kilometers, or 93 million miles, is the average distance from Earth to the sun). Such a system might be built in the next several decades. Eventually, even more powerful engines fueled by the mutual annihilation of matter and antimatter might carry spacecraft to nearby stars, the closest of which is Proxima Centauri, some 270,000 astronomical units distant. The attraction of these exotic modes of propulsion lies in the fantastic amounts of energy they could release from a given mass of fuel. A fusion-based propulsion system, for example, could in theory produce about 100 trillion joules per kilogram of fuel—an energy density that is more than 10 million times greater than the corresponding figure for the chemical rockets that propel today's spacecraft.
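These energy densities trace back to E = mc². A rough sketch, where the deuterium-tritium mass-conversion fraction and the chemical-propellant figure are standard textbook values assumed for comparison (annihilation is shown as the complete-conversion upper bound; a real engine would capture only part of it):

```python
# Energy per kilogram of fuel, from E = m * c^2.
C = 299_792_458.0  # speed of light, m/s

# Complete matter-antimatter annihilation converts all rest mass
# to energy -- an upper bound of ~9e16 J per kilogram of reactants.
antimatter_j_per_kg = 1.0 * C**2

# D-T fusion converts only a small fraction of the fuel's rest mass
# (about 0.4 percent, a standard figure assumed here).
fusion_j_per_kg = 0.004 * C**2     # ~3.6e14 J/kg, theoretical

# Chemical propellants release on the order of 1e7 J/kg (assumed).
chemical_j_per_kg = 1.3e7

print(fusion_j_per_kg / chemical_j_per_kg)  # tens of millions
```

Even this crude comparison reproduces the scale of the article's claim: fusion beats chemistry by a factor of more than ten million, and annihilation beats fusion by a further factor of a few hundred.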
Matter-antimatter reactions would be even more difficult to exploit but would be capable of generating an astounding 20 quadrillion joules from a single kilogram of fuel—enough to supply the entire energy needs of the world for about 26 minutes. In nuclear fusion, very light atoms are brought together at temperatures and pressures high enough, and for long enough, to fuse them into more massive atoms. The difference in mass between the reactants and the products of the reaction corresponds to the amount of energy released, according to Albert Einstein's famous formula E = mc2. The obstacles to exploiting fusion, much less antimatter, are daunting. Controlled fusion concepts, whether for rocket propulsion or terrestrial power generation, can be divided into two general classes. These categories indicate the technique used to confine the extremely hot, electrically charged gas, called a plasma, within which fusion occurs. In magnetic confinement fusion, strong magnetic fields contain the plasma. Inertial confinement fusion, on the other hand, relies on laser or ion beams to heat and compress a tiny pellet of fusion fuel. In November 1997 researchers exploiting the magnetic confinement approach created a fusion reaction that produced 65 percent as much energy as was fed into it to initiate the reaction. This milestone was achieved in England at the Joint European Torus, a tokamak facility—a doughnut-shaped vessel in which the plasma is magnetically confined. A commercial fusion reactor would have to produce far more energy than went into it to start or maintain the reaction. But even if commercial fusion power becomes a reality here on Earth, there will be several problems unique to developing fusion rockets. A key one will be directing the energetic charged particles created by the reaction to produce usable thrust. 
Other important challenges include acquiring and storing enough fusion fuel and maximizing the amount of power produced in relation to the mass of the spacecraft. Since the late 1950s, scientists have proposed dozens of fusion rocket concepts. Although fusion produces enormous amounts of very energetic particles, the reaction will accelerate a spacecraft only if these particles can be directed so as to produce thrust. In fusion systems based on magnetic confinement, the strategy would be to feed in fuel to sustain the reaction while allowing a portion of the plasma to escape to generate thrust. Because the plasma would destroy any material vessel it touched, strong magnetic fields, generated by an assembly that researchers call a magnetic nozzle, would direct the charged particles out of the rocket. In an engine based on the inertial confinement approach, high-power lasers or ion beams would ignite tiny fusion fuel capsules at a rate of perhaps 30 per second. A magnetic nozzle might also suffice to direct the plasma out of the engine to create thrust. The particles created in a fusion reaction depend on the fuels used. The easiest reaction to initiate is between deuterium and tritium, two heavy isotopes of hydrogen whose atomic nuclei include one and two neutrons, respectively, besides a proton. The reaction products are neutrons and helium nuclei (also known as alpha particles). For thrust, the positively charged alpha particles are desirable, whereas the neutrons are not. Neutrons cannot be directed; they carry no charge. Their kinetic energy can be harnessed for propulsion, but not directly—to do so would involve stopping them

Reactors could be used for propulsion in various ways. One that generates thrust directly and operates for a short intense burst is described by James R. Powell on page 92.
Such a design might make it possible to return rock samples to Earth from Pluto, Powell maintains. But there are other possibilities. A reactor could be designed to generate heat over long periods. Several different schemes then would be available to convert the heat to electricity to power ion drives, Hall thrusters or a new type of electric propulsion in early development known as a magnetoplasmadynamic thruster. "You can mix and match different reactor and thrust concepts," observes Gary L. Bennett, NASA's former manager of advanced

in a material and making use of the heat generated by their capture. Neutron radiation also poses a danger to a human crew and would necessitate a large amount of shielding for piloted missions. These facts lead to a key difficulty in fusion fuel selection. Although it is easiest to initiate fusion between deuterium and tritium, for many propulsion concepts it would be more desirable to use deuterium and the isotope helium 3 (two protons, one neutron). Fusion of these nuclei produces an alpha particle and a proton, both of which can be manipulated by magnetic fields. The problem is that helium 3 is exceedingly rare on Earth. In addition, the deuterium-helium 3 reaction is more difficult to ignite than the deuterium-tritium reaction. But regardless of the fusion fuel selected, a spacecraft of thousands of tons—much of it fuel—would be necessary to carry humans to the outer reaches of the solar system or deep into interstellar space (for comparison, the International Space Station will have a mass of about 500 tons). Even individually, the key obstacles to fusion propulsion—getting higher levels of power out of a controlled reaction, building effective containment devices and magnetic nozzles, and finding enough fuel—seem overwhelming. Still, for each of them, there is at least a glimmer of a future solution.
In the first place, there is every reason to believe that fusion reactors will go far beyond the break-even point, at which a reactor produces as much energy as is fed into it. Inertial confinement work in the U.S. is enjoying robust funding as part of the stockpile stewardship program, in which researchers are working on methods of assuring the safety and reliability of thermonuclear weapons without actually test-firing them. The research is centered at the National Ignition Facility, now under construction at Lawrence Livermore National Laboratory. The facility is expected to start up in 2001, with full laser energy of 1.8 million joules, delivered in four billionths of a second, available in 2003. With that kind of power, researchers anticipate liberating up to 10 times the energy required to initiate the reaction. There are indications, too, that the tokamak, which has dominated magnetic confinement research, may someday be supplanted by more compact technologies more amenable to rocket propulsion. In 1996 the Fusion Energy Sciences Advisory Committee of the U.S. Department of Energy endorsed investigation of such promising magnetic confinement schemes as reverse-field pinches, the field-reversed configuration and the spherical tokamak. In the meantime, workers have begun preliminary work on magnetic nozzles. The largest research effort at present is a collaboration among the National Aeronautics and Space Administration, Ohio State University and Los Alamos National Laboratory. Researchers from the three organizations are using extremely high electric currents to create a plasma, which in the experiments stands in for a fusion plasma, and to study its interactions with a magnetic field. Even the fusion fuel problem may be tractable. Although there is very little helium 3 on Earth, there are larger quantities of it in the lunar soil and in Jupiter's atmosphere as well.
Also, other elements found on Earth, such as boron, may figure in alternative fusion reactions that are difficult to ignite but that yield alpha particles. For all the promise of fusion propulsion, there is one known physical phenomenon—matter-antimatter annihilation—that releases far more energy for a given mass of reactants. A space propulsion system based on this principle would exploit the mutual annihilation of protons and antiprotons. This annihilation results in a succession of reactions. The first of these is the production of pions—short-lived particles, some of which may be manipulated by magnetic fields to produce thrust. The pions resulting from matter-antimatter annihilation move at speeds close to that of light. Here again, though, one of the key problems is scarcity: the number of antiprotons produced at high-energy particle accelerators all over the world adds up to only a few tens of nanograms a year. To carry humans on a rendezvous mission to the nearest star, Proxima Centauri, a matter-antimatter drive system would need tons of antiprotons. Trapping, storing and manipulating antiprotons present other major challenges because the particles annihilate on contact with ordinary protons. Nevertheless, it may be possible to exploit, albeit to a lesser extent, antimatter's high energy content while requiring much smaller numbers of antiprotons—amounts that are most likely to be available in the next decade. Such a system would use antiprotons to trigger inertial confinement fusion. The antiprotons would penetrate the nuclei of heavy atoms, annihilating with protons and causing the heavy nuclei to fission. The energetic fission fragments would heat the fusion fuel, initiating the fusion reaction. The first steps toward determining the feasibility of such a propulsion system are already being taken under NASA sponsorship. 
One research activity is the design and construction, at Pennsylvania State University, of a device in which antiprotons could be trapped and transported. At this very early stage, the challenges to building fusion—let alone antimatter—propulsion systems may seem insurmountable. Yet humankind has achieved the seemingly impossible in the past. The Apollo program and the Manhattan Project, among other large undertakings, demonstrated what can be accomplished when focused, concerted efforts and plenty of capital are brought to bear. With fusion and antimatter propulsion, the stakes could not be higher. For these will be the technologies with which humanity will finally and truly reach for the stars.

STEPHANIE D. LEIFER is manager of advanced propulsion concepts in the Advanced Propulsion Technology Group at the Jet Propulsion Laboratory in Pasadena, Calif. At JPL she has also studied solar sails and electric and micropropulsion systems.

HUMAN-PILOTED interstellar spaceship would have a rotating structure in front, to simulate gravity in four compartments.

space propulsion systems. Yet strong public distaste for anything nuclear means that space reactors face enormous political obstacles, and NASA's effort in that area is now dormant.

Beam Me Up

Whether space nuclear power is eventually developed or not, inventive engineers and scientists are optimistic about the prospects for further solar system exploration. Ivan Bekey, a former top NASA official and now a consultant, believes that a sustained effort could reduce launch costs from $20,000 a kilogram to as low as $2 a kilogram over the next 40 years. Fully reusable single-stage-to-orbit launchers should achieve the first factor of 10 within a decade, he predicts. Engines that combine hypersonic technology and rocket propulsion, together with new high-energy propellants, should achieve another factor of 10.
(Reusable single-stage-to-orbit vehicles that could each fly 1,000 flights a year would be another way of bringing launch costs down to $200 per kilogram, Bekey estimates.) Bekey is impressed, too, with the potential of magnetically levitated catapults, devices that would suspend a rocket craft above a track like a maglev train. The track would have an upward curve at one end—built, perhaps, on the side of a mountain. The rocket-powered vehicle would accelerate along the track and leave it skyward at a 30- to 40-degree angle and about the speed of sound. Beyond 20 years from now, Bekey envisages microwave-powered vehicles like the designs described by Leik N. Myrabo of Rensselaer Polytechnic Institute [see box on page 88]. These craft would create thrust by means of what are termed magnetohydrodynamic forces, which arise when a conductive fluid or gas moves through crossed electric and magnetic fields. The engineering obstacles are substantial—but many of those who have examined the principle believe it could be made to work. Because beamed energy means that neither oxidizer nor fuel has to be carried out of Earth's gravitational potential well, laser- or microwave-driven craft should reduce launch costs to $20 a kilogram, Bekey asserts.

"WORLD'S FIRST FULLY REUSABLE LAUNCH VEHICLE" is how Kistler Aerospace in Kirkland, Wash., describes its K-1 rocket, scheduled to fly late this year. The two-stage rocket utilizes Russian-built engines that run on kerosene and liquid oxygen. The separated stages return to Earth by parachute.

Myrabo and others believe beamed-energy craft could be supported by a network of orbital solar-power stations. In principle, power stations in space have many advantages: for the part of their orbit when they are illuminated by the sun, they are assured of receiving plenty of photons.
NASA, spurred by an enthusiastic Dana Rohrabacher, the California representative who chairs the House subcommittee on space and aeronautics, is studying the idea for supplying power to users on the ground. But Venneri says that "in the past the economics have not been there" to support that application. Using inflatable structures in low-Earth orbit could bring costs down somewhat, he adds. Orbital solar-power stations, which could resemble the alien saucers in the movie Independence Day, might, however, make more economic sense if their energy were used by craft in transit through Earth's atmospheric veil. That, at any rate, is Myrabo's contention. Space enthusiasts are also gung-ho about the potential of tethers, long connecting cables that in orbit acquire astonishing properties, nearly qualifying them as a means of propulsion. Their bizarre behavior arises because, to stay in orbit, objects farther from Earth's center must maintain a slightly slower horizontal velocity than closer objects. As a result, when objects at different altitudes are connected by a tether more than a few hundred meters long, powerful forces keep it in tension. Other physical principles, notably the conservation of angular momentum, can then operate on the tethered bodies. The upshot, via some counterintuitive mechanics, is that a tether can be used like a giant slingshot to transfer momentum efficiently between payloads and so quickly propel satellites between orbits. Electrically conducting versions can even be used to generate electricity or contribute lift [see box on page 86]. Yet predicting and controlling the dynamics of large, multibody systems in orbit remains a difficult challenge, Venneri cautions. Tethers even open up the startling possibility of connecting the whole Earth to a satellite in geostationary orbit by a fixed line attached at a point on the equator.
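The velocity gradient that keeps an orbital tether taut is easy to verify with a short calculation. A sketch in Python (not from the article; the 100-kilometer tether length and the altitudes are illustrative):

```python
import math

MU = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2


def circular_orbit_speed(radius_m):
    """Speed of a free body in a circular orbit: v = sqrt(mu / r)."""
    return math.sqrt(MU / radius_m)


R_EARTH = 6.371e6  # mean Earth radius, m

# Two ends of a hypothetical 100-km vertical tether in low Earth orbit.
v_lower = circular_orbit_speed(R_EARTH + 400e3)
v_upper = circular_orbit_speed(R_EARTH + 500e3)

print(f"lower end, free-orbit speed: {v_lower:7.1f} m/s")
print(f"upper end, free-orbit speed: {v_upper:7.1f} m/s")
# The upper body "wants" to move tens of meters per second slower; forcing
# both ends to sweep around Earth together puts the connecting cable in
# tension, which is what makes tether momentum transfer possible.
assert v_lower > v_upper
```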
Climbing devices could then ascend the tether to reach any desired altitude, up to 36,000 kilometers, for very little energy. Such a tether could not be built today, because the forces it would experience mean it would have to be made from a material far stronger for its weight than Kevlar, the polymer used for some small-scale tethers. But Bekey points out that buckytubes, microscopic fibers made of carbon atoms assembled into tubes just a few nanometers in diameter, might fit the bill. "When we learn how to grow them into long ropes and work them and tie them, we'll be able to make a tether 600 times stronger than with current materials," he predicts, with airy confidence. That would be more than strong enough. A geostationary tether system could reduce launch costs to $2 a kilogram, Bekey insists. As if such schemes were not ambitious enough, long-term thinkers are even now studying concepts that might one day allow humans to send a spacecraft to another star. The most promising approach at present seems to be light sails [see box on page 90]. Such devices might well also be employed to move cargo around the solar system. Tapping the huge theoretical power of fusion to propel spacecraft has its devotees, too.

[Illustration caption: ION PROPULSION SYSTEM. Launch year: 1998. Approximate cost: $150 million. Power source: photovoltaics. ION ENGINE is flying now on the Deep Space 1 spacecraft, which is scheduled to visit a comet. The system uses solar panels to generate electric fields that accelerate charged atoms of xenon. The engine can operate for weeks at a time and so reach high velocities.]
Although controlled production of useful energy from fusion has not yet been demonstrated even on Earth, hope springs eternal, and a fusion reactor in space would be able to provide enough energy to reach any solar system destination with ease [see box on page 94]. Other notions for propulsion technologies are even more far-out and have been floated as possible means for making interstellar journeys: quantum teleportation, wormholes in space and the elimination of momentum. These mind-boggling ideas seem to require entirely new understandings of physics to be put into practice; the steps for making them feasible cannot even be listed today. Even so, serious investigators continue to look for ways to turn each of these concepts into reality. If they work, they will radically change our ideas about the universe. And who is to say that any of them will prove forever impossible?

Further reading for this article is available at www.sciam.com/1999/0299issue/0299beardsleyboxl.html on the World Wide Web.

THE AMATEUR SCIENTIST by Shawn Carlson

Tackling the Triple Point

One of the horrible truths of scientific research is that simple and inexpensive techniques will get you just so far. Beyond some point, increasing accuracy can be obtained only with a disproportionate rise in expense, sweat and frustration. That's partly because accurate measurements require an extremely well calibrated instrument, and providing such an exact scale can be a vexing challenge. Consider thermometers. You might think they would be easy to calibrate: just determine what they read at two known temperatures, like the boiling and freezing points of water. But it's not so simple. These temperatures cannot be reproduced accurately, because they depend on factors that are difficult to control, like atmospheric pressure. For precise work, researchers must resort to more sophisticated techniques.
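The pressure sensitivity of the boiling point is easy to quantify with the Clausius–Clapeyron relation. A sketch in Python (not from the column; the heat of vaporization is a standard textbook value):

```python
import math

R = 8.314         # gas constant, J/(mol*K)
H_VAP = 40.7e3    # enthalpy of vaporization of water, J/mol (approximate)
P1, T1 = 101325.0, 373.15  # water boils at 100 C at one standard atmosphere


def boiling_point(p_pa):
    """Clausius-Clapeyron estimate of water's boiling point at pressure p:
    1/T2 = 1/T1 + (R / H_vap) * ln(P1 / p)."""
    return 1.0 / (1.0 / T1 + (R / H_VAP) * math.log(P1 / p_pa))


# An ordinary barometric swing of a few kilopascals shifts the boiling
# point by about a degree, which is why boiling water makes a poor
# calibration reference.
print(f"boiling point at 101.325 kPa: {boiling_point(101325) - 273.15:.2f} C")
print(f"boiling point at  98.000 kPa: {boiling_point(98e3) - 273.15:.2f} C")
```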
One method is based on a wonderfully repeatable property of water: the unique temperature, called the triple point, at which water can exist with its solid, liquid and gas phases all in equilibrium. To reproduce this temperature, defined to be exactly 0.01 degree Celsius, researchers rely on a special Pyrex flask filled with ultrapure water, evacuated with a vacuum pump and then hermetically sealed with a blowtorch. At $1,000 apiece, such "triple-point cells" are beyond the budgets of most home laboratories. But that is about to change, thanks to George Schmermund, a gifted amateur scientist in Vista, Calif. His device remains within about 0.0001 degree C of the triple point for days and costs less than $50 to build. The cell is simple to construct. Start with a Pyrex straight-walled flask about five centimeters (two inches) in diameter and at least 17 centimeters (seven inches) long. Schmermund hires a glassblower to thicken and angle the opening slightly for a snug fit between the flask and a large rubber stopper. Without these modifications the lip can shatter explosively. As a precaution, wrap the top two centimeters of the flask with electrical tape. Drill a hole in the stopper and insert a long Pyrex test tube so that it reaches to within two centimeters of the bottom of the flask. Then hermetically seal the joint with silicone cement.

[Illustration caption: THERMOMETER CALIBRATION can be performed with a triple-point cell, which settles at 0.01 degree Celsius: the unique temperature at which water can exist in its solid, liquid and gas phases all in equilibrium. Note that a portion of the crushed ice, which should cover the cell, has been removed to provide a better view of the apparatus.]
To ensure a tight fit between the stopper and the flask, spread a thin film of silicone vacuum grease uniformly around the bottom two thirds of the stopper. Although professional units contain ultrapure, triple-distilled water, Schmermund has discovered that ordinary distilled water from a grocery store works just fine. Fill the flask until the water comes to about five centimeters below the stopper when assembled. Next, you must remove air from the chamber atmosphere as well as any gases dissolved in the water. Schmermund eliminates the need for a vacuum pump by simply boiling the water—the expanding steam will force out the air molecules. First, though, to prevent the water from boiling too violently, shatter a clean test tube inside a towel and drop a few shards into the flask to act as nucleation sites for the forming bubbles. Then secure the cell in a ring stand and gently rest the stopper on top of the flask to allow the steam to escape. Heat the flask's bottom with a propane torch until the water boils gently. Dissolved gases in the flask will form visible bubbles on the inner test tube. Keep the water boiling until the convection currents have swept them away and until you no longer see any condensation inside at the top. The condensation will disappear when the internal atmosphere has been completely replaced by hot steam. Before removing the flame, be sure to protect your hands and arms by wearing long sleeves and a pair of the hot-water gloves used by professional dishwashers, and hold a towel against the flask. Then remove the flame and quickly press the stopper down to form a vacuum-tight seal. If you immerse the hot, tightly sealed cell in a cool bath, the water inside the flask will boil again. This delightful effect occurs as the water vapor within the cell condenses, lowering the internal pressure, which in turn lowers the boiling temperature. When the cell cools completely, you should test the quality of the vacuum by giving the cell a gentle vertical shake.
(Be careful, because a vigorous jolt could shatter the glass.) You should hear a sharp "snap" caused by the so-called water-hammer effect: the water, uncushioned by air, will slam full-force into the glass. If you don't hear the sound, regenerate the vacuum. To reach the triple point, first chill the cell overnight in a refrigerator. Next, you'll need to form a thick ice mantle around the inner test tube. Professionals usually pour a frigid mixture of dry ice and alcohol into the inner well, but Schmermund gets fantastic results with liquid nitrogen, which is much colder. You'll find both refrigerants at your local welder's supply store. Before you add the coolant, dry the inner surface of the test tube thoroughly, because the glass could crack if ice forms inside the well. Keep in mind that refraction will make the ice mantle appear to grow faster than it actually does. When the mantle looks like it is nearly touching the flask, dump out the remaining refrigerant. Using liquid nitrogen entails a complication: the ice mantle will form fastest at the bottom, where it is in contact with the nitrogen for the longest time. To make the mantle more even, Schmermund periodically lets all the nitrogen boil away and then drops in progressively longer wooden dowels. Additional nitrogen boils energetically around the dowel, and the expanding gases tend to keep the coolant above the dowel's top. Separate the mantle from the test tube by filling the well with distilled water and 10 percent isopropyl alcohol to melt the mantle's inner surface. Don't be alarmed, though, when the ice cracks violently. If the mantle stays put when you rotate the flask, the ice is no longer stuck to the glass.

[Illustration caption: TRIPLE-POINT CELL is a vacuum-tight flask containing water vapor, liquid water and an ice mantle that has formed around an inner test tube. Labeled parts: test tube, rubber stopper, electrical tape, ice mantle, flask.]
When that happens, pour off enough of the water-alcohol mixture so that its level is two centimeters below the top of the ice. Last, place the cell inside an insulated drink container filled with crushed ice and water. Because the ice mantle is buoyant, it will press up against the bottom of the inner test tube, making this spot slightly colder than the triple point. A cutting from a pencil eraser makes an ideal spacer. Rest the thermometer on top of the eraser cutting inside the well. In about an hour, the thermometer will settle on the triple point. To determine temperature, the best thermometers work by measuring the resistance across a thin platinum wire. Because the change in resistance caused by a given temperature difference is well known for platinum, the triple point is all you'll need to calibrate the instrument. Sadly, such thermometers are very expensive. But Schmermund has an answer for that too, as you'll see in my next column.

For more information about this and other amateur projects from "The Amateur Scientist," check the on-line discussion at web2.thesphere.com/SAS/WebX.cgi on the World Wide Web. As a service to the amateur community, the Society for Amateur Scientists is making the Schmermund triple-point cell available for $75. Send a check to the society at 4735 Clairemont Square, Suite 179, San Diego, CA 92117, or call the society at 619-239-8807.

MATHEMATICAL RECREATIONS by Ian Stewart

Origami Tessellations

The art of origami, or paper folding, has many mathematical aspects. This month I want to focus on a curious circle of ideas connecting paper folding, tiling patterns and engineering. This topic was brought to my attention by engineer Tibor Tarnai of the Technical University of Budapest in his article "Folding of Uniform Plane Tessellations" in the conference proceedings Origami Science & Art, edited by Koryo Miura et al.
(Otsu, Shiga, Japan, 1994). One of the phenomena that engineers spend much effort understanding is buckling. Any structure that is subjected to excessive force will either break or buckle. Buckling patterns are especially interesting when the object is a relatively thin shell of metal. Such structures possess considerable strength but use less material than solid ones do, saving both cost and weight. Perhaps the most commonplace metal shell is the aluminum soft-drink can, a masterpiece of high-precision mass production. If a metal cylinder is compressed along its length, it remains cylindrical until the force reaches a critical value, the buckling load. Then the cylinder suddenly deforms into a mess. In careful laboratory experiments, however, you can restrict the amount of movement—say, by fitting a slightly smaller solid cylinder inside the can or a tough glass cylinder outside it, leaving a small gap. In this way, you can observe the pattern of buckling when it first begins. Indeed, for a cylindrical metal shell this primary buckling mode is a beautiful, symmetric pattern of diamond-shaped dimples. This pattern is very close to one that can be made by folding a sheet of paper into triangles and rolling it into a cylinder. Note that paper buckles and crumples in much the same manner as a thin metal sheet does. The primary buckling mode is familiar to engineers as the Yoshimura pattern [see illustration at left]. Small regions of this pattern can be joined to a rounded cylinder, giving an excellent approximation of "local" buckling, where the cylinder first fails at some slightly weaker part. The Yoshimura pattern is made by starting with a tiling of the plane, called a tessellation, by isosceles triangles.

[Illustration caption: YOSHIMURA PATTERN corresponds to the primary mode of buckling of a cylinder.]

[Illustration caption: THE EIGHT TYPES of semiregular tessellations.]
Tarnai wondered whether other tessellations of the plane can be folded in a similar manner. It has been known since ancient times that there are precisely three regular uniform tessellations. "Regular" means that all tiles are identical and that each is a regular polygon; "uniform" means that the arrangement of tiles is the same at every vertex. These are the tessellations by equilateral triangles, squares and a honeycomb of hexagons. Swiss mathematician Ludwig Schläfli showed in the 1850s that there are precisely eight further uniform "semiregular" tessellations, in which all tiles are regular polygons but not necessarily identical. Such a tessellation is conveniently denoted by its Schläfli symbol, which lists the number of sides of each tile in order around a vertex. For example, the honeycomb has the Schläfli symbol (6³), meaning that there are three hexagons at each vertex. The other two regular uniform tessellations have Schläfli symbols (3⁶) and (4⁴). The semiregular uniform tessellations have Schläfli symbols (3⁴.6), (3³.4²), (3².4.3.4), (3.4.6.4), (3.6.3.6), (3.12²), (4.6.12) and (4.8²). To elucidate, tessellation (3.4.6.4) has an equilateral triangle (3), then a square (4), then a hexagon (6), then another square (4) at each vertex. The tessellation for the Yoshimura pattern employs a tile that is not a regular polygon, so it is not included in this list, but it is similar to (3⁶). Which—if any—of these patterns can be folded along edges of the polygons, keeping the polygonal faces themselves perfectly flat? Well, you can certainly fold the "bathroom tile" pattern (4⁴) by creasing the paper along just horizontal lines or just vertical ones. But you can't crease it along a horizontal line and a vertical one, because the tiles are forced to bend out of their planes where the two creases meet, so these folding patterns are not terribly interesting.
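The Schläfli symbols can be sanity-checked in a few lines: a flat tessellation requires the interior angles of the polygons meeting at a vertex to sum to exactly 360 degrees. A sketch (not from the column) that verifies all eleven uniform tessellations:

```python
from fractions import Fraction


def interior_angle(n):
    """Interior angle of a regular n-gon, in degrees (exact)."""
    return Fraction(180 * (n - 2), n)


# Vertex configurations of the three regular and eight semiregular
# uniform tessellations, with the superscripts written out.
tessellations = {
    "3^6": [3] * 6,
    "4^4": [4] * 4,
    "6^3": [6] * 3,
    "3^4.6": [3, 3, 3, 3, 6],
    "3^3.4^2": [3, 3, 3, 4, 4],
    "3^2.4.3.4": [3, 3, 4, 3, 4],
    "3.4.6.4": [3, 4, 6, 4],
    "3.6.3.6": [3, 6, 3, 6],
    "3.12^2": [3, 12, 12],
    "4.6.12": [4, 6, 12],
    "4.8^2": [4, 8, 8],
}

for symbol, faces in tessellations.items():
    total = sum(interior_angle(n) for n in faces)
    assert total == 360, symbol
    print(f"{symbol:10s} -> vertex angles sum to {total} degrees")
```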
In 1989 Koryo Miura proved that no tessellation in which three edges meet at a vertex can be folded, which rules out (6³), (3.12²), (4.6.12) and (4.8²). It is easy to see that tessellations (3⁴.6) and (3.4.6.4) also cannot be folded. Straight lines run along tessellation (3.6.3.6), just as they do for (4⁴), and it can be folded along those lines, but again the results are not interesting. This leaves only (3⁶), (3³.4²) and (3².4.3.4). Not only can these be folded, but they can be wrapped around a cylinder, like the Yoshimura pattern. So they are potentially interesting as engineering models. In fact, these three tessellations can be folded in many different ways. The illustration below shows four ways to fold (3².4.3.4), with solid lines indicating "mountain" folds and dotted lines "valley" folds. That is, viewed from one side of the paper, these two kinds of fold are made in opposite directions. Above are three of the resulting buckled cylinders. The pattern of fold lines in each case repeats over the whole plane like a lattice. The shaded parallelograms in the illustration below indicate a unit cell, which if repeated over the whole plane determines all the directions of folding. The smallest possible unit cell (red) contains two square tiles and four triangular ones. One of these squares is broken into two pieces that join to form a square if opposite edges of the unit cell are wrapped around to be conceptually adjacent to each other. Tarnai conjectures that this is the only possible folding pattern in which the unit cell contains two squares. The second unit cell (green) contains four squares and is also conjectured to be the unique folding with that property. The third unit cell (blue) contains six squares; foldings with this property are definitely not unique, and you might like to try to find another one.

[Illustration caption: BUCKLED CYLINDERS correspond to red, green and orange foldings.]
The last unit cell (orange) contains eight squares, and again such foldings are not unique. You can have fun finding foldings for (3⁶) and (3³.4²). As with the Yoshimura pattern, some of these foldings resemble buckling patterns found in experiments with real cylinders. Moreover, the buckling can be modeled on a computer by pretending that the flat tiles in the tessellation are hinged together by some kind of springy material. The results are especially useful in the study of box columns, hollow girders with square cross sections, which are common in everyday buildings. It is fascinating to see how a tiny bit of math can unite an ancient art and a modern, practical science.

[Illustration caption: FOLDING DIAGRAMS of tessellation (3².4.3.4) are shown, with two (red), four (green), six (blue) or eight (orange) squares in a unit cell.]

FEEDBACK

Michele A. Vaccaro of Rome, Italy, raised an important issue concerning the Bellows Conjecture [July 1998]. The article stated that there is an algebraic formula relating the volume of a polyhedron to the lengths of its edges, which in turn implies that the volume cannot change when the polyhedron is continuously deformed—so there is no polyhedral bellows with perfectly rigid faces. When making Klaus Steffen's flexible polyhedron out of cardboard, however, Vaccaro inadvertently displaced a few of the valley and mountain folds and by good fortune obtained another polyhedron. Because it is made from exactly the same faces as Steffen's polyhedron, it has the same edge lengths. But Vaccaro could see at once that it had a larger volume. Nevertheless, the volume formula is not wrong. Imagine a cube with a shallow pyramid built on one face. The pyramid can be built outward like a roof or inward like a dimple. Clearly, the volumes of these two solids are different—yet they have the same edge lengths. Given the edge lengths, the volume formula allows you to solve a polynomial equation to find the volume.
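The cube-with-pyramid example can be made concrete with a few lines of arithmetic. A sketch (the unit cube and the pyramid height are illustrative choices, not from the column):

```python
# A unit cube with a shallow square pyramid of height h erected on one
# face. Built outward (a roof) or inward (a dimple), the solid has exactly
# the same edge lengths, yet two different volumes -- corresponding to two
# distinct roots of the volume polynomial described in the column.
h = 0.2
cube = 1.0
pyramid = h / 3.0  # volume of a pyramid: base area (1) times height, over 3

roof_volume = cube + pyramid    # pyramid built outward
dimple_volume = cube - pyramid  # the same pyramid pushed inward

print(f"roof:   {roof_volume:.4f}")
print(f"dimple: {dimple_volume:.4f}")
assert roof_volume != dimple_volume  # same edges, different volumes
```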
Polynomial equations in general have several distinct solutions, only one of which is the correct volume for a given polyhedron. Another polyhedron might have the same edge lengths but a different volume—which is one of the other solutions. None of this affects the nonexistence of a mathematical bellows, because a continuous change in the polyhedron cannot switch its volume from one solution of the equation to a different one. The solutions are finite in number, so between any two of them are numbers that are not solutions. So the volume can jump—say, by changing the roof of that cube into a dimple—but it can't change gradually. —I.S.

REVIEWS AND COMMENTARIES

THE NARRATIVE OF NUMBERS Review by Simon Singh

Once upon a Number: The Hidden Mathematical Logic of Stories BY JOHN ALLEN PAULOS Basic Books, New York, 1998 ($23)

Over the past few years, books about mathematics have become fashionable. For example, a biography of John Nash, a couple of biographies of Paul Erdős and two histories of Fermat's Last Theorem have all become top sellers. Storytelling has played a major role in the success of all these books, but in each case the stories relate to the mathematicians rather than the mathematics. In Once upon a Number, John Allen Paulos, professor of mathematics at Temple University and author of Innumeracy and A Mathematician Reads the Newspaper, reverses the situation. He is not interested in stories that involve mathematicians but instead focuses on stories that revolve around mathematics. These stories provide an ideal environment for nonmathematicians to encounter mathematical ideas and examine them in comfort, without the fear usually associated with the subject. An example of one of Paulos's stories concerns the trial of O. J. Simpson, a surprisingly rich source of mathematical anecdotes.
During the trial, Alan Dershowitz, Simpson's attorney, repeatedly declared that of all the women abused by their mates, fewer than one in 1,000 are killed by them, and hence the spousal abuse in the Simpsons' marriage was irrelevant to the case. At first sight, his reasoning might seem to make sense. Paulos points out, however, that Dershowitz's argument is nothing more than a non sequitur hidden within a wonderfully sneaky story. If Nicole Simpson were still alive, then it would be fair to say that it would be unlikely that in the future she would be killed by her abuser. But we know that Nicole Simpson is dead, and the more relevant fact is that 80 percent of women in abusive relationships who are murdered are killed by their partners. One of the great lessons of Paulos's book is to be wary of who is telling the story, which facts they include and, more important, which facts they exclude. In another tale, Paulos shows that one way to motivate our problem-solving skills is to couch a mathematical riddle in terms of a story that arouses one of our primeval instincts. To demonstrate this, he asks us to imagine a deck of cards such that each card has a letter on one side and a number on the other. Four cards are placed on the table so that we can see the sequence D, F, 3, 2. The question is this: Which two cards must you turn over to demonstrate that if a card has a D on one side, it has a 3 on the other? Most people will turn over cards D and 3, but in fact, you should turn over D and 2. The question is not difficult, and yet instinct misleads most people. Consider the following problem. A bouncer at a bar must throw out underage drinkers. There are four people at the bar, and he knows they are a beer drinker, a cola drinker, a 28-year-old and a 16-year-old. Which two should he interrogate further? In contrast to the first problem, which is essentially identical, most people are correct in identifying the beer drinker and the 16-year-old.
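The card puzzle can be settled by brute force: a card must be turned over exactly when some possible hidden face could falsify the rule. A sketch (not from the book):

```python
# Visible faces of the four cards. Each card has a letter on one side and
# a digit on the other, so a letter card hides a digit and vice versa.
visible = ["D", "F", "3", "2"]
letters = ["D", "F"]
digits = ["3", "2"]


def must_flip(face):
    """True if some hidden face would falsify the rule
    'if a card shows D on one side, it shows 3 on the other'."""
    hidden_options = digits if face in letters else letters
    for hidden in hidden_options:
        pair = {face, hidden}
        if "D" in pair and "3" not in pair:
            return True  # this card could hide a counterexample
    return False


print([card for card in visible if must_flip(card)])  # -> ['D', '2']
```

As the review says, F tells us nothing (the rule does not mention it) and 3 cannot falsify the rule either, since the rule never claims that a 3 must have a D behind it.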
Paulos points to research in evolutionary psychology that suggests our brains have evolved to spot cheats, and hence a mathematical problem that exploits this talent is easier for us to deal with than an abstract version of the same problem. Popularizers of mathematics often rely on a standard collection of tried and trusted tales to illustrate particular topics painlessly, and anyone who regularly reads books on the subject will have had the experience of encountering the same old stories again and again. These stories are often so delightful that we do not mind being reminded of them, but one of Paulos's great strengths is his ability to invent new stories or at least add new twists to old ones.

Making Stories Count

The traditional story (a version of which readers may recall from Ian Stewart's "Mathematical Recreations" column in August 1998) concerns two students with mud on their foreheads. Each sees the other's smudge but is unaware of his own. Their professor enters the room and states that at least one of them has a smudge on his forehead. This is something that they already know, but the result of this apparently redundant information is that both students, after hesitating for a short while, simultaneously wipe the smudges from their foreheads. The first student reasons that if his forehead were clean, then the second student would see this and immediately realize that the smudge must be on his own forehead. Because he does not see an instant reaction, the first student knows that he must have a smudge and wipes his forehead. The second student goes through the same thought process. The problem can be extended to several muddy students, and as long as the professor says that at least one of them has a smudge, then they all realize, after a pause proportional to the number of students, that they all have a smudge.
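The induction behind the smudge puzzle can be simulated under a simple model: a student who sees m smudges knows the total is m or m + 1, and continued silence through round m tells him it must be m + 1. A sketch (my own toy model, not from the book):

```python
def simulate(num_students, muddy):
    """Return (round on which the muddy students act, who acts), after a
    public announcement that at least one forehead is muddy.

    Each muddy student reasons: if only the `seen` foreheads I can see
    were muddy, those students would have acted by round `seen`; silence
    past that round proves my own forehead is muddy too."""
    assert muddy, "the announcement requires at least one smudge"
    acted = set()
    round_no = 0
    while not acted:
        round_no += 1
        for i in range(num_students):
            if i not in muddy:
                continue
            seen = len(muddy) - 1  # smudges this muddy student can see
            if round_no > seen:    # silence through round `seen`
                acted.add(i)
    return round_no, acted


for n in (1, 2, 50):
    rounds, who = simulate(n, set(range(n)))
    print(f"{n} muddy foreheads -> all {len(who)} act on round {rounds}")
```

With 50 "muddy" participants, everyone acts together on round 50, which matches the 50th-day bloodbath in the unfaithful-husbands version that follows.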
In Paulos's spicier version of the story, there are 50 married couples, and each woman knows when another woman's husband has been unfaithful but never when her own husband has. The statutes of the village require that if a woman can prove that her husband has been unfaithful, she must kill him that very day. As it happens, all 50 of the men have been unfaithful. Even though all the women are statute-abiding and rational, nothing happens until the tribal matriarch visits and says that there is at least one philandering husband in the village. Nothing happens for 49 days, but on the 50th day, after a process of simultaneous "meta-reasoning," there is a bloodbath, and all the husbands are slaughtered. This is a wonderful story, but Paulos takes it one step further by retelling it in terms of the Asian currency crisis. He replaces the wives with investors in different countries, their uneasiness about infidelity with nervousness about the markets, and slaying husbands with selling stocks. Each market suspected that the other markets were weak but was unaware of its own weakness until the Malaysian prime minister gave a speech in April 1997 that may have functioned as the matriarch's warning and triggered the crisis. The crash was not immediate, and perhaps the lengthy delay was due to a lengthy process of meta-reasoning. Beyond looking at mathematical problems within the context of stories, Paulos attempts to draw parallels between mathematics and stories in general. "In between vignettes and parables," he promises to "limn the intricate connections between two fundamental ways of relating to our world—narratives and numbers. Bridging this gap has been, in one way or another, a concern in all my previous books."
Some of his "bridging" is quite specific: he argues, for example, that we can interpret the structure of a joke in terms of catastrophe theory—the punch line confounds expectation, which is equivalent to a discontinuity. Such observations are intriguing, but occasionally the analogies and conclusions seem slightly tenuous. When discussing the rationalization of coincidences, Paulos warns readers that "because the stories we believe become, at least metaphorically, a part of us, we are disposed, perhaps out of a sense of self-preservation, to look always for their confirmation, seldom their disconfirmation." Paulos himself sometimes seems guilty of this crime, but not to such an extent as to spoil his overall argument. By the end of the book he has moved back to a broader contemplation of bridge building, suggesting that the disjunction between stories and statistics, and perhaps even between religion and science, may be a different guise of the mind-body problem, "the relationship between consciousness and physical stuff." Paulos believes there is ample space for both—for narratives and numbers, for religion and science—within the world's complexity, although an increasingly important problem is how we can vouchsafe a place for the individual, protected from the powerful tug of these various influences. The solution, he concludes, will require accepting "the indispensability of both stories and statistics—and of their nexus, the individual who uses and is shaped by both. The gap between stories and statistics must be filled somehow by us."

SIMON SINGH is a physicist turned science journalist. He is a TV producer and author of Fermat's Enigma: The Quest to Solve the World's Greatest Mathematical Problem (Anchor Books, 1998).

THE EDITORS RECOMMEND

REASON ENOUGH TO HOPE: AMERICA AND THE WORLD OF THE TWENTY-FIRST CENTURY. Philip Morrison and Kosta Tsipis. MIT Press, Cambridge, Mass., 1998 ($25).
Painting on a broad canvas, Morrison and Tsipis develop a picture of what global conditions might be in the coming decades. Their book is about "what is possible and hopeful" in human affairs. They treat the issues of war and peace, the growing human population, the need for economic development to reduce mass poverty, and the price of continued growth in its effects on the global environment. Three major perils that lie ahead, the authors say, can be mitigated by intelligent action. The first is large-scale war; the intelligent action is to build a "system of Common Security among nations," meanwhile gaining firm control over all nuclear weapons and reducing military budgets below some 2 percent of gross domestic product. The second peril is "the unmet daily needs of billions of people," for which the response is "Common Development," financed in large part by the savings in military expenditures. The third is degradation of the global environment; the response is to "move toward a better and fairer regime of frugality and efficiency" that would make it possible to "confront under global consensus the environmental problems whose advent and whose remedy must be found on still grander a scale." In sum: "The optimistic message of this book stands on a simple recognition. The fundamental parameters governing the outlook for humanity's future in terms of energy, war, water, food, and population are hopeful." ROBOT: MERE MACHINE TO TRANSCENDENT MIND. Hans Moravec. Oxford University Press, New York, 1998 ($25). Moravec, founder of the Robotics Institute at Carnegie Mellon University, foresees big things for robots. "Barring cataclysms, I consider the development of intelligent machines a near-term inevitability." First-generation universal robots, with lizard-scale intelligence, will be at hand by 2010, he says. No more than 30 years later, fourth-generation robots will have human-scale processing power. "The fourth robot gener- ation ... 
will have human perceptual and motor abilities and superior reasoning powers. They could replace us in every essential task and, in principle, operate our society increasingly well without us." Indeed, they should be able to carry human capabilities into the rest of the universe. And what will people do when the robots take over? They will all be able to lead the kind of life now enjoyed only by the idle rich. Reviews and Commentaries Scientific American February 1999 103 Copyright 1999 Scientific American, Inc. IN THE EYE OF THE BEHOLDER: THE SCIENCE OF FACE PERCEPTION. Vicki Bruce and Andy Young. Oxford University Press, New York, 1998 ($39.95). One of the marvels of human perception is how quickly we recognize a face and read its expression. No less impressive is our ability to call a face up from memory without actually seeing it. Bruce and Young are British psychologists who analyze both the psychology and the physiology of face perception. They wrote the book to accompany an exhibition, "The Science of the Face," presented by the Scottish National Portrait Gallery in Edinburgh last spring. A number of portraits from the gallery serve to illustrate points in the tale, as do computer manipulations of facial characteristics. One learns a great deal about what goes on in the brain as one looks at a face, but in the end the authors conclude that "many things remain mysterious" about the process. PROBABILITY 1: WHY THERE MUST BE INTELLIGENT LIFE IN THE UNIVERSE. Amir D. Aczel. Harcourt Brace & Company, New York, 1998 ($22). Probability 1, meaning a certainty that the thing will happen, is what mathematician and probability theorist Aczel assigns to the discovery of intelligent life elsewhere in the universe. The idea is very old; Aczel quotes Epicurus (341-270 B.C.) as saying there are many worlds, all with "living creatures and plants and other things we see in this world." Recent discoveries of planets orbiting stars other than the sun increase the odds.
Astronomer Frank D. Drake, long involved with the search for extraterrestrial intelligence, has formulated an equation: N = N* fp ne fl fi fc L, where N stands for the number of civilizations in our galaxy capable of communicating with other civilizations. N* is the number of stars in the galaxy (billions in the Milky Way), fp the percentage of stars with planets (debatable but high), ne the number of planets with environments favorable to life (roughly 10 percent), fl the fraction of planets with life (guesswork but perhaps with a probability of 0.1 or 0.2), fi the proportion of those planets on which intelligent life has evolved (again guesswork, with probabilities ranging from 0.1 to 0.5), fc the fraction of planets able to communicate with other civilizations by radio or some other means (inestimable until Earth receives such a communication), and L the longevity of the civilization. Drake believes that N may be as high as 10,332; the late Carl Sagan put it at about a million. Aczel's quest is for intelligent life anywhere else in the universe, not just in our galaxy. Here he is dealing with almost incomprehensibly big numbers. "Our galaxy has about 300 billion stars (although some estimates are lower), and let's assume there are 100 billion galaxies in the universe." Hence, even though the probability of life around any one star is extremely small, the compound probability with such vast numbers of stars to consider rises to 1. Q IS FOR QUANTUM: AN ENCYCLOPEDIA OF PARTICLE PHYSICS. John Gribbin. Free Press, New York, 1999 ($35). For people who have difficulty keeping the color and flavor of quarks straight, and for physicists seeking a brief summary of major events or the lives of prominent men and women in their field, Gribbin provides a splendid reference work. His nine-page introduction puts the development, present state and importance of particle physics in a sharp light. The A-Z dictionary runs from "Abelian group" to "Zweig, George (1937-)."
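As a rough illustration of how Drake's factors multiply, and of Aczel's compound-probability argument, the arithmetic can be sketched as below. Every numerical value here is an assumption chosen for illustration, not a figure from either author; in particular, L is treated as the fraction of a star's lifetime during which a civilization communicates (one common dimensionless variant of the equation).

```python
import math

# Drake-style estimate.  All parameter values are illustrative guesses;
# L is treated here as the fraction of a star's lifetime during which a
# communicating civilization exists (a dimensionless variant).
n_stars = 3e11   # stars in the Milky Way ("about 300 billion")
f_p = 0.5        # fraction of stars with planets
n_e = 0.1        # habitable planets per star with planets
f_l = 0.15       # fraction of habitable planets that develop life
f_i = 0.3        # fraction of those on which intelligence evolves
f_c = 0.2        # fraction of those that can communicate
L   = 1e-7       # assumed fraction of stellar lifetime a civilization broadcasts

N = n_stars * f_p * n_e * f_l * f_i * f_c * L
print(N)  # roughly a dozen communicating civilizations in the galaxy

# Aczel's compound-probability argument: however tiny the chance of
# intelligent life at any one star, across ~3e22 stars in the observable
# universe the probability of at least one occurrence approaches 1.
p_star = 1e-18                   # assumed per-star probability
stars_in_universe = 3e11 * 1e11  # 300 billion stars x 100 billion galaxies
p_anywhere = 1.0 - math.exp(stars_in_universe * math.log1p(-p_star))
print(p_anywhere)  # effectively 1.0
```

With these guesses the galaxy holds about a dozen communicating civilizations, while the universe-wide probability of life somewhere is indistinguishable from 1, which is the shape of Aczel's claim.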
A concluding section called Timelines, compiled by Benjamin Gribbin, presents in three adjoining columns the birth dates and career summaries of scientists "who made significant contributions to our understanding of the quantum world," key dates in science and key dates in history. John Gribbin offers, besides a wealth of facts, a tart opinion about science teaching. There is, he says, "a deep flaw in the whole way in which science is taught, by recapitulating the work of the great scientists from Galileo to the present day, and it is no wonder that this approach bores the pants off kids in school. The right way to teach science is to start out with the exciting new ideas, things like quantum physics and black holes, building on the physical principles and not worrying too much too soon about the mathematical subtleties." RESTORATIVE GARDENS: THE HEALING LANDSCAPE. Nancy Gerlach-Spriggs, Richard Enoch Kaufman and Sam Bass Warner, Jr. Yale University Press, New Haven, 1998 ($40). The idea of a garden as an adjunct to high-tech medicine in the treatment of patients is not widely entertained, but the three authors argue that it should be. "Whatever the precise design, a restorative garden is a healing landscape," they say. "It can sometimes be soothing in its sensitivity or stimulating in its exuberance, but at either extreme it is intended to engage the viewer in an act of invigoration." Gerlach-Spriggs is a landscape designer, Kaufman a physician and Warner a professor of urban studies. They combine their interests to describe, with numerous supporting photographs and drawings, six gardens of the kind they have in mind: at the Howard A. Rusk Institute of Rehabilitation Medicine in New York City, Queen of Peace Residence in Queens Village, N.Y., the Hospice at the Texas Medical Center in Houston, Friends Hospital in Philadelphia, Wausau Hospital in Wausau, Wis., and Community Hospital of the Monterey Peninsula in Monterey, Calif. 
At each place, the authors found that the garden's effects touched staff as well as patients, resulting in "an emphasis on individualized, meticulous, intimate caring for the patient." A restorative garden in a health care setting, the authors say, provides a "touch of grace [that] goes remarkably far in restoring personhood to patienthood." LOUIS PASTEUR. Patrice Debré. Translated by Elborg Forster. Johns Hopkins University Press, Baltimore, 1998 ($39.95). Trained in physics and chemistry, beginning his career as a teacher of those subjects and a researcher in crystallography, Pasteur (1822-1895) as a young man would not have seemed likely to make an international reputation in medical research. But he did, and Debré in this fine biography traces the steps in the transition and illuminates Pasteur's many achievements in the field. Debré sees Pasteur's paper of 1857 on lactic fermentation as "the birth certificate of microbiology" and his later work on vaccines as "the birth of a new discipline," namely, immunology. "The Pasteurian revolution," Debré writes, "created a close link between theory and practice. It became clear that medicine could no longer do without science and that hospitals must no longer be mere hospices."

Geologic time is a more forceful figure of speech for slow change than is any talk of molasses. The students of strata work in million-year steps; that choice is apt for the rise and weathering away of rock features. Hot fluid rock, or magma, is always transient. But the most abundant single mineral held in the first couple of miles below sea level is nimble water. Floods are water in rapid motion; even icy glaciers, compacted of solid crystals of water never far below their melting point, come and go at a geologically dizzying pace, nor do placid lakes share the longevity of the hills around them.
If in years geologists are comfortable millionaires—even billionaires—historians are impoverished. They and we treat mere centuries as the round unit for substantial changes in human life. Marine turtles could store in their genes the widening Atlantic for 100 megayears, but our ancestors reached the New World in a few dozen millennia, learning to move across wide shallows as the glaciers stood mountain high. If you probe human affairs before recorded history—no inscriptions are older than the dawn of writing, about 3500 B.C.—you will reckon in millennia. We recall with pleasure the thrilling view of sea and sky we saw from the stony summit of the Rock of Gibraltar 20 years ago. Columbia University geologist Bill Ryan pointed out how strange that seascape must once have been: the great Atlantic to our right was long ago higher by thousands of feet than the Mediterranean Sea to our left. Minor shifting of the continents had shut the narrow, old sea valve at the Straits of Gibraltar, to turn the Mediterranean Sea into a grand lake that slowly evaporated into a low salt desert dotted with temporary lakes, wherein the present islands long stood as mountainous plateaus. One day around five million years ago, after slowly growing leakage, the invincible ocean finally burst through the rock barrier in a howling cataract like thousands of Niagaras, and the entire desert basin went brimful with seawater within a century or two. The huge continents still shoulder and shift, and it is by no means unlikely that Gibraltar will shut again one of these millions of years, to bring back that vast, dry basin. No articulate witness was present; our watchful species is simply too young. We must rely on the inferences of geologists, their evidence largely drawn from analysis of rock cores made on the drill ship sent to sample the seafloor miles down.
At each of many well-chosen points, hundreds of feet of drilled cores allow detailed field study inaccessible to the geologists of earlier times, who paced out their strata hammer in hand. Diverse confirmations have by now placed the slow desertification and the cataclysmic refilling of the Mediterranean within the accepted chronicles of rock and water. The Black Sea is one of a string of wide basins from Atlantic to Caspian. Its sea valve, the Bosporus, is analogous to Gibraltar. In 1961 the research vessel Chain, an ex-ocean tug out of Woods Hole, Mass., made its echo-recording way out into the Black Sea itself, closely shadowed by a wary Soviet destroyer. It dragged a new sparker, which every few seconds emitted an undersea sound pulse with the kick of a small hand grenade. Those echoes from far below the sea bottom disclosed bedrock deeply incised by the underwater river that still flows into the Black Sea. The cleft was carved out well into the bottom proper when it was open to the air. The sediments that over time have filled that cut contain a chaotic mixture of boulders, cobbles, sand and shells. Once a torrent had flowed in from the west; the rock floor for miles within the Black Sea was not merely eroded but also ripped and jumbled into pools and caverns. (The seafloor past Gibraltar shows that same appearance.) Five years ago new precision dating of the crucial cores taken at the entry to the Black Sea was made. Carbon prepared from the tiny shells of marine mollusks in those cores gives a sharp and single date for many samples, close to 5650 B.C. These saltwater creatures were the first new immigrants into a freshwater fauna of long duration. The old glacial lake had turned into a brimming salt sea, a catastrophe supported by flow calculations. The Black Sea, refilled by a terrifying oceanic inflow, returned to brine within one human lifetime. 
The flooding of the Mediterranean is not unique; these two [Continued on page 107] COMMENTARY CONNECTIONS by James Burke A Light Little Number I poured myself a glass of delicious Bordeaux recently before settling down to watch one of the more spectacular things you can see on the box these days. As I idly sipped, my eye fell on the number on the bottle label. A pleasant little vintage—not too big. The percentage alcohol by volume number was only 11. Meanwhile the shuttle was lifting off, and I was, as ever, glued to my seat. I'm amazed by everything to do with the shuttle, but in particular the delicate way the pilot gets to position the 78-ton vehicle to within a half-degree so he can do orbital delivery runs or pickups if required. This is accomplished with the aid of 44 tiny jets all round the spaceship, some of which can produce as little as 24 pounds of thrust, thanks to more-bang-for-your-buck hypergolic fuel, one part of which is stuff called hydrazine. Apart from these celestial uses, hydrazine features in more down-to-earth forms such as pharmaceuticals, photography, plastics and rust control in hot-water systems. And something related to my glass of red: hydrazine is also a fungicide, the first of which appeared in Bordeaux, suicide capital of the world around the 1880s. If, that is, you had been a wine maker and suddenly weren't, thanks to the devastating effects of downy mildew. This killer fungus arrived on vinestock brought from America to replace the vines earlier destroyed by Phylloxera, which itself came on American stock brought in to fix an earlier plague. When the mildew first attacked in 1878, it was jump-out-the-window time for anybody left in the business. Until 1882, when Pierre-Marie-Alexis Millardet of Bordeaux University came up with a fungicide mix of lime, copper sulfate and water, and their troubles were over.
Millardet learned all he knew from Heinrich Anton de Bary, his teacher at Strasbourg University, who goes in the history books under "Father of Mushrooms" (read "mycology"). Up till then, people thought fungi were products of the plant they grew on. De Bary showed they were symbiotes (he coined the word). So, as you spray today, you know who to thank. De Bary himself had started in medicine, in Berlin, under the great physiologist Johannes Müller (author of the classic Handbook of Physiology, from 1840). This was the guy who finally knocked on the head the old natural philosophy speculative guff that permeated medicine at the time and that involved everything from supernatural cures to animal magnetism and negative forces. Müller had his own bouts with the negative, at one point becoming so suicidally depressed he went to Ostend. In 1847, while he was a dean at Berlin, Müller appointed one of his brighter students, Rudolf Virchow, to the post of instructor. A year later everything hit the fan in Germany with the revolution of 1848, and Virchow was fighting on the barricades. Social issues then informed all of Virchow's work, fundamental to which was his discovery of the basic function of the cell. Virchow saw the body as being like a democratic society, a free state of equal individuals, a federation of cells. And all disease was nothing more than a change in the condition of the cells. These egalitarian views led him to help found the German Progressive Party in 1861 and four years later to provoke none other than chancellor-to-be Bismarck to challenge him to a duel. Fortunately for Virchow, nothing came of it, and Virchow went on to become so harrumph he was eventually known as the "Pope of German Medicine."
During a short teaching break at Würzburg, Virchow taught Victor Hensen, who made his mark in studies of the hearing organs in grasshoppers' forelegs and in the identification of a couple of bits of the human cochlea. For 115 days in 1889, Hensen sailed all over the Atlantic in search of another of his obsessions—plankton. For which task he had designed a special net made of uniformly woven silk, normally used by millers to separate different grades of flour. Hensen's target was invisible, minute and everywhere—well, anywhere there were nutrients. When plankton ate up these nutrients, the organisms would die and sift down to the bottom of the ocean, where over a zillion years their shells would form sedimentary rocks. By the early 20th century these rocks were being ground up into a fine powder called kieselguhr. One use for which was as most of the stick in a stick of dynamite, kieselguhr being uniquely inert (a valuable property when being used to blot up nitroglycerine). Another use was more mundane. When tiny amounts of nickel were deposited onto kieselguhr, the nickel acted as a catalyst to encourage hydrogen molecules to combine with oil molecules and make oil hard enough at room temperature to spread (when the oil was palm oil and part of margarine, that is). I've mentioned the 1869 inventor of margarine before: a Frenchman called Mège-Mouriès, who also patented an idea for effervescent tablets and (in 1845) a use of egg yolks in tanning leather. These were great days for noodlers. Mège-Mouriès's margarine work (worth a Légion d'Honneur medal) was suggested by France's Great Man of Chemistry, Michel-Eugène Chevreul, who wrote the book on fats during his nearly 90 years at the Paris Museum of Natural History. Chevreul discovered and analyzed every fatty acid, named them all and turned the haphazard business of the soap boiler into an exact science.
With what Chevreul was able to tell them, manufacturers could now make soap cheaper and better. His fatty knowledge also made possible brighter candles. Small wonder that when the man who thus made the world clean and light turned 100, the entire country took the day off. One of the products needed for soap was alkali, and when Chevreul began his work, this was still provided from wood ash. Because most of the French forests had gone down to the axes of the naval shipbuilders, alternative sources were being avidly sought by both the French and English. One such possibility turned out to lie along the rocky coasts of Brittany and western Scotland. It was kelp. Turning it into ash was simple and profitable. Your peasants raked it off the rocks and burned it in a pit with stones pressed on it so that it became a large, hard cake, which could be ground up for later use. In Scotland the kelp-ash replacement financed many a new pseudo-Gothic castle for the owners of the previously valueless rocky beaches. In 1811 France, the ash went into the niter beds of Bernard Courtois, gunpowder manufacturer and accidental discoverer of a new element. That year Courtois leached the ash with water, ready to evaporate the salts he needed. When he added an overgenerous amount of sulfuric acid (to get rid of the unwanted sulfur compounds), he was engulfed in violet fumes from the vat. Two years later the stuff had been analyzed by one of France's foremost analyzers and given a name, after the Greek for its color: iodine. The analyst, Joseph-Louis Gay-Lussac, did the contemporary equivalent of what was going on at the start of this column. He went up to a record height (in a balloon) to do science. It was also Gay-Lussac who told me my launch-watching wine would be light. That alcohol-by-volume figure on the label (remember?) is known as the Gay-Lussac number. Wonders, continued from page 105 events are similar, though not identical.
Their most dramatic difference is the human context. Plenty of settlements surrounded the shrunken Black Sea when the ocean waters suddenly returned. As for human witness, several records of deluge myth go back 1,000 years before the Aramaic cuneiform manuscript of 900 B.C. from which the Genesis text itself comes. The oldest known version of many is presented here in an account of what we surmise the storyteller's performance might have been like. Outside the walls of old Babylon on a starry night, the famous bard sang and struck his 11-string lyre. His hero is Atrahasis, the language Akkadian. Atrahasis is divinely told to build a boat, load his family, save living things but take no possessions: Lord Ellil has found humans too numerous and loud, but Lord Enki seeks to warn them in secret. No moral reason is made clear for the destruction, and no rainbow covenanted, but land birds are released at last, to find a dry resting place. It becomes clear how much the experience of ancient witnesses depended on their location. At the eastern end of the Black Sea there was an inexorable rising of the waters by a foot or so daily, slowly turning salt. Eight hundred miles westward, near the Bosporus, the people heard and saw gigantic spray rise over an unearthly cataract; some even fled before the awesome rush of waters. The north shore of the Black Sea would send its children far-flung into the long spread of farming across Europe. But many southern shore refugees would come finally to the land between the rivers and tell of the boat that had saved them. Is this a real flood, no mere rising river but the rebirth of a whole sea, the shared experience behind the deluge tales of Mesopotamia? Bill Ryan and his colleague Walter Pitman have made a book of all this— Noah's Flood—as engaging as it is important. They make personal the "big science" of contemporary field geology with its powerful drill ships and expeditions, one in a Bulgarian minisub. 
The exciting story, full of surprises, rivalries and partnerships, opens as well the modern understanding of oral tradition. Its readers will share our pleasure in this narrative of old and new. Could storytellers span 3,000 unlettered years? SCIENTIFIC AMERICAN COMING IN THE MARCH ISSUE... Komodo Dragons Also in March... Beating the Quantum Limit The Timing of Birth Crash Testing ON SALE FEBRUARY 25 WORKING KNOWLEDGE GIANT CRAWLER CRANES by Howard I. Shapiro, Jay P. Shapiro and Lawrence K. Shapiro Howard I. Shapiro & Associates, Consulting Engineers Giant crawler cranes dramatically demonstrate the principles of classical mechanics. These powerful machines are essentially fulcrums that counterbalance the outstretched boom and the lifted load with massive machinery, a heavy chassis and counterweights. Since their introduction around the 1920s, crawler cranes—the name refers to their caterpillar treads—have improved continually. The newest generation uses very high strength steel to reduce boom weight, remote sensing devices and microprocessors to thwart overloading, and high-pressure hydrostatic drives to achieve precise motion control. Such technology notwithstanding, today's giant crawler cranes work essentially the same way that their ancestors did, by exploiting the rules of equilibrium and mechanical advantage. Safe crane operation requires careful adherence to engineered guidelines made complicated by the shifting of the overall center of mass of the equipment because of factors such as the length and position of the boom, counterweights and accessories in use, and the lifted load. In addition, the wind is often an important consideration. TYPICAL CRANE OPERATION requires the weight of the lifted load (orange) and that of the boom (green) to be stabilized by the weight of the body of the machine (black).
The fulcrum is located near the center of the track below the boom (red). Stabilizing moments must be greater than overturning moments. (A moment is a force or weight multiplied by the applicable distance from the fulcrum.) The wind (blue), which can topple the crane forward, must also be considered. OVERTURNING MOMENTS STABILIZING MOMENT BOOM LOWERED for assembly or disassembly is the condition that typically imposes the upper limit on boom length. Note that the boom's center of mass (green) is at its farthest horizontal distance from the fulcrum (red). Counterweights (brown) can be added, but the amount is constrained by the risk of the crane tipping backward when the boom is raised to its highest. RAISED BOOM without any load can result in the crane tipping over backward from a head-on wind. For this condition the fulcrum is at the rear of the crane (red), and the effect of the wind moment (blue) is greatest at the highest boom angle (about 82 degrees from the horizontal).
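The moment-balance rule described above reduces to simple arithmetic, sketched below. All weights, distances and the wind load are invented for illustration; a real load chart accounts for many more factors (boom deflection, dynamic loads, ground conditions, the shifting fulcrum).

```python
# Moment-balance check for a crawler crane: the crane is stable when the
# stabilizing moment about the fulcrum exceeds the sum of the overturning
# moments (moment = weight or force x horizontal distance from the fulcrum).
# Units: tons and feet; all numbers are invented for illustration.

def stability_margin(body_weight, body_arm,
                     load, load_arm,
                     boom_weight, boom_arm,
                     wind_force=0.0, wind_arm=0.0):
    """Stabilizing minus overturning moment; positive means stable."""
    stabilizing = body_weight * body_arm
    overturning = load * load_arm + boom_weight * boom_arm + wind_force * wind_arm
    return stabilizing - overturning

# A hypothetical 300-ton machine whose center of mass sits 15 ft behind the
# fulcrum, lifting 50 tons at a 40-ft radius with a 30-ton boom at 25 ft.
margin = stability_margin(body_weight=300, body_arm=15,
                          load=50, load_arm=40,
                          boom_weight=30, boom_arm=25)
print(margin)  # 4500 - (2000 + 750) = 1750 ton-ft: stable

# The same lift with a strong wind acting high on the boom: the extra
# overturning moment pushes the margin negative, past the tipping point.
wind_margin = stability_margin(body_weight=300, body_arm=15,
                               load=50, load_arm=40,
                               boom_weight=30, boom_arm=25,
                               wind_force=100, wind_arm=50)
print(wind_margin)  # negative: the crane would overturn
```

The same inequality, with the fulcrum relocated to the rear of the tracks and the wind moment acting backward, governs the raised-boom backward-tipping case described above.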