Thursday, December 31, 2009

Recorded Music

The first recordings remained silent for 150 years

In the ninth century Persian scholars invented the first known mechanical instrument, a hydropowered organ that played music preprinted onto a rotating cylinder. It would be 1,000 years before inventors cracked the reverse process—printing sounds onto a storage device. The first machine that could pull music from the air was Édouard-Léon Scott de Martinville’s phonautograph, which he introduced in 1857. The device used a horn to focus sound waves and direct them onto a small diaphragm; attached to the diaphragm was a stylus that scratched a record of the waves onto a soot-stained rotating glass cylinder. The device showed that sound recording was possible, but it remained a historical curiosity for a simple reason: it could not play back the recorded songs. (At least not until last year, when researchers at Lawrence Berkeley National Laboratory deciphered the scratches and played an 1860 recording of a woman singing Au Clair de la Lune.)

De Martinville’s phonautograph has remained a quaint footnote, but his basic architecture of a horn, diaphragm, stylus and cylinder provided the foundation for all sound recording for the next 70 years. In 1874 Alexander Graham Bell experimented with sound recording using de Martinville’s architecture, except he used a cadaver’s ear. He abandoned those efforts to focus on the telephone, which he introduced in 1876. A year later Thomas A. Edison was experimenting with a way to record sounds made by Bell’s telephone when he shifted efforts to record sounds in the air. His setup was almost identical to de Martinville’s except that Edison used tin foil as his recording surface, which allowed for playback. He brought the phonograph to the offices of Scientific American in December 1877, the same month he filed for a patent on the device. We wrote, “No matter how familiar a person may be with modern machinery and its wonderful performances . . . it is impossible to listen to the mechanical speech without his experiencing the idea that his senses are deceiving him.”

Source of Information : Scientific American September 2009

Wednesday, December 30, 2009

Blue

The natural pigment was once a “precious” color

Looking for the perfect blue? You’ll have to specify. Cobalt, Prussian, azurite or ultramarine? According to Philip Ball’s book Bright Earth, if you were an artist living in the 14th century, the finest blue could cost you a king’s ransom. We can’t even reproduce it in this magazine—it’s not part of the gamut, or achievable range of colors, that can be rendered by the four “process colors” of ordinary printing.
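The gamut constraint can be sketched with the textbook naive RGB-to-CMYK conversion. Real press gamuts are characterized by ICC profiles, so this is only an illustration, and the RGB triple used for ultramarine below is a rough approximation, not a measured value:

```python
def rgb_to_cmyk(r, g, b):
    """Naive RGB -> CMYK conversion, all values in 0..1.

    Real printing gamuts are defined by ICC profiles; this textbook
    formula only shows how the four process colors are derived.
    """
    k = 1 - max(r, g, b)           # black: the part no ink can brighten
    if k == 1.0:                   # pure black; avoid dividing by zero
        return 0.0, 0.0, 0.0, 1.0
    c = (1 - r - k) / (1 - k)
    m = (1 - g - k) / (1 - k)
    y = (1 - b - k) / (1 - k)
    return c, m, y, k

# A rough stand-in for ultramarine demands nearly full cyan and
# magenta coverage; even at maximum ink the printed patch is duller
# than the pigment itself, which is why such saturated blues fall
# outside the process-color gamut.
c, m, y, k = rgb_to_cmyk(18/255, 10/255, 143/255)
print(f"C={c:.2f} M={m:.2f} Y={y:.2f} K={k:.2f}")
```

The conversion shows why a deep blue strains the process inks: it asks for close to 90 percent cyan and magenta at once, leaving no headroom for the pigment's intensity.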

The oldest man-made blue—the oldest synthetic pigment, period—is “Egyptian” blue. Color makers fired a mixture of one part lime, one part copper oxide and four parts quartz in a kiln, which left an opaque blue material that can be ground to a fine powder for making paint. The stuff occurs on Egyptian artifacts dating to around 2500 B.C. and was still in use when Mount Vesuvius buried Pompeii in A.D. 79.

In the Middle Ages color became central to the alchemists’ obsession with transmutation. And the alchemists’ great contribution to artists’ blue was ultramarine. It is made from blue lapis lazuli, a semiprecious stone then mined in Afghanistan. The costly raw material and elaborate preparation—which involved endless kneading of the lapis powder and washing in lye—led to the deep, rich, dark blue seen, as Ball points out, in paintings of the robe of the Virgin Mary. The medieval painter’s patron who could afford a Virgin in ultramarine was displaying the piety of an archbishop and the wealth of a modern hedge-fund manager.

As late as 1800, despite several alternative blues, artists were still seeking a less costly substitute for ultramarine. In 1824 the French Society for the Encouragement of National Industry offered 6,000 francs for an industrial process that could make a synthetic ultramarine for less than 300 francs a kilogram. A color manufacturer named Jean-Baptiste Guimet claimed the prize, and by the 1870s the snob appeal of the natural pigment had died out—killed by time and by a price 100 to 2,500 times that of the synthetic variety. Industrial ultramarine became the blue of choice in the work of Impressionists such as Renoir, Cézanne and van Gogh.

Source of Information : Scientific American September 2009

Tuesday, December 29, 2009

Religious Thought

Belief in the supernatural may have emerged from the most basic components of human cognition

God may or may not exist, but His followers certainly do. Nearly every civilization worships some variety of supernatural power, which suggests that humans are hard-wired to believe in something that, by definition, is not of this world. But why? Evolutionarily speaking, how could belief in something in the absence of physical evidence have aided the survival of early Homo sapiens? Evolutionary biologists Stephen Jay Gould and Richard Lewontin of Harvard University proposed that religious thinking is a side effect of tendencies that more concretely help humans to thrive. Perhaps the most primitive is our “agency detector,” the ability to infer the presence of others. If the grass rustles in the distance, our first instinct is that someone or something may be lurking. This propensity has obvious evolutionary advantages: if we are right, we have just alerted ourselves to a nearby predator. (And if we are wrong, no harm done and we can get back to picking berries.)

In addition, humans instinctually construct narratives to make sense of what may be a disconnected jumble of events. Nassim Nicholas Taleb, author of The Black Swan and a professor of risk engineering, calls this the “narrative fallacy”—we invent cause-and-effect stories to explain the world around us even if chance has dictated our circumstances. Gods, empowered with omnipotence and shielded from natural inquiry, can be used to explain any mysterious event.

Finally, humans can imagine the thoughts and intentions of others and imagine that they are different from our own, a trait known as theory of mind. The condition, which is severely diminished in autistic children, is so fundamental to what it is to be human that it might be a necessary precondition for civilization. It is a small step from imagining the mind of another person—even if you have no direct access to it—to imagining the mind of a deity. Taken together, the evolutionary adaptations that made the garden of human society flourish also provided fertile ground for belief in God. Of course, it is impossible to transport ourselves back to early civilization to rigorously test these ideas, so perhaps one more idea about the divine will have to wait for verification.

Monday, December 28, 2009

HIV

The viral infection’s origin among apes might hold a key for someday taming it

Acquired immunodeficiency syndrome (AIDS) received its utilitarian name in 1982, a year after U.S. doctors recognized an epidemic of pneumonias, rare cancers and assorted bacterial infections among mostly male, mostly young and mostly previously healthy adults. The next year French researchers isolated the cause of the immune system collapse that defined the syndrome: a virus that selectively infects and destroys immune cells themselves.

The human immunodeficiency virus (HIV), which today resides in more than 30 million people and seemed to come out of the blue in the early 1980s, is now known to have been infecting humans for at least a century. Recent studies of preserved tissue samples show HIV present in the former Belgian Congo in 1959, in Haiti by 1966 and possibly in the U.S. as early as 1969. The historic specimens also let scientists calibrate “molecular clocks” to trace the evolution of the virus back to its first appearance in humans.

Those analyses place the emergence of the most widespread HIV strain, known as group M, in southern Cameroon around 1908. Its ancestor was likely a virus that has been infecting West African chimpanzees since 1492, according to another recent molecular clock study. If so, many rural people were surely exposed to simian immunodeficiency virus (SIV) over the centuries through live chimps or in bush meat before the infection caught hold in the human population. Scientists are consequently keen to figure out what allowed “successful” SIV strains to adapt to our species and begin spreading as HIV.

AIDS researchers are also intensively studying the behavior of SIV in its native host because although the simian virus is nearly identical to HIV, in wild chimps it is generally benign. The immune cells of our closest primate cousins get infected, too, but eventually manage to rally and reconstitute their numbers. The origin of the devastating syndrome that is AIDS therefore lies in some combination of minute changes in HIV itself and in the human body’s responses to it—and it remains a mystery. —Christine Soares

Source of Information : Scientific American September 2009

Sunday, December 27, 2009

Bone

Structure, strength and storage in one package

A social gathering in the Cambrian period, beginning some 540 million years ago, might have resembled an underwater war game—all life resided in the ocean then, and almost every creature present would have been wearing some sort of external armor, complete with spiked helmets. The ancestors of insects and crustaceans wore full exoskeletons, probably made from a mixture of protein and chitin like the shells of modern lobsters. Starfishlike organisms and mollusks manufactured their body armor from calcium carbonate extracted from seawater. Even one fishlike evolutionary dead end, the ostracoderm, managed somehow to swim while encased in scales and heavy plates made of true bone—that is, mineralized cartilage rich in calcium and phosphates.

It was the mild-mannered softies of the period, however, that would first develop internal bones. Wormlike organisms, such as the conodonts, started to mineralize the cartilage surrounding their primitive spinal cords, becoming the first vertebrates. Bony cranial coverings came next, and other creatures with more extensive cartilaginous internal skeletons soon followed suit. Because these swimmers used muscle contractions to propel themselves, having muscles anchored to solid bone would have provided greater strength. The hardened skeleton also offered a more solid scaffold for bodies to grow larger and to diversify, adding limbs to their repertoires.

Serving as a massive and highly responsive storage depot for critical minerals, particularly calcium, is a role that likely evolved later but is now one of the most important functions of human bone. Without calcium, the heart cannot beat and brain cells cannot fire, so far from being inert, bone is in constant flux between growth and self-demolition to meet the body’s needs and to maintain its own structure. Cells called osteoclasts (“bone breakers”) destroy old or dead bone tissue, and osteoblasts (“bone growers”) give rise to new bone cells. Working together, these cells replace about 10 percent of the skeleton every year. In the shorter term, if blood calcium levels are too low, osteoclasts destroy bone to release the mineral. Conversely, if exercise produces larger muscles, osteoblasts get to work building new bone to withstand their pull. —Christine Soares
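The 10 percent annual turnover figure invites a back-of-envelope sketch. Assuming, purely for illustration, that remodeling sites are chosen at random each year, the fraction of today's bone tissue still unreplaced after n years decays as 0.9^n:

```python
# If osteoclasts and osteoblasts replace ~10% of the skeleton per year
# and we assume (for illustration only) that remodeling sites are
# picked at random, the fraction of today's bone still present after
# n years is 0.9**n -- so roughly a third survives a decade.
for years in (1, 5, 10, 20):
    remaining = 0.9 ** years
    print(f"after {years:2d} years: {remaining:.0%} of original bone remains")
```

In reality remodeling is targeted at old or damaged tissue rather than random, so some bone turns over much faster than this sketch suggests and some much slower.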

Source of Information : Scientific American September 2009

Saturday, December 26, 2009

Feathers

Barbs became plumes long before birds took wing—in fact, long before birds

The scaly, green Tyrannosaurus rex of monster movies is history. The real T. rex was probably covered in a fine feathery fuzz, as were most of the dinosaurs in its family, known as the theropods, which later gave rise to birds. Rich fossil beds in northeastern China have yielded specimens confirming that a wide variety of strictly earthbound dinosaurs sported feathers during the Cretaceous period, some 125 million years ago. Studying those fossils along with feather development in modern birds has allowed researchers to reconstruct the likely steps in feather evolution.

The earliest protofeathers were little more than hollow barbs of keratin, the tough protein that makes up scales, hooves and hair. At some point the barbs developed horizontal ridges that separated into filaments, then split open vertically, resulting in a tassel-like feather. Long, filamentous tail feathers were recently found in a fossil belonging to a dinosaur lineage known as the ornithischians, which diverged from the dinos that would become theropods 70 million years before the Cretaceous—suggesting that feathers could be a very ancient and widespread feature.

The original purpose of plumage might have been simply to provide lightweight warmth, but the vivid hues and patterns seen in modern birds also play a critical role in mating display. Not all feather colors are produced by pigment, however. Nanoscale keratin structures within the feathers trap air and scatter light of certain wavelengths, depending on their shapes—the dark blues of the Eastern bluebird, for instance, result from twisted air channels and keratin bars. Further studies of how these nanostructures self-assemble could yield new techniques for making colored and light-emitting materials. —Christine Soares

Source of Information : Scientific American September 2009

Friday, December 25, 2009

Blueprint

A failure for photography, it was long irreplaceable for duplicating house plans

“This paper will prove valuable,” wrote John Herschel in a scientific memorandum on April 23, 1842, noting the effect of sunlight on a sample he had treated with “ferrocyanate of potash.” The light turned the chemical blue, leading Herschel to believe he had found a basis for the invention of color photography. He had not—nor would he live long enough to witness the true usefulness of his discovery.

A British astronomer and chemist, Herschel had already played a crucial role in the 1839 invention of the black-and-white salt print—the first photographic negative—by finding a way to fix, or set, the fugitive image with sodium thiosulfate. His obsessive search for other photosensitive chemicals led him to try out everything from vegetable extracts to dog urine, as well as the then new pharmaceutical known as ferrocyanate of potash, a substance now called potassium ferricyanide. The ferrocyanate produced a strong image, particularly when combined with another pharmaceutical called ammonio-citrate of iron (ammonium ferric citrate), and the image proved permanent after washing. Herschel dubbed his invention the “cyanotype,” but he was deeply dissatisfied with it, because he could not coax the chemistry to produce a stable positive image—only a negative. Most photographers shared his opinion, shunning the strange cyan hue in favor of conventional black-and-white pictures.

Only in 1872, one year after Herschel died, was the cyanotype revived, when the Paris-based Marion and Company renamed his invention “ferroprussiate paper” and began marketing it for the replication of architectural plans. (Previously, they had been copied by hand, which was expensive and prone to human error.)

At the 1876 Philadelphia Centennial Exposition, the process reached American shores, where it finally met success as the blueprint, the first inexpensive means of duplicating documents. All that was required was a drawing traced on translucent paper. Pressed against a second sheet coated with Herschel’s chemical under glass, the drawing was exposed to sunlight, then washed in water. The blueprint paper recorded the drawing in reverse, black lines appearing white against a cyan background.

Occupying the top floors of office buildings where there was ample sunlight, blueprint shops thrived for nearly a century, only gradually phasing out Herschel’s chemistry for less labor-intensive processes such as the diazo print and the photocopy from the 1950s to the 1970s. Today most architectural plans are digitally rendered, and Herschel would have marveled at the color gamut of the modern laser printer. Yet he would have been puzzled, given his failed efforts to print in full color, to see that when we want to communicate an innovative new plan, we call it a blueprint and output it in cyan. —Jonathon Keats

Thursday, December 24, 2009

Photosynthesis

The reaction that makes the world green is just one of many variants

When the sun shines, green plants break down water to get electrons and protons, use those particles to turn carbon dioxide into glucose, and vent out oxygen as a waste product. That process is by far the most complex and widespread of the various known versions of photosynthesis, all of which turn the light of particular wavelengths into chemical energy. (Studies have even suggested that certain single-celled fungi can utilize the highly energetic gamma rays: colonies of such fungi have been found thriving inside the postmeltdown nuclear reactor at Chernobyl.) Using water as a photosynthetic reactant instead of scarcer substances such as hydrogen sulfide eventually enabled life to grow and thrive pretty much everywhere on the planet. Water-splitting photosynthesis was “invented” by the ancestors of today’s cyanobacteria, also known as blue-green algae. The organisms that now do this type of photosynthesis, including plants, green algae and at least one animal (the sea slug Elysia chlorotica), carry organelles called chloroplasts that appear to be the descendants of what once were symbiotic cyanobacteria.

All of them use some form of the pigment chlorophyll, sometimes in combination with other pigments. Photosynthesis starts when arrays of chlorophyll molecules absorb a photon and channel its energy toward splitting water. But water is a uniquely hardy molecule to be involved in photosynthesis. Taking electrons from water and giving them enough energy to produce glucose requires two separate assemblies of slightly different chlorophyll molecules (and an apparatus of more than 100 different types of proteins). Simpler forms of photosynthesis use one or the other version, but not both. The mystery is, Which one appeared first in evolution, and how did the two end up combined? “It’s a question we don’t really know the answer to,” says Robert Blankenship of Washington University in St. Louis. Scientists also do not know when cyanobacteria learned to split water. Some evidence suggests that it may have been as early as 3.2 billion years ago. It surely must have happened at least 2.4 billion years ago, when oxygen shifted from being a rare gas to being the second most abundant one in the atmosphere—a change without which complex multicellular animals that can formulate scientific questions could never have existed. —Davide Castelvecchi

Source of Information : Scientific American September 2009

Wednesday, December 23, 2009

Mad Cow Disease

Cannibalism takes its revenge on modern farms

The story behind the brain-destroying mad cow disease vividly illustrates why it’s not a good idea to eat your own species. For cattle, cannibalism had nothing to do with survival or grisly rituals and everything to do with economics. The first so-called mad cows (the sickness is formally called bovine spongiform encephalopathy) were identified in 1984 in the U.K. They were probably infected a few years earlier by eating feed derived from the parts of sheep, cows and pigs that people avoided—diaphragms, udders, hooves, spinal cords, brains, and the like. The process of separating the ground-up components of slaughtered animals to make feed—and other products such as soap and wax—is called rendering and has existed for hundreds of years. In the mid-20th century in the U.K., rendering demanded the use of solvents and hours of boiling. The procedures presumably destroyed any pathogens that might have come from diseased creatures—pathogens that include the prion, a dangerous, malformed version of a protein found in all mammals.

In the 1970s the price of oil rose sharply, shooting up 10-fold by 1980. High crude prices, coupled with stagnant economic times, led renderers to seek ways to cut energy costs. So they did away with the solvents and the extended heating, opting instead to separate the parts in a centrifuge. The elimination of the extra cooking steps apparently enabled prions to persist. Perhaps the first prions came from a cow that spontaneously developed the disease. Or perhaps scrapie, a prion disease of sheep that had been endemic in the U.K. for centuries but did not seem to pose a threat to human health, jumped species to infect bovines. In any case, subsequent rendering of infected cows—and then giving the resulting feed to other cows to eat as a cheap source of protein—amplified the outbreak.
The situation echoed the devastation of the Fore people of Papua New Guinea: when the group practiced cannibalistic funerary rites in the early 20th century, it spread a fatal prion disease called kuru. For the cows of the U.K. and elsewhere—the export of contaminated feed spread the disease globally—the epidemic subsided after regulations banned cannibalistic feed. Animal-health officials last year registered 125 cases worldwide, down from the peak of 37,000 in 1992. The rules came too late to save some 200 people who contracted the human form of the ailment—a small number, thankfully, considering that tens of millions have probably dined on mad cow beef. —Philip Yam

Source of Information : Scientific American September 2009

Tuesday, December 22, 2009

The Mechanical Loom

Programmable textile machinery provided inspiration for the player piano and the early computer

A master weaver in 18th-century Lyon, France, Jean-Charles Jacquard was able to fabricate no more than six inches of silk brocade a week. Even that production rate was feasible only with the aid of an apprentice who sat atop his wooden draw loom, raising individual warp threads by hand while the maître slid brightly colored threads of weft through the warp. The unrelenting tedium of weaving a pattern line by line may explain why his son, Joseph-Marie, avoided it even before the French Revolution briefly put brocade out of fashion. Only after squandering his family inheritance did Joseph-Marie reconsider—and even then, instead of becoming a master weaver, he invented a machine to save himself the labor.

Jacquard’s key idea was to store brocade patterns on perforated cards that could be fed through the loom, with one card per line of weaving. The loom would read the arrangement of holes punched on a card with a lattice of spring-activated pins connected to hooks that would each individually lift a warp thread wherever a pin entered a hole. In this way, the loom could be programmed, and patterns could be modified or switched by rearranging or replacing the card deck. Patented in 1804, the Jacquard loom could, in expert hands, produce two feet of brocade a day, a feat impressive enough, given France’s dependence on textile exports, to merit the device a visit from Napoleon. Yet not even the notoriously ambitious emperor could have appreciated the significance that Jacquard’s invention would have to future generations.
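The card-reading principle is easy to simulate. In this minimal sketch (the card patterns and symbols are invented for illustration), each card is a row of positions in which 'O' marks a hole; a hook raises its warp thread exactly where a pin passes through a hole, and swapping the deck reprograms the cloth:

```python
def weave(deck):
    """Render one woven line per punched card.

    'O' = hole: the pin enters, the hook lifts that warp thread, and
    the weft shows on the face of the cloth ('#'); '.' = no hole, so
    the thread stays down ('-').
    """
    return [''.join('#' if pos == 'O' else '-' for pos in card)
            for card in deck]

deck = [        # one card per line of weaving; rearranging or
    'O..O',     # replacing cards changes the brocade pattern
    '.OO.',
    '.OO.',
    'O..O',
]
for line in weave(deck):
    print(line)
```

The same deck-as-program idea carried over directly to the pianola roll and to the punch-card stacks of early computers.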

As it turned out, holes punched in paper provided a ready-made solution for developing any kind of programmable machine. Inside the pneumatic mechanism of a pianola, one punched roll would play a Bach toccata, while another would play a Gershwin rag. Vastly greater was the versatility inside a computer, as 19th-century British scientist Charles Babbage imagined with his unbuilt Analytical Engine and as American engineer Howard Aiken realized with the Harvard Mark I, which he designed in the late 1930s and IBM built. Following Babbage’s lead, Aiken made stacks of Jacquard-style punch cards operate in tandem, with one stack setting the operations applied to data read from another.

In modern computers the cards are gone (as are Aiken’s electromechanical switches), but computers still embody essentially the same architecture. And although industrial looms are no longer manned by masters of the craft such as Jacquard’s father, Joseph-Marie’s innovation brings even weaving to ever higher levels of efficiency through the computer consoles that control the patterning of modern textiles.—Jonathon Keats

Source of Information : Scientific American September 2009

Monday, December 21, 2009

The Pill

Infertility treatments led to reproductive liberation

The oral contraceptive so universally embraced it became known simply as “the pill” was a decades-long dream of family-planning advocate Margaret Sanger, although none of the men who realized her vision started out with that purpose. In the 1930s scientists began discovering the roles of steroid hormones in the body and contemplated their therapeutic potential, but extracting hormones from animals was prohibitively expensive for most medical uses. Then, in 1939, Penn State chemist Russell Marker devised a method for making steroids from plants that remains the basis of hormone production even today. The company he founded, Syntex, soon developed an injectable synthetic progesterone derived from a wild yam.

Progesterone was an attractive drug candidate for treating menstrual irregularities that contributed to infertility because its natural role is to prevent ovulation during pregnancy and parts of a woman’s menstrual cycle. In 1951 Syntex chemist Carl Djerassi—who would later become famous for his prodigious literary output—synthesized a plant-derived progestin that could be taken in convenient oral form.

When Sanger and her wealthy benefactor, Katharine Dexter McCormick, approached steroid researcher Gregory Pincus about creating a contraceptive pill in 1953, he was working for the small and struggling Worcester Foundation for Experimental Biology in Massachusetts. But 20 years earlier at Harvard University, Pincus had scandalized polite society by carrying out successful in vitro fertilization of rabbits; Sanger believed he had the daring and know-how to produce her long-sought pill. Pincus in turn recruited an infertility doctor, John Rock, who was already using progesterone to suspend his patients’ ovulation for a few months in the hope that their fertility would rebound. Still under the guise of fertility research, Rock and Pincus conducted their first human trial in 1954, injecting 50 women with synthetic progestins over the course of three months.
All 50 stopped ovulating for the duration of the trial and resumed when the drugs were withdrawn. After several more years of experimentation, the first contraceptive pill was approved by the U.S. Food and Drug Administration in June 1960.

Source of Information : Scientific American September 2009

Sunday, December 20, 2009

Diamonds

Its hardness is natural; its value is not

A diamond is forever. So are sapphire, silica and Styrofoam. Diamond is the hardest known naturally occurring substance, which explains why diamonds make excellent industrial cutting materials, not why they are emblems of romance. They are no more rare than any number of minerals, no more dazzling. So although diamonds may have their genesis in the heat and pressure of the earth’s mantle billions of years ago, what a diamond represents is a very modern tale.

In 1870 British mining efforts in South Africa uncovered massive diamond deposits. Until then, as commodities, diamonds had been extremely rare; the new finds threatened to flood the market with stones and obliterate their price. Investors in the mines realized they had to consolidate their interests to control the flow of diamonds into the open market, and so in 1888 they formed the De Beers Consolidated Mines Ltd. consortium. By stockpiling its goods to keep prices high, De Beers controlled the worldwide diamond supply for the next century.

Its next trick was to control demand. In 1938 De Beers hired the American public-relations firm N. W. Ayer to begin the first advertising campaign that aimed not to sell a specific item, nor to bring customers into a specific store, but rather to sell an idea: that a diamond is the only acceptable symbol of everlasting love—and the larger the diamond, the greater the love. The company planted stories in newspapers and magazines that emphasized the size of the diamonds movie stars gave one another; four-color advertisements of celebrities conspicuously flashing their rocks helped to cement the connection. The slogan “A Diamond Is Forever” entered the lexicon in 1949, and by the time the postwar generation grew old enough to wed, the diamond engagement ring had become a nonnegotiable symbol of courtship and prestige.

Antitrust rulings earlier this decade broke De Beers’s choke hold on the diamond market and forced an end to its practice of stockpiling. Yet it has effectively been replaced by Alrosa, a firm 90 percent owned by the Russian government that became the world’s largest diamond producer earlier this year. Alrosa, worried about a drop in prices during a global recession, has not sold a stone on the open market since December 2008. As Andrei V. Polyakov, a spokesperson for Alrosa, explained to the New York Times, “If you don’t support the price, a diamond becomes a mere piece of carbon.”

Source of Information : Scientific American September 2009

Saturday, December 19, 2009

The Eye

What was half an eye good for? Quite a lot, actually

One of creationists’ favorite arguments is that so intricate a device as the eye—with a light-regulating iris, a focusing lens, a layered retina of photosensitive cells, and so on—could not have arisen from Darwinian evolution. How could random mutations have spontaneously created and assembled parts that would have had no independent purpose? “What good is half an eye?” the creationists sneer, claiming the organ as prima facie proof of the existence of God. Indeed, even Charles Darwin acknowledged in On the Origin of Species that the eye seemed to pose an objection to his theory. Yet by looking at the fossil record, at the stages of embryonic development and at the diverse types of eyes in existing animals, biologists since Darwin have outlined incremental evolutionary steps that may have led to the eye as we know it.

The basic structure of our eyes is similar in all vertebrates, even lampreys, whose ancestors branched away from ours about 500 million years ago. By that time, therefore, all the basic features of the eye must have existed, says Trevor Lamb of the Australian National University. But vertebrates’ next closest kin, the slippery hagfish—animals with a cartilaginous cranium but no other bones—have only rudimentary eyes. They are conical structures under the skin, with no cornea, no lens and no muscles, whose function is probably just to measure the dim ambient light in the deep, muddy seabeds where hagfish live.

Our eyes are thus likely to have evolved after our lineages diverged from those of hagfish, perhaps 550 million years ago, according to Lamb. Earlier animals might have had patches of light-sensitive cells on their brain to tell light from dark and night from day. If those patches had re-formed into pouchlike structures as in hagfish, however, the animals could have distinguished the direction from which light was coming. Further small improvements would have enabled the visualization of rough images, as do the pinhole-camera eyes of the nautilus, a mollusk. Lenses could eventually have evolved from thickened layers of transparent skin. The key is that at every stage, the “incomplete” eye offered survival advantages over its predecessors.

All these changes may have appeared within just 100,000 generations, biologists have calculated, which in geologic terms is the blink of an eye. Such speedy evolution may have been necessary, because many invertebrates were developing their own kinds of eyes. “There was a real arms race,” Lamb says. “As soon as somebody had eyes and started eating you, it became important to escape them.”

Source of Information : Scientific American September 2009

Friday, December 18, 2009

Intermittent Windshield Wipers

A now routine automotive feature pitted an individual inventor against the entire industry

The origins of even the simplest technology are sometimes best remembered not for the ingenuity of the inventor’s imagination but rather for the endless legal disputes the invention engendered. In the annals of famous patent litigation, the intermittent windshield wiper holds pride of place. The genesis of this useful but seemingly incidental feature of the modern automobile even attracted Hollywood scriptwriters in search of a latter-day David and Goliath tale that became a 2008 release called Flash of Genius.

The story revolves around a brilliant, idiosyncratic college professor named Robert Kearns. Almost blinded by a champagne cork on his wedding night in 1953, Kearns later found that the monotonous back-and-forth movement of wiper blades vexed his diminished vision, as recounted in the most commonly cited version of events. Kearns used off-the-shelf electronic parts in 1963 to devise windshield wipers that would clean the surface and then pause. The engineer demonstrated his system to Ford and ended up revealing details of how it worked. The automaker decided not to buy wipers from a Detroit tool and die company to which Kearns had licensed his patent rights—and it subsequently developed its own system.

In 1976 Kearns, then working with the National Bureau of Standards, disassembled a commercial wiper system and discovered that the company had apparently adopted his own design. He promptly had a nervous breakdown and, once recovered, began a struggle that lasted until the 1990s to gain redress. Kearns recruited several of his children to help in preparing lawsuits against the world’s major auto companies, sometimes even serving as his own legal counsel. Juries ultimately determined that Ford and Chrysler had infringed Kearns’s patents, resulting in about $30 million in awards. Critics have argued that Kearns’s inventions violated a key criterion of patentability, that an invention should not be “obvious” to one skilled in making widgets similar to the type being patented. An electronic timer— the essence of Kearns’s invention—was, if anything, obvious, Ford contended. Still, Kearns prevailed in these two cases (but not later ones), and he will live on indefinitely as a hero to small inventors.

Source of Information : Scientific American September 2009

Thursday, December 17, 2009

The Paper Clip

Despite its shortcomings, the iconic design will likely stick around

People have fastened sheets of paper together more or less permanently ever since the Chinese invented the stuff in the first or second century A.D. Yet according to the Early Office Museum, the first bent wire paper clip wasn’t patented until 1867, by one Samuel B. Fay. The iconic shape of the Gem paper clip (the namesake of Gem Office Products Company) that we know today did not appear until around 1892, and it was never patented. Henry Petroski, the technology historian, wrote that its development had to await the availability of the right wire as well as machinery that could bend wire quickly enough for a box of clips to be sold for pennies. Both the paper clip and the machine that makes it trace their origins to pin making. Office workers in the early 19th century stuck their papers together—literally—with pins; a pin design known as the T-pin is still advertised in office products catalogues today. Victorian-era pin-making machinery had already solved the problem of cheaply mass-converting wire to pins; adapting the machine’s talents to shaping wire was a relatively minor adjustment that made it possible for hosts of creative wire benders to dream of cashing in big.

Today paper clips made out of molded plastic, wire clips coated with colored plastic, and even semicircular sheets of aluminum that fold the top corners of the papers (and are thereby able to carry a logo or a favorite design) have come on the market. And you can still readily buy T-pins, owl clips, binder clips and ideal clips. Taken together, they have even made some inroads in the traditional Gem paper clip business. But before you send a sketch of your new, improved design to Gem Office Products, consider this: the Gem paper clip can scratch or tear paper, catches on others of its kind in a box and, if spread too wide, slips off the papers it is intended to hold. The company once estimated that it received at least 10 letters a month suggesting alternative designs. Yet to most people, the Gem simply is the paper clip. It’s as frozen into office culture as the “qwerty” keyboard.

Source of Information : Scientific American September 2009

Wednesday, December 16, 2009

A surgical cure for diabetes?

The gastric bypass was originally developed as a treatment for obesity, but doctors have been amazed at how quickly it benefits people with type 2 diabetes—sometimes within a matter of days or even hours. Some surgeons and learned societies have stopped calling the procedure obesity surgery and instead speak of metabolic surgery—referring to the fact that it can treat metabolic conditions such as diabetes. The radical idea now is that diabetes could be treated more routinely with this type of surgery.

The key question is how overweight people have to be in order to qualify. At the moment the procedure is normally offered only to those with a body mass index of 35 or more (between 18.5 and 25 is considered normal), on the grounds that only at that weight will the clinical improvement outweigh the risks of surgery. But some doctors are lowering the BMI cut-off to 30 for people with diabetes. Some in South America have raised eyebrows by operating on diabetics with a BMI as low as 25.

“The excellent results early after surgery, before significant weight loss, have led us to extend the surgical approach to people in whom significant weight loss is not required,” says Bruno Geloneze, an endocrinologist at the State University of Campinas in Brazil. In May his team published a study of 12 diabetics whose BMI was between 25 and 30—in other words, they were not obese, merely overweight. All experienced improvements in their symptoms after a bypass, and without unexpected complications. But only larger randomised trials will reveal if the benefits outweigh the risks.

Francesco Rubino, a metabolic surgeon at Cornell University-New York Presbyterian Hospital, doesn’t think surgery should automatically be offered to everyone with type 2 diabetes, but he agrees that using a cut-off for BMI is too crude a measure. “It should be based on more comprehensive considerations than BMI alone,” he says.
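Body mass index, the yardstick used throughout this discussion, is simply weight in kilograms divided by the square of height in meters. A minimal sketch of the cutoffs quoted above (the function names and band labels are illustrative, not clinical guidance):

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def category(b):
    """Rough bands using the figures quoted in the article."""
    if b < 18.5:
        return "underweight"
    if b < 25:
        return "normal"        # 18.5-25 is considered normal
    if b < 30:
        return "overweight"    # the 25-30 range studied by Geloneze's team
    if b < 35:
        return "obese"         # some doctors' lowered surgical cut-off
    return "severely obese"    # the traditional surgical threshold (BMI >= 35)

# Example: 85 kg at 1.75 m gives a BMI of about 27.8 ("overweight")
b = bmi(85, 1.75)
```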

Tuesday, December 15, 2009



Clean your clothes without putting them—or your utility bills—through the wringer. Xeros’s prototype washing machine uses 90 percent less water than ordinary models, which also eliminates energy-intensive spin cycles and dryer blasts. The machine replaces all but one tenth of the usual water and about one third of the usual detergent with 0.1-inch plastic beads, reusable for hundreds of washes. The beads are made of the same nylon as many carpets, because the properties that make nylon easy to stain also make it a great scrubber: Its polarized molecules attract soil, and in the humidity created by a little water, the polymer chains separate slightly to absorb grime and lock it into the beads’ cores.

Nylon beads sit in the outer of two nested drums. When both drums rotate, the absorbent beads fall through the mesh of the inner drum to tumble with your laundry, where they dislodge and trap dirt. After the wash cycle finishes, the outer drum stops moving and centrifugal force pushes the beads back through the mesh into the outer drum, where they await your next mess.

Xeros aims to put machines in commercial laundries next year, where they will use eight gallons of water instead of 80 for each 45-pound load. They may be cleaning your favorite T-shirts at home within several years.

Source of Information : Popular Science November 2009

Monday, December 14, 2009



Multitouch screens, which can register more than one finger-press at a time, will let computers trade keyboards and mice for simple strokes and pinches. The models shown here are just the start. Nearly every major PC maker will introduce touch-y designs of various shapes and sizes in the coming months.

Microsoft Windows 7, which launches October 22, is the first major computer operating system designed to work with multitouch displays. Because it incorporates the software code needed to understand your gestures, manufacturers can now include these screens more easily than ever before.

Use your fingers instead of a mouse in almost any program; for instance, pinch to zoom out in Google Earth, or drag a finger to scroll through a Web page in Firefox. Developers are also beginning to build applications that use touch in new and more creative ways—such as 3-D design programs that let you morph virtual products with a twist—so that formerly complicated tasks will become as easy as a tap.
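The pinch gesture mentioned above comes down to simple geometry: track the distance between two touch points and treat its ratio to the starting distance as a zoom factor. A hypothetical sketch (real multitouch APIs deliver touch points differently; the function names here are invented for illustration):

```python
import math

def distance(p, q):
    """Euclidean distance between two touch points given as (x, y)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def pinch_zoom(start_touches, current_touches):
    """Zoom factor implied by a two-finger pinch:
    >1 means the fingers spread apart (zoom in),
    <1 means they pinched together (zoom out)."""
    return distance(*current_touches) / distance(*start_touches)

# Fingers start 100 px apart and spread to 200 px apart: 2x zoom
factor = pinch_zoom(((0, 0), (100, 0)), ((0, 0), (200, 0)))
```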

The T400s looks like an ordinary 14.1-inch laptop, but a touchscreen frees you from the tiny cursor. For instance, you can rearrange two photos at once by dragging them, or partygoers can point at a song they want to hear. Its capacitive screen senses the electrical conductivity of fingers and even recognizes up to four touches at a time. Lenovo T400s with Multitouch Option, from $2,000

Lose the keyboard entirely with a laptop whose display spins and folds to hide the keys. You can use a stylus to write or draw precisely, since the 13.3-inch screen includes both a flesh-sensing capacitive layer and the same electronic-pen-based layer used by graphic artists. Don’t worry about penmanship: Windows 7 boasts better handwriting recognition. Fujitsu LifeBook T5010 with Multitouch Option, from $1,860

A 21.5-inch widescreen display makes it easy for even big fingers to hit their mark. Tap where you want to enter text, and up pops Windows 7’s virtual keyboard, which you can enlarge to take advantage of the big screen. Poke at letters using either your fingers or the end of a pencil, since the camera-based optical touchscreen can detect when any opaque object comes into contact. MSI Wind Top All-in-One PC, from $730

Source of Information : Popular Science November 2009

Sunday, December 13, 2009


An eggshell membrane evolved into the organ that lets fetuses grow in the womb

More than 120 million years ago, while giant dinosaurs crashed through the forests in fearsome combat, a quieter drama unfolded in the Cretaceous underbrush: some lineage of hairy, diminutive creatures stopped laying eggs and gave birth to live young. They were the progenitors of nearly all modern mammals (the exceptions, platypuses and echidnas, still lay eggs to this day).

What makes mammals’ live birth possible is the unique organ called the placenta, which envelops the growing embryo and mediates the flow of nutrients and gases between it and the mother via the umbilical cord. The placenta seems to have evolved from the chorion, a thin membrane that lines the inside of eggshells and helps embryonic reptiles and birds draw oxygen. Kangaroos and other marsupials have and need only a rudimentary placenta: after a brief gestation, their bean-size babies finish their development while suckling in the mother’s pouch. Humans and most other mammals, however, require a placenta that can draw nutrients appropriately from the mother’s blood throughout an extended pregnancy.

Recent studies have shown that the sophistication of the placenta stems in part from how different genes within it are activated over time. Early in embryonic development, both mouse and human placentas rely on the same set of ancient cell-growth genes. But later in a pregnancy, even though the placenta does not obviously change in appearance, it invokes genes that are much newer and more species-specific. Thus, placentas are fine-tuned for the needs of mammals with different reproductive strategies: witness mice, which gestate for three weeks with a dozen or more pups, versus humans, who deliver one baby after nine months. To last more than a week or two, the placenta, which is primarily an organ of the fetus, must prevent the mother’s immune system from rejecting it.

To do so, the placenta may deploy a mercenary army of endogenous retroviruses— viral genes embedded in the mammal’s DNA. Scientists have observed such viruses budding from the placenta’s cell membranes. Viruses may play crucial roles in pacifying the mother’s immune system into accepting the placenta, just as they do in helping some tumors survive.

Source of Information : Scientific American September 2009

Saturday, December 12, 2009


Synonymous with life, it was born in the heart of stars

Although carbon has recently acquired a bad rap because of its association with greenhouse gases, it has also long been synonymous with biology. After all, “carbon-based life” is often taken to mean “life as we know it,” and “organic molecule” means “carbon-based molecule” even if no organism is involved.

But the sixth element of the periodic table—and the fourth most abundant in the universe—has not been around since the beginning of time. The big bang created only hydrogen, helium and traces of lithium. All other elements, including carbon, were forged later, mostly by nuclear fusion inside stars and supernova explosions.

At the humongous temperatures and pressures in a star’s core, atomic nuclei collide and fuse together into heavier ones. In a young star, it is mostly hydrogen fusing into helium. The merger of two helium nuclei, each carrying two protons and two neutrons, forms a beryllium nucleus that carries four of each. That isotope of beryllium, however, is unstable and tends to decay very quickly. So there would seem to be no way to form carbon or heavier elements.

But later in a star’s life, the core’s temperature rises above 100 million kelvins. Only then is beryllium produced fast enough for there to be a significant amount around at any time—and some chance that other helium nuclei will bump into those beryllium nuclei and produce carbon. More reactions may then occur, producing many other elements of the periodic table, up to iron.
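The two-step route described above is known as the triple-alpha process; written out in standard nuclear notation (the reactions are not spelled out in the article itself):

```latex
\begin{align*}
{}^{4}\mathrm{He} + {}^{4}\mathrm{He} &\rightleftharpoons {}^{8}\mathrm{Be}
  && \text{(unstable; decays in $\sim 10^{-16}$\,s)}\\
{}^{8}\mathrm{Be} + {}^{4}\mathrm{He} &\rightarrow {}^{12}\mathrm{C} + \gamma
\end{align*}
```

Because the beryllium-8 intermediate decays almost instantly, the second capture must happen during that fleeting window, which is why the process only runs at a meaningful rate above roughly 100 million kelvins.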

Once a star’s core runs out of nuclei to fuse, the outward pressure exerted by the nuclear fusion reaction subsides, and it collapses under its own weight. If a star is large enough, it will produce one of the universe’s most spectacular flares: a supernova explosion. Such cataclysms are good, because supernovae are what disperse carbon and the other elements (some of them forged in the explosions themselves) around the galaxy, where they will form new stars but also planets, life ... and greenhouse gases.

Source of Information : Scientific American September 2009

Friday, December 11, 2009

Buckyballs and Nanotubes

A once overlooked form of carbon may represent the future of technology

Fullerenes, a form of solid carbon distinct from diamond and graphite, owe their discovery to a supersonic jet—but not of the airplane variety. At Rice University in 1985 the late Richard E. Smalley, Robert F. Curl and Harold W. Kroto (visiting from the University of Sussex in England), along with graduate students James R. Heath and Sean C. O’Brien, were studying carbon with a powerful tool that Smalley had helped pioneer: supersonic jet laser spectroscopy. In this analytical system, a laser vaporizes bits of a sample; the resulting gas, which consists of clusters of atoms in various sizes, is then cooled with helium and piped into an evacuated chamber as a jet. The clusters expand supersonically, which cools and stabilizes them for study.

In their experiments with graphite, the Rice team recorded an abundance of carbon clusters that each contained the equivalent of 60 atoms. The finding puzzled them because they had no idea how 60 atoms could have arranged themselves so stably. They pondered the conundrum during two weeks of discussion, frequently over Mexican food, before hitting on the solution: one carbon atom must lie at each vertex of 12 pentagons and 20 hexagons arranged like the panels of a soccer ball. They named the molecule “buckminsterfullerene,” in tribute to Buckminster Fuller’s similar geodesic domes. Their discovery sparked research that led to elongated versions called carbon nanotubes, which Sumio Iijima of NEC described in a seminal 1991 paper. Both “buckyballs” and nanotubes could have been found earlier. In 1970 Eiji Osawa of Toyohashi University of Technology in Japan postulated that 60 carbon atoms could adopt a ball shape, but he did not actually make any. In 1952 two Russian researchers, L. V. Radushkevich and V. M. Lukyanovich, described producing nanoscale, tubular carbon filaments; published in Russian during the cold war, their paper received little attention in the West.

As it turned out, buckminsterfullerene is not hard to make. It forms naturally in many combustion processes involving carbon (even candle burning), and traces can be found in soot. Since the Rice discovery, researchers have devised simpler ways to create buckyballs and nanotubes, such as by triggering an electrical arc between two graphite electrodes or passing a hydrocarbon gas over a metal catalyst. Carbon nanotubes have drawn much scrutiny; among their many intriguing properties, they have the greatest tensile strength of any material known, able to resist 100 times more strain than typical structural steel. During an interview with Scientific American in 1993, Smalley, who died in 2005 from leukemia, remarked that he was not especially interested in profiting from fullerenes. “What I want most,” he said, “is to see that x number of years down the road, some of these babies are off doing good things.” Considering that nanotubes in particular are driving advances in electronics, energy, medicine and materials, his wish will very likely come true.

Source of Information : Scientific American September 2009

Thursday, December 10, 2009

Economic Thinking

Even apparently irrational human choices can make sense in terms of our inner logic

Much economic thinking rests on the assumption that individuals know what they want and that they make rational decisions to achieve it. Such behavior requires that they be able to rank the possible outcomes of their actions, also known as putting a value on things.

The value of a decision’s outcome is often not the same as its nominal dollar value. Say you are offered a fair bet: you have the same chance of doubling your $1 wager as you have of losing it. Purely rational individuals would be indifferent to the choice between playing and not playing: if they play such a bet every day, on average they will be no better and no worse off. But as Captain Kirk might tell Mr. Spock, reality often trumps logic. Or as Swiss mathematician Gabriel Cramer wrote in a 1728 letter to his colleague Nicolas Bernoulli, “The mathematicians estimate money in proportion to its quantity, and men of good sense in proportion to the usage that they may make of it.” Indeed, many people are “risk-averse”: they will forfeit their chance of winning $1 to be guaranteed of keeping the $1 they have, especially if it is their only one. They assign more value to the outcome of not playing than to the outcome of potentially losing. A risk-oriented person, on the other hand, will go for the thrill.

Cramer’s idea was later formalized by Bernoulli’s statistician cousin Daniel into the concept of expected utility, which is an implicit value given to the possible outcomes of a decision, as revealed by comparing them with the outcomes of a bet. Risk-averse and risk-oriented persons are not irrational; rather they make rational decisions based on their own expected utility.
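Bernoulli's resolution can be made concrete with a toy calculation. With a logarithmic utility function (Bernoulli's own choice; the starting wealth and stake below are invented for illustration), the fair bet from above has the same expected money but lower expected utility than standing pat, which is exactly risk aversion:

```python
import math

def expected_utility(outcomes, utility=math.log):
    """Expected utility of a gamble: sum of probability * u(wealth)
    over the possible outcomes, each given as (probability, wealth)."""
    return sum(p * utility(w) for p, w in outcomes)

wealth = 10.0                       # starting wealth (illustrative)
keep = [(1.0, wealth)]              # don't play: keep $10 for sure
fair_bet = [(0.5, wealth + 1),      # win $1
            (0.5, wealth - 1)]      # lose $1

# Expected *money* is identical ($10 either way), but with a concave
# (log) utility the sure thing wins -- the signature of risk aversion.
assert expected_utility(keep) > expected_utility(fair_bet)
```

Swapping in a convex utility function would flip the inequality, modeling the risk-oriented person who goes for the thrill.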

Economists generally assume that most people are rational most of the time, meaning that they know which decisions will maximize the expected utility of their choices. (Of course, doing so requires knowing how to evaluate risk wisely, which people do not always do well. AIG, anyone?) Some experiments, however, have shown that people are occasionally unable to rank outcomes in a consistent way. In 1953 American mathematician Kenneth May conducted an experiment in which college students were asked to evaluate three hypothetical marriage candidates, each of whom excelled in a different quality. The students picked intelligence over looks, looks over wealth and wealth over intelligence.

Source of Information : Scientific American September 2009

Wednesday, December 9, 2009

The Vibrator

One of the first electrical appliances made its way into the home as a purported medical device

For a sex toy, the vibrator’s roots seem amazingly antiseptic and clinical. Prescribed as a cure for the curious disease hysteria, the device for decades found clinical application as a supposed medical therapy.

Derived from the Greek word for “uterus,” hysteria occurred in women with pent-up sexual energy—or so healers and early physicians believed. Nuns, widows and spinsters were particularly susceptible, but by the Victorian era many married women had fallen prey as well. In the late 19th century a pair of prominent physicians estimated that three quarters of American women were at risk.

The prescription of clitoral orgasm as a treatment for hysteria dates to medical texts from the first century A.D. Hysterical women typically turned to doctors, who cured them with their hands by inducing a “paroxysm”—a term that hides what we now know as a sexual climax. But manual stimulation was time-consuming and (for the doctors at least) tedious. In The Technology of Orgasm: “Hysteria,” the Vibrator and Women’s Sexual Satisfaction, science historian Rachel P. Maines reports that physicians often passed the job off to midwives.

The invention of electricity made the task easier. Joseph Mortimer Granville patented an electromechanical vibrator in the early 1880s to relieve muscle aches, and doctors soon realized it might be used on other parts of the body. That innovation shortened treatment time for hysteria, fattening doctors’ wallets.

Patients were happy, too. The number of health spas offering vibration therapy multiplied, and the service was so popular vibrator manufacturers warned doctors not to overdo it with the modern appliance: if they met relentless patient demand, even mechanical vibration could be tiring. By the turn of the century needlework catalogues advertised models for women who wanted to try the treatment at home, making the vibrator the fifth electric appliance to arrive in the home—after the sewing machine, the fan, the teakettle and the toaster.

The vibrator’s legitimacy as a medical device declined after the 1920s, when Sigmund Freud correctly identified paroxysm as sexual. In 1952 the American Psychiatric Association dropped hysteria from its list of recognized conditions. When the vibrator was again popularized years later, women no longer needed the pretense of illness to justify a purchase.

Source of Information : Scientific American September 2009

Tuesday, December 8, 2009

Paper Money

A substitute for coins turned into a passport for globalization

Blame it on paper currency. The development of banknotes in China more than a millennium ago accelerated wealth accumulation, deficit spending and credit extension—paving the way for our present-day financial crisis. When Chinese merchants started using paper money in the Tang Dynasty (which spanned A.D. 618 to 907), they could hardly have foreseen such difficulties. At the time, the introduction of notes that could be redeemed for coins at the end of a long journey was a boon.

Paper cut down on traders’ loads, enabling them to transport large sums of money over sizable distances. The practice caught on nationwide in the 10th century, when a copper shortage prompted the emperor of the Song Dynasty to issue the world’s first circulating notes. A string of earlier Chinese inventions—including paper, ink and block printing—made it all possible. When Marco Polo visited the Mongol Empire in the 1200s, he was impressed by Kublai Khan’s sophisticated mints, connecting them to an apparently booming economy. (The explorer did not pick up on signs of the inflation brought on by the rapid printing of notes.) Later, faster circulation of currency allowed European nations to siphon resources out of Asia and Africa, fundamentally altering the global balance of power.

Today paper money means that wealth flows back to the developing world as well. Financial convertibility makes it possible for China to buy up U.S. bonds, financing debts that may never be paid back. But it also escalates the pace of wealth accumulation. Paper currency—and its modern heir, electronic trading—lay behind the recent commodities and housing bubbles, contributing to last year’s financial crash. In today’s recession, things have come full circle. Amid concerns about financial stability, some investors are holding on to precious metals. A backlash against more abstract forms of currency means a return to our economic roots: centuries after our conversion to paper, the price of gold has soared.

Source of Information : Scientific American September 2009

Monday, December 7, 2009

Legs, Feet and Toes

The essential parts for walking on land evolved in water

The evolution of terrestrial creatures from aquatic fish with fins may have begun with the need for a breath of fresh air. Animals with limbs, feet and toes—a group known as the tetrapods (literally, “four-footed”)—arose between 380 million and 375 million years ago. Scientists long believed that limbs evolved as an adaptation to life on terra firma. But recent discoveries have revealed that some of the key changes involved in the fin-to-limb transition occurred while the ancestors of tetrapods were still living in the water. Tetrapod evolution experts such as Jennifer Clack of the University of Cambridge hypothesize that these early modifications to the bones and joint surfaces of the pectoral fins might have benefited tetrapod ancestors in two key ways. First, they could have allowed the creatures, which lived in the plant-choked shallows, to perform a push-up that raised their heads out of the oxygen-poor water for a breather. (Changes in other parts of the skeleton, such as the skull and neck, also facilitated air breathing.) The protolimbs could have also helped these animals to propel themselves along the bottom and to steady themselves against the current while waiting to ambush prey.

Researchers once thought that the bones making up feet and toes were an evolutionary innovation unique to the tetrapods. But over the past few years analyses of tetrapod forerunners, such as the Tiktaalik fossil unveiled in 2006, have revealed that these bones derive directly from bones in the fish fin. Curiously, the earliest tetrapods and tetrapodlike fish had feet with between six and eight digits, rather than the five of most modern tetrapods. Why tetrapods ultimately evolved a five-digit foot is uncertain, but this arrangement may have provided the ankle joint with the stability and flexibility needed for walking.

Source of Information : Scientific American September 2009

Sunday, December 6, 2009


Their origin is one of the deepest questions in modern physics

Sundials and water clocks are as old as civilization. Mechanical clocks—and, with them, the word “clock”—go back to 13th-century Europe. But these contraptions do nothing that nature did not already do. The spinning Earth is a clock. A dividing cell is a clock. Radioactive isotopes are clocks. So the origin of clocks is a question not for history but for physics, and there the trouble begins.

You might innocently think of clocks as things that tell time, but according to both of the pillars of modern physics, time is not something you can measure. Quantum theory describes how the world changes in time. We observe those changes and infer the passage of time, but time itself is intangible. Einstein’s theory of general relativity goes further and says that time has no objective meaning. The world does not, in fact, change in time; it is a gigantic stopped clock. This freaky revelation is known as the problem of frozen time or simply the problem of time.

If clocks do not tell time, then what do they tell? A leading idea is that they tell correlations among the universe’s components—the fact, for example, that if Earth is at a certain position in its orbit, the other planets are at specific positions in theirs. Physicist Julian Barbour developed this relational view of time in the winning entry for the Foundational Questions Institute essay contest last year. He argued that because of these cosmic correlations, each piece of the universe is a microcosm of the whole. We can use Earth’s orbit as a reference for reconstructing the positions of the other planets. In other words, Earth’s orbit serves as a clock. It does not tell time but rather the positions of the other planets.

By Barbour’s reasoning, all clocks are approximate; no single piece of a system can fully capture the whole. Any clock eventually skips a beat, runs backward or seizes up. The only true clock is the universe itself. In a sense, then, clocks have no origin. They have been here all along. They are what make the concept of “origin” possible to begin with.

Source of Information : Scientific American September 2009

Saturday, December 5, 2009


Preparing foods with fire may have made us humans what we are

In a world without cooking, we would have to spend half our days chewing raw food, much as the chimpanzee does. Cooking not only makes food more delicious, it also softens food and breaks starches and proteins into more digestible molecules, allowing us to enjoy our meals more readily and to draw more nutrition from them. According to Harvard University biological anthropologist Richard Wrangham, cooking’s biggest payoff is that it leaves us with more energy and time to devote to other things—such as fueling bigger brains, forming social relationships and creating divisions of labor. Ultimately, Wrangham believes, cooking made us human.

Archaeological evidence is mixed as to when our ancestors started building controlled fires—a prerequisite for cooking—but Wrangham argues that the biological evidence is indisputable: we must have first enjoyed the smell of a good roast 1.9 million years ago. That is when a species of early human called Homo erectus appeared—and those hominids had 50 percent larger skulls and smaller pelvises and rib cages than their ancestors, suggesting bigger brains and smaller abdomens.

They also had much smaller teeth. It makes sense that cooking “should have left a huge signal in the fossil record,” Wrangham says, and, quite simply, “there’s no other time that fits.” Never before and never again during the course of human evolution did our teeth, skull and pelvis change size so drastically. If cooking had arisen at a different point, he says, we would be left with a big mystery: “How come cooking was adopted and didn’t change us?”

Wrangham also has a theory as to how controlled fires, and thus cooking, came about. He speculates that H. erectus’s closest ancestors, the australopithecines, ate raw meat but hammered it to make it flatter and easier to chew, rather like steak carpaccio. “I’ve tried hammering meat with rocks, and what happens? You get sparks,” he says. “Time and time again this happens, and eventually you figure out how to control the fire.”

Source of Information : Scientific American September 2009

Friday, December 4, 2009

An inquisitive Swiss chemist sent himself on the first acid trip

The medical sciences can invoke a long and storied tradition of self-experimentation. Typhoid vaccine, cardiac catheterization, even electrodes implanted in the nervous system came about because scientists recruited themselves as their own guinea pigs. One of the most memorable instances happened on April 16, 1943, when Swiss chemist Albert Hofmann inadvertently inhaled or ingested a compound derived from a crop fungus that went by the chemical name of lysergic acid diethylamide, or LSD-25. He subsequently entered into “a not unpleasant intoxicated-like condition, characterized by an extremely stimulated imagination,” he recalled in his 1979 autobiography, LSD, My Problem Child. “In a dreamlike state, with eyes closed ...” he continued, “I perceived an uninterrupted stream of fantastic pictures, extraordinary shapes with intense, kaleidoscopic play of colors.” Ever the intrepid researcher, Hofmann decided to probe further the psychotropic properties of the substance, which Sandoz Laboratories had previously developed and then abandoned as a possible stimulant for breathing and circulation. A few days after his first trip, he carefully apportioned a 0.25-milligram dose; within a short time the Sandoz laboratory where he worked again became distorted and strange. The words “desire to laugh” were the last ones scrawled in his research journal that day. His inebriated state prompted him to leave work early.

The bicycle ride home—during which he could not tell that he was moving—has given April 19 the designation of “bicycle day” among LSD aficionados everywhere. Hofmann went on to use LSD hundreds of times more—and his creation became a ticket into the altered mental states embraced by the counterculture. Though subsequently banned, the drug continues to attract intense interest from investigators who are examining therapeutic uses, including the possibility that it may help the terminally ill reconcile themselves to their mortality.

Source of Information : Scientific American September 2009

Thursday, December 3, 2009

The Stirrup

Invention of the stirrup may rival that of the longbow and gunpowder

A slight alteration to the custom of riding a horse may have dramatically changed the way wars were fought. Humans rode bareback or mounted horses with a simple blanket after they first domesticated the animals, thousands of years after the dawn of agriculture. The leather saddle first straddled a horse’s back in China perhaps as far back as the third century B.C. But the saddle was only one step toward transforming the use of cavalry as a means of waging war. Climbing onto a horse while bearing weapons had long presented hazards of its own. Cambyses II, a Persian king in the sixth century B.C., died after stabbing himself as he vaulted onto a horse.

By the fourth century A.D., the Chinese had begun to fashion foot supports from cast iron or bronze. What made the stirrup (derived from the Old English word for a climbing rope) such an important innovation was that it allowed the rider immensely greater control in horsemanship: rider and animal became almost extensions of each other. It was possible to shoot arrows accurately while the horse dashed ahead at full gallop. A cavalryman could brace himself in the saddle and, with a lance positioned under his arm, use the tremendous force of the charging horse to strike a stunned enemy. The horse's sheer mass and quickness became an implement of the cavalry's weaponry—and a powerful intimidation factor.

The fierce Avar tribe may have brought stirrups to the West when it arrived in Byzantium in the sixth century A.D. The Byzantine Empire soon adopted the stirrup—and later the Franks embraced it as well. The societal impact of this saddle accoutrement has intrigued historians for decades. Some scholars suggested that feudalism emerged in Europe because mounted warfare, facilitated by the stirrup, became vastly more effective for the cavalry of the Franks. An aristocratic class emerged that received land for its service in the cavalry.

Others, on the opposite side of what is known as the Great Stirrup Controversy, argue that this interpretation of events is baseless. Whether the stirrups were the single enabling technology that brought about the rise of feudalism remains in doubt. Unquestionably, though, this small extension from a saddle was an innovation that transformed the craft of war forever.

Source of Information : Scientific American September 2009

Wednesday, December 2, 2009


When a cell’s controls break down, chaos is unleashed

Multicellularity has its advantages, but they come at a price. The division of labor in a complex organism means that every cell must perform its job and only its job, so an elaborate regulatory system evolved to keep cells in line. Nearly every one of the trillion or so cells in the human body, for instance, contains a full copy of the genome—the complete instruction set for building and maintaining a human being. Tight controls on which genes are activated, and when, inside any given cell determine that cell’s behavior and identity. A healthy skin cell executes only the genetic commands needed to fulfill its role in the skin. It respects neighboring cells’ boundaries and cues, and when the regulatory system permits, it divides to generate just enough new cells to repair a wound, never more.

Greek physician Hippocrates first used the term karkinos, or “crab,” in the fourth century B.C. to describe malignant tumors because their tendril-like projections into surrounding tissue reminded him of the arms of the crustacean. In Latin, the word for crab was cancer, and by the second century A.D. the great Roman physician Galen knew those spiny arms were just one sign that normal body tissues had gone out of control. He attributed the dysfunction to an excess of black bile. Modern scientists see a breakdown of the cellular regulatory system in the hallmarks of cancer: runaway growth, invasion of neighboring tissue and metastasis to far-off parts of the body.

The proteins and nucleic acids that control gene activity are themselves encoded by genes, so cancers begin with mutations that either disable key genes or, conversely, cause them to be overactive. Those changes initiate a cascade of imbalances that knock out downstream regulatory processes, and soon the cell is careening toward malignancy. So far efforts to identify the exact combination of mutations necessary to ignite a particular type of cancer—in the brain, the breast or elsewhere—have not yielded clear patterns. Once regulatory networks are destabilized, they can break down in ways as complex and diverse as the molecular pathways they regulate, making the precise origin of each instance of cancer unique. For all their internal chaos, though, cancer cells share some characteristics with stem cells—those primal body-building cells that are exempt from many constraints on normal cells. One important difference is that in stem cells, the full potential of the genome is controlled; in cancer, it is unleashed.

Source of Information : Scientific American September 2009

Tuesday, December 1, 2009


As you age, the style of your skin changes. You start with a tight-fitting sports jacket, and you wind up with something closer to a pair of baggy pajamas. This transition is quite traumatic for many people, as our culture considers it deeply embarrassing for one’s body to betray any sign that it’s a day over 18. If given the choice to look wise and experienced or young and nubile, most of us would choose the baby face every time. Many factors work together to cause midlife wrinkles and the pruniness of old age. As the years tick by, the collagen and elastin fibers in your dermis—those components that make your skin flexible and resilient—begin to break down, loosening their hold on your skin. Unfortunately, there’s not much you can do to intervene. But if you must try, here are a few wrinkle-avoiding strategies:

• Choose the right parents. Your genes have the greatest say in deciding how elastic your skin is, and how long it stays relatively smooth and unwrinkled. That’s why some people in their sixties look like they’re in their thirties, much to the chagrin of everyone around them. If you want a quick prediction of how your skin will fare over the next few decades, look at your parents. And if this leaves you too depressed to continue through the rest of this chapter, consider the possibility that you were adopted from a passing circus.

• Don’t use your face. Many of the deeper grooves in your face are usage lines that mark where your skin folds when you scowl, smile, frown, or look utterly confused. To reduce the rate at which these wrinkles form, stop expressing any of these emotions. Or just accept the fact that wrinkles add character to your face.

• Don’t smoke. Cigarette smoke damages skin, causing it to wrinkle prematurely. This probably happens because cigarette smoke reduces blood flow to your skin, starving it of important nutrients. And while quitting the habit may improve your lungs, it won’t repair skin that’s already sagging.

• Limit sun exposure. Ultraviolet light (both UVA and UVB) breaks down the collagen in your skin. This weathering process speeds up aging and increases wrinkles. To prevent sun damage, slap on some sunscreen and follow the good sun habits.

These techniques may slow the rate at which your skin becomes progressively more wrinkled, but what can you do to remove the wrinkles you already have?

There’s certainly no shortage of cosmetic products that promise age-conquering miracles. However, most skin creams do relatively little. On the practical side, they may moisturize your skin (as dry skin looks older) and shield it from sun damage (with sunscreen). The effect of other ingredients is less clear-cut. Although many anti-aging skin creams are packed with anti-inflammatory ingredients, their concentrations are low and there’s little independent research to suggest that they actually do anything. Similarly, vitamins, collagen, antioxidants, and other useful-sounding substances are unlikely ever to reach the deeper dermis, which is where wrinkling takes place. Some creams contain ingredients that obscure fine wrinkles or scatter light, giving skin a “soft-focus” effect. Whatever the case, these creams can only hide aging rather than make lasting improvements. And lotion lovers beware: Some ingredients can actually aggravate sensitive skin or clog pores, exacerbating acne.

More drastically, cosmetic procedures like chemical peels, laser resurfacing, and microdermabrasion can improve wrinkles by removing excess dead skin in a strategic way. The effect is temporary, usually limited to fine lines rather than deep wrinkles, and may cause redness and peeling. For all but the most wrinkle-averse, it hardly seems worth the trouble. The truth is that if you live in your body for half a century, it will gradually develop the creases of use and abuse. The over-80 crowd will tell you that the relentless march of time leaves the human face with more grooves than a 45-rpm record (but first they’ll have to explain what a 45-rpm record is). The real decision you have to make is not how to fight wrinkles, but whether you want to accept them with dignity or become an increasingly desperate chaser of youth.

A variation of the don’t-use-your-face strategy is Botox injections, which paralyze the facial muscles using a highly toxic nerve agent. (It’s the same substance that causes death by paralysis in improperly canned foods.) A Botoxed face temporarily loses some of its ability to move, and a face that can’t move has a hard time furrowing up a decent wrinkle. What you get is a sort of blander, wax museum version of your face. If you prefer being wrinkle-free to being able to move your forehead, Botox just might be your ticket.

Source of Information : Oreilly - Your Body Missing Manual (08-2009)