Saturday, October 31, 2009

Span of control

Engineering: A new generation of “smart” bridges uses sensors to detect structural problems and warn of impending danger

WHEN an eight-lane steel-truss-arch bridge across the Mississippi River in Minneapolis collapsed during the evening rush hour on August 1st 2007, 13 people were killed and 145 were injured. There had been no warning. The bridge was 40 years old but had a life expectancy of 50 years. The central span suddenly gave way after the gusset plates that connected the steel beams buckled and fractured, dropping the bridge into the river.

In the wake of the catastrophe, there were calls to harness technology to avoid similar mishaps. The St Anthony Falls bridge, which opened on September 18th 2008 and replaces the collapsed structure, should do just that. It has an embedded early-warning system made of hundreds of sensors. They include wire and fibre-optic strain and displacement gauges, accelerometers, potentiometers and corrosion sensors that have been built into the span to monitor it for structural weaknesses, such as corroded concrete and overly strained joints.

On top of this, temperature sensors embedded in the tarmac activate a system that sprays antifreeze on the road when it gets too cold, and a traffic-monitoring system alerts the Minnesota Department of Transportation to divert traffic in the event of an accident or overcrowding. The cost of all this technology was around $1m, less than 1% of the $234m it cost to build the bridge.

The new Minneapolis bridge joins a handful of “smart” bridges that have built-in sensors to monitor their health. Another example is the six-lane Charilaos Trikoupis bridge in Greece, which spans the Gulf of Corinth, linking the town of Rio on the Peloponnese peninsula to Antirrio on the mainland. This 3km-long bridge, which was opened in 2004, has roughly 300 sensors that alert its operators if an earthquake or high winds warrant it being shut to traffic, as well as monitoring its overall health. These sensors have already detected some abnormal vibrations in the cables holding the bridge, which led engineers to install additional weights as dampeners.

The next generation of sensors to monitor bridge health will be even more sophisticated. For one thing, they will be wireless, which will make installing them a lot cheaper.

Jerome Lynch of the University of Michigan, Ann Arbor, is the chief researcher on a project intended to help design the next generation of monitoring systems for bridges. He and his colleagues are looking at how to make a cement-based sensing skin that can detect excessive strain in bridges. Individual sensors, says Dr Lynch, are not ideal because the initial cracks in a bridge may not occur at the point the sensor is placed. A continuous skin would solve this problem. He is also exploring a paint-like substance made of carbon nanotubes that can be painted onto bridges to detect corrosion and cracks. Since carbon nanotubes conduct electricity, sending a current through the paint would help engineers to detect structural weakness through changes in the paint’s electrical properties.
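
Such a skin or paint would, in effect, turn the whole surface into a grid of resistance measurements. As a purely hypothetical illustration of the sort of processing it might feed (the grid, readings and 5% threshold below are invented, not Dr Lynch’s design), damage would show up as patches whose electrical properties drift away from a healthy baseline:

# Hypothetical sketch: flag patches of a painted panel whose resistance
# has drifted from the baseline recorded when the structure was healthy.
def flag_damage(baseline, current, tolerance=0.05):
    suspects = []
    for row in range(len(baseline)):
        for col in range(len(baseline[row])):
            change = (current[row][col] - baseline[row][col]) / baseline[row][col]
            if abs(change) > tolerance:            # more than 5% drift
                suspects.append((row, col, round(change * 100, 1)))
    return suspects                                # (patch location, % change)
baseline = [[100.0, 101.0], [99.5, 100.2]]         # ohms, measured when the span was new
current = [[100.4, 108.9], [99.6, 100.1]]          # today's readings
print(flag_damage(baseline, current))              # -> [(0, 1, 7.8)]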

The researchers are also developing sensors that could be placed on vehicles that regularly cross a bridge, such as city buses and police cars. These could measure how the bridge responds to the vehicle moving across it, and report any suspicious changes.

Some civil engineers are sceptical about whether such instrumentation is warranted. Emin Aktan, director of the Intelligent Infrastructure and Transport Safety Institute at Drexel University in Philadelphia, points out that although the sensors generate a huge amount of data, civil engineers do not yet know how to interpret it, because nobody knows what such readings looked like in the weeks and days before previous bridges failed. It will take a couple of decades to arrive at a point when bridge operators can use such data intelligently, he predicts. Meanwhile, the Obama administration’s stimulus plan has earmarked $27 billion for building and repairing roads and bridges. Just 1% of that would pay for a lot of sensors.

Source of Information : The Economist 2009-09-05

Friday, October 30, 2009

Tilting in the breeze

Energy: A novel design for a floating wind-turbine, which could reduce the cost of offshore wind-power, has been connected to the electricity grid

FAR out to sea, the wind blows faster than it does near the coast. A turbine placed there would thus generate more power than its inshore or onshore cousins. But attempts to build power plants in such places have foundered because the water is generally too deep to attach a traditional turbine’s tower to the seabed.

One way round this would be to put the turbine on a floating platform, tethered with cables to the seabed. And that is what StatoilHydro, a Norwegian energy company, and Siemens, a German engineering firm, have done. The first of their floating offshore turbines has just started a two-year test period generating about 1 megawatt of electricity—enough to supply 1,600 households.

The Hywind is the first large turbine to be deployed in water more than 30 metres deep. The depth at the prototype’s location, 10 kilometres (six miles) south-west of Karmoy, is 220 metres. But the turbine is designed to operate in water up to 700 metres deep, meaning it could be put anywhere in the North Sea. Three cables running to the seabed prevent it from floating away.

It is an impressive sight. Its three blades have a total span of 82 metres and, together with the tower that supports them, weigh 234 tonnes. That makes the Hywind about the same size as a large traditional offshore turbine.

Even though it is tethered, and sits on a conical steel buoy, the motion of the sea causes the tower to sway slowly from side to side. This swaying places stress on the structure, and that has to be compensated for by a computer system that tweaks the pitch of the rotor blades to keep them facing in the right direction as the tower rocks and rolls to the rhythm of the waves. That both improves power production and minimises the strain on the blades and the tower. The software that controls this process measures the success of previous changes to the rotor angle and uses that information to fine-tune future attempts to dampen wave-induced movement.
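
The details of Hywind’s control software are not public, but the logic described (damp the sway, then adjust according to how well the last correction worked) can be sketched roughly as follows; the gains, update rule and numbers are invented for illustration:

# Hypothetical sketch of a self-tuning damping loop: pitch the blades
# against the measured fore-aft sway rate, and adapt the gain according
# to whether the previous correction actually reduced the sway.
def pitch_corrections(sway_rates, gain=0.5, step=0.05):
    corrections = []
    previous = None
    for rate in sway_rates:                    # tower sway rate, e.g. degrees per second
        if previous is not None:
            if abs(rate) < abs(previous):      # last nudge helped: trust it a little more
                gain += step
            else:                              # it did not: back off
                gain = max(0.0, gain - step)
        corrections.append(-gain * rate)       # pitch against the motion to damp it
        previous = rate
    return corrections
print(pitch_corrections([1.0, 0.8, 0.9, 0.5])) # -> roughly [-0.5, -0.44, -0.45, -0.275]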

If all works well, the potential is huge. Henrik Stiesdal of Siemens’s wind-power business unit reckons the whole of Europe could be powered using offshore wind, but that competition for space near the coast will make this difficult to achieve if only inshore sites are available. Siting turbines within view of coastlines causes conflicts with shipping, the armed forces, fishermen and conservationists. But floating turbines moored far out to sea could avoid such problems. That, plus the higher wind speeds which mean that a deep-water turbine could generate much more power than a shallow-water one, makes the sort of technology that the Hywind is pioneering an attractive idea.

One obvious drawback is that connecting deep-water turbines to the electrical grid will be expensive. But the biggest expense—the one that will make or break far-offshore wind power—will probably be maintenance. In deep seas, it will not be possible to use repair vessels that can jack themselves up on the seabed for stability, like the machines that repair shallow-water turbines. Instead maintenance will be possible only in good weather. If the Hywind turbine turns out to need frequent repairs, the cost of leaving it idle while waiting for fair weather, and of ferrying the necessary people and equipment to and fro, will outweigh the gains from generating more power. But if all goes according to plan, and the new turbine does not need such ministrations, it would put wind in the sails of far-offshore power generation.

Source of Information : The Economist 2009-09-05

Thursday, October 29, 2009

Keeping pirates at bay

Policing the internet: The music industry has concluded that lawsuits alone are not the way to discourage online piracy

THREE big court cases this year—one in Europe and two in America—have pitted music-industry lawyers against people accused of online piracy. The industry prevailed in each case. But the three trials may mark the end of its efforts to use the courts to stop piracy, for they highlighted the limits of this approach.

The European case concerned the Pirate Bay, one of the world’s largest and most notorious file-sharing hubs. The website does not actually store music, video and other files, but acts as a central directory that helps users locate particular files on BitTorrent, a popular file-sharing network. Swedish police began investigating the Pirate Bay in 2003, and charges were filed against four men involved in running it in 2008. When the trial began in February 2009, they claimed the site was merely a search engine, like Google, which also returns links to illegal material in some cases. One defendant, Peter Sunde, said a guilty verdict would “be a huge mistake for the future of the internet…it’s quite obvious which side is the good side.”

The court agreed that it was obvious and found the four men guilty, fining them a combined SKr30m ($3.6m) and sentencing them each to a year in jail. Despite tough talk from the defendants, they appear to have tired of legal entanglements: in June another firm said it would buy the Pirate Bay’s internet address for SKr60m and open a legal music site.

The Pirate Bay is the latest in a long list of file-sharing services, from Napster to Grokster to KaZaA, to have come under assault from the media giants. If it closes, some other site will emerge to take its place; the music industry’s victories, in short, are never final. Cases like this also provoke a backlash against the music industry, though in Sweden it took an unusual form. In the European elections in June, the Pirate Party won 7.1% of the Swedish vote, making it the fifth-largest party in the country and earning it a seat in the European Parliament. “All non-commercial copying and use should be completely free,” says its manifesto.



So much for that plan
The Recording Industry Association of America (RIAA) has pursued another legal avenue against online piracy: targeting individual users of file-sharing hubs. Over the years it has accused 18,000 American internet users of engaging in illegal file-sharing, demanding settlements of $4,000 on average. Facing the scary prospect of a federal copyright-infringement lawsuit, nearly everyone settled; but two cases have proceeded to trial. The first involved Jammie Thomas-Rasset, a single mother from Minnesota who was accused of sharing 24 songs using KaZaA in 2005. After a trial in 2007, a jury ruled against her and awarded the record companies almost $10,000 per song in statutory damages.

Critics of the RIAA’s campaign pointed out that if Ms Thomas-Rasset had stolen a handful of CDs from Wal-Mart, she would not have faced such severe penalties. The judge threw out the verdict, saying that he had erred by agreeing to a particular “jury instruction” (guidance to the jury on how they should decide a case) that had been backed by the RIAA. He then went further, calling the damages “wholly disproportionate” and asking Congress to change the law, on the basis that Ms Thomas-Rasset was an individual who had not sought to profit from piracy.

But at a second trial, which concluded in June 2009, Ms Thomas-Rasset was found guilty again. To gasps from the defendant and from other observers, the jury awarded even higher damages of $80,000 per song, or $1.92m in total. One record label’s lawyer admitted that even he was shocked. In July, in a separate case brought against Joel Tenenbaum, a student at Boston University, a jury ordered him to pay damages of $675,000 for sharing 30 songs.

According to Steven Marks, general counsel for the RIAA, the main point of pursuing these sorts of cases is to make other internet users aware that file-sharing of copyrighted material is illegal. Mr Marks admits that the legal campaign has not done much to reduce file-sharing, but how much worse might things be, he wonders, if the industry had done nothing? This year’s cases, and other examples (such as the RIAA’s attempt in 2005 to sue a grandmother, who had just died, for file-sharing), certainly generate headlines—but those headlines can also make the industry look bad, even to people who agree that piracy is wrong.

That helps explain why, in late 2008, the RIAA abandoned the idea of suing individuals for file-sharing. Instead it is now backing another approach that seems to be gaining traction around the world, called “graduated response”. This is an effort to get internet service providers to play a greater role in the fight against piracy. As its name indicates, it involves ratcheting up the pressure on users of file-sharing software by sending them warnings by e-mail and letter and then restricting their internet access. In its strictest form, proposed in France, those accused three times of piracy would have their internet access cut off and their names placed on a national blacklist to prevent them signing up with another service provider. Other versions of the scheme propose throttling broadband-connection speeds.
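
In outline, the scheme is simply an escalating sequence of sanctions tied to a count of allegations against each subscriber. A schematic sketch (the function and data structure are mine; the steps and their order follow the description above):

# Schematic of "graduated response": each fresh allegation against a
# subscriber triggers the next, harsher step in the sequence.
def next_step(prior_notices):
    if prior_notices == 0:
        return "send a warning e-mail"
    if prior_notices == 1:
        return "send a warning letter"
    return "restrict or cut off internet access"    # the strictest, French-style sanction
notices = {}                                        # subscriber -> allegations so far
def report_allegation(subscriber):
    action = next_step(notices.get(subscriber, 0))
    notices[subscriber] = notices.get(subscriber, 0) + 1
    return action
for _ in range(3):
    print(report_allegation("subscriber-42"))       # e-mail, then letter, then disconnection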

All this would be much quicker and cheaper than going to court and does not involve absurd awards of damages and their attendant bad publicity. A British study found that most file-sharers will stop after receiving a warning—but only if it is backed up by the threat of sanctions.

It sounds promising, from the industry’s perspective, but graduated response has drawbacks of its own. In New Zealand the government scrapped the idea before implementation, and in Britain the idea of cutting off access has been ruled out. In France the first draft of the law was savaged by the Constitutional Council over concerns that internet users would be presumed guilty rather than innocent. Internet service providers are opposed to being forced to act as copyright police. Even the European Parliament has weighed in, criticising any sanctions imposed without judicial oversight. But the industry is optimistic that the scheme will be implemented in some form. It does not need to make piracy impossible—just less convenient than the legal alternatives.

But many existing sources of legal music have not offered what file-sharers want. “In my view, growing internet piracy is a vote of no confidence in existing business models,” said Viviane Reding, the European commissioner for the information society, in July.

The industry is desperately searching for better business models, and is offering its catalogue at low rates to upstarts that could never have acquired such rights a decade ago. Services such as Pandora, Spotify and we7 that stream free music, supported by advertising, are becoming popular. Most innovative are the plans to offer unlimited downloads for a flat fee. British internet providers are keen to offer such a service, the cost of which would be rolled into the monthly bill. Similarly, Nokia’s “Comes With Music” scheme includes a year’s downloads in the price of a mobile phone. The music industry will not abandon legal measures against piracy altogether. But solving the problem will require carrots as well as sticks.


Source of Information : The Economist 2009-09-05

Wednesday, October 28, 2009

AIDS treatment - Almost halfway there

The routine use of anti-AIDS drugs is spreading

MORE news from the battle against AIDS. A report published jointly by the World Health Organisation, the United Nations Children’s Fund and UNAIDS says that over 4m infected people in poor and middle-income countries are now on drugs intended to keep the virus under control. That is 1m more than last year. More than 5m others who might benefit from those drugs are not on them, however, so there is no room for complacency. But the latest data suggest that, with 42% of those who need the drugs actually receiving them, significant progress is being made.

Encouragingly, the proportion covered in sub-Saharan Africa, the worst-affected area and the one with the least developed health infrastructure, is slightly higher than the global average, at 44%. And women, long regarded by AIDS activists as the epidemic’s forgotten sex, are doing better than men. They comprise 55% of those in need, but form 60% of those receiving therapy.

The routine testing of pregnant women in the countries covered by the report is also expanding. In 2007, 15% were tested. In 2008 that figure was 21%, although, in line with rates for the rest of the population, only 45% of pregnant women who did turn out to be infected received drugs to control their infection. The fight, then, is by no means over. But the good guys seem to be winning.

Source of Information : The Economist 2009-10-03

Tuesday, October 27, 2009

Quantum mechanics - Schrödinger's virus

An old thought experiment may soon be realized

ONE of the most famous unperformed experiments in science is Schrödinger’s cat. In 1935 Erwin Schrödinger, who was one of the pioneers of quantum mechanics, imagined putting a cat, a flask of Prussic acid, a radioactive atom, a Geiger counter, an electric relay and a hammer in a sealed box. If the atom decays, the Geiger counter detects the radiation and sends a signal that trips the relay, which releases the hammer, which smashes the flask and poisons the cat.

The point of the experiment is that radioactive decay is a quantum process. The chance of the atom decaying in any given period is known. Whether it has actually decayed (and thus whether the cat is alive or dead) is not—at least until the box is opened. The animal exists, in the argot of the subject, in a “superposition” in which it is both alive and dead at the same time.
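
In the standard notation of quantum mechanics (a textbook formulation, not one taken from the article), the combined state of atom and cat before the box is opened would be written as

\[ |\psi\rangle \;=\; \alpha\,|\text{undecayed}\rangle|\text{alive}\rangle \;+\; \beta\,|\text{decayed}\rangle|\text{dead}\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1, \]

where $|\beta|^2 = 1 - e^{-\lambda t}$ is the probability, growing with time $t$ and the atom's decay constant $\lambda$, that opening the box reveals a dead cat. Until the box is opened, neither outcome holds on its own.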

Schrödinger’s intention was to illuminate the paradoxes of the quantum world. But superposition (the existence of a thing in two or more quantum states simultaneously) is real and is, for example, the basis of quantum computing. A pair of researchers at the Max Planck Institute for Quantum Optics in Garching, Germany, now propose to do what Schrödinger could not, and put a living organism into a state of quantum superposition.

The organism Ignacio Cirac and Oriol Romero-Isart have in mind is the flu virus. Pedants might object that viruses are not truly alive, but that is a philosophical rather than a naturalistic argument, for they have genes and are capable of reproduction—a capability they lose if they are damaged. The reason for choosing a virus is that it is small. Actual superposition (as opposed to the cat-in-a-box sort) is easiest with small objects, for which there are fewer pathways along which the superposition can break down. Physicists have already put photons, electrons, atoms and even entire molecules into such a state and measured the outcome. In the view of Dr Cirac and Dr Romero-Isart, a virus is just a particularly large molecule, so existing techniques should work on it.

The other thing that helps maintain superposition is low temperature. The less something jiggles about because of heat-induced vibration, the longer it can remain superposed. Dr Cirac and Dr Romero-Isart therefore propose putting the virus inside a microscopic cavity and cooling it down to its state of lowest energy (ground state, in physics parlance) using a piece of apparatus known as a laser trap. This ingenious technique—which won its inventors, one of whom was Steven Chu, now America’s energy secretary, a Nobel prize—works by bombarding an object with laser light at a frequency just below that which it would readily absorb and re-emit if it were stationary. This slows down the movement, and hence the temperature, of its atoms to a fraction of a degree above absolute zero.
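
Why the detuning matters is a standard piece of laser-cooling physics rather than something spelled out in the article: an atom moving with speed $v$ towards a beam of frequency $\omega_L = \omega_0 - \delta$, slightly below its resonance $\omega_0$, sees the light Doppler-shifted up to

\[ \omega' \;\approx\; \omega_L\Bigl(1 + \frac{v}{c}\Bigr) \;\approx\; \omega_0 \quad \text{when} \quad v \approx \frac{\delta c}{\omega_0}, \]

so it preferentially absorbs photons travelling against its motion; each absorption delivers a momentum kick $\hbar k$ that slows it, while the subsequent re-emission goes in a random direction and averages out to nothing.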

Once that is done, another laser pulse will jostle the virus from its ground state into an excited state, just as a single atom is excited by moving one of its electrons from a lower to a higher orbital. By properly applying this pulse, Dr Cirac believes it will be possible to leave the virus in a superposition of the ground and excited states.

For that to work, however, the virus will need to have certain physical properties. It will have to be an insulator and to be transparent to the relevant laser light. And it will have to be able to survive in a vacuum. Such viruses do exist. The influenza virus is one example. Its resilience is legendary. It can survive exposure to a vacuum, and it seems to be an insulator—which is why the researchers have chosen it. And if the experiment works on a virus, they hope to move on to something that is indisputably alive: a tardigrade.

Tardigrades, or water bears, are tiny but remarkably resilient invertebrates. They can survive in vacuums and at very low temperatures. And, although the difference between ground state and an excited state is not quite the difference between life and death, Schrödinger would no doubt have been amused that his 70-year-old jeu d’esprit has provoked such an earnest following.

Source of Information : The Economist 2009-10-03

Monday, October 26, 2009

Portable dialysis machines

Kidney machines go mobile

DIALYSIS is not as bad as dying, but it is pretty unpleasant, nonetheless. It involves being hooked up to a huge machine, three times a week, in order to have your blood cleansed of waste that would normally be voided, via the kidneys, as urine. To make matters worse, three times a week does not appear to be enough. Research now suggests that daily dialysis is better. But who wants to be tied to a machine—often in a hospital or a clinic—for hours every day for the rest of his life?

Victor Gura, of the University of California, Los Angeles, hopes to solve this problem with an invention that is now undergoing clinical trials. By going back to basics, he has come up with a completely new sort of dialyser—one you can wear.

A traditional dialyser uses around 120 litres of water to clean an individual’s blood. This water flows past one side of a membrane while blood is pumped past the other side. The membrane is impermeable to blood cells and large molecules such as proteins, but small ones can get through it. Substances such as urea (a leftover from protein metabolism) and excess phosphate ions therefore flow from the blood to the water. The good stuff, such as sodium and chloride ions, stays in the blood because the cleansing water has these substances dissolved in it as well, and so does not absorb more of them.
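
What the membrane does can be summarised as ordinary diffusion down a concentration gradient. As a rough sketch (the symbols are mine, not the article's), the rate at which a small solute leaves the blood is approximately

\[ J \;\approx\; P\,A\,\bigl(C_{\text{blood}} - C_{\text{dialysate}}\bigr), \]

where $P$ is the membrane's permeability to that solute and $A$ its area. For urea the dialysate concentration is close to zero, so the outward flux is large; for sodium and chloride the two concentrations are matched, the bracket is nearly zero, and almost nothing is removed.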

Both water and blood require a lot of pumping. Those pumps are heavy and need electrical power. The first thing Dr Gura did, therefore, was dispose of them. The reason for using big pumps is to keep dialysis sessions short. If machines are portable, that matters less. So Dr Gura replaced the 10kg pumps of a traditional machine with small ones weighing only 380 grams. Besides being light, these smaller pumps use less power. That means batteries can be employed instead of mains electricity—and modern lithium-ion batteries, the ones Dr Gura chose, are also light, and thus portable.

To reduce the other source of weight, the water, Dr Gura and his team designed disposable cartridges containing materials that capture toxins from the cleansing water, so that it can be recycled. The upshot is a device that weighs around 5kg and can be strapped to a user’s waist. Indeed, at a recent demonstration in London, one patient was able to dance while wearing the dialyser—for joy, presumably, at no longer having to go to hospital so often.

Source of Information : The Economist 2009-10-03

Sunday, October 25, 2009

Dead in the water

What killed Fossil Lake?

SINCE the early 19th century, Fossil Lake, a 52m-year-old site in south-west Wyoming, has been known for its fish, insects, reptiles, birds and mammals. It contains millions of them, beautifully preserved in layers of limestone that are interspersed with volcanic ash. Yet this palaeontological paradise holds a dark secret: the mass deaths were not caused by a single event. The interspersed layers of ash show they were a regular occurrence. Until now, though, nobody has worked out what happened.

Jo Hellawell of Trinity College, Dublin, and her colleagues in the Organic Geochemistry Unit at Bristol University think that they have solved the mystery. In doing so, they adopted Sherlock Holmes’s maxim that when you have eliminated the impossible, whatever remains, however improbable, must be the truth.

One suggestion—that the eruptions which laid down the ash were responsible—is easily ruled out because the ash layers do not correlate with the fossil-rich ones. Severe storms or floods look equally unlikely causes. They would have washed vast quantities of rock into the lake at the time the animals died—but the fossils are surrounded only by finely layered silt. Droughts, too, look scarcely credible as culprits. They would have left subtle clues in the isotopic composition of the limestone, shifting the ratios of light to heavy carbon and oxygen atoms in its calcium carbonate; no such shifts are seen.

Having shown that climatic or environmental events were improbable explanations, Ms Hellawell and her colleagues started analysing the sediments in more detail. First, they considered the possibility that seasonal upwellings of toxic or oxygen-poor bottom water were responsible. Changing temperatures in the lake during the winter could have brought stagnant water to the surface, thus asphyxiating entire schools of fish. They concluded that such upwellings might indeed account for the catastrophic fish deaths, but that they would not explain the deaths of insects, reptiles, birds and mammals, since these animals breathe air rather than relying on dissolved oxygen.

The only possibility that remained was that the lake itself had somehow become an enormous pot of poison, killing anything that drank its waters or ate the animals living within it. How such a thing could happen on a regular basis was, at first, perplexing. But Ms Hellawell revealed on September 23rd, at the Society of Vertebrate Palaeontology’s annual meeting, that she and her team had found evidence that toxic algae were responsible for the mass deaths.

The team analysed the organic compounds in the rocks of Fossil Lake. This analysis detected 4-methyl steranes—chemicals often made by tiny algae known as dinoflagellates. Many dinoflagellates are harmless, but some produce neurotoxins. In several parts of the world, such as the seas off the coast of Florida, toxic dinoflagellates sometimes develop into enormous blooms called red tides. These release vast quantities of neurotoxin and kill almost everything that comes near.

Toxic blooms also occur, albeit on a smaller scale, in bodies of freshwater—and that, Ms Hellawell thinks, is the answer to the mysterious case of Fossil Lake. The killer has been caught, as it were, red-handed.

Source of Information : The Economist 2009-10-03

Saturday, October 24, 2009

Avoiding the heffalump trap

As the climate warms, conservationists might consider looking to the past to protect the future

THE Earth is heating up—and, if a study presented by Britain’s Meteorological Office to a meeting in Oxford this week is anything to go by, it may soon be hotter than it has been for more than a million years. Even if the 4°C rise this century that the Met Office predicts does not come to pass, climate change is still going to be awkward. Because people are able to adjust their surroundings to meet their needs, Homo sapiens will no doubt survive. Other species, though, cannot make such adjustments. Instead, they deal with changing climates by moving their ranges. Or, at least, they have done so in the past.

Now, with much of the Earth’s surface given over to agriculture, such range-shifts are not so easy, and many species may need a helping human hand. If such “assisted migration” of threatened creatures does take place, however, the question will arise of what to move where. Introductions of new species, both accidental and deliberate, over the past few centuries show the problem. Many fail to thrive. A few thrive too well.

Creating whole new ecosystems that mix natives with transplanted exotics might be risky. But on September 23rd, at the annual meeting of the Society of Vertebrate Palaeontology, held in Bristol, England, Anthony Barnosky of the University of California, Berkeley, and Elizabeth Hadly of Stanford University suggested a way to minimise this risk. Their proposal was to learn from the flora and fauna living in past environments that went through climatic changes similar in scale to those happening today—namely the ones that accompanied the ice ages of the Pleistocene epoch, which ran from 2.6m years ago to 10,000 years ago.


Hot and bothered
The idea of turning parts of the United States into “Pleistocene Park”, by importing endangered African wildlife and releasing it in, say, Texas, was raised in 2005 by Josh Donlan of Amherst College, in Massachusetts. Though the threats he had in mind were hunting and habitat loss rather than climate change, part of his inspiration was, indeed, the fossil record. This shows that, in the recent past, North America had a much more diverse large-mammal fauna than it does now. For example, bones dug out of the tar pits at Rancho La Brea in Los Angeles show that mammoths, mastodons, sabre-toothed cats, jaguars and lions were all present in southern California between 38,000 and 10,000 years ago.

Dr Donlan argues, in light of this, that moving lions, cheetahs and elephants from Africa to America is not a stupid idea. Though they are not the exact same species that disappeared from the Americas 10,000 years ago, they are those species’ ecological equivalents. Better a home in an alien habitat than extinction.

Dr Barnosky is not quite so gung-ho about transcontinental transplants, but he thinks that understanding the ebb and flow of species in response to previous climate change within a continent may help conservationists by pointing out places where species that are endangered in their present ranges have done well in the past. In previous periods of warming, for example, many types of animals have displayed reliable trends. Ground squirrels spread rapidly as the temperature rises. Vole populations contract. Gophers move west, while simultaneously becoming physically smaller.

Dr Barnosky’s work at a place called Porcupine Cave in Colorado shows that cheetahs, camels, horses and peccaries were present 800,000 years ago, yet none of these species survives there today. Meanwhile, Dr Hadly has been comparing the fossil record with modern-day reality in Yellowstone National Park, where types of amphibian that have thrived for 3,000 years have all but disappeared recently, as their ponds have dried up.

Whether such research really can help conservationists is moot. The fossil climates Dr Barnosky and Dr Hadly have been looking at are, even at their warmest, cooler than the more extreme predictions for the next 100 years. But, by showing what sorts of species have managed to coexist in the ecosystems of the past, palaeontology might indeed help to design those of the future.

Source of Information : The Economist 2009-10-03

Friday, October 23, 2009

Asteroids

The small fry of the solar system have troubled pasts

For many people, asteroids are big rocks that drift menacingly through space and are great places to have a laser cannon dogfight. Conventional scientific wisdom holds that they are the leftover scraps of planet formation. Their full story, though, is rather more complex and still only dimly glimpsed. What planetary scientists lump together as asteroids are far too diverse—from boulders to floating heaps of gravel to mini planets with signs of past volcanic activity and even liquid water—to have a single common origin.

Only the largest, more than about 100 kilometers across, date to the dawn of our solar system 4.6 billion years ago. Back then, the system was basically one big swarm of asteroids or, as researchers call them at this early stage, planetesimals. How it got that way is a puzzle, but the leading idea is that primordial dust swirling around the nascent sun coagulated into progressively larger bodies. Some of those bodies then agglomerated into planets; some, accelerated by the gravity of larger bodies, were flung into deep space; some fell into the sun; and a tiny few did none of the above. Those survivors linger in pockets where the planets have left them alone, notably the gap between the orbits of Mars and Jupiter. Gradually they, too, are being picked off. Fewer than one in 1,000, and perhaps as few as one in a million, of the asteroids originally in the main belt remain.

Smaller asteroids are not relics but debris. They come in an assortment of sizes that indicate they are products of a chain reaction of collisions: asteroids hit and shatter, the fragments hit and shatter, and so on. Some are rocky; some are metal—suggesting they came from different layers within the original bodies. About a third of asteroids belong to families with similar orbits, which can be rewound in time to a single point in space, namely, the location of the collision that birthed them. Because families should disperse after 10 million to 100 million years, asteroid formation by collision must be an ongoing process. Indeed, so is planet formation. Whenever an asteroid hits a planet, it helps to bulk it up. Asteroids are not the leftovers of planet formation so much as they are the finishing touches.

Source of Information : Scientific American September 2009

Thursday, October 22, 2009

Digital Audio Player

Mobile music rocked the record industry

Sony’s Walkman portable audio cassette player, introduced in 1979, improved on the transistor radio by allowing people to take their preferred music wherever they went (engineer Nobutoshi Kihara supposedly invented the device so that Sony co-chairman Akio Morita could listen to operas during long flights). But the digital revolution in personal audio technology was another two decades in the making and had implications beyond both the personal and audio.

Portable music went digital in the 1980s with the rise of devices built around CDs, mini discs and digital audiotape. In the 1990s the Moving Picture Experts Group (MPEG) developed a standard that became the MP3, a format that highly condenses audio files by discarding imperceptible sounds (although discriminating audiophiles tend to disagree with that description). The Eiger Labs MPMan F10, which hit the market in 1998, was the first MP3 player to store music on digital flash memory—a whopping 32 megabytes, enough for about half an hour of audio. A slew of similar gadgets followed, some of which replaced the flash memory with compact hard drives capable of holding thousands of songs. The breakthrough product was Apple’s 2001 iPod. Technologically, it was nothing new, but the combination of its physical sleekness, its spacious five-gigabyte hard drive and its thumbwheel-based interface proved compelling. Today digital players are as likely to hold photographs, videos and games as music, and they are increasingly often bundled into mobile phones and other devices.
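
The “half an hour” figure squares with the MP3 bit rates typical of the time. Assuming a 128-kilobit-per-second stream (my assumption; the article gives no bit rate),

\[ \frac{32\ \text{MB} \times 8\ \text{bits per byte}}{128\ \text{kbit/s}} \;=\; \frac{256{,}000\ \text{kbit}}{128\ \text{kbit/s}} \;=\; 2{,}000\ \text{s} \;\approx\; 33\ \text{minutes}. \]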

MP3s—immaterial and easily copied—freed music from the physical grooves in vinyl or plastic media. They also dealt a severe blow to the recording industry, which long resisted selling MP3s, prompting music lovers to distribute files on their own. Since 2000, CD sales have plummeted from $13 billion to $5 billion, according to the Recording Industry Association of America. Meanwhile digital downloads rose from $138 million in 2004 to $1 billion last year; however, says Russ Crupnick, a senior industry analyst at NPD Entertainment, peer-to-peer shared files outnumber legal downloads by at least 10 to one. Looking ahead, he believes music will not be something to possess at all: the industry’s salvation (if any) may come from paid access to songs streaming from the Web.

Source of Information : Scientific American September 2009

Wednesday, October 21, 2009

Love

Large brains may have led to the evolution of amour

For most creatures, procreation is an emotionally uncomplicated affair. In humans, however, it has a tricky accomplice: romantic love, capable of catapulting us to bliss or consigning us to utmost despair. Yet capricious though it may seem, love is likely to be an adaptive trait, one that arose early in the evolution of our lineage. Two of the hallmarks of human evolution—upright walking and large brains—may have favored the emergence of love, according to a theory advanced by anthropologist Helen Fisher of Rutgers University. Bipedalism meant that mothers had to carry their babies, rather than letting them ride on their back. Their hands thus occupied, these moms needed a partner to help provision and protect them and their newborns. Ancient bipedal hominids such as Australopithecus afarensis, the species to which the 3.2-million-year-old Lucy fossil belongs, probably formed only short-term pair bonds of a few years, however—just long enough for the babies to be weaned and walking, after which females were ready to mate anew.

The advent of large brains more than a million years ago extended the duration of these monogamous relationships. As brain size expanded, humans had to make an evolutionary trade-off. Our pelvis, built for bipedalism, places a constraint on the size of a baby’s head at birth. As a result, human babies are born at an earlier stage of development than are other primate infants and have an extended childhood during which they grow and learn. Human ancestors would thus have benefited from forming longer-term pair bonds for the purpose of rearing young.

Fisher further notes that the ballooning of the hominid brain (and the novel organizational features that accompanied this growth) also provided our forerunners with an extraordinary means of wooing one another—through poetry, music, art and dance. The archaeological record indicates that by 35,000 years ago, humans were engaging in these sorts of behaviors. Which is to say, they were probably just as lovesick as we are.

Source of Information : Scientific American September 2009

Tuesday, October 20, 2009

Flying car

A long-standing dream

If only my car could fly! Who has not uttered this cry in traffic? But what motivated the people who began designing flying cars near the turn of the 20th century? Most aviation pioneers of the time were thinking not in terms of flight alone but of “personal mobility” and getting cars to take wing, according to John Brown, editor of the Internet magazine Roadable Times. In fact, he notes, “the true brilliance” of the Wright Brothers—who demonstrated sustained, controlled powered flight at Kitty Hawk, N.C., in 1903—was their decision to concentrate solely on flying and “forget about the roadability part.”

Of course, over time, additional reasons for pursuing flying cars came into play. Near the end of World War I, for instance, a Chicagoan named Felix Longobardi had military flexibility in mind. In his patent application, submitted in June 1918, he detailed a contraption that was a flying car as well as a gunboat—“for anti-air-craft purposes”—and a submarine. (It saw neither light of day nor eye of fish.) Even before World War I ended, Glenn H. Curtiss, the legendary aircraft designer, submitted a patent for an “autoplane” that he intended to be a “pleasure craft.” And Moulton B. Taylor, whose Aerocar was famously used by actor Robert Cummings, wrote in his 1952 patent application that he wanted his invention to be suitable “for air or highway travel, and inexpensive enough to appeal to a potentially large market.”

To date, dozens of patents for flying cars have been issued, and more than 10, including a successor to the Aerocar, are under serious development. One developer, Terrafugia in Woburn, Mass., is perfecting (and taking $10,000 deposits for) the Transition, a light sport plane that is not meant for everyday driving. After landing at an airport, though, pilots should be able to fold the wings electronically and just drive the rest of the way to their destination. Test flights in March went well, but whether the company will take off as hoped remains to be seen.

Source of Information : Scientific American September 2009

Monday, October 19, 2009

Rainbows

The simple magic of their shape and colors still puzzles

English poet John Keats famously worried that scientific explanations would “unweave a rainbow”—that by elucidating rainbows and other phenomena rationally, scientists would drain the world of its mystery. Yet if anything, the close study of rainbows enriches our appreciation of them. The multicolored arc is just the beginning. Look closely, and you will see that outside the main bow is a darkened band of sky and a second, dimmer arc, with its colors in reverse order. Inside the main bow are greenish and purplish arcs known as supernumerary bows. The rainbow can vary in brightness along its width or length, and it can split into multiple bows near the top. Viewed through polarizing sunglasses, the rainbow waxes and wanes as you tilt your head.

The basic scientific explanation for rainbows dates to Persian physicist Kamāl al-Dīn al-Fārisī and, independently, German physicist Theodoric of Freiberg in the 14th century. But scientists continued to work on the theory into the 1970s and beyond [see “The Theory of the Rainbow,” by H. Moysés Nussenzveig; Scientific American, April 1977]. Many textbook explanations of rainbows are wrong, and a thorough description is still elusive. “The rainbow has the undeserved reputation of having a simple explanation,” says atmospheric physicist Craig Bohren of Pennsylvania State University.

The central principle is that each water droplet in the air acts as a mirror, lens and prism, all in one. Droplets scatter sunlight in every direction but do so unevenly, tending to focus light 138 degrees from the incident direction. Those droplets that form this angle with the sun look brighter; together they produce a ring. Typically you see only the top half of this ring because there are not enough drops near the ground to fill out the bottom half. “The rainbow is just a distorted image of the sun,” write atmospheric scientists Raymond Lee, Jr., and Alistair Fraser in their definitive book The Rainbow Bridge.

The angle of 138 degrees means you see the rainbow when standing with your back to the sun. The lensing angle varies slightly with wavelength, separating the white sunlight into colored bands. Multiple reflections within droplets create the outer bows; wave interference accounts for the supernumerary arcs; flattening of the droplets causes brightness variations along the arc; multiple droplet sizes produce split bows; and light is polarized much like the glare on any watery surface. Even this physics does not touch on how our eyes and brains perceive the continuous spectrum as discrete colors. The weaving of the rainbow occurs in our heads as much as it does in the sky.
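
The 138-degree figure falls straight out of Descartes' ray analysis for a single internal reflection inside a spherical droplet. The short sketch below (mine, using textbook refractive indices for water, not numbers from the article) finds the angle of minimum deviation numerically and shows how its small wavelength dependence separates the colours:

# Descartes' analysis: a ray entering a droplet of refractive index n at
# incidence angle i is deviated by D(i) = 180 + 2i - 4r, with sin(i) = n sin(r).
# D has a minimum, so scattered light piles up near that angle (~138 degrees).
import math
def deviation(i_deg, n):
    r = math.asin(math.sin(math.radians(i_deg)) / n)                # Snell's law at entry
    return 180 + 2 * i_deg - 4 * math.degrees(r)
def rainbow_angle(n):
    return min(deviation(i / 100.0, n) for i in range(1, 9000))     # brute-force minimum
for colour, n in [("red", 1.331), ("violet", 1.343)]:               # index varies with wavelength
    print(colour, round(rainbow_angle(n), 1))                       # red ~137.6, violet ~139.4

The roughly two-degree spread between the two results is what fans the white sunlight out into the coloured band.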

Source of Information : Scientific American September 2009

Sunday, October 18, 2009

Computing - Machine Evolution

The subsequent 60-year diffusion of the computer within society is a long story that has to be told in another place. Perhaps the single most remarkable development was that the computer—originally designed for mathematical calculations—turned out to be infinitely adaptable to different uses, from business data processing to personal computing to the construction of a global information network. We can think of computer development as having taken place along three vectors—hardware, software and architecture. The improvements in hardware over the past 60 years are legendary. Bulky electronic tubes gave way in the late 1950s to “discrete” transistors—that is, single transistors individually soldered into place. In the mid-1960s microcircuits contained several transistors—then hundreds of transistors, then thousands of transistors—on a silicon “chip.” The microprocessor, developed in the early 1970s, held a complete computer processing unit on a chip. The microprocessor gave rise to the PC and now controls devices ranging from sprinkler systems to ballistic missiles.

The challenges of software were more subtle. In 1947 and 1948 von Neumann and Goldstine produced a series of reports called Planning and Coding Problems for an Electronic Computing Instrument. In these reports they set down dozens of routines for mathematical computation with the expectation that some lowly “coder” would be able to convert them into working programs. It was not to be. The process of writing programs and getting them to work was excruciatingly difficult. The first to make this discovery was Maurice Wilkes, the University of Cambridge computer scientist who had created EDSAC, the first practical stored-program computer. In his Memoirs, Wilkes ruefully recalled the moment in 1949 when “the realization came over me with full force that a good part of the remainder of my life was going to be spent in finding errors in my own programs.”

He and others at Cambridge developed a method of writing computer instructions in a symbolic form that made the whole job easier and less error prone. The computer would take this symbolic language and then convert it into binary. IBM introduced the programming language Fortran in 1957, which greatly simplified the writing of scientific and mathematical programs. At Dartmouth College in 1964, educator John G. Kemeny and computer scientist Thomas E. Kurtz invented Basic, a simple but mighty programming language intended to democratize computing and bring it to the entire undergraduate population. With Basic even schoolkids—the young Bill Gates among them—could begin to write their own programs.

In contrast, computer architecture—that is, the logical arrangement of subsystems that make up a computer—has barely evolved. Nearly every machine in use today shares its basic architecture with the stored-program computer of 1945. The situation mirrors that of the gasoline-powered automobile—the years have seen many technical refinements and efficiency improvements in both, but the basic design is largely the same. And although it might be possible to design a radically better device, both have achieved what historians of technology call “closure.” Investments over the decades have produced such excellent gains that no one has had a compelling reason to invest in an alternative.

Yet there are multiple possibilities for radical evolution. In the 1980s interest ran high in so-called massively parallel machines, which contained thousands of computing elements operating simultaneously. This basic architecture is still used for computationally intensive tasks such as weather forecasting and atomic weapons research. Computer scientists have also looked to the human brain for inspiration. We now know that the brain contains specialized processing centers for different tasks, such as face recognition or speech understanding. Scientists are harnessing some of these ideas in “neural networks” for applications such as license plate identification and iris recognition.

More blue-sky research is focused on building computers from living matter such as DNA and computers that harness the weirdness of the quantum world. No one knows what the computers of 50 years hence will look like. Perhaps their abilities will surpass even the powers of the minds that created them.


THE FUTURE OF COMPUTER ARCHITECTURE
The stored-program computer has formed the basis of computing technology since the 1950s.

What may come next?
Quantum: The much touted quantum computer exploits the ability of a particle to be in many states at once. Quantum computations operate on all these states simultaneously.

Neural Net: These systems are formed from many simple processing nodes that connect to one another in unique ways. The system as a whole exhibits complex global behavior.

Living: Computers based on strands of DNA or RNA process data encoded in genetic material.

Source of Information : Scientific American September 2009

Saturday, October 17, 2009

Computing - Everything Is Change

On December 7, 1941, Japanese forces attacked the U.S. Navy base at Pearl Harbor. The U.S. was at war. Mobilization meant the army needed ever more firing tables, each of which contained about 3,000 entries. Even with the Differential Analyzer, the backlog of calculations at Aberdeen was mounting.

Eighty miles up the road from Aberdeen, the Moore School of Electrical Engineering at the University of Pennsylvania had its own differential analyzer. In the spring of 1942 a 35-year-old instructor at the school named John W. Mauchly had an idea for how to speed up calculations: construct an “electronic computor” [sic] that would use vacuum tubes in place of the mechanical components. Mauchly, a theoretically minded individual, found his complement in an energetic young researcher at the school named J. Presper (“Pres”) Eckert, who had already shown sparks of engineering genius.

A year after Mauchly made his original proposal, following various accidental and bureaucratic delays, it found its way to Lieutenant Herman Goldstine, a 30-year-old Ph.D. in mathematics from the University of Chicago who was the technical liaison officer between Aberdeen and the Moore School. Within days Goldstine got the go-ahead for the project. Construction of the ENIAC—for Electronic Numerical Integrator and Computer—began on April 9, 1943. It was Eckert’s 23rd birthday.

Many engineers had serious doubts about whether the ENIAC would ever be successful. Conventional wisdom held that the life of a vacuum tube was about 3,000 hours, and the ENIAC’s initial design called for 5,000 tubes. At that failure rate, the machine would not function for more than a few minutes before a broken tube put it out of action. Eckert, however, understood that the tubes tended to fail under the stress of being turned on or off. He knew it was for that reason radio stations never turned off their transmission tubes. If tubes were operated significantly below their rated voltage, they would last longer still. (The total number of tubes would grow to 18,000 by the time the machine was complete.)
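
The sceptics' arithmetic is easy to reconstruct (a standard back-of-the-envelope reliability estimate, not a calculation quoted in the article). If each of $N$ tubes fails independently about once every 3,000 hours, the machine as a whole suffers a failure roughly every $3{,}000/N$ hours:

\[ \frac{3{,}000\ \text{h}}{5{,}000\ \text{tubes}} \approx 36\ \text{minutes}, \qquad \frac{3{,}000\ \text{h}}{18{,}000\ \text{tubes}} \approx 10\ \text{minutes}, \]

which is why treating the tubes gently, never switching them off and keeping them well below their rated voltage, mattered so much.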

Eckert and his team completed the ENIAC in two and a half years. The finished machine was an engineering tour de force, a 30-ton behemoth that consumed 150 kilowatts of power. The machine could perform 5,000 additions per second and compute a trajectory in less time than a shell took to reach its target. It was also a prime example of the role that serendipity often plays in invention: although the Moore School was not then a leading computing research facility, it happened to be in the right location at the right time with the right people.

Yet the ENIAC was finished in 1945, too late to help in the war effort. It was also limited in its capabilities. It could store only up to 20 numbers at a time. Programming the machine took days and required manipulating a patchwork of cables that resembled the inside of a busy telephone exchange. Moreover, the ENIAC was designed to solve ordinary differential equations. Some challenges—notably, the calculations required for the Manhattan Project—required the solution of partial differential equations.

John von Neumann was a consultant to the Manhattan Project when he learned of the ENIAC on a visit to Aberdeen in the summer of 1944. Born in 1903 into a wealthy Hungarian banking family, von Neumann was a mathematical prodigy who tore through his education. By 23 he had become the youngest ever privatdozent (the approximate equivalent of an associate professor) at the University of Berlin. In 1930 he emigrated to the U.S., where he joined Albert Einstein and Kurt Gödel as one of the first faculty members of the Institute for Advanced Study in Princeton, N.J. He became a naturalized U.S. citizen in 1937.

Von Neumann quickly recognized the power of electronic computation, and in the several months after his visit to Aberdeen, he joined in meetings with Eckert, Mauchly, Goldstine and Arthur Burks—another Moore School instructor— to hammer out the design of a successor machine, the Electronic Discrete Variable Automatic Computer, or EDVAC.

The EDVAC was a huge improvement over the ENIAC. Von Neumann introduced the ideas and nomenclature of Warren McCulloch and Walter Pitts, neuroscientists who had developed a theory of the logical operations of the brain (this is where we get the term computer “memory”). Like von Neumann, McCulloch and Pitts had been influenced by theoretical studies in the late 1930s by British mathematician Alan Turing, who established that a simple machine can be used to execute a huge variety of complex tasks. There was a collective shift in perception around this time from the computer as a mathematical instrument to a universal information-processing machine.

Von Neumann thought of the machine as having five core parts: Memory held not just numerical data but also the instructions for operation. An arithmetic unit performed calculations. An input “organ” enabled the transfer of programs and data into memory, and an output organ recorded the results of computation. Finally, a control unit coordinated operations. This layout, or architecture, makes it possible to change the computer’s program without altering the physical structure of the machine. Moreover, a program could manipulate its own instructions. This feature would not only enable von Neumann to solve his partial differential equations, it would confer a powerful flexibility that forms the very heart of computer science.
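
The essence of that layout, with instructions and data sitting side by side in one memory and a control unit fetching and executing them in turn, can be caricatured in a few lines. The miniature instruction set below is invented for illustration; it is not the EDVAC's:

# A toy stored-program machine: the same memory holds instructions
# (as tuples) and data (as plain numbers), and a program could even
# overwrite its own instructions.
def run(memory):
    pc = 0                                   # control unit: the program counter
    while True:
        op, a, b = memory[pc]                # fetch the next instruction from memory
        if op == "HALT":
            return
        if op == "ADD":                      # arithmetic unit: memory[a] += memory[b]
            memory[a] = memory[a] + memory[b]
        elif op == "PRINT":                  # output "organ"
            print(memory[a])
        pc += 1                              # move on to the next cell
memory = [
    ("ADD", 3, 4),                           # cell 0: add the data in cells 3 and 4
    ("PRINT", 3, 0),                         # cell 1: print the result
    ("HALT", 0, 0),                          # cell 2: stop
    2,                                       # cell 3: data
    3,                                       # cell 4: data
]
run(memory)                                  # prints 5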

In June 1945 von Neumann wrote his classic First Draft of a Report on the EDVAC on behalf of the group. In spite of its unfinished status, it was rapidly circulated among the computing cognoscenti with two consequences. First, there never was a second draft. Second, von Neumann ended up with most of the credit.

Source of Information : Scientific American September 2009

Friday, October 16, 2009

Computing - The Dark Ages

Babbage’s vision, in essence, was digital computing. Like today’s devices, such machines manipulate numbers (or digits) according to a set of instructions and produce a precise numerical result.

Yet after Babbage’s failure, computation entered what English mathematician L. J. Comrie called the Dark Age of digital computing—a period that lasted into World War II. During this time, machine computation was done primarily with so-called analog computers. These devices model a system using a mechanical analog. Suppose, for example, one wanted to predict the time of a solar eclipse. To do this digitally, one would numerically solve Kepler’s laws of motion.

Before digital computers, the only practical way to do this was hand computation by human computers. (From the 1890s to the 1940s the Harvard Observatory employed just such a group of all-female computers.) One could also create an analog computer, a model solar system made of gears and shafts that would “run” time into the future.

Before World War II, the most important analog computing instrument was the Differential Analyzer, developed by Vannevar Bush at the Massachusetts Institute of Technology in 1929. At that time, the U.S. was investing heavily in rural electrification, and Bush was investigating electrical transmission. Such problems could be encoded in ordinary differential equations, but these were very time-consuming to solve. The Differential Analyzer allowed for an approximate solution without any numerical processing. The machine was physically quite large—it filled a laboratory—and was something of a Rube Goldberg construction of gears and rotating shafts. To “program” the machine, researchers connected the various components of the device using screwdrivers, spanners and lead hammers. Though laborious to set up, once done the apparatus could solve in minutes equations that would take several days by hand. A dozen copies of the machine were built in the U.S. and England.

One of these copies belonged to the U.S. Army’s Aberdeen Proving Ground in Maryland, the facility responsible for readying field weapons for deployment. To aim artillery at a target of known range, soldiers had to set the vertical and horizontal angles (the elevation and azimuth) of the barrel so that the fired shell would follow the desired parabolic trajectory—soaring skyward before dropping onto the target. They selected the angles out of a firing table that contained numerous entries for various target distances and operational conditions.

Every entry in the firing table required the integration of an ordinary differential equation. A human computer would take two to three days to do each calculation by hand. The Differential Analyzer, in contrast, would need only about 20 minutes.
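
What one of those entries involves can be sketched in a few lines of modern code (the drag coefficient and muzzle velocity below are made-up illustrative values, and the crude Euler integration stands in for the far more careful numerical methods the human computers used):

# Integrate the equations of motion for a shell with simple quadratic air
# drag and report how far it flies: the kind of calculation behind a
# single firing-table entry.
import math
def shot_range(elevation_deg, muzzle_velocity=600.0, k=0.0001, g=9.81, dt=0.01):
    theta = math.radians(elevation_deg)
    x, y = 0.0, 0.0
    vx, vy = muzzle_velocity * math.cos(theta), muzzle_velocity * math.sin(theta)
    while y >= 0.0:
        v = math.hypot(vx, vy)               # speed, for the drag term
        vx -= k * v * vx * dt                # drag decelerates the shell
        vy -= (g + k * v * vy) * dt          # gravity plus drag
        x += vx * dt
        y += vy * dt
    return x
for elevation in (15, 30, 45, 60):           # a few lines of a toy firing table
    print(elevation, "degrees:", round(shot_range(elevation) / 1000, 1), "km")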

Source of Information : Scientific American September 2009

Thursday, October 15, 2009

Computing - The Difference Engine

In 1790, shortly after the start of the French Revolution, Napoleon Bonaparte decided that the republic required a new set of maps to establish a fair system of property taxation. He also ordered a switch from the old imperial system of measurements to the new metric system. To aid the engineers and mathematicians making the change, the French ordnance survey office commissioned a fresh set of mathematical tables.

In the 18th century, however, computations were done by hand. A “factory floor” of between 60 and 80 human computers added and subtracted numbers to fill in line after line of the tables for the survey’s Tables du Cadastre project. It was grunt work, demanding no special skills above basic numeracy and literacy. In fact, most computers were hairdressers who had lost their jobs—aristocratic hairstyles being the sort of thing that could endanger one’s neck in revolutionary France.

The project took about 10 years to complete, but by then the war-torn republic did not have the funds necessary to publish the work. The manuscript languished in the Académie des Sciences for decades. Then, in 1819, a young British mathematician named Charles Babbage viewed it on a visit to Paris. Babbage was 28 at the time; three years earlier he had been elected to the Royal Society, the most prominent scientific organization in Britain. He was also very knowledgeable about the world of human computers—at various times he personally supervised the construction of astronomical and actuarial tables.

On his return to England, Babbage decided he would replicate the French project not with human computers but with machinery. England at the time was in the throes of the Industrial Revolution. Jobs that had been done by human or animal labor were falling to the efficiency of the machine. Babbage saw the power of mechanization and realized that it could replace not just muscle but the work of minds. He proposed the construction of his Calculating Engine in 1822 and secured government funding in 1824. For the next decade he immersed himself in the world of manufacturing, seeking the best technologies with which to construct his engine.

The year 1832 was Babbage’s annus mirabilis. That year he not only produced a functioning model of his calculating machine (which he called the Difference Engine) but also published his classic Economy of Machinery and Manufactures, establishing his reputation as the world’s leading industrial economist. He held Saturday evening soirees at his home in Dorset Street in London, which were attended by the front rank of society. At these gatherings the model Difference Engine was placed on display as a conversation piece.

A year later Babbage abandoned the Difference Engine for a grander vision that he called the Analytical Engine. Whereas the Difference Engine had been limited to the single task of table making, the Analytical Engine would be capable of any mathematical calculation. Like a modern computer, it would have a processor that performed arithmetic (the “mill”), memory to hold numbers (the “store”), and the ability to alter its function via user input, in this case by punched cards. In short, it was a computer conceived in Victorian technology.
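
As a loose analogy (purely illustrative, in Python, and far simpler than Babbage's design), one can picture the store as a table of numbers, the mill as an arithmetic loop, and the program as a separate deck of punched cards fed to it:

# Toy analogue of the Analytical Engine's layout: the "store" holds numbers,
# the "mill" performs arithmetic, and the program is a deck of cards kept
# apart from the store. Illustrative only; the real card format was far richer.
def mill(store, cards):
    for op, a, b, dest in cards:          # each card names one operation
        if op == "ADD":   store[dest] = store[a] + store[b]
        elif op == "SUB": store[dest] = store[a] - store[b]
        elif op == "MUL": store[dest] = store[a] * store[b]
    return store

store = {"v1": 6, "v2": 7, "v3": 0}
cards = [("MUL", "v1", "v2", "v3")]       # one punched card: v3 = v1 * v2
print(mill(store, cards)["v3"])           # prints 42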

Babbage’s decision to abandon the unfinished Difference Engine was not well received, however, and the government declined to supply him with additional funds. Undeterred, he produced thousands of pages of detailed notes and machine drawings in the hope that the government would one day fund construction. It was not until the 1970s, well into the computer age, that scholars studied these papers for the first time. The Analytical Engine was, as one of those scholars remarked, almost like looking at a computer designed on another planet.

Source of Information : Scientific American September 2009

Wednesday, October 14, 2009

Computing

The information age began with the realization that machines could emulate the power of minds

In the standard story, the computer’s evolution has been brisk and short. It starts with the giant machines warehoused in World War II–era laboratories. Microchips shrink them onto desktops, Moore’s Law predicts how powerful they will become, and Microsoft capitalizes on the software. Eventually small, inexpensive devices appear that can trade stocks and beam video around the world. That is one way to approach the history of computing—the history of solid-state electronics in the past 60 years.

But computing existed long before the transistor. Ancient astronomers developed ways to predict the motion of the heavenly bodies. The Greeks deduced the shape and size of Earth. Taxes were summed; distances mapped. Always, though, computing was a human pursuit. It was arithmetic, a skill like reading or writing that helped a person make sense of the world.

The age of computing sprang from the abandonment of this limitation. Adding machines and cash registers came first, but equally critical was the quest to organize mathematical computations using what we now call “programs.” The idea of a program first arose in the 1830s, a century before what we traditionally think of as the birth of the computer. Later, the modern electronic computers that came out of World War II gave rise to the notion of the universal computer—a machine capable of any kind of information processing, even including the manipulation of its own programs. These are the computers that power our world today. Yet even as computer technology has matured to the point where it is omnipresent and seemingly limitless, researchers are attempting to use fresh insights from the mind, biological systems and quantum physics to build wholly new types of machines.


Key Concepts
• The first “computers” were people—individuals and teams who would tediously compute sums by hand to fill in artillery tables.

• Inspired by the work of a computing team in revolutionary France, Charles Babbage, a British mathematician, created the first mechanical device that could organize calculations.

• The first modern computers arrived in the 1950s, as researchers created machines that could use the result of their calculations to alter their operating instructions.

Source of Information : Scientific American September 2009

Tuesday, October 13, 2009

The Mind - Alien Thoughts

From didactic meerkats to inequity-averse monkeys, the same observation applies: each of these animals has evolved an exquisite mind that is adapted to singular problems and is thus limited when it comes to applying skills to novel problems. Not so for us hairless bipeds. Once in place, the modern mind enabled our forebears to explore previously uninhabited parts of the earth, to create language to describe novel events, and to envision an afterlife.

The roots of our cognitive abilities remain largely unknown, but having pinpointed the unique ingredients of the human mind, scientists now know what to look for. To that end, I am hopeful that neurobiology will prove illuminating. Although scholars do not yet understand how genes build brains and how electrical activity in the brain builds thoughts and emotions, we are witnessing a revolution in the sciences of the mind that will fill in these blanks—and enrich our understanding of why the human brain differs so profoundly from those of other animals.

For instance, studies of chimeric animals—in which brain circuits from an individual of one species are transplanted into an individual of another species—are helping to unravel how the brain is wired. And experiments with genetically modified animals are revealing genes that play roles in language and other social processes. Such achievements do not reveal anything about what our nerve cells do to give us our unique mental powers, but they do provide a roadmap for further exploration of these traits.

Still, for now, we have little choice but to admit that our mind is very different from that of even our closest primate relatives and that we do not know much about how that difference came to be. Could a chimpanzee think up an experiment to test humans? Could a chimpanzee imagine what it would be like for us to solve one of their problems? No and no. Although chimpanzees can see what we do, they cannot imagine our mental machinery. Although chimpanzees and other animals appear to develop plans and consider both past experiences and future options, there is no evidence that they think in terms of counterfactuals—imagining worlds that have been and weighing them against those that could be. We humans do this all the time and have done so ever since our distinctive genome gave birth to our distinctive minds. Our moral systems are premised on this mental capacity.

Have our unique minds become as powerful as a mind can be? For every form of human expression—including the world’s languages, musical compositions, moral norms and technological forms—I suspect we are unable to exhaust the space of all possibilities. There are significant limitations to our ability to imagine alternatives.

If our minds face inherent constraints on what they can conceive, then the notion of “thinking outside of the box” is all wrong. We are always inside the box, limited in our capacity to envision alternatives. Thus, in the same way that chimpanzees cannot imagine what it is like to be human, humans cannot imagine what it is like to be an intelligent alien. Whenever we try, we are stuck in the box that we call the human mind. The only way out is through evolution, the revolutionary remodeling of our genome and its potential to sculpt fresh neural connections and fashion new neural structures. Such change would give birth to a novel mind, one that would look on its ancestors as we often look on ours: with respect, curiosity, and a sense that we are alone, paragons in a world of simple minds.

Source of Information : Scientific American September 2009

Monday, October 12, 2009

The Mind - Minding the Gap

One of our most basic tools, the No. 2 pencil, used by every test taker, illustrates the exceptional freedom of the human mind as compared with the limited scope of animal cognition. You hold the painted wood, write with the lead, and erase with the pink rubber held in place by a metal ring. Four different materials, each with a particular function, all wrapped up into a single tool. And although that tool was made for writing, it can also pin hair up into a bun, bookmark a page or stab an annoying insect. Animal tools, in contrast—such as the sticks chimps use to fish termites out from their mounds—are composed of a single material, designed for a single function and never used for other functions. None have the combinatorial properties of the pencil.

Another simple tool, the telescopic, collapsible cup found in many a camper’s gear, provides an example of recursion in action. To make this device, the manufacturer need only program a simple rule—add a segment of increasing size to the last segment—and repeat it until the desired size is reached. Humans use recursive operations such as this in virtually all aspects of mental life, from language, music and math to the generation of a limitless range of movements with our legs, hands and mouths. The only glimmerings of recursion in animals, however, have come from watching their motor systems in action.
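
The cup's build rule translates almost directly into a recursive procedure. A minimal sketch in Python, with made-up segment sizes:

# Build a telescopic cup by repeatedly applying one rule:
# "add a segment slightly larger than the last" until the desired height is reached.
def build_cup(segments_left, last_size=1.0):
    if segments_left <= 0:                 # base case: the cup is tall enough
        return []
    next_size = last_size * 1.1            # each segment a bit bigger than the last
    return [next_size] + build_cup(segments_left - 1, next_size)

print(build_cup(5))    # five segments of steadily increasing size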

All creatures are endowed with recursive motor machinery as part of their standard operating equipment. To walk, they put one foot in front of the other, over and over again. To eat, they may grasp an object and bring it to the mouth repeatedly until the stomach sends the signal to stop. In animal minds, this recursive system is locked away in the motor regions of the brain, closed off to other brain areas. Its existence suggests that a critical step in acquiring our own distinctive brand of thinking was not the evolution of recursion as a novel form of computation but the release of recursion from its motor prison to other domains of thought. How it was unlocked from this restrictive function links to one of our other ingredients—promiscuous interfaces—which I will turn to shortly.

The mental gap broadens when we compare human language with communication in other species. Like other animals, humans have a nonverbal communication system that conveys our emotions and motivations—the chortles and cries of little babies are part of this system. Humans are alone, however, in having a system of linguistic communication that is based on the manipulation of mental symbols, with each example of a symbol falling into a specific and abstract category such as noun, verb and adjective. Although some animals have sounds that appear to represent more than their emotions, conveying information about objects and events such as food, sex and predation, the range of such sounds pales in relation to our own, and none of them falls into the abstract categories that structure our linguistic expressions.

This claim requires clarification, because it often elicits extreme skepticism. You might think, for example, that animal vocabularies appear small because researchers studying their communications do not really understand what they are talking about. Although scientists have much to learn about animal vocalizations, and communication more generally, I think insufficient study is unlikely to explain the large gap.
Most vocal exchanges between animals consist of one grunt or coo or scream, with a single volley back. It is possible that animals pack a vast amount of information into a 500-millisecond grunt—perhaps equivalent to “Please groom my lower back now, and I will groom yours later.” But then why would we humans have developed such an arcane and highly verbose system if we could have solved it all with a grunt or two?

Furthermore, even if we grant that the honeybee’s waggle dance symbolically represents the delicious pollen located a mile north and that the putty-nosed monkey’s alarm calls symbolically represent different predators, these uses of symbols are unlike ours in five essential ways: they are triggered only by real objects or events, never imagined ones; they are restricted to the present; they are not part of a more abstract classification scheme, such as those that organize our words into nouns, verbs and adjectives; they are rarely combined with other symbols, and when they are, the combinations are limited to a string of two, with no rules; and they are fixed to particular contexts.

Human language is additionally remarkable— and entirely different from the communication systems of other animals—in that it operates equally well in the visual and auditory modes. If a songbird lost its voice and a honeybee its waggle, their communication would end. But when a human is deaf, sign language provides an equally expressive mode of communication that parallels its acoustic cousin in structural complexity.

Our linguistic knowledge, along with the computations it requires, also interacts with other domains of knowledge in fascinating ways that strikingly reflect our uniquely human ability to make promiscuous connections between systems of understanding. Consider the ability to quantify objects and events, a capacity that we share with other animals. A wide variety of species have at least two nonlinguistic abilities for counting. One is precise and limited to numbers less than four. The other is unlimited in scope, but it is approximate and limited to certain ratios for discrimination—an animal that can discriminate one from two, for instance, can also discriminate two from four, 16 from 32, and so on. The first system is anchored in a brain region involved in keeping track of individuals, whereas the second is anchored in brain regions that compute magnitudes.
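
A crude way to capture the ratio-limited system is to compare two quantities by their ratio rather than their difference. In this Python sketch the 2:1 threshold is an arbitrary illustrative choice, not a measured value for any species.

# Approximate-number discrimination depends on the ratio of two quantities,
# not on their absolute difference.
def discriminable(a, b, ratio_threshold=2.0):
    return max(a, b) / min(a, b) >= ratio_threshold

print(discriminable(1, 2), discriminable(16, 32))   # True True: both are a 1:2 ratio
print(discriminable(16, 20))                        # False: the ratio is too close to 1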

Last year my colleagues and I described a third counting system in rhesus monkeys, one that may help us understand the origins of the human ability to mark the difference between singular and plural. This system operates when individuals see sets of objects presented at the same time—as opposed to seeing them presented serially—and causes rhesus monkeys to discriminate one from many but not many from many food items. In our experiment, we showed a rhesus monkey one apple and placed it in a box. We then showed the same monkey five apples and placed all five at once into a second box. Given a choice the monkey consistently picked the second box with five apples. Then we put two apples in one box and five into the other. This time the monkey did not show a consistent preference. We humans do essentially the same thing when we say “one apple” and “two, five or 100 apples.”

But something peculiar happens when the human linguistic system connects up with this more ancient conceptual system. To see how, try this exercise: for the numbers 0, 0.2 and –5, add the most appropriate word: “apple” or “apples.” If you are like most native English speakers, including young children, you selected “apples.” In fact, you would select “apples” for “1.0.” If you are surprised, good, you should be. This is not a rule we learned in grammar school—in fact, strictly speaking, it is not grammatically correct. But it is part of the universal grammar that we alone are born with. The rule is simple but abstract: anything that is not “1” is pluralized.
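
The rule is simple enough to write down almost verbatim. A small Python sketch (the helper name is hypothetical), keyed to the written form of the number rather than its numeric value:

# "Anything that is not '1' is pluralized": 0, 0.2, -5 and even 1.0 take "apples";
# only the bare form "1" takes the singular.
def with_noun(quantity, singular="apple", plural="apples"):
    # Compare the written form, not the value: "1.0" is not the form "1",
    # so it is pluralized even though it equals 1 numerically.
    return f"{quantity} {singular if str(quantity) == '1' else plural}"

for q in (0, 0.2, -5, 1.0, 1, 100):
    print(with_noun(q))
# 0 apples, 0.2 apples, -5 apples, 1.0 apples, 1 apple, 100 apples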

The apple example demonstrates how different systems—syntax and concepts of sets—interact to produce new ways of thinking about or conceptualizing the world. But the creative process in humans does not stop here. We apply our language and number systems to cases of morality (saving five people is better than saving one), economics (if I am given $10 and offer you $1, that seems unfair, and you will reject the dollar), and taboo trade-offs (in the U.S., selling our children, even for lots of money, is not kosher).

Source of Information : Scientific American September 2009

Sunday, October 11, 2009

The Mind - Beautiful Minds

When my youngest daughter, Sofia, was three years old, I asked her what makes us think. She pointed to her head and said: “My brain.” I then asked her whether other animals have brains, starting with dogs and monkeys and then birds and fish. She said yes. When I asked her about the ant that was crawling in front of us, she said: “No. Too small.” We adults know that size does not provide a litmus test of whether an animal has a brain, although size does affect some aspects of brain structure and, consequently, some aspects of thought. And research has shown that most of the different cell types in the brain, along with their chemical messengers, are the same across vertebrate species, including humans. Furthermore, the general organization of the different structures in the brain’s outermost layer, the cerebral cortex, is largely the same in monkeys, apes and humans. In other words, humans have a number of brain features in common with other species. Where we differ from them is in the relative size of particular regions of the cortex and how these regions connect, differences that give rise to thoughts having no analogue elsewhere in the animal kingdom.

Animals do exhibit sophisticated behaviors that appear to presage some of our capabilities. Take, for example, the ability to create or modify objects for a particular goal. Male bowerbirds construct magnificent architectural structures from twigs and decorate them with feathers, leaves, buttons and paint made from crushed berries to attract females. New Caledonian crows carve blades into fishing sticks for catching insects. Chimpanzees have been observed to use wooden spears to shish-kebab bush babies tucked away in tree crevices.

In addition, experimental studies in a number of animals have revealed a native folk physics that enables them to generalize beyond their direct experiences to create novel solutions when exposed to foreign challenges in the laboratory. In one such experiment, when orangutans and chimps were presented with a mounted plastic cylinder containing a peanut at the bottom, they accessed the out-of-reach treat by sipping water from their drinking fountains and then spitting the liquid into the cylinder, thus making the peanut float to the top.

Animals also exhibit social behaviors in common with humans. Knowledgeable ants teach their naive pupils by guiding them to essential food resources. Meerkats provide their pups with tutorials on the art of dismembering a lethal but delectable scorpion. And a rash of studies has shown that the social behaviors of animals as varied as domestic dogs, capuchin monkeys and chimpanzees are not limited to routine concerns such as maintaining dominance status, caring for infants, and finding new mates and coalition partners. Rather they can readily respond to novel social situations, as when a subordinate animal with a unique skill gains favors from more dominant individuals.

These observations inspire a sense of wonder at the beauty of nature’s R&D solutions. But once we get over this frisson, we must confront the gap between humans and other species, a space that is cavernous, as our aliens reported. To fully convey the extent of this gap and the difficulty of deciphering how it arose, let me describe our humaniqueness in more detail.

Source of Information : Scientific American September 2009

Saturday, October 10, 2009

The Mind - Singularly Smart

If we scientists are ever to unravel how the human mind came to be, we must first pinpoint exactly what sets it apart from the minds of other creatures. Although humans share the vast majority of their genes with chimps, studies suggest that small genetic shifts that occurred in the human lineage since it split from the chimp line produced massive differences in computational power. This rearranging, deleting and copying of universal genetic elements created a brain with four special properties. Together these distinctive characteristics, which I have recently identified based on studies conducted in my lab and elsewhere, constitute what I term our humaniqueness.

The first such trait is generative computation, the ability to create a virtually limitless variety of “expressions,” be they arrangements of words, sequences of notes, combinations of actions, or strings of mathematical symbols. Generative computation encompasses two types of operation, recursive and combinatorial. Recursion is the repeated use of a rule to create new expressions. Think of the fact that a short phrase can be embedded within another phrase, repeatedly, to create longer, richer descriptions of our thoughts—for example, the simple but poetic expression from Gertrude Stein: “A rose is a rose is a rose.” The combinatorial operation, meanwhile, is the mixing of discrete elements to engender new ideas, which can be expressed as novel words (“Walkman”) or musical forms, among other possibilities.
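
Both operations are easy to mimic in a few lines of Python; the sketch below is only an analogy for the idea, not a model of how the mind does it.

# Recursion: embed a phrase inside a copy of itself to build ever-longer expressions.
def embed(phrase, depth):
    if depth == 0:
        return phrase
    return f"{phrase} is {embed(phrase, depth - 1)}"

print(embed("a rose", 2))                  # "a rose is a rose is a rose"

# Combination: mix discrete elements to coin a new word.
print(("walk" + "man").capitalize())       # "Walkman"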

The second distinguishing characteristic of the human mind is its capacity for the promiscuous combination of ideas. We routinely connect thoughts from different domains of knowledge, allowing our understanding of art, sex, space, causality and friendship to combine. From this mingling, new laws, social relationships and technologies can result, as when we decide that it is forbidden [moral domain] to push someone [motor action domain] intentionally [folk psychology domain] in front of a train [object domain] to save the lives [moral domain] of five [number domain] others.

Third on my list of defining properties is the use of mental symbols. We can spontaneously convert any sensory experience—real or imagined—into a symbol that we can keep to ourselves or express to others through language, art, music or computer code.

Fourth, only humans engage in abstract thought. Unlike animal thoughts, which are largely anchored in sensory and perceptual experiences, many of ours have no clear connection to such events. We alone ponder the likes of unicorns and aliens, nouns and verbs, infinity and God.

Although anthropologists disagree about exactly when the modern human mind took shape, it is clear from the archaeological record that a major transformation occurred during a relatively brief period of evolutionary history, starting approximately 800,000 years ago in the Paleolithic era and crescendoing around 45,000 to 50,000 years ago. It is during this period of the Paleolithic, an evolutionary eyeblink, that we see for the first time multipart tools; animal bones punctured with holes to fashion musical instruments; burials with accoutrements suggesting beliefs about aesthetics and the afterlife; richly symbolic cave paintings that capture in exquisite detail events of the past and the perceived future; and control over fire, a technology that combines our folk physics and psychology and allowed our ancestors to prevail over novel environments by creating warmth and cooking foods to make them edible.

These remnants of our past are magnificent reminders of how our forebears struggled to solve novel environmental problems and express themselves in creative new ways, marking their unique cultural identities. Nevertheless, the archaeological evidence will forever remain silent on the origins and selective pressures that led to the four ingredients making up our humaniqueness. The gorgeous cave paintings at Lascaux, for instance, indicate that our ancestors understood the dual nature of pictures—that they are both objects and refer to objects and events. They do not, however, reveal whether these painters and their admirers expressed their aesthetic preferences about these artworks by means of symbols that were organized into grammatical classes (nouns, verbs, adjectives) or whether they imagined conveying these ideas equally well through sound or sign, depending on the health of their sensory systems. Similarly, none of the ancient instruments that have been found—such as the 35,000-year-old flutes made of bone and ivory—tell a story about use, about whether a few notes were played over and over again, Philip Glass–style, or about whether the composer imagined, as did Wagner, embedding themes within themes in a recursive manner.

What we can say with utmost confidence is that all people, from the hunter-gatherers on the African savanna to the traders on Wall Street, are born with the four ingredients of humaniqueness. How these ingredients are added to the recipe for creating culture varies considerably from group to group, however. Human cultures may differ in their languages, musical compositions, moral norms and artifacts. From the viewpoint of one culture, another’s practices are often bizarre, sometimes distasteful, frequently incomprehensible and occasionally immoral. No other animal exhibits such variation in lifestyle. Looked at in this way, a chimpanzee is a cultural nonstarter.

Chimps and other animals are still interesting and relevant for understanding the origins of the human mind, though. In fact, only by working out which capacities we share with other animals and which are ours alone can scientists hope to piece together the story of how our humaniqueness came to be.



KEY INGREDIENTS OF THE HUMAN MIND
The four traits below distinguish the human mind from those of animals. Uncovering the origin of the human mind will require explaining how these unique properties came about.

Generative computation enables humans to create a virtually limitless variety of words, concepts and things. The characteristic encompasses two types of operation: recursive and combinatorial. Recursion is the repeated use of a rule to create new expressions. The combinatorial operation is the mixing of discrete elements to engender new ideas.

Promiscuous combination of ideas allows the mingling of different domains of knowledge—such as art, sex, space, causality and friendship—thereby generating new laws, social relationships and technologies.

Mental symbols encode sensory experiences both real and imagined, forming the basis of a rich and complex system of communication. Such symbols can be kept to oneself or expressed to others as words or pictures.

Abstract thought permits contemplation of things beyond what we can see, hear, touch, taste or smell.

Source of Information : Scientific American September 2009

Friday, October 9, 2009

The Mind

The first step in figuring out how the human mind arose is determining what distinguishes our mental processes from those of other creatures

Not too long ago three aliens descended to Earth to evaluate the status of intelligent life. One specialized in engineering, one in chemistry and one in computation. Turning to his colleagues, the engineer reported (translation follows): “All of the creatures here are solid, some segmented, with capacities to move on the ground, through the water or air. All extremely slow. Unimpressive.” The chemist then commented: “All quite similar, derived from different sequences of four chemical ingredients.” Next the computational expert opined: “Limited computing abilities. But one, the hairless biped, is unlike the others. It exchanges information in a manner that is primitive and inefficient but remarkably different from the others. It creates many odd objects, including ones that are consumable, others that produce symbols, and yet others that destroy members of its tribe.”

“But how can this be?” the engineer mused. “Given the similarity in form and chemistry, how can their computing capacity differ?” “I am not certain,” confessed the computational alien. “But they appear to have a system for creating new expressions that is infinitely more powerful than those of all the other living kinds. I propose that we place the hairless biped in a different group from the other animals, with a separate origin, and from a different galaxy.” The other two aliens nodded, and then all three zipped home to present their report.

Perhaps our alien reporters should not be faulted for classifying humans separately from bees, birds, beavers, baboons and bonobos. After all, our species alone creates soufflés, computers, guns, makeup, plays, operas, sculptures, equations, laws and religions. Not only have bees and baboons never made a soufflé, they have never even contemplated the possibility. They simply lack the kind of brain that has both technological savoir faire and gastronomical creativity.

Charles Darwin argued in his 1871 book The Descent of Man that the difference between human and nonhuman minds is “one of degree and not of kind.” Scholars have long upheld that view, pointing in recent years to genetic evidence showing that we share some 98 percent of our genes with chimpanzees. But if our shared genetic heritage can explain the evolutionary origin of the human mind, then why isn’t a chimpanzee writing this essay, or singing backup for the Rolling Stones or making a soufflé? Indeed, mounting evidence indicates that, in contrast to Darwin’s theory of a continuity of mind between humans and other species, a profound gap separates our intellect from the animal kind. This is not to say that our mental faculties sprang fully formed out of nowhere. Researchers have found some of the building blocks of human cognition in other species. But these building blocks make up only the cement footprint of the skyscraper that is the human mind. The evolutionary origins of our cognitive abilities thus remain rather hazy. Clarity is emerging from novel insights and experimental technologies, however.


Key Concepts
• Charles Darwin argued that a continuity of mind exists between humans and other animals, a view that subsequent scholars have supported.

• But mounting evidence indicates that, in fact, a large mental gap separates us from our fellow creatures. Recently the author identified four unique aspects of human cognition.

• The origin and evolution of these distinctive mental traits remain largely mysterious, but clues are emerging slowly.

Source of Information : Scientific American September 2009

Thursday, October 8, 2009

Surviving in the Suburbs

A fossil search for why some critters made it past the dinosaur-killing event BY CHARLES Q. CHOI

Outside Freehold, N.J.—The water is icy cold and the stone is slippery as I wade in up to my calves. Along the banks of this slow-flowing stream, guarded by prickly brambles, lies one of the richest caches of fossils dating back to the extinction that claimed the dinosaurs. The remains of marine creatures buried here, kept secret to prevent looting, tell an unusual tale: rather than dying off 65 million years ago, these creatures lived on afterward, albeit briefly. The discovery is causing scientists to rethink why some creatures survived the so-called KT extinction while others did not. Unlike this one, significant fossil sites tend to be found in exotic locales such as the searing hot Gobi Desert or the windswept pampas of Patagonia, areas remote from the kind of urban development that can ruin them. “You don’t expect to find them here in suburban New Jersey some 90 minutes away from New York City,” explains Neil Landman, curator of fossil invertebrates at the American Museum of Natural History.

The fossils here are not of dinosaurs, but ammonites. These cousins of squid and octopus were the iconic marine animals of the age of dinosaurs, flourishing worldwide for 300 million years or more before the KT extinction wiped them out. They bore shells that often resembled those of nautiluses, which rapidly evolved into hundreds of different shapes, ornamented with undulations and bumps.

Amateur paleontologist Ralph Johnson, a New Jersey park ranger, discovered ammonites in this stream in 2003 when construction workers exposed them while setting up bridge foundations. The site is now kept quiet from all but scientists—poachers have already trawled nearby areas, on the prowl for fossil shark teeth. Although this shallow, inconspicuous creek has no formal name on maps, after enduring many thorns on the way there, Landman and his team dubbed it Agony Creek.

At the time of the KT extinction, the water level at this site was some 30 meters higher than it is now. Landman investigates the iron-rich glauconite rocks here with his colleagues and students from the museum’s graduate school, using iron spikes and sledgehammers to knock off slabs that are picked apart with screwdrivers and fingers. They find the fossil bed rich with dozens of species of marine invertebrates, such as crabs, snails, clams, sea urchins, large flat oysters and ammonites, as well as fish teeth and scales.

Past digs unearthed ammonite shells up to some 35 centimeters wide. These lay amid pinna, triangular bivalves that all died here relatively undisturbed: they jut upward as they would have been posed in life. Their position suggests they were all snuffed out rapidly, “perhaps by a Pompeii-like disaster, like a pulse of mud,” Landman says. To see if these deaths were linked with the KT extinction, the researchers tested for iridium, the rare metal found throughout the world near the KT boundary, thought by most to be evidence of a cosmic impact.

Unexpectedly, the researchers discovered that the iridium was laid down before the pinna layer, which means that the ammonites and other creatures there died after the event “by 10 to maybe 100 years,” Landman concludes. Their survival runs “counter to everything we’ve been taught,” he adds. He plans to go to a site in Denmark to retrieve more potential evidence of ammonite survival past KT.

Their existence in the post-KT world raises a host of questions. “If they made it through this event like they did through other mass extinctions, why didn’t they take off again?” asks invertebrate paleontologist Peter Harries of the University of South Florida. “Why did the ancestors of the modern nautiluses make it through and not the ammonites? That’s extremely intriguing to me, and the broader message to me is that mass-extinction events are much more complex than we think.”

The trove’s proximity to cities is a bit of a double-edged sword. “In Mongolia, you don’t really have the danger that a good site today might be paved over by asphalt tomorrow, and you can’t walk into people’s backyards,” Landman remarks. “But who knows if we could have found this site otherwise without urban development.”

Source of Information : Scientific American September 2009

Wednesday, October 7, 2009

Animals by the Numbers

Counting may be an innate ability among many species BY MICHAEL TENNESEN

Scientists have been skeptical of claims of mathematical abilities in animals ever since the case of Clever Hans about 100 years ago. The horse, which performed arithmetic and other intellectual tasks to delighted European audiences, was in reality simply taking subconscious cues from his trainer. Modern examples, such as Alex the African grey parrot, which could count up to six and knew sums and differences, are seen by some as special cases or the product of conditioning.

Recent studies, however, have uncovered new instances of a counting skill in different species, suggesting that mathematical abilities could be more fundamental in biology than previously thought. Under certain conditions, monkeys could sometimes outperform college students.

In a study published last summer in the Proceedings of the Royal Society B, Kevin C. Burns of Victoria University of Wellington in New Zealand and his colleagues bored holes in fallen logs and stored varying numbers of mealworms (beetle larvae) in these holes in full view of wild New Zealand robins at the Karori Wildlife Sanctuary. Not only did the robins flock first to the holes with the most mealworms, but if Burns tricked them, removing some of the insects when they weren’t looking, the robins spent twice as long scouring the hole for the missing mealworms. “They probably have some innate ability to discern between small numbers” such as three and four, Burns thinks, but they also “use their number sense on a daily basis, and so through trial and error, they can train themselves to identify numbers up to 12.”

More recently, in the April issue of the same Royal Society journal, Rosa Rugani of the University of Trento in Italy and her team demonstrated arithmetic in newly hatched chickens. The scientists reared the chicks with five identical objects, and the newborns imprinted on these objects, considering them their parents. But when the scientists subtracted two or three of the original objects and left the remainders behind screens, the chicks went looking for the larger number of objects, sensing that Mom was more like a three and not a two. Rugani also varied the size of the objects to rule out the possibility the chicks were identifying groups based simply on the fact that larger numbers of items take up more space than smaller numbers.

For the past five years Jessica Cantlon of the University of Rochester has been conducting a series of experiments with rhesus monkeys that shows how their numerical skills can rival those of humans. The monkeys, she found, could choose the lesser of two sets of objects when they were the same in size, shape and color. And when size, shape and color were varied, the monkeys showed no change in accuracy or reaction time. One animal, rewarded with Kool-Aid, was 10 to 20 percent less accurate than college students but beat them in reaction time. “The monkey didn’t mind missing every once in a while,” Cantlon recounts. “It wants to get past the mistake and on to the next problem where it can get more Kool-Aid, whereas college students can’t shake their worry over guessing wrong.”

Elizabeth Brannon of Duke University has conducted similar experiments with rhesus monkeys, getting them to match the number of sounds they hear to the number of shapes they see, proving they can do math across different senses. She also tested the monkeys’ ability to do subtraction by covering a number of objects and then removing some of them. In all cases, the monkeys picked the correct remainder at a rate greater than chance. And although they might not grasp the deeper concept of zero as a number, the monkeys knew it was less than two or one, conclude Brannon and her colleagues in the May Journal of Experimental Psychology: General.

Although Brannon feels that animals do not have a linguistic sense of numbers—they aren’t counting “one, two, three” in their heads—they can do a rough sort of math by summing sets of objects without actually using numbers, and she believes that ability is innate. Brannon thinks that it might have evolved from the need for territorial animals “to assess the different sizes of competing groups and for foraging animals to determine whether it is good to stay in one area given the amount of food retrieved versus the amount of time invested.”

Irene Pepperberg of the Massachusetts Institute of Technology, famous for her 30-year work with Alex the parrot, says that even bees can learn to discriminate among small quantities. “So some degree of ‘number sense’ seems to be able to be learned even in invertebrates, and such learning is unlikely without some underlying neural architecture on which it is based,” she remarks.

Understanding the biological basis of number sense in animals could have relevance to people. According to Brannon, it may suggest to childhood educators that math, usually taught after age four or five, could actually be introduced earlier into the curriculum.

Source of Information : Scientific American September 2009