Wednesday, September 30, 2009

Powering NANOROBOTS - Steering Committee

One limitation of our first fluid-immersed nanorods was that they moved in random directions and were continuously undergoing random turns because of Brownian motion. In realistic applications, of course, nanomachines will need some mechanism to steer them toward their destination.

Our first attempt to solve the steering problem relied on a magnetic field. We embedded nickel disks in the rods. These disks react to magnetic fields like tiny compasses with their north-pole to south-pole axes perpendicular to the length of the cylinders.
A refrigerator magnet held a few millimeters away exerts enough torque on a cylinder to overcome Brownian motion’s tendency to turn the cylinder around at random. The only remaining force is along the length of the rod, supplied by the catalytic reaction. Our nanorods then move in straight lines and can be steered by turning the magnet. This motion is analogous to the behavior of bacteria that align themselves with the earth’s weak magnetic field. Similar motors can navigate in a micron-scale magnetic labyrinth, following the field lines through twists and turns.
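
A rough criterion makes this quantitative (our addition; the article itself stays qualitative). The magnet steers reliably once the magnetic alignment energy of the embedded disk exceeds the thermal energy driving Brownian rotation:

\[
U_{\mathrm{mag}} = mB \gg k_B T \approx 4\times10^{-21}\,\mathrm{J} \quad (T \approx 300\,\mathrm{K}),
\]

where m is the nickel disk's magnetic moment and B the applied field. A nickel disk a few hundred nanometers across carries a moment large enough that even the millitesla-scale field of a nearby refrigerator magnet clears this threshold by several orders of magnitude.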

Last year Crespi and one of us (Sen) showed that the magnetically steered motors are able to pull “cargo” containers—plastic spheres about 10 times their size—through fluids. Many interesting applications can be envisioned for such cargo-bearing motors. For example, they could deliver drugs to particular cells in the body or shuttle molecules along a nanoscale assembly line, where the cargo could chemically bind to other molecules.

Steering nanorobots externally could be useful in some applications; for others, it will be essential that nanorobots be able to move autonomously. Velegol and Sen were excited to discover recently that our catalytic nanorods can follow chemical “bread crumb trails” the way bacteria do. Typically a bacterium moves by a series of straight runs interrupted by random turns. But when a straight run happens to carry it up a chemical gradient (for example, the scent of food becoming more intense closer to the food itself), the bacterium extends the length of that run. Because favorable runs last longer than those in unfavorable directions, the net effect is that the bacterium eventually converges on its target, even though it has no direct way to steer itself—a strategy called chemotaxis. Our nanomotors move faster at higher concentrations of fuel, and this tendency effectively lengthens their straight runs. Consequently, they move on average toward a source of fuel, such as a gel particle soaked with hydrogen peroxide.
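
A toy simulation captures the mechanism (our illustration; the speeds, turn rate and fuel gradient are invented for the example and are not the group's data). The walker below reorients at random at a fixed rate but swims faster where there is more fuel; averaged over many walkers, the displacement drifts toward the fuel even though every individual turn is blind.

import math, random

def speed(x):
    # Toy fuel field: swimming speed rises with the local H2O2
    # concentration, which here increases linearly with x (capped).
    return min(3.0, max(0.5, 1.0 + 0.05 * x))

def walker(rng, t_total=20.0, dt=0.01, tumble_rate=0.5):
    x = y = 0.0
    theta = rng.uniform(0.0, 2.0 * math.pi)
    for _ in range(int(t_total / dt)):
        if rng.random() < tumble_rate * dt:      # a random Brownian turn
            theta = rng.uniform(0.0, 2.0 * math.pi)
        v = speed(x)                             # faster runs in richer fuel
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
    return x

rng = random.Random(42)
finals = [walker(rng) for _ in range(1000)]
print("mean final x: %.2f" % (sum(finals) / len(finals)))

Any single trajectory still looks like an aimless scribble; only the average over many runs reveals the bias toward the fuel source.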

More recently, the two of us have also demonstrated motor particles that are driven by light, or phototaxis. These particles use light to break up molecules and create positive and negative ions. The two types of ions diffuse away at different speeds, setting up an electric field that causes the particles to move. Depending on the nature of the ions released and the charge on the particle, the particles are driven toward or away from the region of highest light intensity. An interesting twist on this technique is a light-driven system in which some particles act as “predators” and others as “prey.” In this case, one kind of particle gives off ions that cause the second kind to be driven toward it. The correlated motion of these particles bears a striking resemblance to white blood cells chasing down a bacterium.
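
The electric field that develops has a standard textbook form (our addition; the article stays qualitative). For a salt whose positive and negative ions have diffusion coefficients D+ and D-, the field along the ion-concentration gradient is

\[
E = \frac{k_B T}{e}\,\frac{D_+ - D_-}{D_+ + D_-}\,\nabla \ln c,
\]

where c is the local ion concentration and e the elementary charge. If the two ions diffused at equal speeds the field would vanish; the larger the mismatch and the steeper the gradient, the stronger the push, and the sign of a particle's surface charge determines whether it travels toward or away from the light.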

Chemotaxis and phototaxis are still at the proof-of-principle stage, but they could lead to the design of “smart,” autonomous nanorobots, which could move independently toward their target, perhaps by harvesting energy from glucose or other fuels abundant inside organisms or in the environment. Our work can also be a starting point for the design of new robots that could communicate chemically with one another and perform collective functions, such as moving in swarms and forming patterns.

Source of Information : Scientific American(2009-05)

Tuesday, September 29, 2009

Powering NANOROBOTS - Credible Shrinking

Our miniaturized version of the Harvard engine was a gold-platinum rod about as long as a bacterial cell (two microns) and half as wide (350 nanometers). Our rods were mixed into the solution, rather than floating on the surface. Like the ATP-powered molecular motors inside the cell, these tiny catalytic cylinders were essentially immersed in their own fuel. And they did indeed move autonomously, at speeds of tens of microns per second, bearing an eerie resemblance under the microscope to live swimming bacteria [see video at www.SciAm.com/nanomotor]. As often happens in science, however, the hypothesis that led to the experiment was wrong. We had imagined our nanorods spewing tiny bubbles off their back and being pushed along by recoil. But what they actually do is more interesting, because it reminds nanotechnologists that we must think very differently about motion on small length scales.

At the macroscale, the notion of recoil makes good sense. When someone swims or rows a boat, their arms, legs or oars push water backward, and the recoil force pushes the body or boat forward. In this way, a swimmer or boat can glide forward even after one stops pushing. How far an object glides is determined by the viscous force, or drag, and by the inertia, a body’s resistance to changes in its velocity. The drag is proportional to the object’s width, whereas the inertia is proportional to the object’s mass, which in turn is proportional to the width to the third power. For smaller objects, inertia scales down much faster than drag, becoming negligible, so that drag wins out. On the micron scale, any gliding ends in about one microsecond, and the glide distance is less than one-hundredth of a nanometer. Hence, for a micron-size body in water, swimming is a bit like wading through honey. A nanomotor has no memory of anything that pushed on it, because it has no inertia, and inertial propulsion schemes (such as drifting along after the recoil from bubbles) are hopeless.

The way our nanorods actually work is that they apply a continuous force to prevail over the drag, with no need for gliding. At the platinum end, each H2O2 molecule is broken down into an oxygen molecule, two electrons and two protons. At the gold end, electrons and protons combine with each H2O2 molecule to produce two water molecules. These reactions generate an excess of protons at one end of the rod and a dearth of protons at the other end; consequently, the protons must move from platinum to gold along the surface of the rod.
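
The numbers in this argument are easy to check. The sketch below (a back-of-the-envelope of ours, assuming a micron-size sphere of roughly water density obeying Stokes drag) reproduces both figures quoted above:

import math

# Assumed round numbers for a micron-scale swimmer in water
radius = 1e-6        # m, about the size of a bacterium
density = 1000.0     # kg/m^3, close to water
viscosity = 1e-3     # Pa*s, water at room temperature
speed = 10e-6        # m/s, i.e. tens of microns per second

mass = (4.0 / 3.0) * math.pi * radius**3 * density
drag = 6.0 * math.pi * viscosity * radius    # Stokes drag coefficient

tau = mass / drag      # time constant for coasting to a halt
glide = speed * tau    # distance covered while coasting

print("coasting time: %.1e s" % tau)              # ~2e-07 s, a fraction of a microsecond
print("glide distance: %.1e nm" % (glide * 1e9))  # ~2e-03 nm, well under one-hundredth of a nanometer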

Like all positive ions in water, protons attract the negatively charged regions of water molecules and thus drag water molecules along as they move, propelling the rod in the opposite direction, as dictated by Newton’s third law of motion: every action has an equal and opposite reaction.

Once this principle was established (with the help of our students and our Penn State collaborators Vincent H. Crespi, Darrell Velegol and Jeffrey Catchmark), several other catalytic nanomotor designs followed. And Adam Heller’s research group at the University of Texas at Austin and Joseph Wang’s group at Arizona State University showed that mixtures of different fuels—glucose and oxygen or H2O2 and hydrazine—could make motors run faster than they do with a single fuel.

Whereas freely suspended metal nanorods move with respect to the bulk solution, an immobilized metal structure in the presence of H2O2 will induce fluid flows at the interface between the structure and the fluid, thereby potentially powering the motion of something else immersed in the fluid. We have demonstrated this fluid-pumping effect on a gold surface patterned with silver.

Source of Information : Scientific American(2009-05)

Monday, September 28, 2009

Powering NANOROBOTS

Catalytic engines enable tiny swimmers to harness fuel from their environment and overcome the weird physics of the microscopic world.

Imagine that we could make cars, aircraft and submarines as small as bacteria or molecules. Microscopic robotic surgeons, injected in the body, could locate and neutralize the causes of disease—for example, the plaque inside arteries or the protein deposits that may cause Alzheimer’s disease. And nanomachines—robots having features and components at the nanometer scale—could penetrate the steel beams of bridges or the wings of airplanes, fixing invisible cracks before they propagate and cause catastrophic failures.

In recent years chemists have created an array of remarkable molecular-scale structures that could become parts of minute machines. James Tour and his co-workers at Rice University, for instance, have synthesized a molecular-scale car that features four buckyballs (carbon molecules shaped like soccer balls) as its wheels and is some 5,000 times smaller than a human cell.

But look under the hood of the nanocar, and you will not find an engine. Tour’s nanocars so far move only insofar as they are jostled by random collisions with the molecules around them, a process known as Brownian motion. This is the biggest current problem with molecular machines: we know how to build them, but we still do not know how to power them.

At the scales of living cells or smaller, that task poses some unique challenges. Air and water feel as thick as molasses, and Brownian motion militates against forcing molecules to move in precise ways. In such conditions, nanoscale versions of motors such as those that power cars or hair dryers—assuming that we knew how to build them that small—could never even start.

Nature, in contrast, provides many examples of nanomotors. To see the things they can do, one need only look at a living cell. The cell uses nanoengines to change its shape, push apart its chromosomes as it divides, construct proteins, engulf nutrients, shuttle chemicals around, and so on. All these motors, as well as those that power muscle contractions and the corkscrew motion of bacterial flagella, are based on the same principle: they convert chemical energy—usually stored as adenosine triphosphate, or ATP—into mechanical energy. And all exploit catalysts, compounds able to facilitate chemical reactions such as the breakdown of ATP. Researchers are now making exciting progress toward building artificial nanomotors by applying similar principles.

In 2004 we were part of a team at Pennsylvania State University that developed simple nanomotors that catalytically convert the energy stored in fuel molecules into motion. We took inspiration from a considerably larger catalytic motor reported in 2002 by Rustem Ismagilov and George Whitesides, both at Harvard University. The Harvard team had found that centimeter-scale “boats” with catalytic platinum strips on their stern would spontaneously move on the surface of a tank of water and hydrogen peroxide (H2O2). The platinum promoted the breakup of H2O2 into oxygen and water, and bubbles of oxygen formed that seemed to push the boats ahead by recoil, the way the exhaust coming out the back of a rocket gives it forward thrust.


Key Concepts
• Nanotechnology promises futuristic applications such as microscopic robots that assemble other machines or travel inside the body to deliver drugs or do microsurgery.

• These machines will face some unique physics. At small scales, fluids appear as viscous as molasses, and Brownian motion makes everything incessantly shake.

• Taking inspiration from the biological motors of living cells, chemists are learning how to power microsize and nanosize machines with catalytic reactions.


Source of Information : Scientific American(2009-05)

Sunday, September 27, 2009

Women, testosterone and finance - Risky business

Hormones, not sexism, explain why fewer women than men work in banks

THAT the risk-taking end of the financial industry is dominated by men is unarguable. But does it discriminate against women merely because they are women? Well, it might. But a piece of research just published in the Proceedings of the National Academy of Sciences by Paola Sapienza of Northwestern University, near Chicago, suggests an alternative—that it is not a person’s sex, per se, that is the basis for discrimination, but the level of his or her testosterone. Besides being a sex hormone, testosterone also governs appetite for risk. Control for an individual’s testosterone levels and, at least in America, the perceived sexism vanishes.

Dr Sapienza and her colleagues worked with aspiring bankers (MBA students from the University of Chicago). They measured the amount of testosterone in their subjects’ saliva. They also estimated the students’ exposure to the hormone before they were born by measuring the ratios of their index fingers to their ring fingers (a long ring finger indicates high testosterone exposure) and by measuring how accurately they could determine human emotions by observing only people’s eyes, which also correlates with prenatal exposure to testosterone.

The students were then presented with 15 risky choices. In each they had to decide between a 50:50 chance of getting $200 or a gradually increasing sure payout, which ranged from $50 up to $120. (Some of this money was actually paid over at the end of the experiment, to make the consequences real.) The point at which a participant decided to switch from the gamble to the sure thing was reckoned a reasonable approximation of his appetite for risk.
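
In expected-value terms the gamble is worth $100, so the switching point is a direct read-out of risk appetite. A minimal sketch of the interpretation (ours, not the researchers' code):

# The gamble: a 50:50 chance of $200 or nothing, expected value $100.
EXPECTED_VALUE = 0.5 * 200 + 0.5 * 0

def risk_profile(switch_point):
    # switch_point: the sure payout at which the subject abandoned the gamble
    premium = EXPECTED_VALUE - switch_point   # what certainty cost them on average
    if premium > 0:
        return "risk-averse: paid $%d for certainty" % premium
    if premium == 0:
        return "risk-neutral"
    return "risk-loving: held out $%d beyond the expected value" % -premium

for sp in (50, 80, 100, 120):
    print("switched at $%d -> %s" % (sp, risk_profile(sp)))

The earlier a subject grabbed the sure money, the more risk-averse; anyone still gambling past the $100 expected value was, by this measure, risk-loving.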

As the researchers suspected, women and men with the same levels of testosterone generally switched at the same time, demonstrating similar risk preferences. In other words, women who had more testosterone were more risk-loving than women with less, while the data for men at the lower end of the spectrum displayed a similar relationship. Curiously, the relationship between testosterone and risk taking was not as strong for men with moderate to high levels of the stuff, though previous studies have shown this relationship can be significant as well.

In all cases the correlation was strongest when the salivary measure of testosterone was used, suggesting that it is the here and now, rather than the developmental effects of testosterone on the brain, that is making the difference.

The researchers then followed the subjects’ progress after they graduated, to see what sort of careers they entered. As expected, men were more likely than women to choose a risky job in finance. Again, though, the difference was accounted for entirely by their levels of salivary testosterone. The researchers also studied the subjects’ personal investment portfolios after they had graduated, once in June 2008 (pre-crash) and again in January 2009 (post-crash). In a paper that has yet to be published, they demonstrate that the riskiness of these portfolios, too, was strongly correlated with subjects’ responses in the lottery game. The past year has, presumably, been kind to those with low testosterone levels.

Source of Information : The Economist August 29 2009

Saturday, September 26, 2009

Folic acid - On the pill

Fortification programmes may lead to overconsumption of folic acid

MOST people who seek to lead a healthy lifestyle know that they should eat an array of fruits and vegetables every day. But when good intentions go awry, or just as an insurance policy, there are always vitamin pills.

In particular, women hoping to become pregnant are advised in many countries to take double the recommended dose of folic acid. But because some people do not plan their pregnancies, several countries require that the grains used to make bread, breakfast cereals and suchlike be fortified with folic acid. It has worked. The Centres for Disease Control and Prevention, an American government agency, says that after fortification began, in 1998, the reported prevalence of spina bifida in babies declined by 31%. Research is also going on into whether high doses of the chemical prevent cardiovascular disease, strokes and cognitive decline. But can people consume too much folic acid?

The results of a study published in the Proceedings of the National Academy of Sciences suggest they can. Folic acid is a precursor of folate, a vitamin found in foods such as spinach and oranges. It is added to other foods because it was once thought to be the active vitamin. In fact, it is converted to folate in the liver by the addition of four hydrogen atoms.

However, Steven Bailey of the University of South Alabama and Bruce Ames of the University of California, Berkeley, warn that the liver has only a limited ability to make this conversion. This discovery is consistent with reports that unmetabolised folic acid is found in human blood and urine.

The good news is that the recommended daily dose of 0.4 mg is converted into folate in most people. The bad is that the amount put into cereals in America can lead people to consume up to 0.8 mg per standard serving. On top of this, pregnant women may be consuming a similar amount of folic acid from supplement pills.

The researchers warn that intakes of folic acid of more than 1 mg a day, from whatever source, will increase the body’s exposure to circulating unmetabolised folic acid. This is not to be recommended, because high doses of folic acid are suspected of exacerbating certain cancers. That concern has led some countries, and the European Union collectively, to put programmes for grain fortification on hold. Dr Bailey and Dr Ames stress that folic-acid pills are good during pregnancy, and that aspiring mothers should not give them up. But for the rest the message is clear: eat your spinach.

Source of Information : The Economist August 29 2009

Friday, September 25, 2009

The virtues of biochar - A new growth industry?

Biochar could enrich soils and cut greenhouse gases as well

CHARCOAL has rather gone out of fashion. Before the industrial revolution, whole forests disappeared into the charcoal-burners’ maw to provide the carbon that ironmakers need to reduce their ore to metal. Then, an English ironmaker called Abraham Darby discovered how to do the job with coke. From that point onward, the charcoal-burners’ days were numbered. The rise of coal, from which coke is produced, began, and so did the modern rise of carbon dioxide in the atmosphere.

It is a sweet irony, therefore, that the latest fashion for dealing with global warming is to bring back charcoal. It has to be rebranded for modern consumers, of course, so it is now referred to as “biochar”. But there are those who think biochar may give humanity a new tool to attack the problem of global warming, by providing a convenient way of extracting CO2 from the atmosphere, burying it and improving the quality of the soil on the way.

Many of those people got together recently at the University of Colorado, to discuss the matter at the North American Biochar Conference. They looked at various ways of making biochar, the virtues of different raw materials and how big the benefits really would be.

The first inkling that putting charcoal in the ground might improve soil quality came over a century ago, when an explorer named Herbert Smith noticed that there were patches of unusually rich soils in the Amazon rainforest in Brazil. Most of the forest’s soil is heavily weathered and of poor quality. But the so-called “terra preta”, or “black earth”, is much more fertile.

This soil is found at the sites of ancient settlements, but it does not appear to be an accidental consequence of settlement. Rather, it looks as though the remains of burned plants have been mixed into it deliberately. And recently, some modern farmers, inspired by Wim Sombroek, a Dutch soil researcher who died in 2003, have begun to do likewise.



Char grilled
The results are impressive. According to Julie Major, of the International Biochar Initiative, a lobby group based in Maine, infusing savannah in Colombia with biochar made from corn stover (the waste left over when maize is harvested) caused crops there to tower over their char-less peers. Christoph Steiner, of the University of Georgia, reported that biochar produced from chicken litter could do the same in the sandy soil of Tifton in that state. And David Laird, of America’s Department of Agriculture, showed that biochar even helped the rich soil of America’s Midwest by reducing the leaching from it of a number of nutrients, including nitrate, phosphate and potassium.

All of which is interesting. But it is the idea of using biochar to remove carbon dioxide from the atmosphere on a semi-permanent basis that has caused people outside the field of agriculture to take notice of the stuff. Sombroek wrote about the possibility in 1992, but only now is it being taken seriously.

In the natural carbon cycle, plants absorb CO2 as they grow. When they die and decompose, that carbon returns to the atmosphere. If, however, they are subjected instead to pyrolysis—a process of controlled burning in a low-oxygen atmosphere—the result is charcoal, a substance that is mostly elemental carbon. Although life is, in essence, a complicated form of carbon chemistry, living creatures cannot process carbon in its elemental form. Charcoal, therefore, does not decay very fast. Bury it in the soil, and it will stay there. Some of the terra preta is thousands of years old.

Moreover, soil containing biochar releases less methane and less nitrous oxide than its untreated counterparts, probably because the charcoal acts as a catalyst for the destruction of these gases. Since both of these chemicals are more potent greenhouse gases than carbon dioxide, this effect, too, should help combat global warming. And the process of making biochar also creates beneficial by-products. These include heat from the partial combustion, a gaseous mixture called syngas that can be burned as fuel, and a heavy oil.

Taking all these things together—the burial of the charcoal and the substitution for fossil fuels of the heat, gas and oil produced by its manufacture—Johannes Lehmann of Cornell University and Jim Amonette of the Pacific Northwest National Laboratory in Washington state suggest that a reduction of between one and two gigatonnes of carbon emission a year might be achievable. That compares with current annual emissions of some 9.7 gigatonnes. But the truth is that the computer modelling involved in making these estimates is a work in progress, as researchers do not know a lot of pertinent things accurately enough: how much material is available for conversion, for example; how much land is available for biochar to be ploughed into; how much char that land could handle. Dr Amonette’s estimate is that 50 tonnes per hectare—a figure larger than that used in most of the experiments conducted so far—could go into soils without harming productivity. Some soils could take even more.
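
A quick back-of-the-envelope calculation (ours, and subject to exactly the modelling uncertainties just listed) puts those figures in proportion:

# Figures quoted above
annual_emissions_gt = 9.7                 # gigatonnes of carbon per year
reduction_low, reduction_high = 1.0, 2.0  # achievable reduction, gigatonnes
char_per_hectare_t = 50.0                 # tonnes of char a hectare might take

print("share of emissions offset: %.0f%% to %.0f%%"
      % (100 * reduction_low / annual_emissions_gt,
         100 * reduction_high / annual_emissions_gt))

# If one gigatonne of the reduction came purely from buried char,
# the farmland receiving its full 50-tonne dose that year would be:
hectares = reduction_low * 1e9 / char_per_hectare_t
print("land dosed: %.0f million hectares" % (hectares / 1e6))   # ~20m ha, ~200,000 km^2

The answer, on the order of 20m hectares per gigatonne, shows why the questions of available material and available land dominate the modelling.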

The claims for biochar are not supported by all, however. Biofuelwatch, a British lobby group, worries that a craze for the stuff could see virgin land tilled specifically to grow crops such as switchgrass, whose only purpose was to be pyrolised and buried. That tillage would release carbon dioxide and methane. But the alternative, growing those crops on existing farmland, would encourage the clearance of more land to grow the food crops that had been displaced. Indeed, Kelli Roberts, another researcher at Cornell, told the meeting that, taking all factors into account, growing switchgrass for biochar may do more harm than good. Corn stover, garden waste such as grass clippings, and offcuts from forestry and timber production are better bets, she reckons.

And if sequestration by biochar is deemed sensible, there remains the question of how, exactly, to go about it. Making the charcoal is not a problem. Pyrolising stoves are easy to construct and available models range from the portable to industrial-scale machines costing tens of thousands of dollars. Moreover, Jock Gill of Pellet Futures, a company based in Vermont that makes grass and wood pellets for use as fuel, told the meeting that a teenage protégé of his has invented a stove that can be fed continuously, rather than processing batches of raw material. If that proves successful, it would be a breakthrough of the sort that has enabled other industries (not least ironmaking) to take off in the past.

The benefits of improving their soil should be enough to persuade some farmers to make and bury biochar. Others, though, may need more incentives—probably in the form of carbon “offsets” that compensate for emissions elsewhere. In the rich world, Europe already caps carbon-dioxide emissions, and trades permission to emit the gas. America may soon do so too. CO2-emitting industries could pay farmers to buy stoves to char and sequester farm waste. That would mean working out how much of what kind of biochar counts as a tonne of CO2 sequestered, and would also need a lot of policing.



A charcoal sketch
If the details can be nailed down, though, farmers in poor countries could get in on the act too, through the Clean Development Mechanism, a United Nations programme that allows rich-world emitters to buy offsets in the poor world. And Lakshman Guruswamy, of the University of Colorado, told the meeting of another advantage if poor-world farmers can be brought in. Many of them burn wood, waste and dung indoors for heating and cooking. The soot released into the air as a consequence is also a climate-changer because, being dark, it absorbs heat. Much worse, though, about 1.6m people are killed each year by inhaling it. But pyrolytic stoves produce almost no soot—the carbon is all locked into the biochar. Worldstove, a firm based in Italy, seeks to provide small and simple pyrolising stoves to poor countries.

It is all, then, an intriguing idea. It certainly will not solve the carbon-dioxide problem, but it could be what Robert Socolow of Princeton University refers to as a wedge—one of a series of slices that, added together, do solve it. And there would be a nice historical justice in the substance that was displaced by coal playing an important role in cleaning up the mess that coal has left behind.

Source of Information : The Economist August 29 2009

Thursday, September 24, 2009

Progress in Tissue Engineering - Coming of Age

Despite the ongoing challenges we have described, engineered tissues are no longer a fantastical prospect. Simple manufactured tissues are already in clinical use, and this method of restoring or replacing biological function is now poised to become a viable therapy for millions of patients in need. As of late 2008, various tissue-engineered products generated annual sales of nearly $1.5 billion.

Those figures are all the more impressive in light of setbacks to the field that occurred shortly after we last wrote for this magazine about the promise of tissue engineering. At the end of the 1990s and into the early 2000s, enthusiasm and investment were high, but with the burst of the Internet financial bubble, funding for biotechnology start-ups dwindled. Even companies with tissue-engineered products approved by the Food and Drug Administration had to restructure their business models, delaying introduction of their products to the market. Because engineered tissues are made from cells, biologically active chemicals and nonbiological scaffold materials, the constructs must undergo rigorous analysis by the FDA, which is costly and time-consuming. A lack of funding made conducting extensive clinical trials more difficult for companies. Ironically, the delay in commercializing some tissue-engineered products had an upside—it bought time for the science to mature and for business approaches to become more sophisticated.

There is still room for improvement. Obtaining FDA approval is still a major hurdle, in part because cells obtained from different people may not behave alike and because recipients can have varying responses to the same kind of implant. Such unpredictability can make it difficult for the FDA to determine that a given engineered construct is safe and effective. Further research is therefore important to measure and understand variations between individuals and to account for them in clinical trials that study tissue-engineered products. And future business models must include the extensive costs that will be associated with this work.

Still, armed with recent insights into how tissues develop and how the body repairs itself naturally, tissue engineers are now aiming to create second-generation products that are closer mechanically, chemically and functionally to their biological counterparts than ever before. Even in today’s strained economic climate, we expect that research into nanotechnology, stem cell biology, systems biology and tissue engineering will soon converge to yield fresh ideas for devising the sophisticated organ substitutes needed by so many people today.

Source of Information : Scientific American(2009-05)

Wednesday, September 23, 2009

Progress in Tissue Engineering - Architectural Advances

A decade ago researchers assumed that cells are smart: if we put the correct cell types in proximity to one another, they would “figure out” what to do to form their native tissues. To some degree, this approach is effective, but we now have a greater appreciation of the intricacy of signals exchanged among cells and their surroundings during organ and tissue development as well as during normal functioning, and we know the importance of providing a tailored environment in our constructs.

Further, every tissue in the body performs specific tasks that engineered replacements must be able to perform, and we are learning that replicating the underlying biology of the tissue in question as closely as possible is critical to generating tissues that can carry out their intended functions. In more complex organs, multiple cell types work in concert—in the liver, for instance, the cells’ jobs include detoxification and nutrient breakdown. Thus, the microarchitecture of tissues and the positioning of cells relative to one another must be re-created in tissue-engineered constructs to reproduce the desired functionality. Early tissue-engineering work used scaffolds made from assorted materials to try to replicate the 3-D shape of the tissue as well as crudely approximate this spatial cell organization. A number of advances in the past few years have enhanced the level of complexity of engineered tissues and reproduced the tissue environment more closely. For example, scaffolds have been made by removing all the cells from natural tissues, leaving only connective fibers. These shells can be used to grow engineered tissues that recreate a significant amount of the function of the original tissue. In one particularly impressive study, decellularized rodent heart scaffolds that were seeded with cardiac and endothelial cells produced cardiac muscle fibers and vascular structures that grew into a beating heart.

Assorted “printing” technologies can also be used to arrange cells precisely. By modifying standard ink-jet printers, engineers can dispense cells themselves or scaffold materials to generate tissues or frameworks onto which cells can be seeded. Mimicking the tissue’s natural topography also helps to guide the cells, and another technology borrowed from the engineering world, electrospinning, can produce scaffolds that resemble the texture of natural tissue matrix. Very thin polymer fibers are spun to form weblike scaffolds, which provide cells with a more natural 3-D environment, and the chemical and mechanical features of the polymer materials can be finely manipulated. David Kaplan of Tufts University has fashioned similar scaffolds from silk materials that resemble spider webs to generate ligaments and bone tissues.

Because the biological, chemical and mechanical properties of hydrogels can be readily manipulated, the gels are proving useful for supporting and encasing cells while enhancing the function of the resulting tissues. Hydrogels containing live cells can be “printed” or otherwise arranged and layered to delineate correct tissue structure. One of us (Khademhosseini) has shown, for example, that hydrogel-encased cell aggregates can be molded into any number of complementary shapes, then pooled together to self-organize into a larger complex pattern. This technique could be used to replicate the natural organization of cells in a tissue such as the liver, which is made up of hexagonal structures that each contain toxin-filtering cells surrounding a central blood vessel.

Some gels are designed so that their polymers link together in response to ultraviolet light, making it possible to sculpt the desired construct shape and then solidify it by exposing all or parts of the construct to light. Kristi Anseth of the University of Colorado at Boulder and Jennifer Elisseeff of Johns Hopkins University have generated cartilage and bone tissue using such photocrosslinkable hydrogels. Gels can also be imbued with a number of signaling molecules to promote tissue growth or differentiation. Samuel Stupp of Northwestern University has shown that neural stem cells differentiate into neurons within a hydrogel that incorporates small proteins that act as environmental signals directing the cells’ behavior. Helen M. Blau of Stanford University has also used hydrogels containing extracellular matrix components to control and study the properties of individual stem cells.

Finally, nanotechnology has been enlisted to generate engineered sheets of cells suitable for transplantation. Teruo Okano of Tokyo Women’s Medical University has produced surfaces coated with a temperature-responsive polymer that swells as the temperature is lowered from 37 to 20 degrees Celsius. Cells are first induced to form a single layer on these nanoengineered surfaces, then the temperature is lowered to swell the underlying substrate and detach the intact cell sheet. These cell sheets, which contain appropriate cell-secreted matrix molecules, can then be stacked or rolled to build larger tissue constructs.

Although these advances have made a significant improvement in the range and diversity of scaffolds that can be generated, challenges persist in this area as well. One difficulty is a lack of knowledge of the concentrations and combinations of growth factors and extracellular molecules that are present at specific stages of development and wound healing in various tissues. A better understanding of these design parameters is needed to engineer tissues that mimic the body’s own healing and development. Thus, tissue engineers are looking to other fields for insights, including studies of gene and protein interactions in developing tissues and regenerating wounds. Incorporating these findings with advanced culture systems is helping us to better control the responses of cells outside the body, but more progress is needed.

Source of Information : Scientific American(2009-05)

Tuesday, September 22, 2009

Progress in Tissue Engineering - Suitable Cells

In most situations, building an implantable tissue from a patient’s own cells would be ideal because they are compatible with that person’s immune system. Realistically, such implants might also face fewer regulatory hurdles because the material is derived from the patient’s own body. The ability of normal cells to multiply in culture is limited, however, making it difficult to generate sufficient tissue for an implant. So-called adult stem cells from the patient’s body or from a donor are somewhat more prolific, and they can be isolated from many sources, including blood, bone, muscle, blood vessels, skin, hair follicles, intestine, brain and liver.

Adult stem cells—which occur in adult tissues and are able to give rise to a variety of cell types characteristic of their native tissue—are difficult to identify, however, because they do not look very different from regular cells. Scientists therefore must look for distinctive surface proteins that serve as molecular markers to flag stem cells. The identification of additional markers would make it considerably easier to work with adult stem cells in tissue-engineering applications. Fortunately, over the past few years a number of major advances have been made, including development of novel methods of isolating the cells and inducing them to proliferate and to differentiate into various tissue types in culture.

Notably, Christopher Chen and Dennis Discher, both at the University of Pennsylvania, have demonstrated that mesenchymal stem cells, which are typically derived from muscle, bone or fat, will respond to mechanical cues from their surroundings. They have been shown to differentiate into the tissue type whose stiffness most closely resembles that of the substrate material on which they are growing. Other researchers have also shown that chemical signals from the substrate and surrounding environment are important for directing the differentiation of adult stem cells into one tissue type or another. Scientists disagree, though, about whether adult stem cells are able to give rise to cells outside their own tissue family—for instance, whether a mesenchymal stem cell could generate liver cells.

In contrast to adult stem cells, embryonic stem (ES) cells are easy to expand in culture and can differentiate into all the cell types of the human body. Langer, along with Shulamit Levenberg of the Technion-Israel Institute of Technology in Haifa and her colleagues, has demonstrated that ES cells can even be made to differentiate into a desired tissue type right on tissue-engineering scaffolds. This capability suggests the potential to make 3-D tissues on scaffolds directly from differentiating ES cells. These cells do present various challenges, however.

Directing the uniform differentiation of ES cells into the desired cell types is still quite difficult. In attempts to mimic the complex natural microenvironment of ES cells and to optimize their differentiation, investigators are testing many conditions simultaneously to find the right combination of cues from different materials and matrix chemicals. They are also screening various small molecules as well as signaling proteins to identify factors that control “stemness”—the cells’ ability to give rise to differentiated progeny while remaining undifferentiated themselves, ready to produce more new cells as needed.

Those insights could also be applied to producing cells with the capabilities of embryonic cells but fewer of the drawbacks. Beyond the difficulties just outlined, scientists are still unable to predict the behavior of transplanted stem cells in patients. Undifferentiated ES cells can form tumors, for instance, creating a risk of cancer if the cells are not all successfully differentiated before transplantation. In addition, researchers have been making efforts to address the ethical issues associated with deriving ES cells from human embryos by exploring approaches to producing ES-like cells from nonembryonic sources.

In the past couple of years remarkable progress has been made in producing ES-like cells from regular adult body tissue, such as skin cells. These altered cells, known as induced pluripotent stem (iPS) cells, are emerging as an exciting alternative to ES cells as a renewable resource for tissue engineering. In 2007 Shinya Yamanaka, then at Kyoto University, and James A. Thomson of the University of Wisconsin–Madison first showed that cells of adult tissue can be transformed to a primitive iPS state by reactivating a number of genetic pathways that are believed to underlie stemness. Reintroducing as few as four master regulatory genes into adult skin cells, for instance, caused the cells to revert to a primitive embryonic cell type. The early experiments used a virus to insert those genes into the cells, a technique that would be too dangerous to use in tissues destined for patients. More recent research has shown that a safer nonviral technique can be adapted to activate the same repertoire of stemness genes and even that activation of just a single regulatory gene may be sufficient. The rapid progress in this area has tissue engineers hopeful that soon a patient’s own cells, endowed with ES cell capabilities, could become the ideal material for building tissue constructs. And even as we experiment with these different cell types, tissue engineers are also refining our building methods.

Source of Information : Scientific American(2009-05)

Monday, September 21, 2009

Progress in Tissue Engineering - Delivering Life’s Blood

One reason that tissues such as skin and cartilage were among the first to be ready for human testing is that they do not require extensive internal vasculature. But most tissues do, and the difficulty of providing a blood supply has always limited the size of engineered tissues. Consequently, many scientists are focusing on designing blood vessels and incorporating them in engineered tissues. Any tissue that is more than a few hundred microns thick needs a vascular system, because every cell in a tissue needs to be close enough to capillaries to absorb the oxygen and nutrients that diffuse constantly out of those tiny vessels. When deprived of these fuels, cells quickly become irreparably damaged. In the past few years a number of new approaches to building blood vessels—both outside tissues and within them—have been devised. Many techniques rely on an improved understanding of the environmental needs of endothelial cells (which form capillaries and line larger vessels), as well as an advanced ability to sculpt materials at extremely small scales. For example, when endothelial cells are laid on a bed of scaffolding material whose surface is patterned with nanoscale grooves—1,000th the diameter of a human hair—they are encouraged to form a network of capillary-like tubes. The grooves mimic the texture of body tissues that endothelial cells rest against while forming natural blood vessels, thus providing an important environmental signal.

Microfabrication, the set of techniques used to etch microelectronics chips for computers and mobile phones, has also been employed to make capillary networks. Vacanti, with Jeffrey T. Borenstein of the Draper Laboratory in Cambridge, Mass., has generated arrays of microchannels to mimic tissue capillary networks directly within degradable polymer scaffolds, for instance. Inside these channels, endothelial cells can be cultured to form blood vessels while also acting as a natural barrier that minimizes the fouling effect of blood on the scaffold materials. An alternative is to use a membrane filter to separate the blood-carrying channels from the functional cells in a tissue construct. Another method for keeping cells and blood separate but close enough to exchange a variety of molecules is to suspend them within hydrogels, which are gelatinlike materials made from hydrated networks of polymers. Hydrogels chemically resemble the natural matrix that surrounds all cells within tissues. The functional cells can be encapsulated inside the material, and channels running through the gel can be lined with endothelial cells to engineer tissue-like structures with a protovasculature.

Research from the laboratories of Laura Niklason of Yale University and Langer has shown that larger blood vessels can be generated by exposing scaffolds seeded with smooth muscle cells and endothelial cells to pulsating conditions inside a bioreactor. Arteries made in this environment, which is designed to simulate the flow of blood through vessels in the body, are mechanically robust and remain functional after being transplanted into animals. In addition to enabling tissue engineers to incorporate such vessels into larger constructs, the engineered tubes by themselves may provide grafts for bypass surgery in patients with atherosclerosis.

Although the ability to engineer capillary-like structures and larger blood vessels outside the body is a significant breakthrough, a working engineered tissue implant will have to connect quickly with the recipient’s own blood supply if the construct is to survive. Coaxing the body to form new vasculature is therefore an equally important aspect of this work. David Mooney of Harvard University, for example, has demonstrated that the controlled release of chemical growth factors from polymeric beads or from scaffold material itself can promote the formation of blood vessels that penetrate implanted tissue constructs.

Pervasis Therapeutics, with which Langer and Vacanti are affiliated, is conducting advanced clinical trials in which a variation of this principle is applied to healing a vascular injury. A three-dimensional scaffold containing smooth muscle and endothelial cells is transplanted adjacent to the site of the injury to provide growth-stimulating signals and to promote natural rebuilding of the damaged blood vessel.

Despite these advances, a number of challenges still remain in making large vascularized tissues and vascular grafts, and scientists have not yet completely solved this problem. New blood vessels grow and penetrate an implanted tissue construct slowly, causing many of the construct’s cells to die for lack of a blood supply immediately after implantation. For this reason, tissue-engineering approaches that include a vascular system prefabricated within the tissue construct are very likely to be necessary for large transplants. Such prefabricated vessels may also be combined with controlled release of blood vessel–recruiting growth factors to induce further growth of the construct’s vessels. Because integrating the engineered vasculatures and those of the host is also critical, researchers need a better understanding of the cross talk between the host tissue cells and implanted cells to foster their connection. This need to decipher more of the signals that cells exchange with one another and with their environments also extends to other aspects of building a successful tissue implant, such as selecting the best biological raw materials.

Source of Information : Scientific American(2009-05)

Sunday, September 20, 2009

Reading bar codes with mobile phones

Snap it, click it, use it

A new way to deliver information to mobile phones is spreading around the world

NEGOTIATING his way across a crowded concourse at a busy railway station, a traveller removes his phone from his pocket and, using its camera, photographs a bar code printed on a poster. He then looks at the phone to read details of the train timetable displayed there. In Japan, such conveniences are commonplace, and almost all handsets come with the bar code-reading software already loaded. In America and Europe, though, they are only just being introduced.

Actually, calling them bar codes is a bit old-fashioned, because they store information in a two-dimensional (2-D) matrix of tiny squares, dots or other geometric patterns, rather than a row of black-and-white stripes of varying thickness. When an image of the matrix is captured, software in the phone converts it into a web address, a piece of text or a number. If a number, it is sent to a remote computer which responds with an instruction that tells the phone to perform an action associated with that particular bar code.
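
The decode-and-dispatch flow is simple to sketch. In the outline below (our illustration: decode_matrix is a placeholder for a real decoding library, and the resolver address is invented), the type of payload decides what the phone does next:

import webbrowser
import urllib.request

RESOLVER = "https://example.com/resolve?code="   # hypothetical lookup service

def decode_matrix(image_bytes):
    # Stand-in for a real 2-D matrix decoder (QR Code, Data Matrix, Ezcode),
    # which turns the photographed pattern into a string payload.
    raise NotImplementedError("plug in a decoding library here")

def dispatch(payload):
    if payload.startswith(("http://", "https://")):
        webbrowser.open(payload)          # a web address: open it directly
    elif payload.isdigit():
        # A bare number: ask the remote computer what action it maps to,
        # e.g. the address of a train timetable or a discount coupon.
        with urllib.request.urlopen(RESOLVER + payload) as reply:
            webbrowser.open(reply.read().decode().strip())
    else:
        print(payload)                    # plain text: just display it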

In the case of the traveller, this might be calling up a web page on which the train timetable is displayed. Other 2-D bar codes might add an event to a calendar or display a coupon that entitles the bearer to a discount on a hamburger. Indeed, one magazine recently featured a bar code alongside archive images of famous swimsuit models. Lecherous readers who photographed it were rewarded with additional pictures.

In Japan, 2-D bar codes appear not only on posters and in magazines but also on T-shirts and scarves, and even as art. They can even be displayed on monitor screens, allowing people to store on their phones a web address for whatever they are looking at on a computer, for future recall.

In America and Europe, three types of bar code, called QR Code, Data Matrix and Ezcode, are likely to become common. The first two are free, open standards. Ezcode is owned by a New York-based firm called Scanbuy, but it, too, is available free, for general purposes. The firm behind it makes its money by charging advertisers and publishers when people use it. In July three mobile-phone operators in Spain—Orange, Telefónica and Vodafone—agreed to load software that recognises Ezcode. Scanbuy has also signed a deal with two Danish operators and two in South America. In the United States, a Samsung mobile phone, the Exclaim, has become the first to be sold with Scanbuy’s program already loaded. Meanwhile a Swiss software firm called Kaywa has been collaborating with Welt Kompakt, a condensed version of Die Welt, one of Germany’s leading newspapers, to run QR Codes next to articles. It has also developed software for SBB, the Swiss Federal Railway, so that passengers can scan 2-D bar codes on trains and at stations to call up timetables.



A game of tag
A new format for bar codes appeared in January, when Microsoft unveiled Tag, a system that uses colour to increase the density of information that can be stored. A Tag code occupies about one-eighth of the space of a comparable QR Code. It also allows coding elements to be incorporated into designs. One sample code shows a photograph of coloured jelly beans that includes Tag data. Others show company logos, balloons and birds.

Bar codes that do not need specialised software to read them are also being developed. These encode less information than their more sophisticated counterparts but can be used by people who own simpler mobile phones, because the image-recognition process is handled elsewhere. The codes made by JAGTAG, of Princeton, New Jersey, for example, can be photographed using a camera phone and then sent to a messaging service that analyses the code and sends back appropriate information. Sports Illustrated used the JAGTAG system when it sent its readers those extra images of swimsuit models, and the system has also been used to advertise Nike, Sony and a restaurant chain called Qdoba. Bar codes, then, could be on the point of breaking out of their native environment. It has been a long and curious journey from the supermarket checkout.

Source of Information : The Economist August 22 2009

Saturday, September 19, 2009

The smell of death - The dogs have had their day

Analysing the smell of corpses may help to find the dead—and the living

IT IS morbid, but true, that one of the ways survivors of natural disasters are found is by bringing out bloodhounds that have been trained to find the dead. Often, when these dogs reveal corpses under rubble, rescuers also find previously undetected air pockets with living people in them. Dogs are useful when police officers are looking for the buried corpses of murder victims, too. Such animals, however, require feeding and housing when they are not being used, and expert handling when they are. It might be better, therefore, if they could be replaced by machines—and that is what Sarah Jones and Dan Sykes of Pennsylvania State University propose to do. As they outlined to this week’s meeting of the American Chemical Society, in Washington, DC, they have been analysing the smell of corpses, with a view to automating the process of detecting them.

The idea of doing this is not entirely new, but when Ms Jones and Dr Sykes looked at the previous studies they found that the corpses used were usually at least three days old. That is an almost inevitable delay if human cadavers are employed, since permission must be sought and forms filled in. But the two researchers wanted to know which chemicals emanate from freshly dead bodies, as well as from those that have been around for a few days. So they decided to compromise and work on pigs instead of people. Pigs are often used as substitutes for humans in early-stage medical experiments, as they are about the same size. For that reason, too, they decay at the same rate and go through the same phases of decomposition.

To carry out their experiment Ms Jones and Dr Sykes put dead pigs inside small wooden chambers that were covered with clear plastic sheets. The chambers, which were placed in the middle of a field, prevented large scavengers from coming along and dragging the corpses away. Insects and bacteria from the air above and the grass below, however, had easy access to the bodies.

The chemicals of decay were sampled by three collectors inserted through holes in the plastic sheet. The collectors contained fibres of different compositions, each of which absorbed a unique range of volatile molecules such as organic acids, alcohols, amines and ketones. The fibres were replaced at regular intervals over the course of seven days and taken away for analysis by a device called a gas chromatograph-mass spectrometer.

As Ms Jones and Dr Sykes told the meeting, the composition of the chemicals released by the corpses did, indeed, change over time. Acids made up 32% of the compounds detected in the first two days, but dropped to 25% after the third day. Some of the individual acids that were released, such as pentanoic and benzoic acid, were present throughout all seven days that the experiment was conducted. Some, including tetradecanoic acid, were found early but not late in the decomposition process, while others, such as acetic and propanoic acid, were found late but not early. Similar results were found with other sorts of volatile compounds.

This information will be useful for establishing “time of death” in murder investigations. It could also be used to fine-tune an electronic nose, if such a device were to be used at the site of a disaster. Electronic noses rely on detecting changes in the electrical conductivity of various substances when they absorb particular target molecules, and could easily be brought in to replace dogs if they were thought reliable enough.

For that to happen, more research will be needed. The pigs will have to be staked out in a wide variety of environments, and more experiments on human corpses would be desirable, as well. The result, though, could be lives saved and murderers caught—as well as a few redundant bloodhounds.

Source of Information : The Economist August 22 2009

Friday, September 18, 2009

How to stop an outbreak

A mathematical model suggests a new way to allocate vaccines

THE existing formula is simple. When vaccinating against influenza, inoculate those most susceptible to the disease’s wrath. Such vulnerable types include the elderly (who are the most likely to die if infected) and infants (whose immune systems are not fully developed). This seems a reasonable policy, and it is the one that has long been promulgated by America’s Centres for Disease Control (CDC). Only recently has it been extended to include children up to the age of 18, on the basis that they are more likely than other people to catch flu in the first place, through enforced socialising at school—even though they are at little risk of dying from it.

According to Jan Medlock of Clemson University in South Carolina, and Alison Galvani of Yale, however, vaccinating those most at risk of bad effects is not the right way to deal with the disease. In a report published this week in Science, they argue that even with the extension of vaccination to school-age children, the existing policy of protecting the individual is still playing down the real public-health value of vaccines—namely that they create a so-called herd immunity which helps to break the disease’s chain of transmission.

They argue that it would be better to concentrate on vaccinating those most likely to spread the virus—both schoolchildren and people between the ages of 30 and 40, who are likely to be the parents of those children, and who are, at the moment, at the bottom of the vaccination priority list. That, at least, is the outcome of their mathematical model of how influenza spreads. Indeed, it is almost all of the outcomes. For in order to obtain a robust result, Drs Medlock and Galvani considered two different sorts of epidemic and five different definitions of an optimal conclusion. As model epidemics they chose those of 1918 (the famous “Spanish” flu that is reckoned to have killed 50m-100m people) and 1957 (less lethal, but still pretty nasty). As definitions of a good outcome they started with two simple measures: the number of infections averted, and the number of deaths averted. They then went on to look at more sophisticated measures: the number of years of life saved (taking into account the ages of those who would otherwise have died), the “contingent valuation” of those lives and the economic cost of vaccinating.

Contingent valuation is based on surveys of the “disutility” of death at different ages. This provides a crude way to balance what has already been invested in a life against what might come of it. Measuring economic costs weighs the expense of both vaccination and illness against the net present value of the future earnings of someone who would otherwise die from the disease.

Yet no matter which outcome was looked at, nor which pattern of epidemic was chosen, the result was the same. The best approach to influenza is to vaccinate young people and their parents, not infants and the elderly. Moreover, it is a cheaper and more efficient option. Around 85m doses of vaccine are distributed in the United States in normal years. Dr Medlock and Dr Galvani reckon that if their approach were followed, that might be cut to just over 60m. As luck would have it, though, the new advice also accords with the recommendations of the CDC’s advisory committee on immunisation practices about the best approach to the epidemic of H1N1 swine flu that is now circulating. For reasons still unknown, elderly people are not as susceptible to this strain as they are to others, and what deaths there have been have tended to occur among the young, particularly young adults. The strategy of “vaccinate those at risk” thus coincides with “vaccinate the spreaders”. A fortunate coincidence, perhaps.
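The paper’s age-structured transmission model is far richer than anything that fits here, but a toy two-group SIR simulation can illustrate the underlying logic: when one group does most of the mixing, immunising its members can avert more deaths overall than shielding the high-fatality group directly. Everything below (group sizes, contact rates, fatality rates, dose supply) is invented for illustration; these are not Drs Medlock and Galvani’s parameters.

import numpy as np

# Two groups: 0 = high-contact "spreaders" (schoolchildren and their
# parents), 1 = everyone else, including the elderly, whose average
# infection-fatality rate is higher. All numbers are made up.
N = np.array([0.4, 0.6])           # fraction of the population in each group
IFR = np.array([0.0005, 0.01])     # infection-fatality rate by group
CONTACT = np.array([[18.0, 3.0],   # CONTACT[i][j]: daily contacts a member
                    [3.0, 4.0]])   # of group i has with group j
BETA = 0.03                        # transmission probability per contact
GAMMA = 0.25                       # daily recovery rate (4-day infectious period)

def total_deaths(vaccinated):
    """Daily-step SIR; `vaccinated` is the immunised fraction of each group."""
    S = N * (1.0 - vaccinated)
    I = np.array([1e-4, 1e-4])     # small seed infection in both groups
    deaths = 0.0
    for _ in range(365):
        # force of infection on group i: beta * sum_j C[i,j] * prevalence in j
        force = BETA * CONTACT.dot(I / N)
        new_inf = np.minimum(S * force, S)
        S -= new_inf
        I += new_inf - GAMMA * I
        deaths += (new_inf * IFR).sum()
    return deaths

doses = 0.10                       # vaccine for 10% of the whole population
for name, target in [("shield the vulnerable", 1), ("target the spreaders", 0)]:
    cover = np.zeros(2)
    cover[target] = min(doses / N[target], 1.0)
    print(f"{name:>22}: deaths per capita = {total_deaths(cover):.5f}")

With these invented numbers the spreader strategy should come out ahead, because it is the high-contact group that drives the epidemic; the paper’s contribution was to show that the same ranking survives realistic contact data and five different definitions of a good outcome.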

Source of Information : The Economist August 22 2009

Thursday, September 17, 2009

How to improve the delivery of vaccines

VACCINATIONS are a pain. Any two-year-old with access to reasonable health care will tell you that. But injections are bad for reasons other than the discomfort they cause. If syringes are reused without sterilisation, they can spread disease. The liquid vaccines they require are often temperamental, needing constant refrigeration and thus what is known as a “cold chain” to keep them in tip-top condition as they move from factory to clinic. And hypodermic needles do not even deliver vaccines to the best place in the body.

Vaccines work by activating the immune system. For that to happen, they have to meet up with what are known as antigen-presenting cells. These cells recognise alien invaders, chop them up and then carry characteristic bits of them to the rest of the immune system so that the invader can be recognised and appropriate measures taken. If an invader is new, this process allows the immune system to learn to recognise it, and respond more rapidly next time. A vaccine (which contains either a dead or an attenuated form of a pathogen) lets the immune system learn about a disease without the body having to undergo the infection. But antigen-presenting cells tend to congregate in places where pathogens arrive, such as the lungs and the skin—not in the muscles where the tip of a hypodermic needle ends up.

How to improve this state of affairs was the topic of two talks at this year’s meeting of the American Chemical Society, in Washington, DC. Both proposals eliminate the need for painful needles and the need to transport liquids around. And both deliver their cargoes to places where antigen-presenting cells abound.


Just breathe deeply
Robert Sievers of the University of Colorado is working on a new way to deliver a vaccine for measles. Vaccination against this disease has become controversial in effete Western circles because of the malign effects of one or two hysterically reported scientific studies which suggested (wrongly, it is now believed) that the vaccine might occasionally be hazardous. In many parts of the world, however, measles itself is a serious hazard. It kills about 200,000 children a year, and vaccination—using a liquid vaccine and a hypodermic—is the only way to prevent it. Because measles attacks through the respiratory tract, though, Dr Sievers reckons this is a better place to send the vaccine than the muscles of the arm. And one way to do so would be to have the person being vaccinated inhale a finely powdered vaccine that coated every inch of his lungs. To make such a powder, Dr Sievers has perfected a trick that atomises liquid vaccine into tiny droplets. When the water evaporates, the particles of vaccine which remain are so small that they can spread through the lungs without clumping.

This is not as easy as it sounds. Atomisation nozzles are commonplace. They are used in everything from perfume bottles to car engines. But a nozzle alone is not enough. The droplets it produces—and hence the granules of the powder—are too big for Dr Sievers’s purposes. Instead, he has turbocharged the process by mixing the vaccine with ultra-high-pressure carbon dioxide.

When this mixture emerges from the nozzle, it bubbles like champagne. The bubbles fragment, forming ultra-tiny drops, and when these dry they leave a powder with granules that are between one and five microns across. This powder can be doled out easily in individual doses and inhaled from a bladder—and a monkey that did so was protected from measles. Human trials are expected to begin next year in India.


Skin fix
Mark Prausnitz of the Georgia Institute of Technology and his team have devised a different way to replace the hypodermic needle. They have successfully inoculated mice against flu using scores of tiny needles that might be referred to as “hyperdermic”, because they do not fully penetrate the skin. This method, too, eliminates the need to transport liquid vaccine around. Dr Prausnitz’s vaccine-delivery device is a steel patch with an array of needles, each about half a millimetre long, on it. The array is coated with liquid vaccine mixed with a thickening agent. This dries, leaving the needles impregnated with solid vaccine. The resulting device is robust and reasonably insensitive to heat. To use it, you press it against someone’s skin (possibly your own). The needles painlessly puncture the skin, but do not go through it. Moisture from the body then dissolves the vaccine and it spreads into the skin in minutes.

That the patches are self-applicable is a huge bonus. Dr Prausnitz hopes it might be feasible to send vaccines to people by post during an epidemic, obviating the need for them to go to a clinic. At the moment Dr Prausnitz’s technique, like Dr Sievers’s, has been tried only on animals, but it has such elegance and simplicity that if human trials are successful it could prove to be important in controlling influenza. Exactly who should receive influenza vaccine is the subject of another article from the same issue, “How to stop an outbreak” (above).

Source of Information : The Economist August 22 2009

Wednesday, September 16, 2009

Plan B: Our Only Option

Since the current world food shortage is trend-driven, the environmental trends that cause it must be reversed. To do so requires extraordinarily demanding measures, a monumental shift away from business as usual—what we at the Earth Policy Institute call Plan A—to a civilization-saving Plan B.

Similar in scale and urgency to the U.S. mobilization for World War II, Plan B has four components: a massive effort to cut carbon emissions by 80 percent from their 2006 levels by 2020; the stabilization of the world’s population at eight billion by 2040; the eradication of poverty; and the restoration of forests, soils and aquifers.

Net carbon dioxide emissions can be cut by systematically raising energy efficiency and investing massively in the development of renewable sources of energy. We must also ban deforestation worldwide, as several countries already have done, and plant billions of trees to sequester carbon. The transition from fossil fuels to renewable forms of energy can be driven by imposing a tax on carbon, while offsetting it with a reduction in income taxes.

Stabilizing population and eradicating poverty go hand in hand. In fact, the key to accelerating the shift to smaller families is eradicating poverty—and vice versa. One way is to ensure at least a primary school education for all children, girls as well as boys. Another is to provide rudimentary, village-level health care, so that people can be confident that their children will survive to adulthood. Women everywhere need access to reproductive health care and family-planning services.

The fourth component, restoring the earth’s natural systems and resources, incorporates a worldwide initiative to arrest the fall in water tables by raising water productivity: the useful activity that can be wrung from each drop. That implies shifting to more efficient irrigation systems and to more water-efficient crops. In some countries, it implies growing (and eating) more wheat and less rice, a water-intensive crop. And for industries and cities, it implies doing what some are doing already, namely, continuously recycling water.

At the same time, we must launch a worldwide effort to conserve soil, similar to the U.S. response to the Dust Bowl of the 1930s. Terracing the ground, planting trees as shelterbelts against windblown soil erosion, and practicing minimum tillage—in which the soil is not plowed and crop residues are left on the field—are among the most important soil-conservation measures.

There is nothing new about our four interrelated objectives. They have been discussed individually for years. Indeed, we have created entire institutions intended to tackle some of them, such as the World Bank to alleviate poverty. And we have made substantial progress in some parts of the world on at least one of them—the distribution of family-planning services and the associated shift to smaller families that brings population stability.

For many in the development community, the four objectives of Plan B were seen as positive, promoting development as long as they did not cost too much. Others saw them as humanitarian goals—politically correct and morally appropriate. Now a third and far more momentous rationale presents itself: meeting these goals may be necessary to prevent the collapse of our civilization. Yet the cost we project for saving civilization would amount to less than $200 billion a year, a sixth of current global military spending. In effect, Plan B is the new security budget.

Source of Information : Scientific American(2009-05)

Tuesday, September 15, 2009

Less Soil, More Hunger

The scope of the second worrisome trend—the loss of topsoil—is also startling. Topsoil is eroding faster than new soil forms on perhaps a third of the world’s cropland. This thin layer of essential plant nutrients, the very foundation of civilization, took long stretches of geologic time to build up, yet it is typically only about six inches deep. Its loss from wind and water erosion doomed earlier civilizations.

In 2002 a U.N. team assessed the food situation in Lesotho, the small, landlocked home of two million people embedded within South Africa. The team’s finding was straightforward: “Agriculture in Lesotho faces a catastrophic future; crop production is declining and could cease altogether over large tracts of the country if steps are not taken to reverse soil erosion, degradation and the decline in soil fertility.”

In the Western Hemisphere, Haiti—one of the first states to be recognized as failing—was largely self-sufficient in grain 40 years ago. In the years since, though, it has lost nearly all its forests and much of its topsoil, forcing the country to import more than half of its grain.

The third and perhaps most pervasive environmental threat to food security—rising surface temperature—can affect crop yields everywhere. In many countries crops are grown at or near their thermal optimum, so even a minor temperature rise during the growing season can shrink the harvest. A study published by the U.S. National Academy of Sciences has confirmed a rule of thumb among crop ecologists: for every rise of one degree Celsius (1.8 degrees Fahrenheit) above the norm, wheat, rice and corn yields fall by 10 percent.
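Read literally (and in my notation, not the study’s), that rule of thumb is a linear discount on yield:

\[ Y(\Delta T) \approx Y_0\,(1 - 0.10\,\Delta T) \]

so a growing season running 2 degrees Celsius above the norm would be expected to cut wheat, rice and corn yields by roughly 20 percent.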

In the past, most famously when the innovations in the use of fertilizer, irrigation and high-yield varieties of wheat and rice created the “green revolution” of the 1960s and 1970s, the response to the growing demand for food was the successful application of scientific agriculture: the technological fix. This time, regrettably, many of the most productive advances in agricultural technology have already been put into practice, and so the long-term rise in land productivity is slowing down. Between 1950 and 1990 the world’s farmers increased the grain yield per acre by more than 2 percent a year, exceeding the growth of population. But since then, the annual growth in yield has slowed to slightly more than 1 percent. In some countries the yields appear to be near their practical limits, including rice yields in Japan and China.
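The gap between those two growth rates compounds dramatically. By the standard compound-growth formula,

\[ Y_t = Y_0\,(1+g)^t, \qquad t_{\text{double}} = \frac{\ln 2}{\ln(1+g)} \approx \frac{70}{100\,g} \]

yields double roughly every 35 years at 2 percent annual growth but only every 70 years at 1 percent, while population keeps compounding on its own schedule.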

Some commentators point to genetically modified crop strains as a way out of our predicament. Unfortunately, however, no genetically modified crops have led to dramatically higher yields, comparable to the doubling or tripling of wheat and rice yields that took place during the green revolution. Nor do they seem likely to do so, simply because conventional plant-breeding techniques have already tapped most of the potential for raising crop yields.



Arable Land Is Disappearing
Topsoil, another vital factor in maintaining the world’s food supply, is also essentially a nonrenewable resource: even in a healthy ecosystem supplied with adequate moisture and organic and inorganic material, it can take centuries to generate an inch of topsoil. If soil-stabilizing vegetation disappears—when forests are cut or overgrazing turns grassland into desert— topsoil is lost to the wind and the rain. Arable land is also threatened by roads, buildings and other nonfarm usage.


Source of Information : Scientific American(2009-05)

Monday, September 14, 2009

Water Shortages Mean Food Shortages

What about supply? The three environmental trends I mentioned earlier—the shortage of freshwater, the loss of topsoil and the rising temperatures (and other effects) of global warming— are making it increasingly hard to expand the world’s grain supply fast enough to keep up with demand. Of all those trends, however, the spread of water shortages poses the most immediate threat. The biggest challenge here is irrigation, which consumes 70 percent of the world’s freshwater. Millions of irrigation wells in many countries are now pumping water out of underground sources faster than rainfall can recharge them. The result is falling water tables in countries populated by half the world’s people, including the three big grain producers—China, India and the U.S.

Usually aquifers are replenishable, but some of the most important ones are not: the “fossil” aquifers, so called because they store ancient water and are not recharged by precipitation. For these—including the vast Ogallala Aquifer that underlies the U.S. Great Plains, the Saudi aquifer and the deep aquifer under the North China Plain—depletion would spell the end of pumping. In arid regions such a loss could also bring an end to agriculture altogether.

In China the water table under the North China Plain, an area that produces more than half of the country’s wheat and a third of its corn, is falling fast. Overpumping has used up most of the water in a shallow aquifer there, forcing well drillers to turn to the region’s deep aquifer, which is not replenishable. A report by the World Bank foresees “catastrophic consequences for future generations” unless water use and supply can quickly be brought back into balance.

As water tables have fallen and irrigation wells have gone dry, China’s wheat crop, the world’s largest, has declined by 8 percent since it peaked at 123 million tons in 1997. In that same period China’s rice production dropped 4 percent. The world’s most populous nation may soon be importing massive quantities of grain.

But water shortages are even more worrying in India. There the margin between food consumption and survival is more precarious. Millions of irrigation wells have dropped water tables in almost every state. As Fred Pearce reported in New Scientist: “Half of India’s traditional hand-dug wells and millions of shallower tube wells have already dried up, bringing a spate of suicides among those who rely on them. Electricity blackouts are reaching epidemic proportions in states where half of the electricity is used to pump water from depths of up to a kilometer [3,300 feet].”

A World Bank study reports that 15 percent of India’s food supply is produced by mining groundwater. Stated otherwise, 175 million Indians consume grain produced with water from irrigation wells that will soon be exhausted. The continued shrinking of water supplies could lead to unmanageable food shortages and social conflict.



Irrigation Can Lead to Severe Water Shortages
The greatest drain on supplies of freshwater is irrigation, which accounts for 70 percent of freshwater usage. Irrigation is essential to most high-yield farming, but many aquifers that supply irrigated crops are being drawn down faster than rain can recharge them. Furthermore, when farmers tap “fossil” aquifers, which store ancient water in rock impermeable to rain, they are mining a nonrenewable resource. Pumping from ever deeper wells is problematic in another way as well: it takes a lot of energy. In some states of India, half of the available electricity is used to pump water.

Source of Information : Scientific American(2009-05)

Sunday, September 13, 2009

The Problem of Failed States

Even a cursory look at the vital signs of our current world order lends unwelcome support to my conclusion. And those of us in the environmental field are well into our third decade of charting trends of environmental decline without seeing any significant effort to reverse a single one. In six of the past nine years world grain production has fallen short of consumption, forcing a steady drawdown in stocks. When the 2008 harvest began, world carryover stocks of grain (the amount in the bin when the new harvest begins) were at 62 days of consumption, a near record low. In response, world grain prices in the spring and summer of last year climbed to the highest level ever.
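For readers unused to the unit, carryover stocks expressed in days are simply stocks divided by average daily consumption:

\[ \text{days of carryover} = 365 \times \frac{\text{carryover stocks}}{\text{annual consumption}} \]

so a 62-day carryover means the world’s grain bins held only about 17 percent of a year’s consumption when the new harvest began.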

As demand for food rises faster than supplies are growing, the resulting food-price inflation puts severe stress on the governments of countries already teetering on the edge of chaos. Unable to buy grain or grow their own, hungry people take to the streets. Indeed, even before the steep climb in grain prices in 2008, the number of failing states was expanding. Many of their problems stem from a failure to slow the growth of their populations. But if the food situation continues to deteriorate, entire nations will break down at an ever increasing rate. We have entered a new era in geopolitics. In the 20th century the main threat to international security was superpower conflict; today it is failing states. It is not the concentration of power but its absence that puts us at risk.

States fail when national governments can no longer provide personal security, food security and basic social services such as education and health care. They often lose control of part or all of their territory. When governments lose their monopoly on power, law and order begin to disintegrate. After a point, countries can become so dangerous that food relief workers are no longer safe and their programs are halted; in Somalia and Afghanistan, deteriorating conditions have already put such programs in jeopardy.

Failing states are of international concern because they are a source of terrorists, drugs, weapons and refugees, threatening political stability everywhere. Somalia, number one on the 2008 list of failing states, has become a base for piracy. Iraq, number five, is a hotbed for terrorist training. Afghanistan, number seven, is the world’s leading supplier of heroin. Following the massive genocide of 1994 in Rwanda, refugees from that troubled state, thousands of armed soldiers among them, helped to destabilize neighboring Democratic Republic of the Congo (number six).

Our global civilization depends on a functioning network of politically healthy nation-states to control the spread of infectious disease, to manage the international monetary system, to control international terrorism and to reach scores of other common goals. If the system for controlling infectious diseases—such as polio, SARS or avian flu—breaks down, humanity will be in trouble. Once states fail, no one assumes responsibility for their debt to outside lenders. If enough states disintegrate, their fall will threaten the stability of global civilization itself.

Failing States
Every year the Fund for Peace and the Carnegie Endowment for International Peace jointly analyze and score countries on 12 social, economic, political and military indicators of national well-being. Here, ranked worst first according to their combined scores in 2007, are the 20 countries in the world that are closest to collapse:
• Somalia
• Sudan
• Zimbabwe
• Chad
• Iraq
• Democratic Republic of the Congo
• Afghanistan
• Ivory Coast
• Pakistan
• Central African Republic
• Guinea
• Bangladesh
• Burma (Myanmar)
• Haiti
• North Korea
• Ethiopia
• Uganda
• Lebanon
• Nigeria
• Sri Lanka

SOURCE: “The Failed States Index 2008,” by the Fund for Peace and the Carnegie Endowment for International Peace, in Foreign Policy; July/August 2008



Key Factors in Food Shortages
The spreading scarcity of food is emerging as the central cause of state failure. Food shortages arise out of a tangled web of causes, effects and feedbacks whose interactions often intensify the effects of any one factor acting alone. According to the author, today’s food shortages are not the result of one-time, weather-driven crop failures but rather of four critical long-term trends: rapid population growth, loss of topsoil, spreading water shortages and rising temperatures.

Source of Information : Scientific American(2009-05)

Saturday, September 12, 2009

When Earth greened over

A thick, green carpet of photosynthetic life, on the scale of that seen today, exploded across Earth 850 million years ago — much earlier than thought — a new study suggests.
The matting — a mixture of algae, mosses and fungi — would have fixed atmospheric carbon into the soil, which would then have washed into the seas for burial, according to the study (L. P. Knauth & M. J. Kennedy Nature doi:10.1038/nature08213; 2009). With lower levels of carbon to react with, global levels of oxygen would have risen. The greening of ancient Earth could thus be indirectly responsible for the sudden evolution, beginning about 600 million years ago, of larger respirating animals with oxygen-hungry cells, say geologists Paul Knauth of Arizona State University in Tempe and Martin Kennedy of the University of California, Riverside.

“This is a profound event,” says Kennedy. “It explains the rise of oxygen, and the timing of that rise.” The evidence the researchers provide is indirect: data compiled from thousands of samples of carbonate rock, such as limestone, that originally formed in ancient shallow seas. Analysis of carbon and oxygen isotopes in these rocks revealed that the influence of the freshwater run-off into these seas was as important in ancient times as it is today in forming carbonate rocks.

Because terrestrial plant life leaves indelible isotopic marks in modern carbonate rocks, the authors surmise that some sort of photosynthetic life — at the same global scale — was responsible for similar measurements they found in ancient rocks. In rocks older than 850 million years, they find starkly different isotopic signatures, which they interpret as an absence of carbon-rich material in freshwater run-off, and thus an absence of photosynthetic life on land.

The study contradicts other work that looks to the oceans, rather than land, to justify the same isotopic data. Other researchers argue that the oxygenation of Earth and the explosion of animals 600 million years ago arose from sudden and drastic changes in ocean water chemistry around the same time. The changes in ocean chemistry have been attributed to episodic releases of methane from ocean vents, and periods of ‘snowball Earth’, extreme glacial epochs when Earth may have been so cold that oceans froze over.

But Knauth and Kennedy say the isotopic records in the carbonate rocks reflect more than just the chemistry of the global oceans. They argue that most carbonate rocks undergo further stages of alteration where freshwater run-off is important. There are problems with the new theory, says Paul Falkowski, a geochemist at Rutgers University in New Brunswick, New Jersey — most notably that there isn’t much evidence for widespread plant life until around 400 million years ago. The hard tissues of vascular plants evolved around this time, but the softer tissues of mosses and fungi that came before would have been preserved less easily. Work with molecular clocks — which use genetic differences to estimate the timing of speciation — does suggest that terrestrial plants evolved from the types of plants that Knauth and Kennedy call for, around the time that they suggest.

But to have the effect on the carbonate record that they see, the ancient photosynthetic life would have needed to be operating on the scale that it is today — a worldwide carpeting of green. And that should have left something for posterity, says Nick Butterfield, a palaeobiologist at the University of Cambridge, UK. “In order to have a significant impact it has to be everywhere, all over the place,” he says. “And it can’t be, unless it has seeds and cuticles and adaptations for covering vast amounts of the terrestrial surfaces. If you’ve got those adaptations you can’t avoid turning up in the fossil record.”

Source of Information : Nature 09 July 2009

Friday, September 11, 2009

Flu jabs urged for developing countries

Influenza experts are recommending an extensive vaccination programme against seasonal flu in developing countries, in part to boost demand for vaccines so that firms can ramp up production to cope with pandemics. The message came from scientists and policy-makers who met on 2–3 July in Siena, Italy, to assess the gaps in their knowledge about the current H1N1 pandemic virus.

The governments of many developing countries remain to be convinced that flu is a major danger for their citizens relative to other health problems, says Abdullah Brooks of the International Centre for Diarrhoeal Disease Research in Dhaka, Bangladesh. Yet Brooks presented research showing that around one-third of pneumonia deaths in children younger than 2 years old in his region can be attributed to the influenza virus. According to the United Nations agency UNICEF, pneumonia kills more than 2 million children under the age of five each year — more than any other disease.

At the meeting, experts recommended that pilot studies be conducted in developing countries to measure the prevalence of flu virus in sick children, and to assess how much a flu-vaccination programme would reduce the burden of disease in those countries. UNICEF, health charities and the governments of rich nations would probably be approached for financial support.

As well as providing a major public-health benefit, the effort could create a larger, more stable market for seasonal flu vaccines in the future. “A few months ago we were discussing whether we would need to close some of our manufacturing plants because we were losing so much money on flu vaccines,” says Rino Rappuoli, head of vaccine research at Novartis in Siena, adding that the current H1N1 pandemic has helped to avert any closures as governments race to stock up on vaccines. For example, the firm was awarded US$289 million by the US Department of Health and Human Services (HHS) in Bethesda, Maryland, in May to produce H1N1 vaccine antigen as well as an adjuvant to amplify the immune response to the vaccine, thus reducing the amount of antigen needed in each shot and stretching manufacturing-plant capacity. Other vaccine companies, including GlaxoSmithKline, Sanofi Pasteur, CSL Biotherapies and MedImmune, will also benefit from $643 million in HHS orders.

NIH stem-cell guidelines

A separate report in the same issue covered the new guidelines from the US National Institutes of Health (NIH) for human embryonic stem-cell research, which respond to an executive order issued in March by President Barack Obama. “It’s flexible and science friendly,” says George Daley, a researcher at Children’s Hospital Boston and the Harvard Stem Cell Institute in Cambridge, Massachusetts. Sean Morrison, a stem-cell biologist at the University of Michigan in Ann Arbor, adds: “The NIH has done what is best for the field by having their own registry — one list that everyone can work from.”

Some scientists, including Daley, said that they were disappointed with the exclusion of embryos derived for research purposes, but pointed out that the agency intends to revisit the guidelines as the science evolves. The guidelines also depart in one significant way from existing National Academy of Sciences standards: they do not require consent from gamete donors — only from the couple seeking in vitro fertilization services.

Source of Information : Nature 09 July 2009

Thursday, September 10, 2009

A New Kind of Food Shortage

The surge in world grain prices in 2007 and 2008—and the threat they pose to food security—has a different, more troubling quality than the increases of the past. During the second half of the 20th century, grain prices rose dramatically several times. In 1972, for instance, the Soviets, recognizing their poor harvest early, quietly cornered the world wheat market. As a result, wheat prices elsewhere more than doubled, pulling rice and corn prices up with them. But this and other price shocks were event-driven: drought in the Soviet Union, a monsoon failure in India, crop-shrinking heat in the U.S. Corn Belt. And the rises were short-lived: prices typically returned to normal with the next harvest.

In contrast, the recent surge in world grain prices is trend-driven, making it unlikely to reverse without a reversal in the trends themselves. On the demand side, those trends include the ongoing addition of more than 70 million people a year; a growing number of people wanting to move up the food chain to consume highly grain-intensive livestock products; and the massive diversion of U.S. grain to ethanol-fuel distilleries.

The extra demand for grain associated with rising affluence varies widely among countries. People in low-income countries where grain supplies 60 percent of calories, such as India, directly consume a bit more than a pound of grain a day. In affluent countries such as the U.S. and Canada, grain consumption per person is nearly four times that much, though perhaps 90 percent of it is consumed indirectly as meat, milk and eggs from grain-fed animals.

The potential for further grain consumption as incomes rise among low-income consumers is huge. But that potential pales beside the insatiable demand for crop-based automotive fuels. A fourth of this year’s U.S. grain harvest—enough to feed 125 million Americans or half a billion Indians at current consumption levels—will go to fuel cars. Yet even if the entire U.S. grain harvest were diverted into making ethanol, it would meet at most 18 percent of U.S. automotive fuel needs. The grain required to fill a 25-gallon SUV tank with ethanol could feed one person for a year.
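The tank-versus-stomach comparison is easy to check on the back of an envelope. The ethanol yield per bushel below is a commonly cited approximation, and the daily ration comes from the article’s own figure of about a pound of grain a day, so treat this as an order-of-magnitude sketch rather than a precise accounting:

# Rough check of the claim that a 25-gallon tank of ethanol uses as much
# grain as one person eats in a year. Assumed figures are noted inline.
TANK_GALLONS = 25.0
ETHANOL_GAL_PER_BUSHEL = 2.8    # commonly cited yield from a 56-lb bushel of corn
KG_PER_BUSHEL = 25.4            # 56 lb expressed in kilograms
RATION_KG_PER_DAY = 0.45        # "a bit more than a pound of grain a day"

bushels = TANK_GALLONS / ETHANOL_GAL_PER_BUSHEL
grain_kg = bushels * KG_PER_BUSHEL
days_fed = grain_kg / RATION_KG_PER_DAY

print(f"one fill-up uses about {grain_kg:.0f} kg of corn")
print(f"that is roughly {days_fed:.0f} days of one person's direct grain ration")

The sketch comes out at around 500 days, comfortably supporting the article’s claim of “one person for a year”.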

The recent merging of the food and energy economies implies that if the food value of grain is less than its fuel value, the market will move the grain into the energy economy. That double demand is leading to an epic competition between cars and people for the grain supply and to a political and moral issue of unprecedented dimensions. The U.S., in a misguided effort to reduce its dependence on foreign oil by substituting grain-based fuels, is generating global food insecurity on a scale not seen before.

Source of Information : Scientific American(2009-05)

Jockeying for Food

As the world’s food security unravels, a dangerous politics of food scarcity is coming into play: individual countries acting in their narrowly defined self-interest are actually worsening the plight of the many. The trend began in 2007, when leading wheat-exporting countries such as Russia and Argentina limited or banned their exports, in hopes of increasing locally available food supplies and thereby bringing down food prices domestically. Vietnam, the world’s second-biggest rice exporter after Thailand, banned its exports for several months for the same reason. Such moves may reassure those living in the exporting countries, but they are creating panic in importing countries that must rely on what is then left of the world’s exportable grain.

In response to those restrictions, grain importers are trying to nail down long-term bilateral trade agreements that would lock up future grain supplies. The Philippines, no longer able to count on getting rice from the world market, recently negotiated a three-year deal with Vietnam for a guaranteed 1.5 million tons of rice each year. Food-import anxiety is even spawning entirely new efforts by food-importing countries to buy or lease farmland in other countries.

In spite of such stopgap measures, soaring food prices and spreading hunger in many other countries are beginning to break down the social order. In several provinces of Thailand the predations of “rice rustlers” have forced villagers to guard their rice fields at night with loaded shotguns. In Pakistan an armed soldier escorts each grain truck. During the first half of 2008, 83 trucks carrying grain in Sudan were hijacked before reaching the Darfur relief camps.

No country is immune to the effects of tightening food supplies, not even the U.S., the world’s breadbasket. If China turns to the world market for massive quantities of grain, as it has recently done for soybeans, it will have to buy from the U.S. For U.S. consumers, that would mean competing for the U.S. grain harvest with 1.3 billion Chinese consumers with fast-rising incomes—a nightmare scenario. In such circumstances, it would be tempting for the U.S. to restrict exports, as it did, for instance, with grain and soybeans in the 1970s when domestic prices soared. But that is not an option with China. Chinese investors now hold well over a trillion U.S. dollars, and they have often been the leading international buyers of U.S. Treasury securities issued to finance the fiscal deficit. Like it or not, U.S. consumers will share their grain with Chinese consumers, no matter how high food prices rise.



How Failed States Threaten Everyone
When a nation’s government can no longer provide security or basic services for its citizens, the resulting social chaos can have serious adverse effects beyond that nation’s own borders:
•  Spreading disease
•  Offering sanctuary to terrorists and pirates
•  Spreading the sale of drugs and weapons
•  Fostering political extremism
•  Generating violence and refugees, which can spill into neighboring states



Side Bets in the Game of Food Politics
Anxious to ensure future grain supplies, several nations are quietly making deals with grain-producing countries for rights to farm there. The practice tightens supplies for other importing nations and raises prices. Some examples:
•  China: Seeking to lease land in Australia, Brazil, Burma (Myanmar), Russia and Uganda
•  Saudi Arabia: Looking for farmland in Egypt, Pakistan, South Africa, Sudan, Thailand, Turkey and Ukraine
•  India: Agribusiness firms pursuing cropland in Paraguay and Uruguay
•  Libya: Leasing 250,000 acres in Ukraine in exchange for access to Libyan oil fields
•  South Korea: Seeking land deals in Madagascar, Russia and Sudan


Source of Information : Scientific American(2009-05)

The history of science - Kepler's world

Celebrating the work of a neglected astronomer

MUCH has been made of the 400th anniversary this year of Galileo pointing a telescope at the moon and jotting down what he saw (even though this had previously been accomplished by an Englishman, Thomas Harriot, using a Dutch telescope). But 2009 is also the 400th anniversary of the publication by Johannes Kepler, a German mathematician and astronomer, of “Astronomia Nova”. This was a treatise that contained an account of his discovery of how the planets move around the sun, correcting Copernicus’s own more famous but incorrectly formulated description of the solar system and establishing the laws for planetary motion on which Isaac Newton based his work.

Four centuries ago the received wisdom was that of Aristotle, who asserted that the Earth was the centre of the universe, and that it was encircled by the spheres of the moon, the sun, the planets and the stars beyond them. Copernicus had noticed inconsistencies in this theory and had placed the sun at the centre, with the Earth and the other planets travelling around the sun.

Some six decades later, when Kepler tackled the motion of Mars, he proposed a number of geometric models, checking his results against the position of the planet as recorded by his boss, Tycho Brahe, a Danish astronomer. Kepler repeatedly found that his model failed to predict the correct position of the planet. He altered it and, in so doing, created first egg-shaped “orbits” (he coined the term) and, finally, an ellipse with the sun placed at one focus. Kepler went on to show that an elliptical orbit is sufficient to explain the movement of the other planets and to devise the laws of planetary motion that Newton built on.
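For reference, the laws Kepler eventually arrived at can be stated compactly in modern notation (the symbols are today’s conventions, not Kepler’s):

\[ r(\theta) = \frac{a(1-e^2)}{1+e\cos\theta}, \qquad \frac{dA}{dt} = \text{constant}, \qquad T^2 \propto a^3 \]

that is, each planet traces an ellipse with the sun at one focus (semi-major axis a, eccentricity e); the line from sun to planet sweeps out equal areas in equal times; and the square of the orbital period T scales as the cube of a.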

A.E.L. Davis, of Imperial College, London, this week told astronomers and historians at the International Astronomical Union meeting in Rio de Janeiro that it was the rotation of the sun, as seen by Galileo and Harriot as they watched sunspots moving across its surface, that provided Kepler with what he thought was one of the causes of the planetary motion that his laws described, although his reasoning would today be considered entirely wrong.

In 1609 astronomy and astrology were seen as intimately related; mathematics and natural philosophy, meanwhile, were quite separate areas of endeavour. Kepler, however, sought physical mechanisms to explain his mathematical result. He wanted to know how it could be that the planets orbited the sun. Once he learned that the sun rotated, he comforted himself with the thought that the sun’s rays must somehow sweep the planets around it while a quasi-magnetism accounted for the exact elliptical path. (Newton did not propose his theory of gravity for almost another 80 years.) As today’s astronomers struggle to determine whether they can learn from the past, Kepler’s tale provides a salutary reminder that only some explanations stand the test of time.

Source of Information : The Economist August 15 2009