Monday, August 31, 2009

What causes albinism? Are there any treatments for it?

Raymond Boissy, a dermatology professor at the University of Cincinnati College of Medicine, explains (as told to Coco Ballantyne): Albinism is a genetic disease causing partial or complete loss of pigmentation, or coloring, in the skin, eyes and hair. It arises from mutations affecting cells, called melanocytes, that produce the pigment melanin, which gives color to those body parts. In individuals with albinism, genetic alterations interfere with the melanocytes’ production of pigment or their ability to distribute it to keratinocytes, the major cell type of the skin’s outer layer.

The most common forms of albinism are oculocutaneous type 1 (OCA1) and type 2 (OCA2). Oculocutaneous means the disease affects the eyes (“oculo”) and skin (“cutaneous”). People with OCA1 have mutations in a gene called TYR that is responsible for production of the enzyme tyrosinase, used by cells to convert the amino acid tyrosine into pigment. OCA2, the most common form in Africa, results from a mutation in the OCA2 gene, which encodes the P protein—a protein whose role is not totally clear. This mutation is probably the oldest one causing albinism and, putatively, originated during humankind’s development in Africa.

Most people with OCA1 have white skin, white hair and pigmentless eyes. The iris, the colored part of the eye encircling the pupil, is pale, whereas the pupil itself may appear red. This redness results from light reflecting off blood vessels in the retina, the light-sensitive layer of tissue lining the back of the eyeball. Pupils ordinarily appear black because pigment molecules in the retina absorb light and prevent it from bouncing back out. People with OCA2 can make a small amount of pigment and thus may have somewhat less pronounced visual symptoms.

Individuals with albinism are often legally blind. Without melanin during the embryonic stage, the neuronal tracts leading from the eye to the visual cortex of the brain develop aberrantly, resulting in diminished depth perception. And in the absence of pigment in the eye, retinal photoreceptors can become overstimulated and send confusing messages to the brain; this overstimulation often also produces nystagmus, an involuntary fluttering of the eyes.

A dearth of skin pigment leaves people more susceptible to nonmelanoma skin cancers such as squamous cell carcinoma and basal cell carcinoma. Normally functioning melanocytes distribute pigment to keratinocytes to shield the nucleus and the DNA inside from the sun’s ultraviolet radiation. People with albinism may also experience premature skin aging, because UV-blocking melanin helps to prevent wrinkling and the loss of the skin’s elasticity.

Researchers such as geneticist Richard King of the University of Minnesota and cell biologist Vitali Alexeev of Thomas Jefferson University are working on gene therapies or drugs that would fix albinism-causing mutations. Scientists have had some success in correcting patches of depigmented skin and hair in mice, but they are a long way from translating this research to humans.


Source of Information: Scientific American (2009-06)

Sunday, August 30, 2009

Primal Programs

Rethinking cancer by seeing tumors as a cellular pregnancy By Christine Soares

One reason cancer is not considered a single disease but many is that every cancer cell seems to be dysfunctional in its own way. Random mutations in a cell’s DNA initiate its slide into abnormal behavior. And as additional mutations accumulate, that randomness is also thought to account for the diversity in different patients’ tumors, even when they are cancers of the same tissue. But evidence is growing that there is a method to the madness of tumor cells, making some scientists reevaluate the nature of cancer.

Analyzing tumors from dozens of tissue types, Isaac S. Kohane of the Harvard-MIT Division of Health Sciences and Technology has catalogued surprising yet familiar patterns of gene activity in cancer cells—they are the same programmed genetic instructions active during various stages of embryonic and fetal development. Entire suites of genes that drive an embryo’s early growth and the later formation of limbs and other structures in the womb normally go silent during the rest of life, but these genetic programs are switched back on in many tumor cells.

Grouping tumors according to the developmental stage their gene activity most resembles reveals predictive information about those tumors, Kohane has found. In groups of lung tumors, for instance, “malignancy and even time to death of actual patients were directly proportional to the ‘earliness’ of the gene signatures,” he says. In his largest and latest tumor study, Kohane showed that the same holds true across different types of cancer. Comparing gene activity for nearly three dozen kinds of cancer and precancerous conditions against a timeline of 10 developmental processes, he could group seemingly disparate diseases into three categories. Among the tumors with signatures characteristic of the earliest embryonic development stages were lung adenocarcinoma, colorectal adenoma, T cell lymphomas and certain thyroid cancers. The highly aggressive cancers in this group also tend to look most undifferentiated and embryonic. The tumors with gene signatures that mirrored third-trimester and neonatal developmental gene expression patterns tend to be slower-growing types, including prostate and ovarian cancers, adrenal adenoma and liver dysplasia. A third category of tumors represented a mixed bag, in which activity matched aspects of both the other two groups.
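To make the grouping idea concrete, here is a minimal sketch of one way a tumor could be assigned to the developmental stage its gene activity most resembles: correlate its expression profile against reference profiles for a few stages and pick the best match. The gene count, stage profiles and tumor data below are invented for illustration; Kohane’s actual analysis was far more elaborate.

```python
# Illustrative sketch only: assign a tumor to the developmental stage whose
# reference gene-activity profile it correlates with most strongly.
# The stage profiles, gene list and tumor vector below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_genes = 500

# Hypothetical reference profiles: mean expression per gene at each stage.
stage_profiles = {
    "early_embryonic": rng.normal(size=n_genes),
    "third_trimester": rng.normal(size=n_genes),
    "neonatal": rng.normal(size=n_genes),
}

def assign_stage(tumor_expression, profiles):
    """Return the stage whose profile has the highest Pearson correlation
    with the tumor's expression vector, plus all the scores."""
    scores = {
        stage: np.corrcoef(tumor_expression, profile)[0, 1]
        for stage, profile in profiles.items()
    }
    return max(scores, key=scores.get), scores

# A hypothetical tumor that loosely resembles the early-embryonic profile.
tumor = stage_profiles["early_embryonic"] + rng.normal(scale=0.8, size=n_genes)
best, scores = assign_stage(tumor, stage_profiles)
print(best, {k: round(v, 2) for k, v in scores.items()})
```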

Similarities between embryos and tumors “should be paid attention to,” says pioneering cancer researcher Lloyd J. Old, chairman of the Ludwig Institute for Cancer Research New York Branch. “The reason this is so interesting is that the idea that cancer and development are in some way linked goes way back,” he explains. The 19th-century pathologist John Beard, for example, noted the similarity between tumors and the trophoblast, a part of an early embryo that eventually becomes the placenta. “If you’ve ever seen the trophoblast invading the uterus, it invades, spreads, creates a blood supply. It also suppresses the maternal immune system,” Old says. “All of those are characteristics of cancer.”

In his own research, Old has found common genetic programs at work in tumor cells and gametes. Among the subjects of his immunology studies are the cancer/testis (CT) antigens, a group of proteins manufactured almost exclusively by tumors and by sperm- and egg-producing germ-line cells. The specificity of CT antigens makes them ideal targets for cancer vaccines or antibody-based drugs, Old says; moreover, the activation of CT genes in tumors is telling. “These are programs that you and I used as gametes,” he explains. Seeing these primordial programs reactivated in tumors has led Old to describe cancer as a “somatic cell pregnancy.”

The fact that cancer cells switch on these normally silenced programs suggests to Old that the important characteristics of cancers are not random. “This is a fundamentally different way of thinking. A cell that mutates looks for genes that can help it flourish” and finds them in the suites of developmental genes, he says. “It’s a programmatic origin rather than a Darwinian origin” for cancer traits.

The two views of malignancy, however, do not necessarily conflict. “It’s not as if accumulating mutations are at odds with the discernible program,” remarks Robert A. Weinberg of the Massachusetts Institute of Technology, noting that the activation of developmental programs could be a downstream consequence of the mutations. Weinberg showed last year that gene activity involved in maintaining embryonic stem cell identity is a common feature of the most undifferentiated-looking and aggressive tumors. Whether that kind of evidence indicates an embryonic program driving those cancers remains to be determined, he cautions: “It’s an interesting concept, but at this stage what they talk about is highly speculative. One can ascribe all manner of human traits to cancers and speculate that it will lead one into therapeutic insights. But the devil is in the details.”

Source of Information: Scientific American (2009-05)

Rise of the Goddess

We do not know how long the transformation of the Middle Eastern wildcat into an affectionate home companion took. Animals can be domesticated quite rapidly under controlled conditions. In one famous experiment, begun in 1959, Russian scientists using highly selective breeding produced tame silver foxes from wild ones in just 40 years. But without doors or windowpanes, Neolithic farmers would have been hard-pressed to control the breeding of cats even if they wanted to. It seems reasonable to suggest that the lack of human influence on breeding and the probable intermixing of house cats and wildcats militated against rapid taming, causing the metamorphosis to occur over thousands of years.

Although the exact timeline of cat domestication remains uncertain, long-known archaeological evidence affords some insight into the process. After the Cypriot find, the next oldest hints of an association between humans and cats are a feline molar tooth from an archaeological deposit in Israel dating to roughly 9,000 years ago and another tooth from Pakistan dating to around 4,000 years ago.

Testament to full domestication comes from a much later period. A nearly 3,700-year-old ivory cat statuette from Israel suggests the cat was a common sight around homes and villages in the Fertile Crescent before its introduction to Egypt. This scenario makes sense, given that all the other domestic animals (except the donkey) and plants were introduced to the Nile Valley from the Fertile Crescent. But it is Egyptian paintings from the so-called New Kingdom period—Egypt’s golden era, which began nearly 3,600 years ago—that provide the oldest known unmistakable depictions of full domestication. These paintings typically show cats poised under chairs, sometimes collared or tethered, and often eating from bowls or feeding on scraps. The abundance of these illustrations signifies that cats had become common members of Egyptian households by this time.

It is in large part as a result of evocative images such as these that scholars traditionally perceived ancient Egypt as the locus of cat domestication. Even the oldest Egyptian representations of wildcats are 5,000 to 6,000 years younger than the 9,500-year-old Cypriot burial, however. Although ancient Egyptian culture cannot claim initial domestication of the cat among its many achievements, it surely played a pivotal role in subsequently molding the domestication dynamic and spreading cats throughout the world. Indeed, the Egyptians took the love of cats to a whole new level. By 2,900 years ago the domestic cat had become the official deity of Egypt in the form of the goddess Bastet, and house cats were sacrificed, mummified and buried in great numbers at Bastet’s sacred city, Bubastis. Measured by the ton, the sheer number of cat mummies found there indicates that Egyptians were not just harvesting feral or wild populations but, for the first time in history, were actively breeding domestic cats.

Egypt officially prohibited the export of their venerated cats for centuries. Nevertheless, by 2,500 years ago the animals had made their way to Greece, proving the inefficacy of export bans. Later, grain ships sailed directly from Alexandria to destinations throughout the Roman Empire, and cats are certain to have been onboard to keep the rats in check. Thus introduced, cats could have established colonies in port cities and then fanned out from there. By 2,000 years ago, when the Romans were expanding their empire, domestic cats were traveling with them and becoming common throughout Europe. Evidence for their spread comes from the German site of Tofting in Schleswig, which dates to between the 4th and 10th centuries, as well as increasing references to cats in art and literature from that period. (Oddly, domestic cats seem to have reached the British Isles before the Romans brought them over—a dispersal that researchers cannot yet explain.)

Meanwhile, on the opposite side of the globe, domestic cats had presumably spread to the Orient almost 2,000 years ago, along well-established trade routes between Greece and Rome and the Far East, reaching China by way of Mesopotamia and arriving in India via land and sea. Then something interesting happened. Because no native wildcats with which the newcomers could interbreed lived in the Far East, the Oriental domestic cats soon began evolving along their own trajectory. Small, isolated groups of Oriental domestics gradually acquired distinctive coat colors and other mutations through a process known as genetic drift, in which traits that are neither beneficial nor maladaptive become fixed in a population.

This drift led to the emergence of the Korat, the Siamese, the Birman and other “natural breeds,” which were described by Thai Buddhist monks in a book called the Tamara Maew (meaning “Cat-Book Poems”) that may date back to 1350. The putative antiquity of these breeds received support from the results of genetic studies announced last year, in which Marilyn Menotti-Raymond of the National Cancer Institute and Leslie Lyons of the University of California, Davis, found DNA differences between today’s European and Oriental domestic cat breeds indicative of more than 700 years of independent cat breeding in Asia and Europe.

As to when house cats reached the Americas, little is known. Christopher Columbus and other seafarers of his day reportedly carried cats with them on transatlantic voyages. And voyagers onboard the Mayflower and residents of Jamestown are said to have brought cats with them to control vermin and to bring good luck. How house cats got to Australia is even murkier, although researchers presume that they arrived with European explorers in the 1600s. Our group at the U.S. National Institutes of Health is tackling the problem using DNA.


Have Cats, Will Travel

As agriculture and permanent human settlements spread from the Fertile Crescent to the rest of the world, so, too, did domestic cats. The accompanying map (not reproduced here) charted the earliest putative occurrences of house cats in regions around the globe.

Source of Information: Scientific American (2009-06)

Saturday, August 29, 2009

A Cat and Mouse Game?

With the geography and an approximate age of the initial phases of cat domestication established, we could begin to revisit the old question of why cats and humans ever developed a special relationship. Cats in general are unlikely candidates for domestication. The ancestors of most domesticated animals lived in herds or packs with clear dominance hierarchies. (Humans unwittingly took advantage of this structure by supplanting the alpha individual, thus facilitating control of entire cohesive groups.) These herd animals were already accustomed to living cheek by jowl, so provided that food and shelter were plentiful, they adapted easily to confinement.

Cats, in contrast, are solitary hunters that defend their home ranges fiercely from other cats of the same sex (the pride-living lions are the exception to this rule). Moreover, whereas most domesticates feed on widely available plant foods, cats are obligate carnivores, meaning they have a limited ability to digest anything but meat—a far rarer menu item. In fact, they have lost the ability to taste sweet carbohydrates altogether. And as to utility to humans, let us just say cats do not take instruction well. Such attributes suggest that whereas other domesticates were recruited from the wild by humans who bred them for specific tasks, cats most likely chose to live among humans because of opportunities they found for themselves.

Early settlements in the Fertile Crescent between 9,000 and 10,000 years ago, during the Neolithic period, created a completely new environment for any wild animals that were sufficiently flexible and inquisitive (or scared and hungry) to exploit it. The house mouse, Mus musculus domesticus, was one such creature. Archaeologists have found remains of this rodent, which originated in the Indian subcontinent, among the first human stores of wild grain from Israel, which date to around 10,000 years ago. The house mice could not compete well with the local wild mice outside, but by moving into people’s homes and silos, they thrived.

It is almost certainly the case that these house mice attracted cats. But the trash heaps on the outskirts of town were probably just as great a draw, providing year-round pickings for those felines resourceful enough to seek them out. Both these food sources would have encouraged cats to adapt to living with people; in the lingo of evolutionary biology, natural selection favored those cats that were able to cohabitate with humans and thereby gain access to the trash and mice.

Over time, wildcats more tolerant of living in human-dominated environments began to proliferate in villages throughout the Fertile Crescent. Selection in this new niche would have been principally for tameness, but competition among cats would also have continued to influence their evolution and limit how pliant they became. Because these proto–domestic cats were undoubtedly mostly left to fend for themselves, their hunting and scavenging skills remained sharp. Even today most domesticated cats are free agents that can easily survive independently of humans, as evinced by the plethora of feral cats in cities, towns and countrysides the world over.

Considering that small cats do little obvious harm, people probably did not mind their company. They might have even encouraged the cats to stick around when they saw them dispatching mice and snakes. Cats may have held other appeal, too. Some experts speculate that wildcats just so happened to possess features that might have preadapted them to developing a relationship with people. In particular, these cats have “cute” features—large eyes, a snub face and a high, round forehead, among others—that are known to elicit nurturing from humans. In all likelihood, then, some people took kittens home simply because they found them adorable and tamed them, giving cats a first foothold at the human hearth.

Why was F. s. lybica the only subspecies of wildcat to be domesticated? Anecdotal evidence suggests that certain other subspecies, such as the European wildcat and the Chinese mountain cat, are less tolerant of people. If so, this trait alone could have precluded their adoption into homes. The friendlier southern African and Central Asian wildcats, on the other hand, might very well have become domesticated under the right conditions. But F. s. lybica had the advantage of a head start by virtue of its proximity to the first settlements. As agriculture spread out from the Fertile Crescent, so, too, did the tame scions of F. s. lybica, filling the same niche in each region they entered—and effectively shutting the door on local wildcat populations. Had domestic cats from the Near East never arrived in Africa or Asia, perhaps the indigenous wildcats in those regions would have been drawn to homes and villages as urban civilizations developed.

Source of Information: Scientific American (2009-06)

Friday, August 28, 2009

Cat’s Cradle

It is by turns aloof and affectionate, serene and savage, endearing and exasperating.
Despite its mercurial nature, however, the house cat is the most popular pet in the world. A third of American households have feline members, and more than 600 million cats live among humans worldwide. Yet as familiar as these creatures are, a complete understanding of their origins has proved elusive. Whereas other once-wild animals were domesticated for their milk, meat, wool or servile labor, cats contribute virtually nothing in the way of sustenance or work to human endeavor. How, then, did they become commonplace fixtures in our homes?

Scholars long believed that the ancient Egyptians were the first to keep cats as pets, starting around 3,600 years ago. But genetic and archaeological discoveries made over the past five years have revised this scenario—and have generated fresh insights into both the ancestry of the house cat and how its relationship with humans evolved.

The question of where house cats first arose has been challenging to resolve for several reasons. Although a number of investigators suspected that all varieties descend from just one cat species—Felis silvestris, the wildcat—they could not be certain. In addition, that species is not confined to a small corner of the globe. It is represented by populations living throughout the Old World—from Scotland to South Africa and from Spain to Mongolia—and until recently scientists had no way of determining unequivocally which of these wildcat populations gave rise to the tamer, so-called domestic kind. Indeed, as an alternative to the Egyptian origins hypothesis, some researchers had even proposed that cat domestication occurred in a number of different locations, with each domestication spawning a different breed. Confounding the issue was the fact that members of these wildcat groups are hard to tell apart from one another—and from feral domesticated cats with so-called mackerel-tabby coats—because all of them share the same pelage pattern of curved stripes. They also interbreed freely with one another, further blurring population boundaries.


In 2000 one of us (Driscoll) set out to tackle the question by assembling DNA samples from some 979 wildcats and domestic cats in southern Africa, Azerbaijan, Kazakhstan, Mongolia and the Middle East. Because wildcats typically defend a single territory for life, he expected that the genetic composition of wildcat groups would vary across geography but remain stable over time, as has occurred in many other cat species. If regional indigenous groups of these animals could be distinguished from one another on the basis of their DNA and if the DNA of domestic cats more closely resembled that of one of the wildcat populations, then he would have clear evidence for where domestication began.

In the genetic analysis, published in 2007, Driscoll, another of us (O’Brien) and their colleagues focused on two kinds of DNA that molecular biologists traditionally examine to differentiate subgroups of mammal species: DNA from mitochondria, which is inherited exclusively from the mother, and short, repetitive sequences of nuclear DNA known as microsatellites. Using established computer routines, they assessed the ancestry of each of the 979 individuals sampled based on their genetic signatures. Specifically, they measured how similar each cat’s DNA was to that of all the other cats and grouped the animals having similar DNA together. They then asked whether most of the animals in a group lived in the same region.
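The study relied on dedicated population-genetics methods, but the core idea described here, grouping individuals by pairwise genetic similarity and then asking whether the groups track geography, can be sketched in a few lines. The sample names, distance values and choice of hierarchical clustering below are hypothetical and purely illustrative.

```python
# Illustrative sketch of the underlying idea only: cluster individuals by
# pairwise genetic distance and see whether the clusters line up with
# geography. All samples and distance values here are hypothetical.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

samples = ["wildcat_Israel", "wildcat_UAE", "domestic_US",
           "domestic_Japan", "wildcat_Europe", "wildcat_China"]

# Hypothetical pairwise genetic distances (symmetric, zero diagonal).
D = np.array([
    [0.00, 0.05, 0.06, 0.07, 0.40, 0.45],
    [0.05, 0.00, 0.06, 0.08, 0.41, 0.46],
    [0.06, 0.06, 0.00, 0.04, 0.42, 0.47],
    [0.07, 0.08, 0.04, 0.00, 0.43, 0.48],
    [0.40, 0.41, 0.42, 0.43, 0.00, 0.30],
    [0.45, 0.46, 0.47, 0.48, 0.30, 0.00],
])

# Average-linkage clustering on the condensed form of the distance matrix.
tree = linkage(squareform(D), method="average")
clusters = fcluster(tree, t=3, criterion="maxclust")

for name, cluster_id in zip(samples, clusters):
    print(cluster_id, name)
# In this toy example the domestic cats fall into the same cluster as the
# Middle Eastern wildcats, mirroring the paper's finding that domestic cats
# group with F. s. lybica.
```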

The results revealed five genetic clusters, or lineages, of wildcats. Four of these lineages corresponded neatly with four of the known subspecies of wildcat and dwelled in specific places: F. silvestris silvestris in Europe, F. s. bieti in China, F. s. ornata in Central Asia and F. s. cafra in southern Africa. The fifth lineage, however, included not only the fifth known subspecies of wildcat—F. s. lybica in the Middle East—but also the hundreds of domestic cats that were sampled, including purebred and mixed-breed felines from the U.S., the U.K. and Japan. In fact, genetically, F. s. lybica wildcats collected in remote deserts of Israel, the United Arab Emirates and Saudi Arabia were virtually indistinguishable from domestic cats. That the domestic cats grouped with F. s. lybica alone among wildcats meant that domestic cats arose in a single locale, the Middle East, and not in other places where wildcats are common.

Once we had figured out where house cats came from, the next step was to ascertain when they had become domesticated. Geneticists can often estimate when a particular evolutionary event occurred by studying the quantity of random genetic mutations that accumulate at a steady rate over time. But this so-called molecular clock ticks a mite too slowly to precisely date events as recent as the past 10,000 years, the likely interval for cat domestication. To get a bead on when the taming of the cat began, we turned to the archaeological record. One recent find has proved especially informative in this regard.
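A rough, textbook-style calculation (not taken from the study, and assuming a neutral substitution rate on the order of one change per billion sites per year for mammalian nuclear DNA) shows why the clock is too coarse for so recent an event:

```latex
% Textbook molecular-clock approximation, with illustrative assumed values:
% per-site, per-year substitution rate \mu, divergence time t.
\[
  d \approx 2\mu t
  \qquad\Longrightarrow\qquad
  t \approx \frac{d}{2\mu}.
\]
% For an event only t \sim 10^{4} years old and \mu \sim 10^{-9},
\[
  d \approx 2 \times 10^{-9}\,\mathrm{yr}^{-1} \times 10^{4}\,\mathrm{yr}
    = 2 \times 10^{-5}\ \text{substitutions per site},
\]
% roughly one fixed difference per 50,000 sites, which is swamped by the
% variation already present within cat populations. Hence the clock cannot
% reliably resolve a date this recent.
```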

In 2004 Jean-Denis Vigne of the National Museum of Natural History in Paris and his colleagues reported unearthing the earliest evidence suggestive of humans keeping cats as pets. The discovery comes from the Mediterranean island of Cyprus, where 9,500 years ago an adult human of unknown gender was laid to rest in a shallow grave. An assortment of items accompanied the body—stone tools, a lump of iron oxide, a handful of seashells and, in its own tiny grave just 40 centimeters away, an eight-month-old cat, its body oriented in the same westward direction as the human’s.

Because cats are not native to most Mediterranean islands, we know that people must have brought them over by boat, probably from the adjacent Levantine coast. Together the transport of cats to the island and the burial of the human with a cat indicate that people had a special, intentional relationship with cats nearly 10,000 years ago in the Middle East. This locale is consistent with the geographic origin we arrived at through our genetic analyses. It appears, then, that cats were being tamed just as humankind was establishing the first settlements in the part of the Middle East known as the Fertile Crescent.

The House Cat’s Ancestor

Source of Information: Scientific American (2009-06)

Thursday, August 27, 2009

Phosphorus - While Supplies Last

Altogether, phosphorus flows now add up to an estimated 37 million metric tons per year. Of that, about 22 million metric tons come from phosphate mining. The earth holds plenty of phosphorus-rich minerals, but most of them are not readily available or economically recoverable. The International Geological Correlation Program (IGCP) reckoned in 1987 that there might be some 163,000 million metric tons of phosphate rock worldwide, corresponding to more than 13,000 million metric tons of phosphorus, seemingly enough to last nearly a millennium. These estimates, however, include types of rocks, such as high-carbonate minerals, that are impractical as sources because no economical technology exists to extract the phosphorus from them. The tallies also include deposits that are inaccessible because of their depth or location offshore; moreover, they may exist in underdeveloped or environmentally sensitive land or in the presence of high levels of toxic or radioactive contaminants such as cadmium, chromium, arsenic, lead and uranium.

Estimates of deposits that are economically recoverable with current technology—known as reserves—are at 15,000 million metric tons. That is still enough to last about 90 years at current use rates. Consumption, however, is likely to grow as the population increases and as people in developing countries demand a higher standard of living. Increased meat consumption, in particular, is likely to put more pressure on the land, because livestock consume far more food than they yield as meat.

Phosphorus reserves are also concentrated geographically. Just four countries—the U.S., China, South Africa and Morocco, together with its Western Sahara Territory—hold 83 percent of the world’s reserves and account for two thirds of annual production. Most U.S. phosphate comes from mines in Florida’s Bone Valley, a fossil deposit that formed in the Atlantic Ocean 12 million years ago. According to the U.S. Geological Survey, the nation’s reserves amount to 1,200 million metric tons. The U.S. produces about 30 million metric tons of phosphate rock a year; at that rate, its reserves should last about 40 years. Already U.S. mines no longer supply enough phosphorus to satisfy the country’s production of fertilizer, much of which is exported. As a result, the U.S. now imports phosphate rock. China has high-quality reserves, but it does not export; most U.S. imports come from Morocco.
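The lifetime figures above (about 40 years for the U.S., about 90 years worldwide) are simple reserve-to-production ratios. The sketch below reproduces the arithmetic under two stated assumptions: that reserves and production both refer to phosphate rock, and that world production runs at roughly 167 million metric tons a year, a round figure inferred from the U.S. share rather than stated directly in the article.

```python
# Rough check of the reserve-lifetime figures quoted above. Both reserves and
# production are taken to be phosphate rock; the world production rate is an
# assumed round number (the U.S.'s 30 million tons is roughly a fifth of
# world output), so treat the results as order-of-magnitude only.

US_RESERVES_MT = 1_200            # million metric tons of rock (USGS figure cited above)
US_PRODUCTION_MT_PER_YR = 30      # million metric tons of rock per year

WORLD_RESERVES_MT = 15_000        # million metric tons of rock
WORLD_PRODUCTION_MT_PER_YR = 167  # assumed; not stated directly in the article

print(f"U.S. reserve lifetime:  {US_RESERVES_MT / US_PRODUCTION_MT_PER_YR:.0f} years")        # about 40
print(f"World reserve lifetime: {WORLD_RESERVES_MT / WORLD_PRODUCTION_MT_PER_YR:.0f} years")  # about 90
```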

Even more than with oil, the U.S. and much of the globe may come to depend on a single country for a critical resource. Some geologists are skeptical about the existence of a phosphorus crisis and reckon that estimates of resources and their duration are moving targets. The very definition of reserves is dynamic because, when prices increase, deposits that were previously considered too expensive to access are reclassified as reserves. Shortages or price swings can stimulate conservation efforts or the development of extraction technologies. And mining companies have an incentive to explore for new deposits only once a resource’s lifetime falls below a certain number of decades. But the depletion of old mines spurs more exploration, which expands the known resources. For instance, 20 years ago geologist R. P. Sheldon pointed out that the rate of new resource discovery had been consistent over the 20th century. Sheldon also suggested that tropical regions with deep soils had been inadequately explored: these regions occupy 22 percent of the earth’s land surface but contain only 2 percent of the known phosphorus reserves.

Yet most of the phosphorus discovery to date has occurred in just two places: Morocco/Western Sahara and North Carolina. And much of North Carolina’s resources are restricted because they underlie environmentally sensitive areas. Thus, the findings so far are not enough to allay concerns about future supply. Society should therefore face the reality of an impending phosphorus crisis and begin to make a serious effort at conservation.


Rock Steady
The standard approaches to conservation apply to phosphorus as well: reduce, recycle and reuse. We can reduce fertilizer usage through more efficient agricultural practices such as terracing and no-till farming to diminish erosion [see “No-Till: The Quiet Revolution,” by David R. Huggins and John P. Reganold; Scientific American, July 2008]. The inedible biomass harvested with crops, such as stalks and stems, should be returned to the soil with its phosphorus, as should animal waste (including bones) from meat and dairy production, less than half of which is now used as fertilizer. We will also have to treat our wastewater to recover phosphorus from solid waste. This task is difficult because residual biosolids are contaminated with many pollutants, especially heavy metals such as lead and cadmium, which leach from old pipes. Making agriculture sustainable over the long term begins with renewing our efforts to phase out toxic metals from our plumbing.

Half the phosphorus we excrete is in our urine, from which it would be relatively easy to recover. And separating solid and liquid human waste—which can be done in treatment plants or at the source, using specialized toilets—would have an added advantage. Urine is also rich in nitrogen, so recycling it could offset some of the nitrogen that is currently extracted from the atmosphere, at great cost in energy. Meanwhile new discoveries are likely just to forestall the depletion of reserves, not to prevent it. For truly sustainable agriculture, the delay would have to be indefinite. Such an achievement would be possible only with a world population small enough to be fed using natural and mostly untreated minerals that are low-grade sources of phosphorus. As with other resources, the ultimate question is how many humans the earth can really sustain. We are running out of phosphorus deposits that are relatively easily and cheaply exploitable. It is possible that the optimists are correct about the relative ease of obtaining new sources and that shortages can be averted. But given the stakes, we should not leave our future to chance.

Source of Information: Scientific American (2009-06)

Wednesday, August 26, 2009

Phosphorus: A Looming Crisis

This underappreciated resource—a key part of fertilizers—is still decades from running out. But we must act now to conserve it, or future agriculture will collapse

As complex as the chemistry of life may be, the conditions for the vigorous growth of plants often boil down to three numbers, say, 19-12-5. Those are the percentages of nitrogen, phosphorus and potassium, prominently displayed on every package of fertilizer. In the 20th century the three nutrients enabled agriculture to increase its productivity and the world’s population to grow more than sixfold. But what is their source? We obtain nitrogen from the air, but we must mine phosphorus and potassium. The world has enough potassium to last several centuries. But phosphorus is a different story. Readily available global supplies may start running out by the end of this century. By then our population may have reached a peak that some say is beyond what the planet can sustainably feed.

Moreover, trouble may surface much sooner. As last year’s oil price swings have shown, markets can tighten long before a given resource is anywhere near its end. And reserves of phosphorus are even less evenly distributed than oil’s, raising additional supply concerns. The U.S. is the world’s second-largest producer of phosphorus (after China), at 19 percent of the total, but 65 percent of that amount comes from a single source: pit mines near Tampa, Fla., which may not last more than a few decades. Meanwhile nearly 40 percent of global reserves are controlled by a single country, Morocco, sometimes referred to as the “Saudi Arabia of phosphorus.” Although Morocco is a stable, friendly nation, the imbalance makes phosphorus a geostrategic ticking time bomb.

In addition, fertilizers take an environmental toll. Modern agricultural practices have tripled the natural rate of phosphorus depletion from the land, and excessive runoff into waterways is feeding uncontrolled algal blooms and throwing aquatic ecosystems off-kilter. While little attention has been paid to it as compared with other elements such as carbon or nitrogen, phosphorus has become one of the most significant sustainability issues of our time.


Green Revelation
My interest in phosphorus dates back to the mid-1990s, when I became involved in a NASA program aiming to learn how to grow food in space. The design of such a system requires a careful analysis of the cycles of all elements that go into food and that would need to be recycled within the closed environment of a spaceship. Such know-how may be necessary for a future trip to Mars, which would last almost three years. Our planet is also a spaceship: it has an essentially fixed total amount of each element. In the natural cycle, weathering releases phosphorus from rocks into soil. Taken up by plants, it enters the food chain and makes its way through every living being. Phosphorus—usually in the form of the phosphate ion PO4³⁻—is an irreplaceable ingredient of life. It forms the backbone of DNA and of cellular membranes, and it is the crucial component in the molecule adenosine triphosphate, or ATP—the cell’s main form of energy storage. An average human body contains about 650 grams of phosphorus, most of it in our bones.

Land ecosystems use and reuse phosphorus in local cycles an average of 46 times. The mineral then, through weathering and runoff, makes its way into the ocean, where marine organisms may recycle it some 800 times before it passes into sediments. Over tens of millions of years tectonic uplift may return it to dry land.

Harvesting breaks up the cycle because it removes phosphorus from the land. In prescientific agriculture, when human and animal waste served as fertilizers, nutrients went back into the soil at roughly the rate they had been withdrawn. But our modern society separates food production and consumption, which limits our ability to return nutrients to the land. Instead we use them once and then flush them away. Agriculture also accelerates land erosion—because plowing and tilling disturb and expose the soil—so more phosphorus drains away with runoff. And flood control contributes to disrupting the natural phosphorus cycle. Typically, river floods would redistribute phosphorus-rich sediment to lower lands where it is again available for ecosystems. Instead dams trap sediment, or levees confine it to the river until it washes out to sea. So too much phosphorus from eroded soil and from human and animal waste ends up in lakes and oceans, where it spurs massive, uncontrolled blooms of cyanobacteria (also known as blue-green algae) and algae. Once they die and fall to the bottom, their decay starves other organisms of oxygen, creating “dead zones” and contributing to the depletion of fisheries.

Source of Information: Scientific American (2009-06)

Tuesday, August 25, 2009

On the Other Hand

Double-hand transplantations could switch the handedness of patients. Two men who lost both hands in work injuries received transplants after three to four years of waiting. Despite such a long time—the brain typically reassigns areas linked with control of the amputated limb to other muscles—researchers at the French Center for Cognitive Neuroscience in Lyon found the patients’ brains could connect to the new hands, which subsequently could perform complex tasks (in a demonstration, one patient repaired electrical wires). Although both men were right-handed, their left hand connected with their brain at least a year sooner than their right hand did, and they became left-handed. The reason for this switch, reported online April 6 by the Proceedings of the National Academy of Sciences USA, is unclear—perhaps the prior dominance of the right hand made the corresponding brain regions less flexible to reconnections or the surgeries were done slightly differently.

Source of Information: Scientific American (2009-06)

Monday, August 24, 2009

Pulling Up Worms

The Conficker worm exposes computer flaws, fixes and fiends BY MICHAEL MOYER

Computer users could be forgiven if they kept their machines off on April 1. Since it first appeared last November, the malicious software known as the Conficker worm has established itself as one of the most powerful threats the Internet has seen in years, infecting an estimated 10 million computers worldwide. The malware slipped into machines running the Windows operating system and waited quietly for April Fools’ Day (the timing did not go unnoticed), when it was scheduled to download and execute a new set of instructions. Although no one knew what was to come, the worm’s sophistication provided a stark example of how the global malware industry is evolving into a model of corporate efficiency. At the same time, it raised calls for security researchers to steal a trick from their black hat counterparts.

A worm takes advantage of security holes in ubiquitous software—in this case, Microsoft Windows—to spread copies of itself. Conficker, though, was a strikingly advanced piece of code, capable of neutering a computer’s antivirus software and receiving updates that would give it more complex abilities. Its sudden march across the Web reignited interest in one of the most controversial ideas in security protection: the release of a “good” worm. Such software would spread like a worm but help to secure the machines it infected. The approach had already been attempted once before. In late 2003 the Welchia worm burrowed into Windows machines by exploiting the same vulnerability as the then widespread Blaster worm. Yet unlike Blaster, which was programmed to launch an attack against a Microsoft Web site, Welchia updated the infected machines with security patches.

On the surface, Welchia appeared to be a success. Yet this worm, like every worm, spiked network traffic and clogged the Internet. It also rebooted machines without users’ consent. (A common criticism of automatic security updates—and a key reason why many people decide to turn them off—is that installing a security patch requires restarting the computer, sometimes at inopportune moments.) More important, no matter how noble the purpose, a worm is an unauthorized intrusion.

After Welchia, the discussion about good worms went away, at least in part because worms themselves went away. “Back in the early 2000s, there weren’t strong business models for distributed malware,” says Philip Porras, program director of the nonprofit security research firm SRI International. Hackers, he explains, “were using [worms] to make statements and to gain recognition.” Worms would rope computers together into botnets—giant collections of zombie computers—which could then attempt to shut down legitimate Web sites. Exciting (if you’re into that sort of thing), but not very profitable.

In the past five years malware has grown ever more explicitly financial. “Phishers” send out e-mails to trick people into revealing user names and passwords. Criminals have also begun uploading hard-to-detect surveillance code to legitimate store sites, where it covertly intercepts credit-card information. The stolen information then goes up for sale on the Internet’s black market. An individual’s user name and password to a banking site can fetch anywhere from $10 to $1,000; credit-card numbers, which are far more plentiful, go for as little as six cents. The total value of the goods that appear on the black market in the course of a year now exceeds $7 billion, according to Internet security company Symantec.

The tightly managed criminal organizations behind such scams—often based in Russia and former Soviet republics—treat malware like a business. They buy advanced code on the Internet’s black market, customize it, then sell or rent the resulting botnet to the highest bidders. They extend the worm’s life span as long as possible by investing in updates—maintenance by another name. This assembly line–style approach to crime works: of all the viruses that Symantec has tracked over the past 20 years, 60 percent were introduced in the past 12 months.

A week after the April 1 deadline, it became clear that the people responsible for Conficker had strong financial motivations. The worm downloaded a well-known spam generator. In addition, computers infected with the worm began to display a highly annoying “Windows Security Alert” pop-up warning every few minutes. The alerts claimed that the computer was infected with a virus, which was true enough. Yet these scareware warnings also promised that the only way to clean one’s machine was to download the $50 program advertised—credit-card payments only, please.

Ironically, routine updates could have prevented the worm’s spread in the first place. In fact, Conficker emerged a full four weeks after Microsoft released the “urgent” security patch that protected computers against it. Clearly, millions of machines were not being updated. And millions still probably are not properly immunized—a disturbing thought, considering that, even after its April actions, Conficker resumed waiting for further instructions.

Source of Information: Scientific American (2009-06)

Sunday, August 23, 2009

A Mechanism of Hot Air

A popular carbon-offset scheme may do little to cut emissions

A convenient way of cutting industrial gases that warm the planet was supposed to be the United Nations’ Clean Development Mechanism (CDM). As a provision of the Kyoto Protocol, the CDM enables industrial nations to reduce their greenhouse gas emissions in part by purchasing “carbon offsets” from poorer countries, where green projects are more affordable. The scheme, which issued its first credits in 2005, has already transferred the right to emit an extra 250 million tons of carbon dioxide (CO2), and that could swell to 2.9 billion tons by 2012. Offsets will “play a more significant role” as emissions targets become tighter, asserts Yvo de Boer of the U.N. Framework Convention on Climate Change.

But criticism of the CDM has been mounting. Despite strenuous efforts by regulators, a significant fraction of the offset credits is fictitious “hot air” manufactured by accounting tricks, critics say. As a result, greenhouse gases are being emitted without compensating reductions elsewhere.

The accounting is rooted in a concept known as additionality. To earn credits, a project should owe its existence to the prospective earnings from carbon credits: the emissions reductions from the project should be additional to what would have happened in the absence of the CDM. Hence, the developers of a wind farm in India that replaces a coal-fired power plant could sell the difference in carbon emissions between the two projects as offsets—but not if the wind farm would have been built anyway. Many CDM projects, however, do not appear to be offsetting carbon output at all. The Berkeley, Calif.–based organization International Rivers discovered that a third of the CDM’s hydropower projects had been completed before they were accredited. Lambert Schneider of Germany’s Institute for Applied Ecology judged two fifths of the world’s CDM portfolio to be of similarly questionable additionality. Climatologist Michael Wara of Stanford University guesses the figure could be much higher, but, he says, “we have no way of knowing.”

Determining which projects are “additional” can be tricky, explains researcher Larry Lohmann of the Corner House, an environmental think tank based in Dorset, England. “There’s no such thing as a single world line, a single narrative of what would have happened without the project,” he points out. “It’s not a solvable problem.”

A related worry is that of perverse incentives. Consultants assessing a carbon-offset project often compare it with the accepted practice in the developing country where it will be located. Such an approach gives that country an incentive to take the most polluting path, so as to maximize the credits it can earn for a CDM project. Selling these artificially inflated credits could thus ultimately enable more carbon to be emitted than if the offset had not been created at all.

Take the controversy over gas flaring in Nigeria, where oil firms burn off 40 percent of the natural gas found with oil. The state-owned Nigeria Agip Oil Company plans instead to generate electricity from the waste gas of its Kwale plant, displacing fossil fuels that might otherwise have been consumed. That strategy would create credits of 1.5 million tons of carbon dioxide a year for sale. (A credit for a ton of CO2, called a certified emissions reduction, has been selling for about $15 in Europe.) The project is deemed additional because the prospect of selling offsets motivated the developers.

But activist Michael Karikpo of Oilwatch finds that classification to be “outrageous”—because routine flaring, which spews carcinogens such as benzene and triggers acid rain, is illegal in Nigeria. No company should profit from flouting the law, he adds: “It’s like a criminal demanding money to stop committing crimes.” Nevertheless, the incentive to declare a project as additional is powerful. Pan Ocean Oil Corporation, based in Nigeria, has applied for CDM approval for an effort to process and market waste gas from its Ovade-Ogharefe oil field. Should the government begin enforcing the law against flaring, it would render the project nonadditional and sacrifice considerable benefits. The CDM’s executive board has strengthened its review process to improve the tests for additionality and to reduce perverse incentives. For instance, the board no longer accepts new projects for burning off HFC-23, a greenhouse gas produced during the manufacture of refrigerant, because the windfall credits it generated had created an incentive to set up chemical factories for the sole purpose of burning HFC-23. (Because of HFC-23’s heat-trapping potency, one ton of it fetches 12,000 CO2 credits.)
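A back-of-the-envelope calculation, using only the figures quoted in this article, shows why the HFC-23 windfall dwarfed ordinary projects and why the board acted; the script and its variable names are illustrative, not drawn from any CDM documentation.

```python
# Back-of-the-envelope look at the incentives described above, using only
# figures quoted in this article: about $15 per certified emissions reduction,
# 12,000 credits per ton of HFC-23 destroyed, and 1.5 million credits a year
# for the Kwale gas project.

PRICE_PER_CREDIT_USD = 15
CREDITS_PER_TON_HFC23 = 12_000
KWALE_CREDITS_PER_YEAR = 1_500_000

# Revenue from destroying a single ton of HFC-23:
print(f"HFC-23: ${CREDITS_PER_TON_HFC23 * PRICE_PER_CREDIT_USD:,} per ton destroyed")  # $180,000

# Approximate annual credit revenue for the Kwale project:
print(f"Kwale:  ${KWALE_CREDITS_PER_YEAR * PRICE_PER_CREDIT_USD:,} per year")  # $22,500,000
```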

Some observers think the CDM is too far gone to salvage. No amount of tinkering will repair such a “fundamental design flaw” as additionality, Wara contends. Last November the U.S. Government Accountability Office warned that carbon offsets “may not be a reliable long-term approach to climate change mitigation.” In January the European Commission determined that the CDM should be phased out for at least the more advanced developing countries, which would instead be pressured to accept binding commitments to limit emissions. Another proposal would replace the CDM with a fund for developing countries to build green projects without generating credits—thereby eliminating the entire concept of additionality.

Doing away with the CDM and other offsets could be hard, though, because they are the easiest way for industrial nations to meet their emissions targets. The U.S. is considering a bill to reduce emissions by an ambitious 20 percent by 2020, but its provisions are so generous that apparently the country could meet its goal just by buying offsets. The fate of the CDM will be decided in climate talks to be held in December in Copenhagen.



Carbon flare-up: Burning off waste gas in Nigerian oil fields—here near the city of Port Harcourt—spews pollutants and is illegal. But the practice persists in part because firms can take advantage of carbon offsets to earn credits for projects that reduce gas flaring.

Saturday, August 22, 2009

An ill wind for wild chimps?

Simian immunodeficiency virus is associated with increased mortality in a subspecies of chimpanzee living under natural conditions in East Africa. This is worrying news for the chimpanzee populations involved.

Today’s pandemic strain of HIV-1 crossed the species barrier from chimpanzees (Pan troglodytes) to humans less than 100 years ago. Until now, it has been widely assumed that the precursor of HIV-1, chimpanzee simian immunodeficiency virus (SIVcpz), causes little, if any, illness in its animal host. On page 515 of this issue, however, Keele and colleagues [1] show that small groups of wild chimpanzees naturally infected with SIVcpz do develop hallmarks of AIDS. Careful monitoring for almost a decade has revealed that SIV-infected animals of the eastern subspecies of chimpanzee (Pan troglodytes schweinfurthii) in the Gombe National Park in Tanzania have a markedly higher death rate than non-infected animals.

Chimpanzees — humans’ closest living relative — are not only genetically similar to humans but also share susceptibility to some infectious diseases. For instance, outbreaks of haemorrhagic fever caused by relatives of Ebola virus in chimpanzees and gorillas have resulted in marked mortality in wild populations [2]. However, subtle chronic diseases like those caused by lentiviruses such as SIV/HIV are more difficult to document in wild nonhuman primates than are acute diseases with high mortality rates.

More than 40 strains of SIV are known to cause natural infection in African nonhuman primates [3], but few cases of AIDS have been recorded, and progression to illness after viral infection is thought to be rare. This lack of disease has been well documented in African green monkeys and sooty mangabeys [4]. In addition, a captive P. t. schweinfurthii is in good health almost 20 years after being identified as having naturally acquired SIVcpz infection. However, exceptions have been noted in primates in captivity. For example, a captive sooty mangabey developed classic symptoms of AIDS 18 years after natural infection with SIV [5]. This long incubation period exceeded the average lifespan of sooty mangabeys in the wild and, because the adaptation of SIV to its natural simian hosts would not necessarily include complete avirulence, it is difficult to draw conclusions from this isolated example. AIDS has also been documented in black mangabeys in captivity following transmission of SIVsm from sooty mangabeys, their naturally infected cousins [4].

The outcome of infection with HIV or SIV is determined both by variants of the viral population and by individual host differences. In studies conducted largely in the 1980s, chimpanzees (mostly of the subspecies Pan troglodytes verus) that were infected with a particular laboratory strain of HIV-1 controlled the amount of virus in the blood, and remained healthy, although a human accidentally exposed to the same strain of HIV-1 developed AIDS [6]. A report of one case indicated that certain variants of HIV-1 can cause disease in chimpanzees [7]. However, such experiments are no longer permitted owing to the endangered status of chimpanzees.

Keele et al. [1] are the first to examine the long-term effects of naturally transmitted SIVcpz in wild chimpanzee populations in their natural habitat. The research was made possible because the Gombe chimpanzees have been studied in their natural setting for many decades by Jane Goodall and colleagues, and these animals are accustomed to the presence of humans. Samples of faeces collected from the forest floor can be used to detect SIVcpz DNA, and the infected individual can be identified by performing genetic fingerprinting on the same sample. In addition, antibodies directed against SIVcpz can be detected in urine. Using such techniques, 94 members of social groups of chimpanzees in Gombe were meticulously followed over a 9-year period, without the need to capture them or take blood samples, to determine rates of SIVcpz infection and its effects. Overall, 17 (18%) apes were infected with SIVcpz. Several infected animals that died were autopsied, revealing pathology compatible with human AIDS. The SIV-infected apes had a 10- to 16-fold higher mortality rate than did non-infected apes, and a smaller proportion of infected females gave birth to offspring, none of which survived for more than one year.

SIVcpz in chimpanzees is apparently less virulent than HIV-1 is in humans. Why? One hypothesis [8] is that SIVcpz and its chimpanzee host have co-evolved over millions of years. But SIVcpz may in fact be a relatively recent introduction into chimpanzees — it seems to be a recombinant virus with genome components from SIVs of other monkeys, such as red-capped mangabeys and guenons [9]. A recent report [10] estimates that the time of the most recent common ancestor of the SIVcpz strains of all chimpanzee subspecies is 1492, although this calculation is likely to be hotly debated. Alternative explanations for differences in virulence between HIV-1 and SIVcpz could be the immune status or evolved immune genetics of the affected populations, including differences among species in host factors that restrict certain types of viral infection [9].

The Gombe chimpanzees belong to the eastern subspecies P. t. schweinfurthii. The western subspecies Pan troglodytes troglodytes, which harbours the SIVcpz variants that have given rise to HIV-1, has not yet been found to show AIDS-like illness. Could the increased virulence of SIVcpz in P. t. schweinfurthii be due to a recent cross-subspecies transmission of SIVcpz from P. t. troglodytes to an immunologically naive P. t. schweinfurthii population? It is notable that SIVcpz of the P. t. troglodytes subspecies has transmitted to humans on at least three occasions, whereas SIVcpz cross-infection to humans from P. t. schweinfurthii has not been documented.

In view of Keele and colleagues’ results [1], why was the progression of SIV infection to AIDS-like illness not more apparent in chimpanzees in captivity? Much of the pathology of AIDS is linked to a general hyperactivation of the immune system that occurs before CD4 T-cell depletion, which is characteristic of AIDS [11]. Certainly the parasitic, bacterial and viral burden in P. t. schweinfurthii chimpanzees in their natural habitat might provide the activating environment to trigger progression to AIDS in a relatively immunologically naive subspecies. General theories of the evolution of virulence, and of the relationship between virulence and pathogen dispersal [12], can help in elucidating why certain infections cause disease whereas others don’t — the abrupt changes in virulence on cross-host infection are ripe for modelling, taking into account the burgeoning knowledge of host genetic variation. Primate lentiviruses would be a promising starting point for this exercise.

The study by Keele et al. [1] shows the benefit of multidisciplinary research, in which primatologists, pathologists, geneticists and molecular virologists joined forces. Their work indicates that SIVcpz causes AIDS-like disease in a subspecies of chimpanzee that is already endangered in the wild. In addition to the threat of AIDS and Ebola, the great apes may also be at risk of acquiring other infections that affect humans through their increasingly close contact with our species. Relatively minor human infections could prove to be serious pathogens in chimpanzees. We are not alone.


Source of Information : Nature 23 July 2009

Friday, August 21, 2009

ExxonMobil invests in algae for biofuel

Oil and gas company ExxonMobil, whose chief executive Rex Tillerson called the idea of ethanol as a biofuel “moonshine” in 2007, last week announced a US$600-million research alliance to develop biofuels from photosynthetic algae. The multi-year project sees Exxon team up with Synthetic Genomics, the biotechnology company in La Jolla, California, co-founded by Craig Venter, which numbers another oil and gas giant, BP, among its investors. Synthetic Genomics will receive $300 million — more, if deemed successful — to develop high-yielding algal strains and their large-scale cultivation. Exxon is spending an equal sum internally on engineering and manufacturing expertise to support the research.

Source of Information : Nature 23 July 2009

Thursday, August 20, 2009

Scientists strive to boost US–Cuban collaboration

A drive to increase scientific exchange between the United States and Cuba is off to a slow start. In the past four months, Cuban officials have cancelled two planned trips of top US scientific leaders to the island nation. Citing other visitors and events that took up their time, the officials have turned down requests for visits organized by the American Association for the Advancement of Science (AAAS) and the New America Foundation, a nongovernmental organization; both are based in Washington DC. In April, the administration of US President Barack Obama said it would work to improve relationships between the two countries, including promoting the “freer flow of information”.

The organizers, who have had the trips in the works since before Obama took office, remain hopeful that a delegation might visit Cuba this autumn, says Lawrence Wilkerson, who was chief of staff to former Secretary of State Colin Powell and is working on a New America initiative aimed at Cuba. The delegation is expected to address topics such as tapping Cuba’s strengths in biotechnology, pharmaceuticals and studies involving hurricane research, food production and salt-resistant crops. “Of course we would like more scientific exchange,” says Miguel Abad Salazar, a researcher at the BIOECO conservation facility near Santiago in eastern Cuba.

Travel restrictions remain a major stumbling block for US–Cuban collaboration. For instance, US scientists seeking to travel to Cuba can’t use federal funds without special government permission. And any US scientist travelling to Cuba must get a licence from the treasury department to spend US dollars there, even if funds come from the private foundations that typically pay for such trips. It has also been nearly impossible for Cuban scientists to come to the United States; one immediate barrier is the US$150 nonrefundable fee for a visa application.

During the Obama administration, however, a handful of Cuban scientists have visited the United States, and US scientists have been increasingly venturing to Cuba. Observers say that the exchanges reflect a growing thaw in bilateral relations, which began before Obama’s election.

In May, for instance, David Winkler of Cornell University in Ithaca, New York, went to Cuba to teach an ornithology course to about two dozen scientists at a BIOECO meeting. He went as part of Cornell’s Neotropical Conservation Initiative, coordinated by Eduardo Iñigo-Elias, who has been studying in Cuba for years. “The students and scientists were as well trained as anywhere in Latin America,” says Winkler. “They would be great ambassadors to work on research projects in other countries.” His group hopes to develop such an exchange programme.

These individual exchanges, rather than a coordinated governmental programme, should be the wave of the future, says Peter Feinsinger, a wildlife conservationist who has visited Cuba about a dozen times in the past six years to train biologists. “I favour a scientific grass-roots initiative,” says Feinsinger, who works for Northern Arizona University in Flagstaff but is largely funded through the Wildlife Conservation Society in New York. “I think this will happen naturally.”

US researchers often partner with colleagues in other countries to do fieldwork in Cuba. For instance, Kam-biu Liu, a geographer at Louisiana State University in Baton Rouge, collaborates with Matthew Peros, an ecologist at the University of Ottawa in Canada, to acquire sediment samples from Cuba to track hurricane history in the region. Peros has to perform the isotopic analysis on the cores for Liu, who cannot use US funds for the research. None of the US-based funds that flow into the Inter-American Institute for Global Change Research — the organization based outside São Paulo, Brazil, that funds the hurricane work — can similarly be used for work in Cuba. “I have to invent constructs to fund these projects” with funds from other sources, says Holm Tiessen, the institute’s director.

Observers hope that more aggressive efforts to ease US–Cuban relations will be forthcoming as more people fill key jobs in the US Department of State.

Source of Information : Nature 23 July 2009

Wednesday, August 19, 2009

Regulators face tough flu-jab choices

Rich countries’ pandemic strategies may cause vaccine shortages elsewhere.

Imminent decisions on a strategy for H1N1 pandemic flu vaccination in the United States could leave other countries short of vital doses if the country elects not to follow World Health Organization (WHO) advice on vaccine formulation. The United States is the biggest buyer among a group of rich countries whose combined orders for vaccine against the H1N1 2009 virus could tie up most of the world’s pandemic vaccine production capacity for six months or longer, thereby depriving other countries of vaccine.

To counter this prospect, the WHO recommended on 13 July that countries use shots that contain adjuvants, chemicals that boost the immune system’s response to a vaccine. This allows smaller amounts of antigen — the molecule that stimulates the immune response — to be used in each dose, boosting the overall amount of vaccine available from existing production capacity and allowing orders to be filled more quickly.

The United States’ global responsibility to consider dose-sparing strategies is briefly alluded to in the minutes of a mid-June US National Biodefense Science Board meeting, released on 17 July: “Federal decision-making will affect not only the 300 million Americans who depend on the government to support the public health system but also people all around the world.”

The United States has certainly kept open the option of using adjuvants. It has already allocated almost US$2 billion for antigen and adjuvant to provide every American with up to two doses of vaccine. That sum includes orders of $483 million for Novartis’s MF59 adjuvant, and $215 million for GlaxoSmithKline’s AS03 adjuvant.

But although Canada and many European countries are set to use adjuvanted pandemic flu vaccines, the United States may do so only as a last resort. “All things being equal, an unadjuvanted vaccine is often just fine in terms of giving protection against influenza virus,” Anne Schuchat, director of the National Center for Immunization and Respiratory Diseases at the Centers for Disease Control and Prevention in Atlanta, Georgia, told a media briefing on 17 July.

“Adjuvant use would be contingent upon showing that it was needed or clearly beneficial,” added Jesse Goodman, acting chief scientist and deputy commissioner of the Food and Drug Administration (FDA). “But we want them on the table in case there are issues where they might be needed to protect people in this country.” If there is significant genetic drift in the virus, for example, adjuvanted vaccines are better able to handle such strain variations. And early attempts at pandemic vaccine manufacture are so far yielding two to four times less antigen than seasonal flu strains do, raising the threat that the world’s production capacity is actually much lower than hoped.

If each shot of pandemic flu vaccine contains 15 micrograms of antigen — the dose used in seasonal flu — and no adjuvant, annual global capacity stands at about 876 million doses, according to the WHO. But as virtually no one is immune to the virus, most experts say that each person will need two doses, immediately halving that capacity. Moreover, higher doses of antigen may be needed to get an adequate response, further reducing capacity. Using adjuvants would boost annual capacity — to more than two billion doses in some WHO projections.

Europe is well placed to quickly authorize adjuvanted pandemic vaccines. Since 2003, the European Medicines Agency (EMEA) has had a fast-track approval system in which manufacturers can prepare ‘mock-up dossiers’ — vaccine registration applications that use non-pandemic viral strains but for which pandemic strains can subsequently be substituted. GlaxoSmithKline and Novartis already have mock-up dossiers in place for the H5N1 avian flu virus, and plan to file H1N1 substitutions by the end of July.
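As a rough check on the capacity arithmetic a few sentences above, the short Python sketch below converts the WHO’s 876-million-dose, 15-microgram baseline into an implied annual antigen supply and asks how many doses, and how many two-dose courses, that supply yields under different assumptions. The 3.75-microgram adjuvanted dose and the one-quarter yield penalty are illustrative assumptions for this sketch, not figures from the article.

# Back-of-the-envelope sketch of the pandemic-vaccine capacity arithmetic
# described above. The 876-million-dose figure and the 15 ug unadjuvanted
# dose come from the text; the 3.75 ug adjuvanted dose and the 1/4 yield
# penalty are illustrative assumptions.

MICROGRAM = 1e-6  # grams

def doses_per_year(antigen_supply_g, antigen_per_dose_ug, yield_factor=1.0):
    """Doses obtainable from a given annual antigen supply (in grams)."""
    return yield_factor * antigen_supply_g / (antigen_per_dose_ug * MICROGRAM)

# Implied annual antigen output if 876 million 15-ug doses can be made (~13.1 kg).
baseline_doses = 876e6
antigen_supply_g = baseline_doses * 15 * MICROGRAM

for label, dose_ug, yield_factor in [
    ("unadjuvanted, 15 ug, normal yield", 15.0, 1.0),
    ("unadjuvanted, 15 ug, poor yield (1/4)", 15.0, 0.25),
    ("adjuvanted, 3.75 ug (assumed), normal yield", 3.75, 1.0),
]:
    doses = doses_per_year(antigen_supply_g, dose_ug, yield_factor)
    people = doses / 2  # most experts expect two doses per person
    print(f"{label}: {doses/1e9:.2f} bn doses -> {people/1e9:.2f} bn people covered")

The point of the sketch is simply that halving the antigen per dose (or doubling it, or losing yield) moves the number of people who can be covered by the same factor, which is why the adjuvant decision matters so much for global supply.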

Although the EMEA requires the companies to provide new clinical testing and data as they roll out their products, the product itself can be approved in five days if the agency is satisfied that the extrapolation to the new strain is valid, says Martin Harvey-Allchurch, a spokesman for the EMEA. In contrast, the United States has never licensed an adjuvanted flu vaccine, and has no fast-track system in place, although the FDA can give emergency authorization for new vaccines. The regulators are also mindful of political and public concerns about mass vaccination, given that a vaccination programme in 1976 against a new strain of swine flu caused neurological side effects in about 1 in 100,000 people, and killed 25. Modern flu vaccines, however, have a very good safety record.

The WHO’s Global Advisory Committee on Vaccine Safety says “no significant safety concern or barriers” exist to using adjuvanted pandemic H1N1 vaccines. But regulatory agencies may have to approve pandemic vaccines — both adjuvanted and non-adjuvanted — without all the data they would normally require, warns Marie-Paule Kieny, the WHO’s vaccine research director. Some preliminary clinical and safety data may be available by September, when flu cases could surge in the Northern Hemisphere, but complete data for adults are unlikely to be available until the end of December, and not until February 2010 for children. Regulators would accompany pandemic vaccine rollouts with parallel clinical trials, and, as in any mass-vaccination campaign, extensive surveillance would monitor for any adverse side effects.

Source of Information : Nature 23 July 2009

Tuesday, August 18, 2009

US Congress revives hydrogen vehicle research

US funding for hydrogen-fuelled transportation research got a boost on 17 July as the House of Representatives voted to restore $85 million to the research budget. The administration of President Barack Obama had proposed cutting the funds altogether. In May, energy secretary Steven Chu sparked an uproar when he proposed slashing current spending on research into hydrogen-based energy technology by 60%, from $168 million this fiscal year to $68 million in 2010, and cutting funding entirely for work on hydrogen vehicles. Former president George W. Bush made hydrogen transportation a cornerstone of his energy research strategy, but Chu said biofuels and batteries offer a better short-term pathway to reducing oil use and greenhouse-gas emissions.

Advocates both among scientists and on Capitol Hill have rushed to defend the hydrogen programme in recent weeks. It seems to have worked: the House included a total of $153 million for hydrogen-energy research in its version of the 2010 energy and water spending bill.

In the Senate, appropriators have provided $190 million for hydrogen research — a 13% increase over the base budget for 2009 — although the full Senate has yet to take up the legislation. A final bill is unlikely to come for another few months, but some level of funding for hydrogen vehicle research is likely to survive. Also last week, a National Research Council (NRC) panel weighed in on the debate with a preliminary report on the FreedomCAR and Fuel Partnership, a research consortium involving industry and government. The NRC committee endorsed the general thrust of the transportation research agenda of the Department of Energy (DOE) but said it is concerned about efforts to scale back work on hydrogen-fuelled transport. Citing the long-term potential of hydrogen fuel cells, the panel said it is not yet clear which vehicle technologies will prevail in the market, and that long-term, high-risk, high-payoff research should not be abandoned.

The Alliance of Small Island States has proposed a mechanism that is similar to the proposal from the Munich Climate Insurance Initiative (MCII) but with one key difference: its members want outright compensation, rather than just insurance, for long-term problems associated with issues such as ocean acidification and rising sea levels.


Responsible action
The word ‘compensation’ raises concerns in industrialized nations, which don’t want to sign a blank cheque, but the alliance isn’t backing down. Many see the language as a warning to industrialized nations about the costs of inaction. “If you are one of these low-lying atolls in the Pacific, would you say ‘thank you very much’ to a deal that submerges your island over time?” asks M. J. Mace, a negotiator for the Federated States of Micronesia. “If there’s a deal, it’s got to address impacts, one way or another.”

One potential compromise under discussion would be to include an insurance mechanism in whatever deal is struck and then acknowledge the long-term compensation issue in a more symbolic manner. To date, the best model for large-scale multilateral insurance may be the Caribbean Catastrophe Risk Insurance Facility. Launched in 2007 with $47 million in funding from several international donors, the index-insurance pool now provides hurricane and earthquake insurance to 16 Caribbean nations. Much as the Ethiopian policy is tied to rainfall, the Caribbean policies are based on observations of wind speed and earthquake intensity.

That saves money on site audits and speeds up the process, providing money immediately after a crisis when it is needed most. “It gives them a bit of breathing room,” says Simon Young, the Washington DC-based head of the non-profit firm Caribbean Risk Managers, which manages the programme.

So far, the Caribbean programme has paid out nearly $1 million for an earthquake that affected St Lucia and Dominica in 2007, and $6.3 million to the Turks and Caicos Islands after Hurricane Ike last year. Young says that their models try to take into account factors such as building codes and other preventative action, which should lower premiums as well as lessen damage during a storm.
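The speed of these payouts comes from the index (parametric) design: the contract pays on an observed hazard measurement, such as peak wind speed or rainfall, rather than on assessed damage, so no site audit is needed before money flows. The Python sketch below is a minimal illustration of such a trigger; the attachment point, exhaustion point and coverage limit are hypothetical values chosen for the example, not terms of the Caribbean policies.

# Minimal sketch of a parametric (index-based) payout rule of the kind used
# by index-insurance pools: the payout depends only on an observed hazard
# index (here, peak wind speed), not on assessed damage, so no site audit is
# needed. All thresholds and amounts below are hypothetical illustrations.

def parametric_payout(index_value, attachment, exhaustion, coverage_limit):
    """Linear payout between an attachment point and an exhaustion point.

    Below `attachment` nothing is paid; at or above `exhaustion` the full
    `coverage_limit` is paid; in between the payout scales linearly.
    """
    if index_value <= attachment:
        return 0.0
    if index_value >= exhaustion:
        return coverage_limit
    fraction = (index_value - attachment) / (exhaustion - attachment)
    return fraction * coverage_limit

# Example: a hypothetical hurricane policy triggered on peak wind speed (km/h).
for wind in (110, 150, 200, 260):
    payout = parametric_payout(wind, attachment=120, exhaustion=250,
                               coverage_limit=6.3e6)
    print(f"{wind} km/h -> payout ${payout:,.0f}")

Because the trigger is purely observational, the trade-off is basis risk: a country can suffer real damage from an event that never crosses the index threshold, which is one reason premiums and thresholds are calibrated against factors such as building codes.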

Insurance advocates acknowledge that spreading insurance tools around the globe would benefit developing nations regardless of global warming, as illustrated by the Caribbean initiative. But fears about increased droughts, floods and more severe weather that could be associated with global warming have added momentum. “Climate change is more or less a new impulse for promoting this,” says Thomas Loster, chairman of the Munich Re Foundation, a non-profit philanthropic branch of the German reinsurance giant. “But of course we should have done it 20 years ago.”

Source of Information : Nature 23 July 2009

Monday, August 17, 2009

A metabolic switch to memory

Two therapeutic drugs have been found to enhance memory in immune cells called T cells, apparently by altering cellular metabolism. Are changes in T-cell metabolism the key to generating long-lived immune memory?

T lymphocytes respond to an acute infection with a massive burst of proliferation, generating effector T cells that counteract the pathogen. When the infection is cleared, most of these effector T cells die (the contraction phase of the immune response), but a minority lives on and changes into resting memory T cells that rapidly respond to future encounters with the same pathogen1. In this issue, Pearce et al.2 and Araki et al.3 report that two drugs, one used to control diabetes and the other to prevent organ-transplant rejection, markedly enhance memory T-cell development. Through their actions on major metabolic pathways in the cell, these drugs seem to promote the switch from growth to quiescent survival.

While investigating the role of a protein called TRAF6, which is a negative regulator of T-cell signalling, Pearce et al.2 noted that, although T cells in which TRAF6 was knocked out mounted a normal effector response to a pathogen, they left behind few if any memory cells. The authors performed a microarray analysis comparing the genes expressed by normal and TRAF6-deficient T cells at the time they change from effector to memory cells. In a eureka moment, they realized that TRAF6-knockout T cells display defects in the expression of genes involved in several metabolic pathways, including the fatty-acid oxidation pathway, implying that a metabolic switch in T cells might be affecting memory-cell generation.

Pearce et al. followed up on this clue, and showed that the inability of TRAF6-deficient T cells to spawn long-lived memory T cells could be reversed by treatment with either the antidiabetes drug metformin or the immunosuppressant rapamycin. Both drugs are known to affect cellular metabolism, and treatment with either drug not only restored the memory T-cell response in TRAF6-deficient cells, but also greatly enhanced memory T-cell formation in normal cells, resulting in a superior recall response to a second infection.

In an independent study, Araki et al.3 examined the effect of treating mice with rapamycin during the various phases of a T-cell response to viral infection. Giving rapamycin during the first 8 days after infection (the proliferative phase) markedly increased the number of memory T cells 5 weeks later. This was due to an enhanced commitment of effector T cells to become memory precursor cells. When the authors administered rapamycin during the contraction phase of the T-cell response (days 8–35 after infection), the number of memory T cells did not increase, but there was a speeding up of the conversion of effector T cells to long-lived memory T cells with superior recall ability.

Rapamycin inhibits mTOR (‘mammalian target of rapamycin’), a protein-kinase enzyme found in at least two multiprotein complexes — mTORC1, which is rapamycin sensitive, and mTORC2, which is largely resistant to inhibition by rapamycin4. To pinpoint the cellular target of rapamycin in their studies, Araki et al.3 used RNA-interference knockdown techniques to demonstrate that the mTORC1 complex, acting intrinsically in T cells, regulates memory-cell differentiation.
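One way to see why the timing of treatment matters is a toy two-compartment calculation, sketched below in Python. It is not the authors’ model: it simply assumes an eight-day expansion, a fraction of effectors committed as memory precursors, and a daily precursor-to-memory conversion rate, with rapamycin during days 0–8 modelled as raising the committed fraction and rapamycin during days 8–35 as raising the conversion rate. Every number is a hypothetical illustration.

# Toy two-compartment sketch (not the authors' model) of the timing effect
# described in the text: effector T cells expand for ~8 days, a fraction is
# committed as memory precursors, and during contraction those precursors
# convert into memory cells. Rapamycin in days 0-8 is modelled as a larger
# committed fraction; rapamycin in days 8-35 as a faster conversion rate.
# All rates are hypothetical illustrations.

def memory_cells(day, boost_commitment=False, boost_conversion=False):
    peak_effectors = 2.0 ** 8                        # ~8 daily doublings from 1 cell
    commitment = 0.15 if boost_commitment else 0.05  # fraction kept as precursors
    convert = 0.10 if boost_conversion else 0.03     # daily conversion rate
    precursors = commitment * peak_effectors
    memory = 0.0
    for _ in range(8, day):                          # contraction phase
        memory += precursors * convert
        precursors *= 1.0 - convert
    return memory

for label, kwargs in [("no drug", {}),
                      ("rapamycin days 0-8", {"boost_commitment": True}),
                      ("rapamycin days 8-35", {"boost_conversion": True})]:
    print(f"{label:20s} memory at day 35: {memory_cells(35, **kwargs):5.1f}  "
          f"day 60: {memory_cells(60, **kwargs):5.1f}")

Under these made-up rates, early treatment ends with more memory cells overall, whereas late treatment reaches roughly the same final number sooner — the qualitative pattern reported for the two dosing windows.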

So both rapamycin and metformin seem to enhance T-cell memory formation. But do both drugs affect the same pathway(s), are the pathways interconnected, or do two different mechanisms lead to a similar outcome? Metformin activates AMPK, an enzyme that can inhibit mTOR activity in several ways, including directly targeting raptor, a component of rapamycin-sensitive mTORC1 (ref. 5). Both AMPK and mTOR sense and control the energy status of a cell (ATP:AMP ratio) and regulate key aspects of cell growth and, as part of this, glucose metabolism.

In a quiescent cell, most energy (in the form of ATP) is generated in the mitochondria through oxidative phosphorylation, including the oxidation of fatty acids and amino acids — catabolic metabolism. On activation, T cells massively increase their glucose uptake and shift from this catabolic mode to producing ATP by glycolysis (anabolic metabolism). mTOR is activated by signalling molecules, growth factors and antigen-induced T-cell-receptor signalling, and its activity enables a cell to increase glycolysis and ATP accumulation, which opposes AMPK activation6. Although some of the processes involved in the switch from catabolic to anabolic metabolism are fairly well understood, the reversal from an anabolic to a catabolic state is not as well characterized. One could speculate that rapamycin and metformin facilitate the switch from a glucose-dependent anabolic state (effector T cell) to a catabolic state of metabolism (memory T cell) by blocking mTORC1 activity. But how a change in the metabolic signature of a T cell could enhance memory T-cell numbers and function is unknown.

How can rapamycin, a drug known for its immunosuppressive effects, enhance the function and formation of T-cell memory? The answer may lie in dosage and, more importantly, timing. Whereas treatment with a low dose of rapamycin during the first 8 days after T-cell activation enhanced the numbers and function of memory T cells, a higher dose, closer to therapeutic levels, hampered the T-cell response, as would be expected of an immunosuppressant3. Interestingly, both papers2,3 clearly show that the higher dose of rapamycin enhanced memory T-cell function and recall ability if administered after day 8. At this point, the vigorous cell proliferation that is characteristic of the effector stage has ceased and cells begin to enter the more quiescent memory state. Recent data4 suggest that mTOR can form different complexes (aside from mTORC1 and mTORC2) depending on the phase of the cell cycle, and little is known about their interaction with rapamycin. In addition, as the metabolic signature of a cell changes along with its activation state, rapamycin might differentially affect a cell depending on its cell cycle and metabolic state.

A long-standing paradigm in immunology proposes that, after the peak of the proliferative response, the programmed cell death of effector T cells is caused by a lack of growth and survival factors, which also affect cell metabolism. However, recent experiments7 indicate that, in a physiological setting, effector T-cell viability and conversion to memory T cells are not regulated by competition for growth and survival factors. Thus, it is more likely that the metabolic switch is either programmed early after T-cell activation or occurs as a secondary effect after a quiescent stage has been entered.

Both Pearce et al.2 and Araki et al.3 establish a crucial role for mTOR-mediated metabolic changes in enhancing T-cell memory. Does changing the metabolism of T cells through manipulation of mTOR hold promise for improving future vaccination strategies? mTOR is involved in regulating a plethora of functions in many cell types, and rapamycin administration is associated with many side effects. Thus, a more targeted approach will be required to harness this memory-enhancing effect. Identifying the downstream signalling pathways that lead to enhanced T-cell memory on inhibition of mTOR complexes will be a first step in that direction.



The metabolic state of T-cell memory. Naive T cells that have not been exposed to antigen are quiescent, but undergo metabolic conversion (catabolic metabolism to anabolic metabolism) on stimulation with an antigen such as a pathogen. This switch allows effector T cells to use mTORC1-dependent glycolytic energy production (anabolic metabolism) to sustain rapid proliferation and biosynthetic needs. At the end of the effector stage, T cells either die by programmed cell death or enter the quiescent memory stage and switch back to catabolic metabolism. Pearce et al.2 and Araki et al.3 show that rapamycin and metformin can enhance memory T-cell formation by inhibiting the protein complex mTORC1, thus leading to changes in cell metabolism.

Sunday, August 16, 2009

Climatic plant power

Yves Godderis and Yannick Donnadieu

Levels of atmospheric carbon dioxide constrain vegetation types and thus also non-biological uptake during rock weathering. That’s the reasoning used to explain why CO2 levels did not fall below a certain point in the Miocene.

The world is currently at risk of overheating in response to all the carbon dioxide being pumped into the atmosphere from the use of fossil fuels: the current atmospheric concentration of CO2 is about 385 parts per million (p.p.m.), compared with a ‘pre-industrial’ level of around 280 p.p.m. But overheating is an atypical menace in the recent history of the Earth. Over most of the past 24 million years, it was the possibility of cooling that posed the main threat to life. Cooling, however, did not reach the levels of severity that might have been expected, and on page 85 of this issue Pagani et al.1 put forward a thought-provoking case as to why that was so.

Since the end of the Eocene, around 40 million years ago, Earth’s climate has been naturally getting colder. In temporal terms, the cooling has sometimes occurred in discrete steps, sometimes as a long-term trend2.

Over the same interval, levels of atmospheric CO2 have fallen from around 1,400 p.p.m. at the end of the Eocene to possibly as low as 200 p.p.m. during the Miocene3 — the geological period between around 24 million and 5 million years ago. This long-term history of atmospheric CO2 is the result of the interplay between several processes. The degassing of the Earth through magmatic activity (for instance volcanic eruptions) is the main source of CO2, and the dissolution of continental rocks captures atmospheric CO2, which is eventually stored as marine carbonate sediments4. The efficiency of the dissolution process — chemical weathering — is heavily dependent on climate, but also depends on vegetation and physical erosion, both of which boost CO2 uptake by rock weathering. In particular, land plants promote rock dissolution through the mechanical action of roots, and by acidifying the water in contact with rocks5. Acidification occurs through the release of organic acids and the large-scale accumulation of CO2 in soil through root respiration. Removing plants, particularly trees, may strongly decrease the dissolution rate of rocks and the ability of this process to consume atmospheric CO2.

In their paper, Pagani and colleagues1 consider the potential role of the rise of many mountainous regions (orogens) over the past 40 million years, especially in the warm and humid low-latitude areas. In these mountain ranges, physical erosion would break down rocks and expose them to intense chemical weathering. The uptake of atmospheric CO2 would consequently increase, as indicated by measured CO2 levels, which could have declined to the lowest values since multicellular life evolved on Earth some 500 million years ago. But what could stop this CO2 uptake pump?

Degassing through magmatic activity was probably declining at the same time (at best it remained constant), and tectonic activity accelerated mountain uplift over the past 24 million years. By this reasoning, CO2 levels should have plunged to below 200 p.p.m., with ice ultimately covering large surfaces of the Earth as a consequence. But that was not the case. Earth even experienced a warm spell between 18 million and 14 million years ago2.

Pagani et al.1 propose an exciting hypothesis to explain why, 24 million years ago, CO2 might have levelled off at about 200 p.p.m., and then stuck there. They suggest that, when CO2 levels became too low, forests became starved and were progressively replaced by grasslands, particularly in the low-latitude orogens. Grasslands exert a much less vigorous effect on rocks than do trees. In consequence, runs the thinking, CO2 consumption due to weathering declined because of changing terrestrial ecosystems, which in turn stabilized atmospheric CO2 at around 200 p.p.m. Overall, the authors’ model provides an elegant twist on several ideas about the Earth system that emphasize the role of vegetation in dynamically regulating and fixing the lower limit of atmospheric CO2. But it also raises contentious issues.
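The stabilizing feedback the authors invoke can be caricatured with a minimal box model, sketched below in Python. It is not Pagani and colleagues’ model: it assumes a constant degassing source, a weathering sink that scales with both CO2 and a vegetation factor, and a vegetation factor that collapses towards a grassland value below an assumed forest-starvation level of about 200 p.p.m. All functional forms and constants are assumptions chosen only to show the qualitative behaviour: successive cuts in degassing produce ever smaller drops in the steady-state CO2 as the threshold is approached.

# Minimal box-model sketch (not Pagani and colleagues' model) of the proposed
# stabilizing feedback: weathering CO2 consumption scales with both CO2 and
# vegetation vigour, and vegetation vigour collapses as forests starve below
# a CO2 threshold. All functional forms and constants are assumptions.
import math

def vegetation_factor(co2_ppm, starvation=200.0, width=50.0):
    """Drops smoothly from ~1 (healthy forest) towards ~0.3 (grassland)
    around the assumed forest-starvation level."""
    forest = 1.0 / (1.0 + math.exp(-(co2_ppm - starvation) / width))
    return 0.3 + 0.7 * forest

def weathering_flux(co2_ppm, k=1.0, ref=280.0):
    """CO2 consumption by weathering, increasing with CO2 and with vegetation."""
    return k * (co2_ppm / ref) ** 0.5 * vegetation_factor(co2_ppm)

def steady_state_co2(degassing, lo=100.0, hi=2000.0):
    """Bisect for the CO2 level at which weathering balances degassing."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if weathering_flux(mid) < degassing:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# As the degassing source weakens, the steady state falls by less and less,
# flattening out near the assumed forest-starvation level.
for degassing in (1.2, 1.0, 0.8, 0.6):
    print(f"degassing = {degassing:.1f}  ->  steady-state CO2 ~ "
          f"{steady_state_co2(degassing):.0f} p.p.m.")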

First, in the model1, forest starvation is triggered by the low level of atmospheric CO2, and by elevated temperature. But do proxy estimates of conditions at that time confirm this paradoxical combination? Proxy measurements of CO2 levels include marine carbonate boron isotopes6, carbon-isotope values of alkenones produced by oceanic algae3 and the density of stomata — a measure of gas exchange — in fossil leaves7. Unlike the geochemical proxy records3,6, the more recent estimates based on the stomatal index7 depict a highly variable CO2 trend over the Miocene (in good agreement with climatic fluctuations), rather than a CO2 level stuck at 200 p.p.m. Furthermore, the estimates show that CO2 concentrations are above the forest-starvation level most of the time, oscillating between 300 and 500 p.p.m.

Second, in their model Pagani et al. assume that rock weathering generated by mountain uplift would have continuously consumed atmospheric CO2 until it reached the forest-starvation level. But there is evidence that the extra consumption of CO2 due to the Himalayan uplift, the most important orogeny of the recent past, occurred mainly through the burial of organic matter in the Bengal fan, and not through rock weathering8,9. In addition, the tectonic history of the past 24 million years is still subject to debate, and the timing of the uplift of the main mountain ranges, such as the Himalaya and Andes, is far from fully constrained10.

Finally, the link between weathering and continental vegetation is well recognized. But it is complex. Apart from acidifying water and exerting mechanical effects, land plants also control the hydrology of soils. In humid tropical environments, about 70% of the rainfall is absorbed by land plants and then evaporates through their leaves. This effect should inhibit weathering reactions by limiting the amount of water available for rock dissolution. Also, in equatorial uplifted areas, intense erosion occurs through landslides triggered by heavy rainfall11. These landslides bring fresh rock material into contact with water by removing the soil mantle, promoting weathering and CO2 consumption. The role of vegetation cover in these systems might not be as significant as Pagani et al. suggest.

The authors themselves acknowledge some of these limitations, and all in all have put forward a bold and provocative hypothesis. But accounting for all of the processes and constraints involved is probably beyond the capabilities of the first-order global models that Pagani et al. used, and more complex, process-based modelling12,13 will be required to test their conclusions. Whatever the outcome, that should prove to be a fruitful exercise for carbon-cycle modellers intent on understanding the processes that drove climate and CO2 oscillations during the Miocene.

Source of Information : Nature 02 July 2009

Friday, August 14, 2009

A cellular view of regeneration

How the salamander regrows an entire limb after injury has flummoxed the wisest of scientists. A closer look at the cells involved in limb regeneration shows that remembering past origins may be crucial for this feat.

When a salamander loses an appendage, such as a limb, a remarkable series of events unfolds: a clump of cells forms at the site of the injury, and this deceptively simple structure, known as a blastema, regenerates the missing body parts. Skin, muscle, bone, blood vessels and neurons all arise from this collection of nondescript cells through patterning and self-assembly. So complete is the repair that it is difficult — if not impossible — to tell that the animal has been injured. It has long been believed that the cells responsible for this repair process are pluripotent, capable of giving rise to all tissue types. On page 60 of this issue, Kragl et al.1 describe a novel cell-tracking technique that reveals some surprising findings about the origins and fates of cells involved in salamander limb regeneration.

Far from being a mere curiosity confined to a few species, regeneration is a phenomenon that is broadly but unevenly distributed among the animal and plant kingdoms. Body-part regeneration is found in most animal phyla thus far examined, indicating that this biological attribute may actually be an ancient evolutionary invention2.

Yet how new body parts arise from old ones has been a riddle within a mystery dating back to antiquity. Regeneration in animals and plants has astounded successive generations of great thinkers, including Aristotle, Lazzaro Spallanzani, Voltaire, Charles Darwin and the biologist Thomas Hunt Morgan. Persistent efforts by current researchers in the field3–7 have begun to reveal crucial insights into this biological phenomenon. Nevertheless, much remains to be understood.

Key to elucidating the mechanism of regeneration is defining the origins, lineages and fates of the cells that mount a regenerative response after injury. Because the cells that collect in the blastema look identical, it has long been thought that they have dedifferentiated from tissues near to the plane of amputation into a single population of pluripotent cells5. However, it is quite possible that blastema cells involved in regeneration have instead entered a migratory or proliferative state without changing their tissue identity or their lineage potential. Without robust methods to track the fate of cells from the time of amputation to their differentiation during regeneration, definitive conclusions about the nature of the regenerative cells are difficult to reach.

Kragl et al.1 investigate limb regeneration in the axolotl (Ambystoma mexicanum), a salamander endemic to Mexico and a favourite model system for studying vertebrate development. The authors use transgenic technology to introduce DNA coding for a green fluorescent protein molecule into the genome of axolotl cells. Once marked out by its permanent, glowing, green fluorescent ‘tattoo’, a specific tissue type (for instance, muscle or skin) can be transplanted into a host animal and followed in time and space after the host tissues have been amputated. Kragl et al. find that the cells that regenerate the amputated body parts in the axolotl are not pluripotent. Instead, they respect their developmental origins and restrict their differentiation potential accordingly.

For instance, labelled muscle cells at the site of amputation differentiate only into muscle and do not differentiate into other tissue types in the regenerating limb. The authors observed similar lineage restriction for epidermal cells, cartilage cells and Schwann cells (a type of cell that ensheaths nerve axons). Dermal cells are the sole exception — they contribute cells to both the dermis and the skeleton of the regenerating limb. These data strongly suggest that cells tasked with regeneration retain a memory of their previous identity and, in some cases, of their position in the limb.

The use of transgenic methodologies to map cell behaviour and population dynamics during vertebrate regeneration1 is without doubt a significant technical accomplishment that adds a new dimension to our understanding of regeneration. Like all important work, it also leaves us with questions that will probably occupy researchers for the next few years. One question concerns the contribution of connective-tissue fibroblast cells to the regenerating limb. Fibroblasts are relatively abundant in the limb and are known to contribute significantly to regenerative activities in salamanders. Are these cells lineage restricted, or can they give rise to other types of tissue? Because there are not yet reliable markers for identifying connective-tissue fibroblasts, Kragl and colleagues could not definitively answer this particularly interesting question.

Is lineage restriction in regenerating cells unique to the axolotl, or can these principles be applied to other commonly studied species of salamander? The animals used for these studies were juveniles, with a skeletal system made up of cartilage rather than bone. Most other studies in salamanders have been done in adult newts, which have ossified skeletons. Also, there are well-documented differences between the regenerative properties of newts and axolotls. For instance, whereas newts can regenerate the lens of the eye after its removal, the axolotl cannot9. These and other issues will have to be resolved before the biological significance of Kragl and colleagues’ findings1 can be fully appreciated.



Cell-tracking techniques used by Kragl and colleagues1. Tissue from axolotls expressing a transgene encoding the green fluorescent protein (GFP) molecule is transplanted into juvenile nontransgenic axolotls (upper panel). The transplanted tissue is included in the amputation site, and the fluorescently labelled cells are followed during limb regeneration. Alternatively, specific embryonic tissue is transplanted from GFP-labelled axolotl embryos into non-transgenic embryos (lower panel), which mature into juvenile axolotls with a tissue-specific fluorescent marker. These glowing cells can also be tracked after amputation.



Lineages are respected during axolotl limb regeneration. Kragl et al.1 find that, with the exception of cells in the dermis, the dedifferentiated cells contributing to limb regeneration largely remain lineage restricted. Because the experiments were carried out in juveniles, in which the bones have not fully ossified, the fate of bone cells after amputation is unknown. Similarly, owing to the paucity of markers, the fate of connective-tissue fibroblasts contributing to limb regeneration remains undefined. The blue arrow indicates that cartilage cells can sometimes contribute to the dermis.