Tuesday, January 31, 2012

10 Unsolved Mysteries -> Can Computers Be Made Out of Carbon?

Computer chips made out of graphene—a web of carbon atoms—could potentially be faster and more powerful than silicon-based ones. The discovery of graphene garnered the 2010 Nobel Prize in Physics, but the success of this and other forms of carbon nanotechnology might ultimately depend on chemists’ ability to create structures with atomic precision.

The discovery of buckyballs—hollow, cagelike molecules made entirely of carbon atoms—in 1985 was the start of something literally much bigger. Six years later tubes of carbon atoms arranged in a chicken wire–shaped, hexagonal pattern like that in the carbon sheets of graphite made their debut. Being hollow, extremely strong and stiff, and electrically conducting, these carbon nanotubes promised applications ranging from high-strength carbon composites to tiny wires and electronic devices, miniature molecular capsules, and water-filtration membranes.

For all their promise, carbon nanotubes have not resulted in a lot of commercial applications. For instance, researchers have not been able to solve the problem of how to connect tubes into complicated electronic circuits. More recently, graphite has moved to center stage because of the discovery that it can be separated into individual chicken wire–like sheets, called graphene, that could supply the fabric for ultraminiaturized, cheap and robust electronic circuitry. The hope is that the computer industry can use narrow ribbons and networks of graphene, made to measure with atomic precision, to build chips with better performance than silicon-based ones.

“Graphene can be patterned so that the interconnect and placement problems of carbon nanotubes are overcome,” says carbon specialist Walt de Heer of the Georgia Institute of Technology. Methods such as etching, however, are too crude for patterning graphene circuits down to the single atom, de Heer points out, and as a result, he fears that graphene technology currently owes more to hype than hard science. Using the techniques of organic chemistry to build up graphene circuits from the bottom up—linking together “polyaromatic” molecules containing several hexagonal carbon rings, like little fragments of a graphene sheet—might be the key to such precise atomic-scale engineering and thus to unlocking the future of graphene electronics.

Source of Information : Scientific American Magazine

Saturday, January 28, 2012

10 Unsolved Mysteries -> How Many Elements Exist?

The periodic tables that adorn the walls of classrooms have to be constantly revised, because the number of elements keeps growing. Using particle accelerators to crash atomic nuclei together, scientists can create new “superheavy” elements, which have more protons and neutrons in their nuclei than do the 92 or so elements found in nature. These engorged nuclei are not very stable—they decay radioactively, often within a tiny fraction of a second. But while they exist, the new synthetic elements such as seaborgium (element 106) and hassium (element 108) are like any other insofar as they have well-defined chemical properties. In dazzling experiments, researchers have investigated some of those properties in a handful of elusive seaborgium and hassium atoms during the brief instants before they fell apart.

Such studies probe not just the physical but also the conceptual limits of the periodic table: Do superheavy elements continue to display the trends and regularities in chemical behavior that make the table periodic in the first place? The answer is that some do, and some do not. In particular, such massive nuclei hold on to the atoms’ innermost electrons so tightly that the electrons move at close to the speed of light. Then the effects of special relativity increase the electrons’ mass and may play havoc with the quantum energy states on which their chemistry— and thus the table’s periodicity—depends.

Because nuclei are thought to be stabilized by particular “magic numbers” of protons and neutrons, some researchers hope to find what they call the island of stability, a region a little beyond the current capabilities of element synthesis in which superheavies live longer. Yet is there any fundamental limit to their size? A simple calculation suggests that relativity prohibits electrons from being bound to nuclei of more than 137 protons. More sophisticated calculations defy that limit. “The periodic system will not end at 137; in fact, it will never end,” insists nuclear physicist Walter Greiner of the Johann Wolfgang Goethe University Frankfurt in Germany. The experimental test of that claim remains a long way off.
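
A minimal sketch of that simple calculation, under the standard textbook assumption of a hydrogen-like atom with a point nucleus of charge $Z$, uses the relativistic (Dirac) energy of the innermost electron:

$$E_{1s} = m_e c^2 \sqrt{1 - (Z\alpha)^2},$$

where $\alpha \approx 1/137$ is the fine-structure constant. For $Z > 1/\alpha \approx 137$ the square root becomes imaginary, which is where the naive limit of 137 protons comes from; the more sophisticated calculations that defy it drop the point-nucleus idealization.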

Source of Information : Scientific American Magazine

Thursday, January 26, 2012

10 Unsolved Mysteries -> How Does the Brain Think and Form Memories?

The brain is a chemical computer. Interactions between the neurons that form its circuitry are mediated by molecules: specifically, neurotransmitters that pass across the synapses, the contact points where one neural cell wires up to another. This chemistry of the mind is perhaps at its most impressive in the operation of memory, in which abstract principles and concepts—a telephone number, say, or an emotional association—are imprinted in states of the neural network by sustained chemical signals. How does chemistry create a memory that is both persistent and dynamic, as well as able to recall, revise and forget?

We now know parts of the answer. A cascade of biochemical processes, leading to a change in the amounts of neurotransmitter molecules in the synapse, triggers learning for habitual reflexes. But even this simple aspect of learning has short- and long-term stages. Meanwhile more complex so-called declarative memory (of people, places, and so on) has a different mechanism and location in the brain, involving the activation of a protein called the NMDA receptor on certain neurons. Blocking this receptor with drugs prevents the retention of many types of declarative memory.

Our everyday declarative memories are often encoded through a process called long-term potentiation, which involves NMDA receptors and is accompanied by an enlargement of the neuronal region that forms a synapse. As the synapse grows, so does the “strength” of its connection with neighbors—the voltage induced at the synaptic junction by arriving nerve impulses. The biochemistry of this process has been clarified in the past several years. It involves the formation of filaments within the neuron made from the protein actin—part of the basic scaffolding of the cell and the material that determines its size and shape. But that process can be undone during a short period before the change is consolidated if biochemical agents prevent the newly formed filaments from stabilizing.

Once encoded, long-term memory for both simple and complex learning is actively maintained by switching on genes that give rise to particular proteins. It now appears that this process can involve a type of molecule called a prion. Prions are proteins that can switch between two different conformations. One of the conformations is soluble, whereas the other is insoluble and acts as a catalyst to switch other molecules like it to the insoluble state, leading these molecules to aggregate. Prions were first discovered for their role in neurodegenerative conditions such as mad cow disease, but prion mechanisms have now been found to have beneficial functions, too: the formation of a prion aggregate marks a particular synapse to retain a memory.

There are still big gaps in the story of how memory works, many of which await filling with the chemical details. How, for example, is memory recalled once it has been stored? “This is a deep problem whose analysis is just beginning,” says neuroscientist and Nobel laureate Eric Kandel of Columbia University.

Coming to grips with the chemistry of memory offers the enticing and controversial prospect of pharmacological enhancement. Some memory-boosting substances are already known, including sex hormones and synthetic chemicals that act on receptors for nicotine, glutamate, serotonin and other neurotransmitters. In fact, according to neurobiologist Gary Lynch of the University of California, Irvine, the complex sequence of steps leading to long-term learning and memory means that there are many potential targets for such memory drugs.

Source of Information : Scientific American Magazine

Tuesday, January 24, 2012

10 Unsolved Mysteries -> How Does the Environment Influence Our Genes?

The old idea of biology was that who you are is a matter of which genes you have. It is now clear that an equally important issue is which genes you use. Like all of biology, this issue has chemistry at its core.

The cells of the early embryo can develop into any tissue type. But as the embryo grows, these so-called pluripotent stem cells differentiate, acquiring specific roles (such as blood, muscle or nerve cells) that remain fixed in their progeny. The formation of the human body is a matter of chemically modifying the stem cells’ chromosomes in ways that alter the arrays of genes that are turned on and off.

One of the revolutionary discoveries in research on cloning and stem cells, however, is that this modification is reversible and can be influenced by the body’s experiences. Cells do not permanently disable genes during differentiation, retaining only those they need in a “ready to work” state. Rather the genes that get switched off retain a latent ability to work—to give rise to the proteins they encode— and can be reactivated, for instance, by exposure to certain chemicals taken in from the environment.

What is particularly exciting and challenging for chemists is that the control of gene activity seems to involve chemical events happening at size scales greater than those of atoms and molecules—at the so-called mesoscale—with large molecular groups and assemblies interacting. Chromatin, the mixture of DNA and proteins that makes up chromosomes, has a hierarchical structure. The double helix is wound around cylindrical particles made from proteins called histones, and this string of beads is then bundled up into higher-order structures that are poorly understood. Cells exercise great control over this packing—how and where a gene is packed into chromatin may determine whether it is active or not.

Cells have specialized enzymes for reshaping chromatin structure, and these enzymes have a central role in cell differentiation. Chromatin in embryonic stem cells seems to have a much looser, open structure: as some genes fall inactive, the chromatin becomes increasingly lumpy and organized. “The chromatin seems to fix and maintain or stabilize the cells’ state,” says pathologist Bradley Bernstein of Massachusetts General Hospital.

What is more, such chromatin sculpting is accompanied by chemical modification of both DNA and histones. Small molecules attached to them act as labels that tell the cellular machinery to silence genes or, conversely, free them for action. This labeling is called “epigenetic” because it does not alter the information carried by the genes themselves.

The question of the extent to which mature cells can be returned to pluripotency— whether they are as good as true stem cells, which is a vital issue for their use in regenerative medicine—seems to hinge largely on how far the epigenetic marking can be reset.

It is now clear that beyond the genetic code that spells out many of the cells’ key instructions, cells speak in an entirely separate chemical language of genetics— that of epigenetics. “People can have a genetic predisposition to many diseases, including cancer, but whether or not the disease manifests itself will often depend on environmental factors operating through these epigenetic pathways,” says geneticist Bryan Turner of the University of Birmingham in England.

Source of Information : Scientific American Magazine

Sunday, January 22, 2012

10 Unsolved Mysteries -> How Do Molecules Form?

Molecular structures may be a mainstay of high school science classes, but the familiar picture of balls and sticks representing atoms and the bonds among them is largely a conventional fiction. The trouble is that scientists disagree on what a more accurate representation of molecules should look like. In the 1920s physicists Walter Heitler and Fritz London showed how to describe a chemical bond using the equations of then nascent quantum theory, and the great American chemist Linus Pauling proposed that bonds form when the electron orbitals of different atoms overlap in space. A competing theory by Robert Mulliken and Friedrich Hund suggested that bonds are the result of atomic orbitals merging into “molecular orbitals” that extend over more than one atom. Theoretical chemistry seemed about to become a branch of physics.

Nearly 100 years later the molecular orbital picture has become the most common one, but there is still no consensus among chemists that it is always the best way to look at molecules. The reason is that this model of molecules and all others are based on simplifying assumptions and are thus approximate, partial descriptions. In reality, a molecule is a bunch of atomic nuclei in a cloud of electrons, with opposing electrostatic forces fighting a constant tug-of-war with one another, and all components constantly moving and reshuffling. Existing models of the molecule usually try to crystallize such a dynamic entity into a static one and may capture some of its salient properties but neglect others.

Quantum theory is unable to supply a unique definition of chemical bonds that accords with the intuition of chemists whose daily business is to make and break them. There are now many ways of describing molecules as atoms joined by bonds. According to quantum chemist Dominik Marx of Ruhr University Bochum in Germany, pretty much all such descriptions “are useful in some cases but fail in others.”

Computer simulations can now calculate the structures and properties of molecules from quantum first principles with great accuracy—as long as the number of electrons is relatively small. “Computational chemistry can be pushed to the level of utmost realism and complexity,” Marx says. As a result, computer calculations can increasingly be regarded as a kind of virtual experiment that predicts the course of a reaction. Once the reaction to be simulated involves more than a few dozen electrons, however, the calculations quickly begin to overwhelm even the most powerful supercomputer, so the challenge will be to see whether the simulations can scale up—whether, for example, complicated biomolecular processes in the cell or sophisticated materials can be modeled this way.
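
To make the idea of such a virtual experiment concrete, here is a minimal sketch of a first-principles calculation using the open-source PySCF package; the choice of package, molecule, geometry and basis set are illustrative assumptions, not anything specified in the article.

# Minimal first-principles "virtual experiment": the Hartree-Fock
# ground-state energy of a water molecule (10 electrons, still
# "relatively small"). Assumes the open-source PySCF package.
from pyscf import gto, scf

mol = gto.M(
    atom="""O  0.000  0.000  0.000
            H  0.757  0.586  0.000
            H -0.757  0.586  0.000""",   # nuclear positions in angstroms
    basis="sto-3g",                      # a small, fast basis set
)

mf = scf.RHF(mol)     # restricted Hartree-Fock, a mean-field approximation
energy = mf.kernel()  # iterate the self-consistent-field equations
print(f"HF ground-state energy: {energy:.6f} Hartree")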

Source of Information : Scientific American Magazine

Friday, January 20, 2012

10 Unsolved Mysteries -> How Did Life Begin?

The moment when the first living beings arose from inanimate matter almost four billion years ago is still shrouded in mystery. How did relatively simple molecules in the primordial broth give rise to more and more complex compounds? And how did some of those compounds begin to process energy and replicate (two of the defining characteristics of life)? At the molecular level, all of those steps are, of course, chemical reactions, which makes the question of how life began one of chemistry.

The challenge for chemists is no longer to come up with vaguely plausible scenarios, of which there are plenty. For example, researchers have speculated about minerals such as clay acting as catalysts for the formation of the first self-replicating polymers (molecules that, like DNA or proteins, are long chains of smaller units); about chemical complexity fueled by the energy of deep-sea hydrothermal vents; and about an "RNA world," in which DNA’s cousin RNA—which can act as an enzyme and catalyze reactions the way proteins do—would have been a universal molecule before DNA and proteins appeared.

No, the game is to figure out how to test these ideas in reactions coddled in the test tube. Researchers have shown, for example, that certain relatively simple chemicals can spontaneously react to form the more complex building blocks of living systems, such as amino acids and nucleotides, the basic units of DNA and RNA. In 2009 a team led by John Sutherland, now at the MRC Laboratory of Molecular Biology in Cambridge, England, was able to demonstrate the formation of nucleotides from molecules likely to have existed in the primordial broth. Other researchers have focused on the ability of some RNA strands to act as enzymes, providing evidence in support of the RNA world hypothesis. Through such steps, scientists may progressively bridge the gap from inanimate matter to self-replicating, self-sustaining systems.

Now that scientists have a better view of strange and potentially fertile environments in our solar system—the occasional flows of water on Mars, the petrochemical seas of Saturn’s moon Titan, and the cold, salty oceans that seem to lurk under the ice of Jupiter’s moons Europa and Ganymede—the origin of terrestrial life seems only a part of grander questions: Under what circumstances can life arise? And how widely can its chemical basis vary? That issue is made richer still by the discovery, over the past 16 years, of more than 500 extrasolar planets orbiting other stars—worlds of bewildering variety.

These discoveries have pushed chemists to broaden their imagination about the possible chemistries of life. For instance, NASA has long pursued the view that liquid water is a prerequisite, but now scientists are not so sure. How about liquid ammonia, formamide, an oily solvent like liquid methane or supercritical hydrogen on Jupiter? And why should life restrict itself to DNA, RNA and proteins? After all, several artificial chemical systems have now been made that exhibit a kind of replication from the component parts without relying on nucleic acids. All you need, it seems, is a molecular system that can serve as a template for making a copy and then detach itself.

Looking at life on Earth, says chemist Steven Benner of the Foundation for Applied Molecular Evolution in Gainesville, Fla., “we have no way to decide whether the similarities [such as the use of DNA and proteins] reflect common ancestry or the needs of life universally.” But if we retreat into saying that we have to stick with what we know, he says, “we have no fun.”

Source of Information : Scientific American Magazine

Wednesday, January 18, 2012

Putting Diabetes on Autopilot

New devices may spare patients from monitoring their blood glucose

For millions of diabetes sufferers, life is a constant battle to keep their blood sugar balanced, which typically means they have to test their glucose levels and take insulin throughout the day. A new generation of “artificial pancreas” devices may make tedious diabetes micromanagement obsolete. In healthy people, the pancreas naturally produces insulin, which converts sugars and starches into energy. People with type 1 diabetes, however, do not produce any insulin of their own, and those with type 2 produce too little. All type 1 and many type 2 diabetics have to dose themselves with insulin to keep their bodies fueled—and doing so properly requires constant monitoring of blood sugar because appropriate dosages depend on factors such as how much patients eat or exercise. Stuart Weinzimer, an endocrinologist at Yale University, has devised an artificial pancreas that combines two existing technologies: a continuous glucose monitor, which uses an under-the-skin sensor to measure blood glucose levels every few minutes, and an insulin pump, which dispenses insulin through a tube that is also implanted under the skin.

The glucose sensor sends its data wirelessly to a pocket computer a little bigger than an iPhone that is loaded with software developed by Minneapolis-based Medtronic. The program scans the incoming data from the glucose monitor and directs the pump to dispense the correct amount of insulin.
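
The dosing logic inside such devices is proprietary, but the basic closed-loop idea (read a glucose value, compute a correction, command the pump) can be sketched in a few lines of code. Every number, threshold and function name below is a hypothetical illustration, not Medtronic's or Weinzimer's actual algorithm.

# Illustrative closed-loop sketch: compute an insulin dose from one
# glucose reading with a simple proportional rule. All values here are
# assumptions for explanation only; real devices use validated, far
# more sophisticated control algorithms.
TARGET_MG_DL = 120        # desired blood glucose, mg/dL (assumed)
SENSITIVITY = 50.0        # mg/dL drop expected per unit of insulin (assumed)
MAX_BOLUS_UNITS = 2.0     # safety cap per control cycle (assumed)

def dose_for_reading(glucose_mg_dl: float) -> float:
    """Return insulin units to dispense for one control cycle."""
    error = glucose_mg_dl - TARGET_MG_DL
    if error <= 0:
        return 0.0                            # never dose at or below target
    return min(error / SENSITIVITY, MAX_BOLUS_UNITS)

print(dose_for_reading(210))                  # post-meal reading -> 1.8 units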

At an American Diabetes Association meeting in June, Weinzimer and his colleagues reported that 86 percent of type 1 diabetics they studied who used the artificial pancreas reached target blood glucose levels at night, whereas only 54 percent of subjects who had to wake up to activate an insulin pump reached their target levels. Other, similar systems are in development at Boston University, the University of Cambridge and Stanford University.

Some technical glitches still need to be worked out. For example, the device occasionally has trouble adapting to drastic changes in glucose, such as those that occur after exercise. And it will have to go through more rounds of vetting, which could take years, including large-scale patient trials that will be required before the Food and Drug Administration can approve the device. Nevertheless, Weinzimer says that the enthusiastic responses he has gotten from his trial participants remind him why the long slog toward commercialization is worthwhile.

Source of Information : Scientific American Magazine

Friday, January 13, 2012

Spherical Eats

The chemistry of encased mussels and other edible orbs

A few years ago the renowned chef Ferran Adrià presented diners at his restaurant, elBulli, with a simple dish of bright-orange caviar—or rather what looked like caviar: when the guests bit into the orbs, they burst into a mouthful of cantaloupe juice. Since that legendary bit of culinary trompe l’oeil, Adrià and other avant-garde chefs have created many more otherworldly dishes, including mussels that Adrià encases in transparent globes of their own juice.

Eating these spherical foods can evoke childlike joy as you roll the smooth balls around your mouth and explode them with your tongue. But making such confections is not so simple; a lot of chemistry goes into the process. Chefs have developed two ways to go about it: direct and reverse spherification. Both methods exploit the fact that some gelling mixtures do not set unless ions (electrically charged atoms or molecules) are present. In the direct approach, the chef blends the food into a puree or broth that contains a gelling agent, such as sodium alginate or iota carrageenan, but that lacks coagulating ions. The cook separately prepares a setting bath that contains a source of the missing ions, such as calcium gluconate.

As soon as droplets or spoonfuls of the food fall into the bath, gelling begins. Surface tension pulls the beads into their distinctive round shape. A short dip in the bath yields liquid-filled balls encased in tissue-thin skin; a long soak produces chewy beads. The cook stops the gelling process by rinsing the beads and heating them to 85 degrees Celsius (185 degrees Fahrenheit) for 10 minutes. Reverse spherification inverts the process: calcium lactate or some other source of calcium ions is added to the edible liquid or puree—unless the food is naturally rich in calcium.

The bath contains unset gel made with deionized or distilled water, which is calcium-free. When the food goes in, the bath solution itself forms a skin of gel around it. The culinary team at our lab has used spherification to make marbles of crystal-clear tomato water that enclose smaller spheres of basil oil. We have also found that this technique is a terrific way to make a very convincing-looking raw “egg” out of little more than water, ham broth (for the white) and melon juice (for the yolk). It tastes much better than it looks.

Source of Information : Scientific American Magazine

Tuesday, January 10, 2012

Gig.U Is Now in Session

Universities are piloting superfast Internet connections that may finally rival the speed of South Korea’s

The U.S. notoriously lags other countries when it comes to Internet speed. One recent report from Web analyst Akamai Technologies puts us in 14th place, far behind front-runner South Korea and also trailing Hong Kong, Japan and Romania, among other countries. The sticking point over faster broadband has been: Who will pay for it? Telecommunications companies have been leery of investing in infrastructure unless they are certain of demand for extra speed. American consumers, for their part, have been content to direct much of their Internet use to e-mail and social networks, which operate perfectly well at normal broadband speeds, and they have not been willing to pay a premium for speedier service.

The exception lies at the seat of learning. Universities and research institutes are always looking for a quicker flow of bits. “We think our researchers will be left behind without gigabit speeds,” says Elise Kohn, a former policy adviser for the Federal Communications Commission. Kohn and Blair Levin, who helped to develop the FCC’s National Broadband Plan—a congressionally mandated scheme to ensure broadband access to all Americans—are leading a collection of 29 universities spread across the country in piloting a network of one-gigabit-per-second Internet connections. The group, the University Community Next Generation Innovation Project—more commonly referred to as Gig.U—includes Duke University, the University of Chicago, the University of Washington and Arizona State University.

The average U.S. Internet speed today is 5.3 megabits per second, so a one-gigabit Gig.U connection would be nearly 200 times faster than the average connection available today, allowing users to download the equivalent of two high-definition movies in less than one minute and to watch streaming video with no pixelation or other interruptions. By comparison, the average Internet speed in South Korea is 14.4 megabits per second, and the country has pledged to connect every home to the Internet at one gigabit per second by 2012.
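
A rough back-of-the-envelope check of those figures (the movie size below is an assumed value; the speeds come from the paragraph above):

# Back-of-the-envelope arithmetic for the speed comparison above.
# The ~3.5 GB size assumed for a high-definition movie is illustrative.
gig_u_bps  = 1_000_000_000   # 1 gigabit per second
us_avg_bps = 5_300_000       # 5.3 megabits per second, the current U.S. average
movie_bits = 3.5e9 * 8       # one HD movie, assumed ~3.5 gigabytes

print(gig_u_bps / us_avg_bps)       # ~189: Gig.U versus the U.S. average
print(2 * movie_bits / gig_u_bps)   # ~56 seconds for two HD movies at 1 Gbps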

The U.S. gigabit networks will vary from site to site, depending on the approach that different Internet service providers propose to meet the differing needs of Gig.U members. “All our members are focused on next-generation networks, although some will need more than a gigabit, and others will need less,” Kohn says. Gig.U’s request-for-information period runs through November to solicit ideas from the local service providers upgrading to faster networks. These ideas will ultimately be funded by Gig.U members, as well as any nonprofits and private-sector companies interested in the project. Gig.U intends to accelerate the deployment of next-generation networks in the U.S. by encouraging researchers—students and professors alike—to develop new applications and services that can make use of ultrafast data-transfer rates.

Source of Information : Scientific American Magazine

Saturday, January 7, 2012

More than Child’s Play

Young children think like researchers but lose the feel for the scientific method as they age

If your brownies came out too crispy on top but undercooked in the center, it would make sense to bake the next batch at a lower temperature, for more time or in a different pan—but not to make all three changes at once. Realizing that you can best tell which variable matters by altering only one at a time is a cardinal principle of scientific inquiry.

Since the 1990s studies have shown that children think scientifically— making predictions, carrying out mini experiments, reaching conclusions and revising their initial hypotheses in light of new evidence. But while children can play in a way that lets them ascertain cause and effect, and even though they have a rudimentary sense of probability (eight-month-olds are surprised if you reach into a bowl containing four times as many blue marbles as white ones and randomly scoop out a fistful of white ones), it was not clear whether they have an implicit grasp of a key strategy of experimental science: that by isolating variables and testing each independently, you can gain information.

To see whether children understand this concept, scientists at the Massachusetts Institute of Technology and Stanford University presented 60 four- and five-year-olds with a challenge. The researchers showed the kids that certain plastic beads, when placed individually on top of a special box, made green LED lights flash and music play. Scientists then took two pairs of attached beads, one pair glued together and the other separable, and demonstrated that both pairs activated the machine when laid on the box. That raised the possibility that only one bead in a pair worked. The children were then left alone to play. Would they detach the separable pair and place each bead individually on the machine to see which turned it on?

They did, the scientists reported in September in the journal Cognition. So strong was the kids’ sense that they could only figure out the answer by testing the components of a pair independently that they did something none of the scientists expected: when the pair was glued together, the children held it vertically so that only one bead at a time touched the box. That showed an impressive determination to isolate the causal variables, says Stanford’s Noah Goodman: “They actually designed an experiment to get the information they wanted.” That suggests basic scientific principles help very young children learn about the world.

The growing evidence that children think scientifically presents a conundrum: If even the youngest kids have an intuitive grasp of the scientific method, why does that understanding seem to vanish within a few years? Studies suggest that K–12 students struggle to set up a controlled study and cannot figure out what kind of evidence would support or refute a hypothesis. One reason for our failure to capitalize on this scientific intuition we display as toddlers may be that we are pretty good, as children and adults, at reasoning out puzzles that have something to do with real life but flounder when the puzzle is abstract, Goodman suggests—and it is abstract puzzles that educators tend to use when testing the ability to think scientifically. In addition, as we learn more about the world, our knowledge and beliefs trump our powers of scientific reasoning. The message for educators would seem to be to build on the intuition that children bring to science while doing a better job of making the connection between abstract concepts and real-world puzzles.

Source of Information : Scientific American Magazine

Tuesday, January 3, 2012

Toxins All around Us

Exposure to the chemicals in everyday objects poses a hidden health threat

Susan starts her day by jogging to the edge of town, cutting back through a cornfield for an herbal tea at the downtown Starbucks and heading home for a shower. It sounds like a healthy morning routine, but Susan is in fact exposing herself to a rogue’s gallery of chemicals: pesticides and herbicides on the corn, plasticizers in her tea cup, and the wide array of ingredients used to perfume her soap and enhance the performance of her shampoo and moisturizer. Most of these exposures are so low as to be considered trivial, but they are not trivial at all—especially considering that Susan is six weeks pregnant.

Scientists have become increasingly worried that even extremely low levels of some environmental contaminants may have significant damaging effects on our bodies—and that fetuses are particularly vulnerable to such assaults. Some of the chemicals that are all around us have the ability to interfere with our endocrine systems, which regulate the hormones that control our weight, our biorhythms and our reproduction. Synthetic hormones are used clinically to prevent pregnancy, control insulin levels in diabetics, compensate for a deficient thyroid gland and alleviate menopausal symptoms. You wouldn’t think of taking these drugs without a prescription, but we unwittingly do something similar every day. An increasing number of clinicians and scientists are becoming convinced that these chemical exposures contribute to obesity, endometriosis, diabetes, autism, allergies, cancer and other diseases. Laboratory studies—mainly in mice but sometimes in human subjects— have demonstrated that low levels of endocrine-disrupting chemicals induce subtle changes in the developing fetus that have profound health effects in adulthood and even on subsequent generations. The chemicals an expecting mother takes into her body during the course of a typical day may affect her children and her grandchildren.

This isn’t just a lab experiment: we have lived it. Many of us born in the 1950s, 1960s and 1970s were exposed in utero to diethylstilbestrol, or DES, a synthetic estrogen prescribed to pregnant women in a mistaken attempt to prevent miscarriage. An article in the June issue of the New England Journal of Medicine called the lessons learned about the effects of fetal human exposures to DES on adult disease “powerful.”

In the U.S., two federal agencies, the Food and Drug Administration and the Environmental Protection Agency, are responsible for banning dangerous chemicals and making sure that chemicals in our food and drugs have been thoroughly tested. Scientists and clinicians across diverse disciplines are concerned that the efforts of the EPA and the FDA are insufficient in the face of the complex cocktail of chemicals in our environment. Updating a proposal from last year, Senator Frank R. Lautenberg of New Jersey introduced legislation this year to create the Safe Chemicals Act of 2011. If enacted, the law would require chemical companies to demonstrate the safety of their products before marketing them. This is perfectly logical, but it calls for a suitable screening-and-testing program for endocrine-disrupting chemicals. The need for such tests has been recognized for more than a decade, but no one has yet devised a sound testing protocol.

Regulators also cannot interpret the mounting evidence from laboratory studies, many of which use techniques and methods of analysis that weren’t even dreamed of when toxicology testing protocols were developed in the 1950s. It’s like providing a horse breeder with genetic sequence data for five stallions and asking him or her to pick the best horse. Interpreting the data would require a broad range of clinical and scientific experience.

That’s why professional societies representing more than 40,000 scientists wrote a letter to the FDA and EPA offering their expertise. The agencies should take them up on it. Academic scientists and clinicians need a place at the table with government and industry scientists. We owe it to mothers everywhere, who want to give their babies the best possible chance of growing into healthy adults.

Source of Information : Scientific American Magazine

Sunday, January 1, 2012

Improving Your Immune System

The most successful way to help your immune system is through prevention and protection—in other words, practicing tactics like washing your hands properly, avoiding spoiled food, and staying up-to-date on your vaccines. Aside from vaccinations, none of these strategies actually affects your immune system. Instead, they help you fend off pathogens before they get inside.

Although there’s no clear-cut way to beef up your body’s defenses, some simple practices may help.

• Eat a balanced diet. There’s no immune system-boosting super food, but a balanced diet is a good basis for fighting disease. Scientists link deficiencies in folic acid and in vitamins A, B, C, and E to poor immune-system function.

• Avoid stress. And not just the mental kind. The physical wear and tear of weight loss diets and intense exercise also damps down your immune response. So do injuries, infections, heavy drinking, and pack-a-day smoking.

• Get a proper night’s sleep. It’s obvious, but worth repeating—studies find that 8 hours of nightly sleep boosts the level of immune cells in your body.

• Be optimistic. It sounds a bit foofy, but studies have consistently found that you can get someone’s body to produce more T cells if you force them to meditate, relax, watch funny TV, or hang out with friends.

Source of Information : Oreilly - Your Body Missing Manual