Monday, November 30, 2009

See atoms and molecules more clearly than ever

MICROSCOPES capable of revealing the astonishing beauty of an atom can hardly be called blunt instruments. But to date, these tools have either been too destructive or offered disappointing resolution. Now researchers at IBM have come up with a delicate method which has provided unparalleled details of the structure of a molecule.

The earliest pictures of individual atoms were captured in the 1970s by blasting a target – typically a chunk of metal – with a beam of electrons, a technique known as transmission electron microscopy (TEM). Later iterations of this technique, such as the TEAM project at the Lawrence Berkeley National Laboratory in California, achieved resolutions of less than the radius of a hydrogen atom. But while this method works for atoms in a lattice or thin layer, the electron bombardment destroys more fragile individual molecules.

Other techniques use a tiny stylus-like scanning probe. This can be used to measure either the effect of quantum tunnelling of electrons between the probe and the surface of the target, called scanning tunnelling microscopy (STM), or the attractive force between atoms in the probe and the target, called atomic force microscopy (AFM). These methods are suitable for individual molecules but have not been able to approach the detail of TEM.

Leo Gross and colleagues at IBM in Zurich, Switzerland, modified the AFM technique to make the most detailed image yet of pentacene, an organic molecule consisting of five benzene rings. Although the molecule is highly fragile, the researchers were able to capture the details of the hexagonal carbon rings and deduce the positions of the surrounding hydrogen atoms (Science, DOI: 10.1126/science.1176210).

The team first fixed a single carbon monoxide molecule to the end of the probe. This allowed them to invoke a quantum mechanical effect called the Pauli exclusion principle, which says that electrons in the same quantum state cannot approach each other too closely. As the electrons around the pentacene and carbon monoxide molecules are in the same state, a repulsive force operates between them.

The image was created by bumping the probe over the atoms in the molecule – much in the way we might navigate around in a dark bedroom. The researchers measured the amount of repulsive force the probe encountered at each point, and used this to construct a "force map" of the molecule. The level of detail depends on the size of the probe: the smaller the tip, the better the picture.

The image is "astonishing", says Oscar Custance of Japan's National Institute for Materials Science. In 2007, his team used AFM to distinguish individual atoms on a silicon surface. "This is the highest resolution I have ever seen," he says of the IBM image.

The IBM researchers believe their technique may open the door to super-powerful computers made from components built out of precisely positioned atoms and molecules. The work may also provide insights into the actions of catalysts in chemical reactions, potentially allowing researchers to understand what is happening at the atomic level, says Gross. MacGregor Campbell
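The scan-and-map procedure the article describes (measure the repulsive force point by point, then assemble a "force map") can be sketched in a few lines. Everything here is a toy model: the single hexagonal ring, the 1/r^12 repulsive wall and the tip height are illustrative assumptions, not IBM's actual apparatus.

```python
import math

# Toy "force map": scan a probe over a grid above a molecule and record
# the short-range repulsive force at each point. One hexagonal carbon
# ring stands in for pentacene's five rings.
ATOMS = [(math.cos(a * math.pi / 3), math.sin(a * math.pi / 3))
         for a in range(6)]   # hexagonal ring, unit bond radius
TIP_HEIGHT = 0.5              # assumed probe height above the molecular plane

def repulsion(x, y):
    """Summed short-range repulsion felt by the tip at (x, y)."""
    total = 0.0
    for ax, ay in ATOMS:
        r2 = (x - ax) ** 2 + (y - ay) ** 2 + TIP_HEIGHT ** 2
        total += 1.0 / r2 ** 6   # ~1/r^12 repulsive wall
    return total

def force_map(n=41, extent=2.0):
    """Scan an n x n grid centred on the ring; return a nested list."""
    step = 2 * extent / (n - 1)
    return [[repulsion(-extent + i * step, -extent + j * step)
             for i in range(n)] for j in range(n)]

fmap = force_map()
```

The brightest values of `fmap` sit directly over the atoms; lowering `TIP_HEIGHT` sharpens that contrast, echoing the article's point that a smaller tip gives a better picture.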

Source of Information : New Scientist (2009-09-05)

Sunday, November 29, 2009

Antibiotics

A spineless solution - A better way to find novel antibiotics

NEW antibiotics are always welcome. Natural selection means the existing ones are in constant danger that pathogens will evolve resistance to them. But winnowing the few chemicals that have antibiotic effects from the myriad that might do, but don’t, is tedious. So a technique invented recently by Frederick Ausubel of Harvard University and his colleagues, which should help to speed things up, is welcome.

Dr Ausubel’s method, the details of which have just been published in ACS Chemical Biology, employs nematode worms of a species called C. elegans as its sacrificial victims. C. elegans is one of the most intensively studied animals on Earth (it was the first to have its genome read completely). It is a mere millimetre long, and can be mass produced to order, so it is ideal for this sort of work.

Dr Ausubel set out to make an automated system that could infect worms with bacteria, treat them with chemical compounds that might have antibiotic effects, and then record the results. The device he has built starts by laying the worms on a “lawn” of pathogenic bacteria for 15 hours and then mixing them with water to create a sort of worm soup. It then places the infected worms into individual enclosures, using a machine called a particle sorter that is able to drop a precise number of worms (in this case 15) into each of 384 tiny wells arrayed on a single plate. These wells have, in turn, each been preloaded with a different chemical that is being tested for possible antibiotic properties. Once in place, the worms are left alone for five days.

Until now, researchers engaging in this sort of work have had to monitor each wellful of worms by eye (assisted by a microscope) to determine whether the inmates were alive or dead. To avoid this time-consuming process, Dr Ausubel and his team exposed their worms to an orange stain once the five days were over. The stain in question enters dead cells easily, but cannot enter living ones. They were thus able to distinguish the quick from the dead by colour, rather than propensity to wriggle.

Moreover, using a stain in this way meant they could automate the process by attaching a camera to the microscope, taking photographs of all 384 wells, and feeding the images into a computer that had been programmed to measure the area of orange in a well and contrast that with the total area occupied by worms. When they compared this automated mechanism for identifying dead worms with manual methods that depended upon human eyes, they found it was every bit as effective.
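The scoring step described above (compare the stained orange area with the total area occupied by worms) reduces to a simple ratio per well. This sketch assumes the pixels have already been classified; the label names and the 50% cut-off are invented for illustration, not taken from the study.

```python
# Sketch of the automated well-scoring step: the fraction of worm area
# that stained orange (dead) decides whether the compound in that well
# protected the infected worms. Labels and threshold are assumptions.
def score_well(pixels, dead_threshold=0.5):
    """pixels: iterable of labels 'dead', 'live' or 'background'.
    Returns (dead_fraction, verdict) for one well."""
    dead = live = 0
    for p in pixels:
        if p == 'dead':
            dead += 1
        elif p == 'live':
            live += 1
    worm_area = dead + live
    frac = dead / worm_area if worm_area else 0.0
    # mostly-dead worms mean the compound failed to stop the infection
    verdict = ('compound ineffective' if frac >= dead_threshold
               else 'candidate antibiotic')
    return frac, verdict

# a well where most worms survived the infection:
result = score_well(['dead'] * 3 + ['live'] * 7 + ['background'] * 90)
```

Running this over photographs of all 384 wells, rather than eyeballing each one, is the whole labour-saving point of the stain.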

So far Dr Ausubel and his colleagues have managed to test around 37,000 compounds using their new method, and they have found 28 that have antibiotic properties. Their most exciting discovery is that some of these substances work in completely different ways from existing antibiotics. That means entirely new types of resistance mechanism would have to evolve in order for bacteria to escape their effects.

Mass screening of this sort is not, itself, a new idea in the search for drugs, but extending it so that it can study effects on entire animals rather than just isolated cells should make it even more productive. And worms, unlike, say, white mice, have few sentimental supporters in the outside world.

Source of Information : The Economist 2009-09-05

Saturday, November 28, 2009

Games lessons

It sounds like a cop-out, but the future of schooling may lie with video games

SINCE the beginning of mass education, schools have relied on what is known in educational circles as “chalk and talk”. Chalk and blackboard may sometimes be replaced by felt-tip pens and a whiteboard, and electronics in the form of computers may sometimes be bolted on, but the idea of a pedagogue leading his pupils more or less willingly through a day based on periods of study of recognisable academic disciplines, such as mathematics, physics, history, geography and whatever the local language happens to be, has rarely been abandoned.

Abandoning it, though, is what Katie Salen hopes to do. Ms Salen is a games designer and a professor of design and technology at Parsons The New School for Design, in New York. She is also the moving spirit behind Quest to Learn, a new, taxpayer-funded school in that city which is about to open its doors to pupils who will never suffer the indignity of snoring through double French but will, rather, spend their entire days playing games.


Quest to Learn draws on many roots. One is the research of James Gee of the University of Wisconsin. In 2003 Dr Gee published a book called “What Video Games Have to Teach Us About Learning and Literacy”, in which he argued that playing such games helps people develop a sense of identity, grasp meaning, learn to follow commands and even pick role models. Another is the MacArthur Foundation’s digital media and learning initiative, which began in 2006 and which has acted as a test-bed for some of Ms Salen’s ideas about educational-games design. A third is the success of the Bank Street School for Children, an independent primary school in New York that practises what its parent, the nearby Bank Street College of Education, preaches in the way of interdisciplinary teaching methods and the encouragement of pupil collaboration.

Ms Salen is, in effect, seeking to mechanise Bank Street’s methods by transferring much of the pedagogic effort from the teachers themselves (who will now act in an advisory role) to a set of video games that she and her colleagues have devised. Instead of chalk and talk, children learn by doing—and do so in a way that tears up the usual subject-based curriculum altogether.

Periods of maths, science, history and so on are no more. Quest to Learn’s school day will, rather, be divided into four 90-minute blocks devoted to the study of “domains”. Such domains include Codeworlds (a combination of mathematics and English), Being, Space and Place (English and social studies), The Way Things Work (maths and science) and Sports for the Mind (game design and digital literacy). Each domain concludes with a two-week examination called a “Boss Level”—a common phrase in video-game parlance.

Freeing the helots

In one of the units of Being, Space and Place, for example, pupils take on the role of an ancient Spartan who has to assess Athenian strengths and recommend a course of action. In doing so, they learn bits of history, geography and public policy. In a unit of The Way Things Work, they try to inhabit the minds of scientists devising a pathway for a beam of light to reach a target. This lesson touches on maths, optics—and, the organisers hope, creative thinking and teamwork. Another Way-Things-Work unit asks pupils to imagine they are pyramid-builders in ancient Egypt. This means learning about maths and engineering, and something about the country’s religion and geography.

Whether things will work the way Ms Salen hopes will, itself, take a few years to find out. The school plans to admit pupils at the age of 12 and keep them until they are 18, so the first batch will not leave until 2016. If it fails, traditionalists will no doubt scoff at the idea that teaching through playing games was ever seriously entertained. If it succeeds, though, it will provide a model that could make chalk and talk redundant. And it will have shown that in education, as in other fields of activity, it is not enough just to apply new technologies to existing processes—for maximum effect you have to apply them in new and imaginative ways.

Source of Information : The Economist 2009-09-05

Friday, November 27, 2009

The origin of diabetes - Don't blame your genes

They may simply be getting bad instructions—from you

GENES are acquired at conception and carried to the grave. But the same gene can be expressed differently in different people—or at different times during an individual’s life. The differences are the result of what are known as epigenetic marks, chemicals such as methyl groups that are sometimes attached to a gene to tell it to turn out more of a vital protein, or to stop making that protein altogether.

Many researchers believe epigenetic marks hold the key to understanding, and eventually preventing, a number of diseases—and one whose epigenetic origins they are particularly interested in is type 2, or late-onset, diabetes. Juleen Zierath and her colleagues at the Karolinska Institute in Stockholm, Sweden, are trying to find out how people develop insulin resistance, the underlying cause of type 2 diabetes.

Insulin is a hormone produced by the pancreas. When all is going well, it lets cells know when they need to mop up glucose from the blood, usually just after a person has eaten. If the hormone is absent or is produced in insufficient quantities because of damage to the pancreatic cells that secrete it, the result is classical (or type 1) diabetes. But people with insulin resistance—and thus the late-onset version of the disease—do produce insulin. Their problem is that their glucose-absorbing cells cannot heed its advice. The sugar stays in their bloodstreams, where it damages the vessels, leading to ailments such as heart disease, kidney failure and blindness.

As they report in Cell Metabolism, Dr Zierath and her team decided to look at one of the main consumers of glucose: muscle tissue. They took muscle biopsies from 17 healthy people, 17 people with type 2 diabetes and eight people with early signs of insulin resistance, so-called “pre-diabetics”. They then compared the patterns of the methyl groups attached to the genes of the healthy volunteers with those of the diabetic and pre-diabetic ones.

As it turned out, they found hundreds of genes in which the patterns differed systematically, so to whittle the problem down they concentrated on those involved in the function of the mitochondria. These are the components of a cell that extract energy from glucose and use it to manufacture a chemical called ATP, which is the universal fuel of biological processes. Having fewer or less effective mitochondria causes a drop in demand for glucose, and might thus cause a cell to become insulin resistant.

Even narrowing the question down like this, though, left 44 genes to look at. Of these, Dr Zierath and her team picked one called PGC-1 alpha for further study. This gene is involved in the development of mitochondria, and the extra epigenetic marks the researchers found on it in diabetics and pre-diabetics had the effect of instructing the cells the marked genes were located in to produce fewer and smaller mitochondria than is normal.

The next question was how those marks got there. It is well known that poor diet and lack of exercise make insulin resistance more likely, so one hypothesis is that these things change the epigenetic marks on genes such as PGC-1 alpha. To test that idea, the researchers bathed cells in glucose and fats (chosen as surrogates for bad diet and lack of exercise for obvious reasons) and also in inflammation-producing proteins called cytokines. These proteins, they knew, are produced abundantly in the obese. And obesity, the consequence of bad diet and lack of exercise, is another risk factor for type 2 diabetes. Lo and behold, doses of both fats and cytokines caused PGC-1 alpha to be methylated.

Next, Dr Zierath wanted to know if she could prevent that. So, this time, before bathing the healthy cells in fats or cytokines, the team added a chemical that blocks the activity of DNMT3B, an enzyme which they found methylates PGC-1 alpha. When that was done, no extra methyl groups appeared. These findings have two interesting implications. First, the fact the team was able to stop PGC-1 alpha being methylated suggests that a drug might be developed to do the same. Second, they show that bodily abuse can stretch all the way down to the genetic level. As Dr Zierath puts it, “we are not victims of our genes. If anything, our genes are victims of us.”

Source of Information : The Economist 2009-09-05

Thursday, November 26, 2009

Egg

The answer to the age-old riddle is biologically obvious

In March 2006, on the occasion of the release of Chicken Little on DVD, Disney convened a panel to put an end to the long-standing riddle: Which came first, the chicken or the egg? The verdict was unanimous. “The first chicken must have differed from its parents by some genetic change [that] caused this bird to be the first ever to fulfill our criteria for truly being a chicken,” said John Brookfield, an evolutionary biologist at the University of Nottingham in England. “Thus the living organism inside the eggshell would have had the same DNA as the chicken that it would develop into, and thus would itself be a member of the species of chicken.” What we recognize as the DNA of a chicken exists first inside an egg. Egg came first.

Yet despite the unified front of the three-person panel—David Papineau, a philosopher of science, and Charles Bourns, a chicken farmer, agreed in spirit with Brookfield’s analysis—the question is at best incomplete, at worst misleading.

If we take “chicken” to mean a member of Gallus gallus domesticus (a subspecies of junglefowl that evolved in Southeast Asia and has been domesticated for perhaps 10,000 years), we could ask at what point the first member of this species appeared (and whether it was in bird or egg form). Yet speciation is not a process that happens in an instant or in an individual. It takes generations on generations of gradual change for a group of animals to cease interbreeding with another group; only then can we say that speciation has occurred. Viewed in this way, it does not make sense to talk about the first chicken or the first egg. There was only the first group of chickens—some of whom, presumably, were in egg form.

And if one relaxes the species qualification, then the race is not even close. Invertebrates as simple as sponges rely on some form of egg for reproduction, which means that eggs probably predate the Cambrian explosion in biodiversity of 530 million years ago. Fish and amphibians lay gelatinous eggs; ancestors of reptiles and birds laid the first shelled eggs 340 million years ago, and that innovation, which allowed their eggs to survive and mature on dry land, enabled the rise of land vertebrates long before the first rooster crowed.

Source of Information : Scientific American September 2009

Wednesday, November 25, 2009

Teeth

They long predate the smile

Paleontologists used to wonder whether the first teeth were on the inside or the outside of prehistoric bodies. Sharks are covered in thousands of tiny denticles–toothlike nubs of dentine and collagen that make sharkskin coarse to the touch. If the denticles of some very early vertebrate had migrated into the jaw, grown larger and gained new functions, the speculation went, they could have given rise to modern choppers. But over the past decade fossil and genetic evidence has confirmed that teeth are much older than even the ancient shark lineage—indeed, older than the jaw or the denticle. And they originated inside the body, though not in the mouth.

The first sets of teeth belonged to eel-like swimmers that lived some 525 million years ago and ranged from four to 40 centimeters long. Collectively they are known as conodonts for the ring of long, conical teeth in their pharynx. Some fish species still have a set of vestigial teeth in their throat, but pharyngeal teeth for the most part are believed to have migrated forward into the mouth, perhaps as the jaw was evolving. Supporting that idea, the programmed gene activity that builds teeth differs from the instructions that build a jaw, even though both types of structure grow in tandem.

The marriage of tooth and jaw, however, likely gave rise to specialized tooth shapes. By the 10th day of a human embryo’s development, molecular signaling that initiates tooth formation is taking place between two basic embryonic tissue layers. At the same time, signals from the growing jaw imprint a shape onto the primordial tooth that cannot be changed. Even when the bud of a future molar, for instance, is transplanted into a different area of the jaw, the final tooth will become whatever its original location fated it to be.

Unfortunately, dental researchers are finding it difficult to recapitulate half a billion years of evolution in the laboratory. Because burgeoning teeth depend on information from the budding embryonic jaw, work toward generating replacement teeth from dental stem cells focuses on growing them in the desired location in the recipient’s mouth–but scientists are not yet sure the adult jaw can provide the necessary signals to shape made-to-order teeth.

Source of Information : Scientific American September 2009

Tuesday, November 24, 2009

Ball Bearings

Cheap steel was key to allowing the routine design of parts that rolled against one another

If the utility of an invention were somehow derived from the genius of its inventor, it would be pardonable that so many sources trace the idea for the ball bearing to a 1497 drawing by Leonardo da Vinci. But good ideas, like useful evolutionary traits, tend to emerge more than once, in diverse times and places, and the idea of arranging for parts to roll against one another instead of sliding or slipping is very old indeed. The Egyptians already had the basic idea when they moved great blocks of stone on cylindrical rollers. Similar ideas occurred to the builders of Stonehenge as early as 1800 B.C. and to the craftsmen who constructed the cylindrical-shaped bearings on the wheel hubs of wagons around 100 B.C. (On these wagons the axle turned with the wheels, so the bearings enabled the axle to roll against the wagon chassis.)

The first design for a ball bearing that would support the axle of a carriage did not appear until 1794, in a patent filed by a Welsh ironmaster named Philip Vaughan. Ball bearings between the wheel and the axle enabled the axle to remain fixed to the carriage chassis. But cast iron ball bearings were brittle and tended to crack under stress.

It took the invention of the Bessemer process for making inexpensive steel, plus the invention of the bicycle, to fix the ball bearing permanently in the minds of engineers everywhere. Jules-Pierre Suriray, a Parisian bicycle mechanic, patented his steel ball-bearing design in 1869, and in that same year a bicycle outfitted with Suriray’s ball bearings won an international cycling race.

The demand for ball bearings—on automobiles, tanks or guidance systems—has pushed manufacturers ever closer to the ideal of shaping a perfect sphere. No turning wheel will survive for long on its axle without ball bearings machined to a tolerance of less than a thousandth, or even a 10-thousandth, of an inch. Many sources claim that the most perfect spheres occur in the bearings of computer hard drives, but in fact that honor goes to the ping pong–size spheres of fused quartz that serve as gyroscopic bearings for the satellite Gravity Probe B. Its gyroscopes are 30 million times more accurate than any other gyroscope ever built.


Source of Information : Scientific American September 2009

Monday, November 23, 2009

Coriolis Effect

The earth’s spin influences hurricanes but not toilets

In the final year of World War I, when the German military pointed its largest artillery at Paris from a distance of 75 miles, the troops adjusted the trajectory for many factors that could be ignored with less powerful guns. In particular, a subtle influence from the rotation of the earth—the Coriolis effect or force—would have shifted all their shots by about half a mile.

Decades earlier a Parisian scientist by the name of Gaspard-Gustave de Coriolis had written down the equations describing that effect as a part of his 1835 paper analyzing machines with rotating parts, such as waterwheels.

The Coriolis effect can arise in any situation involving rotation. If you stand anywhere on a counterclockwise-turning carousel, for instance, and throw a ball in any direction, you will see the ball’s trajectory curve to its right. Someone standing next to the carousel will see the ball move in a straight line, but in your rotating frame of reference the ball's direction of motion swings around clockwise. A new force appears to act on the ball.

On the spinning earth, we see a similar (but much weaker) force acting on moving objects. As well as deflecting the paths of long-range artillery shells and ballistic missiles, the Coriolis effect is what causes cyclones (which include hurricanes and typhoons) to spin clockwise south of the equator and counterclockwise north of it. Indeed, the Coriolis effect is the reason that winds in general tend to flow around regions of high and low pressure, running parallel to the lines of constant pressure on a weather map (“isobars”), instead of flowing directly from high to low pressure at right angles to the isobars. In the Northern Hemisphere, air flowing radially inward across the isobars toward the low pressure would be deflected to the right. The motion reaches a steady state with the wind encircling the low-pressure area—the pressure gradient pushing inward and the Coriolis force outward.

A popular factoid claims that water running down a drain turns in one direction in the Southern Hemisphere and the opposite way in the Northern Hemisphere. That idea is a myth: although the Coriolis force is strong enough to direct the winds of hurricanes when acting over hundreds of miles for days, it is far too weak to stir a small bowl of water in the scant seconds the water takes to run down the drain.
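The half-mile correction quoted above can be sanity-checked with the standard expression for Coriolis drift: a transverse acceleration of 2 * omega * v * sin(latitude) gives a sideways displacement of roughly omega * v * sin(latitude) * t^2 after time t. The flight time and latitude below are assumed round numbers, not the Paris Gun's actual firing tables.

```python
import math

# Back-of-the-envelope Coriolis drift for a projectile: transverse
# acceleration a = 2*omega*v*sin(lat), so sideways displacement is
# about a*t**2/2 = omega*v*sin(lat)*t**2. Vertical motion is ignored.
OMEGA = 7.292e-5   # earth's rotation rate, rad/s

def coriolis_drift(v_horizontal, flight_time, latitude_deg):
    """Approximate sideways drift in metres."""
    return (OMEGA * v_horizontal
            * math.sin(math.radians(latitude_deg))
            * flight_time ** 2)

# ~75 miles (120 km) covered in an assumed ~170 s near latitude 49 N:
shell_drift = coriolis_drift(120_000 / 170, 170, 49)

# water crossing a basin: a few centimetres per second for a couple of
# seconds, also at mid-latitude:
drain_drift = coriolis_drift(0.05, 2.0, 49)
```

With these assumed numbers the shell drifts on the order of a kilometre, consistent with the "about half a mile" in the text, while the drain case comes out at a few millionths of a metre. The t^2 factor is the whole story: minutes of flight versus seconds of draining.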

Source of Information : Scientific American September 2009

Sunday, November 22, 2009

Artificial Heart

Did the wrong man get credit for the world’s first permanent pump?

In January 1982 surgeons at the University of Utah implanted the first permanent artificial heart into Barney Clark, a 61-year-old dentist from Seattle who was hours from death as he went into the operating room. He would live another 112 days. The work was a triumph for Willem Kolff, founder of the university’s Division of Artificial Organs and head of the team that developed Clark’s new heart. Yet in the weeks that followed the surgery, Kolff’s name began to be left out of the frantic media coverage. Nearly three decades later he has been all but forgotten. Perhaps he should have named the heart after himself.

Kolff was already one of the world’s foremost inventors of artificial organs when he moved in 1967 from the Cleveland Clinic to Utah. Ten years earlier he had invented the first working artificial kidney; that same year he began work on a heart. At Utah, Kolff led a team of more than 200 doctors and scientists who were pushing to advance the field of artificial organs.

In 1971 he hired Robert Jarvik, a budding researcher in biomechanics who seemed to have a knack for engineering. Jarvik began medical school the next year and continued to work on improving the heart through his graduation in 1976. Kolff had a tradition of naming new versions of the heart after young investigators in his lab to keep them motivated and prevent them from moving elsewhere. Jarvik was project manager for the iteration that came to be named Jarvik-7. That device was approved for use by the Food and Drug Administration in 1981.

Jarvik was 35 years old when Clark received the heart that bore his name. He appeared at the press conference that announced the implant in scrubs, although he did not take part in the surgery. Jarvik continued to attend press conferences at the center, while Kolff kept a low profile. Perhaps it is not surprising that the world came to associate a seminal piece of engineering—the work of hundreds, over a course of years—with one man. After all, it had his name on it.

Source of Information : Scientific American September 2009

Saturday, November 21, 2009

Antibiotics

These wonder-drug molecules might have evolved to help bacteria speak with their neighbors, not kill them

Most medically important antibiotics come from soil bacteria. Conventional wisdom holds that dirt microbes evolved these compounds as lethal weapons in the fierce battle waged beneath our feet for food and territory. For more than 15 years microbiologist Julian Davies of the University of British Columbia has been arguing otherwise. “They’re talking, not fighting,” Davies says. His respected if not wholly accepted theory is that bacteria use most of the small molecules we call antibiotics for communication.

As evidence, Davies points out that in nature, soil bacteria secrete antibiotics at trace levels that do not come close to killing their microbial neighbors. “Only when we use them at unnaturally high concentrations do we find that these chemicals inhibit bacteria,” he explains. Moreover, in Davies’s Vancouver laboratory, his staff has been eavesdropping on the flurry of gene activity in bacteria exposed to low-dose antibiotics. The researchers equip their bacteria with glow-in-the-dark lux genes that provide a fluorescent signal when other linked genes are active; then they watch those genetic “switchboards” light up in a chorus of responses to antibiotic exposure. The call-and-response activity resembles that of cells responding to hormones, Davies observes, or of “quorum-sensing” bacteria that assess their own numbers.

“I’m not saying that some of these compounds couldn’t be used as weapons in nature,” Davies says. “But that’s not what we’re seeing.” He notes that a gram of soil contains more than 1,000 different types of bacteria. “They’re all thriving there together and clearly not killing one another.” Davies proposes that many antibiotics may help coordinate bacterial activities such as swarming, biofilm formation and diverse interactions with their multicellular hosts.

Davies’s theory implies both good news and bad for the world of medicine. Bacterial communities (and not just those in dirt) might be treasure troves of chemicals with microbe-killing drug potential. Davies and his colleagues have already found candidate molecules among gut bacteria such as Escherichia coli. But for every new antibiotic, there may also already be plenty of corresponding resistance genes. After all, the same bacteria that regularly produce and respond to antibiotics need mechanisms for protecting themselves from potentially toxic effects. And in the gene-swapping world of bacteria, it doesn’t take long for such DNA instructions to jump from one species to many once a new antibiotic comes into widespread medical use.

Source of Information : Scientific American September 2009

Friday, November 20, 2009

Scotch Tape

Most new inventions quickly fall into oblivion; some stick

In 1930 food-packing companies were enthralled with the relatively new and improved film called cellophane, a transparent polymer made from cellulose. Cellophane wrappers could help keep packaged food fresh yet would still allow customers a view of the contents. Sealing cellophane packages satisfactorily was a problem, however, until the 3M Company invented and trademarked Scotch tape—a name that the public nonetheless widely uses for all adhesive-backed cellophane tapes. (The analogous product Sellotape, introduced seven years later in Europe, has the same problems with generic use of its name.)

Engineers call the glue in Scotch tape a pressure-sensitive adhesive. It does not stick by forming chemical bonds with the material it is placed on, says Alphonsus Pocius, a scientist at the 3M Corporate Research Materials Laboratory in St. Paul, Minn. Instead applied pressure forces the glue to penetrate the tiniest microscopic irregularities on the material’s surface. Once there, it will resist coming back out, thus keeping the tape stuck in place. The glue “has to be halfway between liquid and solid,” Pocius explains: fluid enough to spread under pressure but viscous enough to resist flowing.

Concocting the right kind of glue is only part of the invention, however. The typical adhesive tape contains not just two materials (glue and backing, which can be cellophane or some other plastic) but four. A layer of primer helps the glue stick to the plastic, while on the other side a “release agent” makes sure that the glue does not stick to the top. Otherwise, Scotch tape would be impossible to unroll.

Adhesive tape recently caught the attention of physicists. Researchers showed that unrolling tape in a vacuum chamber releases x-rays, and they used those x-rays to image the bones in their fingers as a demonstration. The discovery could lead to cheap, portable (and even muscle-powered) radiography machines. The unrolling creates electrostatic charges, and electrons jumping across the gap between tape and roll produce x-rays. In the presence of air the electrons are much slower and produce no x-rays. But try unrolling tape in a completely dark room, and you will notice a faint glow.

Source of Information : Scientific American September 2009

Thursday, November 19, 2009

Insurance

Its probability-based view of misfortunes helped to shape the scientific outlook

The first “insurance policy” on record is probably the Codex Hammurabi, circa 1780 B.C., which you can still read in the original at the Louvre Museum in Paris if you are nimble with ancient Akkadian legalese. It avers that shippers whose goods were lost or stolen in transit would be compensated by the state. (How did shippers prove their claims? A sworn declaration before a god was good enough for the king of Babylon.)

Another 3,500 years or so passed before a catastrophe—the Great Fire of London in 1666—begat the first instance of “modern” insurance: a formal setup whereby people paid premiums to companies to bail them out in an emergency; actuaries for the companies set the premium rates based on risk of payout. Such insurance depended on advancements in higher mathematics—namely, probability theory. That development has been insurance’s lasting and profound legacy for modern life, coloring the way we think about so many things, including ourselves.

Mathematical probability theory began in the mid-16th century, when European scholars first applied hard analysis to gambling games. The goal, a hallmark of the Enlightenment, was to lay reason on randomness. Deadly storms, plagues and other misfortunes were understood to be merely unfortunate but natural (and rare) events, not portents—less scourges to be feared and more mysteries to be solved. Thus did probability crunching find its way into modern science. Geneticists use it to divine the likelihood that parents will have children with a particular birth defect. Particle physicists use it to allay fears that the new supercollider will produce an Earth-swallowing black hole. We organize our lives—from indulgences to duties—with the probabilistic expiration date of our life span in mind. At every turn, we subconsciously intuit that this or that outcome is likely to happen, but those intuitions are pliable. It is the real-world testing of our biases—the scientific method—that confirms or kills them.

The legacy of insurance industry risk crunching is not all positive: its fingerprints are all over the recent massive upheaval on Wall Street. A formula published in 2000 by actuary David X. Li, who went on to head research divisions at Citigroup and Barclays Capital, was widely used by economists and bankers to estimate the risk of asset-backed securities, and it borrowed a key component from life insurance. The formula, called a Gaussian copula function, was not so much an application of actuarial science as a misapplication of it. As it turns out, the default risk of financial instruments cannot be predicted in the same way that, say, the death risk of spouses can.
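Li’s formula itself is beyond the scope of this piece, but the core idea of a Gaussian copula can be sketched with a small Monte Carlo simulation in pure Python (a toy illustration; the default probabilities and correlation below are invented, not Li’s numbers). Each borrower defaults when a latent standard-normal variable falls below the threshold implied by its marginal default probability, and correlation between the latent variables governs how often defaults happen together.

```python
import math
import random
from statistics import NormalDist

def joint_default_prob(p1, p2, rho, n=200_000, seed=42):
    """Estimate the probability that two assets default together under a
    Gaussian copula: each asset defaults when its latent standard-normal
    variable falls below the threshold implied by its marginal default
    probability; the two latent variables are correlated by rho."""
    nd = NormalDist()
    t1, t2 = nd.inv_cdf(p1), nd.inv_cdf(p2)  # default thresholds
    rng = random.Random(seed)
    k = math.sqrt(1 - rho * rho)
    hits = 0
    for _ in range(n):
        z1 = rng.gauss(0, 1)
        z2 = rho * z1 + k * rng.gauss(0, 1)  # correlated latent variable
        if z1 < t1 and z2 < t2:
            hits += 1
    return hits / n

# Independent assets: joint default is just the product of the marginals.
print(joint_default_prob(0.10, 0.10, rho=0.0))   # close to 0.01
# Strongly correlated assets default together far more often.
print(joint_default_prob(0.10, 0.10, rho=0.9))
```

With zero correlation the joint default probability is simply the product of the marginals; turn the correlation up and joint defaults become far more common, which is precisely the effect the formula was meant to capture for bundled securities, and precisely where the real-world estimates went wrong.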

Source of Information : Scientific American September 2009

Wednesday, November 18, 2009

External Ears

They guide sound to the sensitive middle ear

Looking more like a baby salamander than anything else, a six-week-old human embryo has tiny paddles for hands, dark dots for eyes and, on either side of its shallow mouth slit, half a dozen small bumps destined to form an ear. By nine weeks, these “hillocks” will migrate up the face as the jaw becomes more pronounced and start taking on the recognizable shell shape so handy for holding up eyeglasses.

Because development often reprises stages of evolution, the growth of embryonic ears in tandem with the jaw is no accident: the sound-transmitting middle ear bones that are a distinguishing feature of mammals evolved from what used to be gill arches in fish and jawbones in reptiles. The tympanic membrane, or eardrum, that sits just outside the middle ear evolved separately and repeatedly in the ancestors of frogs, turtles, lizards, birds and mammals.

Reptilian eardrums can do no more than crudely transmit low-frequency vibrations. Mammals, which have a fancier middle-ear setup, can also hear higher-frequency sounds; external skin and cartilage flaps, called pinnae, are thought to have evolved to capture and funnel those sounds more effectively. The entire human ear structure amplifies sounds by only about 10 to 15 decibels, but our pinnae also usefully modulate the frequency of sounds entering the ear canal. As the contours of the pinnae reflect incoming vibrations, they slightly delay the higher-frequency sounds in a way that cancels out some of them. This so-called notch-filtering effect preferentially delivers sounds in the range of human speech to the inner ear.
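The notch-filtering effect lends itself to a back-of-envelope calculation (the 2 cm path difference below is an assumed, plausible figure, not one from the article). A reflection off the pinna that arrives half a period after the direct sound cancels it, so the first notch falls at f = c / (2 × Δd):

```python
SPEED_OF_SOUND = 343.0  # metres per second, in air at room temperature

def notch_frequency(path_difference_m):
    """First frequency cancelled when a reflection off the pinna arrives
    delayed by half a period relative to the direct sound."""
    return SPEED_OF_SOUND / (2 * path_difference_m)

# A reflection path about 2 cm longer than the direct path puts the notch
# near 8.6 kHz, well above the main speech band, which passes unscathed.
print(round(notch_frequency(0.02)))  # ~8575 Hz
```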

Pinnae also help to detect where a sound comes from. Perhaps no animal has a keener directional hearing sense than bats, whose pinnae range in shapes and sizes tailored to the frequencies of each species’ own sonar signals. Another night hunter that relies heavily on hearing, the barn owl, instead uses its large ruff of facial feathers to capture sound and clues to its source. Studies of how human pinnae filter and reflect sounds are informing the design of hearing aids to better reproduce natural aural mechanics. Robots and automated surveillance cameras that turn toward the sound of a disturbance are also being modeled on the human head and external ears.

Source of Information : Scientific American September 2009

Tuesday, November 17, 2009

Batteries

Their inventor may not have known how they actually work

A battery’s power comes from the tendency of electric charge to migrate between different substances. It is the power that Italian scientist Alessandro Volta sought to tap into when he built the first battery at the end of 1799. Although different designs exist, the basic structure has remained the same ever since. Every battery has two electrodes. One, the anode, wants to give electrons (which carry a negative electric charge) to the other, the cathode. Connect the two through a circuit, and electrons will flow and carry out work—say, lighting a bulb or brushing your teeth. Simply shifting electrons from one material to another, however, would not take you very far: like charges repel, and only so many electrons can accumulate on the cathode before they start to keep more electrons from joining. To keep the juice going, a battery balances the charges within its innards by moving positively charged ions from the anode to the cathode through an electrolyte, which can be solid, liquid or gelatinous. It is the electrolyte that makes the battery work, because it allows ions to flow but not electrons, whereas the external circuit allows electrons to flow but not ions.

For example, a charged lithium-ion battery—the type that powers cell phones and laptop computers—has a graphite anode stuffed with lithium atoms and a cathode made of some lithium-based substance. During operation, the anode's lithium atoms release electrons into the external circuit, where they travel to the more electron-thirsty cathode. Stripped of their electrons, the lithium atoms become positively charged ions and are drawn toward the electrons accumulating in the cathode, which they can reach only by flowing through the electrolyte. The ions’ motion restores the balance of charges and allows the flow of electricity to continue—at least until the anode runs out of lithium. Recharging the battery reverses the process: a voltage applied between the two electrodes makes the electrons (and the lithium ions) move back to the graphite side. This is an uphill struggle, energetically speaking, which is why it amounts to storing energy in the battery.
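The charge bookkeeping described above can be caricatured in a few lines of Python (a deliberately crude model with invented numbers; real cells involve voltages, reaction kinetics and capacity fade). Every unit of discharge moves one electron through the external circuit and one lithium ion through the electrolyte:

```python
class LithiumIonCell:
    """Toy bookkeeping model: each unit of discharge sends one electron
    through the external circuit and one lithium ion through the
    electrolyte, until the anode runs out of lithium."""

    def __init__(self, lithium_atoms):
        self.anode_li = lithium_atoms   # lithium stored in the graphite anode
        self.cathode_li = 0             # lithium accumulated at the cathode

    def discharge(self, units):
        moved = min(units, self.anode_li)  # can't move more Li than we hold
        self.anode_li -= moved
        self.cathode_li += moved           # each ion follows its electron
        return moved                       # electrons delivered to the circuit

    def recharge(self):
        # An applied voltage pushes everything back to the graphite side.
        self.anode_li += self.cathode_li
        self.cathode_li = 0

cell = LithiumIonCell(lithium_atoms=100)
print(cell.discharge(60))   # 60 electrons flow
print(cell.discharge(60))   # only 40 left: the cell is flat
cell.recharge()
print(cell.anode_li)        # back to 100
```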

When he built his first battery, Volta was trying to replicate the organs that produce electricity in torpedoes, the fish also known as electric rays, says Giuliano Pancaldi, a science historian at the University of Bologna in Italy. Volta probably went by trial and error before settling on using metal electrodes and wet cardboard as an electrolyte. At the time, no one knew about the existence of atoms, ions and electrons. But whatever the nature of the charge carriers, Volta probably was not aware that in his battery, the positive charges moved in opposition to the “electric fluid” moving outside. “It took a century before experts reached a consensus on how the battery works,” Pancaldi says.

Source of Information : Scientific American September 2009

Monday, November 16, 2009

Troubleshooting Your Skin

Now that you understand the two layers of skin that wrap your body, you can make sense of a whole host of skin insults and injuries.

• Cuts. Shallow cuts that don’t dig past the epidermis won’t produce any blood. Instead, your skin will just ooze clear liquid. If you see blood, you’ve hit the much thicker and much tougher dermis.

• Stretch marks. Extreme skin stretching (for example, stretching caused by extreme weight gain or pregnancy) can tear the dermis, leaving stretch marks behind. Although they fade with time and are completely harmless, stretch marks never disappear. (Incidentally, 75 to 90 percent of women get some sort of stretch marks during pregnancy. Special skin creams seem to offer little help, and the natural elasticity of your skin appears to determine how severely you’re affected.)

• Blisters. A blister is a fluid-filled pocket that forms between the epidermis and the dermis. The culprit is usually friction from a repetitive motion. Blisters heal fastest (and with no chance of infection) if you leave them undisturbed and unbroken.

• Bruises. A bruise is an injury that causes blood to seep from damaged tissue into the dermis. The bruise remains until your body reabsorbs the blood.

• Warts. A wart is a small, rough skin tumor that’s usually triggered by a virus (and you usually pick up that virus from a damp surface, like a community swimming pool). Warts are more likely to cause pain and embarrassment than any serious complications. However, they can be tenacious, especially those that appear on the undersides of your feet. If you can’t kill off a wart with the standard, over-the-counter products, visit your doctor for a more powerful approach, like a dab of super-cold liquid nitrogen.

Source of Information : Oreilly - Your Body Missing Manual

Sunday, November 15, 2009

How the Sun Created Race (and Kinky Hair)

The paradox between the positive and negative effects of the sun is at the root of humanity’s wide range of skin colors. Races that have traditionally lived in the glare of the equatorial sun have darker skin that offers natural sun protection. But here’s the curious bit: If you follow the tree of humanity back far enough, you’ll find that we all have relatively dark-skinned ancestors. The light-skinned people of today descended from mostly European races that lost their built-in sunscreen over generations of life in colder, dimmer lands. The obvious question is why. After all, if the sun is so dangerous, surely everyone can use a bit of natural sunblock.

The going theory is that when dark-skinned people moved north and south, their natural sunscreen starved them of vitamin D. But the genetic misfits with light skin absorbed more sun, providing the vitamin D they needed, and they thrived. The sun played the same role in shaping humanity’s hair. Races that needed sun protection developed UV-scattering Afro hair to shield their scalps. Races that needed more vitamin D developed straight, kink-free hair that allows light to pass through to the skin below. In other words, if the sun wasn’t harmful, no one would sport dark skin. And if vitamin D wasn’t important, there would be no white people.

Source of Information : Oreilly - Your Body Missing Manual

Saturday, November 14, 2009

Sun Damage

So far, you’ve heard about the good side of the sun—its ability to fuel your skin’s vitamin D factory (at least in warmer seasons). But here’s the scary part: The dangers of the sun far outweigh its benefits.

The problem, of course, is skin cancer. Skin cancer is by far the most common form of cancer, trouncing lung, breast, prostate, and colorectal cancers. However, many skin cancers disfigure the skin without threatening your life. Only the type of skin cancer called melanoma is likely to spread from your skin to the rest of your body, which it can do quite quickly.

The culprit is the ultraviolet radiation in sunlight. Ultraviolet rays can be divided into three types: UVA, UVB, and UVC, from least worrisome to most dangerous. UVA can age the skin and may play a role in skin cancer, but science still considers it to be the least harmful. UVB is the type of radiation that fires up vitamin D production and triggers sunburns (and eventually skin cancer). UVC is more dangerous still, but in most parts of the world it’s blocked by the ozone layer, which means it never reaches your skin.

UV exposure increases your risk for all types of skin cancer. But like many things that have adverse health consequences (smoking, obesity, and so on), there’s a lag between the behavior and its effect. In the case of skin cancer, the blistering sunburn you get in your early twenties might lead to skin cancer 30 or 40 years later. Furthermore, the effects of sun exposure are cumulative, so it may take many years of sun-inflicted damage before you harm some of your skin’s genetic material beyond repair.

Most dermatologists believe that skin cancer is highly preventable if you practice good sun habits:

• Reduce sun exposure. Don’t linger in the sun between 10 a.m. and 3 p.m. (11 a.m. and 4 p.m. during daylight-saving time). If you find yourself in strong sun, seek shade.

• Cover up. Always wear a wide-brimmed hat, long-sleeved shirt, and long pants on sunny days.

• Use sunscreen. Look for a product that protects against both UVA and UVB rays and offers a sun protection factor (SPF) of at least 15. Applied properly, a sunscreen of SPF 15 protects the skin from 93% of UVB radiation. Higher SPF numbers are better and may block more UVA, but the difference is not nearly as significant as the numbers imply.

• Use sunscreen properly. Apply sunscreen 15 to 30 minutes before going out. Repeat every two or three hours, more often if you’re swimming.

• Avoid tanning beds. Although tanning beds use UVA rays rather than the more damaging UVBs, they’re far from harmless. Even occasional tanning sessions accelerate skin aging and are likely to increase your risk of skin cancer.

• Learn from your mistakes. If you end up with a sunburn—even a mild one—figure out which rule you broke and resolve to avoid the risk next time. Remember: Sun damage accumulates over your lifetime.
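The SPF figures in the sunscreen advice above follow from a simple rule: a properly applied sunscreen of SPF n transmits roughly 1/n of the incoming UVB. A quick sketch:

```python
def uvb_blocked(spf):
    """Fraction of UVB blocked by a properly applied sunscreen:
    SPF n lets through roughly 1/n of the radiation."""
    return 1 - 1 / spf

for spf in (15, 30, 50):
    print(f"SPF {spf}: blocks {uvb_blocked(spf):.1%} of UVB")
# SPF 15 already blocks about 93%; doubling to SPF 30 adds only ~3 points.
```

This is why higher SPF numbers deliver much less extra protection than the labels suggest: the gap between SPF 15 and SPF 50 is only about five percentage points of UVB.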

These guidelines are particularly important if you have light skin, a large number of moles, or a family history of melanoma, all of which single you out for greater risk.

Despite decades of use and study, the science of sunscreens isn’t settled. Although sunscreens clearly reduce the occurrence of less harmful types of skin cancer, several studies have found that they offer no protection from deadly melanoma. The reason for this discrepancy is unknown—some experts believe sunscreen gives people a false sense of security, allowing them to stay out longer in potentially harmful sun. Others believe the culprit is not using enough sunscreen or not applying it properly, while still others blame old sunscreen formulations that failed to block UVA rays and contained now-banned ingredients. The best advice is to use sunscreen in conjunction with all the good advice in this list.

Scientists believe that skin exposure is particularly risky for children. Each severe sunburn before the age of 18 ratchets up the risk that skin cancer will develop later in life. So make an extra effort to follow these rules with children and teenagers, keep babies under one year old out of direct sun in the summer, and never leave infants playing or napping in the sun.

Source of Information : Oreilly - Your Body Missing Manual

Friday, November 13, 2009

Only humans allowed

Computing: Can online puzzles that force internet users to prove that they really are human be kept secure from attackers?

ON THE internet, goes the old joke, nobody knows you’re a dog. This is untrue, of course. There are many situations where internet users are required to prove that they are human—not because they might be dogs, but because they might be nefarious pieces of software trying to gain access to things. That is why, when you try to post a message on a blog, sign up with a new website or make a purchase online, you will often be asked to examine an image of mangled text and type the letters into a box. Because humans are much better at pattern recognition than software, these online puzzles—called CAPTCHAs—can help prevent spammers from using software to automate the creation of large numbers of bogus e-mail accounts, for example.

Unlike a user login, which proves a specific identity, CAPTCHAs merely show that “there’s really a human on the other end”, says Luis von Ahn, a computer scientist at Carnegie Mellon University and one of the people responsible for the ubiquity of these puzzles. Together with Manuel Blum, Nicholas J. Hopper and John Langford, Dr von Ahn coined the term CAPTCHA (which stands for “completely automated public Turing test to tell computers and humans apart”) in a paper published in 2000.

But how secure are CAPTCHAs? Spammers stepped up their efforts to automate the solving of CAPTCHAs last year, and in recent months a series of cracks have prompted both Microsoft and Google to tweak the CAPTCHA systems that protect their web-based mail services. “We modify our CAPTCHAs when we detect new abuse trends,” says Macduff Hughes, engineering director at Google. Jeff Yan, a computer scientist at Newcastle University, is one of many researchers interested in cracking CAPTCHAs. Since the bad guys are already doing it, he told a spam-fighting conference in Amsterdam in June, the good guys should do it too, in order to develop more secure designs.

That CAPTCHAs work at all illuminates a failing in artificial-intelligence research, says Henry Baird, a computer scientist at Lehigh University in Pennsylvania and an expert in the design of text-recognition systems. Reading mangled text is an everyday skill for most people, yet machines still find it difficult.

The human ability to recognise text as it becomes more and more distorted is remarkably resilient, says Gordon Legge at the University of Minnesota. He is a researcher in the field of psychophysics—the study of the perception of stimuli. But there is a limit. Just try reading small text in poor light, or flicking through an early issue of Wired. “You hit a point quite close to your acuity limit and suddenly your performance crashes,” says Dr Legge. This means designers of CAPTCHAs cannot simply increase the amount of distortion to foil attackers. Instead they must mangle text in new ways when attackers figure out how to cope with existing distortions.

Mr Hughes, along with many others in the field, thinks the lifespan of text-based CAPTCHAs is limited. Dr von Ahn thinks it will be possible for software to break text CAPTCHAs most of the time within five years. A new way to verify that internet users are indeed human will then be needed. But if CAPTCHAs are broken it might not be a bad thing, because it would signal a breakthrough in machine vision that would, for example, make automated book-scanners far more accurate.



CAPTCHA me if you can
Looking at things the other way around, a CAPTCHA system based on words that machines cannot read ought to be uncrackable. And that does indeed seem to be the case for ReCAPTCHA, a system launched by Dr von Ahn and his colleagues two years ago. It derives its source materials from the scanning in of old books and newspapers, many of them from the 19th century. The scanners regularly encounter difficult words (those for which two different character-recognition algorithms produce different transliterations). Such words are used to generate a CAPTCHA by combining them with a known word, skewing the image and adding extra lines to make the words harder to read. The image is then presented as a CAPTCHA in the usual way.

If the known word is entered correctly, the unknown word is also assumed to have been typed in correctly, and access is granted. Each unknown word is presented as a CAPTCHA several times, to different users, to ensure that it has been read correctly. As a result, people solving CAPTCHA puzzles help with the digitisation of books and newspapers.
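The two-word bookkeeping just described can be sketched as follows (a toy illustration, not the actual ReCAPTCHA implementation; the class name, vote threshold and sample words are all invented):

```python
from collections import Counter

class ReCaptchaSketch:
    """Toy version of the two-word scheme: access is granted on the known
    word alone, while answers for the unknown word are tallied until
    enough users agree on a transcription."""

    def __init__(self, known_word, votes_needed=3):
        self.known_word = known_word
        self.votes = Counter()
        self.votes_needed = votes_needed

    def submit(self, known_answer, unknown_answer):
        if known_answer.strip().lower() != self.known_word:
            return False                     # failed the test: no vote recorded
        self.votes[unknown_answer.strip().lower()] += 1
        return True                          # probably human: grant access

    def digitised_word(self):
        """The unknown word's accepted reading, once consensus is reached."""
        if self.votes:
            word, count = self.votes.most_common(1)[0]
            if count >= self.votes_needed:
                return word
        return None

challenge = ReCaptchaSketch(known_word="morning")
for answer in ("upon", "upon", "uporn", "upon"):
    challenge.submit("morning", answer)
print(challenge.digitised_word())  # "upon" wins, three votes to one
```

The design neatly decouples the two goals: the known word gates access immediately, while the unknown word's transcription only becomes trusted after independent users agree.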

Even better, the system has proved to be far better at resisting attacks than other types of CAPTCHA. “ReCAPTCHA is virtually immune by design, since it selects words that have resisted the best text-recognition algorithms available,” says John Douceur, a member of a team at Microsoft that has built a CAPTCHA-like system called Asirra. The ReCAPTCHA team has a member whose sole job is to break the system, says Dr von Ahn, and so far he has been unsuccessful. Whenever the in-house attacker appears to be making progress, the team responds by adding new distortions to the puzzles.

Even so, researchers are already looking beyond text-based CAPTCHAs. Dr von Ahn’s team has devised two image-based schemes, called SQUIGL-PIX and ESP-PIX, which rely on the human ability to recognize particular elements of images. Microsoft’s Asirra system presents users with images of several dogs and cats and asks them to identify just the dogs or cats. Google has a scheme in which the user must rotate an image of an object (a teapot, say) to make it the right way up. This is easy for a human, but not for a computer.

The biggest flaw with all CAPTCHA systems is that they are, by definition, susceptible to attack by humans who are paid to solve them. Teams of people based in developing countries can be hired online for $3 per 1,000 CAPTCHAs solved. Several forums exist both to offer such services and to parcel out jobs. But not all attackers are willing to pay even this small sum; whether it is worth doing so depends on how much revenue their activities bring in. “If the benefit a spammer is getting from obtaining an e-mail account is less than $3 per 1,000, then CAPTCHA is doing a perfect job,” says Dr von Ahn.

Source of Information : The Economist 2009-09-05

Thursday, November 12, 2009

Memories are made of this

Computing: Memory chips based on nanotubes and iron particles might be capable of storing data for a billion years

FEW human records survive for long, the 16,000-year-old Paleolithic cave paintings at Lascaux, France, being one exception. Now researchers led by Alex Zettl of the University of California, Berkeley, have devised a method that will, they reckon, let people store information electronically for a billion years.

Dr Zettl and his colleagues constructed their memory cell by taking a particle of iron just a few billionths of a metre (nanometres) across and placing it inside a hollow carbon nanotube. They attached electrodes to either end of the tube. By applying a current, they were able to shuttle the particle back and forth. This provides a mechanism to create the “1” and “0” required for digital representation: if the particle is at one end it counts as a “1”, and at the other end it is a “0”.

The next challenge was to read this electronic information. The researchers found that when electrons flowed through the tube, they scattered when they came close to the particle. The particle’s position thus altered the nanotube’s electrical resistance on a local scale. Although they were unable to discover exactly how this happens, they were able to use the effect to read the stored information.
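The write-and-read cycle described in the two paragraphs above can be modelled as a toy memory cell (the 200-nanometre travel distance comes from the article; the resistance values are invented, and the real read-out is a subtler, local resistance change):

```python
class NanotubeBit:
    """Toy model of the shuttle memory: a current pulse pushes the iron
    particle to one end of the tube (write), and the bit is read back
    from the position-dependent resistance, without moving the particle."""

    TUBE_LENGTH_NM = 200           # distance the particle travels (from the article)
    R_BASE, R_EXTRA = 100.0, 25.0  # hypothetical resistances, in ohms

    def __init__(self):
        self.position = 0          # particle at the 0 nm end encodes "0"

    def write(self, bit):
        # Applying a current shuttles the particle to the matching end.
        self.position = self.TUBE_LENGTH_NM if bit else 0

    def read(self):
        # Electrons scatter near the particle, so the measured resistance
        # reveals its position; reading leaves the particle where it is.
        resistance = self.R_BASE + (self.R_EXTRA if self.position else 0)
        return 1 if resistance > self.R_BASE else 0

cell = NanotubeBit()
cell.write(1)
print(cell.read())  # 1
cell.write(0)
print(cell.read())  # 0
```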

What makes the technique so durable is that the particle’s repeated movement does not damage the walls of the tube. That is not only because the lining of the tube is so hard; it is also because friction is almost negligible when working at such small scales.

Theoretical studies suggest that the system should retain information for a long time. To switch spontaneously from a “1” to a “0” would entail the particle moving some 200 nanometres along the tube using thermal energy. At room temperature, the odds of that happening are once in a billion years. In tests, the stored digital information was found to be remarkably stable. Yet the distance between the ends of the tube remains small enough to allow for speedy reading and writing of the memory cell when it is in use.

The next challenge will be to create an electronic memory that has millions of cells instead of just one. But if Dr Zettl succeeds in commercialising this technology, digital decay itself could become a thing of the past.

Source of Information : The Economist 2009-09-05

Wednesday, November 11, 2009

PCS GET HANDY

MICROSOFT’S NEW OS BRINGS IPHONE-LIKE MULTITOUCH TECH TO COMPUTERS


THE TREND
Multitouch screens, which can register more than one finger-press at a time, will let computers trade keyboards and mice for simple strokes and pinches. The models shown here are just the start. Nearly every major PC maker will introduce touch-y designs of various shapes and sizes in the coming months.


WHY NOW
Microsoft Windows 7, which launches October 22, is the first major computer operating system designed to work with multitouch displays. Because it incorporates the software code needed to understand your gestures, manufacturers can now include these screens more easily than ever before.


HOW YOU’LL BENEFIT
Use your fingers instead of a mouse in almost any program; for instance, pinch to zoom out in Google Earth, or drag a finger to scroll through a Web page in Firefox. Developers are also beginning to build applications that use touch in new and more-creative ways—such as in 3-D design programs that let you morph virtual products with a twist—so that formerly complicated tasks will become as easy as a tap.



THE LAPTOP
The T400s looks like an ordinary 14.1-inch laptop, but a touchscreen frees you from the tiny cursor. For instance, you can rearrange two photos at once by dragging them, or partygoers can point at a song they want to hear. Its capacitive screen senses the electrical conductivity of fingers and even recognizes up to four touches at a time. Lenovo T400s with Multitouch Option From $2,000; lenovo.com



THE TABLET
Lose the keyboard entirely with a laptop whose display spins and folds to hide the keys. You can use a stylus to write or draw precisely, since the 13.3-inch screen includes both a flesh-sensing capacitive layer and the same electronic-pen-based layer used by graphic artists. Don’t worry about penmanship: Windows 7 boasts better handwriting recognition. Fujitsu LifeBook T5010 with Multitouch Option From $1,860; fujitsu.com



THE DESKTOP
A 21.5-inch widescreen display makes it easy for even big fingers to hit their mark. Tap where you want to enter text, and up pops Windows 7’s virtual keyboard, which you can enlarge to take advantage of the big screen. Poke at letters using either your fingers or the end of a pencil, since the camera-based optical touchscreen can detect when any opaque object comes in contact. MSI Wind Top All-in-One PC From $730; us.msi.com

Source of Information : Popular Science November

Hard act to follow

Environment: Making softwoods more durable could reduce the demand for unsustainably logged tropical hardwoods

ONE of the reasons tropical forests are being cut down so rapidly is demand for the hardwoods, such as teak, that grow there. Hardwoods, as their name suggests, tend to be denser and more durable than softwoods. But unsustainable logging of hardwoods destroys not only forests but also local creatures and the future prospects of the people who live there.

It would be better to use softwood, which grows in cooler climes in sustainably managed forests. Softwoods are fast-growing coniferous species that account for 80% of the world’s timber. But the stuff is not durable enough to be used outdoors without being treated with toxic preservatives to protect it against fungi and insect pests. These chemicals eventually wash out into streams and rivers, and the wood must be retreated. Moreover, at the end of its life, wood that has been treated with preservatives in this way needs to be disposed of carefully.

One way out of this problem would be an environmentally friendly way of making softwood harder and more durable—something that a Norwegian company called Kebony has now achieved. It opened its first factory in January.

Kebony stops wood from rotting by placing it in a vat containing a substance called furfuryl alcohol, which is made from the waste left over when sugarcane is processed. The vat is then pressurised, forcing the liquid into the wood. Next the wood is dried and heated to 110ºC. The heat transforms the liquid into a resin, which makes the cell walls of the wood thicker and stronger.

The approach is similar to that of a firm based in the Netherlands called Titan Wood. Timber swells when it is damp and shrinks when it is dry because it contains groups of atoms called hydroxyl groups, which absorb and release water. Titan Wood has developed a technique for converting hydroxyl groups into acetyl groups (a different combination of atoms) by first drying the wood in a kiln and then treating it with a chemical called acetic anhydride. The result is a wood that retains its shape in the presence of water, and is no longer recognised as wood by grubs that would otherwise attack it. It is thus extremely durable.

The products made by both companies are completely recyclable, environmentally friendly and create woods that are actually harder than most tropical hardwoods. The strengthened softwoods can be used in everything from window frames to spas to garden furniture. Treated maple is also being adopted for decking on yachts. The cost is similar to that of teak, but the maple is more durable and easier to keep clean.

Obviously treating wood makes it more expensive. But because it does not need to receive further treatments—a shed made from treated wood would not need regular applications of creosote, for example—it should prove economical over its lifetime. Kebony reckons that its pine cladding, for example, would cost a third less than conventionally treated pine cladding over the course of 40 years. Saving money, then, need not be at the expense of helping save the planet.

Source of Information : The Economist 2009-09-05

Tuesday, November 10, 2009

Washing without water

Environment: A washing machine uses thousands of nylon beads, and just a cup of water, to provide a greener way to do the laundry

SYNTHETIC fibres tend to make low-quality clothing. But one of the properties that makes nylon a poor choice of fabric for a shirt, namely its ability to attract and retain dirt and stains, is being exploited by a company that has developed a new laundry system. Its machine uses no more than a cup of water to wash each load of fabrics and uses much less energy than conventional devices.

The system developed by Xeros, a spin-off from the University of Leeds, in England, uses thousands of tiny nylon beads each measuring a few millimetres across. These are placed inside the smaller of two concentric drums along with the dirty laundry, a squirt of detergent and a little water. As the drums rotate, the water wets the clothes and the detergent gets to work loosening the dirt. Then the nylon beads mop it up.

The crystalline structure of the beads endows the surface of each with an electrical charge that attracts dirt. When the beads are heated in humid conditions to the temperature at which they switch from a crystalline to an amorphous structure, the dirt is drawn into the core of the bead, where it remains locked in place.

The inner drum, containing the clothes and the beads, has a small slot in it. At the end of the washing cycle, the outer drum is halted and the beads fall through the slot; some 99.95% of them are collected.

Because so little water is used and the warm beads help dry the laundry, less tumble drying is needed. An environmental consultancy commissioned by Xeros to test its system reckoned that its carbon footprint was 40% smaller than the most efficient existing systems for washing and drying laundry.

The first machines to be built by Xeros will be aimed at commercial cleaners and designed to take loads of up to 20 kilograms. Customers will still be able to use the same stain treatments, bleaches and fragrances that they use with traditional laundry systems. Nylon may be nasty to wear, but it scrubs up well inside a washing machine.

Source of Information : The Economist 2009-09-05

Monday, November 9, 2009

The digital geographers

The internet: Detailed digital maps of the world are in widespread use. They are compiled using both high-tech and low-tech methods

IT IS a damp, overcast Monday morning in Watford, an undistinguished town north of London that seems to offer little to the casual visitor. But one man is eagerly snapping photographs. In fact, he is working with six high-resolution cameras, all of which are attached to the roof of the car in which he is being driven. He sits in the passenger seat with a keyboard on his lap, tapping occasionally and muttering into a microphone. A computer screen built into the dashboard shows the car’s progress as a luminous dot travelling across a map of the town. The man is a geographic analyst for NAVTEQ, one of a small group of companies that are creating new, digital maps of the world.

Each keystroke he makes denotes a feature in the outside world that is added to the map displayed on the screen. New details are also recorded in audio form. Once the journey is finished, the analyst can also pick out new details while watching a video playback. All this information is transferred from a server in the car’s boot to NAVTEQ’s database.

Companies such as NAVTEQ and its rivals, which include Tele Atlas and Microsoft, always start a new map by going to trusted sources such as local governments or mapping organisations. This information can be corroborated using aerial or satellite photography. Only when these sources are exhausted do they switch to the more expensive process of gathering data themselves. The digital maps they create are used mostly by motorists in rich countries. But the same companies are now creating maps of the developing world, which is requiring them to do things in somewhat different ways.

A geographic analyst in India would probably have deserted his vehicle, finding it impractical to manoeuvre on the country’s crowded urban streets. Instead, he would go on foot and use a pen to annotate a map printed on paper, a technique abandoned by his Western counterparts a decade ago. Official mapmaking in some poor countries is far from comprehensive, leaving the likes of NAVTEQ or Tele Atlas to generate the most accurate maps available.

The type of data that must be gathered also varies. Navigation in wealthy Western markets generally requires gathering the information that is of most interest to motorists. But lower levels of car ownership in poor countries make such information less relevant. Instead, the proliferation of mobile phones in countries such as China or India, many of which incorporate satellite-positioning chips, may make pedestrian navigation more relevant for local customers. Mapmakers are more likely to spend time hanging around bus stations collecting timetables, or finding the quickest route, which is not always the most direct one, from a city’s railway station to its main shopping street. All this information has to be constantly refreshed, sometimes several times a year.

To reduce the cost of sending staff on such reconnaissance trips, mapping companies are asking their customers to do more of the work. Tele Atlas, for example, gathers data from users of satellite-navigation systems made by TomTom, a firm based in the Netherlands. Drivers can report errors and suggest new features, or can agree to submit data passively: the TomTom device automatically logs their vehicle’s position, leaving a trail where it has travelled. It is then possible to calculate the vehicle’s direction and speed, which can help identify the class of road on which it is travelling. Altitude measurements mean the road’s gradient can be determined. Other information can also be deduced. If a lot of cars all seem to be driving across what was thought to be a ploughed field, for example, then it is likely that a new road has been built. Such detective work keeps the company’s mapping database up to date.
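The direction-and-speed deduction described above is straightforward geometry on timestamped position fixes. A minimal sketch in Python of that kind of calculation (the coordinates and timings are invented for illustration; this is not TomTom's actual processing pipeline):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in metres between two latitude/longitude fixes
    earth_radius_m = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * earth_radius_m * math.asin(math.sqrt(a))

def speed_and_heading(fix_a, fix_b):
    # Each fix is (timestamp_in_seconds, latitude, longitude)
    t1, lat1, lon1 = fix_a
    t2, lat2, lon2 = fix_b
    speed_kmh = haversine_m(lat1, lon1, lat2, lon2) / (t2 - t1) * 3.6
    # Initial bearing from fix_a to fix_b, in degrees clockwise from north
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    heading_deg = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
    return speed_kmh, heading_deg

# Two fixes ten seconds apart, moving due north at motorway speed:
# a sustained trail like this suggests a motorway-class road.
speed, heading = speed_and_heading((0, 51.650, -0.400), (10, 51.653, -0.400))
```

A sustained run of high-speed, straight-line segments points to a motorway; low speeds with frequent turns suggest urban streets.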

In some parts of the world, however, mapmaking relies heavily on voluntary contributions. Google’s Map Maker service, for example, makes up for the lack of map data for much of the world by asking volunteers to provide it. Among its contributors is Tim Akinbo, a Nigerian software developer who got involved with the project last year. He has mapped recognisable features in Lagos, where he lives, as well as his home town of Jos. Churches, banks, office buildings and cinemas all feature on his map.

His working method is relatively simple. His mobile phone does not have satellite positioning, but he can use it to call up Google Maps, see what is on the map in a particular area and make a note of things to add. He then goes online when he gets home to add new features.

Why should people freely give up their time to improve local maps? Mr Akinbo explains that local businesses could use Map Maker to alert potential customers to their existence. “They will be contributing to a tool from which other people can benefit, as well as themselves,” he explains. With enough volunteers a useful map can be created without the need for fancy camera-toting cars.

Source of Information : The Economist 2009-09-05

Sunday, November 8, 2009

The taxonomy of tumours

Medicine: A new technique aims to measure the activity of a tumour, and could also help provide a new way to classify cancers

ONCOLOGISTS would like to be able to classify cancers not by whereabouts in the body they occur, but by their molecular origin. They know that certain molecules become active in tumours found in certain parts of the body. Both head-and-neck cancers and breast cancers, for example, have an abundance of molecules called epidermal growth-factor receptors (EGFRs). Now a team from Cancer Research UK’s London Research Institute has taken a step towards this goal. Their technique can already identify how advanced a person’s cancer is, and thus how likely it is to return after treatment.

At present, pathologists assess how advanced a cancer is by taking a sample, known as a biopsy, and examining the concentration within it of specific receptors, such as EGFRs, that are known to help cancers spread. Peter Parker had the idea of employing a technique called fluorescence resonance-energy transfer (FRET), which is used to study interactions between individual protein molecules, to see if he could find out not only how many receptors there are in a biopsy, but also how active they are.

The technique uses two types of antibody, each attached to a fluorescent dye molecule. Each of the two types is selected to fuse with a different part of an EGFR molecule, but one will do so only when the receptor has become active.

Pointing a laser at the sample causes the first dye to become excited and emit energy. With an activated receptor, the second dye will be attached nearby and so will absorb some of the energy given off by the first. Measuring how much energy is transferred between the two dyes indicates the activity of the receptors.
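The energy transfer described here is conventionally summarised as a FRET efficiency. A minimal illustration of the standard intensity-based calculation (the readings are invented; the group's actual instrument works from microscope images rather than two scalar values):

```python
def fret_efficiency(donor_alone, donor_with_acceptor):
    # Intensity-based FRET efficiency: E = 1 - F_DA / F_D.
    # The more the nearby acceptor dye quenches the donor's emission,
    # the higher E, i.e. the more receptors are in the activated state.
    return 1.0 - donor_with_acceptor / donor_alone

# Invented readings: donor fluorescence falls from 100 to 70 arbitrary
# units once the second, activation-specific antibody is bound nearby.
efficiency = fret_efficiency(100.0, 70.0)
```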

Dr Parker’s idea was implemented by his colleague Banafshe Larijani. She and her colleagues used FRET to measure the activity of receptors in 122 head-and-neck cancers. They found that the higher the activity of the receptors they examined, the more likely it was that the cancers would return quickly following treatment. The technique was found to be a better prognostic tool than conventional visual analysis of receptor density.

To speed things up, engineers in the same group have now created an instrument that automates the analysis. Tumour biopsies are placed on a microscope slide and stained with antibodies. The system then points the laser at the samples, records images of the resulting energy transfer and interprets those images to provide FRET scores. Results are available in as little as an hour, compared with four or five days using standard methods.

Having established the principle with head-and-neck cancer, the team hopes to extend it. They are beginning a large-scale trial to see whether FRET can accurately “hindcast” the clinical outcomes associated with 2,000 breast-cancer biopsies. Moreover, if patterns of receptor-activation for other types of cancers can be characterised, the technique could be applied to all solid tumours (ie, cancers other than leukaemias and lymphomas).

If they succeed, it will be good news for researchers who want to switch from classifying cancers anatomically to classifying them biochemically. Most cancer specialists think that patients with tumours in different parts of the body that are triggered by the same genetic mutations may have more in common than those whose tumours are in the same organ, but have been caused by different mutations. The new approach could help make such classification routine. That could, in turn, create a new generation of therapies and help doctors decide which patients should receive them, and in which combinations and doses.

Source of Information : The Economist 2009-09-05

Saturday, November 7, 2009

Air power

Energy: Batteries that draw oxygen from the air could provide a cheaper, lighter and longer-lasting alternative to existing designs

MOBILE phones looked like bricks in the 1980s. That was largely because the batteries needed to power them were so hefty. When lithium-ion batteries were invented, mobile phones became small enough to be slipped into a pocket. Now a new design of battery, which uses oxygen from ambient air to power devices, could provide an even smaller and lighter source of power. Not only that, such batteries would be cheaper and would run for longer between charges.

Lithium-ion batteries have two electrodes immersed in an electrically conductive solution, called an electrolyte. One of the electrodes, the cathode, is made of lithium cobalt oxide; the other, the anode, is composed of carbon. When the battery is being charged, positively charged lithium ions break away from the cathode and travel in the electrolyte to the anode, where they meet electrons brought there by a charging device. When electricity is needed, the anode releases the lithium ions, which rapidly move back to the cathode. As they do so, the electrons that were paired with them in the anode during the charging process are released. These electrons power an external circuit.

Peter Bruce and his colleagues at the University of St Andrews in Scotland came up with the idea of replacing the lithium cobalt oxide electrode with a cheaper and lighter alternative. They designed an electrode made from porous carbon and lithium oxide. They knew that lithium oxide forms naturally from lithium ions, electrons and oxygen, but, to their surprise, they found that it could also be made to separate easily when an electric current passed through it. They exposed one side of their porous carbon electrode to an electrolyte rich in lithium ions and put a mesh window on the other side of the electrode through which air could be drawn. Oxygen from the air took the place of the cobalt oxide.

When they charged their battery, the lithium ions migrated to the anode where they combined with electrons from the charging device. When they discharged it, lithium ions and electrons were released from the anode. The ions crossed the electrolyte and the electrons travelled round the external circuit. The ions and electrons met at the cathode, and combined with the oxygen to form lithium oxide that filled the pores in the carbon.

Because the oxygen being used by the battery comes from the surrounding air, the device that Dr Bruce’s team has designed can be a mere one-eighth to one-tenth the size and weight of modern batteries, while still carrying the same charge. Making such a battery is also expected to be cheaper. Lithium cobalt oxide accounts for 30% of the cost of a lithium-ion battery. Air, however, is free.

Source of Information : The Economist 2009-09-05

Friday, November 6, 2009

Trappings of waste

Materials science: Plastic beads may provide a way to mop up radiation in nuclear power-stations and reduce the amount of radioactive waste

NUCLEAR power does not emit greenhouse gases, but the technology does have another rather nasty byproduct: radioactive waste. One big source of low-level waste is the water used to cool the core in the most common form of reactor, the pressurised-water reactor. A team of researchers led by Börje Sellergren of the University of Dortmund in Germany, and Sevilimedu Narasimhan of the Bhabha Atomic Research Centre in Kalpakkam, India, think they have found a new way to deal with it. Their solution is to mop up the radioactivity in the water with plastic.

In a pressurised-water reactor, hot water circulates at high pressure through steel piping, dissolving metal ions from the walls of the pipes. When the water is pumped through the reactor’s core, these ions are bombarded by neutrons and some of them become radioactive. The ions then either settle back into the walls of the pipes, making the pipes themselves radioactive, or continue to circulate, making the water radioactive. Either way, a waste-disposal problem is created.

Because the pipes are steel, most of the ions are iron. When the commonest isotope of iron (56Fe) absorbs a neutron, the result is not radioactive. The steel used in the pipes, however, is usually alloyed with cobalt to make it stronger. When common cobalt (59Co) absorbs a neutron the result is 60Co, which is radioactive and has a half-life of more than five years.
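Cobalt-60's decay follows the usual exponential law, which makes it easy to see why a half-life of over five years turns activated cooling water into a long-term disposal problem. A quick sketch, using the commonly cited half-life of 5.27 years:

```python
def fraction_remaining(t_years, half_life_years=5.27):
    # Exponential decay: N(t) / N(0) = (1/2) ** (t / t_half)
    return 0.5 ** (t_years / half_life_years)

# After one half-life, half of the cobalt-60 is still radioactive;
# after a decade, roughly a quarter of it remains.
after_ten_years = fraction_remaining(10.0)
```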

At present, nuclear engineers clean cobalt from the system by trapping it in what are known as ion-exchange resins. These swap bits of themselves for ions in the water flowing over them. Unfortunately, the ion-exchange technique traps many more non-radioactive iron ions than radioactive cobalt ones.

To overcome that problem Drs Sellergren and Narasimhan have developed a polymer that binds to cobalt while ignoring iron. They made the material using a technique called molecular imprinting, which involves making the polymer in the presence of cobalt ions, and then extracting those ions by dissolving them in hydrochloric acid. The resulting cobalt-sized holes tend to trap any cobalt ions that blunder into them, with the result that a small amount of the polymer can mop up a lot of radioactive cobalt.

The team is now forming the new polymer into small beads that can pass through the cooling systems of nuclear power-stations. Concentrating radioactivity into such beads for disposal would be cheaper than trying to get rid of large volumes of low-level radioactive waste, according to Dr Narasimhan. He thinks that the new polymer could also be used to decontaminate decommissioned nuclear power-stations where residual radioactive cobalt in pipes remains a problem.

Nuclear power is undergoing a renaissance. Some 40 new nuclear power-stations are being built around the world. The International Atomic Energy Agency estimates that a further 70 will be built over the next 15 years, most of them in Asia. That is in addition to the 439 reactors which are already operating. So there will be plenty of work for the plastic beads, if Drs Sellergren and Narasimhan can industrialise their process.

Source of Information : The Economist 2009-09-05

Thursday, November 5, 2009

Keeping a grip

Transport: A new type of tyre, equipped with built-in sensors, can help avoid a skid—and could also improve fuel-efficiency

FEW sensations of helplessness match that of driving a car that unexpectedly skids. In a modern, well-equipped (and often expensive) car, electronic systems such as stability and traction control, along with anti-lock braking, will kick in to help the driver avoid an accident. Now a new tyre could detect when a car is about to skid and switch on safety systems in time to prevent it. It could also improve the fuel-efficiency of cars to which it is fitted.

The Cyber Tyre, developed by Pirelli, an Italian tyremaker, contains a small device called an accelerometer which uses tiny sensors to measure the acceleration and deceleration along three axes at the point of contact with the road. A transmitter in the device sends those readings to a unit that is linked to the braking and other control systems.

The accelerometers in the Cyber Tyre contain two tiny structures, the distance between which changes during acceleration, altering the electrical capacitance of the device, which is measured and converted into a voltage. Powered by energy scavengers that exploit the vibration of the tyre, the device encapsulating the accelerometers and the transmitter is about 2.5 centimetres in diameter and about the thickness of a coin.
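The capacitance change described above is the standard principle of a capacitive MEMS accelerometer: a proof mass suspended between two plates narrows one gap and widens the other as it accelerates. A toy model (the geometry figures below are invented for illustration, not Pirelli's actual design):

```python
VACUUM_PERMITTIVITY = 8.854e-12  # farads per metre

def plate_capacitance(area_m2, gap_m):
    # Parallel-plate capacitance: C = epsilon_0 * A / d
    return VACUUM_PERMITTIVITY * area_m2 / gap_m

def differential_capacitance(area_m2, rest_gap_m, displacement_m):
    # Acceleration shifts the proof mass by displacement_m, narrowing one
    # gap and widening the other; the capacitance difference is what is
    # converted into a voltage and transmitted to the control unit.
    c_near = plate_capacitance(area_m2, rest_gap_m - displacement_m)
    c_far = plate_capacitance(area_m2, rest_gap_m + displacement_m)
    return c_near - c_far
```

Measuring the difference between the two gaps, rather than a single capacitance, cancels out common-mode effects such as temperature drift.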

Constantly monitoring the forces that tyres are subjected to as they grip the road could help reduce fuel consumption by optimising braking and suspension. Moreover, it could promote the greater use of tyres with a low rolling-resistance, which are often fitted to hybrid vehicles. These save fuel by reducing the resistance between the tyre and the road but, to do so, they have a reduced grip, especially in the wet. If fitted with sensors, such tyres could be more closely monitored and controlled in slippery conditions.

Pirelli believes its new tyre could be fitted to cars in 2012 or 2013, but this will depend on getting carmakers to incorporate the necessary monitoring and control systems into their vehicles. As with most innovations, these are expected to be available in upmarket models first, and cheaper cars later. But if the introduction in 1973 of Pirelli’s steel-belted Cinturato radial tyre is any guide, devices that make cars safer will be adopted rapidly.

Source of Information : The Economist 2009-09-05

Wednesday, November 4, 2009

Vitamin D

Not long ago, vitamin D was considered dull and definitely unsexy. Sure, it was known to help your body absorb calcium and prevent rickets (a childhood disease that softens the bones and causes debilitating deformities). But adding a dash of vitamin D to milk and a few other vitamin-fortified foods solved the problem, and no one thought much about vitamin D— until recently.

Today, vitamin D has leapt to the forefront of the supplement world, thanks to several new studies that suggest it plays a role in the prevention of cancer and other diseases. It’s no longer treated as a simple calcium-booster— vitamin D now has its own starring role as a hormone that triggers a range of cellular processes. Time will tell if science validates this promising new research, or if it becomes another dead end in the vast maze of nutrition science. In the meantime, there’s good reason to make sure your body has a solid dose of the stuff.

Vitamin D is naturally present in very few foods, but your skin has the ability to create this wonder drug when you expose it to the ultraviolet rays of the sun. The cells that carry out this operation lie at the bottom of your epidermis. You need surprisingly little exposure to the sun to maintain a healthy supply of vitamin D. The rule of thumb is 10 or 15 minutes of direct sun exposure, two or three times a week, on just part of your body (say, your face, hands, and arms). After that, it’s time to reach for the sunscreen.

Unfortunately, the vitamin D manufacturing process doesn’t work well in diffuse sunlight—say, in the winter months of a Northern state. Cloud cover and pollution also dramatically reduce the amount of ultraviolet light that reaches your skin. For example, in Boston, sunlight is too weak to trigger vitamin D synthesis from November through February. To make up the difference, you can take a vitamin D supplement—typically, 1,000 IU each day (look for this measure on the bottle), until summer rolls around again. This is roughly the amount of vitamin D that you’d get from 10 glasses of milk.
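The supplement arithmetic above is easy to check, assuming the roughly 100 IU per glass of fortified milk implied by the ten-glass comparison:

```python
daily_supplement_iu = 1000      # typical winter supplement dose
iu_per_glass_of_milk = 100      # assumed, per the ten-glass comparison

glasses_equivalent = daily_supplement_iu / iu_per_glass_of_milk
```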

Supplementing your diet with vitamin D is particularly important if you have brown or black skin, because this natural sunscreen makes it more difficult to synthesize vitamin D.

The key point to remember is that the amount of sun exposure you need to synthesize vitamin D is very little in the summer months (or in a tropical climate). But in late fall and winter, you can run around in boxer shorts without producing a microgram of vitamin D.

Source of Information : Oreilly - Your Body Missing Manual

Tuesday, November 3, 2009

Controlling the Mite Population

If you have dust mite allergies or asthma, you may be able to improve your life with a bit of extra work. These tips can help cut down on the number of dust mite colonies that live with you:

• Control dust. Vacuum often, dust flat surfaces, switch from carpet to hardwood floors, and remove knickknacks that collect dust. None of these steps will kill dust mites, but you can keep their numbers down by reducing their food supply.

• Control humidity. Dust mites thrive in moist environments. Sadly, no matter how dry your house is, your breathing and perspiration provide more than enough dampness to keep them happy in your bedding.

• Use cold and heat. If you can wash your bedding at scaldingly high temperatures—at least 130 degrees Fahrenheit—you can kill the mites that are there (although this obviously has no effect on the many more mites in your mattress). If you have a plushy object you can’t launder, like a child’s stuffed toy, a day in the freezer will also kill the mites, although it may leave lint on your frozen peas.

• Use allergen-proof covers. Many companies sell zippered covers for mattresses and pillows that can reduce the number of dust mites that get into your bedding and the amount of allergenic excrement that comes floating out once they’re in it. Of course, some mattress covers are about as comfortable as sleeping on a vinyl tablecloth. And frequent laundering may stretch the microscopic pores of the cover so that they’re big enough to let everything through anyway. If you decide to try this approach, it’s worth doing some research before you buy.

• When travelling, don’t think about it. Sure, there are probably plenty of dust mites in hotels, bed-and-breakfasts, and so on, but you’ll be home soon enough. If you really must feed your paranoia, obsess about something more serious, like bed bugs (see http://en.wikipedia.org/wiki/Bedbug for travel tips that can help you spot these very unwelcome bedmates).
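For readers who think in Celsius, the 130-degree washing threshold mentioned above converts with the usual formula:

```python
def fahrenheit_to_celsius(temp_f):
    # C = (F - 32) * 5 / 9
    return (temp_f - 32) * 5.0 / 9.0

# The 130-degree wash threshold is a little over 54 degrees Celsius.
wash_threshold_c = fahrenheit_to_celsius(130)
```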

Source of Information : Oreilly - Your Body Missing Manual

Monday, November 2, 2009

The Protective Wrapper

When people think about the purpose of skin, most settle on the obvious— the way a few millimeters of tissue keeps their blood from oozing messily out of their body.

While a bit of skin certainly helps hold you together, it also plays several additional roles. First and foremost, it’s a protective barrier that separates you from the harsh world outside. It helps keep water and nutrients inside your body, where they belong, and it keeps undesirable elements—like toxins and marauding bacteria—outside.


Building a Barrier
To understand how your skin works its defensive mojo, you first need to understand that it’s actually made up of two distinct layers: the epidermis (which is on the very outside) and the dermis (which is just underneath the epidermis).

The epidermis is your body’s first line of defense. It transforms dead skin cells into a tough, protective layer.

Healthy skin cells start at the bottom of your epidermis, about 1/3 of an inch down, living an easy life and cheerily reproducing. As these cells mature, they get ready to face the outside world by producing a fibrous, waterproof compound called keratin. Keratin is a biological wonder substance. Your body uses it to build your nails and hair, and it’s the basis of some of the sexier trimmings of other animals, including claws, horns, hooves, scales, shells, and beaks.

When your body produces fresh skin cells, these newcomers push the older cells out of the crowded neighborhood at the base of the epidermis and toward the surface of the skin. The trip takes anywhere from a couple of weeks to a month. By the time a skin cell reaches the surface, it’s little more than a dead, scale-like structure that’s filled with keratin but contains none of the ordinary cellular machinery. Each surface skin cell lasts about 30 days on the outside, which means you get an entirely new skin every month.

On most of your body, the epidermis is barely thicker than this page. However, the skin on the palms of your hands and the soles of your feet is much thicker, so it can spend all day slapping up against the outside world without wearing off.

Cells are the smallest building block of life. All living creatures—from slimy amoebas to still slimier car salesmen—are made up of cells. Your body contains trillions of cells, many of which don’t belong to you at all. (In fact, the teeny bacteria that digest food in your intestines account for more than half of the cells in your body.) Although the process isn’t as dramatic, humans shed their skin (and replace it) more often than snakes do. So the next time you act all repulsed by a reptile, perhaps it should really be the other way around.


Shedding Your Skin
Every day, you lose millions of dead skin cells. They don’t fall off all at once—instead, you leave a trail of shed skin everywhere you go. We could tell you how many you lose each minute, but it’s really not that important and likely to make you a little nauseous. (All right, if you insist—30,000 or so scales of skin flake off your body every minute. Right now, on your clothes, on whatever piece of furniture you’re sitting on, and so on. Over the course of a year, you lose about a pound of the stuff.)

You might wonder why you never see much of this skin lying around. That’s because once your skin leaves your body, it’s known by another name: dust. Good estimates suggest that the majority of the material you vacuum off your carpet every week (or every month, or every year) is errant skin flakes. That means that when you clean your house, you’re vacuuming up bits and pieces of yourself and the people who live around you. Yes, there’s some genuine sock lint in there, some cookie crumbs, and a bit of tracked-in-from-outside dirt, but it’s mostly skin. Because skin flakes are thin and nearly transparent, your household dust almost always has a light, silvery-grey color.

If you want to take a look at your dead skin before it ends up somewhere else, you can try this somewhat unsettling experiment: Stick a piece of clear tape on the back of your hand, strip it off, and then hold it up to a light. You’ll find hundreds of freshly shed skin cells preserved for your inspection.


The Creature That Eats Your Skin
It turns out that your skin flakes have yet another name: lunch. That’s what they are to an unusual family of creatures that exists on a diet made up entirely of dead skin. (And no, they’re not zombies.) The culprits are dust mites— very tiny, distant relatives of the common household spider. Dust mites live in our houses by the millions, with most of them taking up residence in upholstered furniture, drapery, carpets, and— above all—mattresses. Dust mites need just three things for a life of contentment: warmth, moisture, and a steady diet of skin flakes. In your bed, they get all three.

You won’t actually see the dust mites that share your home, because they’re vanishingly small (a family of mites could pack themselves into the period at the end of this sentence). But if you looked at one under a microscope, you’d see an otherworldly, eight-legged creature.

If you’re like most people, dust mites are no big deal and you can safely forget about them. But for some people (estimates suggest one to three people out of 10), dust mites can trigger allergies and even asthma attacks. Common symptoms of dust-mite allergies include sore eyes, an itchy throat, and sneezing fits. If you think you might be allergic to dust mites, it’s worth going to an allergy specialist, who can give you a quick and painless skin-prick test. If you are allergic, you may want to use some of the tips in the box on the next page to help reduce your symptoms.

The problem isn’t the mites themselves—it’s their excrement and (ironically enough) the skin they shed. And here’s more information you probably don’t want to know: Dust mites actually eat and excrete the same skin flake several times, until they’ve finally digested all the goodness out of it.

Before you let the idea of dust mites ruin your day, remind yourself that, unlike some other mites and other nasties, dust mites don’t actually live on your skin— they live in the fabric of the objects around you. In fact, dust mites have absolutely no interest in crawling on your body.

As far as critters you don’t want to think about go, there’s good news, too: Two stubborn skin dwellers that have plagued humankind for generations—the human flea and the body louse—are no longer much to worry about. In Elizabethan times, these creatures crawled into bed with virtually everyone, rich and poor. Today, thanks to relatively simple conveniences like scalding-hot water and laundry machines, these pests (and the unrelenting itchiness they cause) are virtually unknown in the Western world.

Source of Information : Oreilly - Your Body Missing Manual

Sunday, November 1, 2009

Skin Care

It might make you a little queasy, but everything you do to care for your skin—slathering on moisturizer, scrubbing with a sponge, and so on—you do to a layer of lifeless cells. Your morning shower involves scrubbing away the oldest and loosest skin cells—not to reveal the living cells underneath (which aren’t tough enough to face the outside world)—but to reveal more dead skin. In this respect, people are rather like trees, covered in a dead-as-a-doornail layer of protective bark. But don’t give up on your skin just yet. Dead as it may be, your skin cells still need proper upkeep. Here are some points to consider:

• Basic cleaning. Neatniks take heart—even dead skin needs a regular bath. If you leave your dead skin undisturbed, it will mix with sweat and dirt to form a very tasty snack for the bacteria that live on your skin. As the bacteria digest this mixture, they produce a foul smell that will earn you some extra personal space on the subway.

• Moisturizing. Ordinary soaps are harsh and drying. They strip away the natural oils in your skin. Unfortunately, this dry skin loses its natural protection against bacteria, which can then slip in through cracks and fissures in your skin. To keep your defenses up, rub lotion on your hands when they become dry (for many people, that means after every washing), and use gentle cleansers on other parts of your body (like your face).

• Exfoliation. Some people swear by special scrubs and brushes for removing dead skin cells. While exfoliation may improve the feel of your skin and temporarily enhance its appearance, exfoliation overachievers are likely to end up with dry, inflamed skin. So if you’re an exfoliating junkie, limit your sessions to twice a week, and moisturize your skin to replace the natural oils you’ve just scrubbed away.

Source of Information : Oreilly - Your Body Missing Manual