Saturday, June 30, 2012

Is Raw Cookie Dough a Killer?

For many, baking cookies is a labor of love that’s sweetened by the occasional stealthy scoop of raw cookie dough. But there’s a sinister side to this guilty pleasure: Public health officials warn that raw eggs can contain stomach-churning salmonella bacteria, which can cause fever, diarrhea, and even death. So should cookie bakers keep their fingers to themselves?

First, it’s important to realize that no one really knows how many eggs are contaminated with salmonella. In the past, experts thought that salmonella lived on eggshells, but couldn’t make its way into an egg without traveling through a hairline crack. Today we know that salmonella can pass from the ovaries of infected hens straight into their developing eggs.

In the Northeastern states of the U.S., solid estimates suggest that 1 in 10,000 eggs is contaminated with salmonella. That means you could eat an entire batch of two-egg cookie dough and face only a 0.02 percent chance of a night on the toilet. Of course, these figures are only estimates—some studies put the number of infected eggs at 1 in 20,000, while at least one ratchets it up to 1 in 700.
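
For the numerically curious, that 0.02 percent figure comes from a one-line probability calculation. Here is a minimal sketch in Python, assuming the 1-in-10,000 estimate and that each egg’s contamination is independent:

  # Chance that raw dough made with two eggs contains at least
  # one salmonella-tainted egg, under the 1-in-10,000 estimate.
  p_egg = 1 / 10_000                  # contamination rate per egg
  eggs = 2                            # eggs in one batch of dough

  # P(at least one bad egg) = 1 - P(every egg is clean)
  p_batch = 1 - (1 - p_egg) ** eggs
  print(f"{p_batch:.4%}")             # prints 0.0200%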

Even then, tainted dough may not be as dangerous as it seems. Studies show that salmonella needs the power of numbers to wreak its damage, and healthy volunteers don’t get a serious infection unless they ingest about 1 million salmonella organisms. (This is notably different from dangerous strains of E. coli, which can breach your body’s defenses in very small numbers—as few as 200 bacteria.) And if you do get infected with salmonella, the odds are overwhelming that you’ll be back on your feet in a week with nothing worse than some painful memories.

The bottom line? Eating raw cookie dough is particularly risky for young children, pregnant women, the elderly, and people with impaired immune systems—all of whom are more likely to suffer dangerous complications. (And to be consistently paranoid about egg safety, none of these individuals should eat a runny-yolked egg, which may still harbor bacteria.) But an average, healthy adult with a normally functioning immune system has a relatively small risk of serious health trouble. On the other hand, exercise caution when dealing with foods that traditionally use raw eggs—such as Caesar salad dressing, eggnog, and homemade ice cream. These foods aren’t eaten immediately, which gives bacteria time to multiply and reach more dangerous levels. To keep these foods safe, make them with pasteurized egg products.

Source of Information : O’Reilly - Your Body: The Missing Manual

Wednesday, June 27, 2012

Nano-Size Germ Killers

Tiny knives could be important weapons against superbugs

Drug-resistant tuberculosis is roaring through Europe, according to the World Health Organization. Treatment options are few—antibiotics do not work on these highly evolved strains—and about 50 percent of people who contract the disease will die from it. The grim situation mirrors the fight against other drug-resistant diseases such as MRSA, a staph infection that claims 19,000 lives in the U.S. every year. Hope comes in the form of a nanotech knife. Scientists working at IBM Research–Almaden have designed a nanoparticle capable of utterly destroying bacterial cells by piercing their membranes.

The nanoparticles’ shells have a positive charge, which binds them to negatively charged bacterial membranes. “The particle comes in, attaches, and turns itself inside out and drills into the membrane,” says Jim Hedrick, an IBM materials scientist working on the project with collaborators at Singapore’s Institute of Bioengineering and Nanotechnology. Without an intact membrane, the bacterium shrivels away like a punctured balloon. The nanoparticles are harmless to humans—they do not touch red blood cells, for instance—because human cell membranes do not have the same electrical charge that bacterial membranes do. After the nanostructures have done their job, enzymes break them down, and the body flushes them out.

Hedrick hopes to see human trials of the nanoparticles in the next few years. If the approach holds up, doctors could squirt nanoparticle-infused gels and lotions onto hospital patients’ skin, warding off MRSA infections. Or workers could inject the particles into the bloodstream to halt systemic drug-resistant organisms, such as streptococci, which can cause sepsis and death. Even if it succeeds, such a treatment would have to overcome any unease over the idea of nanotech drills in the bloodstream. But the nastiest bacteria on the planet won’t succumb easily.

Source of Information : Scientific American Magazine 

Sunday, June 24, 2012

Crops That Don’t Need Replanting

Year-round crops can stabilize the soil and increase yields. They may even fight climate change

Before agriculture, most of the planet was covered with plants that lived year after year. These perennials were gradually replaced by food crops that have to be replanted every year. Now scientists are contemplating reversing this shift by creating perennial versions of familiar crops such as corn and wheat. If they are successful, yields on farmland in some of the world’s most desperately poor places could soar. The plants might also soak up some of the excess carbon in the earth’s atmosphere.

Agricultural scientists have dreamed of replacing annuals with equivalent perennials for decades, but the genetic technology needed to make it happen has appeared only in the past 10 or 15 years, says agroecologist Jerry Glover. Perennials have numerous advantages over crops that must be replanted every year: their deep roots prevent erosion, which helps soil hold onto critical minerals such as phosphorus, and they require less fertilizer and water than annuals do. Whereas conventionally grown monocrops are a source of atmospheric carbon, land planted with perennials does not require tilling, turning it into a carbon sink. Farmers in Malawi are already getting radically higher yields by planting rows of perennial pigeon peas between rows of their usual staple, corn. The peas are a much needed source of protein for subsistence farmers, but the legumes also increase soil water retention and double soil carbon and nitrogen content without reducing the yield of the primary crop on a given plot of land.

Taking perennials to the next level—adopting them on the scale of conventional crops—will require a significant scientific effort, however. Ed Buckler, a plant geneticist at Cornell University who plans to develop a perennial version of corn, thinks it will take five years to identify the genes responsible for the trait and another decade to breed a viable strain. “Even using the highest-technology approaches available, you’re talking almost certainly 20 years from now for perennial maize,” Glover says. Scientists have been accelerating the development of perennials by using advanced genotyping technology. They can now quickly analyze the genomes of plants with desirable traits to search for associations between genes and those traits. When a first generation of plants produces seeds, researchers sequence young plants directly to find the handful out of thousands that retain those traits (rather than waiting for them to grow to adulthood, which can take years).

Once perennial alternatives to annual crops are available, rolling them out could have a big impact on carbon emissions. The key is their root systems, which would sequester, in each cubic meter of topsoil, an amount of carbon equivalent to 1 percent of the mass of that dirt. Douglas Kell, chief executive of the U.K.’s Biotechnology and Biological Sciences Research Council, has calculated that replacing 2 percent of the world’s annual crops with perennials each year could remove enough carbon to halt the increase in atmospheric carbon dioxide. Converting all of the planet’s farmland to perennials would sequester the equivalent of 118 parts per million of carbon dioxide—enough, in other words, to pull the concentration of atmospheric greenhouse gases back to preindustrial levels.
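
To put rough numbers on that root-sequestration claim, here is a back-of-the-envelope sketch in Python. The topsoil density is my assumption (about 1,300 kilograms per cubic meter is a common textbook figure); the article supplies only the 1 percent ratio:

  # Carbon stored by perennial roots, per the article's 1 percent figure.
  topsoil_density = 1_300   # kg per cubic meter (assumed typical value)
  carbon_fraction = 0.01    # carbon equal to 1% of the topsoil's mass

  carbon_per_m3 = topsoil_density * carbon_fraction
  print(carbon_per_m3)      # 13 kg of carbon per cubic meter of topsoil

  # Scaled to one hectare of topsoil a meter deep (10,000 cubic meters):
  print(carbon_per_m3 * 10_000 / 1_000)   # 130 metric tons of carbon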

Source of Information : Scientific American Magazine 

Wednesday, June 20, 2012

Microbe Miners

Bacteria extract metals and clean up the mess afterward

Mining hasn’t changed much since the Bronze Age: to extract valuable metal from an ore, apply heat and a chemical agent such as charcoal. But this technique requires a lot of energy, which means that it is too expensive for ores with lower metal concentrations.

Miners are increasingly turning to bacteria that can extract metals from such low-grade ores, cheaply and at ambient temperatures. Using the bacteria, a mining firm can extract up to 85 percent of a metal from ores with a metal concentration of less than 1 percent by simply seeding a waste heap with microbes and irrigating it with diluted acid. Inside the heap Acidithiobacillus or Leptospirillum bacteria oxidize iron and sulfur for energy. As they eat, they generate reactive ferric iron and sulfuric acid, which degrade rocky materials and free the valued metal.
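
To make those percentages concrete, here is a quick back-of-the-envelope calculation in Python. The 0.5 percent ore grade is a hypothetical example; the article says only that grades run below 1 percent:

  # Metal recovered from one tonne of low-grade ore by bioleaching.
  ore_mass = 1_000        # kg of ore (one metric ton)
  metal_grade = 0.005     # 0.5% metal content (hypothetical example)
  recovery = 0.85         # up to 85% extraction, per the article

  recovered_kg = ore_mass * metal_grade * recovery
  print(recovered_kg)     # 4.25 kg of metal per tonne of ore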

Biological techniques are also being used to clean up acidic runoff from old mines, extracting a few last precious bits of metal in the process. Bacteria such as Desulfovibrio and Desulfotomaculum neutralize acids and create sulfides that bond to copper, nickel and other metals, pulling them out of solution.

Biomining has seen unprecedented growth in recent years as a result of the increasing scarcity of high-grade ores. Nearly 20 percent of the world’s copper comes from biomining, and production has doubled since the mid-1990s, estimates mining consultant Corale Brierley. “What mining companies used to throw away is what we call ore today,” Brierley says. The next step is unleashing bacterial janitors on mine waste. David Barrie Johnson, who researches biological solutions to acid mine drainage at Bangor University in Wales, estimates that it will take 20 years before bacterial mine cleanup will pay for itself. “As the world moves on to a less carbon-dependent society, we have to look for ways of doing things that are less energy-demanding and more natural,” Johnson says. “That’s the long-term objective, and things are starting to move nicely in that direction.”

Source of Information : Scientific American Magazine 

Sunday, June 17, 2012

Currency without Borders

The world’s first digital currency cuts out the middleman and keeps users anonymous

Imagine if you were to walk into a deli, order a club sandwich, throw some dollar bills down and have the cashier say to you, “That’s great. All I need now is your name, billing address, telephone number, mother’s maiden name, and bank account number.” Most customers would balk at these demands, and yet this is precisely how everyone pays for goods and services over the Internet.

There is no currency on the Web that is as straightforward and anonymous as the dollar bill. Instead we rely on financial surrogates such as credit-card companies, which handle our transactions but pocket a percentage of the sale along with our personal information. That could change with the rise of Bitcoin, an all-digital currency that is as liquid and anonymous as cash. It’s “as if you were taking a dollar bill, squishing it into your computer and sending it out over the Internet,” says Gavin Andresen, one of the leaders of the Bitcoin network.

Bitcoins are bits—strings of code that can be transferred from one user to another over a peer-to-peer network. Whereas most strings of bits can be copied ad infinitum (a property that would render any currency worthless), users can spend a Bitcoin only once. Strong cryptography protects Bitcoins against would-be thieves, and the peer-to-peer network eliminates the need for a central gatekeeper such as Visa or PayPal. The system puts power in the hands of the users, not financial middlemen.

Bitcoin borrows concepts from well-known cryptography programs. The software assigns every Bitcoin user two unique codes: a private key that is hidden on the user’s computer and a public address that everyone can see. The key and the address are mathematically linked, but figuring out someone’s key from his or her address is practically impossible. If I own 50 Bitcoins and want to transfer them to a friend, the software combines my key with my friend’s address. Other people on the network use the mathematical relation between my public address and my private key’s signature to verify that I own the Bitcoins I want to spend; computers across the network then race to solve a cryptographic puzzle that locks the transfer into the shared record. The first computer to complete the calculations is awarded a few Bitcoins, an incentive that recruits a diverse collective of users to maintain the system.
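
For readers who want to see the key-and-address idea in code, here is a minimal sketch in Python using the third-party ecdsa package (the same elliptic curve Bitcoin uses). The address derivation is deliberately simplified; real Bitcoin adds RIPEMD-160 hashing and Base58 encoding:

  import hashlib
  from ecdsa import SigningKey, SECP256k1, BadSignatureError

  private_key = SigningKey.generate(curve=SECP256k1)  # hidden on your computer
  public_key = private_key.get_verifying_key()        # visible to everyone
  # Toy address: a single SHA-256 hash of the public key (simplified).
  address = hashlib.sha256(public_key.to_string()).hexdigest()[:40]

  # Only the private key can sign; anyone can verify with the public key.
  transaction = b"send 50 bitcoins to my friend's address"
  signature = private_key.sign(transaction)
  try:
      public_key.verify(signature, transaction)
      print("ownership verified for address", address)
  except BadSignatureError:
      print("signature rejected")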

The first reported Bitcoin purchase was pizza sold for 10,000 Bitcoins in early 2010. Since then, exchange rates between Bitcoin and the U.S. dollar have bounced all over the scale like notes in a jazz solo. Because of the currency’s volatility, only the rare online merchant will accept payment in Bitcoins. At this point, the Bitcoin community is small but especially enthusiastic— just like the early adopters of the Internet.

Source of Information : Scientific American Magazine 

Wednesday, June 13, 2012

A Circuit in Every Cell

Progress for tiny biocomputers

Researchers in nanomedicine have long dreamed of an age when molecular-scale computing devices could be embedded in our bodies to monitor health and treat diseases before they progress. The advantage of such computers, which would be made of biological materials, would lie in their ability to speak the biochemical language of life.

Several research groups have recently reported progress in this field. A team at the California Institute of Technology, writing in the journal Science, made use of DNA nanostructures called seesaw gates to construct logic circuits analogous to those used in microprocessors. Just as silicon-based components use electric current to represent 1’s and 0’s, bio-based circuits use concentrations of DNA molecules in a test tube. When new DNA strands are added to the test tube as “input,” the solution undergoes a cascade of chemical interactions to release different DNA strands as “output.” In theory, the input could be a molecular indicator of a disease, and the output could be an appropriate therapeutic molecule.
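
As a loose software analogy (the chemistry itself is far richer), a concentration-based logic gate can be modeled as a threshold test: a DNA strand counts as a “1” when its concentration is high and a “0” when it is low. The sketch below is purely illustrative, with invented values:

  # Toy model of concentration-based logic, not real strand displacement.
  THRESHOLD = 0.5   # arbitrary normalized concentration for "present"

  def to_bit(concentration: float) -> int:
      return 1 if concentration >= THRESHOLD else 0

  def and_gate(marker_a: float, marker_b: float) -> int:
      # Output strand is "released" only if both input strands are present.
      return to_bit(marker_a) & to_bit(marker_b)

  print(and_gate(0.9, 0.8))  # 1: both disease markers present, release drug
  print(and_gate(0.9, 0.1))  # 0: circuit stays silent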

A common problem in constructing a computer in a test tube is that it is hard to control which interactions among molecules occur. The brilliance of the seesaw gate is that a particular gate responds only to particular input DNA strands. In a subsequent Nature paper, the Caltech researchers showed off the power of their technique by building a DNA-based circuit that could play a simple memory game. A circuit with memory could, if integrated into living cells, recognize and treat complex diseases based on a series of biological clues.

This circuitry has not been integrated into living tissue, however, in part because its ability to communicate with cells is limited. Zhen Xie of the Massachusetts Institute of Technology and his collaborators have recently made progress on this front. As they reported in Science, they designed an RNA-based circuit that was simpler but could still distinguish modified cancer cells from noncancerous cells and, more important, trigger the cancer cells to self-destruct. Both techniques have been used only in artificial scenarios. Yet the advances in DNA-based circuits offer a new, powerful platform to potentially realize researchers’ long-held biocomputing dreams.

Source of Information : Scientific American Magazine 

Saturday, June 9, 2012

Microwaves and the Speed of Light

New physics tricks for the most underestimated of kitchen appliances

You can find a microwave oven in nearly any American kitchen—indeed, it is the one truly modern cooking tool that is commonly at hand—yet these versatile gadgets are woefully underestimated. Few see any culinary action more sophisticated than reheating leftovers or popping popcorn. That is a shame because a microwave oven, when used properly, can cook certain kinds of food perfectly, every time. You can even use it to calculate a fundamental physical constant of the universe. Try that with a gas burner.

To get the most out of your microwave, it helps to understand that it cooks with light waves, much like a grill does, except that the light waves are almost five inches (12.2 centimeters) from peak to peak—a good bit longer in wavelength than the infrared rays that coals put out. The microwaves are tuned to a frequency (2.45 gigahertz, usually) to which molecules of water and, to a lesser extent, fat resonate.

The water and oil in the exterior inch or so of food soaks up the microwave energy and turns it into heat; the surrounding air, dishes and walls of the oven do not. The rays do not penetrate far, so trying to cook a whole roast in a microwave is a recipe for disaster. But a thin fish is another story. The cooks in our research kitchen found a fantastic way to make tilapia in the microwave. Sprinkle some sliced scallions and ginger, with a splash of rice wine, over a whole fish, cover it tightly with plastic wrap and microwave it for six minutes at a power of 600 watts. (Finish it off with a drizzle of hot peanut oil, soy sauce and sesame oil.)

The cooking at 600 W is what throws many chefs. To heat at a given wattage, check the power rating on the back of the oven (800 W is typical) and then multiply that figure by the power setting (which is given either as a percentage or in numbers from one to 10 representing 10 percent steps). A 1,000-W oven, for example, produces 600 W at a power setting of 60 percent (or “6”). To “fry” parsley brushed with oil, cook it at 600 W for about four minutes. To dry strips of marinated beef into jerky, cook at 400 W for five minutes, flipping the strips once a minute.
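
The wattage arithmetic is simple enough to write as a one-line function. A minimal sketch in Python, using the article’s own numbers:

  # Effective cooking power = rated power x power setting.
  def effective_watts(rated_watts: float, setting_percent: float) -> float:
      return rated_watts * setting_percent / 100

  print(effective_watts(1_000, 60))   # 600.0 W, the tilapia recipe's power
  print(600 / 800 * 100)              # 75.0: setting needed on an 800 W oven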

If you are up for slightly more math, you can perform a kitchen experiment that Albert Einstein would have loved: prove that light really does zip along at almost 300 million meters per second. Cover a cardboard disk from a frozen pizza with slices of Velveeta and microwave it at low power until several melted spots appear. (You don’t want it rotating, so if your oven has a carousel, prop the cardboard above it.) Measure the distance (in meters) between the centers of the spots. That distance is half the wavelength of the light, so if you double it and multiply by 2.45 billion (the frequency in cycles per second), the result is the velocity of the rays bouncing about in your oven.
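
Here is the same calculation in a few lines of Python, using a hypothetical measured spacing of 6.1 centimeters between melted spots:

  # Estimate the speed of light from the melted-cheese experiment.
  spot_spacing = 0.061   # meters between spot centers (example measurement)
  frequency = 2.45e9     # oven frequency in cycles per second

  wavelength = 2 * spot_spacing        # spots sit half a wavelength apart
  speed = wavelength * frequency
  print(f"{speed:.3e} m/s")            # 2.989e+08, close to 299,792,458 m/s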

Source of Information : Scientific American Magazine 

Friday, June 1, 2012

How to See the Invisible

Augmented-reality apps uncover the hidden reality all around you

Everybody’s amazed by touch-screen phones. They’re so thin, so powerful, so beautiful! But this revolution is just getting under way. Can you imagine what these phones will be like in 20 years? Today’s iPhones and Android phones will seem like the Commodore 64. “Why, when I was your age,” we’ll tell our grandchildren, “phones were a third of an inch thick!”

Then there are the apps. Right now we’re all delighted to do simple things on our phones, like watch videos and play games. But the ingredients in the modern app phone—camera, GPS, compass, accelerometer, gyroscope, Internet connection—make it the perfect device for the next wave of software. Get ready for augmented reality (AR). That term usually refers to a live-camera view with superimposed informational graphics. The phone becomes a magic looking glass, identifying physical objects in the world around you.

If you’re color-blind like me, then apps like Say Color or Color ID represent a classic example of what augmented reality can do. You hold up the phone to a piece of clothing or a paint swatch—and it tells you by name what color the object is, like dark green or vivid red. You’ve gone to your last party wearing mismatched clothes.

Other apps change what you see. When a reader sent me a link to a YouTube video promoting Word Lens, I wrote back, “Ha-ha, very funny.” It looked so magical, I thought it was fake. But it’s not. You point the iPhone’s camera at a sign or headline in Spanish. The app magically replaces the original text with an English translation, right there in the video image in real time—same angle, color, background material, lighting. Somehow the app erases the original text and replaces it with new lettering. (There’s an English-to-Spanish mode, too.)

Some of the most promising AR apps are meant to help you when you’re out and about. Apps like New York Nearest Subway and Metro AR let you look down at the ground and see colorful arrows that show you which subway lines are underneath your feet. Raise the phone perpendicular to the ground, and you’ll see signs for the subway stations—how far away they are and which subway lines they serve. When you’re in a big city, apps like Layar and Wikitude let you peer through the phone at the world around you. They overlay icons for information of your choice: real estate listings, ATM locations, places with Wikipedia entries, public works of art, and so on. Layar boasts thousands of such overlays.

There are AR apps that show you where the hazards are on golf courses (Golfscape GPS Rangefinder), where you parked your car (Augmented Car Finder), who’s using Twitter in the buildings around you (Tweet360), what houses are for sale near you and for how much (ZipRealty Real Estate), how good and how expensive a restaurant is before you even go inside (Yelp), the names of the stars and constellations over your head (Star Walk, Star Chart), the names and details of the mountains in front of you (Panoramascope, Peaks), what crimes have recently been committed in the neighborhoods around you (SpotCrime), and dozens more. Several of these apps are not, ahem, paragons of software stability. And many, like Layar, are pointless outside of big cities because there aren’t enough data points to overlay.

As much fun as they are to use, AR apps mean walking through your environment with your eyes on your phone, held at arm’s length—a posture with unfortunate implications for social interaction, serendipitous discovery and avoiding bus traffic. Furthermore, there’s already been much bemoaning of our society’s decreasing reliance on memory; in the age of Google, nobody needs to learn the presidents, the state capitals or the periodic table. AR apps are only going to make things worse. Next thing you know, AR apps will identify our friends using facial recognition. Can’t you just see it? You’ll be at a party, and someone will come up to you and say, “Hey, how are you—” (consulting the phone) “—David?” But every new technology has its rough edges, and somehow we muddle through. Someday we will boggle our grandchildren’s minds with tales of life before AR—if we can remember their names.

Source of Information : Scientific American Magazine