
Friday, June 1, 2012

How to See the Invisible

Augmented-reality apps uncover the hidden reality all around you

Everybody’s amazed by touch-screen phones. They’re so thin, so powerful, so beautiful! But this revolution is just getting under way. Can you imagine what these phones will be like in 20 years? Today’s iPhones and Android phones will seem like the Commodore 64. “Why, when I was your age,” we’ll tell our grandchildren, “phones were a third of an inch thick!” Then there are the apps. Right now we’re all delighted to do simple things on our phones, like watch videos and play games. But the ingredients in the modern app phone—camera, GPS, compass, accelerometer, gyroscope, Internet connection—make it the perfect device for the next wave of software. Get ready for augmented reality (AR). That term usually refers to a live-camera view with superimposed informational graphics. The phone becomes a magic looking glass, identifying physical objects in the world around you.

If you’re color-blind like me, then apps like Say Color or Color ID represent a classic example of what augmented reality can do. You hold up the phone to a piece of clothing or a paint swatch—and it tells you by name what color the object is, like dark green or vivid red. You’ve gone to your last party wearing mismatched clothes.

Other apps change what you see. When a reader sent me a link to a YouTube video promoting Word Lens, I wrote back, “Ha-ha, very funny.” It looked so magical, I thought it was fake. But it’s not. You point the iPhone’s camera at a sign or headline in Spanish. The app magically replaces the original text with an English translation, right there in the video image in real time—same angle, color, background material, lighting. Somehow the app erases the original text and replaces it with new lettering. (There’s an English-to-Spanish mode, too.)

Some of the most promising AR apps are meant to help you when you’re out and about. Apps like New York Nearest Subway and Metro AR let you look down at the ground and see colorful arrows that show you which subway lines are underneath your feet. Raise the phone perpendicular to the ground, and you’ll see signs for the subway stations—how far away they are and which subway lines they serve. When you’re in a big city, apps like Layar and Wikitude let you peer through the phone at the world around you. They overlay icons for information of your choice: real estate listings, ATM locations, places with Wikipedia entries, public works of art, and so on. Layar boasts thousands of such overlays.
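Behind the scenes, these navigation overlays reduce to a bit of trigonometry: compute the compass bearing from the phone to a point of interest, compare it with the phone’s heading, and draw a marker only if it falls within the camera’s field of view. A minimal sketch of that placement logic (the function names and the flat screen mapping are my own illustration, not any app’s actual code):

```python
import math

def bearing_to(lat1, lon1, lat2, lon2):
    """Compass bearing in degrees from point 1 to point 2 (great-circle formula)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def screen_x(poi_bearing, heading, fov_deg, screen_w):
    """Horizontal pixel position for a marker, or None if the point of
    interest lies outside the camera's field of view."""
    offset = (poi_bearing - heading + 180) % 360 - 180  # signed angle from screen centre
    if abs(offset) > fov_deg / 2:
        return None
    return int(screen_w * (offset / fov_deg + 0.5))
```

A landmark due east of you (bearing 90 degrees) while the phone also faces east lands in the middle of the screen; swing the phone away and the marker slides off the edge.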

There are AR apps that show you where the hazards are on golf courses (Golfscape GPS Rangefinder), where you parked your car (Augmented Car Finder), who’s using Twitter in the buildings around you (Tweet360), what houses are for sale near you and for how much (ZipRealty Real Estate), how good and how expensive a restaurant is before you even go inside (Yelp), the names of the stars and constellations over your head (Star Walk, Star Chart), the names and details of the mountains in front of you (Panoramascope, Peaks), what crimes have recently been committed in the neighborhoods around you (SpotCrime), and dozens more. Several of these apps are not, ahem, paragons of software stability. And many, like Layar, are pointless outside of big cities because there aren’t enough data points to overlay.

As much fun as they are to use, AR apps mean walking through your environment with your eyes on your phone, held at arm’s length—a posture with unfortunate implications for social interaction, serendipitous discovery and avoiding bus traffic. Furthermore, there’s already been much bemoaning of our society’s decreasing reliance on memory; in the age of Google, nobody needs to learn the presidents, the state capitals or the periodic table. AR apps are only going to make things worse. Next thing you know, AR apps will identify our friends using facial recognition. Can’t you just see it? You’ll be at a party, and someone will come up to you and say, “Hey, how are you—” (consulting the phone) “—David?” But every new technology has its rough edges, and somehow we muddle through. Someday we will boggle our grandchildren’s minds with tales of life before AR—if we can remember their names.

Source of Information : Scientific American Magazine

Friday, April 6, 2012

3-D, Hold the Glasses

A breakthrough may lead to more widespread adoption of 3-D TVs

Three-dimensional television got a major marketing push nearly two years ago from the consumer electronics and entertainment industries, yet the technology has one major limitation: viewers need special eyeglasses to experience the 3-D effect. Now the marketing experts say that the technology will never catch on in a big way unless viewers can toss the glasses entirely.

Although 3-D technology sans specs is available for small screens on smartphones and portable gaming devices, these devices use backlit LCDs, which can be a big battery drain and limit how small the gadgets can be made. More recently, researchers have turned to organic light-emitting diodes (OLEDs), which show more promise; they are developing autostereoscopic 3-D using tiny prisms that would render 3-D images without glasses. Because OLEDs get their light from organic compounds that glow in response to electric current, they can be thinner, lighter and more flexible than LCDs. The innovation is detailed in the August issue of the journal Nature Communications.

The researchers—from Seoul National University, Act Company and Minuta Technology—used an array of microscale prisms placed on a screen to create a filter that guides the light in one direction or another. Using such a prism array—which the researchers refer to as a Lucius prism, after the Latin word meaning “shining and bright”—they were able to display an object on the screen that could be seen only when viewed from a particular angle.

By manipulating the intensity of light, the scientists could show from the same screen two distinctly different images— one to a viewer’s left eye and a second to the right eye. Seeing the two images together creates a sense of depth that the brain perceives as 3-D—all without the help of special lenses.

Some researchers have reported success with other approaches to glasses-free 3-D. The HTC EVO 3D and LG Optimus 3D smartphones, for example, feature parallax barrier screens made with precision slits that allow each eye to see a different set of pixels. Unfortunately, this approach requires the viewer to look at the screen at a very specific angle to experience the 3-D effect, a drawback that this new technique may be able to overcome.
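The column-slicing behind a parallax barrier is straightforward to illustrate: the display interleaves the left-eye and right-eye images in alternating pixel columns, and the barrier’s slits ensure each eye sees only its own set. A toy version of the interleaving step (a simplification for illustration, not any manufacturer’s pipeline):

```python
def interleave(left, right):
    """Merge two equal-sized images (given as rows of pixels) column by
    column: even columns come from the left-eye image, odd columns from
    the right-eye image, as a parallax-barrier panel would lay them out."""
    assert len(left) == len(right), "images must have the same height"
    merged = []
    for lrow, rrow in zip(left, right):
        merged.append([lrow[c] if c % 2 == 0 else rrow[c] for c in range(len(lrow))])
    return merged
```

The barrier then blocks the odd columns from the left eye and the even columns from the right, so each eye reconstructs its own full image.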

Source of Information : Scientific American Magazine

Friday, March 16, 2012

A Cybersecurity Nightmare

We can’t keep malware out of our computers if we can’t find it

The world of cybersecurity is starting to resemble a paranoid thriller. Shadowy figures plant malicious software, or “malware,” in our computers. They slip it into e-mails. They transmit it over the Internet. They infect us with it through corrupted Web sites. They plant it in other programs. They design it to migrate from device to device—laptops, flash drives, smartphones, servers, copy machines, iPods, gaming consoles—until it’s inside our critical systems. As even the most isolated systems periodically need new instructions, new data or some kind of maintenance, any system can be infected.

The effect could be devastating. After lying dormant for months or years, malware could switch on without any action on the part of those who launched it. It could disable emergency services, cause factories to make defective products, blow up refineries and pipelines, poison drinking water, make medical treatments lethal, wreck electric generators, discredit the banking system, ground airplanes, cause trains to collide, and turn our own military equipment against us.

Many public officials are now aware that something needs to be done. Putting aside worries about privacy and civil liberties, they propose giant government programs to search our critical computer systems and scan everything that goes into them.

But here’s where the plot thickens. We don’t actually know how to scan for malware. We can’t stop it, because we can’t find it. We can’t always recognize it even if we are looking right at it.

Like a thriller character who discovers he doesn’t know whom to trust, cybersecurity experts start running through the options. Can we recognize malware by its identifying characteristics? No, because each piece of malware can be different, and it can keep changing its appearance. Can we recognize it by the tools it needs to spread? No, because the malware might be a payload inserted by someone or something else.

Can we find malware by looking in likely hiding places? No, because it could be in a hiding place we didn’t know was there— an area of memory we can’t see or some component we didn’t even realize had a memory. It could be moving around even as we’re looking for it. It could copy itself into the place we just looked and erase itself from the place we’re about to look.

Can we create a safe area, bit by bit, reading every line of code in each program to make sure it’s innocent? The problem is that we can look directly at a line of malware and not recognize it. Sometimes a tiny modification in a line of code can cause a malicious effect. Malware doesn’t need to be in the individual lines of code. The malicious part of the malware might be the sequence of operations that causes a normal instruction to be carried out at exactly the wrong time.

If all else fails, can we recognize malware by what it does? This won’t work either. Malware can take control of every display, message box, graphic or reading. It can make sure you see only what it wants you to see. If you do manage to catch it doing something bad, it might be too late. If the first time a malicious program operates it turns your missiles back at you, fries your electric generators or blows up your refineries, it won’t do much good to recognize it by that behavior.

We truly can’t trust anything. The very computers we are using to search for malware might be the vehicles delivering it. Our authentication systems could be authenticating programs infected with malware. Our encryption systems could be encrypting malware. Even if we manage to come up with an effective barrier, we will not know which side the malware is on.

This is the world many cybersecurity professionals are currently living in. We are stopping most malware, most of the time. But we don’t have a reliable solution for the cases where it might matter most. America and its allies have always been good at coming up with timely breakthroughs when they are most needed. We need one now.

Source of Information : Scientific American Magazine

Tuesday, January 10, 2012

Gig.U Is Now in Session

Universities are piloting superfast Internet connections that may finally rival the speed of South Korea’s

The U.S. notoriously lags other countries when it comes to Internet speed. One recent report from Web analyst Akamai Technologies puts us in 14th place, far behind front-runner South Korea and also trailing Hong Kong, Japan and Romania, among other countries. The sticking point over faster broadband has been: Who will pay for it? Telecommunications companies have been leery of investing in infrastructure unless they are certain of demand for extra speed. American consumers, for their part, have been content to direct much of their Internet use to e-mail and social networks, which operate perfectly well at normal broadband speeds, and they have not been willing to pay a premium for speedier service.

The exception lies at the seat of learning. Universities and research institutes are always looking for a quicker flow of bits. “We think our researchers will be left behind without gigabit speeds,” says Elise Kohn, a former policy adviser for the Federal Communications Commission. Kohn and Blair Levin, who helped to develop the FCC’s National Broadband Plan—a congressionally mandated scheme to ensure broadband access to all Americans—are leading a collection of 29 universities spread across the country in piloting a network of one-gigabit-per-second Internet connections. The group, the University Community Next Generation Innovation Project—more commonly referred to as Gig.U—includes Duke University, the University of Chicago, the University of Washington and Arizona State University.

The average U.S. Internet speed today is 5.3 megabits per second, so Gig.U’s one-gigabit connections would be nearly 200 times faster than those available today, allowing users to download the equivalent of two high-definition movies in less than one minute and to watch streaming video with no pixelation or other interruptions. By comparison, the average Internet speed in South Korea is 14.4 megabits per second, and the country has pledged to connect every home to the Internet at one gigabit per second by 2012.
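The arithmetic behind those claims is easy to check. Assuming roughly seven gigabytes for two high-definition movies (my estimate, not a figure from the article), a one-gigabit line moves them in under a minute, while today’s 5.3-megabit average would need about three hours:

```python
def transfer_seconds(size_gigabytes, speed_megabits_per_sec):
    """Time to move a payload at a given line rate (ignoring protocol overhead)."""
    size_megabits = size_gigabytes * 8 * 1000  # 1 GB = 8,000 megabits (decimal units)
    return size_megabits / speed_megabits_per_sec

gigabit = transfer_seconds(7, 1000)  # Gig.U's 1 Gbps connection
average = transfer_seconds(7, 5.3)   # today's U.S. average broadband
```

Real-world throughput would of course fall short of the raw line rate, but the two-orders-of-magnitude gap is the point.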

The U.S. gigabit networks will vary from site to site, depending on the approach that different Internet service providers propose to meet the differing needs of Gig.U members. “All our members are focused on next-generation networks, although some will need more than a gigabit, and others will need less,” Kohn says. Gig.U’s request-for-information period runs through November to solicit ideas from the local service providers upgrading to faster networks. These ideas will ultimately be funded by Gig.U members, as well as any nonprofits and private-sector companies interested in the project. Gig.U intends to accelerate the deployment of next-generation networks in the U.S. by encouraging researchers—students and professors alike—to develop new applications and services that can make use of ultrafast data-transfer rates.

Source of Information : Scientific American Magazine

Sunday, August 1, 2010

Divining the Right Drug

A new device may take the guesswork out of prescribing an antidepressant that works

Imagine suffering from the crushing weight of major depression, then finally getting diagnosed and starting treatment with a drug—only to realize after two months that the medication, despite its unpleasant side effects, is not alleviating your depression. Unfortunately, this experience is far from rare: more than two thirds of patients with depression have no luck with the first medication they are prescribed and must also endure the withdrawal effects that come with discontinuing a drug before trying a new one. Finding the right treatment can prove a lengthy, painful process of trial and error. A new technology, however, may bypass this ordeal by gauging very early in a treatment regimen how well a drug is working based on the patient’s brain waves.

The technology, called quantitative electroencephalography (QEEG), measures a person’s brain-wave pattern with EEG and then compares it with a database of normal samples to detect abnormal function. In a study published in the September 2009 issue of the journal Psychiatry Research, scientists used QEEG to record brain activity in subjects with major depressive disorder before they began treatment, after one week on an antidepressant and after eight weeks on the drug—the period it takes such drugs to achieve full effect. Changes in the QEEG readout after just one week of medication predicted 74 percent of the time whether patients would experience either a recovery or a remission of symptoms by the end of eight weeks.

“There appear to be changes in brain electrical activity that occur as early as a week, when the patient isn’t feeling any different,” says Andrew Leuchter, a psychiatrist at the University of California, Los Angeles, and lead author of the study. The result “proved [this QEEG-based technique] was in the range of something that could be useful to patients,” he states.

Further research is needed to verify the technique’s promise, so Leuchter estimates it may be several years before QEEG can be used in the clinic. Still, the technique presents a much needed way to judge a drug’s efficacy, says psychologist D. Corydon Hammond, a professor at the University of Utah School of Medicine, who was not involved in the study. “Psychiatry has been in drastic need of more scientific and objective methods for medication selection for years,” Hammond says. He praised the study as “important” and added, “Many more like it are needed and with other conditions besides depression.”

Source of Information : Scientific American Mind March-April 2010

Sunday, June 20, 2010

Tasting the Light

Device lets the visually impaired “see” with their tongues BY MANDY KENDRICK

The late neuroscientist Paul Bach-y-Rita hypothesized in the 1960s that “we see with our brains not our eyes.” Now a noninvasive device trades on that thinking and aims to partially restore the experience of seeing for the visually impaired by relying on the nerves on the tongue’s surface to send light signals to the brain.

First demonstrated in 2003 by neuroscientists at Middleton, Wis.–based Wicab (a company co-founded by Bach-y-Rita), the device could finally be ready for sale at the end of the year. Called BrainPort, it tries to substitute for the two million optic nerves that transmit visual signals from the retina to the brain’s primary visual cortex. Visual data are collected through a small digital video camera mounted on the center of sunglasses worn by the user. Bypassing the eyes, the data go to a handheld base unit, which houses such features as zoom control and light settings as well as a central processing unit (CPU), which converts digital signals into electrical pulses.

From the CPU, the signals are sent to the tongue via a “lollipop,” an electrode array about nine square centimeters in area that sits directly on the tongue, which seems to be an ideal organ for sensing electric current. (Saliva is also a good conductor.) Moreover, the tongue’s nerve fibers are densely packed and are closer to the surface relative to other touch organs. The surfaces of fingers, for example, are covered with a layer of dead cells called the stratum corneum.

Each electrode on the lollipop corresponds to a set of pixels. White pixels yield a strong electrical pulse, whereas black pixels translate into no signal. The nerves at the tongue surface receive the incoming electrical signals, which feel a little like Pop Rocks or champagne bubbles to the user.
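That pixel-to-pulse translation can be sketched as a simple downsampling step: shrink the camera frame to the electrode grid and scale brightness to pulse strength. The grid size and scaling below are illustrative assumptions, not Wicab’s actual specification:

```python
def frame_to_pulses(frame, grid=20, max_pulse=100):
    """Downsample a grayscale frame (rows of 0-255 brightness values) to a
    grid x grid electrode array: white pixels yield strong pulses, black
    pixels yield none, mirroring the mapping BrainPort is described as using."""
    h, w = len(frame), len(frame[0])
    pulses = []
    for gy in range(grid):
        row = []
        for gx in range(grid):
            # sample one pixel per grid cell (a real device might average)
            brightness = frame[gy * h // grid][gx * w // grid]
            row.append(brightness * max_pulse // 255)
        pulses.append(row)
    return pulses
```

Because the grid preserves the frame’s layout, a bright doorway on the left of the camera’s view stimulates the left side of the tongue.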

Typically within 15 minutes of using the device, blind people can begin interpreting spatial information via BrainPort, says William Seiple, research director at the nonprofit vision health care and research organization Lighthouse International. The electrodes spatially correlate with the pixels so that if the camera detects light fixtures in the middle of a dark hallway, electrical stimulations will occur along the center of the tongue. “It becomes a task of learning, no different than learning to ride a bike,” says Wicab neuroscientist Aimee Arnoldussen, adding that the “process is similar to how a baby learns to see. Things may be strange at first, but over time they become familiar.”

Seiple works with four patients who are training with BrainPort once a week. He notes that his patients have learned how to quickly find doorways and elevator buttons, read letters and numbers, and pick out cups and forks at the dinner table without having to fumble around. “At first, I was amazed at what the device could do,” he says. “One guy started to cry when he saw his first letter.” The researchers have yet to figure out if the electrical information is transferred to the brain’s visual cortex, where sight information is normally sent, or to its somatosensory cortex, where touch data from the tongue are interpreted.

To develop criteria for monitoring the progress of artificial sight, optometrist Amy Nau of the University of Pittsburgh Medical Center’s Eye Center will further test BrainPort, along with other devices such as retinal and cortical implant chips. “We can’t just throw up an eye chart. We have to take a step back and describe the rudimentary percepts that these people are getting,” she says. Nau is particularly interested in BrainPort because it is noninvasive, unlike implants.

“Many people who have acquired blindness are desperate to get their vision back,” she points out. According to the National Institutes of Health, at least one million Americans older than 40 are legally blind, with vision that is 20/200 or worse or that has a field of view of less than 20 degrees. Adult vision loss costs the country about $51.4 billion a year.

Although sensory substitution techniques cannot fully restore sight, they do provide the information necessary for spatial orientation. Wicab had planned to submit BrainPort to the U.S. Food and Drug Administration for approval at the end of August, says Robert Beckman, president and chief executive officer of the company. He notes that the device could be approved for market by the end of 2009 for about $10,000 a machine.

Source of Information : Scientific American October 2009

Friday, November 13, 2009

Only humans allowed

Computing: Can online puzzles that force internet users to prove that they really are human be kept secure from attackers?

ON THE internet, goes the old joke, nobody knows you’re a dog. This is untrue, of course. There are many situations where internet users are required to prove that they are human—not because they might be dogs, but because they might be nefarious pieces of software trying to gain access to things. That is why, when you try to post a message on a blog, sign up with a new website or make a purchase online, you will often be asked to examine an image of mangled text and type the letters into a box. Because humans are much better at pattern recognition than software, these online puzzles—called CAPTCHAs—can help prevent spammers from using software to automate the creation of large numbers of bogus e-mail accounts, for example.

Unlike a user login, which proves a specific identity, CAPTCHAs merely show that “there’s really a human on the other end”, says Luis von Ahn, a computer scientist at Carnegie Mellon University and one of the people responsible for the ubiquity of these puzzles. Together with Manuel Blum, Nicholas J. Hopper and John Langford, Dr von Ahn coined the term CAPTCHA (which stands for “completely automated public Turing test to tell computers and humans apart”) in a paper published in 2000.

But how secure are CAPTCHAs? Spammers stepped up their efforts to automate the solving of CAPTCHAs last year, and in recent months a series of cracks have prompted both Microsoft and Google to tweak the CAPTCHA systems that protect their web-based mail services. “We modify our CAPTCHAs when we detect new abuse trends,” says Macduff Hughes, engineering director at Google. Jeff Yan, a computer scientist at Newcastle University, is one of many researchers interested in cracking CAPTCHAs. Since the bad guys are already doing it, he told a spam-fighting conference in Amsterdam in June, the good guys should do it too, in order to develop more secure designs.

That CAPTCHAs work at all illuminates a failing in artificial-intelligence research, says Henry Baird, a computer scientist at Lehigh University in Pennsylvania and an expert in the design of text-recognition systems. Reading mangled text is an everyday skill for most people, yet machines still find it difficult.

The human ability to recognise text as it becomes more and more distorted is remarkably resilient, says Gordon Legge at the University of Minnesota. He is a researcher in the field of psychophysics—the study of the perception of stimuli. But there is a limit. Just try reading small text in poor light, or flicking through an early issue of Wired. “You hit a point quite close to your acuity limit and suddenly your performance crashes,” says Dr Legge. This means designers of CAPTCHAs cannot simply increase the amount of distortion to foil attackers. Instead they must mangle text in new ways when attackers figure out how to cope with existing distortions.

Mr Hughes, along with many others in the field, thinks the lifespan of text-based CAPTCHAs is limited. Dr von Ahn thinks it will be possible for software to break text CAPTCHAs most of the time within five years. A new way to verify that internet users are indeed human will then be needed. But if CAPTCHAs are broken it might not be a bad thing, because it would signal a breakthrough in machine vision that would, for example, make automated book-scanners far more accurate.



CAPTCHA me if you can
Looking at things the other way around, a CAPTCHA system based on words that machines cannot read ought to be uncrackable. And that does indeed seem to be the case for ReCAPTCHA, a system launched by Dr von Ahn and his colleagues two years ago. It derives its source materials from the scanning in of old books and newspapers, many of them from the 19th century. The scanners regularly encounter difficult words (those for which two different character-recognition algorithms produce different transliterations). Such words are used to generate a CAPTCHA by combining them with a known word, skewing the image and adding extra lines to make the words harder to read. The image is then presented as a CAPTCHA in the usual way.

If the known word is entered correctly, the unknown word is also assumed to have been typed in correctly, and access is granted. Each unknown word is presented as a CAPTCHA several times, to different users, to ensure that it has been read correctly. As a result, people solving CAPTCHA puzzles help with the digitisation of books and newspapers.
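The control-word scheme is simple enough to model: grant access only when the known word matches, and tally each user’s reading of the unknown word until enough of them agree. A toy model follows (the vote threshold and matching rules are my assumptions, not ReCAPTCHA’s published parameters):

```python
from collections import Counter

class ReCaptchaModel:
    """Toy model of the control-word verification scheme described above."""

    def __init__(self, votes_needed=3):
        self.votes = {}                 # unknown-word id -> Counter of answers
        self.votes_needed = votes_needed

    def submit(self, control_answer, control_truth, unknown_id, unknown_answer):
        """Grant access iff the control word was typed correctly; if so,
        also record the user's reading of the unknown word as one vote."""
        if control_answer.strip().lower() != control_truth.lower():
            return False
        self.votes.setdefault(unknown_id, Counter())[unknown_answer.lower()] += 1
        return True

    def transcription(self, unknown_id):
        """The agreed reading of an unknown word, once enough users concur."""
        counts = self.votes.get(unknown_id, Counter())
        if counts:
            word, n = counts.most_common(1)[0]
            if n >= self.votes_needed:
                return word
        return None
```

The elegance of the design is visible in the model: the access-control check and the book-digitisation work are the same transaction.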

Even better, the system has proved to be far better at resisting attacks than other types of CAPTCHA. “ReCAPTCHA is virtually immune by design, since it selects words that have resisted the best text-recognition algorithms available,” says John Douceur, a member of a team at Microsoft that has built a CAPTCHA-like system called Asirra. The ReCAPTCHA team has a member whose sole job is to break the system, says Dr von Ahn, and so far he has been unsuccessful. Whenever the in-house attacker appears to be making progress, the team responds by adding new distortions to the puzzles.

Even so, researchers are already looking beyond text-based CAPTCHAs. Dr von Ahn’s team has devised two image-based schemes, called SQUIGL-PIX and ESP-PIX, which rely on the human ability to recognise particular elements of images. Microsoft’s Asirra system presents users with images of several dogs and cats and asks them to identify just the dogs or cats. Google has a scheme in which the user must rotate an image of an object (a teapot, say) to make it the right way up. This is easy for a human, but not for a computer.

The biggest flaw with all CAPTCHA systems is that they are, by definition, susceptible to attack by humans who are paid to solve them. Teams of people based in developing countries can be hired online for $3 per 1,000 CAPTCHAs solved. Several forums exist both to offer such services and to parcel out jobs. But not all attackers are willing to pay even this small sum; whether it is worth doing so depends on how much revenue their activities bring in. “If the benefit a spammer is getting from obtaining an e-mail account is less than $3 per 1,000, then CAPTCHA is doing a perfect job,” says Dr von Ahn.

Source of Information : The Economist 2009-09-05

Thursday, November 12, 2009

Memories are made of this

Computing: Memory chips based on nanotubes and iron particles might be capable of storing data for a billion years

FEW human records survive for long, the 16,000-year-old Paleolithic cave paintings at Lascaux, France, being one exception. Now researchers led by Alex Zettl of the University of California, Berkeley, have devised a method that will, they reckon, let people store information electronically for a billion years.

Dr Zettl and his colleagues constructed their memory cell by taking a particle of iron just a few billionths of a metre (nanometres) across and placing it inside a hollow carbon nanotube. They attached electrodes to either end of the tube. By applying a current, they were able to shuttle the particle back and forth. This provides a mechanism to create the “1” and “0” required for digital representation: if the particle is at one end it counts as a “1”, and at the other end it is a “0”.

The next challenge was to read this electronic information. The researchers found that when electrons flowed through the tube, they scattered when they came close to the particle. The particle’s position thus altered the nanotube’s electrical resistance on a local scale. Although they were unable to discover exactly how this happens, they were able to use the effect to read the stored information.
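Abstractly, each cell behaves like any one-bit memory: a write shuttles the particle to one end of the tube, and a read infers its position from the measured resistance. A minimal model of that behaviour (the resistance values are invented placeholders, not figures from the study):

```python
class NanotubeCell:
    """One-bit memory cell: an iron particle shuttled inside a carbon
    nanotube. The particle's position encodes the bit; a resistance
    measurement (placeholder values here) reveals it."""

    RESISTANCE = {0: 1.00, 1: 1.15}  # arbitrary units; position shifts resistance

    def __init__(self):
        self.position = 0  # particle starts at the "0" end of the tube

    def write(self, bit):
        # applying a current shuttles the particle to the chosen end
        self.position = 1 if bit else 0

    def read(self):
        # infer the bit from the measured resistance, as the researchers
        # did, rather than observing the particle's position directly
        measured = self.RESISTANCE[self.position]
        return 1 if measured > 1.05 else 0
```

The indirection in `read` mirrors the experiment: the particle is never observed, only its effect on electrons scattering through the tube.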

What makes the technique so durable is that the particle’s repeated movement does not damage the walls of the tube. That is not only because the lining of the tube is so hard; it is also because friction is almost negligible when working at such small scales.

Theoretical studies suggest that the system should retain information for a long time. To switch spontaneously from a “1” to a “0”, the particle would have to travel some 200 nanometres along the tube on thermal energy alone. At room temperature, such a jump is expected to happen only about once in a billion years. In tests, the stored digital information was found to be remarkably stable. Yet the distance between the ends of the tube remains small enough to allow for speedy reading and writing of the memory cell when it is in use.

The next challenge will be to create an electronic memory that has millions of cells instead of just one. But if Dr Zettl succeeds in commercialising this technology, digital decay itself could become a thing of the past.

Source of Information : The Economist 2009-09-05

Wednesday, November 11, 2009

Hard act to follow

Environment: Making softwoods more durable could reduce the demand for unsustainably logged tropical hardwoods

ONE of the reasons tropical forests are being cut down so rapidly is demand for the hardwoods, such as teak, that grow there. Hardwoods, as their name suggests, tend to be denser and more durable than softwoods. But unsustainable logging of hardwoods destroys not only forests but also local creatures and the future prospects of the people who lived there.

It would be better to use softwood, which grows in cooler climes in sustainably managed forests. Softwoods are fast-growing coniferous species that account for 80% of the world’s timber. But the stuff is not durable enough to be used outdoors without being treated with toxic preservatives to protect it against fungi and insect pests. These chemicals eventually wash out into streams and rivers, and the wood must be retreated. Moreover, at the end of its life, wood that has been treated with preservatives in this way needs to be disposed of carefully.

One way out of this problem would be an environmentally friendly way of making softwood harder and more durable—something that a Norwegian company called Kebony has now achieved. It opened its first factory in January.

Kebony stops wood from rotting by placing it in a vat containing a substance called furfuryl alcohol, which is made from the waste left over when sugarcane is processed. The vat is then pressurised, forcing the liquid into the wood. Next the wood is dried and heated to 110°C. The heat transforms the liquid into a resin, which makes the cell walls of the wood thicker and stronger.

The approach is similar to that of a firm based in the Netherlands called Titan Wood. Timber swells when it is damp and shrinks when it is dry because it contains groups of atoms called hydroxyl groups, which absorb and release water. Titan Wood has developed a technique for converting hydroxyl groups into acetyl groups (a different combination of atoms) by first drying the wood in a kiln and then treating it with a chemical called acetic anhydride. The result is a wood that retains its shape in the presence of water, and is no longer recognised as wood by grubs that would otherwise attack it. It is thus extremely durable.

Both companies’ processes are environmentally friendly, their products are completely recyclable, and the resulting woods are actually harder than most tropical hardwoods. The strengthened softwoods can be used in everything from window frames to spas to garden furniture. Treated maple is also being adopted for decking on yachts. The cost is similar to that of teak, but the maple is more durable and easier to keep clean.

Obviously treating wood makes it more expensive. But because it does not need to receive further treatments—a shed made from treated wood would not need regular applications of creosote, for example—it should prove economical over its lifetime. Kebony reckons that its pine cladding, for example, would cost a third less than conventionally treated pine cladding over the course of 40 years. Saving money, then, need not be at the expense of helping save the planet.
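
The lifetime arithmetic behind that claim can be sketched with some illustrative numbers. The prices, retreatment interval and retreatment cost below are invented for the example; they are not Kebony's figures:

```python
# Illustrative lifetime-cost comparison; all figures are invented
# assumptions, not Kebony's actual prices.

def lifetime_cost(upfront, retreat_cost=0.0, retreat_interval=None, years=40):
    """Purchase price plus the cost of periodic preservative treatments."""
    cost = upfront
    if retreat_interval:
        cost += retreat_cost * (years // retreat_interval)
    return cost

# Treated pine: dearer up front, no upkeep.
# Conventional pine: cheaper to buy, but retreated every 5 years.
treated = lifetime_cost(upfront=150.0)
conventional = lifetime_cost(upfront=80.0, retreat_cost=18.0,
                             retreat_interval=5)
print(treated, conventional)  # 150.0 224.0: roughly a third cheaper over 40 years
```

With these assumed numbers the treated cladding costs about a third less over the 40-year horizon, matching the shape of Kebony's claim.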

Source of Information : The Economist 2009-09-05

Tuesday, November 10, 2009

Washing without water

Environment: A washing machine uses thousands of nylon beads, and just a cup of water, to provide a greener way to do the laundry

SYNTHETIC fibres tend to make low-quality clothing. But one of the properties that makes nylon a poor choice of fabric for a shirt, namely its ability to attract and retain dirt and stains, is being exploited by a company that has developed a new laundry system. Its machine uses no more than a cup of water to wash each load of fabrics and uses much less energy than conventional devices.

The system developed by Xeros, a spin-off from the University of Leeds, in England, uses thousands of tiny nylon beads each measuring a few millimetres across. These are placed inside the smaller of two concentric drums along with the dirty laundry, a squirt of detergent and a little water. As the drums rotate, the water wets the clothes and the detergent gets to work loosening the dirt. Then the nylon beads mop it up.

The crystalline structure of the beads endows the surface of each with an electrical charge that attracts dirt. When the beads are heated in humid conditions to the temperature at which they switch from a crystalline to an amorphous structure, the dirt is drawn into the core of the bead, where it remains locked in place.

The inner drum, containing the clothes and the beads, has a small slot in it. At the end of the washing cycle, the outer drum is halted and the beads fall through the slot; some 99.95% of them are collected.
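
To see what a 99.95% per-cycle recovery rate implies over many washes, here is a toy calculation (the bead total and cycle count are assumed for illustration):

```python
# Toy sketch of bead attrition at a 99.95% per-cycle recovery rate.
# The bead count is an assumption, not a Xeros specification.
recovery = 0.9995
beads = 100_000

def remaining_after(n):
    """Beads still in circulation after n wash cycles."""
    return beads * recovery ** n

# Just under 5% of the beads are lost over 100 washes.
print(round(remaining_after(100)))
```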

Because so little water is used and the warm beads help dry the laundry, less tumble drying is needed. An environmental consultancy commissioned by Xeros to test its system reckoned that its carbon footprint was 40% smaller than the most efficient existing systems for washing and drying laundry.

The first machines to be built by Xeros will be aimed at commercial cleaners and designed to take loads of up to 20 kilograms. Customers will still be able to use the same stain treatments, bleaches and fragrances that they use with traditional laundry systems. Nylon may be nasty to wear, but it scrubs up well inside a washing machine.

Source of Information : The Economist 2009-09-05

Monday, November 9, 2009

The digital geographers

The internet: Detailed digital maps of the world are in widespread use. They are compiled using both high-tech and low-tech methods

IT IS a damp, overcast Monday morning in Watford, an undistinguished town north of London that seems to offer little to the casual visitor. But one man is eagerly snapping photographs. In fact, he is working with six high-resolution cameras, all of which are attached to the roof of the car in which he is being driven. He sits in the passenger seat with a keyboard on his lap, tapping occasionally and muttering into a microphone. A computer screen built into the dashboard shows the car’s progress as a luminous dot travelling across a map of the town. The man is a geographic analyst for NAVTEQ, one of a small group of companies that are creating new, digital maps of the world.

Each keystroke he makes denotes a feature in the outside world that is added to the map displayed on the screen. New details are also recorded in audio form. Once the journey is finished, the analyst can also pick out new details while watching a video playback. All this information is transferred from a server in the car’s boot to NAVTEQ’s database.

Companies such as NAVTEQ and its rivals, which include Tele Atlas and Microsoft, always start a new map by going to trusted sources such as local governments or mapping organisations. This information can be corroborated using aerial or satellite photography. Only when these sources are exhausted do they switch to the more expensive process of gathering data themselves. The digital maps they create are used mostly by motorists in rich countries. But the same companies are now creating maps of the developing world, which is requiring them to do things in somewhat different ways.

A geographic analyst in India would probably have deserted his vehicle, finding it impractical to manoeuvre on the country’s crowded urban streets. Instead, he would go on foot and use a pen to annotate a map printed on paper, a technique abandoned by his Western counterparts a decade ago. Official mapmaking in some poor countries is far from comprehensive, leaving the likes of NAVTEQ or Tele Atlas to generate the most accurate maps available.

The type of data that must be gathered also varies. Navigation in wealthy Western markets generally requires gathering the information that is of most interest to motorists. But lower levels of car ownership in poor countries make such information less relevant. Instead, the proliferation of mobile phones in countries such as China or India, many of which incorporate satellite-positioning chips, may make pedestrian navigation more relevant for local customers. Mapmakers are more likely to spend time hanging around bus stations collecting timetables, or finding the quickest route, which is not always the most direct one, from a city’s railway station to its main shopping street. All this information has to be constantly refreshed, sometimes several times a year.

To reduce the cost of sending staff on such reconnaissance trips, mapping companies are asking their customers to do more of the work. Tele Atlas, for example, gathers data from users of satellite-navigation systems made by TomTom, a firm based in the Netherlands. Drivers can report errors and suggest new features, or can agree to submit data passively: the TomTom device automatically logs their vehicle’s position, leaving a trail where it has travelled. It is then possible to calculate the vehicle’s direction and speed, which can help identify the class of road on which it is travelling. Altitude measurements mean the road’s gradient can be determined. Other information can also be deduced. If a lot of cars all seem to be driving across what was thought to be a ploughed field, for example, then it is likely that a new road has been built. Such detective work keeps the company’s mapping database up to date.
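
Deriving speed from a passively logged trail is straightforward: divide the great-circle distance between consecutive fixes by the time between them. A minimal sketch of that step, using invented coordinates and the standard haversine formula:

```python
# Sketch of deriving speed from passively logged GPS fixes, as
# described above. Coordinates and timestamps are invented.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    R = 6_371_000.0  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def speed_kmh(fix_a, fix_b):
    """Average speed between two (lat, lon, unix_time) fixes."""
    d = haversine_m(fix_a[0], fix_a[1], fix_b[0], fix_b[1])
    return d / (fix_b[2] - fix_a[2]) * 3.6

# Two fixes ten seconds apart on a hypothetical stretch of road.
a = (52.3700, 4.8900, 0)
b = (52.3710, 4.8900, 10)        # about 111 m further north
print(round(speed_kmh(a, b), 1))  # 40.0 km/h: consistent with an urban street
```

A mapping firm could then use such speeds, together with heading and altitude, to infer the class and gradient of the road being driven.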

In some parts of the world, however, mapmaking relies heavily on voluntary contributions. Google’s Map Maker service, for example, makes up for the lack of map data for much of the world by asking volunteers to provide it. Among its contributors is Tim Akinbo, a Nigerian software developer who got involved with the project last year. He has mapped recognisable features in Lagos, where he lives, as well as his home town of Jos. Churches, banks, office buildings and cinemas all feature on his map.

His working method is relatively simple. His mobile phone does not have satellite positioning, but he can use it to call up Google Maps, see what is on the map in a particular area and make a note of things to add. He then goes online when he gets home to add new features.

Why should people freely give up their time to improve local maps? Mr Akinbo explains that local businesses could use Map Maker to alert potential customers to their existence. “They will be contributing to a tool from which other people can benefit, as well as themselves,” he explains. With enough volunteers a useful map can be created without the need for fancy camera-toting cars.

Source of Information : The Economist 2009-09-05

Sunday, November 8, 2009

The taxonomy of tumours

Medicine: A new technique aims to measure the activity of a tumour, and could also help provide a new way to classify cancers

ONCOLOGISTS would like to be able to classify cancers not by whereabouts in the body they occur, but by their molecular origin. They know that certain molecules become active in tumours found in certain parts of the body. Both head-and-neck cancers and breast cancers, for example, have an abundance of molecules called epidermal growth-factor receptors (EGFRs). Now a team from Cancer Research UK’s London Research Institute has taken a step towards this goal. Their technique can already identify how advanced a person’s cancer is, and thus how likely it is to return after treatment.

At present, pathologists assess how advanced a cancer is by taking a sample, known as a biopsy, and examining the concentration within it of specific receptors, such as EGFRs, that are known to help cancers spread. Peter Parker had the idea of employing a technique called fluorescence resonance-energy transfer (FRET), which is used to study interactions between individual protein molecules, to see if he could find out not only how many receptors there are in a biopsy, but also how active they are.

The technique uses two types of antibody, each attached to a fluorescent dye molecule. Each of the two types is selected to fuse with a different part of an EGFR molecule, but one will do so only when the receptor has become active.

Pointing a laser at the sample causes the first dye to become excited and emit energy. With an activated receptor, the second dye will be attached nearby and so will absorb some of the energy given off by the first. Measuring how much energy is transferred between the two dyes indicates the activity of the receptors.
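
The physics behind this measurement is the Förster relation, in which transfer efficiency falls off with the sixth power of the separation between the two dyes. A generic sketch follows; the 5 nm Förster radius is a typical assumed value, not a figure from the study:

```python
# Generic sketch of the Förster relation underlying FRET, not the
# London Research Institute group's actual analysis pipeline.

def fret_efficiency(r_nm, r0_nm=5.0):
    """Förster efficiency E = 1 / (1 + (r/R0)^6).
    R0 (assumed 5 nm here) is the separation at which E = 50%."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

print(fret_efficiency(5.0))             # 0.5 at the Förster radius
print(round(fret_efficiency(2.5), 3))   # 0.985: dyes close together
print(round(fret_efficiency(10.0), 3))  # 0.015: dyes far apart
```

Because efficiency collapses beyond a few nanometres, appreciable energy transfer only occurs when both antibodies sit on the same activated receptor, which is what makes the signal a proxy for receptor activity.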

Dr Parker’s idea was implemented by his colleague Banafshe Larijani. She and her colleagues used FRET to measure the activity of receptors in 122 head-and-neck cancers. They found that the higher the activity of the receptors they examined, the more likely it was that the cancers would return quickly after treatment. The technique proved a better prognostic tool than conventional visual analysis of receptor density.

To speed things up, engineers in the same group have now created an instrument that automates the analysis. Tumour biopsies are placed on a microscope slide and stained with antibodies. The system then points the laser at the samples, records images of the resulting energy transfer and interprets those images to provide FRET scores. Results are available in as little as an hour, compared with four or five days using standard methods.

Having established the principle with head-and-neck cancer, the team hopes to extend it. They are beginning a large-scale trial to see whether FRET can accurately “hindcast” the clinical outcomes associated with 2,000 breast-cancer biopsies. Moreover, if patterns of receptor-activation for other types of cancers can be characterised, the technique could be applied to all solid tumours (ie, cancers other than leukaemias and lymphomas).

If they succeed, it will be good news for researchers who want to switch from classifying cancers anatomically to classifying them biochemically. Most cancer specialists think that patients with tumours in different parts of the body that are triggered by the same genetic mutations may have more in common than those whose tumours are in the same organ, but have been caused by different mutations. The new approach could help make such classification routine. That could, in turn, create a new generation of therapies and help doctors decide which patients should receive them, and in which combinations and doses.

Source of Information : The Economist 2009-09-05

Saturday, November 7, 2009

Air power

Energy: Batteries that draw oxygen from the air could provide a cheaper, lighter and longer-lasting alternative to existing designs

MOBILE phones looked like bricks in the 1980s. That was largely because the batteries needed to power them were so hefty. When lithium-ion batteries were invented, mobile phones became small enough to be slipped into a pocket. Now a new design of battery, which uses oxygen from ambient air to power devices, could provide an even smaller and lighter source of power. Not only that, such batteries would be cheaper and would run for longer between charges.

Lithium-ion batteries have two electrodes immersed in an electrically conductive solution, called an electrolyte. One of the electrodes, the cathode, is made of lithium cobalt oxide; the other, the anode, is composed of carbon. When the battery is being charged, positively charged lithium ions break away from the cathode and travel in the electrolyte to the anode, where they meet electrons brought there by a charging device. When electricity is needed, the anode releases the lithium ions, which rapidly move back to the cathode. As they do so, the electrons that were paired with them in the anode during the charging process are released. These electrons power an external circuit.

Peter Bruce and his colleagues at the University of St Andrews in Scotland came up with the idea of replacing the lithium cobalt oxide electrode with a cheaper and lighter alternative. They designed an electrode made from porous carbon and lithium oxide. They knew that lithium oxide forms naturally from lithium ions, electrons and oxygen, but, to their surprise, they found that it could also be made to separate easily when an electric current passed through it. They exposed one side of their porous carbon electrode to an electrolyte rich in lithium ions and put a mesh window on the other side of the electrode through which air could be drawn. Oxygen from the air took the place of the cobalt oxide.

When they charged their battery, the lithium ions migrated to the anode where they combined with electrons from the charging device. When they discharged it, lithium ions and electrons were released from the anode. The ions crossed the electrolyte and the electrons travelled round the external circuit. The ions and electrons met at the cathode, and combined with the oxygen to form lithium oxide that filled the pores in the carbon.

Because the oxygen being used by the battery comes from the surrounding air, the device that Dr Bruce’s team has designed can be a mere one-eighth to one-tenth the size and weight of modern batteries, while still carrying the same charge. Making such a battery is also expected to be cheaper. Lithium cobalt oxide accounts for 30% of the cost of a lithium-ion battery. Air, however, is free.

Source of Information : The Economist 2009-09-05

Friday, November 6, 2009

Trappings of waste

Materials science: Plastic beads may provide a way to mop up radiation in nuclear power-stations and reduce the amount of radioactive waste

NUCLEAR power does not emit greenhouse gases, but the technology does have another rather nasty byproduct: radioactive waste. One big source of low-level waste is the water used to cool the core in the most common form of reactor, the pressurised-water reactor. A team of researchers led by Börje Sellergren of the University of Dortmund in Germany, and Sevilimedu Narasimhan of the Bhabha Atomic Research Centre in Kalpakkam, India, think they have found a new way to deal with it. Their solution is to mop up the radioactivity in the water with plastic.

In a pressurised-water reactor, hot water circulates at high pressure through steel piping, dissolving metal ions from the walls of the pipes. When the water is pumped through the reactor’s core, these ions are bombarded by neutrons and some of them become radioactive. The ions then either settle back into the walls of the pipes, making the pipes themselves radioactive, or continue to circulate, making the water radioactive. Either way, a waste-disposal problem is created.

Because the pipes are steel, most of the ions are iron. When the commonest isotope of iron (56Fe) absorbs a neutron, the result is not radioactive. The steel used in the pipes, however, is usually alloyed with cobalt to make it stronger. When common cobalt (59Co) absorbs a neutron the result is 60Co, which is radioactive and has a half-life of more than five years.
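
That half-life, about 5.27 years for 60Co, is why the contamination lingers; a short decay calculation shows how slowly the activity fades:

```python
# Exponential decay of 60Co activity; a quick sketch of why the
# isotope remains a nuisance for decades.
import math

HALF_LIFE_YEARS = 5.27  # 60Co

def fraction_remaining(years):
    """Fraction of the original 60Co activity left after a given time."""
    return math.exp(-math.log(2) * years / HALF_LIFE_YEARS)

print(round(fraction_remaining(5.27), 3))  # 0.5 after one half-life
print(round(fraction_remaining(30), 3))    # 0.019: ~2% still active after 30 years
```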

At present, nuclear engineers clean cobalt from the system by trapping it in what are known as ion-exchange resins. These swap bits of themselves for ions in the water flowing over them. Unfortunately, the ion-exchange technique traps many more non-radioactive iron ions than radioactive cobalt ones.

To overcome that problem Drs Sellergren and Narasimhan have developed a polymer that binds to cobalt while ignoring iron. They made the material using a technique called molecular imprinting, which involves making the polymer in the presence of cobalt ions, and then extracting those ions by dissolving them in hydrochloric acid. The resulting cobalt-sized holes tend to trap any cobalt ions that blunder into them, with the result that a small amount of the polymer can mop up a lot of radioactive cobalt.

The team is now forming the new polymer into small beads that can pass through the cooling systems of nuclear power-stations. Concentrating radioactivity into such beads for disposal would be cheaper than trying to get rid of large volumes of low-level radioactive waste, according to Dr Narasimhan. He thinks that the new polymer could also be used to decontaminate decommissioned nuclear power-stations where residual radioactive cobalt in pipes remains a problem.

Nuclear power is undergoing a renaissance. Some 40 new nuclear power-stations are being built around the world. The International Atomic Energy Agency estimates that a further 70 will be built over the next 15 years, most of them in Asia. That is in addition to the 439 reactors which are already operating. So there will be plenty of work for the plastic beads, if Drs Sellergren and Narasimhan can industrialise their process.

Source of Information : The Economist 2009-09-05

Thursday, November 5, 2009

Keeping a grip

Transport: A new type of tyre, equipped with built-in sensors, can help avoid a skid—and could also improve fuel-efficiency

FEW sensations of helplessness match that of driving a car that unexpectedly skids. In a modern, well-equipped (and often expensive) car, electronic systems such as stability and traction control, along with anti-lock braking, will kick in to help the driver avoid an accident. Now a new tyre could detect when a car is about to skid and switch on safety systems in time to prevent it. It could also improve the fuel-efficiency of cars to which it is fitted.

The Cyber Tyre, developed by Pirelli, an Italian tyremaker, contains a small device called an accelerometer which uses tiny sensors to measure the acceleration and deceleration along three axes at the point of contact with the road. A transmitter in the device sends those readings to a unit that is linked to the braking and other control systems.

The accelerometers in the Cyber Tyre contain two tiny structures, the distance between which changes during acceleration, altering the electrical capacitance of the device, which is measured and converted into a voltage. Powered by energy scavengers that exploit the vibration of the tyre, the device encapsulating the accelerometers and the transmitter is about 2.5 centimetres in diameter and about the thickness of a coin.
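
The capacitance change described above follows the parallel-plate relation C = εA/d: as acceleration shifts one plate, the gap shrinks and the capacitance rises. A minimal sketch with invented dimensions (these are not Pirelli's figures):

```python
# Parallel-plate capacitance sketch of the sensing principle described
# above. Plate area and gaps are illustrative assumptions.

EPS0 = 8.854e-12  # permittivity of free space, F/m

def capacitance(area_m2, gap_m):
    """Parallel-plate capacitance C = eps0 * A / d (vacuum gap assumed)."""
    return EPS0 * area_m2 / gap_m

area = 1e-6      # 1 mm^2 plate (assumed)
rest_gap = 2e-6  # 2 micrometre gap at rest (assumed)

c_rest = capacitance(area, rest_gap)
c_accel = capacitance(area, rest_gap - 0.2e-6)  # plate pushed 0.2 um closer
print(c_rest, c_accel)  # capacitance rises as the gap closes
```

The readout electronics convert that small capacitance change into a voltage, which is what the transmitter sends to the car's control systems.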

Constantly monitoring the forces that tyres are subjected to as they grip the road could help reduce fuel consumption by optimising braking and suspension. Moreover, it could promote the greater use of tyres with a low rolling-resistance, which are often fitted to hybrid vehicles. These save fuel by reducing the resistance between the tyre and the road but, to do so, they have a reduced grip, especially in the wet. If fitted with sensors, such tyres could be more closely monitored and controlled in slippery conditions.

Pirelli believes its new tyre could be fitted to cars in 2012 or 2013, but this will depend on getting carmakers to incorporate the necessary monitoring and control systems into their vehicles. As with most innovations, these are expected to be available in upmarket models first, and cheaper cars later. But if the introduction in 1973 of Pirelli’s steel-belted Cinturato radial tyre is any guide, devices that make cars safer will be adopted rapidly.

Source of Information : The Economist 2009-09-05

Monday, October 26, 2009

Portable dialysis machines

Kidney machines go mobile
DIALYSIS is not as bad as dying, but it is pretty unpleasant, nonetheless. It involves being hooked up to a huge machine, three times a week, in order to have your blood cleansed of waste that would normally be voided, via the kidneys, as urine. To make matters worse, three times a week does not appear to be enough. Research now suggests that daily dialysis is better. But who wants to be tied to a machine—often in a hospital or a clinic—for hours every day for the rest of his life?

Victor Gura, of the University of California, Los Angeles, hopes to solve this problem with an invention that is now undergoing clinical trials. By going back to basics, he has come up with a completely new sort of dialyser—one you can wear.

A traditional dialyser uses around 120 litres of water to clean an individual’s blood. This water flows past one side of a membrane while blood is pumped past the other side. The membrane is impermeable to blood cells and large molecules such as proteins, but small ones can get through it. Substances such as urea (a leftover from protein metabolism) and excess phosphate ions therefore flow from the blood to the water. The good stuff, such as sodium and chloride ions, stays in the blood because the cleansing water has these substances dissolved in it as well, and so does not absorb more of them.
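
The selective diffusion described above can be illustrated with a toy model: small solutes drift toward whichever side of the membrane has less of them, so pre-loading the dialysate with sodium and chloride stops the blood losing them. The concentrations and transfer rate below are illustrative, not clinical figures:

```python
# Toy model of diffusion across a dialysis membrane. Concentrations
# (nominally mmol/L) and the transfer rate are illustrative only.

def diffusion_step(blood, dialysate, permeable, rate=0.1):
    """Move a fraction of each permeable solute's gradient across."""
    for s in permeable:
        flux = rate * (blood[s] - dialysate[s])
        blood[s] -= flux
        dialysate[s] += flux

blood = {"urea": 40.0, "sodium": 140.0}
dialysate = {"urea": 0.0, "sodium": 140.0}  # sodium matched, no urea

for _ in range(20):
    diffusion_step(blood, dialysate, permeable=["urea", "sodium"])

print(round(blood["urea"], 1))  # urea falls toward equilibrium with the dialysate
print(blood["sodium"])          # sodium unchanged: there was no gradient to drive it
```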

Both water and blood require a lot of pumping. Those pumps are heavy and need electrical power. The first thing Dr Gura did, therefore, was dispose of them. The reason for using big pumps is to keep dialysis sessions short. If machines are portable that matters less. So Dr Gura replaced the 10kg pumps of a traditional machine with small ones weighing only 380 grams. Besides being light, these smaller pumps use less power. That means batteries can be employed instead of mains electricity—and modern lithium-ion batteries, the ones Dr Gura chose, are also light, and thus portable.

To reduce the other source of weight, the water, Dr Gura and his team designed disposable cartridges containing materials that capture toxins from the cleansing water, so that it can be recycled. The upshot is a device that weighs around 5kg and can be strapped to a user’s waist. Indeed, at a recent demonstration in London, one patient was able to dance while wearing the dialyser—for joy, presumably, at no longer having to go to hospital so often.

Source of Information : The Economist 2009-10-03

Monday, October 5, 2009

Chlorophyll Power

Quantum details of photosynthesis could yield better solar cells BY MICHAEL MOYER

As nature’s own solar cells, plants convert sunlight into energy via photosynthesis. New details are emerging about how the process exploits the strange behavior of quantum systems, which could lead to entirely novel approaches to capturing usable light from the sun.

All photosynthetic organisms use protein-based “antennas” in their cells to capture incoming light, convert it to energy and direct that energy to reaction centers—critical trigger molecules that release electrons and get the chemical conversion rolling. These antennas must strike a difficult balance: they must be broad enough to absorb as much sunlight as possible yet not grow so large that they impair their own ability to shuttle the energy on to the reaction centers.

This is where quantum mechanics becomes useful. Quantum systems can exist in a superposition, or mixture, of many different states at once. What’s more, these states can interfere with one another—adding constructively at some points, subtracting at others. If the energy going into the antennas could be broken into an elaborate superposition and made to interfere constructively with itself, it could be transported to the reaction center with nearly 100 percent efficiency.
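
The interference of amplitudes that makes this possible can be shown with a toy two-path example (this illustrates the general quantum effect, not a model of the bacterial antennas themselves):

```python
# Toy illustration of constructive vs destructive interference of
# quantum amplitudes, the effect the antennas are thought to exploit.
import cmath

a = cmath.exp(0j)                   # amplitude along one path
b = cmath.exp(0j)                   # second path, in phase with the first
print(abs(a + b) ** 2)              # 4.0: constructive, more than 1 + 1

c = cmath.exp(1j * cmath.pi)        # second path, half a cycle out of phase
print(round(abs(a + c) ** 2, 10))   # 0.0: destructive, the paths cancel
```

In a superposition tuned to interfere constructively at the reaction center, probability piles up exactly where the energy is needed, which is how near-perfect transfer efficiency becomes possible.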

A new study by Mohan Sarovar, a chemist at the University of California, Berkeley, shows that some antennas—namely, those found on a certain type of green photosynthetic bacteria—do just that. Moreover, nearby antennas split incoming energy between them, which leads not just to mixed states but to states that are entangled over a broad (in quantum terms) distance. Gregory Scholes, a chemist at the University of Toronto, shows in a soon-to-be-published study that a species of marine algae uses a similar trick. Interestingly, the fuzzy quantum states in these systems are relatively long-lived, even though they exist at room temperature and in complicated biological systems. In quantum experiments in the physics lab, the slightest intrusion will destroy a quantum superposition.

These studies mark the first evidence of biological organisms that exploit strange quantum behaviors. A better understanding of this intersection of microbiology and quantum information, researchers say, could lead to “bioquantum” solar cells that are more efficient than today’s photovoltaics.

Source of Information : Scientific American September 2009