
Tuesday, 26 January 2010

Solar-Powered Future

EFFORTS TO DRIVE DOWN COSTS REMAIN PARAMOUNT, WHILE OVERCAPACITY MAY DOOM SOME SOLAR CELL MAKERS.


September 2009 swept in with good news for Solyndra Inc. The maker of cylindrical solar photovoltaic panels was named the recipient of a $535 million loan guarantee from the U.S. Department of Energy to finance construction of the first phase of its second solar-panel manufacturing facility.

This loan guarantee is the first issued under the American Recovery and Reinvestment Act. Vice President Joe Biden, who announced the loan finalization, identified it as "part of the unprecedented investment this administration is making in renewable energy."

Solar projects seem to make news almost daily. Just ahead of the Solyndra announcement, for example, eSolar in August unveiled its 5-megawatt Sierra SunTower solar power plant, and Soliant Energy opened a pilot production facility capable, the company said, of producing 1 megawatt of solar panels annually.

In the USA, more than 62,000 new solar thermal and solar electric installations were completed in 2008, up 16% over the previous year, according to the Interstate Renewable Energy Council. On a global scale, capacity is expected to boom in 2009: research firm DisplaySearch has forecast solar cell production capacity growing by 56% in 2009.

That capacity growth has a potential downside. DisplaySearch forecasts a 17% drop in PV module demand in 2009, with a recovery beginning in 2010. For now, says the firm's Charles Annis, "the PV industry is currently experiencing an enormous over-supply that is causing rapid price erosion and potentially setting the stage for the failure of multiple cell manufacturers."

At the heart of solar energy research is the drive to reduce manufacturing costs. Researchers at the DOE's Lawrence Berkeley National Laboratory and the University of California, Berkeley, report that they have demonstrated a way to fabricate efficient solar cells from low-cost, flexible materials. The design grows optically active semiconductors in arrays of nanoscale pillars; the nano-pillar array offers a greater surface area for collecting light than a two-dimensional solar cell.

Meanwhile, solar company SkyFuel has teamed with the National Renewable Energy Laboratory (NREL) to develop the SkyTrough parabolic trough solar concentrating collector. Designed for utility-scale power generation, the innovation cuts installed costs by 35%, according to the NREL. And at Kansas State University, professor Ryszard Jankowiak has received a grant to study photosynthetic complexes from a type of bacteria. The research could one day aid in the development of devices that convert solar energy into electricity more efficiently.



Quantum 'trampoline' to test gravity
WAY TO TEST THE STRENGTH OF GRAVITY WITH HIGH ACCURACY

To test theories such as general relativity, the strength of gravity is measured precisely using ensembles of ultracold atoms, known as Bose-Einstein condensates (BECs), falling in a vacuum chamber.

BECs act in a quantum-mechanical wave-like fashion and interfere with each other. The interference pattern depends on the paths the atoms take, so gravity's effect on how fast they fall can be calculated by analysing the pattern with an interferometer. The longer the fall, the more precise the measurement – but the harder it is to keep the ensemble intact.

"The longer your interferometer, the more precise is your measurement," says Thomas Bourdel of the Charles Fabry Institute of Optics in Palaiseau, France. "But you are limited by the size of your apparatus."
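The apparatus-size limit Bourdel describes follows from simple free fall: a cloud dropping for time t covers a distance of half g t squared, so every extra second of interrogation time demands a dramatically taller vacuum chamber. A minimal sketch of that arithmetic:

```python
# Free-fall distance versus interferometer time: a conventional
# atom interferometer must be tall enough to contain the whole
# drop, d = (1/2) * g * t^2.
g = 9.81  # gravitational acceleration, m/s^2

def fall_distance(t):
    """Distance (m) an atom cloud falls from rest in t seconds."""
    return 0.5 * g * t ** 2

for t in (0.1, 0.3, 1.0):
    print(f"t = {t:4.1f} s  ->  drop of {fall_distance(t):6.2f} m")
```

A one-second fall already needs roughly five metres of apparatus, which is why bouncing the atoms in place is attractive.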

Now Philippe Bouyer, Bourdel and their colleagues at the institute have increased the fall time with a "quantum trampoline".

In a microscopic chamber, they fired a specially designed laser pulse at the falling BECs. The pulse affected the BECs in the same way that a crystal lattice can affect light: since the atoms exhibit wave-like behaviour, they can be diffracted in a similar way to light in a crystal.

By tuning the laser, the team were able to split up the wave, causing some of its components to bounce upwards. When the parts fell back down, the laser was pulsed so they split again, and so on. Eventually the parts recombined in an interference pattern.

The device is less precise than existing atom interferometers, but the team plan to improve precision markedly by, for instance, using lighter atoms. Lighter atoms like helium and lithium will levitate for longer after each bounce than heavier atoms. This has the same effect as creating a longer interferometer with heavier atoms.



SOLAR SAILING
DESPITE EARLIER FAILURES, THE PLANETARY SOCIETY IS GEARING UP TO TEST ANOTHER SOLAR SAIL IN SPACE IN A YEAR

Earlier this week, the Planetary Society, a space advocacy group in Pasadena, California, received an anonymous donation to build and launch a small solar-sail driven spacecraft.

The Society hopes to launch the sail in about a year as part of a three-stage plan to demonstrate the viability of solar-sail propulsion, which has never been tested in orbit. The group says it is the only practical technology that might one day be used for interstellar travel, since light exerts a small but constant pressure that should accelerate a sail to high speeds over time.

New Scientist caught up with Louis Friedman, the organization's executive director, to find out more about the promise and challenge of solar sailing.

A solar sail, Friedman explains, is a device that collects sunlight and transfers the momentum of that light to the spacecraft. It uses pure light reflecting off the sail, so you want a large area to collect a lot of photons, and you want it highly reflective so they bounce off efficiently. The Society's sail uses aluminised Mylar.

Solar Sail:

Solar sails (also called light sails or photon sails) are a form of spacecraft propulsion using the radiation pressure of light from a star or laser to push enormous ultra-thin mirrors to high speeds.
According to the Einstein relation E = pc, photons carry momentum, so light reflecting from a surface exerts a small amount of radiation pressure. In 1924, the Soviet space engineer Friedrich Zander proposed that, since light provides a small amount of thrust, the effect could be used as a form of space propulsion requiring no fuel. Gathered across a large area, this thrust can provide a modest but continuous acceleration that builds considerable speed over time.
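The E = pc relation makes the thrust easy to estimate: a photon of energy E carries momentum E/c, and perfect reflection doubles the momentum transferred, so an ideal sail at normal incidence feels a force of twice the intercepted power divided by c. A sketch, assuming a perfectly reflective flat sail at Earth's distance from the Sun:

```python
# Radiation-pressure thrust on an ideal flat sail at 1 AU.
# From E = pc, reflection transfers 2E/c of momentum, so
# F = (1 + reflectivity) * I * A / c at normal incidence.
c = 2.998e8   # speed of light, m/s
I = 1361.0    # solar irradiance at 1 AU, W/m^2

def sail_force(area_m2, reflectivity=1.0):
    """Thrust in newtons on a flat sail facing the Sun."""
    return (1 + reflectivity) * I * area_m2 / c

# An illustrative 1000 m^2 sail:
print(f"{sail_force(1000):.4f} N")  # a few millinewtons
```

A few millinewtons sounds negligible, but acting continuously on a lightweight craft it compounds into large velocity changes, which is the whole appeal of the technology.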

Changing course can be accomplished in two ways. First, the sail can use the gravity of a nearby mass, such as a star or planet, to alter its direction. Second, the sail can tilt away from the light source: this changes the direction of acceleration, because radiation pressure pushes perpendicular to the sail's surface. Smaller auxiliary vanes can be used to gently pull the main sail into its new position.
The Idea:

The pressure of sunlight was noted by the early pioneers of optics; Félix Tisserand and others observed in the 19th century how light pressure shapes comet tails. The idea of using light to propel a spacecraft was conceived by Friedrich Zander and Konstantin Tsiolkovsky in the Soviet Union in the 1920s, but it wasn't until the 1970s that anyone thought seriously about doing it in practice. Solar pressure has been measured on spacecraft many times and has been used in manoeuvres, but never as the sole force propelling a craft through space.



VIRTUAL GRAPHICS CONTACT LENS
A CONTACT LENS FITTED WITH AN LED AND THE CIRCUITRY TO HARVEST POWER FROM RADIO WAVES IS THE FIRST STEP TOWARDS A NEW KIND OF HEAD-UP DISPLAY

A contact lens that harvests radio waves to power an LED is paving the way for a new kind of display. The lens is a prototype of a device that could display information beamed from a mobile device.

Realising that display size is increasingly a constraint in mobile devices, Babak Parviz at the University of Washington, in Seattle, hit on the idea of projecting images into the eye from a contact lens.

One drawback of current head-up displays is their limited field of view. A contact lens display can have a much wider field of view. "Our hope is to create images that effectively float in front of the user perhaps 50 cm to 1 m away," says Parviz.

His research involves embedding nanoscale and microscale electronic devices in substrates like paper or plastic. He also wears contact lenses. "It was a matter of putting the two together," he says.

Fitting a contact lens with circuitry is challenging. The polymer cannot withstand the temperatures or chemicals used in large-scale microfabrication, Parviz explains. So, some components – the power-harvesting circuitry and the micro light-emitting diode – had to be made separately, encased in a biocompatible material and then placed into crevices carved into the lens.

One obvious problem is powering such a device. The circuitry requires 330 microwatts but doesn't need a battery. Instead, a loop antenna picks up power beamed from a nearby radio source. The team has tested the lens by fitting it to a rabbit.
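The article gives only the power budget (330 microwatts), not the link details, but the classic Friis free-space equation shows the kind of arithmetic involved. In the sketch below, the transmit power, antenna gains, frequency and distance are all assumptions for illustration, not figures from the Washington team:

```python
import math

# Illustrative free-space link budget for beaming power to a
# wearable receiver, via the Friis transmission equation:
# Pr = Pt * Gt * Gr * (lambda / (4 * pi * d))^2
C = 2.998e8  # speed of light, m/s

def friis_received_power(pt_w, gt, gr, freq_hz, dist_m):
    """Received power (W) over an idealised free-space link."""
    lam = C / freq_hz
    return pt_w * gt * gr * (lam / (4 * math.pi * dist_m)) ** 2

# e.g. a 1 W source with unity-gain antennas at 2.4 GHz, 1 m away:
pr = friis_received_power(1.0, 1.0, 1.0, 2.4e9, 1.0)
print(f"received: {pr * 1e6:.0f} microwatts")  # just under 100 uW
```

Because received power falls with the square of distance, even a watt-class source with these assumed parameters delivers less than the 330 microwatts needed from a metre away, which is why a nearby source such as the user's own phone is the natural candidate.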

Parviz says that future versions will be able to harvest power from a user's cell phone, perhaps as it beams information to the lens. They will also have more pixels and an array of microlenses to focus the image so that it appears suspended in front of the wearer's eyes.

Despite the limited space available, each component can be integrated into the lens without obscuring the wearer's view, the researchers claim. As to what kinds of images can be viewed on this screen, the possibilities seem endless. Examples include subtitles when conversing with a foreign-language speaker, directions in unfamiliar territory and captioned photographs. The lens could also serve as a head-up display for pilots or gamers.

Mark Billinghurst, director of the Human Interface Technology Laboratory, in Christchurch, New Zealand, is impressed with the work. "A contact lens that allows virtual graphics to be seamlessly overlaid on the real world could provide a compelling augmented reality experience," he says. This prototype is an important first step in that direction, though it may be years before the lens becomes commercially available, he adds.

The University of Washington team will present the prototype at the Biomedical Circuits and Systems conference (BioCAS 2009) in Beijing later this month.



CAMERA CAN FOLLOW FIRING NEURONS
Slow motion just got a whole lot slower, with a camera sensor able to film action at 1 million frames per second.

The black-and-white device is quick enough to capture impulses hurtling through firing nerve cells, and its resolution is good enough to film the microsecond-long, pulse-like signals that speed through networks of neurons at up to 180 kilometres per hour.

Capturing frames that last one-millionth of a second requires great sensitivity to light, as well as precise timing. The device uses an array of single-photon detectors, or SPADs, each hooked up to a tiny stopwatch. The stopwatch records when the SPAD is hit by an incoming photon, with an accuracy of around 100 picoseconds.

Wider view:

Each SPAD and its timer together act as a single-pixel camera, a setup that has been used for several years, says Edoardo Charbon at the Delft University of Technology in the Netherlands.

Charbon is the coordinator of the pan-European Megaframe project, which is the first to make a silicon chip that combines many such devices into an image sensor. The chip works like the one in a digital camera and so can snap whole objects, not just tiny spots like individual SPADs. The current chip contains an array of 1024 SPADs and stopwatches: "No one has operated so many on a single chip before," says Charbon.


Short exposure:

Each Megaframe image is captured in just a few nanoseconds, and the device itself can capture one image per microsecond, or 1 million frames every second. "If every pixel was hit by a photon every microsecond, then you could measure 1024 million photons per second – that's one gigameasurement every second," says Charbon. In reality, however, not enough photons strike the SPADs to reach that rate.
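Charbon's gigameasurement figure is just the pixel count times the frame rate, which is easy to check:

```python
# Sanity check on the Megaframe throughput quoted above:
# 1024 SPAD pixels, each able to log one photon per frame,
# with one frame captured every microsecond.
pixels = 1024
frames_per_second = 1_000_000  # one frame per microsecond

max_measurements = pixels * frames_per_second
print(max_measurements)  # 1_024_000_000 photons/s at best
print(f"{max_measurements / 1e9:.3f} gigameasurements per second")
```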

The sensor could be fitted with a conventional camera lens, for example in mobile gadgets, says Charbon. But for now the team has attached it to a microscope to capture the firing of neurons. They use a technique called fluorescence lifetime imaging microscopy. It exploits the fact that, when illuminated, some molecules absorb photons before discharging the energy shortly afterwards in a second photon of another colour.

The Megaframe sensor detects those emitted photons and measures how long they take to appear after the initial photon is absorbed. This can reveal the properties of the emitting molecule. "The distribution varies in a predictable way depending on the local environment – for instance, the calcium concentration," Charbon says.

Because the ion channels in neurons fire when there is a build-up of calcium around them, the technique offers a way to monitor neuron activity. And because the chip can handle up to 1024 photons at the same time, it can record a moving image of the neuron to show exactly how a nerve signal travels through it.

Speed of thought:

Using the Megaframe chip to capture a million images a second, it will be possible to "film" the impulses moving around a small network of neurons, says Charbon.

Carl Petersen at the Swiss Federal Institute of Technology in Lausanne used a previous generation of the chip, containing fewer SPADs, to "film" similar processes. "This new chip will be extremely useful," he says.

Neuroscientist Alessandro Esposito at the University of Cambridge is excited about the new perspectives Megaframe could provide.

He says the new chip can map events so fast-moving that currently they can only be recorded by electrical measurements which give no spatial information. "The Megaframe impact on lifetime sensing will be momentous," he says. It could, for instance, lead to a better understanding of the molecular basis of cancer.



SEEING THROUGH WALLS: THE TRANSPARENT WALL
AUGMENTED REALITY SYSTEM LETS YOU SEE THROUGH WALLS

If only drivers could see through walls, blind corners and other dangerous road junctions would be much safer. Now an augmented reality system has been built that could just make that come true.

The prototype uses two cameras: one that captures the driver's view and a second that sees the scene behind a view-blocking wall. A computer takes the feed from the second camera and layers it on top of the images from the first so that the wall appears to be transparent.

This makes it simple to glance "through" a wall to see what's going on behind it. But the techniques needed to combine the two views were challenging to develop, says Yaser Sheikh of Carnegie Mellon University in Pittsburgh, Pennsylvania.

Altered images:

The view of the hidden scene needs to be skewed so that it looks as if it were being viewed from the position of the person using the system. The system does this by spotting landmarks seen by both cameras: the one seeing the hidden view and the one with the same view as the user.
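The article doesn't spell out Sheikh's exact method, but the landmark-matching step it describes is the classic setting for estimating a planar homography: a 3x3 matrix, computed from points visible to both cameras, that re-projects pixels from the hidden-view camera into the user's viewpoint. A textbook direct-linear-transform (DLT) sketch, for illustration only:

```python
import numpy as np

# Estimate a planar homography H from shared landmarks, then use
# it to map points from the hidden camera's view into the user's
# view. This is the standard DLT, not necessarily the CMU code.

def estimate_homography(src, dst):
    """DLT estimate of H such that dst ~ H @ src (homogeneous)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of the stacked constraints.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    h = vt[-1]
    return (h / h[-1]).reshape(3, 3)

def warp_point(H, pt):
    """Apply homography H to a 2-D point."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w

# Four landmarks seen by both cameras; here the second view is
# simply shifted by (2, 3), so H should recover that translation.
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(2, 3), (3, 3), (3, 4), (2, 4)]
H = estimate_homography(src, dst)
print(warp_point(H, (0.5, 0.5)))  # ~ (2.5, 3.5)
```

Four non-collinear correspondences are the minimum; real systems would use many landmarks with a robust estimator to reject mismatches.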

Sheikh and his colleagues also had to develop software that transforms moving objects in the images to avoid distortion.

Ultimately, the team want to build the system into a car. An onboard video processor would tune into a wireless feed from a roadside camera with a view of the hidden scene, such as a stretch of road behind a blind corner, and project the image of the hidden scene onto the windscreen rather than a monitor.

The project is funded by Denso, a car parts manufacturer based in Kariya, Japan.

Future view:

"It's an interesting peek into the future," says Bruce Thomas of the University of South Australia in Adelaide. He points out that many cities already have networks of CCTV cameras that could provide footage of hidden scenes.

Such a network could be supplemented by images from cameras mounted on many cars, says Sheikh. The Carnegie Mellon team is working on software that integrates footage from such sources into the system.

But Thomas adds that several formidable hurdles will have to be cleared before the technology can be used on public highways. Fast, powerful data processing and communication would be required to make the system work usefully in a moving car in real time.



A BREATH OF FRESH AIR TRANSFORMS STEM CELLS
EXPOSURE TO AIR IS ENOUGH TO TURN STEM CELLS INTO LUNG CELLS

Mimicking the environment experienced by cells in the windpipe is enough to transform stem cells into a range of different lung cells. Such "physical" techniques could be used to create specialized tissues when growth factors alone aren't enough.

Lindsey Van Haute of the Free University of Brussels (VUB) in Belgium and colleagues spread human embryonic stem cells onto a porous membrane. The cells were fed from beneath by nutrients and covered from above by a fluid that encouraged them to multiply. Removing two chemicals from the growth fluid kick-started differentiation.

Four days later, the team removed the fluid covering the cells, leaving them open to the air while still being sustained and supported from below, as they would be in the trachea. After 24 days the cells had developed surface proteins that identified them as specific types of lung cell, including alveolar cells, which allow the exchange of gases, and ciliated cells, whose hair-like cilia expel bacteria and dirt.

Physical influence:

"Our study proves differentiation into lung cells is influenced by physical forces," says Van Haute. Previously, stem cells have been made to differentiate into lung cells using a cocktail of different growth factors, but Van Haute says using physical forces might be simpler.

"Physical forces are certainly a factor in getting the lung lining to be fully functional," says Anne Bishop at Imperial College London, who has made alveolar cells from mouse stem cells using growth factors alone. "But I find it difficult to believe that raw stem cells would differentiate through to these uncommon types of cells solely in response to physical forces."

Van Haute's team plans to use the cells it created to study lung diseases such as cystic fibrosis. Such cells might one day be used to treat people with damaged lung tissue, but only if they can be made from a person's own tissue. This could be done by first converting that tissue into induced pluripotent stem cells.
