Monday, October 20, 2014

'Green' solar cell is made from plants

To make super cheap solar cells, MIT researchers look to commandeer the process of photosynthesis in plants.


In a mashup of biology and electronics, researchers said they've made progress in making low-cost solar cells from plants.

A paper published in Scientific Reports today describes an improved method for making electricity-producing "biophotovoltaics" without the sophisticated laboratory equipment previously needed. Researchers said custom-designed chemicals could be mixed with green plants, even grass clippings, to create a photovoltaic material by harnessing photosynthesis.

"Take that bag (of chemicals), mix it with anything green and paint it on the roof," said MIT researcher Andreas Mershin, who is one of the paper's co-authors, in a statement. He imagines that this sort of cheap solar cell could be used by people in developing countries who don't have the power grid to charge lamps or cell phones.

The advance represents a 10,000 percent efficiency improvement over previous plant-based solar cells, but the cells are still far from practical: experimental cells made with this process convert only 0.1 percent of sunlight to electricity, a figure that would need to improve roughly tenfold to be useful, Mershin said.
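A quick back-of-the-envelope reading of those numbers (my arithmetic, not figures from the paper, and it takes "10,000 percent improvement" to mean roughly a 100-fold gain):

0.1% ÷ 100 ≈ 0.001% (the implied efficiency of earlier plant-based cells)
0.1% × 10 = 1% (the rough level Mershin suggests would make the cells practical)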

For years, scientists have sought to make solar cells from the set of molecules within plant cells that do the work of photosynthesis, called photosystem-I (PS-I). Until now, however, working with this material required specialized thin-film deposition and optical equipment, and the current produced was too low.


Mershin was able to create a workable solar cell using a combination of new materials that isolate the PS-I molecules and form an array of tiny zinc oxide nanowires, which carry the flow of current and provide a large surface area. These nanowires, which also provide structure for a multi-layered solar cell, can be grown at room temperature on a variety of flexible and inexpensive substrates, according to the paper.

"After many ears of research, we've managed to make the process of extracting this protein and stabilizing it and putting on a surface that is made in a way to allow for the photovoltaic effect to happen to be very easy," he said in a video provided by MIT.


In their paper, the researchers note a number of challenges for these "green" solar cells, including durability and efficiency. But the initial performance tests of the new technique offer a promising route for further research, they said. "Commandeering this intricately organized photosynthetic nanocircuitry and re-wiring it to produce electricity carries the promise of inexpensive and environmentally friendly solar power," according to the paper.

ORIGINAL: CNet
February 2, 2012 7:52 AM PST

Machine-Learning Maestro Michael Jordan on the Delusions of Big Data and Other Huge Engineering Efforts

Big-data boondoggles and brain-inspired chips are just two of the things we’re really getting wrong


The overeager adoption of big data is likely to result in catastrophes of analysis comparable to a national epidemic of collapsing bridges. 
Hardware designers creating chips based on the human brain are engaged in a faith-based undertaking likely to prove a fool’s errand. 
Despite recent claims to the contrary, we are no further along with computer vision than we were with physics when Isaac Newton sat under his apple tree.

Those may sound like the Luddite ravings of a crackpot who breached security at an IEEE conference. In fact, the opinions belong to IEEE Fellow Michael I. Jordan, Pehong Chen Distinguished Professor at the University of California, Berkeley. Jordan is one of the world’s most respected authorities on machine learning and an astute observer of the field. His CV would require its own massive database, and his standing in the field is such that he was chosen to write the introduction to the 2013 National Research Council report “Frontiers in Massive Data Analysis.” San Francisco writer Lee Gomes interviewed him for IEEE Spectrum on 3 October 2014.

Michael Jordan on…

1- Why We Should Stop Using Brain Metaphors When We Talk About Computing

IEEE Spectrum: I infer from your writing that you believe there’s a lot of misinformation out there about deep learning, big data, computer vision, and the like.

Michael Jordan: Well, on all academic topics there is a lot of misinformation. The media is trying to do its best to find topics that people are going to read about. Sometimes those go beyond where the achievements actually are. Specifically on the topic of deep learning, it’s largely a rebranding of neural networks, which go back to the 1980s. They actually go back to the 1960s; it seems like every 20 years there is a new wave that involves them. In the current wave, the main success story is the convolutional neural network, but that idea was already present in the previous wave. And one of the problems with the previous wave, one that has unfortunately persisted in the current wave, is that people continue to infer that something involving neuroscience is behind it, and that deep learning is taking advantage of an understanding of how the brain processes information, learns, makes decisions, or copes with large amounts of data. And that is just patently false.

Spectrum: As a member of the media, I take exception to what you just said, because it’s very often the case that academics are desperate for people to write stories about them.

Michael Jordan: Yes, it’s a partnership.

Spectrum: It’s always been my impression that when people in computer science describe how the brain works, they are making horribly reductionist statements that you would never hear from neuroscientists. You called these “cartoon models” of the brain.

Michael Jordan: I wouldn’t want to put labels on people and say that all computer scientists work one way, or all neuroscientists work another way. But it’s true that with neuroscience, it’s going to require decades or even hundreds of years to understand the deep principles. There is progress at the very lowest levels of neuroscience. But for issues of higher cognition—how we perceive, how we remember, how we act—we have no idea
  • how neurons are storing information,
  • how they are computing,
  • what the rules are,
  • what the algorithms are,
  • what the representations are,
  • and the like.
So we are not yet in an era in which we can be using an understanding of the brain to guide us in the construction of intelligent systems.

Spectrum: In addition to criticizing cartoon models of the brain, you actually go further and criticize the whole idea of “neural realism”—the belief that just because a particular hardware or software system shares some putative characteristic of the brain, it’s going to be more intelligent. What do you think of computer scientists who say, for example, “My system is brainlike because it is massively parallel”?

Wednesday, October 8, 2014

Stefan Hell (Nobel Prize in Chemistry 2014): STED - Insights into the nanoworld

Ok, so do you wonder what all the fuss is about with this year's Chemistry Nobel Prize? If so, this film will do the trick and tell you all about it! A quick introduction to Max Planck researcher Stefan Hell and his pioneering work in the field of ultra-high-resolution fluorescence microscopy.



Optical microscopes cannot distinguish between objects that are closer together than about 200 nanometers – about one two-hundredth of a hair's breadth. The reason for this is the wave nature of light, the half wavelength of which roughly corresponds to those 200 nanometers. The STED microscopy developed by Stefan Hell is the first optical microscope technology to go beyond this magic barrier, enabling researchers to gain fascinating insights into the nanoworld.
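For readers who want the number behind that barrier, here is a minimal sketch using the standard Abbe diffraction limit (the formula is not stated in the text above, so take the exact values as illustrative):

d = λ / (2 · NA)

where d is the smallest resolvable distance, λ the wavelength of the light and NA the numerical aperture of the objective lens. With visible light (λ ≈ 400-700 nanometers) and a good oil-immersion objective (NA ≈ 1.4), d comes out to roughly 150-250 nanometers, consistent with the roughly 200-nanometer limit described above. STED sidesteps this limit not by beating wave optics but by using a second, doughnut-shaped beam to switch off fluorescence everywhere except a spot much smaller than the diffraction limit.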

ORIGINAL: What's The Big Deal