Saturday, May 02, 2009

Researchers Construct Carbon Nanotube Device That Can Detect Colors of the Rainbow


Researchers at Sandia National Laboratories have created the first carbon nanotube device that can detect the entire visible spectrum of light, a feat that could soon allow scientists to probe single molecule transformations, study how those molecules respond to light, observe how the molecules change shapes, and understand other fundamental interactions between molecules and nanotubes.

Carbon nanotubes are long, thin cylinders composed entirely of carbon atoms. While their diameters are in the nanometer range (1-10 nm), they can be very long, up to centimeters in length.

The carbon-carbon bond is very strong, making carbon nanotubes very robust and resistant to any kind of deformation. To construct a nanoscale color detector, Sandia researchers took inspiration from the human eye, and in a sense, improved on the model.

When light strikes the retina, it initiates a cascade of chemical and electrical impulses that ultimately trigger nerve impulses. In the nanoscale color detector, light strikes a chromophore and causes a conformational change in the molecule, which in turn causes a threshold shift on a transistor made from a single-walled carbon nanotube.
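The threshold-shift readout described above can be sketched as a toy model. Everything numerical here is illustrative: the transfer curve shape, the threshold values, and the size of the shift are invented for the sketch, not taken from the paper.

```python
import math

def drain_current(v_gate, v_th, i_on=1e-6, slope=0.1):
    # Smooth, subthreshold-style transfer curve: current turns on as the
    # gate voltage passes the threshold v_th. Purely illustrative shape.
    return i_on / (1.0 + math.exp(-(v_gate - v_th) / slope))

V_TH_DARK = 0.50   # assumed threshold with the chromophore in its ground state (V)
DELTA_VTH = 0.15   # assumed shift after the light-driven conformational change (V)
V_READ = 0.50      # fixed gate bias used for readout (V)

i_dark = drain_current(V_READ, V_TH_DARK)
i_lit = drain_current(V_READ, V_TH_DARK - DELTA_VTH)

# The conformational change never touches the electronics directly; it only
# moves the threshold, which shows up as a current change at the fixed bias.
print(f"dark: {i_dark:.2e} A, illuminated: {i_lit:.2e} A")
```

Reading out current at a fixed bias is what turns a small molecular change into a large electrical signal, as Zhou notes below.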

“In our eyes the neuron is in front of the retinal molecule, so the light has to transmit through the neuron to hit the molecule,” says Sandia researcher Xinjian Zhou. “We placed the nanotube transistor behind the molecule—a more efficient design.”

Zhou and his Sandia colleagues François Léonard, Andy Vance, Karen Krafcik, Tom Zifer, and Bryan Wong created the device. The team recently published a paper, “Color Detection Using Chromophore-Nanotube Hybrid Devices,” in the journal Nano Letters.

The idea that carbon nanotubes are light sensitive has been around for a long time, but earlier efforts using an individual nanotube could detect light only in narrow wavelength ranges and at laser intensities. The Sandia team found that their nanodetector was orders of magnitude more sensitive, down to about 40 W/m², roughly 3 percent of the intensity of sunlight reaching the ground. “Because the dye is so close to the nanotube, a little change turns into a big signal on the device,” says Zhou.
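The quoted sensitivity can be sanity-checked against textbook irradiance figures. The two reference values below are standard approximations, not numbers from the paper.

```python
# 40 W/m^2 sensitivity (from the article) against standard irradiance figures.
SOLAR_CONSTANT = 1366.0   # W/m^2 at the top of the atmosphere (textbook value)
GROUND_PEAK = 1000.0      # W/m^2 rough clear-sky peak at the surface (textbook value)

DETECTOR_THRESHOLD = 40.0  # W/m^2, from the article

frac_constant = DETECTOR_THRESHOLD / SOLAR_CONSTANT
frac_ground = DETECTOR_THRESHOLD / GROUND_PEAK

print(f"vs solar constant: {frac_constant:.1%}, vs surface peak: {frac_ground:.1%}")
```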

The research is in its second year of internal Sandia funding and is based on Léonard’s collaboration with the University of Wisconsin to explain the theoretical mechanism of carbon nanotube light detection. Léonard literally wrote the book on carbon nanotubes—The Physics of Carbon Nanotubes, published September 2008.

Léonard says the project draws upon Sandia’s expertise in both materials physics and materials chemistry. He and Wong laid the groundwork with their theoretical research, with Wong completing the first-principles calculations that supported the hypothesis of how the chromophores were arranged on the nanotubes and how the chromophore isomerizations affected electronic properties of the devices.

To construct the device, Zhou and Krafcik first had to create a tiny transistor made from a single carbon nanotube. They deposited carbon nanotubes on a silicon wafer and then used photolithography to define electrical patterns to make contacts.

The final piece came from Vance and Zifer, who synthesized molecules to create three types of chromophores that respond to the red, green, or orange bands of the visible spectrum. Zhou immersed the wafer in the dye solution and waited a few minutes while the chromophores attached themselves to the nanotubes.

The team reached their goal of detecting visible light faster than they expected—they thought the entire first year of the project would be spent testing UV light. Now, they are looking to increase the efficiency by creating a device with multiple nanotubes.

“Detection is now limited to about 3 percent of sunlight, which isn’t bad compared with a commercially available digital camera,” says Zhou. “I hope to add some antennas to increase light absorption.”

A device made with multiple carbon nanotubes would be easier to construct and the resulting larger area would be more sensitive to light. A larger size is also more practical for applications.

Now, they are setting their sights on detecting infrared light. “We think this principle can be applied to infrared light and there is a lot of interest in infrared detection,” says Vance. “So we’re in the process of looking for dyes that work in infrared.”

This research eventually could be used for a number of exciting applications, such as an optical detector with nanometer scale resolution, ultra-tiny digital cameras, solar cells with more light absorption capability, or even genome sequencing. The near-term purpose, however, is basic science.

“A large part of why we are doing this is not to invent a photo detector, but to understand the processes involved in controlling carbon nanotube devices,” says Léonard.

The next step in the project is to create a nanometer-scale photovoltaic device. Such a device on a larger scale could be used as an unpowered photo detector or for solar energy. “Instead of monitoring current changes, we’d actually generate current,” says Vance. “We have an idea of how to do it, but it will be a more challenging fabrication process.”

--------------------------------------------------------------------------------

Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin company, for the U.S. Department of Energy’s National Nuclear Security Administration. With main facilities in Albuquerque, N.M., and Livermore, Calif.
Sandia has major R&D responsibilities in national security, energy and environmental technologies, and economic competitiveness.

Saturday, April 25, 2009

Vitamin D 'is a hormone'


A new study has suggested that vitamin D isn't really a vitamin at all -- it's actually a hormone made inside the body without any help from the sun.
An international team has carried out the study and concluded that the increase of vitamin D in our modern diets is based on a common belief which is actually a misconception with potential consequences.
“What we have confirmed with our recent research is that vitamin D is a hormone that is made by the body itself. Our body’s hormonal control system was being overwhelmed by the amount of external vitamin D,” lead researcher Prof Trevor G Marshall at Murdoch University in Australia said.
The researchers go on to explode another long-held belief about this secosteroid previously known as vitamin D. “You don't have to ingest any vitamin D in order to be perfectly healthy,” Prof Marshall said.
So no more need for expensive supplements, no more basking in the sun to put us in a better mood? And what about the thinking that suggests vitamin D is vital in the production of serotonin, an essential element in maintaining normal brain chemical function? “What we've shown is that all forms of vitamin D from outside the body are counterproductive to the body's own ability to regulate its internal production,” he said.

This conclusion doesn't mean a dramatic change of lifestyle where we must all suddenly shun the sun, but the researchers do acknowledge that people have only been at risk of vitamin D overexposure from about the time bikinis made an appearance.
“Historically the amount of sunshine which people have typically been getting was adequate, certainly up until the mid twentieth century when we started to do silly things like sunbathing and wearing bikinis, and before that time people were already sourcing enough vitamin D from everyday foods like fish, mushrooms and eggs,” Prof Marshall said.
The World’s First High-Performance, Environmentally Benign Power Generation Unit


If the device is applied to 30% of the cars in Japan, we can expect a 960,000kl oil substitution effect which is 1.5 times the assumed effect from solar energy generation of the year 2010.


A research team led by associate professor Tsutomu Iida (Tokyo University of Science - Faculty of Industrial Science and Technology – Materials Science and Technology) has developed a waste-heat-driven power generation unit made of magnesium silicide (Mg2Si) synthesized from a filtered by-product of Si-LSI and solar cell cast wafer production.

This device, the first of its kind in the world, was realized through successful bulk-quantity synthesis of the material while simultaneously increasing its thermoelectric conversion efficiency. Approximately 2,500 W/m² of power generation (per unit) and 3,000 hours of continuous operation have been achieved, sufficiently fulfilling the criteria for commercial use.

By applying this device to industrial shaft furnaces and car engines, a drastic reduction in fuel consumption can be expected, helping to prevent global warming. Partial implementation has already been scheduled as a practical-use trial, and there are high expectations for the application of this device in industrial furnaces nationwide.

Mg2Si Power Generation Unit Driven by Waste Heat - Capability & Effect: The device has an effective temperature range of 200°C to 600°C; hence, there is high hope for implementation in industrial furnaces, cars, and similar settings. When applied to gasoline-powered vehicles, 500~1,000 W of electrical energy can be recovered, increasing energy use efficiency.

If the device is applied to 30% of the cars in Japan, we can expect a 960,000 kl oil substitution effect, 1.5 times the assumed effect of solar energy generation in 2010. Also, if the device is implemented in industrial shaft furnaces, which currently have roughly 10% energy use efficiency, efficiency will increase by 1.5 times and CO2 emissions will be reduced by one-third. Iida’s research team is continuing work to further improve the energy conversion efficiency and durability of the device.
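As a rough illustration, the per-car recovery figures imply a modest module area at the quoted unit output. Only the 2,500 W/m² and 500~1,000 W figures come from the article; the area calculation below is a back-of-envelope sketch that assumes output scales linearly with area.

```python
UNIT_OUTPUT_W_PER_M2 = 2500.0          # per-unit output quoted in the article
PER_CAR_RECOVERY_W = (500.0, 1000.0)   # electrical recovery range quoted for cars

# Hypothetical active area needed to hit the per-car recovery range,
# assuming the unit output scales linearly with area.
low_area, high_area = (w / UNIT_OUTPUT_W_PER_M2 for w in PER_CAR_RECOVERY_W)
print(f"implied module area: {low_area:.2f}-{high_area:.2f} m^2")
```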

Background of R&D and Success of Material Development

Today, our main source of energy is fossil fuel. However, energy use efficiency has reached only about 30%, leaving 70% to be discarded as “waste heat.” Efficiently reusing this waste heat to reduce fossil fuel consumption and CO2 emissions has, from the perspective of preventing global warming, attracted attention worldwide. The spread of waste-heat power generation has faced many obstacles, such as scarcity of materials, cost, and toxicity. Iida’s research team focused on silicon, which is low-cost and environmentally friendly.

They successfully mass-produced magnesium silicide (Mg2Si) from the by-product of Si-LSI and solar cell cast wafer production, providing high performance in power generation driven by waste heat.

- Progress of Research -
Discovery of Magnesium silicide, material for Thermoelectric Conversion driven by Wasted Heat

Until now, the compound of lead and tellurium (Pb-Te) has been the known material for thermoelectric conversion of waste heat. However, with lead being hazardous and tellurium scarce, development of a new material with low environmental burden was hoped for. Iida’s research team pioneered the mass production of magnesium silicide from silicon, which is found abundantly on earth and is nonhazardous. Through this discovery of an environmentally benign material, an environmentally benign technology for thermoelectric conversion of waste heat was made possible.

Development of Magnesium silicide from Silicon by-product

Silicon, the main raw material for magnesium silicide, is widely known as a necessary ingredient for semiconductors throughout the electronics industry. However, producing ultrapure silicon consumes much energy, and more than half of the material comes out as silicon sludge, which is usually disposed of. This not only pollutes the environment but also raises the cost of production and hinders further development of new materials and technologies.

The research team succeeded in mass-producing thermoelectric conversion material from this waste product at a substantially low cost, making thermoelectric conversion of waste heat even more environmentally benign.

Profile - Tsutomu Iida
Background:
March 1995: Meiji University Graduate School – completed PhD course
April 1995: Japan Society for the Promotion of Science – Fellowship
July 1995: Volkswagen Foundation, Federal Republic of Germany
April 1997–present: Tokyo University of Science – Faculty of Industrial Science and Technology – Materials Science and Technology
Major Field: Semiconductor Material Engineering
Field of Research: Semiconductor Energy Material (Thermoelectric Material / Solar Cell Material) ‘Environmentally Friendly’ Semiconductor Material

Research Content: To address the mass consumption of fossil fuels and to help prevent global warming, the laboratory conducts research and development on energy conversion materials. With solar energy as the main source of renewable energy, it develops solar cell materials and thermoelectric conversion materials. Because many elemental devices used for energy conversion tend to be toxic, the laboratory continues to develop environmentally benign semiconductor energy materials, composed of semiconductors that exist abundantly on earth and are highly ‘earth-friendly’.

Research Topics:
1. Development of Thermoelectric Conversion Elemental Device through Magnesium silicide
2. Development of High Efficiency Solar Cells through Silicon Germanium
3. Photodecomposition of Water and Hydrogen Production via Semiconductor Photocatalysts

Website: http://web.mac.com/iida_lab/

Contact for this information -
Tokyo University of Science Technology Licensing Organization
Administrator: Niki
TEL: 03-5225-1089
E-mail: niki_tamotsu@admin.tus.ac.jp
Indus Script Encodes Language, Reveals New Study of Ancient Symbols

The Rosetta Stone allowed 19th century scholars to translate symbols left by an ancient civilization and thus decipher the meaning of Egyptian hieroglyphics.

But the symbols found on many other ancient artifacts remain a mystery, including those of a people that inhabited the Indus valley on the present-day border between Pakistan and India. Some experts question whether the symbols represent a language at all, or are merely pictograms that bear no relation to the language spoken by their creators.

A University of Washington computer scientist has led a statistical study of the Indus script, comparing the pattern of symbols to various linguistic scripts and nonlinguistic systems, including DNA and a computer programming language. The results, published online Thursday by the journal Science, found the Indus script's pattern is closer to that of spoken words, supporting the hypothesis that it codes for an as-yet-unknown language.

"We applied techniques of computer science, specifically machine learning, to an ancient problem," said Rajesh Rao, a UW associate professor of computer science and engineering and lead author of the study. "At this point we can say that the Indus script seems to have statistical regularities that are in line with natural languages."

Co-authors are Nisha Yadav and Mayank Vahia at the Tata Institute of Fundamental Research in Mumbai, India; Hrishikesh Joglekar, a software engineer from Mumbai; R. Adhikari at the Institute of Mathematical Sciences in Chennai, India; and Iravatham Mahadevan at the Indus Research Center in Chennai. The research was supported by the Packard Foundation and the Sir Jamsetji Tata Trust.

The Indus people were contemporaries of the Egyptian and Mesopotamian civilizations, inhabiting the Indus river valley in present-day eastern Pakistan and northwestern India from about 2600 to 1900 B.C. This was an advanced, urbanized civilization that left written symbols on tiny stamp seals, amulets, ceramic objects and small tablets.

"The Indus script has been known for almost 130 years," said Rao, an Indian native with a longtime personal interest in the subject. "Despite more than 100 attempts, it has not yet been deciphered. The underlying assumption has always been that the script encodes language."

In 2004 a provocative paper titled “The Collapse of the Indus-Script Thesis” claimed that the short inscriptions have no linguistic content and are merely brief pictograms depicting religious or political symbols. That paper's lead author offered a $10,000 reward to anybody who could produce an Indus artifact with more than 50 symbols.

Taking a scientific approach, the U.S.-Indian team of computer scientists and mathematicians looked at the statistical patterns in sequences of Indus symbols. They calculated the amount of randomness allowed in choosing the next symbol in a sequence. Some nonlinguistic systems display a random pattern, while others, such as pictures that represent deities, follow a strict order that reflects some underlying hierarchy. Spoken languages tend to fall between the two extremes, incorporating some order as well as some flexibility.

The new study compared a well-known compilation of Indus texts with linguistic and nonlinguistic samples. The researchers performed calculations on present-day texts of English; texts of the Sumerian language spoken in Mesopotamia during the time of the Indus civilization; texts in Old Tamil, a Dravidian language originating in southern India that some scholars have hypothesized is related to the Indus script; and ancient Sanskrit, one of the earliest members of the Indo-European language family. In each case the authors calculated the conditional entropy, or randomness, of the symbols' order.

They then repeated the calculations for samples of symbols that are not spoken languages: one in which the placement of symbols was completely random; another in which the placement of symbols followed a strict hierarchy; DNA sequences from the human genome; bacterial protein sequences; and an artificially created linguistic system, the computer programming language Fortran.
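The entropy measure at the heart of the comparison can be sketched in a few lines. This is a simplified bigram estimate on made-up sequences; the study's actual analysis is more elaborate, working over a corpus of Indus texts with statistical smoothing.

```python
import random
from collections import Counter
from math import log2

def conditional_entropy(seq):
    # H(next symbol | current symbol), in bits, estimated from bigram counts.
    pairs = list(zip(seq, seq[1:]))
    pair_counts = Counter(pairs)
    first_counts = Counter(a for a, _ in pairs)
    n = len(pairs)
    return -sum(
        (c / n) * log2(c / first_counts[a])
        for (a, b), c in pair_counts.items()
    )

# A rigidly ordered system: the next symbol is fully determined (entropy 0).
rigid = "ABCD" * 50

# A random system: the next symbol is unconstrained (entropy near log2(4) = 2).
random.seed(0)
rand = "".join(random.choice("ABCD") for _ in range(2000))

# Languages fall between these extremes; note that comparisons across systems
# with different alphabet sizes need normalization, as in the study.
print(conditional_entropy(rigid), conditional_entropy(rand))
```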

Results showed that the Indus inscriptions fell in the middle range occupied by the spoken languages and differed from all of the nonlinguistic systems.

If the Indus symbols are a spoken language, then deciphering them would open a window onto a civilization that lived more than 4,000 years ago. The researchers hope to continue their international collaboration, using a mathematical approach to delve further into the Indus script.

"We would like to make as much headway as possible and ideally, yes, we'd like to crack the code," Rao said. "For now we want to analyze the structure and syntax of the script and infer its grammatical rules. Someday we could leverage this information to get to a decipherment, if, for example, an Indus equivalent of the Rosetta Stone is unearthed in the future."


More information about the Indus civilization and language is at http://www.harappa.com

Thursday, April 23, 2009

New Family of Proteins - TPC2

International research collaborators have identified a new family of proteins, the two-pore channels (TPCs), that facilitates calcium signaling from specialized subcellular organelles. The study is the first to identify TPC2 as a channel that binds the nucleotide nicotinic acid adenine dinucleotide phosphate (NAADP), a second messenger, resulting in the release of calcium from intracellular stores. According to the researchers, this discovery may have broad implications for cell biology and human disease research.

“The discovery was the result of many researchers working as one international team toward a unified outcome. We are very appreciative of all the collaborators’ efforts,” said Jianjie Ma, PhD, professor of physiology and biophysics at UMDNJ-Robert Wood Johnson Medical School. “We are proud to be part of a study that will stand as the foundation for further exploration of human disease, helping researchers to better understand how calcium contributes to cell growth and disorders, including aging-related cardiac disease, diabetes, lysosomal cell dysfunction and the metastasis of cells in cancer.”

According to the researchers, the mechanism by which NAADP triggers the release of calcium, as well as the specific calcium stores targeted for release, was previously unknown. These findings indicate that NAADP, through its interaction with TPC2, targets a specific store of calcium in lysosomes, specialized compartments within the cell that contain digestive enzymes and help regulate cell function.

The study was a collaboration of investigative teams at four universities, including the laboratory of Dr. Michael Zhu at the Ohio State University, the laboratory of Dr. A. Mark Evans at the University of Edinburgh and the laboratory of Dr. Antony Galione at the University of Oxford.

The research was supported by grants from the United Kingdom’s Wellcome Trust and the British Heart Foundation, the United States’ National Institutes of Health, and the American Heart Association.

UMDNJ-ROBERT WOOD JOHNSON MEDICAL SCHOOL
As one of the nation’s leading comprehensive medical schools, Robert Wood Johnson Medical School of the University of Medicine and Dentistry of New Jersey is dedicated to the pursuit of excellence in education, research, health care delivery, and the promotion of community health. In cooperation with Robert Wood Johnson University Hospital, the medical school’s principal affiliate, they comprise New Jersey’s premier academic medical center. In addition, Robert Wood Johnson Medical School has 34 hospital affiliates and ambulatory care sites throughout the region.

As one of the eight schools of the University of Medicine and Dentistry of New Jersey with 2,800 full-time and volunteer faculty, Robert Wood Johnson Medical School encompasses 22 basic science and clinical departments and hosts centers and institutes including The Cancer Institute of New Jersey, the Child Health Institute of New Jersey, the Center for Advanced Biotechnology and Medicine, the Environmental and Occupational Health Sciences Institute, and the Stem Cell Institute of New Jersey. The medical school maintains educational programs at the undergraduate, graduate and postgraduate levels for more than 1,500 students on its campuses in New Brunswick, Piscataway, and Camden, and provides continuing education courses for health care professionals and community education programs.

Monday, April 20, 2009

How genes are controlled in mammals

The international FANTOM consortium announces publication of three milestone papers in the prestigious journal Nature Genetics that will challenge current notions of how genes are controlled in mammals.

FANTOM, or Functional Annotation of the Mammalian cDNA, which is organized by the RIKEN Omics Science Center (OSC), includes leading scientists in Australia, Switzerland, Norway, South Africa, Sweden, Canada, Denmark, Italy, Germany, Singapore, the UK, and the United States. The consortium has been providing the scientific community with extensive databases on the mammalian genome that describe molecular function, biology, and cell components.

FANTOM has become a world authority on the mammalian transcriptome, the set of all messenger RNA showing active genetic expression at one point in time. Among its major discoveries are that approximately 70% of the genome is transcribed and that more than half of the expressed genes are likely non-coding RNAs (ncRNAs) that do not code for proteins; thus, the prevailing theory that only 2% of the genome is transcribed into protein-coding mRNA needed to be reexamined.

Now in its fourth stage, FANTOM4, led by OSC’s Dr. Yoshihide Hayashizaki, has over three years of painstaking research developed a novel technology for producing a genome-wide promoter expression profile, established a mathematical scheme for describing the data obtained, and extracted key genomic elements that play dominant roles in the maintenance of cellular conditions.

In the current research, OSC broadened its original CAGE (Cap Analysis of Gene Expression) technology to create deepCAGE, which takes advantage of next-generation sequencing both to precisely identify transcription start sites genome-wide and to quantify the expression of each start site. The deepCAGE technology was applied to a differentiating acute myeloid leukemia cell line to provide genome-wide time-course dynamics of expression at the level of individual promoters, the specific DNA sequences providing binding sites for RNA polymerase and the protein transcription factors that recruit it. The consortium built a quantitative model of the genome-wide gene expression dynamics that identified the key regulatory motifs driving the differentiation, the time-dependent activities of the transcription regulators binding those motifs, and the genome-wide target promoters of each motif.
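The kind of quantitative model described above, promoter expression driven by time-dependent activities of motif-binding regulators, can be sketched as a linear model fit by least squares. The data below are synthetic and the setup is heavily simplified; the consortium's actual model includes normalization and regularization not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

n_promoters, n_motifs, n_times = 200, 5, 6
# N[p, m]: number of sites for motif m in promoter p (synthetic counts).
N = rng.poisson(1.0, size=(n_promoters, n_motifs)).astype(float)
# A_true[m, t]: hidden time-dependent activity of each motif's regulator.
A_true = rng.normal(0.0, 1.0, size=(n_motifs, n_times))

# Promoter expression over the time course: site counts times activities, plus noise.
E = N @ A_true + rng.normal(0.0, 0.1, size=(n_promoters, n_times))

# Infer the time-dependent motif activities from expression and site counts.
A_fit, *_ = np.linalg.lstsq(N, E, rcond=None)

print("max abs error in recovered activities:",
      float(np.abs(A_fit - A_true).max()))
```

With many promoters per motif, the hidden activities are recovered accurately even in the presence of noise, which is what makes a genome-wide promoter-level readout like deepCAGE so informative.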

Validation of the model was performed by knocking down each transcription factor with small interfering RNAs. This first report of a large-scale gene network based on experimental data is certain to generate much excitement in the scientific community. This information is also important for life science and medical researchers who are trying to uncover the processes by which cells undergo conversion or become cancerous, and for those attempting to determine how to control the growth and differentiation of stem cells and ensure their safety for use in regenerative medicine. Dr. Harukazu Suzuki, the scientific coordinator of the consortium, had this to say: “We are proud that we have created groundbreaking research in understanding more about how genes regulate cells at the molecular level and we want to acknowledge all consortium members for their great contribution to the research effort.”

The FANTOM consortium has also expanded earlier discoveries of transcriptional complexity by exploring repetitive elements found throughout mammalian genomes with deepCAGE. These elements, which constitute up to half of the genome, have generally been considered junk or parasitic DNA. However, the team has found that the repetitive elements are broadly expressed and that 6 to 30% of mouse and human mRNAs are derived from repetitive element promoters. These RNAs are often tissue-specific and dynamically controlled, and they regulate the output of the genome through a variety of mechanisms. The FANTOM4 collaborators have also identified yet another type of short RNA, referred to as tiRNA (transcription initiation RNA) or tiny RNAs, in humans, chickens, and Drosophila. They are about 18 nucleotides (nt) in length, are found within -60 to +120 nt of transcription start sites, and may be widespread in metazoans (animals). A BioMed Central thematic series features even more FANTOM4 research papers in Genome Biology and several BMC journals.

Contact:

RIKEN Omics Science Center
Director: Yoshihide Hayashizaki
Project director: Harukazu Suzuki

TEL: +81-45-503-2222 FAX: +81-45-503-9216

Wednesday, April 01, 2009

Distinguishing Single Cells With Nothing But Light

Researchers at the University of Rochester have developed a novel optical technique that permits rapid analysis of single human immune cells using only light.

Availability of such a technique means that immunologists and other cellular researchers may soon be able to observe the responses of individual cells to various stimuli, rather than relying on aggregate statistical data from large cell populations. Until now, scientists have not had a non-invasive way to see how human cells, such as T cells or cancer cells, activate individually and evolve over time.

As reported today in a special biomedical issue of Applied Optics, this is the first time clear differences between two types of immune cells have been seen using a microscopy system that gathers chemical and structural information by combining two previously distinct optical techniques, according to senior author Andrew Berger, associate professor of optics at the University of Rochester.

Berger and his graduate student Zachary Smith are the first to integrate Raman and angular-scattering microscopy into a single system, which they call IRAM.

“Conceptually it’s pretty straightforward—you shine a specified wavelength of light onto your sample and you get back a large number of peaks spread out like a rainbow,” says Berger. “The peaks tell you how the molecules you’re studying vibrate and together the vibrations give you the chemical information.”

According to Smith, “Raman spectroscopy is essentially an easy way to get a fingerprint from the molecule.”

Structural information is simultaneously gathered by examining the angles at which light incident on the sample is deflected from its original course.

Together the chemical and structural information provide the data needed to classify and distinguish between two different, single cells. Berger and Smith verified this by looking at single granulocytes—a type of white blood cell—and peripheral blood monocytes.

“One of the big plusses with our system is that it’s a non-labeling approach for studying living cells,” says Berger.

IRAM differs from most standard procedures, in which markers are inserted into, or attached to, cells. If a marker sticks to one cell and not another, you can tell which cell is which on the basis of specific binding properties.

While markers are often adequate for studying cells at a single point in time, monitoring a cell over time as it changes is more problematic, since the marker can affect dynamic cell activities, like membrane transport. And internal markers actually involve punching holes in the membrane, damaging or killing the cell in the process.

“Our method uses only light to effectively reach inside the cell,” says Smith. “We can classify internal differences in the cell without opening it up, attaching anything to it, or preparing it in any special way. It’s really just flipping a switch.”

Despite being relatively intense, the light used with IRAM does not harm or inhibit normal cell function, because its wavelength can be precisely calibrated to minimize absorption by the cells. The near-infrared spectrum has proven particularly suitable, allowing almost all of the light to pass through the cells.

With the availability of a technique where making a measurement does not alter cellular activity, scientists will be able to better observe individual cell responses to stimuli, which Berger and Smith suspect may have far reaching implications for current understandings of cell activation and development.

“In the cell sensing community it’s currently a pretty hot area to figure out how to analyze activation responses on a cell-by-cell basis,” says Berger. “If individual information was available on top of existing ensemble data, you’d have a richer understanding of immune responses.”

Perfecting IRAM has been a stepping-stone process so far. Now that individual cells can be distinguished, Berger and Smith are actively investigating activation processes more explicitly. Preliminary IRAM experiments conducted on T cells have revealed perceptible differences between the initial resting state of a T cell and its state following an encounter with an invader.

The next step will be to use IRAM to gather data continuously so that scientists can effectively watch single cells undergo activation and react to stimuli in real-time. The ability to know not only about the aggregate responses of cells, but also be able to observe the earliest changes among individual cells, may be of profound importance in time-critical areas, such as cancer research and immunology.

“There’s an obvious desire among cell researchers to be able to deliver a controlled stimulant to a single cell and then study its response over time,” says Berger. “The clinical insights that might arise are currently in the realm of speculation. We won’t know until we can do it—and now we can.”

Saturday, March 21, 2009

New Systems for Storing Electrical Energy
In order to save money and energy, many people are purchasing hybrid electric cars or installing solar panels on the roofs of their homes. But both have a problem -- the technology to store the electrical power and energy is inadequate.

Battery systems that fit in cars don't hold enough energy for long driving distances, take hours to recharge, and don't deliver much power for acceleration. Renewable sources like solar and wind deliver significant power only part of the time, but devices to store their energy are expensive and too inefficient to deliver enough power for surge demand.

Researchers at the Maryland NanoCenter at the University of Maryland have developed new systems for storing electrical energy derived from alternative sources that are, in some cases, 10 times more efficient than what is commercially available. The results of their research are available in the latest issue of Nature Nanotechnology.

"Renewable energy sources like solar and wind provide time-varying, somewhat unpredictable energy supply, which must be captured and stored as electrical energy until demanded," said Gary Rubloff, director of the University of Maryland's NanoCenter. "Conventional devices to store and deliver electrical energy -- batteries and capacitors -- cannot achieve the needed combination of high energy density, high power, and fast recharge that are essential for our energy future."

Researchers working with Professor Rubloff and his collaborator, Professor Sang Bok Lee, have developed a method to significantly enhance the performance of electrical energy storage devices.

Using new processes central to nanotechnology, they create millions of identical nanostructures with shapes tailored to transport energy as electrons rapidly to and from very large surface areas where they are stored. Materials behave according to physical laws of nature. The Maryland researchers exploit unusual combinations of these behaviors (called self-assembly, self-limiting reaction, and self-alignment) to construct millions -- and ultimately billions -- of tiny, virtually identical nanostructures to receive, store, and deliver electrical energy.

"These devices exploit unique combinations of materials, processes, and structures to optimize both energy and power density -- combinations that, taken together, have real promise for building a viable next-generation technology, and around it, a vital new sector of the tech economy," Rubloff said.

"The goal for electrical energy storage systems is to simultaneously achieve high power and high energy density to enable the devices to hold large amounts of energy, to deliver that energy at high power, and to recharge rapidly (the complement to high power)," he continued.

Electrical energy storage devices fall into three categories. Batteries, particularly lithium ion, store large amounts of energy but cannot provide high power or fast recharge. Electrochemical capacitors (ECCs), also relying on electrochemical phenomena, offer higher power at the price of relatively lower energy density. In contrast, electrostatic capacitors (ESCs) operate by purely physical means, storing charge on the surfaces of two conductors. This makes them capable of high power and fast recharge, but at the price of lower energy density.

The Maryland research team's new devices are electrostatic nanocapacitors that dramatically increase the energy storage density of such devices -- by a factor of 10 over commercially available devices -- without sacrificing the high power they characteristically offer. This advance brings electrostatic devices to a performance level competitive with electrochemical capacitors and introduces a new player into the field of candidates for next-generation electrical energy storage.
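The electrostatic gain can be made concrete with the textbook capacitor relations E = ½CV² and C = ε₀ε<sub>r</sub>A/d: stored energy scales linearly with electrode surface area, which is exactly what nanostructuring multiplies. A back-of-envelope sketch with purely illustrative numbers (not the Maryland devices' specifications):

```python
# Back-of-envelope energy in an electrostatic capacitor, using the
# textbook relations E = (1/2) C V^2 and C = eps0 * eps_r * A / d.
# All numbers below are illustrative, not measured device values.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitor_energy(area_m2, gap_m, eps_r, volts):
    """Energy in joules stored on a parallel-plate capacitor."""
    capacitance = EPS0 * eps_r * area_m2 / gap_m  # farads
    return 0.5 * capacitance * volts ** 2

# Nanostructuring multiplies the effective electrode area within the
# same footprint; 10x the area gives 10x the stored energy.
flat = capacitor_energy(1e-4, 1e-7, 10.0, 5.0)       # ~1 cm^2 footprint
nano = capacitor_energy(10 * 1e-4, 1e-7, 10.0, 5.0)  # 10x effective area
print(nano / flat)  # energy ratio -> 10, linear in area
```

Because the mechanism is purely physical charge storage, power and recharge speed are not traded away as the area, and hence the energy density, grows.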

Where will these new nanodevices appear? Lee and Rubloff emphasize that they are developing the technology for mass production as layers of devices that could look like thin panels, similar to solar panels or the flat panel displays we see everywhere, manufactured at low cost. Multiple energy storage panels would be stacked together inside a car battery system or solar panel. In the longer run, they foresee the same nanotechnologies providing new energy capture technology (solar, thermoelectric) that could be fully integrated with storage devices in manufacturing.

This advance follows soon after another accomplishment by Lee's research group: a dramatic improvement in the performance (energy and power) of electrochemical capacitors (ECCs), also known as 'supercapacitors,' published recently in the Journal of the American Chemical Society (Figure 1). Efforts are under way to achieve comparable advances in the energy density of lithium-ion (Li-ion) batteries, but with much higher power density.

"The University of Maryland's successes are built upon the convergence and collaboration of experts from a wide range of nanoscale science and technology areas with researchers already in the center of energy research," Rubloff said.

The Research Team

Gary Rubloff is Minta Martin Professor of Engineering in the materials science and engineering department and the Institute for Systems Research at the University of Maryland's A. James Clark School of Engineering. Sang Bok Lee is associate professor in the Department of Chemistry and Biochemistry at the College of Chemical and Life Sciences and WCU (World Class University Program) professor at KAIST (Korea Advanced Institute of Science and Technology) in Korea. Lee and Rubloff are part of a larger team developing nanotechnology solutions for energy capture, generation, and storage at Maryland. Their collaborators on electrical energy storage include Maryland professors Michael Fuhrer (physics), associate director of the Maryland Nanocenter Reza Ghodssi (electrical and computer engineering), John Cumings (materials science engineering), Ray Adomaitis (chemical and biomolecular engineering), Oded Rabin (materials science and engineering), Janice Reutt-Robey (chemistry), Robert Walker (chemistry), Chunsheng Wang (chemical and biomolecular engineering), Yu-Huang Wang (chemistry) and Ellen Williams (physics), director of the Materials Research Science and Engineering Center at the University of Maryland.
Why are nanorods so small?
A new study answers a key question at the very heart of nanotechnology: Why are nanorods so small?

Researchers at Rensselaer Polytechnic Institute have discovered the origins of nanorod diameter, demonstrating that the competition and collaboration among various mechanisms of atomic transport hold the key to nanorod size. The researchers say it is the first study to identify the fundamental reasons why nearly all nanorods have a diameter on the order of 100 nanometers.

“Scientists have been fabricating nanorods for decades, but no one has ever answered the question, ‘Why is that possible?’” said Hanchen Huang, professor in Rensselaer’s Department of Mechanical, Aerospace, and Nuclear Engineering, who led the study. “We have used computer modeling to identify, for the first time, the fundamental reasons behind nanorod diameter. With this new understanding, we should be able to better control nanorods, and therefore design better devices.”

Results of the study, titled “A characteristic length scale of nanorod diameter during growth,” were recently published in the journal Physical Review Letters.

When fabricating nanorods, atoms are released at an oblique angle onto a surface, and the atoms accumulate and grow into nanorods about 100 nanometers in diameter. A nanometer is one billionth of a meter in length.

The accumulating atoms form small layers. After being deposited onto a layer, it takes varying amounts of energy for atoms to travel or “step” downward to a lower layer, depending on the step height. In a previous study, Huang and colleagues calculated and identified these precise energy requirements. As a result, the researchers discovered the fundamental reason nanorods grow tall: as atoms are unable to step down to the next lowest layer, they begin to stack up and grow higher.

It is the cooperation and competition of atoms in this process of multi-layer diffusion that accounts for the fundamental diameter of nanorods, Huang shows in the new study. The rate at which atoms are being deposited onto the surface, as well as the temperature of the surface, also factor into the equation.

“Surface steps are effective in slowing down the mass transport of surface atoms, and aggregated surface steps are even more effective,” Huang said. “This extra effectiveness makes the diameter of nanorods around 100 nanometers; without it the diameter would go up to 10 microns.”

Beyond advancing scientific theory, Huang said the discovery could have implications for developing photonic materials and fuel cell catalysts.

Huang co-authored the paper with Rensselaer Research Scientist Longguang Zhou.

Funding for this research was provided by the U.S. Department of Energy Office of Basic Energy Science.

Tuesday, February 17, 2009

What’s Feeding Cancer Cells?

Cancer cells need a lot of nutrients to multiply and survive. While much is understood about how cancer cells use blood sugar to make energy, not much is known about how they get other nutrients. Now, researchers at the Johns Hopkins University School of Medicine have discovered how the Myc cancer-promoting gene uses microRNAs to control the use of glutamine, a major energy source. The results, which shed light on a new aspect of cancer that might help scientists figure out a way to stop the disease, appear online Feb. 15 in Nature.

“While we were looking for how Myc promotes cancer growth, it was unexpected to find that Myc can increase use of glutamine by cancer cells,” says Chi V. Dang, M.D., Ph.D., the Johns Hopkins Family Professor of Oncology at Johns Hopkins. “This surprising discovery only came about after scientists from several disciplines came together across Hopkins to collaborate — it was a real team effort.”

In their search to learn how Myc promotes cancer, the researchers teamed up with protein experts, and using human cancer cells with Myc turned on or off, they looked for proteins in the cell’s powerhouse — the mitochondria — that appeared to respond to Myc. They found eight proteins that were distinctly turned up in response to Myc.

At the top of the list of mitochondrial proteins that respond to Myc was glutaminase, or GLS, which, according to Dang, is the first enzyme that processes glutamine and feeds chemical reactions that make cellular energy. So the team then asked if removing GLS could stop or slow cancer cell growth. Compared to cancer cells with GLS, those lacking GLS grew much slower, which led the team to conclude that yes, GLS does affect cell growth stimulated by Myc.

The researchers then wanted to figure out how Myc enhances GLS protein expression. Because Myc can control and turn on genes, the team guessed that Myc might directly turn on the GLS gene, but they found that wasn’t the case. “So then we thought, maybe there’s an intermediary, maybe Myc controls something that in turn controls GLS,” says Ping Gao, Ph.D., a research associate in hematology at Johns Hopkins.

They then built on previous work done with the McKusick-Nathans Institute of Genetic Medicine at Hopkins where they discovered that Myc turns down some microRNAs, small bits of RNA that can bind to and inhibit RNAs, which contain instructions for making proteins. The team looked more carefully at the GLS RNA and found that it could be bound and regulated by two microRNAs, called miR23a and miR23b, pointing to the microRNAs as the intermediary that links Myc to GLS expression.

“Next we want to study GLS in mice to see if removing it can slow or stop cancer growth,” says Gao. “If we know how cancer cells differ from normal cells in how they make energy and use nutrients, we can identify new pathways to target for designing drugs with fewer side effects.”

This study was funded by the National Institutes of Health, the National Cancer Institute, the Rita Allen Foundation, the Leukemia and Lymphoma Society and the Sol Goldman Center for Pancreatic Cancer Research.

Authors on the paper are Ping Gao, Irina Tchernyshyov, Tsung-Cheng Chang, Yun-Sil Lee, Karen Zeller, Angelo De Marzo, Jennifer Van Eyk, Joshua Mendell and Chi V. Dang, of Johns Hopkins; and Kayoko Kita and Takafumi Ochi of Teikyo University in Japan.

Wednesday, January 21, 2009

This image illustrates the new "colorimetric technique" developed by researchers at Florida Atlantic University to map four dimensions (4D) of brain data using EEG signals at once. Using this fourth dimension will dramatically change the way neuroscientists are able to understand how the brain operates, shedding insight on a number of psychiatric and neurological disorders and opening up new ways to study therapeutic interventions, in particular the effects of drugs.

Groundbreaking Technique Reveals Modus Operandi of the Intact Living Brain

Dynamical Theory and Novel 4D Colorimetric Method Reveal the Essential Modus Operandi of the Intact Living Brain--Study shows how areas in the brain integrate and segregate at the same time.

For the brain to achieve its intricate functions such as perception, action, attention and decision making, neural regions have to work together yet still retain their specialized roles. Excess or lack of timely coordination between brain areas lies at the core of a number of psychiatric and neurological disorders such as epilepsy, schizophrenia, autism, Parkinson’s disease, sleep disorders and depression. How the brain is coordinated is a complex and difficult problem in need of new theoretical insights as well as new methods of investigation.

In groundbreaking research published in the January 2009 issue and featured on the cover of Progress in Neurobiology, researchers at Florida Atlantic University’s Center for Complex Systems and Brain Sciences in the Charles E. Schmidt College of Science propose a theoretical model of the brain’s coordination dynamics and apply a novel 4D colorimetric method to human neurophysiological data collected in the laboratory. The article, titled “Brain coordination dynamics: true and false faces of phase synchrony and metastability,” is co-authored by Drs. Emmanuelle Tognoli, an expert in neurophysiology and research assistant professor in the Center’s Human Brain and Behavior Laboratory, and J. A. Scott Kelso, the Glenwood and Martha Creech Eminent Scholar in Science and founder of the Center.

The authors’ theory and data show that both tendencies co-occur in the brain and are essential for its normal function. Their research demonstrates that coordination involves a subtle kind of ballet in the brain, and like dancers, cortical areas are capable of coming together as an ensemble (integration) while still exhibiting a tendency to do their own thing (segregation).

“A lot of emphasis in neuroscience these days is on what the parts do,” said Kelso. “But understanding the coordination of multiple parts in a complex system such as the brain is a fundamental challenge. Using our approach, key predictions of cortical coordination dynamics can now be tested, thereby revealing the essential modus operandi of the intact living brain.”

Tognoli and Kelso developed a novel colorimetric technique that simultaneously maps four dimensions of brain data (magnitude, 2D of cortical surface and time) in order to capture true synchronization in electroencephalographic (EEG) signals. Because of the fourth dimension afforded by this colorimetric method, it is possible to observe and interpret oscillatory activity of the entire brain as it evolves in time, millisecond by millisecond. Moreover, the authors’ method applies to continuous non-averaged EEG data thereby de-emphasizing the notion of “an average brain.” The authors demonstrate that only in continuous EEG can real synchronization be sorted from false synchronization – a kind of synchronization that arises from the spread of electrical fields and volume conduction rather than from genuine interactions between brain areas.

Most of the time, activity from multiple brain areas looks coordinated; in actuality, however, there is far less synchrony than there appears to be. With the support of mathematical models that reproduce the biases of real brain records in synthetic data, the authors show how to tell real and false episodes of synchronization apart. For the first time, true episodes of brain coordination can be spotted directly in EEG records and carefully analyzed.

In addition to shedding insight on the way the brain normally operates, Tognoli and Kelso’s research provides a much-needed framework to understand the coordination dynamics of brain areas in a variety of pathological conditions. Their approach allows a precise parsing of “brain states” and is likely to open up new ways to study therapeutic interventions, in particular the effects of drugs (pharmaco-dynamics). Their approach will also help improve the design of brain computer interfaces used to help people who are paralyzed.

“In the future, it may be possible to fluently read the processes of the brain from the EEG like one reads notes from a musical score,” said Tognoli. “Our technique is already providing a unique view on brain dynamics. It shows how activity grows and dies in individual brain areas and how multiple areas engage in and disengage from working together as a coordinated team.”

In addition to simple linear synchronization between brain areas, the authors describe more subtle modes of coordination during which areas may cooperate (integrate) and at the same time retain their functional specificity (segregation).

“This property of metastability falls out of our theory and is crucial for the brain,” said Kelso. “The brain is a complex nonlinear dynamical system, and it needs to coordinate the activity of diverse and remotely connected parts in order to extract and communicate meaningful information.”

Tognoli points out that subtle regimes of coordination are advantageous for the brain and are faster, more powerful and less energetically costly, thereby creating rich modes of interaction that surpass those of simple linear modes of coordination.

For a long time, scientists have strictly emphasized one kind of synchronization called ‘inphase’ or ‘zero-lag synchrony’ looking only at who is coordinated with whom and not observing the details of how they are coordinated. Through their research, Tognoli and Kelso have shown that the brain uses a much wider repertoire of synchronization patterns than just inphase. For example, brain areas may lock their oscillations together but keep a different phase.

This characteristic is also a key to the brain’s dynamic complexity. Areas may encode distinct information when they coordinate with one phase difference or another, and the brain may finely tune itself, as in learning, by altering the lag at which its areas coordinate rather than simply switching synchrony on and off. Such a brain would have far greater combinatorial and computational power than the old model of the ‘inphase brain’. But to understand the principles at work, the lag, or ‘relative phase,’ between coupled oscillations in the brain needs to be systematically studied.
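The “locked but lagged” coordination described here can be quantified as a relative phase between two oscillations. A minimal sketch with synthetic signals, using the Hilbert transform to extract instantaneous phase (an illustration of the concept, not the authors' EEG method):

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic example: two 10 Hz oscillations locked with a constant
# 90-degree (pi/2) lag -- coordinated, but not "inphase".
fs = 1000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 10 * t)
y = np.sin(2 * np.pi * 10 * t - np.pi / 2)

# Instantaneous phase of each signal from its analytic signal.
phase_x = np.angle(hilbert(x))
phase_y = np.angle(hilbert(y))

# Relative phase, wrapped to (-pi, pi]; edges trimmed to avoid
# Hilbert-transform boundary artifacts.
dphi = np.angle(np.exp(1j * (phase_x - phase_y)))[200:-200]

print(np.mean(dphi))  # approximately pi/2: phase-locked, non-zero lag
```

A near-constant, non-zero relative phase like this is exactly the kind of synchrony an inphase-only analysis would miss.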

“This work lies at the intersection of neuroscience and complexity science,” said Dr. Gary Perry, dean of the Charles E. Schmidt College of Science. “Drs. Kelso and Tognoli have successfully developed the specific conceptual and methodological tools needed to capture and observe these important features in empirical data. Their unique approach and findings will help to shed light on some of the world’s most debilitating and costly health disorders.”

The authors’ research is supported by the National Institute of Mental Health, National Institute of Neurological Disorders and Stroke, National Science Foundation, U.S. Office of Naval Research and the Davimos Family Endowment for Excellence in Science.

- FAU -

Florida Atlantic University opened its doors in 1964 as the fifth public university in Florida. Today, the University serves more than 26,000 undergraduate and graduate students on seven campuses strategically located along 150 miles of Florida's southeastern coastline. Building on its rich tradition as a teaching university, with a world-class faculty, FAU hosts ten colleges: College of Architecture, Urban & Public Affairs, Dorothy F. Schmidt College of Arts & Letters, the Charles E. Schmidt College of Biomedical Science, the Barry Kaye College of Business, the College of Education, the College of Engineering & Computer Science, the Harriet L. Wilkes Honors College, the Graduate College, the Christine E. Lynn College of Nursing and the Charles E. Schmidt College of Science.
Frogs Are Being Eaten to Extinction

The global trade in frog legs for human consumption is threatening their extinction, according to a new study by an international team including University of Adelaide researchers.

The researchers say the global pattern of harvesting and decline of wild populations of frogs appears to be following the same path set by overexploitation of the seas and subsequent “chain reaction” of fisheries collapses around the world.

The researchers have called for mandatory certification of frog harvests to improve monitoring and help the development of sustainable harvest strategies.

University of Adelaide ecologist Associate Professor Corey Bradshaw says frog legs are not just a French delicacy.

“Frog legs are on the menu everywhere from school cafeterias in Europe to market stalls and dinner tables across Asia and high-end restaurants throughout the world,” says Associate Professor Bradshaw, of the University’s School of Earth and Environmental Sciences, who is also a Senior Scientist at the South Australian Research and Development Institute (SARDI).

“Amphibians are already the most threatened animal group yet assessed because of disease, habitat loss and climate change - man’s massive appetite for their legs is not helping.”

The annual global trade in frogs for human consumption has increased over the past 20 years with at least 200 million and maybe over 1 billion frogs consumed every year. Only a fraction of the total trade is assessed in world trade figures.

Indonesia is by far the largest exporter of frogs, and its domestic market is 2-7 times the size of its export trade.

“The frogs’ legs global market has shifted from seasonal harvest for local consumption to year-round international trade,” says Associate Professor Bradshaw. “But harvesting seems to be following the same pattern for frogs as with marine fisheries - initial local collapses in Europe and North America followed by population declines in India and Bangladesh and now potentially in Indonesia.

“Absence of essential data to monitor and manage the wild harvest is a large concern.”

The study team also includes researchers from the Memorial University of Newfoundland in Canada, the National University of Singapore and Harvard University. A paper about the study is soon to be published online in the journal Conservation Biology.

Saturday, December 27, 2008

Small Molecule Triggers Bacterial Community

While bacterial cells tend to be rather solitary individuals, they are also known to form intricately structured communities called biofilms. But until now, no one has known the mechanisms that cause isolated bacteria to suddenly aggregate into a social network. New insights from the lab of Harvard Medical School microbial geneticist Roberto Kolter reveal previously unknown communication pathways that trigger this social phenomenon.

Using the non-pathogenic Bacillus subtilis as a model organism, Kolter and postdoctoral researcher Daniel Lopez discovered a group of natural, soil-based products that trigger communal behavior in bacteria. One molecule in particular, surfactin, is produced by B. subtilis. Biofilm formation begins when surfactin, and other similar molecules, cause bacteria to leak potassium. As potassium levels decline, a membrane protein on the bacterium stimulates a cascade of gene activity that signals neighboring cells to form a quorum. As a result, biofilms form.

The authors note that it’s still unclear how biofilm formation benefits the bacteria, and they hypothesize that it might be an antibacterial defense against competing species. Still, the notion that a single small molecule can induce multicellularity intrigues the researchers.

“Typically, scientists try to discover new antibiotics through some rather blunt means, like simply looking to see if one bacterium can kill another,” says Kolter. “This discovery of a single molecule causing such a dramatic response in bacteria hints at a new and potentially effective way to possibly discover antibiotics.”

These findings are published in the Proceedings of the National Academy of Sciences.
Spinning Spigots: 'Missing Link' in Spider Evolution Discovered

New interpretations of fossils have revealed an ancient missing link between today’s spiders and their long-extinct ancestors. The research by scientists at the University of Kansas and at Virginia’s Hampden-Sydney College may help explain how spiders came to weave webs.

The research focuses on fossil animals called Attercopus fimbriunguis. While modern spiders make silk threads with modified appendages called spinnerets, the fossil animals wove broad sheets of silk from spigots on plates attached to the underside of their bodies. Unlike spiders, they had long tails.

The research findings by Paul Selden, Gulf-Hedberg Distinguished Professor of Invertebrate Paleontology at KU and William Shear, Trinkle Professor of Biology at Hampden-Sydney College, were published this week in the Proceedings of the National Academy of Sciences.

Selden and Shear first discovered the fossils almost 20 years ago. At that time the specimens were thought to be the oldest spider fossils known, dating back to the Devonian Period, about 380 million years ago. Unearthed in upstate New York, the fossils were among the first animals to live on land in North America.

New finds near the same location, in Gilboa, New York, caused the paleontologists to reinterpret their original findings. The new fossils included silk-spinning organs, called spigots, arranged on the edges of broad plates making up the undersides of the animals. The researchers identified parts of a long, jointed tail not found in any previously known spider, but common among some of the spiders’ more primitive relatives.

“We think these ‘tailed spiders’ represent an entirely new kind of animal, not known before from living or fossil examples,” Shear said. “They were more primitive than spiders in many ways, and may be spider ancestors.” Besides having tails and spinning silk from broad plates, the animals also seem to lack poison glands.

Selden added, “This new information also allows us to reinterpret other fossils once thought to be spiders, and this evidence suggests these Uraraneida, or pre-spiders, existed for more than 100 million years, living alongside real spiders, which evolved later.”

The paleontologists think that Attercopus developed silk-spinning spigots to line burrows, make homing trails, and possibly subdue prey, but was not capable of making webs because of the limited mobility of the spigots. True spiders may have arisen when the genetic information for certain appendages was “turned back on” and the spigots moved onto them. The appendages became the modern spiders’ spinnerets, which can move freely and create patterned webs.
Selden is director of the KU Paleontological Institute at the Biodiversity Institute, one of eight designated research centers on campus that report to the KU Office of Research and Graduate Studies.

Saturday, December 20, 2008



Discovery: New Tooth Cavity Protection

Clarkson University Center for Advanced Materials Processing Professor Igor Sokolov and graduate student Ravi M. Gaikwad have discovered a new method of protecting teeth from cavities by ultrafine polishing with silica nanoparticles.

The researchers adopted polishing technology used in the semiconductor industry (chemical mechanical planarization) to polish the surface of human teeth down to nanoscale roughness. The roughness left on the tooth after polishing is just a few nanometers; a nanometer is one-billionth of a meter, or about 100,000 times smaller than a grain of sand.

Sokolov and Gaikwad showed that teeth polished in this way become too “slippery” for the "bad" bacteria that are responsible for the destruction of dental enamel. As a result the bacteria can be removed fairly easily before they cause damage to the enamel.

Although silica particles have been used before for tooth polishing, polishing with nanosized particles has not been reported. The researchers hypothesized that such polishing may protect tooth surfaces against the damage caused by cariogenic bacteria, because the bacteria can be removed easily from such polished surfaces.

The Clarkson researchers' findings were published in the October issue of the Journal of Dental Research, the dentistry journal with the highest scientific impact index worldwide.

Sokolov is a professor of physics, professor of chemical and biomolecular science, and director of Clarkson's Nanoengineering and Biotechnology Laboratories Center (NABLAB). Gaikwad is a graduate student in physics.

Read more at http://jdr.iadrjournals.org/cgi/content/short/87/10/980.

Saturday, December 13, 2008

Selenium, Vitamin E Do Not Prevent Prostate Cancer

Findings from one of the largest cancer chemoprevention trials ever conducted have concluded that selenium and vitamin E taken alone or in combination for an average of five and a half years did not prevent prostate cancer, according to a team of researchers coordinated by the Southwest Oncology Group (SWOG) and led by scientists at The University of Texas M. D. Anderson Cancer Center and Cleveland Clinic.

Data and analysis gathered through Oct. 23, 2008, from the Selenium and Vitamin E Cancer Prevention Trial (SELECT) were published in the Dec. 9 issue of the Journal of the American Medical Association (JAMA) by Scott M. Lippman, M.D., professor and chair of Thoracic/Head and Neck Medical Oncology at M. D. Anderson, Eric A. Klein, M.D., of the Cleveland Clinic Lerner College of Medicine, and 30 coauthors from the United States, Puerto Rico and Canada.

Funded by the National Cancer Institute (NCI) with some additional contribution from the National Center for Complementary and Alternative Medicine, the Phase III trial began recruitment in August 2001 and aimed to determine whether selenium, vitamin E, or both could prevent prostate cancer and other diseases in relatively healthy men. The study followed 35,533 participants from 427 sites in the United States, Canada and Puerto Rico. The randomized, placebo-controlled and double-blind trial divided the participants into four intervention groups: selenium, vitamin E, both selenium and vitamin E, and placebos.

Supplement              Cases   5-year prostate cancer diagnosis
Placebo                   416   4.43 percent
Selenium                  432   4.56 percent
Vitamin E                 473   4.93 percent
Selenium + Vitamin E      437   4.56 percent
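As a rough illustration of how the arms compare, the raw case counts above can be turned into crude risk ratios relative to placebo. This sketch assumes the four arms of the 35,533-man trial were of roughly equal size; the published 5-year percentages are time-to-event estimates, so this is not the trial's actual analysis:

```python
# Crude illustration only: compare raw case counts per arm against placebo,
# assuming the four arms were of roughly equal size (35,533 / 4 each).
cases = {
    "placebo": 416,
    "selenium": 432,
    "vitamin_e": 473,
    "selenium_plus_e": 437,
}

arm_size = 35533 / 4  # approximate participants per arm (assumption)

def crude_risk_ratio(arm: str) -> float:
    """Ratio of an arm's crude incidence to the placebo arm's incidence."""
    return (cases[arm] / arm_size) / (cases["placebo"] / arm_size)

for arm in ("selenium", "vitamin_e", "selenium_plus_e"):
    print(f"{arm}: {crude_risk_ratio(arm):.2f}")
```

With these counts the crude ratios come out to roughly 1.04 (selenium), 1.14 (vitamin E) and 1.05 (both), mirroring the slight, statistically non-significant excess in the vitamin E group that the study describes.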



The study found no evidence of benefit from selenium, vitamin E, or both. Additionally, the data showed two statistically non-significant findings of concern: slightly increased risks of prostate cancer in the vitamin E group and of type 2 diabetes mellitus in the selenium group. Both trends may be due to chance and were not observed in the group taking selenium and vitamin E together.

An independent data and safety monitoring committee reached the same conclusion and recommended supplementation be discontinued Oct. 23 for lack of evidence of benefit.

"SELECT presented a unique opportunity to improve the lives of men from every social and ethnic background through chemoprevention," said Lippman, who serves as a national study coordinator. "Although supplementation has been discontinued, we will continue to follow these men and monitor their health for approximately three more years, conducting regular prostate screening tests and questioning them about diabetes and other health issues. Doing so is critical not only to determine any possible long-term effects of the selenium and vitamin E, but also in order to gain a better understanding of prostate and other cancers and age-related disease."

Prostate cancer is the most common male cancer in the U.S. and the second leading cause of cancer deaths overall. The American Cancer Society estimates that more than 180,000 American men will be diagnosed with prostate cancer this year and nearly 29,000 will die from the disease. African-American men have a 60 percent higher incidence rate of prostate cancer and are two times more likely to die from the disease compared with Caucasian men.

Elise Cook, M.D., an associate professor in M. D. Anderson's Department of Clinical Cancer Prevention and the location's principal investigator, served as the chair of SELECT's Minority and Medically Underserved Subcommittee. "Our site has placed a strong emphasis on recruiting African-American men to participate. Of the 387 men we follow, 101 of those are African-American. It is important we continue to follow these men to determine long-term effects and complete the ancillary studies in which many participate," said Cook.

SELECT was based upon the secondary outcomes from two previous cancer prevention trials. The first, a 1996 study of selenium versus placebo to prevent non-melanoma skin cancer, showed that although the supplement did not reduce the risk of skin cancer, selenium did reduce prostate cancer by two-thirds; and in the second, a 1998 study conducted by Finnish researchers determined that although vitamin E did not prevent lung cancer in more than 29,000 male smokers, it did result in 32 percent fewer prostate cancers in men taking the supplement.

"Preliminary data suggesting benefits - no matter how promising - cannot reliably result in new clinical recommendations until they've been tested in definitive trials," said Ernest T. Hawk, M.D., vice president and division head of M. D. Anderson's Cancer Prevention and Population Sciences.

"Although the SELECT trial did not turn out as we'd hoped - identifying a new way to reduce men's risk of prostate cancer - it was nevertheless extremely valuable by generating definitive evidence. Cancer prevention advances by rigorous science."

The identity of SELECT participants will remain blinded to prevent the introduction of any unintentional bias; however, participants may be unblinded upon request. The sub-studies, funded and conducted by the National Institutes of Health's National Heart, Lung and Blood Institute, the National Institute of Aging, the National Eye Institute and the NCI, will continue without the participants taking any supplementation. These ancillary studies were evaluating the effects of selenium and vitamin E on chronic obstructive pulmonary disease, the development of Alzheimer's disease, the development of age-related macular degeneration and cataracts, and the development of colon polyps.

Lippman commented, "We are grateful to each of the 387 Houston-area men who committed to participating in this study through M. D. Anderson. Prevention trials are an important direction for the future of cancer research. SELECT played an important role in the study of the prevention of prostate cancer and we hope to learn more about why these supplements didn't do more to prevent prostate cancer as the study continues."

Co-authors with Lippman, Klein and Cook include 30 colleagues from the Southwest Oncology Group.



About M. D. Anderson
The University of Texas M. D. Anderson Cancer Center in Houston ranks as one of the world's most respected centers focused on cancer patient care, research, education and prevention. M. D. Anderson is one of only 41 Comprehensive Cancer Centers designated by the National Cancer Institute. For four of the past six years, M. D. Anderson has ranked No. 1 in cancer care in "America's Best Hospitals," a survey published annually in U.S. News and World Report.

Friday, December 05, 2008

Researchers Test Mobile Alert System for Cell Phones
In the first field trial of its kind, Georgia Tech’s Wireless Emergency Communications project tested the Federal Communications Commission’s (FCC) Commercial Mobile Alert System to see how well it met the needs of people with vision and hearing impairments. They found three areas where they will recommend changes to the FCC.

• Although 90 percent of participants who are blind or have low vision found the alert attention signal to be loud enough and long enough to get their attention, only 70 percent of deaf and hard of hearing participants said the same of the vibrating cadence. Comments suggested that the vibrating cadence would be effective only if the individual were holding the phone; it was easily missed if the phone was in a purse or even a pocket.

• All hearing participants expressed concern that the tone transitioned too quickly into the 90-character spoken alert, causing the first few words of the message to be missed. The required Commercial Mobile Alert System message format places the event type first (i.e., tornado, flood, etc.), so crucial information may not be heard by blind consumers using text-to-speech software on their mobile phones to access the alerts. Many suggested the need for a header such as “This is a…” to allow for more clarity. Such a header is currently employed by Emergency Alert System (EAS) messages broadcast on television, radio and cable systems.

• Deaf and hard of hearing participants commented that they would like to see enhancements such as strobe lights, screen flashes and stronger vibrating cadences. While these enhancements can be addressed by cell phone manufacturers, they aren’t required to do so by the FCC.

The tests were conducted on November 12, 2008, with 30 subjects. The results will be presented to the FCC and others during the State of Technology conference in September.

The FCC established the Commercial Mobile Alert System in 2008 to provide a framework for commercial mobile service providers to voluntarily transmit emergency alerts to their subscribers. The Rehabilitation Engineering Research Center for Wireless Technologies’ Wireless Emergency Communications project has been developing software and conducting field tests on how to make the emergency alert system accessible for people with sensory disabilities who use mobile devices.

Tech’s Wireless Emergency Communications project received additional federal funding to field test the provisions of Commercial Mobile Alert System that affect accessibility, such as the limitation of 90 characters, not permitting URLs, and volume limits including specific vibrating cadences and alert tones. By conducting this field test, they will provide the FCC and the wireless industry with concrete evidence from the perspective of end-users on how the Commercial Mobile Alert System would be better able to serve the specific needs of people with sensory disabilities. Most recommendations, however, would render the system more effective for all consumers. For example, participants suggested repeating the attention signal and vibrating cadences in intervals until they are shut off by the user to ensure the receipt of the alert by an individual who is away from their phone, asleep, driving or unable to hear or see.
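Two of the CMAS text constraints under test can be sketched as a simple validator. The 90-character limit and the no-URL rule come from the article; the function name and the URL-detection heuristic below are illustrative assumptions, not part of the actual CMAS specification:

```python
import re

# Hypothetical checker for two CMAS text constraints mentioned above:
# messages are capped at 90 characters and may not contain URLs.
CMAS_MAX_CHARS = 90
URL_PATTERN = re.compile(r"(https?://|www\.)\S+", re.IGNORECASE)

def violates_cmas_text_rules(message: str) -> list:
    """Return a list of rule violations for a candidate alert message."""
    problems = []
    if len(message) > CMAS_MAX_CHARS:
        problems.append("exceeds 90-character limit")
    if URL_PATTERN.search(message):
        problems.append("contains a URL")
    return problems
```

For example, a short alert like "Tornado warning. Take shelter now." passes, while any message pointing readers to a web address would be flagged.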

The field test recruited participants from the Atlanta Area School for the Deaf, Atlanta Public School System, the Wireless Rehabilitation Engineering Research Center Consumer Advisory Network and the Georgia Radio Reading Service (GaRRS). Subjects were as diverse in their sensory limitations as they were in their technical skill level, ranging from those who were fully deaf or fully blind to those with enhanced hearing (hearing aid/cochlear implants) or enhanced vision (glasses/contacts).

Though field test participants’ names are usually held in the strictest confidence, one participant agreed to go on the record.

“I applaud PBA and Georgia Tech for their effort in bringing this very important issue to the public,” said Georgia State Representative Bob Smith. “We must continue to make this a priority, to seek innovative and creative ways to notify people with disabilities and tirelessly work to improve and perfect the notification system. It is paramount that Georgians are aware that people with various disabilities, more than any time in our history, need to be informed of catastrophic events.”

This is the second field test hosted by project partner Public Broadcasting Atlanta. PBA recognized the importance of this community project and how it aligned with its vision of implementing a Local Education Network System (LENS) capable of convening individuals, organizations and communities. MetroCast Atlanta, a component of LENS, would serve as an emergency information network for schools, city officials and citizens in the event of natural or terrorist disaster.

The mobile devices and cellular service used in this field test were the result of a generous donation from WEC industry partner AT&T. For more information on WEC, go to www.wirelessrerc.org. Funding for the CMAS parameter field test was made possible by the U.S. Department of Education’s National Institute on Disability and Rehabilitation Research, grant # H133E060061.

Monday, December 01, 2008

A balancing act: Aging and Alzheimer’s disease

Cognitive decline may occur during aging, or due to genetic mutations that predispose individuals to develop Alzheimer’s disease. Now, a team of scientists, led by Akihiko Takashima at the RIKEN Brain Science Institute in Wako, has found support for their hypothesis that a similar molecular abnormality could account for cognitive dysfunction during both Alzheimer’s disease and aging. They report their findings in a recent issue of PLoS ONE (1).

The researchers subjected aged mice (19–25 months) and adult mice (9–15 months) harboring genetic mutations associated with Alzheimer’s disease to a test of spatial memory—the Morris water maze. Both groups of mice were trained to find a platform submerged in a pool of water based on visual cues around the pool (Fig. 1). When this training period was complete, the researchers could assess how well the mice remembered the platform location by determining how much time each mouse spent near the platform during a ‘probe trial’. They found that both the aged mice and the mice with the Alzheimer’s disease mutations had spatial memory deficits.
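The probe-trial measure described above—how much time an animal spends near the former platform location—is typically computed from tracked swim positions. A minimal sketch of that calculation, with the coordinates and proximity radius chosen purely for illustration:

```python
import math

def fraction_near_platform(track, platform, radius):
    """Fraction of tracked (x, y) samples falling within `radius`
    of the former platform location — a simple proxy for how well
    the animal remembers where the platform was."""
    near = sum(
        1 for (x, y) in track
        if math.hypot(x - platform[0], y - platform[1]) <= radius
    )
    return near / len(track)

# Illustrative track: a mouse that searches mostly near the old
# platform location at (0, 0) scores high on this measure.
good_memory = [(0.1, 0.2), (-0.3, 0.1), (0.0, -0.2), (2.5, 2.5)]
print(fraction_near_platform(good_memory, (0.0, 0.0), radius=1.0))
```

A mouse with a spatial memory deficit would distribute its search more evenly over the pool, yielding a lower fraction.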

The neurotransmitter GABA (γ-aminobutyric acid) controls inhibitory signaling in the brain, and GABA receptor blockers have previously been shown to improve cognition in aging rats. To see if this was also true in Alzheimer mutant mice, the researchers administered a GABA receptor blocker, and saw restoration of normal spatial memory in the Morris water maze. The treated mice were also better at recognizing a new object placed into their cage, which is a measure of ‘declarative memory’.

The researchers then examined synaptic plasticity in a part of the brain that plays a role in spatial memory—the hippocampus. They found deficits in synaptic plasticity in hippocampal slices from both aging and Alzheimer mutant mice. However, normal synaptic plasticity could be restored by adding a GABA receptor blocker. This suggests that both aging and Alzheimer’s disease mutations may affect memory by increasing GABA-mediated inhibitory signaling in the hippocampus.

These findings show that GABA receptor blockers may be an effective therapeutic strategy to enhance cognitive function during both aging and Alzheimer’s disease. This work also indicates that an imbalance between excitatory and inhibitory signaling in the brain may result in memory dysfunction. Yuji Yoshiike, the study’s first author, says the findings suggest that “even when memory declines because of the accumulation of neurotoxic molecules during aging, memory may be improved by restoring the balance between synaptic excitation and inhibition.”

Reference

1. Yoshiike, Y., Kimura, T., Yamashita, S., Furudate, H., Mizoroki, T., Murayama, M. & Takashima, A. GABAA receptor-mediated acceleration of aging-associated memory decline in APP/PS1 mice and its pharmacological treatment by picrotoxin. PLoS ONE 3, e3029 (2008).
Bringing Galatea to life

Inflammation is a key step in the progression of heterotopic ossification – where soft tissue turns into bone – according to new research in mice. The study shows that an inhibitor of the disease gene’s protein product is partially therapeutic, and therefore offers hope for this devastating condition.

In a reverse of the ancient myth of Pygmalion, where a statue comes to life, sufferers of heterotopic ossification have their fibrous tissue ‘ossified’ — in effect, turning the patients into statues. A major form of heterotopic ossification is fibrodysplasia ossificans progressiva (FOP), which in about 98% of cases results from a mutation in a specific bone morphogenetic protein receptor.

A mouse model of FOP involving the same mutation found in people has yet to be made, but Paul Yu and colleagues have now developed a mouse model of the general phenomenon by expressing a related version of the mutated receptor. They found that expressing the mutant version of the protein receptor alone was not sufficient to cause the disease – an inflammatory stimulus was also needed. Yu’s team also shows that inhibiting inflammation with glucocorticoids—a treatment commonly used in the clinic—helps reduce the incidence of heterotopic ossification in their model.

Importantly, the authors also show that a small molecule inhibitor of the protein receptor likewise reduced the incidence of disease progression. This form of treatment represents a potential breakthrough, as long-term use of glucocorticoids causes severe side-effects. The authors caution, however, that much more research is needed before the drug could be considered for human trials.

Author contact:
Paul Yu (Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA)
Tel: +1 617 643 3493; E-mail: pbyu@partners.org

Wednesday, November 19, 2008

Cell Phone/Brain Tumor Connection Remains Inconclusive – But Cell Phones Pose Other Neurological Health Risks
There has been much speculation over the last few years about whether cell phones increase the risk of developing a brain tumor. Research has not conclusively answered this question, which has left consumers confused. The majority of studies published in scientific journals have not found sufficient evidence that cell phones increase the risk of brain tumors. The problem is that cell phone technology is in its infancy, so none of these studies could analyze long-term risks. This unknown is a particular issue for children, who will face a lifetime of cell phone usage. While the cell phone/brain tumor connection remains inconclusive, the American Association of Neurological Surgeons (AANS) cautions that cell phones present plenty of other risks to people’s neurological health.

Several studies show cell phones are a leading cause of automobile crashes. It is estimated that drivers distracted by cell phones are four times more likely to be in a motor vehicle accident. The following are some sobering statistics:

~According to a Harvard University study, an estimated 2,600 people die and 12,000 suffer serious to moderate injuries each year in cell phone-related accidents.
~A Canadian study analysis of 26,798 cell phone calls made during the 14-month study period showed that the risk of an automobile accident was four times higher when using a cell phone.
~National statistics indicate that an estimated 50,000 traumatic brain injury-related deaths occur annually in the United States, 25,000-35,000 of which are attributed to motor vehicle accidents.

A few recent cases treated in U.S. hospital emergency rooms:

~A 29-year-old male was talking on his cell phone while on an escalator, fell backwards, and lacerated his head.
~A 25-year-old male was talking on his cell phone and walked into a street sign, lacerating his head.
~A 43-year-old female fell down 13-14 steps while talking on her cell phone, after drinking alcohol. She suffered a neck sprain and contusions to her head, back, shoulder, and leg.
~A 50-year-old female suffered nerve damage which was related to extensive cell phone usage. She felt pain in her fingers and the length of her arm while holding her cell phone, and was diagnosed with cervical radiculopathy.
~A 39-year-old man suffered a head injury after crashing his bicycle into a tree while texting.
~A 16-year-old boy suffered a concussion because he was texting and walked into a telephone pole.

Cell Phone Injury Prevention Tips

~Talk hands free by using an earpiece or on speaker mode whenever possible.
~Follow all cell phone laws applicable to your city and state – these vary greatly.
~Use your cell phone only when safely parked, or have a passenger use it.
~Do not dial the phone or take notes while driving, cycling, skateboarding, rollerblading, etc.
~Never text message while driving, walking, cycling, skateboarding, rollerblading, etc.
~Never text message or use a cell phone while performing any physical activities that require attention.
~If your phone rings while driving, let the call go into voice mail and respond later when you are safely parked.