Monday, 31 December 2012

Teenagers without Internet access at home are educationally disadvantaged, UK study suggests

Dec. 30, 2012 — A major in-depth study examining how teenagers in the UK are using the internet and other mobile devices says the benefits of using such technologies far outweigh any perceived risks.

The findings are based on a large-scale study of more than 1,000 randomly selected households in the UK, coupled with regular face-to-face interviews with more than 200 teenagers and their families between 2008 and 2011.

While the study reflects a high level of parental anxiety about the potential of social networking sites to distract their offspring, and shows that some parents despair at their children's tendency to multitask on mobile devices, the research by Oxford University's Department of Education concludes that there are substantial educational advantages in teenagers being able to access the internet at home.

Teenagers who do not have access to the internet in their home have a strong sense of being 'educationally disadvantaged', warns the study. At the time of the study, the researchers estimated that around 10 per cent of the teenagers were without online connectivity at home, with most of this group living in poorer households. While recent figures from the Office for National Statistics suggest this dropped to five per cent in 2012, the researchers say that still leaves around 300,000 children without internet access in their homes.

The researchers' interviews with teenagers reveal that they felt shut out of their peer group socially and also disadvantaged in their studies, as so much of the college or school work set for them to do at home required online research or preparation. One teenager, whose parents had separated, explained that he would ring his father, who had internet access, and ask him to post the requested materials to him.

Researcher Dr Rebecca Eynon commented: 'While it's difficult to state a precise figure for teenagers without access to the internet at home, the fact remains that in the UK, there is something like 300,000 young people who do not -- and that's a significant number. Behind the statistics, our qualitative research shows that these disconnected young people are clearly missing out both educationally and socially.'

In an interview with a researcher, one 14-year old boy said: 'We get coursework now in Year 9 to see what groups we're going to go in Year 10. And people with internet, they can get higher marks because they can like research on the internet…my friends are probably on it [MSN] all the day every day. And like they talk about it in school, what happened on MSN.'

Another teenager, aged 15, commented: 'It was bell gone and I have a lot of things that I could write and I was angry that I haven't got a computer because I might finish it at home when I've got lots of time to do it. But because when I'm at school I need to do it very fast.'

Strikingly, this study contradicts claims that others have made about the potential risks of such technologies adversely affecting the ability of teenagers to concentrate on serious study. The researchers, Dr Chris Davies and Dr Rebecca Eynon, found no evidence to support this claim. Furthermore, their study concludes that the internet has opened up far more opportunities for young people to do their learning at home.

Dr Davies said: 'Parental anxiety about how teenagers might use the very technologies that they have bought their own children at considerable expense is leading some to discourage their children from becoming confident users. The evidence, based on the survey and hundreds of interviews, shows that parents have tended to focus on the negative side -- especially the distracting effects of social networking sites -- without always seeing the positive use that their children often make of being online.'

Teenagers' experiences of the social networking site Facebook appear to be mixed, says the study. Although some regarded Facebook as an integral part of their social life, others were concerned about the number of arguments that had escalated due to others wading in as a result of comments and photographs being posted.

The age at which teenagers first used Facebook was found to fall over the three-year period, from around 16 years old in 2008 to 12 or 13 years old by 2011. Interviews reveal that even the very youngest teenagers, including those who were not particularly interested, felt under some peer pressure to join. But the study also suggests that the popularity of Facebook is waning, with teenagers now exploring other forms of social networking.

Dr Davies commented: 'There is no steady state of teenage technology use -- fashions and trends are constantly shifting, and things change very rapidly when they do change.'

The research was part-funded by Becta, the British Educational Communications and Technology Agency, a non-departmental public body formed under the last Labour government. The study's findings are contained in a new book entitled Teenagers and Technology, due to be published by Routledge in January 2013.

Teenagers and Technology: http://www.routledge.com/books/details/9780415684583/

Story Source:

The above story is reprinted from materials provided by University of Oxford.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Note: If no author is given, the source is cited instead.

Disclaimer: This article is not intended to provide medical advice, diagnosis or treatment. Views expressed here do not necessarily reflect those of ScienceDaily or its staff.


Thursday, 27 December 2012

Cardiovascular disease: The mechanics of prosthetic heart valves

Dec. 20, 2012 — Computer simulations of blood flow through mechanical heart valves could pave the way for more individualized prosthetics.

Every year, over 300,000 heart valve replacement operations are performed worldwide. Diseased valves are often replaced with mechanical heart valves (MHVs), which cannot yet be designed to suit each patient's specific needs. Complications such as blood clots can occur, which can require patients to take blood-thinning medication.

To investigate why such complications occur, Vinh-Tan Nguyen at A*STAR's Institute of High Performance Computing, Singapore, together with scientists at the National University of Singapore and institutions across the USA, has developed a new computer model to simulate the dynamics of blood flow through MHVs.

"The current practice for heart valve replacement in patients is a one-size-fits-all approach where a patient is implanted with the best-fit valve available on the market," explains Nguyen. "The valves are well designed for general physiological conditions, but may not be suitable for each individual's particular heart condition."

The researchers focused on the blood flow dynamics in a prosthetic valve known as a bileaflet MHV. This type of MHV contains two mobile leaflets, or gates, which are held in place by hinges. The leaflets open and close in response to blood flow pressures through the valve. Little is known about the effect that the hinged leaflets have on blood dynamics, although such designs are suspected of causing blood clots.

The computer model developed by Nguyen and his team simulates pressure flows through bileaflet MHVs by representing blood vessels as a computational mesh, where calculations are performed for individual blocks of the mesh. Their crucial advance was in enabling this mesh to move and evolve in response to the leaflet movements.

The researchers validated their computer model through laboratory experiments with a full 3D reproduction of the heart's circulation system. Particle imaging equipment allowed them to visualize the fluid dynamics under different scenarios including pulsatile flow, which follows the pattern of a typical cardiac cycle.

"We obtained good agreement between our computer simulations and the experiments in terms of the magnitude and velocity of blood flow through the leaflets," states Nguyen. The researchers also found that leaflet hinges might play a vital role in clotting, because individual hinges have different tolerances that can disrupt normal blood flow and cause stress in the vessel walls.

This research is a first crucial step in understanding the impact of MHVs on blood flow. "Ultimately we hope to provide doctors with a tool to evaluate blood flow dynamics and other related aspects in patients with newly implanted valves," says Nguyen.

The A*STAR-affiliated researchers contributing to this research are from the Institute of High Performance Computing.

Story Source:

The above story is reprinted from materials provided by The Agency for Science, Technology and Research (A*STAR), via ResearchSEA.

Journal Reference:

Vinh-Tan Nguyen, Yee Han Kuan, Po-Yu Chen, Liang Ge, Fotis Sotiropoulos, Ajit P. Yoganathan, Hwa Liang Leo. Experimentally Validated Hemodynamics Simulations of Mechanical Heart Valves in Three Dimensions. Cardiovascular Engineering and Technology, 2011; 3 (1): 88 DOI: 10.1007/s13239-011-0077-z


Friday, 21 December 2012

Data storage: A fast and loose approach improves memory

Dec. 20, 2012 — An unconventional design for a nanoscale memory device uses a freely moving mechanical shuttle to improve performance.

A loose and rattling part in your cell phone is generally a cause for concern. Like most other electronic devices, your phone works by moving electrons through fixed circuit pathways. If electrons are not sufficiently contained within these pathways, the efficiency and speed of a device decrease. However, as the miniature components inside electronic devices shrink with each generation, electrons become harder to contain. Now, a research team led by Vincent Pott at the A*STAR Institute of Microelectronics, Singapore, has designed a memory device using a loose and moving part that actually enhances performance.

The loose part is a tiny metal disk, or shuttle, about 300 nanometers thick and 2 micrometers across, which lies inside a roughly cylindrical metal cage. Because the shuttle is so small, gravity has little effect on it. Instead, the forces of adhesion between the shuttle and its metal cage determine its position. When stuck to the top of its cage, the shuttle completes an electrical circuit between two electrodes, causing current to flow. When it is at the bottom of the cage, the circuit is broken and no current flows. The shuttle can be moved from top to bottom by applying a voltage to a third electrode, known as a gate, underneath the cage.
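The read/write behaviour described above can be captured as a tiny state machine. The sketch below is an illustrative toy model, not the authors' device physics: the class name, the polarity convention and the read values are all hypothetical stand-ins for the actual drive scheme.

```python
class ShuttleMemory:
    """Toy model of a shuttle-style mechanical memory cell (illustrative only).

    The shuttle sticks to the top of its cage (bit = 1, circuit closed)
    or the bottom (bit = 0, circuit open); adhesion holds it in place
    with no power applied, which is what makes the cell non-volatile.
    """

    def __init__(self):
        self.at_top = False  # assume the shuttle starts resting at the bottom

    def apply_gate_voltage(self, polarity):
        # A gate pulse pulls the shuttle to one side of the cage;
        # 'polarity' is a hypothetical stand-in for the real drive signal.
        self.at_top = (polarity > 0)

    def read(self):
        # Current flows only when the shuttle bridges the two electrodes.
        return 1 if self.at_top else 0

cell = ShuttleMemory()
cell.apply_gate_voltage(+1)   # write a 1
bit = cell.read()             # -> 1, and retained even with power removed
```

Because the state is held mechanically by adhesion rather than by stored charge, a power cycle between the write and the read would not change the result.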

Pott and co-workers suggested using this binary positioning to encode digital information. They predicted that the forces of adhesion would keep the shuttle in place even when the power is off, allowing the memory device to retain information for long periods of time. In fact, the researchers found that high temperature -- one of the classic causes of electronic memory loss -- should actually increase the duration of data retention by softening the metal that makes up the shuttle memory's disk and cage, thereby strengthening adhesion. The ability to operate in hot environments is a key requirement for military and aerospace applications.

The untethered shuttle also takes up less area than other designs and is not expected to suffer from mechanical fatigue because it avoids the use of components that need to bend or flex -- such as the cantilevers used in competing mechanical memory approaches. In a simulation, Pott and co-workers found that the shuttle memory should be able to switch at speeds in excess of 1 megahertz.

The next steps, the researchers say, include designing arrays of the devices and analyzing fabrication parameters in detail. If all goes well, their novel device could compete head-to-head with the industry-standard FLASH memory.

The A*STAR-affiliated researchers contributing to this research are from the Institute of Microelectronics.

Story Source:

The above story is reprinted from materials provided by The Agency for Science, Technology and Research (A*STAR), via ResearchSEA.

Journal Reference:

Vincent Pott, Geng Li Chua, Ramesh Vaddi, Julius Ming-Lin Tsai, Tony T. Kim. The Shuttle Nanoelectromechanical Nonvolatile Memory. IEEE Transactions on Electron Devices, 2012; 59 (4): 1137 DOI: 10.1109/TED.2011.2181517


On-demand synaptic electronics: Circuits that learn and forget

Dec. 20, 2012 — Researchers in Japan and the US propose a nanoionic device with a range of neuromorphic and electrical multifunctions that may allow the fabrication of on-demand configurable circuits, analog memories and digital-neural fused networks in one device architecture.

Synaptic devices that mimic the learning and memory processes in living organisms are attracting avid interest as an alternative to standard computing elements that may help extend Moore's law beyond current physical limits.

However, so far artificial synaptic systems have been hampered by complex fabrication requirements and limitations in the learning and memory functions they mimic. Now Rui Yang, Kazuya Terabe and colleagues at the National Institute for Materials Science in Japan and the University of California, Los Angeles, in the US have developed two- and three-terminal WO3-x-based nanoionic devices capable of a broad range of neuromorphic and electrical functions.

In its initial pristine condition, the system has very high resistance. Sweeping both negative and positive voltages across the system decreases this resistance nonlinearly, but it soon returns to its original state, indicating that this state is volatile. Applying either positive or negative pulses at the top electrode induces a soft breakdown, after which sweeping both negative and positive voltages leads to non-volatile states that exhibit bipolar resistance switching and rectification for longer periods of time.

The researchers draw similarities between the device properties -- the volatile and non-volatile states and the current-fading process following positive voltage pulses -- and models of neural behaviour, namely short- and long-term memory and forgetting processes. They explain the behaviour as the result of oxygen ions migrating within the device in response to the voltage sweeps. Accumulation of the oxygen ions at the electrode leads to Schottky-like potential barriers, and hence to the changes in resistance and rectifying characteristics. The stable bipolar switching behaviour at the Pt/WO3-x interface is attributed to the formation of an electrically conductive filament and the oxygen absorbability of the Pt electrode.

As the researchers conclude, "These capabilities open a new avenue for circuits, analog memories, and artificially fused digital neural networks using on-demand programming by input pulse polarity, magnitude, and repetition history."

Story Source:

The above story is reprinted from materials provided by International Center for Materials Nanoarchitectonics (MANA), via ResearchSEA.

Journal Reference:

Rui Yang, Kazuya Terabe, Guangqiang Liu, Tohru Tsuruoka, Tsuyoshi Hasegawa, James K. Gimzewski, Masakazu Aono. On-Demand Nanodevice with Electrical and Neuromorphic Multifunction Realized by Local Ion Migration. ACS Nano, 2012; 6 (11): 9515 DOI: 10.1021/nn302510e


Traffic congestion can be alleviated throughout a metropolitan area by altering trips in specific neighborhoods, model shows

Dec. 20, 2012 — In most cities, traffic growth has outpaced road capacity, leading to increased congestion, particularly during the morning and evening commutes. In 2007, congestion on U.S. roads was responsible for 4.2 billion hours of additional travel time, as well as 2.8 billion gallons of fuel consumption and an accompanying increase in air pollution.

One way to prevent traffic tie-ups is to have fewer cars on the road by encouraging alternatives such as public transportation, carpooling, flex time and working from home. But a new study -- by researchers at MIT, Central South University in China, the University of California at Berkeley and the Austrian Institute of Technology -- incorporates data from drivers' cellphones to show that the adoption of these alternatives by a small percentage of people across a metropolitan area might not be very effective. However, if the same number of people, but from a carefully selected segment of the driving population, chooses not to drive at rush hour, this could reduce congestion significantly.

The study, published in the Dec. 20 issue of the journal Scientific Reports, demonstrates that canceling or delaying the trips of 1 percent of all drivers across a road network would reduce delays caused by congestion by only about 3 percent. But canceling the trips of 1 percent of drivers from carefully selected neighborhoods would reduce the extra travel time for all other drivers in a metropolitan area by as much as 18 percent.
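The large gap between uniform and targeted trip reduction follows from the fact that delay grows steeply, not linearly, with traffic volume. The toy model below reproduces the intuition using the standard BPR volume-delay function from traffic engineering; it is not the paper's network model, and the flows and capacity are invented for illustration.

```python
# Toy illustration: with a convex delay curve, cancelling the same number
# of trips on the most congested link saves far more total delay than
# cancelling them uniformly across the whole network.
def travel_time(flow, capacity, free_time=1.0):
    # Standard BPR volume-delay function (a common traffic-engineering form).
    return free_time * (1 + 0.15 * (flow / capacity) ** 4)

def total_delay(flows, capacity=1000.0):
    # Sum of per-vehicle extra time (beyond free flow) over all links.
    return sum(f * (travel_time(f, capacity) - 1.0) for f in flows)

flows = [500.0] * 9 + [1500.0]      # nine lightly loaded links, one congested
base = total_delay(flows)

cut = 0.01 * sum(flows)             # cancel 1% of all trips...
uniform = total_delay([f * 0.99 for f in flows])          # ...spread evenly
targeted = total_delay(flows[:9] + [flows[9] - cut])      # ...all from the worst link

print(f"uniform cut saves {100 * (base - uniform) / base:.0f}% of delay")
print(f"targeted cut saves {100 * (base - targeted) / base:.0f}% of delay")
```

With these invented numbers, the targeted cut saves several times more delay than the uniform one, mirroring the qualitative 3-percent-versus-18-percent contrast reported in the study.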

"This has an analogy in many other flows in networks," says lead researcher Marta González, the Gilbert W. Winslow Career Development Assistant Professor in MIT's Department of Civil and Environmental Engineering. "Being able to detect and then release the congestion in the most affected arteries improves the functioning of the entire coronary system."

The study, designed by González and former MIT postdoc Pu Wang, now a professor at Central South University, is the first large-scale traffic study to track travel using anonymous cellphone data rather than survey data or information obtained from U.S. Census Bureau travel diaries. Both of these are prone to error because of the time lag between gathering and releasing data and the reliance on self-reporting.

González and Wang used three weeks of cellphone data to obtain information about anonymous drivers' routes and the estimated traffic volume and speed on those routes. They inferred a driver's home neighborhood from the regularity of the route traveled and from the locations of cell towers that handled calls made between 9 p.m. and 6 a.m. They combined this with information about population densities and the location and capacity of roads in the networks of two metropolitan areas -- Boston and San Francisco -- to determine which neighborhoods are the largest sources of drivers on each road segment, and which roads these drivers use to connect from home to highways and other major roadways.
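The night-time home-inference step described above can be sketched as a simple heuristic. The data shape and function below are hypothetical; the study's actual method also exploits the regularity of routes, not just call locations.

```python
from collections import Counter
from datetime import datetime

# Each record: (timestamp, tower_id). This data shape is assumed for
# illustration; real call-detail records carry more fields.
def infer_home_tower(call_records):
    """Guess the user's home area as the cell tower that handles the
    most calls between 9 p.m. and 6 a.m. (illustrative heuristic)."""
    night = Counter(
        tower for ts, tower in call_records
        if ts.hour >= 21 or ts.hour < 6
    )
    return night.most_common(1)[0][0] if night else None

records = [
    (datetime(2012, 12, 20, 22, 15), "tower_A"),
    (datetime(2012, 12, 20, 23, 40), "tower_A"),
    (datetime(2012, 12, 21, 14, 5),  "tower_B"),  # daytime call: ignored
    (datetime(2012, 12, 21, 5, 30),  "tower_A"),
]
print(infer_home_tower(records))  # -> tower_A
```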

In the Boston area, they found that canceling 1 percent of trips by select drivers in the Massachusetts municipalities of Everett, Marlborough, Lawrence, Lowell and Waltham would cut all drivers' additional commuting time caused by traffic congestion by 18 percent. In the San Francisco area, canceling trips by drivers from Dublin, Hayward, San Jose, San Rafael and parts of San Ramon would cut 14 percent from the travel time of other drivers.

"These percentages are averages based on a one-hour commute with additional minutes caused by congestion," Wang says. "The drivers stuck in the roads with worst congestion would see the greatest percentage of time savings, because the selective strategy can more efficiently decrease the traffic flows in congested roads."

To validate the study's methodology, Alexandre Bayen, an associate professor of systems engineering at Berkeley, and graduate student Timothy Hunter compared González and Wang's estimations of travel time based on cellphone data with their own data obtained from GPS sensors in taxis in the San Francisco area. Using GPS data, Bayen and Hunter computed taxis' speed based on travel time from one location to another; from that speed of travel, they then determined congestion levels. Their findings agreed with those of González and Wang.
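The validation arithmetic is straightforward: travel time between two GPS fixes gives a speed, and speed relative to free flow gives a congestion measure. The sketch below is a simplified illustration, not Bayen and Hunter's actual pipeline; the free-flow speed and distances are invented.

```python
from datetime import datetime

def segment_speed_kmh(dist_km, t_start, t_end):
    # Speed from GPS travel time between two points on a road segment.
    return dist_km / ((t_end - t_start).total_seconds() / 3600.0)

def congestion_index(observed_kmh, free_flow_kmh):
    # 1.0 means free flow; smaller values mean heavier congestion.
    return observed_kmh / free_flow_kmh

# A taxi covering 5 km in 15 minutes at rush hour:
v = segment_speed_kmh(5.0, datetime(2012, 12, 20, 8, 0),
                           datetime(2012, 12, 20, 8, 15))  # -> 20.0 km/h
idx = congestion_index(v, 60.0)  # well below 1.0: a congested segment
```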

Because the new methodology requires only three types of data -- population density, topological information about a road network, and cellphone data -- it can be used for almost any urban area.

"In many cities in the developing world, traffic congestion is a major problem and travel surveys don't exist," González says. "So the detailed methodology we developed for using cellphone data to accurately characterize road network use could help traffic managers control congestion and allow planners to create road networks that fit a population's needs."

González and Wang are currently studying road use in the Dominican Republic, France, Portugal, Rwanda and Spain. They treat the anonymous cellphone data with the privacy-protection measures required for the treatment of human subjects under an institutional review board.

Katja Schechtner, head of the Dynamic Transportation Systems group at the Austrian Institute of Technology and a visiting scholar at the MIT Media Lab, is a co-author on the Scientific Reports paper with González, Wang, Bayen and Hunter.

"We are now at a time where it is less difficult to get mobility data, thanks to mobile phones and other devices, and the main problem we have is how to extract useful information from all these data," says Marc Barthelemy, a senior researcher at the Institute of Theoretical Physics at CEA in France. "[González] and her team proposed a very interesting and new idea of constructing the network of road usage, which allows us to understand where individuals on a given road are coming from, and enables us to propose new strategies for mitigating congestion. This approach will certainly open new avenues of research in the very active field of mobility in urban systems."

The study was funded by grants from the New England University Transportation Center, the NEC Corporation Fund, the Solomon Buchsbaum Research Fund and the National Natural Science Foundation of China. Wang received funding from the Shenghua Scholar Program of Central South University.

Story Source:

The above story is reprinted from materials provided by Massachusetts Institute of Technology. The original article was written by Denise Brehm.

Journal Reference:

Pu Wang, Timothy Hunter, Alexandre M. Bayen, Katja Schechtner, Marta C. González. Understanding Road Usage Patterns in Urban Areas. Scientific Reports, 2012; 2 DOI: 10.1038/srep01001


Wednesday, 19 December 2012

Wireless networks: Mobile devices keep track

Nov. 21, 2012 — A more sensitive technique for determining user position could lead to improved location-based mobile services.

Many mobile-phone applications (apps) use spatial positioning technology to present their users with location-specific information, such as directions to nearby amenities. Improved positioning accuracy is now available by simultaneously predicting the locations of both the mobile user and the data access points, or hotspots, thanks to an international research team including Sinno Jialin Pan from the A*STAR Institute for Infocomm Research. Software developers expect that such improvements will enable a whole new class of apps that can react to small changes in position.

Traditionally, device position was determined by the Global Positioning System (GPS), which uses satellites to triangulate approximate location, but its accuracy falters when the mobile device is indoors. An alternative approach is to use the 'received signal strength' (RSS) from local transmitters, but attenuation of radio waves by walls can limit accuracy, and it is difficult to predict signals in complex, obstacle-filled environments.
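To see why wall attenuation matters, consider the textbook log-distance path-loss model, which relates RSS to distance from a transmitter. This formula is general background, not taken from this work, and the reference power and path-loss exponent below are hypothetical values.

```python
def rss_to_distance(rss_dbm, ref_power_dbm=-30.0, path_loss_exp=3.0):
    """Invert the log-distance path-loss model:

        RSS = P0 - 10 * n * log10(d)

    where P0 is the RSS at 1 metre and n is the path-loss exponent.
    Both values here are hypothetical. Indoors, walls and obstacles
    inflate n unpredictably, which is exactly why purely model-based
    RSS ranging is unreliable in complex environments.
    """
    return 10 ** ((ref_power_dbm - rss_dbm) / (10 * path_loss_exp))

d = rss_to_distance(-60.0)  # -> 10.0 metres under these assumed parameters
```

A small error in the assumed exponent changes the estimated distance substantially, motivating the learning-based techniques discussed next.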

Software developers have tried to circumvent these problems by using so-called 'learning-based' techniques that identify correlations between RSS values and access-point placement. Such systems do not necessarily require prior knowledge of the hotspot locations; rather, they 'learn' from data collected on a mobile device. This approach also has drawbacks: the amount of data can be large, making calibration time-consuming, and changes in the environment can render the calibration outdated.

Pan and his co-workers reduced this calibration effort in an experimental demonstration of a protocol that calculates both the positions of the device and the access points simultaneously -- a process they call colocalization. "Integrating the two location-estimation tasks into a unified mathematical model means that we can fully exploit the correlations between mobile-device and hotspot position," explains Pan.

First, the researchers trained a learning-based system with the signal-strength values received from access points at selected places in the area of interest. They used this information to calibrate a probabilistic 'location-estimation' system. Then, they approximated the location from the learned model using signal strength samples received in real-time from the access points.
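A minimal version of the learning-based localization step is nearest-neighbour RSS fingerprinting: calibrate with (position, RSS-vector) pairs, then locate a new reading by its closest match in signal space. This sketch illustrates the general approach only; the paper's semi-supervised colocalization model is considerably more sophisticated, and all data below are invented.

```python
def nearest_fingerprint(fingerprints, reading):
    """fingerprints: list of ((x, y), [RSS in dBm per access point]).
    Returns the calibration position whose RSS vector is closest to
    'reading' (squared Euclidean distance in signal space)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(fingerprints, key=lambda fp: dist2(fp[1], reading))[0]

# Hypothetical calibration survey: three positions, three access points.
calibration = [
    ((0.0, 0.0), [-40, -70, -80]),
    ((5.0, 0.0), [-60, -50, -75]),
    ((0.0, 5.0), [-65, -72, -45]),
]
print(nearest_fingerprint(calibration, [-58, -52, -74]))  # -> (5.0, 0.0)
```

The calibration burden of collecting such surveys is precisely what the researchers' colocalization approach reduces, by estimating access-point positions jointly with the device position.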

Experimental trials showed that this approach not only required less calibration, but it was more accurate than other state-of-the-art systems. "We next want to apply the method to a larger-scale environment," says Pan. "We also want to find ways to make use of the estimated locations to provide more useful information, such as location-based advertising." As this technique could help robots navigate by themselves, it may also have important implications for the burgeoning field of robotics.

Story Source:

The above story is reprinted from materials provided by The Agency for Science, Technology and Research (A*STAR).

Journal Reference:

Jeffrey Junfeng Pan, Sinno Jialin Pan, Jie Yin, Lionel M. Ni, Qiang Yang. Tracking Mobile Users in Wireless Networks via Semi-Supervised Colocalization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012; 34 (3): 587 DOI: 10.1109/TPAMI.2011.165


Flexible, low-voltage circuits made using nanocrystals

Nov. 26, 2012 — Electronic circuits are typically integrated in rigid silicon wafers, but flexibility opens up a wide range of applications. In a world where electronics are becoming more pervasive, flexibility is a highly desirable trait, but finding materials with the right mix of performance and manufacturing cost remains a challenge.

Now a team of researchers from the University of Pennsylvania has shown that nanoscale particles, or nanocrystals, of the semiconductor cadmium selenide can be "printed" or "coated" on flexible plastics to form high-performance electronics.

The research was led by David Kim, a doctoral student in the Department of Materials Science and Engineering in Penn's School of Engineering and Applied Science; Yuming Lai, a doctoral student in the Engineering School's Department of Electrical and Systems Engineering; and professor Cherie Kagan, who has appointments in both departments as well as in the School of Arts and Sciences' Department of Chemistry. Benjamin Diroll, a doctoral student in chemistry, and Penn Integrates Knowledge Professor Christopher Murray of Materials Science and of Chemistry also collaborated on the research.

Their work was published in the journal Nature Communications.

"We have a performance benchmark in amorphous silicon, which is the material that runs the display in your laptop, among other devices," Kagan said. "Here, we show that these cadmium selenide nanocrystal devices can move electrons 22 times faster than in amorphous silicon."

Besides speed, another advantage cadmium selenide nanocrystals have over amorphous silicon is the temperature at which they are deposited. Whereas amorphous silicon uses a process that operates at several hundred degrees, cadmium selenide nanocrystals can be deposited at room temperature and annealed at mild temperatures, opening up the possibility of using more flexible plastic foundations.

Another innovation that allowed the researchers to use flexible plastic was their choice of ligands, the chemical chains that extend from the nanocrystals' surfaces and help facilitate conductivity as the nanocrystals are packed together into a film.

"There have been a lot of electron transport studies on cadmium selenide, but until recently we haven't been able to get good performance out of them," Kim said. "The new aspect of our research was that we used ligands that we can translate very easily onto the flexible plastic; other ligands are so caustic that the plastic actually melts."

Because the nanocrystals are dispersed in an ink-like liquid, multiple types of deposition techniques can be used to make circuits. In their study, the researchers used spincoating, where centrifugal force pulls a thin layer of the solution over a surface, but the nanocrystals could be applied through dipping, spraying or ink-jet printing as well.

On a flexible plastic sheet a bottom layer of electrodes was patterned using a shadow mask -- essentially a stencil -- to mark off one level of the circuit. The researchers then used the stencil to define small regions of conducting gold to make the electrical connections to upper levels that would form the circuit. An insulating aluminum oxide layer was introduced and a 30-nanometer layer of nanocrystals was coated from solution. Finally, electrodes on the top level were deposited through shadow masks to ultimately form the circuits.

"The more complex circuits are like buildings with multiple floors," Kagan said. "The gold acts like staircases that the electrons can use to travel between those floors."

Using this process, the researchers built three kinds of circuits to test the nanocrystals' performance for circuit applications: an inverter, an amplifier and a ring oscillator.

"An inverter is the fundamental building block for more complex circuits," Lai said. "We can also show amplifiers, which amplify the signal amplitude in analog circuits, and ring oscillators, where 'on' and 'off' signals are properly propagating over multiple stages in digital circuits."

"And all of these circuits operate with a couple of volts," Kagan said. "If you want electronics for portable devices that are going to work with batteries, they have to operate at low voltage or they won't be useful."

With the combination of flexibility, relatively simple fabrication processes and low power requirements, these cadmium selenide nanocrystal circuits could pave the way for new kinds of devices and pervasive sensors, which could have biomedical or security applications.

"This research also opens up the possibility of using other kinds of nanocrystals, as we've shown the materials aspect is not a limitation any more," Kim said.

The research was supported by the U.S. Department of Energy and the National Science Foundation.


Story Source:

The above story is reprinted from materials provided by University of Pennsylvania.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Journal Reference:

David K. Kim, Yuming Lai, Benjamin T. Diroll, Christopher B. Murray, Cherie R. Kagan. Flexible and low-voltage integrated circuits constructed from high-performance nanocrystal transistors. Nature Communications, 2012; 3: 1216 DOI: 10.1038/ncomms2218

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.



One step closer to rollable, foldable e-Devices

Oct. 30, 2012 — The next generation of electronic displays -- e-Readers, smartphones and tablets -- is closer thanks to research out October 31 from the University of Cincinnati.

Advances that will eventually bring foldable/rollable e-devices as well as no pixel borders are experimentally verified and proven to work in concept at UC's Novel Devices Laboratory. That research is published this week in the journal Nature Communications.

The UC paper, "Bright e-Paper by Transport of Ink through a White Electrofluidic Imaging Film," is authored by College of Engineering and Applied Science doctoral students Matthew Hagedon, Shu Yang, and Ann Russell, as well as Jason Heikenfeld, associate professor of electronic and computing systems. UC worked on this research with partner: start-up company Gamma Dynamics.

Foldable e-Devices Closer Thanks to Electrofluidic Imaging Film

One challenge in creating foldable e-Paper devices has been the device screen, which is currently made of rigid glass. But what if the screen were a paper-thin plastic that rolled like a window shade? You'd have a device like an iPad that could be folded or rolled up repeatedly -- even tens of thousands of times. Just roll it up and stick it in your pocket.

The UC research out today experimentally verifies that such a screen of paper-thin plastic, which the researchers refer to as "electrofluidic imaging film," works. The breakthrough is a white, porous film coated with a thin layer of reflective electrodes and spacers. These are subjected to sophisticated fluid mechanics in order to electrically transport the colored-ink and clear-oil fluids that make up the visible content (text, images, video) of electronic devices.

According to UC's Hagedon, "This is the first of any type of electrowetting display that can be made as a simple film that you laminate onto a sheet of controlling electronics. Manufacturers prefer this approach compared to having to build up the pixels themselves within their devices, layer by layer, material by material. Our proof-of-concept breakthrough takes us one step closer to brighter, color-video e-Paper and the Holy Grail of rollable/foldable displays."

No Pixel Borders

Importantly, this paper-thin plastic screen developed at UC is the first among all types of fluidic displays that has no pixel borders.

In current technology, colors maintain their image-forming distinctiveness by means of what are known as "pixel borders." Each individual pixel that helps to comprise the image necessary for text, photographs, video and other content maintains its distinct color and does not bleed over into the next pixel or color due to a pixel border. In other words, each individual pixel of color has a border around it (invisible to the eye of the consumer) to maintain its color distinctiveness.

This matters because pixel borders are basically "dead areas" that dull any display of information, whether a display of text or image. Leading electronics companies have been seeking ways to reduce or eliminate pixel borders in order to increase display brightness.

Said UC's Heikenfeld, "For example, the pixel border in current electrowetting displays, which prevents ink merging, takes up a sizable portion of the pixel. This is now resolved with our electrofluidic film breakthrough. Furthermore, our breakthrough provides extraordinary capability to hide the ink when you don't want to see it, which further cranks up the available brightness and color of the display when you do want to see it. With a single, new technology, we have simplified manufacturability AND improved screen brightness."

Foldable e-Devices as Environmental Electronics

Expect the first-generation foldable e-devices to be monochrome; color will come later. Eventually, within 10 to 20 years, e-Devices with magazine-quality color, viewable in bright sunlight yet requiring little power, will come to market. "Think of this as the green iPad or e-Reader, combining high function and high color without the weight of a heavy battery, readable out in the sunlight, and foldable into your pocket," said Heikenfeld.

The device will require little power to operate, since it will charge via sunlight and ambient room light. It will also be so "tough," relying solely on wireless connections rather than physical ports, that you could leave it out overnight in the rain. In fact, you'll be able to wash it or drop it without damaging the thin, highly flexible casing and screen.

This latest proof of concept research verifying the functionality of electrofluidic imaging film builds on previous research out of UC's Novel Devices Laboratory. That previous research broke down a significant barrier to bright electronic displays that don't require a heavy battery to power them.

Currently, faster, color-saturated, high-power devices like a computer's liquid-crystal display screen, an iPad or a cell phone require high power (and, consequently, a larger battery), in part because they need a strong internal light source that "backlights" the screen, as well as color filters, in order to display color and moving images. The need for an internal light source also means visibility is poor in bright sunlight.

The new electrofluidic imaging film is part of an overall UC design that will require only low-power to produce high speed content and function because it makes use of ambient light vs. a strong, internal light source within the device.


Story Source:

The above story is reprinted from materials provided by University of Cincinnati. The original article was written by M.B. Reilly.

Journal Reference:

Matthew Hagedon, Shu Yang, Ann Russell, Jason Heikenfeld. Bright e-Paper by Transport of Ink through a White Electrofluidic Imaging Film. Nature Communications, October 31, 2012




Putting more cores to work in server farms

Nov. 26, 2012 — EPFL scientists have found that reorganizing the inner architecture of the processors used in massive data processing centers can yield significant energy savings. The research comes from the EcoCloud research center, founded to pioneer technologies that make cloud computing scalable, cost-effective and sustainable.

Streaming data, social networks, online games and services, databases -- the number of interactions we have with the Internet is continually increasing. Every time we click on a link, we trigger an avalanche of computer operations that are then carried out in huge server farms. It's estimated that these massive installations are responsible for 2% of total world electricity consumption. EPFL EcoCloud scientists are proposing a novel solution to help rein in this runaway consumption: by integrating the same kind of processor cores that are used in smartphones, the amount of energy needed can be reduced by a factor of four. The results were recently published in an IEEE Micro article.

The giants of the digital world -- such as Facebook, Google, Microsoft -- all depend on vast, powerful farms with tens of thousands of servers to manage their data processing. To help keep costs down and to improve energy efficiency, chips have been improved and packed as tightly as possible into the processors. But this approach has reached its limits.

EcoCloud's solution, titled "scale-out processors," is based on a different approach. They propose a reorganization and redesign of the processors used in the servers. Instead of the current design, which is based on a few, very powerful processor cores, they recommend using a greater number of less powerful cores. Each processor could thus respond to a larger number of requests.

Over-powerful

"The vast majority of Internet requests don't involve complicated analysis, but are generally just retrieval from memory," explains Boris Grot, from Parallel Systems Architecture Laboratory (PARSA). "But current servers are designed for carrying out a whole range of tasks, from complex scientific calculations to gaming. They're actually way too powerful for most basic demands. As a result, they're not being used in an optimal manner."

The researchers have combined the advantages of new-generation small processor cores developed for smartphone-type devices: their architecture is simple, but they process requests efficiently. Concentrated in large numbers on a single chip, they are a better match for the way servers are actually used. After studying and comparing several designs, the EcoCloud scientists concluded that this arrangement makes the best use of the space on the processor and significantly improves performance.
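The many-small-cores argument can be sketched with a back-of-the-envelope model. The key assumption, which is not taken from the IEEE Micro paper, is the common rule of thumb that single-core performance grows roughly with the square root of core area ("Pollack's rule"); under it, a fixed die area serves more throughput-bound requests when split into many small cores. All numbers are illustrative.

```python
# Back-of-the-envelope scale-out model. Assumption (not from the
# paper): per-core performance ~ sqrt(core area), so throughput
# workloads favor many small cores over a few big ones.

import math

DIE_AREA = 16  # arbitrary units of die area available for cores

def throughput(core_area):
    """Total relative throughput of a die filled with identical cores,
    with per-core performance proportional to sqrt(core_area)."""
    n_cores = DIE_AREA // core_area
    return n_cores * math.sqrt(core_area)

few_big = throughput(4)     # 4 powerful cores
many_small = throughput(1)  # 16 modest, smartphone-class cores
print(few_big, many_small)  # -> 8.0 16.0
```

In this toy model the small-core die doubles throughput for the same area, which mirrors (in spirit, not in figures) the efficiency gains the EcoCloud team reports.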


Story Source:

The above story is reprinted from materials provided by Ecole Polytechnique Fédérale de Lausanne. The original article was written by Sarah Perrin.

Journal Reference:

Boris Grot, Damien Hardy, Pejman Lotfi-Kamran, Babak Falsafi, Chrysostomos Nicopoulos, Yiannakis Sazeides. Optimizing Data-Center TCO with Scale-Out Processors. IEEE Micro, 2012; 32 (5): 52 DOI: 10.1109/MM.2012.71




Beating the dark side of quantum computing

Nov. 16, 2012 — A future quantum computer will be able to carry out calculations billions of times faster than even today's most powerful machines by exploiting the fact that the tiniest particles -- molecules, atoms and subatomic particles -- can exist in more than one state simultaneously. Scientists and engineers are looking forward to working with such high-powered machines, but so too are cyber-criminals, who will be able to exploit this power to crack passwords and decrypt secret messages much faster than they can now.

Now, Richard Overill of the Department of Informatics at King's College London is working in the field of digital forensics to develop the necessary tools to pre-empt the cyber-criminals as quantum computing becomes reality. Writing in the Int. J. Information Technology, Communications and Convergence, Overill explains that while quantum computing is in its infancy, as with earlier technological leaps once the nuts and bolts are in place, it will be adopted rapidly by computer scientists and others eager to utilise its enormous potential.

The technologies that will underpin quantum computing will be quite esoteric to the non-specialist and include laser-excited atomic ion traps using beryllium or calcium atoms, bulk liquid-phase and solid-phase nuclear magnetic resonance, as well as superconducting solid-state circuits operating at liquid-helium temperatures. Of course, the semiconductor silicon-chip technology underpinning current supercomputers is perhaps just as esoteric, although it seems more familiar to us now.

Nevertheless, it is not the complexity of the technology that is important but what it will allow computer users to do: solving logistics problems by overlaying all possible solutions and letting quantum mechanics find the optimal route, for instance, or creating encryption keys that could never be cracked by a conventional computer. And, as Overill warns, it will give those intent on cracking passwords the means to apply computational brute force with unprecedented efficiency. Such power might be wielded by crime fighters and criminals alike.

"At first sight, therefore, it would appear that with the advent of practical quantum computers the task of cyber-law enforcement will become significantly more challenging," says Overill. However, as has always been the case with crime and crime fighting, forensics constantly plays catch up with the technology exploited by criminals and so too with quantum computers. Overill provides a roadmap for how research into digital forensics must progress if crime fighters and investigators are to keep up with the pace of change.

Currently there is no answer to beating quantum crime. "There are ultimate physical limitations on what forensic information can be recovered from a quantum computation," says Overill. Forensics has always faced such limitations, but investigators are adept at obtaining clues regardless. "So, our digital quantum forensics mission has to focus on learning how to get 'more from less', by squeezing every last drop of information from the traces that can be recovered, and then devising novel techniques to interpret these traces as richly as possible."


Story Source:

The above story is reprinted from materials provided by Inderscience, via AlphaGalileo.

Journal Reference:

Richard Overill et al. Digital quantum forensics: future challenges and prospects. Int. J. Information Technology, Communications and Convergence, 2012, 2, 205-211




Field geologists (finally) going digital

Nov. 5, 2012 — Not very long ago a professional geologist's field kit consisted of a Brunton compass, rock hammer, magnifying glass, and field notebook. No longer. In the field and in the labs and classrooms, studying Earth has undergone an explosive change in recent years, fueled by technological leaps in handheld digital devices, especially tablet computers and cameras.

Geologist Terry Pavlis' digital epiphany came almost 20 years ago when he was in a museum looking at a 19th-century geology exhibit that included a Brunton compass. "Holy moly!" he remembers thinking, "We're still using this tool." This is despite the fact that technological changes over the last 10 years have not only made the Brunton compass obsolete, but swept away paper field notebooks as well (the rock hammer and hand-lens magnifier remain unchallenged, however).

The key technologies that replace the 19th-century field tools are the smartphone, the PDA, the handheld GPS unit, and the tablet PC and iPad. Modern tablets, in particular, can do everything a Brunton compass can, plus take pictures, act as both notebook and mapping device, and gather precise location data using GPS. They can even be equipped with open-source GIS software.
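As a hypothetical sketch of how a tablet can stand in for one Brunton compass function: with the device resting flat on a bedding plane, the on-board accelerometer reads the gravity vector in device coordinates, and the dip is the angle between that vector and the screen normal. The axis convention here is an assumption, and absolute strike would additionally need the magnetometer; this covers the dip computation only.

```python
# Hypothetical dip computation from a tablet accelerometer, assuming
# the device lies flat on the bedding plane and the z-axis is the
# screen normal. Strike would also require a compass heading.

import math

def dip_angle(ax, ay, az):
    """Dip in degrees from horizontal, given an accelerometer reading
    (ax, ay, az) taken with the device resting on the plane."""
    g = math.sqrt(ax * ax + ay * ay + az * az)
    # min() guards against rounding pushing the ratio just above 1.
    return math.degrees(math.acos(min(1.0, abs(az) / g)))

print(dip_angle(0.0, 0.0, 9.81))           # horizontal plane -> 0.0
print(round(dip_angle(0.0, 1.0, 1.0), 6))  # tilted plane -> 45.0
```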

Pavlis, a geology professor at The University of Texas at El Paso, and Stephen Whitmeyer of James Madison University will be presenting the 21st-century way to do field geology on Monday, 5 Nov., at the meeting of the Geological Society of America (GSA) in Charlotte, N.C. The presentations are a part of a digital poster Pardee Keynote Symposium titled, "Digital Geology Speed-Dating: An Innovative Coupling of Interactive Presentations and Hands-On Workshop."

"I had a dream we would not be touching paper anymore," says Pavlis. "I'm now sort of an evangelist on this subject."

That's not to say that the conversion to digital field geology is anywhere near complete. The new technology is not quite catching on in some university field courses because the technology is more expensive and becomes obsolete quickly, says Pavlis.

"Field geology courses are expensive enough for students," he notes. As a result, the matter of teaching field geology with digital tools is actually rather controversial among professors.

Meanwhile, on the classroom side of earth science education, there are new digital tools that bring the field into the classroom. One of them is GigaPans -- gigantic panorama images.

"A GigaPan is basically a really big picture that's made of lots of full-resolution zoomed-in photos," explains geologist Callan Bentley of Northern Virginia Community College. To make a GigaPan, you need a GigaPan Robot that looks at the scene and breaks it into a grid, then shoots the grid. That can result in hundreds or even thousands of images. The GigaPan system then stitches them together. The resulting stitched image is uploaded to the GigaPan.org website where everybody can see it.

"In geology, we look at things in multiple scales," says Bentley. "A well-composed GigaPan is very useful." Bentley will be presenting GigaPans at the same GSA meeting session as Pavlis, along with others using the latest technology to study and teach geology.

GigaPans were developed by Google, NASA, and the robotics lab at Carnegie Mellon University. Bentley got involved when the "Fine Outreach for Science" program recruited him. Since then, he has been documenting the geology of the Mid-Atlantic region.

"I have used some of it in the classroom," said Bentley. "I have students look at a scene, make a hypothesis then look closer to test the hypothesis."


Story Source:

The above story is reprinted from materials provided by Geological Society of America.




NASA's Curiosity rover checks in on Mars using Foursquare

Oct. 4, 2012 — NASA's Curiosity Mars rover checked in on Mars Wednesday using the mobile application Foursquare. This marks the first check-in on another planet. Users on Foursquare can keep up with Curiosity as the rover checks in at key locations and posts photos and tips, all while exploring the Red Planet.

"NASA is using Foursquare as a tool to share the rover's new locations while exploring Mars," said David Weaver, associate administrator for communications at NASA Headquarters in Washington. "This will help to involve the public with the mission and give them a sense of the rover's travels through Gale Crater."

After landing in Gale Crater last month, Curiosity began a planned 23-month mission that includes some of Mars' most intriguing scientific destinations. Curiosity is roving toward Mount Sharp, a mountain about 3 miles (5 kilometers) tall. The rover is conducting experiments along the way, seeking clues in the rocks and soil that would indicate whether Mars ever was capable of supporting microbial life. It is taking and sharing pictures of the trip.

Back here on Earth, Foursquare users will be able to earn a Curiosity-themed badge on the social media platform for check-ins at locations that generate an interest in science, technology, engineering and mathematics. Available late this year, this new badge will encourage Foursquare users to explore science centers, laboratories and museums that pique scientific curiosity.

NASA has been on Foursquare since 2010 through a strategic partnership with the platform. This partnership, launched with astronaut Doug Wheelock's first-ever check-in from the International Space Station, has allowed users to connect with NASA and enabled them to explore the universe and re-discover Earth.

The partnership launched the NASA Explorer badge for Foursquare users, encouraging them to explore NASA-related locations across the country. It also included the launch of a NASA Foursquare page, where the agency provides official tips and information about the nation's space program.

The Jet Propulsion Laboratory manages the Mars Science Laboratory mission and its Curiosity rover for NASA's Science Mission Directorate in Washington. The rover was designed, developed and assembled at JPL, a division of the California Institute of Technology in Pasadena.

To find out more about Mars Curiosity and NASA on Foursquare, visit: http://www.foursquare.com/MarsCuriosity and http://www.foursquare.com/NASA

For information about NASA's partnership with Foursquare, visit: http://www.nasa.gov/connect/foursquare.html

For more information about NASA's Curiosity mission, visit: http://www.nasa.gov/msl and http://mars.jpl.nasa.gov/msl .


Story Source:

The above story is reprinted from materials provided by NASA/Jet Propulsion Laboratory.




Increasing efficiency of wireless networks: New method could have broad impacts on mobile Internet and wireless industries

Nov. 13, 2012 — Two professors at the University of California, Riverside Bourns College of Engineering have developed a new method that doubles the efficiency of wireless networks and could have a large impact on the mobile Internet and wireless industries.

Efficiency of wireless networks is key because there is a limited amount of spectrum to transmit voice, text and Internet services, such as streaming video and music. And when spectrum does become available it can fetch billions of dollars at auction.

The "spectrum crunch" is quickly being accelerated as customers convert from traditional cell phones to smartphones and tablets. For example, tablets generate 121 times more traffic than a traditional cell phone.

Without making networks more efficient, customers are likely to drop more calls, pay more money for service, endure slower data speed and not see an unlimited data plan again.

The UC Riverside findings were outlined in a paper titled "A method for broadband full-duplex MIMO radio" recently published online in the journal IEEE Signal Processing Letters. It was co-authored by Yingbo Hua and Ping Liang, who are both electrical engineering professors, and three of their graduate students: Yiming Ma, Ali Cagatay Cirik and Qian Gao.

Current radios for wireless communications are half-duplex, meaning signals are transmitted and received in two separate channels. Full-duplex radios, which transmit and receive at the same time in the same frequency band, can double the efficiency of the spectrum.

However, to make a full-duplex radio, one must solve a problem: interference between the transmit and receive functions. Full-duplex radio technology is not yet ready for current 3G and 4G networks.

The interference caused by signals from cell towers could be billions of times more powerful than the signals the towers are trying to pick up from a user's smartphone. As a result, incoming signals would be drowned out.

The UC Riverside researchers have found a new solution, called "time-domain transmit beamforming," which digitally creates a time-domain cancellation signal and couples it to the radio-frequency front end, allowing the radio to hear much weaker incoming signals while transmitting strong outgoing signals at the same frequency and the same time.

This new solution is indispensable for full-duplex radio in general, and it complements the other required components. It not only has a sound theoretical basis, but also leads to lower-cost, faster and more accurate channel estimation for robust and effective cancellation.
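The general principle behind such cancellation can be shown in a toy baseband sketch. Because the radio knows its own transmit waveform, it can synthesize the much stronger self-interference and subtract it, leaving the weak incoming signal. This is a simplified illustration of the principle only, not the authors' time-domain transmit beamforming algorithm, and the gain value is an arbitrary assumption.

```python
# Toy digital self-interference cancellation (illustrative only).
# The radio knows its own transmit samples, so it can subtract an
# estimate of the self-interference from what the antenna received.

import random

random.seed(0)
n = 1000
incoming = [0.001 * random.gauss(0, 1) for _ in range(n)]  # weak handset signal
transmit = [random.gauss(0, 1) for _ in range(n)]          # our own strong signal
gain = 50.0  # self-interference channel gain (known or estimated)

# What the antenna actually sees: our own signal swamps the incoming one.
received = [gain * t + s for t, s in zip(transmit, incoming)]

# Cancellation: subtract our estimate of the self-interference.
recovered = [r - gain * t for r, t in zip(received, transmit)]

residual = max(abs(a - b) for a, b in zip(recovered, incoming))
print("worst-case recovery error:", residual)  # floating-point noise only
```

In a real radio the channel gain is neither flat nor perfectly known, which is why accurate channel estimation, the point the researchers emphasize, is central to making cancellation work.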

"We believe the future applications of full duplex radios are huge, ranging from cell towers, backhaul networks and wireless regional area networks to billions handheld devices for data intensive application such as FaceTime," said Liang, who added that the researchers have had discussions with several major wireless telecommunication equipment companies.

Liang and Hua believe their research has commercial potential in part because most of the core components required are digital and therefore costly new components won't need to be added to existing infrastructure.

Liang and Hua also believe cell towers are one of the most likely places to start implementing full-duplex radios, in large part because they are less constrained by existing standards.

Liang and Hua also see applications in cognitive radio, a type of wireless communication in which a transceiver can detect which communication channels are in use and which are not, and move into vacant channels while avoiding occupied ones. While cellular frequency bands are overloaded, other bands, such as military, amateur radio and TV, are often underutilized.


Story Source:

The above story is reprinted from materials provided by University of California - Riverside. The original article was written by Sean Nealon.

Journal Reference:

Yingbo Hua, Ping Liang, Yiming Ma, Ali Cagatay Cirik, Qian Gao. A Method for Broadband Full-Duplex MIMO Radio. IEEE Signal Processing Letters, 2012; 19 (12): 793 DOI: 10.1109/LSP.2012.2221710




Glove keyboard may revolutionize use of devices with one hand

Oct. 23, 2012 — Give a hand to some computer engineering students at The University of Alabama in Huntsville for designing a tool that could revolutionize new ways of using electronic devices with just one hand.

It's called a Gauntlet Keyboard, a glove device that functions as a wireless keyboard. Instead of tapping keys on a keyboard, the user simply touches their thumb to points on their fingers, each assigned a letter or other keyboard function.

Conductive thread carries the commands to a matchbox-sized Printed Circuit Board (PCB) affixed to the back of the glove.

The PCB transmits the commands via Bluetooth to the paired device, whether that's a computer, a mobile phone, a music synthesizer, a video game or a military device. Think of the Gauntlet as a touch screen that works by tapping your fingers to your thumb on a gloved hand.

Four senior engineering students at UAH made the glove their senior design project for a computer engineering class led by Dr. B. Earl Wells.

The students -- Jiake Liu, Stephen Dond, Douglas Kirby and Chris Heath -- are now seeking a patent to market the product. The project recently won a $20,000 prize from the Best Buy Innovator Fund among hundreds of entries.

"It's basically a keyboard on your hand," explained Lui, the principal innovator. "You, by tapping your thumb on each segment of your fingers, type to the screen basically. And you can do a swiping gesture that would erase it."

Gauntlet is an acronym for Generally Accessible Universal Nomadic Tactile Low-power Electronic Typist. That's a lengthy description of what essentially is a glove with a beehive of conductive threads running throughout the fingers and palm.

Liu said the inspiration came from his interest in science fiction movies and experience with touch-screen technologies.

Once he and his project partners came up with the idea, they did some scientific research on the most frequently used characters on a keyboard. Common keystrokes got the easiest finger-thumb alignments like the fingertips. Less common ones required more hand contortions to make the contacts.

"Doug (Kirby) did some research and found the most commonly used letters in the English alphabet," said Dond. "We all sat around and asked a few people and tried to figure these easiest places to touch your finger with your thumb and we put the most commonly used letters there. We tried to make it as efficient and easy to use as possible."

Until users memorize the new "key" positions, the characters are sewn into the finger and palm positions of the glove. Liu said the group has been in contact with a patent lawyer and a specialty glove designer about going commercial with the Gauntlet.

The students were assisted in their initial work by Huntsville electronics firm ADTRAN after entering it in the company's senior design showcase. The company assisted largely with the micro soldering of the PCB parts.

The young designers are excited about the possibilities for the Gauntlet. "There are several applications we can think of right now," Liu said. "The easy one would be as a keyboard for the consumer market. Also, the medical field for people limited to one hand from a disability. We can also think of military uses, as an entertainment device or used as a musical instrument for digital synthesizing."

Dr. Emil Jovanov, associate dean for Graduate Education and Research in the UAH College of Engineering, commended the students for their innovation. "It is a perfect example of how you take an original idea, find your niche and complete the whole idea."

Jovanov said the project would be pitched to the Alabama Launch Pad, a competition to help fund and launch business plans.

The young innovators are well on their way to success with their UAH education. Liu is co-founder and chief executive officer of Kabob, a smart phone application that provides users with digital versions of restaurant menus. Heath and Dond landed engineering jobs at Teledyne Brown, while Kirby got hired as a software engineer for Aegis Technologies.


Story Source:

The above story is reprinted from materials provided by University of Alabama Huntsville.




Now the mobile phone goes emotional

Oct. 25, 2012 — ForcePhone is a mobile synchronous haptic communication system. During phone calls, users can squeeze the side of the device, and the pressure level is mapped to vibrations on the recipient's device. Computer scientists from the University of Helsinki show that an additional haptic channel of communication can be integrated into mobile phone calls using a pressure-to-vibrotactile mapping with local and remote feedback. The pressure/vibrotactile messages supported by ForcePhone are called "pressages."
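The squeeze-to-vibration mapping at the core of the system can be sketched minimally: pressure on the sender's device is quantized to a vibration amplitude played on the recipient's device. The pressure range and number of levels below are assumptions for illustration, not values from the Helsinki study.

```python
# Minimal sketch of a pressure-to-vibrotactile mapping (assumed
# thresholds, not ForcePhone's actual parameters).

def pressure_to_vibration(pressure, max_pressure=10.0, levels=4):
    """Map a squeeze pressure (0..max_pressure, arbitrary units) onto
    a discrete vibration amplitude in [0, 1]."""
    pressure = max(0.0, min(pressure, max_pressure))  # clamp sensor reading
    level = round(pressure / max_pressure * (levels - 1))
    return level / (levels - 1)

print(pressure_to_vibration(0.0))   # no squeeze   -> 0.0
print(pressure_to_vibration(10.0))  # hard squeeze -> 1.0
```

Quantizing to a few discrete levels, rather than passing raw pressure through, keeps the tactile signal legible to the recipient despite sensor noise.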

Mobile devices include an increasing number of input and output techniques that are currently not used for communication. Recent research results by Dr Eve Hoggan from HIIT / University of Helsinki, Finland, however, indicate that a synchronous haptic communication system has value as a communication channel in real-world settings with users who express greetings, presence and emotions through pressages.

"Pressure and tactile techniques have been explored in tangible interfaces for remote communication on dedicated devices, but until now these techniques have not been implemented on mobile devices or been used during live phone calls," says Eve Hoggan.

Using a lab-based study and a small field study, Dr Hoggan and her co-workers show that haptic interpersonal communication can be integrated into a standard mobile device. Users also appreciated the new non-verbal design.

"When asked about the non-verbal cues that could be represented by pressages, the participants in our study highlighted three different approaches: to emphasize speech, express affection and presence, and to playfully surprise each other," she says.

When asked about the specific ways in which they adapted their communication style to accommodate the tactile modality, all of the participants stated that they tended to pause briefly after sending a pressage to 'make space for it in the conversation'.

According to the longitudinal study results, the participants' phone calls lasted 4 minutes and 43 seconds on average, with an average of 15.56 pressages sent during each call. All phone calls involved the use of pressages.

The prototype developed in this research, ForcePhone, is a commercially available mobile device augmented with pressure input and vibrotactile output. ForcePhone was built at the Helsinki Institute of Information Technology and Nokia Research Center, Finland.

The research paper "Pressages: Augmenting Phone Calls with Non-Verbal Messages," by Eve Hoggan, Craig Stewart, Laura Haverinen, Giulio Jacucci and Vuokko Lantz, was presented at the ACM Symposium on User Interface Software and Technology (UIST '12) in Boston, MA, USA, in October 2012.


Story Source:

The above story is reprinted from materials provided by University of Helsinki, via AlphaGalileo.


Two-thirds of U.S. adults say kids should be 13 to use internet alone: Most support stronger protections

Nov. 19, 2012 — The Internet is full of information, but also full of real risks for children, like online predators or the pitfalls of losing privacy when kids share too much information. In a new University of Michigan poll, the majority of the public supports updating federal laws that require Internet safety standards to protect kids.

The University of Michigan C.S. Mott Children's Hospital National Poll on Children's Health recently asked adults nationwide about Internet use and proposed changes to the Children's Online Privacy Protection Act, known as COPPA.

COPPA was enacted to protect young children from some of these Internet dangers by prohibiting collection of personal information through websites if the user is under age 13. But COPPA was written in 1998, before the dawn of smartphones, applications and social networking sites like Facebook and Twitter.

The Federal Trade Commission is considering updating COPPA to reflect technology advances in the past decade.

The poll found that two-thirds of adults think children should be at least 13 years old to use the Internet on their own. But 29 percent of the parents with children age 9 to 12 said their children have their own handheld Wi-Fi enabled devices, which may mean children are online and unsupervised.

Although social networking sites like Facebook restrict access to users under age 13, 18 percent of parents polled said their children age 9-12 have their own social networking profile.

So it's not surprising that adults think COPPA needs updating, says Matthew M. Davis M.D., M.A.P.P., director of the C.S. Mott Children's Hospital National Poll on Children's Health.

"For parents, COPPA may be the most important piece of federal legislation you've never heard of," said Davis, who is also associate professor of pediatrics and internal medicine at the U-M Medical School and associate professor of public policy at the Gerald R. Ford School of Public Policy.

"So much has changed in the 14 years since COPPA was enacted: Facebook, Twitter and other social networks, along with applications. This report underscores the concerns among the general public to make sure proper safeguards are enacted to protect kids."

This fall, the Federal Trade Commission sought public comments on its proposed revisions to COPPA.

In the poll, most adults expressed strong support for the proposed updates: 60 percent strongly supported prohibiting websites and applications designed for kids from collecting personal information from children under age 13.

The respondents showed similarly strong support to require websites and apps to ask users to confirm they are at least 13 years old and to require cell phone service providers and app developers to comply with COPPA regulations for users under age 13.

"Updating COPPA is a start, but parents must realize the digital landscape is continually evolving," says Davis. "It is important that parents play a key role in protecting their children online. With so many young children using the Internet every day, parents must talk to their kids about Internet safety and help teach them to identify and avoid dangerous situations."

Report: http://mottnpch.org/reports-surveys/public-supports-expanded-internet-safety-requirements-protect-kids


Story Source:

The above story is reprinted from materials provided by University of Michigan Health System.


Disclaimer: This article is not intended to provide medical advice, diagnosis or treatment. Views expressed here do not necessarily reflect those of ScienceDaily or its staff.



Small, portable sensors allow users to monitor exposure to pollution on their smart phones

Dec. 18, 2012 — Computer scientists at the University of California, San Diego have built a small fleet of portable pollution sensors that allow users to monitor air quality in real time on their smart phones. The sensors could be particularly useful to people suffering from chronic conditions, such as asthma, who need to avoid exposure to pollutants.

CitiSense is the only air-quality monitoring system capable of delivering real-time data to users' cell phones and home computers -- at any time. Data from the sensors can also be used to estimate air quality throughout the area where the devices are deployed, providing information to everyone -- not just those carrying sensors.

Just 100 of the sensors deployed in a fairly large area could generate a wealth of data -- well beyond what a small number of EPA-mandated air-quality monitoring stations can provide. For example, San Diego County has 3.1 million residents, 4,000 square miles -- and only about 10 stations.

"We want to get more data and better data, which we can provide to the public," said William Griswold, a computer science professor at the Jacobs School of Engineering at UC San Diego and the lead investigator on the project. "We are making the invisible visible."

The CitiSense sensors detect ozone, nitrogen dioxide and carbon monoxide, the most common pollutants emitted by cars and trucks. The user interface displays the sensor's readings on a smart phone by using a color-coded scale for air quality based on the EPA's air quality ratings, from green (good) to purple (hazardous).
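The EPA's Air Quality Index divides readings into six standard bands. A sketch of turning a reading into a display color follows; the band boundaries match the published EPA scale, but the function itself is an illustration, not CitiSense's actual interface code:

```python
# Standard EPA Air Quality Index bands and their display colors.
AQI_BANDS = [
    (50, "green"),    # Good
    (100, "yellow"),  # Moderate
    (150, "orange"),  # Unhealthy for sensitive groups
    (200, "red"),     # Unhealthy
    (300, "purple"),  # Very unhealthy
    (500, "maroon"),  # Hazardous
]

def aqi_color(aqi):
    """Return the display color for an AQI reading."""
    for upper, color in AQI_BANDS:
        if aqi <= upper:
            return color
    return "maroon"  # off-scale readings are shown as hazardous
```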

Researchers provided the sensors for four weeks to a total of 30 users, including commuters at UC San Diego and faculty, students and staff members in the computer science department at the Jacobs School of Engineering. Computer scientists presented findings from these field tests at the Wireless Health 2012 conference in San Diego earlier this year.

User experiences

A view of the inside of the CitiSense sensor: the three cylindrical components detect ozone, nitrogen dioxide and carbon monoxide.

The sensors turned out to be great educational tools for their users. Many people assume that pollution diffuses equally in the air. But that's not true. It actually remains concentrated in hot spots, along main roads, at intersections and so on. The sensors made this clear for users. Wendy Chapman, an associate professor at the UC San Diego School of Medicine, was one of them. She often bikes to work and discovered that pollution on her route varies widely. She was exposed to the most pollution when she used the bike path along State Route 56. But when she drove home on that same road, she had virtually no exposure.

"The people who are doing the most to reduce emissions, by biking or taking the bus, were the people who experienced the highest levels of exposure to pollutants," said Griswold.

Users discovered that pollution varied not only based on location, but also on the time of the day. When Charles Elkan, a professor in the Department of Computer Science and Engineering, drove into work in mid-morning, the readings on his sensor were low. But when he drove back home in rush hour in the afternoon, readings were sometimes very high. Elkan said being part of the study allowed him to gauge how worried about pollution he should actually be. Air quality in San Diego is fairly good, he added.

"It's a valuable study," Elkan said. "I think it's going to have a big impact in the future."

Elkan added that he could envision a day in the near future when the sensors used by CitiSense would be built into smart phones, allowing virtually everyone to keep tabs on the levels of pollution they encounter every day. Of course, that means people might start worrying more about pollution as something they can see and measure.

Many of the users in the study did take action to limit their most severe exposure to pollutants. For example, bicyclists found out that they could avoid a great deal of exposure by simply biking one block away from a busy street. Commuters who took the bus avoided waiting near the vehicle's tail pipe, where the air quality was poor. One user convinced his supervisor to install new air filters in the office after registering poor air quality readings on his sensor.

Researchers also noticed that the users were sharing the information they collected, not only with family, friends and colleagues but also with strangers who asked them about the sensors during their commute or in public places. In other words, the sensors turned cell phones into a conversation starter, rather than devices that isolate their users from those around them.

The future of the project

The CitiSense sensor worn on a backpack.

What's next? Some of the sensors are currently on loan to researchers at San Diego State University who are gauging air quality in San Ysidro, a community right on the border between the United States and Mexico, and one of the most polluted areas in San Diego County. Researchers hope to secure a grant from the National Institutes of Health to monitor air quality for school-age asthmatic children in that area and to determine what can be done to limit their exposure to pollutants.

The ultimate goal of CitiSense is to build and deploy a wireless network in which hundreds of small environmental sensors carried by the public rely on cell phones to shuttle information to central computers where it will be analyzed, anonymized and delivered to individuals, public health agencies and the community at large. The sensors currently cost $1,000 per unit, but could easily be mass-produced at an affordable price. So far, Griswold's team has built and deployed 20 of them in the field.

Technical challenges

CitiSense would not be possible without the expertise of computer science faculty members and graduate students at the Jacobs School of Engineering. In addition to principal investigator Griswold, the team includes School of Medicine and Calit2 professor Kevin Patrick; computer science professors Ingolf Krueger, Tajana Simunic Rosing, Hovav Shacham and Sanjoy Dasgupta; as well as graduate students and postdoctoral researchers Piero Zappi, Nima Nikzad, Elizabeth Bales, Celal Ziftci, Nichole Quick and Nakul Verma.

A key factor in the project's success was a breakthrough made by a group led by Dasgupta. Computer scientists used an artificial intelligence method, called Latent Variable Gaussian Regression, to capture high-quality data from the sensors in an uncontrolled environment. The method allowed researchers to remove noise from the data. "Sensors will differ. Sensors will fail," Griswold explained. "People will breathe on them. We wanted to make sure we got good data in these conditions."
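The Latent Variable Gaussian Regression method itself is beyond a short sketch, but the underlying idea -- fitting a mapping from noisy, drifting sensor output to trusted reference readings -- can be illustrated with ordinary least squares standing in for the real technique:

```python
def fit_calibration(raw, reference):
    """Fit reference ~= a * raw + b by ordinary least squares.

    Stand-in for the paper's Latent Variable Gaussian Regression:
    both learn a correction from noisy sensor output to ground truth.
    """
    n = len(raw)
    mean_x = sum(raw) / n
    mean_y = sum(reference) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(raw, reference))
    var = sum((x - mean_x) ** 2 for x in raw)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

def calibrate(raw, a, b):
    """Apply the fitted correction to new raw readings."""
    return [a * x + b for x in raw]
```

In a deployment, each sensor would be fitted against co-located reference measurements so that unit-to-unit variation and drift are corrected before the data reach users.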

Technical challenges remain. The data exchanges between smart phones and sensors use up a great deal of the phones' batteries. During field tests, researchers provided users with two chargers -- one for home and one for work -- to ensure that their phones would not run out of power.

To extend battery life, researchers are experimenting with uploading data from the sensors to the phones every 15 minutes or only when the user wants to retrieve the information. Computer scientists also have developed methods to turn off a phone's GPS -- a huge drain of the devices' batteries -- when the device is immobile.
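The two power-saving measures described above -- batching uploads and gating the GPS on motion -- amount to a simple duty-cycling policy. A sketch follows; the class and method names are hypothetical, not the CitiSense source:

```python
UPLOAD_INTERVAL = 15 * 60  # batch uploads every 15 minutes (in seconds)

class PowerPolicy:
    """Illustrative duty-cycling policy: buffer sensor readings,
    upload them in batches, and keep the GPS off while still."""

    def __init__(self):
        self.buffer = []
        self.last_upload = 0.0
        self.gps_on = False

    def on_reading(self, reading, now, moving, user_requested=False):
        """Handle one sensor reading at time `now` (seconds).

        Returns a batch of readings to upload, or None to keep waiting."""
        self.buffer.append(reading)
        # GPS is a huge battery drain: power it only while moving.
        self.gps_on = moving
        # Upload when the interval elapses or the user asks for data now.
        if user_requested or now - self.last_upload >= UPLOAD_INTERVAL:
            batch, self.buffer = self.buffer, []
            self.last_upload = now
            return batch  # hand the batch to the phone's radio
        return None
```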

These innovations to extend battery life were made possible by Krueger's previous work in service-oriented architecture, which can keep various components -- like machine learning, power management and security code -- much more separate than in traditional software systems, where functional elements are often so woven into the source code that it is difficult to quickly update any one aspect of the software.

CitiSense is funded by a $1.5 million grant from the National Science Foundation. Qualcomm, Inc. donated funds for the cell phones used for the project.


Story Source:

The above story is reprinted from materials provided by University of California - San Diego.


Important progress for spintronics: Spin amplifier works at room temperature

Nov. 16, 2012 — A team of physicists at Linköping University in Sweden has constructed a fundamental cornerstone for spintronics that had been missing until now: what is thought to be the world's first spin amplifier that can be used at room temperature.

Great hopes have been placed on spintronics as the next big paradigm shift in the field of electronics. Spintronics combines microelectronics, which is built on the charge of electrons, with the magnetism that originates in the electrons' spin. This lays the foundation for entirely new applications that fire the imagination. The word "spin" describes a picture of the electron spinning around its own axis, much as Earth spins on its axis.

But turning theory into practice requires amplifying these very weak signals. Instead of transistors, rectifiers, and so on, the building blocks of spintronics will be formed by things like spin filters, spin amplifiers, and spin detectors. Through regulating and controlling electron spin, it will be possible to store data more densely and process it many times faster -- and with greater energy efficiency -- than today's technology.

In 2009, an LiU group from the Department of Functional Electronic Material, led by Professor Weimin Chen, presented a new type of spin filter that works at room temperature. The filter lets through electrons that have the desired spin direction, screening out the others. This function is crucial for constructing new types of components such as spin diodes and spin lasers.

Now the same group, in collaboration with colleagues from Germany and the United States, has published an article in the highly ranked journal Advanced Materials, in which they present an effective spin amplifier based on a non-magnetic semiconductor. The amplification occurs through deliberate defects, in the form of extra gallium atoms, introduced into an alloy of gallium, indium, nitrogen and arsenic.

A component of this kind can be set anywhere along a path of spin transport to amplify signals that have weakened along the way. By combining this with a spin detector, it may be possible to read even extremely weak spin signals.

"It's an advance that blazes a trail for a solution to the problem of controlling and detecting electron spin at room temperature, which is a prerequisite for the breakthrough of spintronics," says Weimin Chen.


Story Source:

The above story is reprinted from materials provided by Linköping University, via EurekAlert!, a service of AAAS.


Journal Reference:

Y. Puttisong, I. A. Buyanova, A. J. Ptak, C. W. Tu, L. Geelhaar, H. Richert and W. M. Chen. Room-temperature electron spin amplifier based on Ga(In)NAs alloys. Advanced Materials, 26 October 2012. DOI: 10.1002/adma.20120597


Photonics: Graphene's flexible future

Dec. 10, 2012 — Theoretical calculations show graphene's potential for controlling nanoscale light propagation on a chip.

Semiconductors have revolutionized computing because of their efficient control over the flow of electrical currents on a single chip, which has led to devices such as the transistor. Working towards a similar tunable functionality for light, researchers from the A*STAR Institute of High Performance Computing (IHPC), Singapore, have shown how graphene could be used to control light at the nanometer scale, advancing the concept of photonic circuits on chips [1].

Graphene, which is made from a single layer of carbon atoms, has excellent electronic properties; some of these are also useful in photonic applications. Usually, only metals are able to confine light to the order of a few nanometers, which is much smaller than the wavelength of the light. At the surface of metals, collective oscillations of electrons, so-called 'surface plasmons', act as powerful antennae that confine light to very small spaces. Graphene, with its high electrical conductivity, behaves similarly to metals and so can also be used for plasmon-based applications, explains Choon How Gan of IHPC, who led the research.

Gan and co-workers studied theoretically and computationally how surface plasmons travel along sheets of graphene. Even though graphene is a poorer conductor than a metal, so plasmon propagation losses are higher, it has several key advantages, says team member Hong Son Chu. "The key advantage that makes graphene an excellent platform for plasmonic devices is its large tunability that cannot be seen in the usual noble metals," he explains. "This tunability can be achieved in different ways, using electric or magnetic fields, optical triggers and temperature."

The team's calculations indicated that surface plasmons propagating along a sheet of graphene would be much more confined to a small space than they would traveling along a gold surface (see image). However, the team also showed that surface plasmons would travel far better between two sheets of graphene brought into close contact. Furthermore, by adjusting design parameters such as the separation between the sheets, as well as their electrical conductivity, much better control over surface plasmon properties is possible.

In the future, Gan and his co-workers plan to investigate these properties for applications. "We will explore the potential of graphene plasmonic devices also for the terahertz and mid-infrared regime," he explains. "In this spectral range, graphene plasmonic structures could be promising for applications such as molecular sensing, as photodetectors, or for optical devices that can switch and modulate light."

The A*STAR-affiliated researchers contributing to this research are from the Institute of High Performance Computing.


Story Source:

The above story is reprinted from materials provided by The Agency for Science, Technology and Research (A*STAR).


Journal Reference:

Choon How Gan, Hong Son Chu, Er Ping Li. Synthesis of highly confined surface plasmon modes with doped graphene sheets in the midinfrared and terahertz frequencies. Physical Review B, 2012; 85 (12) DOI: 10.1103/PhysRevB.85.125431


Control any device -- from mobile phones to television sets -- with just a wave of your hand

Oct. 8, 2012 — Forget the TV remote and the games controller, now you can control anything from your mobile phone to the television with just a wave of your hand.

Researchers at Newcastle University and Microsoft Research Cambridge (MSR) have developed a sensor the size of a wrist-watch which tracks the 3-D movement of the hand and allows the user to remotely control any device.

Mapping finger movement and orientation, it gives the user remote control anytime, anywhere -- even allowing you to answer your phone while it's still in your pocket and you're walking down the street. (Watch a video on the university's YouTube channel: http://www.youtube.com/watch?v=G98zYMMEDno)

Being presented this week at the 25th Association for Computing Machinery Symposium on User Interface Software and Technology, 'Digits' is the first system to allow 3-D interactions without being tied to any external hardware.

It has been developed by David Kim, an MSR-funded PhD student at Newcastle University's Culture Lab; Otmar Hilliges, Shahram Izadi, Alex Butler and Jiawen Chen of MSR Cambridge; Iason Oikonomidis of Greece's Foundation for Research & Technology; and Professor Patrick Olivier of Newcastle University's Culture Lab.

"The Digits sensor doesn't rely on any external infrastructure so it is completely mobile," explains David Kim, a PhD student at Newcastle University.

"This means users are not bound to a fixed space. They can interact while moving from room to room or even running down the street. What Digits does is finally take 3-D interaction outside the living room."

To enable ubiquitous 3-D spatial interaction anywhere, Digits had to be lightweight, consume little power, and have the potential to be as small and comfortable as a watch. At the same time, Digits had to deliver superior gesture sensing and "understand" the human hand, from wrist orientation to the angle of each finger joint, so that interaction would not be limited to 3-D points in space. Digits had to understand what the hand is trying to express -- even while inside a pocket.

David adds: "We needed a system that enabled natural 3-D interactions with bare hands, but with as much flexibility and accuracy as data gloves."

The current prototype, which is being showcased at the prestigious ACM UIST 2012 conference today, includes an infrared (IR) camera, an IR laser line generator, an IR diffuse illuminator and an inertial measurement unit (IMU).

David says: "We wanted users to be able to interact spontaneously with their electronic devices using simple gestures without even having to reach for them. Can you imagine how much easier it would be if you could answer your mobile phone while it's still in your pocket or buried at the bottom of your bag?"

It's All About the Human Hand

One of the project's main contributions is a real-time signal-processing pipeline that robustly samples key parts of the hand, such as the tips and lower regions of each finger. Other important research achievements are two kinematic models that enable full reconstruction of hand poses from just five key points. The project posed many challenges, but the team agrees that the hardest was extrapolating natural-looking hand motions from a sparse sampling of the key points sensed by the camera.
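The team's kinematic models are not reproduced in the press materials, but the principle of expanding a sparse sample into a full pose can be illustrated for a single finger. A commonly used biomechanical approximation couples the distal joint to the middle joint (DIP flexion at roughly two-thirds of PIP flexion); the segment lengths and the 2-D geometry below are illustrative assumptions, not the Digits model:

```python
import math

def finger_pose(mcp_angle, pip_angle, lengths=(0.45, 0.25, 0.2)):
    """Toy forward kinematics for one finger, viewed from the side.

    Given knuckle (MCP) and middle-joint (PIP) flexion in radians,
    estimate the distal (DIP) angle from the common coupling
    DIP ~= 2/3 * PIP, then chain the three segments to recover the
    position of each joint and the fingertip.
    """
    dip_angle = (2.0 / 3.0) * pip_angle
    x = y = heading = 0.0
    points = [(x, y)]  # the knuckle sits at the origin
    for length, bend in zip(lengths, (mcp_angle, pip_angle, dip_angle)):
        heading += bend
        x += length * math.cos(heading)
        y -= length * math.sin(heading)  # flexion curls the finger down
        points.append((x, y))
    return points  # knuckle, two joints, fingertip
```

A fully extended finger (both angles zero) puts the fingertip at the summed segment length along the x-axis; any flexion curls the chain downward, which is the kind of natural-looking motion the team extrapolated from its five sampled key points.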

"We had to understand our own body parts first before we could formulate their workings mathematically," Shahram Izadi explains. "We spent hours just staring at our fingers. We read dozens of scientific papers about the biomechanical properties of the human hand. We tried to correlate these five points with the highly complex motion of the hand. In fact, we completely rewrote each kinematic model about three or four times until we got it just right."

The team agrees that the most exciting moment of the project came when team members saw the models succeed.

"At the beginning, the virtual hand often broke and collapsed. It was always very painful to watch," David explains. "Then, one day, we radically simplified the mathematical model, and suddenly, it behaved like a human hand. It felt absolutely surreal and immersive, like in the movie Avatar. That moment gave us a big boost!"

Both the Digits technical paper being presented at UIST 2012 and accompanying video present interactive scenarios using Digits in a variety of applications, with particular emphasis on mobile scenarios, where it can interact with mobile phones and tablets. The researchers also experimented with eyes-free interfaces, which enable users to leave mobile devices in a pocket or purse and interact with them using hand gestures.

"By understanding how one part of the body works and knowing what sensors to use to capture a snapshot," Izadi says, "Digits offers a compelling look at the possibilities of opening up the full expressiveness and dexterity of one of our body parts for mobile human-computer interaction."

By instrumenting only the wrist, Digits leaves the user's entire hand free to interact, without data gloves (glove-like input devices most often used in virtual reality applications to facilitate tactile sensing and fine-motion control). The Digits prototype, whose electronics are self-contained on the user's wrist, optically images the entirety of the user's hand, enabling freehand interactions in a mobile setting.


Story Source:

The above story is reprinted from materials provided by Newcastle University.
