AI

Accelerating Intelligence

Remote-controlled DNA nanorobots could lead to the first nanorobotic production factory

Fri, 19/01/2018 - 11:41pm

German researchers created a 55-nm-by-55-nm DNA-based molecular platform with a 25-nm-long robotic arm that can be actuated with externally applied electrical fields, under computer control. (credit: Enzo Kopperger et al./Science)

By powering a self-assembling DNA nanorobotic arm with electric fields, German scientists have achieved precise nanoscale movement at least five orders of magnitude (hundreds of thousands of times) faster than previously reported DNA-driven robotic systems, they report today (Jan. 19) in the journal Science.

DNA origami has emerged as a powerful tool to build precise structures. But now, “Kopperger et al. make an impressive stride in this direction by creating a dynamic DNA origami structure that they can directly control from the macroscale with easily tunable electric fields—similar to a remote-controlled robot,” notes Björn Högberg of Karolinska Institutet in a related Perspective in Science (p. 279).

The nanorobotic arm resembles the gearshift lever of a car. Controlled by an electric field (comparable to the car driver), short, single-stranded DNA serves as “latches” (yellow) to momentarily grab and lock the 25-nanometer-long arm into predefined “gear” positions. (credit: Enzo Kopperger et al./Science)
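
To make the control scheme concrete, here is a minimal Python sketch of how such field-directed switching could be scripted: two orthogonal electrode pairs set the in-plane field direction, and the arm is steered between latch sites by rotating that direction. The `dac` interface, latch angles, and voltages are illustrative assumptions, not the researchers' actual instrumentation.

```python
# Hypothetical sketch of computer-controlled field actuation. Two
# orthogonal electrode pairs set the in-plane field direction, steering
# the DNA arm between predefined latch ("gear") positions. All device
# names, angles, and voltages below are illustrative assumptions.
import math

def field_voltages(angle_rad, amplitude=1.0):
    """Decompose a desired in-plane field direction into voltages
    for the x and y electrode pairs."""
    return amplitude * math.cos(angle_rad), amplitude * math.sin(angle_rad)

# Latch sites at fixed angles on the platform (hypothetical layout).
LATCHES = {"P1": 0.0, "P2": math.pi / 2, "P3": math.pi, "P4": 3 * math.pi / 2}

def move_arm(dac, sequence, dwell_ms=5):
    """Step the arm through a sequence of latch positions.

    `dac` is an assumed driver exposing set_xy() and wait(); switching
    within milliseconds matches the timescale reported in the paper.
    """
    for site in sequence:
        vx, vy = field_voltages(LATCHES[site])
        dac.set_xy(vx, vy)   # apply the field; the arm rotates to the latch
        dac.wait(dwell_ms)   # hold while the latch strands hybridize
```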

The new biohybrid nanorobotic systems could even act as a molecular mechanical memory (a sort of nanoscale version of the Babbage Analytical Engine), he notes. “With the capability to form long filaments with multiple DNA robot arms, the systems could also serve as a platform for new inventions in digital memory, nanoscale cargo transfer, and 3D printing of molecules.”

“The robot-arm system may be scaled up and integrated into larger hybrid systems by a combination of lithographic and self-assembly techniques,” according to the researchers. “Electrically clocked synthesis of molecules with a large number of robot arms in parallel could then be the first step toward the realization of a genuine nanorobotic production factory.”


Taking a different approach to a nanofactory, the film “Productive Nanosystems: from Molecules to Superproducts” — a 2005 collaboration between animator and engineer John Burch and pioneer nanotechnologist K. Eric Drexler — demonstrated key steps in a hypothetical process that converts simple molecules into a billion-CPU laptop computer.

Abstract of A self-assembled nanoscale robotic arm controlled by electric fields

The use of dynamic, self-assembled DNA nanostructures in the context of nanorobotics requires fast and reliable actuation mechanisms. We therefore created a 55-nanometer–by–55-nanometer DNA-based molecular platform with an integrated robotic arm of length 25 nanometers, which can be extended to more than 400 nanometers and actuated with externally applied electrical fields. Precise, computer-controlled switching of the arm between arbitrary positions on the platform can be achieved within milliseconds, as demonstrated with single-pair Förster resonance energy transfer experiments and fluorescence microscopy. The arm can be used for electrically driven transport of molecules or nanoparticles over tens of nanometers, which is useful for the control of photonic and plasmonic processes. Application of piconewton forces by the robot arm is demonstrated in force-induced DNA duplex melting experiments.


Tracking a thought’s fleeting trip through the brain

Wed, 17/01/2018 - 9:04pm


Repeating a word: as the brain receives (yellow), interprets (red), and responds (blue) within a second, the prefrontal cortex (red) coordinates all the areas of the brain involved. (video credit: Avgusta Shestyuk/UC Berkeley)

Recording the electrical activity of neurons directly from the surface of the brain, using electrocorticography (ECoG)*, neuroscientists were able to track the flow of thought across the brain in real time for the first time. They showed clearly how the prefrontal cortex at the front of the brain coordinates activity to help us act in response to a perception.

Here’s what they found.

For a simple task, such as repeating a word seen or heard:

The visual and auditory cortices react first to perceive the word. The prefrontal cortex then kicks in to interpret the meaning, followed by activation of the motor cortex (preparing for a response). During the half-second between stimulus and response, the prefrontal cortex remains active to coordinate all the other brain areas.

For a particularly hard task, like determining the antonym of a word:

The brain takes several seconds to respond, and during that time the prefrontal cortex recruits other areas of the brain, probably including memory networks (not tracked). The prefrontal cortex then hands off to the motor cortex to generate a spoken response.

In both cases, the brain begins to prepare the motor areas to respond very early (during initial stimulus presentation) — suggesting that we get ready to respond even before we know what the response will be.

“This might explain why people sometimes say things before they think,” said Avgusta Shestyuk, a senior researcher in UC Berkeley’s Helen Wills Neuroscience Institute and lead author of a paper reporting the results in the current issue of Nature Human Behaviour.


For a more difficult task, like saying a word that is the opposite of another word, people’s brains required 2–3 seconds to detect (yellow), interpret and search for an answer (red), and respond (blue) — with sustained prefrontal lobe activity (red) coordinating all areas of the brain involved. (video credit: Avgusta Shestyuk/UC Berkeley).

The research backs up what neuroscientists have pieced together over the past decades from studies in monkeys and humans.

“These very selective studies have found that the frontal cortex is the orchestrator, linking things together for a final output,” said co-author Robert Knight, a UC Berkeley professor of psychology and neuroscience and a professor of neurology and neurosurgery at UCSF. “Here we have eight different experiments, some where the patients have to talk and others where they have to push a button, where some are visual and others auditory, and all found a universal signature of activity centered in the prefrontal lobe that links perception and action. It’s the glue of cognition.”

Researchers at Johns Hopkins University, California Pacific Medical Center, and Stanford University were also involved. The work was supported by the National Science Foundation, National Institute of Mental Health, and National Institute of Neurological Disorders and Stroke.

* Other neuroscientists have used functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) to record activity in the thinking brain. The UC Berkeley scientists instead employed a much more precise technique, electrocorticography (ECoG), which records from several hundred electrodes placed on the brain surface and detects activity in the thin outer region, the cortex, where thinking occurs. ECoG provides better time resolution than fMRI and better spatial resolution than EEG, but requires access to epilepsy patients undergoing highly invasive surgery involving opening the skull to pinpoint the location of seizures. The new study employed 16 epilepsy patients who agreed to participate in experiments while undergoing epilepsy surgery at UC San Francisco and California Pacific Medical Center in San Francisco, Stanford University in Palo Alto, and Johns Hopkins University in Baltimore. Once the electrodes were placed on the brains of each patient, the researchers conducted a series of eight tasks that included visual and auditory stimuli. The tasks ranged from simple, such as repeating a word or identifying the gender of a face or a voice, to complex, such as determining a facial emotion, uttering the antonym of a word, or assessing whether an adjective describes the patient’s personality.
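
As a rough illustration of the analysis behind such timing measurements (a sketch, not the authors' actual pipeline), the broadband gamma activity referenced in the abstract below is commonly extracted by bandpass-filtering the high-gamma range of each ECoG channel and taking the analytic amplitude; per-electrode activation latency can then be read off as a threshold crossing. The sampling rate, band edges, and threshold are typical assumed values.

```python
# Minimal sketch: high-gamma envelope extraction and activation latency
# for one ECoG channel. Parameter values are typical assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def high_gamma_envelope(ecog, fs=1000.0, band=(70.0, 150.0)):
    """Return the high-gamma amplitude envelope of one ECoG channel."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecog)    # zero-phase bandpass
    return np.abs(hilbert(filtered))   # instantaneous amplitude

def activation_latency(envelope, fs=1000.0, baseline_samples=500):
    """Seconds until the envelope first exceeds baseline mean + 2 SD.
    (Returns 0.0 if the threshold is never crossed.)"""
    base = envelope[:baseline_samples]
    threshold = base.mean() + 2 * base.std()
    idx = np.argmax(envelope[baseline_samples:] > threshold)
    return idx / fs
```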

Abstract of Persistent neuronal activity in human prefrontal cortex links perception and action

How do humans flexibly respond to changing environmental demands on a subsecond temporal scale? Extensive research has highlighted the key role of the prefrontal cortex in flexible decision-making and adaptive behaviour, yet the core mechanisms that translate sensory information into behaviour remain undefined. Using direct human cortical recordings, we investigated the temporal and spatial evolution of neuronal activity (indexed by the broadband gamma signal) in 16 participants while they performed a broad range of self-paced cognitive tasks. Here we describe a robust domain- and modality-independent pattern of persistent stimulus-to-response neural activation that encodes stimulus features and predicts motor output on a trial-by-trial basis with near-perfect accuracy. Observed across a distributed network of brain areas, this persistent neural activation is centred in the prefrontal cortex and is required for successful response implementation, providing a functional substrate for domain-general transformation of perception into action, critical for flexible behaviour.


Deep neural network models score higher than humans in reading and comprehension test

Mon, 15/01/2018 - 11:55pm

(credit: Alibaba Group)

Microsoft and Alibaba have developed deep neural network models that scored higher than humans in a Stanford University reading and comprehension test, Stanford Question Answering Dataset (SQuAD).

Microsoft achieved 82.650 on the ExactMatch (EM) metric* on Jan. 3, and Alibaba Group Holding Ltd. scored 82.440 on Jan. 5. The best human score so far is 82.304.

“SQuAD is a new reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage,” according to the Stanford NLP Group. “With 100,000+ question-answer pairs on 500+ articles, SQuAD is significantly larger than previous reading comprehension datasets.”

“A strong start to 2018 with the first model (SLQA+) to exceed human-level performance on @stanfordnlp SQuAD’s EM metric!,” said Pranav Rajpurkar, a Ph.D. student in the Stanford Machine Learning Group and lead author of the paper introducing SQuAD in the Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (available on open-access arXiv). “Next challenge: the F1 metric*, where humans still lead by ~2.5 points!” (Alibaba’s SLQA+ scored 88.607 on the F1 metric and Microsoft’s r-net+ scored 88.493.)

However, challenging the “comprehension” description, Gary Marcus, PhD, a Professor of Psychology and Neural Science at NYU, notes in a tweet that “the SQUAD test shows that machines can highlight relevant passages in text, not that they understand those passages.”

“The Chinese e-commerce titan has joined the likes of Tencent Holdings Ltd. and Baidu Inc. in a race to develop AI that can enrich social media feeds, target ads and services or even aid in autonomous driving,” Bloomberg notes. “Beijing has endorsed the technology in a national-level plan that calls for the country to become the industry leader by 2030.”

Read more: China’s Plan for World Domination in AI (Bloomberg)

*”The ExactMatch metric measures the percentage of predictions that match any one of the ground truth answers exactly. The F1 score metric measures the average overlap between the prediction and ground truth answer.” – Pranav Rajpurkar et al., ArXiv
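
For concreteness, both metrics can be computed as in the sketch below, which mirrors the normalization used by the official SQuAD evaluation script (lowercasing, stripping articles and punctuation), slightly simplified.

```python
# Sketch of the SQuAD ExactMatch and F1 metrics defined above.
import re
import string
from collections import Counter

def normalize(text):
    text = text.lower()
    text = re.sub(r"\b(a|an|the)\b", " ", text)            # drop articles
    text = "".join(ch for ch in text if ch not in string.punctuation)
    return " ".join(text.split())

def exact_match(prediction, ground_truths):
    """EM: 1.0 if the prediction matches ANY ground-truth answer exactly."""
    return float(any(normalize(prediction) == normalize(gt)
                     for gt in ground_truths))

def f1(prediction, ground_truths):
    """F1: best token-level overlap against the ground-truth answers."""
    def score(pred, gt):
        p, g = normalize(pred).split(), normalize(gt).split()
        overlap = sum((Counter(p) & Counter(g)).values())
        if overlap == 0:
            return 0.0
        precision, recall = overlap / len(p), overlap / len(g)
        return 2 * precision * recall / (precision + recall)
    return max(score(prediction, gt) for gt in ground_truths)
```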


Scientists map mammalian neural microcircuits in precise detail

Fri, 12/01/2018 - 11:47pm

Nanoengineered electroporation microelectrodes (NEMs) allow for improved current distribution and electroporation effectiveness by reducing peak voltage regions (to avoid damaging tissue). (left) Cross-section of NEM model, illustrating the total effective electroporation volume and its distribution of the voltage around the pipette tip, at a safe current of 50 microamperes. (Scale bar = 5 micrometers.) (right) A five-hole NEM after successful insertion into brain tissue, imaged with high-resolution focused ion beam (FIB). (Scale bar = 2 micrometers) (credit: D. Schwartz et al./Nature Communications)

Neuroscientists at the Francis Crick Institute have developed a new technique to map electrical microcircuits in the brain in far more detail than existing techniques*, which are limited to tiny sections of the brain (or remain confined to simpler model organisms, such as zebrafish).

In the brain, groups of neurons that connect up in microcircuits help us process information about things we see, smell and taste. Knowing how many neurons and other types of cells make up these microcircuits would give scientists a deeper understanding of how the brain computes complex information.

Nanoengineered microelectrodes

The researchers developed a new design called “nanoengineered electroporation** microelectrodes” (NEMs). They were able to use an NEM to map out all 250 cells that make up a specific microcircuit in a part of a mouse brain that processes smell (known as the “olfactory bulb glomerulus”) in a horizontal slice of the olfactory bulb — something never before achieved.

To do that, the team created a series of tiny pores (holes) near the end of a micropipette using nano-engineering tools. The new design distributes the electrical current uniformly over a wider area (up to a radius of about 50 micrometers — the size of a typical neural microcircuit), with minimal cell damage.
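
A back-of-the-envelope model suggests why this helps. Treating each pore as a point current source in tissue of resistivity ρ, the potential at distance r is V(r) = ρI/(4πr); splitting a fixed total current across several pores lowers the peak voltage near the tip while leaving the far field (and hence the labeled volume) roughly unchanged. The sketch below is our illustration, not the paper's finite-element model, and the resistivity value is an assumption.

```python
# Point-source superposition model of the pipette tip (illustrative only).
import numpy as np

RHO = 3.0        # ohm*m, assumed effective brain-tissue resistivity
I_TOTAL = 50e-6  # 50 microamperes, the safe current quoted above

def voltage(point, pores, i_total=I_TOTAL, rho=RHO):
    """Superpose potentials of point sources that split i_total equally."""
    i_per_pore = i_total / len(pores)
    r = np.linalg.norm(np.asarray(pores) - point, axis=1)
    return np.sum(rho * i_per_pore / (4 * np.pi * r))

probe = np.array([2e-6, 0.0, 0.0])                       # 2 um from the tip
single_hole = [(0.0, 0.0, 0.0)]
five_holes = [(0.0, 0.0, z) for z in np.linspace(-4e-6, 4e-6, 5)]
print(voltage(probe, single_hole))  # ~6 V peak for one hole
print(voltage(probe, five_holes))   # ~4 V: same current, lower peak voltage
```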

The researchers tested the NEM technique with a specific microcircuit, the olfactory bulb glomerulus (which detects smells). They were able to identify detailed, long-range, complex anatomical features (scale bar = 100 micrometers). (White arrows identify parallel staining of vascular structures.) (credit: D. Schwartz et al./Nature Communications)

Seeing 100% of the cells in a brain microcircuit for the first time

Unlike current methods, the team was able to stain up to 100% of the cells in the microcircuit they were investigating, according to Andreas Schaefer, who led the research, which was published in open-access Nature Communications today (Jan. 12, 2018).

“As the brain is made up of repeating units, we can learn a lot about how the brain works as a computational machine by studying it at this [microscopic] level,” he said. “Now that we have a tool of mapping these tiny units, we can start to interfere with specific cell types to see how they directly control behavior and sensory processing.”

The work was conducted in collaboration with researchers at the Max-Planck-Institute for Medical Research in Heidelberg, Heidelberg University, Heidelberg University Hospital, University College London, the MRC National Institute for Medical Research, and Columbia University Medical Center.

* Scientists currently use color-tagged viruses or charged dyes with applied electroporation current to stain brain cells. These methods, using a glass capillary with a single hole, are limited to low current (higher current could damage tissue), so they can only allow for identifying a limited area of a microcircuit.

** Electroporation is a microbiology technique that applies an electrical field to cells to increase the permeability (ease of penetration) of the cell membrane, allowing (in this case) fluorophores (fluorescent, or glowing dyes) to penetrate into the cells to label (identify parts of) the neural microcircuits (including the “inputs” and “outputs”) under a microscope.

Abstract of Architecture of a mammalian glomerular domain revealed by novel volume electroporation using nanoengineered microelectrodes

Dense microcircuit reconstruction techniques have begun to provide ultrafine insight into the architecture of small-scale networks. However, identifying the totality of cells belonging to such neuronal modules, the “inputs” and “outputs,” remains a major challenge. Here, we present the development of nanoengineered electroporation microelectrodes (NEMs) for comprehensive manipulation of a substantial volume of neuronal tissue. Combining finite element modeling and focused ion beam milling, NEMs permit substantially higher stimulation intensities compared to conventional glass capillaries, allowing for larger volumes configurable to the geometry of the target circuit. We apply NEMs to achieve near-complete labeling of the neuronal network associated with a genetically identified olfactory glomerulus. This allows us to detect sparse higher-order features of the wiring architecture that are inaccessible to statistical labeling approaches. Thus, NEM labeling provides crucial complementary information to dense circuit reconstruction techniques. Relying solely on targeting an electrode to the region of interest and passive biophysical properties largely common across cell types, this can easily be employed anywhere in the CNS.


How to grow functioning human muscles from stem cells

Wed, 10/01/2018 - 2:54am

A cross section of a muscle fiber grown from induced pluripotent stem cells, showing muscle cells (green), cell nuclei (blue), and the surrounding support matrix for the cells (credit: Duke University)

Biomedical engineers at Duke University have grown the first functioning human skeletal muscle from human induced pluripotent stem cells (iPSCs). (Pluripotent stem cells are important in regenerative medicine because they can generate any type of cell in the body and can propagate indefinitely; the induced version can be generated from adult cells instead of embryos.)

The engineers say the new technique is promising for cellular therapies, drug discovery, and studying rare diseases. “When a child’s muscles are already withering away from something like Duchenne muscular dystrophy, it would not be ethical to take muscle samples from them and do further damage,” explained Nenad Bursac, professor of biomedical engineering at Duke University and senior author of an open-access paper on the research published Tuesday, January 9, in Nature Communications.

How to grow a muscle

In the study, the researchers started with human induced pluripotent stem cells. These are cells taken from adult non-muscle tissues, such as skin or blood, and reprogrammed to revert to a primordial state. The pluripotent stem cells are then grown while being flooded with a molecule called Pax7 — which signals the cells to start becoming muscle.

After two to four weeks of 3-D culture, the resulting muscle cells form muscle fibers that contract and react to external stimuli such as electrical pulses and biochemical signals — mimicking neuronal inputs just like native muscle tissue. The researchers also implanted the newly grown muscle fibers into adult mice. The muscles survived and functioned for at least three weeks, while progressively integrating into the native tissue through vascularization (growing blood vessels).

A stained cross section of the new muscle fibers, showing muscle cells (red), receptors for neuronal input (green), and cell nuclei (blue) (credit: Duke University)

Once the cells were well on their way to becoming muscle, the researchers stopped providing the Pax7 signaling molecule and started giving the cells the support and nourishment they needed to fully mature. (At this point in the research, the resulting muscle is not as strong as native muscle tissue, and also falls short of the muscle grown in a previous study*, which started from muscle biopsies.)

However, the pluripotent stem cell-derived muscle fibers develop reservoirs of “satellite-like cells” that are necessary for normal adult muscles to repair damage, whereas the muscle from the previous study had far fewer of these cells. The stem cell method is also capable of growing many more cells from a smaller starting batch than the previous biopsy method.

“With this technique, we can just take a small sample of non-muscle tissue, like skin or blood, revert the obtained cells to a pluripotent state, and eventually grow an endless amount of functioning muscle fibers to test,” said Bursac.

The researchers could also, in theory, fix genetic malfunctions in the induced pluripotent stem cells derived from a patient, he added. Then they could grow small patches of completely healthy muscle. This could not heal or replace an entire body’s worth of diseased muscle, but it could be used in tandem with more widely targeted genetic therapies or to heal more localized problems.

The researchers are now refining their technique to grow more robust muscles and beginning work to develop new models of rare muscle diseases. This work was supported by the National Institutes of Health.


Duke Engineering | Human Muscle Grown from Skin Cells

Muscles for future microscale robot exoskeletons

Meanwhile, physicists at Cornell University are exploring ways to create muscles for future microscale robot exoskeletons — rapidly changing their shape upon sensing chemical or thermal changes in their environment. The new designs are compatible with semiconductor manufacturing, making them useful for future microscale robotics.

The microscale robot exoskeleton muscles move using a motor called a bimorph. (A bimorph is an assembly of two materials — in this case, graphene and glass — that bends when driven by a stimulus like heat, a chemical reaction or an applied voltage.) The shape change happens because, in the case of heat, two materials with different thermal responses expand by different amounts over the same temperature change. The bimorph bends to relieve some of this strain, allowing one layer to stretch out longer than the other. By adding rigid flat panels that cannot be bent by bimorphs, the researchers localize bending to take place only in specific places, creating folds. With this concept, they are able to make a variety of folding structures ranging from tetrahedra (triangular pyramids) to cubes. The bimorphs also fold in response to chemical stimuli by driving large ions into the glass, causing it to expand. (credit: Marc Z. Miskin et al./PNAS)

Their work is outlined in a paper published Jan. 2 in Proceedings of the National Academy of Sciences.
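
A quick order-of-magnitude check, using the classic Timoshenko bimorph result rather than the paper's own model: for two layers of equal thickness and similar stiffness, the bend radius is roughly R = 2t/(3Δε), so nanometer total thickness and sub-percent strain differentials give micrometer-scale folds. The thickness and strain values below are assumptions.

```python
# Rough bend-radius estimate for a graphene/glass bimorph (assumed values).
t = 3e-9           # total bimorph thickness in meters (graphene + ~2 nm glass)
delta_eps = 1e-3   # strain differential from heating or ion exchange
R = 2 * t / (3 * delta_eps)
print(f"bend radius ~ {R * 1e6:.1f} um")   # ~2 um: micrometer-scale folding
```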

* The advance builds on work published in 2015, when the Duke engineers grew the first functioning human muscle tissue from cells obtained from muscle biopsies. In that research, Bursac and his team started with small samples of human cells obtained from muscle biopsies, called “myoblasts,” that had already progressed beyond the stem cell stage but hadn’t yet become mature muscle fibers. The engineers grew these myoblasts by many folds and then put them into a supportive 3-D scaffolding filled with a nourishing gel that allowed them to form aligned and functioning human muscle fibers.

Abstract of Engineering human pluripotent stem cells into a functional skeletal muscle tissue

The generation of functional skeletal muscle tissues from human pluripotent stem cells (hPSCs) has not been reported. Here, we derive induced myogenic progenitor cells (iMPCs) via transient overexpression of Pax7 in paraxial mesoderm cells differentiated from hPSCs. In 2D culture, iMPCs readily differentiate into spontaneously contracting multinucleated myotubes and a pool of satellite-like cells endogenously expressing Pax7. Under optimized 3D culture conditions, iMPCs derived from multiple hPSC lines reproducibly form functional skeletal muscle tissues (iSKM bundles) containing aligned multi-nucleated myotubes that exhibit positive force–frequency relationship and robust calcium transients in response to electrical or acetylcholine stimulation. During 1-month culture, the iSKM bundles undergo increased structural and molecular maturation, hypertrophy, and force generation. When implanted into dorsal window chamber or hindlimb muscle in immunocompromised mice, the iSKM bundles survive, progressively vascularize, and maintain functionality. iSKM bundles hold promise as a microphysiological platform for human muscle disease modeling and drug development.

Abstract of Graphene-based bimorphs for micron-sized, autonomous origami machines

Origami-inspired fabrication presents an attractive platform for miniaturizing machines: thinner layers of folding material lead to smaller devices, provided that key functional aspects, such as conductivity, stiffness, and flexibility, are preserved. Here, we show origami fabrication at its ultimate limit by using 2D atomic membranes as a folding material. As a prototype, we bond graphene sheets to nanometer-thick layers of glass to make ultrathin bimorph actuators that bend to micrometer radii of curvature in response to small strain differentials. These strains are two orders of magnitude lower than the fracture threshold for the device, thus maintaining conductivity across the structure. By patterning 2-μm-thick rigid panels on top of the bimorphs, bending is localized to the unpatterned regions, creating folds.


DARPA-funded ‘unhackable’ computer could avoid future flaws like Spectre and Meltdown

Mon, 08/01/2018 - 10:15pm

(credit: University of Michigan)

A University of Michigan (U-M) team has announced plans to develop an “unhackable” computer, funded by a new $3.6 million grant from the Defense Advanced Research Projects Agency (DARPA).

The goal of the project, called MORPHEUS, is to design computers that avoid the vulnerabilities of most current microprocessors, such as the Spectre and Meltdown flaws announced last week.*

The $50 million DARPA System Security Integrated Through Hardware and Firmware (SSITH) program aims to build security right into chips’ microarchitecture, instead of relying on software patches.*

The U-M grant is one of nine that DARPA has recently funded through SSITH.

Future-proofing

The idea is to protect against future threats that have yet to be identified. “Instead of relying on software Band-Aids to hardware-based security issues, we are aiming to remove those hardware vulnerabilities in ways that will disarm a large proportion of today’s software attacks,” said Linton Salmon, manager of DARPA’s System Security Integrated Through Hardware and Firmware program.

Under MORPHEUS, the location of passwords would constantly change, for example. And even if an attacker were quick enough to locate the data, secondary defenses in the form of encryption and domain enforcement would throw up additional roadblocks.
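
As a conceptual illustration only (MORPHEUS implements its defenses in the processor's microarchitecture, not in application software), the moving-target idea can be sketched as a secret that is kept encrypted and periodically relocated under a fresh key, so that any address or key an attacker manages to learn quickly goes stale. All names here are hypothetical.

```python
# Toy software analogy of a moving-target defense (illustrative only).
import os
import secrets

class MovingTarget:
    def __init__(self, secret: bytes, slots: int = 1024):
        # Fill memory with decoy noise the same size as the secret.
        self.store = [os.urandom(len(secret)) for _ in range(slots)]
        self._place(secret)

    def _place(self, secret: bytes):
        # XOR with a fresh one-time key stands in for real encryption here.
        self.key = secrets.token_bytes(len(secret))
        self.addr = secrets.randbelow(len(self.store))
        self.store[self.addr] = bytes(a ^ b for a, b in zip(secret, self.key))

    def read(self) -> bytes:
        return bytes(a ^ b for a, b in zip(self.store[self.addr], self.key))

    def churn(self):
        """Re-randomize location and key; run on a timer in practice."""
        secret = self.read()
        self.store[self.addr] = os.urandom(len(secret))  # scrub the old slot
        self._place(secret)
```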

More than 40 percent of the “software doors” that hackers have available to them today would be closed if researchers could eliminate seven classes of hardware weaknesses**, according to DARPA.

DARPA is aiming to render these attacks impossible within five years. “If developed, MORPHEUS could do it now,” said Todd Austin, U-M professor of computer science and engineering, who leads the project. Researchers at The University of Texas and Princeton University are also working with U-M.

* Apple released today (Jan. 8) iOS 11.2.2 and macOS 10.13.2 updates with a Spectre fix for Safari and WebKit, according to MacWorld. Threatpost has an update (as of Jan. 7) on efforts by Intel and others to deal with the Meltdown and Spectre processor vulnerabilities.

** Permissions and privileges, buffer errors, resource management, information leakage, numeric errors, crypto errors, and code injection.

UPDATE 1/9/2018: BLUE-SCREEN ALERT: Read this if you have a Windows computer with an AMD processor: Microsoft announced today it has temporarily paused sending some Windows operating system updates (intended to protect against the Spectre and Meltdown chipset vulnerabilities) to devices with affected AMD processors. “Microsoft has received reports of some AMD devices getting into an unbootable state after installation of recent Windows operating system security updates.”



Researchers hack cell biology to create complex shapes that form living tissue

Fri, 05/01/2018 - 1:32am

This image shows the shapes made of living tissue, engineered by the researchers. By patterning mechanically active mouse or human cells to thin layers of extracellular fibers, the researchers could create bowls, coils, and ripple shapes. (credit: Alex Hughes)

Many of the complex folded and curved shapes that form human tissues can now be programmatically recreated with very simple instructions, UC San Francisco (UCSF) bioengineers report December 28 in the journal Developmental Cell.

The researchers used 3D cell-patterning to arrange mechanically active mouse and human embryonic cells on thin layers of extracellular matrix fibers (a structural material, produced by our cells, that makes up connective tissue) to create bowls, coils, and ripples out of living tissue. The web of fibers folded itself up in predictable ways, mimicking developmental processes in natural human body tissue.

Beyond 3D-printing and molds

As KurzweilAI has reported, labs have already used modified 3D printers to pioneer 3D shapes for tissue engineering (such as this research in creating an ear and jawbone structure). They have also used micro-molding for creating variously shaped objects using plastic material in a mold (frame). But the final product often misses key structural features of normal tissues.

Engineered tissue curvature using DNA-programmed assembly of cells (credit: Alex J. Hughes et al./ Developmental Cell)

The UCSF lab approach instead used a precision 3D cell-patterning technology called DNA-programmed assembly of cells (DPAC). It provides an initial template (pattern) for tissue to later develop in vitro (in a test tube or other lab container). That tissue automatically folds itself into complex shapes in ways that replicate how in vivo (body) tissues normally assemble themselves hierarchically during development.

“This approach could significantly improve the structure, maturation, and vascularization” of tissues in organoids (miniature models of human parts, such as brains, used for drug testing) “and 3D-printed tissues in general,” the researchers note in the paper.

“We believe these efforts have important implications for the engineering of in vitro models of disease, for regenerative medicine, and for future applications of living active materials such as in soft robotics. … These mechanisms can be integrated with top-down patterning technologies such as optogenetics, micromolding, and printing approaches that control cellular and [extracellular matrix] tissue composition at specific locations.”

This work was funded by a Jane Coffin Childs postdoctoral fellowship, the National Institutes of Health, the Department of Defense Breast Cancer Research Program, the NIH Common Fund, the Chan-Zuckerberg Biohub Investigator Program, the National Science Foundation, the UCSF Program in Breakthrough Biomedical Research, and the UCSF Center for Cellular Construction.

Abstract of Engineered Tissue Folding by Mechanical Compaction of the Mesenchyme

Many tissues fold into complex shapes during development. Controlling this process in vitro would represent an important advance for tissue engineering. We use embryonic tissue explants, finite element modeling, and 3D cell-patterning techniques to show that mechanical compaction of the extracellular matrix during mesenchymal condensation is sufficient to drive tissue folding along programmed trajectories. The process requires cell contractility, generates strains at tissue interfaces, and causes patterns of collagen alignment around and between condensates. Aligned collagen fibers support elevated tensions that promote the folding of interfaces along paths that can be predicted by modeling. We demonstrate the robustness and versatility of this strategy for sculpting tissue interfaces by directing the morphogenesis of a variety of folded tissue forms from patterns of mesenchymal condensates. These studies provide insight into the active mechanical properties of the embryonic mesenchyme and establish engineering strategies for more robustly directing tissue morphogenesis ex vivo.


Brainwave ‘mirroring’ neurotechnology improves post-traumatic stress symptoms

Wed, 03/01/2018 - 11:08pm

Patient receiving a real-time reflection of her frontal-lobe brainwave activity as a stream of audio tones through earbuds. (credit: Brain State Technologies)

You are relaxing comfortably, eyes closed, with non-invasive sensors attached to your scalp that are picking up signals from various areas of your brain. The signals are converted by a computer to audio tones that you can hear on earbuds. Over several sessions, the different frequencies (pitches) of the tones associated with the two hemispheres of the brain create a mirror for your brainwave activity, helping your brain reset itself to reduce traumatic stress.

In a study conducted at Wake Forest School of Medicine, 20 sessions of noninvasive brainwave “mirroring” neurotechnology called HIRREM* (high-resolution, relational, resonance-based electroencephalic mirroring) significantly reduced symptoms of post-traumatic stress resulting from service as a military member or vet.


Example of tones (credit: Brain State Technologies)

“We observed reductions in post-traumatic symptoms**, including insomnia, depressive mood, and anxiety, that were durable through six months after the use of HIRREM, but additional research is needed to confirm these initial findings,” said the study’s principal investigator, Charles H. Tegeler, M.D., professor of neurology at Wake Forest School of Medicine, a part of Wake Forest Baptist.

About 500 patients have participated in HIRREM clinical trials at Wake Forest School of Medicine and other locations, according to Brain State Technologies Founder and CEO Lee Gerdes.


Brain State Technologies | HIRREM process, showing a technologist applying Brain State Technologies’ proprietary HIRREM process with a military veteran client.

HIRREM is intended for medical research. A consumer version of the core underlying brainwave mirroring process is available as “Brainwave Optimization” from Brain State Technologies in Scottsdale, Arizona. The company also offers a wearable device for ongoing brain support, BRAINtellect B2v2.

How HIRREM neurotechnology works

(credit: Brain State Technologies)

HIRREM is a neurotechnology that dynamically measures brain electrical activity. It uses two or more EEG (electroencephalogram, or brain-wave detection) scalp sensors to pick up signals from both sides of the brain. Computer software algorithms then convert dominant brain frequencies in real time into audible tones with varying pitch and timing, which can be heard on earbuds.

In effect, the brain is listening to itself. In the process, it makes self-adjustments toward improved balance between temporal-lobe activity in the two hemispheres of the brain — sympathetic (right) and parasympathetic (left) — resulting in reduced hyper-arousal. No conscious cognitive activity is required. Signals from other areas of the brain can also be studied.

The net effect is to reset stress response patterns that have been wired by repetitive traumatic events (physical or non-physical).***
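
HIRREM's actual algorithms are proprietary, but the general idea described above can be sketched as follows: estimate the dominant frequency in each hemisphere's signal and map it to an audible pitch, so that shifts in brain activity are heard as shifts in tone. The sampling rate, frequency band, and pitch mapping below are assumptions.

```python
# Sketch of dominant-frequency-to-tone mapping (assumed parameters).
import numpy as np

def dominant_freq(eeg, fs=256.0, lo=1.0, hi=40.0):
    """Peak spectral frequency within a physiological band (Hz)."""
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(len(eeg))))
    freqs = np.fft.rfftfreq(len(eeg), 1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return freqs[mask][np.argmax(spectrum[mask])]

def to_pitch(f_brain, base=220.0, semitones_per_hz=2.0):
    """Map a brain frequency to a tone pitch (arbitrary assumed mapping)."""
    return base * 2 ** (semitones_per_hz * f_brain / 12.0)

# For one-second windows `left` and `right` from the two temporal lobes:
#   tone_left  = to_pitch(dominant_freq(left))
#   tone_right = to_pitch(dominant_freq(right))
```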

“Thus, if the stimulus is an acoustic response to brain function (often called neurofeedback, or NFB), then the response is based on a threshold set by the NFB provider. Since the brain moves three to five times faster than the thoughtful response of the client, the brain’s activity is well beyond any kind of activity the client can mitigate. The NFB hypothesis is that operant conditioning can be learned by the brain so that it changes itself.

“In a HIRREM placebo-controlled insomnia study, HIRREM showed statistically significant improvement in sleep function over the placebo. Additionally, HIRREM demonstrated that biomarkers for the test were also statistically significant over the placebo. Posters for this study were presented at the International Sleep Conference and at the Dept. of Defense research meeting on sleep. A full-length manuscript of the study is in process, with hopes to be published in Q1 2018.”

The study was published (open access) in the Dec. 22 online edition of the journal Military Medical Research with co-authors at Brain State Technologies. It was supported through the Joint Capability Technology Demonstration Program within the Office of the Under Secretary of Defense and by a grant from The Susanne Marcus Collins Foundation, Inc. to the Department of Neurology at Wake Forest Baptist.

The researchers acknowledge limitations of the study, including the small number of participants and the absence of a control group. It was also an open-label project, meaning that both researchers and participants knew what treatment was being administered.

* HIRREM is a registered trademark of Brain State Technologies based in Scottsdale, Arizona, and has been licensed to Wake Forest University for collaborative research since 2011.  In this single-site study, 18 service members or recent veterans, who experienced symptoms over one to 25 years, received an average of 19½ HIRREM sessions over 12 days. Symptom data were collected before and after the study sessions, and follow-up online interviews were conducted at one-, three- and six-month intervals. In addition, heart rate and blood pressure readings were recorded after the first and second visits to analyze downstream autonomic balance with heart rate variability and baroreflex sensitivity. HIRREM has been used experimentally with more than 500 patients at Wake Forest School of Medicine.

** According to the U.S. Department of Veterans Affairs, approximately 31 percent of Vietnam veterans, 10 percent of Gulf War (Desert Storm) veterans and 11 percent of veterans of the war in Afghanistan experience PTSD. Symptoms can include insomnia, poor concentration, sadness, re-experiencing traumatic events, irritability or hyper-alertness, and diminished autonomic cardiovascular regulation.

*** The effect is based on the “bihemispheric autonomic model” (BHAM ), “which proposes that trauma-related sympathetic hyperarousal may be an expression of maladaptive right temporal lobe activity, whereas the avoidant and dissociative features of the traumatic stress response may be indicators of a parasympathetic “freeze” response that is significantly driven by the left temporal lobe. An implication [is that brain-based] intervention may facilitate the reduction of symptom clusters associated with autonomic disturbances through the mitigation of maladaptive asymmetries.” — Catherine L. Tegeler et al./Military Medical Research.

Update Jan. 10, 2018: What about a control group?

“Our study had an open label design, without a control group,” Tegeler explained to KurzweilAI in an email, in response to reader questions.

“We agree that a randomized design is scientifically a more powerful approach, and one we would have preferred.  The reality was that for this cohort of participants, mostly drawn from the special operations community, constraints due to limitation on allowable time away from duties, training cycle pressures, therapeutic expectations, and available funding, prevented consideration of a controlled design.

“Other studies have used a placebo-controlled design utilizing acoustic stimulation linked to brainwaves, as compared to acoustic stimulation not linked to brainwaves. Manuscripts are being prepared to report those results. Finally, our current studies are all focused on evaluation of the effects and benefits of HIRREM alone, for a variety of symptoms or conditions.  That said, in the future there may be opportunities to seek funding for projects that might combine, or follow up after HIRREM, with other strategies such as meditation, improved nutrition, or exercise.”

“Biofeedback/neurofeedback is an open-loop system indicating that the feedback from the brain or other biological function is provided back to the client as the function being analyzed triggers a stimulus,” Gerdes added.

Abstract of Successful use of closed-loop allostatic neurotechnology for post-traumatic stress symptoms in military personnel: self-reported and autonomic improvements

Background: Military-related post-traumatic stress (PTS) is associated with numerous symptom clusters and diminished autonomic cardiovascular regulation. High-resolution, relational, resonance-based, electroencephalic mirroring (HIRREM®) is a noninvasive, closed-loop, allostatic, acoustic stimulation neurotechnology that produces real-time translation of dominant brain frequencies into audible tones of variable pitch and timing to support the auto-calibration of neural oscillations. We report clinical, autonomic, and functional effects after the use of HIRREM® for symptoms of military-related PTS.

Methods: Eighteen service members or recent veterans (15 active-duty, 3 veterans, most from special operations, 1 female), with a mean age of 40.9 (SD = 6.9) years and symptoms of PTS lasting from 1 to 25 years, undertook 19.5 (SD = 1.1) sessions over 12 days. Inventories for symptoms of PTS (Posttraumatic Stress Disorder Checklist – Military version, PCL-M), insomnia (Insomnia Severity Index, ISI), depression (Center for Epidemiologic Studies Depression Scale, CES-D), and anxiety (Generalized Anxiety Disorder 7-item scale, GAD-7) were collected before (Visit 1, V1), immediately after (Visit 2, V2), and at 1 month (Visit 3, V3), 3 (Visit 4, V4), and 6 (Visit 5, V5) months after intervention completion. Other measures only taken at V1 and V2 included blood pressure and heart rate recordings to analyze heart rate variability (HRV) and baroreflex sensitivity (BRS), functional performance (reaction and grip strength) testing, blood and saliva for biomarkers of stress and inflammation, and blood for epigenetic testing. Paired t-tests, Wilcoxon signed-rank tests, and a repeated-measures ANOVA were performed.

Results: Clinically relevant, significant reductions in all symptom scores were observed at V2, with durability through V5. There were significant improvements in multiple measures of HRV and BRS [Standard deviation of the normal beat to normal beat interval (SDNN), root mean square of the successive differences (rMSSD), high frequency (HF), low frequency (LF), and total power, HF alpha, sequence all, and systolic, diastolic and mean arterial pressure] as well as reaction testing. Trends were seen for improved grip strength and a reduction in C-Reactive Protein (CRP), Angiotensin II to Angiotensin 1–7 ratio and Interleukin-10, with no change in DNA n-methylation. There were no dropouts or adverse events reported.

Conclusions: Service members or veterans showed reductions in symptomatology of PTS, insomnia, depressive mood, and anxiety that were durable through 6 months after the use of a closed-loop allostatic neurotechnology for the auto-calibration of neural oscillations. This study is the first to report increased HRV or BRS after the use of an intervention for service members or veterans with PTS. Ongoing investigations are strongly warranted.


Will artificial intelligence become conscious?

Fri, 22/12/2017 - 2:44am

(Credit: EPFL/Blue Brain Project)

By Subhash Kak, Regents Professor of Electrical and Computer Engineering, Oklahoma State University

Forget about today’s modest incremental advances in artificial intelligence, such as the increasing abilities of cars to drive themselves. Waiting in the wings might be a groundbreaking development: a machine that is aware of itself and its surroundings, and that could take in and process massive amounts of data in real time. It could be sent on dangerous missions, into space or combat. In addition to driving people around, it might be able to cook, clean, do laundry — and even keep humans company when other people aren’t nearby.

A particularly advanced set of machines could replace humans at literally all jobs. That would save humanity from workaday drudgery, but it would also shake many societal foundations. A life of no work and only play may turn out to be a dystopia.

Conscious machines would also raise troubling legal and ethical problems. Would a conscious machine be a “person” under law and be liable if its actions hurt someone, or if something goes wrong? To think of a more frightening scenario, might these machines rebel against humans and wish to eliminate us altogether? If yes, they represent the culmination of evolution.

As a professor of electrical engineering and computer science who works in machine learning and quantum theory, I can say that researchers are divided on whether these sorts of hyperaware machines will ever exist. There’s also debate about whether machines could or should be called “conscious” in the way we think of humans, and even some animals, as conscious. Some of the questions have to do with technology; others have to do with what consciousness actually is.

Is awareness enough?

Most computer scientists think that consciousness is a characteristic that will emerge as technology develops. Some believe that consciousness involves accepting new information, storing and retrieving old information and cognitive processing of it all into perceptions and actions. If that’s right, then one day machines will indeed be the ultimate consciousness. They’ll be able to gather more information than a human, store more than many libraries, access vast databases in milliseconds and compute all of it into decisions more complex, and yet more logical, than any person ever could.

On the other hand, there are physicists and philosophers who say there’s something more about human behavior that cannot be computed by a machine. Creativity, for example, and the sense of freedom people possess don’t appear to come from logic or calculations.

Yet these are not the only views of what consciousness is, or whether machines could ever achieve it.

Quantum views

Another viewpoint on consciousness comes from quantum theory, which is the deepest theory of physics. According to the orthodox Copenhagen Interpretation, consciousness and the physical world are complementary aspects of the same reality. When a person observes, or experiments on, some aspect of the physical world, that person’s conscious interaction causes discernible change. Since it takes consciousness as a given and no attempt is made to derive it from physics, the Copenhagen Interpretation may be called the “big-C” view of consciousness, where it is a thing that exists by itself – although it requires brains to become real. This view was popular with the pioneers of quantum theory such as Niels Bohr, Werner Heisenberg and Erwin Schrödinger.

The interaction between consciousness and matter leads to paradoxes that remain unresolved after 80 years of debate. A well-known example of this is the paradox of Schrödinger’s cat, in which a cat is placed in a situation that results in it being equally likely to survive or die – and the act of observation itself is what makes the outcome certain.

The opposing view is that consciousness emerges from biology, just as biology itself emerges from chemistry which, in turn, emerges from physics. We call this less expansive concept of consciousness “little-C.” It agrees with the neuroscientists’ view that the processes of the mind are identical to states and processes of the brain. It also agrees with a more recent interpretation of quantum theory motivated by an attempt to rid it of paradoxes, the Many Worlds Interpretation, in which observers are a part of the mathematics of physics.

Philosophers of science believe that these modern quantum physics views of consciousness have parallels in ancient philosophy. Big-C is like the theory of mind in Vedanta – in which consciousness is the fundamental basis of reality, on par with the physical universe.

Little-C, in contrast, is quite similar to Buddhism. Although the Buddha chose not to address the question of the nature of consciousness, his followers declared that mind and consciousness arise out of emptiness or nothingness.

Big-C and scientific discovery

Scientists are also exploring whether consciousness is always a computational process. Some scholars have argued that the creative moment is not at the end of a deliberate computation. For instance, dreams or visions are supposed to have inspired Elias Howe‘s 1845 design of the modern sewing machine, and August Kekulé’s discovery of the structure of benzene in 1862.

A dramatic piece of evidence in favor of big-C consciousness existing all on its own is the life of the self-taught Indian mathematician Srinivasa Ramanujan, who died in 1920 at the age of 32. His notebook, which was lost and forgotten for about 50 years and published only in 1988, contains several thousand formulas, without proof, in different areas of mathematics that were well ahead of their time. Furthermore, the methods by which he found the formulas remain elusive. He himself claimed that they were revealed to him by a goddess while he was asleep.

The concept of big-C consciousness raises the questions of how it is related to matter, and how matter and mind mutually influence each other. Consciousness alone cannot make physical changes to the world, but perhaps it can change the probabilities in the evolution of quantum processes. The act of observation can freeze and even influence atoms’ movements, as Cornell physicists proved in 2015. This may very well be an explanation of how matter and mind interact.

Mind and self-organizing systems

It is possible that the phenomenon of consciousness requires a self-organizing system, like the brain’s physical structure. If so, then current machines will come up short.

Scholars don’t know if adaptive self-organizing machines can be designed to be as sophisticated as the human brain; we lack a mathematical theory of computation for systems like that. Perhaps it’s true that only biological machines can be sufficiently creative and flexible. But then that suggests people should – or soon will – start working on engineering new biological structures that are, or could become, conscious.

Reprinted with permission from The Conversation


A breakthrough low-light image sensor for photography, life sciences, security

Wed, 20/12/2017 - 11:13pm

A sample photo (right) taken with the one-megapixel low-light Quanta Image Sensor operating at 1,040 frames per second. It is a binary single-photon image, so if the pixel was hit by one or more photons, it is white; if not, it is black. The photo was created by summing up eight frames of binary images taken continuously. A de-noising algorithm was applied to the final image. (credit: Jiaju Ma, adapted by KurzweilAI)

Engineers from Dartmouth’s Thayer School of Engineering have created a radical new imaging technology called “Quanta Image Sensor” (QIS) that may revolutionize a wide variety of imaging applications that require high quality at low light.

These include security, photography, cinematography, and medical and life sciences research.

Low-light photography (at night with only moonlight, for example) currently requires photographers to use time exposure (keeping the shutter open for seconds or minutes), making it impossible to photograph moving images.

Capturing single photons at room temperature

The new QIS technology can capture or count photons at the lowest possible levels of light (single photons) with a resolution as high as one megapixel* (one million pixels) — scalable up to hundreds of megapixels per chip** — and at speeds as high as thousands of frames*** per second (the speed required for “bullet time” cinematography in “The Matrix”).

The QIS works at room temperature, using existing mainstream CMOS image sensor technology. Current lab-research technology may require cooling to very low temperatures, such as 4 kelvin, and is limited to low pixel count.
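
The image-formation step in the sample photo's caption (binary single-photon frames summed into a viewable image) can be sketched as follows; the Poisson photon model and scene values are assumptions for illustration.

```python
# Sketch: build a QIS-style image by summing binary single-photon frames.
import numpy as np

rng = np.random.default_rng(0)

def binary_frame(photon_rate):
    """One fast frame: a pixel reads 1 if one or more photons arrived."""
    photons = rng.poisson(photon_rate)   # photon_rate: mean photons/pixel/frame
    return (photons >= 1).astype(np.uint8)

def qis_image(photon_rate, n_frames=8):
    """Sum n_frames binary frames, as in the sample photo above."""
    return sum(binary_frame(photon_rate) for _ in range(n_frames))

scene = np.full((256, 256), 0.2)   # dim, uniform test scene (assumed)
image = qis_image(scene)           # values 0..8; denoising would follow
```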

Quanta Image Sensor applications (credit: Gigajot)

For astrophysicists, the QIS will allow for detecting and capturing signals from distant objects in space at higher quality. For life-science researchers, it will provide improved visualization of cells under a microscope, which is critical for determining the effectiveness of therapies.

The QIS technology is commercially accessible, inexpensive, and compatible with mass-production manufacturing, according to inventor Eric R. Fossum, professor of engineering at Dartmouth. Fossum is senior author of an open-access paper on QIS in the Dec. 20 issue of The Optical Society’s (OSA) Optica. He invented the CMOS image sensor found in nearly all smartphones and cameras in the world today.

The research was performed in cooperation with Rambus, Inc. and the Taiwan Semiconductor Manufacturing Corporation and was funded by Rambus and the Defense Advanced Research Projects Agency (DARPA). The low-light capability promises to allow for improved security uses. Fossum and associates have co-founded the startup company Gigajot Technology to further develop and apply the technology to promising applications.

* By comparison, the iPhone 8 can capture 12 megapixels (but is not usable in low light).

** The technology is based on what the researchers call “jots,” which function like miniature pixels. Each jot can collect one photon, enabling the extreme low-light capability and high resolution.

*** By comparison, the iPhone 8 can record 24 to 60 frames per second.

Abstract of Photon-number-resolving megapixel image sensor at room temperature without avalanche gain

In several emerging fields of study such as encryption in optical communications, determination of the number of photons in an optical pulse is of great importance. Typically, such photon-number-resolving sensors require operation at very low temperature (e.g., 4 K for superconducting-based detectors) and are limited to low pixel count (e.g., hundreds). In this paper, a CMOS-based photon-counting image sensor is presented with photon-number-resolving capability that operates at room temperature with resolution of 1 megapixel. Termed a quanta image sensor, the device is implemented in a commercial stacked (3D) backside-illuminated CMOS image sensor process. Without the use of avalanche multiplication, the 1.1 μm pixel-pitch device achieves 0.21 e− rms average read noise with average dark count rate per pixel less than 0.2 e−/s, and 1040 fps readout rate. This novel platform technology fits the needs of high-speed, high-resolution, and accurate photon-counting imaging for scientific, space, security, and low-light imaging as well as a broader range of other applications.


How to program DNA like we do computers

Mon, 18/12/2017 - 10:52pm

A programmable chemical oscillator made from DNA (credit: Ella Maru Studio and Cody Geary)

Researchers at The University of Texas at Austin have programmed DNA molecules to follow specific instructions to create sophisticated molecular machines that could be capable of communication, signal processing, problem-solving, decision-making, and control of motion in living cells — the kind of computation previously only possible with electronic circuits.

Future applications may include health care, advanced materials, and nanotechnology.

As a demonstration, the researchers constructed a first-of-its-kind chemical oscillator that uses only DNA components — no proteins, enzymes or other cellular components — to create a classic chemical reaction network (CRN) called a “rock-paper-scissors oscillator.” The goal was to show that DNA alone is capable of precise, complex behavior.
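
The rock-paper-scissors dynamics can be written as a small system of ordinary differential equations for the three species' concentrations, a standard idealization of the reactions A + B → 2B, B + C → 2C, and C + A → 2A. The paper realizes these reactions with DNA strand-displacement cascades; the sketch below only simulates the idealized rate equations.

```python
# Sketch: idealized rate equations of a rock-paper-scissors oscillator.
import numpy as np
from scipy.integrate import solve_ivp

def rps(t, y, k=1.0):
    a, b, c = y
    return [k * (a * c - a * b),   # A consumed by B, replenished from C
            k * (a * b - b * c),   # B consumed by C, replenished from A
            k * (b * c - c * a)]   # C consumed by A, replenished from B

solution = solve_ivp(rps, (0.0, 60.0), [1.2, 1.0, 0.8], args=(1.0,),
                     dense_output=True)
# The three concentrations chase each other in sustained oscillations;
# the total a + b + c is conserved, as the rate terms cancel in the sum.
```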

A systematic pipeline for programming DNA-only dynamical systems and the implementation of a chemical oscillator (credit: Niranjan Srinivas et al./Science)

Chemical oscillators have long been studied by engineers and scientists. For example, the researchers who discovered the chemical oscillator that controls the human circadian rhythm — responsible for our bodies’ day and night rhythm — earned the 2017 Nobel Prize in physiology or medicine.

“As engineers, we are very good at building sophisticated electronics, but biology uses complex chemical reactions inside cells to do many of the same kinds of things, like making decisions,” said David Soloveichik, an assistant professor in the Cockrell School’s Department of Electrical and Computer Engineering and senior author of a paper in the journal Science.

“Eventually, we want to be able to interact with the chemical circuits of a cell, or fix malfunctioning circuits or even reprogram them for greater control. But in the near term, our DNA circuits could be used to program the behavior of cell-free chemical systems that synthesize complex molecules, diagnose complex chemical signatures, and respond to their environments.”

The team’s research was conducted as part of the National Science Foundation’s (NSF) Molecular Programming Project and funded by the NSF, the Office of Naval Research, the National Institutes of Health, and the Gordon and Betty Moore Foundation.


Programming a Chemical Oscillator

Abstract of Enzyme-free nucleic acid dynamical systems

An important goal of synthetic biology is to create biochemical control systems with the desired characteristics from scratch. Srinivas et al. describe the creation of a biochemical oscillator that requires no enzymes or evolved components, but rather is implemented through DNA molecules designed to function in strand displacement cascades. Furthermore, they created a compiler that could translate a formal chemical reaction network into the necessary DNA sequences that could function together to provide a specified dynamic behavior.

Categories: News

A new low-cost, simple way to measure medical vital signs with radio waves

Fri, 15/12/2017 - 10:54pm

A radio-frequency identification (RFID) tag, used to monitor vital signs, can go into your pocket or be woven into a shirt (credit: Cornell)

Cornell University engineers have developed a simple method for gathering blood pressure, heart rate, and breath rate from multiple patients simultaneously, replacing devices that are still in use but based on 19th-century technology.* The method uses low-power radio-frequency signals and low-cost microchip radio-frequency identification (RFID) “tags” — similar to the ubiquitous anti-theft tags used in department stores.

The RFID tags measure internal body motion, such as a heart as it beats or blood as it pulses under skin. Powered remotely by electromagnetic energy supplied by a central reader, the tags use a new concept called “near-field coherent sensing.” Mechanical motions (heartbeat, etc.) in the body modulate (modify) radio waves that are bounced off the body and internal organs by passive (no battery required) RFID tags.

The modulated signals detected by the tag then bounce back to an electronic reader, located elsewhere in the room, that gathers the data. Each tag has a unique identification code that it transmits with its signal, allowing up to 200 people to be monitored simultaneously.
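
The signal chain is easy to caricature: body motion of a few millimeters phase-modulates the backscattered carrier, and the dominant low-frequency spectral peak of the recovered phase gives the vital-sign rate. Below is a minimal sketch of that idea on synthetic data; the carrier wavelength, sample rate, and motion amplitude are assumptions for illustration, not the paper's near-field coherent sensing implementation.

```python
import numpy as np

fs = 500.0                       # assumed baseband sample rate (Hz)
t = np.arange(0, 30, 1/fs)       # 30 s observation window
f_heart = 1.2                    # simulated heartbeat: 1.2 Hz = 72 bpm

# Chest motion of a few millimeters phase-modulates the backscattered carrier.
wavelength = 0.33                            # ~915 MHz UHF carrier (assumed)
displacement = 0.003 * np.sin(2*np.pi*f_heart*t)   # 3 mm motion (assumed)
phase = 4*np.pi*displacement/wavelength            # round-trip phase shift

iq = np.exp(1j*phase) + 0.05*(np.random.randn(t.size)
                              + 1j*np.random.randn(t.size))

# Demodulate: the instantaneous phase recovers the motion waveform.
motion = np.unwrap(np.angle(iq))
motion -= motion.mean()

# The dominant spectral peak in the 0.7-3 Hz band gives the heart rate.
spectrum = np.abs(np.fft.rfft(motion))
freqs = np.fft.rfftfreq(motion.size, 1/fs)
band = (freqs > 0.7) & (freqs < 3.0)
bpm = 60 * freqs[band][np.argmax(spectrum[band])]
print(f"estimated heart rate: {bpm:.1f} bpm")
```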

Electromagnetic simulations of monitoring vital signs via radio transmission, showing heartbeat sensing (left) and pulse sensing (right) (credit: Xiaonan Hui and Edwin C. Kan/Nature Electronics)

“If this is an emergency room, everybody that comes in can wear these tags or can simply put tags in their front pockets, and everybody’s vital signs can be monitored at the same time. I’ll know exactly which person each of the vital signs belongs to,” said Edwin Kan, a Cornell professor of electrical and computer engineering.

The signal is as accurate as an electrocardiogram or a blood-pressure cuff, according to Kan, who believes the technology could also be used to measure bowel movement, eye movement, and many other internal mechanical motions produced by the body.

The researchers envision embedding the RFID chips in clothing to monitor health in real time, with little or no effort required by the user. They have also developed a method for embroidering the tags directly onto clothing using fibers coated with nanoparticles. A cellphone could read (and display) your vital signs and also transmit them for remote medical monitoring.

The system is detailed in the open-access paper “Monitoring Vital Signs Over Multiplexed Radio by Near-Field Coherent Sensing,” published online Nov. 27 in the journal Nature Electronics. “Current approaches to monitoring vital signs are based on body electrodes, optical absorption, pressure or strain gauges, stethoscope, and ultrasound or radiofrequency (RF) backscattering, each of which suffers particular drawbacks during application,” the paper notes.

* The sphygmomanometer was invented by Samuel Siegfried Karl Ritter von Basch in 1881. Devices based on its basic pressure principle are still in use. (credit: Wellcome Trustees)

Abstract of Monitoring vital signs over multiplexed radio by near-field coherent sensing

Monitoring the heart rate, blood pressure, respiration rate and breath effort of a patient is critical to managing their care, but current approaches are limited in terms of sensing capabilities and sampling rates. The measurement process can also be uncomfortable due to the need for direct skin contact, which can disrupt the circadian rhythm and restrict the motion of the patient. Here we show that the external and internal mechanical motion of a person can be directly modulated onto multiplexed radiofrequency signals integrated with unique digital identification using near-field coherent sensing. The approach, which does not require direct skin contact, offers two possible implementations: passive and active radiofrequency identification tags. To minimize deployment and maintenance cost, passive tags can be integrated into garments at the chest and wrist areas, where the two multiplexed far-field backscattering waveforms are collected at the reader to retrieve the heart rate, blood pressure, respiration rate and breath effort. To maximize reading range and immunity to multipath interference caused by indoor occupant motion, active tags could be placed in the front pocket and in the wrist cuff to measure the antenna reflection due to near-field coherent sensing and then the vital signals sampled and transmitted entirely in digital format. Our system is capable of monitoring multiple people simultaneously and could lead to the cost-effective automation of vital sign monitoring in care facilities.

Categories: News

Video games and piano lessons improve cognitive functions in seniors, researchers find

Wed, 13/12/2017 - 11:40pm

(credit: Nintendo)

For seniors, playing 3D-platform games like Super Mario 64 or taking piano lessons can stave off mild cognitive impairment and perhaps even prevent Alzheimer’s disease, according to a new study by Université de Montréal psychology professors.

In the study, 33 people ages 55 to 75 were instructed either to play Super Mario 64 for 30 minutes a day, five days a week, for a period of six months, or to take piano lessons (for the first time in their life) on the same schedule. A control group did not perform any particular task.

The researchers evaluated the effects of the experiment with cognitive performance tests and magnetic resonance imaging (MRI) to measure variations in the volume of gray matter.

Increased gray matter in the left and right hippocampus and the left cerebellum after older adults completed six months of video-game training. (credit: Greg L. West et al./PLoS One)

  • The participants in the video-game cohort saw increases in gray matter volume in the cerebellum (plays a major role in motor control and balance) and the hippocampus (associated with spatial and episodic memory, a key factor in long-term cognitive health); and their short-term memory improved. (The hippocampus gray matter acts as a marker for neurological disorders that can occur over time, including mild cognitive impairment and Alzheimer’s.)
  • There were gray-matter increases in the dorsolateral prefrontal cortex (controls planning, decision-making, and inhibition) and cerebellum of the participants who took piano lessons.
  • Some degree of atrophy was noted in all three areas of the brain among those in the passive control group.

“These findings can also be used to drive future research on Alzheimer’s, since there is a link between the volume of the hippocampus and the risk of developing the disease,” said Gregory West, an associate professor at the Université de Montréal and lead author of an open-access paper in PLoS One journal.

“3-D video games engage the hippocampus into creating a cognitive map, or a mental representation, of the virtual environment that the brain is exploring,” said West. “Several studies suggest stimulation of the hippocampus increases both functional activity and gray matter within this region.”

However, “It remains to be seen whether it is specifically brain activity associated with spatial memory that affects plasticity, or whether it’s simply a matter of learning something new.”

Researchers at Memorial University in Newfoundland and at Montreal’s Douglas Hospital Research Centre were also involved in the study.

In two previous studies by the researchers, in 2014 and 2017, young adults in their twenties were asked to play 3D-platform video games like Super Mario 64. Findings showed that the gray matter in their hippocampus also increased after training.

Abstract of Playing Super Mario 64 increases hippocampal grey matter in older adults

Maintaining grey matter within the hippocampus is important for healthy cognition. Playing 3D-platform video games has previously been shown to promote grey matter in the hippocampus in younger adults. In the current study, we tested the impact of 3D-platform video game training (i.e., Super Mario 64) on grey matter in the hippocampus, cerebellum, and the dorsolateral prefrontal cortex (DLPFC) of older adults. Older adults who were 55 to 75 years of age were randomized into three groups. The video game experimental group (VID; n = 8) engaged in a 3D-platform video game training over a period of 6 months. Additionally, an active control group took a series of self-directed, computerized music (piano) lessons (MUS; n = 12), while a no-contact control group did not engage in any intervention (CON; n = 13). After training, a within-subject increase in grey matter within the hippocampus was significant only in the VID training group, replicating results observed in younger adults. Active control MUS training did, however, lead to a within-subject increase in the DLPFC, while both the VID and MUS training produced growth in the cerebellum. In contrast, the CON group displayed significant grey matter loss in the hippocampus, cerebellum and the DLPFC.

Categories: News

AlphaZero’s ‘alien’ superhuman-level program masters chess in 24 hours with no domain knowledge

Mon, 11/12/2017 - 11:39pm

AlphaZero vs. Stockfish chess program | Round 1 (credit: Chess.com)

Demis Hassabis, the founder and CEO of DeepMind, announced at the Neural Information Processing Systems conference (NIPS 2017) last week that DeepMind’s new AlphaZero program achieved a superhuman level of play in chess within 24 hours.

The program started from random play, given no domain knowledge except the game rules, according to an arXiv paper by DeepMind researchers published Dec. 5.

“It doesn’t play like a human, and it doesn’t play like a program,” said Hassabis, an expert chess player himself. “It plays in a third, almost alien, way. It’s like chess from another dimension.”

AlphaZero also mastered both shogi (Japanese chess) and Go within 24 hours, defeating a world-champion program in all three cases. The original AlphaGo mastered Go by learning from thousands of example games and then practicing against another version of itself.

“AlphaZero was not ‘taught’ the game in the traditional sense,” explains Chess.com. “That means no opening book, no endgame tables, and apparently no complicated algorithms dissecting minute differences between center pawns and side pawns. This would be akin to a robot being given access to thousands of metal bits and parts, but no knowledge of a combustion engine, then it experiments numerous times with every combination possible until it builds a Ferrari. … The program had four hours to play itself many, many times, thereby becoming its own teacher.”
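
The “its own teacher” idea can be shown at miniature scale. The sketch below is a toy tabular analogue on tic-tac-toe, not DeepMind's algorithm (AlphaZero couples a deep network with Monte Carlo tree search): starting from random play and knowing only the rules, it improves purely by playing itself and backing up game outcomes.

```python
import random
from collections import defaultdict

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def afterstates(board, player):
    return [board[:i] + player + board[i+1:]
            for i, s in enumerate(board) if s == "."]

# V[s] = learned value of position s for the player who just moved into it.
V = defaultdict(float)
EPS, ALPHA = 0.1, 0.5                       # exploration rate, learning rate

for episode in range(50_000):
    board, player, history = "." * 9, "X", []
    while True:
        moves = afterstates(board, player)
        board = (random.choice(moves) if random.random() < EPS
                 else max(moves, key=lambda s: V[s]))
        history.append(board)
        w = winner(board)
        if w or "." not in board:
            result = 1.0 if w else 0.0      # +1 win for the final mover, 0 draw
            for state in reversed(history): # back up, flipping perspective
                V[state] += ALPHA * (result - V[state])
                result = -result
            break
        player = "O" if player == "X" else "X"

# Self-play typically discovers that the center opening scores best.
print(max(afterstates("." * 9, "X"), key=lambda s: V[s]))
```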

“What’s also remarkable, though, Hassabis explained, is that it sometimes makes seemingly crazy sacrifices, like offering up a bishop and queen to exploit a positional advantage that led to victory,” MIT Technology Review notes. “Such sacrifices of high-value pieces are normally rare. In another case the program moved its queen to the corner of the board, a very bizarre trick with a surprising positional value.”

Abstract of Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm

The game of chess is the most widely-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. In contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go, by tabula rasa reinforcement learning from games of self-play. In this paper, we generalise this approach into a single AlphaZero algorithm that can achieve, tabula rasa, superhuman performance in many challenging domains. Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi (Japanese chess) as well as Go, and convincingly defeated a world-champion program in each case.

Categories: News

3D-printing biocompatible living bacteria

Fri, 08/12/2017 - 8:17pm

3D-printing with an ink containing living bacteria (credit: Bara Krautz/bara@scienceanimated.com)

Researchers at ETH Zurich have developed a technique for 3D-printing biocompatible living bacteria for the first time — making it possible to produce high-purity cellulose for biomedical applications, as well as nanofilters that can break down toxic substances (in drinking water, for example) or help clean up disastrous oil spills.

The technique, called “Flink” (“functional living ink”) allows for printing mini biochemical factories with properties that vary based on which species of bacteria are used. Up to four different inks containing different species of bacteria at different concentrations can be printed in a single pass.

Schematics of the Flink 3D bacteria-printing process for creating two types of functional living materials. (Left and center) Bacteria are embedded in a biocompatible hydrogel (which provides the supporting structure). (Right) The inclusion of P. putida* or A. xylinum* bacteria in the ink yields 3D-printed materials capable of degrading environmental pollutants (top) or forming bacterial cellulose in situ for biomedical applications (bottom), respectively. (credit: Manuel Schaffner et al./Science Advances)

The technique was described Dec. 1, 2017 in the open-access journal Science Advances.

(Left) A. xylinum bacteria were used in printing a cellulose nanofibril network (scanning electron microscope image), which was deposited (Right) on a doll face, forming a cellulose-reinforced hydrogel that, after removal of all biological residues, could serve as a skin transplant. (credit: Manuel Schaffner et al./Science Advances)

“The in situ formation of reinforcing cellulose fibers within the hydrogel is particularly attractive for regions under mechanical tension, such as the elbow and knee, or when administered as a pouch onto organs to prevent fibrosis after surgical implants and transplantations,” the researchers note in the paper. “Cellulose films grown in complex geometries precisely match the topography of the site of interest, preventing the formation of wrinkles and entrapments of contaminants that could impair the healing process. We envision that long-term medical applications will benefit from the presented multimaterial 3D printing process by locally deploying bacteria where needed.”

 * Pseudomonas putida breaks down the toxic chemical phenol, which is produced on a grand scale in the chemical industry; Acetobacter xylinum secretes high-purity nanocellulose, which relieves pain, retains moisture and is stable, opening up potential applications in the treatment of burns.

Abstract of 3D printing of bacteria into functional complex materials

Despite recent advances to control the spatial composition and dynamic functionalities of bacteria embedded in materials, bacterial localization into complex three-dimensional (3D) geometries remains a major challenge. We demonstrate a 3D printing approach to create bacteria-derived functional materials by combining the natural diverse metabolism of bacteria with the shape design freedom of additive manufacturing. To achieve this, we embedded bacteria in a biocompatible and functionalized 3D printing ink and printed two types of “living materials” capable of degrading pollutants and of producing medically relevant bacterial cellulose. With this versatile bacteria-printing platform, complex materials displaying spatially specific compositions, geometry, and properties not accessed by standard technologies can be assembled from bottom up for new biotechnological and biomedical applications.

Categories: News

New technology allows robots to visualize their own future

Wed, 06/12/2017 - 11:45pm


UC Berkeley | Vestri the robot imagines how to perform tasks

UC Berkeley researchers have developed a robotic learning technology that enables robots to imagine the future of their actions so they can figure out how to manipulate objects they have never encountered before. It could help self-driving cars anticipate future events on the road and produce more intelligent robotic assistants in homes.

The initial prototype focuses on learning simple manual skills entirely from autonomous play — similar to how children can learn about their world by playing with toys, moving them around, grasping, etc.

Using this technology, called visual foresight, the robots can predict what their cameras will see if they perform a particular sequence of movements. These robotic imaginations are still relatively simple for now — predictions made only several seconds into the future — but they are enough for the robot to figure out how to move objects around on a table without disturbing obstacles.

The robot can learn to perform these tasks without any help from humans or prior knowledge about physics, its environment, or what the objects are. That’s because the visual imagination is learned entirely from scratch from unattended and unsupervised (no humans involved) exploration, where the robot plays with objects on a table.

After this play phase, the robot builds a predictive model of the world, and can use this model to manipulate new objects that it has not seen before.

“In the same way that we can imagine how our actions will move the objects in our environment, this method can enable a robot to visualize how different behaviors will affect the world around it,” said Sergey Levine, assistant professor in Berkeley’s Department of Electrical Engineering and Computer Sciences, whose lab developed the technology. “This can enable intelligent planning of highly flexible skills in complex real-world situations.”

The research team demonstrated the visual foresight technology at the Neural Information Processing Systems conference in Long Beach, California, on Monday, December 4, 2017.

Learning by playing: how it works

Robot’s imagined predictions (credit: UC Berkeley)

At the core of this system is a deep learning technology based on convolutional recurrent video prediction, or dynamic neural advection (DNA). DNA-based models predict how pixels in an image will move from one frame to the next, based on the robot’s actions. Recent improvements to this class of models, as well as greatly improved planning capabilities, have enabled robotic control based on video prediction to perform increasingly complex tasks, such as sliding toys around obstacles and repositioning multiple objects.

“In the past, robots have learned skills with a human supervisor helping and providing feedback. What makes this work exciting is that the robots can learn a range of visual object manipulation skills entirely on their own,” said Chelsea Finn, a doctoral student in Levine’s lab and inventor of the original DNA model.

With the new technology, a robot pushes objects on a table, then uses the learned prediction model to choose motions that will move an object to a desired location. Robots use the learned model from raw camera observations to teach themselves how to avoid obstacles and push objects around obstructions.

Since control through video prediction relies only on observations that can be collected autonomously by the robot, such as through camera images, the resulting method is general and broadly applicable. Building video prediction models only requires unannotated video, which can be collected by the robot entirely autonomously.

That contrasts with conventional computer-vision methods, which require humans to manually label thousands or even millions of images.
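
Put together, the control loop is: imagine outcomes for many candidate action sequences with the learned video-prediction model, score each imagined rollout against the goal, and execute the best. The sketch below shows that loop with a random-shooting planner; `predict_frames` and the simple point dynamics are hypothetical stand-ins for the learned model, not the Berkeley system.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_frames(frame, actions):
    # Stand-in dynamics: each 2-D action nudges a tracked point. A real
    # system would roll the DNA-style video-prediction model forward here.
    pos, trajectory = frame.copy(), []
    for a in actions:
        pos = pos + a
        trajectory.append(pos)
    return trajectory

def goal_cost(trajectory, goal):
    # Score an imagined rollout by how close it ends to the target location.
    return float(np.linalg.norm(trajectory[-1] - goal))

def plan(frame, goal, horizon=5, candidates=256):
    # Sample candidate action sequences, imagine their outcomes, keep the best.
    best_seq, best_cost = None, np.inf
    for _ in range(candidates):
        actions = rng.uniform(-1.0, 1.0, size=(horizon, 2))
        cost = goal_cost(predict_frames(frame, actions), goal)
        if cost < best_cost:
            best_seq, best_cost = actions, cost
    return best_seq

start, goal = np.array([0.0, 0.0]), np.array([3.0, -2.0])
print("first planned action:", plan(start, goal)[0])
```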

Categories: News

Why (most) future robots won’t look like robots

Mon, 04/12/2017 - 11:10pm

A future robot’s body could combine soft actuators and stiff structure, with distributed computation throughout — an example of the new “material robotics.” (credit: Nikolaus Correll/University of Colorado)

Future robots won’t be limited to humanoid form (like Boston Dynamics’ formidable backflipping Atlas). They’ll be invisibly embedded everywhere in common objects.

Such as a shoe that can intelligently support your gait, change stiffness as you’re running or walking, and adapt to different surfaces — or even help you do backflips.

That’s the vision of researchers at Oregon State University, the University of Colorado, Yale University, and École Polytechnique Fédérale de Lausanne, who describe the burgeoning new field of “material robotics” in a perspective article published Nov. 29, 2017 in Science Robotics. (The article cites nine articles in this special issue, three of which you can access for free.)

Disappearing into the background of everyday life

The authors challenge a widespread basic assumption: that robots are either “machines that run bits of code” or “software ‘bots’ interacting with the world through a physical instrument.”

“We take a third path: one that imbues intelligence into the very matter of a robot,” says Oregon State University researcher Yiğit Mengüç, an assistant professor of mechanical engineering in OSU’s College of Engineering and part of the college’s Collaborative Robotics and Intelligent Systems Institute.

On that path, materials scientists are developing new bulk materials with the inherent multifunctionality required for robotic applications, while roboticists are working on new material systems with tightly integrated components, allowing robots to disappear into the background of everyday life. “The spectrum of possible approaches spans from soft grippers with zero knowledge and zero feedback all the way to humanoids with full knowledge and full feedback,” the authors note in the paper.

For example, “In the future, your smartphone may be made from stretchable, foldable material so there’s no danger of it shattering,” says Mengüç. “Or it might have some actuation, where it changes shape in your hand to help with the display, or it might be able to communicate something about what you’re observing on the screen. All these bits and pieces of technology that we take for granted in life will be living, physically responsive things, moving, changing shape in response to our needs, not just flat, static screens.”

Soft robots get superpowers

Origami-inspired artificial muscles capable of lifting up to 1,000 times their own weight, simply by applying air or water pressure (credit: Shuguang Li/Wyss Institute at Harvard University)

As a good example of material-enabled robotics, researchers at the Wyss Institute at Harvard University and MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed origami-inspired, programmable, super-strong artificial muscles that will allow future soft robots to lift objects that are up to 1,000 times their own weight — using only air or water pressure.

The actuators are “programmed” by the structural design itself. The skeleton folds define how the whole structure moves — no control system required.

That allows the muscles to be very compact and simple, which makes them more appropriate for mobile or body-mounted systems that can’t accommodate large or heavy machinery, says Shuguang Li, Ph.D., a Postdoctoral Fellow at the Wyss Institute and MIT CSAIL and first author of an open-access article on the research published Nov. 21, 2017 in Proceedings of the National Academy of Sciences (PNAS).

Each artificial muscle consists of an inner “skeleton” that can be made of various materials, such as a metal coil or a sheet of plastic folded into a certain pattern, surrounded by air or fluid and sealed inside a plastic or textile bag that serves as the “skin.” The structural geometry of the skeleton itself determines the muscle’s motion. A vacuum applied to the inside of the bag initiates the muscle’s movement by causing the skin to collapse onto the skeleton, creating tension that drives the motion. Incredibly, no other power source or human input is required to direct the muscle’s movement — it’s automagically determined entirely by the shape and composition of the skeleton. (credit: Shuguang Li/Wyss Institute at Harvard University)

Resilient, multipurpose, scalable

Not only can the artificial muscles move in many ways, they do so with impressive resilience. They can generate about six times more force per unit area than mammalian skeletal muscle can, and are also incredibly lightweight. A 2.6-gram muscle can lift a 3-kilogram object, which is the equivalent of a mallard duck lifting a car. Additionally, a single muscle can be constructed within ten minutes using materials that cost less than $1, making them cheap and easy to test and iterate.
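
Those headline figures are easy to sanity-check; a quick back-of-the-envelope check using only the numbers quoted above:

```python
muscle_mass_kg = 0.0026          # the 2.6 g actuator
payload_kg = 3.0                 # the object it lifts
print(f"lifts ~{payload_kg / muscle_mass_kg:.0f}x its own weight")  # ~1154x
```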

These muscles can be powered by a vacuum, which makes them safer than most of the other artificial muscles currently being tested. The muscles have been built in sizes ranging from a few millimeters up to a meter. So the muscles can be used in numerous applications at multiple scales, from miniature surgical devices to wearable robotic exoskeletons, transformable architecture, and deep-sea manipulators for research or construction, up to large deployable structures for space exploration.

The team could also construct the muscles out of the water-soluble polymer PVA. That opens the possibility of bio-friendly robots that can perform tasks in natural settings with minimal environmental impact, or ingestible robots that move to the proper place in the body and then dissolve to release a drug.

The team constructed dozens of muscles using materials ranging from metal springs to packing foam to sheets of plastic, and experimented with different skeleton shapes to create muscles that can contract down to 10% of their original size, lift a delicate flower off the ground, and twist into a coil, all simply by sucking the air out of them.

This research was funded by the Defense Advanced Research Projects Agency (DARPA), the National Science Foundation (NSF), and the Wyss Institute for Biologically Inspired Engineering.


Wyss Institute | Origami-Inspired Artificial Muscles

Abstract of Fluid-driven origami-inspired artificial muscles

Artificial muscles hold promise for safe and powerful actuation for myriad common machines and robots. However, the design, fabrication, and implementation of artificial muscles are often limited by their material costs, operating principle, scalability, and single-degree-of-freedom contractile actuation motions. Here we propose an architecture for fluid-driven origami-inspired artificial muscles. This concept requires only a compressible skeleton, a flexible skin, and a fluid medium. A mechanical model is developed to explain the interaction of the three components. A fabrication method is introduced to rapidly manufacture low-cost artificial muscles using various materials and at multiple scales. The artificial muscles can be programed to achieve multiaxial motions including contraction, bending, and torsion. These motions can be aggregated into systems with multiple degrees of freedom, which are able to produce controllable motions at different rates. Our artificial muscles can be driven by fluids at negative pressures (relative to ambient). This feature makes actuation safer than most other fluidic artificial muscles that operate with positive pressures. Experiments reveal that these muscles can contract over 90% of their initial lengths, generate stresses of ∼600 kPa, and produce peak power densities over 2 kW/kg—all equal to, or in excess of, natural muscle. This architecture for artificial muscles opens the door to rapid design and low-cost fabrication of actuation systems for numerous applications at multiple scales, ranging from miniature medical devices to wearable robotic exoskeletons to large deployable structures for space exploration.

Categories: News

Using light instead of electrons promises faster, smaller, more-efficient computers and smartphones

Fri, 01/12/2017 - 9:08pm

Trapped light for optical computation (credit: Imperial College London)

By forcing light to go through a smaller gap than ever before, a research team at Imperial College London has taken a step toward computers based on light instead of electrons.

Light would be preferable for computing because it can carry much-higher-density information, it’s much faster, and it’s more efficient (generating little to no heat). But light beams don’t easily interact with one another. So information on high-speed fiber-optic cables (provided by your cable TV company, for example) currently has to be converted (via a modem or other device) into slower signals (electrons on wires or wireless signals) to allow for processing the data on devices such as computers and smartphones.

Electron-microscope image of an optical-computing nanofocusing device that is 25 nanometers wide and 2 micrometers long, using grating couplers (vertical lines) to interface with fiber-optic cables. (credit: Nielsen et al., 2017/Imperial College London)

To overcome that limitation, the researchers used metamaterials to squeeze light into a metal channel only 25 nanometers (billionths of a meter) wide, increasing its intensity and allowing photons to interact over the range of micrometers (millionths of meters) instead of centimeters.*

That means optical computation that previously required a centimeters-size device can now be realized on the micrometer (one millionth of a meter) scale, bringing optical processing into the size range of electronic transistors.

The results were published Thursday Nov. 30, 2017 in the journal Science.

* Normally, when two light beams cross each other, the individual photons do not interact or alter each other, as two electrons do when they meet. That means a long span of material is needed to gradually accumulate the effect and make it useful. Here, a “plasmonic nanofocusing” waveguide is used, strongly confining light within a nonlinear organic polymer.
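
For a rough feel of why squeezing the mode helps: at fixed power, intensity scales inversely with mode area, and nonlinear mixing builds up faster at higher intensity, so the required interaction length shrinks roughly in proportion. The mode areas below are order-of-magnitude assumptions, not the paper's measured values.

```python
# Order-of-magnitude scaling only; both areas are assumed round numbers.
fiber_mode_um2 = 80.0                 # typical single-mode fiber effective area
gap_mode_um2 = 0.025 * 0.05           # ~25 nm x ~50 nm plasmonic gap mode
enhancement = fiber_mode_um2 / gap_mode_um2
print(f"intensity up ~{enhancement:,.0f}x at the same power")   # ~64,000x
```

An enhancement of that magnitude is broadly consistent with the centimeters-to-micrometers reduction in device size described above.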

Abstract of Giant nonlinear response at a plasmonic nanofocus drives efficient four-wave mixing

Efficient optical frequency mixing typically must accumulate over large interaction lengths because nonlinear responses in natural materials are inherently weak. This limits the efficiency of mixing processes owing to the requirement of phase matching. Here, we report efficient four-wave mixing (FWM) over micrometer-scale interaction lengths at telecommunications wavelengths on silicon. We used an integrated plasmonic gap waveguide that strongly confines light within a nonlinear organic polymer. The gap waveguide intensifies light by nanofocusing it to a mode cross-section of a few tens of nanometers, thus generating a nonlinear response so strong that efficient FWM accumulates over wavelength-scale distances. This technique opens up nonlinear optics to a regime of relaxed phase matching, with the possibility of compact, broadband, and efficient frequency mixing integrated with silicon photonics.

Categories: News

New nanomaterial, quantum encryption system could be ultimate defenses against hackers

Wed, 29/11/2017 - 11:43pm

New physically unclonable nanomaterial (credit: Abdullah Alharbi et al./ACS Nano)

Recent advances in quantum computers may soon give hackers access to machines powerful enough to crack even the toughest of standard internet security codes. With these codes broken, all of our online data — from medical records to bank transactions — could be vulnerable to attack.

Now, a new low-cost nanomaterial developed by New York University Tandon School of Engineering researchers can be tuned to act as a secure authentication key to encrypt computer hardware and data. The layered molybdenum disulfide (MoS2) nanomaterial cannot be physically cloned (duplicated) — replacing programming, which can be hacked.

In a paper published in the journal ACS Nano, the researchers explain that the new nanomaterial has the highest possible level of structural randomness, making it physically unclonable. It achieves this with randomly occurring regions that alternately emit or do not emit light. When exposed to light, this pattern can be used to create a one-of-a-kind binary cryptographic authentication key that could secure hardware components at minimal cost.
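
Turning such a light/no-light speckle map into a key is conceptually simple: read out the per-pixel photoemission and threshold it. The sketch below simulates that step with random stand-in data for the 2048-pixel array; the security comes from the fact that the underlying physical pattern cannot be manufactured twice, not from the thresholding code itself.

```python
import numpy as np

rng = np.random.default_rng()
photoemission = rng.random(2048)      # stand-in for the measured per-pixel response

# A balanced 0/1 split maximizes key entropy; the paper links the optimal
# threshold to the CVD growth process itself.
threshold = np.median(photoemission)
key_bits = (photoemission > threshold).astype(np.uint8)

key_hex = np.packbits(key_bits).tobytes().hex()
print(f"{key_bits.size}-bit key: {key_hex[:32]}...")
```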

The research team envisions a future in which similar nanomaterials can be inexpensively produced at scale and applied to a chip or other hardware component. “No metal contacts are required, and production could take place independently of the chip fabrication process,” according to Davood Shahrjerdi, Assistant Professor of Electrical and Computer Engineering. “It’s maximum security with minimal investment.”

The National Science Foundation and the U.S. Army Research Office supported the research.

A high-speed quantum encryption system to secure the future internet

Schematic of the experimental quantum key distribution setup (credit: Nurul T. Islam et al./Science Advances)

Another approach to the hacker threat is being developed by scientists at Duke University, The Ohio State University and Oak Ridge National Laboratory. It would use the properties that drive quantum computers to create theoretically hack-proof forms of quantum data encryption.

Called quantum key distribution (QKD), it takes advantage of one of the fundamental properties of quantum mechanics: Measuring tiny bits of matter like electrons or photons automatically changes their properties, which would immediately alert both parties to the existence of a security breach. However, current QKD systems can only transmit keys at relatively low rates — up to hundreds of kilobits per second — which are too slow for most practical uses on the internet.

The new experimental QKD system is capable of creating and distributing encryption codes at megabit-per-second rates — five to 10 times faster than existing methods and on a par with current internet speeds when running several systems in parallel. In an online open-access article in Science Advances, the researchers show that the technique is secure from common attacks, even in the face of equipment flaws that could open up leaks.
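
The arithmetic behind “more than one secret bit per photon” is worth making explicit: a qudit with d time bins carries log2(d) raw bits per detected photon, and the final key rate is that raw rate discounted by sifting, error correction, and privacy amplification. The rates and efficiencies below are illustrative assumptions, not the measured parameters from the paper.

```python
import math

d = 4                                  # time bins per symbol
bits_per_photon = math.log2(d)         # 2 raw bits instead of 1

detected_photons_per_s = 1.0e6         # assumed detection rate
sifting_efficiency = 0.5               # assumed fraction of events kept
ec_pa_efficiency = 0.7                 # assumed error-correction/privacy loss

secret_bps = (detected_photons_per_s * bits_per_photon
              * sifting_efficiency * ec_pa_efficiency)
print(f"~{secret_bps/1e6:.1f} Mbit/s secret key")   # ~0.7 Mbit/s here
```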

This research was supported by the Office of Naval Research, the Defense Advanced Research Projects Agency, and Oak Ridge National Laboratory.

Abstract of Physically Unclonable Cryptographic Primitives by Chemical Vapor Deposition of Layered MoS2

Physically unclonable cryptographic primitives are promising for securing the rapidly growing number of electronic devices. Here, we introduce physically unclonable primitives from layered molybdenum disulfide (MoS2) by leveraging the natural randomness of their island growth during chemical vapor deposition (CVD). We synthesize a MoS2 monolayer film covered with speckles of multilayer islands, where the growth process is engineered for an optimal speckle density. Using the Clark–Evans test, we confirm that the distribution of islands on the film exhibits complete spatial randomness, hence indicating the growth of multilayer speckles is a spatial Poisson process. Such a property is highly desirable for constructing unpredictable cryptographic primitives. The security primitive is an array of 2048 pixels fabricated from this film. The complex structure of the pixels makes the physical duplication of the array impossible (i.e., physically unclonable). A unique optical response is generated by applying an optical stimulus to the structure. The basis for this unique response is the dependence of the photoemission on the number of MoS2 layers, which by design is random throughout the film. Using a threshold value for the photoemission, we convert the optical response into binary cryptographic keys. We show that the proper selection of this threshold is crucial for maximizing combination randomness and that the optimal value of the threshold is linked directly to the growth process. This study reveals an opportunity for generating robust and versatile security primitives from layered transition metal dichalcogenides.

Abstract of Provably secure and high-rate quantum key distribution with time-bin qudits

The security of conventional cryptography systems is threatened in the forthcoming era of quantum computers. Quantum key distribution (QKD) features fundamentally proven security and offers a promising option for quantum-proof cryptography solution. Although prototype QKD systems over optical fiber have been demonstrated over the years, the key generation rates remain several orders of magnitude lower than current classical communication systems. In an effort toward a commercially viable QKD system with improved key generation rates, we developed a discrete-variable QKD system based on time-bin quantum photonic states that can generate provably secure cryptographic keys at megabit-per-second rates over metropolitan distances. We use high-dimensional quantum states that transmit more than one secret bit per received photon, alleviating detector saturation effects in the superconducting nanowire single-photon detectors used in our system that feature very high detection efficiency (of more than 70%) and low timing jitter (of less than 40 ps). Our system is constructed using commercial off-the-shelf components, and the adopted protocol can be readily extended to free-space quantum channels. The security analysis adopted to distill the keys ensures that the demonstrated protocol is robust against coherent attacks, finite-size effects, and a broad class of experimental imperfections identified in our system.

Categories: News

Space dust may transport life between worlds

Sun, 26/11/2017 - 1:06am

Imagine what this amazingly resilient microscopic (0.2 to 0.7 millimeter) Milnesium tardigradum animal could evolve into on another planet. (credit: Wikipedia)

Life on our planet might have originated from biological particles brought to Earth in streams of space dust, according to a study published in the journal Astrobiology.

A huge amount of space dust (~10,000 kilograms — about the weight of two elephants) enters our atmosphere every day — possibly delivering organisms from far-off worlds, according to Professor Arjun Berera from the University of Edinburgh School of Physics and Astronomy, who led the study.

The dust streams could also collide with bacteria and other biological particles at 150 km or higher above Earth’s surface with enough energy to knock them into space, carrying Earth-based organisms to other planets and perhaps beyond.
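
A crude way to see why the mechanism is plausible: in a head-on elastic collision with a target initially at rest, the target leaves at v = 2·m_dust·v_dust/(m_dust + m_target), so dust at tens of km/s striking a particle of comparable or smaller mass can exceed Earth's ~11 km/s escape speed (atmospheric drag aside). The masses and speeds below are illustrative assumptions, not figures from the study.

```python
# Head-on elastic collision, target initially at rest (illustrative numbers).
v_escape = 11.2                  # km/s, Earth escape speed
m_dust, v_dust = 1.0, 40.0       # grain mass (relative units) and speed (km/s)
m_target = 1.0                   # comparable-mass microbe-laden particle

v_target = 2 * m_dust * v_dust / (m_dust + m_target)
print(f"{v_target:.1f} km/s ->", "escapes" if v_target > v_escape else "bound")
```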

The finding suggests that large asteroid impacts may not be the sole mechanism by which life could transfer between planets, as previously thought.

“The streaming of fast space dust is found throughout planetary systems and could be a common factor in proliferating life,” said Berera. Some bacteria, plants, and even microscopic animals called tardigrades* are known to be able to survive in space, so it is possible that such organisms — if present in Earth’s upper atmosphere — might collide with fast-moving space dust and withstand a journey to another planet.**

The study was partly funded by the U.K. Science and Technology Facilities Council.

* “Some tardigrades can withstand extremely cold temperatures down to 1 K (−458 °F; −272 °C) (close to absolute zero), while others can withstand extremely hot temperatures up to 420 K (300 °F; 150 °C)[12] for several minutes, pressures about six times greater than those found in the deepest ocean trenches, ionizing radiation at doses hundreds of times higher than the lethal dose for a human, and the vacuum of outer space. They can go without food or water for more than 30 years, drying out to the point where they are 3% or less water, only to rehydrate, forage, and reproduce.” — Wikipedia

** “Over the lifespan of the Earth of four billion years, particles emerging from Earth by this manner in principle could have traveled out as far as tens of kiloparsecs [one kiloparsec = 3,260 light years; our galaxy is about 100,000 light-years across]. This material horizon, as could be called the maximum distance on pure kinematic grounds that a material particle from Earth could travel outward based on natural processes, would cover most of our Galactic disk [the "Milky Way"], and interestingly would be far enough out to reach the Earth-like or potentially habitable planets that have been identified.” — Arjun Berera/Astrobiology

Abstract of Space Dust Collisions as a Planetary Escape Mechanism

It is observed that hypervelocity space dust, which is continuously bombarding Earth, creates immense momentum flows in the atmosphere. Some of this fast space dust inevitably will interact with the atmospheric system, transferring energy and moving particles around, with various possible consequences. This paper examines, with supporting estimates, the possibility that by way of collisions the Earth-grazing component of space dust can facilitate planetary escape of atmospheric particles, whether they are atoms and molecules that form the atmosphere or larger-sized particles. An interesting outcome of this collision scenario is that a variety of particles that contain telltale signs of Earth’s organic story, including microbial life and life-essential molecules, may be “afloat” in Earth’s atmosphere. The present study assesses the capability of this space dust collision mechanism to propel some of these biological constituents into space. Key Words: Hypervelocity space dust—Collision—Planetary escape—Atmospheric constituents—Microbial life. Astrobiology 17, xxx–xxx.

Categories: News