- Physicists develop powerful method of suppressing errors in many types of quantum computers
- Researchers detect two highly complex organic molecules in space
- Fluorescent puppy is world's first transgenic dog
- Predictive powers: a robot that reads your intention? (w/Video)
- Researchers uncover how nanoparticles may damage lungs
- Electronic Pill Measures pH Levels In Digestive Tract
- Researchers edit genes in human stem cells
- 'Resurrection bug' revived after 120,000 years
- Artificial penguins (video)
- A wheelchair moved by thought (video)
Physicists develop powerful method of suppressing errors in many types of quantum computers
April 22nd, 2009
(PhysOrg.com) -- Researchers at the National Institute of Standards and Technology have demonstrated a technique for efficiently suppressing errors in quantum computers. The advance could eventually make it much easier to build useful versions of these potentially powerful but highly fragile machines, which theoretically could solve important problems that are intractable using today's computers.
The new error-suppression method, described in the April 23 issue of Nature, was demonstrated using an array of about 1,000 ultracold beryllium ions (electrically charged atoms) trapped by electric and magnetic fields. Each ion can act as a quantum bit (qubit) for storing information in a quantum computer. These ions form neatly ordered crystals, similar to arrays of qubits being fabricated by other researchers using semiconducting and superconducting circuitry. Arrays like this potentially could be used as multi-bit quantum memories.
The new NIST technique counteracts a major threat to the reliability of quantum memories: the potential for small disturbances, such as stray electric or magnetic fields, to create random errors in the qubits. The NIST team applied customized sequences of microwave pulses to reverse the accumulation of such random errors in all qubits simultaneously.
Co-lead author Michael J. Biercuk, a NIST post-doc, notes that correcting qubit errors after they occur will require extraordinary resources, whereas early suppression of errors is far more efficient and improves the performance of subsequent error correction. The new NIST error-suppression method could enable quantum computers of various designs to achieve error rates far below the so-called fault-tolerance threshold of about 1 error in 10,000 computational operations (0.01 percent), Biercuk says. If error rates can be reduced below this level, building a useful quantum computer becomes considerably more realistic.

Recently, scientists at another institution published a theory of how to modify pulse timing in order to improve noise suppression. The NIST team conducted the first experimental demonstration of this theory, and then extended these ideas by generating novel pulse sequences tailored to the ambient noise environment. These novel sequences can be found quickly through an experimental feedback technique, and were shown to significantly outperform other sequences without the need for any knowledge of the noise characteristics. The researchers tested these pulse sequences under realistic noise conditions simulating those appropriate for different qubit technologies, making their results broadly applicable. source
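To get a feel for why a well-timed pulse sequence can cancel errors, here is a toy sketch (my own illustration, nothing to do with the actual NIST code or hardware): a qubit in superposition picks up an unwanted phase from a slowly drifting stray field, and a single "pi-pulse" halfway through the storage time flips the sign of further accumulation, so the slow error cancels itself out. The constant-detuning noise model is an assumption for the sake of the demo.

```python
# Toy illustration of a spin echo, the simplest error-suppressing pulse
# sequence. A qubit accumulates phase error at a rate set by a stray-field
# detuning; each instantaneous pi-pulse inverts the sign of accumulation.

def accumulated_phase(detuning, total_time, pi_pulse_times):
    """Net phase (radians) for a qubit with constant detuning (rad/s),
    with pi-pulses applied at the given times inside [0, total_time]."""
    events = sorted(pi_pulse_times) + [total_time]
    phase, sign, t_prev = 0.0, 1.0, 0.0
    for t in events:
        phase += sign * detuning * (t - t_prev)  # free evolution segment
        sign, t_prev = -sign, t                  # pi-pulse flips the sign
    return phase

drift = 2.0   # rad/s stray-field detuning (assumed constant, i.e. slow noise)
T = 1.0       # seconds of qubit storage

free = accumulated_phase(drift, T, [])       # no pulses: 2.0 rad of error
echo = accumulated_phase(drift, T, [T / 2])  # pi-pulse at T/2: error cancels to 0.0
print(free, echo)
```

Real noise is not constant, which is exactly why the NIST team's feedback-tuned sequences of many pulses outperform the naive evenly-spaced ones.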
My comment: Nice! I won't comment too much on this, but I think it would be cool to see a working quantum computer. And it offers major potential for artificial intelligence. Did you know that according to some people, the Internet may become self-aware at some point? Just imagine: you try to find a page and the Net facilitates you (or obstructs you) so that you get there quicker. Or to the right place. The right place according to It, not according to you. Who knows, maybe it's not Google that's screwing with our traffic, maybe it's the Net :)
Researchers detect two highly complex organic molecules in space
April 21st, 2009
(PhysOrg.com) -- Scientists from the Max Planck Institute for Radio Astronomy (MPIfR) in Bonn, Germany, Cornell University, USA, and the University of Cologne, Germany, have detected two of the most complex molecules yet discovered in interstellar space: ethyl formate and n-propyl cyanide.
The IRAM 30 metre telescope in Spain was used to detect emissions from molecules in the star-forming region Sagittarius B2, close to the centre of our galaxy. The two new molecules were detected in a hot, dense cloud of gas known as the "Large Molecule Heimat", which contains a luminous newly-formed star. Large organic molecules of many different sorts have been detected in this cloud in the past, including alcohols, aldehydes, and acids. The new molecules, ethyl formate (C2H5OCHO) and n-propyl cyanide (C3H7CN), represent two different classes of molecule - esters and alkyl cyanides - and they are the most complex of their kind yet detected in interstellar space.

The researchers then used a computational model to understand the chemical processes that allow these and other molecules to form in space. Chemical reactions can take place as the result of collisions between gaseous particles; but there are also small grains of dust suspended in the interstellar gas, and these grains can be used as landing sites for atoms to meet and react, producing molecules. As a result, the grains build up thick layers of ice, composed mainly of water, but also containing a number of basic organic molecules like methanol, the simplest alcohol.
"But," says Robin Garrod, a researcher in astrochemistry at Cornell University, "the really large molecules don't seem to build up this way, atom by atom." Rather, the computational models suggest that the more complex molecules form section by section, using pre-formed building blocks that are provided by molecules, such as methanol, that are already present on the dust grains. The computational models show that these sections, or "functional groups", can add together efficiently, building up a molecular "chain" in a series of short steps. The two newly-discovered molecules seem to have been produced in this way.
Garrod adds, "There is no apparent limit to the size of molecules that can be formed by this process - so there's good reason to expect even more complex organic molecules to be there, if we can detect them." These may even include amino acids, which are required for the production of proteins, and are therefore essential to life on Earth.
The simplest amino acid, glycine (NH2CH2COOH), has been searched for in the past, but has as yet not been successfully detected. However, the size and complexity of this molecule is matched by the two new molecules discovered by the team (Astronomy & Astrophysics, in press). source
My comment: Cool, heh? I mean, if there are such complex molecules and if there is no limit to their size, then statistically, there might be living organisms in the space around us. Yes, it sounds kind of crazy, but the Universe is SO big that even small chances can become very, very big when you have so much space and time in front of you. So, this makes some of the Star Trek episodes slightly less fantastic. You fly around and you meet some huge nebula (well, all nebulae are huge compared to a spaceship, but let's live in the Star Trek reality for a while), and that nebula is actually alive, self-aware and wanting to MEAT you. Nice :)
Fluorescent puppy is world's first transgenic dog
- 12:00 23 April 2009 by Ewen Callaway
A cloned beagle named Ruppy – short for Ruby Puppy – is the world's first transgenic dog. She and four other beagles all produce a fluorescent protein that glows red under ultraviolet light.
A team led by Byeong-Chun Lee of Seoul National University in South Korea created the dogs by cloning fibroblast cells that express a red fluorescent gene produced by sea anemones.
This new proof-of-principle experiment should open the door for transgenic dog models of human disease, says team member CheMyong Ko of the University of Kentucky in Lexington.
However, other researchers who study domestic dogs as stand-ins for human disease are less certain that transgenic dogs will become widespread in research.
Dogs already serve as models for diseases such as narcolepsy, certain cancers and blindness.
Lee's team created Ruppy by first infecting dog fibroblast cells with a virus that inserted the fluorescent gene into a cell's nucleus. They then transferred the fibroblast's nucleus to another dog's egg cell, with its nucleus removed. After a few hours dividing in a Petri dish, researchers implanted the cloned embryo into a surrogate mother.
Starting with 344 embryos implanted into 20 dogs, Lee's team ended up with seven pregnancies. One fetus died about half way through term, while an 11-week-old puppy died of pneumonia after its mother accidentally bit its chest. Five dogs are alive, healthy and starting to spawn their own fluorescent puppies, Ko says.
Besides the low efficiency of cloning – just 1.7 per cent of embryos came to term – another challenge to creating transgenic dogs is controlling where in the nuclear DNA a foreign gene lands. Lee's team used a retrovirus to transfer the fluorescent gene to dog fibroblast cells, but they could not control where the virus inserted the gene. source
My comment: Ok, as a dog owner I cannot feel particularly happy for the poor glowing dogs. It's not so much a problem that they glow; that might even pass for cool. But what genes did they disrupt? Can I be sure the dog won't turn neurotic because of that and eat someone? I know this is not the point of the experiment, but still, I like dogs. It's good they proved they can do it and can get a healthy generation (though we still have to wait and see how healthy it is), but now I think they have to focus on the consequences of the viral gene insertion, since this is usually the most problematic part of messing with genes. And let's hope Paris can stay away from the cute glowing puppies.
Predictive powers: a robot that reads your intention? (w/Video)
June 5th, 2009
(PhysOrg.com) -- European researchers in robotics, psychology and cognitive sciences have developed a robot that can predict the intentions of its human partner. This ability to anticipate (or question) actions could make human-robot interactions more natural.
Many research groups are trying to build robots that could be less like workers and more like companions. But to play this role, they must be able to interact with people in natural ways, and play a pro-active part in joint tasks and decision-making. We need robots that can ask questions, discuss and explore possibilities, assess their companion's ideas and anticipate what their partners might do next.
The EU-funded JAST project brings a multidisciplinary team together to do just this. The project explores ways by which a robot can anticipate/predict the actions and intentions of a human partner as they work collaboratively on a task.
A major element of the JAST project, therefore, was to conduct studies of human-human collaboration. These experiments and observations could feed into the development of more natural robotic behaviour.
Scientists have already shown that a set of 'mirror neurons' are activated when people observe an activity. These neurons resonate as if they were mimicking the activity; the brain learns about an activity by effectively copying what is going on. In the JAST project, a similar resonance was discovered during joint tasks: people observe their partners and the brain copies their action to try and make sense of it.
In other words, the brain processes the observed actions (and errors, it turns out) as if it is doing them itself. The brain mirrors what the other person is doing either for motor-simulation purposes or to select the most adequate complementary action.
The JAST robotics partners have built a system that incorporates this capacity for observation and mirroring (resonance).
“In our experiments the robot is not observing to learn a task,” explains Wolfram Erlhagen from the University of Minho and one of the project consortium's research partners. “The JAST robots already know the task, but they observe behaviour, map it against the task, and quickly learn to anticipate [partner actions] or spot errors when the partner does not follow the correct or expected procedure.”
The robot was tested in a variety of settings. In one scenario, the robot was the 'teacher' - guiding and collaborating with human partners to build a complicated model toy. In another test, the robot and the human were on equal terms. “Our tests were to see whether the human and robot could coordinate their work,” Erlhagen continues.
By observing how its human partner grasped a tool or model part, for example, the robot was able to predict how its partner intended to use it. Clues like these helped the robot to anticipate what its partner might need next.
The robots were also programmed to deal with suspected errors and seek clarification when their partners’ intentions were ambiguous. For example, if one piece could be used to build three different structures, the robot had to ask which object its partner had in mind.
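The anticipate-or-ask behaviour described above can be sketched in a few lines (my own illustration; the object names and structures are hypothetical, not from the JAST software): the robot already knows the task, so it maps an observed grasp against the candidate structures and only asks when more than one fits.

```python
# Hedged sketch of "anticipate or seek clarification". The robot knows
# which pieces belong to which model; an observed grasp either pins down
# the partner's intention or triggers a clarifying question.

CANDIDATE_STRUCTURES = {              # hypothetical task knowledge
    "windmill": ["red cube", "long bolt", "blade"],
    "tower":    ["red cube", "long bolt", "roof"],
    "cart":     ["wheel", "long bolt", "axle"],
}

def react_to(observed_piece):
    """Map an observed grasp against the known task: anticipate when the
    piece is unambiguous, ask when several structures could use it."""
    fits = [name for name, parts in CANDIDATE_STRUCTURES.items()
            if observed_piece in parts]
    if len(fits) == 1:
        return f"anticipate: partner is building the {fits[0]}"
    if len(fits) > 1:
        return f"ask: which do you mean, {' or '.join(fits)}?"
    return "error: piece does not belong to the known task"

print(react_to("blade"))      # only the windmill uses it -> anticipate
print(react_to("long bolt"))  # fits all three models -> ask
```

The third branch is the error-spotting case the researchers mention: a partner grabbing a piece that fits no expected procedure.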
Before robots like this one can be let loose around humans, however, they will have to learn some manners. Humans know how to behave according to the context they are in. This is subtle and would be difficult for a robot to understand.
Nevertheless, by refining this ability to anticipate, it should be possible to produce robots that are proactive in what they do. source
My comment: There is a video on the source page, I urge you to go and see it! It's so cool. Continuing the dog-owner theme: if you happen to be around an intelligent animal, you find unbelievable pleasure when the animal can predict your intentions and act upon them. Even the little "let's go outside" that leads to happy jumping around and stuff is such a major success. Because when you interact with humans, you expect them to understand you and to act in the context of what you're doing. But when it comes to an animal, you cannot expect that. Ok, you expect it, but it seldom happens. And I never stop being amazed by the small situations when my dog acts as I would expect a little kid to act.
When it comes to robotics, it's all artificial: you cannot expect the natural intelligence of a dog to connect the dots; you have to connect the dots in the robot's software before seeing such behaviour. And that's why this research is so important. They learnt how to connect the dots. They learnt how to orient the robot in the human's world and to help it read human intentions. Because that's where the power is: in cooperation. If you have to spell out every little command, you'd spend less time just doing the action yourself. But if the metal dude can guess the command and just ask for your clarification or approval, the work can go so much faster. Nice!
Researchers uncover how nanoparticles may damage lungs
HONG KONG (Reuters) – Researchers in China appear to have uncovered how nanoparticles, which are used in medicine for diagnosis and drug delivery, may cause lung damage.
Apart from medicine, nanoparticles are used in products like sporting goods, cosmetics, tires and electronics and have a projected annual market of around US$1 trillion by 2015.
However, concerns are growing that they may have toxic effects, particularly on the lungs. But it has never been clear how the damage is caused.
In an article published in the Journal of Molecular Cell Biology, the Chinese experts said a class of nanoparticles used in medicine, polyamidoamine dendrimers (PAMAMs), may cause lung damage by triggering a type of programmed cell death known as autophagic cell death.
In experiments, they observed how several types of PAMAMs killed human lung cells but found no evidence that the cells were dying by apoptosis, a natural and common type of cell death.
In a subsequent experiment in mice, they injected an autophagy inhibitor in mice and later exposed the rodents to nanoparticles and found that it "significantly ameliorated the lung damage and improved survival rates."
"This provides us with a promising lead for developing strategies to prevent lung damage caused by nanoparticles," said the leader of the team, Chengyu Jiang, a molecular biologist at the Chinese Academy of Medical Sciences in Beijing. source
My comment: That's what I'm talking about. This is one of the first pieces of evidence that nanoparticles are really dangerous. And since they are so HEAVILY used in all kinds of industries, I sincerely hope that such experiments will give regulators the needed proof that nanoparticles have to be very carefully examined before being introduced to the market and into people! It's not just paranoia, it's a real danger!
Electronic Pill Measures pH Levels In Digestive Tract
by Staff Writers
Jun 12, 2009
An electronic diagnostic tool called the SmartPill is swallowed by patients in order to take measurements as it travels through the gastrointestinal tract.
A new study by physician-scientists at NewYork-Presbyterian/Weill Cornell Medical Center used the device in patients with mild to moderate ulcerative colitis (UC), determining that they have significantly more acidic pH in their colons, compared with the average person - a finding that may impact treatment strategy. The study was presented at the Digestive Disease Week (DDW) meeting in Chicago, Ill.
Mesalamines are the mainstay drug therapy for the induction and maintenance of remission in patients with mild to moderate UC. Their efficacy is dependent on how well the drug is delivered to the active site of the disease. Several mesalamines have a delivery system that is dependent upon a specific pH in order to release. However, since the pH levels in the GI tract can vary, the researchers say, this could impact the proper release and efficacy of the medication.
Administered in the physician's office, the SmartPill allows the patient to go about their normal routine during the course of the test. As the SmartPill Capsule passes through the GI tract, it transmits data - including pressure, pH and temperature - to a SmartPill Data Receiver worn by the patient.
Once the single-use capsule has passed from the body, the patient returns the Data Receiver to the physician who then can download the collected data to a computer, where it can be analyzed. source
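One simple thing such a pH trace tells you is when the capsule leaves the stomach: the reading jumps abruptly from the acidic stomach (around pH 1-2) to the near-neutral small bowel (around pH 6-7). Here is a minimal sketch of that analysis (my own illustration, not the SmartPill software; the 3-unit jump threshold and the sample trace are assumptions):

```python
# Hedged sketch: locate gastric emptying in a (time, pH) trace by finding
# the first abrupt rise in pH between consecutive samples.

def gastric_emptying_time(samples, jump=3.0):
    """samples: list of (minutes, pH) pairs in time order. Return the
    time of the first sample whose pH exceeds the previous sample's by
    at least `jump` pH units, or None if no such rise occurs."""
    for (t0, ph0), (t1, ph1) in zip(samples, samples[1:]):
        if ph1 - ph0 >= jump:
            return t1
    return None

# Hypothetical trace: acidic stomach readings, then a jump into the bowel.
trace = [(0, 1.8), (30, 1.6), (60, 2.1), (90, 6.4), (120, 6.9)]
print(gastric_emptying_time(trace))  # 90
```

The same variability is what the UC study is about: if colonic pH is more acidic than a pH-dependent mesalamine coating expects, the drug may not release where the disease is.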
Researchers edit genes in human stem cells
June 18th, 2009
Researchers at the Johns Hopkins School of Medicine have successfully edited the genome of human-induced pluripotent stem cells, making possible the future development of patient-specific stem cell therapies. Reporting this week in Cell Stem Cell, the team altered a gene responsible for causing the rare blood disease paroxysmal nocturnal hemoglobinuria, or PNH, establishing for the first time a useful system to learn more about the disease.
Cheng's lab and collaborators at Johns Hopkins study PNH, a condition where "friendly fire" kills patients' own blood cells and the body can't replenish the lost blood cells due to loss of normal blood stem cells.
To target and remove the function of the one specific gene known to cause PNH, the research team improved on the standard approach of gene targeting, which can remove a functional gene or replace a dysfunctional gene.
Gene targeting exploits a cell's own ability to repair broken DNA. When DNA breaks from exposure to mutagens or other agents like DNA-cutting enzymes, DNA-repairing enzymes in the cell find and re-join the two exposed DNA ends. However, if another piece of DNA with exposed ends is floating around, it effectively can be spliced into the broken DNA during repair, and replace the defective copy.
The team's technological improvement includes the use of custom-designed molecular scissors that are made by collaborators at Harvard University and University of Texas Southwestern Medical Center. These engineered DNA cutting enzymes make a precise break at specific locations in a cell's DNA—in this case in the gene that causes PNH.
Of all the cells surviving selection, they picked and grew eight iPS cell lines to study further; five of those contained a targeted insertion at the gene site. Further examination showed that the cells contained the correct number of chromosomes, no longer contained any trace of the molecular scissors, and had the characteristics of cells from PNH patients, which lack a group of cell-surface molecules. source
'Resurrection bug' revived after 120,000 years
00:01 15 June 2009 by Andy Coghlan
A tiny bacterium has been coaxed back to life after spending 120,000 years buried three kilometres deep in the Greenland ice sheet.
Officially named Herminiimonas glaciei, the bug consists of rods just 0.9 micrometres long and 0.4 micrometres in diameter, about 10 to 50 times smaller than the well-known bacterium, Escherichia coli.
Thanks to its tiny dimensions, it can survive in minute veins in the ice, scavenging sparse nutrients that were buried along with the ice. It also has extensive tail-like flagella to help it manoeuvre through the veins to find food.
Researchers in the team coaxed it back to life by keeping it at 2 °C for seven months, then at 5 °C for a further four and a half months, after which they saw colonies of very small purplish-brown bacteria.
Loveland-Curtze speculates that similar microbes may have evolved in the ice on other planets and moons, such as the ice at the poles of Mars and the ice-covered ocean on Europa, one of Jupiter's moons. source