You think robots are in the future? Well, they appear to be in the present. Check out these absolutely awesome articles.
- Monkeys Think, Moving Artificial Arm as Own
- Crash-predicting car can brace itself for impact
- Gaze-tracking shop windows
- The rise of the emotional robot
Monkeys Think, Moving Artificial Arm as Own
By BENEDICT CAREY
Two monkeys with tiny sensors in their brains have learned to control a mechanical arm with just their thoughts, using it to reach for and grab food and even to adjust for the size and stickiness of morsels when necessary, scientists reported on Wednesday.
The report, released online by the journal Nature, is the most striking demonstration to date of brain-machine interface technology. Scientists expect that technology will eventually allow people with spinal cord injuries and other paralyzing conditions to gain more control over their lives.
The findings suggest that brain-controlled prosthetics, while not yet practical, are at least technically within reach.
In previous studies, researchers showed that humans who had been paralyzed for years could learn to control a cursor on a computer screen with their brain waves and that nonhuman primates could use their thoughts to move a mechanical arm, a robotic hand or a robot on a treadmill.
The new experiment goes a step further. In it, the monkeys’ brains seem to have adopted the mechanical appendage as their own, refining its movement as it interacted with real objects in real time. The monkeys had their own arms gently restrained while they learned to use the added one.
Experts not involved with the study said the findings were likely to accelerate interest in human testing, especially given the need to treat head and spinal injuries in veterans returning from Iraq and Afghanistan.
In the experiment, two macaques first used a joystick to gain a feel for the arm, which had shoulder joints, an elbow and a grasping claw with two mechanical fingers.
Then, just beneath the monkeys’ skulls, the scientists implanted a grid about the size of a large freckle. It sat on the motor cortex, over a patch of cells known to signal arm and hand movements. The grid held 100 tiny electrodes, each connecting to a single neuron, its wires running out of the brain and to a computer.
The computer was programmed to analyze the collective firing of these 100 motor neurons, translate that sum into an electronic command and send it instantaneously to the arm, which was mounted flush with the left shoulder.
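The decoding step described here, turning the collective firing of 100 motor neurons into an arm command, can be sketched as a simple population-vector decoder. This is only an illustration of the general idea: the preferred directions, the cosine tuning model, and the scaling below are invented for the sketch, not the actual algorithm used in the study.

```python
import math

# Sketch of population-vector decoding (illustrative, not the study's method).
# Each recorded neuron is assumed to have a "preferred direction"; the decoded
# movement is the firing-rate-weighted average of those directions.

N = 100  # one neuron per electrode on the grid

# Spread the hypothetical preferred directions evenly around the x-y plane.
preferred = [(math.cos(2 * math.pi * i / N), math.sin(2 * math.pi * i / N))
             for i in range(N)]

def decode(rates):
    """Turn one time-bin of N firing rates into a 2-D velocity command."""
    vx = sum(r * px for r, (px, _) in zip(rates, preferred)) / N
    vy = sum(r * py for r, (_, py) in zip(rates, preferred)) / N
    return vx, vy

# Simulate neurons cosine-tuned to a rightward reach (direction 0 radians):
# a neuron fires more the closer its preferred direction is to the reach.
rates = [max(0.0, math.cos(2 * math.pi * i / N)) for i in range(N)]
vx, vy = decode(rates)
print(f"decoded velocity: ({vx:.2f}, {abs(vy):.2f})")  # points along +x
```

The real system would run a loop like this continuously, streaming each decoded velocity to the arm's motors in real time.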
The scientists used the computer to help the monkeys move the arm at first, essentially teaching them with biofeedback.
After several days, the monkeys needed no help. They sat stationary in a chair, repeatedly manipulating the arm with their brain to reach out and grab grapes, marshmallows and other nuggets dangled in front of them. The snacks reached the mouths about two-thirds of the time — an impressive rate, compared with earlier work.
The monkeys learned to hold the grip open on approaching the food, close it just enough to hold the food and gradually loosen the grip when feeding.
On several occasions, a monkey kept its claw open on the way back, with the food stuck to one finger. At other times, a monkey moved the arm to lick the fingers clean or to push a bit of food into its mouth while ignoring a newly presented morsel.
The animals were apparently freelancing, discovering new uses for the arm, showing “displays of embodiment that would never be seen in a virtual environment,” the researchers wrote.
Scientists have to clear several hurdles before this technology becomes practical, experts said. Implantable electrode grids generally do not last more than a matter of months, for reasons that remain unclear.
The equipment to read and transmit the signal can be cumbersome and in need of continual monitoring and recalibrating. And no one has yet demonstrated a workable wireless system that would eliminate the need for connections through the scalp.
Yet Dr. Schwartz’s team, Dr. Donoghue’s group and others are working on all of the problems, and the two macaques’ rapid learning curve in taking ownership of a foreign limb gives scientists confidence that the main obstacles are technical and, thus, negotiable.
In an editorial accompanying the Nature study, Dr. John F. Kalaska, a neuroscientist at the University of Montreal, argued that after such bugs had been worked out, scientists might even discover areas of the cortex that allow more intimate, subtle control of prosthetic devices.
Robo-monkeys use brain power to grab a bite
Most people who become paralysed or lose limbs retain the mental dexterity to perform physical actions. And by tapping into a region of the brain responsible for movement – the motor cortex – researchers can decode a person's intentions and translate them into action with a prosthetic.
This had been done mostly with monkeys and in virtual worlds or with simple movements, such as reaching out a hand. But two years ago, an American team hacked into the brain of a patient with no control over his arms to direct a computer cursor and a simple robotic arm.
Schwartz's team extracted even more complicated information from the brains of two rhesus macaques by reading the electrical pulses of about 100 brain cells. Normally, millions of neurons fire when we lift an arm or grab a snack, but the signals from a handful of cells are enough to capture the basics, Schwartz says.
His macaques controlled a robotic arm that moved at the shoulder and elbow and could clench and open its hand.
To train the monkeys, the researchers first recorded their brain activity as they controlled the robotic arm with a joystick. Once the monkeys had learned to feed themselves in this way, Schwartz's team secured their arms and made them rely on controlling the robot with their brain.
To avoid frustrating the animals during their first attempts, the researchers partially guided the robot themselves. Gradually, these training aids were dispensed with, and after three weeks the monkeys had mastered the robotic arm.
In tests where a monkey had to grab marshmallows or grapes and feed itself, one monkey succeeded 61% of the time, often reaching for another treat while still chewing on the last one. The animals manoeuvred the arm around obstacles and readjusted its path when researchers moved the food.
"It's impressive how naturally the animal interacts with the robot," says John Kalaska, a neuroscientist at the University of Montreal. "It's a natural extension of their own body because they control it so easily just by thinking."
He says Schwartz's gradual and assisted approach to training the monkeys is likely to work with neural prosthetics in human patients.
My comment: I'm always particularly interested in any synergy between the brain and a computer, for many reasons. This article shows just how well things are going for these kinds of technologies. People often don't understand why interfacing with the brain is important. Well, this is why. How would you feel if they had to cut off your arm? Probably very sucky. But what if that arm could be replaced? It wouldn't be the same (at least not yet), but you would be able to use it as your own. It makes a difference, right?
Crash-predicting car can brace itself for impact
A car that protects those inside by strengthening its frame just before a side-on collision has been crash-tested by European engineers.
The system is the latest demonstration of car safety devices that take action before a crash, not just afterwards. Even established safety features like airbags and seatbelts could be much more effective if they took pre-emptive action just milliseconds before an impact.
The prediction systems that are needed to prepare for an impact are becoming possible due to improvements in sensors and computing, and side impacts may be where they offer the greatest benefit.
Evidence from both real and simulated crashes shows that drivers rarely manage to react to a typical 30 to 40 kilometres per hour side impact, and there is very little distance between passengers and the object that strikes the car.
The system recently crash-tested uses radar and cameras to anticipate an impact just a fraction of a second before it occurs.
When activated, a metal bar slides into place to create a temporary brace that makes the car's frame significantly stronger. The bar bridges a gap between the front door and another bar running across the car and anchored on the chassis.
"The energy of the impact is transferred to the 'unstruck' side of the vehicle," says Joachim Tandler, an engineer at car maker Continental, which is leading EU-funded project APROSYS. "Normally that connection could not be complete," Tandler adds.
Design constraints, like the need to lower the door window, mean car frames cannot be built with the beam already in place. Like airbags, once activated, the brace would need a trip to the workshop to be reset, but the team are working on making the brace retractable.
For the reinforcing bar to be deployed before a crash, the safety system must predict a collision about 230 milliseconds before it happens.
Crash-prediction software uses two radars and stereo cameras positioned in the back window, looking out of the car's side with a range of about 20 metres.
Once the software decides to deploy the moving bar, it snaps into place in 70 milliseconds, driven by a powerful spring. The spring is held back by a coil of shape-memory alloy wire, but an electric current rapidly heats the coil, causing it to "remember" a previous shape and release the spring to drive the bar into place.
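The timing figures above (a collision predicted 230 ms before impact, a bar that snaps into place in 70 ms) imply a budget the prediction software must meet. A toy check of that budget; the margin logic and the example speed are assumptions for illustration, the two timing constants come from the article:

```python
# Toy timing-budget check for the pre-crash brace described above.
# The 230 ms prediction horizon and 70 ms deployment time come from the
# article; everything else here is illustrative.

PREDICTION_HORIZON_S = 0.230  # collision predicted this long before impact
DEPLOY_TIME_S = 0.070         # spring drives the bar into place this fast

def brace_ready_margin(impact_speed_kmh):
    """Return (spare time in s after the bar locks, metres the striking
    vehicle covers while the bar is deploying)."""
    margin = PREDICTION_HORIZON_S - DEPLOY_TIME_S
    speed_ms = impact_speed_kmh / 3.6
    travel = speed_ms * DEPLOY_TIME_S  # distance closed during deployment
    return margin, travel

margin, travel = brace_ready_margin(50)  # NCAP side-impact barrier speed
print(f"margin: {margin * 1000:.0f} ms, travel during deploy: {travel:.2f} m")
```

With the article's numbers, the bar locks roughly 160 ms before impact, during which a 50 km/h barrier closes about a metre, which is why the prediction has to come so early.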
The car was subjected to a safety test used by European car safety watchdog NCAP, which involves hitting it with a barrier travelling at 50 km/h. The tests took place at the Research and Development Center in Transport & Energy in Valladolid, Spain.
The brace-for-impact system reduced the amount that the barrier penetrated the car by between 5 and 8 centimetres – enough to make a difference to the safety of the passengers in the car. "The intrusion velocity was considerably reduced," says Tandler.
Although the brace was a success, the sensors and software used to predict a collision are the parts most likely to be adopted first by car manufacturers. "We can give conventional in-crash devices like airbags more time to react," he adds.
But, in the end, it is economics that decide whether new safety technologies make it onto the road, he says: "It comes down to the manufacturers deciding if the extra weight and cost of installing the system on new cars is worth it." source
My comment: This is even cooler. I live in a country where people die on the roads in numbers you can't even grasp. I don't support reckless driving, but this will surely help the eventual victims of such driving. I doubt it would help in a head-on collision, but it would be great to have for side collisions.
Gaze-tracking shop windows
Eye tracking software has become a mature technology that works effectively in many real situations. So the consumer electronics company Philips hopes to apply it to displays in shop windows.
The company's idea is to track the gaze of window shoppers to determine which items in the window they are staring at, then to display enlarged pictures, a slide show or other information about those items on nearby computer screens.
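The pipeline described here, a gaze point in and item information out, boils down to a point-in-region lookup over the window layout. A minimal sketch of that idea; the item names, coordinates, and responses are entirely invented, since Philips has not published details of the system:

```python
# Hypothetical sketch of mapping an estimated gaze point on a shop window
# to the item being looked at. Layout and coordinates are invented.

ITEMS = {
    "watch":  (0, 0, 40, 30),    # (x, y, width, height) in cm on the window
    "camera": (50, 0, 40, 30),
    "phone":  (100, 0, 40, 30),
}

def item_at_gaze(x, y):
    """Return the name of the item whose region contains the gaze point."""
    for name, (ix, iy, w, h) in ITEMS.items():
        if ix <= x <= ix + w and iy <= y <= iy + h:
            return name
    return None

def on_gaze(x, y):
    """Decide what the nearby screen should show for this gaze point."""
    item = item_at_gaze(x, y)
    if item is not None:
        return f"showing slideshow for: {item}"
    return "no item under gaze"

print(on_gaze(60, 10))  # gaze falls inside the camera's region
```

The hard part in practice is upstream of this lookup: estimating the gaze point accurately enough from cameras that the right region is chosen.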
Philips says that the system could also be used in museums and art galleries to provide visitors with extra information as they need it.
My comment: Now that's a rather creepy one. What I find useful about it is the eventual use of gaze tracking to control a computer. That could be great: no need for that annoying touchpad anymore. You just gaze somewhere and you click. Cool, huh?
The rise of the emotional robot
Duke is careering noisily across a living room floor resplendent in the dark blue and white colours of Duke University in Durham, North Carolina. He's no student but a disc-shaped robotic vacuum cleaner called the Roomba. Not only have his owners dressed him up, they have also given him a name and gender.
Duke is not alone. Such behaviour is common, and takes myriad forms according to a survey of almost 400 Roomba owners, conducted late last year by Ja-Young Sung and Rebecca Grinter, who research human-computer interaction at the Georgia Institute of Technology in Atlanta.
Sung believes that the notion of humans relating to their robots almost as if they were family members or friends is more than just a curiosity. "People want their Roomba to look unique because it has evolved into something that's much more than a gadget," she says. Understanding these responses could be the key to figuring out the sort of relationships people are willing to have with robots.
Until now, robots have been designed for what the robotics industry dubs "dull, dirty and dangerous" jobs, like welding cars, defusing bombs or mowing lawns. Even the name robot comes from robota, the Czech word for drudgery. But Sung's observations suggest that we have moved on. "I have not seen a single family who treats Roomba like a machine if they clothe it," she says.
The Roomba, which is made by iRobot in Burlington, Massachusetts, isn't the only robot that people seem to bond with. US soldiers serving in Iraq and interviewed last year by The Washington Post developed strong emotional attachments to Packbots and Talon robots, which dispose of bombs and locate landmines, and admitted feeling deep sadness when their robots were destroyed in explosions. Some ensured the robots were reconstructed from spare parts when they were damaged and even took them fishing, using the robot arm's gripper to hold their rod.
Figuring out just how far humans are willing to go in shifting the boundaries towards accepting robots as partners rather than mere machines will help designers decide what tasks and functions are appropriate for robots. Meanwhile, working out whether it's the robot or the person who determines the boundary shift might mean designers can deliberately create robots that elicit more feeling from humans.
Not surprisingly, though there are similarities between the way people view robots and other human beings, there are also differences. Daniel Levin and colleagues at Vanderbilt University in Nashville, Tennessee, showed people videos of robots in action and then interviewed them. He says that people are unwilling to attribute intentions to robots, no matter how sophisticated they appear to be.
Further complicating the matter, researchers have also shown that the degree to which someone socialises with and trusts a robot depends on their gender and nationality (See "Enter the gender-specific robot").
But Hiroshi Ishiguro of Osaka University in Japan thinks that the sophistication of our interactions with robots will have few constraints. He has built a remote-controlled doppelgänger, which fidgets, blinks, breathes, talks, moves its eyes and looks eerily like him (New Scientist, 12 October 2006, p 42). Recently he has used it to hold classes at his university while he controls it remotely. He says that people's reactions to his doppelgänger suggest that they are engaging with the robot emotionally. "People treat my copy completely naturally and say hello to it as they walk past," he says. "Robots can be people's partners and they will be." source
My comment: This article is great for several reasons. I don't think it was a great surprise to see people personifying robots; we do that with trees and rocks, and here the robot even seems to respond. What is interesting is that companies may design robots specifically to provoke particular feelings for a certain robot. That brings us a step closer to the movie A.I., a movie I always found too sad, but anyway.