Monday, 17 October 2011

Evolutionary Comparison Finds Shocking History for Vertebrates

Wired UK (Oct. 12, 2011) - Evolutionary biologists from Cornell University have discovered that just about every vertebrate on Earth — including humans — descended from an ancient ancestor with a sixth sense: the ability to detect electrical fields in water.

About 500 million years ago there was probably a predatory marine fish with good eyesight, powerful jaws and sharp teeth roaming the oceans, sporting a lateral line system for detecting water movements and a well-developed electroreceptive system to sense predators and prey around it. The vast majority of the 65,000 or so living vertebrate species are its descendants.

A few hundred million years ago, there was a major fork in the evolutionary tree. One lineage led to the ray-finned fishes, or actinopterygians, and they’ve kept a weak electroreceptive system to this day. Sturgeon have receptors in the skin of their heads, for example, and the North American paddlefish has 70,000 receptors in its snout and head.

The other lineage led to lobe-finned fishes, or sarcopterygians, which in turn gave rise to land vertebrates. Some land vertebrates, including salamanders like the Mexican axolotl, still have electroreception. But in the change to terrestrial life, the lineage leading to reptiles, birds and mammals lost that electrosense and the lateral line.

The researchers took the axolotl (to represent the evolutionary lineage leading to land animals) and the paddlefish (as a model for the branch leading to ray-finned fishes) to find out the history of this sense. They found that electrosensors develop in precisely the same pattern from the same embryonic tissue in the developing skin, confirming that this is an ancient sensory system.

Also, the electrosensory organs develop immediately adjacent to the lateral line, providing compelling evidence “that these two sensory systems share a common evolutionary heritage,” said Willy Bemis, Cornell professor of ecology and evolutionary biology and a senior author of the paper.

Bemis and his colleagues will now be able to build a better picture of what the common ancestor of these two lineages looked like.

Clever Test Shows Meerkat Voices Are Personal

"Alan! Alan! Alan!
...oh it's not Alan. Jeff!!"
Wired Science (Oct. 12, 2011) - By using audio trickery to present meerkats with a puzzling situation, biologists have demonstrated that the adorable African critters recognize each other by voice.

The findings are based on tests in which calls from the same individual were played near-simultaneously in two different locations. The implications go beyond meerkats.

Humans and many primates clearly recognize individual voices, a capacity considered fundamental for rich social lives. Some mammals, such as dolphins, have demonstrated the ability in captive settings. But while recognizing voices seems obvious — how else could Meerkat Manor make sense? — it’s been surprisingly difficult to design quantitative studies for truly wild animals other than primates, leaving an important aspect of animal social life in empirical shadow.

“Understanding how animals experience the individuals within their social worlds is key to deciphering the evolution of social and communicative capacities,” write researchers led by University of Zurich ethologist Simon Townsend in their new meerkat study, published Oct. 11 in Biology Letters.

Of course, many animals clearly recognize individuals by scent and sight. But those abilities are considered less cognitively demanding than vocalizations, which can be highly complex and imply mental representations of other individuals. The late, great ethologist Donald Griffin called vocalizations a window into animal minds.

For primates, the best-studied vocalizers, social relationships are often so complex and self-evident that it’s possible to play recordings of individual voices, then see how animals respond. A chimp will, for example, react differently to the voice of each different member of his group.

As for other mammals, experiments in natural settings have tended to involve tests of whether individuals respond to their kin, as with leopard seals recognizing their pups’ cries, or to general social categories.

In one unpublished study, meerkats responded differently to the voice of a group’s dominant female, but it wasn’t clear whether they recognized the individual or simply some categorical sign of dominance.

But Townsend’s team came up with a deceptively simple test that posed a physically impossible scenario to any meerkat capable of recognizing individual voices.

Using hidden speakers, they played recorded calls from one individual on one side of a target meerkat, and then from the other. The situation was similar to hearing a friend shout from the kitchen, then from the second-floor bathroom just a second later.

The meerkats reacted with prolonged vigilance, paying much closer attention than they did to other recorded calls. The situation didn’t compute.

According to Townsend, the methodology could be applied to other animals that haven’t yet been studied, producing an animal kingdom-wide picture of individual voice recognition. That picture could help show what makes humans special — or, conversely, what seemingly special abilities are actually widespread.

“You see this ability in primates, which you’d expect,” said Townsend. “But the fact that we can show this in non-primate social mammals suggests the skill is omnipresent. It suggests that humans aren’t so unique.”

Children Like Teamwork More Than Chimps Do

LiveScience (Oct. 13, 2011) - Chimpanzees and humans are fairly close cousins, evolutionarily speaking. But a new study finds they lack something that we have (besides written language and hairlessness): a desire to work together.

When all other things are equal, 3-year-old children prefer to do a task collaboratively rather than alone, while chimpanzees show no such preference, said study researcher Yvonne Rekers, a cognitive scientist at the Max Planck Institute for Evolutionary Anthropology in Germany.

"We expected that difference between human and chimpanzee cooperation, because we can see it nowadays," Rekers told LiveScience. "Humans collaborate in a larger variety of contexts and in more complex forms."

However, that leaves the question: Why these differences in cooperation? Cognitive abilities may be at the root of some of them, Rekers said, but motivation could matter as well.

Working together

To investigate the motivations of both species, the researchers chose a task that both groups would willingly undertake: pulling a rope to get a food reward. The children in the study got gummy frogs as their treat, while the chimpanzees got bananas.

Fifteen chimps and 24 children were introduced to the same experimental set-up: a room containing both a single end of rope and a doubled-over rope with two available ends. The 3-year-olds and the chimps were all taught that by pulling both ends of the doubled-over rope at the same time, they could draw a food-laden board toward them, delivering a batch of gummy frogs or bananas.

Pulling the single rope would produce the identical food reward, but only with the help of another child or chimp in the room next door, who had to pull the opposite end of the rope at the same time. (The child or chimp acting as the potential partner in the experiment wasn't being tested; he or she had only the single end to pull. The potential partners were, however, highly motivated to pull that rope, because they too knew that a food reward would be coming their way.)

Cooperating kids

Despite the fact that the chimps got their food four to five seconds faster when they pulled the single end and worked with a partner than when they pulled both ends of the doubled rope by themselves, they were just as likely to choose the doubled rope, the researchers said. The chimps chose the single-ended rope 58 percent of the time, a number not significantly different from chance.

The 3-year-old children, by contrast, chose to pull the collaborative single rope in 78 percent of trials, even though it did not produce snacks any faster.

The children had all practiced the game beforehand and so knew how it worked. They, like the chimps, could see their potential partner through an opening between the two rooms. But to make their experience more like that of the chimps, the kids were encouraged not to speak during the experiment.

In order to keep all factors constant, a snack went to the cooperating child (the one not being tested) regardless of whether he or she was called upon to pull. That set-up, however, led Rekers and her colleagues to worry that perhaps the tested children were picking the collaborative work to prevent their partners from getting gummy frogs for doing nothing.

The researchers set up a second experiment with 12 new children in which the potential partner never received a reward — at least not within the sight of the tested child. The results were essentially unchanged, with 81 percent of kids choosing to work together. That finding suggested that the original result was not influenced by any desire to prevent freeloaders.
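The story doesn't report the underlying trial counts, so the numbers below are placeholders and this is not necessarily the analysis the authors ran; but a binomial test along these lines is one standard way to check whether a choice rate like the chimps' 58 percent (or the children's 78 percent) really differs from a coin flip:

```python
# Minimal sketch of testing a choice rate against chance (50/50).
# The trial counts below are made-up placeholders, not the study's data.
from scipy.stats import binomtest

def differs_from_chance(successes: int, trials: int, alpha: float = 0.05) -> bool:
    """Two-sided binomial test of the observed rate against p = 0.5."""
    result = binomtest(successes, trials, p=0.5)
    print(f"{successes}/{trials} = {successes / trials:.0%}, p = {result.pvalue:.3f}")
    return result.pvalue < alpha

differs_from_chance(29, 50)   # ~58% of hypothetical trials: not significant at this sample size
differs_from_chance(39, 50)   # ~78% of hypothetical trials: clearly above chance
```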

Rekers and her colleagues aren't sure whether this preference for cooperation is innate in humans or not, but one theory is that evolutionary pressures at some point nudged humans, but not chimps, into becoming cooperative foragers. The next step, Rekers said, is to study other primate species, such as bonobos.

She said she also plans to look into what children get out of working together.

"Is it just that they enjoy doing stuff together?" she said. "Or are they following other strategies or goals?"

Monday, 10 October 2011

Alison Gopnik: What do babies think?

TED (Oct. 10, 2011) - "Babies and young children are like the R&D division of the human species," says psychologist Alison Gopnik. Her research explores the sophisticated intelligence-gathering and decision-making that babies are really doing when they play.

Psychologists Decipher Brain’s Clever Autofocus Software

Wired Science (Oct. 10, 2011) - It’s something we all take for granted: our ability to look at an object, near or far, and bring it instantly into focus. The eyes of humans and many animals do this almost instantaneously and with stunning accuracy. Now researchers say they are one step closer to understanding how the brain accomplishes this feat.

Wilson Geisler and Johannes Burge, psychologists at the Center for Perceptual Systems at the University of Texas, Austin, have developed a simple algorithm for quickly and accurately estimating the focus error from a single blurry image, something they say is key to understanding how biological visual systems avoid the repetitive guess-and-check method employed by digital cameras. The discovery may advance our understanding of how nearsightedness develops in humans or help engineers improve digital cameras, the researchers say.

In order to see an object clearly, an accurate estimate of blur is important. Humans and animals instinctively extract key features from a blurry image, use that information to determine their distance from an object, then instantly focus the eye to the precise desired focal length, Geisler explains. “In some animals, that’s the primary way they sense distance,” he says. For example, the chameleon relies on this method to pinpoint the location of a flying insect and snap its tongue to that exact spot. Altering the amount of blur by placing a lens in front of its eye causes the chameleon to misjudge the distance in a predictable way.
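The article doesn't spell out the optics, but the blur-to-distance link the chameleon exploits falls out of textbook thin-lens geometry. Here is a minimal sketch with made-up eye parameters (the focal length, pupil size and viewing distances are illustrative values, not measurements from the study):

```python
# Thin-lens sketch of how defocus blur encodes object distance.
# All numbers are illustrative, not taken from the paper.

def blur_circle_diameter(obj_dist_m: float, focus_dist_m: float,
                         focal_len_m: float = 0.017, aperture_m: float = 0.006) -> float:
    """Diameter (m) of the blur circle for an object at obj_dist_m when the
    eye/camera is focused at focus_dist_m. Uses the standard relation
    c = A * s * |1/d_object - 1/d_focus|, where s is the image distance
    for the focused plane from the thin-lens equation."""
    s = 1.0 / (1.0 / focal_len_m - 1.0 / focus_dist_m)   # image distance of the focused plane
    return aperture_m * s * abs(1.0 / obj_dist_m - 1.0 / focus_dist_m)

# Focused at 2 m, the amount of blur changes predictably with target distance,
# which is what makes blur usable as a distance cue (and why putting a lens in
# front of the eye biases the chameleon's estimate in a predictable way).
for d in (0.25, 0.5, 1.0, 2.0, 4.0):
    print(f"object at {d:4.2f} m -> blur circle {blur_circle_diameter(d, 2.0) * 1e6:6.1f} µm")
```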

But scientists didn’t know how biological visual systems estimate blur so well. Many researchers had thought the brain used a system of guessing and checking to get to the answer, much like the way a camera’s auto-focus system works. Basically, the camera changes the focal distance, measures the contrast in the image it sees, and repeats the process until it has maximized the contrast, Burge says.

“This search procedure is slow, often begins its search in the wrong direction, and relies on the assumption that maximum contrast equals best focus—which is not strictly true,” Burge says.
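For illustration only (this is not real camera firmware or the authors' model), a toy version of that guess-and-check search might look like the following, with a simulated "lens" that blurs a random texture more the farther it sits from the true focus position:

```python
# Toy guess-and-check autofocus: step the lens, measure contrast, keep going
# while contrast improves, reverse and shrink the step when it drops.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
scene = rng.standard_normal((128, 128))           # stand-in for a natural image
true_focus = 0.62                                 # unknown to the algorithm

def image_at(lens_pos: float) -> np.ndarray:
    """Simulated sensor image: blur grows with distance from the true focus."""
    return gaussian_filter(scene, sigma=0.1 + 8.0 * abs(lens_pos - true_focus))

def contrast(img: np.ndarray) -> float:
    """Simple sharpness proxy: local gradient energy."""
    gy, gx = np.gradient(img)
    return float(np.mean(gx**2 + gy**2))

lens, step = 0.0, 0.05
best = contrast(image_at(lens))
for _ in range(60):                               # iterative guess-and-check
    trial = contrast(image_at(lens + step))
    if trial > best:
        lens, best = lens + step, trial           # keep stepping the same way
    else:
        step *= -0.5                              # wrong direction or overshoot: reverse, shrink
print(f"estimated focus {lens:.3f} vs true {true_focus}")
```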

In an attempt to resolve the question of how humans and animals might use blur to accurately estimate distance, Geisler and Burge used well-known mathematical equations to create a computer simulation of the human visual system. They presented the computer with digital images of natural scenes similar to what a person might see, such as faces, flowers, or scenery, and observed that although the content of these images varied widely, many features of the images—patterns of sharpness and blurriness and relative amounts of detail—remained the same.

The duo then attempted to mimic how the human visual system might be processing these images by adding a set of filters to their model designed to detect these features. When they blurred the images by systematically changing the focus error in the computer simulation and tested the response of the filters, the researchers found that they could predict the exact amount of focus error by the pattern of response they observed in the feature detectors. The researchers say this provides a potential explanation for how the brains of humans and animals can quickly and accurately determine focus error without guessing and checking. Their research appears online this week in the Proceedings of the National Academy of Sciences.
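The paper's actual filters aren't described in the article, so the sketch below only captures the general idea under simplified assumptions: calibrate how a small bank of band-pass filters responds at known amounts of blur, then read the focus error of a new image directly off its pattern of filter responses, with no search. (A real focus error also has a sign, too near or too far, which the authors say the eye's imperfections help disambiguate; this toy version only recovers the magnitude.)

```python
# Toy single-image defocus estimation: compare a new image's band-pass
# "signature" against signatures recorded at known blur levels. The filters
# and calibration here are simplified stand-ins for the paper's method.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)

def bandpass_signature(img: np.ndarray) -> np.ndarray:
    """Responses of a few difference-of-Gaussian band-pass filters,
    normalized so the pattern (not overall contrast) carries the information."""
    scales = [1, 2, 4, 8]
    resp = np.array([np.std(gaussian_filter(img, s) - gaussian_filter(img, 2 * s))
                     for s in scales])
    return resp / resp.sum()

# Calibration: average signatures of training textures blurred by known amounts.
blur_levels = np.linspace(0.0, 6.0, 13)
train = [rng.standard_normal((128, 128)) for _ in range(8)]
signatures = np.array([np.mean([bandpass_signature(gaussian_filter(t, b) if b > 0 else t)
                                for t in train], axis=0)
                       for b in blur_levels])

def estimate_blur(img: np.ndarray) -> float:
    """Pick the calibrated blur level whose signature best matches the image's."""
    sig = bandpass_signature(img)
    return float(blur_levels[np.argmin(np.linalg.norm(signatures - sig, axis=1))])

test = gaussian_filter(rng.standard_normal((128, 128)), 2.5)   # unseen texture, blur 2.5
print("estimated blur:", estimate_blur(test))                  # should land at or near 2.5
```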

“They’ve provided proof that there is enough information in a static image to determine if an object is too close or too far away,” says Larry Thibos, a professor of optometry and vision researcher at Indiana University, Bloomington. “We’ve known for 50 or 60 years that people are very good at knowing whether or not something is in focus. It’s taken this paper to show us how the visual system might accomplish this feat.”

The researchers also added common visual imperfections to their simulations and found that when it comes to judging focus, flaws are actually a good thing.

“What we discovered is that the imperfections in the eye—things like astigmatism and chromatic aberration—actually help it to focus,” Geisler explains. That may help explain why people who have had their astigmatism corrected through laser eye surgery often have trouble focusing for several weeks afterward, Geisler says.

That sort of understanding may have an impact on medical decisions, Thibos says. “People might be tempted to try and perfect nature,” he says, “when maybe it’s better to be a little bit imperfect.”

Saturday, 8 October 2011

Problem-Solving Elephant

UPI.com (Oct. 8, 2011) - An elephant at the National Zoo in Washington devised a problem-solving strategy to reach a branch with his trunk and grab a treat, zoo officials said.

Kandula, the zoo's youngest elephant, figured out how to roll a large cube underneath the branch and stand on it to secure his meal.

Scientists said that sort of spontaneous problem-solving had never been seen in elephants before, even though they can recognize themselves in mirrors, drop logs to collapse fences to get to food and even dig wells, The Washington Post reported.

"We knew elephants were intelligent," said Diana Reiss, who studies animal intelligence at City University of New York. But although elephants are in some ways as intelligent as dolphins and chimpanzees, researchers said, all previous attempts to get them to solve a problem spontaneously had failed.

In a study published in the journal PLoS ONE, researchers described hanging bamboo and fruit just out of reach of elephants at the National Zoo, placing a cube or aluminum tub nearby.

In the seventh session, researchers said, Kandula "just suddenly did it." And in the next session, Kandula rolled the cube all over the elephant compound, using it to reach a flower he wanted to sniff and to play with a toy hung from a tree, they said.

Thursday, 6 October 2011

Monkeys Use Mind Control

ScienceNOW (Oct. 6, 2011) - By implanting electrodes into both the motor and the sensory areas of the brain, researchers have created a virtual prosthetic hand that monkeys control using only their minds, and that enables them to feel virtual textures.

Neuroscientist Miguel Nicolelis of Duke University in Durham, N.C., whose group has been developing so-called brain-machine interfaces, says that one of the pitfalls in these systems is that “no one’s been able to close the loop” between controlling a limb and feeling a physical touch. So he and a group of researchers decided to create a “brain-machine-brain” interface using a virtual system.

The researchers implanted two sets of tiny electrodes into a monkey’s brain: one set in the motor control center, and the other in the part of the somatosensory cortex that processes the sensation of physical touch from the left hand. Using the first set, the monkey could control a virtual monkey arm on a computer screen and sweep the hand over virtual disks with different “textures.” Meanwhile, the second set of electrodes fed a series of electrical pulses into the touch center of its brain. A low frequency of pulses indicated a rough texture, whereas a high frequency indicated a fine texture (see video), and the monkeys quickly learned to tell the difference.
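The study's actual stimulation parameters aren't given in the article, so the pulse rates below are placeholder values; the sketch simply illustrates the encoding idea, mapping each virtual texture to a pulse train of a different frequency:

```python
# Toy version of the texture-to-pulse-train encoding described above:
# rougher virtual texture -> lower pulse rate, finer texture -> higher rate.
# The rates are assumed, illustrative values, not those used in the study.
import numpy as np

PULSE_RATE_HZ = {"rough": 10.0, "fine": 100.0}

def pulse_times(texture: str, duration_s: float = 0.5) -> np.ndarray:
    """Timestamps (s) of regularly spaced stimulation pulses for one touch."""
    rate = PULSE_RATE_HZ[texture]
    return np.arange(0.0, duration_s, 1.0 / rate)

for tex in ("rough", "fine"):
    t = pulse_times(tex)
    print(f"{tex:5s}: {len(t)} pulses in 0.5 s, e.g. {np.round(t[:4], 3)} ...")
```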

By giving the monkey rewards when it identified the right texture, the researchers discovered that it took as few as four training sessions for the animal to consistently distinguish the textures from one another, even when the researchers switched the order of the visually identical disks on the screen. The researchers then implanted the electrodes into the sensory region that receives tactile sensations from the foot in a different monkey; this monkey, too, acted as if the virtual appendage (in this case, the foot) was its own, moving it to correctly identify the textures, the team reports online today in Nature.

Although the monkeys are all adults, the motor and sensory regions of their brains are amazingly plastic, Nicolelis says: the combination of seeing an appendage that they control and feeling a physical touch tricks them into thinking that the virtual appendage is their own “within minutes.” And throughout this experiment, the monkey’s own general sense of touch didn’t seem to be affected. “The brain,” Nicolelis says, “is creating a sixth sense.”

“It’s definitely a milestone in brain-computer interfaces,” says neuroscientist Sliman Bensmaia of the University of Chicago, who is developing touch-feedback systems for human prosthetics. Too many of the robotic arms now being developed, even very advanced ones, he says, ignore the importance of touch. “Sensory feedback is critical to doing anything,” he says. Even mundane tasks like picking up a cup require a great deal of concentration so the wearer does not drop or crush it.

The new work is still an early step, however, he says. A biological arm receives countless inputs not only from texture but also from temperature and its position in space.

Nicolelis says his group is currently working on fine-tuning the sensory feedback as well as exploring ways to link the brain and computer wirelessly. After many years of working on brain-computer interfaces, he says, “We’re getting very close to where they may be clinically useful” for paralyzed patients outside the lab, and for doctors as well. Touch feedback may allow surgeons, for instance, to perform microscopic surgery, among countless other applications. “The brain,” Nicolelis says, “has evolved capabilities that go way beyond the body.”

Saturday, 1 October 2011

Ben Goldacre: Battling Bad Science


TED TALKS (Filmed July 2011, Posted Sept 2011) - Every day there are news reports of new health advice, but how can you know if they're right? Doctor and epidemiologist Ben Goldacre shows us, at high speed, the ways evidence can be distorted, from the blindingly obvious nutrition claims to the very subtle tricks of the pharmaceutical industry.

Richard Dawkins Interview

An interview with the evolutionary biologist, best-selling author and outspoken atheist.

Archaeologists find ancient 'child cave art' in the Dordogne


BBC Science (Sept. 30, 2011) - Jessica Cooney, an archaeologist from Cambridge University, talks about paintings found in a cave "nursery" in France.

One of the pieces of art, discovered in the cave of Rouffignac in the Dordogne, is thought to be a 13,000-year-old finger painting created by two- and three-year-olds.

Jessica Cooney, speaking to the BBC's David Sillito, explains that the most prolific artist is thought to be a five-year-old girl.
