Once we learn the relationship between a cue and its consequences (say, the sound of a bell and the appearance of the white ice cream truck bearing our favorite chocolate cone), do we turn our attention to that bell whenever we hear it? Or do we tuck the information away and marshal our resources toward learning other, novel cues, such as a recorded jingle or a blue truck?
Psychologists who study “attentional allocation” now agree that the answer is both, and they have arrived at two principles to describe the phenomenon. The “predictive” principle says we search for meaningful, important cues amid the “noise” of our environments. The “uncertainty” principle says we pay the most attention to unfamiliar or unpredictable cues, which may yield useful information or surprise us with pleasant or perilous consequences.
Animal studies have supplied evidence for both principles, but research on humans has shown how predictiveness operates, not uncertainty. “There was a clear gap in the research,” says Oren Griffiths, a research fellow at the University of New South Wales in Australia. So he, along with Ameika M. Johnson and Chris J. Mitchell, set out to demonstrate the uncertainty principle in humans.
“We showed that people will pay more attention to a stimulus or a cue if its status as a predictor is unreliable,” he says. The study will be published in an upcoming issue of Psychological Science, a journal of the Association for Psychological Science.
The researchers investigated what is called “negative transfer”—a cognitive process by which a learned association between cue and outcome inhibits any further learning about that cue. We think we know what to expect, so we aren’t paying attention when a different outcome shows up—and we learn that new association more slowly than if the cue or outcome were unpredictable. Negative transfer is a good example of the uncertainty principle at work.
Participants were divided into three groups and given the “allergist test.” They observed “Mrs. X” receiving a small piece of fruit, say an apple. Using a scroll bar, they predicted her allergic reaction, from none to critical. They then learned that her reaction to the apple was “mild.” Later, when Mrs. X ate the apple, she had a severe reaction, which participants also had to learn to predict.
The critical question was how quickly people learned about the severe reaction. Unsurprisingly, if apple was only ever paired with a severe reaction, learning was fast. But what if apple had previously been shown to be dangerous (i.e., to produce a mild allergic reaction)? In that case, learning about the new severe reaction was slow; this is the “negative transfer” effect. The effect did not occur, however, when the initial relationship between apple and allergy was uncertain (if, say, apple was sometimes safe to eat). Under those circumstances, the later association between apple and a severe allergic reaction was learned rapidly.
Why? “They didn’t know what to expect from the cue, so they had to pay more attention to it,” says Griffiths. “That’s because of the uncertainty principle.”
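Griffiths’s explanation echoes what is often considered the standard formalization of the uncertainty principle, the Pearce-Hall model of associative learning, in which the attention (or “associability”) a cue commands rises and falls with recent prediction error. The toy Python simulation below is a rough sketch of that idea, not the authors’ analysis; the parameters, trial counts, and outcome codings are illustrative assumptions of our own.

```python
# Toy Pearce-Hall-style simulation of negative transfer.
# Outcome codings are assumptions: 0.0 = no reaction, 0.3 = mild, 1.0 = severe.

def simulate(pretrain_outcomes, test_outcome=1.0, n_test=10,
             salience=0.5, gamma=0.3):
    """Pretrain the cue on a list of outcomes, then switch to a new
    outcome; return the cue's predicted value after each test trial."""
    v = 0.0      # associative strength: what the cue currently predicts
    alpha = 1.0  # associability: how much attention the cue receives
    for lam in pretrain_outcomes:
        error = lam - v
        v += salience * alpha * error                     # learning scaled by attention
        alpha = gamma * abs(error) + (1 - gamma) * alpha  # attention tracks recent surprise
    trajectory = []
    for _ in range(n_test):
        error = test_outcome - v
        v += salience * alpha * error
        alpha = gamma * abs(error) + (1 - gamma) * alpha
        trajectory.append(round(v, 3))
    return trajectory

# Consistent pretraining: apple always produced a mild reaction.
print("consistent:", simulate([0.3] * 20))
# Uncertain pretraining: apple was sometimes safe, sometimes mild.
print("uncertain: ", simulate([0.3, 0.0] * 10))
```

In this sketch, the consistently pretrained cue ends pretraining with low associability, so its prediction barely moves on the first severe trial; the uncertain cue starts further from the severe outcome yet overtakes within a few trials, because its unpredictable history kept its associability high. That is the negative transfer effect, and its abolition under uncertainty, in miniature.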