One of the main controversies inspired by the Darwin-Wallace theory of evolution was the issue of continuity.
Continuity: the idea that all living things are related to each other to some degree.
The theory of evolution made human beings just another species rather than the final, perfect product of Aristotle’s scale of nature. The adoption of continuity by scientists late in the 19th century opened up the door to the study of animal psychology.
Darwin himself had pioneered that subfield following the publication of his 1872 book, The Expression of the Emotions in Man and Animals.
George Romanes, a younger follower of Darwin, popularized the idea of studying animal intelligence through his books on the subject.
C. Lloyd Morgan, in turn, followed Romanes and revised much of his work through careful observation along with a much more thorough and skeptical approach to the field. Unlike Romanes, Morgan almost completely discounted the anecdotal reports from untrained observers with which Romanes had become enamored late in his life and career. Romanes, for example, had bought a monkey for his sister to raise. When the monkey learned how to unscrew a bolt, Romanes inferred that it had learned the mechanical principles behind threaded screws.
Video on Romanes shows his anthropomorphic approach to animal behavior.
Morgan, while not doubting that the animal could indeed screw and unscrew bolts, interpreted the monkey’s behavior more cautiously; it had learned that turning the bolt one way or another led to a predictable outcome.
Morgan went on to suggest what he called his canon (Morgan, 1894, p. 53):
"In no case may we interpret an action as the outcome of the exercise of a higher psychical faculty, if it can be interpreted as the outcome of the exercise of one which stands lower in the psychological scale.
Morgan intended for his canon to put animal psychology on a par with human psychology by providing a rule for interpreting animal minds scientifically (Costall, 1993)."
Morgan did not wish to disallow comparison between human and animal intelligences. In fact, he believed that such comparisons were legitimate; they just had to be made carefully (Karin-D’Arcy, 2005).
However, a completely different result ensued. The comparison of animal intelligence and human intelligence disappeared as a psychological topic until its revival late in the 20th century. Thus, comparative psychology was a part of early psychology that grew out of the discipline's border with biology. Today, the situation is not much different for the subfield: although comparative psychology reappeared within psychology, it has gradually become a rarer course in the psychology curriculum.
Abramson (2018, p. 3) in an article supporting comparative psychology’s importance in the face of generalized neglect by the rest of psychology stated:
"In my view, there is no psychology as important as comparative psychology. The skills and perspective of a comparative psychologist would make them a highly valued member of any research team. Comparative psychology should be taught not only at the college level but in high school as well.
Remarkably, modern psychology has all but ignored a pioneer in the study of animal cognition, Charles H. Turner. He was an African American psychologist who conducted experimental research on animal cognition in insects, birds, and reptiles. Despite publishing important research results, including three articles in Science, only a handful of his contemporaries were aware of his contributions. Today, in his honor, the Animal Behavior Society gives an annual travel award to members of underrepresented groups (Dona & Chittka, 2020).
Margaret Floy Washburn came to Cornell and was Titchener's first graduate student
Taught at Vassar and other women's colleges
Not invited to join Titchener's Experimental Psychologists
Published The Animal Mind
That book did not incorporate good experimental practices
Thorndike's work did
Major Theoretical Concepts
Thorndike was the first to espouse connectionism (early S-R psychology)
Used puzzle boxes and pioneered the learning curve
Trial and Error learning
The learning curve described trial and error learning
Learning is not insightful
Ideas are not a part of learning
All mammals (including humans) learned in the same way
That idea will be challenged later (Tolman, cognitive psychology)
Thorndike Before 1930
Laws of learning
Readiness
Animals ready to act find acting satisfying
Animals ready to act but prevented from acting become annoyed
Animals not ready to act but forced to do so may become frustrated
Exercise
S-R connections are strengthened with use
S-R connections not practiced become weaker
Effect
"Of several responses made to the same situation, those which are accompanied or closely followed by satisfaction to the animal will, other things being equal, be more firmly connected with the situation, so that, when it recurs, they will be more likely to recur; those which are accompanied or closely followed by discomfort to the animal will, other things being equal, have their connections with that situation weakened, so that, when it recurs, they will be less likely to occur. The greater the satisfaction or discomfort, the greater the strengthening or weakening of the bond.(Thorndike, 1911, p. 244)"
Notice the non-behavioral terminology (e.g., "satisfaction" or "discomfort")
First definition of reinforcement (and we'll see many more)
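To make the Law of Effect concrete, here is a little Python sketch (my illustration, not Thorndike's own formalism): a simulated puzzle-box cat samples responses in proportion to their S-R connection strengths, the bond for the response followed by escape (satisfaction) is stamped in, and the failed responses are slightly stamped out, so escape typically takes fewer attempts across trials. The response names and the learning rate are invented for the example.

    import random

    # Minimal sketch of trial-and-error learning under the Law of Effect.
    # Response names and the learning rate are illustrative, not Thorndike's.
    strengths = {"paw latch": 1.0, "scratch bars": 1.0, "meow": 1.0, "push ceiling": 1.0}
    EFFECTIVE = "paw latch"   # the one response that opens this hypothetical puzzle box
    RATE = 0.3

    def run_trial(strengths):
        """Sample responses in proportion to S-R strength until the box opens."""
        attempts = 0
        while True:
            attempts += 1
            response = random.choices(list(strengths), weights=list(strengths.values()))[0]
            if response == EFFECTIVE:
                strengths[response] += RATE                    # satisfaction strengthens the bond
                return attempts
            strengths[response] = max(0.1, strengths[response] - RATE * 0.2)  # discomfort weakens it

    for trial in range(1, 21):
        print(f"Trial {trial:2d}: escaped after {run_trial(strengths)} attempts")

Plotting attempts against trial number yields exactly the kind of learning curve Thorndike reported: gradual improvement, with no sudden insight.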
Secondary Concepts Before 1930
Multiple Responses
Look at the cat in the Thorndike video on the Thorndike page.
Notice how the naive cat tries several behaviors in its attempts to escape the puzzle box
For Thorndike, trial and error learning consisted of organisms trying out behaviors and rejecting those that did not work
Set or Attitude
Many factors can affect learning:
Culture
Genetics
Deprivation
Fatigue
Prepotency of Elements
Organisms select the aspects of the environment they deem most important
Deer can see the color orange, but they respond more to odor or movement
Notice that much of the power of learning comes from transfer from the learned situation to a new one
Think of driving a car: they are all a little different, but what you learn in one transfers to another
Recently standard transmissions (stick shifts) have become rare.
So, if you were to borrow my old truck and had never driven a standard transmission, your previous experience with automatic transmissions would not help you
Radical Behaviorism: As far as radical behaviorists are concerned, no border exists between psychology and biology.
Borrowing the mechanism of selection from evolutionary theory, they argue that selection by consequences operates at three levels.
The first level is Darwin’s natural selection. Innate behaviors come from this level.
The second level is operant conditioning that selects organisms’ emitted behaviors (or operants) through the action of the environment. Learned behaviors in animals and humans come from this level.
The final level is cultural where humans’ verbal responses (also considered to be operants) are selected through the action of the linguistic communities people live in. Culturally based behaviors in humans come from this level.
Radical Behaviorism interprets each type of selection at its own level with each possessing its own time frame.
So, phylogenetically based innate behaviors evolved over millions of years.
Learned behaviors in animals and humans develop over the course of the lifetimes of individual species members.
Culturally based behaviors also evolve over long periods, from as little as several lifetimes to thousands of years.
In terms of theory, radical behaviorists confine themselves to the last two levels, but they take pains to demonstrate that at all three levels the relevant units (genes, behaviors, or verbal behaviors) are selected mechanistically according to the environmental consequences operating at each level.
Radical Behaviorism, what is it?
Skinner's definition of psychology: a science of behavior
But, only directly observable stimuli admitted
But, "private" stimuli are observable by self
Rejects mentalism
Types of Behaviorism
Methodological--nearly all psychologists, scientific method, behavior
Mediational--most restrictive, only accepts operationally defined units (see: Hull)
Radical--less restrictive, observable behaviors can be public or private
Functional Analysis of Behavior
Search for events in the environment that govern behavior: stimuli, responses
Traditional three-step causal chain: E->M->R (environmental event -> mental event -> response)
Skinner suggests that mental causes are "explanatory fictions" at worst and "mental way stations" at best
Proposes E->R instead, making him an environmental determinist
Single-N Designs
Small-N research is conducted in many areas of psychology, especially in clinical psychology and in Skinnerian behavior analysis. The number of participants varies from 1 (the smallest N) to larger numbers of participants.
Dukes (1965) notes that small–N research is most appropriate for situations involving low variability between subjects or when the opportunity to research a specific situation is limited.
Dukes, W. F. (1965). N = 1. Psychological Bulletin, 64, 74-79.
Is Skinner's Radical Behaviorism Antitheoretical?
Skinner's approach is theoretical; however, it demands observable data.
Assumptions:
External stimuli govern behavior
Operant conditioning accounts for much of behavior
Experimental (or functional) analysis of behavior
Controlling the environment will control behavior
Respondent conditioning (his term for classical conditioning)
Darwin and Skinner
Skinner sees responses as "naturally selected" by the environment
Instrumental Conditioning
Very similar to operant
Apparatus is different
Mazes
Lashley Jumping Stand
The key is that the response can only be made on a trial-by-trial basis
Discriminative Stimuli
SD: a stimulus signaling the opportunity for reinforcement (e.g., a pigeon and a green light)
Reinforcement only delivered when green light is on
Pigeon learns to only respond when green light is on
Human example:
"Want a back rub?"
SΔ: a stimulus signaling the lack of opportunity for reinforcement (e.g., a frown)
For the pigeon, the red light signals nonreinforcement
Human example:
"I have such a headache"
In other words, "don't ask"
Discriminative stimuli are the first step in a reinforcement situation
SD -> R -> SR
Or, discriminative stimulus, followed by response, followed by reinforcement
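Here is a minimal Python sketch of the SD -> R -> SR chain (mine, not Skinner's): a simulated pigeon's peck probability is nudged up only when a peck is reinforced under the green light, and nudged down when pecks under the red light go unreinforced. All starting values and step sizes are illustrative.

    import random

    # Sketch of discriminative control: pecks under the green light (SD) are reinforced,
    # pecks under the red light (S-delta) are not. Starting probabilities are illustrative.
    peck_probability = {"green": 0.5, "red": 0.5}

    def trial(light):
        pecked = random.random() < peck_probability[light]
        if pecked and light == "green":      # SD -> R -> SR: reinforcement strengthens responding
            peck_probability[light] = min(0.95, peck_probability[light] + 0.05)
        elif pecked:                         # S-delta: no reinforcement, responding extinguishes
            peck_probability[light] = max(0.05, peck_probability[light] - 0.05)

    for _ in range(500):
        trial(random.choice(["green", "red"]))

    print(peck_probability)   # responding comes under the control of the green light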
Reinforcement: "anything that follows a response that changes the probability of a response's occurring again"
A. Positive and Negative Reinforcement
Both RAISE operant level
Positive reinforcement raises operant level when GIVEN after the response
Negative reinforcement raises operant level when TAKEN AWAY after response
B. Punishment
LOWERS operant level
Positive punishment lowers operant level when GIVEN after the response
Negative punishment lowers operant level when TAKEN AWAY after the response
C. Illustration of Reinforcement and Punishment
Reinforcement
Positive-response followed by food/Raise hand, get $20
Negative-response followed by shock removal/Raise hand, shock removed/Nagging that stops when you comply
Punishment
Positive-response followed by shock presentation/Raise hand, get shock
Negative-response followed by food removal/Raise hand, lose points
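The four contingencies boil down to two questions: is a stimulus given or taken away, and does responding subsequently rise or fall? Here is a short Python sketch that classifies a contingency from those two answers (the function name and examples are mine, not Skinner's).

    # Sketch of the four operant contingencies summarized above.
    def classify(stimulus_change: str, operant_level_change: str) -> str:
        """stimulus_change: 'given' or 'removed'; operant_level_change: 'raises' or 'lowers'."""
        kind = "reinforcement" if operant_level_change == "raises" else "punishment"
        sign = "positive" if stimulus_change == "given" else "negative"
        return f"{sign} {kind}"

    print(classify("given", "raises"))    # positive reinforcement: raise hand, get $20
    print(classify("removed", "raises"))  # negative reinforcement: raise hand, shock removed
    print(classify("given", "lowers"))    # positive punishment: raise hand, get shock
    print(classify("removed", "lowers"))  # negative punishment: raise hand, lose points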
D. Primary and Secondary Reinforcers
Primary-associated with biological drives/primary reinforcers and UCSs are identical
Secondary-neutral stimuli that acquire reinforcing power through pairing with primary reinforcers
Hypothetico-Deductive system: a system that uses logic to deduce new, derived, and logically consistent statements from a small set of given truths (postulates).
Those deductions are then tested experimentally.
Statements experimentally confirmed are kept and the others are discarded.
Major Theoretical Concepts: Inspired by Euclid's Geometry and Newton's Principia
Postulates and Theorems
16 major postulates (1943). See HERE for more detail
Postulate 1 – Sensory input (the afferent neural impulse) and the stimulus trace (afferent impulse decay)
Postulate 2 – Interaction of afferent neural impulses
Postulate 3 – Innate behavior tendencies
Postulate 4 – Habit strength as a function of the temporal relation of the conditioned stimulus to the reaction
Postulate 5 – Primary stimulus equivalence and stimulus generalization
Postulate 6 – Primary motivation
Postulate 7 – Reaction potential
Postulate 8 – Innate inhibition from primary negative drive
Postulate 9 – Conditioned inhibition (the learned response of not responding)
Postulate 10 – Inhibitory potentiality varies from instant to instant
The fractional antedating goal response (rG): a reaction that develops progressively earlier during the conditioning of a series of responses and may become a conditioned stimulus for subsequent responses.
To explain long-delay learning without resorting to cognitive theorizing
PATCHING of Hull's system, AKA--"weak" S-R theory
In other words, Hull needed a way to explain behavior sequences without resorting to a cognitive approach
So, in the graphic above, classical conditioning and the various sections of the runway (s6 to s1) drive the rat to the goal box
The rat need not "know" (in Hull's universe, at least) that there is food in the goal box
Oscillation and Threshold
Note that learning is variable
Note that a "steep learning curve" is a GOOD thing!
Look at the envelope for the learning curves above:
The middle line is the mean and the two outside lines are the limits
The first line (the steepest one) reached the threshold of learning sLr first
How often have you seen "steep learning curve" misused?
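To see why "steep" means fast learning, and where the envelope comes from, here is a small Python sketch that generates a negatively accelerated learning curve with random oscillation around the mean. The asymptote, rate, and noise values are invented for the illustration; they are not Hull's parameters.

    import random

    # Sketch of a negatively accelerated learning curve with oscillation (trial-to-trial variability).
    ASYMPTOTE, RATE, NOISE = 100.0, 0.5, 5.0   # illustrative values; a higher RATE = a steeper curve

    def reaction_potential(trial):
        mean = ASYMPTOTE * (1 - (1 - RATE) ** trial)   # smooth growth toward the asymptote
        return mean + random.uniform(-NOISE, NOISE)    # oscillation scatters points around the mean

    for t in range(1, 11):
        print(f"Trial {t:2d}: {reaction_potential(t):6.1f}")
    # The steeper (faster-rising) the curve, the sooner it crosses the threshold of learning (sLr).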
sEr = (sHr x D x V x K) - Ir - sIr - sLr ± sOr
Hull's final equation (above) also shows that if any of the first four terms (sHr, D, V, or K) is 0, then sEr is 0
Let me illustrate using D (drive)
D will eventually go to 0 (satiety: being full or no longer being thirsty)
What happens then? sEr is now 0 too
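Here is a tiny Python sketch of the combination rule, using the symbols from the formula above as parameter names; the numeric values are illustrative, not Hull's published units. It shows that the first four terms multiply, so zeroing D zeroes sEr, while the inhibitory terms merely subtract.

    # Sketch of Hull's combination rule; symbols follow the formula above, values are illustrative.
    def reaction_potential(sHr, D, V, K, Ir=0.0, sIr=0.0, sLr=0.0, sOr=0.0):
        # Habit strength, drive, stimulus intensity, and incentive multiply;
        # inhibition and the threshold subtract; oscillation (sOr) can go either way.
        return (sHr * D * V * K) - Ir - sIr - sLr + sOr

    print(reaction_potential(sHr=0.8, D=1.0, V=1.0, K=0.9))   # drive present: positive reaction potential
    print(reaction_potential(sHr=0.8, D=0.0, V=1.0, K=0.9))   # satiated (D = 0): sEr collapses to 0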
The Habit Family Hierarchy
As we will see when we study Tolman, he and Hull were at odds, in terms of theory
Tolman and Honzik (1930) set up an experiment designed to attack Hull's notion of habit strength (sHr)
They used the maze below (I apologize for the German labels):
Weg = Path, Sperre = Block, Futter = Food (the goal box)
At first, Tolman and Honzik allowed their rats to explore this maze without any blocks in place
The rats took all the paths but took Path 1 the most and Path 3 the least
So, sHr was greatest for Path 1 and least for Path 3
When the block was put at A, the rats switched to Path 2 (as Hull predicted)
But, when the block was put at B, the rats went to Path 3 immediately
In other words, they went to the path with the lowest sHr
Unlike Hull, Tolman assumed that the rats had a mental map of the maze and that sHr was not a factor
(More on this dispute when we get to Tolman.)
Hull on Education
As the text notes, applying Hull's theory to the classroom is difficult.
Using anxiety and its reduction as a reinforcer seems counterproductive
The Testing Effect is another issue in applying Hull's ideas to the classroom
Testing effect: the finding that taking a test on previously studied material leads to better retention than does restudying that material for an equivalent amount of time. Although testing is often conceptualized as an assessment tool, this finding suggests that testing (or retrieval practice) can also be considered a learning tool. Indeed, exams or tests seem to activate retrieval processes that facilitate the learning of study material and cause knowledge to be stored more effectively in long-term memory. (APA Dictionary of Psychology)
Evaluation of Hull's Theory
Contributions
Extremely influential in his day (the Yale-Iowa Axis)
The use of operational definitions led to many applications
Made testable predictions
For many years psychology was said to revolve around the Yale (Hull)-Iowa (Spence) Axis
Known as "Hullian Psychology"
In Emerson, AR, the Purple Hull Festival is held every summer.
So, in class, I once joked about being a "Purple Hullian" psychologist
Later, the class made me a t-shirt labelling me such a psychologist
Criticisms
Very much lab-based and rat-based, and not easily moved to the real world
He died before he could write his third book
If an operational definition was not possible, then Hullians could not address the topic
Think of "love" for instance
Do you have a good operational definition for that interesting psychological phenomenon?