Shaping (psychology)

Shaping is a conditioning paradigm used primarily in the experimental analysis of behavior. The method used is differential reinforcement of successive approximations. It was introduced by B. F. Skinner[1] with pigeons and extended to dogs, dolphins, humans and other species. In shaping, the form of an existing response is gradually changed across successive trials by reinforcing ever closer approximations of a desired target behavior. Skinner described shaping as follows:

We first give the bird food when it turns slightly in the direction of the spot from any part of the cage. This increases the frequency of such behavior. We then withhold reinforcement until a slight movement is made toward the spot. This again alters the general distribution of behavior without producing a new unit. We continue by reinforcing positions successively closer to the spot, then by reinforcing only when the head is moved slightly forward, and finally only when the beak actually makes contact with the spot. ... The original probability of the response in its final form is very low; in some cases it may even be zero. In this way we can build complicated operants which would never appear in the repertoire of the organism otherwise. By reinforcing a series of successive approximations, we bring a rare response to a very high probability in a short time. ... The total act of turning toward the spot from any point in the box, walking toward it, raising the head, and striking the spot may seem to be a functionally coherent unit of behavior; but it is constructed by a continual process of differential reinforcement from undifferentiated behavior, just as the sculptor shapes his figure from a lump of clay.[2]

Successive approximations

The successive approximations reinforced are increasingly accurate approximations of a response desired by a trainer. As training progresses the trainer stops reinforcing the less accurate approximations. For example, in training a rat to press a lever, the following successive approximations might be reinforced:

  1. simply turning toward the lever will be reinforced
  2. only stepping toward the lever will be reinforced
  3. only moving to within a specified distance from the lever will be reinforced
  4. only touching the lever with any part of the body, such as the nose, will be reinforced
  5. only touching the lever with a specified paw will be reinforced
  6. only depressing the lever partially with the specified paw will be reinforced
  7. only depressing the lever completely with the specified paw will be reinforced

The trainer would start by reinforcing all behaviors in the first category, then restrict reinforcement to responses in the second category, and then progressively restrict reinforcement to each successive, more accurate approximation. As training progresses, the response reinforced becomes progressively more like the desired behavior.

The culmination of the process is that the strength of the response (measured here as the frequency of lever-pressing) increases. At the start of training, the probability that the rat will press the lever is very low; it might do so only by accident. Through training, the rat can be brought to press the lever frequently.
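The logic of the procedure above can be caricatured in a few lines of Python. This is a toy numerical sketch under assumed learning rules (the response distribution, learning rate, and criterion schedule are all invented for illustration, not taken from the conditioning literature): each response is drawn from a distribution, responses inside the current criterion are "reinforced" by shifting the distribution toward them, and the criterion is tightened step by step.

```python
import random

# Toy stand-in for the rat's behavior: each response is a "distance" from a
# full lever press (0.0 = complete press). Reinforcing a response shifts the
# response distribution toward it and narrows it. All parameters are invented.
def shape(target=0.0, steps=7, trials_per_step=200, seed=0):
    rng = random.Random(seed)
    center, spread = 0.9, 0.5      # initial behavior: far from the target
    criterion = 1.0                # how close a response must be to earn food
    for _ in range(steps):
        for _ in range(trials_per_step):
            response = rng.gauss(center, spread)
            if abs(response - target) <= criterion:
                # differential reinforcement: drift toward the reinforced
                # response and reduce its variability
                center += 0.2 * (response - center)
                spread = max(0.05, spread * 0.995)
        criterion *= 0.6           # stop rewarding cruder approximations
    return center, spread
```

Early in this sketch almost any movement "toward the lever" earns food; by the final step only near-complete presses do, and the behavior distribution has migrated to the target.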

Successive approximation should not be confused with feedback, which refers more broadly to many kinds of consequences. Notably, feedback can include punishment, whereas shaping relies on positive reinforcement. Feedback also often denotes a consequence for one specific response out of a range of responses, such as producing the desired note on a musical instrument rather than an incorrect one. Shaping, by contrast, reinforces each intermediate response that more closely resembles the desired one.

Not all approximations succeed. Marian and Keller Breland (students of B. F. Skinner) used shaping to try to train a pig and a raccoon to deposit a coin in a bank. The training failed because of sign-tracking: the coin, which reliably preceded the food reward, came to be treated by the animals as if it were the food itself, and they handled it as they might a snack.[3] Animals that respond this way appear more prone to addictive behaviors than others, and are sometimes called "sign-trackers". An animal that did not behave in this manner and actually placed the coin in the bank would be labeled a "goal-tracker".

Practical applications

Shaping is used in training operant responses in lab animals, and in applied behavior analysis to change human or animal behaviors considered to be maladaptive or dysfunctional. It also plays an important role in commercial animal training. Shaping assists in "discrimination", which is the ability to tell the difference between stimuli that are and are not reinforced, and in "generalization", which is the application of a response learned in one situation to a different but similar situation.[4]

Shaping can also be used in rehabilitation settings. For example, training on parallel bars can serve as an approximation to walking with a walker,[5] or shaping can be used to gradually lengthen the interval between a patient's bathroom visits.


Autoshaping

Autoshaping (sometimes called sign tracking) is any of a variety of experimental procedures used to study classical conditioning. In autoshaping, in contrast to shaping, the reward arrives irrespective of the animal's behavior. In its simplest form, autoshaping is very similar to Pavlov's salivary conditioning procedure with dogs. In Pavlov's best-known procedure, a short audible tone reliably preceded the presentation of food. The dogs naturally, unconditionally, salivated (unconditioned response) to the food (unconditioned stimulus) given to them, but through learning, conditionally, came to salivate (conditioned response) to the tone (conditioned stimulus) that predicted food. In autoshaping, a light is reliably turned on shortly before animals are given food. The animals naturally, unconditionally, display consummatory reactions to the food, and through learning come to direct those same consummatory actions at the conditioned stimulus that predicts food.
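The contrast with shaping can be made concrete in a short sketch. This is a toy delta-rule illustration with made-up parameters (the peck threshold of 0.5 and the learning rate are assumptions, not values from the literature); the point is only that food follows the light on every trial, whether or not the bird pecks.

```python
# Toy autoshaping schedule (invented parameters, not a published model): the
# light (CS) is always followed by food (US), and a simple delta-rule
# association v drives pecking at the lit key once the light predicts food.
def autoshaping(n_trials=100, alpha=0.2):
    v = 0.0                      # associative strength of light -> food
    pecks = []
    for _ in range(n_trials):
        pecks.append(v > 0.5)    # peck once the CS strongly predicts food
        v += alpha * (1.0 - v)   # food arrives regardless of the peck
    return v, pecks
```

In this caricature, pecking emerges after a handful of light–food pairings even though the peck never changes what the bird is fed.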

Autoshaping poses a conundrum for B. F. Skinner's assertion that shaping is required to teach a pigeon to peck a key: if an animal can "shape itself", why use the laborious shaping process? Autoshaping also appears to contradict Skinner's principle of reinforcement. During autoshaping, food arrives irrespective of the animal's behavior. If reinforcement were occurring, whatever random behaviors happened to precede food should increase in frequency, having been accidentally rewarded. Nonetheless, key-pecking reliably develops in pigeons,[6] even though this behavior has never been selectively rewarded.

The clearest evidence that autoshaping is under Pavlovian rather than Skinnerian (operant) control comes from the omission procedure. In that procedure,[7] food is scheduled for delivery after each presentation of a stimulus (often a flash of light), except on trials in which the animal performs a consummatory response to the stimulus, in which case food is withheld. If the behavior were under instrumental control, the animal would stop responding to the stimulus, since that response is followed by the withholding of food. Instead, animals persist in attempting to consume the conditioned stimulus for thousands of trials[8] (a phenomenon known as negative automaintenance), unable to suppress their response to the conditioned stimulus even when it costs them the reward.
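A minimal simulation makes the omission logic concrete. This is a hypothetical sketch (the learning rate and the rule "peck probability equals associative strength" are assumptions for illustration, not a fitted model): the Pavlovian association strengthens on non-peck trials, when the light is still paired with food, and that alone keeps pecking alive even though every peck cancels a reward.

```python
import random

# Toy omission schedule (invented parameters): food follows the light only if
# the bird does NOT peck it. Pecking tracks the Pavlovian association v, which
# strengthens on rewarded (non-peck) trials and weakens on omission trials.
def omission(n_trials=10000, alpha=0.1, seed=1):
    rng = random.Random(seed)
    v = 0.5
    pecks = 0
    for _ in range(n_trials):
        if rng.random() < v:           # peck probability tracks association
            pecks += 1
            v += alpha * (0.0 - v)     # food withheld: association weakens
        else:
            v += alpha * (1.0 - v)     # light paired with food: it strengthens
    return pecks / n_trials
```

The two updates balance when v is near 0.5, so in this caricature pecking settles at an intermediate rate instead of extinguishing, echoing negative automaintenance.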

References

  1. Peterson, G. B. (2004). A day of great illumination: B. F. Skinner's discovery of shaping. Journal of the Experimental Analysis of Behavior, 82, 317–328.
  2. Skinner, B. F. (1953). Science and Human Behavior. Oxford, England: Macmillan. pp. 92–93.
  3. Powell, R.; Symbaluk, D.; Honey, P. (2008). Introduction to Learning and Behavior. Cengage Learning. p. 430. ISBN 9780495595281.
  4. Engler, Barbara. Personality Theories.
  5. Miltenberger, R. (2012). Behavior Modification: Principles and Procedures (5th ed.). Wadsworth Publishing Company.
  6. Brown, P.; Jenkins, H. M. (1968). Auto-shaping of the pigeon's key peck. Journal of the Experimental Analysis of Behavior, 11, 1–8.
  7. See Sheffield, 1965; Williams & Williams, 1969.
  8. Killeen, P. R. (2003). Complex dynamic processes in sign tracking with an omission contingency (negative automaintenance). Journal of Experimental Psychology, 29(1), 49–61.

This article is issued from Wikipedia, version of 4/25/2016. The text is available under the Creative Commons Attribution/Share Alike license; additional terms may apply for the media files.