Addiction Research Unit
Department of Psychology/University at Buffalo

Experimental Procedure Glossary



This Experimental Procedure Glossary is provided only as a study aid for the Senior Seminar on Assessing Drug Reinforcement. It is being slowly expanded on an on-demand basis. If you have experimental procedures you are having difficulty finding information about and would like them added to this list, submit your request to: bozarth@buffalo.edu. Items are added as time permits.
 
Finding Descriptions of Experimental Procedures
Remember to use the search engine to locate various terms and procedures in online materials available at this web site. Keep in mind that while one source may not provide a detailed explanation of a term or procedure, the next referenced source might provide a more thorough explanation. Use the two-step search procedure to first locate articles and then the term within each article. For additional help, click here.

 
Lick-Contingent Hypothalamic Stimulation (used to induce oral self-administration)
"Lick-contingent hypothalamic stimulation" refers to the situation where the subject receives electrical stimulation of the lateral hypothalamus each time it licks the drinking tube. Animals are surgically implanted with stimulating electrodes in the lateral hypothalamic area. Depending on the exact location, electrical stimulation through these electrodes can either be directly reinforcing (see the Reid, 1987 chapter) or can induce feeding and drinking behavior. Stimulation consists of brief electrical pulses (e.g., 0.5 sec) that are delivered on some programmed schedule -- in this case, each time the subject licks the drinking tube.

Lick-contingent lateral hypothalamic stimulation can be used to increase consumption of an ethanol solution. The subject is reinforced for licking the drinking tube (and thereby consuming the ethanol solution) by rewarding lateral hypothalamic stimulation. This procedure gets the subject to consume large quantities of ethanol (to the point of gross intoxication) that it would not otherwise consume. In simple terms, you might consider the subject to be "forced" to drink the ethanol to receive the rewarding brain stimulation. Investigators using this procedure often hope that the animal will develop a preference for drinking ethanol (after the "forced" experience with it) and continue to drink the ethanol solution even after the rewarding hypothalamic stimulation is omitted. This might be considered analogous to the situation where humans learn to drink ethanol solutions (e.g., scotch whiskey) that they don't initially find very pleasant but, with experience, learn to "enjoy." Friends and other social factors might initially reinforce drinking the whiskey (similar to the electrical brain stimulation sometimes used in animals), but after sufficient experience the whiskey becomes reinforcing in itself. The idea in both cases is to get the subject to develop a "taste/preference" for ethanol through experience (i.e., repeated exposure). Considerable research suggests that ethanol is not very rewarding initially in laboratory animals or in humans, but that with sufficient experience/exposure it can become a potent reinforcer for at least some individuals. This parallels many individuals' experience in 'learning to drink' alcoholic beverages.

Reference: Amit et al. (1987)

Micro-Punch Assay Procedure (used for determining drug dispersion from microinjections)
The micro-punch assay procedure involves determining the amount of drug in a given brain area by taking a small tissue sample from sites around the infusion site. This procedure can be used to determine how much drug actually got into the brain and where it went.

The animal is microinjected with a radiolabeled drug. After allowing a short time for the drug to diffuse (e.g., 5 to 15 minutes), the animal is sacrificed and the brain is removed. Next, the brain is frozen (e.g., -40 °C), sliced into very thin sections (e.g., 40 microns), and mounted on glass slides. A small stainless steel tube (e.g., 0.5 to 1 mm in diameter) is used to 'punch' samples from brain regions surrounding the injection site. Each small sample is expelled (e.g., blown through the tube) into a scintillation counting vial. Next, the tissue is chemically dissolved and a scintillation cocktail is added. The sample is then placed in a scintillation counter and the amount of radioactivity is determined for each sample (remember, the drug is radiolabeled). By comparing the radioactivity contained in the sample to the radioactivity contained in a reference solution (e.g., dpm/microliter), the volume of drug in each tissue sample can be determined. For example, if one microliter of the reference solution produced 16,000 dpm and the sample from the micropunch assay contained 8,000 dpm, then the drug volume corresponding to the sample would be 0.5 microliter; if the sample contained 4,000 dpm, then the drug volume contained in that sample would be 0.25 microliter. With this method the amount of drug delivered into the brain can be accurately determined and its dispersion pattern (e.g., where did it go? left, right? up the cannula shaft?) approximated.
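
The arithmetic in this example is simple enough to script. The following sketch (Python; the function name and numbers are illustrative, not taken from the chapter) converts a punch sample's radioactivity into an apparent drug volume by comparison with the reference solution.

    def drug_volume_ul(sample_dpm, reference_dpm_per_ul, background_dpm=0.0):
        # Apparent drug volume (microliters) in one tissue punch, estimated by comparing
        # its radioactivity with a reference solution of known dpm per microliter.
        return (sample_dpm - background_dpm) / reference_dpm_per_ul

    reference = 16000.0                     # dpm per microliter, as in the example above
    for sample_dpm in (8000.0, 4000.0):
        print(sample_dpm, "dpm ->", drug_volume_ul(sample_dpm, reference), "microliters")
    # 8000.0 dpm -> 0.5 microliters; 4000.0 dpm -> 0.25 microliters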

Reference: Bozarth (1987)

Progressive-Ratio Schedule (used with self-administration studies)
The progressive-ratio procedure is used to determine the maximum fixed ratio (FR) schedule a subject will complete to receive a rewarding drug injection. There are several variations of this basic procedure, but one common method increments the FR requirement after each drug injection. For example, the first drug injection might require one lever press, the next injection 2 lever presses, the third injection 4 lever presses, the fourth injection 8 lever presses, the fifth injection 16 lever presses, and so forth (i.e., FR-1, FR-2, FR-4, FR-8, FR-16, etc.). Different investigators use different-sized FR increments. Break point is traditionally defined as the first FR requirement that will not sustain drug self-administration, although a few investigators now use this term to describe the largest completed FR schedule. The maximum number of times a subject will lever press to obtain a drug injection is used as an indication of the drug's reinforcing efficacy -- the stronger the rewarding impact of a drug, the higher its break point.
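
For illustration, the sketch below (Python) generates the doubling FR sequence from the example and scores the break point under its traditional definition. The doubling increment and the cutoff parameter are illustrative assumptions, not any particular laboratory's protocol.

    def progressive_ratio_session(max_presses_per_injection, max_injections=30):
        # Returns (completed_ratios, break_point), where break_point is the first FR
        # requirement the subject fails to complete (the traditional definition).
        completed = []
        requirement = 1
        for _ in range(max_injections):
            if requirement > max_presses_per_injection:   # subject quits before finishing this ratio
                return completed, requirement
            completed.append(requirement)
            requirement *= 2                              # FR-1, FR-2, FR-4, FR-8, ...
        return completed, None                            # session ended before a break point was reached

    ratios, break_point = progressive_ratio_session(max_presses_per_injection=100)
    print(ratios)        # [1, 2, 4, 8, 16, 32, 64]
    print(break_point)   # 128 -- the first requirement that does not sustain responding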

The data shown in the figure below are from a study that used a modified version of the progressive-ratio (PR) procedure that employed a second-order component (PR [FR-3:S]). Using this method, rats will lever press around 300 times for a single intravenous cocaine injection. (Bozarth & Pudiak, unpublished observations)

[Figure: Progressive-ratio testing in laboratory rats]
References: Brady et al. (1987); Weeks & Collins (1987); Yanagita et al. (1987); Yokel (1987)

Schedule-Induced Polydipsia (used with oral self-administration studies)
Schedule-induced polydipsia is excessive drinking behavior (i.e., polydipsia) induced by the subject working for another reinforcer. It can be used to get the subject to drink large quantities of a fluid that it might not otherwise consume (e.g., ethanol solution). 

Consider the following example. The rat is food deprived and lever presses for small pellets of food. It lever presses frequently to obtain the food reinforcer and occasionally drinks fluid from a drinking tube/bottle that is freely available (i.e., it doesn't have to work to obtain the liquid). In the course of working for and eating the food reinforcer (the reinforcement schedule whereby the food reinforcer is obtained [e.g., FR-1, FR-3] is the schedule component), the rat consumes considerable amounts of the fluid -- much more fluid than it would normally drink (this is the polydipsia component). In this situation the rat will consume much more liquid than if the food were freely available (e.g., if it didn't have to lever press for the food), even though there is no physiological reason for drinking more liquid (no, the little rat is not 'working up a sweat').

This method (i.e., schedule-induced polydipsia) can be used to get the rat (or any other animal) to drink large quantities of an alcohol solution it would not normally drink. No one really understands the mechanism of schedule-induced polydipsia, but it's considered an adjunctive behavior (i.e., follows the performance of some other, unrelated response). 

Consider sitting in a bar and eating popcorn while talking to your friends. The popcorn tastes good, but not that good. After some time passes you find you've devoured the entire bowl of popcorn and the bartender is filling it for the third time. You're eating much more popcorn than you might normally eat, but it's an adjunctive behavior associated with talking to your friends. (Of course the bar owner hopes that the salty popcorn will make you thirsty and induce you to purchase more drinks.) And perhaps eating the small morsels of food (i.e., popcorn) causes more drinking (from schedule-induced polydipsia) in addition to the increased drinking caused by the osmotic thirst from the salt on the popcorn. Hmm, one could sell a lot of drinks that way. Pub owners worked this out years before psychologists had labels for the processes involved.

References: Amit et al. (1987); Meisch & Carroll (1987)

Second-Order Schedules (used with self-administration studies)
The FI (FR:S) second-order schedule has two schedule components. One involves a fixed interval schedule (FI): the first lever press after completion of the interval produces the reinforcement. For example, on an FI 1-minute schedule the first lever press after one minute elapses is reinforced; on an FI 5-minute schedule the first lever press after five minutes elapses is reinforced. Of course the subject does not know the schedule requirement, so it presses the lever intermittently throughout the interval. Lever pressing usually speeds up near the end of the FI schedule requirement, when the subject anticipates the reinforcer. The maximum number of reinforcers the subject can earn is determined by the FI schedule. For an FI 5-minute schedule, it is one reinforcement every five minutes; for an FI 10-minute schedule, it is one reinforcement every ten minutes. This is one method of limiting the amount of drug the subject can self-administer and thereby avoiding potentially toxic and response-altering drug effects. Now the second component. 

To keep the animal responding through much of the interval, a fixed ratio schedule (FR) can be used which employs a stimulus (S) that may serve as a type of secondary reinforcer or cue. First consider how an FR schedule works: the subject is reinforced for every nth lever press. On an FR-10 schedule the subject is reinforced after every 10th lever press. On an FR-20 schedule the subject is reinforced after every 20th lever press. Instead of delivering the drug on the FR component, a second-order schedule presents a stimulus (e.g., cue light: S) when the ratio requirement is completed; this stimulus is sometimes associated with drug delivery.

Now combine the two schedules. If the FR component is an FR-10 and the FI component is an FI-20 minutes, then every 10 lever presses would cause the stimulus light to briefly illuminate (but no drug is delivered); the first completed FR requirement (i.e., 10 lever presses in this example) after completion of the FI requirement (i.e., 20 minutes in this example) would cause the cue light to illuminate and the drug to be delivered. In this situation the cue light is associated with the drug delivery, but drug is not delivered every time the cue light briefly illuminates. (This is analogous to a partial reinforcement effect where response rates and resistance to extinction are relatively high.) The cue light probably serves as a secondary reinforcer, reinforcing responding on the FR component of the second-order schedule. As smart as rats and monkeys (and even humans) are, they never figure out to sit back and relax for 19 minutes, then press the lever 10
times to receive their drug reinforcement. They keep pressing the lever, watching the cue light illuminate every 10 presses (in this example), and perhaps thinking "the light keeps coming on when I lever press so my drug is coming soon." 
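
The combined contingency is easy to see in a short simulation. The sketch below (Python) implements the FR-10 / FI 20-minute example: every completed ratio presents the cue light, and the first ratio completed after the interval has elapsed also delivers the drug. The event labels and the steady 30-second response pattern are illustrative assumptions.

    def run_second_order(press_times, fr_size=10, fi_sec=20 * 60):
        # Simulate an FI (FR:S) schedule: each completed FR presents the cue light;
        # the first FR completed after the fixed interval elapses also delivers the drug.
        events = []
        presses_in_ratio = 0
        interval_start = 0.0
        for t in press_times:
            presses_in_ratio += 1
            if presses_in_ratio == fr_size:              # ratio requirement met
                presses_in_ratio = 0
                if t - interval_start >= fi_sec:         # fixed interval has elapsed
                    events.append((t, "cue light + drug injection"))
                    interval_start = t                   # the next fixed interval starts now
                else:
                    events.append((t, "cue light only"))
        return events

    # Example: a subject pressing steadily once every 30 seconds for one hour
    presses = [30.0 * i for i in range(1, 121)]
    for t, event in run_second_order(presses):
        print(round(t / 60, 1), "min:", event)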

The behavioral effect (e.g., response pattern) is that the subject works more steadily when it periodically receives the secondary reinforcer, which helps maintain its responding. If the subject were just working for the FI component on an FI (FR:S) schedule, it would spend much of its time just sitting and not pressing the lever until near the end of the fixed interval. (If the interval were long, the subject might even go to sleep and not wake up for its next opportunity to self-administer drug.) The second-order schedule can maintain relatively high rates of responding throughout the experimental session while limiting how much drug the subject self-administers. Second-order schedules can also be combined with progressive-ratio testing to yield high break points for intravenous drug self-administration (see above).

References: Katz & Goldberg (1987); Meisch & Carroll (1987)

Two-Lever Choice Test (used with self-administration studies)
The two-lever choice test compares lever-press rates on two levers: one lever produces response-contingent drug injections while the other has no scheduled effect; lever pressing on the second lever is interpreted as an indication of nonspecific lever depressions. Although this method has been successfully used to demonstrate behavioral specificity during intracranial drug self-administration, the results of this test can be variable. With behaviorally activating drug injections, the subject may depress both the active and the inactive lever during the rewarding drug injection. Thus, the rewarding effect of the drug injection would actually be paired with both levers. Because there is no penalty for pressing the inactive control lever, the discrimination learning may not be very strong. Animals may show response generalization to the inactive lever and thus increase response rates on both levers.

Two-lever choice testing with intravenous heroin self-administration frequently shows excellent discrimination between active and inactive levers, but some animals show high levels of responding on the inactive lever (Bozarth, unpublished observations). When this occurs, certain subjects demonstrate behavioral specificity (i.e., preferential lever pressing on the lever associated with drug injections), but others do not. A simple averaging of response rates on the two levers across different subjects can result in a failure to demonstrate an overall preference for the active lever. Furthermore, if a subject shows stereotypic responding on an inactive lever, the response rate on that lever can be several times the response rate on the drug-contingent lever. Averaging inflates the number of inactive-lever responses, obscuring the fact that other subjects may show good two-lever discrimination.

The alternative procedure of using selected subjects to demonstrate two-lever choice responding can dangerously bias the experimental interpretation, although the exclusion of a small percentage of subjects stereotyping on the inactive lever is probably justified. In either case it is important that all data be presented so that the reader is free to draw his or her own conclusions regarding the behavioral specificity of the response.
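
A hypothetical numerical example (the response counts below are invented for illustration, not actual data) shows how a single subject stereotyping on the inactive lever can mask good discrimination in the rest of the group when responses are simply averaged.

    from statistics import mean

    # (active-lever presses, inactive-lever presses) per subject;
    # subject 4 stereotypes on the inactive lever
    subjects = [(120, 15), (95, 10), (110, 20), (25, 400)]

    print(mean(a for a, _ in subjects), mean(i for _, i in subjects))
    # group means: active = 87.5, inactive = 111.25 -- no apparent preference for the drug lever

    for n, (active, inactive) in enumerate(subjects, 1):
        print("subject", n, "active/inactive ratio =", round(active / inactive, 1))
    # subjects 1-3 show clear discrimination (ratios of about 5 to 10); subject 4 does not
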
Reference: Bozarth (1987)

Yoked Control Procedure (used with self-administration studies)
The yoked-control procedure is a method of determining whether the lever pressing is a consequence of the rewarding action of the drug. With this method two animals are tested concurrently in separate operant chambers. One animal is allowed to lever press for response-contingent drug injections while the other subject is tested with an inactive lever. The lever presses of the first animal (executive) produce concurrent injections in both subjects, and the lever pressing of the second subject (yoked) is simply measured as an indication of nonspecific behavioral arousal.
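
A minimal sketch (Python; the function and event names are hypothetical) of the yoked contingency described above: only the executive animal's lever presses are response-contingent, each such press triggers simultaneous injections in both chambers, and the yoked animal's presses are simply counted.

    def yoked_session(events, infuse_executive, infuse_yoked):
        # 'events' is a time-ordered list of lever-press sources: "executive" or "yoked".
        counts = {"executive": 0, "yoked": 0, "injections": 0}
        for source in events:
            counts[source] += 1
            if source == "executive":      # only the executive's presses have a programmed consequence
                infuse_executive()
                infuse_yoked()             # the yoked partner receives the same injection at the same time
                counts["injections"] += 1
        return counts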

A potential problem with this procedure is that cues (e.g., activation of a cue light) accompanying the rewarding drug effect may become associated with drug reward in both subjects, and the yoked-control animal may also approach these cues. This can be a problem if a cue light is illuminated directly above the lever and the control animals are repeatedly tested. Arousal and approach behavior associated with the illumination of the cue light may lead to an increase in contacts with the inactive lever. This potential problem might be eliminated by using an auditory cue associated with drug delivery. Even then, however, conditioned increases in locomotor activity may lead to increased accidental lever contacts. 

References: Bozarth (1987); Yokel (1987)


Copyright 2000 Addiction Research Unit/University at Buffalo
This page was last revised 15 March 2000 11:20 EST.
Report technical problems to: bozarth@buffalo.edu