From Bozarth, M.A. (1991). The mesolimbic dopamine system as a model brain reward system. In P. Willner and J. Scheel-Krüger (eds.), The mesolimbic dopamine system: From motivation to action (pp. 301-330). London: John Wiley & Sons.
The Mesolimbic Dopamine System as a Model Brain Reward System
Michael A. Bozarth
Department of Psychology
State University of New York at Buffalo
Buffalo, New York 14260 U.S.A.
Brain dopamine systems have been the focus of considerable research since their initial identification almost three decades ago. Dahlström and Fuxe (1964) identified several clusters of catecholamine-containing cell bodies and traced the primary axonal projections of these cell groups to various brain regions. Subsequent work (e.g., Andén, Dahlström, Fuxe, Larsson, Olson, & Ungerstedt, 1966; Lindvall & Björklund, 1974; Palkovits & Jacobowitz, 1974; Ungerstedt, 1971a) provided more precise anatomical localization of central dopamine-containing cells and their projections. Two major systems emerged from this work—the nigrostriatal and the mesolimbic/mesocortical dopamine systems—and several shorter neural projections were also described. These dopamine systems have been implicated in a wide variety of behavioral actions including locomotor activity, stereotypy, feeding and drinking, drug reinforcement, and brain stimulation reward as well as in human psychopathology. The fact that these behaviors appear to share a common neuropharmacological and neuroanatomical basis has prompted numerous attempts to develop an integrative theoretical model (e.g., Beninger, 1983; Depue & Iacono, 1989; Mogenson, Jones, & Yim, 1980; Mogenson & Phillips, 1976).
This chapter will describe some of the motivational functions of the mesolimbic dopamine system with an emphasis on appetitive motivation and reward processes. Literature supporting the involvement of this system in the rewarding effects of electrical brain stimulation, psychomotor stimulants, and opiates will be reviewed, and additional data suggesting the possible involvement of the mesolimbic dopamine system in other behaviors will be briefly surveyed. No attempt will be made to exhaustively review the literature; rather, a specific model is described that integrates much of the empirical data, and the heuristic value of this model is explored.
Prelude to the Study of Brain Reward Systems
Before proceeding with a description of the motivational function of the mesolimbic dopamine system, two general issues need to be addressed. The first involves the concept of brain reward systems and briefly examines both the empirical and theoretical basis of their existence. The second concerns the approach to studying brain reward function and an examination of several factors important in evaluating the empirical data base.
Specialized Brain Function
There are numerous examples of specialized neural functioning. Specific sensory pathways have been identified and many can be traced from their peripheral transducer organs to their central nervous system (CNS) representation (e.g., retinal activation from light stimulus to neural representation at the occipital cortex). Similarly, motor pathways can be traced from their CNS origin (e.g., motor cortex) through their efferent outputs to the effector organ (e.g., neuromuscular junction). The exact anatomical linkages, stimulus coding information, and neurochemical mediation of many sensory and motor events have been described in fine detail. The linkage that has been elusive is that between the sensory input and the motor output. Except for a few examples of reflexive behavior, the exact nature of CNS mediation between sensory input and motor output remains obscure.
The nature of mechanisms mediating between stimulus input and motor output is a topic long considered by motivational psychologists. In general, their approach has used molar units of stimuli and behavior; intervening variables have been postulated to account for the variability in responding to constant stimulus conditions and to describe when a given antecedent condition will yield a specific response (e.g., determining when a food stimulus is associated with an eating response). Physiological psychologists, on the other hand, have specifically addressed the CNS mediation of these processes and have attempted to identify specific neurophysiological conditions that influence responding. Few, however, have attempted to integrate these CNS processes with general motivational theory, just as relatively few motivational psychologists have attempted to describe specific physiological processes underlying motivated behavior. (The theories of Hull [1943, 1952] and of Bindra have implications regarding the nature of neurophysiological mechanisms mediating their models, but little empirical work has been generated by these formulations.)
The work of Hess (1949) demonstrated that electrical activation of certain CNS sites can modulate the organism’s responses to various stimulus conditions (e.g., elicit attack behavior, feeding). Subsequent work has shown that a variety of behaviors can be evoked by appropriate stimulus conditions concomitant with electrical activation of CNS sites. For example, feeding behavior, sexual behavior, and attack behavior can all be elicited by hypothalamic stimulation. This work suggests a possible CNS focus for gating stimulus-response phenomena, and the hypothalamus has emerged as a possible center for these motivational effects.
The report by Olds and Milner (1954) that rats would work to electrically stimulate certain parts of their brains opened a new dimension in the conceptualization of CNS mediation of behavior. Arbitrarily selected responses could be reinforced by electrical stimulation of the lateral hypothalamus and other brain sites, despite the absence of any appropriate goal object or pre-existing biological need. The finding that animals will work to directly activate specific CNS pathways was initially met with skepticism by some motivational psychologists; apparent differences in the way that electrical stimulation controlled behavior and the way that natural rewards controlled behavior were emphasized by some learning theorists, and brain stimulation reward was suggested to represent a special case of motivated behavior. However, subsequent work revealed that electrical brain stimulation could control behavior in much the same fashion as conventional rewards (see Olds, 1977; Reid, 1967; Trowill, Panksepp, & Gandelman, 1969), and the notion that electrical stimulation could activate reward systems involved in the control of natural behavior gained widespread acceptance.
The demonstration of rewarding effects from electrical brain stimulation along with the concurrent discovery that electrical stimulation of other brain regions produced intense aversive effects (Delgado, Roberts, & Miller, 1954) suggested that goal-directed behavior might be guided and maintained by simple approach and avoidance systems. Separate neural systems were identified that elicited approach and avoidance behavior, and naturally occurring motivational conditions were postulated to be mediated by the activation of these systems (e.g., Olds, 1962; Olds & Olds, 1963; Stein, 1964). This line of investigation specifically addressed what happens between stimulus presentation and response emission. Neutral stimuli (e.g., a lever) can develop motivational significance and engender approach responding (e.g., lever manipulation) contingent upon electrical activation of brain reward pathways. The study of motivation, from this perspective, starts with what happens in the middle (the "organismic" variable) and then proceeds forward to stimulus input and backward to response output.
Approaches to Studying Brain Reward Function
There are three dimensions to be considered in the study of brain reward systems. The first consideration is the use of a correlative versus a functional approach to the study of brain reward systems (see Bozarth, 1987a). The correlative approach observes behavior and measures changes in CNS activity that are associated with specific behaviors. The putative mediation of function is inferred by high correlations between specific neural events and the occurrence of the behavior being studied. In contrast, the functional approach uses direct experimental manipulation to determine the role of various CNS events in reward processes. A specific neural system can be chemically or electrically stimulated, or the function of the neural system can be disrupted by the use of selective lesioning or pharmacological procedures. The functional role of the neural system is determined by causing the occurrence of the behavior through stimulation or by the disruption of the behavior through selectively blocking the activity of the system.
The strongest demonstration that a given neural system mediates a behavior comes from the functional approach where the behavior can be directly elicited or inhibited by experimental manipulation. The correlative approach is most useful for preliminary exploratory studies that suggest possible neural events involved in the behavior and for providing corroborative evidence that the neural events identified by the functional approach behave according to the predictions derived from the experimental model.
A second consideration is specifying what conditions are identified by the functional approach. The activity of a neural system may be sufficient to produce the behavior, it may be necessary for the expression of the behavior, or it may be both necessary and sufficient for the occurrence of the behavior (see Bozarth, 1983, 1987a). These three cases represent separate conditions that can be identified using the functional approach. If the behavioral response is elicited by activation of the neural system, then activation of that system is sufficient for elicitation of the behavioral response. There is no reason, however, to presuppose that the same neural system is necessary for the behavioral response following activation by other response-eliciting stimuli. Similarly, a specific neural system may be necessary for the occurrence of a behavior, but the activation of the same neural system may fail to evoke a behavioral response. The case where a stimulus is sufficient but not necessary to elicit a behavior merits special consideration. Some important neural functions may be overdetermined, having several independent systems that can produce a behavioral response. For example, drinking behavior can be elicited by activating one of two independent physiological mechanisms—intracellular or extracellular processes (see Fitzsimons, 1971). Interestingly, the normal control of water intake in ad libitum conditions probably does not involve either mechanism. Nonetheless, drinking can be elicited by intracellular or extracellular fluid loss despite the apparent unimportance of these mechanisms in the control of ad libitum drinking.
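The logical distinction drawn here between sufficient and necessary conditions, and the possibility of overdetermined functions, can be made concrete in a minimal sketch (Python is used purely for illustration; the function name and boolean framing are hypothetical, not part of the chapter):

```python
# Hypothetical sketch of the "overdetermined" control of drinking described in
# the text: either of two independent mechanisms (intracellular or
# extracellular fluid loss) is sufficient to elicit the behavior, yet neither
# is necessary, because the other can produce the response alone.

def drinking_elicited(intracellular_loss: bool, extracellular_loss: bool) -> bool:
    """Drinking is elicited if either independent mechanism is activated."""
    return intracellular_loss or extracellular_loss

# Each mechanism alone is sufficient...
assert drinking_elicited(True, False)
assert drinking_elicited(False, True)
# ...but neither is necessary: blocking one mechanism leaves the response
# available through the other, so neither blockade alone abolishes drinking.
assert drinking_elicited(False, False) is False
```

The same logic explains why eliciting a behavior by stimulating a system does not establish that the system is required for the behavior under other conditions.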
The last consideration involves the level of study. With few exceptions, rewarding effects are measured by the behavioral response of the intact organism. This necessitates a molar approach to the study of reward processes (but see Olds, 1962; Stein & Belluzzi, 1987). Until the relevant neurological events are identified, it is necessary to anchor studies of reward processes with behavioral measures that indicate when reward processes are operative. Attempts to determine molecular events linked with reward are futile without proper reference to behavioral indices of reward. An understanding of basic motivational theory is just as important as an understanding of basic neuropharmacology and neuroanatomy in the elucidation of reward mechanisms. Most psychopharmacologists have at least an elementary understanding of operant conditioning theory, but many fail to have any training in the broader field of motivational psychology. This creates three types of difficulties: (i) problems discerning the necessary control procedures for behavioral studies, (ii) problems determining the significance of the observed behaviors, and (iii) problems identifying the relevant behavioral processes to study. During the past several decades, scientists studying reward processes have become more sophisticated in their knowledge of neuroscience techniques, but they have concomitantly become less sophisticated in their understanding of basic motivational processes.
The model developed in this chapter emphasizes the functional approach and determines what events are sufficient to activate reward processes described in molar units of behavior. This perspective is important to understanding how the neural elements are assembled in the present model and to understanding how specific predictions are derived from the model.
Involvement of Dopamine in Brain Reward Function
The terms reward and reinforcement have various meanings. Some authors use these terms interchangeably (e.g., Stellar & Stellar, 1985), while others emphasize important differences based on the terms’ histories (e.g., White, 1989). Reinforcement suffers from less ambiguity in its usage, largely because of the general acceptance of the empirical law of effect which describes reinforcement as a process where certain events (i.e., reinforcing stimuli) increase the probability of behaviors they are associated with (e.g., Spence, 1956). Wise (1989) presents arguments that behaviors described as involving reward or reinforcement may have the same underlying neural mechanism and therefore represent manifestations of the same phenomenon; by implication, the terms reward and reinforcement may be used interchangeably according to this view (cf. Wise & Rompré, 1989).
In the context of the present chapter, there is a subtle but important difference between reward and reinforcement processes. Reward refers to the tendency of certain events to direct behavior, specifically, to elicit and reinforce approach behavior. The term connotes a pleasant hedonic impact of rewarding stimuli, and the argument of Bindra (1969) that the motivational and hedonic attributes of stimuli are inherently related will be adopted. Reward and reinforcement can frequently be used interchangeably, but there are circumstances where the difference in the appropriate usage of these terms becomes apparent. A common observation made by workers studying brain stimulation reward or intravenous drug self-administration illustrates this point. If a well-trained subject is placed in the test apparatus and does not immediately initiate responding for brain stimulation reward, a single priming stimulus is usually sufficient to elicit responding. Consider the situation where the subject is sitting in the corner of the operant chamber grooming. According to a strict interpretation of reinforcement theory, a noncontingent rewarding stimulus should reinforce the grooming response. But the subject’s response to that noncontingent reward is much different. Approach and vigorous lever pressing behavior immediately follow the priming stimulation. Similar effects are noted by those working with intravenous drug self-administration; noncontingent, experimenter-delivered drug infusions usually elicit lever-pressing behavior in trained subjects, and priming infusions seldom effectively reinforce whatever behavior the animal was emitting during the experimenter-delivered reward (e.g., grooming, rearing in the opposite corner of the test apparatus; see Stewart & de Wit, 1987).
The difference between reward and reinforcement processes can be very important when studying basic motivational processes. The case where a reward does not reinforce behavior (in the traditional operant sense) identifies an important attribute of reward mechanisms—they increase behaviors associated with their activation. Brain reward function thus describes processes that elicit approach behavior and processes that the organism "seeks" to activate. Reward functions to direct the animal’s behavior toward whatever stimulus or response is most strongly associated with reward expectancy; reinforcement refers to the process where these expectancies are developed, frequently through simple contiguity. Behavior, in this view, becomes purposive in the sense described by Tolman (1932), and reward processes are neural events that direct the organism toward stimuli associated with activation of these reward processes.
Brain Stimulation Reward
Although electrical stimulation at many different brain sites is rewarding (see Phillips & Fibiger, 1989), electrode placements along the medial forebrain bundle extending from the lateral hypothalamus to the ventral tegmental area have been the focus of considerable research. Electrical stimulation of this region produces the strongest rewarding effects, demonstrated by the highest lever-pressing rates, lowest stimulation thresholds, and the least sensitivity to disruption by aversive contingencies (see Mogenson & Phillips, 1976; Olds, 1962, 1977). There is evidence that several independent systems can support brain stimulation reward (e.g., see Phillips, 1984; Robertson, 1989), but the rewarding effects from electrode placements along the medial forebrain bundle (MFB) probably represent activation of a common reward substrate.
Disruption of dopamine synthesis (e.g., Cooper, Black, & Paolino, 1971; Stinus & Thierry, 1973) or blockade of dopamine receptors (e.g., Lippa, Antelman, Fisher, & Canfield, 1973; Wauquier & Niemegeers, 1972) attenuates the rewarding effects of MFB stimulation. Several behavioral measures have been developed that distinguish motor impairment from reward attenuation (e.g., Gallistel, Boytim, Gomita, & Klebanoff, 1982; Fouriezos, Hansson, & Wise, 1978; Fouriezos & Wise, 1976; Franklin & McCoy, 1979; Mora, Sanguinetti, Rolls, & Shaw, 1975; Phillips & Fibiger, 1979), and most studies have reported that dopamine-receptor blockade disrupts the rewarding impact of electrical stimulation at doses that do not produce significant impairment in the animal’s ability to respond (for reviews, see Fibiger, 1978; Wise, 1978, 1982). Microinjections of dopamine-receptor antagonists directly into the nucleus accumbens attenuate brain stimulation reward (Broekkamp & van Rossum, 1975; Mogenson, Takigawa, Robertson, & Wu, 1979; Mora et al., 1975; Stellar & Corbett, 1989), suggesting that this dopamine terminal field is critically involved in the rewarding effects of MFB stimulation. Dopamine-depleting lesions of the mesolimbic system have been less successful in disrupting brain stimulation reward, but many of the lesion studies failed to achieve adequate dopamine depletions. Several studies (e.g., Fibiger & Phillips, 1979; Koob, Fray, & Iversen, 1978) suggest that severe dopamine depletions disrupt the rewarding effects of MFB stimulation.
Although the lesion data are less clear, the pharmacological data suggest an important role for dopamine in the rewarding effects of electrical brain stimulation. The simplest hypothesis suggests that dopaminergic neurons are directly activated by the electrical stimulation, and manipulations that interfere with the resulting enhancement of dopaminergic neurotransmission disrupt the rewarding impact of this stimulation. Studies investigating the activation of dopamine neurons by rewarding stimulation have used three general approaches—anatomical mapping which determines the effectiveness of electrical stimulation along the dopamine systems, electrophysiological characterization of the neural population directly activated by rewarding stimulation, and neurochemical evidence for activation of the ascending dopamine systems.
German and Bowden (1974) compared the distribution of electrode placements effective in supporting brain stimulation reward with the distribution of catecholamines. Their review showed a close correspondence between sites supporting brain stimulation reward and brain regions containing catecholamine fibers, cell bodies, and terminal fields. Using a movable stimulation electrode combined with fluorescence histochemical visualization of dopamine-containing cell bodies, Corbett and Wise (1980) reported that rewarding effects were produced by electrical stimulation within the ventral tegmental and substantia nigra—pars compacta dopamine-containing cell layers but not at placements adjacent to these sites. The anatomical mapping studies seemed to provide strong support for the notion that rewarding stimulation directly activated dopamine neurons (see Wise, 1980a).
Experimental procedures developed from cellular electrophysiology have been adapted to determine the characteristics of the behaviorally relevant neurons directly activated by electrical stimulation. Electrical stimulation nonselectively activates heterogeneous neural populations, but methods have been developed that can determine the electrophysiological characteristics of the neurons that mediate the behavioral effect being studied (Gallistel, Shizgal, & Yeomans, 1981; Shizgal, 1989). Work using stimulation pulse pairs has determined the refractory periods (Yeomans, 1979), the conduction velocities (Bielajew & Shizgal, 1982), and the direction of conduction (Bielajew & Shizgal, 1986) for the neural population directly activated by rewarding stimulation in the lateral hypothalamus. These studies have revealed that the neurons directly activated by rewarding lateral hypothalamic stimulation are primarily fast conducting, myelinated neurons that descend from the lateral hypothalamus to the ventral tegmental area. This electrophysiological characterization eliminates the ascending dopaminergic systems as candidates for direct activation by rewarding stimulation. The dopamine neurons are unmyelinated and have substantially slower conduction velocities. The direction of impulse conduction is also opposite for the first-stage neurons directly activated by lateral hypothalamic stimulation and the ascending dopamine systems. Thus, dopamine neurons do not appear to be directly activated by lateral hypothalamic stimulation (but see Yeomans, 1989).
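The paired-pulse logic behind these refractory-period estimates can be sketched as a toy model (the function names and the 0.8-ms value are hypothetical illustrations, not data from the studies cited): a conditioning pulse fires the neuron, and a test pulse delivered at interval t adds a second action potential only if t exceeds the refractory period, so the interval at which pulse pairs become fully effective estimates that period.

```python
# Toy model of the paired-pulse (C-T) refractory-period method.
# A test pulse (T) following a conditioning pulse (C) contributes a second
# action potential only when the C-T interval exceeds the refractory period.

REFRACTORY_MS = 0.8  # hypothetical refractory period of the directly driven neurons

def pair_effectiveness(ct_interval_ms: float) -> float:
    """Fraction of test pulses adding a second action potential (idealized 0/1)."""
    return 1.0 if ct_interval_ms > REFRACTORY_MS else 0.0

def estimate_refractory(intervals_ms):
    """Smallest tested C-T interval at which pulse pairs become fully effective."""
    effective = [t for t in intervals_ms if pair_effectiveness(t) == 1.0]
    return min(effective) if effective else None

intervals = [0.4, 0.6, 0.8, 1.0, 1.2, 1.5]
print(estimate_refractory(intervals))  # -> 1.0
```

In the real paradigm, effectiveness is read from behavior (e.g., the stimulation rate needed to sustain responding) rather than observed spikes, and the rise is graded rather than all-or-none; the sketch captures only the inferential logic.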
The simplest model integrating the pharmacological data (viz., that implicate dopamine in brain stimulation reward) and the electrophysiological data (viz., that rule out dopaminergic neurons as the system directly activated by the rewarding stimulation) proposes that the ascending dopamine system is transsynaptically activated by the descending system traversing the lateral hypothalamus (Bozarth, 1987a; Stellar & Stellar, 1985; Wise, 1980b; Wise & Bozarth, 1984; Yeomans, 1982). Arbuthnott, Crow, Fuxe, Olson, and Ungerstedt (1970) showed a decline in catecholamine fluorescence following lateral hypothalamic and ventral tegmental stimulation; this was attributed to the activation of dopaminergic and noradrenergic systems by the electrical stimulation. Other studies using electrical stimulation of the ventral tegmental area have shown an activation of the mesolimbic dopamine system (for reviews, see Fibiger & Phillips, 1987; Phillips, Blaha, & Fibiger, 1989): dopamine levels in the nucleus accumbens are reduced while metabolite levels are increased following rewarding stimulation. A preliminary study (M. Bozarth, unpublished observations cited in Bozarth, 1987a) suggested that rewarding lateral hypothalamic stimulation may produce bilateral activation of the mesolimbic dopamine system, although changes on the stimulated side corresponded better with manipulation of stimulation intensity. If bilateral activation does result from unilateral electrical stimulation, this effect could explain the apparent difficulty in disrupting brain stimulation reward with unilateral dopamine-depleting lesions. Further work is needed to fully document this effect.
The data suggest a circuitous pathway: lateral hypothalamic stimulation along the medial forebrain bundle appears to activate a descending pathway that transsynaptically activates the ascending mesolimbic dopamine system. The myelin sheath on the first-stage neurons functions as an electrical insulator concentrating the electrical charge at the nodes of Ranvier. This increased charge density probably permits electrical stimulation at parameters that are insufficient to directly activate the ascending dopamine neurons. Thus, even if electrode placements are proximal to the unmyelinated dopamine neurons, the electrical stimulation would preferentially activate the descending myelinated system. The neurotransmitter system comprising the descending pathway is probably heterogeneous, which explains why manipulations of other neurotransmitters are not as effective as dopaminergic manipulations in altering brain stimulation reward. One component of this descending system appears to be cholinergic (Gratton & Wise, 1985; Kofman & Yeomans, 1989; Yeomans, Kofman, & McFarlane, 1985), although noncholinergic and perhaps even some dopaminergic neurons are also involved (see Yeomans, 1989).
Psychomotor Stimulant Reward
Two lines of evidence have converged to independently identify the neural substrate of psychomotor stimulant reward. The first involves studying the neural basis of brain stimulation reward, and the second involves studying the neural basis of direct reinforcement from psychomotor stimulants. Early studies showed that amphetamine (Stein, 1962) and cocaine (Crow, 1970) enhanced brain stimulation reward. These data were interpreted as supporting the proposed role of catecholamines in the rewarding effects of electrical stimulation. Later studies identified the nucleus accumbens as the site of action for amphetamine’s reward-enhancing effects (Broekkamp, 1976; Broekkamp, Pijnenburg, Cools, & van Rossum, 1975). With the demonstration that morphine also facilitates brain stimulation reward, a general model was proposed that suggested that the facilitatory effect of drugs on brain stimulation reward was related to the drugs’ intrinsic rewarding properties (viz., addiction liability; see Broekkamp, 1976; Esposito & Kornetsky, 1978; Esposito, Porrino, & Seeger, 1987; Kornetsky, Esposito, McLean, & Jacobson, 1979; Reid, 1987; Reid & Bozarth, 1978). This hypothesis had appreciable appeal to some groups but was not generally accepted by others who advocated studying the drug’s directly reinforcing effects by using self-administration techniques. If the model is valid (i.e., a drug’s intrinsically rewarding effect can be assessed by its facilitatory action on brain stimulation reward), then studies localizing the site of amphetamine’s facilitation of brain stimulation reward concurrently determined the site of amphetamine’s rewarding action.
Another approach to determining the neural basis of psychomotor stimulant reward examines the direct reinforcing action of these compounds. Although a number of approaches have been developed to study drug reward (see Bozarth, 1987b), the most popular method uses animals prepared with intravenous catheters and allowed to intravenously self-administer drug (see Yokel, 1987). Drugs that are self-administered (and hence rewarding) in humans are generally self-administered by animals, and most drugs that are not self-administered by animals are not self-administered by humans (e.g., Deneau, Yanagita, & Seevers, 1969; Weeks & Collins, 1987; Yokel, 1987). The intravenous self-administration method provides a generally accepted paradigm for assessing drug reward and reinforcement processes, and it has been widely used to study the neuropharmacological and neuroanatomical basis of drug reinforcement (e.g., see Roberts & Zito, 1987; Yokel, 1987).
Psychomotor stimulants enhance catecholaminergic neurotransmission (e.g., Axelrod, 1970; Carlsson, 1970; Heikkila, Orlansky, & Cohen, 1975) but only the dopaminergic enhancement is critical for the rewarding action of these drugs. Specific dopamine-receptor blockers attenuate the rewarding impact of cocaine (de Wit & Wise, 1977; Ettenberg, Pettit, Bloom & Koob, 1982) and amphetamine (Yokel & Wise, 1975, 1976), while noradrenergic receptor-blockers are ineffective or decrease drug intake by impairing motor performance. Dopamine-depleting lesions of the mesolimbic system at the level of the nucleus accumbens disrupt cocaine (Roberts, Corcoran, & Fibiger, 1977; Roberts, Koob, Klonoff, & Fibiger, 1980) and amphetamine (Lyness, Friedle, & Moore, 1979) self-administration. Lesions in the striatum and lesions of the noradrenergic systems do not affect cocaine intake (see Roberts & Zito, 1987). Destruction of the mesolimbic system by neurotoxin injections into the ventral tegmental area also disrupts intravenous cocaine self-administration (Roberts & Koob, 1982). Further support for the role of the nucleus accumbens in psychomotor stimulant reward comes from the demonstration that animals will self-administer amphetamine (Hoebel, Monaco, Hernandez, Aulisi, Stanley, & Lenard, 1983) and dopamine (Dworkin, Goeders, & Smith, 1986) directly into this brain region. Other projections of the mesolimbic dopamine system have also been suggested to play a role in psychomotor stimulant reward (Goeders & Smith, 1983; Phillips, Mora, & Rolls, 1981; cf. Martin-Iverson, Szostak, & Fibiger, 1986), but the importance of the nucleus accumbens terminal field is well established (Roberts & Zito, 1987).
Opiate Reward
If the facilitatory action of opiates on brain stimulation reward does in fact reflect the intrinsic rewarding impact of these drugs, then the site of opiate reward was initially identified by brain stimulation reward studies. Microinjection of morphine (Broekkamp, 1976; Broekkamp, van den Bogaard, Heynen, Rops, Cools, & van Rossum, 1976) and other opioids (Broekkamp, Phillips, & Cools, 1979) into the ventral tegmental area facilitates lateral hypothalamic brain stimulation reward, while injections into other brain regions do not enhance the rewarding impact of electrical stimulation. Because the significance of these data was not widely appreciated, recognition of the importance of the mesolimbic dopamine system in opiate reward awaited independent corroboration from more direct tests of opiate reward. Indeed, the predominant view of the 1970s and early 1980s was that opiates inhibited dopaminergic neurotransmission (e.g., Eidelberg, 1976; Schwartz, Pollard, Llorens, Malfroy, Gros, Pradelles, & Dray, 1978), and that psychomotor stimulants and opiates generally produced opposite pharmacological effects. Data were reported suggesting that opiate reward was produced by an inhibition of dopaminergic neurotransmission (e.g., Glick & Cox, 1977), although the dopamine-enhancing effects of psychomotor stimulants were widely appreciated. Early reviews of the role of dopamine in reward processes did not attempt to reconcile these seemingly conflicting data and simply omitted opiate reward processes from consideration (e.g., Fibiger, 1978; Wise, 1978; cf. Wise, 1980b; Wise & Bozarth, 1982). On the other hand, adherents to the hypothesis that the effects of addictive drugs on brain stimulation reward reflect their intrinsic rewarding properties suggested that opiates, psychomotor stimulants, and brain stimulation reward shared a common dopaminergic substrate (Bozarth, 1978; Broekkamp, 1976).
Opiates enhance dopamine-cell firing (Gysling & Wang, 1983; Matthews & German, 1984) and increase metabolic indices of dopamine release in the nucleus accumbens (Westerink, 1978; Wood, 1983). Behavioral data also suggest that opiate administration activates the mesolimbic dopamine system; increased locomotor activity follows bilateral morphine injections into the ventral tegmental area (Joyce & Iversen, 1979), and circling behavior is produced by unilateral morphine application at the ventral tegmentum (Holmes, Bozarth, & Wise, 1983; see also Bozarth, 1983). These studies provide electrophysiological, neurochemical, and behavioral data suggesting an enhancement of the mesolimbic dopamine system following opiate administration.
Animals quickly learn to intracranially self-administer morphine (Bozarth & Wise, 1981a; Welzl, Kuhn, & Huston, 1989) or fentanyl (van Ree & de Wied, 1980) directly into the ventral tegmental area. Opiate injections into this region also produce a conditioned place preference (Bozarth & Wise, 1982; Phillips & LePiane, 1980), and the rostro-caudal boundaries of the reward-relevant opiate receptor field correspond to the approximate location of the ventral tegmental dopamine-containing cell bodies (Bozarth, 1987c). Dopamine-receptor blockers attenuate the rewarding impact of systemically administered opiates (Bozarth & Wise, 1981b; Phillips, Spyraki, & Fibiger, 1982; Schwartz & Marchok, 1974), and dopamine-depleting lesions of the mesolimbic system disrupt the initial rewarding impact of opiates (Bozarth & Wise, 1986; Spyraki, Fibiger, & Phillips, 1983). Marked differences, however, are apparent in the sensitivity of opiate and psychomotor stimulant self-administration to nucleus accumbens lesions in animals previously trained to self-administer drug (M. Bozarth & R. Wise, unpublished observations; Pettit, Ettenberg, Bloom, & Koob, 1984). This suggests that repeated opiate administration may involve other reward processes mediated by brain regions outside the mesolimbic dopamine system (see Bozarth, 1988; Bozarth & Wise, 1983). Nonetheless, ventral tegmental morphine infusions partially substitute for intravenous cocaine reward (Bozarth & Wise, 1986).
Other Drug Rewards
These data suggest a common dopaminergic basis for electrical brain stimulation, psychomotor stimulant, and opiate rewards (see Bozarth, 1986, 1987a; Wise & Bozarth, 1984 for reviews). The fact that two distinctively different pharmacological drug classes derive at least part of their rewarding impact by activating the mesolimbic dopamine system prompts investigating the possibility that other rewarding drugs may also activate this system. Data have been reported suggesting that ethanol, nicotine, and barbiturates may all activate the mesolimbic dopamine system (e.g., Di Chiara & Imperato, 1988). Although the importance of this effect in mediating the rewarding impact of these compounds has not been established, a general theory of addiction has been proposed that attributes the rewarding properties of various drugs to activation of the mesolimbic dopamine system (Wise & Bozarth, 1987). Further empirical work is needed to determine the merit of this proposal.
Dopamine has been suggested to be involved in several other appetitive behaviors, most notably feeding and drinking, sexual behavior, and maternal behavior. The role of dopamine in feeding behavior is well-established, although the specific role of the mesolimbic dopamine system is less clear. The importance of dopamine in other motivated behaviors is more speculative, and considerable work remains to be done in these areas.
Feeding and Drinking Behavior
Specific dopamine-depleting lesions of the nigrostriatal system have been shown to disrupt regulation of food and water intake (e.g., Fibiger, Zis, & McGeer, 1973; Marshall & Teitelbaum, 1973; Ungerstedt, 1971b). Animals do, however, eventually resume intake of food and water although enduring deficits persist. Lesions of the mesolimbic dopamine system do not attenuate food or water intake (Bozarth & Wise, 1986; Kelley & Stinus, 1985; Le Moal, Stinus, Simon, Tassin, Thierry, Blanc, Glowinski, & Cardo, 1977), but they have been reported to disrupt food hoarding behavior (Kelley & Stinus, 1985). This latter measure may reflect nonregulatory food acquisition. Systemic administration of dopamine-receptor blockers attenuates various instrumental responses maintained by food and water rewards; as with brain stimulation reward studies, specific procedures have been devised that eliminate motor impairment as a factor in this effect (see Wise, 1982 for a review). The dopamine system involved in this effect has not been established, and it is possible that either the nigrostriatal or the mesolimbic (or perhaps both) systems can influence responding for food and water.
Electrical stimulation of the lateral hypothalamus can elicit eating in food-satiated animals (see Wise, 1974 for a review), and stimulation-induced feeding is attenuated by dopamine-receptor blockade (Jenck, Gratton, & Wise, 1986; Phillips & Nikaido, 1975; Streather & Bozarth, 1987). Opioid microinjections into the ventral tegmental area facilitate stimulation-induced feeding (Jenck, Quirion, & Wise, 1987), and opioid microinjections into this region also elicit feeding in nonstimulated animals (Hamilton & Bozarth, 1988; Mucha & Iversen, 1986).
It appears that activation of the mesolimbic dopamine system is a sufficient but not a necessary condition for feeding. The dissociation of neural systems that constitute necessary and sufficient conditions for feeding is interesting and illustrates the importance of separate consideration of these two factors in the study of brain reward systems. The integrity of the nigrostriatal dopamine system appears to be necessary for the normal regulation of food intake, but electrical stimulation of the substantia nigra does not elicit feeding (Cioé & Mogenson, 1974; Phillips & Fibiger, 1973). In contrast, the integrity of the mesolimbic dopamine system is not necessary for the normal regulation of food intake, but activation of this system appears to elicit feeding.
Dopamine agonists enhance and dopamine antagonists attenuate male sexual behavior (Bitran & Hull, 1987). These effects are generally considered to be mediated by mechanisms in the preoptic area of the hypothalamus, but recent data suggest that the mesolimbic dopamine system may also be involved. Male sexual behavior is associated with increased dopamine release in the nucleus accumbens (Pfaus, Newton, Blaha, Fibiger, & Phillips, 1989), and opioid microinjections into the ventral tegmentum elicit sexual behavior in castrated male rats (Mitchell & Stewart, 1990). The potential dopaminergic involvement in opioid modulation of sexual behavior has not been determined, but the fact that ventral tegmental opioid application enhances nucleus accumbens dopamine function makes this possibility viable. As in the case of feeding behavior, however, the mesolimbic dopamine system does not appear to be necessary for normal sexual behavior, but activation of this system may be sufficient to enhance sexual responsiveness.
Numan (1988) has reviewed evidence that neural projections from the preoptic area to the ventral tegmentum are important in maternal behavior and that bilateral ventral tegmental lesions disrupt maternal behavior. Although the effects of dopamine-depleting lesions of the mesolimbic system have not been evaluated, systemic administration of a dopamine-receptor antagonist also disrupts maternal behavior. Furthermore, prolactin, which appears to be involved in the induction of maternal behavior (see Numan, 1988; Stern, 1989), has been reported to increase dopamine turnover in the nucleus accumbens (Fuxe, Eneroth, Gustafsson, Lofstrom, & Skett, 1977).
General Models of Motivation
Two general models have dominated motivational theory and will be briefly described here. Drive-reduction theory asserts that organisms are motivated by drives which ‘push’ the animal toward the goal object; the behavioral objective is to reduce the drive. Incentive motivational theory asserts that organisms are motivated by the incentive value of various stimuli (viz., attraction to the goal object) which ‘pull’ the animal toward the goal object; motivation is generated by the expectancy of reward. Both theories recognize the relevance of drive (or at least organismic conditions that may function as a drive-like process), but the role ascribed to drive is markedly different. Drive-reduction theory specifically asserts that drive states provide motivational energy and the organism seeks to diminish drive stimulation. Incentive motivational theory asserts that organismic conditions (e.g., drive-like stimuli) modulate the incentive value of various stimuli; for example, food deprivation (producing what is classically defined as a food drive) enhances the incentive attraction of various stimuli associated with food. The increased incentive value of these stimuli, in turn, functions to energize and direct the organism’s behavior toward the goal object. The primary motivation, however, is not to reduce the drive, but rather, to increase and maintain contact with the incentive stimuli. Drive reduction (e.g., food satiation) is associated with the termination of goal-directed behavior but only because the incentive stimuli are no longer able to motivate the organism without the appropriate drive-like condition.
Drive-Reduction Theory

The Hullian (e.g., 1943, 1952) derivation of drive-reduction theory postulated that separate energizing and directional mechanisms govern behavior. The independence of these mechanisms is illustrated by the various equations used to describe an organism’s behavior. For example:
sEr = D × sHr (functional equation from Hull, 1951)
The organism’s tendency to engage in a particular behavior (sEr: effective reaction potential) is a function of drive level (D: typically the number of hours of deprivation) and habit strength (sHr: number of reinforced trials); drive provides the motivational energy, while habit strength is a primary determinant of response direction. The variables drive and habit strength are independent, except that (i) the reduction of drive (Hull, 1943) or the reduction of drive-related stimuli (Hull, 1952) on previous trials reinforces stimulus-response associations (i.e., increases habit strength), and (ii) specific stimuli associated with a particular drive can increase the probability of certain behavioral responses. Nonetheless, separate mechanisms provide motivational energy and directionality for goal-directed behavior.
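The multiplicative form of Hull's equation can be sketched as a toy computation; the numeric values below are hypothetical illustrations for the sake of the example, not Hull's own empirically fitted scales:

```python
# Toy sketch of Hull's functional equation sEr = D x sHr (Hull, 1951).
# The numeric scaling here is hypothetical; Hull's full system used
# elaborate empirically derived functions for each variable.

def reaction_potential(drive: float, habit_strength: float) -> float:
    """Effective reaction potential (sEr) as the product of
    drive (D) and habit strength (sHr)."""
    return drive * habit_strength

# Because the relation is multiplicative, zero drive (or zero habit
# strength) yields zero tendency to respond, however large the other
# factor is.
print(reaction_potential(10.0, 5.0))  # 50.0
print(reaction_potential(0.0, 5.0))   # 0.0
```

The multiplicative (rather than additive) combination captures Hull's point that a well-learned habit produces no behavior in the absence of drive, and drive produces no directed behavior without an established habit.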
There are two features that must be present in the neural substrate mediating this version of drive-reduction theory—separate energizing and directional mechanisms and a drive process that is active when the organism is motivated and whose diminished activity reinforces responding. For the drive mechanism itself, neural substrate activity is decreased during the reward process, but this reduced drive substrate activity maintains responding until satiation.
Incentive Motivational Theory
Although Hull’s drive-reduction theory also recognized the influence of incentive factors on behavior, the behavioral objective of the organism remained the reduction of drive (Hull, 1943) or drive stimuli (Hull, 1952). The incentive properties of the goal object were viewed as modulating the organism’s behavior, but incentive was not considered a primary source of motivational energy nor an important factor in governing directionality.
Incentive motivational theory permits consideration of a unitary energizing and directional mechanism. Stimuli associated with the goal object energize behavior, and the organism is motivated to increase contact with these stimuli (through approach behavior) until the motivational condition is terminated by satiation. The neural substrate mediating reward is activated by the joint action of the incentive stimulus and the internal organismic condition (i.e., central motive state; see Bindra, 1974, 1978). The behavioral objective of the organism is to enhance or maintain activation of the reward substrate, not to reduce its activity as specified by drive-reduction theory.
Motivational Function of Reward Systems
The notion that organisms are motivated to decrease drive states produced by various biological needs is generally untenable. Most motivational theorists favor incentive motivational explanations of appetitive motivation (e.g., Bindra, 1969, 1974, 1976, 1978; Bolles, 1967, 1972, 1975; Toates, 1981), although aversively motivated tasks (e.g., avoidance and escape behavior) may be adequately explained by drive-reduction theory (see Spence, 1956, 1960). The reward substrate mediating this incentive motivational process could both energize and direct behavior: the organism is simply motivated to maximize activation of this reward substrate. Approach behavior would increase contact with relevant incentive stimuli, and this model postulates that forward locomotion is elicited by various appetitive stimuli.
A distinctive feature of this motivational model is that animals work to maximize activation of the reward substrate. Normally, this can only be achieved by increasing contact with relevant incentive stimuli; but if the animal could modify its own organismic condition (which in turn, can increase the central motive state and enhance reward substrate activation by incentive stimuli), then the animal would perform various tasks to increase this drive-like condition. Indeed, support for this notion has been obtained from studies showing that animals will perform an instrumental response to receive lateral hypothalamic stimulation (that elicits feeding) if food is concurrently available (e.g., Coons & Cruce, 1968; Mendelson, 1966; Streather, Bozarth, & Wise, 1985). This is the converse of drive-reduction theory which states that animals work to decrease organismic conditions with drive-like properties.
Incentive stimuli operate in concert with the organism’s internal state to engender approach behavior. When forward locomotion of the organism no longer continues to increase (or even maintain) activation of the reward system, then goal-directed movement ceases and the organism is functionally satiated. This condition may be associated with one of two extremely different situations—maximum-level or zero-level reward activation. The hedonic state associated with the former condition is gratification, while the hedonic state associated with the latter condition is amotivational satisfaction. Appetitive external stimuli will continue to summate with (or enhance) intrinsic reward substrate activation until satiation is reached; thus the organism will continue to be motivated to approach appetitive stimuli up to the point of maximum reward. Noncontingent activation of the reward system in the absence of relevant appetitive stimuli will elicit forward locomotion similar to Hull’s innate response tendencies (sOr): the organism is essentially seeking relevant stimuli. When the organism comes in contact with a relevant appetitive stimulus (e.g., food), further contact with that stimulus (e.g., consummatory response of eating) will enhance the reward activation. In this fashion, noncontingent/nonselective activation of certain reward processes can increase the effective incentive value of some appetitive stimuli thereby broadening the stimulus-generalization gradient. This permits marginally effective stimuli to direct behavior until more salient stimuli are available. Throughout this process, the organism’s behavioral objective is to maximize activation of the reward system. One simple mechanism that could direct behavior in this manner has been described by Milner (1970).
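The decision rule described above can be given a minimal computational sketch. This is entirely my construction under the chapter's stated assumptions: reward-substrate activation as the joint action of a stimulus's incentive value and the central motive state, with behavior directed toward whatever maximizes that activation.

```python
# Minimal sketch (illustrative, not the chapter's formalism): approach
# behavior as maximization of reward-substrate activation. Activation is
# modeled, hypothetically, as the product of a stimulus's incentive value
# and the organism's central motive state (drive-like condition).

def activation(incentive_value: float, motive_state: float) -> float:
    # Joint action of the external incentive stimulus and the
    # internal organismic condition.
    return incentive_value * motive_state

def approach_target(stimuli: dict, motive_state: float) -> str:
    # The organism approaches whichever available stimulus yields
    # the greatest activation of the reward substrate.
    return max(stimuli, key=lambda s: activation(stimuli[s], motive_state))

incentives = {"food_odor": 0.8, "neutral_tone": 0.1}  # hypothetical values
print(approach_target(incentives, motive_state=1.0))  # deprived animal: food cue wins
# At satiation (motive_state = 0.0) every stimulus yields zero activation,
# so no incentive stimulus can energize or direct approach behavior.
```

Note how a single quantity serves both functions that Hullian theory assigned to separate mechanisms: its magnitude energizes behavior, and its maximization over available stimuli directs it.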
The Mesolimbic Dopamine System as a Model Reward System
Research on brain reward mechanisms has developed to a stage where large amounts of empirical data appear to be explained by relatively few neural elements. Three distinct rewarding events—opioids, psychomotor stimulants, and brain stimulation reward—all seem to derive a major part of their reinforcing impact by the activation of a common brain reward system. This same brain system also appears to be involved in feeding, maternal behavior, and forward locomotion. A specific model reward system has been formulated on the basis of these data, and the following section will describe some of the characteristics of the proposed model.
Biological Modeling of Brain Reward Systems
The concept of biological modeling is an extension of the use of model systems to study various neurological and neurobehavioral events as well as phenomena in the natural sciences. A readily identifiable system is intensely studied, and the principles elucidated by the use of the model are later tested for generalization to other applications. Whether brain reward systems actually exist is a moot point: neural elements have been identified that behave like a brain reward system and that fulfill the criteria of a brain reward system. Specifically, activation of these neural elements, either by electrical or chemical stimulation, can selectively direct and reinforce behavior. Several distinct events can activate this system, and each appears to derive an important part of its rewarding impact through its action in this system. In addition, other, naturally motivated behaviors, such as feeding and sexual behavior, may be modulated by the activation of this system.
The use of the term biological modeling denotes that this model is not synthetically assembled in vitro, but rather, makes use of neural elements functioning normally, in situ. Selective activation or inhibition of these neural elements (i) can produce reward or (ii) blunt the rewarding impact of some stimuli, respectively. The utility of model construction is appreciated by the model builders, but other scientists who are seemingly content with collecting and cataloging apparently unrelated sets of empirical observations may be puzzled and ask "Why build a model?" In addition to the obvious use in organizing empirical observations, a model serves as an ideational testing device that can direct research aimed at determining the underlying mechanisms of behavior.
Intervening variables can be used to describe the relationship between unobservable events and behavior without exceeding the empirical data base. Model construction, however, relies more heavily on hypothetical constructs that attempt to describe the underlying mechanisms mediating the behavior; this process exceeds the directly observed empirical relationships and postulates the existence of specific not-presently-observed events. The primary usefulness of model construction lies in its heuristic value. Seemingly disparate events can sometimes be shown to share an underlying basis, when analysis supersedes the often superficial abstractive phase. For example, the marked differences in the general CNS effects of psychomotor stimulants and opiates obscured the common basis of their rewarding effects for most scientists. Adherents to the model that a drug’s effect on brain stimulation reward could reveal its rewarding action had sufficient reason to probe beyond the superficial dissimilarities and question whether a common neural basis might mediate these events. This tenacity (or naiveté) may have prompted an important breakthrough in the study of drug addiction.
Many models are formulated post hoc and a new model is seemingly fabricated with each new experiment. This fails to critically test the derivations of any model and voids the primary usefulness of model construction: the model is reduced to a convenient form of data summary and loses its most important attribute for directing new research.
Application of the Hypothetico-Deductive Method
One of the most rigorous attempts to develop a unifying theory of behavior based on an experimental analysis was that of Clark Hull (1943, 1952). The influence of Hull’s work on contemporary motivational psychology is unclear, and little survives from Hull’s extensive formulations and mathematical models. Several features of this approach to the study of behavior, however, may merit resurrection; in particular, his use and popularization of the hypothetico-deductive method of investigation.
This method begins "with a set of explicitly stated postulates accompanied by specific or ‘operational’ definitions of the critical terms employed (Hull, 1937)." Theorems are then derived from the postulates, and specific empirical tests are conducted to determine the validity of these predictions. This approach emphasizes deductive logic and the systematic construction of a theoretical system based on empirically substantiated theorems. "If the theorems agree with the observed facts, the system is probably true; if they disagree, the system is false (Hull, 1937)." Although space does not permit presentation of a formal system, this approach can be used to direct model construction.
Three factors must be kept in mind when reviewing the system described in the following section. First, the model addresses only appetitive motivation; considerable literature suggests an involvement of similar mechanisms in behavior governed by aversive stimuli, but no attempt has been made to integrate these data. Second, the mesolimbic dopamine system is postulated to function as a reward system, but a more comprehensive treatment of the available data would also consider other terminal projections of the ventral tegmental dopamine system as well as the nigrostriatal system. Restriction to the mesolimbic system will suffice to illustrate the basic approach, summarize much of the data, and postulate specific elements in a reward system. And third, for brevity, the following section omits considerable data relevant to the system being described. The current application of this method is only illustrative and is not intended to delineate a complete system. Rather, it is intended to show the direction dictated by this approach to theory development and scientific research.
Model construction proceeds by summarizing a series of empirical observations linking reward processes to mesolimbic dopamine function. Several distinctively different rewarding events appear to activate the mesolimbic dopamine system. Other, undefined commonalities may also exist, but these rewarding events are generally acknowledged as being dissimilar except for two properties: they are rewarding and they enhance mesolimbic dopamine function. This provides a convenient starting point for model construction as shown in Figure 1.
Specific neural events can be identified that are sequentially related to enhancement of mesolimbic dopamine function by these three stimulus events. [Figure 1 caption: Anatomical/functional linkages in the reward activation process.]
To disrupt the reward process, substrate activation can be inhibited at any point efferent to the initiating stimulus. This ‘prediction’ has a single restrictive condition—the rewarding effects of the stimulus must depend exclusively on activation of this system. The model does not postulate (i) that all rewards are mediated by the mesolimbic dopamine system or even that all rewards necessarily involve dopamine activation, (ii) that activation of the mesolimbic dopamine system by a stimulus excludes the involvement of other systems in the rewarding action of that stimulus, or (iii) that disruption of the mesolimbic dopamine system leads to the blockade of motivational effects, even for a single stimulus condition (viz., necessary vs. sufficient condition for reward).
The relationship between activation of the mesolimbic dopamine system and reward is postulated as a simple conditional function: if the mesolimbic system is activated, then reward is produced (A → R). It is not, however, biconditional (A ↔ R), which would assert that reward is produced if and only if the mesolimbic system is activated. The conclusion that reward is not produced if the mesolimbic system is not activated affirms the inverse of the above postulate; this is a logical fallacy (viz., inferring ~A → ~R from A → R, i.e., denying the antecedent). The model readily acknowledges that activation of the mesolimbic dopamine system may not be a necessary condition for reward.
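The asymmetry between the conditional and its inverse can be checked mechanically. This truth-table sketch is my illustration of the logical point, not part of the model itself:

```python
# Material implication: A -> R is false only when A is true and R is false.
def implies(p: bool, q: bool) -> bool:
    return (not p) or q

# Consider a world where the mesolimbic system is inactive (A = False)
# yet reward occurs through some other substrate (R = True):
A, R = False, True
assert implies(A, R)              # the postulate A -> R still holds here,
assert not implies(not A, not R)  # but the inverse ~A -> ~R fails: inferring
                                  # it from A -> R is denying the antecedent.
assert not (implies(A, R) and implies(R, A))  # the biconditional A <-> R also fails.
```

A single counterexample world of this kind is all the model needs in order to permit non-dopaminergic rewards while retaining the conditional postulate.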
Some of the theorems readily derived from this model include: (1) drugs activating the mesolimbic dopamine system are rewarding (see Wise & Bozarth, 1987); (2) drugs activating the mesolimbic dopamine system produce conditioned incentive effects (see Beninger, 1983; see also Stewart & de Wit, 1987); (3) drugs activating the mesolimbic dopamine system produce forward locomotion (see Wise & Bozarth, 1987); and (4) activation of the descending myelinated system (that transsynaptically activates the mesolimbic dopamine system) specifically enhances approach behaviors associated with appetitive stimuli (see Stellar, Brooks, & Mills, 1979). Each of these predictions has received partial empirical confirmation.
The model reward system developed in this chapter is obviously not intended to represent a complete system. Rather, an attempt is made to integrate a small segment of motivated behavior and to illustrate application of this method in behavioral neurobiology. The primary objective of this work is to establish a functional relationship between reward processes and activation of the mesolimbic dopamine system; the current objective is not to postulate the exact neural mechanisms underlying specific behaviors. Too much emphasis is sometimes placed on describing complex neural networks that could explain the relationship between two events if such a relationship were shown to exist.
In many respects there seems to have been very little theoretical progress in the past 30 years. The basic formulations outlined in this chapter have been previously suggested by various authors, although not necessarily assembled in the context used in this paper (e.g., Bindra, 1969, 1974, 1976; Bolles, 1972; 1975; Hull, 1943, 1952; Mendelson, 1966; Milner, 1970). But considerable progress has been achieved in the empirical realm; specific neural mechanisms are postulated to mediate the motivational processes previously described in vague terms. The hypothetical constructs of the 1940s and 1950s are being replaced with specific neural elements, and the black box has become translucent. The heuristic value of such specification and integration with basic motivational theory can only be established with further empirical work.
References

Andén, N.E., Dahlström, A., Fuxe, K., Larsson, K., Olson, L., & Ungerstedt, U. (1966). Ascending monoamine neurons to the telencephalon and diencephalon. Acta Physiologica Scandinavica 67: 313-326.
Arbuthnott, G.W., Crow, T.J., Fuxe, K., Olson, L. & Ungerstedt, U. (1970). Depletion of catecholamines in vivo induced by electrical stimulation of central monoamine pathways. Brain Research 24: 471-483.
Axelrod, J. (1970). Amphetamine: Metabolism, physiological disposition and its effects on catecholamine storage. In E. Costa and S. Garattini (Eds.), Amphetamine and related compounds (pp. 207-216). New York: Raven Press.
Beninger, R.J. (1983). The role of dopamine in locomotor activity and learning. Brain Research Reviews 6: 173-196.
Bielajew, C. & Shizgal, P. (1982). Behaviorally derived measures of conduction velocity in the substrate for rewarding medial forebrain bundle stimulation. Brain Research 237: 107-119.
Bielajew, C. & Shizgal, P. (1986). Evidence implicating descending fibers in self-stimulation of the medial forebrain bundle. Journal of Neuroscience 6: 919-929.
Bindra, D. (1969). The interrelated mechanisms of reinforcement and motivation and the nature of their influence on response. In W.J. Arnold and D. Levine (Eds.), Nebraska symposium on motivation (pp. 1-33). Lincoln: University of Nebraska Press.
Bindra, D. (1974). A motivational view of learning, performance, and behavior modification. Psychological Review 81: 199-213.
Bindra, D. (1976). A theory of intelligent behavior. New York: John Wiley & Sons.
Bindra, D. (1978). How adaptive behavior is produced: A perceptual-motivational alternative to response-reinforcement. Behavioral and Brain Sciences 1: 41-91.
Bitran, D. & Hull, E.M. (1987). Pharmacological analysis of male rat sexual behavior. Neuroscience & Biobehavioral Reviews 11: 365-389.
Bolles, R.C. (1967). Theory of motivation. New York: Harper & Row.
Bolles, R.C. (1972). Reinforcement, expectancy, and learning. Psychological Review 79: 394-409.
Bolles, R.C. (1975). Theory of motivation (2nd edition). New York: Harper & Row.
Bozarth, M.A. (1978). Intracranial self-stimulation as an index of opioid addiction liability: An evaluation. Unpublished master’s thesis, Rensselaer Polytechnic Institute.
Bozarth, M.A. (1983). Opiate reward mechanisms mapped by intracranial self-administration. In J.E. Smith and J.D. Lane (Eds.), Neurobiology of opiate reward processes (pp. 331-359). Amsterdam: Elsevier/North Holland Biomedical Press.
Bozarth, M.A. (1986). Neural basis of psychomotor stimulant and opiate reward: Evidence suggesting the involvement of a common dopaminergic substrate. Behavioural Brain Research 22: 107-116.
Bozarth, M.A. (1987a). Ventral tegmental reward system. In L. Oreland and J. Engel (Eds.), Brain reward systems and abuse (pp. 1-17). New York: Raven Press.
Bozarth, M.A. (Ed.) (1987b). Methods of assessing the reinforcing properties of abused drugs. New York: Springer-Verlag.
Bozarth, M.A. (1987c). Neuroanatomical boundaries of the reward-relevant opiate-receptor field in the ventral tegmental area as mapped by the conditioned place preference method in rats. Brain Research 414: 77-84.
Bozarth, M.A. (1988). Opioid reinforcement processes. In R.J. Rodgers and S.J. Cooper (Eds.), Endorphins, opiates and behavioural processes (pp. 53-75). London: John Wiley & Sons.
Bozarth, M.A. & Wise, R.A. (1981a). Intracranial self-administration of morphine into the ventral tegmental area in rats. Life Sciences 28: 551-555.
Bozarth, M.A. & Wise, R.A. (1981b). Heroin reward is dependent on a dopaminergic substrate. Life Sciences 28: 557-562.
Bozarth, M.A. & Wise, R.A. (1982). Localization of the reward-relevant opiate receptors. In L.S. Harris (Ed.), Problems of drug dependence, 1981 (pp. 158-164). Washington, DC: U.S. Government Printing Office.
Bozarth, M.A & Wise, R.A. (1983). Neural substrates of opiate reinforcement. Progress in Neuro-Psychopharmacology & Biological Psychiatry 7: 569-575.
Bozarth, M.A. & Wise, R.A. (1986). Involvement of the ventral tegmental dopamine system in opioid and psychomotor stimulant reinforcement. In L.S. Harris (Ed.), Problems of drug dependence, 1985 (pp. 190-196). Washington, DC: U.S. Government Printing Office.
Broekkamp, C.L.E. (1976). The modulation of rewarding systems in the animal brain by amphetamine, morphine, and apomorphine. Druk, The Netherlands: Stichting Studentenpers Nijmegen.
Broekkamp, C.L.E., Phillips, A.G. & Cools, A.R. (1979). Facilitation of self-stimulation behavior following intracerebral microinjections of opioids into the ventral tegmental area. Pharmacology Biochemistry & Behavior 11: 289- 295.
Broekkamp, C.L.E., Pijnenburg, A.J.J., Cools, A.R. & van Rossum, J.M. (1975). The effect of microinjections of amphetamine into the neostriatum and the nucleus accumbens on self-stimulation behavior. Psychopharmacology 42: 179-183.
Broekkamp, C.L.E., van den Bogaard, J.H., Heynen, H.J., Rops, R.H., Cools, A.R. & van Rossum, J.M. (1976). Separation of inhibiting and stimulating effects of morphine on self-stimulation behaviour by intracerebral microinjection. European Journal of Pharmacology 36: 443-446.
Broekkamp, C.L.E. & Van Rossum, J.M. (1975). The effect of microinjections of morphine and haloperidol into the neostriatum and the nucleus accumbens on self-stimulation behaviour. Archives Internationales de Pharmacodynamie et de Therapie 217: 110-117.
Carlsson, A. (1970). Amphetamine and brain catecholamines. In E. Costa and S. Garattini (Eds.), Amphetamine and related compounds (pp. 289-300). New York: Raven Press.
Cioé, J. & Mogenson, G.J. (1974). Effects of electrical stimulation and lesions in the region of the dorsal noradrenergic (NA) pathway on feeding behavior. Federation Proceedings 33: 342.
Coons, E.E. & Cruce, J.A.F. (1968). Lateral hypothalamus: Food and current intensity in maintaining self-stimulation of hunger. Science 159: 1117-1119.
Cooper, B.R., Black, W.C. & Paolino, R.M. (1971). Decreased septal-forebrain and lateral hypothalamic reward after alpha-methyl-p-tyrosine. Physiology & Behavior 6: 425-429.
Corbett, D. & Wise, R.A. (1980). Intracranial self-stimulation in relation to the ascending dopamine systems of the midbrain: A moveable electrode mapping study. Brain Research 185: 1-15.
Crow, T.J. (1970). Enhancement by cocaine of the intra-cranial self-stimulation in the rat. Life Sciences 9: 375-381.
Dahlström, A. & Fuxe, K. (1964). Evidence for the existence of monoamine-containing neurons in the central nervous system. I. Demonstration of monoamines in the cell bodies of brain stem neurons. Acta Physiologica Scandinavica 62 (Suppl. 232): 1-55.
Delgado, J.M.R., Roberts, W.W. & Miller, N.E. (1954). Learning motivated by electrical stimulation of the brain. American Journal of Physiology 179: 587- 593.
Deneau, G., Yanagita, T. & Seevers, M.H. (1969). Self-administration of psychoactive substances by the monkey. Psychopharmacologia 16: 30-48.
Depue, R.A. & Iacono, W.G. (1989). Neurobehavioral aspects of affective disorders. Annual Review of Psychology 40: 457-492.
de Wit, H. & Wise, R.A. (1977). Blockade of cocaine reinforcement in rats with the dopamine receptor blocker pimozide, but not with the noradrenergic blockers phentolamine or phenoxybenzamine. Canadian Journal of Psychology 31: 195-203.
Di Chiara, G. & Imperato, A. (1988). Drugs abused by humans preferentially increase synaptic dopamine concentrations in the mesolimbic system of freely moving rats. Proceedings of the National Academy of Sciences 85: 5274-5278.
Dworkin, S.I., Goeders, N.E. & Smith, J.E. (1986). The reinforcing and rate effects of intracranial dopamine administration. In L.S. Harris (Ed.), Problems of drug dependence, 1985 (pp. 242-248). Washington, DC: U.S. Government Printing Office.
Eidelberg, E. (1976). Possible actions of opiates upon synapses. Progress in Neurobiology 6: 81-102.
Esposito, R.U. & Kornetsky, C. (1978). Opioids and rewarding brain stimulation. Neuroscience & Biobehavioral Reviews 2: 115-122.
Esposito, R.U., Porrino, L.J. & Seeger, T.F. (1987). Brain stimulation reward: Measurement and mapping by psychophysical techniques and quantitative 2-[14C] deoxyglucose autoradiography. In M.A. Bozarth (Ed.), Methods of assessing the reinforcing properties of abused drugs (pp. 421-445). New York: Springer-Verlag.
Ettenberg, A., Pettit, H.O., Bloom, F.E. & Koob, G.F. (1982). Heroin and cocaine intravenous self-administration in rats: Mediation by separate neural systems. Psychopharmacology 78: 204-209.
Fibiger, H.C. (1978). Drugs and reinforcement mechanisms: A critical review of the catecholamine theory. Annual Review of Pharmacology & Toxicology 18: 37-56.
Fibiger, H.C. & Phillips, A.G. (1979). Dopamine and the neural mechanisms of reinforcement. In A.S. Horn, B.H.C. Westerink and J. Korf (Eds.), The neurobiology of dopamine (pp. 597-615). New York: Academic Press.
Fibiger, H.C. & Phillips, A.G. (1987). Role of catecholamine transmitters in brain reward systems: Implications for the neurobiology of affect. In J. Engel and L. Oreland (Eds.), Brain reward systems and abuse (pp. 61-74). New York: Raven Press.
Fibiger, H.C., Zis, A.P. & McGeer, E.G. (1973). Feeding and drinking deficits after 6-hydroxydopamine administration in the rat: Similarities to the lateral hypothalamic syndrome. Brain Research 55: 135-148.
Fitzsimons, J.T. (1971). The physiology of thirst: A review of the extraneural aspects of the mechanisms of drinking. In E. Stellar and J.M. Sprague (Eds.), Progress in physiological psychology, vol. 4 (pp. 119-201). New York: Academic Press.
Fouriezos, G., Hansson, P. & Wise, R.A. (1978). Neuroleptic-induced attenuation of brain stimulation reward in rats. Journal of Comparative and Physiological Psychology 92: 661-671.
Fouriezos, G. & Wise, R.A. (1976). Pimozide-induced extinction of intracranial self-stimulation: Response patterns rule out motor or performance deficits. Brain Research 103: 377-380.
Franklin, K.B.J. & McCoy, S.N. (1979). Pimozide-induced extinction in rats: Stimulus control of responding rules out motor deficit. Pharmacology Biochemistry & Behavior 11: 71-75.
Fuxe, K., Eneroth, P., Gustafsson, J.-A., Lofstrom, A. & Skett, P. (1977). Dopamine in the nucleus accumbens: Preferential increase of DA turnover by rat prolactin. Brain Research 122: 177-182.
Gallistel, C.R., Boytim, M., Gomita, Y. & Klebanoff, L. (1982). Does pimozide block the reinforcing effect of brain stimulation? Pharmacology Biochemistry & Behavior 17: 769-781.
Gallistel, C.R., Shizgal, P. & Yeomans, J.S. (1981). A portrait of the substrate for self-stimulation. Psychological Review 88: 228-273.
German, D.C. & Bowden, D.M. (1974). Catecholamine systems as the neural substrate for intracranial self-stimulation: A hypothesis. Brain Research 73: 381-419.
Glick, S.D. & Cox, R.D. (1977). Changes in morphine self-administration after brainstem lesions in rats. Psychopharmacology 52: 151-156.
Goeders, N.E. & Smith, J.E. (1983). Cortical dopaminergic involvement in cocaine reinforcement. Science 221: 773-775.
Gratton, A. & Wise, R.A. (1985). Hypothalamic reward mechanism: Two first-stage fiber populations with a cholinergic component. Science 227: 545-548.
Gysling, K. & Wang, R.Y. (1983). Morphine-induced activation of A10 dopamine neurons in the rat. Brain Research 277: 119-127.
Hamilton, M.E. & Bozarth, M.A. (1988). Feeding elicited by dynorphin 1-13 microinjections into the ventral tegmental area in rats. Life Sciences 43: 941-946.
Heikkila, R.E., Orlansky, H. & Cohen, G. (1975). Studies on the distinction between uptake inhibition and release of [3H]dopamine in rat brain tissue slices. Biochemical Pharmacology 24: 847-852.
Hess, W.R. (1949). Das Zwischenhirn. Basel: Schwabe.
Hoebel, B.G., Monaco, A.P., Hernandez, L., Aulisi, E.F., Stanley, B.G. & Lenard, L. (1983). Self-injection of amphetamine directly into the brain. Psychopharmacology 81: 158-163.
Holmes, L.J., Bozarth, M.A. & Wise, R.A. (1983). Circling from intracranial morphine applied to the ventral tegmental area in rats. Brain Research Bulletin 11: 295-298.
Hull, C.L. (1937). Mind, mechanism, and adaptive behavior. Psychological Review 44: 1-32.
Hull, C.L. (1943). Principles of behavior: An introduction to behavior theory. New York: Appleton-Century-Crofts.
Hull, C.L. (1951). Essentials of behavior. New Haven: Yale University Press.
Hull, C.L. (1952). A behavior system. New Haven: Yale University Press.
Jenck, F., Gratton, A. & Wise, R.A. (1986). Effects of pimozide and naloxone on latency for hypothalamically induced eating. Brain Research 375: 329-337.
Jenck, F., Quirion, R. & Wise, R.A. (1987). Opioid receptor subtypes associated with ventral tegmental facilitation and periaqueductal gray inhibition of feeding. Brain Research 423: 39-44.
Joyce, E.M. & Iversen, S.D. (1979). The effect of morphine applied locally to mesencephalic dopamine cell bodies on spontaneous motor activity in the rat. Neuroscience Letters 14: 207-212.
Kelley, A.E. & Stinus, L. (1985). Disappearance of hoarding behavior after 6-hydroxydopamine lesions of the mesolimbic dopamine neurons and its reinstatement with L-dopa. Behavioral Neuroscience 99: 531-545.
Kofman, O. & Yeomans, J.S. (1989). Cholinergic antagonists in ventral tegmentum elevate thresholds for lateral hypothalamic and brainstem self-stimulation. Pharmacology Biochemistry & Behavior 31: 547-559.
Koob, G.F., Fray, P.J. & Iversen, S.D. (1978). Self-stimulation at the lateral hypothalamus and locus coeruleus after specific unilateral lesions of the dopamine system. Brain Research 146: 123-140.
Kornetsky, C., Esposito, R.U., McLean, S. & Jacobson, J.O. (1979). Intracranial self-stimulation thresholds: A model for the hedonic effects of drugs of abuse. Archives of General Psychiatry 36: 289-292.
Le Moal, M., Stinus, L., Simon, H., Tassin, J.P., Thierry, A.M., Blanc, G., Glowinski, J. & Cardo, B. (1977). Behavioral effects of a lesion in the ventral mesencephalic tegmentum: Evidence for involvement of A10 dopaminergic neurons. In E. Costa and G.L. Gessa (Eds.), Advances in Biochemical Psychopharmacology (pp. 237-245). New York: Raven Press.
Lindvall, O. & Björklund, A. (1974). The organization of the ascending catecholamine neuron systems in the rat brain. Acta Physiologica Scandinavica Suppl. 412: 1-48.
Lippa, A.S., Antelman, S.M., Fisher, A.E. & Canfield, D.R. (1973). Neurochemical mediation of reward: A significant role for dopamine? Pharmacology Biochemistry & Behavior 1: 23-28.
Lyness, W.H., Friedle, N.M. & Moore, K.E. (1979). Destruction of dopaminergic nerve terminals in nucleus accumbens: Effect on d-amphetamine self-administration. Pharmacology Biochemistry & Behavior 11: 553-556.
Marshall, J.F. & Teitelbaum, P. (1973). A comparison of the eating in response to hypothermic and glucoprivic challenges after nigral 6-hydroxydopamine and lateral hypothalamic electrolytic lesions in rats. Brain Research 55: 229-233.
Martin-Iverson, M.T., Szostak, C. & Fibiger, H.C. (1986). 6-hydroxydopamine lesions of the medial prefrontal cortex fail to influence intravenous self-administration of cocaine. Psychopharmacology 88: 310-314.
Matthews, R.T. & German, D.C. (1984). Electrophysiological evidence for excitation of rat ventral tegmental area dopamine neurons by morphine. Neuroscience 11: 617-625.
Mendelson, J. (1966). Role of hunger in t-maze learning for food by rats. Journal of Comparative and Physiological Psychology 62: 341-349.
Milner, P.M. (1970). Physiological psychology. New York: Holt, Rinehart, & Winston.
Mitchell, J.B. & Stewart, J. (1990). Facilitation of sexual behaviors in the male rat associated with intra-VTA injections of opiates. Pharmacology Biochemistry & Behavior 33: 643-650.
Mogenson, G.J., Jones, D.L. & Yim, C.Y. (1980). From motivation to action: Functional interface between the limbic system and the motor system. Progress in Neurobiology 14: 69-97.
Mogenson, G.J. & Phillips, A.G. (1976). Motivation: A psychological construct in search of a physiological substrate. In J.M. Sprague and A.N. Epstein (Eds.), Progress in psychobiology and physiological psychology, vol. 6 (pp. 189-243). New York: Academic Press.
Mogenson, G.J., Takigawa, M., Robertson, A. & Wu, M. (1979). Self-stimulation of the nucleus accumbens and ventral tegmental area of Tsai attenuated by microinjections of spiroperidol into the nucleus accumbens. Brain Research 171: 247-259.
Mora, F., Sanguinetti, A.M., Rolls, E.T. & Shaw, S.G. (1975). Differential effects on self-stimulation and motor behavior produced by microintracranial injections of a dopamine-receptor blocking agent. Neuroscience Letters 1: 179-184.
Mucha, R.F. & Iversen, S.D. (1986). Increased food intake after opioid microinjections into nucleus accumbens and ventral tegmental area of rat. Brain Research 397: 214-224.
Numan, M. (1988). Maternal behavior. In E. Knobil and J. Neill (Eds.), The physiology of reproduction (pp. 1569-1645). New York: Raven Press.
Olds, J. (1962). Hypothalamic substrates of reward. Physiological Review 42: 554-604.
Olds, J. (1977). Drives and reinforcements: Behavioral studies of hypothalamic functions. New York: Raven Press.
Olds, J. & Milner, P. (1954). Positive reinforcement produced by electrical stimulation of septal area and other regions of rat brain. Journal of Comparative and Physiological Psychology 47: 419-427.
Olds, M.E. & Olds, J. (1963). Approach-avoidance analysis of the rat diencephalon. Journal of Comparative Neurology 120: 259-295.
Palkovits, M. & Jacobowitz, D.M. (1974). Topographic atlas of catecholamine and acetylcholinesterase-containing neurons in the rat brain. II. Hindbrain (mesencephalon, rhombencephalon). Journal of Comparative Neurology 157: 29-42.
Pettit, H.O., Ettenberg, A., Bloom, F.E. & Koob, G.F. (1984). Destruction of dopamine in the nucleus accumbens selectively attenuates cocaine but not heroin self-administration in rats. Psychopharmacology 84: 167-173.
Pfaus, J.G., Newton, T.N., Blaha, C.D., Fibiger, H.C. & Phillips, A.G. (1989). Electrochemical detection of central dopamine efflux during sexual activity in male rats. Society for Neuroscience Abstracts 15: 558.
Phillips, A.G. (1984). Brain reward circuitry: A case for separate systems. Brain Research Bulletin 12: 195-201.
Phillips, A.G., Blaha, C.D. & Fibiger, H.C. (1989). Neurochemical correlates of brain-stimulation reward measured by ex vivo and in vivo analyses. Neuroscience & Biobehavioral Reviews 13: 99-104.
Phillips, A.G. & Fibiger, H.C. (1973). Substantia nigra: Self-stimulation and poststimulation feeding. Physiological Psychology 1: 233-236.
Phillips, A.G. & Fibiger, H.C. (1979). Decreased resistance to extinction after haloperidol: Implications for the role of dopamine in reinforcement. Pharmacology Biochemistry & Behavior 10: 751-760.
Phillips, A.G. & Fibiger, H.C. (1989). Neuroanatomical bases of intracranial self-stimulation: Untangling the Gordian knot. In J.M. Liebman and S.J. Cooper (Eds.), The neuropharmacological basis of reward (pp. 66-105). Oxford: Oxford University Press.
Phillips, A.G. & LePiane, F.G. (1980). Reinforcing effects of morphine microinjection into the ventral tegmental area. Pharmacology Biochemistry & Behavior 12: 965-968.
Phillips, A.G., Mora, F. & Rolls, E.T. (1981). Intracerebral self-administration of amphetamine by rhesus monkeys. Neuroscience Letters 24: 81-86.
Phillips, A.G. & Nikaido, R.S. (1975). Disruption of brain stimulation-induced feeding by dopamine receptor blockade. Nature 258: 750-751.
Phillips, A.G., Spyraki, C. & Fibiger, H.C. (1982). Conditioned place preference with amphetamine and opiates as reward stimuli: Attenuation by haloperidol. In B.G. Hoebel and D. Novin (Eds.), The neural basis of feeding and reward (pp. 455-464). Brunswick, ME: Haer Institute.
Reid, L.D. (1967). Reinforcement from direct stimulation of the brain. Unpublished doctoral dissertation, University of Utah.
Reid, L.D. (1987). Tests involving pressing for intracranial stimulation as an early procedure for screening likelihood of addiction of opioids and other drugs. In M.A. Bozarth (Ed.), Methods of assessing the reinforcing properties of abused drugs (pp. 391-420). New York: Springer-Verlag.
Reid, L.D. & Bozarth, M.A. (1978). Addictive agents and intracranial stimulation (ICS): The effects of various opioids on pressing for ICS. Problems of Drug Dependence: 729-741.
Roberts, D.C.S., Corcoran, M.E. & Fibiger, H.C. (1977). On the role of ascending catecholaminergic systems in intravenous self-administration of cocaine. Pharmacology Biochemistry & Behavior 6: 615-620.
Roberts, D.C.S. & Koob, G.F. (1982). Disruption of cocaine self-administration following 6-hydroxydopamine lesions of the ventral tegmental area in rats. Pharmacology Biochemistry & Behavior 17: 901-904.
Roberts, D.C.S., Koob, G.F., Klonoff, P. & Fibiger, H.C. (1980). Extinction and recovery of cocaine self-administration following 6-hydroxydopamine lesions of the nucleus accumbens. Pharmacology Biochemistry & Behavior 12: 781-787.
Roberts, D.C.S. & Zito, K.A. (1987). Interpretation of lesion effects on stimulant self-administration. In M.A. Bozarth (Ed.), Methods of assessing the reinforcing properties of abused drugs (pp. 87-103). New York: Springer-Verlag.
Robertson, A. (1989). Multiple reward systems and the prefrontal cortex. Neuroscience & Biobehavioral Reviews 13: 163-170.
Schwartz, A.S. & Marchok, P.L. (1974). Depression of morphine-seeking behaviour by dopamine inhibition. Nature 248: 257-258.
Schwartz, J.C., Pollard, H., Llorens, C., Malfroy, B., Gros, C., Pradelles, Ph. & Dray, F. (1978). Endorphins and endorphin receptors in striatum: Relationships with dopaminergic neurons. Advances in Biochemical Psychopharmacology 18: 245-264.
Shizgal, P. (1989). Toward a cellular analysis of intracranial self-stimulation: Contribution of collision studies. Neuroscience & Biobehavioral Reviews 13: 81-90.
Spence, K.W. (1956). Behavior theory and conditioning. New Haven: Yale University Press.
Spence, K.W. (1960). Behavior theory and learning. Englewood Cliffs, NJ: Prentice-Hall.
Spyraki, C., Fibiger, H.C. & Phillips, A.G. (1983). Attenuation of heroin reward in rats by disruption of the mesolimbic dopamine system. Psychopharmacology 79: 278-283.
Stein, L. (1962). Effects and interactions of imipramine, chlorpromazine, reserpine and amphetamine on self-stimulation: Possible neuro-physiological basis of depression. In J. Wortis (Ed.), Recent advances in biological psychiatry (pp. 288-308). New York: Academic Press.
Stein, L. (1964). Reciprocal action of reward and punishment mechanisms. In R.G. Heath (Ed.), The role of pleasure in behavior (pp. 113-139). New York: Harper & Row.
Stein, L. & Belluzzi, J.D. (1987). Reward transmitters and drugs of abuse. In J. Engel and L. Oreland (Eds.), Brain reward systems and abuse (pp. 19-33). New York: Raven Press.
Stellar, J.R., Brooks, F.H. & Mills, L.E. (1979). Approach and withdrawal analysis of the effects of hypothalamic stimulation and lesions in rats. Journal of Comparative and Physiological Psychology 93: 446-466.
Stellar, J.R. & Corbett, D. (1989). Regional neuroleptic microinjections indicate a role for nucleus accumbens in lateral hypothalamic self-stimulation reward. Brain Research 477: 126-143.
Stellar, J.R. & Stellar, E. (1985). The neurobiology of motivation and reward. New York: Springer-Verlag.
Stern, J.M. (1989). Maternal behavior: Sensory, hormonal, and neural determinants. In F.R. Brush and S. Levine (Eds.), Psychoendocrinology (pp. 105-226). San Diego: Academic Press.
Stewart, J. & de Wit, H. (1987). Reinstatement of drug-taking behavior as a method of assessing incentive motivational properties of drugs. In M.A. Bozarth (Ed.), Methods of assessing the reinforcing properties of abused drugs (pp. 211-227). New York: Springer-Verlag.
Stinus, L. & Thierry, A.-M. (1973). Self-stimulation and catecholamines. II. Blockade of self-stimulation by treatment with alpha-methylparatyrosine and the reinstatement by catecholamine precursor administration. Brain Research 64: 189-198.
Streather, A. & Bozarth, M.A. (1987). Effect of dopamine-receptor blockade on stimulation-induced feeding. Pharmacology Biochemistry & Behavior 27: 521-524.
Streather, A., Bozarth, M.A. & Wise, R.A. (1982). Instrumental responding in the absence of drive: The role of reward-expectancy. Paper presented to the 43rd Annual Meeting of the Canadian Psychological Association, Montréal.
Toates, F.M. (1981). The control of ingestive behaviour by internal and external stimuli: A theoretical review. Appetite 2: 35-50.
Tolman, E.C. (1932). Purposive behavior in animals and men. New York: Century.
Trowill, J.A., Panksepp, J. & Gandelman, R. (1969). An incentive model of rewarding brain stimulation. Psychological Review 76: 264-281.
Ungerstedt, U. (1971a). Stereotaxic mapping of the monoamine pathways in the rat brain. Acta Physiologica Scandinavica Suppl. 367: 1-48.
Ungerstedt, U. (1971b). Adipsia and aphagia after 6-hydroxydopamine induced degeneration of the nigro-striatal dopamine system. Acta Physiologica Scandinavica Suppl. 367: 95-122.
van Ree, J.M. & De Wied, D. (1980). Involvement of neurohypophyseal peptides in drug-mediated adaptive responses. Pharmacology Biochemistry & Behavior 13 (Suppl. 1): 257-263.
Wauquier, A. & Niemegeers, C.J.E. (1972). Intracranial self-stimulation in rats as a function of various stimulus parameters. II. Influence of haloperidol, pimozide, and pipamperone on medial forebrain bundle stimulation with monopolar electrodes. Psychopharmacologia 27: 191-202.
Weeks, J.R. & Collins, R.J. (1987). Screening for drug reinforcement using intravenous self-administration in the rat. In M.A. Bozarth (Ed.), Methods of assessing the reinforcing properties of abused drugs (pp. 35-43). New York: Springer-Verlag.
Welzl, H., Kuhn, G. & Huston, J.P. (1989). Self-administration of small amounts of morphine through glass micropipettes into the ventral tegmental area of the rat. Neuropharmacology 28: 1017-1023.
Westerink, B.H.C. (1978). Effects of centrally acting drugs on regional dopamine metabolism. In P.J. Roberts, G.N. Woodruff, and L.L. Iversen (Eds.), Advances in biochemical psychopharmacology (pp. 35-43). New York: Springer-Verlag.
White, N.M. (1989). Reward or reinforcement: What’s the difference? Neuroscience & Biobehavioral Reviews 13: 181-186.
White, N.M., Messier, C. & Carr, G.D. (1987). Operationalizing and measuring the organizing influence of drugs on behavior. In M.A. Bozarth (Ed.), Methods of assessing the reinforcing properties of abused drugs (pp. 591-617). New York: Springer-Verlag.
Wise, R.A. (1974). Lateral hypothalamic stimulation: Does it make animals 'hungry'? Brain Research 67: 187-209.
Wise, R.A. (1978). Catecholamine theories of reward: A critical review. Brain Research 152: 215-247.
Wise, R.A. (1980a). The dopamine synapse and the notion of ‘pleasure centers’ in the brain. TINS April: 91-94.
Wise, R.A. (1980b). Action of drugs of abuse on brain reward systems. Pharmacology Biochemistry & Behavior 13: 213-223.
Wise, R.A. (1982). Neuroleptics and operant behavior: The anhedonia hypothesis. Behavioral and Brain Sciences 5: 39-88.
Wise, R.A. (1989). The brain and reward. In J.M. Liebman and S.J. Cooper (Eds.), The neuropharmacological basis of reward (pp. 377-424). Oxford: Oxford University Press.
Wise, R.A. & Bozarth, M.A. (1982). Action of drugs of abuse on brain reward systems: An update with specific attention to opiates. Pharmacology Biochemistry & Behavior 17: 239-243.
Wise, R.A. & Bozarth, M.A. (1984). Brain reward circuitry: Four elements "wired" in apparent series. Brain Research Bulletin 12: 203-208.
Wise, R.A. & Bozarth, M.A. (1987). A psychomotor stimulant theory of addiction. Psychological Review 94: 469-492.
Wise, R.A. & Rompré, P.-P. (1989). Brain dopamine and reward. Annual Review of Psychology 40: 191-225.
Wood, P.L. (1983). Opioid regulation of CNS dopaminergic pathways: A review of methodology, receptor types, regional variations and species differences. Peptides 4: 595-601.
Yeomans, J.S. (1979). Absolute refractory periods of self-stimulation neurons. Physiology & Behavior 22: 911-919.
Yeomans, J.S. (1982). The cells and axons mediating medial forebrain bundle reward. In B.G. Hoebel and D. Novin (Eds.), The neural basis of feeding and reward (pp. 405-417). Brunswick, ME: Haer Institute.
Yeomans, J.S. (1989). Two substrates for medial forebrain bundle self-stimulation: Myelinated axons and dopamine axons. Neuroscience & Biobehavioral Reviews 13: 91-98.
Yeomans, J.S., Kofman, O. & McFarlane, V. (1985). Cholinergic involvement in lateral hypothalamic rewarding brain stimulation. Brain Research 329: 19-26.
Yokel, R.A. (1987). Intravenous self-administration: Response rates, the effects of pharmacological challenges, and drug preferences. In M.A. Bozarth (Ed.), Methods of assessing the reinforcing properties of abused drugs (pp. 1-35). New York: Springer-Verlag.
Yokel, R.A. & Wise, R.A. (1975). Increased lever pressing for amphetamine after pimozide in rats: Implications for a dopamine theory of reward. Science 187: 547-549.
Yokel, R.A. & Wise, R.A. (1976). Attenuation of intravenous amphetamine reinforcement by central dopamine blockade in rats. Psychopharmacology 48: 311-318.