Perspectives Functional Assessment of Problem Behavior: Dispelling Myths
Description
Activity
Read the article "Functional Assessment of Problem Behavior: Dispelling the Myths, Overcoming Implementation Obstacles, and Developing New Lore" by Hanley (2012).
Write at least four questions to ask a BCBA about their practice of using functional analyses (FAs); refer to the Hanley (2012) article for ideas about the kinds of questions you may want to ask.
A Sample Answer for the Assignment
Functional Assessment of Problem Behavior: Dispelling Myths, Overcoming Implementation Obstacles, and Developing New Lore

Gregory P. Hanley, Western New England University

ABSTRACT
Hundreds of studies have shown the efficacy of treatments for problem behavior based on an understanding of its function. Assertions regarding the legitimacy of different types of functional assessment vary substantially across published articles, and best practices regarding the functional assessment process are sometimes difficult to cull from the empirical literature or from published discussions of the behavioral assessment process. A number of myths regarding the functional assessment process, which appear to be pervasive within different behavior-analytic research and practice communities, will be reviewed in the context of an attempt to develop new lore regarding the functional assessment process. Frequently described obstacles to implementing a critical aspect of the functional assessment process, the functional analysis, will be reviewed in the context of solutions for overcoming them. Finally, the aspects of the functional assessment process that should be exported to others versus those features that should remain the sole technological property of behavior analysts will be discussed.

Keywords: autism, descriptive assessment, functional analysis, functional assessment, indirect assessment, intellectual disabilities, open-ended interviews, problem behavior

After a conversation with Timothy Vollmer, one of my graduate school professors at the time, in which we discussed the subtle differences in the manner in which we had learned to conduct functional assessments of severe problem behavior, we concluded that a paper describing functional assessment “lab lore” would be important and well received by those who routinely conducted functional assessments.
By “lab lore” we were referring to the commitments people had to the various strategies and tactics involved in the process of figuring out why someone was engaging in severe problem behavior. My graduate school advisor, Brian Iwata, suggested that rather than focus on lore I focus on detecting the different functional assessment commitments by reviewing the literature base that existed. These collective interactions eventually led to a review of functional analysis procedures being published several years later (Hanley, Iwata, & McCord, 2003). The 277 articles aggregated in that review, along with the hundreds that have been published since 2000, are the primary reasons practitioners are able to conduct effective functional assessments of problem behavior. Much has been learned from the functional assessment research base. Nevertheless, best practices regarding the functional assessment process are sometimes difficult to cull from this massive empirical literature. I never forgot about the idea of contributing an article that attempted to answer questions that arose when one put down an empirical study and attempted to conduct a functional assessment. This article is an attempt to fill in the gaps that exist between how the functional assessment process is described in published research articles and book chapters and how it probably should be practiced, at least from my perspective. This perspective piece is not merely a set of opinions, however; it is a review of relevant existing literature synthesized with my own practice commitments. Some readers may disagree with particular assertions in this paper and lament that an assertion may not be followed by an empirical reference. I do include references when a satisfactory analysis has been conducted, but I admit that some of my assertions have developed through both experience conducting functional assessments and from my own conceptual interpretation of existing analyses.
There are still many important questions to be asked about the manner in which problem behavior is understood prior to treating it, and I look forward to reading and hopefully conducting some of that research, but practitioners cannot wait for this next generation of studies to be conducted. They need to know what to do today when given the opportunity to help a family or teacher address the severe problem behavior of a person in their care. I hope that this paper (originally published in Behavior Analysis in Practice, 5(1), 54–72) will help practitioners develop their own set of commitments regarding the functional assessment process and perhaps also stimulate some important future research if an assertion occasions skepticism from those who have different commitments.

Some Rationales for Conducting a Functional Assessment

What is a functional assessment of problem behavior? Despite the availability of a variety of functional assessment forms, you can’t hold it in your hand—it is a process that involves a lot of highly discriminated, professional behavior. More precisely, it is a process by which the variables influencing problem behavior are identified. Why engage in the process? Because it allows you to identify an effective treatment for severe problem behavior. Behavior modification has been used effectively for many years to address problem behavior, especially of those with autism or intellectual disabilities (e.g., Hall et al., 1972; Risley, 1968). So you may be thinking, why conduct a functional assessment of problem behavior? In other words, assigning powerful but arbitrary reinforcers for not engaging in problem behavior or for behavior incompatible with problem behavior and assigning powerful punishers to problem behavior (i.e., modifying behavior) can effectively treat problem behavior, so why bother conducting a functional assessment at all? There are practical reasons; doing so increases treatment precision and efficacy.
In other words, doing so identifies treatments that work and that can be practically implemented (as illustrated in Carr & Durand, 1985; Iwata, Pace, Cowdery, & Miltenberger, 1994; Meyer, 1999; Newcomer & Lewis, 2004; Taylor & Miller, 1997). There is an equally important humanistic reason for doing so; conducting a functional assessment dignifies the treatment development process by essentially “asking” the person why he or she is engaging in problem behavior prior to developing a treatment. Behavior modification, or programming powerful but arbitrary reinforcers and punishers without first recognizing the unique history of the person being served or the prevailing contingencies he or she is experiencing, is somewhat inconsiderate. It is like saying, “I don’t know why you have been behaving in that extraordinary manner, but it does not matter because I can change your behavior…” By contrast, a behavior-analytic approach, with functional assessment at its core, essentially communicates: “I don’t know why you have been behaving in that extraordinary manner, but I will take some time to find out why and incorporate those factors into all attempts to change your behavior.” To drive this point home, let’s do some perspective taking. Imagine that you experienced some temporary muscle paralysis that does not allow you to talk, write, or engage in controlled motor movements. You are now hospitalized and on several medications that have the common side effect of drying out your eyes, nose, skin, and, especially, your mouth. Water is viewable on the rolling table, but unattainable due to your lack of dexterity. You learn that if you bang the bed rails with the back of your hands long enough and loud enough, people will come to you and do things for you, like turning the television on or off or fluffing your pillows, or give you things, one of which is the water that you desperately need.
Due to its functionality, the banging continues to such an extent that the backs of your hands are bruised and your care providers annoyed. The consulting behavior modifier shows up and recommends a program of contingent restraint with Posey® mitts “to ensure your safety” and access to music and some Skittles when you are not banging. Your problem behavior occurs much less frequently. It doesn’t go away, but your bruises are healing, and the staff is certainly less annoyed with you. Job well done by the behavior modifier? I doubt you think so. If there were a process available to allow your care providers to know the simple reason why you were hurting yourself and annoying them, wouldn’t you want it employed? Wouldn’t it have been nice to just be able to push a button that requested assistance obtaining water at any given moment (or perhaps simply have access to a long straw!)? The functional assessment process makes these humane and practical outcomes possible. So let’s return to the earlier question of why conduct a functional assessment and provide a better answer: Behavior analysts should do it to identify effective, precise, personally relevant, and humane treatments for problem behavior (see Hanley, 2010, 2011, for additional reasons for conducting analyses).

Defining the Parts of the Process

Before I discuss some myths and isolate some good practices regarding the functional assessment process, it is important to define the three main types of functional assessment. With an indirect assessment, there is no direct observation of behavior; indirect assessments take the form of rating scales, questionnaires, and interviews (e.g., Durand & Crimmins, 1985; Paclawskyj, Matson, Rush, Smalls, & Vollmer, 2000).
With a descriptive assessment,1 there is direct observation of behavior, but without any manipulation of the environmental conditions (Bijou, Peterson, & Ault, 1968; Lalli, Browder, Mace, & Brown, 1993; Lerman & Iwata, 1993; Mace & Lalli, 1991; Sasso et al., 1992; Vollmer, Borrero, Wright, Van Camp, & Lalli, 2001). This is the “fly on the wall” assessment, which takes multiple forms like A-B-C recording and narrative recording (Bijou et al.). With a functional analysis,2 there is direct observation of behavior and manipulation of some environmental event (see Iwata, Dorsey, Slifer, Bauman, & Richman, 1982/1994, for the seminal example; see Hanley et al., 2003, for an expanded definition and a review of these procedures). These three types are all functional assessments; the term functional analysis is employed only when some aspect of the environment is systematically altered while problem behavior is being directly observed.

[Footnote 1: Because there is no manipulation of the environment when a descriptive assessment is conducted, the term descriptive assessment, and not descriptive analysis, is used here because, as Baer, Wolf, and Risley (1968) noted, “a nonexperimental analysis is a contradiction in terms” (p. 92).]

Reconsidering the General Approach to Functional Assessment

The necessity or utility of a least restrictive hierarchical approach to conducting functional assessment has not been proven, although it is apparent in practice and described (Mueller & Nkosi, 2006; O’Neill, Horner, Albin, & Storey, 1997) or implied (Iwata & Dozier, 2008; McComas & Mace, 2000) in book chapters or discussion articles regarding the functional assessment of severe problem behavior. The myth goes something like this: Start the functional assessment process with an indirect assessment. If you are not confident in the results, conduct a descriptive assessment.
If you still have competing hypotheses regarding the variables controlling behavior, then conduct a standard functional analysis. Like all things based on a least effort hierarchy, this process has intuitive appeal, but there are several reasons why behavior analysts should reconsider their commitment to this assessment hierarchy. The first is that closed-ended indirect assessments (e.g., Motivation Assessment Scale [MAS], Questions About Behavior Function [QABF]) are notoriously unreliable; when two people who have a history with the person engaging in problem behavior are asked to complete a rating scale, analyses of their responses usually yield different behavioral functions (see Newton & Sturmey, 1991; Nicholson, Konstantinidi, & Furniss, 2006; Shogren & Rojahn, 2003; and Zarcone, Rodgers, Iwata, Rourke, & Dorsey, 1991, for some analyses of the reliability of closed-ended indirect assessments; see Hanley, 2010, for a more in-depth discussion of the reliability of these instruments). Without reliability, there is no validity, meaning that there is no opportunity to determine whether the function of behavior identified by these instruments is correct. Closed-ended indirect assessments are likely preferred because quantifiable results can be obtained quickly, and documentation regarding behavior function is created and can be easily filed or shared at an interdisciplinary meeting. Behavior analysts can probably save a little time and be no worse off by simply omitting closed-ended indirect assessments from the functional assessment process. At the start of the functional assessment process, behavior analysts should indeed talk to the people who have most often interacted with the person engaging in the problem behavior. But, instead of presenting generic scenarios and asking for numerical or yes/no answers (i.e., the substance of closed-ended assessments), the behavior analyst should ask questions that allow caregivers and teachers to describe in detail what happens before and after severe problem behavior occurs. These sorts of interviews are known as semistructured and open-ended interviews. The appendix at the end of this article contains an example of this sort of interview that allows behavior analysts to discover common, as well as unique, variables that may evoke or maintain problem behavior. Because of the likely unreliability of interviews, including the one in the appendix, treatments should typically not be designed based solely on the results of these interviews; instead, functional analyses are to be designed from the interview results. An open-ended interview allows behavior analysts to discover prevalent variables that may be further examined and possibly demonstrated as important via functional analyses.

[Footnote 2: I prefer the term functional analysis to experimental analysis and to experimental functional analysis in both practice and in science in general because of the very different effects “function” and “experimental” have on the listener. Function can be understood in a mathematical sense, but more importantly, it also conveys the operant or adaptive nature of the response being analyzed, which has obvious importance in the context of behavioral assessment (see Hanley et al., 2003; and Hineline & Groeling, 2010). The term experimental does not convey this latter meaning, and instead erroneously conveys that the procedures being implemented are in a sort of trial phase, awaiting a proper analysis of their utility, as in an experimental medication. In addition, considering the quote from Baer et al. included in the footnote above, experimental analysis is redundant.]
An important thing to consider is that careful open-ended interviewing used to be the norm prior to conducting functional analyses (see Iwata, Wong, Riordan, Dorsey, & Lau, 1982).3 The second reason the least restrictive assessment hierarchy is troublesome is due to its reliance on descriptive assessment to determine behavioral function. I have yet to come across a study showing that the exclusive results of a descriptive assessment were useful for designing a treatment for severe problem behavior. This is likely related to the fact that descriptive assessments are notoriously invalid for detecting behavioral function (St. Peter et al., 2005; Thompson & Iwata, 2007). Why might this be? The fact that most people will attend to someone who just kicked them or to someone who makes a jarring sound when they bang their head on a wall leads to most descriptive assessments suggesting that attention is a possible reinforcer for severe problem behavior (McKerchar & Thompson, 2004; Thompson & Iwata, 2001). But studies that have compiled data on the prevalence of behavioral function show that attention maintains problem behavior in only about one quarter to one third of the cases examined (Derby et al., 1992; Hanley et al., 2003; Iwata, Pace, Dorsey et al., 1994). The lack of correspondence between descriptive assessments and functional analyses is often due to these false-positive outcomes regarding attention (see Thompson & Iwata, 2007).

[Footnote 3: There are multiple articles that describe conducting an open-ended interview prior to conducting the functional analysis, but the interview appears to only inform the topography of the behavior targeted in the analyses because the analyses in these same studies are all standardized (i.e., including the same test and omnibus control conditions).]
Consider also that most teachers and parents learn to avoid the presentation of events that evoke negatively reinforced problem behavior (Carr, Taylor, & Robinson, 1991; Gunter et al., 1994); perhaps this leads to the likely false-negative outcomes regarding behavior maintained by escape. For instance, if the teacher has learned that difficult math evokes dangerous behavior, the teacher is not likely to present difficult math to the student while the behavior analyst is conducting the descriptive assessment. Furthermore, it is unclear how automatic reinforcement is to be detected and differentiated from socially mediated problem behavior via descriptive assessments (e.g., nonmediated sensory reinforcers cannot be detected and recorded). The literature has shown that descriptive assessments are good at teaching us about the prevalence of the environmental events occurring before and after problem behavior (McKerchar & Thompson, 2004; Thompson & Iwata, 2001), but that we need to conduct functional analyses to learn about the relevance of those events for the severe problem behavior we are charged with understanding. Therefore, behavior analysts can save a lot of time and be no worse off by simply omitting formal, lengthy, and especially closed-ended descriptive assessments from their functional assessment process. Brief and open-ended observations may be useful for refining operational definitions of the problem behavior or for detecting possible unique antecedent or consequent events to examine in a functional analysis, and they may be especially useful if the interview does not yield unique information for designing the analysis.

The third reason the common hierarchy is troublesome is due to its reliance on a standard functional analysis. By standard, I am referring to the rapid alternation of four conditions in a multielement design with tests for all generic contingencies (i.e., an attention test condition, an escape test condition, and an alone condition testing for maintenance via automatic reinforcement) and an omnibus control condition usually referred to as the play condition (Iwata et al., 1982/1994). Simply put, there is no standard analysis; a functional analysis of problem behavior simply involves the direct observation of behavior while some event suspected of being related to problem behavior is manipulated. Note that this widely agreed upon definition of a functional analysis does not specify where the analysis takes place (e.g., in a 3 m by 3 m therapy room or in a busy classroom) or who will conduct the analysis. More important is that it does not specify how many test conditions to include or any particular type of control condition (e.g., the omnibus play condition is not mandatory). These are decisions to be made based on the many factors that will become evident during an open-ended interview. For instance, if the results of the interview show that one child’s loud moaning and hand flapping occur under most conditions and seem to occur irrespective of the social environment, conducting a series of alone sessions first to see if the problem behavior persists in the absence of social consequences is a good idea. By contrast, if the results of the interview show that another child’s tantrums most often occur when the teacher removes toys from the child during free play, then two conditions should be conducted, with access to the toys provided contingent on tantrums in one condition and perhaps uninterrupted access to toys arranged in the second condition. The former condition is known as the test condition because the contingency thought to maintain problem behavior is present, whereas the latter condition is referred to as the control condition because the contingency thought to maintain problem behavior is absent. The point being made with these examples is that behavior analysts should consider asking simple questions about the variables most likely influencing problem behavior and testing the ones that seem to be most important first. By testing one hunch at a time, more careful control conditions can be designed in which only the contingency differs between test and control conditions. The interested reader is directed to Thompson and Iwata (2005) for a thorough discussion of the importance of properly designing control conditions. If the hunch from the interview or observation is affirmed in this initial functional analysis, then the behavior analyst will have a stable and sensitive baseline from which to assess the effects of a function-based treatment. Examples of this approach, in which results of open-ended interviews informed the design of analyses involving a single test condition and an intimately matched control condition, can be found in Hanley, Iwata, and Thompson (2001). More questions regarding other factors possibly influencing problem behavior can be asked separately and as often as there are still questions about that which is influencing problem behavior. In essence, there is no mandate that all questions be asked in a single analysis (e.g., in the analysis format first reported by Iwata et al., 1982/1994).
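As a concrete, entirely hypothetical illustration of the individualized test/control logic discussed above, the tangible (toy-removal) example can be written down as a pair of matched condition specifications. This sketch is not from Hanley's article; the names, setting, and values are invented:

```python
# A minimal, hypothetical sketch of matched test and control conditions for
# an individualized functional analysis, based on the tangible (toy-removal)
# example in the text. All names and values are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Condition:
    name: str
    setting: str       # held constant across the pair
    materials: str     # held constant across the pair
    contingency: str   # the one programmed difference between conditions

test = Condition(
    name="tangible test",
    setting="free-play room",
    materials="preferred toys",
    contingency="toys briefly removed; returned contingent on tantrums",
)

control = Condition(
    name="tangible control",
    setting="free-play room",
    materials="preferred toys",
    contingency="uninterrupted access to toys; no programmed consequence",
)

# A well-matched pair differs only in the programmed contingency.
matched = (test.setting == control.setting
           and test.materials == control.materials
           and test.contingency != control.contingency)
```

Writing the pair out this way makes it easy to verify that everything except the suspected contingency is held constant, which is the design property the text emphasizes.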
It is equally important to consider that there is no single analysis that can answer all questions about the environmental determinants of problem behavior. Even comprehensive analyses such as that initially described by Iwata et al. (1982/1994) are incomplete in that these analyses do not test all possible contingencies that may influence problem behavior. The main strength of a functional-analytic approach is that the analysis is flexible and can be individualized. Although this set of assertions awaits empirical validation, it seems likely that the probability of differentiated analyses will be strongest when more precise and personalized analyses are conducted based on the results of semistructured, open-ended interviewing. I suggest the following for consideration as practitioner lore regarding the general functional assessment process: Start with a structured, but open-ended, interview and a brief observation to discover potential factors that may be influencing problem behavior, and then conduct a precise and individualized functional analysis based on the resultant information to examine the relevance of those discoveries.

Overcoming Common Obstacles to Conducting a Functional Analysis

The importance of the open-ended interview (e.g., Iwata et al., 1982), especially for informing the design of the functional analysis, seems to have been passively overlooked in behavior-analytic practice, whereas the functional analysis (Iwata et al., 1982/1994) appears to be more actively avoided in practice (Desrochers, Hile, & Williams-Mosely, 1997; Ellingson, Miltenberger, & Long, 1999; O’Neill & Johnson, 2000; Weber, Killu, Derby, & Barretto, 2005). Behavior analysts who are charged with treating severe problem behavior but who do not conduct functional analyses are quick to provide multiple reasons why they do not conduct analyses.
These reasons may have had merit in the past, but our research base regarding functional analysis has grown tremendously (Hanley et al., 2003; see the JABA special issue on functional analysis, 2013, volume 46, issue 1). With this growth, solutions for common and seemingly insurmountable obstacles have been discovered, properly vetted, and await adoption by those who would benefit from an understanding of problem behavior prior to its treatment—behavior analysts and the people they serve. Tables 1 and 2 provide a summary of the available solutions in the context of general and client-specific obstacles. Some references for the empirically derived solutions for overcoming the oft-stated obstacles to conducting functional analyses, and accompanying rationales, follow.

[Figure. An example of graphically depicted data from a functional analysis: problem behaviors per minute (y-axis) across six sessions (x-axis), with alternating test and control conditions. Note the presence of only two conditions: one in which a contingency thought to maintain problem behavior is present (test) and one in which the contingency is absent (control).]

Implementation Obstacle 1: Functional Analyses Take Too Much Time

Multiple researchers have proven the efficacy of several timesaving methods relevant to functional analysis. Wallace and Iwata (1999) showed that 5- and 10-min sessions are as valid as longer sessions. Iwata, Duncan, Zarcone, Lerman, and Shore (1994) showed us how to trim our designs to include only two conditions. Considering only these adjustments, a functional analysis can take as little as 30 min to complete (three 5-min test sessions and three 5-min control sessions; see Figure). Sigafoos and Saggers (1995), Wallace and Knights (2003), and Bloom, Iwata, Fritz, Roscoe, and Carreau (2011) described trial-based analyses in which test and matched control conditions occur for a maximum of one min each.
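The arithmetic behind the 30-min estimate, and the kind of two-condition data depicted in the figure, can be sketched as follows. Only the 5-min session length and the three-sessions-per-condition design come from the text; the per-session counts are invented for illustration:

```python
# Hypothetical data for a two-condition functional analysis: alternating
# 5-minute test and control sessions, with a count of problem behaviors
# recorded in each session. All counts below are illustrative, not real data.

SESSION_MIN = 5  # 5-min sessions (per Wallace & Iwata, 1999)

# (condition, count of problem behaviors in that session)
sessions = [
    ("test", 14), ("control", 1),
    ("test", 19), ("control", 0),
    ("test", 22), ("control", 2),
]

def rate_per_min(count, minutes=SESSION_MIN):
    """Convert a session count to responses per minute."""
    return count / minutes

def summarize(sessions):
    """Mean responses per minute for each condition."""
    by_condition = {}
    for condition, count in sessions:
        by_condition.setdefault(condition, []).append(rate_per_min(count))
    return {c: sum(rates) / len(rates) for c, rates in by_condition.items()}

total_minutes = len(sessions) * SESSION_MIN  # six 5-min sessions = 30 min
means = summarize(sessions)
# Elevated responding in test relative to control suggests the tested
# contingency is maintaining the behavior.
differentiated = means["test"] > means["control"]
```

With these illustrative counts, the entire analysis occupies 30 minutes of observation, and the test-condition rate is clearly elevated above the control-condition rate, mirroring the pattern in the figure.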
Thomason-Sassi, Iwata, Neidert, and Roscoe (2011) showed that sessions could be terminated after a single response and that measurement of the latency to the first response can be sensitive to typical contingencies arranged in a functional analysis. In short, functional analyses need not require a lot of time.4 It is also important to consider the chief alternative to a functional analysis, and that is to rely on a descriptive assessment, which often yields spurious correlations as opposed to the more compelling functional relations derived from a functional analysis. Descriptive assessments often take a long time to complete because observers have to wait for problem behavior to occur in uncontrolled environments in which the establishing operation for the problem behavior may or may not be presented (and because there is no obvious criterion for terminating a descriptive assessment). In addition, considerable time and expertise are required to collect a sufficient sample of data to analyze and to undertake the increasingly complicated quantitative analyses necessary to depict and begin to understand the data yielded via descriptive assessments (e.g., Emerson, Reeves, Thompson, & Henderson, 1996). These efforts certainly take more time than that required to conduct six brief observations of problem behavior in properly informed test and control conditions comprising an analysis.

Implementation Obstacle 2: Functional Analyses Are Too Complex

The functional assessment and treatment development process is complex, but functional analyses are less so, especially for anyone with training in behavior analysis. Iwata et al. (2000) showed that undergraduates could accurately implement common analysis conditions after two hours of training. Similar effects were shown by Moore et al. (2002). Hagopian et al. (1997) provided a set of rules that aid in the accurate visual analysis and interpretation of functional analysis data.
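Hagopian et al. (1997) formalized structured visual-inspection rules; the sketch below is a deliberately simplified stand-in for that kind of rule-based interpretation, not their actual criteria. It judges an analysis "differentiated" when most test-session rates exceed the highest control-session rate; the threshold and all data are assumptions made for illustration:

```python
# A simplified, hypothetical differentiation check inspired by (but NOT
# reproducing) the structured visual-analysis rules of Hagopian et al. (1997).
# Inputs are responses per minute from alternating test and control sessions.

test_rates = [2.8, 3.8, 4.4]      # suspected contingency present
control_rates = [0.2, 0.0, 0.4]   # suspected contingency absent

def differentiated(test, control, min_fraction=0.75):
    """Call the analysis differentiated if at least `min_fraction` of the
    test-session rates exceed the highest control-session rate."""
    ceiling = max(control)
    above = sum(1 for rate in test if rate > ceiling)
    return above / len(test) >= min_fraction

result = differentiated(test_rates, control_rates)
```

The point of such a rule is repeatability: two readers applying the same criterion to the same graph reach the same conclusion, which is what the published structured criteria were designed to ensure.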
In short, implementing the procedures and interpreting the data of functional analyses is possible with a little training. There are no equivalent studies teaching people how to conduct a proper descriptive assessment or how to analyze or effectively interpret the data resulting from descriptive assessment as they relate to detecting behavioral function. If you, as a behavior analyst, are still not confident you can conduct functional analyses, consider the following logic. Establishing a baseline of problem behavior from which to determine whether a given treatment is effective is essential in behavior-analytic practice (Behavior Analyst Certification Board, 2012). Problem behavior must occur with some regularity in baseline to detect the effects of treatment. Regularly occurring problem behavior will only be observed if the controlling contingency is present in that baseline (Worsdell, Iwata, Conners, Kahng, & Thompson, 2000); if that is the case, you essentially have created a functional analysis test condition. By arranging a second condition in which the controlling contingency for problem behavior is absent (i.e., the reinforcer is provided according to a time-based schedule or for an alternative behavior, or withheld for all responding), you essentially have created a functional analysis involving a test condition and a control condition.

[Footnote 4: I do not recommend any sort of brief functional analysis that involves conducting only one of each test condition (e.g., Northup et al., 1991) because necessary replication of test and control conditions is distinctly absent from these analyses. I recommend the tactics described above because they retain design features that allow for replication of suspected relations, the key element for believing in conclusions regarding the function of behavior.]
In other words, if you are capable of changing some aspect of the environment and determining the effects of that single change on a direct measurement of problem behavior, which is what all behavior analysts are trained to do when evaluating a treatment, then you can indeed conduct a functional analysis.

Implementation Obstacle 3: Functional Analyses Are Too Risky for the Client or for the Person Conducting the Analysis

When considering risk, the main question to be asked is whether the child or client will be at greater risk in the analysis than that which they normally experience during the day. Put another way, will their problem behavior be more dangerous or intense in or outside of the analysis? This question is often best discussed with other professionals, especially medical professionals, if the problem behavior is self-injurious (see the description of human subject protections in Iwata et al., 1982/1994). Important information for such a discussion is that a properly designed functional analysis will almost always result in problem behavior that is of lower intensity than that observed outside of the analytic context. This is the case because best practices regarding functional analysis emphasize the inclusion of clearly signaled contingencies, continuous reinforcement schedules, and inclusion of problem behaviors in the contingency class that are safe for the client to emit (Hanley et al., 2003). These tactics typically result in more quickly discriminated problem behavior and overall decreases in the intensity and often the frequency of severe problem behavior in the analysis. Risk is increased by certain tactics that may be adopted when conducting an analysis, such as not programming differential consequences in an analysis (Carr & Durand, 1985) or arranging consequences in your functional analysis on intermittent reinforcement schedules deduced from descriptive assessments (Mace, 1994; Mace

