JBCT Volume 1, No. 3, Summer, 2005

Teachers' Use of a Verbally Governed Algorithm and Student Learning

Dolleen-Day Keohane and R. Douglas Greer

Abstract

The effects of instructing teachers in the use of a verbally governed algorithm to solve students' learning problems were measured. The teachers were taught to analyze students' responses to instruction using a strategic protocol, which included a series of verbally governed questions. The study was designed to determine whether the instructional method would affect the number of verbally governed decisions the teachers made as well as the number of academic objectives achieved by the teachers' students. A multiple baseline design across three teachers and six students was used to measure the effectiveness of the instructional procedures. The results indicated that the teachers' students achieved significantly more learning objectives when the teachers used a verbally governed algorithm to solve instructional problems.

Key words: verbally governed algorithm, strategic protocol, solving instructional problems, increasing students' achievement of objectives

Applications to teaching of research findings from the basic and applied sciences of behavior require teachers who can function as strategic scientists. In other words, teachers need to analyze the learning problems of individual children and know when and how to apply relevant and tested tactics from the literature. This is the hallmark of differentiated instruction or therapy by evidence-based applied behavior analysis. The identification of effective tactics requires locating the possible source of the problem as well as selecting the tactic that is most likely to remediate it. Both of these steps require the teacher to complete a contingency analysis of the components of instruction (Greer, 2002; Greer, 2004; Greer, Keohane, & Healy, 2003; Greer & Keohane, 2004).
The pursuit of solutions to particular learning problems involves many of the repertoires that Skinner (1957) characterized as the verbal behavior of scientists. The research that we report here consists of a functional analysis of the relation of these teacher repertoires to students' learning as well as a basic conceptual analysis (Catania, 1998) of some of the verbal repertoires used by scientists. Hall and Broden (1968) characterized teachers as researchers early in the applied literature. It has been argued that teachers who use the principles of behavior in the context of the classroom environment become effective behavior change agents (Greenwood, Carta, & Atwater, 1991), "facile" managers of contingencies (Bijou, 1970), and strategic scientists of instruction (Greer, 1991). A review of the applied literature identified over 200 replicated procedures or tactics (see Greer, 2002, pp. 83-115) that could be effectively applied by teachers in typical classrooms (e.g., stimulus prompts, response prompts, interrupted chains as establishing operations, interspersal of known items). A number of research studies have identified specific teachers' behaviors that function to increase students' achievement (Albers & Greer, 1991; Greenwood, Carta, & Atwater, 1991; Greenwood, Delquadri, & Hall, 1984; Ingham & Greer, 1992; Nuzzolo-Gomez, 2002). Indeed, it can be argued that most if not all of the applied research literature has direct or indirect application to the teaching process (Greer, 1996; Greer, 2002; Skinner, 1968; Skinner, 1984). Research has demonstrated that students of teachers who were responsive to students' data and monitored students' achievement (Kinder & Carnine, 1991; Lindsley, 1991) were more likely to achieve objectives than students of teachers who did not.
Selinske, Greer, and Lodhi (1989) and Lamm and Greer (1991) found that when teachers were taught to provide all instruction within a behavior analytic framework, students achieved four to seven times the number of correct responses and instructional objectives. Skinner (1957) characterized scientists as members of a particular verbal community. If so, then communication among members of that verbal community would revolve around the use of relevant tacts of the environment, related intraverbals, and verbally governed analyses of behavior. Research and theory on verbal behavior include the analysis and treatment of verbally governed behavior (Catania, Shimoff, & Matthews, 1989; Chase & Danforth, 1991; Hineline & Wanchisen, 1989; Malott, 1989; Malott & Malott, 1991; Overskeid, 1994; Reese, 1989, 1991; Salzinger, 1991; Vargas, 1988; Vaughan, 1989). Verbally governed behavior has also been characterized as rule following (Hayes, Zettle, & Rosenfarb, 1989; Johnson & Chase, 1981; Salzinger, 1991; Verplanck, 1992; Zettle & Hayes, 1982) and as problem solving (Andronis, 1991; Chase & Danforth, 1991; Kinder & Carnine, 1991; Reese, 1991; Smith & Greenberg, 1981). Related papers have also addressed the role of verbally governed behavior when the speaker (problem identifier) is her own listener (problem analyzer) or "thinking self" (problem solver) (Chase & Danforth, 1991; Hayes & Hayes, 1989; Lodhi & Greer, 1991; Skinner, 1957, 1989; Vaughan, 1989). The use of verbally governed behavior by teachers has been shown to correlate with increases in some behaviors associated with student learning (Sharpe, Hawkins, & Ray, 1995). Educational research is beginning to analyze teachers' use of verbally governed behavior associated with the identification and use of generic behavioral tactics to remediate problems in the classroom (Sharpe, Hawkins, & Ray, 1995; Watson & Kramer, 1995).
Prior to the data we report herein, however, there appear to have been no functional analyses of the relation between teachers' use of verbally governed behavior to analyze and remediate instructional problems and the effects of this repertoire on students' achievement. Greer (1994, 1996, 2002) proposed a new measure of teaching based on a process that involved a yoked contingency for both a student and a teacher. He suggested that learning took place most often when a student, a teacher, and the curriculum were yoked within instructional units. The process involved the accuracy and rate of teacher and student behavior described as interlocking operants, or observable units of learning, within an environmental context. This measure was termed the learn unit and, according to current evidence, constitutes a necessary component of effective teaching (Albers & Greer, 1991; Emurian, Wang, & Durham, 2000; Greer, 1994; Greer & Keohane, 2004; Ingham & Greer, 1992; McDonough & Greer, 1999). Learn units involve presentations by a teacher (Ingham & Greer, 1992), tutor (Greer, Keohane, Meincke, Gautreaux, Pereira, Chevez-Brown, & Yuan, 2004), or teaching device (Emurian, Wang, & Durham, 2000) to an attentive learner, followed by a response opportunity for the student. A correct student response results in a reinforcement operation, whereas an incorrect response occasions correctional feedback in the presence of the target SD as the student repeats the correct response related to the potential operant. The correction procedure is not reinforced.
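For illustration only, the three-term structure of the learn unit and its differential consequence can be sketched in a few lines of code. The class and field names below are our own and are not part of any CABAS® materials; this is a minimal sketch, not a model of the full observation procedure.

```python
from dataclasses import dataclass

@dataclass
class LearnUnit:
    antecedent: str   # instructional stimulus presented to an attentive learner
    target: str       # the correct student response for this potential operant
    response: str     # the response the student actually emitted

    def consequence(self) -> str:
        # A correct response occasions a reinforcement operation.
        if self.response == self.target:
            return "reinforce"
        # An incorrect response occasions correction: the student repeats the
        # correct response in the presence of the target stimulus, unreinforced.
        return "correct"

unit = LearnUnit(antecedent="What is 2 + 2?", target="4", response="4")
print(unit.consequence())  # -> reinforce
```

The point of the sketch is simply that the teacher's consequence is contingent on the student's response, which is what makes the interlocking teacher-student operants countable as units.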
The learn unit is derived from research in programmed instruction (Stephens, 1967; Skinner, 1968), the engaged time research in education (Greenwood, Carta, & Atwater, 1991), the opportunity to respond literature (Greenwood, Delquadri, & Hall, 1984), and research on the learn unit per se (Albers & Greer, 1991; Bahadorian, 2000; Lamm & Greer, 1991; Ingham & Greer, 1992; Greer & McDonough, 1999; Selinske, Greer, & Lodhi, 1991), including replications with computerized instruction (Emurian, Wang, & Durham, 2000). When learn units are present and the student is still having difficulty, current practice, which we describe herein, dictates that an analysis of the learn unit in context be completed. This analysis includes a review of the setting events, instructional history, and phylogenetic factors associated with a particular student's immediate and prior environments. When a student is being taught, she is introduced to the antecedent, response, and consequence that are to form the new operant. She may have problems with the antecedent component of the learn unit, which can be related to a lack of a particular history with that stimulus (e.g., does not have the prerequisites). Difficulties with the response component of the learn unit may indicate that the student does not have the response in her repertoire (e.g., does not have a vocal response). Students may not respond correctly if there is no reinforcer for the response immediately identifiable (e.g., satiation). The setting events (e.g., establishing operations), the student's instructional history, and her phylogenetic history constitute the contextual environment in which the learn unit is presented and influence the success or failure of the instruction. There may be significant distractions in the classroom, the student may be hungry or tired, or an argument with another student may create factors that affect instructional outcomes.
In addition to the environmental context surrounding the learn unit for the student, there is an environmental context surrounding the learn unit for the teacher. The student's response to the teacher functions as an antecedent for the teacher's behavior. The teacher's response to the student functions to consequate the student's behavior. Setting events or environmental events also determine the accuracy and validity of learn unit presentations by the teacher. She may be tired or hungry, or disappointed when the tactics she has usually found helpful unexpectedly prove ineffective. She may also have deficits in her own instructional history that affect the instruction presented; she may not have the level of skills required to successfully analyze the student's difficulties with the instruction, and may not recognize an instructional problem when it arises. In an analysis of the verbal behavior of the scientist, Skinner (1957) said that a scientist should speak precisely, and, as a result of the precision of his or her verbal behavior, could identify conditions that would lead to effective action. To control for ambiguity, the scientific community defines terms and produces graphs, models, and tables so that the properties of the concepts discussed are conveyed to others as concisely as possible. Rules, laws, and measurement are basic to the verbal behavior of the scientist. The verbal behavior of the scientist is concerned with accuracy, so that when a scientist "... accurately reports, identifies, or describes a given state of affairs, he increases the likelihood that the listener will act successfully with respect to it, and when the listener looks to the speaker for an extension of his own sensory capacities, or for contact with distant events, or for an accurate characterization of a puzzling situation, the speaker's behavior is most useful to him if the environmental control has not been disturbed by other variables" (p. 418).
The structure of problem solving within this format (or in the speaker-as-listener interchange) is governed by rule following characterized as logical thinking (Mill, 1950). Speaker-listener behavior is overt when two or more individuals are involved, and typically covert when the speaker acts as his or her own listener. Therefore, thinking when the speaker acts as his or her own listener may be either overt (speakers may think aloud or think in a written format) or covert (speakers acting as their own listeners) in topography, and can be described as reflective, or behavior which shapes and is shaped during the speaker-listener exchange. Teachers who make instructional decisions based on scientific principles, research, and verbally governed algorithms for analyzing data are behaving as scientists in speaker-as-own-listener roles. If conditions are designed such that the controlling variables for the teacher's decisions are verbal stimuli, and these verbal stimuli serve to affect behavior, one can attribute the behavior of the teacher to the verbal stimuli. In the following experiment, we tested for a functional relationship between teachers' use of a verbally governed algorithm and students' learning.

Method

Participants

Three teachers and six students participated in the experiment over a 10-month period. The teachers are described according to their levels of preparation and expertise, and the students are described according to their levels of verbal behavior and instructional histories.

Teacher Preparation. While the study was in progress, Teachers A and C had Baccalaureate Degrees in Special Education and were working toward Master's Degrees in Reading. Teacher B completed both a Baccalaureate Degree in Elementary Education and a Master's Degree in Special Education prior to the beginning of the study.
Teachers A, B, and C had been employed by the school for one year, three years, and two years, respectively, and each had achieved criterion performance in an introductory course in applied behavior analysis at the master's degree level. Teacher A had completed 20 module units in teaching as applied behavior analysis (two CABAS® Board certified teacher ranks), Teacher B had completed 30 module units (three CABAS® Board certified teacher ranks), and Teacher C had completed 30 module units (three CABAS® Board certified teacher ranks) within a Personalized System of Instruction (PSI) format (Keller, 1993), used in the CABAS® system for in-service professional development. The teachers had mastered all of these basic teaching repertoires prior to the outset of the study. (Over the course of the study the teachers continued to read selected material and complete PSI modules.) See Table 1.

Table 1. The Three Tiers of CABAS® Teacher Ranks

I. Verbal Behavior About the Science: designed to provide teachers with readings that addressed pedagogical, theoretical, philosophical, and experimental issues and questions of past and current interest; mastery of the topics was assessed through quizzes.

II. Contingency-Shaped Repertoires: shaped in situ as direct measures of performance by supervisors who served as both models and observers. Supervisors, who had also demonstrated expertise in the classroom, modeled correct behavior and assisted teachers in responding effectively to the contingencies of the classroom environment.

III. Verbally Governed Repertoires: designed to combine the pedagogical, epistemological, theoretical, philosophical, and experimental knowledge acquired in the first tier with the practical experience acquired in the second tier, and to unite that knowledge and experience in a problem-solving format.

(See Chapter Three of Greer, 2002, for an in-depth treatment.)

Target Students and Generalization Students.
Two students were selected from each teacher's class for this study. One of the students was designated as the target student (the teacher was taught to use strategic questions about decisions related to this student's graphs), and the other as the generalization student (the teacher was not instructed to use strategic questions about decisions related to this student's graphs). Typically, the target and generalization students were chosen because their teachers had indicated that the students' progress in instructional programs was either inconsistent or in need of reevaluation. That is, these were the students the teachers had experienced the greatest difficulty in teaching. Student 1 (target student) and Student 4 (generalization student) were in Teacher A's class; Student 2 (target student) and Student 5 (generalization student) were in Teacher B's class; and Student 3 (target student) and Student 6 (generalization student) were in Teacher C's class. Students 1, 2, and 5 were 16-, 19-, and 19-year-old females diagnosed with moderate to severe mental retardation. Students 4, 3, and 6 were 15-, 10-, and 10-year-old males diagnosed with autism and severe mental retardation. All six students were diagnosed with significant behavior disorders as well.

Setting

The study was conducted in three classrooms at a privately run, publicly funded suburban day school for students with autism, developmental disabilities, and significant behavior disorders. The CABAS® (Comprehensive Application of Behavior Analysis to Schooling) school program delivered the curriculum through individualized instruction based on the students' Individualized Education Plans and criterion-referenced assessments. The curriculum was scripted and included both short-term and long-term objectives based on a task analysis and a sequential hierarchy of goals.

In Situ Teacher Training.
The teachers were trained in the implementation of the curriculum in situ, during the completion of their first module, by supervisors who functioned as mentors. The supervisors regularly monitored the reliability of programming and data collection through the Teacher Performance Rate and Accuracy (TPRA) observation procedure (Ingham & Greer, 1992), which assessed presentations of learn units and the teachers' accuracy in recording the responses of their students to all instruction. All the teachers had achieved criterion levels of performance for the presentation of learn units for all instruction. The teachers were also trained to graph and analyze the data associated with each student's program graphs using a generic set of rules regarding decision opportunities (e.g., three descending data points = problem with instruction/phase change (new tactic); three data points at criterion = criterion/phase change (new tactic); three data points with no trend = phase change (new tactic); etc.) during the first 10 modules they completed, and each achieved criterion prior to the outset of the study.

Teacher Supervision

As part of the school's method of teacher supervision, the supervisors monitored the reliability of programming through the TPRA procedure for teacher observation (Ingham & Greer, 1992), and graphed and analyzed the data for each teacher's class. The supervisors also met with the teachers weekly to complete graph checks for all the students in the teachers' classes. Students' progress and indications of problematic trends were noted during these meetings, and the supervisor suggested verbally governed tactics when the data demonstrated that a student was having learning difficulties and the teacher had not devised any tactics herself.
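The generic decision rules the teachers were trained on amount to a simple algorithm over the most recent data points of a phase. The sketch below is our own illustration of those rules under simplifying assumptions (the function name and data format are hypothetical, and treating three identical points as "no trend" is a deliberate simplification of visual trend analysis):

```python
def decision_opportunity(points, criterion):
    """Classify the most recent three data points of an instructional phase.

    points    -- per-session correct-response counts, oldest first
    criterion -- the mastery criterion for the objective
    """
    if len(points) < 3:
        return "continue"                      # not enough data to decide
    a, b, c = points[-3:]
    if min(a, b, c) >= criterion:
        return "criterion/phase change"        # three data points at criterion
    if a > b > c:
        return "descending/phase change"       # three descending points: new tactic
    if a == b == c:
        return "no trend/phase change"         # three points, no trend: new tactic
    return "continue"                          # e.g., an ascending trend

print(decision_opportunity([14, 12, 9], criterion=18))   # -> descending/phase change
print(decision_opportunity([18, 19, 18], criterion=18))  # -> criterion/phase change
```

A real graph check would also weigh variability, the exclusion of the first data point of a phase, and the learn unit context; the sketch captures only the generic rules named above.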
Response Definitions: Teachers' Decision Opportunities (Correct/Incorrect)

Teachers' decisions were defined as correct (verbally governed) when they were described by the teachers in scientific language and based on an ongoing scientific analysis of the data. Correct decisions were defined as follows: the teacher reported that she recorded student criterion as achieved; continued a phase when the data were low or variable but demonstrated an ascending trend; implemented a phase change after the data demonstrated a descending trend for three sessions (exclusive of the first data point in a phase); implemented a phase change after the data demonstrated no trend for three consecutive sessions (exclusive of the first data point in a phase); implemented a phase change after five sessions of data with wide variability and no trend; based each new criterion on a scientific analysis of the data; considered strategic questions which were responsive to the graphic display of the data; and utilized appropriate tactics and materials when designing a new criterion. Teachers' decisions were defined as incorrect when they were described by the teachers in terms which indicated they had completed a quasi- or non-verbally governed interpretation of the data (e.g., "she just needs more time"), as demonstrated through the use of non-scientific language (e.g., "he just doesn't seem to understand") and the application of non-scientific procedures (e.g., "I ran a review because I always run a review before I begin a new objective").

Data Collection: Teachers' Decision Opportunities (Correct/Incorrect)

In all phases the supervisor recorded only the teachers' initial responses. All data were collected when the supervisor and the teacher met for a graph check meeting, with the graphic record of the students' responses to instruction visible to both.
The teachers were asked to provide a vocal verbal rationale for each of the decisions they had made. A plus (+) was recorded for each occurrence of a teacher's verbal response to a supervisor's question when the response outlined the verbally governed strategies and tactics the teacher utilized to make an instructional decision. Therefore, when a teacher verbally indicated that she had (a) recognized a problem, one plus was recorded; (b) identified the problem based on an analysis of the data, an additional plus was recorded; and (c) chosen a tactic based on an analysis of the problem, yet another plus was recorded. However, if a teacher indicated that she had re-used a tactic because it had proved successful in the same type of instructional situation in a prior phase (i.e., the teacher indicated that her decision was based on the prior success of the tactic, not on a new analysis of the data), the teacher received one plus (+) for a correct decision but did not receive any additional pluses for the use of verbally governed strategies or tactics. Therefore, in such an instance, the teacher would not have received additional pluses for the identification of the problem or for having chosen a tactic based on a new strategic analysis of the problem. During the treatment phase the supervisor asked a series of verbally governed questions based on the teachers' correct response to the supervisor's initial question (e.g., if the teacher correctly identified the source of an instructional problem as the antecedent component of the learn unit, but did not spontaneously identify the tactics she had utilized, the supervisor would ask the teacher another question, such as "What did you do next?" or "Why did you choose a stimulus prompt?"). The series of verbally governed questions was used for the analysis of the target students' graphs. When a teacher responded incorrectly during treatment, a minus (-) was scored and no further questions were asked.
The supervisor provided the teacher with a correct answer and went on to another topic. Over the course of the treatment phases the series of verbally governed questions was used only when the teacher did not explain her decision completely, and the questions were faded as each teacher acquired the repertoires which enabled her to make verbally governed decisions independent of the supervisor's questions.

Definitions: Levels of Complexity of Teachers' Verbally Governed Decisions

The definitions of the levels of complexity of verbally governed decisions made by the teachers were based on a task analysis of a hierarchy of teachers' verbally governed decisions, scaled from 1 to 11. Over the course of the study, the teachers made decisions parsimoniously. They based their use of verbally governed behavior (e.g., strategies and tactics) on an analysis of the instructional problems they identified, and implemented only the procedures (basic, mid, and high level decisions) that were required to remediate the students' instructional difficulties. Basic level decisions (levels 1 to 2) were defined as teachers' decisions based on basic school procedures for data analysis (these procedures were identified with increased student achievement) and/or on a task analysis associated with the school's scripted curriculum. Mid level decisions (levels 3 to 4) were defined as teachers' decisions based on the use of context-specific generic behavioral tactics from the science (not based on an analysis of the learn unit context), and decisions based on procedures previously identified as successful in similar situations after a theoretical contingency analysis of the learn unit was completed. High level decisions (levels 5 to 11) were defined as teachers' decisions based on a theoretical contingency analysis of the learn unit in context, and/or on a re-analysis of the learn unit in context when a procedure proved unsuccessful after implementation.
Very high level decisions (levels 9 to 11) were also defined and included teachers' decisions based on a re-analysis of an instructional problem which required the completion of an experimental analysis to test for the controlling variables of the instructional problem. A task analysis of the levels of complexity of teachers' decisions was prepared; decisions were scaled according to the definitions and recorded in categories according to the level of complexity as defined. See Table 2.

Table 2. Levels of Rule-Governed Decisions

Basic Levels:
1. The teacher drew a line to indicate a phase change, or made another basic rule-governed decision which was recorded and did not require a line.
2. The teacher initiated a procedure consistent with a scripted program.

Mid Levels:
3. The teacher selected a basic instructional procedure from the tactics of the science but did not do a theoretical or functional contingency analysis of the learn unit.
4. The teacher initiated a phase change because the student emitted patterns of behavior for which a prior analysis of the data had been effective.

High Levels:
5. The teacher drew on a rule-governed analysis of the learn unit (e.g., a theoretical contingency analysis) in order to solve an instructional problem which had been identified. For example, the teacher provided a rationale that located an instructional problem in the instructional history of the learn unit, and selected and implemented a tactic from the research literature.
6. The teacher selected and implemented a tactic from the research literature based on the contingency analysis described in number 5 and related to the learn unit, the setting events, and/or the instructional history.
7. The teacher identified the success or lack of success of a procedure implemented in number 6, and either continued to use that procedure if successful or, if unsuccessful, sought a new procedure related to a theoretical analysis of the learn unit in context.
8.
The teacher chose a new tactic consistent with the analysis described in number 7 if the initial analysis was unsuccessful.

Very High Levels:
9. The teacher implemented a complete experimental analysis (e.g., reversal, multiple baseline, alternating treatments design) to test whether or not the theoretical contingency analysis was empirically verifiable.
10. The teacher used the results of the experimental analysis described in number 9 to identify the controlling variables for the instructional problem.
11. The teacher used the experimental analysis of the controlling variables described in number 10 to select an appropriate and related tactic from the research literature in order to solve the instructional problem.

Instruction in Verbally Governed Algorithm

The independent variable consisted of instruction designed to foster increased complexity of correct scientific verbally governed decisions and to decrease the number of decision errors or non-scientific decisions made by the teachers. A training package was used to implement the independent variable and included: a chapter reading on data analysis and data-driven verbally governed strategies; a quiz on the assigned reading; a meeting with the supervisor to discuss the chapter; a series of verbally governed questions asked during weekly/bi-weekly graph check meetings in response to the graphic display of the target students' data; and the supervisor's responses, consisting of a correction procedure for incorrect answers or non-specific approval for correct responses. That is, supervisor-teacher learn units were in place in this phase. The effect of the instructional package was measured in terms of the number of correct/incorrect decisions made by the teachers, as determined through the teachers' verbal reports of their decisions, as well as the level of complexity of the decisions in each phase. The intervention was designed to teach teachers to provide scientific rationales for the decisions they made.
It was also designed to measure the shift from basic to more complex levels of verbally governed analysis so that increases in more complex decisions could be traced when they occurred.

Dependent Variables

There were two dependent variables: learn units to criterion for the teachers' students on instructional objectives, and correct teachers' decisions. The first dependent variable was the number of learn units required to achieve mastery criteria on instructional objectives for the target and generalization students in the baseline, instruction, and post-instruction and probe phases of the study. Student criteria were measured in terms of the number of learn units required for each of the target and generalization students to achieve objectives in each of the phases of the study. The second dependent variable was the number of correct and incorrect decisions the teachers made in the baseline, instruction, and post-instruction phases of the study.

Data Collection Procedures: Student Criteria and Achievement of Objectives

Data were collected on 20-learn-unit data forms and graphed for each of the students by the teachers. The teacher scored a plus (+) for each of the student's correct answers and a minus (-) for each of the student's incorrect answers, and graphed the results after each session. The supervisor reviewed the teachers' graphs for each student, and recorded criterion only when it was achieved for the first time (e.g., if students achieved criterion on objectives more than once as a result of teachers' analytic decision errors, only the first occurrence of criterion was noted to assure reliable counts of criteria achieved across all phases).
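The first dependent measure and the first-occurrence rule just described can be illustrated with a short sketch. The function name and the session format below are our own construction, not part of the school's data system:

```python
def learn_units_to_first_criterion(sessions):
    """Count learn units presented up to the first attainment of criterion.

    sessions -- a list of (learn_units_in_session, met_criterion) pairs,
                in chronological order.
    """
    total = 0
    for learn_units, met_criterion in sessions:
        total += learn_units
        if met_criterion:
            return total   # later re-attainments are ignored by design
    return None            # criterion was never reached

# Four 20-learn-unit sessions; criterion first met in the third session.
sessions = [(20, False), (20, False), (20, True), (20, True)]
print(learn_units_to_first_criterion(sessions))  # -> 60
```

Counting only the first attainment, as the text specifies, keeps the measure comparable across phases even when a decision error caused an objective to be re-run.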
Interscorer Agreement

Interscorer agreement was determined by the percentage of agreement between the data scorer and an independent scorer, calculated by dividing the number of agreements by the number of agreements plus disagreements, for the scoring of teachers' correct/incorrect decision opportunities, the levels of complexity of teachers' decisions, and students' attainment of criteria. Teacher decisions were scored as follows: the first observer scored the teachers' decisions based on the graphic display of data and the teachers' oral responses to questions. The second observer independently scored the teachers' verbally governed decisions, and the levels of complexity of those decisions, based on the graphic display of the data and the first observer's written record of the teachers' oral responses to questions. A third scorer compared the first and second observers' data and recorded the observers' agreements and disagreements. Students' attainment of criteria was scored as follows: the first and second observers scored the students' attainment of criteria based on the graphic display of the data and the school's program guidelines for the attainment of criteria. The third scorer compared the first and second observers' data and recorded the observers' agreements and disagreements. Interscorer agreement for all phases and all sessions was recorded as follows: the point-to-point agreement on teacher decision opportunities for the target students' data was between 86% and 100% for Teacher A, between 86% and 100% for Teacher B, and between 92% and 100% for Teacher C across all phases. The point-to-point agreement on teacher decision opportunities for the generalization students' data was between 83% and 100% for Teacher A, between 87% and 100% for Teacher B, and between 98% and 100% for Teacher C across all phases.
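The agreement statistic used here is the standard point-to-point calculation, agreements divided by agreements plus disagreements, expressed as a percentage. For illustration (the function name and the +/- record format are our own):

```python
def interscorer_agreement(scorer1, scorer2):
    """Percentage point-to-point agreement between two scoring records.

    scorer1, scorer2 -- equal-length sequences of scores (e.g., '+' and '-')
                        for the same observation opportunities.
    """
    agreements = sum(a == b for a, b in zip(scorer1, scorer2))
    disagreements = len(scorer1) - agreements
    return 100.0 * agreements / (agreements + disagreements)

# Two observers scoring the same seven decision opportunities:
print(interscorer_agreement("+++-+-+", "+++-++-"))  # 5 of 7 agree, about 71.4%
```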
Interscorer agreement on the use of scientific tacts and rationales for both the target and generalization students was 100% for all three teachers in all phases. Interscorer agreement on the levels of complexity of rule-governed decisions was 100% for all three teachers in all phases. Point-to-point agreement on the attainment of students' criteria was 100% for all students in all phases.

Experimental Design

A multiple baseline design across three teachers and six students was used. The design included a delayed baseline for Teacher C and Students 3 and 6, and 1-month probe sessions across all subjects.

Procedure

At the outset, the teachers were naive to the focus of the study and were told only that data were being collected to analyze some of the variables involved in the achievement of student criteria. All graph check meetings occurred after the teacher had recorded the data associated with both the target and generalization students in the study. Therefore, the teachers' decisions were made before each teacher/supervisor graph check meeting occurred, and the supervisor's questions were focused on the verbal report of decisions the teachers had already made.

Baseline

During baseline, each teacher worked independently, noted the attainment of criteria, chose tactics, and made decisions concerning phase changes based on her own interpretation of the graphic display of the data for both the target and generalization students. Once a week the supervisor met with the teacher to complete graph checks and, based on the supervisor's analysis of the data for the prior week, asked the teacher initial questions about her decisions concerning both the target and generalization students, such as, "What were you thinking at this point?" or "Why did you note criterion here?" or "Why didn't you initiate a phase change at this point?"
The teacher was given the opportunity to answer the initial question completely by outlining the strategies and tactics on which she based her decisions. Nonspecific feedback was given in response to the teachers' answers during this phase (e.g., "O.K.," "All right," "I see").

Instruction in Decision Protocol

Before the outset of this phase the teacher was assigned a chapter to read that discussed and presented examples of the application of verbally governed strategies and tactics to the formation of instructional decisions (see Chapter Three of Greer, 2002, for a more in-depth treatment). The teacher also completed a quiz to 100% mastery on the assigned reading, and the supervisor and the teacher discussed the chapter and the teacher's answers to the quiz questions to be sure that the teacher understood the material thoroughly. See Table 3.

Table 3. Instruction in a Verbally Governed Algorithm

Prerequisites: Each teacher completed between 20 and 30 CABAS® Teacher I and II Rank modules prior to the outset of the study.

Chapter Reading: The teachers read a chapter on data analysis and the application of verbally governed strategies to instructional problems (Chapter 3, Greer, 2002).

Exam: Criterion was achieved on an exam that demonstrated mastery of Chapter 3.

Discussion: The supervisor and each teacher met and discussed Chapter 3 to assure that the teacher was fluent in the application of the mastered material.

Meetings: The supervisor and each teacher met bi-weekly to review the graphed data as a means of monitoring students' progress. The decisions were made and the data analyzed by the teachers prior to the meeting.

Verbally Governed Algorithm: The supervisor asked each teacher a series of verbally governed questions in response to the graphic display of the data. Correct teacher responses were recorded, and correctional feedback was provided for incorrect teacher responses within the framework of the verbally governed questions.
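The question-and-correction sequence in the algorithm can be sketched as a simple loop. The question list and the scoring of answers below are illustrative placeholders, not the study's actual protocol materials; only the stopping rule (continue while answers are scientific and data based, correct and move on at the first that is not) follows the description.

```python
# Hypothetical sketch of the supervisor's question algorithm.
# answer_is_scientific stands in for the supervisor's judgment of
# whether the teacher's answer was a data-based scientific tact.

QUESTIONS = [
    "What did the data show?",
    "What did you think the problem was?",
    "Why did you choose that tactic?",
    "What was the result?",
]

def run_graph_check(answer_is_scientific):
    """Ask the questions in sequence; at the first answer that is not
    scientific/data based, supply the correction and stop, as in a
    learn unit correction procedure. Returns (correct, incorrect)."""
    correct = incorrect = 0
    for question in QUESTIONS:
        if answer_is_scientific(question):
            correct += 1    # proceed to the next question in the series
        else:
            incorrect += 1  # supervisor answers and moves to the next topic
            break
    return correct, incorrect
```

A teacher who answers every question scientifically contributes four correct decisions; one who falters on the final question contributes three correct and one incorrect, after which the meeting moves on.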
Example:

S— "What did the data show?"
T— "There was no trend and the data were variable (correct). The tactic was not effective (correct)."
S— "What did you think the problem was?"
T— "Maybe _ needed more time to achieve the objective (incorrect)."
S— "The problem might be related to the schedule of reinforcement. Let's try a VR 2 schedule (correction provided)."

The supervisor went on to another topic at this point. The supervisor increased the number of graph check meetings to two a week during this phase. Each teacher continued to work independently and noted students' attainment of criteria, implemented tactics, and made decisions concerning phase changes based on her own interpretation of the graphic display of each student's data. The supervisor continued to meet with the teacher once a week and, based on the supervisor's analysis of student data for the prior week, asked the teacher initial questions regarding both target and generalization students, such as "What were you thinking at this point?" or "Why did you change phases?" or "Why didn't you initiate a phase change at that point?" When a teacher answered a question correctly in response to a target student's graphic display of data, the supervisor asked the next question in the series of verbally governed questions (e.g., "Why did you use a response prompt at this point?" "What was the result?") until a decision was fully explained or the teacher was unable to answer the question based on a scientific rationale. If the teacher's answer was not phrased in scientific tacts and/or was not data based (e.g., the teacher explained that she did not note student criterion because she "did not notice" or "thought the student needed more time"), the supervisor answered the question for the teacher, as in a learn unit correction procedure (e.g., the supervisor said, "The student achieved criterion on this objective at session _"), asked no further questions about the decision, and went on to the next topic.

Post Instruction

The series of verbally governed questions asked in association with the target students' data was faded as the teachers acquired the repertoires that enabled them to make verbally governed decisions independent of the supervisor's questions. After the series of verbally governed questions was faded, the supervisor discontinued the correction procedure for incorrect responses as well, and nonspecific feedback was given in response to the teachers' answers ("O.K.," "I see," etc.). Graph check meetings were held once a week during this phase. The conditions regarding the generalization students remained the same as in baseline.

One Month Probes

In the final phase, a weekly supervisor/teacher graph check probe was conducted one month after the completion of the post instruction phase. As in baseline, the teacher worked independently and noted the students' attainment of criteria for both the target and generalization students, changed phase conditions, manipulated independent variables, and made decisions based on her own interpretation of the visual data. The supervisor completed graph checks and, based on the graphic display of students' data, asked the teacher to explain her rationale for making (or not making) specific instructional decisions regarding the target and generalization students. Nonspecific feedback was given in response to the teachers' answers during this phase.

Results

The data showed that the training procedure was functionally related to changes in the teachers' decisions during the instruction, post instruction, and probe phases. As teachers' scientific verbally governed behavior increased and teachers' decision errors decreased, students achieved significantly more objectives.
The results showed an ascending trend in correct decisions made by all teachers for both target and generalization students during the instruction, post instruction, and probe phases. The data demonstrated steep ascending slopes in verbally governed decisions made by Teacher A (target and generalization students), by Teacher B (generalization student), and by Teacher C (target and generalization students). Although the data associated with Teacher B's increase in the use of verbally governed decisions for the target student do not demonstrate a steep slope, they do demonstrate a significant increase in verbally governed decisions during instruction and post instruction over baseline. Correspondingly, teachers' decision errors stabilized early in the instruction phase, with two exceptions: Teacher A made two decision errors associated with the data for the generalization student in the first half of the instruction phase, and errors increased slightly in the last part of the post treatment phase, when Teacher C made three decision errors associated with the data for the generalization student. See Figures 1 & 2.

Figure 1. Cumulative teachers' decisions across all phases for the target students. [Figure not reproduced: a cumulative record of decisions plotted by session across the baseline, instruction in decision protocol, and post instruction phases.]