
 

Teacher Talk About Student Ability and Achievement in the Era of Data-Driven Decision Making


by Amanda Datnow, Bailey Choi, Vicki Park & Elise St. John

Background: Data-driven decision making continues to be a common feature of educational reform agendas across the globe. In many U.S. schools, the teacher team meeting is a key setting in which data use is intended to take place, with the aim of planning instruction to address students’ needs. However, most prior research has not examined how the use of data shapes teachers’ dialogue about their students’ ability and achievement.

Purpose: This study examines how teachers talk about student ability and achievement in the era of data-driven decision making and how their talk is shaped by the use of data within teams, their school contexts, and broader accountability systems.

Research Design: The study draws on interview and observational data gathered from teacher teams in four elementary schools. In each of these schools, teachers were expected to use data to inform instructional differentiation. Data collection efforts involved regular visits to each school over the course of one year to interview teachers and conduct observations of teacher team meetings. In the process of analysis, interview transcripts and field notes were coded, and themes were extracted within and across codes.

Findings: Across schools, teachers used common labels (e.g., “low,” “middle,” “GATE”) to describe students of different achievement levels and the programs they were involved in. The use of labels and student categories was relational and comparative and influenced by the accountability and policy contexts in which teachers worked. At the same time, regular meetings in which teachers jointly examined data on student learning provided a space for teachers to examine students’ strengths and weaknesses on a variety of measures and talk in terms of student growth. Teachers questioned whether assessment data provided an accurate picture of student achievement and acknowledged the role of student effort, behavior, and family circumstances as important factors that were not easily measured. These discussions opened up deeper inquiry into the factors that supported or hindered student learning. The implementation of the Common Core State Standards also led some teachers to question prior categorizations of student ability.

Conclusions/Recommendations: The findings from this study suggest that educational reforms and policies regarding data use influence educators’ conceptions of student achievement and ability. On the one hand, accountability policies can narrow the dialogue about students. On the other hand, educational reforms and policies could also lead to new ways of thinking about student learning and to an examination of a broader range of data, and provide opportunities for professional learning.

Data-driven decision making continues to be a common feature of educational reform agendas across the globe (Schildkamp & Lai, 2012). In the United States, data-driven decision making has been inextricably connected with broader education accountability systems. The focus on data use for accountability purposes shapes the decisions about what data are used and for what purposes. No Child Left Behind (NCLB) brought attention to categorizing students in terms of their proficiency in grade-level standards. These categorizations have been powerful in influencing school practice, including targeting instruction to students seen as “on the bubble” (Booher-Jennings, 2005; Diamond & Cooper, 2007; Horn, Kane, & Wilson, 2015).  This tight connection between accountability and data use is apparent not just in individual teacher planning but in teacher conversations as well (Halverson, Grigg, Prichett, & Thomas, 2007; Horn et al., 2015).


Researchers in the field of data use have paid significant attention to teacher team meetings because these settings are the primary vehicle for building teacher capacity for data use (Farley-Ripple & Buttram, 2015; Honig & Venkateswaran, 2012; Horn et al., 2015; Marsh, 2012; Means, Padilla, & Gallagher, 2010). Commonly, teachers engage in these structured collaboration opportunities with other teachers from their grade level and/or subject area. In some cases, teacher collaboration for data use also involves principals, instructional coaches, university researchers, or consultants who serve as facilitators.


Although numerous studies have documented the benefits of teacher collaboration for data use, studies also find that structures such as grade-level agendas and cultural norms, as well as the level of expertise in the group, shape the process in significant ways (Horn & Little, 2010; Young, 2006). Teacher teams with limited expertise can misinterpret or misuse data, or work together to perpetuate poor classroom practice (Daly, 2012). This variance in the quality of conversations can impact student learning (Timperley, 2009). Coaches can help guide teacher collaboration around data in productive directions, but whether this occurs depends a great deal on the expertise and activities of the coach (Farley-Ripple & Buttram, 2015; Huguet, Marsh, & Bertrand, 2014).


Most prior research on data use has not examined in depth how teachers talk about student ability and achievement, either individually or in team meeting settings (for an exception, see Bertrand & Marsh, 2015). Meanwhile, we know that teachers’ conceptions of student ability and achievement shape instructional decisions and have important implications for equity (Bertrand & Marsh, 2015; Oakes, Wells, Jones, & Datnow, 1997; Rubin, 2008).  This article addresses the following questions: How do teachers talk about student ability and achievement in the era of data-driven decision making? How is teachers’ talk shaped by the use of data within teams, schools, and broader accountability systems? With its attention to a broader range of skills and new assessments, the use of data in the era of the Common Core State Standards may promote different conceptions of student ability, yet this is unexplored as well.


LITERATURE REVIEW


TEACHER PERCEPTIONS OF STUDENT ABILITY


This study is grounded in social constructionism, which posits that understandings of the world emerge in the course of social interaction (Berger & Luckman, 1966). In Berger and Luckman’s conception of reality as socially constructed, language is a key repository of meaning. In keeping with this theory, we conceptualize teachers’ perceptions of student ability as being produced in the course of their interactions with other teachers, principals, other school staff, parents, and students. The teacher meeting setting is a particularly important place to examine the social construction of ability because students’ classifications as “gifted” or “learning disabled” are assembled in interaction among participants and are visible in discourse patterns (Mehan, 1991, 1993). Paying attention to how teachers conceptualize student ability and achievement is important because, as McLaughlin and Talbert (1993) explained, “policy coherence as intended by reformers and policymakers ultimately is achieved or denied in the subjective response of teachers—in teachers’ social constructions of students” (p. 248).


Prior research has found that teachers tend to have varying perceptions of student ability based on students’ background characteristics. These perceptions have important implications for students’ academic trajectories and have even been found to predict student achievement. Teachers often perceive students from low-income families as being of lesser ability and having deficits related to their home life and background (Alvidrez & Weinstein, 1999; Diamond, Randolph, & Spillane, 2004). Alvidrez and Weinstein (1999) found that the ability of preschool students from higher socioeconomic status (SES) backgrounds was judged more positively than the ability of students from lower SES backgrounds, who were judged more negatively.


In addition to socioeconomic status, teacher perceptions of student ability have also been related to race (Diamond et al., 2004; Hughes, Gleason, & Zhang, 2005; Minor, 2014). Diamond et al. (2004) examined teachers’ expectations and found that in schools with a majority of White and Chinese students, students’ assets were emphasized, as opposed to deficits. However, when speaking about students of lower SES backgrounds and African American students, teachers tended to focus on student deficits. When looking at the general perception of African American students in the schools studied, assets and good behavior were seen as exceptions to the rule.


Teacher perceptions of student ability have been documented to vary by gender as well (Busse, Dahme, Wagner, & Wieczerkowski, 1986; Riegle-Crumb & Humphries, 2012). For example, Riegle-Crumb and Humphries (2012) examined how gender stereotypes about math ability shaped high school teachers’ assessments of their students, resulting in conditional bias. White females and Black males were both less likely than White males to be perceived as being in a class that was too easy for them. Teacher perceptions of ability have also been found to relate to students’ home language status (Hansen-Thomas & Cavagnetto, 2010).


Teachers’ perceptions of student ability also influence patterns of teacher–student interaction and classroom instruction. Jordan and Lindsay (1997) examined the linguistic interactions between teachers and students who were perceived to be “exceptional” or “at risk” and students who were perceived to be “typically achieving.” Teachers who assumed that disabilities were inherent in students demonstrated the least effective interaction patterns. Teachers who attributed student difficulties to interactions between the student and the environment engaged in more academic interactions and focused more effort on constructing student understanding when interacting with students.


Several studies have examined teachers’ beliefs about ability in the context of equity-driven educational reforms (Oakes et al., 1997; Rubin, 2008; Watanabe, 2006). In a study of 10 detracking schools, Oakes et al. (1997) found that some teachers held a conventional view of intelligence in which ability was seen as a unidimensional, fixed, and innate characteristic. This led them to believe that intelligence could be easily assessed and to support tracking and ability grouping in their schools. Beliefs that overlapped race with ability were also prevalent among this group. In contrast, another group of teachers believed that intelligence was multidimensional and plastic, leading them to support moving away from classes that sorted students perceived as “smart” and “not smart” (Oakes et al., 1997, p. 494).


Rubin’s (2008) study also captured how the social construction of ability complicated a school’s efforts to move away from tracking and ability grouping. Her study revealed that educators’ understandings of student ability shaped their goals for students. In one school, educators’ beliefs that all low-income students of color had similar needs led them to deliver an uninspiring, low-level curriculum. In another school, where educators viewed students as varying in ability and by race, teachers created assignments that were flexible and could accommodate students’ diverse needs.


Underlying teachers’ social constructions of student achievement and ability may be the attributions they ascribe to students. Attributions refer to the conclusions drawn by individuals to explain the behavior of others (Weiner, 1986). Following from the attribution theory framework, attributions can be thought of as the explanations that teachers generate from their perceptions of student academic performance. Teachers have been found to seek out explanations for student performance early in students’ school years (Clark & Artiles, 2000), frequently attributing academic performance to notions of ability, effort, task difficulty, and luck (Weiner, 1974, 1985).


Yet, teachers have also been found to hold different expectations of, and interact differently with, students based on these attributions (Fennema & Sherman, 1976) and on the extent to which they believe the causes are internal or external, fixed or variable, and within students’ control (Weiner, 1985). For example, one study found that teachers are less likely to modify instruction if they attribute a student’s poor academic performance to internal factors like ability (Jordan, Glenn, & McGhie-Richmond, 2010), whereas teachers may accommodate a student if they attribute the performance to a variable cause, like a bad night’s sleep (Medway, 1979). In addition, teachers may also make attributions based on the student’s gender, ethnicity, social class, past performance, or type of behavior (Fennema, Peterson, Carpenter, & Lubinski, 1990; Tiedemann, 2002; Tom & Cooper, 1986).


Finally, while attributions are often thought of as being the beliefs and thoughts of individuals and groups, they are also argued to be part of a “co-constructed process” whereby conversational actors influence each other’s attributions during ongoing discourse (Haan & Wissink, 2013). According to Haan and Wissink (2013), attributions are “both beliefs that reside in the minds of people and subjected to and formed by ‘language in action’” (p. 299). As teachers seek out explanations for student ability through the development of attributions, they may be forming and simultaneously reinforcing their socially constructed notions of student ability. However, with increasing opportunities for collective discourse around student data and the availability of more varied data, it is possible that teachers may begin to question their beliefs about student ability and the factors that influence student achievement. Teacher conceptions of and talk about student ability may have a different character than they would in cases in which data were not part of the equation. Yet, it is important to note that teachers come to data-driven decision making with a set of preexisting beliefs about the value of evidence (Coburn & Talbert, 2006; Farley-Ripple & Buttram, 2015). Thus, some teachers’ interpretations may be more influenced by “data” than others.


TEACHER TALK DURING COLLABORATION TIME FOCUSED ON DATA


Paying close attention to teacher talk in the context of data collaboration meetings is one way to examine how teachers think about student ability, given that teachers often discuss data in relation to individual student achievement. The interactional piece of such meetings is an important shaping feature of these conversations. As Spillane (2012) explained, interpretations of data are “not just a function of their [educators’] prior knowledge and beliefs, but also a function of their interactions with others in which they negotiate what information is worth noticing and how it should be framed” (p. 14).


Routines influence how teachers talk with each other in collaborative settings, and they can either close off conversations or open up opportunities for deeper inquiry (Coburn & Turner, 2011). In other words, “routines for data use are a consequential context for how the process of data use unfolds” (Coburn & Turner, 2011, p. 182). Routines can also be shaped by individual teachers, by teacher teams, or by coaches and leaders. District and school leaders and policies play an important role in framing data use efforts, and these varied frames shape teacher conversations (Datnow & Park, 2014; Park, Daly, & Guerra, 2013).


Teacher talk about student ability in the course of their meetings may be reflective of their own beliefs, but also of the formal and informal category schemes that are part of the school culture and routines. As Horn (2007) explained, the categorization of students as “fast, slow, or lazy” or as “A” or “B” students is an example of the schemes that are deployed in teachers’ conversations. In other words, educators within a particular school (or system) may have their own ways of talking about students. These categorizations simultaneously communicate beliefs about students and get built into the organization of curriculum, “reinforcing the beliefs they represent” (Horn, 2007, p. 42).


Examining how teachers explain the causes of student outcomes observed in data, Bertrand and Marsh (2015) found that middle school teachers frequently attributed outcomes to instruction, but also to stable student characteristics. For example, teachers believed that a lack of proficiency among English learners and special education students was self-evident. Bertrand and Marsh also noted that adjectives teachers used to describe struggling students, such as “low” or “below basic,” corresponded to benchmark and state test scores. Rather than describing the students’ skills as low, the students themselves were described as low. The authors also noted that the ways in which teachers made sense of data were influenced by the school context, including the teachers’ interactions in professional learning community meetings, and by school organizational features such as tracking, which served to reinforce teachers’ determinations about student ability. Bertrand and Marsh’s (2015) study provides insight into some of the ways in which teachers may talk about student ability in the context of data use; however, there is still more to learn.


METHODOLOGY


This article draws on data gathered in 2014–2015 from a case study of teacher teams in four elementary schools. Using qualitative case study methods allowed us to capture teachers’ use of data in the context of the team, school, and district in which they were working (Yin, 2013). We chose qualitative case study methodology because of our desire to understand the process of how teachers make sense of data use from their own perspective. Employing qualitative methods also allowed us to interact with participants, observe behavior, and gain firsthand knowledge about the contexts of teachers’ work.


The selection criteria for this study included public elementary schools in which teachers were expected to use data to inform instructional differentiation. More specifically, we chose schools in which fourth- and fifth-grade teachers were using shared student performance data in English language arts (ELA) and math, including state-, district-, school-, or grade-level assessments, to inform instruction. We focused on the fourth- and fifth-grade teacher teams in each site because some research has shown that ability grouping in the upper elementary grades in reading and math has grown considerably in the past 20 years (Loveless, 2013). The schools in the sample used a variety of data sources to inform their decisions about instruction.


Because we were interested in how data use and decisions about instructional differentiation may relate to students’ background characteristics, we selected racially and economically diverse schools. Two of the schools served a majority of low-income students, and the other two schools served a mix of students from low- to middle-high-income families. All of the schools served English language learners, ranging from a low of 5% at one school to a high of 75%. Two schools served a majority of Latino students, and the two other schools served a mix of students from different ethnic and racial backgrounds, including Asian, White, Latino, and African American students. All four schools are located in one state, but in four different school districts. For the purposes of confidentiality, we do not refer to the schools or persons by name.  


Our data collection efforts involved regular visits to each school, sometimes weekly or biweekly, to conduct interviews and observe teacher team meetings. We conducted semistructured interviews with the fourth- and fifth-grade teachers in each school. The first interviews with teachers focused on gaining background knowledge of the participant and school contexts, existing data use culture in the school and teacher teams, and uses of data for instruction. Appendix A includes the semistructured interview protocol that we used with teachers during these interviews. The second round of interviews provided opportunities to follow up with participants to understand their data use practices in greater depth, to clarify any discrepancies from prior interviews, and to verify our understanding of their perspectives and interactions from observations. A total of 20 teachers were interviewed, with most being interviewed two times over the course of the year and, in a few cases, three times when teachers wished to provide us with follow-up information. This yielded a total of 42 teacher interviews lasting 45 minutes or longer, gathered over the course of the 2014–2015 school year. We also interviewed principals and other key personnel (e.g., instructional coaches, where present) at each school, but these were not included in this analysis because the focus is on teacher talk. All interviews were taped and transcribed verbatim.


During the same school year, we gathered data from 49 meeting observations, totaling 127 hours of meeting time. Teachers in these schools had established collaboration time as often as weekly or monthly, depending on the school, and we observed these meetings whenever possible. Most commonly during these meetings, teachers met by grade level or across two grade levels (fourth and fifth), or at some points with the principal and other faculty as well, depending on the nature of the meeting. However, grade-level team meetings were most common, and all teachers in a particular grade (fourth or fifth) participated in those meetings.


During observations of meetings, we took detailed field notes on how teachers talked about data, students, and instruction. Using a semistructured observation protocol (see Appendix B), we noted the types of data used for discussion, how data were analyzed, and how data were discussed in relation to student achievement and backgrounds. It is important to note that our data for this article mostly came from how teachers talked about students in meetings and in interviews; we did not ask them to specifically describe their conceptions of ability in our interviews, though they did discuss student achievement and ability in relation to other questions we asked them. We collected or took photos of artifacts (documents) that were used in the meetings, such as printouts of student data that the teachers were examining, meeting agendas, and so on. Photos of the artifacts were pasted into our meeting notes and analyzed within the notes, though the documents themselves did not include instances of teacher talk and thus served mainly as background information for the purposes of this analysis.


We began by analyzing the observational and interview data with a set of a priori codes that we developed from our reading of the literature. For example, building on the work of Oakes et al. (1997), we coded for whether teacher talk reflected the notion that ability was a fixed construct, malleable, or a bit of both. Additional codes emerged in the process of analysis. For example, we found that teachers often talked about student ability in relation to student behavior and motivation, and thus we developed new codes. A list of the codes and their definitions appears in Appendix C. To establish reliability of our coding scheme, two members of the research team coded selected interviews and compared results. Achieving a high degree of overlap, we then proceeded with coding the remaining interviews individually. In Appendix D, we include examples of data from our interviews and observations that were coded according to particular codes. We used MAXQDA, a qualitative coding software tool, to facilitate the coding of the data. This process resulted in 1,212 coded segments of data. Appendix E includes a table listing the frequency of each code. We then reviewed the data both within and across codes to search for themes (e.g., how did teachers explain their belief that ability was malleable? To what did they attribute achievement patterns?). Within codes, we uncovered different patterns as well. For example, the data coded as “fixed” informed us both about how teachers talked about their students (e.g., as high, medium, and low) and whether they saw ability as something that was an innate characteristic.


As we consider teachers’ responses, we are mindful of the work of Baker (2004), who argued for looking at interviewee responses not as reports but as accounts—“the work of accounting by a member of a category for activities attached to that category” (p. 163). She noted that both researcher and interviewee are involved in generating a social reality around categories and activities in accounts. “What we hear and attend to in these interview accounts are members’ methods for putting together a world that is recognizably familiar, orderly, and moral” (Baker, 2004, p. 175). Teachers in particular will respond to interview questions in ways that express moral accountability for their students (Baker & Johnson, 1998). In this study, teachers may also express accountability to their colleagues, given the focus on teacher team meetings in their schools, and to the use of data to inform instruction, given the questions we asked them about this issue. So too, how teachers talk about students in the context of a meeting may be reflective of the language commonly used in their team, school, and/or district (Horn, 2007). Thus, we must use caution when attempting to make sense of teachers’ beliefs via the language they use.


FINDINGS


Our analysis of the data reveals findings in several key areas. We first discuss the range of language teachers used to describe their students. The use of labels and student categories was relational and comparative and influenced by the accountability and policy contexts in which teachers worked. Subsequently, we explain how the use of data provided space for a conversation about student growth. Closely examining student work or assessments also led some teachers to move beyond categorizations of generalized ability to focusing on students’ skill levels in particular areas. We observed “moments of mismatch” in which teachers questioned whether assessment data provide an accurate picture of student achievement and also acknowledged the role of student effort, behavior, and family circumstances as important factors that could not be easily measured. These discussions opened up deeper inquiry into the factors that supported or hindered student learning. The implementation of the Common Core State Standards also led some teachers to question prior categorizations of student ability. Findings in each of these areas are elaborated next.


THE IMPACT OF POLICY AND ACCOUNTABILITY SYSTEMS ON TEACHERS’ TALK


Across schools, teachers used common labels to describe students of different achievement levels. Teachers frequently referred to their “low,” “middle,” and “high” students, likely representing the ubiquitous language of the categories arising out of NCLB and related state accountability systems, as Bertrand and Marsh (2015) found. Teachers used these terms to describe both groups of students and individual students. In this respect, they seemed to be talking about their assessments of student ability. For example, when describing students, a teacher explained, “in a class of 34 you’ve got like 5 that are super low, and 5 that are super high and everyone falls somewhere in the middle. This class is no different.” This statement may also indicate a belief that intelligence fits a bell-curve-shaped distribution. Another teacher described the students in his/her class as “typical,” where “your low ones are low and your high ones are high.” At other times, these terms were connected to student achievement, reflecting how students were performing, rather than an assessment of what they were capable of. These words were used as qualifiers for various other terms, such as “high achievers,” “low functioning,” or “middle level.”


These categorizations were reinforced by assessment data showing which students were below, at, or above grade level. Teachers not only talked about their students using the terms “low,” “middle,” and “high,” but also often referred to their students as “struggling,” “proficient,” or “advanced.” Again, here we see the distinction between categorizing student ability and categorizing student achievement. Teachers’ discourse thus also reflected the proficiency markers from benchmark assessments that used the categorizations from NCLB. The categories of “far below basic,” “below basic,” and “basic” on district benchmark assessments were often used to refer to students—not just their levels of achievement but the students themselves. For example, when talking about a student’s progress, a teacher said, “he’s far below basic.”


Although we observed common patterns across schools, we also found that the ways teachers talked about students were shaped by the district or school context. In some cases, this was because schools were provided with or developed different ways of categorizing student achievement, such as “secure” in the grade-level standards, “proficient,” or “benchmark.” Student achievement above grade level was described in a range of ways that also mapped onto categories promoted by school systems (e.g., “exceeds,” “challenge,” or “advanced”). These differences in language did not necessarily signal different ways of thinking about student ability; rather, they reflected the different terms used in each school.


Teachers also described students according to the programs that district policies supported. For example, at one school, a teacher referred to a student as “very smart” and “the most GATE [Gifted and Talented Education] child you will ever find.” Another teacher referred to a student as “typical GATE,” explaining that she “takes things internally” and that “she came in high and is still high.” In other cases, teachers simply made comments such as “he’s GATE.” By contrast, in one district, which did not offer a GATE program because district leaders believed in providing challenging learning opportunities for all students in the context of the regular classroom, students were never described using GATE terminology. Thus, the labeling of students occurred in relation to the particular policy contexts in which teachers were working.


Students in the Resource Specialist Program (RSP), in which they receive special education services on a pull-out or push-in basis, were often referred to as “RSP,” as in “he’s RSP.” At one school a teacher commented that most of the “lower” students were usually the “RSP kids.” A teacher at another school described a child as “very RSP” and suggested a Student Study Team (SST) meeting. The term “SST” was also used to describe students who needed additional support. For example, when describing the many needs of students in a particular grade level, a teacher said, “the whole group is SST.” English learners were also occasionally referred to by their program affiliation. For example, after describing a student who demonstrated little growth on the district benchmark assessment but who was considered to be a “hard worker,” a teacher referred to this student as “a typical EL kid,” meaning that the student struggled with English language precepts. For the most part, we view these kinds of labels as shorthand conventions for teachers who are assumed to share common understandings of what RSP, EL, or GATE designations mean for students in these contexts.


In sum, teacher talk about student ability and achievement was shaped by the accountability and policy contexts in which they worked. Assessments arising out of the NCLB era that categorized students into performance bands with particular labels influenced the language teachers used to talk about their students. Similarly, policies that required students to be sorted into groups based on English learner, special education, or gifted and talented designations also influenced how teachers talked about their students, although these practices varied by context.


QUESTIONS ABOUT ASSESSMENT DATA


While teacher talk often reflected the categorizations that arise out of accountability systems, we observed many instances of teachers questioning whether assessment data fully captured student achievement. These “moments of mismatch,” as we call them, allowed space for teachers to delve into discussions about using a wider array of data to inform instructional improvement, student learning, and progress. For example, when deciding on instructional interventions for a student, a teacher stated, “It’s hard because the class performance is so good, so the [benchmark assessment] scores aren’t matching.” Similarly, one teacher commented, “He’s a lot brighter than his reading score,” and another said, “His scores are a fluke.” These concerns were raised in relation to standardized benchmark assessments and also with respect to district writing assessments. In a meeting where teachers were scoring these assessments, one teacher said she did not like writing tests because she didn’t feel that they accurately captured a student’s progress: “I feel bad giving her a ‘one’ because she has improved so much.”


These questions about assessments were also articulated in interviews in which teachers espoused beliefs that test scores did not tell the full story. For example, one teacher explained, “I look at [the benchmark scores] as a snapshot on that day, but I need to use a range of data and see where the kids are falling.  I think it’s a mistake to use just one assessment and have it weigh too heavily on to anything.” A teacher in another school also used the language of “snapshot” when referring to benchmark assessment data:


There’s not a one size fits all anything with assessments. I don’t think, “okay I can give this, and it’s the perfect snapshot of these kids.” So to have a couple of different reliable measures balance that out then go with what you feel about the kid I think is . . . that’s the way that you feel good about how you grade them and what you’re saying about them.  The more information the better.


A teacher in another school similarly noted the importance of teacher judgment in the process of evaluating student achievement: “Sometimes kids don’t test well, and we know the kids better because they are more than just numbers . . . so we need to look at other factors such as classroom observations, informal observations, [English language development test scores], and what we see they can do.” Conversely, some teachers saw test score data as providing another lens on the achievement of students who may not perform well in the classroom. One teacher explained, “His [benchmark assessment] scores show he is capable and bright, but it’s all about what he puts into it.”


Although the majority of teachers adopted this holistic approach to assessing student achievement, a few teachers prioritized benchmark assessment scores in making judgments about student achievement, especially given that the annual assessment linked to the Common Core Standards was new.  As one teacher explained,


Our best way to assess our kids right now on . . . where to go next year in fifth grade, and also to assess if our teaching was adequate this year, is [the benchmark assessment], so we kind of value that more right now until we see what kind of feedback we can get from SBAC [the Smarter Balanced Assessment Consortium].


By and large, however, teachers were quick to notice inconsistencies between their students’ performance on an assessment and how they viewed the students’ achievement potential based on other forms of data, including their own observations.


GOING BEYOND THE DATA TO MAKE SENSE OF STUDENT ACHIEVEMENT


As noted earlier, there were numerous instances in which teachers questioned whether assessment data provided an accurate picture of student achievement and potential. As teachers attempted to make sense of student achievement, they also acknowledged the role of student effort, behavior, and family background as important factors. Thus, consistent with prior research, teachers’ socially constructed notions of ability were based not just on test scores but also on students’ background and behavioral characteristics, though race and gender were rarely explicitly referred to. Moreover, ability was seen as just one dimension of achievement. In particular, teachers linked what they perceived as effort and motivation to actual achievement.


In listening to teacher dialogue, we found that motivation was a major factor in how teachers explained student achievement. For example, one teacher described a student as “a smart child, but he’s just not applying himself.” Teachers lamented that some of the students in their classes were “slacking off,” “not trying,” or “not putting in the effort.” About another student, one teacher explained, “it really comes down to effort. His spring [benchmark assessment] scores are the best he’s done all year,” and a second teacher chimed in with, “Yes, they are indicative of how bright he is. . . . What he can do.” Teachers’ assessments of motivation levels and effort had important consequences because they influenced decisions about which students should receive intervention support. Describing a student, one teacher said, “She hasn’t received interventions. With her scores, you would think she needs it, but she’s a hard worker.” The student was deemed as not needing intervention because of her diligent efforts in class. Conversely, another teacher said she was “pushing [a student] to try harder” because he wasn’t working to his full potential.


Teachers’ questioning of test score data sometimes resulted from what they believed to be a mismatch between student achievement, effort, and behavior. Sometimes behavior overrode test scores as teachers made determinations about student achievement. For example, during a team meeting, when the principal noted that a student’s benchmark assessment scores were “okay” and that “she actually grew 20 points,” a teacher countered that the student’s test scores “don’t really match her class behavior.” At this school, we observed a shared notion of what it means to have proacademic behavior, which was referred to as being a “student.” Being a student involved exhibiting motivation and persistence and being attentive. When students’ learning habits did not match with this conception, efforts were made to help cultivate these behaviors.


As teachers drew on multiple forms of information to create a portrait of student achievement, they also noted the role of family support and students’ home lives. In two schools in particular, we noted a great deal of effort being made by teachers to both understand and address home circumstances that posed challenges to kids’ learning. The following exchange occurred between two teachers in a meeting:


Teacher 1: Many in your class are emotionally and socially needy, which is interfering with academics.

Teacher 2: Absolutely, many of my kids have family issues.


Family issues included parents who struggled with deportation, incarceration, illness, unstable employment, substance abuse, eviction, and other challenging circumstances. In one school, a teacher described a student labeled as the lowest in the class and noted that he was “severely neglected” at home. The teacher noted that he would benefit from “one-to-one attention” at school, and the team arranged for this to occur. Teachers were typically very respectful and empathetic when talking about students’ circumstances.


Racial and cultural backgrounds of students were rarely explicitly discussed in relation to students’ achievement or home lives. We noted only a couple of exceptions. One teacher made comparative statements about students in relation to race and class: “Go into the [district] website there and go take a look at the scores. You’re going to find out that Asian kids score higher than White kids and Latino kids, and my experience is because of the discipline at home.” The teacher explained that this may also have to do with socioeconomic differences, given that the Latino students tended to be lower income, but that this did not necessarily constitute a barrier to achievement: “I have other Latino kids here who are very good.” In considering class placement, a teacher in another school noted that two of her students would be “lifted up” when they were placed in a class with “smarter Asian boys.” In addition to noting explicit references, we listened for implicit references to race and class in teachers’ talk about student achievement in relation to family background. A limitation of our analysis is that the background of the student being discussed was not always clear to us, and thus we may not have captured implicit attributions to race or class.


FROM FIXED TO GROWTH MINDSETS ABOUT STUDENT ABILITY


Although we observed common patterns in how teachers described their students, when we listened closely to their dialogue and interviewed them, we heard teachers express a range of different beliefs about ability and achievement. Some teachers talked about students’ ability as an innate, fixed construct, whereas others saw ability as more malleable and spoke a great deal about “growth.” Like the teachers in Oakes et al. (1997), some teachers spoke about student ability in ways that reflected a mix of both fixed and malleable conceptions. However, in the case of this study, teachers often pointed to data on student achievement to support their perceptions about student ability.


Some teachers espoused a more fixed view of student ability. For example, at one school, a teacher said,


That’s just the way it works out. If you’re high in reading, you’re usually high in math and you’re high in writing. We look at our district reading scores for both math and reading. That’s how we figure it out and then your report card obviously, so it’s very, very rare that you have someone who’s high in reading and not high in math.


Another teacher used the notion of a bell curve: “I’ve still got you know the bell curve of kids that I need to challenge on one end and kids that need quite a bit of support on the other.”


In contrast, other teachers espoused a more malleable view of student ability, often speaking about student growth. One teacher talked about students coming to school with different “academic backpacks,” suggesting a belief that the contents of that “backpack” could change if students’ needs were addressed. Almost all teachers reflected specifically on their students’ growth over time. For example, one teacher mentioned, “there have been times where students have stepped up” and “at the end of the year have done well and really have gained progress.” At another school, a teacher commented that one of her students “had a great year” with “super strong growth” and that she “moved him [to a higher level group] for vocabulary.”


Benchmark assessment data provided teachers in some schools with a lens to examine and dialogue about student growth over time. In both meetings and interviews, teachers frequently pointed to examples of students who had exhibited growth on these standardized assessments. For example, one teacher noted that two boys had shown “double the growth at the end of the year for both reading and math,” meaning that their performance improved by two full grade levels. A teacher at another school expressed the desire to “really look at the scores” of students in order to “best work with the intervention to best meet their needs and to show growth in reading and writing.” In meetings, teachers talked about students who were “making growth.” While discussions of growth seemed to be most apparent in relation to benchmark assessment data, teachers also discussed growth that they observed in students’ writing over the course of the school year. As one teacher explained, she considered, “What did they produce on day one versus in March when they were writing a three paragraph essay with topic sentence and indenting and spelling words correctly?”


The process of closely examining data in the context of teacher team meetings focused teachers’ attention on student growth, thereby helping to shape their beliefs about what was possible for their students. The team meeting provided a space and a routine for teachers to point to various forms of “data” when making claims about student achievement. These opportunities for dialogue around data helped teachers move away from discussions of ability and toward discussions of student growth and potential. When meetings centered on the active examination of achievement data, they appeared to encourage this type of thinking and discourse about students. The data-focused teacher team meetings we observed drew mostly on benchmark or formative assessment data, with the goal of using data to inform instruction. These practices suggest that inquiry focused on using data for instructional improvement is enabled by organizational routines and expectations.


TARGETING SKILL VERSUS ACHIEVEMENT LEVELS


Closely examining student work or assessments also led some teachers to move beyond categorizations of generalized “ability” to focusing on students’ skill levels in particular areas. This allowed for a more expansive, nuanced view of what students knew and were able to do. Assessment results, student work, and teacher observations all played an important role in teachers’ judgments about students’ skill levels. For example, one teacher explained that a student was strong in reading comprehension but struggled with reading fluency. This judgment was based on her observations in class as well as on the benchmark assessment data. Considering various forms of data and her own judgment, a teacher in another school concluded the following about a student: “In some areas, she is strong but she has the worst writing I have seen in a fifth grader given how strong her reading is.” Class work also came into play. For example, a teacher explained that a student “is advanced in reading but his ability to understand is really low. . . . He misses directions. . . . Like on [this] test he got a 48%.”


In some cases, teachers planned instructional interventions on the basis of the examination of these multiple forms of data. For example, a teacher reflected on the achievement of a particular student while in a team meeting and said, “She needs fluency [support] on top of comprehension, looking at her [reading assessment] score.” It was common for teachers to reference multiple forms of data, including their own observations, when making these judgments. The following statement by a teacher exemplifies this: “He [shows] lots of improvement in reading recently. Previously he was going down but now [he is at a] 3.6.  He is finally responding in complete sentences. Speaking with a partner is helping him a lot more.”


As teachers planned instruction on the basis of data, teacher talk around the “fluid” or “flexible” grouping of students highlighted their notions about student ability. For example, when speaking about how she made grouping decisions for small-group instruction, a teacher explained that the small groups she pulled in math were “pretty flexible” and were organized “by skill” students needed support with, but she also mentioned, “There’s a kind of core three or four [students] that I can pretty much say are always going to be in the group that they’re going to be and they just test into that or they show the skills.”


In another case, a teacher described using an approach of pulling students into small groups while working on a concept:


What’s really worked well with that is you can have a child that the other kids label as a “smart kid,” and he’ll have to come back to that small group for one reason or another, either because in math he’s not good at explaining his thinking, and so he’s in a small group that day working on that with the others, or in reading he’s not real great at inferencing, so he’s in that . . . so it mixes it up because the groups are more fluid based on whatever skill it is, so it’s just. . . . It’s constantly moving, there aren’t any set reading groups or set math groups.


As teachers spoke about this transition to skills-based grouping, their dialogue reflected a challenge to traditional ability grouping procedures that often involve more fixed ideas about student ability. In another paper, we examined instructional differentiation in more detail (Park & Datnow, 2017).


QUESTIONING ABILITY MARKERS WITH IMPLEMENTATION OF THE COMMON CORE STANDARDS


The focus on performance levels in particular areas rather than global assessments of student ability was also influenced by the implementation of the Common Core State Standards. Teaching to these standards led several teachers in the study to question their prior assumptions about “high” and “low” performing students. When teaching to the standards, particularly in math, these teachers discovered that the patterns in students’ conceptual thinking did not always map on to their earlier judgments about student achievement. For example, a teacher explained that teaching Common Core math created opportunities for students who didn’t ordinarily see themselves as high achievers: “I think that with allowing them to attack a problem any way they want to and not just teaching algorithm and then drill and kill just the algorithm; I think you see those kids that would maybe not consider themselves good at math attacking problems in very efficient ways.” To clarify, this teacher explained that teaching to the Common Core State Standards allowed her to create opportunities for success for a wider range of students in her class. It’s not that the content was considered to be “easier” (quite the contrary, as discussed next), but rather the embracing of multiple ways of solving problems allowed more students to shine. These insights came up in teacher interviews rather than in teacher team meetings, and they occurred as teachers reflected on math instruction. It is important to note that all teachers in this study taught mixed-ability classes in math rather than homogenously grouped classes.


Along similar lines, another teacher explained, “Just because you get the answers the fastest doesn’t mean that there aren’t other skills and strategies to learn, and other people can think about problems in different ways that maybe you hadn’t ever even considered.” The teacher elaborated, “I have a lot of really bright kids that know the algorithms but really can’t back it up with any kind of conceptual model to show that they understand what the numbers represent or mean.” One teacher explained that because the Common Core State Standards were challenging for all the students in the class, it was necessary to do more whole-group instruction: “Math pretty much stays whole class, and given the number of new things being introduced to them . . . it’s eye-opening to a lot of kids.”


Teachers at one school scheduled one-on-one math interviews with struggling students to gain insight into their performance. A teacher explained,


My next interview is a little girl that isn’t doing so great in math class, but she’s a great problem solver, very gregarious, very strong in other academic areas, so we just want to know, what is it about math? Does she have this block about numbers? A lot of girls do, so I’m going to see what’s going on in her mind.


This teacher had noted a pattern of numerous girls not showing confidence in mathematics when they came to her class and had actively sought out research that could help her better understand and address this issue.


At the same time, other teachers struggled with how to teach the Common Core State Standards in math to the wide range of students in their classes. One teacher explained, “The idea is to go deeper, but that still is a challenge when you’ve got kids who can’t add still in a room with kids who are fluently adding, subtracting, and multiplying and dividing fractions.”  Another teacher complained, “I can’t tell what’s up from down because this is only the second year of it. . . . I’m trying to do it justice by doing like three lessons a week with review, and it’s a painstaking task.” Teachers also believed that the language demands in Common Core math posed difficulties for students who were English learners. A teacher explained, “There’s so much reading with Common Core.”


It is possible that teaching to the Common Core State Standards can help to create opportunities for teachers to see ways in which students of varying prior achievement levels can experience success. However, as the comments above suggest, a great deal depends on how teachers engage with standards, as well as the supports that teachers have for engaging in learning around the standards. For many teachers, these standards are a departure from past practice and involve rethinking curriculum and instruction in significant ways. Structured opportunities for collective learning among teachers can aid in this transition, especially if they allow for critical inquiry around existing practices and beliefs about student ability (Stosich, 2016).


CONCLUSION AND IMPLICATIONS


Teacher talk about student ability reflects the myriad ways in which the categorizations of student achievement set by accountability policies have become embedded in schools. Labels are imposed on teachers by broader policies for accountability and have become part of teachers’ lexicon. This lexicon appeared to reify a hierarchy of ability and constrain the ways in which teachers talk about student achievement and learning. This study finds that teachers consistently referred to students as “low,” “middle,” and “high”—terms that directly map on to the categories arising out of NCLB and the benchmark assessments that have been commonly used. Teachers in each school used their own specific labels to refer to students but often discussed achievement bands that translated to below standard, proficient, and advanced. The categorization of students according to English learner, GATE, and special education status also influenced the shorthand language teachers used when they talked about student abilities.


The ways in which teachers characterized specific students’ achievement and ability were influenced by accountability policies but also informed by teachers’ holistic assessments of students. Regular meetings in which teachers got together to examine data on student learning provided a space for teachers to examine students’ strengths and weaknesses on a variety of measures and talk in terms of student “growth.” At the same time, teachers also recognized the limits of the assessments in providing a complete picture of student achievement, and we observed “moments of mismatch” in which teachers noted inconsistencies between assessment data and other forms of information on student achievement. Teachers relied on various sources of knowledge and data in their descriptions of student ability, including motivation and family background. Teachers also made distinctions between ability and achievement. Ability was a component of achievement; however, definitions of achievement also included notions of effort and expected behavior.


How teachers in this study described the shift to the Common Core State Standards in math suggests that curriculum standards may have the potential to shift teachers’ conceptions about student ability. Implementing the standards led some teachers to question their assumptions about “high” and “low” performing students, as they discovered that the patterns in students’ conceptual thinking did not always map on to more global assessments about student achievement. These shifts may provide the impetus for some teachers to move beyond simple categorizations of generalized “ability” to focusing on skills and content in a variety of ways, including through heterogeneous grouping. Teachers’ language about students also reflected the notion of “growth” as teachers’ use of data prompted them to see changes in student achievement over time and adjust instruction to address specific skills or concepts. How these shifts are created needs to be examined closely in further research. It would also be important to dig more deeply into how new forms of data play a role in these shifts.


The findings from this study suggest that educational policies and routines influence educators’ conceptions of student achievement and ability. On the one hand, accountability policies can narrow the dialogue about students. On the other hand, educational policies and routines could also lead to new ways of thinking about student learning and to an examination of a broader range of data, and provide opportunities for professional learning. Explicitly addressing the fixed notions of ability that are reinforced by broader accountability policies and standards is critical for achieving equity, especially given that prior research shows that students categorized as “low” ability are often low-income students of color (Oakes et al., 1997).


Acknowledgment


We gratefully acknowledge the Spencer Foundation’s support of this research. We also give our sincere thanks to the educators who gave generously of their time to be part of this project. We so enjoyed learning from them and deeply respect their work.


Correspondence concerning this paper should be addressed to Amanda Datnow, UCSD Department of Education Studies, 9500 Gilman Drive, La Jolla, CA 92093-0070.

Contact: adatnow@ucsd.edu; 858-534-9598


References


Alvidrez, J., & Weinstein, R. S. (1999). Early teacher perceptions and later student academic achievement. Journal of Educational Psychology, 91(4), 731–746. http://dx.doi.org/10.1037/0022-0663.91.4.731


Baker, C. (2004). Membership categorization in interview accounts. In D. Silverman (Ed.), Qualitative research: Theory, method, and practice (2nd ed., pp. 162–176). Thousand Oaks, CA: Sage.


Baker, C. D., & Johnson, G. (1998). Interview talk as professional practice. Language and Education, 12(4), 229–242.


Berger, P. L., & Luckman, T. (1966). The social construction of reality: A treatise in the sociology of knowledge. New York, NY: Doubleday.


Bertrand, M., & Marsh, J. A. (2015). Teachers’ sensemaking of data and implications for equity. American Educational Research Journal, 52(5), 861–893.


Booher-Jennings, J. (2005). Below the bubble: “Educational triage” and the Texas accountability system. American Educational Research Journal, 42(2), 231–268.


Busse, T. V., Dahme, G., Wagner, H., & Wieczerkowski, W. (1986). Teacher perceptions of highly gifted students in the United States and West Germany. Gifted Child Quarterly, 30(2), 55–60. http://dx.doi.org/10.1177/001698628603000202


Clark, M. D., & Artiles, A. J. (2000). A cross-national study of teachers’ attributional pattern. Journal of Special Education, 34, 77–89.



Coburn, C., & Talbert, J. (2006). Conceptions of evidence use in school districts: Mapping the terrain. American Journal of Education, 112(4), 469–495.


Coburn, C. E., & Turner, E. O. (2011). Research on data use: A framework and analysis. Measurement: Interdisciplinary Research and Perspectives, 9(4), 173–206.


Daly, A. J. (2012). Data, dyads, and dynamics: Exploring data use and social networks in educational improvement.  Teachers College Record, 114(11), 110305.


Datnow, A., & Park, V. (2014). Data-driven leadership. San Francisco, CA: Jossey-Bass.


Diamond, J. B., & Cooper, K. (2007). The uses of testing data in urban elementary schools: Some lessons from Chicago. National Society for the Study of Education Yearbook, 106(1), 241–263.


Diamond, J. B., Randolph, A., & Spillane, J. P. (2004). Teachers’ expectations and sense of responsibility for student learning: The importance of race, class, and organizational habitus. Anthropology & Education Quarterly, 35(1), 75–98. http://dx.doi.org/10.1525/aeq.2004.35.1.75


Farley-Ripple, E., & Buttram, J. (2015). The development of capacity for data use: The role of teacher networks in an elementary school. Teachers College Record, 117(4), 1–34.


Fennema, E., Peterson, P. L., Carpenter, T. P., & Lubinski, C. A. (1990). Teachers’ attributions and beliefs about girls, boys, and math. Educational Studies in Mathematics, 21, 55–69.


Fennema, E., & Sherman, J.A. (1976). Fennema-Sherman Mathematics Attitudes Scales: Instruments designed to measure attitudes toward the learning of mathematics by females and males. Journal for Research in Mathematics Education, 7(5), 324–326.


Haan, M., & Wissink, I. (2013). The interactive attribution of school success in multi-ethnic schools. European Journal of Psychology of Education, 28, 297–313.


Halverson, R., Grigg, J., Prichett, R., & Thomas, C. (2007). The new instructional leadership: Creating data-driven instructional systems in schools. Journal of School Leadership, 17(2), 159–193.


Hansen-Thomas, H., & Cavagnetto, A. (2010). What do mainstream middle school teachers think about their English language learners? A tri-state case study. Bilingual Research Journal, 33(2), 249–266.


Honig, M. I., & Venkateswaran, N. (2012). School–central office relationships in evidence use: Understanding evidence use as a systems problem, American Journal of Education, 118(2), 199–222.


Horn, I. S.  (2007). Fast kids, slow kids, lazy kids: Framing the mismatch problem in mathematics teachers’ conversations. Journal of the Learning Sciences, 16(1), 37–79.


Horn, I., Kane, B., & Wilson, B. (2015). Making sense of student performance data:  Data use logics and mathematics teachers’ learning opportunities. American Educational Research Journal, 52(2), 208–242.


Horn, I. S., & Little, J. W. (2010). Attending to problems of practice: Routines and resources for professional learning in teachers’ workplace interactions. American Educational Research Journal, 47(1), 181–217.


Hughes, J. N., Gleason, K. A., & Zhang, D. (2005). Relationship influences on teachers’ perceptions of academic competence in academically at-risk minority and majority first grade students. Journal of School Psychology, 43(4), 303–320. http://dx.doi.org/10.1016/j.jsp.2005.07.001


Huguet, A., Marsh, J., & Bertrand, M. (2014). Building teachers’ data-use capacity: Insights from strong and struggling coaches. Education Policy Analysis Archives, 22(52), 1–26.  Retrieved from http://epaa.asu.edu/ojs/index.php/epaa/article/view/1600/1315


Jordan, A., Glenn, C., & McGhie-Richmond, D. (2010). The supporting effective teaching (SET) project: The relationship of inclusive teaching practices to teachers' beliefs about disability and ability, and about their roles as teachers. Teaching and Teacher Education, 26, 259–266.


Jordan, A., & Lindsay, L. (1997). Classroom teachers’ instructional interactions with students who are exceptional, at risk, and typically achieving. Remedial & Special Education, 18(2), 82–93.


Loveless, T. (2013). 2013 Brown Center Report on American Education: How well are American students learning? (Vol. 3, No. 2). Washington, DC: Brookings. Retrieved from http://www.brookings.edu/2013-brown-center-report


Marsh, J. A. (2012). Interventions promoting educators’ use of data: Research insights and gaps. Teachers College Record, 114(11), 1–48.

Means, B., Padilla, C., & Gallagher, L. (2010). Use of education data at the local level: From accountability to instructional improvement. Washington, DC: U.S. Department of Education, Office of Planning, Evaluation, and Policy Development.


Medway, F. J. (1979). Causal attributions for school-related problems: Teacher perceptions and teacher feedback. Journal of Educational Psychology, 71, 809–818.



McLaughlin, M. W., & Talbert, J. E. (1993). How the world of students and teachers challenges policy coherence. In S. H. Furhman (Ed.), Designing coherent education policy (pp. 220–249). San Francisco, CA: Jossey Bass.


Mehan, H. (1991). The school’s work of sorting students. In D. Boden & D. H. Zimmerman (Eds.), Talk and social structure: Studies in ethnomethodology and conversation analysis (pp. 71–90). Cambridge, England: Polity Press.


Mehan, H. (1993). Beneath the skin and between the ears: A case study in the politics of representation. In S. Chaiklin & J. Lave (Eds.), Understanding practice: Perspectives on activity and context (pp. 241–268). Cambridge, England: Cambridge University Press.


Minor, E. (2014). Racial differences in teacher perception of student ability. Teachers College Record, 116(10), 1–22.


Oakes, J., Wells, A. S., Jones, M., & Datnow, A. (1997). Detracking: The social construction of ability, cultural politics, and resistance to reform. Teachers College Record, 98(3), 482–510.


Park, V., Daly, A. J., & Guerra, A. W. (2013). Strategic framing: How leaders craft the meaning of data use for equity and learning. Educational Policy, 27(4), 645–675.


Park, V., & Datnow, A. (2017). Ability grouping and differentiated instruction in an era of data-driven decision making. American Journal of Education, 123(2), 281–306.


Riegle-Crumb, C., & Humphries, M. (2012). Exploring bias in math teachers’ perceptions of students’ ability by gender and race/ethnicity. Gender & Society, 26(2), 290–322. http://dx.doi.org/10.1177/0891243211434614


Rubin, B. (2008). Detracking in context: How local constructions of ability complicate equity-geared reform. Teachers College Record, 110(3), 646–699.


Schildkamp, K., & Lai, M. K. (2012). Introduction. In K. Schildkamp, M. K. Lai, & L. Earl (Eds.), Data-based decision making in education: Challenges and opportunities (pp. 1–9). Dordrecht, The Netherlands: Springer.


Spillane, J. (2012). Data in practice: Conceptualizing the data-based decision-making phenomena. American Journal of Education, 118, 113–141.


Stosich, E. L. (2016). Joint inquiry: Collective learning about the Common Core in high-poverty urban schools. American Educational Research Journal, 53(6), 1698–1731.


Tiedemann, J. (2002). Teachers’ gender stereotypes as determinants of teacher perceptions in elementary school mathematics. Educational Studies in Mathematics, 50, 49–62. doi:10.1023/a:1020518104346.


Timperley, H. (2009). Evidence-informed conversations making a difference to student achievement. In L. Earl & H. Timperley (Eds.), Professional learning conversations: Challenges in using evidence for improvement (pp. 69–79). New York, NY: Springer.


Tom, D., & Cooper, H. (1986). The effect of student background on teacher performance attributions: Evidence for counterdefensive patterns and low expectancy cycles. Basic and Applied Social Psychology, 7, 53–62. doi:10.1207/s15324834basp0701_4.


Watanabe, M. (2006). “Some people think this school is tracked and some people don’t”: Using inquiry groups to unpack teachers’ perspectives on detracking. Theory Into Practice, 45(1), 24–31.


Weiner, B. (1974). Achievement motivation and attribution theory. Morristown, NJ: General Learning Press.


Weiner, B. (1985). An attributional theory of achievement motivation and emotion. Psychological Review, 92, 548–573.


Weiner, B. (1986). An attributional theory of motivation and emotion. New York, NY: Springer.


Yin, R. (2013). Case study research (5th ed.). Thousand Oaks, CA: Sage.


Young, V. M. (2006). Teachers’ use of data: Loose coupling, agenda setting, and team norms. American Journal of Education, 112(4), 521–548.


APPENDIX A

Data Use Study Teacher Interview Protocol (Introductory)


I. Teacher Background


Tell me about your current teaching assignment.

How long have you been a teacher?

How long have you been teaching at this school? What brought you here?

Tell me about your teacher training (credential? graduate work since then?)


II. School and Community Context


Tell me about the student population at this school (e.g., demographics, stability, achievement trends).

Is there an expressed school mission? If so, what is it?

What is the school’s reputation in the community?

What is important in this community in terms of education?

How are parents involved in their children’s education and in the school?


III. Classroom Context


Tell me about the students in your classroom/classes this year (e.g., demographics, changes from past years).

How is instruction organized at this grade level? Do you have a self-contained class or do students rotate across teachers within the grade level? Are there instances in which a student might receive instruction by teachers at another grade level?

Do you have any additional adult support in the classroom (e.g., a teacher’s aide or parent volunteers on a regular basis)?

How is space arranged in your classroom (desks in groups, rows, learning centers, etc.)?


IV. Context for Educational Reform and Data Use


What are the important educational reform initiatives under way in your school?

What training has been provided to support these initiatives?

How does the use of assessment data fit in with these initiatives?

When/why did the use of data become an important part of your school improvement process?

What are the expectations of the district/school/grade level about how you should use data?



V. Structures and Cultures to Support Data Use


Does your school have informal and/or formal grade/other groups, small learning communities, or other collaboration opportunities for teachers to talk about instruction and student achievement? What gets discussed? How useful are these opportunities?


Is time allocated for collaboration among teachers, with respect to analyzing data specifically? How often? How comfortable are teachers in sharing data with each other?


What data do teachers bring to meetings, and what outcomes are expected from them or what actions are they responsible for?


Have you attended any training on data use? Has your district/school sponsored professional development focused on using data to make instructional decisions?


Have you had access to other opportunities to learn about the use of data to make decisions? (Probe: other possible resources such as books, classes at a university, etc.)


Do you have a person on staff to support you in the use of data? If so, what type of support is provided to you?



VI. Data Use to Inform Instruction and Instructional Differentiation


What types of assessment data does the school or district collect on all students at this grade level (e.g., state, district, school, curriculum embedded, teacher developed, etc.)? How often?


Do teachers receive student achievement data that come from school- or district-administered assessments? How often? In what format (e.g., spreadsheets, handouts, online)?


What kinds of data are examined by teachers in groups (grade level/across grade levels)? Individually?


Do you have the authority to make changes in the educational program as you see fit if they are based on data? If not, who makes these decisions?


How does the use of data inform decisions about differentiating instruction for students? (Probe specifically about English language arts and math.)


What data do you draw on to make these decisions? How does your own judgment as a teacher come into play?


What kinds of decisions about differentiation are made by teachers in groups? By you individually?


What does instructional differentiation look like here? Is it individualized?

Do you form small or large groups for particular instructional purposes arising from the analysis of student achievement data? How often do the groups shift, and how are these decisions made?


In what ways are students identified as high achieving or low achieving and in need of particular kinds of instructional supports?


Do you do any whole-class ability grouping based on student achievement levels? How are these decisions made?


Are these decisions you make on your own or does the school or district leadership have particular expectations about differentiation?


Do parents have particular expectations about how instruction will be differentiated for their children?


Have you received any training on how to use data to inform instructional differentiation or on instructional differentiation more generally?



VII. Conclusion


Overall, what are your beliefs about using data for instructional decision making, particularly with respect to differentiation? What successes have you had? What challenges have you run into? How have you dealt with these challenges?



APPENDIX B


Data-Related Meetings Observation Protocol


I. Meeting Overview

a. Date and time

b. Location

c. Topic and purpose

d. Formal agenda

e. Participants

f. Types of data used for discussions

g. Materials used

h. Format of data used


II. Guideline Questions for Observation

1. What was the nature of the discussion?

2. Who facilitated the meeting/discussion?

3. What type of data was brought to or handed out at the meeting?

4. What prompts were used to analyze data?

5. What type of data was used as a basis for the discussion?

a. Informal student assessment data

b. Formal student assessment data (i.e., state assessments; publisher-created, district-created, school-created, or teacher-created assessments)

6. How freely were data discussed? Were weaknesses/areas of need openly shared?

7. How were data discussed or analyzed in relation to student ability? How was student ability described? Were students described by specific ability groups/skills?

8. Was there evidence of joint problem solving and of sharing strategies for the analysis and use of data?

9. Were any decisions made based on these analyses? What type (instructional, organizational)?

10. Were short- or long-term goals or action plans established as a result of these analyses?

11. Was there a plan for follow-up on the implementation or effectiveness of action plans?



APPENDIX C


Codebook



Code: Definition

Assessment: Referring to students/student achievement as directly tied to assessment data

Behavior: Referring to student behavior in relation to achievement

Fixed: Referring to students/student achievement in fixed ways (e.g., low, medium, high; in relation to a bell curve)

Home/family: Home/family life as tied to achievement

Mixed conceptions: Referring to students/student achievement in both fixed and nonfixed ways; the teacher is grappling with both

Motivation: Referring to students/student achievement/ability in relation to student motivation

Nonfixed: Referring to student achievement/ability as malleable or fluid

Pace: Referring to student achievement in relation to pace of learning (e.g., lower achieving students need a slower pace)

Proficiency level: Referring to students/student achievement by proficiency level (e.g., Far Below Basic, Basic)

Program: Referring to students/student achievement by program type (e.g., RSP, GATE)

Skills based: Referring to students/student achievement in terms of discrete skills

Teacher judgment: Referring to students/student achievement levels in relation to teacher judgment



APPENDIX D


Examples of Coding of Interview and Observational Data


Category: Examples

Assessment: “I was so disappointed in her [benchmark assessment] score because her quizzes are 100% and she’s doing so great.” [teacher comment during meeting observation]

Behavior: “Last year I was so excited he made huge jumps,” said the teacher. But she added that “he’s flighty” and “he has bad influences.” [teacher comments during meeting observation]

Fixed: “The rest of the class is . . . we tried to place students according to abilities like low, medium and high. So we’re trying to keep a balance. We have to have let’s say 10 low students, 10 medium students and the rest high level.” [teacher interview]

Home/family: The group discusses a student who has been chronically absent in the past but is now coming to school regularly. They are keeping an eye on her, as they know family support is an issue. The principal says, “Last year we gave her a backpack and supplies” and clothing since “things weren’t being washed. If she needs a refresh, let me know.” [meeting notes]

Mixed conceptions: “So maybe strategically grouping them with somebody who is good with words or good with expressing or even bigger groups, groups of 5 where there’s a couple high, medium, lows so there’s some mentoring going on in the group.” [teacher interview]

Motivation: “[Student name] sticks out regardless of the number,” one teacher says, “he doesn’t participate.” Another teacher says, “he’s not self-motivated to do independent work.” [teacher comments during meeting observation]

Nonfixed: “I thought the students really did well with their writing this year, which was something I haven’t experienced before. I saw almost top to bottom the students producing really clear organized work by the middle and end of the year. Despite language barriers or difficulties I felt like we really, really improved our writing. And I think how it worked was first this is a really expressive group of students, they have a lot to say, but second we really broke apart the writing tasks quite a bit into nearly step by step procedures and then worked on slowly removing those supports and having the kids do it on their own.” [teacher interview]

Pace: “Depending on what we’re covering at that time, maybe there’s one of the kids in my class who, for this particular skill, needs to go slower with me because they struggle with it.” [teacher interview]

Proficiency level: Teacher asks, “What is a realistic goal to improve from? Right now, it is 12% proficient.” Principal replies, “Typically what you want to do is, who is proficient and who is almost there—12% and 17%. But also consider some who are on the cusp of going from intensive to strategic. So the lowest you would look at [as your goal] is 29%. You want all of them to move but also be realistic about who can be proficient with lots of practice.” [teacher comments during meeting observation]

Program: “I didn’t realize he was GATE. That’s surprising.” [teacher comment during meeting observation]

Skills based: “Kids still need help with organization, focusing on main ideas, and supporting details. That’s what I want to be working on rather than [an online reading program]. It is important for them to be reading and responding to a prompt since that is what they’ll be doing in SBAC.” [teacher comments during meeting observation]

Teacher judgment: “Beyond just those numbers you do have a sense in your mind too, and it’s from looking at student work, it’s from discussions with students, but it’s also just a general impression you have of each student. You have a sense of them.” [teacher interview]



APPENDIX E


Code Frequency


Code: Number of Coded Segments

Assessment: 16

Behavior: 96

Fixed: 329

Home/family: 34

Mixed conceptions: 122

Motivation: 44

Nonfixed: 91

Pace: 14

Proficiency level: 45

Program: 159

Skills based: 247

Teacher judgment: 15



Cite This Article as: Teachers College Record Volume 120 Number 4, 2018, p. 1-34
