Questions & Answers
1. What are Practical Measures?
Practical Measures (or "PMs") are frequent, formative measures that aim to support educators’ daily work by providing actionable data to inform instructional practice. In Edsight, PMs take the form of surveys that assess various aspects of either classroom teaching or professional development (PD) contexts.
For example, the Small Group Work PM asks students to report on their mathematical thinking during small-group work. An example of a question is “In your small group, did listening to other students help you make your thinking better?” Student responses are collected via Google Forms, anonymized, and then displayed in Edsight.
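To make the anonymization step concrete, here is a minimal Python sketch of what stripping identifying fields from a Google Forms export could look like. This is an illustration only, not Edsight’s actual pipeline; the CSV layout and column names are assumptions.

```python
import pandas as pd

# Hypothetical column names from a Google Forms CSV export;
# the actual layout depends on the form's settings.
IDENTIFYING_COLUMNS = ["Timestamp", "Email Address", "Name"]

def anonymize_responses(csv_path: str) -> pd.DataFrame:
    """Drop identifying columns so only the survey answers remain."""
    responses = pd.read_csv(csv_path)
    return responses.drop(columns=IDENTIFYING_COLUMNS, errors="ignore")
```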
2. Should I use Edsight to assess the progress of individual students?
No, we do not recommend using student responses to assess the progress of individual students. The classroom measures are designed to be aggregated at the level of a classroom. We also have concerns about protecting students’ anonymity: if students believed their responses could be traced back to them, that might affect how they respond to survey items.
3. Should I disaggregate student responses by student demographics?
It is possible to disaggregate student responses, but only if doing so is tied to the focus of your instructional improvement work, you have a sufficient sample size (we recommend at least 20 students in a given category), and you take great care in framing how the disaggregated results will be analyzed.
For example, in one partner district, a math department (seven teachers) had engaged in substantial professional development focused on improving opportunities for students to engage in whole-class discussions of their solutions. Instructional leaders and teachers were interested in exploring whether different groups of students were being provided equitable opportunities to engage in class discussion.

As a result, after a professional development session focused on improving discussions, grade-level teams each taught the same lesson (organized around a high-rigor task), and each teacher administered the whole-class discussion survey (144 total survey responses across the seven classrooms). The survey was administered electronically and automatically collected student IDs (i.e., students did not enter their names into the surveys). Instructional leaders then matched the student IDs to demographic information and were able to disaggregate the data by the demographic categories that were of interest to the department (gender, racial and ethnic background, English language learner status, and receipt of special education services).

Instructional leaders did not share data on any group of students for which there were fewer than 20 responses across the sample, both to protect anonymity and because the resulting inferences may not be valid for such a small number of students (e.g., fewer than 20 students in the school identified as Native American, so data corresponding to that category were not shared). The instructional leaders also took great care when sharing the data with teachers (at the level of the school, not by individual teacher’s classroom). They explicitly framed the analysis as an opportunity to identify school-level goals for addressing issues of equity, and emphasized that the team should be careful to interpret these data in terms of students’ instructional opportunities, not in terms of student deficits.
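As a rough sketch of the suppression rule described above, the following Python code withholds any demographic category with fewer than 20 responses before results are shared. The column names (demographic_group, response) are hypothetical, and response is assumed to be scored numerically; this is an illustration of the rule, not the district’s actual tooling.

```python
import pandas as pd

MIN_GROUP_SIZE = 20  # categories with fewer responses are not shared

def disaggregate_with_suppression(responses: pd.DataFrame) -> pd.DataFrame:
    """Summarize responses by demographic category, dropping any
    category with fewer than MIN_GROUP_SIZE responses."""
    summary = (
        responses.groupby("demographic_group")["response"]
        .agg(n_responses="count", mean_response="mean")
        .reset_index()
    )
    suppressed = summary["n_responses"] < MIN_GROUP_SIZE
    if suppressed.any():
        # Leaders can see which categories were withheld, but no values.
        print("Suppressed (n < 20):",
              summary.loc[suppressed, "demographic_group"].tolist())
    return summary[~suppressed]
```

Note that the threshold check should be repeated every time the data are re-sliced (e.g., by school or by grade level), since a category that is large across a district can fall below the threshold within a single school.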
4. Should I aggregate student responses across the same grade level? Across a school? A district?
It is possible to aggregate student responses across classrooms, across a school, and across a district. However, aggregation should also be done in relation to the focus of your instructional improvement work, and with careful deliberation about what makes sense to aggregate and what does not.
For example, in one partner district, District Math Specialists aimed to improve the quality of the instructional materials they were developing for teachers. In particular, they aimed to increase the rigor of the instructional tasks and to increase opportunities for students to engage in both small-group and whole-class discussions in which they shared their reasoning. To do so, the District Math Specialists assembled teams of teachers to write lessons and units (whom they referred to as Curriculum Guide Writers), and they recruited teams of teachers to pilot the new lessons (whom they referred to as Early Implementers).

As part of piloting the new lessons, Early Implementers were asked to administer the classroom measures of small-group and whole-class discussions in particular lessons. Members of the PMRR team then worked with District Math Specialists to aggregate the data so that they and the Curriculum Guide Writers could analyze the impact of the materials on classroom discourse and thus improve specific lessons. From this perspective, it made sense to aggregate data across teachers in the same grade level from different schools who were teaching the same lesson (e.g., 7th grade, Unit 1, Lesson 3). However, before aggregating the data, District Math Specialists examined the data separately for each teacher to check whether data from any particular classroom was markedly different from the others.
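A minimal Python sketch of this two-step process follows, assuming a pandas DataFrame with hypothetical columns lesson_id, teacher_id, and response (scored numerically). The 1.5-standard-deviation flag is an arbitrary choice for illustration, not a rule the district used; the point is simply to inspect unusual classrooms before pooling.

```python
import pandas as pd

def aggregate_by_lesson(responses: pd.DataFrame,
                        outlier_sds: float = 1.5) -> pd.DataFrame:
    """Aggregate responses for each lesson across classrooms, after
    flagging classrooms whose mean is markedly different from peers."""
    per_classroom = (
        responses.groupby(["lesson_id", "teacher_id"])["response"]
        .mean()
        .rename("classroom_mean")
        .reset_index()
    )
    # Compare each classroom to the other classrooms teaching the
    # same lesson. (A lesson taught in only one classroom has no
    # std, so it is never flagged.)
    stats = per_classroom.groupby("lesson_id")["classroom_mean"].agg(["mean", "std"])
    per_classroom = per_classroom.join(stats, on="lesson_id")
    flagged = per_classroom[
        (per_classroom["classroom_mean"] - per_classroom["mean"]).abs()
        > outlier_sds * per_classroom["std"]
    ]
    if not flagged.empty:
        # Examine these classrooms before pooling their data.
        print("Review before aggregating:",
              flagged[["lesson_id", "teacher_id"]].to_dict("records"))
    # Pool all responses for each lesson across classrooms.
    return (
        responses.groupby("lesson_id")["response"]
        .agg(n_responses="count", mean_response="mean")
        .reset_index()
    )
```

Checking each classroom first guards against a single atypical classroom (for example, one where the survey was administered differently) skewing the pooled results for a lesson.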