Making the Most of the Rest of the School Year

Build a Data Use Plan That Turns Assessment Into Action

This spring has been all about assessment in my world. Two projects pushed my thinking about assessment and data in important ways: my article, “From Student Data to System Impact: Evaluating Reading Intervention Success,” in the Winter 2026 issue of Perspectives on Language and Literacy, and TRL Summit 2026, with its theme “From Confusion to Clarity: Turning Data Into Instructional Impact.” Both centered on a pattern I see over and over: schools are surrounded by data, but without good systems, they stay stuck at the student‑by‑student level and never quite get to system‑level impact.

In the Perspectives article, I describe how teams often pull up one progress monitoring graph at a time, make a decision for that student, and then move on—“and so on, and so on”—without ever stepping back to ask, “How effective is this intervention as a whole?” At TRL Summit, my “Taking Action on Student Data” session and other summit sessions returned to the same idea: analysis does not equal action, and our data use improves when we get very clear about three things—What question are we asking? What evidence will we use? What action are we prepared to take?

Plan your 2026–27 data use now

As the school year winds down, most teams are focused on closing out grades, wrapping up interventions, and celebrating students. It is easy to think, “We’ll figure out our data plan in August,” and then August arrives with fires to put out and very little time for thoughtful design. The result is familiar: lots of data, but not enough planned spaces to use it well.

Drawing on the ideas in “From Student Data to System Impact” and on the Summit’s focus on assessment, I want to offer one clear charge: before summer, decide when each team will meet next year, which data they will use, and what decisions those meetings are expected to support. When we put data routines on the calendar now, we create the conditions for more intentional, less reactive work with both student‑level and system‑level data.

Step 1: Name the purposes of your assessments

In my article, I argue that “the actions we take next to support student reading growth are impacted by the perspective we take when we review our data.” If we only ask student‑level questions (“How did this student grow?”), we will only take student‑level actions; if we also ask system‑level questions (“How effective is this intervention overall?”), we are more likely to make system‑level improvements. That same logic applies to how we think about assessment purposes.

Before you plan meetings, sort the assessments you already use into a few broad purposes:

  • Screening. Brief measures that flag which students may be at risk and need a closer look (for example, universal early literacy screeners administered three times per year).

  • Diagnostic or targeted. Tools that clarify why a student is struggling—phonological awareness, decoding, fluency, language comprehension, and so on—so support can be matched to specific skills.

  • Progress monitoring. Frequent, brief checks that tell you whether instruction or intervention is working and whether students are on track to meet their goals.

  • Outcome or summative. Larger assessments that give a system‑level picture of progress (district benchmarks, state tests) and inform program, scheduling, and resource decisions.

In the Perspectives piece, I show how looking at aggregate progress monitoring data (for example, all students in Intervention A, B, or C) can shift decisions from “What’s wrong with this student?” to “What’s happening with this intervention?”—a far more efficient and often more effective focus. Being clear about the purpose of each assessment is the first step toward that kind of system‑level thinking and helps you avoid the “data dump” meeting where every report shows up and no one is sure what to do next.

Step 2: Match data to the right teams

Once you’ve named your purposes, the next question is: Which teams need which data to do their work well? In both the article and my MTSS Data Academy work, I emphasize the importance of teacher teams, Building Leadership Teams (BLTs), and District Leadership Teams (DLTs) using aggregate data to answer questions about intervention effectiveness—not just reviewing one student graph at a time.

Most schools already have some version of these teams:

  • Classroom or grade‑level teams. Use screening, classroom assessments, and progress monitoring to strengthen core instruction, form and regroup small groups, and identify students who may need supplemental support.

  • Intervention or MTSS teams. Use screening, diagnostic, and progress monitoring data to decide who receives which interventions, adjust intensity, and determine when to change or fade support; here, reviewing data by group (Intervention A, B, C) rather than student by student can surface patterns of intervention effectiveness.

  • School leadership teams (BLTs). Use aggregate intervention and outcome data to evaluate which interventions are working in which grades and where there are gaps—for example, strong fluency interventions but weak phonics options at certain grade levels.

  • District leadership teams (DLTs) or cross‑school teams. Use district‑wide data to identify system gaps, such as many phonemic awareness and phonics interventions but very few effective language‑building interventions in kindergarten.

Not every team needs every data set. Each team needs the data that matches the decisions it is responsible for, at both the student level and the system level.

Step 3: Build next year’s data meeting calendar now

With purposes and teams in mind, now is the time to get concrete: put next year’s data meetings on the calendar before everyone leaves for summer. The TRL theme, “From Confusion to Clarity,” reflects this kind of design work—aligning your actual calendar with what you say you value about assessment and data use.

A simple starting structure might include:

  • Three schoolwide data windows aligned with universal screening (for example, early fall, mid‑year, and spring).

  • Standing grade‑level data meetings within 2–3 weeks of each screening window to examine class‑ and grade‑level patterns and plan instructional responses.

  • Regular MTSS/intervention meetings (every 4–6 weeks) focused on progress monitoring and questions like, “How effective are our interventions?” and “Where do we need to intensify at the group level?”

  • Quarterly BLT and DLT meetings that use aggregate and outcome data to evaluate intervention systems, identify gaps (for example, missing intervention types at certain grades), and make resource and scheduling decisions.

From there, assign specific dates and times, and clarify who will pull reports, prepare aggregate views (like the Intervention A/B/C example from the Perspectives article), and draft guiding questions.

Step 4: Give every meeting a clear question

In “From Student Data to System Impact,” I write about how the questions we ask drive the actions we take: if we only ask student‑level questions, we only take student‑level actions; if we ask system‑level questions, we open the door to more efficient, high‑leverage changes. In my TRL “Taking Action on Student Data” session, I framed this as three prompts: What question? What evidence? What action?

A general goal like “review data” almost guarantees a meandering conversation. Instead, give each meeting a one‑sentence purpose that answers those prompts, such as:

  • “Determine which students need supplemental phonics support based on fall screening.”

  • “Use progress monitoring data and aggregate intervention views to identify interventions that are not sufficiently effective and plan intensification at the group level.”

  • “Use schoolwide reading and intervention data to identify strengths, gaps, and professional learning priorities.”

This mirrors the decision‑focused approach to assessment at TRL Summit 2026, where the emphasis is on assessment as a tool to drive clear, shared action, not an end in itself.

Step 5: Protect the time and use simple routines

Even the best calendar will not change outcomes if teams show up and are unsure what to do. In the article, I describe how easy it is for educators to “explain data instead of taking action on data,” especially when they are unsure how to respond when students are not making expected progress. That is why I recommend scheduling regular time—about once a month—for teacher teams to review progress monitoring data and take action, supported by clear agendas and frameworks like ICEL (Instruction, Curriculum, Environment, Learner).

A few practical moves:

  • Treat data use sessions as essential events, not optional meetings that can be bumped whenever something else comes up.

  • Use a consistent protocol so staff know what to expect (for example: start with a clear question, review aggregate group data first, decide where to intensify at the group level, and then move to individual students as needed).

  • Clarify roles: who facilitates, who captures decisions, and who keeps time. It may feel a little structured at first, but those roles keep meetings focused and build capacity within teams to use data confidently.

When teams have these routines, they are less likely to get stuck “admiring the data” and more likely to make timely, concrete decisions for both groups and individual students.

An actionable next step (and how MTSS Data Academy can help)

In “From Student Data to System Impact,” I argue that focusing first on interventions—rather than intensifying supports student by student—helps avoid the frustration and resource drain that come with trying to individualize for large numbers of students when the intervention itself needs work. The Winter 2026 Perspectives issue and TRL Summit 2026 both reinforce a core belief: assessment is only powerful when it changes what adults do, not how we label students.

As you wrap up this year, consider gathering your leadership and MTSS teams for one focused planning session with this simple agenda:

  1. List the assessments you use and identify their primary purposes.

  2. Match each assessment to the teams who need to use it, including where aggregate intervention data will be reviewed.

  3. Put next year’s data meetings—with dates, participants, and clear purpose questions—on the calendar.

If this feels like a heavy lift to take on alone, this is exactly the kind of work the MTSS Data Academy is designed to support. MTSS Data Academy helps teams clarify assessment purposes, design efficient data routines, and build the skills and tools they need to use student and intervention data for both immediate instructional decisions and long‑term system improvement. As you plan for next year, consider how MTSS Data Academy can help you move from student data to system impact in your own context.
