Assessment Variables and How to Utilize Assessment Vendors
Written by: Michael Gath
Learning in schools can be measured through student and teacher performance outcomes in a variety of ways. Data points are often obtained through summative, formative, or benchmark assessments, each administered at a different frequency to sharpen the picture of teaching and learning. The accountability tied to these methods is constantly changing, in part because of the different learning options the pandemic has forced on us, yet the need to know how students are performing remains constant. Through creative problem-solving and intentional action, we can establish ways to trust the data we receive.
Initially, assessments were put on hold or canceled because of uncertainty about the validity of results from tests that were not completed and proctored in person, not to mention the lack of consistent technological resources needed to administer those assessments equitably. Although several formative assessment companies cautioned against it, schools across the country still tested in remote settings, and the results were compared to trends under “normal” circumstances. Initial findings revealed that students’ scores on remotely administered assessments were trending higher than scores from previous in-person administrations. How could this be?
Were these results a product of virtual learning itself, or did students simply have access to more resources at home than at school? Those are just a few of the ‘other variables’ to weigh when interpreting this information.
While trying to stay competitive in the assessment-platform market and meet the needs of our new normal, companies began to share their findings and pushed to make their products accessible in any learning environment. As data poured in and was analyzed, reports such as NWEA’s Executive Summary found score margins to remain consistent between in-person and virtual learning from Fall 2019 to Fall 2020 in certain grades. Interestingly, grades 1 and 2 showed large increases in percentile ranks for virtual learners compared to their in-person peers.
Given this information from NWEA, along with conversations with vendors such as i-Ready and ClearSight, which had issued similar cautions and shared similar findings from their own data sets, we testing coordinators knew we needed to approach district formative assessments differently, and fast.
As we examined the situation, we determined that every grade level would likely yield unreliable data unless we put safeguards in place that gave us reason to trust the results. Many companies provided general tips and strategies for communicating with and preparing teachers, parents, and students for their assessments. These resources were helpful and brought into focus considerations we might otherwise have missed. Personally, I found that teachers who used the resources had smoother, less stressful testing experiences.
However, not all teachers were comfortable implementing these best practices in their first months back in the fall; many other concerns demanded immediate attention. First and foremost, ever-changing health guidelines kept us shifting and adjusting to provide safe and healthy environments. As a more specific and helpful tip, check with your district’s assessment representatives to see what they recommend for use with their product. For example, when our district switched to i-Ready, the company shared “Getting Good Data” as a resource on its site. Otherwise, I am confident that “10 Tips to Keep in Mind” by Kara Heichelbech will give you a good start in determining what will help in your testing scenarios.