CPM Educational Program

Flipping the Traditional Assessment Model


Philip Ero  pero@astoria.k12.or.us
Kevin Goin  kgoin@astoria.k12.or.us
Alexa Haller  ahaller@astoria.k12.or.us
Chad Madsen  cmadsen@astoria.k12.or.us
Jose Sosa   jsosa@astoria.k12.or.us
Jessica Todd  jtodd@astoria.k12.or.us
Astoria, OR

Astoria High School, in Astoria, Oregon, has approximately 600 students, 47% of whom receive free or reduced-price lunch. Our math department is in its third year of CPM implementation. We offer classes ranging from Algebra 1 through Calculus and Statistics. Many of our advanced courses are dual-enrolled with Clatsop Community College.

In our third year of CPM implementation, we took the traditional testing model and flipped it upside down in an effort to create a more student-centered approach to assessment. In year one of implementation, we re-evaluated our practices for delivering content. In year two, we looked deeper into our assessment methodology. After much trial and error, we restructured our assessment paradigm department-wide to better align with the CPM model:

Mixed-Spaced Testing Cycle
As can happen with traditional end-of-unit exams, students (and, let’s admit it, teachers) often develop a “learn and dump” mindset. In Astoria, by contrast, students expect to be assessed every three weeks regardless of our place in the curriculum. We no longer refer to exams as “Chapter Tests.” Each test is a cumulative learning opportunity for students. When an assessment is called “Test #7,” students recognize that the test does not cover any one topic but rather the mixed-spaced material that is prevalent in their daily homework. Test questions draw on cumulative concepts covered up to that point, spaced throughout the year. Hence, whenever we complete a chapter, we simply roll into the next chapter the following day.

During the weeks we are not testing, students are given a 15-20 minute quiz on current content. In our more advanced classes (all classes after Geometry), the students grade their own quizzes once everyone has finished. While the teacher solves the problems on the board, students are encouraged to make notes on improvements and to discuss their solutions with their peers. Typically, questions used on quizzes will show up in a similar fashion on the next test. When students are given the opportunity to preview their tests, they often talk about the similarities between the quiz and test questions. Students quickly learn to ask for the most challenging content to be on quizzes so they have the chance to see the material before the next test.

By testing every three weeks in a format aligned with their mixed-spaced homework, we have seen an increase in content retention, less “cramming” for tests, and a higher value placed on completing daily homework. Our renewed focus on completing homework with fidelity also means we no longer provide test review materials. Students no longer see starting or ending points for each concept; instead, they understand that the curriculum, as a whole, is closely connected, and that their success will be determined by their level of mastery in all areas. Lastly, we see lower levels of test anxiety among our students, especially those who have done their homework with fidelity. Our students better understand how closely connected the homework is to their exams. Gone are the days of students saying, “What is this test going to cover?” or “Chapter 4 was hard for me. I’m glad I don’t have to do that again!”

Collaboratively Written and Graded Assessments
Just as collaborative learning is an essential part of the CPM curriculum, collaborative test-writing better supports a consistent and cooperative learning environment. Teaching teams brainstorm concepts that will be on the next test using the mixed-spaced philosophy. Tests given every three weeks cover 70% cumulative material from the entire semester and 30% new material (i.e., any content introduced since the last test).

Once a test is drafted by a teacher, the team suggests improvements to ensure we are effectively measuring student comprehension. Never have we used the first draft of a test; teachers always find areas for improvement or clarity. We expect constructive feedback on every test. The mixed-spaced tests are drafted with one key philosophy: students are not expected to have mastered new content. The first time a topic is assessed, it is via an entry-level question. As the year progresses, foundational concepts continue to appear with increased rigor as students develop new levels of mastery.

As a result, we have found students complain less about a specific teacher writing an “easier” or “harder” test. They understand all teachers were involved in the writing process. What is more, collaborative test writing gives content teachers a checkpoint to see whether we have over-emphasized or overlooked any concepts in our respective classrooms.

Just as we collaborate in writing tests, we also grade collaboratively. Teachers on the same team grade the same problems for every student in that course. For example, one teacher will grade pages one and two of every Algebra 2 test while another teacher will grade pages three and four. We have found several benefits to this longitudinal grading. First, it offers students more consistent feedback regardless of the classroom instructor. Also, every student has both the “harder” and the “easier” grader, so each teacher has to prepare their class for the same standards. Additionally, this process has helped us align our points of emphasis in class. As content teams, we spend a good amount of time discussing exactly what we expect students to be able to demonstrate with a given topic. We often identify inconsistencies while grading a colleague’s exams, sparking healthy discussions about what standards and practices need to be adjusted to better prepare our students.

Preview & Review
At the start of the third year of implementation of the curriculum, we researched formative assessments and discovered that students show the most gains in learning from an assessment when they are given quality feedback with no grade attached. This led us to think about the purpose of our assessment process. Was it to simply test students and get a grade, or was it to create a learning opportunity?

We shifted our philosophy behind assessment feedback by creating a test preview. Students are given a chance to see each test a week in advance. Teams completely clear off their tables and are given blank copies of the test. They cannot write anything down, but they are allowed to talk with their teammates. They have conversations about the content, where to find it in their notes, homework, tests, and quizzes, and what they need to spend time studying. After five minutes, the teacher collects the tests to be administered the following week.

As a result of this practice, we observed a dramatic reduction in test anxiety. Before this change, students faced an enormous amount of cumulative content to study. After the implementation of test previews, we no longer hear “What’s on the next test?” Instead, we spend more time answering specific content questions. Students walk into the room on test days far more confident in their knowledge and abilities.

We have also noticed an improvement in teacher-student relationships. Because we are transparent with the students, teachers are no longer perceived as trying to trick students with our tests. There are no “gotcha” test questions, since the students know what is coming. The entire testing environment has become less about “beating Mr. Teacher’s test” and more about the teachers and students teaming up to conquer it together. Now, when students talk about what questions they saw on the test, it is not cheating. We call that studying! In fact, we have seen a 50% reduction in the number of cheating incidents since implementing this strategy, from the 2017–18 school year to the 2018–19 school year.

As we delved further into our assessment discussions, we also refined our test review practices. Previously, we scheduled review days before each test with some form of teacher-directed review activity. Because CPM’s homework is cumulative, we asked ourselves: Why are we taking more time out of our schedule to review? Students see the test a week in advance; they know what to study. Is there really a need for a review day? After some discussion, we came to the conclusion that a review day before the test was not the best use of our time. We wanted our assessments to be an opportunity for growth and learning, so we moved the test review from the day before the test to the day following the test. It is our philosophy that dedicating class time to review for a test undermines the importance of completing the homework and core classroom problems.

On our review day, we give each team one blank copy of the test they took the previous class. Students use the period to work together through the test, going over each question, discussing their methods, and determining what does or does not work to solve each problem. Teachers do not distribute the correct answers, but instead encourage students who got a problem correct to argue their case to their peers. Students have bought into this philosophy: everything we do is an opportunity for learning. They look forward to talking to each other about the test, often skipping right to the problems they struggled with the most. They use the time to learn from their mistakes. Due to the cumulative nature of our tests, they know that every concept will eventually show up on a future test.

Conclusion
We recognize our new assessment model is still a work in progress; adjustments will continue to be made as we move forward. For example, we have found our freshman and sophomore classes need more scaffolding and guidance on how to make the best use of the five-minute preview and the test review day. Addressing this issue is the current focus of our department.

Through these processes, we have seen tremendous gains in student performance, greater content retention, and a reduction in test anxiety and cheating. Our precalculus classes had a 15% increase in class averages on our final exam compared to the previous year. Furthermore, the development and implementation of these practices have sparked more meaningful mathematical conversations and strengthened the overall efficacy of our math team. Our students have made note of the improved collaboration and peer accountability. Regular teacher communication and collaboration have improved not only our holistic understanding of the content, but also our comfort in taking risks and seeking feedback. In this way, we model the very behavior we want from our students.
