I have been working on a good formula for looking at student performance on the MAP tests for some time. Here is my latest effort to give credit for growth, progress, and some of the subtle things that make a difference. Should I be looking at things differently? Should I weight things differently? How could this actually be helpful? I removed the actual numbers for privacy reasons, but here is the template.
Each year, I try to develop a system of analyzing individual student data on MAP. Recently my focus has been on three main areas (gainers, stickers, sliders) and a few sub-areas that flesh out other progress and growth. Looking at the data for last year, here is what I found.
XX students rose one level (basic to proficient, proficient to advanced) GAINERS
XX students fell one level (proficient to basic, advanced to proficient) SLIDERS
XX students maintained their level from the previous year (STICKERS)
Digging a bit deeper....
Of the XX STICKERS, XX showed progress on their scale score, while XX declined in this area.
XX students achieved a NEW, HIGHER level of achievement
XX African-American students are in the GAINERS category
XX African-American students are in the SLIDERS category
The average increase in the STICKERS category was XX and the median score was XX.
(Remember this is an average of students both gaining and losing ground on their scale score)
OVERALL SCORE=
I tried to develop an overall score using these numbers by placing them into a formula. It is the first time that I have used this formula, so I am looking for tweaks and feedback on making it more representative of the progress of our students.
Students Rising One Level +2
Students Falling One Level -2
Students Rising Two Levels +5
Students Falling Two Levels -5
Students Remaining on Level, Higher Scale Score +1
Students Remaining on Level, Lower Scale Score -1
Student Achieving a New Level of Achievement +2
African American Student Rising One Level +1
African American Student Falling One Level -1
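The weighting above can be sketched as a small calculation. The counts below are purely hypothetical placeholders (the real numbers were removed for privacy), and the category names are my own labels, not official MAP terms.

```python
# Weights taken directly from the formula above.
WEIGHTS = {
    "rose_one_level": +2,          # GAINERS
    "fell_one_level": -2,          # SLIDERS
    "rose_two_levels": +5,
    "fell_two_levels": -5,
    "stuck_higher_scale": +1,      # STICKERS whose scale score went up
    "stuck_lower_scale": -1,       # STICKERS whose scale score went down
    "new_achievement_level": +2,   # reached a new, higher level
    "aa_rose_one_level": +1,       # African-American GAINERS (extra credit)
    "aa_fell_one_level": -1,       # African-American SLIDERS (extra penalty)
}

def overall_score(counts):
    """Multiply each category count by its weight and sum the results."""
    return sum(WEIGHTS[category] * n for category, n in counts.items())

# Hypothetical example: 12 gainers (2 African-American), 5 sliders
# (1 African-American), 8 stickers up, 4 stickers down, 3 at a new level.
example = {
    "rose_one_level": 12,
    "fell_one_level": 5,
    "stuck_higher_scale": 8,
    "stuck_lower_scale": 4,
    "new_achievement_level": 3,
    "aa_rose_one_level": 2,
    "aa_fell_one_level": 1,
}
print(overall_score(example))  # 24 - 10 + 8 - 4 + 6 + 2 - 1 = 25
```

Note that in this scheme an African-American student who rises a level effectively counts +3 (the +2 gainer weight plus the +1 sub-group weight), which is one of the choices worth discussing.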
My hope is that this formula is something that we can look at and discuss over time as it puts more variables in play when looking at the data. Your overall score is XX.
Robert, along with looking at the data, we've learned that you have to lower the affective filter for the teachers. We say 'how fascinating' to all data, good and bad. You have to train your staff to do this, and it does take years; otherwise, guilt, shame, and blame-shifting ensue. So behind those two simple words, how fascinating, is a whole 'woodenesque' psychology. Love your blog!