“Ability to analyze data in multiple areas including but not limited to race/ethnicity, IEP, 504, grade, age, enrollment status (e.g., open enrollment.)”
This is a requirement from an RFP for a Social Emotional Wellness tool that I helped my team fill out last week. It’s also a question that Shmoop will NEVER answer yes to, and one that saddens us daily as we work to put students back at the center of social emotional learning efforts.
The question we need to be asking is this: is the intent of Social Emotional Wellness tools to improve student outcomes, or to create analytics sets about them?
THE WRONG WAY
Somehow, “industry experts” and software tool vendors have convinced extremely well-intentioned educators that the right or best way to help students grow is to put their emotional health on a normative scale and assign them static numbers as a binary rating of success, as if they were financial data (never mind that this requires frontline teachers to process and act on that data on their own). Imagine a tool where your student’s or your child’s behavior was assigned a score and a color based on a one-time static survey. See the actual screenshot below and this article:
Even companies serving professional adult learners know how damaging this approach is. During my time at Pluralsight, we used dynamic assessment to understand technologists’ hard skills on a scale. Even those companies fought tooth and nail to make sure those scores were never exposed as a number to managers or peers. How is it that corporations, driven by monetary success, are protecting their employees better than we’re protecting our own students?
THE RIGHT WAY
This isn’t to say that having data on the health of our students’ social well-being isn’t helpful or even necessary. What’s important here is why the data exists, how it is presented, and who it is presented to.
Why - there is NO more important reason for this work and these tools than helping students move forward and change their outcomes in a healthier way. There may be other valid needs, but they should come far behind this core purpose.
How - the tool should do everything in its power to prevent the data it presents from being weaponized against a student or groups of students. This includes concepts we’ll dive into below: normalization, red/yellow/green and number ratings, relating rather than labeling, expanding reductive language, static profiles, and more.
Who - nothing should be provided to the adults surrounding students that students don’t also get their own lens into. SEL is not a directed journey; it’s a mutual and personal one. While it’s sometimes logical for the display layers to change, students should be treated as active agents in the learning journey.
Now imagine a lens for students and educators that accomplishes all these things. Putting the student back at the center of the work with recurring, student-driven engagement at low enough levels to be accurate and actionable:
- Normativity - comparing a student to another student, or a group of students to another group, automatically strips out every variable of individuality. Grouping student SEL data at large scale and pitting groups against each other can only produce an inaccurate picture of individuals. It’s almost unbelievable that tools are grouping students by race, gender, and other attributes and comparing those groups to each other.
- Static Profiles - a static profile created by a one-time capture of data is the exact opposite of a growth mindset, and a growth mindset is the key to helping students along their SEL learning journey. If the data being acted on comes from a single capture, not only is it likely to be inaccurate, but the survey must also be engineered to squeeze as much information as possible out of that one moment. SEL tools should be built and implemented for very frequent engagement.
- Relating Not Labeling - a tool that tells you what you are (top image of a “behavior claim”) is very likely to create self-fulfilling prophecies and to reinforce labels rather than guide students. All language and visualization presenting SEL information to students and educators should come from a lens of relating (“is this how you’re feeling?”) rather than assigning the student a label based on the interactions.
- R/Y/G & # Ratings - the scales this data is put on are completely arbitrary. Any lens showing a hard beginning or end (0–100) immediately discredits the information and boxes students into that label. Any SEL dimension is far too complex to normalize on a scale and present to a student or educator that way without doing damage (see the second image, with no numbers or beginning/end).
- Reductive Language - any labels or content provided to students need to broaden the student’s language about themselves. Today, their terms are very binary (lazy/motivated, smart/dumb). SEL tools should present data and analytics to both educators and students in a way that expands that language to be more precise and helpful. See our blog here to learn more about this concept.
Never forget that each student represents another you, your child, your niece, your sibling, or someone who should be deeply cared for. We need to refocus on keeping a human lens on SEL work. This work isn’t about KPIs or datasets; it’s about improving students’ lives at a time when they need it more than ever and have very few resources at their disposal to do so. #keepselhuman