If you’ve talked to an engineering nerd or a tech bro recently, you’ve probably heard the phrase “fail fast, fail often” a lot. The idea is simple: Value trying and learning from failure rather than moving slowly and demanding perfection. I teach it in my classes, particularly in engineering, but also in the more traditional science classes where I want kids to feel free to try and be wrong in order to grow.
As a teacher, when I hear “failure,” I think of one number: 60 percent. For students, a failing grade is deeply tied to the stress of not hitting a required benchmark. Grading and failure have come up a ton at my school this past year. Like many other schools, we’ve discussed grade inflation and the merits of the flexible grading policies that so many schools adopted for pandemic-era learning.
In my 10 years in schools, I’ve never seen a debate where faculty (and students) are so deeply divided on the right way to proceed. At times, discussions about these grading policies have felt like people rooting for competing baseball teams: Team Mastery-Based Grading vs. Team Old-Fashioned Grading. “Liz is anti-retakes, and she’ll debate you” is a sentiment I’ve heard in meetings.
Earlier this year, I navigated this conflict in my own classroom when I started co-teaching a course called Design at Human Scale with my co-worker Brendan. As a math teacher, Brendan has played a big role in leading our math department’s shift to mastery-based grading. He gives students assessments based on benchmark topics, which they are allowed to retake until they demonstrate mastery. He scores them on a 4-point scale, which he then converts to the 0–100 percent scale used schoolwide.
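For readers who like to see the arithmetic: the exact conversion isn’t spelled out here, but a simple linear stretch gives the flavor of how a 4-point mastery score can land on a 0–100 scale. The anchor values below are hypothetical, not Brendan’s actual table.

```python
# Hypothetical sketch, not the actual conversion used in our math department:
# map a 0-4 mastery score onto a 0-100 percent scale with a linear stretch.
def mastery_to_percent(score, floor=50.0, ceiling=100.0):
    """Illustrative linear mapping; the floor and ceiling are assumptions."""
    if not 0 <= score <= 4:
        raise ValueError("mastery score must be between 0 and 4")
    return floor + (score / 4) * (ceiling - floor)

print(mastery_to_percent(3))  # 87.5 under these made-up anchor points
```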
I, on the other hand, remained much more traditional. I gave assessments at regular intervals, which I graded based on the percentage of correct answers. I then tallied all assessments for a final grade. I wasn’t sure how we’d reconcile these two grading philosophies.
I had learned enough about grading to have some doubts about my system, but I was still tied to “normal grading.” The idea of letting go of my policies felt inconceivable. I felt like my grading made sense, and people who didn’t agree just didn’t understand.
That said, I had a hard time with grading that felt subjective. That subjectivity came up most often in my engineering class, which included more creative assignments than a typical physics class.
My engineering courses are all about iteration, with projects that start small and simple and become more complex. Failed prototypes are good things to grow from. In my fidget spinner project, we start by making paper fidget spinners with spaghetti bearings. The next version is made from cardboard to test size and shape before committing to a final laser-cut product. But I was grading the project based on a final paper that didn’t reflect the value of all that iteration.
Struggling with that subjectivity, I defaulted to higher, more pleasing grades, and suddenly engineering was an “easy A.” (Any university-level engineering student would be surprised by this, given how challenging the discipline traditionally is.)
I became curious if my grading could reflect the “fail fast, fail often” mentality I taught. My new design course offered an opportunity to start trying.
When planning the course, Brendan and I worked together to design a system we both felt excited to try. We considered our course objectives and identified four key learning focuses: design, iteration, tools and software, and community.
For each project we tackled, we invited students to submit a piece of evidence about what they learned in each category. We reviewed the evidence and determined if students’ work demonstrated growth and earned a pass. If the evidence was insufficient, students had the opportunity to resubmit.
This meant we never graded anything except the students’ reporting of their own growth. I didn’t have to look at each clock a student designed and assign it a grade like a 90 or an 88. Instead, the student did the work of telling me what they learned by making the clock. Brendan and I then used a table that correlated the number of accepted pieces of evidence with a numerical grade on the schoolwide scale.
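Our actual table isn’t reproduced here, but a minimal sketch of the idea looks something like the lookup below; the thresholds and grade values are invented for illustration, not the ones Brendan and I used.

```python
# Illustrative only: translate the number of accepted pieces of evidence
# (one per focus area: design, iteration, tools and software, community)
# into a numerical grade. These grade values are hypothetical.
EVIDENCE_TO_GRADE = {4: 95, 3: 85, 2: 75, 1: 65, 0: 50}

def project_grade(accepted_evidence: int) -> int:
    """Look up a grade, capping the count at the four focus areas."""
    return EVIDENCE_TO_GRADE[min(max(accepted_evidence, 0), 4)]
```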
While our system still has room for improvement, I like it enough to keep tinkering with it and to try applying it elsewhere in the future. The moments of subjectivity have been nicely replaced with a more objective scale. Creative work shouldn’t be graded based on whether I am happy with it but rather on how well students justify their design choices. It still isn’t perfect. But it doesn’t need to be. I just needed to be willing to “fail fast and fail often” myself.
Co-teaching this course enabled me to learn the way I want my students to learn: trying new things, failing, and growing from that failure. In the face of different professional opinions about grading, I have learned we succeed by being open-minded and curious instead of contentious. There is no perfect, easy, or obvious solution to good grading.
Being more open to iterating on my own practice has led to greater professional growth than all of the debating over the merits of retakes, corrections, and mastery ever did. I am happy to have found in myself the learner I strive to help my students become.