
How Automated Feedback Improves the Classroom Experience

Learn how automated feedback can improve teacher and student experience in your classroom.

March 10, 2022
by
Evan Sayles

In my first year teaching computer science, I was fresh out of a college CS program and eager to pass on what I could to a crop of fifteen first-year coders. I had planned some great lessons! There were interesting and challenging projects in store! But very quickly I learned that the class had some unique and challenging structural problems. Whenever students had code to work on, all my time would be spent flitting from desk to desk, answering student questions, and evaluating the correctness of some part of their code. Teaching to the class at large became nearly impossible while students were working. They needed feedback much more frequently than I could handle alone.

Since then, I have discovered how useful an autograder can be for my instruction. These days, I seldom release an assignment in my AP Computer Science A or Data Structures and Algorithms classes without using an autograder, even if I don't intend to use the autograder's "grade" to score the assignment.

Using automated feedback as part of my assignments has been a huge boost, both for my students and for me. It saves precious time when working on projects, freeing me to focus on harder problems. As long as the feedback is well-developed, it can be a useful part of the learning process and help students build independence. In this post, I'll go over my philosophy about automated feedback: why it's worth the effort to set up an autograder for assignments, and how to do it well.

As teachers, we have very limited time to get through a whole lot of material. It’s frustrating to get bogged down with smaller tasks when we’d rather be instructing the whole class or helping students with especially challenging problems. Yet it’s typical in computer science classrooms for there to be a massive queue of students asking for clarification about something or trying to determine if their code works before they submit it. And that alone isn’t a problem! Students absolutely should be asking those questions. But time spent working with one student is time not spent with everyone else, and students with a question will often sit idle while they’re waiting for their teacher's attention.

Autograders save time by providing instant feedback about the functional correctness of the code: whether it meets expectations and whether it works in all cases. Students can move faster through the course material and develop more independence: when they see cases where their code doesn't work as expected, they can immediately get to work trying to fix it. Plus, who doesn't like to see a bunch of green checkmarks after working hard on a problem?

However, using automated feedback to save time and effort in class requires a bit of up-front work. Whether using unit testing or input/output matching, it's important to consider a wide range of cases, including any possible edge cases. For example, when running tests with numbers, consider adding tests with positive numbers, all-negative numbers, and other inputs that could cause arithmetic errors; for strings, try empty strings, strings with mixed case, or any other unusual input that could catch a programmer off guard. Tests that cover every conceivable case are good for your students as they develop into independent learners. My AP students tend to get better and better about considering the possible weird edge cases on free-response questions because of the practice they get through their in-class assignments.
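To make that concrete, here's a minimal sketch of what that kind of edge-case coverage might look like as a JUnit 5 test class. The average and countVowels methods here are hypothetical student exercises (not from any particular assignment), and the stand-in implementations exist only so the sketch compiles on its own; a real autograder would test the student's submitted code instead.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    public class EdgeCaseTests {

        // Stand-ins for the student's submitted methods, so this sketch
        // compiles on its own; a real autograder would call the methods
        // from the student's file instead.
        static double average(int[] values) {
            if (values.length == 0) return 0.0;
            int sum = 0;
            for (int v : values) sum += v;
            return (double) sum / values.length;
        }

        static int countVowels(String s) {
            int count = 0;
            for (char c : s.toLowerCase().toCharArray()) {
                if ("aeiou".indexOf(c) >= 0) count++;
            }
            return count;
        }

        @Test
        public void averageHandlesAllNegativeNumbers() {
            // All-negative input catches code that assumes values are positive.
            assertEquals(-3.0, average(new int[]{-2, -3, -4}), 1e-9);
        }

        @Test
        public void averageHandlesEmptyArray() {
            // An empty array is a classic divide-by-zero trap.
            assertEquals(0.0, average(new int[]{}), 1e-9);
        }

        @Test
        public void countVowelsIgnoresCase() {
            // Mixed case catches code that only checks lowercase letters.
            assertEquals(2, countVowels("ApplE"));
        }

        @Test
        public void countVowelsHandlesEmptyString() {
            assertEquals(0, countVowels(""));
        }
    }

Each test targets one "weird" input, so when a student's code fails, the test name alone points them toward the kind of case they forgot to handle.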

Just as important as the thoroughness of an automated code tester is the quality of feedback that students receive. Especially for formative assignments, I keep no secrets when using an autograder. In my Java classes, my unit tests all use JUnit assert statements. Should a test fail, JUnit typically reports a message like "expected <true> but was <false>." In almost all cases, that is not useful feedback! I always include a failure message with my autograder output, usually including the data I used as input (for example, "maxVal([-2, -1, -4]) should return -1"). Other assignments use input/output matching, which normally provides the typed input and expected output for each test. That way, students know exactly why their code fails and they can begin debugging on their own. When I was an AP Computer Science A student, I really benefited from seeing the failure points of my code when I was practicing on websites like CodingBat, which lists dozens of unique inputs for some problems.
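As a rough illustration, here's how that kind of failure message might look in a JUnit 5 test. The maxVal method is just a stand-in for a student-written solution, included here so the example compiles on its own.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    public class MaxValTest {

        // Stand-in for the student's solution; a real autograder would call
        // the method from the submitted file instead.
        static int maxVal(int[] values) {
            int max = values[0];
            for (int v : values) {
                if (v > max) max = v;
            }
            return max;
        }

        @Test
        public void maxValWorksOnAllNegativeInput() {
            // The last argument is the failure message: if the test fails,
            // the student sees the exact input and the expected result,
            // not just "expected <-1> but was <...>".
            assertEquals(-1, maxVal(new int[]{-2, -1, -4}),
                    "maxVal([-2, -1, -4]) should return -1");
        }
    }

(In JUnit 4 the message comes first instead, as in assertEquals("maxVal([-2, -1, -4]) should return -1", -1, maxVal(data)). Either way, the point is that a failing student sees the input that broke their code, not just a mismatched pair of values.)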

Automated feedback has been a great tool in my CS classes, and I advocate for its use whenever feasible. It's a great time-saver for teachers and students alike, and it encourages students to look at error messages and try to solve their own problems. Of course, using autograders to provide this feedback requires a bit of work up front to make it effective. Consider a wide variety of tests and make sure students know which input(s) are causing their programs to fail. Coding Rooms makes it easy to set up autograding for assignments in lots of languages! Try following one of their guides to add automated testing to one of your problems today.

For more information about auto-grading with unit tests (with examples in Java, Python, and Ruby), check out: https://auto-grade.joemazzone.net/

Evan Sayles

Computer Science Teacher and Robotics Coach
Avon Old Farms School
Avon, Connecticut (just outside of Hartford)

