Grade Expectations

Context first. We're a group of 12 TAs and graders responsible for assisting and grading a class of 311 students. We don't impose limits on the number of attempts at getting help or getting projects graded, so how do we balance both kinds of requests effectively? That question is what finally pushed us to invest time in an auto-grader. Once deployed, we could hand the mundane part of the work to the auto-grader and focus on the one thing that actually matters: our students' understanding of the course material. Here's what we learnt over the past week.

Ship fast. Iterate faster.

You will conjure up ideas in your head and call them your vision. And you will keep working on them tirelessly within the four walls of your room, without expanding their scope to include what your would-be "users" actually want.

You know how our instinct tells us to keep a product in the shadows lest we face ridicule for putting out something half-baked? Yeah, well, kill that instinct. It's what holds us back in the first place. Your users will tell you exactly what they want. And their urgency will spur you to action faster than anything else that would have "inspired" you.

Ask and it will be given to you

Help will always be given to those who ask for it.

As long as you are driven enough, you'll always find help. Your friends will be inspired by your motivation to create impact and join you. And sometimes you don't even need to ask: they'll see you chipping away at the problem and bring their energy to assist.

Do things that don't scale

This one's a cliché. But it's true! You'd like to automate every single part of the process, but that's nigh impossible: you have limited resources to throw at the problem, so place your bets carefully. The way we now operate is to take students' feedback on the auto-grader and update it on the fly when we discover flaws.

Listen to your users

Know your audience!

Think about the classic Pareto rule: 20% of the features drive 80% of the impact. But to identify that 20%, talk to your users. We got a ton of feedback from our students that we intend to accommodate in the next auto-grader release, things that will matter more than the initial ideas we had around "enhancing" the tool.

On Version Control

Practice what you preach

We had a major lapse in collaboration on this project. We'd share zip folders over Slack; at one point I had five different zip folders of the same thing! Using GitHub should have been the natural course of action, but somehow this eluded us (and meanwhile we'd go around telling students to use git. Oh, the irony!). Anyway, we've moved the project to GitHub, where we'll continue collaborating.

Data Matters

In god we trust. All others must bring data.

So it's all sounded hunky-dory till now. The real question is how this was perceived. Well, we were struggling until Thursday with various fixes. I'm sure that made things a little harder for students who'd already invested time in getting their projects to work. Sincerest apologies for that 🙏.

And then there were teething troubles. I guess we have to make progress on the following:

  1. Package naming is perhaps too rigidly enforced. So are string matching and ID matching. Either give students starter code or make peace with the fact that only partial matches can be expected.
  2. The auto-grader takes too long to run (~10 minutes per attempt). Is there a way we can speed things up?
  3. The failed tests aren't verbose enough to allow students to discover what went wrong. We need to fix this.
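On point 3, one direction worth sketching (this is a hedged illustration, not our actual test harness; the function names, expected values, and hints below are all hypothetical) is attaching an expected/actual pair and a hint to every check, so a failed test tells the student what went wrong instead of just that it failed:

```python
# Sketch: verbose failure reporting for an auto-grader check.
# Everything here is illustrative; it shows the reporting style,
# not our real test suite.

def check(name, expected, actual, hint):
    """Compare one result and return a student-readable report line."""
    if expected == actual:
        return f"PASS {name}"
    return (f"FAIL {name}: expected {expected!r}, got {actual!r}. "
            f"Hint: {hint}")

# Example run against a (hypothetical) student submission:
report = [
    check("package name", "edu.course.proj1", "edu.course.proj_1",
          "the package must be named exactly 'edu.course.proj1'"),
    check("greeting output", "Hello, World!", "Hello, World!",
          "the output of main() must match the spec verbatim"),
]
print("\n".join(report))
```

The point of the design is that every failure line carries its own diagnosis, so students can debug without pinging a TA, which was the whole reason for the auto-grader in the first place.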

There's also a big mindset shift we're expecting from students: moving seamlessly from being graded by TAs to being graded by an automated system. That isn't as minor as I'd like to believe. We need to work through the benefits for all stakeholders, not just instructors and TAs but also the students.