Clarity is an Act of Respect

This week, I had the opportunity to serve as an external committee member for a doctoral qualifying exam outside my department. As always, it’s a privilege to be part of the process. I enjoy being able to witness how disciplinary cultures (beyond my own experience or contexts) shape knowledge building and research practices. For academics, these moments are often where we see a student-scholar’s thinking come into focus, where semesters of training begin to cohere into something more independent. But I left the examination meeting unsettled.

During the discussion that followed the written exam, a faculty member raised concerns that the student’s responses may have been produced with the help of generative AI. The concern itself wasn’t the issue. It was how it surfaced: directly, publicly, and without any prior shared expectations about whether or how such tools were permitted, or how students should document their process. To my knowledge, there was no guideline, no agreed-upon language, no protocol… and yet the burden fell entirely on the student to account for something she was never explicitly asked to disclose.

In that moment, I don’t think the comment was meant to be punitive. But intent doesn’t erase impact. In a high-stakes setting like a doctoral exam, even a passing suspicion can feel like a destabilizing force, especially for students who are already navigating academic spaces shaped by linguistic difference, cultural expectations, or uneven access to institutional norms. The student in that room had worked hard. She was prepared and clearly invested. That moment of public scrutiny, however unintentional, was unfair to her. I couldn’t shake that.

Here’s what I keep thinking about: this moment wasn’t really about AI. It was about clarity, and the absence of it. If we expect students to adhere to certain practices, whether involving the use or non-use of tools, collaboration boundaries, or writing processes, those expectations need to be articulated in advance. Not implied. Not assumed. Not enforced retroactively. Otherwise, we create a situation where students are asked to succeed in a system whose rules are only revealed when they appear to be broken. And that’s not a learning environment… that’s a guessing game.

The stakes of that guessing game are also not evenly distributed. Students who are multilingual, who come from different educational traditions, whose writing style doesn’t match what a particular committee expects—these are the students most likely to draw a sideways glance when AI is suspected but never directly addressed. Now that’s a problem we created, and it’s one we can fix.

I’ve also been thinking about what a more constructive approach might look like. Not a rigid “policy” (I’m wary of that word, honestly) but a shared framework. Something that makes expectations visible and navigable, and shifts the conversation from suspicion to transparency. What if doctoral programs asked students to include a brief process statement alongside major exam submissions? Not as a confession, but as a reflection: How did you approach this work? Where did you face challenges? What decisions did you make along the way?

That kind of practice does two things, I think. First, it normalizes the reality that writing is a process, not a product that appears from nowhere. And second, it creates space for emerging tools to be discussed openly rather than policed indirectly. By extension, it gives faculty a clearer basis for evaluation, one grounded in what the student actually learned and how they engaged the work, not in inference or suspicion.

Because at the end of the day, qualifying exams are meant to assess a student’s readiness to think, research, and contribute confidently to their field. That’s the learning outcome. If our assessment practices introduce ambiguity or fear along the way, we risk undermining the very thing we care about. Trust doesn’t mean lowering standards. Instead, it should mean making them legible. And respect, especially in moments of evaluation, isn’t just about how we respond to students. It’s about how we design the conditions in which they are asked to perform.

I’m taking this week as a reminder to revisit how I communicate expectations in my own contexts. Not because of AI, exactly, but because of what this moment revealed about teaching, learning, and the importance of getting the basics right. A few clear sentences in an exam prompt could have prevented all of it, and our students deserve at least that much.

Banner Photo by Buddha Elemental 3D on Unsplash

What do you think? Share your thoughts here!