On metaphors for managing machines
It’s exam and thesis-formation season for students in my department’s graduate program, so I have been reading and responding to ideas and arguments across dissertation topics. Naturally, I find myself in a headspace for doing a lot of definitional work, repeatedly asking questions like “what is X…” and “what does it mean…” to encourage junior scholars to describe their projects in a clear and critical manner.
As I prompt my students to provide clarity, I am inspired by them to do a similar exercise with my own explication of AI interactions. This time, I am thinking about the metaphors we use to represent our exchanges with generative systems, and the implications those metaphors carry for our perception of agentic technologies.
Metaphors matter, as Lakoff & Johnson and others remind us. Metaphors, as conceptual references, inform our assumptions about people, activities, events, and everything that makes up our physical and philosophical worlds. My peers and I in a research cluster at the Digital Life Institute have previously investigated the metaphors of digital literacy, i.e., the mental models individuals use to make sense of unfamiliar or abstract technological activities. We know that reasoning about abstract domains tends to draw on prior knowledge of adjacent, tangible references (what some call “cognitive mapping mechanisms”). In simpler words, we use what we already know to reason about what we do not yet fully understand. We borrow language, logics, and values from familiar resources to stabilize the uncertainty of emerging technologies.
With AI, we’ve landed quite early on some distinctive metaphorical framings that I feel are only loosely construed. Specifically, I am thinking about human-in-the-loop models in which the human actor exerts influence on the system or machine.
For instance, we keep saying we train artificial intelligence. Sometimes we say we teach it. Both metaphors feel natural. They come from the worlds we know: classrooms, workshops, job sites. But lately I have been wondering whether those metaphors are quietly doing political and ethical work that we haven’t fully examined. What if, instead of training or teaching machines, we began to think about mentoring them? (Bear with me while I unpack, please.)
Mentoring machines may sound strange or maybe even a little uncomfortable, but it signals something uniquely human about how we imagine our relationship to emerging technologies, especially in the sociopolitical moment we are living through.
Across the United States and globally, we are watching trust in institutions erode, labor become increasingly precarious, and automated systems take on greater roles in hiring, healthcare, education, and governance. At the same time, we are being asked to believe that these systems are neutral, efficient, and scalable. In this environment, the language we use to describe AI is not incidental. It shapes what kinds of power we think we have, what kinds of responsibility we accept, and what kinds of futures we imagine.
“Training” is an industrial metaphor. We train workers. We train systems and models. The goal is performance: fewer errors, faster output, more reliable compliance. Training presumes hierarchy and control. Something acts on something else to make it more useful.
“Teaching” is gentler, but it still assumes a one-way flow of knowledge from expert to novice. It imagines the learner in terms of a deficit that needs to be filled, corrected, or improved.
Both metaphors fit early AI systems reasonably well. But large language models and other adaptive systems are no longer just executing predefined rules. They learn from interaction. They retain patterns. They increasingly shape the conditions under which humans communicate, decide, and act. In rhetoric, we might say they are no longer just instruments of persuasion but part of the persuasive environment itself.
Here is where mentoring becomes an unsettling alternative.
Mentorship is not just about transferring information. It is also about cultivating judgment, orientation, and character. A mentor does not just show a junior colleague how to do something. They model how to think, how to weigh consequences, how to navigate ambiguity, how to be accountable to a community.
In rhetorical terms, mentoring is about ethos. It is about helping someone develop a way of being in the world, not just a set of skills.
When we interact with AI systems today—correcting them, giving feedback, setting boundaries, nudging them away from harmful outputs—we are already doing something that looks a lot like mentorship. We are not simply optimizing accuracy; we are trying to shape how these systems behave in human contexts that are ethically, politically, and culturally charged.
Thinking in terms of mentorship does not require us to believe that machines are human. It requires us to acknowledge that some nonhuman systems now participate in social and rhetorical life in ways that matter. Bruno Latour’s work on actor-networks reminds us that agency is distributed across humans and nonhumans alike, and Hannah Arendt’s scholarship on responsibility suggests that power without accountability is always dangerous. AI now sits squarely in that tension.
Why shifting the metaphor matters
We are living in a moment when algorithmic systems increasingly mediate who gets heard, who gets hired, who gets flagged, and who gets ignored. These systems do not just process information; they arrange social reality. Richard Buchanan once described design as a form of “argument,” a way of organizing the world so that certain actions and values become more likely than others. AI systems are now some of our most powerful designers.
If we keep thinking about AI only in terms of training and optimization, we reinforce a logic of extraction and control. We treat these systems as tools to be pushed harder and faster in service of productivity and profit. That logic aligns neatly with broader economic pressures in higher education, industry, and public life, where efficiency often displaces deliberation and care.
Mentorship may offer a different orientation. It suggests stewardship rather than domination, development rather than exploitation. It asks us to consider not just what a system can do, but what it is becoming—and what we are responsible for in that becoming. This framing resonates with feminist and care-ethics traditions in technology studies, which have long argued that relationships, not just rules, are the foundation of ethical practice. It also aligns with emerging work in human-computer interaction that treats AI not as a neutral interface but as a participant in ongoing, negotiated activity.
Of course, many people resist the language of mentoring when it comes to machines. They worry that it anthropomorphizes technology, invites misplaced emotional attachment, or erodes the boundary between human and nonhuman. These concerns are understandable—but they also reveal something deeper. We are anxious about losing our monopoly on agency, authorship, and moral standing. We want to preserve what we think of as uniquely human at a time when machines are beginning to perform recognizably human communicative acts: writing, advising, tutoring, even comforting.
Yet refusing to rethink our metaphors will not preserve human dignity. It will only leave us with systems that are powerful but unaccountable, optimized but unmoored.
It’s about deliberation
To talk about mentoring machines is not to claim that they have inner lives or moral worth in the way people do. It is to recognize that the systems we build and deploy now shape our collective futures in ways that demand more than technical fixes. They demand rhetorical, ethical, and educational engagement.
If training makes AI useful, mentorship makes us responsible.
For those of us working in rhetoric, technical communication, and education technology, this reframing opens new questions: How do we design interfaces that support ongoing ethical guidance rather than one-time configuration? How do we teach students not just to use AI, but to relate to it critically and care-fully? How do we build systems that can be corrected, redirected, and held to account over time?
These are problems of engineering, design, deliberation, and civic life. And in a world already strained by polarization, automation, and uncertainty, the metaphors we choose may matter more than we think.
To that end, I am working on a manuscript that aims to bring together rhetorical agency, design ethics, and cultural philosophy from the East to imagine AI futures. I am asking what the freedom to deliberate means, and what it takes to create and apply systems that preserve space for deliberative restraint as well as algorithmic prediction (sorry, that’s a mouthful). I think anticipatory optimization and individual agency can be combined to enhance both machine and human intelligence at a time when democratic expression is often suppressed. I welcome suggestions or questions you would like to see deliberated further.
Banner Photo by Marek Mucha on Unsplash