Purpose Over Product

An Ongoing Tension

As I prepare for the new semester, I have been keeping up with recent developments in agentive generative tech. A good chunk of my course content desperately needs to engage with AI discussions, given how preoccupied the field of communication design is with technological determinism and fear. My NYT newsletter this week featured some unsettling data highlighting the number of human jobs slashed in 2025 due to AI adoption. Right next to that report were links to tech analysts’ takes on steady AI deployments and increased trust in businesses’ AI integration.

This public discourse is creating an increasingly tense, structural bind that’s actively shaping our field’s sense-making about technological inevitability and existential precarity. We’re stuck in a polarizing binary right now. On one side, AI dependence. On the other, AI quarantine. Either way, we have yet to meaningfully grapple with the messy reality we’re in. AI tools are already woven into our students’ writing processes, our administrative workflows, and increasingly, our own research practices. And as AI prevails in the communications world, the ends (outcomes) still justify the means (methods).

But whether we are concerned with how the products turn out or how those products were made, I think we’re losing sight of a central design question: purpose. What is any given activity supposed to do (e.g., why do we write or compose)? What rhetorical work is it performing? What capacities is it building? What effect is it creating?

The Rhetorical Purpose Question

Here’s an exercise familiar to many instructional designers. Pick any assignment in a course. Now ask: what is the learning objective here? Not “what do I want them to produce?” but “what do I want them to be able to do when this is over?”

Is the objective to demonstrate that they can construct a literature review? Or is it to develop the ability to synthesize across sources, trace scholarly conversations, and position their own thinking within a research discourse?

Is the objective to produce a research paper? Or is it to practice the recursive process of inquiry—forming questions, following leads, hitting dead ends, revising their understanding?

Is the objective to have correct APA citations? Or is it to understand why citation practices exist, what work they do, and how to make deliberate choices about acknowledging intellectual debts?

Rhetoric scholars have long distinguished function from form, substance from style, and content from presentation. The rhetorical purpose of a process may determine the tools for the task.

Writing as an activity, writing as a tool, writing as a product: these aren’t the same things. And when we collapse them, that is, when we treat the product as equivalent to the rhetorical purpose, I believe we set ourselves up for all kinds of problems. AI problems, sure. But also plagiarism problems. Rubric problems. Equity problems. We may be assessing the wrong thing.

The product is evidence. It’s an artifact of someone doing something. But if we don’t know what that “something” is supposed to be (what rhetorical purpose it serves, what capacities it develops, what effects it creates), then I don’t see how we can make good decisions about tools, methods, or assessment.

When Products Fool Us

Here’s where I think AI makes this problem visible in new ways. Because now a polished product can be generated in minutes. And suddenly we’re forced to ask: what were we actually assessing all along? If the product can be automated, then either (a) it was never really getting at the purpose we claimed, or (b) we were never clear about the purpose in the first place.

I don’t mean this as an indictment of AI. It’s an indictment of our own lack of clarity.

When a student uses AI to generate a discussion post, and that post gets full credit because it “hits all the requirements,” I’d argue the problem isn’t the AI. The problem could be that our requirements weren’t aligned with meaningful purposes. We wanted participation that looked like engagement.

When a committee uses AI to write a policy document, and that document gets approved because it “sounds official,” I think the problem isn’t the AI. The problem is that we were more concerned with the product looking a certain way than with ensuring the policymaking process actually accomplished its rhetorical work—creating consensus, anticipating conflicts, forming organizational identities.

Our rhetorical purposes should determine our tools.

If the objective is for students to practice making communication choices—deciding how to frame an argument, choosing what evidence to foreground, calibrating tone for a specific audience—then I believe AI is probably working against that purpose. Because the tool makes those choices for them, or at least obscures the fact that choices are being made at all. Here, the process of deliberation is the point, and the product is just the record of those deliberations.

If the objective is for students to understand disciplinary conventions (not just follow them, but understand why they exist and what functions they serve), then handing them an AI tool that automagically generates citations or formats papers might produce a nice-looking product, but it seems to me that it completely undermines the purpose. They need to wrestle with those conventions to understand them.

If the objective is administrative efficiency—e.g., getting a routine reminder sent, summarizing meeting notes, creating an infographic of institutional enrollment trends—then sure, I think AI might be fine. Because the rhetorical purpose here is speed. But I believe we should be honest that we’re making a trade-off: we’re prioritizing efficiency over other values. Sometimes that’s the right call. But I think we should make it deliberately.

Here’s the uncomfortable part, though: sometimes I think we were fooling ourselves. We were already treating products as proxies for purposes that they never really captured.

So when organizations panic about employees using AI to write reports and strategy memos, perhaps what they are really confronting is the fact that those documents were never doing the epistemic work they claimed to do. They were serving as formal artifacts of decision-making rather than as products of deliberation. If those genres truly functioned as sites of organizational judgment, an AI-generated version would be immediately recognizable—not because it sounded artificial, but because it would lack the lived, situated reasoning those documents are meant to carry.

Agency, Autonomy, and Automation

When we are clear about rhetorical purposes, we can have more productive conversations about when, where, or how AI fits in communication and design practices.

These conversations shape writing, UX design workflows, content operations, and organizational sense-making, since AI is increasingly positioned as a co-producer of professional knowledge. In these contexts, agency, autonomy, and automation are design variables that structure whose judgment counts, where responsibility resides, and how organizational knowledge is produced.

First, agency is the condition of rhetorical action. In any communicative practice, agency refers to the capacity to make consequential choices: how to frame a problem, what values to foreground, which evidence to privilege, and how to position audiences. These are not neutral operations; they are rhetorical acts that shape meaning, trust, and institutional authority.

When AI systems pre-compose these choices, they risk relocating rhetorical judgment away from practitioners and into opaque systems. The issue is not that such systems are inaccurate. It is that they may obscure the fact that rhetorical choices are being made at all.

If the purpose of a design or communication activity is to cultivate judgment, then agency must remain visible and practiced. No tool should short-circuit the development of that capacity.

Second, autonomy is a navigated constraint. Professional communicators and designers never operate autonomously in a pure sense. They negotiate institutional policies, brand guidelines, legal standards, accessibility requirements, deadlines, client expectations, audience needs, and technological affordances. AI introduces a new layer of constraint—one that shapes what is suggested, what is normalized, and what becomes cognitively effortless.

The critical question here is not whether practitioners “have autonomy,” but whether they understand how tool infrastructures structure their available rhetorical moves. In UX design, for example, template-driven AI systems may preconfigure interaction patterns, persona archetypes, or accessibility defaults in ways that feel neutral while silently narrowing design imagination. Autonomy is a matter of recognizing how tools shape choices, and developing the capacity to intervene rather than merely comply.

And third, as we have been told, automation does not eliminate labor. It redistributes it. For writers, automation often shifts work away from drafting and formatting toward oversight, editing, alignment, validation, and risk management. What appears as efficiency may actually increase cognitive load and accountability pressure. More importantly, automation is not itself a rhetorical purpose. Speed is not a value-neutral good. Optimization is not synonymous with effectiveness.

When automation becomes the goal rather than the means, communication risks being reduced to throughput rather than sense-making. What I fear in UX work, for instance, is that user research becomes data extraction rather than interpretive inquiry and advocacy. Design becomes production rather than deliberation. Documentation becomes compliance rather than a site of reasoning.

Ongoing Negotiations

We’re trained to care about what things look like. We grade based on what we can see on the page. We assess products because they’re tangible, while purposes are abstract.

But AI is forcing us to get comfortable with that abstraction. Because if we keep focusing on whether the product looks good, I believe we’re going to keep getting fooled. By AI, sure. But also by students who’ve learned to mimic academic discourse without understanding it. By polished presentations that say nothing. By well-formatted documents that don’t accomplish learning outcomes. We need to shift our attention from products to purposes.

So where does this leave us? Right in the middle of ongoing negotiation. About purposes. About tools. About what constitutes evidence of rhetorical work.

And I believe that’s exactly where we should be.

Because rhetorical purposes are contextual. They shift based on students, courses, disciplines, moments in a semester. This requires judgment. The kind of phronesis that can’t be automated or reduced to a template. It requires us to be honest about our values, and to keep asking: what are we actually trying to do here? We should teach our students to ask these questions too. Not by giving them rules about when AI is “allowed,” but by helping them develop the awareness to make purposeful choices about their tools, their processes, their work.

Because in the end, I believe that’s the purpose that matters most.

Banner Illustration by Rudra K on Unsplash

What do you think? Share your thoughts here!