
Conversation as Care: Why Talking to Students About AI is Our Most Essential Task Right Now

A model for talking to students about genAI from asynchronous English Literature courses

For my second blog post of my Hub Affiliate series “Responding to Generative AI with an Ethics of Care,” I want to talk about what happened when I asked my students directly about generative AI. In the winter and summer of 2024, I developed discussion prompts for talking to students about generative AI by turning my genAI policy (which was the subject of my first blog post) into a discussion activity in my online, asynchronous literature courses*. The first time I ran this discussion was in W24. First, I asked students to annotate both my genAI policy and the Teaching in Higher Ed Podcast episode “How to Know Our Audience in an AI World” (featuring Jennifer Coon, the director of the Mitchell Business Communication lab in our College of Business) using the peer annotation software Perusall. This activity gave me my first glimpse into what students are thinking about generative AI as they engaged with the material and with one another. I chose this specific episode of the Teaching in Higher Ed Podcast because of its focus on writing and audience, but also for an exchange between the host and Jennifer about the cognitive experience of struggling with a blank page, an experience that professors value, even revere, but that may not be the most relevant aspect of the writing process for future generations. Doing the peer annotation, and reading and listening to content about genAI and writing, gave students the opportunity to do some active thinking about these topics before being asked to offer their thoughts on them.

After completing the annotation exercise, students were directed to a discussion board where they were reminded of my policy and where I reiterated my values in relation to AI, which can be summarized as “I care; I’m open-minded; and I trust you.” Then I asked students some general reflection questions about genAI and writing: “What do you think of AI? How do you feel about it? How are you using it? What uses do you think of as cheating and what uses do you not think of as cheating? What ethical concerns do you see with this new technology?”

The responses to this first discussion board were enlightening, as I will discuss in a moment, but too many of them left me with the sense that students were saying what they thought I wanted to hear rather than what they really thought. So, several weeks later, I raised the question again. This time, I incorporated ideas that I developed in response to the Frances Willson Thompson Critical Issues Conference: Generative AI in Education at U of M-Flint, where a panel of student presenters was asked how they would feel if their professors used AI to develop their lesson plans. One student spoke passionately about how you can tell whether a professor is writing their own lessons or using AI based on how much heart they put into them, in other words, based on how much they care about their students and about the subjects they teach.

We can certainly debate whether or not AI-generated content can have heart, or how much we can use AI before it becomes unethical, or before our students can tell, but the takeaway for me was the student’s profound association between authenticity and heart. Therefore, the second time I raised the question of AI with my students, I asked a different question: “How do you feel about writing more with AI, when you think about it in relation specifically to your heart and your authenticity? Do you worry about AI taking away your authentic voice? About making your writing less heartfelt? About making your writing less you? Do you worry about these changes to the way your professors teach? In relation to things you read and art you view?” This prompt was also accompanied by a reading from The New York Times, “When Your Technical Skills Are Eclipsed, Your Humanity Will Matter More Than Ever” (gift link with full access for 30 days), written by Aneesh Raman, a workforce expert at LinkedIn, and Maria Flynn, the president of Jobs for the Future. I repeated this same discussion activity in my Summer 1 24 course.

As I mentioned, student responses were enlightening. They shared complex and vulnerable insights about genAI specifically, but also about education more generally. One takeaway that faculty might find surprising is that my students are not universally using AI, nor are they universally excited about it. In fact, in their responses, my students indicated that they are highly ambivalent about this new technology. The reasons for their ambivalence vary: some are afraid that using genAI will be considered cheating, some feel it will make their work inauthentic and generic, some don’t really know how to use it, and some fear inaccurate results. The point is, our students’ opinions on and experience with genAI vary.

What faculty may also find surprising when they start to talk to students about generative AI is how it brings out students’ feelings about education. Students say that they turn to genAI when assignments are confusing or when they see them as busy work. Some students expressed that their heart wasn’t in their schoolwork anyway, so they were more okay with using AI in that context than with, say, using AI to create literature or art.

Why we should talk to our students about AI right now

Talking to students about AI is, to my mind, the most important thing we can do, and I argue that we must do so now, even as AI tools, impacts, and ethics are still developing and many questions remain. Having these conversations with students while we are still learning about what genAI will be and do can help ease our students’ and our own anxieties and help make us all a part of this future rather than observers of it. In fact, I would argue that we have an obligation to have these conversations. This blog post comes as we are entering the Fall 24 semester. Based on the experience I described in the previous section, I would argue that we should talk to students about AI early in the semester rather than later. In my case, for example, I started this conversation with the syllabus annotation exercise, so on the first day of class.

What I have learned by talking to my own students about AI, by organizing my Hub Affiliateship experience around this topic, by serving on the CASL AI Taskforce, and from the genAI questions in the annual student experience survey conducted by the Office of Institutional Effectiveness* is that students are seeking guidance about AI more than anything else. We assume that Gen Z and Gen Alpha students are automatically able to parse any new technology that comes their way. But that is not true. Much of the exposure people, including young people, have had to technology is as consumers, often in the form of social media, not necessarily as producers of technology or through engagement with specialized products. In their essay “Digitally Native, Yet Technologically Illiterate: Methods to Prepare Business Students to Create Versus Consume” in the Journal of Applied Business and Economics, Daniel Pfaltzgraf and Gary S. Insch explore this contradiction in depth. Furthermore, there is a well-documented “digital divide” in these generations, where students with less access to technology do not have the same experience with these products as those with more access, access that generally falls along class and racial lines. This divide, first articulated by former Assistant Secretary of Commerce for Communications and Information Larry Irving, is even more insidious in the context of “digital redlining,” a concept that describes this divide as the result of the pervasive and systemic underdevelopment of technologies, like broadband internet, in low-income communities. The Hub’s Autumm Caines (my Hub Instructional Design Partner throughout my Affiliateship) and Jessica Riviere discuss these complications in this article from the Reporter, “The dangers of assuming students are great with technology,” and Caines has contributed to the Wikipedia article on digital redlining, if you want to learn more.

And even if it were true that our students have an inherent facility with technology, genAI is new to everyone, even to so-called “digital natives.” While it is too early to say what this technology’s long-term impacts will be, and to say which elements of genAI are overhyped and which will change the world, it is undeniable that this technology really is different from much of what has come before. The conversations about what that means for art, ethics, tech, business, the environment, and for our very being are happening. Our students will be as confused, curious, scared, excited, and so on, as we are. The classroom, in every field, is a place where we strive to consider difficult, even unsolvable, questions and have big conversations. And this is a big conversation happening around us right now. Our classrooms have a role to play, and talking to students is a way to play that role. (Dave Cormier’s Learning in a Time of Abundance is an excellent recent book about the intersection of technology, an abundance of information, uncertainty, and pedagogy.)

One final argument I would make for talking to your students about genAI is that it is something you can do right now. Don’t feel confident about your prompt engineering skills? Not ready to develop a writing assignment that incorporates AI strategies because you’re not yet writing with these strategies yourself? Too unsure about privacy concerns to incorporate genAI into your classroom? These are all valid concerns, and teaching with, and to a certain extent about, AI is not for everyone. Maybe it should not be for everyone. The Hub’s Autumm Caines wrote this informative blog post about the value of *not* teaching with AI, but teaching about AI. For those of us at the University of Michigan, UM-GPT alleviates some concerns about student privacy and data mining when bringing genAI into the classroom, but still, incorporating genAI is not the answer for everyone. But we can all talk to students about AI, and all of our disciplines have a framework to approach the big questions AI opens up.

What I have learned (so far) from having these conversations is that students are hungry for discussion and guidance from their professors. I also learned, unexpectedly, that they possess a level of ambivalence and even anxiety, stress, and fear about their educational experiences, which leads me to wonder how we can alleviate some of those feelings and whether doing so might dissuade students from an overreliance on AI. Certainly, individual faculty can’t undo the 18+ years of educational experiences our students have had before us, but we can create a different atmosphere in our own classrooms. In my next blog post in this “Responding to Generative AI with an Ethics of Care” series, I will talk about how the emergence of genAI has encouraged me to incorporate ungrading into my courses, a practice I have been interested in for years but was hesitant about until recently. In the meantime, talk to your students about their experiences with AI and see what you are able to learn.

~~~~

  • *The AI policy I am discussing in this paragraph is the policy I developed for W24; by the time I taught my Summer 1 24 course, my policy had changed. I have included both in this document so you can see the evolution. Students annotated this policy as part of a peer annotation of the course syllabus, and in their annotations they were grateful for a clear policy and for an alternative tool, UM-GPT, to protect their privacy and their work.
  • *This link is not directly to the student experience survey, but to a slide deck from a presentation by Autumm Caines, Instructional Designer at the Hub for Teaching and Learning Resources; Christopher Casey, Director of Digital Education; Pamela Todoroff, Lecturer IV in Professional Writing in CASL; and Stein Brunvand, Associate Dean, Director of Graduate Programs, and Professor of Learning Technologies in CHHS, given at the Frances Willson Thompson Critical Issues Conference: Generative AI in Education at U of M-Flint. The slide deck gives an overview of the state of generative AI at the University of Michigan-Dearborn and provides guidance for talking to students about generative AI and incorporating generative AI activities into the classroom, including the advice “Talk to them OFTEN. Talk to them ALL the time.”
  • Featured Image via https://www.pexels.com/