
AI Use in Classroom: Transparent and Ethical Practices

This is a guest post by Dr. Shalini Jayaprakash

Introduction

As a part-time lecturer teaching mostly online courses, I have always feared changes in academia. On one hand, society insists that change is necessary for improvement and a step forward; on the other, nothing prepares me for the anxiety that comes along with some changes. It is mostly a lone battle. By being in academia, I have automatically chosen a creative field that can shape lives, so embracing change becomes a necessity. The recent AI revolution has reemphasized in me the need to take risks, fail, learn, unlearn, and relearn if I want to lead this creative life. Technological innovations have been elusive for me only because embracing them would require a lot of effort on my part. But participating in the Hub’s 7-Week Faculty Enrichment Program on GenAI has allowed me to imagine how I can, or cannot, use such new tools in my teaching and learning. The idea that resonated with me is the question of how GenAI can affect trust in teacher-student relationships and how faculty can help students navigate trust-building with teachers in an AI-mediated assessment landscape. The change within me was to learn to use this new technology to improve my teaching, research, and reflection, and to be transparent about it. Of course, I am still a skeptic learning how I can use AI to add value to my teaching and benefit my students. In this post, I will discuss why it is crucial to establish trust and transparency in the use of AI in teaching and learning.

GenAI continues to enchant and puzzle us, especially in academia, and has intensified the need for conversations about its impact on educational standards. Generative AI can both inhibit and enhance learning in educational settings by providing knowledge and clarifying misunderstandings, but the dangers of overreliance on this technology cannot be overlooked. Academic research shows that GenAI cannot and will not replace human brains, and that it can harm learning. In spite of that, AI is already being used in teaching environments to support effective learning and cultivate higher-order skills. Equally important is helping students use it ethically and responsibly. For that to happen, there is a need for credibility, accountability, and increased transparency when using AI in the classroom, and this must come from both instructors and students. Of course, every faculty member has the freedom to make decisions about their courses, including decisions about AI use, based on their expertise and course goals. Faculty members incorporating AI into teaching and curriculum development must share ethical considerations, transparency strategies, and tips for balancing AI use without becoming overreliant on it.

Building Trust and Transparency

If faculty members decide that GenAI use is acceptable in their course, they must talk to students openly about the tolerable level of assistance from GenAI and warn against reckless use of it. A syllabus statement helps ensure that expectations for appropriate interaction with generative tools are clear to students. It is important to communicate the extent to which generative AI may be used and to define specific expectations and guidelines for how students should use these tools in the class. Give students opportunities to ask questions and receive feedback (for instance, if their use of AI is excessive, improper, or unethical) on the first few assignments involving GenAI. One-on-one face-to-face or virtual meetings can reassure students about faculty expectations. Remind them to properly cite GenAI content, and convey whether grading will change for AI-assisted work. Insist on documentation. Remind students to verify AI citations by reading the original sources themselves and to assume responsibility for the integrity, originality, and validity of their own content. Specify whether they should share the AI-generated content, a single chat response, or the whole conversation thread as part of their submission. Add to the syllabus whether using GenAI to write assignments might violate the university’s academic integrity policies. Go the extra mile to provide example assignments that show correct use of GenAI and its citation styles. State the consequences of breaching this transparency bond. Also state clearly on the syllabus the circumstances in which GenAI cannot be used, such as for taking quizzes, writing final papers, completing take-home exams, or participating in discussions. Finally, remind students to ask follow-up questions and dive deeper into the subject even when GenAI provides them answers. The outputs generated will not always meet the expectations and goals of the course, so encourage students to dig further beyond chatbots.

Conveying ethical considerations to students

I have come to see that shifting from AI detection to AI-driven critical thinking could be a useful path forward. As a step toward this, instructors must convey to their students the ethical considerations surrounding the use of GenAI models in academic work. It is also important to have open conversations with students about AI integration and use, or else we risk a breach of trust. At the beginning of the course, communicate this on the syllabus and clarify GenAI’s predisposition toward biased outputs. Instructors must explain how GenAI models are built on data that carries explicit and implicit biases, which shape the models’ built-in perspectives. More importantly, warn students to watch out for these preconceptions and for the lack of diversity in the data, which can lead to dangerous and harmful outcomes. Ask students to verify the credibility of GenAI-generated outputs through independent research and primary sources. Specify how societal inequalities are baked into the data and how these systems can favor students from specific backgrounds. Also, caution students against relying too heavily on GenAI tools, because that can hinder their critical thinking skills. It is important for instructors to convey that the primary goal of the course is for students to engage deeply with what they learn and to enhance their human decision-making abilities. Along with this, provide feedback on their AI-assisted work by highlighting areas where students can further develop their own understanding beyond AI-generated content. This can give students greater insight into the subject matter. Students may also take away the fact that, ultimately, it is the educator’s intellectual labor and pedagogical expertise that shape and refine AI-generated content.

GenAI use and promoting transparency

The elephant in the room must be addressed. GenAI literacy can be integrated into our courses with enough emphasis on both its capabilities and its limitations. Transparency can help model appropriate use of GenAI. It is not enough to hold students accountable for academic integrity without conveying a faculty member’s own ethical compass and GenAI use practices; otherwise, we can end up with an education culture marked by constant distrust and surveillance of students. Self-acknowledgement of our own GenAI use is a sure step toward transparency. Educator disclosure ensures students are aware of our engagement with this growing technology and of its informed, sensible use. Faculty must acknowledge and disclose to students if we have used AI for the course: specify whether we used it to convert learning objectives into lesson plans, to plan lectures, to create new assignments and assessments, to generate images to support lessons for an online class, or for any other instructional or support purpose. Share with students that we are learning the nuances of AI by being vulnerable, making mistakes, and reevaluating prompt patterns to tap into the powerful capabilities of GenAI. These are ways to convey that we are also learning along the way, and they help build a meaningful, trustworthy relationship with students.

At the same time, ask students to include a declaration at the end of their assignments to provide greater clarity and transparency about their AI use. Include a few example templates in the syllabus to help students use GenAI appropriately and prevent academic misconduct. Ask them to document their experience of using GenAI and share it at the end of the assignment. Encourage them to specify whether they used GenAI for brainstorming, outlining, summarizing, searching for research questions, editing, revising, grammar checking, or other specific purposes. Ask them to explain clearly how they arrived at their conclusions for the assignments we give them. Taking such steps will nurture an educational space that is judgment-free and ethical. Doing this together can place immense value on education and on how we want to learn and teach in this GenAI environment.

Conclusion

GenAI offers transformative pedagogical opportunities while simultaneously posing ethical and academic challenges. By adopting transparent practices on the part of both teachers and students, much of the anxiety around AI can be channeled into better learning and teaching. Instructors must seek to create assignments that ask students to analyze, synthesize, and think critically rather than solely generate content. It is important to remind students to use their authentic voice in their writing and assignments so that their creativity is not stifled. Along with that, reiterate that GenAI interactions are driven by data and lack empathy, compassion, and human understanding. The essence of higher education is to guide a generation back to the fundamentals of humanity. It was Aristotle who said, “Educating the mind without educating the heart is no education at all.” As faculty members, our overarching aim should always be to educate a generation of people with compassionate, thoughtful, and ingenious intelligence. Bottom line: we should be able to craft AI policies for students that enhance academic work while attentively bearing in mind its ethical consequences. Instructors should provide such fertile classroom spaces for students.

~~~

Dr. Shalini Jayaprakash is an LEO Lecturer II in the Department of Women’s and Gender Studies at the University of Michigan Dearborn.

This blog post was created as part of the GenAI 7-Week Faculty Development Program from the Hub. A huge shout out to everyone at the Hub for facilitating this amazing program. Special thanks to my cohort leaders: Autumm Caines, Belen Garcia de Hurtado, Carla Vecchiola, Chen Wang, and Jessica Riviere. You guys were amazing!

Image by Gerd Altmann from Pixabay