
Demystifying U-M’s AI Offerings: Part 1

The University of Michigan recently announced in-house access to and support for generative AI across all three campuses – specifically, access to large language model (LLM) chatbots. Three tiers of access were announced: U-M GPT, U-M Maizey, and U-M GPT Toolkit.

Right now everyone has access to U-M GPT; those with a budgetary short code have access to U-M Maizey (which is free until Sept. 30); and U-M GPT Toolkit is something you have to request access to.

Most people became aware of LLMs with the release of ChatGPT last year. That release was the first time I actually started using LLMs, but I’ve been following news about generative AI broadly for a few years now. I’ve been critical of the tech for various reasons on my personal blog and on social media. It raises all kinds of societal impacts – concerns about labor, copyright, misinformation, and climate, to name just a few. But I also grapple with how to be critical of something I don’t understand, so I spend a lot of time reading about, and yes, even using, these technologies.

I talk to a lot of people on campus about these technologies: I sit on the Dearborn GenAI Taskforce, I’m currently co-teaching a first-year seminar with a focus on AI, and I’m involved with planning our DigPed at Dearborn conversations with our Scholar in Residence, Bonni Stachowiak, which are focused on GenAI. I think there is still some confusion and mystery around the U-M offerings, so I wanted to write about them from an end-user perspective, using story and narrative to help demystify them for our campus community. I also think this could be of interest to our public audience because (I believe) we are the first university to offer LLM access like this.

It is kind of a lot, so I’m breaking this up into at least two parts. This first post will focus on U-M GPT, and the next will focus on U-M Maizey.

U-M GPT

The first tier of U-M’s AI offerings is U-M GPT. It has been getting the most attention because it is the easiest for those in the U-M community to access – you just log in with your university credentials. 

U-M GPT is basically like ChatGPT in that you just start typing in a box to send your prompts and get responses. You can also start different “chats,” the same as you would with ChatGPT: each one is a separate conversation in which the bot maintains the context of that conversation. I usually call these “chat threads,” but I’ve heard others call them “conversations”; U-M GPT just has a button at the top that says + New Chat.

The biggest bit of demystifying I’ve had to do around U-M GPT has to do with the name. It is not a GPT model in and of itself; the name is just marketing, as far as I can tell. U-M GPT is not one model – it is a portal to multiple models, and they are not all GPT models. A little drop-down menu lets you choose which model you want to work with: right now you can choose GPT-3.5, GPT-4, or LLaMA2, and according to the GenAI Committee Report, adding more is envisioned. This is similar to ChatGPT Plus, the paid version of ChatGPT, which opens up access to GPT-4 – one of the nice things about U-M GPT is free access to GPT-4. Once you choose a model you are stuck with it for that chat thread, but you can just start a new chat thread to choose a different model.

What is the difference between the models? I can’t get into that too much in this post for fear of it becoming too technical, but if you are curious you can search it with your favorite search engine. Mostly the differences come down to how fast, accurate, creative, and contextual each model is or can be. GPT-4 is famously trained on a larger data set and is more capable than GPT-3.5. I’ve found LLaMA2 to be friendlier (sometimes a little over the top), and it uses a lot of emojis. If you are part of the U-M community, you can also try the different models yourself with the same or similar questions and compare the outputs.

Right now the more advanced features that come with something like ChatGPT Plus, such as Plugins or Advanced Data Analysis (formerly Code Interpreter), are not available in U-M GPT. There is also no way to turn off chat history in U-M GPT.

Impacts

It is one thing to teach/learn/work in a ChatGPT world, but it is another to teach/learn/work on a U-M GPT campus. And just like the larger conversation about the impacts of these tools, I think we are still figuring a lot of this out.

One of the benefits of having this tech in-house is that it comes with certain privacy protections. All of the U-M AI offerings are approved for use with moderately sensitive data. But issues of privacy don’t go away with bringing the tools in-house – the responsibility just shifts to the university.

I continue to talk to my students about these tools. We even did a social annotation of OpenAI’s privacy policy and compared it to U-M ITS AI’s Privacy Notice. I found that many of my students felt good about the fact that their data would remain with the university, but others were skeptical. Would their prompts be used against them at some point – if they were accused of cheating, for instance?

And what does having a tool like this available on campus mean for teaching and learning? “Cheating” is a big part of the broader conversation, and it is one thing to “cheat” by not following assignment rules but another to “cheat” yourself out of a learning experience. How and when do we use (and/or reject) these tools to increase and support learning rather than just generate text – not to mention text that could be inaccurate or biased?

I don’t have proven answers to that last question. There are a lot of ideas about it circulating among instructional designer types, but the truth is that all of this is just too new to know for sure, and almost all of it is experimental.

And there are a ton of other questions, like what kind of impact these technologies could have on the workforce at U-M, or what the carbon footprint of enabling access at this level is. These are all microcosms of the larger questions posed by having this tech out in the world at all. Again, I don’t have answers, but I’m reading, thinking, writing, and talking about these matters to try to keep up with it all.

Stay tuned for the next part in my exploration of these tools, where I will take a dive into U-M Maizey, an LLM offering described as a way to “create an experience relative to you based on your data.”

Image by Gerd Altmann from Pixabay