Focus Area: Artificial Intelligence

The Future of Higher Education in a World with Generative AI

The College of Innovation and Design (CI+D) is supporting the university in meeting the opportunities and challenges that rapidly emerging Artificial Intelligence (AI) presents for our students and our community. In partnership with AI-focused leaders across campus, CI+D is helping establish Boise State University as a leader in this area.

Latest Updates

Generative AI Committees

In recognition of the complex and diverse ways that AI will ultimately impact our campus, the University has recently formed working teams focused on education, students, scholarship, and operations.

Faculty Workshops

Campus experts have developed a variety of AI-focused workshops. The Center for Teaching and Learning, eCampus, and the College of Innovation and Design have created opportunities for individuals and departments to explore AI’s significance for the future of higher education.

Focused Conversations and Events

CI+D regularly sponsors AI-related events, and our faculty and staff often give public presentations. A few of the latest include:

The First of Many Discussions to Come

AI, Boise State, and the Future of Higher Education

Closed captions are available and a transcript is provided on this page.

MashUp: Little Lectures for an Odd World

An interactive speaker series sponsored by CI+D and the College of Arts and Sciences. Two speakers give short, engaging talks on disparate topics, and then audience members are invited to ask questions connecting the two. This season, we are highlighting the role generative AI plays in our digital future, and have featured speakers presenting on everything from shadowbanning on Instagram to making music using AI. Make sure to follow our MashUp page for future events!

RSVP to MashUp Events

Get Involved

We welcome faculty, researchers, and community members to join the dialogue, participate in our committees, and attend our events. Let’s collaborate to position Boise State University to empower our students, faculty, and staff to thrive in an AI-infused world. Contact us if you are interested in having us speak at one of your events.

Let’s Partner: Contact Us

Video Transcript

[Jen Schneider] Welcome to today’s presidential address and faculty panel on AI, Boise State, and the future of higher education. My name is Jen Schneider. I’m the Associate Dean of the College of Innovation and Design. Before I introduce President Tromp, I want to call your attention to the important work that has led us to today’s event. First, I wanna thank the inaugural co-chairs of Boise State’s AI Task Force, Ti Macklin, Leif Nelson, Dan Sanford, Amy Vecchione, and Sarah Wilson. The task force was formed early last spring because these forward-looking colleagues were paying close attention to developments in generative AI and they wanted to make sure that our campus would be ready for the big changes to come. They’ve met with AI task force members, with our leadership, and with many of you over the last few months to ensure that Boise State is responding to generative AI developments in thoughtful, ethical, and meaningful ways. They’ve also helped stand up training for all of you in a very short amount of time. So if you are one of these co-chairs or a current co-chair or a member of the task force, would you just wave so we can thank you for your service to the university.

(audience applauding)

[Jen] Thank you so much. Second, I wanna thank members of the President’s Office staff, College of Innovation and Design staff, University Television Productions staff, and Student Union staff who’ve made today’s event possible. And thank you to all of you for being here today, especially at such a busy time at the beginning of the semester. It is my pleasure to introduce to you President Marlene Tromp. Dr. Tromp became the seventh president of Boise State University on July 1st, 2019, the before times.

(laughter)

[Jen] She has worked in partnership with our top-tier faculty to increase the academic excellence of the university. She has broken a student graduation record, a research funding record, and a philanthropy record since her arrival. She has increased access and enrollment for Idaho students and provided an affordable education for students from elsewhere to fuel the booming Idaho economy. With our academic leadership, she has created pathbreaking partnerships with industry and nonprofits to advance students in the state. And in case you hadn’t noticed, she’s also been a very active leader in the tech sector. She brought Boise State University into the UPWARDS partnership at the G7 Summit and oversaw the launch of the Microelectronics Education and Research, or MER, Institute to advance efforts to prepare a broad array of students for work in the semiconductor industry. She brought Boise State onto the Council on Competitiveness, a federal-level body designed to enhance U.S. productivity and prosperity for all Americans. She formed strategic partnerships with industry, higher education, and government by launching the Institute for Pervasive Cybersecurity. She supported enormous growth in the health sciences in every sector, graduated more students in education, and increased interdisciplinary thought leadership through cutting-edge programs like the School of the Environment and the School of Computing. She also had the privilege of opening an award-winning fine arts facility and a world-class engineering facility focused on materials science and pathbreaking qDNA research for quantum computing. Dr. Tromp remains a dedicated scholar and is the author of several books. I don’t know when she has the time, but she is still writing them, along with many articles on Victorian literature and culture and its relationship to our current cultural moment. She has been thinking about and reading about AI a lot, as you’re about to hear, and she’s reflecting on what this moment means for all of us. I’m thrilled that we get to hear her perspective on this pivotal social moment. Please join me in welcoming Dr. Tromp.

(audience applauding)

[Dr. Marlene Tromp] Thank you so much. Good afternoon, everybody. I’m so happy to see you all here today. And what I was struck by as Jen was talking, thank you so much, Jen, what I was struck by is that those aren’t things I did. They’re things we did, and I just have the privilege of getting to be here, to be a part of that with you. And this is another one of those moments where there is an enormous challenge before us and we have an opportunity to provide not just leadership for our community, but to provide national leadership, and coming together is an opportunity for us to do that. I wanna share with you that when we went through the pandemic, which was an enormously challenging time, we heard from parents all over the country who had children in multiple institutions who said that the transition at Boise State and the faculty at Boise State were better than any of their other children experienced. And that was because there was a lot of thought and work that went into preparing for that response. So we’re at a moment again where we’re facing what to some people feels like an existential threat. I’m a humanist, I’m a Victorianist, I teach English, I teach women’s studies. And in this moment, one could perceive this as an assault on the things that I teach, but I would invite us, as a community, to think about it not just as a moment of challenge but as a moment of opportunity. And I think we can, as an institution, in part because of our unique character, the ways in which we have always, always sought to figure out how to take challenges and build incredible outcomes for our students and for our community out of them. I think this is a moment in which we can take the challenge that is before us and ask questions about how we can make things better for our students, better for our faculty, better for our staff, better for the university, better for the state that we serve, and be a part of the thought leadership on that. And you coming together is an opportunity for us to do that. Before we came to do this event today, I asked the group, thank you to Shawn Benner and Jen for their work in CI+D and thank you to the AI task force for all the work they’ve done, I said to them: what are the key things that you want me to think about as I come before our faculty, and in what ways do we need to come together? Because AI could change the very nature of knowledge. We have to reckon with it. At first, and some of you’ll probably remember this, there was a spate of articles that came out in “The Chronicle” about how to stop people from using AI in your classroom. I think the train was moving so fast that it didn’t take long for people to figure out that really wasn’t gonna be an option. When I was at the Council on Competitiveness this summer, we had a special session where we spent hours listening to Senate subcommittees that were about AI, talking with folks about AI, and talking with leadership from industry and higher ed and labor, all about this question of how AI changes things. And the metaphor I used was: it’s like fire, and fire can be a powerful tool or it can burn your village down. And so we have to think about how we grab it as a tool. And I recognize that it’s not like we’re gonna have this conversation and come to final conclusions, because it’s changing so rapidly right now, there’s no way for us to know where exactly it’s going and what exactly things are gonna look like, how it will evolve.
But it’s important for us to really begin asking the questions together so that we are prepared as a community. And my primary concern, and the reason that the provost and I determined that the AI task force needed to kick up right away last year, was we wanted to support our faculty and staff who are engaging with our students, and to really begin to think together with them and to help prepare people. And I’m very grateful for the work that everyone did to get that course up and running so that we could make that available to folks. But there’s so much more to be done, and there are a few things we need to think about together. AI is fast and powerful. So there’s some data here that I think is fascinating. Generative AI can increase productivity by 40% in writing, and that’s based on an MIT study. Those productivity jumps are being demonstrated across a lot of sectors in society, and I think we have an obligation to ask what that means. I’m working on a new book project right now, and it’s a different kind of project than I’ve typically worked on. It’s a primary documents collection where a colleague and I are writing all these introductory pieces about the founding of detection, about the emergence of detectives, police, and detection. And we’ve spent the last eight months slogging through thousands of pages of primary documents and carefully crafting these intros. And we had a meeting this weekend, and I pulled up ChatGPT and plugged in a prompt for it from the research, just to see what it would say. And then I took my thesis, the one that I have spent the last eight months crafting and working through. And what I essentially asked it was: what would you say about the ways in which the concerns about policing in the early 19th century resemble the concerns about policing today? And it spit out this actually quite brilliant essay. Now, it wasn’t my thesis, and it didn’t get all the things right. We know that. It’s one of the things that we see now, but I was really astounded by it. And when I got on this phone call with my colleague, ’cause I’d played with it before but never with a serious scholarly question, I showed her these documents. Do you know she almost cried, and she said, “Imagine how much better we can all become at honing things.” It was a really interesting moment for me, and I don’t have all the answers yet, and we don’t have, as a community, all the answers about the ways it’s appropriate to use in research. I feel like I’m keeping myself at a little distance from it, especially as a humanist. So it’s not writing anything for me, but I am thinking about it and thinking about what it could mean, because one of the other things we know about AI is it’s ubiquitous. It’s everywhere. And I think it’s gonna be hard to imagine all its uses. It’s a little bit like the iPhone. When they first came out, my mom said, “What do I do with this?” My mom is 95. She doesn’t ask that question anymore. She does everything from Words with Friends, to talking on the phone, to searching the internet, to checking her lottery numbers, you know, all the things. She interacts with healthcare on her phone. But I don’t think when the iPhone came out, we could really imagine all the ways in which it would be used.
And one of the things I’ve said many times in forums where I’ve been talking to our community is that I’m struck profoundly by the ways in which our students are literally neurologically wired differently because they’ve had a computer in their hands from the time that they were children. And that changes the way you think. So it’s everywhere. We are gonna have to grapple with it. It’s also going to be largely undetectable. Now, I think a lot about the calculator metaphor. We can use a calculator to do math, but you really can’t do the math if you don’t understand how the problems need to work. It does some of the functional effort for you. And you could do that math yourself, but the calculator might be faster. But you can’t even put in the problem if you don’t know how it works. And I think the same is probably true with generative AI, but it’s gonna be very, very difficult for anyone to create a detector. You know, there was all this talk at first: don’t worry, people, we’ve got you covered, ’cause we’ll create detectors. Part of the problem we know now is that those detectors tended to flag students whose second language was English. We also see other kinds of problems with the ways that these tools are operating. But what we know is that we can’t meaningfully exclude AI. So we’ve got fire. How do we turn it into a tool and keep it from burning, not just our village down, but the people that we care about? How do we use it to help them develop and grow, instead of asking, you know, how do we stomp it out? That’s just not gonna happen. So we have to begin formulating responses to these complex questions. Social scientists and humanists, because of the way that they are trained, because of the ways that their minds are trained, might feel the most vulnerable. But who didn’t ask the question: where were the humanists and the social scientists when generative AI was first announced? Because we have to ask questions about ethics. We have to ask questions about cultural impacts. And those are gonna be the people who are gonna help us think through those questions, because that’s how they’re trained. So even though it feels like an existential threat in some ways, we need those voices as a part of this conversation, and they will be a critical part of the conversation going forward. We know that tech, as it evolves, will need humanists, will need social scientists. The idea that we should now start phasing that out of the academy, I think, is a mistaken one. But we need those folks in dialogue with the people who are developing these technologies and these tools. These are the questions that are at the forefront of the work that we’re doing. We know that the problems and the challenges are big enough that tech alone cannot answer them. It’s going to require an interdisciplinary effort from all of us. It’s going to require all of us being in dialogue, faculty from across these fields. And everybody has to be involved. Not just our computer scientists, but our composition and rhetoric faculty. Everybody’s gotta be involved, and that’s why coming together really matters. Responding to these questions in an interdisciplinary way is going to be key. And that’s a Boise State strength. It’s something that we’ve done. In fact, one of the things I’ve said to folks on our campus before is that one of our greatest strengths is we became a research institution relatively late in our life.
So we don’t have these ossified walls between fields of study that a lot of places have, and that gives us the opportunity to talk to each other in new ways. We know this tool has the power to do great good. We know this tool has the power to do harm. So we have to think together. We also know that our students will consider competency in AI as critical to their future. And it will be increasingly necessary for people to be able to navigate that landscape. And they’re gonna look to you. ChatGPT has had one of the fastest adoption rates in history: 1 million users in five days. Do you know that it took Instagram two and a half months to get to a million users? ChatGPT currently has a hundred million users only nine months after launch. The rate of the adoption of this technology has been so fast. Six months after its launch, 22% of individuals in the global business community were already using it. So how do we prepare our students to enter into that world? How do we prepare ourselves to navigate in that world? The other thing, and I’ve mentioned it once already, is that AI isn’t always accurate. Where it doesn’t have information or can’t glean good information, it will still represent something, but it might be wrong. When I hear that, what I think is: this is a profound moment to help people begin to understand the ways in which not everything they’re presented is true. There are vulnerabilities we’ve had culturally because people couldn’t tell the difference between what was factual and what was false. And if we can use this moment to help teach that skill, that’s powerful. So the fact that there is a problem with the technology in terms of its fact base doesn’t necessarily mean that’s a problem for us as pedagogues. It’s important to remember this, because it can amplify challenges that are already out there. Some of you may have heard that there was a study where they took medical records and used AI to write responses to patients. And what they found is that racial biases that were probably already present in the records were amplified. That’s a concern that we should have, and we should think about that. It has wrongly flagged Black defendants in court cases when it’s been utilized. MIT did a study on this. It falsely identified Black patients as healthier than white ones and so prescribed the wrong thing. So we know that where there are biases in the data set, it’s absolutely possible for AI to amplify them. One of the things that I actually find a little eerie is that MIT did a big study and found out that AI could identify a patient’s self-identified race from a CT scan. The researchers don’t know why. And what that suggests is there’s something there we don’t see. We don’t know. We know that race is much more cultural than it is biological, but what is happening that it can, with great accuracy, predict this? We also know it’s gonna be difficult to regulate when there’s all this data coming in and we don’t understand all the ways in which the output comes out. AI is taking in volumes of data that a human can’t possibly take in in that time. So one of the things that we need to think about is that its superhuman capacity could be more difficult to control or regulate. And that’s something for us to really consider.
In order to prevent it from harming people, we must be very conscientious about it, because it will magnify disparities, it will magnify problems in a data set, and we have to be keen at studying those things. Algorithms, however, can be taught to counter bias, and that might be a way in which we can utilize this tool going forward. But we’re gonna have to be a thoughtful community of researchers and pedagogues coming together to figure out the ways that we can do that. I’m gonna jump ahead to my next point ’cause I wanna leave most of our time today for the panel. So what I wanna share with you is just a couple of other key concepts, and then I want to make some final points. Boise State has the opportunity in this moment to become a leader in the way that it has been across its lifespan. The AI task force, along with the CTL, eCampus, ITS, and others, is offering short one- to three-hour faculty workshops on AI, and the university’s offering a $50 stipend. I know that’s not much, but what we wanted to do was say it mattered, and that it mattered to us that you felt like your time was valued in that. This moment is just the beginning of a much larger conversation that lies ahead. This moment is an opportunity for us to open a dialogue. Dean Bannister shared with me a piece by Bill Gates, and I’m gonna give you a couple of quotes from it. He said, “I think in the next five to 10 years, AI driven software will finally deliver on the promise of revolutionizing the way people teach and learn. It will know your interests, your learning style, so it can tailor content that will keep you engaged. It will measure your understanding. It will notice when you’re losing interest and immediately understand what kind of motivation you respond to and will give you immediate feedback.” I told you I did that experiment with those prompts. It was a fraction of a second before I got those essays back. He also said, “Even once technology is perfected, learning will still depend on great relationships between students and teachers. It will enhance, but never replace the work that students and teachers do together in the classroom.” This work is at the heart of everything we do as a university. And the opportunity for us to come together and explore these issues and questions, some of which are big, challenging questions, is an exciting one. It’s a meaningful chance for us to take up leadership on this critical front. I’m so grateful for your presence here today, and I will tell you that Dean Benner said to me that he had heard from a lot of people that there were other conversations like this they wanted to have right now, because there’s so much, so much that’s available for us to explore together as the world changes so quickly. And so if there are things that you would like to see the university engage with in the same way that we are attempting to begin and launch this conversation on AI, I would be so glad to hear from you. I wanna thank you for taking your time to be here today. I wanna thank you for engaging in these workshops, for those of you that already have. And I wanna thank the team of people who have worked so hard to help us, as a community, be prepared. Thank you all so much.

(audience applauding)

[Jen] Thank you so much, Dr. Tromp. In the College of Innovation and Design, we’re tracking what other universities are doing to respond to AI, and I believe that we are really at the sort of leading edge when it comes to our peer institutions and how we’re responding. And it’s because of the leadership of President Tromp and Provost Buckwalter. So thank you so much. I’m gonna go ahead and invite our faculty panelists to come to the stage so I can introduce them. Chairs are about to appear. Thank you so much. Let’s see, to my right is Steven Hyde. Steven is an assistant professor of management. His research explores the application of AI in teaching and building AI psychometrics. Amy Vecchione is the Assistant Director for Research and Innovation in the eCampus Center. Amy’s research evaluates the use of emerging technologies and evidence-based practices. Amy recently published a book on how to collaborate within higher ed to foster student success. I’m gonna move back to you, Sarah. I’m sorry I skipped over you. Sarah works in the Office of the Dean of Students as the Academic Integrity Program director, meaning she offers interventions to students after their instructors find them responsible for cheating on or plagiarizing their coursework, which prevents their learning. Before this role, she was a sixth- through 12th-grade English teacher for eight years, hero, and has been a tutor, graduate student, and instructor, and is now a staff member at Boise State. Thank you, Sarah. And last but not least is Brad Weidgle, my colleague in the College of Innovation and Design. Brad is the co-founder of a creative agency here in Boise called Against. They work with rebel brands willing to go against the grain and define themselves not just by profit, but by what they do for people and the planet. His background is in strategy and technology, and he also teaches here at Boise State as an assistant clinical professor in the Society for Ideas program. Please join me in welcoming our panelists.

(audience applauding)

[Jen] So I thought it would be helpful, given that we are now, you know, in the thick of the semester, to hear from our panelists about how they are using AI in their own work. We heard some really good examples from Dr. Tromp about how she’s using it in research, but Steven, your name comes up a lot as somebody who’s really invested in using AI in the classroom and in your research. I wonder if you could talk a little bit about that.

[Steven] Yeah, sure. I think AI is probably the most useful tool for knowledge creation since probably the internet. And I’ve really incorporated it into every aspect of my job: teaching, research. And just to give you an example of how I use it in class, right after ChatGPT came out, I adjusted every single one of my in-person activities and exercises so they’d be AI-enabled. So to give you an example, before, we would do, like, a case study together on Microsoft. Instead, what I’ve done now is I’ve taken the principles that I would want the students to learn and put them into a prompt that I give the students. And now the students can choose how that case is customized to whatever interests they have. They’re probably not interested in Microsoft, maybe they’re interested in Simplot or maybe they’re really interested in ballet, I don’t care. Now the AI can customize that case to their interests, so that the content of the course reflects what they wanna learn about while still getting at the main objective of the course. And so that’s how I’ve incorporated it into my class, and I’m generally just excited about how AI can be used to customize the content of our courses to the students’ interests, as well as how it can be such a useful tool for inclusion. If students have a language disability or any disability of any kind, it brings down the barriers to entry for any digital task and makes those barriers far lower. And so I’m really excited about how it can be used for education.

[Jen] What about you, Sarah? What about through your work dealing with academic integrity or as a co-chair of the AI Task Force? How are you using it?

[Sarah] Yeah, thank you, Jen. I would describe myself as much more of an end user, and a new end user, of generative AI. So Leslie Madsen and I, in February, developed a conversation guide for faculty members that they might use to speak with their students about potential concerns with their coursework. And we thought that it might be kind of fun and cheeky to try to involve ChatGPT in that development. So we originally developed that with ChatGPT 3.5, and that was one of my inaugural uses of it. And that was a really exciting way to try to bring it into my work. Then just yesterday, and I’ll speak in more detail about this later, I partnered with a couple of different units on campus to amplify our ability to use Excel formulas to capture data that we have from across campus units and combine it in a way that will allow us to share findings that are forthcoming. But that was really something that allowed me and a fellow English educator to advance our work in ways that we wouldn’t have been able to without an Excel expert with us previously. Thank you also for pointing out my work with students, Jen. So far, since January when this really became widely available, I’ve worked with 24 students on campus, which is quite a small number actually, whose professors found them responsible for replacing their learning in class with content copied directly out of ChatGPT. And this was problematic for their learning because it means they didn’t have contact with their course content. So their professors referred them to my office, and we were able to have some really developmental conversations about what the learning outcomes were for those classes, how those were disrupted by that use, what acceptable and unacceptable uses might be, and how to have that conversation with their faculty members. And I haven’t heard from them again, so I know they’re out there doing fabulously in their classes. And I think that that is a great example of, A, how this program can be utilized to support student learning and offer them feedback on their learning, and, B, a testament to our faculty’s commitment to their students’ success.

[Jen] Yeah, thanks, Sarah. What about you, Amy?

[Amy] So in my role at eCampus, I’ll be working with folks to help figure out ways to include the new AI tools in different assignments, and how to foster adoption in that way. But I wanna share two things I have found incredibly useful. So, in my work doing data analysis: if you have never used Python before, you may know that you have to do different things to make Excel and Python work together. Code Interpreter, which is available in the paid version of ChatGPT, allows you to upload a spreadsheet. Of course, you should only use a publicly available spreadsheet. So I used a data set that I had already made public, I uploaded it, and I asked it to do a few things that, in the past, would’ve taken me 16 to 20 hours to do. And not only did I get results, I also got sentiment analysis, and I got P values and data visualizations, and it took less than 30 seconds. So you can see there’s a pretty robust savings there. I would not recommend that anybody do that with your intellectual property or research that has not yet been published. So please keep that in mind. And then, I don’t know if anyone else knows anyone that struggles with different kinds of reading comprehension. In my family, there’s a lot of dyslexia. Natural Reader is an AI tool that I use almost daily, at this time, to screen read, but it translates it out of a robot voice and into a voice that I prefer. So that’s one that I would recommend that you all use, and it’s great for accessibility for that reason. Thanks, Jen.
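
For readers curious what a Code Interpreter session like the one Amy describes produces under the hood, here is a minimal sketch of the kind of script it tends to generate for such a request. It is illustrative only, not her actual analysis: the file name, column names, cohort labels, and the toy word-list sentiment scorer are all hypothetical stand-ins for whatever a real model and data set would use.

```python
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import ttest_ind

# Hypothetical public data set: free-text comments plus a cohort label.
df = pd.read_csv("public_survey_responses.csv")

# Toy stand-in for a real sentiment model: net count of positive vs.
# negative words, normalized by response length.
POSITIVE = {"helpful", "great", "clear", "useful", "excellent"}
NEGATIVE = {"confusing", "slow", "unhelpful", "poor", "frustrating"}

def sentiment(text: str) -> float:
    words = str(text).lower().split()
    if not words:
        return 0.0
    hits = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return hits / len(words)

df["sentiment"] = df["comment"].apply(sentiment)

# Compare sentiment between two cohorts and report a p-value.
online = df.loc[df["cohort"] == "online", "sentiment"]
in_person = df.loc[df["cohort"] == "in_person", "sentiment"]
t_stat, p_value = ttest_ind(online, in_person, equal_var=False)
print(f"Welch's t-test: t = {t_stat:.3f}, p = {p_value:.4f}")

# Quick visualization of the two distributions.
df.boxplot(column="sentiment", by="cohort")
plt.suptitle("")
plt.title("Sentiment by cohort")
plt.savefig("sentiment_by_cohort.png")
```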

[Jen] I love that recommendation. That’s so great. Brad?

[Brad] So my life is split between being here on campus and working with students, and then also working at a creative agency. And what’s interesting about creative agencies is they’re typically early adopters. They have to be. Clients pay us to be on the forefront of technology and trends. And so early adoption of technology is nothing new for us. The interesting thing about this technology, though, is what it means for our roles now and in the future. You know, adopting mobile and social trends and cloud computing and those kinds of things that agencies have done for the past 10 or 15 years, those were easy adoptions. They made your life more efficient. What’s interesting now, though, is that we’re using this technology to be more efficient while also coming to grips with the fact that it’s replacing some of our job duties. That great amounts of the work that we have done in the past are starting to change. The question of how AI is gonna change our job is a little bit like asking a delivery driver what autonomous driving is gonna do to their job. You know, I’ve got a really, really nice electric truck now and I’ve got some instruments to look at that I’m getting new training on. I still have to be in the truck and do this. The role is growing, but it’s also different. And so a lot of the conversation at the agency is not, how do we make ourselves feel better about the past and hold on to the jobs that we’ve held so true, but how do we progress into the future together? And, you know, change isn’t something that’s gripped and held onto, it’s something that you dance with, and we’re all just trying to find the beat right now.

[Jen] That’s really well put. Trying to find the beat. So these are examples of how we’re using AI and the power of AI, but I think a lot of people coming into this semester are looking forward with some concerns. They’re worried about learning generative AI tools themselves, and they’re worried about their students using them in ways that they maybe don’t understand or approve of. Dr. Tromp talked about the different ways in which we may learn now as a result of things like generative AI. What advice do you have for people who are sort of in the thick of the semester, they’re in their classrooms or they’re advising professors?

[Steven] I think the important thing is finding a way to incorporate it into your workflow now, right? The best way to find out where it doesn’t work is actually applying it, right? When you find out, oh, it messes up here, it doesn’t work in this way, it really demystifies a lot of the concerns. And so just incorporate it into your workflow now: using it to edit parts of papers, using it to help you edit, or even write, the rough draft of a student recommendation, things like that. Find ways to incorporate it into your workflow now and be adaptable. That, I think, is the best advice you can have. If you are too fearful of it and you don’t use it, then you won’t know how it can fail, when it won’t work, why it would break down, or how to even incorporate it into your classroom. I think a really helpful exercise is to take your final assignment and give it to ChatGPT and see how much of it it can do by itself. And if it can do all of it by itself, then you should probably adjust your class. You should probably adjust that assignment. But that’s what I would say: incorporate it into your workflow and test your assignments.
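
Steven’s stress test can also be scripted outside the chat interface. The sketch below is one minimal, hypothetical way to do that against a model API; the model name and the assignment prompt are placeholders for illustration, not anything the panel specifically recommends.

```python
# Minimal sketch of the "give your final assignment to the model" test.
# The model name and the assignment text are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY to be set in the environment

assignment = (
    "Write a 500-word case analysis of how a regional food company "
    "could use generative AI to customize customer onboarding."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": assignment}],
)

# Read the output and ask: how much of my assignment did it complete?
print(response.choices[0].message.content)
```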

[Sarah] I really appreciate, Steven, your insights about the learning outcomes for the class and how those are going to be assessed and what types of tasks are gonna be valuable for students to engage with. And I think something that I’ve noticed that is endlessly helpful is just having open conversations with your students. I speak with a lot of faculty and I speak with a lot of students, and all the time I hear, we had this conversation that set expectations or clarified something ahead of time, and those are the conversations that I’ve heard can support our students in the best possible way. So I think that’s really the foundation for this fall: conversations. And then, as Steven was sharing about ensuring that you’re using it or trying it, I found that trying it with topics that I’m very experienced with is an easy way for me to identify where the gaps might be and what it can do, ’cause I can tell that right away. Also with Excel the other day, we could tell whether the formulas it was suggesting were working, ’cause we added them to the Excel spreadsheet right away and could see whether they were totaling or producing an error. So that gave us immediate feedback outside of what we were using it for. So those are ways I’ve found really helpful so far to kind of clarify my own ways that I might apply it, because I’m very early in that phase as well.

[Jen] Amy, earlier you mentioned, when you’re doing your research, don’t upload sensitive information, or let’s say IRB-protected data sets, pulling that out of the air, into ChatGPT. What are some things that faculty need to be thinking about when they’re using these tools in their classes in terms of student privacy or intellectual property?

[Amy] I appreciate that. So the students technically own their data. They own their work, their work product. And this is something I think very deeply about: if you do upload something, you’re agreeing to some terms of use, one of which may be that you have the right to upload that and use it. There’s one tool called Perplexity, which Leif Nelson in LTS has pointed out to me, where if you upload something, you’re giving them a worldwide license to reuse it. And I think those are the kinds of things we could start educating our students about. I don’t see that so much as a risk, but an opportunity, because I think we should have always been learning about those risks of the data and the privacy implications. But now we have this great opportunity to bring our students into that conversation and help them learn what those boundaries might be.

[Jen] What about you, Brad? Do you have advice? I mean, your program Society for Ideas really focuses on giving students the tools they need to be competitive in the workforce. How are you all thinking about…

[Brad] Yeah.

[Jen] Generative AI?

[Brad] I think we could go down the rabbit hole of advice on tools to use and ways to change your curriculum and how to bring those things to students. But maybe I’ll just take the moment to remind everybody that we’re in the early stages of this. It’s okay to take a breath. I think all of us as educators try to have the answer and bring the answer into the classroom and share that answer with students. At this point, there aren’t a lot of answers. You know, we’re still inventing how this story is gonna shake out. And I think for all of us, taking a deep breath and realizing that we’re a part of the solution, and creating an environment with our students where we can discuss these things and debate these things, because the story’s still being written on AI and how it’s really gonna unfold.

[Jen] All of you have been thinking for some time about what AI means for higher education and for our institution. If you had an opportunity to make a recommendation for where the institution should invest, where we should be building out our skills, our people, our technology, what would you recommend? What would you like to see moving forward? Not that there are any administrators in the room right now who would be interested in your answers.

[Amy] I’m happy to jump in out of order if that’s okay.

[Sarah] Sure.

[Amy] Okay. So I think we need to think about our curriculum, and I would love to invest resources, if I had a magic wand, into asking, how is this gonna look in our students’ future careers, and apply that to the curriculum to give them the chance to practice it, practice using different tools, get adjusted to it, to help them be really competitive and really change the world around them. So that’s the biggest dream that I have. But also, our at-promise students, I think when they come in, they don’t always have the same level of skills as an expert communicator or an expert coder. And so if we let them have the chance to learn and catch up, they can actually come out on par with or ahead of students that do have those expert communication and coding skills. So why wouldn’t we give them that advantage? And so that’s where I would put the resources. I think about some of the risks we just talked about too: if we had a locally hosted large language model that had pre-trained data sets, if that’s something we could do, if I, again, wave a magic wand, we may be able to do different things that avoid some of those risks where we agree to terms that we don’t wanna agree to. So thanks for letting me hop in there.

[Steven] Well, I’m just gonna back that up. Having a local language model of our own would be very awesome. We’d be able to incorporate that not only in our instruction, but in grading, in service roles, all kinds of things that would not only increase our students’ autonomy and their ability to learn whatever concept we’re interested in learning, but also make it so that we are more effective as researchers and as instructors, and take away some of the rough-draft aspects of our work. Because that’s really what I see AI doing: it takes away the rough draft, it makes it so the rough draft can be done immediately. And so much of our time is spent doing that. And so if we have our own model, like, if I had my own wand, it would be that. We’d all have our own language model we could customize, train, and use for whatever we need. That would be awesome.

[Jen] So just to clarify, for people who are just starting to think about this, instead of relying on OpenAI’s large language model or Google’s large language model, we might have a sort of Boise State language model, a local language model that might help mitigate some of those concerns about privacy or about access. Does that capture it?

[Steven] Yeah. And it would even demystify it for our students, if they had an active role where they could train and customize and tailor their own language model as well. Like, that takes away a lot of the mystery and makes it easier.
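
As a concrete, if hypothetical, illustration of what “local” means here: open-weights models can already be run on university-owned hardware with a few lines of code, so prompts and student work never leave the machine. This sketch uses the Hugging Face transformers library with a small openly licensed model; the model choice is an assumption for illustration, not a campus decision.

```python
# Illustrative sketch of running an open-weights language model locally.
# The specific model is an arbitrary small example, not a Boise State
# selection; any local instruction-tuned model would work similarly.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # small enough for a laptop
)

prompt = (
    "Rewrite this management case study so it is about a ballet company "
    "instead of a software company, keeping the same learning objectives."
)

result = generator(prompt, max_new_tokens=200, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])
```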

[Sarah] Yeah, I think I’m really interested in this question for students and how we’re gonna make this a valuable experience for our students. And that data set that we created those formulas to run yesterday really gave me some insight into how we might do that. So yesterday, I met up with the writing center and we took all the data that they have for student appointments that were attended from January 2023 through yesterday, and from the learning assistant program, which is run out of the academic advising and support center and kind of has tendrils into many of our STEM classes on campus and offers tutoring there. We also took all those student attendance logs, and we compared those to the students that are coming through my program. And students that have incidents of academic misconduct are students that are looking to meet needs in a way that they haven’t discovered success with yet. So what we’re hoping to do is connect them with better time management strategies, for example, maybe a sense of mattering and belonging. And so when we looked at these two data sets side by side, what we found was really interesting, and I think it’s going to allow me to say that my magic wand would be to enhance our campus student-to-student tutoring services. What we found was that there was no overlap between the student groups: on one side, the students who either are the tutors in those spaces, who are in a professional learning community with faculty members or staff members that train them on a weekly basis and then deliver services to students, or the students that attend regularly; and on the other, the students that are struggling and trying to identify strategies that will work that don’t include cheating, since that does prevent their learning and is not gonna offer them success. Those students might be moving towards that space. And we also found just one student who had attended two sessions of the learning assistant program. That is not yet at the participation threshold as they define it, which is three. And then towards final exams time, when we see time management become challenging, right? Assessment in every class the same week, woo. That student had one incident. And so they’re in that exploratory, experimental, what’s-going-to-work-for-me phase, so they’re becoming successful. So I think what that highlighted to me is retaining and advancing all these opportunities for our student tutors and for our students to attend tutoring, perhaps building generative AI opportunities into that. Maybe that includes the large language model from our other panelists’ magic wand choices. And ensuring that those people are also learning about how to apply generative AI in their professional role on campus as well as their learning role on campus. And ultimately, in my view, this is gonna hugely increase our students’ experience of mattering and belonging, which is really the goal of the student affairs division where I come from. This can happen through clubs and different aspects of student affairs, but it can also happen in the academic space. And I think this is one way that we can really advocate for that: connecting our students with professional opportunities and offering that peer-to-peer experience to our learners in ways that teaching them about generative AI might amplify.

[Jen] Do you want the magic wand? I was gonna skip you.

[Brad] Don’t give it to me. You should have skipped me. It’s a hard question for me to answer, honestly. There are much smarter people than me to answer that question. I’m a teacher first. I’m not as great at the admin and research parts of my job. And so from a classroom perspective, where can we invest there? You know, there’s the side of the coin where it’s about curriculum and getting students the latest and greatest, getting them the tools that they’ll actually see in the workforce. And, you know, I think our minds quickly go to the fields of computer science and engineering, and those departments are gonna do a wonderful job. But I think a lot of that is gonna be table stakes for higher education, that those departments have to lead a lot of those efforts. What I’m personally interested in is the historians and the artists and the political scientists. Not the first thought that comes to mind when you are talking about the term artificial intelligence. Because to me, those are the departments that’ll help shape opinions about how we can all react to artificial intelligence. I’m interested in bringing things to students like, how do we deal with the copyright issues to come, where we’re using this technology to borrow, if you wanna use that word, these works and then manipulate them into something new that they have rights to? How do we continue to shrink the diversity gap that we have in engineering, to ensure that the biases that we already have aren’t proliferated through this technology at an even bigger scale? How do we hold companies accountable for using low-wage workers overseas to train our data models? There are so many challenges with the technology, and I think it’s gonna take some pretty unique departments and some pretty unique individuals that may not be the first ones you think of. And I think investing there would be incredible.

[Jen] Yeah, what a great answer. All right. We just have a minute or two left. I’d like each of you to weigh in: we heard Dr. Tromp use the metaphor of fire, AI is like fire. So much of the generative AI conversation in the media is either very apocalyptic or very tech-utopian. When you talk with people who aren’t thinking about AI a lot yet, what metaphors are you using? Other people have used the atomic bomb, you know, the sort of arrival of the horse on the plains. Are there metaphors or ways of talking about generative AI that feel more nuanced and appropriate to you?

[Steven] I mean, I use a lot of metaphors to explain how it works to my students. Like, I’ve used “Family Feud” as the example of how to explain how ChatGPT comes up with an answer. It’s any possible answer that a “Family Feud” contestant can give, not necessarily the right one.

[Brad] That’s good.

[Steven] But yeah. As far as fears about AI, I don’t necessarily use those. I guess, to me, it’s looking at the next iteration of the internet, similar to or as innovative as the printing press. When we saw those innovations happen, there’s always a little bit of disruption that occurs where we’re trying to figure out how to use the technology. And sometimes those disruptions are a little messy, right? Like the Salem witch trials happened after the, maybe I’m wrong on this, maybe it’s a story of who knows better, so I shouldn’t use that example. After the printing press came out, there was a lot of false, fake media that was printed, right? And then we had to figure out and relearn: just because it’s been printed doesn’t mean it’s real. And so similarly, I think that there are gonna be some bumps in the road as AI comes out, but ultimately, it expands voice in the same way the printing press did, in the same way the internet did. And that’s a net positive in our society. When voice is expanded, it’s a good thing.

[Jen] Other metaphors that are top of mind?

[Sarah] Ooh. I really love the fire metaphor because I think it really asks the question how. How are we gonna do this in a way that’s supportive of our students? And I think that’s the same question that we’re always asking at Boise State, and that we’re very wonderful at asking, and I think we’ll find so many different excellent answers to that question. But the metaphor that comes to mind is perhaps a Victorian home, which is heated by many different small fireplaces that might be slightly different in different areas. Maybe a callback to Dr. Tromp’s Victorian specialism. But I think what that shows is there are going to be different solutions in different areas of our campus, for different student-faculty relationships, and they can look many different ways. And those will all hopefully help our students uncover at this early phase, as well as us, what are the acceptable or helpful, productive uses, and what are the uses that are gonna undermine my learning, undermine my success, that I want to avoid so that I can be the most successful version of myself possible. And I do wanna say that the eCampus and the CTL workshops are gonna be great places for those of us who work here to explore that question for ourselves as well.

[Jen] Thanks, Sarah. Amy, do you have one that you use?

[Amy] I do. So I’m not a mathematician, and I’m not an ecologist. In the ’50s and the ’60s, we first started sending information packets, and this coincided with our understanding of mathematical models and network analysis, how we compute things with nodes. It was also around the first time we understood how clonal aspen worked: they weren’t just multiple trees, it was all one and the same tree, with a giant body underneath the ground. And I am so curious how we are going to change what we think about everything in our world after we start working with these tools. I don’t know. And I also don’t think it’s a foregone conclusion that there’ll be harmful disruptions. I think we get to decide, and we have the power over choosing what we do with this. So we get to choose what we’re gonna do to help each other with this tool.

[Jen] Yeah, I love that. What an empowering message. What about you, Brad?

[Brad] I don’t know if I can beat “Family Feud”. It’s pretty good. It’s pretty good. Maybe I’ll draw an analogy to just the rate of innovation rather than specifically to artificial intelligence. You know, think about it: say we’re in the year 2007, okay? And we just watched Steve Jobs unveil the iPhone, and I came on stage after that, and you all experienced that for the first time, and you were all blown away that you can pinch and zoom on your phone, right, as the first feature. And I told you that within 16 years’ time, you’re gonna be able to pull that device out and you can click a button and a stranger will come to your house in their car and you’ll get into that stranger’s car
(audience laughing)

[Brad] and you’ll be okay with it. You’ll actually prefer it. If I told you that the Harvard dropout that launched the Hot or Not website, he’s gonna be one of the richest people in the world and you’re gonna willingly tell him all of your personal data and he’s gonna turn around and sell it to businesses. If I were to tell you that, you know, the guy that’s selling books online right now, he’s eventually gonna become one of the richest people in the world, his whole job is to sell you more stuff, and you’re willingly gonna put cameras and microphones all over your house so that you can see if your dog’s doing okay when you leave the house. You know, in 2007, we could not have told that story. You know, in 2008, when the App Store launched, I don’t know if anybody remembers this, but the most popular app on the App Store in 2008 was called iBeer. It was $2.99 in the App Store, and you could hold your iPhone and it would use the technology to drink the beer, and the beer would go away on your screen. He sold it for $2.99. He made $10,000 a day for a year. That’s the real genius. Technology moves fast. From 2007 to 2023, it’s just a completely different world. And you know, when you think about the adoption of AI today, things are gonna change from here. We’re not gonna have the details right, but let’s attempt to start to get some of the theory right and start asking the right questions to hopefully get to some of those solutions.

[Jen] You didn’t even mention Elon Musk, which was probably a good thing. Probably a good thing. All right, we’re at the end of our time today for the panel. I wanna thank all of you so much for coming today. When you leave, you’ll notice that there are some resource tables out in the lobby. You can sign up for some of the workshops through CTL or eCampus that you’ve been hearing about. We’re having some fun MashUp events through the College of Innovation and Design, and we would love to have you come to those. And you can talk to some of the AI Task Force members about what you heard today. We would love to chat with you. But in the meantime, please join me in thanking our panelists and Dr. Tromp for their remarks today.

(audience applauding)

[Jen] Thanks, everyone.