The Rant Podcast

Agentic AI and the Student Experience with Lev Gonick

Eloy Oakley Season 4 Episode 6

Send us a text

What if AI actually made college more human—more supportive, more accessible, and more affordable? We sat down with Arizona State University CIO Lev Gonick to unpack how agentic AI, low-code tools, and faculty communities are transforming the student journey from recruitment to graduation.

Lev takes us inside “Agentic AI and the Student Experience,” a standing-room-only event that drew 650+ attendees from 25 countries. The focus wasn’t jargon; it was outcomes. We talk about “Experience AI” as a design principle that keeps value visible: faster curriculum design with human-in-the-loop checks, assessment aligned to skills and competencies, and agents that scout resources, research, and opportunities while faculty sleep. You’ll hear how ASU empowers staff and instructors through CreateAI, a low-code platform that moves innovation closer to the work, and why the university is replacing data silos with “data rivers” that support learners end to end.

We also dive deep on trust. Privacy, IP protection, and security aren’t footnotes; they’re the foundation. Lev explains how ASU differentiates its environment from consumer platforms, educates students and faculty on how AI engines operate, and uses principled innovation to set ethical guardrails. The result is a culture that scales experimentation without abandoning accountability. On the access front, we explore how agentic, multimodal recruiting tools help first-gen and low-income students discover pathways, compare programs, and get timely guidance that used to be out of reach.

If you care about student success, equity, or the future of learning, this conversation offers a clear, usable playbook: build communities of practice, define your trust posture, pilot targeted use cases, and scale what proves value. Enjoy the episode, share it with a colleague, and if it resonates, subscribe and leave a review so more educators and leaders can find it.

https://ai.asu.edu/

eloy@4leggedmedia.com

SPEAKER_00:

Hi, this is Eloy Ortiz Oakley, and welcome back to the Rant Podcast, the podcast where we pull back the curtain and break down the people, the policies, and the politics of our higher education system. In this episode, I sit down with Lev Gonick, the Chief Information Officer at Arizona State University. Lev leads the technology portfolio for that massively innovative university, and he makes all of Michael Crow's technology dreams come true. And that is a tall order. My conversation with Lev will touch on the latest event at Arizona State University, called Agentic AI and the Student Experience. Lev will talk about the event and the overwhelming response it drew from all across the world. Agentic AI, AI in general, and technology innovation are at the center of the insatiable appetite for innovation at Arizona State University. So I'm looking forward to jumping into the conversation with Lev and talking about how he and his team handle that enormous appetite for innovation at ASU and how he keeps up with the changes in technology today. But before we get into that, I want to take a moment, as we approach the end of the year, to thank all of our listeners, all of our subscribers, and all the folks who watch us here on YouTube. Thank you for continuing to follow the Rant Podcast. I hope the content has been valuable, and I hope my conversations with leaders in the higher education marketplace give you a bit of insight into what's happening today and into some of the issues we're going to have to pay attention to going forward. Certainly, technology and especially AI have been a big part of the conversations we had this year, but so have the enormous changes brought on by the Trump administration in the higher education marketplace, as well as the rising lack of confidence that Americans feel about higher education and the drive to show more value and more economic mobility for learners who enter programs of study. We've had some great guests this year, from Lev Gonick to Phil Hill to many others, who have helped us understand what's shaping the higher education marketplace today. And all of this would not be possible without our sponsors. So I have to take a moment to thank our amazing sponsors who make the Rant Podcast possible. I want to thank, of course, Arizona State University for continuing to support the Rant Podcast as we shine a spotlight on innovation throughout the higher education marketplace. I also want to thank Alliant International University, which is making professional graduate programs affordable and available to Americans throughout the country. I also want to thank OpenClassrooms. OpenClassrooms hit an amazing milestone this year: they received their accreditation from WASC. They moved here from France, opened up shop in the US about three years ago, and are now accredited and creating multiple apprenticeship programs and programs of study that lead to good-paying jobs. Also, a big thank you to RisePoint. I recently interviewed Fernando Bleichmar, and that episode will be published in the next few weeks. We talk about ROI and online programs and how RisePoint supports institutions trying to grow their footprint in the online education marketplace. A big thanks to ReUp as well. ReUp Education is helping learners who either had a poor experience in higher education or who never went at all to get reconnected to higher education in their adult working life and gain the skills they need to compete in today's economy.
A big thank you also to Ellucian. Ellucian continues to evolve and support higher education institutions, big and small, throughout the country by leveraging its enterprise resource planning platforms and providing data solutions to colleges and universities across the country. I also want to thank ESG, the Education Strategy Group. Education Strategy Group is helping states and institutions throughout the country come up with strategies and understand how to thrive in this ever-changing education marketplace. Also, a big thank you to College Futures. College Futures is also my day job. They've been very supportive of the Rant Podcast, helping to spread the word about the work that we're doing and helping us find great guests like Michael Itzkowitz from the HEA Group to talk about how we are creating more and better economic mobility for learners and how we're going to measure that going forward. I also want to thank BrandEd and Brandon Busteed, the CEO. BrandEd was a sponsor at the beginning of the year, and we really appreciate the support they gave us to help the Rant Podcast continue to grow. Without this support, the Rant Podcast would not have been able to see the tremendous growth we experienced in 2025. Our growth on YouTube has been tremendous. We have now reached over 210,000 views and well over 8,000 hours of watch time, and we continue to add more and more subscribers. Last I checked, we were at more than 4,500 subscribers. We also continue to see steady growth on our audio platforms. So thanks to all of you for watching us, for listening to us, and for continuing to give us great feedback. With that backdrop, I hope everybody has a great end to 2025. I'm looking forward to continuing the conversation in 2026. And with that, please enjoy my conversation with Lev Gonick, CIO of Arizona State University. Lev, welcome to the Rant Podcast. Great to be with you. It's good to have you, Lev, and thanks for taking time out of your busy schedule. I know there's always a lot going on at Arizona State University, but thanks for taking some time. So as we launch into this podcast interview, I know we've got a lot to talk about, a lot going on at ASU and in the AI world. But before we jump into all that, let's level set a little bit for our listeners. Tell us about your role at ASU and how you and your boss and partner, Michael Crow, have created a culture of innovation there at ASU over the last few decades.

SPEAKER_01:

Great. Thanks again for having me. What I'd say is that my role as the enterprise CIO, working for Michael and the executive team here at ASU, is a combination of stewardship of all of the utility functions that go into running a huge enterprise institution and, at the same time, being a partner in co-designing, thought leadership, and strategic work together to advance this seemingly insatiable orientation toward innovation that you intimated. And I think at ASU it really is an unrelenting commitment to our charter, which focuses fundamentally on being democratic in the sense of making teaching and learning, and especially learning and student success, essential to the institution, mobilizing all of the technology tools possible to advance student success, reducing the cost and the burden of education, and fundamentally preparing students to have the opportunity to gain the skills they'll need for the workplace of the future, today, while they're at the university.

SPEAKER_00:

I appreciate your using the term insatiable. The university does have this insatiable drive toward innovation, toward improving, toward democratizing access to a great post-secondary experience, and it seems to be very intentional. Those of us who have had the opportunity to visit the various functions at ASU see it everywhere we go. How do you keep up with it? As you said, you're the guy behind the scenes. You and your team are always having to go out there, problem solve, and try to figure out which technology will help meet the goals the university is after. And so many CIOs or CTOs are in the "no" business. How do you put you and your team in the "yes, we can" business?

SPEAKER_01:

Yeah, it's a really important question. I guess I would say I'm genetically predisposed to the "yes, we can," and maybe that's what attracts a certain kind of talent around Michael Crow, because there is a growth mindset that is core to the philosophical orientation toward all of the complexity of being part of an enterprise like ASU, or any other large, complex organization with a mission. In terms of the tactical and operational issues, I have always found that the best way to actually learn something is to teach it. And so I've been writing and speaking for almost 30 years, because I find that is the way I actually get prepped, actually get up to speed on what is going on. And I'm constantly asking people around me, not only here at ASU but in my broader network out there, for insights: what are you working on that's experimental? Are you trying this particular set of orientations to the teaching and learning environment? What are you doing in the HR area along the way? There is a little bit of "not invented here" thinking that happens, especially in the technology world: if it's not invented here, it's not worth doing. Not at ASU, and certainly not for me, because if we've learned anything in the last 20 years, it's that we have found a way to partner that is not simply about the transaction of a vendor-to-buyer relationship, but about creating more value, helping shape the way a commercial player can emerge into the marketplace, and more generally helping shape the marketplace.

SPEAKER_00:

I'm really glad you brought up that example. You really do partner with all sorts of different entities to bring to fruition the technology, the ideas, and the innovations that either a commercial or nonprofit partner may be able to bring to bear and to help ASU complete its mission. Now, you just had one of your signature AI events recently. I think it's titled Agentic AI and the Student Experience. I think that's a perfect name for an event today, because we've gone from just talking about the release of ChatGPT and the growth of LLMs throughout the country to now really talking about agentic AI. First, tell us about the event and some of the interesting things that you came across there.

SPEAKER_01:

It was a spectacular event in the following way. Our expectation was that if we focused the conversation on the student outcome, totally appropriate for all of us but certainly for ASU, we would probably get maybe a couple hundred people from around the country to come share what they were doing, to get out of the nerd speak and the technobabble, and to actually start talking about early experiments: what do we actually know about how students are thinking about and making use of these tools? And as things happen in the ASU world, I was traveling and speaking, and we were inviting people individually to come to campus to start sharing. And all of a sudden it became clear that this could really be a combustible moment, not just a couple of the usual suspects getting together telling war stories about how hard it is to work with the academics and so forth. No, we literally blew the roof off the program, with well over 650 people participating from 25 countries around the world. And they came not just to listen, because we were very intentional in the design of this, not to make it about selling ASU. Again, we have some things to share, and we think what we're doing is very much at the leading edge, but this was really about sharing. We literally had over a hundred speakers out of the 650 people there, involved in one way, shape, or form. We solicited paper proposals and poster sessions and keynotes and the like. And here's the best evidence of why this was such a special event: Friday at 11 o'clock, on the last day, in the last session of the event, standing room only. Wow. Now that's crazy. Those of us who've been around the academic event space know that you can almost never plan for the last day because no one's ever there. My point is that we tapped into a moment in which there's this incredible excitement and interest around the ability to translate the nerd speak into English and have it applied to how it gets used in the learning environment. In fact, the very last session was a musician talking about music and the creative experience for classes. So it wasn't just about how do I use this for computer science; in fact, very little of it was traditional computer science. A lot of it was the intersection of what we've come to describe here at ASU as experience AI, which people can understand. People want to experience AI. They don't care whether it's generative AI or agentic AI or multimodal AI; that's all nerd speak. What people want is AI that they can experience, in this case for purposes of their learning. What we were able to capture was this extraordinary breadth of interest, including early assessments from Australia, from Canada, and from across the US, of people actually studying the early pieces. So we now have a sense of where the baseline is as we continue our collective journeys.

SPEAKER_00:

So before we dive into some of the questions I have around the use cases that you see emerging for AI, and you're right, there's lots of nerd speak around this. Some of it is probably still in the realm of machine learning, some of it is truly AI, and now agentic AI, where we can turn these agents loose to do some work on our behalf. But before we go down that road: I work with a lot of colleges and universities, both in California and across the country, where the college may view ASU or Southern New Hampshire or WGU and see some of the aspects of the deployment of AI happening. And they want to rush to those examples, rush to those use cases, but they have so much work to do on the back end to actually be prepared to leverage AI. What is some of the advice that you have for college and university leaders to actually become AI ready, so that you're not just buying products off the shelf, you're actually identifying a problem to solve?

SPEAKER_01:

To be sure. And what I would say here, Eloy, is that we need to understand how to walk on both sides of the fence. The best way to predict the future is to invent it, and the best way to invent the future is to start. So that's on one side. On the other side, you're absolutely 100% correct that you have to set the conditions and the context for why the faculty or the staff or the student body should understand why the institution is interested in preparing students for the 21st century workforce, or why students should be prepared to grapple with the ethics of AI, or why students need to understand how the ability to do research and discover new science is going to be radically transformed. There is absolutely a step before all of that, and that is essentially creating communities of practice. ASU began this journey about three years ago, when we understood that the first order of business, while we had early experimentation underway, was to engage the faculty community in a series of conversations. We knew coming out of the pandemic that the faculty understood technology could be an ally in a very disruptive moment, not because it was perfect, and not because we didn't have all kinds of challenges through the pandemic with the technologies we provided. And we thought, again, there'd be a couple hundred faculty at ASU who would very quickly engage in conversation and early experimentation. We've been blown away. We have almost 6,000 faculty teaching at ASU in one way, shape, or form, and over 3,000 have gone through a program of structured learning that we created along with our colleagues at EdPlus (I know you've worked with Phil) and our provost's office, Nancy Gonzales and her team. Alongside that self-paced, eight-module learning we created, we also began to build broad communities of practice with thousands of faculty engaged, which now, almost three years later, are appropriately, largely discipline-specific. Whether you're in the humanities, looking at philosophy or English, or working inside a biomedical engineering lab, at this point the community of practice needs to be really contextual; those are all context-specific efforts. And so we maintain those communities of practice and we engage intentionally. We've created a faculty ethics committee where I'm constantly asking for feedback, because this is not one and done. We understand there are all kinds of complexities associated with the AI journey that we are on, and we in central IT don't want to be presumptuous about where all the biases and ethical considerations need to be taken into account. We have a research community that is constantly, and in many ways, driving the technical cutting edge, while at the same time we've got significant commitments at ASU, back to our charter, that focus on equity and equitable access to these tools all along the way. So that's the juggling act, and much of it is critically important in addition to all the leading-edge demonstrations of the art of the possible.
And I would also say, and I know this will resonate with you and with your listeners and viewers, that these are still early days. We're privileged and lucky here at ASU to be working on all sides: the faculty engagement, the student engagement, the staff engagement. And one of the things that we've learned, Eloy, in that regard, is that we have to have a different kind of mindset. This is not about managing applications like IT has done forever. This is a complete rethink of the ways in which we can use technology to actually help students: instead of looking at this as data silos, which is the way most applications have been developed over the last 30 years, to think about this as data rivers that can literally support students along their entire journey. And not only students; it also unlocks the opportunity to let subject matter experts in HR create AI apps that work for HR professionals, and the same thing for lab professionals, rather than saying, "Oh, we have to wait for our technology people." So we've developed a low-code, no-code platform technology, which we call CreateAI. And our job is to make sure that the campus community understands its use and understands that it's being guided by ethical consideration, what we call principled innovation here at ASU, and to get feedback. This last weekend I was hanging out at the football game, and a group of student leaders were talking to me exactly about this kind of work. It's not that the CIO writes a note and all of a sudden AI is birthed. This is a series of conversations, including the ones that happen at a football game.

SPEAKER_00:

Yeah, no, I would love to grab either a Division I football coach or an athletic director to find out how they're deploying AI, although they probably wouldn't want to tell me, because it's probably secret sauce at this point. Based on what you're seeing right now, what are some really interesting use cases for agentic AI in the student experience? Are there one or two that you're seeing emerge at ASU, or that you found really interesting through this latest event?

SPEAKER_01:

Yeah, I'd like to give you one of each, just as an example. This is not exhaustive, but one of each. Let's level set: all of us who use any AI at all have had a relationship with at least one agent. Call that your favorite tool, whether that's ChatGPT or Claude or whatever tool you're using. That's you and the agent. But in the agentic age, there's really an opportunity for agents, machine agents, to be working with you and with each other, to streamline workflows, to create opportunities for accelerating activities and for discovery, in ways that might not be apparent to you if all you were doing was working individually, either in a pre-AI era or even with a single agent. Just two quick examples I'd share with you. One that's easy to understand, I think, is the process of actually creating curriculum. Sure, you could ask ChatGPT, "I'm about to teach a course in microbiology, and I'd like you to give me a course outline with all of the details." Prompt, send. All right. But the truth is you've not really described any of the objectives, any of the critical resources that need to be engaged with, any of the ways in which you want to assess learning, what skills and competencies students will actually receive if they take the journey on this microbiology course with you, or what opportunities there are for study or for work afterwards. At ASU, what we've done is set up an agentic workflow that allows faculty to frame, in this low-code, no-code fashion, everything from learning objectives to course outline to assignments to assessment: short assignments, different ways of leaning into the research that helps you understand where students have historically found challenges in a particular area of the academic program, through to the skills and competencies we know students will be able to demonstrate coming out of it, and then actually helping them understand their journey going forward. Now, all of that is a place where humans can be in the loop as much as the instructor or the instructional designer wants. What that does is enormous: not only does it remove the administrivia of a lot of the way course design happens, it's also a great way to let the agent do the research on the back end, whether it's new reading materials or new discoveries or new opportunities for students to participate in things like hackathons or poster sessions. The agent, while you're sleeping, can go out and research all of these and bring them back together. And on the commercial side, there are just so many interesting agentic stories beginning to unfold at ASU. We're always interested in the top of the funnel, which is to say, recruiting students into the ASU environment. There are terrific companies out there.
I'll mention one in particular that we're using at ASU now, called CollegeVine, which is an agentic workflow that allows students to share their interests and aspirations as they enter college, allows colleges to work with the exchange infrastructure of the agentic platform technology to indicate to students our level of interest in having conversations with them, and then uses multimodal AI, that is, voice, text, speech, and the like, to help students on their journeys to a decision. Those are just two simple examples of the ways in which agentic AI is happening in real time. We see it all the way out to the alumni experience here at ASU, and we see it, obviously, all the way in on the other side, to the admissions journey.

SPEAKER_00:

I think that last example is a great one. The more and better information we can give to learners who are thinking about going to college, particularly low-income learners, learners who don't have a very wide social net, or those who are the first in their family to go to college, just opening their eyes to the possibilities of various majors, what a major means for their career aspirations, and how to be prepared. There are so many use cases. People complain all the time that there aren't enough high school guidance counselors. AI is certainly a way to get more and better real-time information to learners when they need it, wherever they can get it. So I think that's a great use case. Now, how do you think about privacy? I know you wrote recently about privacy and data. AI is all about getting more and more data, and a lot of people are growing concerned about data privacy, about whether information they're sharing as part of their experience is being used to train an AI model somewhere else. How do you think about privacy from the university enterprise point of view?

SPEAKER_01:

The central orientation is that in this age, our biggest challenge and opportunity is to rethink trust. How do you remain relevant in a world in which there is this massive diffusion of sharing, the sharing economy that is underway, that erodes a lot of the trust that we in higher education have essentially taken for granted, the assumption that we always carry the imprimatur of the trusted institution? Because data privacy is such a central issue in this age, for the exact reasons that you indicate, our objective here is to use the opportunity to educate the campus community on the importance of understanding how the engine of AI works, and to indicate what the institution, from the enterprise level, is doing to protect privacy, security, personal health information, personal information in general, intellectual property, and all the things that, if you didn't understand it, you might think were somehow being protected in your consumer commercial experience, which they're not at all. One of the things we need to work on here is to understand that this is not about the CIO or the legal counsel of the institution delivering it. This is about the framing of how we will actually deliver the coin of the realm of this AI era, which is building trust. We have to find a way to indicate and differentiate the role of coming to a university for discovery or for the journey of learning, and to make clear that it's a safe place where you can trust that your information will not be used for nefarious purposes, or incidentally used to help tune a language model with data that came out of a lab where you thought you were just doing what the professor asked you to do.

SPEAKER_00:

No, that's a great point, and something for all college and university leaders to consider, because it ultimately comes down to building the trust of your learners, the trust of the community, and the trust of employers that the data you're using is protected, accurate, and verifiable. So good points to consider. Now, let me ask you one last question as we begin to wrap up. Given the pace of change and the pace of transformation, and ASU has been sitting right in the middle of all this over the last decade or so, where do you see this leading ASU over the next five years? If we're sitting here talking five years from now, what do you think that conversation is going to be like as we describe the impact of AI on the learning enterprise?

SPEAKER_01:

I think I'll end where I started, which is that here at ASU we genuinely have a North Star: an unrelenting focus on the mission. And that won't change, not now and not over the next five years. We will be focused on the ways in which we can harness these tools as they continue to mature and evolve, in a fashion that assures us we're providing every opportunity for students to succeed. And I have no doubt that even in the next year or 18 months, we're going to have a completely different conversation about the ways in which AI, as it evolves, continues to unlock the possibility to advance a growth mindset. At the same time, to be sure, there will be all kinds of ethical and important questions that we think about as we go down the path together. And the last word would be that we want to co-design this effort with our faculty and with our students in a way that has them owning their own destinies.

SPEAKER_00:

All right. On that note, Lev, thank you for joining us here on the Rant Podcast. Thanks so much. Great to see you. Great to see you too. Thanks for joining us, everybody. You've been listening to my conversation with Lev Gonick, CIO at Arizona State University. We've been talking about agentic AI and the great innovations happening at ASU and throughout the learning enterprise. If you're watching us on YouTube, please hit subscribe and continue to follow us. And if you're listening to us on your favorite audio podcast platform, download our episodes and continue to follow us. Thanks for joining us, everybody, and we'll see you all soon.