Artificial Intelligence and Social Media
On this edition of Iowa Press, we discuss the ways artificial intelligence and social media are changing business, politics, education, personal interactions, and more.
Our guests are two educators who founded Innovation Profs. Chris Snider is a Drake University professor who teaches social media and digital media courses, and Christopher Porter is an associate professor of computer science and director of Drake's artificial intelligence program.
Joining moderator Kay Henderson at the Iowa Press table is Stephen Gruber-Miller, Statehouse and politics reporter for The Des Moines Register.
Program support provided by: Associated General Contractors of Iowa and Iowa Bankers Association.
[ Recorded: November 12, 2025 ]
Transcript
[Kay Henderson] Artificial intelligence and social media are changing how we work, learn and connect with one another. We'll explore how with two Drake University professors on this edition of Iowa Press.
[Announcer] Funding for Iowa Press was provided by Friends, the Iowa PBS Foundation.
Banking in Iowa goes beyond transactions. Banks work to help people and small businesses succeed, and Iowa banks are committed to building confident banking relationships. Iowa banks, your partner through it all.
[MUSIC]
[Announcer] For decades, Iowa Press has brought you political leaders and newsmakers from across Iowa and beyond, celebrating more than 50 years on statewide Iowa PBS. This is the Friday, November 14th edition of Iowa Press. Here is Kay Henderson.
[MUSIC]
[Henderson] To borrow a lyric from Madonna, we are living in a digital world, and our guests today are guiding businesses and students through that world. Joining us at the Iowa Press table are Chris Snider, who teaches social media and digital media at Drake University, and Chris Porter, who teaches computer science and is the director of Drake University's artificial intelligence program. They are both co-owners of Innovation Profs, a company that teaches businesses and professionals how to use generative AI. Welcome to the program.
[Chris Snider] Good to be here.
[Henderson] Joining our conversation is Stephen Gruber-Miller of the Des Moines Register.
[Stephen Gruber-Miller] So Chris Porter, let's set the scene a little bit as we're going to be talking about AI a lot. Tell us a little bit about what AI is, how it works. And just give us a baseline that we can work with.
[Porter] Sure. So artificial intelligence is the use of computer systems to perform tasks that typically require some degree of intelligence. That might be problem solving or decision making, but there are a lot of things we do that involve intelligence that are maybe less glamorous, like navigating through a room without running into things. Building systems that can do that, that's what AI is about. Generative AI, to add to the mix here, is the use of AI systems to create content, content like text, audio, video, computer code, that sort of thing. And the way it works, these systems are trained on data that is from us. If it's a generative AI system that produces language, it's trained on lots and lots of examples of text from the web. If it's a system that's trained to identify images, it's trained on lots and lots of labeled images, also often pulled from the web, so that it can figure out what represents what. So that's the general idea of what's going on with AI.
[Gruber-Miller] Yeah. And Chris Snider, give us some examples. People may see AI in their daily lives, or maybe it's doing things that they don't see. What are some examples of how it's being used around businesses and elsewhere?
[Snider] Well, you know, the big reason why we're here talking about this is the generative AI side. So that's your tools like ChatGPT and Google Gemini, which first burst on the scene because they could suddenly write human-sounding text. And so we see them now being used, yes, to write text, but also to brainstorm ideas, to rewrite things, all kinds of uses for these generative AI tools in the text sphere. And then they're starting to get more into creating images and videos, right, in this world where lots of things are made by AI and it's hard to distinguish what's real and what's not.
[Henderson] Christopher Porter explain where you see the reason to be optimistic here about the use of AI.
[Porter] Sure. So I think there are a lot of uses of AI that we've incorporated into our lives for years now. Since around 2012, machine learning really picked up, for reasons I can't really go into, and that led to really well-developed recommendation systems that we use when we're perusing what content to watch on Netflix, or navigational tools, or ways of using computer vision. Now we have the promise of vehicles that can navigate themselves. When we move into the generative AI sphere, which is really something that's only happened in the last few years, there's now the possibility of using natural language processing to process massive amounts of language. For me, I can find new ways to engage my students with AI tools, where they can not just read an article but have a conversation with that article. So I think there are interesting developments that open up new ways for me as an educator, and there are also just a lot of fun things you can do with it. I create fake songs with my kids, and it's barrels of laughter when we're doing that. So I think there are a lot of great, interesting possibilities. And then on the business side, for the sake of productivity, you're seeing a lot of new ways of processing information, of getting insights, of taking routine aspects of jobs and handing those parts of the work to AI, so that we're freed up to do the sorts of things we really find enjoyable, the best parts of our work.
[Henderson] So, Chris Snider, looking at your business, Innovation Profs, where you're working with businesses, how are Iowa businesses using AI?
[Snider] I think a lot of Iowa businesses are starting to explore this area and even starting to implement some of these tools. We're seeing businesses who have gone through this process of starting to understand the tools, and now they're looking for ways to implement them in what they do. A lot of smaller businesses still maybe aren't sure what tools to use and what not to use. And so we really try to talk to companies about how they can take a human-centered approach to bringing in these tools, because the tools can do a lot of things that humans maybe are doing right now. We want companies to think through: what are the parts of our job that we want to make sure we keep people doing, and what are the parts where maybe we're open to the idea of AI having a role? And how do we implement AI responsibly, where there's still a human driving the car, but we're bringing AI along for the ride?
[Henderson] What are the pitfalls?
[Snider] Well, the pitfalls would be that these tools can get things wrong, right? Just because you get something from AI, you shouldn't assume it's correct. And so there are people creating what's being called workslop now, where you create your piece of a project and just send it off to the next person down the chain, and they realize, I can't even work with this, because this is just AI-generated slop. So the pitfalls come when you just assume, when you're not using your human intelligence along with it. There are issues on the copyright side, too. What data was used to train this AI, and is what it puts out in the end something you can actually put out in the world, or are you potentially infringing on someone else's IP? So several things that you need to think about before you --
[Henderson] Intellectual property for people who don't know what IP is.
[Snider] But several things you need to think about before you just say, let's go all in on on AI.
[Gruber-Miller] Yeah. So as we talk about businesses automating certain processes, AI can affect the job market. Nationally, I think October was a record October for layoffs, and there's some reporting that at least some of that has to do with AI. Chris Porter, how is this affecting the job market and workers?
[Porter] Yeah. So there was a Stanford study done, I think over the summer, that showed a 13% decline in entry-level positions in certain fields, like programming and, I think, the digital media and communication side of things. The study also looked at mid-career workers and found that there was no shift at all. And so they took a step back and asked, what accounts for this difference? And it looked like recent college grads have a lot of book knowledge, which is maybe easier to automate than the kind of knowledge a mid-career person has, which they refer to as tacit knowledge. So we're seeing that experience and expertise are making folks more resilient to this wave of AI. But the students we're putting out there who don't have that expertise and experience yet, they're more vulnerable right now. So one aspect of what we're seeing is that we want to prepare our students with internships, with project-based learning, to give them those kinds of exposures to developing tacit knowledge so that they're more resistant to this wave of automation. I think that's definitely a strand we want to take into account as educators. On the flip side, we've seen that there are some operations, some companies, that are so small they have a hard time filling positions, and for them, AI fills a need that they don't have any other means of filling. That's an interesting counterpoint. We'll find folks in rural Iowa who say, I had this project, I had to create a pipeline, a workflow, to pull it off. It would have taken me a day to do that. I'm the only person working here. I was able to use AI to do it for me, and it saved me a full workday. And so there was a real gain there.
[Gruber-Miller] Yeah. So as you mentioned, you're both educators. And if entry-level positions or early-career folks especially are being affected by this, what are you teaching students, either one of you, that can help them be prepared to still be hired in this environment where there might be more competition?
[Snider] So, I mean, in the school of journalism at Drake, we've sort of always taken this approach, but we've really doubled down on it now: we're not preparing you for that entry-level job, we're preparing you for the job beyond that. So we need to get you internships or experiences in the classroom that are that entry-level job. Because the real unlock here is your human intelligence plus artificial intelligence. And so we need to make sure we're giving students advanced skills in their area, plus the knowledge to apply AI, because there are a lot of people out in the workforce who would rather just bury their heads in the sand: I don't want to use AI, I don't want to learn a new thing. So I think our students can have an advantage if we teach them a little more advanced work, and then teach them how to use AI to do it as well.
[Gruber-Miller] And how? Oh, go ahead.
[Porter] Yeah, I can add to that. When Drake created its AI major, we had the initial conversations back in 2019, actually, before the wave of generative AI, and the major rolled out in 2020. It was extremely informed by the liberal arts and professional preparation. So for students who major in AI at Drake, maybe half their curriculum is technical and the other half is philosophy, psychology, neuroscience, linguistics, design, law, with the thought being that these students need to be problem solvers across a wide range of domains. We talk about Drake having that liberal arts focus, and that makes a huge difference for us. And so now when I'm teaching classes, I find myself telling my students, don't rely on these tools to do the work for you, because why would anyone hire you if AI could do all the work you're doing? I tell them the digital ethics course you take is just as important as the machine learning course you take, because you can automate machine learning, but being able to solve ethical problems, your employers are going to rely on you to have that know-how, to have that ethical outlook and be able to lay out the issues. That's not something you can easily automate at all, and why would we? So I think the students are getting the message that not just technical skills are important; supplementing those with that broader outlook is really central.
[Gruber-Miller] Yeah, that's a great segue. I wanted to ask more about how this is changing education, at the college level, but maybe also at the K-12 level. Is it providing teachers new ways to engage with students? What are some of the positives you're seeing in the classroom, but also some of the pitfalls, like the potential for plagiarism and other things? I don't know if either of you would like to address some of those changes.
[Snider] Yeah, I think on the positive side, we've worked with teachers who are able to do things they couldn't do before. They can make customized assignments for students who are falling behind. So there's a lot of excitement around the idea that education, where we hear about teacher shortages, is an area where AI can come in and help. Can we use generative AI to create lesson plans, custom lesson plans for kids who are falling behind, or kids who are getting ahead, or kids who just learn better in a different way than how that teacher is teaching, and free up teacher time for more creativity, for how do we create assignments and how do we keep kids engaged? That's a better use of teacher time. So I think there's a lot of potential here to improve the classroom. One of the first teachers we ever worked with used AI to rewrite an assignment for a specific student, and we asked, how would you have done that before? And the answer was, I would never have done that before. I had no tool to do that before. So there are definitely some positives here in terms of education.
[Porter] And I can speak to the flip side of what you asked about, Stephen, the plagiarism, or this worry about what we might call offloading: I don't have to think anymore, I'll just have the AI do it for me. I have colleagues, and I've done this in some of my classes as well, whose response is to go analog. All right, put away your laptops, get your paper and pencil out, and now we're going to write. We want you to build those muscles and engage your own thought process in a way that's independent of technology and the temptation to push the button. That gives students a chance to engage the material on their own, with their own thoughts, and I think by and large students are appreciating it. My background is as a math instructor, and I feel like you can't do math on a laptop; that's a paper-and-pencil discipline through and through. Now we've shifted to writing being primarily something you do via word processing. What happens if we go back and make it much more tactile, and at least in terms of developing ideas, get students to be more physical in how they're doing that work? I think the benefits are there. So that's a shift we're seeing in the college classroom, and it works at Drake because our classes aren't overwhelmingly large. You know who all your students are, so being able to manage the grading load, that sort of thing, is doable, and I think we've seen some success there.
[Henderson] Let's address some of the freak-outs, if you will. One of them currently is that so many people are going online to apply for jobs that businesses are having to use AI to go through hundreds, sometimes thousands, of resumes. How do you, Chris Snider, advise students who are trying to get a job to avoid some of the ways they would be screened out by AI?
[Snider] Really, the way I always have. A lot of college is about building relationships, and a lot of life is about building relationships, and that's maybe something AI can't emulate as well. I tell students all the time, because I'm hearing this right now as they're applying for internships and we're talking about it in class, they say, if I can just get past the AI, then I'm good. Can I talk to the real person? And I say, well, where are you applying? You know me. I probably know someone who knows someone at that place. Let's build human relationships here and let's try to cut through some of the stuff that's in the way right now. That's another thing you get from going to college: you build those genuine relationships with people, and that's going to be the thing that makes you someone people still want to hire. I ask my students all the time, in a world where AI can do a lot of the tasks associated with the job you're coming here to learn to do, why are they still going to hire you? And some of that might be because, oh, I met you and I was extremely impressed by the question you asked at this event. So human relationships, I think, are the key for these students.
[Henderson] The other freak-out is about how much electricity and water is used by the data centers that power AI. Will that at some point restrict AI, do you think?
[Porter] For there to be restrictions, that's a regulatory move that has to be made, and so you'd have to have an environment where there's a stomach for bringing those regulations and making them a reality. I don't know if there is currently. So that's a partial answer to your question. But the question of whether we should regulate these sorts of things for environmental purposes is definitely a live one. I think there is a response on the industry side of trying to find ways to be more efficient. Studies suggest that if we use the right tool for a job, rather than the most powerful tool for the job, that will definitely decrease the energy footprint of a given use. Recently, Google put out a study about what the median prompt uses in terms of water: how long will I use ChatGPT before the amount of water used to cool the servers is equivalent to, say, flushing a toilet? So you can see some of the costs and the trade-offs, and I think being aware of what those are might inform how individuals use these tools, and that's a good thing. I'll leave it at that.
[Gruber-Miller] We're going to do a little crash course in digital literacy here. We know, as you mentioned, that AI can get things wrong. So let's start with Chris Snider, but either one of you can jump in. What are some good rules of thumb for whether you can trust what AI is telling you, whether that's ChatGPT, your Google search results, or something else?
[Snider] I think we live in this world now, with AI video, images and text, where, even before AI, there were things we couldn't trust. So we sort of live in this post-truth world where your first inclination should probably be not to trust something you see online. That's what I teach my students: don't trust anything until you can somehow verify that it's real. And oftentimes our best way to do that is just the source. Is it a trusted source that put that out there? So I think that's the key thing; it's become very hard in the world of AI. Social media is where all this stuff is shared, and on a regular basis someone's showing you something, saying, can you believe this happened? And your first thought should be, I do not believe that happened; let's verify whether it's true. So media literacy becomes a skill that everybody, not just students, but probably everybody in the world, needs to be taking some sort of class on right now, to understand how things are changing and what that means.
[Gruber-Miller] Yeah. And how do you identify if something is wrong? With photos or video, are there telltale signs, or is it really just about running down where that source is coming from?
[Snider] So, I mean, there are still some telltale signs. It used to be six fingers on people's hands, or the small text in the background of the photo made no sense at all. But these tools are getting better, and those tells are going to go away, so relying on them is difficult. And all these companies put their own kind of invisible watermark in what these tools produce, identifying it as made by generative AI, but there's no standard they're all using, and it doesn't carry over. So I fear we're coming to a world where it's more about verifying that things are true than about trying to track down whether something is false. There are new cameras that photojournalists can use that will put that verification on an image from the point it is made in the camera: this is a real image made by a real camera. And so maybe it's come to that, that it's more about verifying what's true and assuming everything else is false.
[Gruber-Miller] And beyond AI, when your students are navigating social media, or just as you're teaching social media, are there other ways you advise them to interact with it, both to get accurate information and to make sure their interaction with social media is a positive and productive one?
[Snider] Yeah, it just becomes harder and harder. This is where AI and social media meet: social media is more about entertainment these days than it is about being social with our friends and family. And so it wants to show you content it thinks you will find interesting, and you might go down the rabbit hole a little bit with something that's a little bit false, and it might take you farther and farther down that rabbit hole. So it's a tough question. I think the best thing to do is just understand social media for what it is and understand where we can find trusted sources.
[Henderson] I'm wondering, people watching this program might want to ask both of you, how do I protect myself? How do I police what's online about me? Do you have any recommendations? Either one of you? How do you do it?
[Snider] So, you know, I have Google Alerts set up for my name, so I'll see if something is out there, and I regularly search my name to see what's out there. Not that I'm important enough for that, but because I want to teach my students: these are things you need to do. But the truth is, with deepfakes now, it's very easy to take a photo of any of us, make a video, and make it sound like us saying whatever we want it to say. And there's really nothing that can stop any of that; once it gets put online, it's very difficult to get it taken down anywhere. So this is an area where, you know, we have PR students at Drake University whose jobs are very safe, because you're going to have many crises in companies that come from AI, and companies need to be thinking about how they're going to respond if this happens to their CEO. If our CEO suddenly has a video online saying something, we need a plan for how we're going to address that.
[Henderson] Chris Porter, in these sessions you have with businesses, what's your advice?
[Porter] Usually when it comes to those kind of outward facing sorts of issues, I defer to this guy.
[Henderson] Oh, okay.
[Gruber-Miller] Well, how about the issue of regulation, which you brought up a little bit earlier? Are there certain standards or rules that you want to see put in place to govern some of these things we're talking about?
[Henderson] We'll let you go, Chris.
[Porter] Yeah. So one promising development I see is some bipartisan legislation that's being kicked around right now. I think it's called the GUARD Act, where GUARD stands for guaranteeing user authentication and responsible dialogue. I think this is promising for protecting kids from being exposed to harmful content, or protecting them from the pitfalls of interactions with AI systems that could lead to undesirable consequences. And if you look at the senators who are sponsoring this legislation, it is truly bipartisan, and that to me is very encouraging, that there's a real appetite for putting this into place. So for me that's a quick win, as a father of three boys who use the internet. I mean, the other day I saw my oldest son on ChatGPT, without a paid account, without even a login, asking for advice on what video games he should ask for for Christmas, and it was giving him responses. I was surprised to see that you can now access ChatGPT without even having to log in; that wasn't always the case. So our kids, if they can get online, they can access these tools, even if the terms of use say you can't use them if you're under such and such an age. Who's checking that, especially if you can just get to a website and immediately access those things? So that one seems to me kind of a no-brainer. I would love to see more of an understanding of the environmental impact and what a regulatory response would be. The EU, I'll say, has a framework for regulating AI as well, and we might see a version of that here in the U.S.
[Henderson] I see there's no time left on our counter for this conversation. Thanks to you both for sharing your expertise with our viewers.
[Porter] Thank you.
[Snider] Yeah. Thanks for having us.
[Henderson] You can go online and watch every episode of Iowa Press at iowapbs.org and follow us on social media as well. For everyone here at Iowa PBS, thanks for watching today.
[MUSIC]