53 Min Read

[The AI Show Episode 163]: AI Answers - AI Environmental Concerns, Agentic Workflows, SEO Impact, The Future of Creative Careers, & Human-First Processes


Serious about learning how to use AI? Sign up for our AI Mastery Membership.

LEARN MORE

From the environmental costs of data centers to the cultural biases baked into today’s models, Paul Roetzer and Cathy McPhillips answer your questions from our 50th Intro to AI class. Throughout the episode, they unpack the gray areas of AI-generated content, debate what the rise of agents means for work, and consider how creatives can stay ahead with AI.

Listen or watch below, and scroll down for the show notes and transcript.

 

Listen Now

Watch the Video

 

What Is AI Answers?

Over the last few years, our free Intro to AI and Scaling AI classes have welcomed more than 40,000 professionals, sparking hundreds of real-world, tough, and practical questions from marketers, leaders, and learners alike.

AI Answers is a biweekly bonus series that curates and answers real questions from attendees of our live events. Each episode focuses on the key concerns, challenges, and curiosities facing professionals and teams trying to understand and apply AI in their organizations.

In this episode, we address 20 of the most important questions from our August 14th Intro to AI class, covering everything from tooling decisions to team training to long-term strategy. Paul answers each question in real time—unscripted and unfiltered—just like we do live.

Whether you're just getting started or scaling fast, these are answers that can benefit you and your team.

Timestamps

00:00:00 — Intro

00:05:13 — Question #1: Which environmental concern feels most urgent for the AI industry to solve in the near term—and who should be responsible for leading the solution?

00:07:58 — Question #2: How well do AI models reflect diverse languages and cultures, and will they ever move beyond an American-centric bias? Have you seen any progress on this front?

00:10:25 — Question #3: What risks and ownership issues come with AI-generated video and images in marketing? Has this evolved over the past few years? Have you seen any legal clarity, or will this remain a gray area in the near term? 

00:15:26 — Question #4: What are the best ways to start experimenting with AI agents, and are there good resources for building them? What’s a smart first step for a solo professional vs. a mid-sized team?

00:18:22 — Question #5: Is there value in using multiple platforms to cross-check results, or is committing to one ecosystem a better strategy? Is this a short-term strategy until the tools improve, or something to build into long-term workflows?

00:22:06 — Question #6: How should businesses weigh built-in AI assistants (like those in Google/Microsoft) versus standalone tools like ChatGPT? Do you think enterprises will eventually standardize on one, or live in a hybrid world?

00:24:30 — Question #7: Are we moving toward a standardized way for websites to guide how AI systems interact with their content?

00:29:27 — Question #8: How do you see different search engines being used or leveraged by AI companies?

00:32:24 — Question #9: How do you choose the right AI model for marketing, HR, and sales tasks? Is there a framework? We often focus on outcomes and use cases, but should we consider transparency, governance, or integration? 

00:34:56 — Question #10: What role do you see AI playing in building and managing communities? Is it more about efficiency (automation, moderation) or about enhancing human connection? 

00:38:31 — Question #11: From an information architecture perspective, what frameworks should teams use when integrating AI into CRM or workflow automation to keep systems scalable and secure?

00:40:51 — Question #12: What are the most common mistakes companies make when trying to ‘force-fit’ AI into a workflow?

00:42:23 — Question #13: Which AI tooling is best suited to develop and monitor a marketing communications strategy at SME vs. enterprise scale? Do you see different adoption patterns between small vs. large companies?

00:45:11 — Question #14: Do you think AI fluency will become a baseline requirement for executives, or is it creating an entirely new kind of leadership role?

00:46:55 — Question #15: What should creatives in fields like graphic design or UX/UI be thinking about as AI continues to evolve? What have you seen creative professionals do successfully to stay ahead?

00:52:29 — Question #16: How do you see coding and technical skills as careers in a world where today’s kids will grow up with AI? And if needed, what other skills should be developed in tandem? How might schools or parents prepare kids for that world?

00:55:35 — Question #17: What’s the best way to handle situations when AI gets things wrong, and how do you approach fact-checking? What processes and humans are needed? Has your answer changed as AI has improved?

00:58:39 — Question #18: If you had to narrow it down to just one ethical principle that matters most right now, which would it be—and why?

01:00:48 — Question #19: How should companies address internal concerns around data privacy, compliance, and governance? Do you see regulatory momentum changing how companies handle this?

01:01:53 — Question #20: Which AI applications do you expect to break through sooner than people think—and which ones are overhyped?

Links Mentioned


This episode is brought to you by Google Cloud: 

Google Cloud is the new way to the cloud, providing AI, infrastructure, developer, data, security, and collaboration tools built for today and tomorrow. Google Cloud offers a powerful, fully integrated and optimized AI stack with its own planet-scale infrastructure, custom-built chips, generative AI models and development platform, as well as AI-powered applications, to help organizations transform. Customers in more than 200 countries and territories turn to Google Cloud as their trusted technology partner.

Learn more about Google Cloud here: https://cloud.google.com/  


This episode is brought to you by AI Academy by SmarterX.

AI Academy is your gateway to personalized AI learning for professionals and teams. Discover our new on-demand courses, live classes, certifications, and a smarter way to master AI.

Learn more here.

Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content. 

[00:00:00] Paul Roetzer: AI isn't the answer to every problem or every need to increase efficiency or productivity. It's great to assess workflows. It's great to look at problems differently, but AI isn't always the answer. Sometimes more human is the answer. Welcome to AI Answers, a special Q&A series from the Artificial Intelligence Show.

[00:00:18] I'm Paul Roetzer, founder and CEO of SmarterX and Marketing AI Institute. Every time we host our live virtual events and online classes, we get dozens of great questions from business leaders and practitioners who are navigating this fast-moving world of AI, but we never have enough time to get to all of them.

[00:00:36] So we created the AI Answers series to address more of these questions and share real-time insights into the topics and challenges professionals like you are facing. Whether you're just starting your AI journey or already putting it to work in your organization, these are the practical insights, use cases, and strategies you need to grow smarter.

[00:00:57] Let's explore AI together.[00:01:00] 

[00:01:03] Welcome to episode 163 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host today, Cathy McPhillips, our Chief Marketing Officer at Marketing AI Institute and SmarterX. Welcome, Cathy. Thank you so much. It is weird to look across the screen and not see Mike there after there's been so many of these.

[00:01:20] But this is, I mean, this is like our, our fourth together, right? Like I think, 

[00:01:24] Cathy McPhillips: yeah. 

[00:01:24] Paul Roetzer: So this is not Cathy replacing Mike. This is not our weekly show we do every Tuesday. This is a special edition we call AI Answers. We introduced this series, I think it was what, June or July of 2025. Yeah. And the idea here is, as part of our AI literacy project, we do an Intro to AI class every month for free, and we have now done 50 of them, and Cathy and I host that together.

[00:01:50] So we do that every month since the fall of 2021. And then we do a Five Essential Steps to Scaling AI class every month for free. And we are on our [00:02:00] 10th,

[00:02:00] Cathy McPhillips: 10th. The 10th is tomorrow, I guess, the day this drops.

[00:02:03] Paul Roetzer: Yeah, the day this drops. All right. So Cathy and I are spending a lot of time virtually doing these things this week.

[00:02:08] So AI Answers is, you know, basically every other week or so. We do about two to three a month where we just go through and answer questions. When we do these Intro to AI classes and the Scaling AI classes, we will get dozens of questions, and we usually get to maybe five to ten of them on each episode or on each class.

[00:02:27] And so we introduced this new podcast series in partnership with Google Cloud, and we thank them for their support, to just try and get through as many of these questions as we can. And so that's the gist of it. It is literally just unscripted. Cathy has questions from the class, and we answer 'em, because in real time that's what happens.

[00:02:43] The questions come in, we answer 'em. So Cathy and Claire on our team curate the questions and then we jump on a call. And so if there are questions Cathy asks that I don't have great answers for, I didn't prepare for them. It is meant to be sort of real time. And if I can't provide some [00:03:00] guidance on some things, we'll direct you to other resources.

[00:03:03] So that's what we're gonna do today. Today's episode, in addition to being presented by Google Cloud, is brought to us by AI Academy by SmarterX. We announced and launched this on August 19th, so that was just Tuesday of this week. This is the thing we've been working on for 10-plus months.

[00:03:20] If you listen to the podcast regularly, you've heard us talking about this. So we finally brought a bunch of new courses, professional certificates, live experiences, product reviews, all these new things that we've built into our AI Mastery membership program as part of AI Academy. So you can now go check it out.

[00:03:39] We have a brand new website, academy.SmarterX.ai. You can go learn all about the individual plans. You can learn about our new business accounts that we're really excited about, and you can kind of check that out. And you can also access the webinar from Tuesday, the launch event webinar, where we shared the entire vision.

[00:03:57] We went through, you know, really a lot of just making the [00:04:00] business case for AI education and training internally in your organization. I would say the majority of the presentation at the launch was actually more about the educational value: how to make that case and what the value of investing in AI literacy is.

[00:04:15] And then it ends with kind of an overview of what we're doing with AI Academy. So again, go to academy.SmarterX.ai, and we will also put a direct link in the show notes to the launch event webinar, which is available on demand. Cathy, I'm gonna turn it over to you and kick us off.

[00:04:31] Cathy McPhillips: Okay. Let's do this.

[00:04:33] Okay. So this week was different. This is our fourth class, and while oftentimes the questions are very different, sometimes we do get a lot of the same. Usually Claire will export all of the questions, go through and do a read and give some recommendations, and I'll run them through AI of some sort.

[00:04:50] Today I used ChatGPT and said, put these in a flow so Paul and I can have a great conversation. I also ran them through NotebookLM to be sure that they weren't questions that we've [00:05:00] already answered, just to make it different, so people could go back to the other episodes. This is all fresh, different questions, and I tweaked 'em a little bit.

[00:05:06] So we're continually figuring out how to evolve that process for these questions. All right. 

[00:05:13] Question #1: Which environmental concern feels most urgent for the AI industry to solve in the near term—and who should be responsible for leading the solution?

[00:05:13] Cathy McPhillips: Question number one. Which environmental concerns feel most urgent for the AI industry to solve in the near term, and who should be responsible for leading the solution? We're starting strong.

[00:05:23] Paul Roetzer: Yeah, really. So just a little background: the environmental concerns, this is a question that does come up in various forms.

[00:05:29] Quite often. The concerns are, like, I just literally saw this morning that Oracle is planning to spend a billion dollars to power an OpenAI data center with gas turbines. That's not great for the environment. So there are these very real, immediate concerns, where they don't have enough power in the electrical grid to do the things they want to do.

[00:05:55] So they're using gas-powered machines to power these data centers. That is [00:06:00] an immediate and obvious challenge. The bigger picture here is that to do what these major labs like Google and Meta and OpenAI and others want to do requires way more data centers than we currently have. And those data centers require way more energy than we currently have in the grid.

[00:06:22] And so we're going to have to do things and it cannot all be clean energy. And so there's a bit of a trade off. Well, there's a significant trade off, I should say, probably for at least the next decade, where environmental concerns are largely going to be pushed aside by the US government at least, and the labs themselves.

[00:06:43] And the bet that they're going to make is that if we build more intelligent ai, it will actually help us solve the bigger picture climate problem, long run. And so whether it comes to economics and jobs or energy, that is [00:07:00] generally the talking point of all the leaders of these labs is it's a trade off.

[00:07:05] It's not gonna be where we want it to be in terms of being, you know, net zero in terms of carbon emissions. We're gonna emit more carbon. But in the long run, we think it's gonna enable us to solve the bigger problem. So it's a very real issue. The thing I talked about on the podcast recently that any of us can actually do ourselves, it's not a big thing, but: use the smaller, more efficient models.

[00:07:30] Like, if you use a reasoning model, if you use image generation, video generation, those require way more compute power, or if you use a larger language model versus smaller, more efficient models. So I would say the one thing you can do, if you really care deeply about this, it's kind of like turning the lights off in the room when you leave.

[00:07:49] It's a little thing, but use a smaller model. It adds up when you're talking about billions of users of AI technology.

[00:07:58] Question #2: How well do AI models reflect diverse languages and cultures, and will they ever move beyond an American-centric bias? Have you seen any progress on this front?

[00:07:58] Cathy McPhillips: Okay. [00:08:00] Number two, how well do AI models reflect diverse languages and cultures, and will they ever move beyond an American centric bias? And have you seen any progress on that front?

[00:08:09] Paul Roetzer: Geez, imagine, you're from an intro class. This is incredible. Yeah, I mean, it's gonna inherently be biased. I've talked about this a lot on the podcast. There's bias in every element of this: the data that goes in to train the models, the post-training of the models, the system prompt that determines how the models behave, the languages they learn from, all these things.

[00:08:31] And the reality is that most of the models being used today, whether it's ChatGPT or Gemini, whatever, are trained by companies in California and the United States. And, you know, I think there's a lot of effort to diversify that. But generally speaking, I think that's basically where we're at.

[00:08:50] They're gonna be US-based models. Now obviously, China's a major player; DeepSeek is a China-based lab that made some waves earlier this year. [00:09:00] And so you're gonna have other countries that, you know, build models that maybe are inherently trained on localized languages. For the most part, what's happening is companies like Meta and Google and others are training on English, and then the models learn to translate into other languages.

[00:09:18] I think a lot of it might come down to post-training and things like that. But yeah, I mean, this is just kind of the way they work right now, and I don't know that that's gonna change dramatically in the next year, you know, few years.

[00:09:30] Cathy McPhillips: Yeah. I wonder if the more we're using these tools, and the more international, non-English-speaking folks are using the tools... you know, we talked about that.

I think, I wanna say it was in one of our Mastery courses, that people who are bilingual were using the tools in English and in their native language and were seeing the results. Does that contribute to this a little bit?

[00:09:50] Paul Roetzer: Yeah, I mean, it could, I mean, OpenAI said that I think their largest user base is actually out of India.

[00:09:57] Like I think part of this is [00:10:00] just gonna be market driven, where, you know, where the users are, they're going to have to adapt the products to be more localized to the user base. So I could see more diversification in that way where they, they just look at the market and say, okay, we have to start catering more to this audience.

[00:10:16] Sure. And it might come back to even the training of the models themselves or the, you know, the specialization of the models after they've initially been trained. 

[00:10:25] Question #3: What risks and ownership issues come with AI-generated video and images in marketing? Has this evolved over the past few years? Have you seen any legal clarity, or will this remain a gray area in the near term?

 

[00:10:25] Cathy McPhillips: Okay. Number three. What risks and ownership issues come with AI-generated video and images in marketing? Has this evolved over the past few years, and have you seen any legal clarity?

[00:10:35] Or is this still just a big gray area? 

[00:10:37] Paul Roetzer: Yeah, there's not much legal clarity here. The basic premise, whether it's text or video or image or anything, is that in the United States, if you use AI to create something, you can't own a copyright to it. It's gotten a little bit more fuzzy in the last few months, because the current administration is not as friendly to creators, I would say.

[00:10:58] They don't [00:11:00] really put as much stock in copyright. There are actually some who have influence within the administration who would like to just throw it away, so that there are basically no protections for copyright holders. So that could change things. But as of right now, the US Copyright Office says that AI-generated stuff can't hold a copyright.

[00:11:22] So if you're gonna create videos, if you're gonna create logos, things like that for marketing using AI, you do have to talk to your legal team and be very clear. If it's something that's very important for you to hold a copyright to and to be able to protect under US law, then you want to have those conversations with your attorneys.

[00:11:42] I always tell people: we pay very close attention to this space. I have worked with IP attorneys for years. I have probably an above-average understanding of what's going on, but I am not an attorney and I am not providing legal advice. So I would just say, you gotta [00:12:00] really know: is it something you want to be able to protect, that you would be willing to spend resources to protect? And also understand it's just getting so hard.

[00:12:09] One of the things that, you know, I think brands have to worry about, creators have to worry about, is just how easy it is to deepfake somebody. Like, literally deepfake a podcast host and start a new podcast that looks and sounds exactly like them. And that's gonna happen to executives of companies.

[00:12:26] It's gonna happen all across the spectrum. So this is a really important area to pay attention to, but there is not a lot of clarity right now as to where this is gonna go and how it will evolve. There are a lot of court cases right now dealing with this, but I still don't feel like we're gonna have clarity in the next year or two.

[00:12:43] I think it's just gonna go on for a while. 

[00:12:46] Cathy McPhillips: And is there a difference between generating an image in a tool and using it, versus ideating in a tool and having an artist create it from that? Is that the same thing?

[00:12:58] Paul Roetzer: Yeah, I mean, I think everything's a gray [00:13:00] area. Like, when you submit an application to protect something, you have to provide that clarity.

[00:13:06] And I think everything's just gonna be case by case. And if you have to, you know, at some point go through an audit trail of how something was created, it's going to be up to a reviewer within the patent and trademark office to determine whether that's good enough. And that's gonna be subjective on its own.

[00:13:22] There's gonna be human bias tied to those decisions. So yeah, I think the general guidance is, if it's something that's really important for you, you want to have the human as deeply in the loop as possible, and you want to be able to show the human involvement in that process. No one is gonna take your word for it.

[00:13:41] If you say, well, it's actually my idea, I gave it this, and then all I had to do is this and this. It's like, okay, show me the thread. Show me that chat. So I think you almost have to assume you're gonna have to prove the human element, and, you know, make sure you go through that process. So [00:14:00] yeah, my general guidance is, again, if it's critical, like a logo for your company, right?

[00:14:04] You don't want 95% of that work done by the AI, because that's something you want to be able to protect, and you don't want other people to steal it and put it on a baseball cap and you can't do anything about it 'cause you actually used AI to create it. That's the kind of stuff I think about.

[00:14:21] Cathy McPhillips: You know, in that Responsible AI Manifesto you did years ago,

[00:14:24] the point that I always bring back to people is: legal precedent is lagging so far behind all of this. Do the right thing.

[00:14:31] Paul Roetzer: Yes. Yeah. And you know, I think part of it is people don't know what the right thing is sometimes, when it just comes to these things. Not even knowing that copyright is an issue with AI.

[00:14:41] I can't tell you how many times I've stood on stage and said, hey, if you use outside creative firms or outside copywriters, you need to have in your contract with them that they can't use gen AI unless you approve it, because they may be transferring work to you that they used AI for, and you don't hold a copyright to it. And they just stare at you like, [00:15:00] wait, what?

[00:15:01] And I mean, even last year at MAICON, we had a whole panel about this, and I think most people in the room, and this is, what, September of 2024, were in shock that that was a thing.

[00:15:13] Cathy McPhillips: And they're really smart people in the room. 

[00:15:15] Paul Roetzer: Yeah. Really advanced marketers at an AI conference. So it's still very early, and I just think, at minimum, an awareness that this is a thing is very important.

[00:15:26] Question #4: What are the best ways to start experimenting with AI agents, and are there good resources for building them? What’s a smart first step for a solo professional vs. a mid-sized team?

[00:15:26] Cathy McPhillips: Absolutely. Okay. Number four. What are the best ways to start experimenting with AI agents, and are there good resources for building them? And are there different first steps for, like, a solo entrepreneur versus a mid-sized team?

[00:15:40] Paul Roetzer: Yeah. So first thing with AI agents is to know what they are. So they're basically AI systems that can take actions to achieve a goal.

[00:15:46] Now, the confusion comes in with AI agents as to how autonomous they are. So it's like, hey, I'm just gonna ask the thing to do the work for me, and it's gonna do it, and it's gonna be perfect, and I don't have to verify it. The human's almost out of the loop. [00:16:00] That's not where the vast majority of AI agents are today.

[00:16:03] The human is actually heavily in the loop. The best place to start, the one that I think gives people the best example of what an agent is and is going to be, is to go run a deep research project in Google Gemini or ChatGPT. That's an agent at work. You're giving it a prompt. You're saying, hey, I wanna do a research report on, you know, my competitors. Here are the three competitors.

[00:16:27] Here are their websites. Can you run an analysis of positioning and pricing and product mix? Take a look at their leadership team. Whatever. You're asking for this thing like you would ask another human to do a project for you. And then it goes and does it. It goes and looks at all their websites.

[00:16:44] It analyzes everything. It does a summary of it. It pulls out highlights and entities and all these things. That's an agent at work. So the human set the project and gave the goal, the agent develops its plan of how it's gonna do it, it goes and does it, and then it comes [00:17:00] back and creates the output. Now you as the human step back, and it's like, okay, is this all true?

[00:17:05] Like, am I gonna verify all the facts? Things like that. But that's roughly an AI agent at work. It's an AI system that can go do something. And so again, there are different degrees of autonomy: how much of the work it can do on its own, and how much or how little the human needs to be involved.

[00:17:22] That's where we're progressing. Another way you could go: look at Agent.ai. This is from Dharmesh Shah, co-founder and CTO of HubSpot, who created Agent.ai. And it allows you to build these much more rudimentary agents where there isn't much autonomy. It's kind of like the human saying, okay, here's my workflow.

[00:17:40] I wanna build an agent that does this workflow for me. The agent itself may not be doing a bunch of thinking and reasoning on its own, but it is executing a sequence of tasks. And so I think the agents are gonna get better. They're gonna get smarter, they're gonna get more reliable, they're gonna require less human [00:18:00] instruction.

[00:18:00] But deep research, like I said, is probably the best example for most people of this idea of an AI system that actually takes action, not just creates an output.
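[Editor's note: the loop Paul describes, where the human sets a goal, the agent plans its steps, executes them, and returns a synthesized output for human review, can be sketched in a few lines of Python. This is a hypothetical illustration, not code from the episode or from any product: the `plan()` and `execute()` bodies are stand-in stubs at the points where a real agent would call an LLM and browsing or search tools.]

```python
# Rough sketch of the agent loop described above: goal in, plan, actions, report out.
# The plan() and execute() bodies are stubs standing in for real LLM and tool calls.

def plan(goal):
    # A real agent would ask a model to break the goal into steps.
    return [f"fetch {site}" for site in goal["competitor_sites"]] + ["summarize"]

def execute(step, notes):
    # A real agent would browse, search, and extract information here.
    if step.startswith("fetch "):
        notes.append(f"positioning/pricing notes from {step.split(' ', 1)[1]}")
        return None
    return "Summary report:\n" + "\n".join(f"- {n}" for n in notes)

def run_agent(goal):
    notes, report = [], None
    for step in plan(goal):            # the agent follows its own plan...
        result = execute(step, notes)  # ...taking actions, not just generating text
        if result is not None:
            report = result
    return report                      # the human still verifies this output

print(run_agent({"competitor_sites": ["acme.example", "globex.example"]}))
```

The distinction from a plain chat completion is the loop: the system carries out a sequence of actions against a goal before producing its output, which is exactly what a deep research run does at much larger scale.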

[00:18:11] Cathy McPhillips: Yep. And we can include in the show notes the deep research webinar that you did, to kind of go through that process, both with the input as well as the output and what's possible.

[00:18:22] Question #5: Is there value in using multiple platforms to cross-check results, or is committing to one ecosystem a better strategy? Is this a short-term strategy until the tools improve, or something to build into long-term workflows?

[00:18:22] Cathy McPhillips: That's pretty cool. Yep. Okay. Number five. Is there value in using multiple platforms to crosscheck results or is committing to one ecosystem a better strategy? 

[00:18:22] Paul Roetzer: So I do this all the time. You know, I told this story with our AI Academy that I mentioned we just launched: I built two of what I call AI teaching assistants.

[00:18:44] I built one as a Gem in Google Gemini, and I built a custom GPT: same system instructions, same knowledge base, same everything. And because it was a very important project to me, I wasn't sure if one was gonna be better than the other. And I wasn't sure, based on the different tasks I was gonna ask of [00:19:00] it, if maybe Gemini was better at helping me write abstracts, versus maybe ChatGPT was better at images for the cover slide, things like that.

[00:19:09] And so I used both of them until I got to a point where I realized the Gem from Google Gemini was just better at what I was looking for. It was good enough at everything that it stopped being worth my time to repeat the task in both of them, and I just spent probably 90% of my time working with the Gem instead of the custom GPT.

[00:19:31] Now, that's not always gonna be the case. The other thing I will do is, if I output a research report, say in ChatGPT, I may give it to Gemini and have Gemini function as the critic that assesses it and verifies outputs, things like that. So I'm a big fan of using multiple models, especially for really important work or deeper thinking where I want to get multiple perspectives.

[00:19:57] Sometimes they come out with roughly the same [00:20:00] output, which verifies it. Sometimes you get a little different thing. And so I really like it in those situations where you are doing planning and thinking and creativity and you just want to kind of bounce the ideas around. You can also use one as a critic to cross-check the output.

[00:20:16] So let's say you use Gemini to cross-check ChatGPT. They both still hallucinate. You can't just rely on Gemini to make sure everything in ChatGPT was factually correct. There's no way to get the human out of the loop, and I don't know that there should be, honestly, in the near future. But yes, I do the cross-checking thing all the time.

[00:20:37] I constantly have both Gemini and ChatGPT active, and then, depending on the project, I will use both of them sometimes.
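[Editor's note: a hedged sketch of that cross-check workflow. The same system instructions go to two models; one drafts, the other critiques, and a human reviews both. The two `call_*` functions are placeholders, not real API clients; in practice you would swap in the vendor SDKs of whichever two tools you use.]

```python
# Sketch of draft-then-critique across two models. The model calls are stubbed;
# replace them with real API clients if you adopt this pattern.

SYSTEM = "You are a research assistant. Cite sources and flag uncertainty."

def call_drafting_model(system, prompt):
    # Placeholder for model #1, the one you draft with.
    return f"[draft] {prompt}"

def call_critic_model(system, prompt):
    # Placeholder for model #2, acting as the critic.
    return f"[critique] reviewed: {prompt}"

def draft_and_critique(task):
    draft = call_drafting_model(SYSTEM, task)
    critique = call_critic_model(
        SYSTEM, f"Fact-check this draft and list questionable claims:\n{draft}"
    )
    return draft, critique  # both still need human review; critics hallucinate too

draft, critique = draft_and_critique("Competitive analysis of three vendors")
print(critique)
```

The design point, matching Paul's caveat above: the critic reduces but does not eliminate error, so the return value keeps both artifacts for a human to compare rather than auto-accepting the critique.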

[00:20:44] Cathy McPhillips: With bigger teams that, you know, can't afford to have everyone have two different licenses, what do you recommend?

[00:20:53] Paul Roetzer: Yeah, you make your bet. They're both great.

[00:20:56] I mean, I know some people like Anthropic's Claude. And some [00:21:00] people, if we're talking about corporate work, only have access to Copilot. So it's not just ChatGPT and Gemini. But I think the models are roughly commoditized. They're kind of on par with each other.

[00:21:15] They sort of leapfrog each other every three to six months. But if you have access to Gemini or ChatGPT or Copilot, I think you just work with the one you have. I don't know that you can go wrong, and I think they just kind of keep improving in different areas. I love Google Gemini 2.5 Pro. I mean, that's my go-to for work.

[00:21:35] I would say I probably use ChatGPT more personally, but I also really love the Pro version of ChatGPT, with the reasoning models. I pay the $200 a month for that; it's worth it for me. So, at a very high level: 20 bucks a month for Gemini, 20 bucks a month for ChatGPT. I mean, we're talking about PhD-level intelligence in your pocket.

[00:21:56] It's hard not to justify 40 bucks a month if you [00:22:00] have enough use cases for them. Sure. But if you're just using it three, four times a month, then no, you just pay for one of 'em and move on.

[00:22:06] Question #6: How should businesses weigh built-in AI assistants (like those in Google/Microsoft) versus standalone tools like ChatGPT? 

[00:22:06] Cathy McPhillips: Yep. Okay. Question six. We kinda dipped our toes into this answer already. How should businesses weigh built-in AI assistants, like Google's or Microsoft's, versus standalone tools like ChatGPT? And do you think enterprises will eventually standardize on one, or do you think we'll just live in a hybrid world for the time being?

[00:22:24] Paul Roetzer: Yeah, I mean, it's probably gonna follow along very similarly to productivity software, you know, for the last 20 years. Companies are gonna have an in-house thing, whether they're a Microsoft shop or a Google shop, or eventually maybe an OpenAI shop if they get into the productivity game, which it seems like they may.

[00:22:41] So yeah, I think we're gonna continue to live in this world where there's choices, probably two to three is what normally happens. One of them is gonna have 40 to 60% of the market share, and then somebody's gonna have 20%, and someone's gonna have single digits. It's probably gonna play out like that.

[00:22:56] The problem I've seen, [00:23:00] I mean, so we have Google Workspace internally. The Gemini app as a standalone is way better than Gemini built into Google Workspace. So if I go into Google Docs or Google Sheets, Gemini in those platforms is almost useless to me. I don't use it yet.

[00:23:18] I think they'll get there. But the Gemini standalone app is incredible. Mm-hmm. And then you can just export to Docs or Sheets. So I kind of work in reverse, right? I do my productivity in the app and then I bring it into the Workspace. The challenge people face within corporations that only have access to, say, Copilot is that sometimes it's a watered-down version of what you can get directly from ChatGPT.

[00:23:46] And that's where the issues come in: people have a ChatGPT account themselves, and they're used to working with the full version that's available through there. And then, because maybe they're in a healthcare company or financial services or a law firm, [00:24:00] there's more restrictions internally on what they want that Copilot to be able to do. So they might just not have all the feature sets in their corporate environment that they have outside of it, where it's not watered down.

[00:24:13] And that's where I think a lot of the frustration comes in, where people are like, oh, I have Copilot and it doesn't really do what I want it to do. It may just be because there's some guardrails in place that are limiting its functionality for you. But, you know, I think you're gonna use whatever your company gives you, basically.

[00:24:30] Question #7: Are we moving toward a standardized way for websites to guide how AI systems interact with their content?

[00:24:30] Cathy McPhillips: Right, right. Okay. Number seven. Are we moving toward a standardized way for websites to guide how AI systems interact with their content?

[00:24:42] Paul Roetzer: This is a tricky one. So Cloudflare recently enabled a capability where you could basically say you don't want the large language models to be able to learn from your content.

[00:24:55] You can kind of turn it off. So it's almost like a robots.txt, where it's like, don't come and take my content. [00:25:00] It's a challenging environment. We are entering a whole new world of how search engine optimization works, how people discover content. We are definitely starting to see reports now of fewer click-throughs to websites, because with Google's AI Mode now, and AI Overviews, people are just getting the answers they need right from the search engine, or they're getting them right from the chatbot or AI assistant, and they're not having to go to the website.
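For reference, the robots.txt-style opt-out Paul describes looks roughly like this. A sketch only: the user-agent tokens shown, such as GPTBot and Google-Extended, are the vendors' published crawler names as of this writing, so check each vendor's current documentation before relying on them; Cloudflare's managed blocking additionally works at the network level rather than through robots.txt alone.

```txt
# robots.txt sketch: opt specific AI crawlers out of using site content,
# while leaving normal search crawling alone.

User-agent: GPTBot          # OpenAI's training crawler
Disallow: /

User-agent: Google-Extended # Google's AI-training opt-out token
Disallow: /

User-agent: CCBot           # Common Crawl
Disallow: /

User-agent: *               # everyone else still allowed
Allow: /
```

Note that robots.txt is a voluntary convention, not an enforcement mechanism, which is part of why network-level controls like Cloudflare's exist.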

[00:25:28] So there's no, like, best practices yet. I think this is very much an independent decision that has to be made by brands. I would probably, at this point, caution against overreacting, because we know so little about how consumer behavior is going to evolve. I would hesitate to wall off your content and think that that's gonna get you ahead.

[00:25:55] It's not ideal that we see traffic plummeting to corporate sites, [00:26:00] but we knew this was gonna happen. We said this early last year on the podcast: I assume our SEO goes to zero. I assume our search traffic just goes to nothing. And so, you know, years ago I kind of followed this approach of, well, let's go to YouTube, let's go to podcasts.

[00:26:15] Like, let's diversify our content. Just put it everywhere. And if people don't come to our corporate website, fine, so be it. We'll just be where the audience is. So I think this is a much bigger-picture question around your overall content strategy and how people find you. If your company is dependent upon search traffic, you need to be urgently assessing that, because I think it's very safe to assume, whether you're B2B, B2C, or both,

[00:26:46]   we just can't rely on search engine traffic the way we used to. 

[00:26:51] Cathy McPhillips: But going back to just answering our customers' questions. I mean, that's the best thing we could be doing, in my opinion. Right, right. I mean, [00:27:00] yeah. And it's interesting. I was in GA4 a few weeks ago, and ChatGPT is one of our top referrers right now.

[00:27:06] And of course, to your point, I was like, oh my gosh, what can I do right this second? Yeah. And with Academy, I just couldn't stop what I was doing and focus on that. But, you know, it does mean: okay, let's have a strategy behind what we're doing and what's up there, and if it works with the LLMs, then awesome.

[00:27:22] Paul Roetzer: Yeah. And I think, you know, we just kind of assume this. So this AI Answers series is a great example of just creating value. Now, this transcript will be on the internet. It'll be sucked into the training data of all the models. And maybe the answer to these questions just shows up in ChatGPT with no citation.

[00:27:40] There's a very good chance something like that happens. I think we're just playing the long game of, okay, but what's the alternative? If we don't put our transcripts online, we don't solve for the customer, or for the end user who just wants the knowledge. So we're just making a bet: listen, let's just create as much value as humanly possible. [00:28:00]

[00:28:00] As a result of that, we build an audience of people who come to trust us and seek our knowledge out, whether it's through their podcast choice or their YouTube channel choice, or the searches they consume, whatever. And it'll just work out. It's weird, because I'm so much a metrics-driven person.

[00:28:17] Sometimes it's hard to take that leap of, I don't know the actual metric that's gonna prove this is working. But sometimes you just have to use your instinct: the alternative seems like the less-than-ideal choice of just shutting our content off from these engines. And so we're just gonna kind of take a leap of faith and do what we think is right.

[00:28:38] And I'm always a believer that, in the end, if you just solve for the audience, everything works out. And so if we just stay focused on: hey, the podcast gets, what, 110,000 downloads a month now, whatever it is, and it was 45,000 four months ago. It seems to be working. It seems to be helping people.

[00:28:57] The audience keeps growing, and as long as [00:29:00] you do that, and then you get the qualitative feedback from listeners about how it's helping them, how it helped them with a career transition or helped them reimagine their company, you just feel like it's the right path, even if the numbers don't always add up and tell you it is.

[00:29:13] And I think that's where you have to make these judgment choices that the AI is not gonna make for you. AI doesn't have human judgment, and it doesn't have the human experience that's been gained over years. And sometimes you just have to trust that human side of it.

[00:29:27] Question #8: How do you see different search engines being used or leveraged by AI companies?

[00:29:27] Cathy McPhillips: Right. Which leads us to question eight.

[00:29:30] How do you see different search engines being used or leveraged by these AI companies?

[00:29:36] Paul Roetzer: Yeah. I mean, this is such an unknown space. We're watching a real-time innovator's dilemma right now with Google, where other people are coming in and changing the search engine market. And, you know, ChatGPT still dominates in terms of overall searches going to these chatbots, and it changed the way people [00:30:00] seek information.

[00:30:00] And so the search engine company has had to evolve, and in some ways they seem to be willing to cannibalize their original business, which, a year back, I don't think most people thought they had the will to do. And they do seem to be willing to do it now. And so I think search overall is just going to evolve as consumer behavior and how we seek information changes.

[00:30:26] And I don't know that anybody really has a great view into how that looks two, three years out, because there's just too many big-time variables. Like, how much will voice play into all of this? You know, search historically has been: we type in something and we get a results page. Now it's evolved to: we type in a prompt and we get a response from a chatbot or an AI assistant.

[00:30:51] Well, if Siri actually becomes intelligent, and if ChatGPT voice gets integrated, and if Meta has their way and [00:31:00] we start interacting with our glasses, and, you know, maybe Apple comes to the market with AirPods that people actually just talk to, then maybe voice becomes the way we search, and then all bets are off.

[00:31:12] So I think there's too many people who see voice as the possible next major interface to be able to accurately predict what happens to search engines, because whatever we think a search engine is today will look nothing like that if voice becomes a dominant interface. Even if it's just Gen Z, even if it's just the next generation that uses voice all the time, then you'll see this slow progression.

[00:31:37] So maybe there is, like... I don't even know what generation we are. Whatever, Gen X, whatever,

[00:31:43] Cathy McPhillips: that you and I are. 

[00:31:44] Paul Roetzer: Yeah. What are we? We're 

[00:31:45] Cathy McPhillips: Gen X. 

[00:31:46] Paul Roetzer: Okay. Yeah. 

[00:31:49] Cathy McPhillips: I'm very proud of that. 

[00:31:50] Paul Roetzer: Yeah. So, like, maybe we don't change. Maybe we still like our search engine, and maybe we type it in, and maybe we're always gonna kind of be more comfortable doing that.

[00:31:58] But maybe [00:32:00] voice just gets really good, and maybe we do change. So I don't know. And I think, again, if you're in a position within your organization where search matters, it's a space you should be watching very closely, because we're learning new things each month as it goes by, and we see new data points where now you're actually able to watch the trend line of organic traffic plummeting for a lot of major sites.

[00:32:24] Question #9: How do you choose the right AI model for marketing, HR, and sales tasks? 

[00:32:24] Cathy McPhillips: Absolutely. Okay. Number nine. We often focus on outcomes and use cases when selecting tools, but should we consider other things like transparency and governance integration? We've talked about how, you know, sometimes it's best to pick a tool that aligns with your tech stack, but should we look at transparency, governance, environment?

[00:32:43] Should we think that big yet? 

[00:32:45] Paul Roetzer: Yeah. I mean, I think you should always be having those conversations. You know, this is where the generative AI policies come into play so much, where you're thinking about how your organization uses AI, you're thinking about [00:33:00] kind of the user stories behind it.

[00:33:01] Okay, what's HR gonna do? What's marketing gonna do? What's sales gonna do? How much do we need to put guardrails in place? And I'm kind of a believer in not getting too into the weeds on this. You can't govern every behavior. You want to govern the overall responsible usage of this technology, and you wanna be clear on how to do it safely.

[00:33:25] So, for example, I just built the generative AI policies course for AI Academy, and within that, it was the first time I conceived of AI agent guidance specifically related to computer use. And what that means is you can now, through Anthropic, through Google, and through OpenAI, enable these AI agents that can kind of take over your screen.

[00:33:50] You can also do it through Microsoft, and they can perform things on your screen, like filling out forms and clicking on things. They can actually go and interact, potentially even make [00:34:00] purchases on your behalf. I am a huge believer that should be outlawed within companies. Your employees should not have the independent choice to turn on a computer use agent, because there is so little known about the risks of those things.

[00:34:15] And so that has to be considered within your policies. And at this moment, I don't know of people who have done that, because most business leaders aren't even aware computer use is a thing. So I think that, again, you have to know your employee base. You have to know the risks you have within that organization.

[00:34:34] But this is where legal and IT really need to be deeply involved across different departments of the organization, to make sure that we're giving people the freedom to experiment with AI and to drive efficiency, productivity, and performance with it, but also protecting them from themselves, to make sure we're not misusing the technology in a way that creates greater risk than we need to.

[00:34:56] Question #10: What role do you see AI playing in building and managing communities?

[00:34:56] Cathy McPhillips: Absolutely. Okay. Number 10, [00:35:00] what role do you see AI playing in building and managing communities? Is it more about efficiency, like automation and moderation, or about enhancing human connection? 

[00:35:09] Paul Roetzer: Yeah, I don't know. You're way more involved in our communities than I am, Cathy, so maybe you have something else to say here.

[00:35:14] But the way I think overall about automation is: automate the things that are low-impact, low-human, where people just want the information and they're not trying to make a human connection, to free your people up to spend more time on the human connection side.

[00:35:31] So, yeah, I don't know, just a random example. Let's say we took our podcast transcript from every Tuesday and we had an AI do a summarization of it, which it does in 25 seconds but would take Claire two hours otherwise. Nobody in our community cares if the summary of the transcript was written by AI or by Claire.

[00:35:57] They just want the 10 bullet points of what we're [00:36:00] talking about this week. Now, if they had questions about why Paul was saying this, and what does he mean by that thing he said, they're gonna want me or Claire or you to come in and say, listen, I think here's the intent of what he's trying to say.

[00:36:13] They don't want ChatGPT interpreting that. So I think that's where you have to draw these lines of: what is automatable? What are the things we should automate? And then where are the things where the human should be there? And then how do we use the automation to free the humans up to do the more human stuff more often?

[00:36:31] Is that... again, you're in there all the time. I don't know if I explained it very well.

[00:36:35] Cathy McPhillips: I agree. You know, I've told this story for about four years: the first time I ever used AI working with the Institute, I was writing MAICON 2021 copy. And I was just like, what is this magic? Mm-hmm. And it saved me.

[00:36:49] And it was fine. I had to go through and edit a lot of stuff, obviously, but then I was like, that just saved me like half a day. So then I'm calling people, I'm emailing people one-on-one. Yeah. And that was such a better use [00:37:00] of my time. So, obviously, that's what we're all doing right now with efficiency gains.

[00:37:04] But right now, if Macy came to me and said, oh, I typed out all of the social by hand and I got it all posted and it took me this long, I'm like, why didn't you use AI to do that? So you could be in our community, engaging with our customers, listening to them, hearing what they need, getting to know them.

[00:37:21] That's so much more valuable to our business and to us. And that would bring me so much more joy than writing social posts.

[00:37:29] Paul Roetzer: Yeah. I think, in the responsible AI principles that you mentioned earlier, which we'll put a link to in the show notes, there was a line I wrote that said, I think, automation without dehumanization.

[00:37:41] And so this whole idea of, yeah, we're not trying to automate everything. We're not trying to automate relationships and human connection. We're actually trying to enrich those things by automating the stuff that we should be automating, the stuff that's just data-driven and repetitive, with no real human value to the output other than people just want the information.

[00:37:57] And that was the whole premise of my AI for Writers Summit [00:38:00] keynote this year: when do we use AI? When is it the human that should be in it? And even if we can use AI to automate the whole thing, when should we? Right. And I think that's kind of a subjective thing.

[00:38:14] We all kind of make those choices, but hopefully your community managers make those choices. But again, even beyond community, like customer relationships and customer service: when is a chatbot okay, and when does the human need to step in? Right. We have to make these choices.

[00:38:29] Cathy McPhillips: And back to the, you know, writing social posts.

[00:38:31] Question #11:  From an information architecture perspective, what frameworks should teams use when integrating AI into CRM or workflow automation to keep systems scalable and secure?

[00:38:31] Cathy McPhillips: Those are posts to distribute content, not to respond to somebody that needs information. Yeah. So, yeah. Okay. Number 11: from an information architecture perspective, what frameworks should teams use when integrating AI into a CRM or workflow automation to keep the systems scalable and secure? So I think it goes back to that whole IT and legal side of things.

[00:38:52] Paul Roetzer: Yeah. And, you know, I think anytime you're looking at workflow automation, the first thing you have to do is just define the workflow. I think [00:39:00] so many times the greatest gains early on in AI adoption come from just taking your 5, 10, 20 top workflows and saying, okay, here's the 10 steps of this one, the 15 steps of that one.

[00:39:12] Where can AI fit into these steps? Which ones do we want the humans to remain either in the loop on or fully doing? And then from there, you really start to address these bigger questions around security. So maybe you look at something, I don't know, just to stay on the podcast example.

[00:39:29] Say there's 50 steps in our workflow to do the podcast every week. You go through and say, okay, 20 of these we can use AI on. Two of these we probably don't want to, though, because some data's gonna go into the system that we don't wanna put into the chatbot, whatever. And so you can then go through and kind of do it.

[00:39:45] So it starts with identification of the workflows. Then a breakdown of that workflow into the tasks that go into it. Then: which ones can AI actually help us with? Then: do we want AI to actually help us with this? Is it safe to use AI in this process? And so, [00:40:00] again, I'm using the podcast, but you can expand this out to, say, what's the workflow to do the customer analytics report each week?

[00:40:08] And so maybe there's a step in that process where it's like, okay, well, we can't put this information into ChatGPT. So even though it would help, let's not do that yet, until we have an internal private chatbot through an API, or we don't have any concern about data going back to OpenAI or somebody like that.

[00:40:24] So again, depending on your level of sophistication, you may need to be working cross-departmentally with other people within your organization to make those final decisions. Like, hey, I've identified 20 ways I can improve my efficiency. Here's three I'm a little uncertain about, though, about whether we should do them, whether they're a gray area in our generative AI policies.

[00:40:44] Can you, IT team, please look at this and assess it? Or, you know, the risk department, whatever it is, depending on your industry.
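The triage Paul walks through here (list the workflow steps, mark which ones AI could handle, then route anything policy-sensitive to IT or legal for review) can be sketched in a few lines. This is purely illustrative; the step names and the flags are invented for the example:

```python
# Minimal sketch of workflow triage: each step is tagged with whether AI
# could handle it and whether doing so is safe under the gen AI policy.

WORKFLOW = [
    # (step, ai_capable, safe_per_policy)
    ("Pull raw analytics export",       True,  True),
    ("Summarize weekly trends",         True,  True),
    ("Load customer PII into a prompt", True,  False),  # capable, but blocked by policy
    ("Present findings to leadership",  False, True),   # keep the human in this step
]

def triage(workflow):
    """Split steps into automate / escalate-for-review / keep-human buckets."""
    automate, escalate, keep_human = [], [], []
    for step, ai_capable, safe in workflow:
        if not ai_capable:
            keep_human.append(step)       # no AI fit: stays fully human
        elif safe:
            automate.append(step)         # AI fit and policy-safe
        else:
            escalate.append(step)         # AI fit, but flag for IT/legal review
    return automate, escalate, keep_human

automate, escalate, keep_human = triage(WORKFLOW)
print("Automate:", automate)
print("Escalate for review:", escalate)
print("Keep human:", keep_human)
```

The point of the third bucket is exactly the handoff described above: the efficiency gain is identified, but the final call belongs to IT, legal, or risk.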

[00:40:51] Question #12: What are the most common mistakes companies make when trying to ‘force-fit’ AI into a workflow?

[00:40:51] Cathy McPhillips: Yeah. Which is a good flow into number 12. What are some common mistakes companies make when trying to force fit AI into a workflow? [00:41:00] 

[00:41:00] Paul Roetzer: AI is not always the answer.

[00:41:01] So often, I think it's just a level of competency in what AI is capable of and when we should use it, and people being very early in their comprehension of it and how to use it. Again, AI agents might be a great example here. If you just hear that term and you think, oh, I'll just make everything agentic, everything's just gonna be automated through agents,

[00:41:24] you probably don't have an advanced enough understanding of what agents are and where they are in their current capability. So, again, for AI Academy I just built an AI Agents 101 course, so all this is kind of top of mind for me. You have to understand the capabilities of AI, and then there's that subjective part about when we want the human and when we want the AI to do things.

[00:41:48] And so AI isn't the answer to every problem or every need to increase efficiency or productivity. I think going in with that mindset, it's great to assess workflows, it's great to look [00:42:00] at problems differently, but AI isn't always the answer. Sometimes more human is the answer.

[00:42:05] Sometimes the answer is simple automation that has nothing to do with AI. It's just literally rules-based: hey, we're gonna set up this workflow with Make or Zapier or whatever, and it's no AI at all, just literally workflow automation. And so again, it comes down to understanding what the technology is capable of, and then you go from there.

[00:42:23] Question #13:  Which AI tooling is best suited to develop and monitor a marketing communications strategy at SME vs. enterprise scale? 

[00:42:23] Cathy McPhillips: Okay. Number 13, do you see adoption patterns differ between small businesses and large enterprises?

[00:42:30] Paul Roetzer: This one's probably really similar, again, to any traditional technology or software decisions. I mean, certainly smaller companies can move quicker. The CEO of, like, a 20-person, 50-person, hundred-person company can decide in an afternoon.

[00:42:43] It's like, all right, we're getting ChatGPT Team for everybody. We're gonna roll it out. We're gonna do a quick training session next Monday, and then I expect everyone to be using it by next week. Things happen fast. We see this with our AI Academy. You'll get on a call and they'll be like, all right, we want 25 licenses tomorrow for our team.

[00:42:59] Like, [00:43:00] let's go. There's no procurement process, there's no anything. You just move, you make decisions, and you go. And then at larger companies, obviously, sometimes there's a bigger procurement side to this. There's more bureaucracy, there's more guidelines, there's sometimes less of a tolerance for risk.

[00:43:20] And so obviously things just move slower. We've advised some really big companies where, say, a marketing team just wants to function like a small unit and doesn't want to have to wait for the bureaucracy to figure everything else out. And so sometimes what happens in large companies is the IT department, the CIO, whomever, is working with, say, a Microsoft to do a massive installation. We're talking about five, 10,000 licenses.

[00:43:47] And meanwhile, the marketing team's like, we just want 10 licenses to ChatGPT Team so our team can build some GPTs and send our emails faster, like our newsletters or whatever. And so we've worked with those kinds of [00:44:00] organizations where we'll just say, all right, fine, let's just go do that. And sometimes you get permission, sometimes you don't.

[00:44:06] Depending on your organization, you have to make those calls yourself. But you default to: we can't wait six months for them to figure out whether maybe we get some Copilot licenses on the marketing team. We just gotta go now. And so I think sometimes within large companies, you need individual business units with some autonomy to function in a more nimble way that doesn't put anything at risk.

[00:44:30] You know, make sure the use cases are still safe within the generative AI policies, things like that. But yeah, that's the biggest thing, I guess: the speed. Small companies just move faster and they can take more risks. It's how it's always been, though. Mm-hmm. This isn't new to AI.

[00:44:48] Cathy McPhillips: Yeah. My husband and I have that conversation a lot, 'cause he's at an enterprise and he's like, it's just done?

[00:44:53] Like, yeah, we just did it.

[00:44:55] Paul Roetzer: I mean, yeah, it would have taken months. The way we function, it's like, all right, we're gonna launch an AI [00:45:00] Academy in, like, three months, and it's gonna have 40 new courses. And people are like, you're gonna do what? It would take us three months to even decide what the first course was gonna be.

[00:45:10] Cathy McPhillips: right.

[00:45:11] Question #14: Do you think AI fluency will become a baseline requirement for executives, or is it creating an entirely new kind of leadership role?

[00:45:11] Cathy McPhillips:  Yeah. Okay. Number 14, do you think AI fluency will become a baseline requirement for executives? Or is it creating an entirely new kind of leadership role? 

[00:45:20] Paul Roetzer: Yeah, I mean, obviously we are very big believers that AI literacy is maybe the most important skill moving forward, at all levels. I think it's gonna be very difficult to continue to maintain the authority and trust you have with your employee base as a leader if you don't understand AI.

[00:45:43] So if you're a CEO, a CMO, a head of HR, whatever it is, your employees are going to be figuring this stuff out. And if they're the ones always trying to explain it to you, or to get buy-in from you to do [00:46:00] something, they're gonna get really frustrated, because once you understand this stuff, it's so obvious that it has tremendous benefits to the company: the efficiency, the productivity, the performance, the creativity, the innovation, the decision-making, the problem-solving.

[00:46:17] And so it's very hard to run companies where the executive team is unaware of all of the ways they could be making the company smarter and better with AI. So yes, I do believe deeply that AI literacy, AI fluency at the executive level, is an imperative. And I think that's gonna become very painfully obvious in the next six to 12 months at all levels.

[00:46:43] I think we're getting there now with public companies, because these executives are being asked about it on earnings calls every three months. But I think we're getting to the stage where, you know, it truly is required.

[00:46:55] Question #15: What should creatives in fields like graphic design or UX/UI be thinking about as AI continues to evolve? 

[00:46:55] Cathy McPhillips: Absolutely. Okay. Number 15. What should creatives in [00:47:00] fields like graphic design or UX and UI be thinking about as AI continues to evolve and what have you seen creative professionals do successfully to stay ahead?

[00:47:09] Paul Roetzer: Yeah, this is interesting. So this morning I was actually listening to a Lex Fridman podcast with Sundar Pichai, the CEO of Alphabet and Google, and they were talking about the impact of Veo 3, their video generation model. And Sundar was bringing up the point that if we go back and think about the disruption of media, you know, you go back 10 years, the idea that you could have podcasters like Lex Fridman who have these massive audiences, that's very disruptive to media companies.

[00:47:37] Media companies were the gatekeepers. They were the ones that put information out into the world, and that was it. And now we have tens of thousands, probably hundreds of thousands, of podcasts. So we empowered all these people (that was through a distribution channel, not through AI), but we empowered all these other people to become gatekeepers themselves, to become media [00:48:00] channels.

[00:48:00] And I think the people who are really good at podcasting or media, they'll rise to the top, because just like everyone can create podcasts doesn't mean everyone gets to build an audience. And so I think creativity as a whole is gonna follow a similar path. Yes, like, I can go in and create an eight-second video now.

[00:48:18] I have zero ability to do video production, but I can do that now. But someone who does video for a living can do things I can't even dream of with Veo 3. Like, Claire on our team is way beyond any of our abilities with video creation. And so what Claire can do with Veo 3 versus what you or I could do, Cathy, it's like magic.

[00:48:41] So I think that's what's gonna happen at all levels, whether it's graphic design, video production, even writing and research, all of these fields where we have to output something, where there's creative elements to it. The people who are already good to great are just gonna 10x up. They're gonna have just [00:49:00] tremendous superpowers to improve their outputs, and to improve the volume of outputs if they choose to.

[00:49:06] And then it's gonna democratize it for everybody else, who all of a sudden can now create stuff. And so I think it's gonna be a noisier space, but it's gonna be a bigger pie of creativity. And yeah, that's kind of how I think about it: the people who embrace this and figure it out, they're still gonna be creative.

[00:49:25] Like they're still gonna be designers and video professionals and writers, but they're just gonna have these kind of underlying superpowers. And I think that's exciting. But I can also see how, if you don't wanna embrace it, it can be a bit daunting, and it can feel like the thing that defines you maybe isn't as special anymore.

[00:49:42] And I don't think that's true. I mean, my wife is an artist. My daughter's an artist at 13. I don't think that at all. They're way more talented, and if they choose to use AI in what they do, it's just gonna level up what they're capable of.

[00:49:56] Cathy McPhillips: Right. Yeah. One of my really good friends is in graphic [00:50:00] design, and for a long time he's like, absolutely not.

[00:50:02] Absolutely not. And then recently he's like, hey, it's doing all these things that I don't wanna be doing, so I can really be more creative. Or, I'm using it for ideation with my team who isn't creative, to help us be able to communicate better with each other. So there are so many ways that he's been using it that aren't taking anything away from him.

[00:50:21] Paul Roetzer: Yeah. And I think that comes back to that awareness and understanding. If you haven't embraced AI yet, again, whether you're a writer, a graphic designer, whatever, you just look at it as that thing that's gonna replace what you do, and so you don't want anything to do with it. Versus, well, maybe there's like 50% of my job that I actually don't enjoy.

[00:50:42] What if I just use it for that, and I can actually do more with the other 50% now? And I think once people take the time, whether it's coming to our intro class or just having that first experience of, oh wait a second, this is amazing. Like, I hate writing the report on Sunday [00:51:00] nights that my CEO wants, and I don't have to do that part anymore, and I can be back with my family on Sunday night and actually do something else.

[00:51:07] I think once you find those use cases that make you realize you still get to be you, and the thing that made you special, you're still special, you still have those abilities, then I think you sort of change your perspective on AI. When you realize you still get a choice, it doesn't have to replace you.

[00:51:25] You get to choose how you use it. 

[00:51:27] Cathy McPhillips: One of the first conversations I had with Jeremy on our team, who started a few months ago, was him showing me this tool that could version out ads and do it well. And I was like, excuse me, what? Because right now that's been me and Canva.

[00:51:41] Paul Roetzer: Yeah. 

[00:51:41] Cathy McPhillips: And it takes forever.

[00:51:51] Paul Roetzer: Yeah. And there's no fulfillment from that. You don't get fulfillment in your job from that. It's a task you have to do as part of your job.

[00:51:51] Cathy McPhillips: I'm just being bitter about versioning out ads.

[00:51:54] Paul Roetzer: Yeah. And honestly, that's an interesting filter, Cathy. You know, we talk about how with JobsGPT you can go in and see, [00:52:00] here's all the ways AI can help.

[00:52:01] Which is a custom GPT I built that's available to people; we'll put it in the show notes. But one way you can think about it is, if you just took a spreadsheet and wrote down, okay, here's the 25 things I do in my job, and then you made a column that says fulfillment, and it's just a yes or no.

[00:52:15] Like, do I get fulfillment from doing this thing? Do I enjoy this part of my job? Take the things where you say no, and those are the first things you should automate. Then for the things that give you fulfillment, you free up more time to do those things.

[00:52:29] Question #16: How do you see coding and technical skills as careers in a world where today’s kids will grow up with AI?

[00:52:29] Cathy McPhillips: Yeah. Okay, number 16: how do you see coding and technical skills as careers in a world where today's kids will grow up with AI, and if needed, what other skills should be developed in tandem?

[00:52:42] Paul Roetzer: I think I've talked about this one on the podcast. My son is 12. He has taken a keen interest in coding, game design, robotics. I'm all for it. Watching him play Minecraft, watching the things he builds when he goes to these coding camps, [00:53:00] you can just see it. It is teaching problem solving.

[00:53:04] It's teaching working through hard things, doing repetitive tasks that require two, three hours of focus. That is transferable. Whatever coding looks like when he gets out of college in nine years or whatever, anything he learns, these skills and behaviors will be applicable. And so, would I pay a hundred thousand dollars a year for a college right now for someone to go get a computer science degree if my son was a senior in high school?

Like, that's a conversation we would probably have to have, of, I don't know that it's necessary to do that. You could take these classes at Ohio University, a great liberal arts college, do the computer science there, and not spend a hundred thousand. I would have a hard time with that.

[00:53:55] I would think more deeply about the true value of a computer science degree [00:54:00] versus getting that knowledge and those skills from anywhere. So I think the prestigious universities may struggle in the coming years to justify the cost of a computer science degree. Not that the degree itself isn't valuable; it's just, is it as valuable as it would be at a major university?

[00:54:20] That's something they're gonna have to face. I think that's probably already happening. I just saw a stat yesterday that computer science majors are having a very difficult time getting jobs right now. So I think we're in this challenging job environment where there's questions, but the technical skills, the behaviors, the traits developed are valuable, and I think we have to figure out economically what that means for getting degrees in it and things like that.

[00:54:46] But I am not at all discouraging my son from pursuing that path right now. I think it's a very viable path. And if I was running schools, I would be leaning into training these skills and traits regardless of what the [00:55:00] job market may look like for computer science degrees at the moment.

[00:55:03] Cathy McPhillips: But I think it's also as important to be teaching them communication skills and relationship skills and all of that, because we all need that, and sometimes those

[00:55:13] Paul Roetzer: don't go hand in hand.

[00:55:14] Like, I do worry about that. Problem solving, strategic planning, you're getting that playing Minecraft and building these environments, but okay, now let's step out of this and go to the playground. It is a hard balance to give kids those skills as well.

But you're a hundred percent right. The communication skills are fundamental, and I would make sure they're getting that balance.

[00:55:35] Question #17: What’s the best way to handle situations when AI gets things wrong, and how do you approach fact-checking? 

[00:55:35] Cathy McPhillips: Number 17: what's the best way to handle situations where AI gets things wrong, and how do you approach fact checking? What processes and humans are needed, has your answer changed, and has AI gotten better?

[00:55:51] Paul Roetzer: Yeah, I mean, it's getting better. The hallucination rate, the error rate, is going down as the models get smarter, but it's still there to the point where you can't rely on the AI output [00:56:00] on its own without human fact checking, especially if it's an important piece of information you're putting out. So I shared this example.

[00:56:07] We talked about the AI gaps on the podcast recently, and one of them was the verification gap. I can go into Google, I can run a deep research project in Gemini right now, and it'll give me this 40-page output that looks incredible. It has all kinds of data, dozens of citations, and it's like, man, on the surface this looks better than what any human I've ever hired would output.

[00:56:27] And then you dig into it and you're like, okay, but the whole thing comes down to this one data point, and where did it get that data point from? And then you go into the citations and you're like, Ooh, boy, I would never cite that source. And where did that come from? And then you start digging into it, and then the dominoes start falling where you're like, this looks amazing.

[00:56:44] It looks like a PhD student wrote this thing, but it's all based on flawed assumptions and data, and so I have to throw the whole thing out. And so I think that's the problem we see now: people don't understand that these things get stuff wrong all the [00:57:00] time, entities like facts, names, places, data points, whatever, and they just assume they can publish or share internally whatever it says.

Like, you do that surface-level scan and it's like, oh, this is amazing, I just did the five-hour job in five minutes and I'm gonna send it to my boss. And then the boss looks at it and, two lines in, knows that nobody checked this thing. And I think that is the danger right now in companies: there's so little true understanding of how these things work and where the errors can occur.

[00:57:32] And so you have lower-level managers outputting things with ChatGPT and Gemini, passing them on to their leaders. The leader, who maybe has more domain expertise or intuition, questions things more thoroughly than maybe middle management does. And that's where we kind of have problems. And the same with interns and entry-level employees.

They can do things really fast, but sometimes fast is not good. And the simplest litmus test I always give, I've done this since the early days of my agency, is I would just ask somebody: is this the best you can do? Gimme this research report, great. Gimme this strategy.

[00:58:10] Great. Is this the best you can do? And if the answer internally is, yeah, I didn't actually check the sources, or maybe I didn't do a full edit, then whether AI helped you or not, the question is the same. Is this the best you got? Because if I'm gonna take the two hours to read this and I find errors in it, we've got a problem.

[00:58:28] And I think too many people are hitting the easy button right now when it comes to using AI for research and planning. And I think there's gonna be some repercussions for that within businesses.

[00:58:39] Question #18:  If you had to narrow it down to just one ethical principle that matters most right now, which would it be—and why?

[00:58:39] Cathy McPhillips: I agree. Number 18: if you had to narrow it down to just one ethical principle that matters most right now, what would it be and why?

[00:58:48] Paul Roetzer: Ooh, wow. For me, we talk a lot about this, but everything we do is [00:59:00] about putting humans at the center of this, unlocking human potential, not replacing it. I'm just a big believer that it's too easy to look at what AI is capable of and say, well, let's just get fewer people.

[00:59:15] Let's save some money, let's increase our margins. And ethically, I don't think that's the right thing. I think the right thing, ethically and morally, is to say, how do we create more fulfilling lives for people? How do we create more time for people in their personal lives and their business lives, so they get more fulfillment out of their jobs and

[00:59:33] their family lives. That's the most important thing. If I didn't think that was possible, I wouldn't be doing what we're doing. It's why I'm doing it myself. And I don't know if I've ever publicly told this story, but the SmarterX logo, the icon, is a black hole.

[00:59:49] Nobody probably knows that other than Cathy, who worked with me on the logo design. But the whole premise of a black hole, if you don't know the concept, is that as you [01:00:00] approach a black hole, time dilates. It slows down because of the gravitational force of the black hole. I have a fascination with cosmology.

[01:00:08] I have a fascination with physics and all these things. And so when we were building the logo, I wanted it to represent the slowing down of time, because to me, the greatest value that AI can give humanity is to slow time down. Time is the one thing none of us can get back. And so if we are able to automate some things that we don't get a ton of fulfillment out of, and if that gives us more time to do the fulfilling things, or to be with our families and friends,

then we've made an impact. And that's why SmarterX exists. That's why I started pursuing AI 13 years ago: I wanted to create more time. And so keeping that centered in what we do is, to me, very important.

[01:00:48] Question #19: How should companies address internal concerns around data privacy, compliance, and governance?

[01:00:48] Cathy McPhillips: That's such a nice answer. Okay. Number 19, how should companies address internal concerns around data privacy, compliance, and governance?

[01:00:56] And do you see regulatory momentum changing how companies handle this? [01:01:00] 

[01:01:00] Paul Roetzer: This is definitely gonna be, in many ways, tied to what industry you're in. And again, AI or no AI, you are governed by these same policies and laws and regulations. And so you have to just accept that and be aware of that. Now, it is a dynamic environment.

The laws are evolving, the regulations within different industries, the data privacy regulations, all of this is constantly evolving. But again, regardless of AI, that is true. AI is just accelerating a lot of it and creating more questions and unknowns that need to get addressed. But this is why it's so important to work closely with your legal team and your risk team, to do things within the parameters that keep your data safe, keep your customers' information safe, and keep your employees safe from doing things they shouldn't be doing.

[01:01:53] Question #20: Which AI applications do you expect to break through sooner than people think—and which ones are overhyped?

[01:01:53] Cathy McPhillips: Yeah. Okay, last question, number 20: which AI applications do you expect to break through sooner than [01:02:00] people think, and which ones are overhyped?

[01:02:03] Paul Roetzer: So I think AI agents are overhyped for sure. They're just misunderstood. And that's the fault of the technology companies themselves, which presented them as these autonomous things that they're not.

[01:02:14] Yeah. That being said, two, three years from now, they're not overhyped. I think that long term, AI agents will transform the future of work and business. I just feel like out of the gate they got a little bit over their skis in terms of autonomy. I think the thing that's overlooked right now is reasoning models.

[01:02:32] I really, very confidently believe that most business leaders have no concept of how significant reasoning models are to high-level knowledge work: strategic planning, decision making, problem solving, innovation. The ability to go through these chains of thought, to think more deeply about problems.

They get smarter the longer they think. That's just weird. And most people have never [01:03:00] even tried a reasoning model knowingly. They've never run a deep research project. And I think once you do, you can't look at anything the same. You look at business differently. So I think over the next six months or so, more and more business leaders are going to knowingly or unknowingly start experiencing the power of reasoning models.

[01:03:22] And I think that will accelerate change within businesses even more than we're already seeing. 

[01:03:28] Cathy McPhillips: Wonderful. Since we still have you, and since tomorrow is Friday, August 22nd, and early bird prices for MAICON are ending, do you wanna give like a 30-second or 60-second plug on MAICON and some of the new speaker announcements we have?

[01:03:42] Paul Roetzer: I don't know, are we making speaker announcements?

[01:03:44] Cathy McPhillips: We are. We've got a couple of them. 

[01:03:47] Paul Roetzer: So, yeah. MAICON is October 14th to the 16th in Cleveland. This is our sixth annual, Cathy, is that right? Okay. So you can go to MAICON.AI. You can see the [01:04:00] agenda,

[01:04:02] We do have what it looks like, I dunno, six or seven new speakers that we've just added. Are they added to the site now? 

[01:04:08] Cathy McPhillips: They are, 

[01:04:08] Paul Roetzer: Yeah. I'm learning things when we do these podcasts; I didn't know who was actually added to the site. So we have an incredible lineup on the main stage, an incredible lineup of breakout talks.

[01:04:17] There's four amazing workshops. And yeah, go to the site, check it out, and you can see all the speakers. I think the marketing team's probably gonna be sending out announcements of some of the keynotes that we're adding as we go. So yeah, it's awesome. You can use the pod100 promo code, and if you get in by Friday the 22nd, you can take advantage of the early bird pricing.

[01:04:40] Cathy McPhillips: Yes, you can. All right. Thank you, Paul, as always. And we will see everyone next time. 

[01:04:45] Paul Roetzer: Thank you. And thanks to Google Cloud for sponsoring the AI Answers series. Thanks for listening to AI Answers. To keep learning, visit SmarterX.ai, where you'll find on-demand courses, upcoming classes, [01:05:00] and practical resources to guide your AI journey.

[01:05:03] And if you've got a question for a future episode, we'd love to hear it. That's it for now. Continue exploring and keep asking great questions about AI.
