56 Min Read

[The AI Show Episode 154]: AI Answers: The Future of AI Agents at Work, Building an AI Roadmap, Choosing the Right Tools, & Responsible AI Use

Featured Image

Wondering how to get started with AI? Take our on-demand Piloting AI for Marketers Series.

Learn More

In this episode of AI Answers, Paul Roetzer and Cathy McPhillips tackle 20 of the most pressing questions from our 48th Intro to AI class—covering everything from building effective AI roadmaps and selecting the right tools to using GPTs, navigating AI ethics, and understanding great prompting.

Over the last few years, our free Intro to AI and Scaling AI classes have welcomed more than 32,000 professionals, sparking hundreds of real-world, tough, and practical questions from marketers, leaders, and learners alike. This series is our way of diving deeper—offering quick, unscripted, and honest takes on what’s top of mind across the AI landscape. Listen or watch below, and find the show notes and transcript beneath the episode.

Listen Now

Watch the Video

 

What is AI Answers?

AI Answers is a biweekly bonus series that curates and answers real questions from attendees of our live events. Each episode focuses on the key concerns, challenges, and curiosities facing professionals and teams trying to understand and apply AI in their organizations.

In this episode, we address 20 of the most important questions from our July 10 Intro to AI class, covering everything from tooling decisions to team training to long-term strategy. Paul answers each question in real time—unscripted and unfiltered—just like we do live.

This week’s episode is organized into five key areas: vision and philosophy of AI; emerging technologies and the agent ecosystem; business strategy, adoption, and career impact; trust, ethics, and responsible use; and the future outlook.

 


Timestamps

00:00:00 — Intro

Vision & Philosophy of AI

00:08:46 — Question #1: How do you define a “human-first” approach to AI?

00:11:33 — Question #2: What uniquely human qualities do you believe we must preserve in an AI-driven world?

00:15:55 — Question #3: Where do we currently stand with AGI—and how close are OpenAI, Anthropic, Google, and Meta to making it real?

00:17:53 — Question #4: If AI becomes smarter, faster, and more accessible to all—how do individuals or companies stand out?

Emerging Technologies & the Agent Ecosystem

00:23:17 — Question #5: Do you see a future where AI agents can collaborate like human teams? 

00:28:40 — Question #6: For those working with sensitive data, when does it make sense to use a local LLM over a cloud-based one?

00:30:50 — Question #7: What’s the difference between ChatGPT Projects and Custom GPTs—and how do you decide which is better for a given task?

00:32:36 — Question #8: If an agency or consultant is managing dozens of GPTs, what are your best tips for organizing workflows, versioning, and staying sane at scale?

00:36:12 — Question #9: How do you personally decide which AI tools to use—and do you see a winner emerging?

00:38:53 — Question #10: What tools or platforms in the agent space—like HubSpot, Salesforce, or chatbot integrations—are actually ready for production today?

Business Strategy, Adoption & Career Impact

00:43:10 — Question #11: For companies just getting started, how do you recommend they identify the right pain points and build their AI roadmap?

00:45:34 — Question #12: What AI tools do you believe deliver the most value to marketing leaders right now?

00:46:20 — Question #13: How is AI forcing agencies and consultants to rethink their models, especially with rising efficiency and lower costs?

00:51:14 — Question #14: What does great prompting actually look like? And how should employers think about evaluating that skill in job candidates?

00:54:40 — Question #15: As AI reshapes roles, does age or experience become a liability—or can being the most informed person in the room still win out?

00:56:52 — Question #16: What kind of changes should leaders expect in workplace culture as AI adoption grows?

Trust, Ethics & Responsible Use

01:00:54 — Question #17: What is ChatGPT really storing in its “memory,” and how persistent is user data across sessions?

01:02:11 — Question #18: How can businesses—especially in regulated industries—safely use LLMs while protecting personal or proprietary information?

01:02:55 — Question #19: Why do you think some companies still ban AI tools internally—and what will it take for those policies to shift?

Closing: Future Outlook

01:04:13 — Question #20: If AI tools are free or low-cost, does that make us the product? Or is there a more optimistic future where creators and users both win?

 


This week’s episode is brought to you by MAICON, our 6th annual Marketing AI Conference, happening in Cleveland, Oct. 14-16. The code POD100 saves $100 on all pass types.

For more information on MAICON and to register for this year’s conference, visit www.MAICON.ai.

 

 

Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content. 

[00:00:00] Paul Roetzer: I do think that, you know, three to five years from now, it's gonna be very commonplace that it's just part of your job description to build and manage agents and agent systems. Welcome to AI Answers, a special Q&A series from the Artificial Intelligence Show. I'm Paul Roetzer, founder and CEO of SmarterX and Marketing AI Institute.

[00:00:20] Every time we host our live virtual events and online classes, we get dozens of great questions from business leaders and practitioners who are navigating this fast-moving world of AI, but we never have enough time to get to all of them. So we created the AI Answers series to address more of these questions and share real time insights into the topics and challenges professionals like you are facing.

[00:00:43] Whether you're just starting your AI journey or already putting it to work in your organization. These are the practical insights, use cases, and strategies you need to grow smarter. Let's explore AI together.[00:01:00] 

[00:01:00] Welcome to episode 154 of the Artificial Intelligence Show. I'm your host, Paul Roetzer. Today I am joined by my co-host Cathy McPhillips, our Chief Growth Officer. This is the second edition of our new AI Answers series. So if you haven't heard this before, it is not replacing our weekly; every Tuesday we drop the weekly episode that Mike and I do. AI Answers is a new series we just introduced weeks ago, Cathy, in early June.

[00:01:28] Yeah. So basic premise here, uh, Cathy and I do two free classes every month. So one is Intro to AI. We started that one in fall of 2021. Is that right? Yeah. So we're coming up on episode, or session, 50 of that Intro to AI class. I think 49 is our next one, correct? Yeah. So we've had almost, like, 35,000 people roughly register for that class over the last four years or so.

[00:01:56] Then Scaling AI, Five Essential Steps to Scaling [00:02:00] AI, is another free class that we do each month. That one we are coming up on number nine or 10; I think we've been doing that one for almost a year now. And that one we've had probably a little over 10,000 people register for. So each of these classes gets, you know, somewhere between 800 and 1500 people depending on, you know, what time of the month we're doing it.

[00:02:21] And, how many weeks in between? It's usually like four weeks in between. And we will get dozens of questions, sometimes 70 to a hundred or more. And we can only get to, like, I don't know, seven to 10 on a good day that Cathy and I might tackle in one of these sessions. So the AI Answers series is all about trying to answer more of those questions.

[00:02:41] So the idea is to try and provide as, as much kind of input as we can. But also we just find it interesting to look at how the questions evolve. So the kinds of questions we were getting a year ago are completely different than the kinds of questions we're getting now. And so in some [00:03:00] ways it's almost like real time insights into kind of where the market is and what people are thinking about related to AI.

[00:03:07] So hopefully the series is really helpful to people. We had great feedback for the first episode. So we are planning to do this. There'll probably be two-ish a month; there might be a third, 'cause we'll also do these for our virtual events. And then we may mix in a couple other special AI Answers sessions.

[00:03:22] But we've got the Intro to AI one, and then the next one will be, probably next week, on Scaling AI, 'cause we have our Scaling AI class on Thursday, the day this is dropping. Yeah. So, so yeah, that's, that's the background on AI Answers. Again, not replacing the weekly; the weekly still comes to you every Tuesday, with me and Mike.

[00:03:41] And then AI Answers is two to three times a month with me and Cathy. So today's episode is brought to us by MAICON, MAICON 2025. This is the Marketing AI conference that we started in 2019; this is the sixth annual Marketing AI conference. This is the big thing. I [00:04:00] mean, there's lots of big things we do every year, but this is sort of like the first, this was kind of the origin of, you know, as we really started building out Marketing AI Institute.

[00:04:08] The Marketing AI conference was the flagship event. Cathy works tirelessly along with many of the other people on our team to put this event on every year. It is happening October 14th to the 16th. This is in our hometown of Cleveland, Ohio at the Convention Center right across from the Rock and Roll Hall of Fame.

[00:04:24] And Lake Erie is a beautiful spot. We are looking for, I don't know, last year we had, what, 1100 I think came, Cathy; we had 700 the year before, 300 the year before that, roughly. So we are continuing to trend up. We're hoping for 1500, I think is the goal. I usually try and, like, usually

[00:04:42] Cathy McPhillips: That usually goes up when you are on air.

[00:04:45] Paul Roetzer: Cathy always like cringes whenever I start throwing out numbers, and I try real hard to be like conservative at these things.

[00:04:51] But, 1500 is, is kind of what we're shooting for this year.  

[00:04:54] Cathy McPhillips: That's my goal too. So you're good. 

[00:04:56] Paul Roetzer: Okay, good. We're aligned. 

[00:04:56] Cathy McPhillips: Yes, we are aligned. 

[00:04:57] Paul Roetzer: So we'd love to see everyone [00:05:00] in Cleveland, if you can be there. It's gonna be an amazing three days. So you can go to maicon.ai. That is M-A-I-C-O-N dot AI. Again, that is October 14th to the 16th in Cleveland, Ohio.

[00:05:11] The agenda's live; it's not full yet, it's not finalized. We're still working on some of the main stage and keynote talks, but I dunno, about 80% or so of the agenda's probably up there. Well, more

[00:05:22] Cathy McPhillips: than that, probably.

[00:05:22] Paul Roetzer: Okay. And the speaker lineup. So you can go get a sense of, you know, what you can look forward to at the event.

[00:05:28] And 

[00:05:29] Cathy McPhillips: I sent you that yesterday's... well, today's email, I guess yesterday's when this goes live. Yes. 10 reasons why you should be in Cleveland in October.

[00:05:37] Paul Roetzer: I didn't, I didn't open that yet. I was actually in New York. So we're, I guess, context for people. We are recording this on Wednesday, June 18th at about 5:00 PM Eastern Time.

[00:05:46] 'cause I was actually in New York this week at a Movable Ink event. So Movable Ink's an AI-powered personalization platform for digital marketers. And I've done a series of talks with them this year. They've been a great partner of ours. And so I was actually doing a [00:06:00] keynote for them at their Think Summit Tuesday morning.

[00:06:03] And then I was running a workshop for a group of some incredible marketing leaders Tuesday afternoon. I just landed back in Cleveland from that 45 minutes ago. And now Cathy and I are recording this. So yes, this drops on Thursday the 19th.

[00:06:21] Cathy McPhillips: I actually, we were supposed to record yesterday. We had a little hiccup.

[00:06:23] Paul Roetzer: I wasn't gonna get into that story. 

[00:06:25] Cathy McPhillips: Well, I was just gonna tell you that I was coming off a red eye, so I was a little bit happy that it got pushed to today. 

[00:06:25] Paul Roetzer: We try, I, we probably don't want to give the story, but no, we don't. We tried to thread the needle and record this. We had a perfect plan to hook it up at a production studio.

And sometimes plans just don't work as intended, and it's okay. Like, it ended up working out fine. And I met some amazing people because it didn't work out the way we intended. And here we are. And here we are. Okay. And you're not coming off a red eye, so, all right. So the [00:07:00] plan is, we've got about 20 questions.

[00:07:01] These are all gonna be kind of rapid fire, as long as I don't talk too much. I've said this before, and I don't think some people believe me when I say this: I literally don't know what the questions are gonna be. I had not looked at this doc until three minutes ago, when I opened this document, so this is completely unscripted.

[00:07:15] It's how most of the stuff we do works. Cathy coordinates everything, curates all the questions, organizes them, and then we get on and we just go, because that's how it happens during the live class. So I kind of prefer that feeling of just like, this is what it is. If I don't know an answer, I'll, I'll move, we'll move on.

[00:07:33] But we try and just kind of be as, authentic as possible with these things. 

[00:07:38] Cathy McPhillips: Yeah. So just to tell you again, so we did the class last week. Claire took all of the Q&A and the transcript. She ran it through some GPTs, she worked her magic. She gave me the list of the 20 questions that she thought best aligned with, you know, what people were asking, how to make this flow with Paul.

[00:07:56] I went through, did a little bit of tweaking, and then, so Claire and I bounced back and forth a [00:08:00] little bit, but again, behind the scenes, Claire did this heavy lift and it was her awesome idea to get these started. So this is fun. I'm excited because like you said, Paul. Sometimes I'll, I would throw those questions in our Slack community, but there were still 20 or 30 that weren't getting asked.

[00:08:13] Yeah. And answered. So. Okay. So this week we have five different themes, vision and philosophy, emerging tech and agent ecosystems, business strategy, adoption and career impact, trust, ethics, and responsible use, and then future outlook. So I am just going to jump right 

[00:08:30] Paul Roetzer: in. Okay. We're not gonna keep this easy at five o'clock on a Wednesday afternoon after traveling all week.

[00:08:36] All right, let's go. 

[00:08:36] Cathy McPhillips: So, and some of these are actually repeats of what you did answer because I, because they were just good ones to ask and I thought the public should, should know about some of these things. Okay. So let's start with the big picture. Okay. 

[00:08:46] Question #1: How do you define a “human-first” approach to AI?

[00:08:46] Cathy McPhillips: How do you define a human-first approach to AI?

[00:08:49] Especially as machines begin outperforming us in most areas, many areas. 

[00:08:53] Paul Roetzer: Yeah. So, anyone who's been following us for a while knows I published something called the Responsible AI [00:09:00] Manifesto in early 2023. And it was basically 12 principles of how to do AI responsibly within an organization. And the main thing was that it had to be human centered, which means every decision you make, every, you know, technology you're gonna integrate, how you think about the future of the organization.

[00:09:15] You have to think about the impact it's gonna have on people. So if all we're thinking about is efficiency and, you know, cutting costs, that's not human centered per se. So I think of, you know, what is the good of, not just your employees, but what's the impact on customers? So, yeah, you can throw a chatbot up and it might save you a bunch of money and you need three less, you know, CSMs.

[00:09:41] But is it a great experience for your customers? Is, you know, are you, are you really thinking about the impact on the people? And so that can be your technology partners, it can be, you know, your service partners, it can be your customers, your employees. So that's what we mean when we talk about human centered is like, don't just throw AI at things just to do things [00:10:00] faster.

[00:10:00] you know, think about the impact and the downstream stuff too. Just, you know, how it affects people in lots of different ways. So yeah, I mean, the obvious thing is that you connect it to jobs, and we don't just want to get rid of the people and the jobs, but it's actually way more than that. It's thinking about all your different stakeholders.

[00:10:18] Cathy McPhillips: Yeah. We were just, actually about an hour ago, some of the team was talking about something we're working on for Academy, and we were talking about different technologies and what the opportunities were, and all of us were like, okay, let's start with what's the best human, what's the best experience for our customers, for the humans?

[00:10:32] Everything else we probably could figure out, but let's make sure that we are putting that human at the center of all of that, which is like, that should be the case for everything in your whole entire life. So,

[00:10:40] Paul Roetzer: yeah. And when we launched MAICON back in 2019, the tagline I created for that event was more intelligent, more human.

[00:10:46] And following that, we actually tried to like live that tagline. And when we create strategy documents, I'll, you know, often challenge our team. Like, you know, think about those two things. What is the more intelligent part of this? Like how are we gonna infuse AI to do things smarter? [00:11:00] But what's the more human side of this?

[00:11:01] What does that open up for us? So if we use AI to drive personalization through our email outreach and things like that, does it free us up to actually go have a coffee with someone who might be able to bring 10 people to the event? So it's like, what is the thing that AI can't do that we actually enjoy doing?

[00:11:17] We enjoy that face time, we enjoy meeting with people and talking to them, and having me free to be able to go and spend an afternoon at an event, you know, running a workshop. Like, that's the more human stuff to us. So yeah, it can be carried out in a lot of different ways, but I think that's a good lens.

[00:11:31] What's the more intelligent, what's the more human? 

[00:11:33] Question #2: What uniquely human qualities do you believe we must preserve in an AI-driven world?

[00:11:33] Cathy McPhillips: Yeah. Okay. Number two, what uniquely human qualities do you believe we must preserve in an AI driven world? Kind of feeding off what you just said. 

[00:11:41] Paul Roetzer: Yeah. You know, it's interesting. I put a note, like, in our sandbox for the episode, the weekly episode next week with Mike.

[00:11:48] I've been having a lot of thoughts about this one lately, and I'm not sure they're fully baked yet. But I will say, you know, upfront, like, more and more I just really look at the value of critical [00:12:00] thinking. The easier it is to have the AI do the thing... I can see it already happening with myself.

[00:12:08] I can sometimes see it in our organization. I can see it in schools that I talk to. I can see it in, you know, enterprises that we consult with or have in workshops. It's like hitting the easy button, and sometimes when you hit the easy button, you don't have as much at stake in the output and you're not as, like, bought into the process of the learning that went into creating that output.

[00:12:32] And so like, I guess the way I've been thinking about this, and again, this is totally off the top of my head because I wasn't really ready to talk about this yet, but it's kind of like, in high school, I remember you'd have a reading assignment and it's like, God, I didn't read Tom Sawyer or whatever the book was.

[00:12:47] So I just go get the cliff notes and you read the cliff notes and like, you think you're good to take the test and you get, get in and realize like, I actually don't know, like the details of this book. And I kind of feel like that's what AI strategies and deep [00:13:00] research projects have become for me. Like I can just hit the easy button, I can create the 34 page document, but I didn't do anything to create the document.

[00:13:08] And like all that energy that goes in and the research and the thinking that goes into creating it, like, yes, the doc may be great, maybe better than anybody else could have done in the company. But I didn't do the hard work and like, I can't actually stand behind the document because I don't even really know the ins and outs of it.

[00:13:25] I just know it was good and I approved it. And so I think that this idea of like critical thinking, I think things like empathy and interpersonal communication and like, you know, all those things are gonna matter, but it's the critical thinking part I'm really worried about. Like, I don't, I don't know how to preserve that when everything can just be created by hitting the button.

[00:13:45] And so I find myself thinking a lot about that. I think about, you know, imagination is uniquely human still. I, you know, I think, and so I think creativity and imagination and empathy and critical thinking, like they're all gonna matter. I'm, [00:14:00] I'm, it's just like a moving target for me. Like how we preserve them and how we actually truly use AI to amplify them and not replace them.

[00:14:07] Cathy McPhillips: And we talked about this in the past before, about like, Mike uses AI to get ready for the podcast, but if he doesn't read those articles, if you don't read those articles and you just use AI to generate questions or to write the transcript to talk about at the beginning of it, you can't have a good conversation about that because you don't understand, you don't really know fully what you're talking about.

[00:14:26] Paul Roetzer: Correct. Yeah. And that's why, like for the podcast, I mean, we'll go through 40 to 50 sources that make the cut of the 150 to 200 things that I listen to or read every week. And yeah, like I couldn't sit there and ask unscripted or give unscripted answers to the things Mike asks or presents if like, I haven't actually consumed the information.

[00:14:47] Mm-hmm. So I can't just throw something into NotebookLM and hit summarize and be, like, reading off of a study guide basically. So yeah, you can't fake expertise and thought leadership, in my opinion. It becomes really obvious if you [00:15:00] are. And the thing I've said, and I've said this to my own kids, is like:

[00:15:04] If you're gonna do the work on a topic, I want you without notes in front of you to be able to stand up there and answer questions for 15 to 30 minutes about that topic. And if you can't do that, then you didn't do the right amount of work. And I'm not saying you have to be like debate prep, like ready to like debate somebody on a topic.

[00:15:21] But if I can't take the notes away from you and have you explain to me the premise of what you did the research on, if you can't do that, then you relied too much on the AI. And in some instances that's fine. But not if you wanna be a thought leader on something, or if you actually wanna be trusted, or if you want to charge people money to, like, provide them advice and recommendations and insights.

[00:15:40] Like, you better put the work in. And AI can't replace that. Like, I just don't see it. It can synthesize it or it can, like, simulate it, but it can't replace your ability to just stand there, unscripted, and answer questions about something.

[00:15:55] Question #3: Where do we currently stand with AGI—and how close are OpenAI, Anthropic, Google, and Meta to making it real?

[00:15:55] Cathy McPhillips: All right. Good answer. Okay. Number three, we are hearing more about [00:16:00] AGI.

[00:16:00] Where do you think we stand today, and how close are OpenAI, Anthropic, Google, and Meta to making it real?

[00:16:06] Paul Roetzer: So if any of them could agree on what AGI is, I think they would all agree we're probably pretty close. They all, even internally, like OpenAI, look at it differently. I've talked about this recently on the podcast; like, Sam Altman is giving different definitions than what the OpenAI website gives.

[00:16:24] Like it's just this moving target. But if we're talking about general intelligence that's roughly able to do what an average human can do, like the majority of what an average human can do, and we say, give me, like, a marketer, and then say, okay, a marketer's job, here's the 35 things that marketer does.

[00:16:41] I don't, I don't know that we're that far from being able to say, when you look at individual tasks, that the AI is often probably better than the average marketer at doing each of those things: writing subject lines, drafting an email, writing a proposal, creating a blog post, developing social shares, creating an image, creating a video. Like, it's probably on [00:17:00] par.

[00:17:00] ChatGPT on its own is probably on par with an average marketer at the vast majority of those things. Now that's not uniform across every industry, every profession, but if that's the definition, which is the one I generally look at, because I think of replacement value, well, if the AI is able to do what the average employee can do, then we're, we've kind of approached the thing we always thought was AGI before we started moving the goalpost.

[00:17:24] So I think that they all think we're really close. I think that whatever they define it as, it's probably sometime in this next, you know, two to five years. I think five is unlikely; I don't think it would take that long. But I think probably two to three years is very realistic. I just don't know when they're gonna think that.

[00:17:44] They've achieved the benchmark that lets them claim it. But I would not be surprised at all if one of the labs in the next 12 months claims they've, they've done it. 

[00:17:53] Question #4: If AI becomes smarter, faster, and more accessible to all—how do individuals or companies stand out?

[00:17:53] Cathy McPhillips: Okay. Number four, if AI becomes smarter, faster, and more accessible to all, how do individuals or [00:18:00] companies stand out? Or is it just about being early?

[00:18:04] Paul Roetzer: So this kind of ties back to the one on individuals that I've been thinking a lot about. So there's, in AI research, I don't know if it's carried out in other professions, but in AI research there's something called taste. So taste in AI research means there are a lot of paths you can go with how you try and make these models smarter.

[00:18:24] The algorithms you build, the systems you put in place. And taste is, like, your choice in which path to go down, based partially on instinct, partially on experience. I would imagine this probably plays out in, like, the arts as well. There's just, like, the taste you have in graphic design. Like, you just know something.

[00:18:41] When you see it, you kind of have this instinct like, I'm gonna go after this. That, I think, becomes even more valuable when everyone can kind of hit that easy button and create anything. The people who have the ability to look at the output, look at a deep research, and say, this

[00:19:01] We, we need to spend 10, 15 hours vetting this thing. The AI verification gap was, like, something that we talked about on a recent episode. It's this idea that you have the ability to look at something and know that it matters, but it's not there yet. And that can be applied to strategy.

[00:19:18] It can be applied to creative. And the hard part is, I don't know how you get that without years of experience. And so I've been thinking a lot lately about which jobs are actually gonna be most impacted. We've talked a lot about, like, entry level jobs, and we might be, there might be a question related to this later on, but we talk a lot about entry level jobs.

[00:19:37] We've, we've looked at middle management, we looked at senior level. There's sometimes an argument that the senior level maybe goes first 'cause they cost the most and it's easiest to cut out. there's certainly an argument that it's just entry level 'cause it's task driven and we just don't need as many people doing the tasks.

[00:19:53] There could be an argument, it's middle management 'cause they maybe haven't developed the taste yet. Like they, they don't know really what great looks like yet. [00:20:00] And I'm not sure where I fall yet. Like this is again, one of those things I wasn't even ready to talk about yet. But I think the way you stand out is by finding the balance between using AI  and , and I love it for strategy and creative thinking and things like that and outlining ideas.

[00:20:16] Like, I love it for that, but I also get overloaded by it. Like, there's so much strategy you can create so quickly that it's when to use the AI, how to use the AI, how to use the output of the AI, and when to just be human and, like, allow yourself the permission to spend five hours on something that, yes, the AI could do in three minutes, but like.

[00:20:38] You gotta put in the work to know it end to end. Like, so for my presentations, like when I do keynotes or when I create courses, AI is assistive, like ideation and maybe, like, vetting things I've developed, but I have to create all those ideas myself. Like, I have to write the stuff because I could never present it otherwise.

And so I think that's gonna be a differentiator [00:21:00] at an individual level. And then the same probably applies when you zoom out at a company level. It's like, all of us have access to the tech, but, like, sometimes you just can't take the shortcuts, and there's no blueprint yet for how to know when you do and don't take the shortcuts, basically.

[00:21:17] And so I think the people who spend a lot of time experimenting, you start to just sort of develop an instinct for when, no, an AI output isn't enough here. Like, I actually want you, the employee, so me as a leader, I don't want you to do this one in ChatGPT first. I actually want you to spend a week on this thing, because you are gonna own this and you need to know it inside and out.

[00:21:40] And you need to be able to stand behind it. I don't, like, again, these are kind of, like, emerging thoughts from conversations I've been having, in some cases in the last, like, 10 days, and personal experiences in the last 10 days. But I think using AI, like knowing how to use it and when, could be a huge differentiator for people if all else is equal [00:22:00] and we assume everybody's using it.

[00:22:01] But right now the differentiator is, oh, a whole bunch of people have no idea what to do with it. Right. And so for a while, that's the opportunity: just to race ahead and do this, because not everybody's doing it.

[00:22:12] Cathy McPhillips: And I'm guilty of that. You know, a few weeks ago we're, we have so much going on right now and it's like, okay, I gotta start tackling some big things.

[00:22:18] Yeah. And I started with one of my GPTs to answer some questions for me, or to give me an outline. And then I was trying to, like, retrofit what I needed it to do. And I was like, wait, I'm not doing this the right way. And I actually stopped, scrapped it, and just started over.

[00:22:33] Paul Roetzer: Yeah. Yeah. The other thing I've found is I'll have these random thoughts to develop a strategy for something, and I'll have a conversation in voice mode while I'm driving to pick up food, or while I'm laying in bed at night I'll think to run a deep research project.

[00:22:46] And then like two weeks later I'm like, God, I feel like I did this before. Like, when did I? And then you completely forget that you actually did the project already because, again, you had no stake in it. You literally just gave a prompt and it did the thing, and then you [00:23:00] kind of forget that you even went through that process.

[00:23:02] That's why, like Cathy knows, I journal everything in business. Anytime I run a project, I have journals for each component of the business, each business unit. Because sometimes you just forget you've already done some of the work. And I find myself doing that all the time with AI.

[00:23:17] Question #5: Do you see a future where AI agents can collaborate like human teams? 

[00:23:17] Cathy McPhillips: Yeah.

[00:23:18] Okay. Section two, emerging technologies. Number five: do you see a future where AI agents can collaborate like human teams? And how important will it be to know how to build and manage those agents?

[00:23:29] Paul Roetzer: Yeah, so agents collaborating with each other is already starting to happen. Agents working together is very much gonna be a part of the future of every department, every business, every industry.

[00:23:42] The hard part there is how we manage those; who knows. I mean, in some cases you'll have leaders like Jensen Huang from Nvidia saying we're gonna have millions of agents in every business unit. Like, how could we possibly as humans manage what they're doing? We can't even keep [00:24:00] track of it all.

[00:24:01] So yes and yes. I guess they're gonna be there. They're gonna be working with each other. Humans may have involvement in the early going, as these agents are kind of raw still. They make mistakes. They're not fully autonomous, in most cases. So there's a lot more management and oversight, and connecting them to the right data sources and the right tools.

[00:24:25] But over time it's probably gonna just function more like what you're used to in ChatGPT, where you just give it a prompt, and if you've connected it to Google Drive and your CRM, and it has access to all the things you have access to, then it's just gonna go do things. And you might not even know if it's calling on a different agent to do a thing.

[00:24:49] So as long as you've set up the permissions where this agent is allowed to go talk to these other agents, it would function, like, this can be abstract for people, but it truly would function like if I went to Cathy and [00:25:00] said, hey Cathy, we need to do this project next week, let's meet next Friday and review it.

[00:25:05] And then Cathy goes and brings in five people on the team, and they each do a piece of the thing, and then it comes back. And then Cathy and I meet and she goes, hey, here we go. And Cathy and I sit there and talk. I don't know who she worked with or what part they played in it. Cathy was just the hub, basically.

[00:25:19] She was the lead agent, and she went and found the components to do the thing. And so that's how it's gonna work, except it would be with access to dozens or hundreds or thousands or millions of these agents. That's what the AI labs envision the future being: our agents will talk to other people's agents, and things just get done.
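The lead-agent pattern described here can be sketched in a few lines of code. This is purely illustrative: the agent functions, the `LeadAgent` class, and the permission check are hypothetical stand-ins, not any real agent framework's API.

```python
# Minimal sketch of a "lead agent" hub: one coordinator delegates
# sub-tasks to specialist agents it has permission to call, then
# assembles the results. All names here are illustrative.

def research_agent(task):
    return f"research notes for {task}"

def drafting_agent(task):
    return f"draft copy for {task}"

class LeadAgent:
    def __init__(self, allowed_agents):
        # Permissions: the lead agent may only call agents it was granted.
        self.allowed_agents = allowed_agents

    def run(self, project, plan):
        # plan is a list of (subtask, agent) pairs; the caller never
        # sees which sub-agents did the work, only the combined result.
        results = {}
        for subtask, agent in plan:
            if agent not in self.allowed_agents:
                raise PermissionError(f"{agent.__name__} is not permitted")
            results[subtask] = agent(f"{project}: {subtask}")
        return results

lead = LeadAgent(allowed_agents={research_agent, drafting_agent})
output = lead.run(
    "Q3 campaign",
    [("audience research", research_agent), ("landing page", drafting_agent)],
)
```

The key idea mirrors the Cathy analogy: the caller interacts only with the lead agent, while the permission set bounds which other agents it may delegate to.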

[00:25:39] So, yeah. I think that a lot of jobs in knowledge work are going to be managing these agent networks. So I do think that three to five years from now, it's gonna be very commonplace that it's just part of your job description to build and manage agents and agent systems. I mean, we have it in our job [00:26:00] descriptions now, because we're starting to think that way.

[00:26:03] Right. But, you know, it's like you're building these distinct, like one agent for this, one agent for that. We're talking over time of like, almost like a marketing ops. You almost think of like an agent ops thing where like your job is literally just to be the operations behind all these agent networks that work with the marketing team or the sales team or the customer success team, 

[00:26:21] Cathy McPhillips: right?

[00:26:22] This is question five-A, 'cause this is not question six yet. But if someone is thinking about this, where do they get started with learning about AI agents?

[00:26:32] Paul Roetzer: Yeah, I mean part of it is just using tools like deep research from Google and OpenAI and starting to get a sense of how these agents will work and how they'll look because that's an early form of it where it just kind of goes and does the project for you.

[00:26:47] All the big tech companies are already selling and positioning things as agents. Again, it's early, and they're not fully autonomous for the most part, and humans are still pretty heavily involved in building and running these things. [00:27:00] But I would imagine Salesforce, Google, Microsoft, HubSpot, any of the major tech companies that are building around this idea of agents, are gonna have to provide education around it.

[00:27:12] Like we're creating; so, Cathy mentioned Academy earlier. We've had an AI Academy for five years now, but it's only had Piloting AI and Scaling AI and then live components, and then some other benefits to members. We're reimagining and rebuilding that as we speak. I'm gonna be recording all the videos for the new courses in the next three to four weeks.

[00:27:36] And one of the new courses is Agents 101. So we're gonna do our part to sort of try and help people understand the fundamentals. And then as part of our Gen AI app series, which will be part of AI Academy, we have an agents component where we're actually going to start doing weekly drops of gen AI apps for productivity and vision and images and audio [00:28:00] and agents.

[00:28:01] To try and just make this stuff more approachable to everybody, 'cause it's just abstract until you start seeing it more and more. So we'll do our part, but we're gonna be more focused on kind of the macro-level understanding of agents and then showing examples. But I would probably push heavy on places like Salesforce and Google and Microsoft to see what education they're offering that can be complementary to the kind of stuff we're gonna try and provide to people.

[00:28:22] Cathy McPhillips: And probably think more about what it's able to do versus what they're calling it. 

[00:28:26] Paul Roetzer: Yeah. Yeah. 'cause agents are basically just automations with some intelligence baked in. It's just the new term that people have latched onto. Right. And

[00:28:34] Cathy McPhillips: they're using the term differently. Yes. In a lot of instances. 

[00:28:37] Paul Roetzer: Kinda like AGI, right? Everybody's

[00:28:39] already got their own definition of what an agent is.

[00:28:40] Question #6: For those working with sensitive data, when does it make sense to use a local LLM over a cloud-based one?

[00:28:40] Cathy McPhillips: Right. Okay. Number six. For those working with sensitive data, when does it make sense to use a local LLM over a cloud-based one?

[00:28:50] Paul Roetzer: So this is one I'm not gonna punt completely and not answer at all, but I will say:

[00:28:56] This is one where your IT department comes in. This is why the CTO or CIO [00:29:00] is often involved at a higher level in what's going on, especially if you're in a bigger enterprise. This is more technical stuff. At a very high level, the concept here is, do you trust ChatGPT, Google Gemini, Anthropic Claude to have your data? Like, I want to do an analysis where we take our marketing data, or our profit and loss data, or customer data, and I wanna have ChatGPT run an analysis on it, find insights in it.

[00:29:30] So the core of this question is, are they trustworthy enough to give that data to, so we can use these chatbots we're used to to help us with this stuff? That is an individual company decision, or an individual decision if it's just you. You have to look at the terms of use. You have to be comfortable with how secure your data is.

[00:29:51] It may be something where you wanna bring your attorney in to make sure you're fully understanding the terms of use, what rights they have to your data, and the different things you [00:30:00] put in. Enterprises that have more sensitive data or are more highly regulated, that is an instance where people may make the choice to build an LLM that runs on premise and doesn't live in the cloud, and then you don't have as much concern.

[00:30:18] But again, you know, it's hard to give one broad answer here, knowing everybody's got different situations with their data. This does come up all the time though. Like, one of the questions I get the most is, is it safe to put my data into ChatGPT? Like, I want to use their data analysis, but like, I'm not sure I'm comfortable giving it everything.

[00:30:37] And again, I think it's like a personal preference thing at this point. as well as, you know, understanding the guardrails that your company provides about whether or not you should do that. 

[00:30:50] Question #7: What’s the difference between ChatGPT Projects and Custom GPTs—and how do you decide which is better for a given task?

[00:30:50] Cathy McPhillips: Right. Okay. Number seven. You answered this yesterday on the podcast, on episode 153, but you can either do a CliffsNotes version or expand a little bit.

[00:30:59] [00:31:00] But what's the difference between ChatGPT Projects and custom GPTs, and how do you decide which is better for a given task?

[00:31:06] Paul Roetzer: Yeah, so I did explain this best I could on episode 153. The gist of it is based on my current understanding. 'cause again, I'm still trying to make sure we're providing the best guidance here, but I looked into it.

[00:31:19] I use custom GPTs all the time. I do use Projects. I think of Projects as folders. If you have Google Drive or Microsoft OneDrive or Dropbox or Box, whatever your system is, you have folders, and in those folders you can put images and videos and chats and whatever, and they all live there and you can kind of keep everything organized.

[00:31:40] So that's how I think of Projects. Custom GPTs are for distinct tasks, and we probably shouldn't use the word projects here, so let's say distinct tasks or workflows, that I want to train a specific instance of ChatGPT to do. And then I might want to actually share that with [00:32:00] my team or with the public.

[00:32:01] So we have JobsGPT, which helps people assess the impact of AI on their job. You can put your job title in and it'll break it down into tasks. And that's a publicly available free GPT. To my understanding, that is not something I can do in Projects; I can't share a project out like that.

[00:32:22] So I think of GPTs as like things we wanna do that are distinct tasks and sometimes we share them with our teams. Sometimes I keep 'em for my personal use and sometimes I put 'em out in the public. Projects is a foldering system basically to keep everything organized. 

[00:32:36] Question #8: If an agency or consultant is managing dozens of GPTs, what are your best tips for organizing workflows, versioning, and staying sane at scale?

[00:32:36] Cathy McPhillips: Well, that segues great into number eight.

[00:32:38] Okay. It's almost like you planned this. Actually, I didn't plan this, but ChatGPT must have known how you were gonna answer that. Okay. Number eight. If an agency or consultant, or I guess even us, is managing dozens of GPTs, what are your best tips for organizing workflows, versioning, and staying sane at scale?

[00:32:56] Paul Roetzer: Yeah, this is a good one. I'm starting to feel this pain [00:33:00] ourselves. We have been very aggressively building out GPTs as an organization. Everybody has that ability in our company, and people have been way more proactive, I would say, in terms of creating GPTs for different processes and workflows.

[00:33:16] We don't have a structured naming convention for ours. You know, they're available to people within our team license, but we don't, to my knowledge, I mean, you know, Cathy, we don't have a Google Sheet that tracks all of these things; they're just kind of in there. And as I'm saying this, I'm thinking maybe we need a better system than we currently have.

[00:33:39] For me, the ones I build and manage, I journal. Again, you'll sense a trend here. So I have a custom GPTs Google Doc, and for all of the ones I build, I'll go in and say, ProblemsGPT made these five updates; here's the system instructions, here's the clean version, here's the edited version from the prior [00:34:00] version.

[00:34:00] So I track GPTs the same way I would if I was building an actual app or product, which we have done before. And so that's how I do mine, so I can always go back and see what happened. But I don't know that the team even has access to that doc. I guess sometimes I'll say, here's what I did, but maybe not show it to 'em.
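The journaling habit described here, one log entry per GPT revision with the full system instructions attached, can be sketched as a simple changelog. The field names and helper function below are illustrative, not a feature of ChatGPT itself:

```python
# Hypothetical sketch of a GPT revision journal: each entry records
# what changed plus a clean copy of the system instructions, so any
# prior version can be recovered later.
from datetime import date

gpt_changelog = []

def log_gpt_revision(gpt_name, changes, system_instructions):
    entry = {
        "gpt": gpt_name,
        # Version is 1 + the number of prior entries for this GPT.
        "version": sum(1 for e in gpt_changelog if e["gpt"] == gpt_name) + 1,
        "date": date.today().isoformat(),
        "changes": changes,                          # list of edits made
        "system_instructions": system_instructions,  # full clean copy
    }
    gpt_changelog.append(entry)
    return entry

first = log_gpt_revision(
    "ProblemsGPT",
    ["Tightened the problem-statement template", "Added a value-scoring step"],
    "You help users define and prioritize business problems...",
)
```

A shared spreadsheet can serve the same purpose; the point is that every revision is append-only and carries enough context to roll back.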

[00:34:20] But that's not a uniform structure we have internally. So I would say, I think of this as probably a project-management-style thing that maybe needs more structure. Maybe prompt libraries are a good reference, if you've been trying to structure your prompts for sharing with your team.

[00:34:38] Maybe it's following a similar flow, but I would imagine this probably fits into however your company manages, projects. 

[00:34:44] Cathy McPhillips: and maybe as these evolve, these companies will figure out better systems for organization, for, for their users. 

[00:34:50] Paul Roetzer: Yeah, it'd be nice to be able to put the GPTs into different foldering systems for the company.

[00:34:54] Like if you're looking for customer success GPTs, here they are, here's the usage numbers, things like that. But yeah, [00:35:00] unfortunately OpenAI has provided very minimal support for GPTs since they launched, and they made a big deal out of it, like it was gonna be the next App Store, like Apple. And then they just didn't do anything with them.

[00:35:11] And, you know, every once in a while there's some little feature added. Like last week we got the ability to choose which model a GPT uses; now you can use any model within the GPT, which actually probably did more to break them than anything, because these things weren't written to use reasoning models, and now all of a sudden a user can pick a reasoning model and it's gonna break the way the thing works.

[00:35:31] So yeah, unfortunately they just haven't put as much energy behind GPTs, but hopefully they do provide some ways to better organize them. Right now you're kind of on your own, in Google Sheets or Excel or however you manage these things, coming up with a system. I'll have to think about it more, because it is a good question.

[00:35:50] Something I honestly haven't really thought about or developed a system for our company to do. 

[00:35:55] Cathy McPhillips: I mean, I've thought about it as, you know, team members are building things and I just need [00:36:00] to remember to go back and look at what they've done, remember where they put it. So

[00:36:04] Paul Roetzer: I have had that where it's like, Hey, Mike, didn't you build like a prompt generator or something?

[00:36:08] Like you're just kind of like, I feel like some point I saw that somewhere. 

[00:36:12] Question #9: How do you personally decide which AI tools to use—and do you see a winner emerging?

[00:36:12] Cathy McPhillips: Yeah. Okay. Number nine, there is so much buzz about ChatGPT versus Gemini. How do you personally decide which tools to use and do you see a winner emerging? 

[00:36:20] Paul Roetzer: I think the winner is just gonna change every three to six months. I don't know.

[00:36:25] We had a situation where GPT-4 from OpenAI was just the dominant model for a year and a half; it was far and away the best model. I don't know that we're gonna enter that phase again. I think the models are not fully commoditized per se, but they are so close in their abilities that it's hard to go wrong right now.

[00:36:48] Like the difference with Gemini is 2.5 Pro, which just yesterday, I think, went to general availability in Google Gemini. So if you have the Gemini app, 2.5 Pro is now generally [00:37:00] available, I think, in all Gemini accounts. And that model is both a traditional chatbot and a reasoning model combined, like one unified model.

[00:37:10] ChatGPT is not. They have a reasoning model, which is o3 and o3 Pro, and then they have their traditional chat model. So I actually posted something about this on LinkedIn this week, and we talked about it on episode 153 when we talked about the o3 Pro model. I use both. I think I might have said this on 153, but right now, in the building of AI Academy, I created a teaching assistant gem, like a Google Gem.

[00:37:39] And I created the same thing, using the same instructions, in a custom GPT. And so oftentimes I'll actually put it into both. I'll say, here's how I'm gonna describe a course; this is the course template I'm using for the description that'll appear in the learning management system. What do you think? Evaluate [00:38:00] this in a critical way. And I'll give the same prompt, the same input, to both systems and see what they do.

[00:38:05] So if it's a high-value thing, I will just use both. Then sometimes you realize, oh, okay, Gemini's just better at this use case that I do all the time. So I'll use it, and every once in a while I'll check in with ChatGPT to see if it's gotten any better. So if you can afford both, I mean, 20 bucks a month each, for the value you get from 'em if you use 'em enough, there's a pretty good argument to just pay for both and try 'em.
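The side-by-side workflow described here, same prompt into both assistants, then compare, amounts to a simple fan-out. In this sketch the two model calls are placeholder functions standing in for ChatGPT and Gemini sessions, not real API calls:

```python
# Illustrative sketch: send one prompt to multiple assistants and
# collect the labeled outputs for manual comparison.

def ask_chatgpt(prompt):
    # Placeholder for a real ChatGPT call.
    return f"[chatgpt] critique of: {prompt}"

def ask_gemini(prompt):
    # Placeholder for a real Gemini call.
    return f"[gemini] critique of: {prompt}"

def compare_models(prompt, models):
    """Send the identical prompt to every model and return labeled outputs."""
    return {name: fn(prompt) for name, fn in models.items()}

results = compare_models(
    "Evaluate this course description critically: ...",
    {"chatgpt": ask_chatgpt, "gemini": ask_gemini},
)
```

Reading the two outputs next to each other is what surfaces which assistant handles a recurring use case better.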

[00:38:30] But I also don't think you can go wrong with just picking one and using it all the time. There's some value to just experimenting and getting really good at talking to one of 'em. So I don't know that there's a right answer here, honestly. If you can afford both, and you have the capacity to be testing both, go for it.

[00:38:48] Worst case scenario, just pick one and spend a lot of time with it, experimenting with it and getting good at prompting. 

[00:38:53] Question #10: What tools or platforms in the agent space—like HubSpot, Salesforce, or chatbot integrations—are actually ready for production today?

[00:38:53] Cathy McPhillips: Right. Okay. Number 10. What tools or platforms in the agent space, like HubSpot, [00:39:00] Salesforce, or chatbot integrations, are actually ready for production today?

[00:39:04] Paul Roetzer: So, I don't have personal experience with Salesforce.

[00:39:10] They introduced Agentforce in fall of 2024, so it's still pretty fresh. It's like anything else: sometimes you get feedback that it's just a bunch of marketing and branding and there's really nothing to it, and sometimes you hear stories of, no, it actually works, it's great, we've got these agents set up. HubSpot builds on top of ChatGPT, or, you know, GPT technology from OpenAI.

[00:39:34] So they're starting to enable things. They just did a connection with deep research from ChatGPT, so you can actually connect it to HubSpot, and that's agentic in a way. It's able to go and look at your CRM data and provide reports. Our first experience with it was it just didn't work great.

[00:39:50] Again, I've heard awesome stories, and I've heard things like our experience, where it just doesn't work; it takes forever and returns [00:40:00] nothing of use. So I think, generally speaking, agents are just really early in terms of their reliability. I think the marketing from these companies hasn't done the product teams many favors, overpromising what these things do.

[00:40:11] I think there were a lot of early efforts to make them appear more autonomous than they are, where you just thought you hit the button and it went and did the thing and it was great. So, I don't know.

[00:40:27] I mean, everybody's kind of playing in this space; even Nvidia is starting to move into it, not only enabling other people to build agents but building their own things. I think it's probably gonna be six to 12 months before a lot of the early stuff that's just not delivering on the promise right now truly starts to. But that's a very broad statement.

[00:40:51] I'm sure there are lots of people, even if you're setting it up through something like Zapier or Make, where you're kind of building an agentic process and the human's pretty [00:41:00] involved in establishing what the workflow looks like; there are a lot of those that are working. So if we think of AI agents on a spectrum of autonomy, I would say there's probably a lot of early stuff where humans are pretty heavily involved in writing some rules, and it's working great.

[00:41:18] If we're thinking about, I'm just gonna go get an email agent and it's gonna take 80% of the work off of my team so they can go focus on this other stuff, I don't think that's the reality for the vast majority of use cases you would look at applying agents to today. You can go get a good sense at agent.ai.

[00:41:34] So that's Dharmesh Shah, one of the co-founders of HubSpot. He has agent.ai, which is almost like a, not a social network, but a marketplace for agents, and you can go see the kinds of things being built. And what you'll see is they're very distinct tasks.

[00:41:50] Most of the agents are still being built to do these very specific things. Yeah, more hype than reality, I guess, is the TL;DR. It's [00:42:00] probably more hype than anything at this point, but it's gonna change real fast, and I wouldn't ignore it because of that.

[00:42:07] Cathy McPhillips: Yeah.

[00:42:09] Incidentally, I saw the backend of a workflow of a Make integration.

[00:42:12] Paul Roetzer: Yeah. 

[00:42:13] Cathy McPhillips: I was just like, what in the world? And I'm so glad we had a human to help us do that because we knew what we wanted. Yeah. But just seeing all that logic and the branching and everything, it was just like, wow. 

[00:42:24] Paul Roetzer: Yeah. I think 

[00:42:24] Cathy McPhillips: we really need to understand that.

[00:42:26] Paul Roetzer: Yeah. I think that a lot of the agentic stuff today does start with understanding the actual workflow that needs to run, and then finding the ways to integrate agentic processes into those workflows. But it usually requires the human to first envision the workflow in some way. Now, again, there are exceptions to this, like deep research from OpenAI and Google.

[00:42:47] You just say, I would like this research project, and it builds its research plan and goes and does it. That is agents at work. So yeah, it's a [00:43:00] mixed bag, but again, I'd say more hype than reality at the moment, but moving pretty quickly in the opposite direction. Yeah.

[00:43:07] Cathy McPhillips: Okay. Our third section, business strategy, adoption, and career impacts.

[00:43:10] Question #11: For companies just getting started, how do you recommend they identify the right pain points and build their AI roadmap?

[00:43:10] Question 11, for companies just getting started, how do you recommend they identify the right pain points and build their AI roadmap? 

[00:43:19] Paul Roetzer: So we have a custom GPT called ProblemsGPT. You can go to smarterx.ai, we'll put this in the show notes, and click on Tools, and the custom GPTs that we've created for this kind of stuff are right there.

[00:43:32] ProblemsGPT is a free custom GPT. I was actually just showing this in a workshop I was running. What ProblemsGPT does is it helps you identify those pain points, like, what are our problems? And then it helps you write problem statements and value statements, and then it'll actually develop a strategic brief to help you solve problems more intelligently.

[00:43:54] So identifying the right pain points is basically the same as it's always been: what are your goals in [00:44:00] the company? What are the KPIs you're responsible for? Which ones aren't you meeting? Those are your pain points. So I don't know that that changes. What changes is, the more you understand what AI is capable of, the more you look at how to solve those pain points and problems differently.

[00:44:17] And so that's what I built ProblemsGPT for: to help people identify and properly state their problems, assign values to them, and then try and prioritize which ones can be solved more intelligently with AI. And so when we talk about an AI roadmap, we think about all of the kind of smaller-level projects you may be running.

[00:44:39] Like we often talk about pilot projects that you're running. So, okay, we're gonna apply it to email, we're gonna apply it to social, we're gonna apply it to media buying, or we're gonna apply it to data analysis, and whatever. You go find the tech and you do these things. And then the roadmap layers in.

[00:44:51] And here's like the five fundamental business problems we wanna solve over the next 12 months. And then you probably have a third layer, which is, and [00:45:00] here's the innovation layer, here's the new stuff we're gonna go do that we weren't doing before. And so the best AI roadmaps solve for efficiency and productivity immediately through these distinct projects.

[00:45:12] And then you're thinking about the higher-value stuff through problem solving and innovation that actually drives the growth of the company and hopefully prevents you from having to lay people off, because the efficiency is gonna make it so you need fewer people doing that work, and then problem solving and innovation make it so you can redistribute the talent into these other areas that drive the growth and innovation.

[00:45:34] Question #12: What AI tools do you believe deliver the most value to marketing leaders right now?

[00:45:34] Cathy McPhillips: Right. number 12, what AI tools do you believe deliver the most value to marketing leaders right now? 

[00:45:41] Paul Roetzer: This could vary. Like, Cathy, you might say Descript. I don't know; you could answer this one as well. But I think just a chatbot: using Gemini or ChatGPT well, every day, and building Gems and custom GPTs. For most organizations, most marketing teams in particular, [00:46:00] that's enough.

[00:46:00] Now you might go get like a Writer or Jasper that's specifically built for marketing as well. But at minimum, you just go hard on one and integrate it into the work. Would you answer that one differently, Cathy?

[00:46:13] Cathy McPhillips: I wouldn't. I mean, we have specific use cases for some specific tools, but 90% of my AI use is within ChatGPT.

Question #13: How is AI forcing agencies and consultants to rethink their models, especially with rising efficiency and lower costs?

[00:46:20] Yeah. Okay. Number 13, how is AI forcing agencies and consultants to rethink their models, especially with rising efficiency and lower costs?

[00:46:30] Paul Roetzer: This is a dynamic space. So if people are, again, kind of new to our ecosystem and what we do, we have an AI for Agency Summit. I owned an agency for 16 years.

[00:46:41] My first book was the Marketing Agency Blueprint. So I, I've sort of lived in this space for a really long time. It's a challenging time to be an agency, to be a consultant. I think you're under tremendous pressure because, if you're using generative ai, which you should be, your clients are in [00:47:00] increasingly aware of that and that you're probably doing things more efficiently.

[00:47:04] So if you are still using some form of billable hours, that's a little tricky, 'cause you have to do a lot more work to make the same amount of money if you're charging by the hour. Or maybe you're in a value-based model, where you're charging based on value creation. And, again, as I'm saying this, flip the script: if you're not an agency and you're on the brand side, and maybe you pay agencies or consultants or freelancers, it's just a very up-in-the-air space in terms of how it's all gonna play out.

[00:47:31] You could also get into the issue of, if you're using generative AI, are you passing copyright over for the creative work you do, for the outputs? The answer is no, you're not, because as of right now, at least in the United States, copyright law says that if AI creates it, no one owns a copyright to it. So you're not passing a copyright to your client.

[00:47:50] The client may not know that. I have seen contracts from larger enterprises that prohibit their agencies from using generative AI unless they get specific permission. [00:48:00] It is a total reinvention of the agency model. And I'm not even trying to oversell this: over the next couple years, the agency model is going to have to be completely reimagined.

[00:48:11] We're seeing some of the big agencies trying to do this. It's really hard to shift and stay stable financially while you're trying to reinvent this. It's probably a great time to start an agency or consultancy, because, honestly, I've said it before: at my peak, my agency I think was around 20 people.

[00:48:35] Based on the way we do work now as an organization, we are more productive than that agency by far, and probably on par with what a 50-to-80-person agency would've done back then. So I think it's just so much easier to build and scale a professional service firm right now. [00:49:00] It's a hard position to be in, to be an established one that's having to try and reinvent this.

[00:49:05] So AI-native, starting from the ground up, is a way easier play than being AI-emergent, where you've got all this traditional stuff. You may have a bunch of people, especially creatives, who don't want anything to do with AI or don't want to use it. And it's gonna be a challenging change management process at a lot of agencies.

[00:49:22] I've seen some doing it well, but it's, it's gonna be hard 

[00:49:26] Cathy McPhillips: and there are so many, so many people at agencies that want to figure this out. 

[00:49:29] Paul Roetzer: Yeah. Because 

[00:49:31] Cathy McPhillips: people at MAICON are like, just tell me what to do. Tell me what I should be thinking about.

[00:49:34] Paul Roetzer: Yeah, we have a huge, I mean, our community, we have probably 110,000-plus subscribers at the institute, and there's a fair portion, I don't know, in the 20% range or something, that are probably under that agency and consulting umbrella.

[00:49:51] And so, yeah, these are people we talk to all the time, and we see the people doing great work and that are evolving. And we do start to see a lot [00:50:00] of people who just jump ship and start their own thing, and one person can do the work of 10, basically. And so you see those kind of people having more freedom to build their future.

[00:50:13] So yeah, great time to be building an AI native firm or a consultancy. Tough time to be trying to steer the ship to build an AI emergent one from an existing, traditional agency.

[00:50:24] Cathy McPhillips: Yeah, a little plug for our Slack community: we just hit 10,000 members this week.

[00:50:28] Paul Roetzer: Nice. 

[00:50:28] Cathy McPhillips: And we have an agency channel within there that is very active, with all these agencies trying to support each other, offer best practices, and figure it out together.

[00:50:36] So if you're an agency and looking for some support, come join us. 

[00:50:40] Paul Roetzer: Yeah. And the other thing I would add to this is like HubSpot. So that was how, you know, I came up as the first HubSpot partner back in 2007 and HubSpot's been doing an incredible job of helping to try and guide their partners. They have ecosystem partners, not just traditional marketing agencies, but you know, full blown solutions partners.

[00:50:57] And they're doing great work [00:51:00] trying to actually help level up those partners to help them make these kinds of shifts. And so, you know, if you are an agency, look for those kinds of partners who are invested in your future as well. It's cool to see what they've been doing with their partner ecosystem.

[00:51:14] Question #14: What does great prompting actually look like? And how should employers think about evaluating that skill in job candidates?

[00:51:14] Cathy McPhillips: Yeah, absolutely. Okay. Number 14. What does great prompting actually look like, and how should employers think about evaluating that skill in job candidates?

[00:51:24] Paul Roetzer: So, great prompting. And again, this is top of mind: I'm building a Prompting 101 course right now for Academy. The simplest way I'd explain this, though, is just pretend like you're giving a project to an associate or an intern.

[00:51:37] Like, how would you do that? So if you're asking ChatGPT, and you're not treating it as an advisor, if you want it to help you with an output, the way you would talk to an intern is: listen, here's the project I want you to do. Here's why you're doing it. This is the goal of the project.

[00:51:51] Here's five examples to look at, and make sure you don't do these couple things, but this is what we want out of it. So you just describe it. And so [00:52:00] the easiest way to actually prompt is just talk to it like you would talk to a person. And then from an advisor perspective, you flip it a little bit and you say: listen, I want you to function as my CFO.

[00:52:12] Like, I'm trying to understand the ins and outs of this, and I'm not an expert in finance. Help me understand this. Or: I want you to function as an attorney, and I want you to think critically from a legal perspective about the thing I'm trying to solve for. So that's the difference: just tell it what you want it to do and what the output needs to look like.

[00:52:30] And then if it's the opposite and you want it to function as an advisor, then tell it you want it to function in that role, and here's what you're trying to solve for. And honestly, if all else fails, say: I'm not sure how to ask you this. Here's what I'm trying to do. I say this in workshops all the time.

[00:52:46] People come up like, what should I do here? And it's like, what you just asked me. Ask AI. You just phrased it perfectly. You have a problem, you're not sure what to do, you don't know how to use AI to help you. Literally give the prompt that you just asked [00:53:00] me. So sometimes just imagine you're talking to a consultant or someone you know has the knowledge you need. How would you phrase it to them?

[00:53:07] So there are formulas you can follow, like do these five things, and that can work too, and we teach that. But if all else fails and you're just not sure, just talk to it like you would a human that you're seeking the knowledge or the output from.

[00:53:21] Cathy McPhillips: Yeah. And then once you get through a couple of those, and you realize, okay, this is what I need to include in the beginning versus trying to, you know, do it 10 times, you'll get better at it.

[00:53:30] Paul Roetzer: Yeah. And honestly, the AIs are being trained to get better and better at asking you qualifying questions, making sure that they know exactly what you're trying to do. They'll say, hey, I can help you with that, but I would really need these five things. And what I'll do then is say: I'm gonna give you answers one at a time.

[00:53:45] Like, wait till I give you all the answers before you go and do the thing I want you to do. And then you just do it. And sometimes I'll actually keep a separate Google Doc, and I'll just look at the five questions and write each answer fully, and then I'll throw it back in as a single answer.

[00:53:59] But yeah, I mean, [00:54:00] the biggest part is just experiment. You learn how to talk to 'em. The other analogy is, if you have ever raised a kid, it's like when they're four or five and you're just trying to figure out, well, how else can I say this to get through to you?

[00:54:13] Like, we need to figure out how to get you to do this thing. And sometimes it's like talking to a kid: you just gotta figure out how to say it so it actually does the thing you want it to do, or doesn't do the thing you don't want it to do, which I've definitely gone through, or it just keeps outputting something the wrong way.

[00:54:31] And you're like, stop. Like, what are you doing? And then you just have to try and rephrase it. It's like, okay, let me come at this a totally different way. Right. So yeah, it's very much like a kid. Yeah. 

[00:54:40] Question #15: As AI reshapes roles, does age or experience become a liability—or can being the most informed person in the room still win out?

[00:54:40] Cathy McPhillips: Okay. 15. As AI reshapes roles, does age or experience become a liability or can being the most informed person in the room still win out?

[00:54:49] Paul Roetzer: Yeah, so this one goes back to what I was saying earlier, and I'm not sure yet, like I have to play this out a little bit more in my head. But there is a big part of me right now, [00:55:00] and I should think about this more before I say this, okay, so I could be totally wrong here: I think middle management's screwed.

[00:55:10] I think the people that lose out the most in the interim are not the entry level, because you can bring them in and they're cheaper and you can teach them, and they bring a nativeness to this where they're just familiar with these things and you don't have to teach them new stuff.

[00:55:30] Like, they just come out ready to work with these things. So entry level's still super valuable, and you can pay the entry level more than you normally would have, 'cause they're gonna outproduce their peers and, you know, produce like 2, 3, 5x what they used to. You need the senior level because they actually have the experience to evaluate the models.

[00:55:52] They know what to ask, they know the right questions to put in. They have some institutional knowledge. And I think [00:56:00] middle management might be stuck in this position where they don't have that yet. They don't have all the critical thinking they need. They don't have all the ways to know if the outputs are good.

[00:56:12] But I don't know. Like, again, I'm literally thinking out loud here. But if I even look at a microcosm of our organization, or some of the companies I've recently talked to: the senior people need to be there. You can't just get rid of them. Sure. And if you don't have the entry level people, then who are the future leaders?

[00:56:38] So I don't know. Those are kind of my thoughts. I could be wrong, and I could change my mind next week when I've had more time to think about it. I just started talking about it this morning.

[00:56:47] Cathy McPhillips: This is live, folks. We're on the spot.

[00:56:51] Paul Roetzer: Yeah. 

[00:56:52] Question #16: What kind of changes should leaders expect in workplace culture as AI adoption grows?

[00:56:52] Cathy McPhillips: Okay. Number 16. What kind of changes should leaders expect in workplace culture as AI adoption grows?[00:57:00] 

[00:57:01] Paul Roetzer: This is gonna depend a lot on your organization. I could see there's gonna be a lot of clashes. Pretty soon, in some industries and in society, I think there's gonna start to be quite a bit of pushback against AI. And so there's a possibility that if you have cultures that don't want to change, or that become so fearful for their jobs, that there's actually pushback to AI adoption and resistance to it.

[00:57:29] If you're in a more innovative culture that welcomes change and is used to it, then it's probably gonna go really smoothly. So I don't know. I think it comes down to the culture you have, the level of transparency and honesty from leadership, and the willingness to invest in your talent and help them improve their careers.

[00:57:48] So if you have internal professional development programs, if you have a history of creating a workplace that's conducive to them advancing their careers, it probably goes [00:58:00] really well. If it's a very traditionalist organization that doesn't handle change well, and has been through some of these digital transformations over the last 20 years and it was kind of painful, it probably

[00:58:13] isn't great. But I think it really comes down to leadership and their vision, their willingness to execute that vision, and then their honesty in having to go through that change. Because, like we saw, we'll talk about this on the next episode, but Andy Jassy, the CEO of Amazon, literally just put out a memo to his team yesterday.

[00:58:35] He's like, we're gonna have fewer people. Just straight up, AI is gonna drive efficiencies. We will have a smaller workforce in the future. So that's part of it. It's like, okay, we've got the transparency part. We're at least admitting this is what's gonna happen. Now, how you actually execute that and what that looks like to people.

[00:58:53] That's where the culture part comes in: what does this actually mean? Does it hurt our recruiting efforts if we're literally saying we're gonna start [00:59:00] getting rid of people? I don't know. And so I think culture becomes critical, and I think the way you handle AI, and whether you take a human-first approach to it,

[00:59:08] starts to really matter in your ability to recruit and retain people in their profession.

[00:59:14] Cathy McPhillips: Well, and even take job replacement out of it. Just think about people within an organization. Some love it, some don't love it, but that honesty, that knowledge sharing. Look what I did. Look what I learned. Yeah.

[00:59:24] Look, I wanna show you something. Just having that collaboration, I think, is really important.

[00:59:28] Paul Roetzer: Yeah. We see it with ours. We're, you know, huge on the knowledge sharing side, and we want it to be inspirational to people. But you also have to know where that line is. Like, at some point you're like, oh man, are we becoming too automated?

[00:59:42] Are we relying on the AI too much? And honestly, I already kind of feel that sometimes. I feel it myself. Like, sometimes I'm just like, yeah, I gotta do the hard thing now. I can't use AI for this thing. But I think strong cultures stay strong. [01:00:00] You know, again, I'll go back to a company like HubSpot.

[01:00:03] I just knew their culture intimately for a long time when I was a partner, and it was always just a great place and had a great culture. And I think that if you trust your leaders, and those leaders are transparent and open, then yeah, it could be good. But bad cultures? It's probably gonna get amplified if you have a bad culture.

[01:00:26] Right? And the other thing is, the problem you might run into is if, overall, the organization is not an AI-forward organization, but you have AI-forward individuals within that organization that are trying to push for change, that can go bad real fast. Yeah. And those people are not gonna stay there.

[01:00:44] They're gonna go find a place that embraces their ability to be AI-forward.

[01:00:49] Cathy McPhillips: Okay. We have four questions left and it's at the top of the hour, so let's rapid fire last couple. 

[01:00:53] Paul Roetzer: Okay. 

[01:00:54] Question #17: What is ChatGPT really storing in its “memory,” and how persistent is user data across sessions?

[01:00:54] Cathy McPhillips: Trust, ethics and responsible use. Number 17. What is ChatGPT really storing in its [01:01:00] memory, and how persistent is user data across sessions?

[01:01:03] Paul Roetzer: I would assume it's storing everything unless you've told it not to. The labs see memory as a fundamental element of achieving AGI and having a very sticky experience with ChatGPT, so you don't leave and go to Gemini. So if it knows you and everything about you, your preferences, your interests, your buying history... they wanna know everything: everything in your calendar, everything in your email, everything in your Google Drive, everything in your photos.

[01:01:32] The more they know, the more personalized the experience can become. So I would just assume that today it doesn't remember everything, it's not a perfect memory, but assume that's where they want it to go. And how persistent user data is across sessions varies depending on the chatbot you're using.

[01:01:50] But again, I would just assume in the next couple years it's going to feel almost perfect, like it just remembers [01:02:00] everything. It's tricky to figure out how to manage all those memories, but they're gonna spend a lot of resources to solve memory, and they have to. It is, like I said, fundamental to where these models are going.

[01:02:11] Question #18: How can businesses—especially in regulated industries—safely use LLMs while protecting personal or proprietary information?

[01:02:11] Cathy McPhillips: Right. Okay. Number 18. How can businesses, especially in regulated industries, safely use LLMs while protecting personal or proprietary information?

[01:02:20] Paul Roetzer: I get this one all the time. So the first thing, on safely using LLMs: if you're having trouble getting approval to do it, so you're having trouble getting ChatGPT or Copilot or whatever it may be internally, steer into the concerns that the different stakeholders have about the use of those tools, and find a bunch of use cases that are not impacted by that, that don't require the personal information and things like that.

[01:02:48] The other is to lean heavily on legal and IT to make sure you're doing everything safely.

[01:02:55] Question #19: Why do you think some companies still ban AI tools internally—and what will it take for those policies to shift?

[01:02:55] Cathy McPhillips: Okay. Number 19. Why do you think some companies still ban AI tools internally? And what will it [01:03:00] take for those policies to shift?

[01:03:02] Paul Roetzer: Lots of risk and uncertainty. It's logical to ban things that you don't understand or that you think have a higher risk.

[01:03:11] And so in some cases, banning is because the people making the decisions don't fully understand, and don't realize that there's probably a bunch of use cases that don't cause risk and concern. So it's just easier to ban them. But, you know, when we think about things like agents that you're gonna give access to your computer and access to company data, there's all kinds of risks, including things you can't even fathom that are being considered, like data poisoning and prompt injection, and

[01:03:42] all these emerging research areas. IT sees this stuff and it's like, whoa, whoa, hold on, let's pump the brakes, let's hold off on rolling things out. So yeah, sometimes you just have to trust the information security people, the cybersecurity people. There's a reason why they're paid to manage the risk of a company.[01:04:00] 

[01:04:00] And you have to understand that, and you have to be empathetic to that: everybody's trying to do their jobs here. And sometimes your job is to find the simpler use cases that can create value without causing these concerns or coming up against them.

[01:04:13] Question #20: If AI tools are free or low-cost, does that make us the product? Or is there a more optimistic future where creators and users both win?

[01:04:13] Cathy McPhillips: Yep. Okay. Number 20. Let's shut this thing down.

[01:04:17] Okay. If AI tools are free or low cost, does that make us the product? Or is there a more optimistic future where creators and users both win? 

[01:04:26] Paul Roetzer: So that rule is generally pretty reliably true. So yeah, if you're not paying for something, there's a pretty good chance your data is the product. That's the thing that they want access to.

[01:04:40] So Facebook would be an example here. Everything you have ever put up there is basically being used to train their models. Now, you know, you could think of the same thing with a Gmail or Photos. Yeah, the data is the product, and the data became more valuable because now it can train [01:05:00] models that they think can generate billions of dollars in revenue and value every year.

[01:05:04] Tens of billions, hundreds of billions, trillions potentially. So yes, it's pretty safe to say. And, just kind of a bigger picture to end with, I would be really, really cautious of experimenting with a bunch of AI tools where you have to give them any data, pictures of yourself, as an example.

[01:05:26] If you don't know the company, you don't know who funds the company, you don't even know who the founders are, what country it was built in, where your data's going. I just generally take a very cautious approach to the using of the tools and the connecting of any of those tools to any meaningful data source.

[01:05:43] Because you just don't know, and it's often better. Now, I know that there's plenty of people who are pretty free with their data and just assume everybody's got it anyway. And I get that too. I think it's gonna be a generational thing. I think the next generation's gonna be less and less, you know, [01:06:00] cognizant of where their data's going.

[01:06:02] But generally speaking, I think it's good to just take a cautious approach to who you're giving your data to and what data it is that you're giving them. And, you know, you gotta find the companies you trust. And that's why, in a company situation, I often say: start with the companies that are already through procurement, that are already approved in your tech stack.

[01:06:22] See what AI they have before you go try and patch together a bunch of other tools that you might not trust or even be able to get through procurement.

[01:06:31] Cathy McPhillips: Absolutely. Alright, that's 20 questions. 

[01:06:34] Paul Roetzer: Okay, that went fast. An hour. All right. Well, thank you everyone for the questions again.

[01:06:40] These were from our Intro to AI class. Do you, off the top of your head, Cathy, know when the next Intro to AI class is? We probably have the ability to look that up. We will put it in the show notes. July something, isn't it the ninth or something like that?

[01:06:53] Cathy McPhillips: Maybe 

[01:06:54] Paul Roetzer: We just scheduled it.

[01:06:55] So that is coming up. I'll look it up right now. It is July [01:07:00] 9th. Wow, look at that. Okay. Wednesday, July 9th at noon Eastern time is the next Intro to AI class. So we do about 30, 35 minutes of presenting, and then we do the Ask Me Anything for 25 minutes. And then, same deal: whatever doesn't get asked there, we'll kind of curate that and do another AI Answers session.

[01:07:16] And then the other one I mentioned is Scaling AI. That one is coming up the day this drops, June 19th, so you might miss that one, and then we'll announce a July session for that as well. So again, every month Intro and Scaling happen, and we appreciate the tens of thousands of people who have joined us in those classes, and we plan on keeping 'em going.

[01:07:37] So thanks to everyone there. And Cathy, any final notes on this episode?

[01:07:42] Cathy McPhillips: See you at MAICON?

[01:07:43] Paul Roetzer: Yeah, MAICON, there we go. MAICON.ai. All right. Thanks everyone. Thanks Cathy, and thanks Claire for helping put it all together. Thanks for listening to AI Answers. To keep learning, visit smarterx.ai, where you'll [01:08:00] find on-demand courses, upcoming classes, and practical resources to guide your AI journey.

[01:08:06] And if you've got a question for a future episode, we'd love to hear it. That's it for now. Continue exploring and keep asking great questions about AI.
