Episode Summary
AI anxiety is holding a lot of service firms back, and in this episode Justin Davis and Greg Ross-Munro explain why that hesitation often does more harm than good. They walk through how modern AI tools handle data, why paid tiers of tools like ChatGPT and Claude are fundamentally different from free versions, and where the real risks actually live. From sensitive data and GDPR concerns to application-level security and prompt abuse, they unpack the nuances leaders need to understand to make informed decisions instead of reactive ones.
The discussion also dives into practical options for higher-security use cases, including anonymization, AWS Bedrock, and even running models on your own hardware. But the episode ultimately lands on a clear message: most AI risk isn’t technical, it’s human. Without clear policies and training, employees will make inconsistent choices that expose the business. AI isn’t something to fear, but it does demand intention, moderation, and leadership.
Episode notes
- Why fear of AI safety is slowing real adoption in businesses
- The difference between free and paid LLM tiers, and why it matters
- What “model training” actually means for your data
- When “improbable” risk still isn’t good enough
- Practical guidance on handling PHI, PII, and sensitive data
- How anonymization solves more problems than most people realize
- When it makes sense to use AWS Bedrock or self-hosted models
- The hidden security risks in app-to-LLM integrations
- Why logging, tracing, and LLM ops tools are becoming essential
- How users (especially students) can unintentionally rack up huge AI bills
- New attack vectors like prompt injection via websites and agents
- Why most security breaches are human error, not hackers
- The critical importance of having an AI usage policy
- How to make AI security training more engaging and effective
Episode Transcript
Justin Davis: Happy Friday, and welcome. It’s good to have everybody here. My name is Justin Davis. I’m the Vice President of User Experience at Sourcetoad, and I’m joined by…
Greg Ross-Munro: Greg Ross-Munro. I’m the CEO of Sourcetoad, and I’m in charge of making sure the beer fridge is fully stocked and the snack wall is overflowing.
Justin Davis: Yes, and the snack wall and the beer fridge are great. If you’re in the Tampa Bay area, come by Sourcetoad on Busch Boulevard.
Greg Ross-Munro: The unrivaled collection of Oreos, possibly in the entire southeast.
Justin Davis: I would agree. I don’t think there’s another office building that could beat us. If there is, let us know.
Greg Ross-Munro: Today we’re talking about how safe AI really is for your business. Can you put things into these models? Can you feed ChatGPT credit card information, Social Security numbers, employee pay records, or other sensitive data? Is that a good idea, and should you be doing it?
Justin Davis: Right. A lot of businesses are interested in AI, but far fewer are actually using it in their operations. One of the biggest reasons is fear. Is it safe? Am I going to get in trouble? Will my data get leaked? That fear creates a chilling effect. In this episode, we want to talk about how to tell what’s probably okay, what needs more thought, and what trade-offs you’re really making.
There are major opportunity costs to not using AI, both internally and customer-facing. Sometimes the security and privacy trade-offs people make go too far.
Greg Ross-Munro: A good place to start is to think back a couple of years, when ChatGPT was around version 3.5. Samsung famously put sensitive product information into ChatGPT. Microsoft researchers could see that data coming in and warned Samsung not to do that. That story freaked a lot of people out.
At the time, the answer really was yes, someone at the model provider could see what you were typing.
Justin Davis: The mental model is important. Putting data into an LLM is a bit like putting paper into a shredder. You can’t ask it to dump out everyone’s Social Security numbers, but scraps of data exist and could theoretically be reconstituted. Is that likely? Probably not. Is it impossible? No.
For some data, improbable isn’t good enough. You want impossible.
Greg Ross-Munro: That brings us to tiers. Most LLMs have a free tier and a paid tier. The free tier can use your data to train the model. The paid tier, usually around $20 a month, promises not to train on your data.
If you’re not paying that $20 per employee, you should seriously ask why.
Justin Davis: That’s like one lunch a month.
Greg Ross-Munro: With the paid tier, your data shouldn’t end up in someone else’s chat. You also usually get legal protection. OpenAI, for example, will indemnify you if data leaks from their system. That’s a huge deal.
Justin Davis: One important thing: even with paid accounts, you need to go into settings and turn off data sharing. In ChatGPT, that’s under Data Controls. Make sure “improve the model for everyone” is turned off.
Greg Ross-Munro: At the company level, you can enforce this. Employees should be using company accounts, not personal ones, and admins should control those policies.
Justin Davis: That’s where policy comes in. There’s the technology side, but there’s also the question of how employees know what they can and can’t do.
Greg Ross-Munro: So is it really true that you can safely put whatever you want into ChatGPT if you’re paying for it? I don’t think it’s quite that simple.
Justin Davis: There’s nuance. We tend to overestimate rare but dramatic risks, like airplane crashes, and underestimate everyday risks. A useful test is this: if this were a sheet of paper that flew out of your car, would you be okay with someone finding it?
Things like Social Security numbers, medical records, and identifiable health information should generally stay out. But you often don’t need that raw data anyway. You can anonymize datasets and analyze patterns without exposing sensitive details.
Greg Ross-Munro: For very sensitive cases, especially with GDPR concerns, some clients run their own models. We’ve used AWS Bedrock to do this. Justin, want to explain what that is?
Justin Davis: AWS Bedrock lets you run models from different vendors on infrastructure you control. Think of it like putting the data in a locked file cabinet you own. The data doesn’t leave that environment.
The risk never goes to zero unless the hardware is physically in the room with you, but it drops dramatically.
Greg Ross-Munro: The downside is that things change fast. Keeping models up to date and managing the tooling becomes a full-time job.
Justin Davis: For most companies, off-the-shelf tools like ChatGPT, Claude, or Gemini are more than sufficient. But for law firms or highly regulated industries, running your own infrastructure can make sense.
Greg Ross-Munro: Another big security concern isn’t the chat itself, but how LLMs are embedded into applications. The connection between your app and the model is a traditional attack surface.
Justin Davis: Exactly. LLMs are often surrounded by regular web services, and those are where things usually go wrong.
Greg Ross-Munro: That’s why tooling like Langfuse exists. It gives you logging, auditability, and traceability, so you can see what’s happening and why your OpenAI bill suddenly exploded.
Justin Davis: We’ve also seen abuse from users, especially students. People will try to jailbreak systems or spam prompts just to see what happens.
Greg Ross-Munro: Middle schoolers are excellent at breaking software.
Justin Davis: Another new attack vector is prompt injection through websites. Agents that read web pages can be tricked by hidden instructions embedded in the page.
Greg Ross-Munro: We actually did something similar in an RFP once, hiding a prompt in white text. They found it because they were clearly using an LLM.
Justin Davis: That was a great honeypot.
Greg Ross-Munro: At the end of the day, most security breaches come from human error. Not hackers in basements. People clicking links, sharing passwords, or using tools incorrectly.
If you don’t have an AI usage policy, you’re taking on unnecessary risk. Your employees are already using AI.
Justin Davis: Policies and training matter. Make it interesting. Run drills. Use AI to simulate attacks. Anything is better than a boring PowerPoint.
Greg Ross-Munro: You can even ask ChatGPT to help write your policy.
Justin Davis: My final thought is moderation. Be realistic about risk and trade-offs. Don’t overcorrect in either direction. Use the tools, minimize risk, but don’t be scared of them.
Greg Ross-Munro: Most leaders say they’re using AI, but very few have policies. If you’re listening to this, go work on a policy.
Thanks for listening. Have a great weekend.
