It’s powerful, useful — and potentially risky as hell. Welcome to Ask AK, a new weekly column where I tackle your real-world tech questions — with straight-up answers you can actually use. Whether it’s AI, gadgets, security, or the latest overhyped app, if you’re wondering about it, I’m here to break it down.
This week’s question came in hot:
“Is it safe to use ChatGPT to help write work emails?”
Short answer? Yes — but only if you’re not reckless.
Long answer? Let’s get into it.
The Rise of the AI Assistant at Work
We’re in an age where AI isn’t just a buzzword anymore — it’s sitting right on your desk, helping you write follow-up emails, summarize meeting notes, and reword that awkward client reply you drafted at 11 PM.
ChatGPT is the star of that AI show. It’s being used in businesses big and small to:
- Write emails and memos
- Draft content and reports
- Brainstorm ideas and strategies
- Translate and simplify documents
Sounds harmless, right? But here’s the kicker: it only knows what you tell it. And that’s where things get messy.
The Real Risk: You Are the Weakest Link
Let’s call it what it is — ChatGPT is only dangerous if you feed it sensitive info. But that’s exactly what people are doing.
According to research by Cyberhaven, a shocking 11% of what employees paste into ChatGPT is confidential data. We’re talking internal documents, customer info, strategy decks, and even login credentials in some cases.
This is how companies get burned:
- A dev copies code into ChatGPT to debug it — not realizing it contains proprietary IP.
- A marketing manager pastes a client contract to get help rewording a clause.
- An exec uploads meeting notes filled with unreleased plans and performance data.
The moment that info is entered, it leaves your ecosystem and enters theirs. And while OpenAI claims it doesn’t train on data from ChatGPT Enterprise accounts, the free version? Different story. They even say in their own FAQs:
“We are not able to delete specific prompts from your history. Please don’t share any sensitive information.”
Translation? Once it’s in, it’s in.
The Security Headaches No One Talks About
Here’s what else can go wrong:
- Privacy breaches – If someone hacks into your ChatGPT account, they see everything you’ve shared.
- Data leaks – ChatGPT might accidentally resurface data it learned from other users in completely unrelated conversations. Rare, but not impossible.
- Phishing and social engineering – Hackers can use ChatGPT to craft incredibly realistic emails that fool your team into clicking malicious links.
- Inaccurate info – It’s not just about data leaks. Sometimes the AI just makes stuff up. And if you send that to a client? That’s your rep on the line.
And don’t think regulators are sleeping on this. Italy temporarily banned ChatGPT over data protection concerns, and companies like Amazon, JPMorgan, and Apple have put hard bans in place for internal use.
How to Use ChatGPT at Work Safely
You don’t need to swear off ChatGPT completely. But if you’re going to use it, use it smart.
Here’s your playbook:
- ✅ Never share confidential info.
No client names, internal plans, financial data, or anything you wouldn’t post on your LinkedIn profile.
- ✅ Stick to generic drafting.
Use it to rephrase generic outreach, brainstorm ideas, or clean up grammar — not to write your business strategy.
- ✅ Get your IT team involved.
Use tools like Metomic that integrate with ChatGPT to monitor data sharing and flag risky behaviour before it becomes a headline.
- ✅ Train your team.
Employees aren’t malicious — they’re just moving fast. Regular AI security training goes a long way.
- ✅ Use an enterprise plan if possible.
It offers stronger data privacy guarantees than the free version, and your data isn’t used to train the model.
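To make the “flag risky behaviour” idea concrete: here’s a minimal sketch of the kind of pre-send check a data-monitoring tool performs — scanning a draft prompt for obviously sensitive patterns before anything leaves your machine. The pattern names and regexes below are illustrative assumptions, not the actual rules any real product (Metomic or otherwise) uses.

```python
import re

# Illustrative patterns only -- real DLP tools use far more thorough detection.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api-key-like token": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def flag_sensitive(prompt: str) -> list[str]:
    """Return the names of any sensitive-looking patterns found in the prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

# A non-empty result means: stop and scrub before you paste.
warnings = flag_sensitive("Please reword this note to jane.doe@acme.com")
```

Even a rough check like this catches the most common slip — pasting a real customer’s contact details into a “reword this email” prompt without thinking.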
Final Take: Smart Tool, Dumb Mistakes
Look, ChatGPT isn’t evil. It’s a tool — and like any tool, it’s only as smart as the person using it.
Used wisely, it can save you time, sharpen your writing, and free you up for higher-level thinking. But used carelessly? It’s a digital megaphone you just screamed your client’s secrets into.
So next time you’re about to paste something into ChatGPT, ask yourself:
Would I be okay if this showed up in someone else’s inbox tomorrow?
If the answer’s no? You already know what to do.
Got a tech question? Ask AK!
Send us a DM, or drop a comment on our social pages, and I’ll tackle it in an upcoming column.
