The AI hype is real.
From instant subject lines and automated flow drafts to “data insights” generated in seconds — it feels like everyone’s rushing to connect their MarTech stack to AI tools like ChatGPT, Claude, and beyond.
But here’s the uncomfortable truth no one wants to say out loud:
Every prompt has a cost. And sometimes, that cost is your customer’s trust.
We’ve seen teams paste raw customer exports into AI chat windows.
We’ve seen full ESP accounts (like Klaviyo) connected to third-party plugins for “faster campaign generation.”
We’ve seen prompts that include real names, email addresses, postcodes, and brand details.
It’s fast.
It’s convenient.
And it’s a potential privacy nightmare.
First-party data ≠ free-for-all
Your CRM program is built on trust — data willingly shared by your customers in exchange for value.
That includes:
- Email addresses
- Names
- Purchase history
- Location
- Product preferences
- Loyalty or membership status
That data doesn’t belong in a public AI tool.
Not in prompts.
Not in uploads.
Not in connected plugins.
Even if you think you’re being careful, AI platforms often retain prompt data unless you explicitly opt out (and sometimes not even then — check those T&Cs). Many free or default accounts also use your inputs to train their models.
So when you or your agency upload something “just to get a faster subject line,” you might be handing over more than you realise.
Leaders: Do you know what your team or agency is sharing?
This isn’t just a marketing issue. It’s a governance issue.
We’ve worked with big brands that have strong compliance policies — but no visibility into what junior marketers or third-party agencies are plugging into AI tools.
Ask yourself:
- Are your teams trained on what not to share with AI?
- Have your agency partners been briefed on data handling expectations?
- Do you have clear guardrails on tool usage, exports, and access?
If the answer’s “not really,” then it’s time for a reset.
Best practices: Using AI without risking your data
We’re not anti-AI. Not even close.
We use AI tools internally all the time — to ideate, summarise, check code, and test ideas.
But we do it responsibly. So should you.
Here’s how to stay sharp:
1. Do a data audit
Find out what’s being shared, by whom, and through which tools. Don’t wait until something goes wrong.
2. Never share PII
No names. No email addresses. No postcodes. No loyalty IDs. If it can be tied to a person, don’t include it.
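Even a lightweight safety net helps here. Below is a minimal sketch of redacting obvious PII from a prompt before it leaves your machine — the regex patterns and the `redact` helper are illustrative assumptions, not a complete solution (real PII detection needs a proper tool; this only catches the most common cases):

```python
import re

# Hypothetical patterns for the most obvious identifiers.
# These are illustrative only and will miss plenty of real-world PII.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    # Rough UK-style postcode pattern, e.g. "SW1A 1AA" (assumption)
    "postcode": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b"),
}

def redact(prompt: str) -> str:
    """Replace obvious PII with placeholder tokens before sending a prompt."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()}]", prompt)
    return prompt

print(redact("Write a winback email for jane.doe@example.com in SW1A 1AA"))
# → Write a winback email for [EMAIL] in [POSTCODE]
```

The point isn’t the regex — it’s the habit: nothing goes into a chat window until it has passed through a redaction step.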
3. Avoid using brand names in prompts
Don’t give away strategic context that identifies your business or client — especially if you’re testing campaign copy.
4. Use sanitised data for testing
Replace real customer data with dummy values or aggregate insights when you want AI to help interpret something.
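As a sketch of what “sanitised” can mean in practice, the snippet below strips direct identifiers from a customer export and keeps only the behavioural fields an AI tool actually needs. The field names (`email`, `name`, `total_spend`, `segment`) are hypothetical — adapt them to your own export:

```python
def sanitise(rows):
    """Return anonymised rows: drop direct identifiers, keep behavioural fields."""
    return [
        {
            "customer": f"customer_{i:04d}",    # stable dummy ID, not the real one
            "total_spend": row["total_spend"],  # aggregate value, safe to share
            "segment": row.get("segment", "unknown"),
        }
        for i, row in enumerate(rows, start=1)
    ]

raw = [
    {"email": "jane@example.com", "name": "Jane", "total_spend": 240.0, "segment": "VIP"},
    {"email": "sam@example.com", "name": "Sam", "total_spend": 35.5, "segment": "new"},
]
clean = sanitise(raw)
# clean contains no emails or names — only dummy IDs, spend, and segment
```

If the AI’s answer is just as useful on the sanitised version, the real data never needed to leave your systems.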
5. Train your team and agency
Make data safety part of your onboarding. If your agency uses AI, they need to play by your rules.
6. Read the fine print
Understand what the AI platform retains, stores, or trains on — especially if you’re using free tiers or Chrome extensions.
Final word
AI is here to stay — and it’s only getting more powerful.
But don’t let speed and novelty override common sense.
Your customer data is your most valuable asset. Treat it like one.
Because in CRM, trust is everything. And once it’s broken, it’s hard to rebuild.