How to Avoid Revealing Too Much in Your Nonprofit Advertising
Jan 31, 2026

At a recent privacy event, I heard privacy counsel who works with advertising practitioners share a piece of guidance that I found tremendously helpful in its simplicity:
"Your advertising should never make a prospect feel like you have more information about them than you either should have or actually do have."
It sounds straightforward, but when you start examining your actual advertising practices, especially if you work for a social good organization that collects sensitive information, the implications run deep.
For nonprofits and mission-driven organizations, this guidance is particularly critical. You're often working with health information, demographic data, and qualitative details about your audience's lived experiences. The data you collect is inherently sensitive, and how you use it in your advertising directly impacts whether your audience trusts you with it.
What "Revealing Too Much" Actually Looks Like
Here are concrete examples of how advertising can inadvertently signal that you know more than you should:
Example 1: Health Condition Retargeting
The scenario: Someone visits your website's diabetes support resources page but never fills out a form or identifies themselves. Two days later, they see a Facebook ad from your organization that says: "Living with diabetes? You're not alone. Join our community."
The problem: You've just revealed that you tracked their browsing behavior on a sensitive health topic. Even if they did visit that page, seeing the ad makes them feel surveilled, especially if they haven't publicly disclosed their condition or if they were researching for someone else.
Example 2: Demographic Inference in Creative
The scenario: Your organization serves LGBTQ+ youth. You create a lookalike audience based on your existing donors and run ads that say: "As a member of the LGBTQ+ community, you understand why our work matters."
The problem: Not everyone in that lookalike audience has disclosed their identity to you, and they may not be out publicly. The ad assumes information they never shared and could inadvertently out someone or make them feel like you've made assumptions based on their online behavior.
Example 3: Geographic + Life Event Targeting
The scenario: You run a women's shelter and target ads to women in specific zip codes using the headline: "Escaping domestic violence? We can help."
The problem: Even if your targeting is sophisticated, this ad could appear to someone on a shared device or in a household where seeing it puts them at risk. It also signals that you're making assumptions about their circumstances based on their location and demographic profile.
Example 4: Sensitive Search Behavior
The scenario: Someone searches for "addiction recovery programs" and later sees your ad: "Ready to get clean? Our 90-day program has a 75% success rate."
The problem: The ad language assumes they're struggling with addiction when they may have been researching for a family member, writing an article, or exploring options hypothetically. The direct "you" language makes them feel identified and tracked.
In each of these cases, the advertising isn't necessarily doing anything illegal. But it is violating an important trust principle: it's revealing that you've made inferences about sensitive aspects of someone's life based on their digital behavior — inferences they never explicitly authorized you to make or act on.
Why This Is a Cross-Functional Issue
The reason this guidance is so important — and so challenging — is that it requires coordination across teams that don't typically work together closely on privacy issues.
Here's what each team needs to understand:
Your Product/Tech Team
They're responsible for:
- What data you're collecting through pixels, cookies, and tracking technologies
- What audience segments you're building based on site behavior
- What information you're passing to advertising platforms
- Whether your consent management actually limits data flow when people opt out
What they need to know: Even if you have permission to collect behavioral data, using it in ways that make people feel surveilled can damage trust and violate the spirit of privacy laws, even if you're technically compliant.
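To make that concrete, here's a minimal sketch of what a consent gate in front of an advertising pixel might look like. Everything in it is a hypothetical stand-in (the `ConsentState` shape, `readConsent`, `loadPixel`); your actual consent management platform and pixels have their own APIs. The point it illustrates: an opt-out has to stop data flowing to ad platforms, not just dismiss a banner.

```typescript
// Minimal sketch of consent-gated tracking. `ConsentState`, `readConsent`,
// and `loadPixel` are hypothetical stand-ins for whatever consent
// management platform (CMP) and ad pixels your site actually uses.

type ConsentState = {
  analytics: boolean;   // first-party measurement
  advertising: boolean; // retargeting pixels, audience syncs
};

// Stand-in: a real implementation would read your CMP's consent signal.
function readConsent(): ConsentState {
  return { analytics: true, advertising: false };
}

// Stand-in: a real implementation would inject the vendor's pixel script.
function loadPixel(vendor: string): void {
  console.log(`loading pixel for ${vendor}`);
}

function initTracking(): void {
  const consent = readConsent();

  // No advertising consent means no pixel and no audience sync:
  // the opt-out has to stop the data flow, not just hide a banner.
  if (consent.advertising) {
    loadPixel("ad-platform");
  }
}

initTracking();
```

The same gate belongs in front of every audience sync and server-side event you send, not just the client-side pixel.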
Your Media/Advertising Team
They're responsible for:
- How you're building and deploying audience segments
- What targeting parameters you're using (behavioral, demographic, lookalike)
- Whether retargeting campaigns are appropriate given the sensitivity of your content
- How you're measuring and optimizing based on user behavior
What they need to know: Just because a platform allows you to target based on certain criteria doesn't mean you should. The audiences you build and the way you deploy them carry privacy implications that the platform's terms of service don't address.
Your Creative Team
They're responsible for:
- The language and imagery in your ads
- Whether the creative implies knowledge about the viewer
- How personalized the messaging feels
- Whether the tone assumes information that wasn't explicitly shared
What they need to know: Copy that works great for an email to an existing supporter can feel intrusive in a retargeted ad. The creative needs to be calibrated to what the viewer has actually shared with you, not what you've inferred about them.
Your Legal/Compliance Team
They're responsible for:
- Ensuring you have legal basis for data collection and use
- Reviewing vendor contracts and data processing agreements
- Understanding what constitutes "sensitive data" under various laws
- Advising on consent requirements and opt-out mechanisms
What they need to know: Legal compliance is the floor, not the ceiling. Practices that are technically legal can still erode trust, and trust erosion eventually becomes a legal and reputational risk. They need to understand the business context, not just the compliance checkboxes.
The problem is that these teams often operate in silos. Product builds the tracking infrastructure. Media creates the audiences. Creative writes the copy. Legal reviews the privacy policy. And nobody's looking at the full picture of what the audience actually experiences.
What to Do About It
The solution isn't to stop advertising or to eliminate all targeting. It's to create a cross-functional process that ensures everyone understands the privacy implications of their work and has a voice in how you balance effectiveness with trust.
This requires two things:
1. Audit Your Current Practices
Before you can fix the problem, you need to understand where you currently stand. This means examining your data collection, audience building, creative messaging, and cross-functional alignment.
2. Build a Privacy Working Group
Privacy can't just be "legal's problem." You need a cross-functional team that meets regularly to review campaigns, establish guidelines, and ensure everyone understands their role in protecting audience privacy.
The Bottom Line
Living up to that guidance from the privacy event — "Your advertising should never make a prospect feel like you have more information about them than you either should have or actually do have" — requires rethinking how your teams work together, how you balance performance with trust, and how you translate privacy principles into operational practices that everyone understands and follows.
For social good organizations handling sensitive data, this isn't optional. Your audiences are trusting you with some of the most personal aspects of their lives. How you use that information in your advertising directly impacts whether that trust continues or gets broken.
The organizations that get this right won't just avoid regulatory risk. They'll build deeper relationships, earn more trust, and ultimately be more effective at their missions.
If you need support building a privacy working group or establishing a charter for your organization, I offer customized workshops and ongoing consulting to help teams navigate these conversations. Let's talk about what would be most useful for your situation.
Frequently Asked Questions
What counts as "sensitive data" in advertising?
Sensitive data definitions vary by jurisdiction but often include information about health conditions, sexual orientation, gender identity, race or ethnicity, religious beliefs, financial hardship, domestic violence, addiction, mental health, disability status, immigration status, and other aspects of identity or personal circumstances that could cause harm or discrimination if revealed. For nonprofits, this often extends to any information related to why someone might need your services. Even if someone visits a public webpage, using that behavioral data to target them with advertising can reveal sensitive inferences you've made about their circumstances.
How do we know if our ads are revealing too much?
Ask yourself: "If someone saw this ad, would they feel like we know something personal about them that they never explicitly told us?" Look at your ad copy and targeting together. Does the combination of who you're targeting and what you're saying imply knowledge about their health, identity, personal crisis, or life circumstances? Would this ad make someone uncomfortable if seen on a shared device? If the answer to any of these is yes, you're likely revealing too much. Another test: Would you be comfortable explaining to that person exactly how and why they were targeted with this specific message?
Who should be involved in reviewing ad creative for privacy concerns?
At minimum: your legal/compliance team, your media/advertising team, your creative team, and someone from product/technology who understands what data is being collected and how. Ideally, you should also include a representative from the community you serve who can flag when messaging might feel intrusive or revealing to someone in that audience. The review should happen before campaigns launch, not after. Many organizations formalize this through a privacy working group that meets regularly to review campaigns and establish ongoing guidelines.
What's a privacy working group and do we need one?
A privacy working group is a cross-functional team that meets regularly to ensure privacy considerations are integrated into operational decisions, not just handled as a legal compliance exercise. Most organizations can benefit from one, especially if: (1) you collect sensitive data about your audience, (2) you use behavioral tracking or retargeting in your advertising, (3) your teams (product, media, creative, legal) aren't currently coordinating on privacy issues, or (4) you've had instances where ads or campaigns raised privacy concerns after launch. The working group creates shared understanding, establishes guidelines, reviews campaigns, and ensures everyone understands their role in protecting audience privacy.
How do retargeting and lookalike audiences factor into this?
Retargeting shows ads to people based on their previous behavior on your website, which means you're revealing that you tracked what they viewed. If they visited sensitive content (health resources, crisis support, identity-related pages), retargeting them can feel especially invasive because it signals you know about a private aspect of their life. Lookalike audiences are built by finding people who "look like" your existing audience based on platform data — but you don't always control what criteria the platform uses, and you may be targeting people with inferred characteristics they never disclosed. For both tactics, ask: Are we revealing information or making assumptions that could make someone uncomfortable? If so, either avoid those audiences or adjust the creative to be more generic and less assumptive.
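One way to operationalize "avoid those audiences" is a deny-list of sensitive paths, checked before any page view feeds a retargeting pool. This is a minimal sketch under stated assumptions, not any platform's actual API; the path prefixes and `addToRetargetingAudience` are hypothetical stand-ins for your own site structure and ad integrations.

```typescript
// Minimal sketch of a sensitive-path guardrail for retargeting. The path
// prefixes and `addToRetargetingAudience` are hypothetical stand-ins for
// your own site structure and ad platform integration.

const SENSITIVE_PATH_PREFIXES = [
  "/diabetes-support",
  "/addiction-recovery",
  "/lgbtq-youth-services",
  "/domestic-violence-help",
];

function isSensitivePage(path: string): boolean {
  return SENSITIVE_PATH_PREFIXES.some((prefix) => path.startsWith(prefix));
}

// Stand-in: a real implementation would call your ad platform's API.
function addToRetargetingAudience(path: string): void {
  console.log(`retargeting audience: recorded visit to ${path}`);
}

function onPageView(path: string): void {
  // Visits to sensitive content never feed a retargeting audience,
  // even when the visitor has consented to advertising cookies.
  if (isSensitivePage(path)) {
    return;
  }
  addToRetargetingAudience(path);
}

onPageView("/about");                  // eligible for retargeting
onPageView("/addiction-recovery/faq"); // silently excluded
```

A deny-list like this should be deliberately conservative: if a page is ambiguous, exclude it, because one invasive ad costs you more trust than one extra audience member is worth.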
Is behavioral targeting always bad for privacy?
No, but it requires careful consideration. Behavioral targeting based on non-sensitive content (someone visited your general "About" page or read a blog post) is generally lower risk than targeting based on sensitive content (someone visited your "Addiction Recovery Programs" page or "LGBTQ+ Youth Services" section). The key is proportionality: Does the sensitivity of the data you're using match the value exchange for the user? Are you using the minimum necessary data to achieve your goal? Are you being transparent about this in your privacy policy? And critically: Does your advertising reveal that you've made inferences about sensitive aspects of their identity or circumstances? That's where trust breaks down.
What if our current advertising practices don't meet these standards?
You're not alone. Most organizations are navigating this in real time. Start by conducting an audit of your current practices using the questions in this post. Identify the highest-risk campaigns (those targeting based on sensitive behavioral data or using assumptive creative) and address those first. Convene your cross-functional team to establish new guidelines going forward. The goal isn't perfection overnight; it's creating a process that prevents future missteps and builds trust over time. Remember: your audiences will forgive mistakes if they trust that you're trying to do right by them. They won't forgive feeling surveilled or exploited.
How does this relate to legal compliance with privacy laws?
Legal compliance is the floor, not the ceiling. Privacy laws require that you have a lawful basis for processing personal data and that you be transparent about how you use it. Drawing inferences from behavioral data and using them in advertising falls under "processing" in most privacy laws and — as California's CPPA has clarified — inferences that create new information about someone count as personal data. But even if you're technically compliant, practices that make people feel surveilled erode trust, damage your brand reputation, and ultimately create legal risk through complaints and investigations. The privacy counsel's guidance at the event was about going beyond compliance to build genuine trust, which is both ethically right and strategically smart.
Where do we start if we want to improve our practices?
Start with awareness. Share this post with your media, creative, product, and legal teams. Schedule a meeting to walk through the examples and audit questions together. Ask: Are we doing any of these things? What would our audiences feel if they knew how we were targeting them? From there, establish some initial guardrails — even simple ones like "no retargeting based on visits to sensitive content pages" or "all campaign creative must be reviewed by at least two teams before launch." Then, if you need more structure, consider formalizing a privacy working group and creating a charter. The goal is to create shared understanding and accountability so privacy becomes everyone's responsibility, not just legal's problem. If you need help facilitating these conversations or building the framework, that's exactly the kind of work I support organizations with.