Stay Agile Blog

I write about the need for transparency, efficiency, equity, and diversification: in contracts, in tech stack development, in content pipelines, in media placement, in investment and revenue streams, in team and channel development, in attribution methods, and more.

To AI or Not to AI: Assessing Risk and Opportunity Cost

Tags: ai, operations, planning, privacy | Jan 02, 2026

There’s a pattern I see every time a new technology enters the mainstream.

The excitement is real. The efficiency gains look obvious. And the risks feel abstract — until they’re not.

The excitement (and the tradeoffs)

Thing that’s exciting:
Generating a polished profile image or illustration in seconds using a generative AI tool.

Thing that’s less exciting:
Realizing that the model producing that image was likely trained on artists’ work without consent or compensation, and that your organization is now participating in that ecosystem, whether you intended to or not.

Thing that’s exciting:
Helping staff draft content faster: blog posts, emails, summaries, internal documentation.

Thing that’s less exciting:
Discovering that the output is generic, repetitive, or subtly off-brand — requiring just as much time to correct, contextualize, and approve as if it had been written from scratch.

Thing that’s exciting:
Debugging code, summarizing meeting notes, or accelerating research with a few well-written prompts.

Thing that’s not exciting at all:
Realizing that confidential, sensitive, or regulated information was just fed into a third-party system you don’t control, with no clear understanding of how that data is stored, reused, or retained.

This is the tension at the heart of AI adoption.

AI isn’t one thing. It isn’t one tool. And it isn’t just a productivity hack.

It’s a collection of technologies, owned by third parties, trained on opaque data sets, and embedded into tools your staff is increasingly using by default — often before organizations have decided what’s acceptable.

That doesn’t mean organizations should avoid AI. But it does mean AI requires more intentional decision-making than most teams are prepared for.

The real question isn’t “Should we use AI?”

It’s:

Do we understand the risk we’re taking on — and the opportunity we’re trading off — when we do?

Every organization is already making AI decisions, whether explicitly or implicitly:

  • by allowing staff to experiment freely,

  • by integrating AI-powered features into existing platforms,

  • or by saying nothing at all and letting practices emerge informally.

Silence is still a decision.

What organizations need to consider beyond the headlines

1. You inherit the risk profile of your third parties

Many generative AI tools have been trained on consumer data without meaningful consent. Some have faced scrutiny for insufficient protections around minors’ data, intellectual property, or sensitive information.

While the regulatory landscape continues to evolve — particularly at the U.S. state level — one thing is already clear:

When you use a third-party AI tool, you inherit their data practices as part of your own risk exposure.

That risk isn’t only legal. It’s reputational. It’s ethical. And for mission-driven organizations, it directly affects trust.

2. Bias, inference, and amplification are not theoretical risks

We now understand far more about algorithmic bias than we did even a decade ago. AI systems don’t just reflect the data they’re trained on; they amplify it.

That matters because:

  • flawed or incomplete datasets create skewed outputs,

  • early training decisions are hard to unwind later,

  • and many models don’t allow users to inspect or audit underlying training data.

When organizations can’t explain:

  • where insights came from,

  • what assumptions were embedded,

  • or how outputs were generated,

they also can’t confidently stand behind the decisions those outputs inform.

That introduces legal risk, yes, but also moral and mission risk.

3. “Using AI” is not a strategy, and it’s not a skill set

Granting access to ChatGPT, Bard, or embedded AI features does not make an organization “AI-enabled.”

Language matters here.

There is a meaningful difference between:

“Our team uses AI.”

and:

“Our team is approved to use these specific tools, for these defined purposes, with these guardrails in place.”

Without that clarity:

  • scope creeps,

  • expectations expand,

  • and accountability disappears.

Teams don’t need permission to “use AI.” They need guidance on when, why, and how it’s appropriate.
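One way to make that kind of guidance concrete is an explicit allowlist: which tools, for which purposes, with which data. The sketch below is purely illustrative (the tool names, purposes, and sensitivity tiers are hypothetical placeholders, not recommendations), but it shows how "approved to use these specific tools, for these defined purposes" can become something a team can actually check against:

```python
# Illustrative only: a minimal AI acceptable-use allowlist.
# Tool names, purposes, and sensitivity tiers are hypothetical examples.

APPROVED_USES = {
    # tool: (approved purposes, highest data sensitivity allowed)
    "general-chatbot": ({"drafting", "summarization"}, "public"),
    "code-assistant": ({"code-review"}, "internal"),
}

# Sensitivity tiers, from least to most restricted.
SENSITIVITY_ORDER = ["public", "internal", "confidential", "regulated"]

def is_approved(tool: str, purpose: str, data_class: str) -> bool:
    """True only when tool, purpose, AND data class are all within policy."""
    entry = APPROVED_USES.get(tool)
    if entry is None:
        return False  # unlisted tools are not approved by default
    purposes, max_class = entry
    return (purpose in purposes
            and SENSITIVITY_ORDER.index(data_class)
                <= SENSITIVITY_ORDER.index(max_class))

# Drafting with public data is in policy; regulated data is not.
print(is_approved("general-chatbot", "drafting", "public"))     # True
print(is_approved("general-chatbot", "drafting", "regulated"))  # False
```

The point isn't the code; it's that the policy names specific tools and purposes, and defaults to "no" for anything unlisted, rather than granting blanket permission to "use AI."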

4. AI should solve a problem, not create one

Before adopting or expanding AI use, organizations should be able to answer a few basic questions:

  • What problem are we trying to solve?

  • What’s preventing us from solving it today?

  • How much change is actually required?

  • And how confident are we that AI addresses the root issue — not just the symptoms?

If the answers are vague, the risk of unintended consequences increases.

AI is powerful, but power without clarity tends to create rework, not efficiency.

So where does that leave organizations?

The most effective teams aren’t rushing to adopt or ban tools outright. They’re slowing down just enough to:

  • define acceptable use,

  • clarify consent and data boundaries,

  • and establish decision-making frameworks that hold up as tools evolve.

In other words, they’re treating AI readiness as a governance challenge, not a technology race.

That’s what allows organizations to:

  • support responsible experimentation,

  • protect sensitive data,

  • and move forward with confidence instead of fear.

Final thought

AI doesn’t eliminate the need for judgment. It raises the stakes for having it.

Organizations that invest now in clarity — around data use, consent, and accountability — will be far better positioned to use AI effectively and responsibly.

If you’d like support assessing where AI introduces risk or opportunity for your organization, I work with teams through structured assessments and facilitated workshops that bring clarity before decisions get made by default.

You can contact me to learn more or schedule a conversation.
