Stay Agile Blog

I write about the need for transparency, efficiency, equity, and diversification: in contracts, in tech stack development, in content pipelines, in media placement, in investment and revenue streams, in team and channel development, in attribution methods, and more.

Building Ethical and Sustainable Growth Engines

advertising strategy, audience trust, investment strategy | Nov 06, 2023

What does the “Wolf of Wall Street” have to do with modern-day social issues? A lot, as it turns out.

In 1995, the wolf’s own brokerage firm, Stratton Oakmont, sued an online service called Prodigy because the company didn’t remove user-generated message-board posts that accused the firm of fraud.

Prodigy argued that, because it wasn’t a publisher like a newspaper, it wasn’t liable for content posted on its site. The court, however, found that because Prodigy had removed some content it deemed offensive (Prodigy wanted to be a “family-oriented platform”), it had taken on the role of publisher.

Essentially, Prodigy’s decision to moderate some of the content it hosted had increased its liability for all of it.

The decision alarmed members of Congress who didn’t want tech companies to avoid moderating content for fear of increasing their risk exposure.

To incentivize content clean-up, Congress introduced Section 230 of the Communications Decency Act. The provision, adopted in 1996, shielded technology providers from being treated like publishers – a role in which they would assume responsibility for hosted content – while also ensuring they wouldn’t be held liable for good-faith actions to remove offensive material.

What’s happened in the years since is unfortunate: State and lower federal courts have interpreted the law as blanket immunity for tech companies, regardless of whether they moderate. “Bad Samaritans” – sites that deliberately republish harmful or illegal content like hate speech, nude images shared without consent, and false news – are immunized from liability.

“Sites have no liability-based incentive to take down illicit material – especially if that material gets them extra clicks,” the author Danielle Keats Citron writes in her book The Fight for Privacy. “Digital platforms wield enormous power, yet bear no responsibility.”

Following Russian interference in the 2016 election, public pressure shifted, to some extent, the way Big Tech companies moderate content. Facebook, Google, and Twitter, among others, announced sweeping changes to their ad policies: content deemed political or a social issue would either be disallowed outright (in Twitter’s case) or require added review plus a disclaimer visible on any ad material (in Google’s and Facebook’s case).

The problem, however, is that there is no oversight of what technology companies deem a social issue. Facebook’s list of categories requiring review, for example, includes many topics concerning women that, when interpreted broadly, limit the reach of organizations like Planned Parenthood, which offers women’s health services in addition to services classified as more “political” in nature, such as abortion care.

This lack of applied nuance is a phenomenon that researchers publishing in SAGE Journals call “flattening.”

“Because ‘adult male’ is the norm, conditions considered normal for adult males do not require categories,” the researchers note. “When it comes to political issues [on social media], we see that most categories do not subdivide into more specific interests.”

Organic content – unpaid, user-generated content – receives no more review or moderation than it ever did, presumably because bad behavior drives traffic. Notably, Facebook has even allowed conservative news outlets and personalities to repeatedly spread false information without facing any of the company’s stated penalties.

And this doesn’t stop with Facebook. “It can be argued,” the author Safiya Noble writes in her book Algorithms of Oppression, “that Google functions in the interests of its most influential paid advertisers or through an intersection of popular and commercial interests.” Noble asserts that Google biases information toward the interests of neoliberalism and social elites.

As a result, we face a double-edged sword: paid content that’s deemed political if it touches on certain LGBTQ+, social justice, or women’s issues, alongside unmoderated organic speech that gains more algorithmic traction when it reflects conservative, rather than unbiased, leanings.

What does this have to do with our everyday decision-making as marketers?

When we share data with Big Tech companies, continue to ramp up our advertising investment with them, or shy away from understanding how hate speech spreads and persists on these channels, we perpetuate the problem.

It’s imperative that we continue to explore strategies and best practices that help close this gap in the years ahead by:

  1. Asking better questions about how our tech partners use audience data
  2. Returning to a view of marketing as a creative exercise, rather than a purely algorithmically aided one, as we once did
  3. Diversifying our pipelines to build ethical and sustainable growth engines.

--

When you're ready, here's how I can help you with your investment diversification strategy:

>> Book 1:1 help to formulate your action plan or discuss a strategic objective.

>> Contact me about ongoing support with financial projections, board relationships, blended investment planning, and more.
