To AI or Not to AI: Assessing Risk and Opportunity Cost
May 16, 2023
Thing that is exciting: Getting a fun new profile avatar in which your face has been illustrated using generative AI.
Thing that is not exciting: Learning that an artist's work was likely used in a machine learning model's training data to produce that image -- and the artist went uncompensated for it (see: this Disney artist's example).
Thing that is exciting: Realizing efficiency gains as your staff more quickly creates content for your website, blog, ad copy, etc.
Thing that is not exciting: Realizing that using ChatGPT or Bard to create that content generally produces generic, redundant prose that doesn't meet your quality standards -- and leaves you and your team spending more time editing what was churned out.
Thing that is exciting: Quickly debugging product code or summarizing proprietary company meeting notes by plugging them into ChatGPT.
Thing that is not exciting: Realizing you just put your organization's confidential information in the hands of a generative AI model and have lost the ability to control what happens next with it.
AI is more than a buzzword that's everywhere right now. AI is complex, algorithm-driven, and carries risk. It powers a number of different tools run by a number of different companies.
Does that mean you definitely shouldn't use AI, or tools that employ it? No. Keeping abreast of emerging innovation is important so that you can weigh the risk of adopting a new tool against the opportunity cost of not adopting it.
With AI -- and tools like ChatGPT specifically -- we need to consider these things:
ChatGPT's model is trained on consumer data -- without those consumers' consent. It also doesn't sufficiently protect minors. On their own, these points are important to consider, because you take on the risk that your third parties take on when you engage with them. Both Italy and Canada are considering regulation on this basis, and we don't yet know what that will look like in full scope.
Given what we know about algorithmic learning -- which is far more than the educated public knew even a decade ago -- we can't ignore the moral and ethical considerations we should account for when we feed data to an algorithm. Laws are in place to protect against unfair or deceptive practices that result in inequities in housing, lending, employment, etc. However, emerging technologies like generative AI learn from the data given to them early on, before they're held to account -- making it harder to correct biases baked in through training on data sets that may be inherently flawed. When you can't review the underlying data that informed the content or code you just produced, you can't protect against errors or morally questionable practices, such as how that data was collected and from whom. This presents moral, legal, reputational, and revenue risk.
Using a trendy tool like ChatGPT or Bard doesn't make you an AI expert -- and you should communicate that to your team often. If you've decided to give your team clearance to use tools like these, be clear that you're granting access to these tools only. Semantics matter here: if you say "our team uses AI," your scope -- and presumed approval of the tools your team will engage with -- just expanded considerably. Be clear: "You have permission to use [tool] at work for these specific XXX purposes."
As in all things, you should be clear about what problem you're trying to solve before you decide you even need to use AI tools. Start with these, and if you can't clearly point to AI as your solution, you should keep exploring.
What is your goal?
What’s stopping you from reaching the goal?
How much change is needed to overcome the problem?
Are you certain this will solve the problem?
I support teams in assessing the business case for innovation, and I host workshops that foster creativity among your team and get to the heart of your challenges and potential solutions.