Hollywood has been ringing the alarm bells about the dangers of pervasive, unchecked artificial intelligence for decades.
Now it’s here, and already everywhere.
Last week, “Terminator” director James Cameron reminded the world that he warned us in 1984. Meanwhile, two of today’s biggest blockbusters — “Mission: Impossible 7” and “Oppenheimer” — center on ominous tales of technology run amok.
While both tech companies and governments grapple over how to regulate AI, the genie is not only out of the bottle, she’s already moved in and set up house.
To be sure, AI technology is firmly established in the online services and apps we use every day, on our mobile devices and in our smart appliances. But generative AI — tools like ChatGPT, Google Bard, Microsoft Copilot and Anthropic’s Claude — has sparked a megaton explosion that has already started to disrupt entire industries in a matter of months.
More than half of Americans fear AI poses a threat to humanity, according to a Reuters poll. The Pew Research Center found that most people thought AI would do more harm than good in the workplace.
Yet several surveys — by CNBC, Microsoft and Monster.com — also found ChatGPT was already being used at work by one-third to half of all employees. AI is writing emails, summarizing meeting minutes, reviewing contracts and writing software. Look around your office right now — AI is probably involved in running your business today.
In real estate, law or software development? AI is definitely in the house. And that tool to detect AI-written content that OpenAI released in January? The company unplugged it last week for being ineffective.
The takeaway? The best time to discuss AI and develop policies for your organization was before Nov. 30, 2022, when OpenAI released ChatGPT and kicked off the generative AI arms race. The second-best time is now.
For many companies, the main threat these tools pose is to privacy, confidentiality and trade secrets. Employees who submit documents to these tools to extract insights and find problems aren’t sending them into a void — that information is going into the hungry maw of a private company. While it’s not as if Pepsi can simply ask ChatGPT about Coca-Cola’s plans, it’s too great an unknown for companies not to set a policy now.
Even if AI can be safely used — perhaps drafting a company holiday email announcement or writing a description for an oceanfront condo for sale — it’s critically important to check its work.
Because all generative AI tools are essentially predicting the most plausible way to complete a sentence, they can get things wrong. Very wrong. These “hallucinations” have led to inaccurate biographies, triggered libel lawsuits and disrupted court proceedings. Try asking ChatGPT to describe your company or your CEO. That should be warning enough.
Next, disclosure is vital. Call it the “may contain nuts” rule. Is using AI illegal? No. And a disclaimer that AI was used in creating a product or published content is not really going to protect you from a legal standpoint. But it’s the ethical and responsible thing to do.
Even if ChatGPT does a great job writing an article for your blog, you should let readers know it was written (or co-written) by AI. And, frankly, discerning readers can tell — at least for now.
Finally, remember that AI is just a tool, a hammer, that can be used to build or cause damage. It should be a part of your corporate risk assessment process, if not an entire chapter. Just channel your inner Hollywood director: How could these tools be used maliciously, enlisted against you? Those emails pretending to be your boss, asking for a wire transfer? They’re only going to get better, more realistic and more interactive.
Artificial intelligence could disrupt work and business in good ways: automating small tasks, catching loopholes in contracts and helping programmers code better and faster. Customer support chatbots were already everywhere, and now they can talk on the phone and engage in conversation.
But consider ChatGPT and its ilk to be merely overeager, ambitious unpaid college interns. AI should only empower — not replace — people. Because once you let artificial intelligence speak for your company, it’s not really your company anymore.
———
Ryan Kawailani Ozawa publishes Hawaii Bulletin, an email newsletter covering local tech and innovation. Read and subscribe at HawaiiBulletin.com.