There is always something shiny and new to drive global trends, industry hype cycles and viral memes, so it would be easy to dismiss the current obsession with artificial intelligence as merely the “next big thing.” And as a concept that has driven largely apocalyptic science-fiction narratives for decades, it’s easy to pigeonhole AI as the existential threat of the day — a trusty source of doom-saying headlines to keep people sleeplessly clicking and scrolling.
We’re relatively fresh out of the rise and fall of virtual reality and the metaverse, after all, and blockchain and cryptocurrency before that. True believers in both will always tell us it’s still early days and that decentralized everything and ubiquitous spatial computing are coming soon.
But AI, today, is different. It has not only been working its way into our daily lives for years, but has made genuine mainstream inroads in just 16 months.
The emerging tech has made stars out of new companies and tools dedicated to the space: OpenAI (ChatGPT), Anthropic (Claude), Stability AI (Stable Diffusion), Mistral (Le Chat) and Cohere. And the biggest tech companies are all betting their futures on AI: Microsoft, Google, Amazon, Apple and Nvidia, for starters.
The rapid rate at which AI has saturated every industry was powerful enough to land it firmly on Hawaii's shores.
I was fortunate to be included on an 11-person panel discussing AI on PBS Hawaii’s live town hall series, “Kakou.” It was the best cross-disciplinary exploration of the topic for the local community I’ve seen yet, spanning everything from industry to education to the arts. Most thought-provoking was the topic of Indigenous cultural representation and how AI can empower the powerless.
You can watch the 90-minute replay on YouTube at youtube.com/watch?v=3bUD9JIExwc. Meanwhile, AI is the marquee topic of every conference and networking group in the islands — from publishers to chambers of commerce — and AI meetup groups are booming.
The Hawaii Technology Development Corp. just wrapped up a series of AI webinars, with more planned for this year. Meeting monthly, there are the AI Hawaii group and the national network Pie and AI (yes, there is pie), and the newly formed Hawaii Center for AI, which promises regular workshops as well as an incubator program.
The upcoming annual East Meets West conference — Hawaii’s largest startup and tech event — will have three separate sessions on AI: AI in marketing, environmental sustainability and government policy.
I’m moderating that last conversation, and the push for laws, rules, ethical standards and even outright bans is happening globally and locally.
The United Nations and the European Union have either called for or implemented restrictions on AI development. U.S. lawmakers are making similar calls, although other priorities and political machinations have stymied most efforts.
In the Hawaii Legislature, HB 1766 proposes penalties for using AI to impersonate a political candidate or misrepresent a political party. Such impersonations, known as “deepfakes,” are a real threat to elections. New Hampshire residents received robocalls from a fake Joe Biden in February, discouraging them from voting in the state primary.
While well intentioned, both local and national attempts to regulate AI run up against one cold, hard fact: AI can’t be contained. While penalties for abusing it to commit crimes sound good, focusing on the technology is misguided. A crime is a crime, however it was facilitated, whether the criminal used a tape recorder or an AI tool.
The biggest providers of generative AI tools — from Microsoft’s Copilot, built into its popular office suite, to chatbots, to art tools such as Midjourney — all have rules in place to prevent abuse. You can’t ask them to emulate a celebrity or depict copyrighted or trademarked characters or brands. This is good.
But AI tools are widespread and frequently open-source projects, meaning anyone can install them on their own computer and use them to do whatever they’d like — no limits on depicting Donald Trump or Taylor Swift. And regardless of the sales pitch being made to schools and businesses, tools that can detect AI-generated material don’t work. The ability to create convincing content will continuously leapfrog attempts to flag it.
Impersonation for the purposes of electoral manipulation is bad, but we don’t need to encode what tool or strategy is used into law. After all, there will be another “next big thing” — and big threat — coming right around the corner.
Ryan Kawailani Ozawa publishes Hawaii Bulletin, an email newsletter covering local tech and innovation. Read and subscribe at HawaiiBulletin.com.