The breakneck speed at which Artificial Intelligence research has progressed in the past couple of months seems to have rung a few alarm bells, with billionaire Elon Musk, Apple co-founder Steve Wozniak, famed author Yuval Noah Harari and at least 1,120 researchers and scientists beseeching laboratories to take a break from post-GPT-4 AI experiments for at least six months.
In an open letter titled 'Pause giant AI experiments', the signatories said AI systems with "human-competitive intelligence can pose profound risks to society and humanity" and "should be planned for and managed with commensurate care and resources".
Harari has been a vocal critic of technology in general and AI in particular — he said in one interview that technology could lead to tyranny, and in an op-ed in The New York Times, wrote that there is a "10 percent or greater chance of human extinction (or similarly permanent and severe disempowerment) from future A.I. systems (sic)". In his 2018 book, 21 Lessons for the 21st Century, Harari posed technological questions, and highlighted how they impact the everyday lives of humans worldwide.
Interestingly, Musk is a co-founder of OpenAI, the company behind ChatGPT. That said, he hasn't shied away from criticising the proliferation of AI, recently calling it "one of the biggest risks to the future of civilisation" and pushing for AI regulation. Musk was not above taking a sarcastic jibe at AI on Twitter, the platform he owns.
I’m sure it will be fine pic.twitter.com/JWsq62Qkru
— Elon Musk (@elonmusk) March 24, 2023
"Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?" the signatories ask in the open letter, adding that powerful AI systems should be developed "only once we are confident that their effects will be positive and their risks will be manageable".
This comes on the heels of the release of GPT-4 — the most advanced AI language model yet — by OpenAI two weeks ago.
The letter called for a six-month moratorium on developing AI models more powerful than GPT-4, and for using this break "to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts".
Further, the letter reads, "AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal."
While most of the letter may appear alarmist and smack of fears of a Skynet-like AI takeover of the world, it ends on a gentler note.
"Humanity can enjoy a flourishing future with AI," reads the letter, which goes on to add: "...we can now enjoy an 'AI summer' in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt."
Companies like Microsoft — which has poured at least $10 billion into OpenAI and is going all-in on integrating AI into its products — and Google are in a race to develop AI-based tools that will help them retain their market dominance. Microsoft has integrated ChatGPT into everything from its search engine Bing to its browser Microsoft Edge, while Google, which is no stranger to AI, has announced its own ChatGPT competitor, Bard.