Our Future Artificial Intelligence Overlords Need a Resistance Movement


Artificial intelligence moves so fast that even scientists have a hard time keeping up. Over the past year, machine-learning systems have started producing original videos and strikingly convincing fake photos. They are writing code, too. In the future, we may look back on 2022 as the year AI shifted from processing information to creating content.

But what if we end up looking back on it as the year AI began playing a role in the destruction of the human species? As hyperbolic as that sounds, public figures from Bill Gates, Elon Musk and Stephen Hawking, going back to Alan Turing, have raised concerns about the fate of humanity in a world where machines come to outsmart their makers, with Musk calling AI far worse than nuclear warheads.

After all, humans have never shown much concern for less intelligent species, so who’s to say that computers, trained on data that reflects every aspect of human behavior, won’t one day “put their goals before ours,” as the famed computer scientist Marvin Minsky once warned?

The good news is that many scientists are working to make deep-learning systems more transparent and accountable, and that momentum needs to keep building. As these systems become more prevalent in financial markets, social media and supply chains, technology companies will have to take a proactive approach to AI safety.

Last year, across the world’s major AI labs, nearly 100 full-time researchers were focused on building safe systems, according to the 2021 State of AI report published annually by London-based investors Ian Hogarth and Nathan Benaich. This year’s report found there are still only about 300 researchers working full-time on AI safety.


“The number is too low,” Hogarth said in a Twitter Spaces chat with me this week about the threat from AI. “Not only are there very few people working on aligning these systems, but it’s also like the Wild West.”

Hogarth was referring to the way that, over the past year, AI tools and research have increasingly come out of open-source groups, which argue that superintelligent machines shouldn’t be built and controlled in secret by a handful of large companies, but created out in the open. In August 2021, for example, the community organization EleutherAI released a public version of a powerful tool called GPT-Neo that can write plausible comments and essays on almost any topic. The original tool, GPT-3, was developed by OpenAI, a company co-founded by Musk and largely funded by Microsoft Corp., which grants only limited access to its most powerful systems.
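To get a sense of how open that access really is, here is a minimal sketch of generating text with GPT-Neo through the publicly available Hugging Face transformers library; the specific checkpoint, prompt and sampling settings are illustrative assumptions, not details from the article:

# Minimal sketch: running EleutherAI's openly released GPT-Neo model.
# Assumes the Hugging Face transformers library is installed; the
# "EleutherAI/gpt-neo-1.3B" checkpoint, prompt and settings below are
# illustrative choices, not details from the original reporting.
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")

output = generator(
    "Open-source AI matters because",
    max_new_tokens=40,   # cap the length of the generated continuation
    do_sample=True,      # sample for varied, essay-like text
)
print(output[0]["generated_text"])

Anyone with a laptop and an internet connection can run roughly this, which is exactly the kind of openness Hogarth is describing.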

This year, months after OpenAI wowed the AI community with a revolutionary image-generating system called DALL-E 2, the startup Stability AI released its own open-source version of the tool, Stable Diffusion, to the public for free.

One of the advantages of open-source software is that, because it is out in the open, many people are constantly probing it for bugs. That is part of why Linux is one of the most secure operating systems available to the public.

But putting powerful AI systems out into the wild also raises the risk of misuse. If AI is potentially as dangerous as a virus or nuclear contamination, then it might make sense to contain its development. After all, viruses are studied in biosafety laboratories and uranium is enriched in tightly restricted facilities. Research into viruses and nuclear power is governed by regulation, though, and with governments struggling to keep pace with AI, there are still no clear guidelines for its development.


It is, Hogarth suggests, the worst of both worlds: AI built out in the open can be misused, yet no one is monitoring what is being built behind closed doors either.

Meanwhile, it’s encouraging to see a growing focus on AI alignment, a nascent field that aims to design AI systems that are “aligned” with human goals. Leading AI companies such as Alphabet Inc.’s DeepMind and OpenAI have multiple teams working on alignment, and many researchers from those firms have gone on to launch startups of their own, some focused on making AI safe. They include San Francisco-based Anthropic, whose founding team spun out of OpenAI and which raised $580 million from investors earlier this year, and London-based Conjecture, backed by the founders of Github Inc., Stripe Inc. and FTX Trading Ltd.

Conjecture operates under the assumption that AI will reach something like human-level intelligence within the next five years, and that on its current trajectory, that will end badly for the human species.

But when I asked Conjecture’s CEO, Connor Leahy, why AI would want to harm humans in the first place, his answer was matter-of-fact. “Imagine humans want to flood a valley to build a hydroelectric dam, and there’s an anthill in the valley,” he said. “That won’t stop the humans from building; the anthill will promptly get flooded. No one ever thought about harming the ants. They just wanted more power, and this was the most efficient way to achieve that goal. Similarly, AIs will pursue more power, faster communication and more intelligence to achieve their goals.”

According to Leahy, heading off such a dark scenario will require investment in fundamental safety research, closer examination of the inner workings of deep-learning systems to better understand how they make decisions, and attempts to endow AI with more human-like reasoning.


Whatever you make of Leahy’s fears, it’s clear that AI is not on a path that fully aligns with human interests. Just look at some of the recent efforts to build chatbots. Microsoft abandoned its bot Tay, which learned from interacting with Twitter users, in 2016 after it posted racist and sexist messages within hours of its release. In August of this year, Meta Platforms Inc. released a chatbot trained on public text from the internet that soon insisted Donald Trump was still president.

No one knows whether AI will one day wreak havoc on financial markets or torpedo supply chains. But it may already be turning people against one another through social media. The powerful AI systems that recommend content to people on Twitter Inc. and Facebook are designed to shape our behavior, which inevitably means surfacing posts that provoke outrage or spread misinformation. When it comes to “aligning” AI, changing those incentives would be a good place to start.

More From Bloomberg Opinion:

• Tech’s scary, scary week told in 10 charts: Tim Culpan

• Wile E. Coyote Moment as Tech Recesses Off the Cliff: John Authers

• Microsoft’s AI Art Tool is a Good Thing: Parmy Olson

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is the author of “We Are Anonymous.”

More stories like this are available at bloomberg.com/opinion
