Why is Big Tech’s gamble on AI helpers so dangerous?

6 minutes, 19 seconds Read

Since the beginning of the generative AI boom, tech firms have been working hard to develop the technology’s killer product. The first step was an online search, which yielded mixed results. Now it’s all about AI helpers. OpenAI, Meta, and Google released new functionality for their AI chatbots this week that allow them to search the web and operate as a personal assistant.

As my colleague Will Douglas Heaven observed, OpenAI announced new ChatGPT capabilities such as the ability to hold a discussion with the chatbot as if you were making a call, allowing you to instantaneously obtain replies to your voiced inquiries in a lifelike synthetic voice. OpenAI also stated that ChatGPT would be able to perform online searches.

Bard, Google’s competing bot, is integrated into the majority of the company’s ecosystem, including Gmail, Docs, YouTube, and Maps. People will be able to use the chatbot to ask inquiries about their own information, such as having it look through their emails or manage their schedule. Bard will also be able to obtain information from Google Search in real time. In a similar spirit, Meta revealed that it is deploying AI chatbots across the board. Users may ask AI chatbots and celebrity AI avatars questions on WhatsApp, Messenger, and Instagram, with the AI model obtaining information from Bing search.

Given the limits of the technology, this is a dangerous investment. Some of the persisting issues with AI language models, such as their tendency to make things up or “hallucinate,” have yet to be addressed by tech companies. But, as I noted earlier this year, what bothers me the most is that they are a security and privacy disaster. Tech firms are placing this profoundly faulty technology in the hands of millions of individuals, granting AI models access to sensitive data such as their emails, calendars, and private conversations. In doing so, they are rendering us all exposed to frauds, phishing, and cyberattacks on a global scale.

I’ve explored the big security vulnerabilities with AI language models before. Now that AI assistants have access to personal information and can concurrently browse the web, they are particularly prone to a sort of attack called indirect prompt injection. It’s very easy to perform, and there is no known remedy.

In an indirect prompt injection attack, a third party “alters a website by adding hidden text that is meant to change the AI’s behavior,” as I explained in April. “Attackers could use social media or email to direct users to websites with these secret prompts. Once that happens, the AI system may be tweaked to allow the attacker try to harvest people’s credit card information, for example.” With this new generation of AI models hooked into social media and emails, the prospects for hackers are infinite.

I asked OpenAI, Google, and Meta what they are doing to guard against rapid injection assaults and hallucinations. Meta did not react in time for publication, and OpenAI did not comment on the record.

Regarding AI’s propensity to make things up, a spokesman for Google said explain the firm was releasing Bard as a “experiment,” and that it enables users fact-check Bard’s replies using Google Search. “If users see a hallucination or something that isn’t accurate, we encourage them to click the thumbs-down button and provide feedback. That’s one way Bard will learn and improve,” the spokesperson said. Of course, this strategy places the onus on the user to identify the mistake, and people have a propensity to place too much faith in the replies given by a machine.

For quick injection, Google said it is not a solved problem and remains an ongoing topic of study. The spokesman said the business is employing other technologies, like as spam filters, to identify and filter out attempted assaults, and is doing adversarial testing and red teaming operations to understand how bad actors can target products built on language models. “We’re using specially trained models to help identify known malicious inputs and known unsafe outputs that violate our policies,” the official stated.

Now, I realize that there will always be early teething difficulties with every new product introduction. But it’s saying a lot when even early fans of AI language model products have not been that thrilled. Kevin Roose, a New York Times writer, observed that Google’s assistant was outstanding at summarizing emails but also notified him about emails that weren’t in his inbox.

TL;DR? Tech businesses shouldn’t be so comfortable about the claimed “inevitability” of AI technologies. Ordinary people don’t tend to accept technology that continually breaking in inconvenient and unforeseen ways, and it’s just a matter of time until we see the hackers utilizing these new AI helpers maliciously. Right now, we are all sitting ducks.

I don’t know about you, but I want to wait a bit longer before letting this generation of AI systems spy around in my email.
Deeper Learning

This robotic exoskeleton can help runners race quicker

Now this is cool. An exoskeleton can assist runners boost their pace by urging them to take more steps, allowing them to traverse short distances more rapidly. A team of researchers at Chung-Ang University in Seoul, South Korea, designed a lightweight exosuit that helps runners run faster by aiding their hip extension—the forceful action that drives a runner ahead. The suit’s sensors transmit data into computers that track each runner’s specific running style and pace.

Harder, better, quicker, stronger: The team tested the exosuit on nine young male runners, none of whom were thought to be exceptional athletes. The guys sprinted outside in a straight line for 200 meters twice, once wearing the exosuit and once without. On average, the participants managed to run the distance 0.97 seconds quicker while they were wearing the suit than when they weren’t. Read more from Rhiannon Williams here.
Bits and Bytes

Hollywood authors and studios reached an accord on the usage of AI
The Writers Guild of America and the Alliance of Motion Picture and Television Producers have reached a settlement ending the Hollywood writers’ strike and agreed on terms of use for AI. The pact mandates that AI systems can’t be used to produce or revise any scripts, and that studios must declare when they provide authors AI-generated content. Writers will also be allowed to determine whether their scripts are utilized to train AI models. This action assures that people may utilize AI as a tool, rather than just being replaced by it. (Wired)

A French AI startup built an AI chatbot that delivers explicit instructions on murder and ethnic cleansing
Eugh. Mistral, a French business created by former Meta and DeepMind personnel, has unveiled an open-source AI language model, which beats Meta’sLlama in several parameters. But unlike Llama, Mistral has no content filters and spews harmful stuff with no constraints. (404 Media)

OpenAI is making plans for a consumer device
OpenAI is in “advanced talks” with former Apple designer Sir Jony Ive and SoftBank to produce the “iPhone of artificial intelligence.” It’s unknown what the gadget might look like or do. Consumer hardware is challenging to get correctly. Many Internet businesses have announced—and scrapped—ambitious ambitions to put out consumer products. My money is on a voice-controlled AI helper. (The Information)

These 183,000 books are fuelling the greatest war in publishing and tech
This will come in useful in copyright disputes against internet companies—a searchable database that lets you know which books and authors have been scraped into data sets to train generative AI systems.

Similar Posts

Leave a Reply

Your email address will not be published. Required fields are marked *