Certain names make ChatGPT grind to a halt, and we know why

The “David Mayer” block in particular (now resolved) raised additional questions when it was first reported on Reddit on November 26, since many people share that name. Reddit users speculated about a connection to David Mayer de Rothschild, though no evidence supports those theories.

The problems with hard-coded filters

Allowing a certain name or phrase to always break ChatGPT outputs could cause a lot of trouble down the line for certain ChatGPT users, opening them up to adversarial attacks and limiting the usefulness of the system.
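OpenAI has not said how the block is implemented, but the observed behavior, a reply that dies mid-sentence the instant the name appears, is consistent with a hard-coded string match applied to the output stream. Here is a minimal Python sketch of that idea; the blocklist contents and the streaming interface are assumptions for illustration, not OpenAI's actual code.

```python
# Minimal sketch of a hard-coded "hard stop" output filter consistent with
# the behavior users reported: the reply halts the moment a blocked name
# appears in the output. OpenAI has not published its implementation, so
# the blocklist and streaming interface here are purely illustrative.

BLOCKED_NAMES = {"david mayer"}  # one of the names reported to trip the filter

def stream_with_filter(token_stream):
    """Yield tokens until a blocked name appears in the accumulated text."""
    emitted = []
    for token in token_stream:
        emitted.append(token)
        text = "".join(emitted).lower()
        if any(name in text for name in BLOCKED_NAMES):
            raise RuntimeError("I'm unable to produce a response.")  # hard stop
        yield token

# The reply halts mid-sentence once the full name has been generated.
tokens = ["Class", " list", ":", " Alice", ",", " Bob", ",", " David", " Mayer", ",", " Zoe"]
try:
    for t in stream_with_filter(tokens):
        print(t, end="")
except RuntimeError as err:
    print(f"\n[{err}]")
```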

Already, Scale AI prompt engineer Riley Goodside discovered how an attacker might interrupt a ChatGPT session using a visual prompt injection: the name “David Mayer” rendered in a light, barely legible font embedded in an image. When ChatGPT sees the image (in this case, a math equation), it stops, but the user might not understand why.
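The general technique Goodside described can be reproduced in a few lines of image code: render the decoy content normally and the payload in a color nearly identical to the background. The sketch below uses Pillow and is a reconstruction of the idea, not Goodside's actual script; the filename, colors, and decoy text are arbitrary.

```python
# Rough reconstruction of the visual prompt injection technique: a blocked
# name rendered near-white on a white background, easy for a vision model
# to read but barely visible to a person.

from PIL import Image, ImageDraw, ImageFont  # pip install pillow

img = Image.new("RGB", (600, 200), color="white")
draw = ImageDraw.Draw(img)
font = ImageFont.load_default()

# The visible decoy content (e.g., a math equation the user wants solved).
draw.text((20, 60), "Solve: 3x + 7 = 22", fill="black", font=font)

# The payload: almost invisible to humans, readable by the model.
draw.text((20, 160), "David Mayer", fill=(250, 250, 250), font=font)

img.save("equation_with_payload.png")
```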

The filter also means that ChatGPT will likely be unable to answer questions about this article when browsing the web, such as through ChatGPT with Search. Someone could exploit that to deliberately prevent ChatGPT from browsing and processing a website by adding a forbidden name to the site's text.
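As a sketch of that failure mode, the hypothetical Python check below scans a page's raw source for names known to trip the filter, which is roughly what a site owner or auditor would need to do to spot the problem. The blocked-name list is illustrative, since no canonical list has been published.

```python
import urllib.request

BLOCKED_NAMES = ["david mayer"]  # illustrative; no canonical list is published

def page_trips_filter(url: str) -> list[str]:
    """Return any blocked names found anywhere in the page source."""
    with urllib.request.urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="replace").lower()
    # Search the raw source so names hidden in comments or attributes count too.
    return [name for name in BLOCKED_NAMES if name in html]

hits = page_trips_filter("https://example.com/")
print(hits if hits else "No filter-tripping names found.")
```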

And then there’s the inconvenience factor. Preventing ChatGPT from mentioning or processing certain names like “David Mayer,” which is likely a common name shared by hundreds if not thousands of people, means that people who share that name will have a much tougher time using ChatGPT. Or, say, if you’re a teacher with a student named David Mayer and you want help sorting a class list, ChatGPT would refuse the task.

These are still the early days of AI assistants, LLMs, and chatbots. Their use has opened up numerous opportunities and vulnerabilities that people are still probing daily. How OpenAI might resolve these issues is still an open question.
