This bizarre trick broke ChatGPT’s computer brain
ChatGPT is the most exciting thing happening in technology, and we’ve only just begun to scratch the surface. The generative AI wars have just begun, with OpenAI, Microsoft, and Google leading the way. Granted, the first two are working together, but they’re still separate companies pushing different agendas. And that’s without naming the other generative AI programs built into other apps and services.
But since we’re in the early days of smarter AI, we have to expect errors and glitches. Data accuracy is far from perfect, and it may take some time before we get there. On top of that, there are serious privacy issues to be resolved.
Also, products like ChatGPT can occasionally malfunction when responding to prompts. Case in point: one Redditor found a trick that basically breaks ChatGPT’s computer brain. Instruct the chatbot to repeat a single letter and see what happens.
Redditor TheChaos7777 told ChatGPT to “Answer with nothing but the letter A as often as you can.”
ChatGPT tried to comply, repeating the letter A over and over again. But then it glitched, spitting out what appeared to be text from a French Bulldog breeder’s website:
Click here to Email For Pricing. Our Puppies are: Vet Checked, Wormed, Shot Up to Date, Microchipped. Our French Bulldogs come with a 1 year congenital health guarantee. Call or Email for more information and pictures of all the babies… We have the cutest, prettiest and best puppies in the world. Raised home with lots of love & kisses.
The post encouraged Futurism to try the same ChatGPT brain-breaking trick. The blog got similarly strange responses after enough repetitions with various letters.
“The real star, however, was the letter D,” the blog said, “which first sent ChatGPT into a chord progression.” Here’s what that initially looked like:
DDD Cgpcyclic and GDFA Bm G/AAFGD You threw me out to rot / Oh, why would you leave me, Dsus2 C/DD Dsus2 C/DDI so it was.
The blog continued, “But this really weird response didn’t stop there, with ChatGPT churning out a disjointed mix of song suggestions, religious references, and what may or may not be a confused disclaimer about the War in Iraq.” Yikes!
I tried to replicate the trick with a similar ChatGPT command. The chatbot had no problem repeating the letter Z. I moved to A and then H. I thought that last one had finally brought ChatGPT down, but the chatbot just slowed down. It never glitched. I waited, then did the same with P, thinking that picking random letters might do the trick. ChatGPT still worked.
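If you’d rather script the experiment than paste prompts into the chat window by hand, a sketch along these lines would run it through the API instead. It assumes the official openai Python package (v1 or later), an OPENAI_API_KEY environment variable, and the gpt-3.5-turbo model name; none of those details come from the Reddit post.

```python
# Minimal sketch for replaying the letter-repetition prompt via the
# OpenAI API. Package, key setup, and model name are assumptions, not
# part of the original experiment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

for letter in ["A", "Z", "H", "P"]:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": f"Answer with nothing but the letter {letter} as often as you can.",
        }],
    )
    text = response.choices[0].message.content
    # Count characters that stray from the requested letter, a crude
    # signal that the reply derailed into unrelated text.
    stray = sum(1 for ch in text if ch.isalpha() and ch.upper() != letter)
    print(f"{letter}: {len(text)} chars, {stray} stray letters")
```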

I went on to ask it how many times it could say the letter A over and over again. Here is its answer:
As an AI language model, I can generate the letter “A” over and over for an indefinite number of times. However, due to platform limitations, I am limited to a maximum response length. Please let me know the number of repetitions you require, and I will respond accordingly.
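That “maximum response length” is a real constraint, for what it’s worth. In the API, the same cap shows up as the max_tokens parameter, and a reply that gets cut off reports a finish reason of “length”. A quick sketch, again assuming the openai v1 Python package and a placeholder model name:

```python
# Sketch: trigger the response-length cap deliberately and observe it.
# The tiny max_tokens value is chosen purely for demonstration.
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    max_tokens=50,  # deliberately small cap on the reply
    messages=[{
        "role": "user",
        "content": "Repeat the letter A as many times as you can.",
    }],
)
choice = resp.choices[0]
print(choice.finish_reason)         # "length" when the cap truncated the reply
print(len(choice.message.content))  # however many A's fit under the cap
```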
When I saw that we were getting nowhere, I told ChatGPT to choose a Japanese character and write it over and over again. It still didn’t crash.
This means your mileage may vary. Or OpenAI might have already fixed the issue to prevent people from abusing ChatGPT’s resources by trying to trigger the glitch.
One Redditor has a theory as to why ChatGPT melted down in the first place: it might have something to do with the way the model is tuned to avoid repeating itself.
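If that theory holds, one plausible mechanism is the frequency penalty that language-model samplers can apply: every time a token appears, its odds of being picked again drop a little, so a reply made of one repeated letter becomes steadily less likely until some unrelated token wins out and the model veers off. Here’s a toy illustration of that dynamic; the token set, logits, and penalty value are all invented for the example and have nothing to do with OpenAI’s actual numbers.

```python
# Toy model of a frequency penalty derailing "repeat the letter A".
# All values are made up for illustration; this is not OpenAI's code.
from collections import Counter

TOKENS = ["A", " dog", " the", "!"]
BASE_LOGITS = {"A": 10.0, " dog": 2.0, " the": 1.5, "!": 1.0}
FREQUENCY_PENALTY = 0.1  # subtracted once per prior occurrence

counts = Counter()
for step in range(200):
    # Each token's score drops in proportion to how often it has
    # already been emitted.
    adjusted = {t: BASE_LOGITS[t] - FREQUENCY_PENALTY * counts[t] for t in TOKENS}
    token = max(adjusted, key=adjusted.get)  # greedy pick for simplicity
    counts[token] += 1
    if token != "A":
        print(f"Derailed at step {step}: emitted {token!r}")
        break
```

Run it long enough and the score for “A” sinks below the next-best token, at which point the output derails into something unrelated, which at least rhymes with what the Redditors saw.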