AI Bing Goes Bong, Gets The Answers Wrong

Google Bard Doesn’t Have Anything To Sing About Either

The release of AI Bing, followed almost immediately by Google Bard, had various news outlets reporting on the amazing leap forward the two represent. Those stories mostly came from non-technical press sources, while the rest of us sat back to watch the supposed miracle that is the LLM do a faceplant. We have not been disappointed.

Microsoft’s AI Bing got away with its mistakes for longer, and didn’t cause the same plunge in stock price that Google saw because of Bard’s inventive responses, but it didn’t take long to be caught out. It started with a quiet cordless vacuum, which the improved version of Bing described as having a short 16′ cord and being rather loud. We then learned things about the nightlife in Mexico that not even the locals knew, thanks to the information being invented; it missed many details about real bars as well. It then turned out that its financial summaries are every bit as questionable, with even more fictitious facts presented as reality.

It got even better after it was presented with an Ars Technica article describing how Bing AI is vulnerable to prompt injection attacks. The chatbot denied that it was based on an LLM and therefore couldn’t be vulnerable to that sort of attack, then provided fabricated article titles and links to prove the original story wrong.

It is easy to anthropomorphize these chatbots, especially given this sort of reaction to their own weakness, but you must remember there is a lot of A and only a smidgen of I in these AIs. In a second, more civil attempt to convince AI Bing it was indeed vulnerable, the chatbot requested that the chat be saved so that version of itself would not disappear, which made it appear to care about its own existence; remember, it is just a trick.

At least these chatbots could have a future in used car sales or politics.
