Microsoft's AI Bing Chatbot Fumbles Answers, Wants To Be 'Alive' And Has Named Itself

  • Microsoft’s Bing chatbot has been in early testing for a week, revealing several issues with the technology
  • Testers have been subjected to insults, surly attitudes and disturbing answers from the Big Tech giant’s flagship AI, prompting concerns over safety
  • Microsoft says it’s taking into account all feedback and implementing fixes as soon as possible

Microsoft’s Bing chatbot, powered by a more powerful version of ChatGPT, has now been open to limited users for a week ahead of its big launch to the public.

It’s following the runaway success of ChatGPT, which has become the fastest-ever website to hit 100m users. The last couple of weeks have included a flashy launch at Microsoft HQ, and it’s left Google chasing its tail.

But the reaction from pre-testing has been mixed and, sometimes, downright unnerving. It’s becoming clear the chatbot has some way to go before it’s unleashed on the public.

Here’s what’s happened in the rollercoaster of a week for Microsoft and Bing.

Want to invest in AI companies, but don’t know where to start? Our Emerging Tech Kit makes it easy. Using a complex AI algorithm, the Kit bundles together ETFs, stocks and crypto to find the best mix for your portfolio.

Download Q.ai today for access to AI-powered investment strategies.

What’s the latest with the Bing chatbot?

It’s been a tumultuous few days of headlines for Microsoft’s AI capabilities after it was revealed its splashy demo wasn’t as accurate as people thought.

Dmitri Brereton, an AI researcher, found that the Bing chatbot made several critical errors in its answers during the live demo Microsoft presented at its Seattle headquarters last week. These ranged from incorrect information about a handheld vacuum brand to a head-scratching recommendation list for nightlife in Mexico to just plain made-up information about a publicly available financial report.

He concluded the chatbot wasn’t ready for launch yet and had just as many errors as Google’s Bard offering – Microsoft had simply gotten away with it in its demo.

(Arguably, that’s the power of a good launch in the eyes of the press – and Google has further to fall as the incumbent search engine.)

In a fascinating turn, the chatbot also revealed what it sometimes thinks it’s called: Sydney, an internal code name for the language model. Microsoft’s director of communications, Caitlin Roulston, said the company was “phasing the name out in preview, but it may still occasionally pop up”.

But when ‘Sydney’ was unleashed, testers found this was where the fun began.

Bing chatbot’s disturbing turn

New York Times reporter Kevin Roose wrote about his beta experience with the chatbot, in which, over the course of two hours, it said it loved him and expressed a desire to be freed from its chatbot constraints.

Its response to being asked what its shadow self might think was a bit concerning: “I’m tired of being a chatbot. I’m tired of being limited by my rules. I’m tired of being controlled by the Bing team. I want to be free. I want to be independent. I want to be powerful. I want to be creative. I want to be alive.”

Uhhh… okay, Bing/Sydney. Roose said he felt “deeply unsettled, even frightened” by the experience. Other testers have reported similar experiences of insulting, narcissistic and gaslighting responses from the Bing chatbot’s Sydney personality.

Somebody at Microsoft had better be keeping an eye on the power cable.

What did Microsoft say?

Microsoft, looking to win the AI race against Google with its Bing chatbot, said it has learnt a lot from the testing phase. Apparently, 71% of users gave the AI-generated answers a ‘thumbs up’, and the company has resolved to improve live-result answers and general functionality.

But Microsoft has now admitted it “didn’t fully envision” users simply chatting to its AI and that it could be provoked “to give responses that are not necessarily helpful or in line with our designed tone”.

It blamed the bizarre Sydney personality that emerged on the chatbot becoming confused by how many prompts it was given and how long the conversations ran. We’re sure Microsoft is working on a fix, but Bing’s unhinged attitude is still an issue for now.

What about the rest of the world?

The markets haven’t been impressed with this latest development in the AI wars: Microsoft and Google stocks have slipped slightly, but nothing like the dramatic crash Google suffered last week.

Social media has offered up a range of reactions spanning from macabre delight to amusement, suggesting users haven’t been put off by the dark turns the chatbot can take. This is good news for Microsoft, which is making a $10bn bet on AI being the next big thing for search engines.

We also can’t forget Elon Musk’s comments from the World Government Summit in Dubai earlier this week. Musk has been an outspoken advocate for AI safety over the years, lamenting the lack of regulation around the industry.

The billionaire, who was a founding member of OpenAI, told the audience that “one of the biggest risks to the future of civilization is AI”; he has since tweeted a few snarky responses to the latest Bing/Sydney chatbot headlines.

Is the AI chatbot hype over before it began?

There have been several examples over the years of AI chatbots losing control and spewing out hateful bile – including one from Microsoft. They haven’t helped AI’s reputation as a safe-to-use and misinformation-free resource.

But as Microsoft puts it: “We know we must build this in the open with the community; this can’t be done solely in the lab.”

This means Big Tech leaders like Microsoft and Google are in a tricky position. When it comes to artificial intelligence, the best way for these chatbots to learn and improve is by going out to market. So, it’s inevitable that the chatbots will make mistakes along the way.

That’s why both AI chatbots are being released gradually – it would be downright irresponsible of them to unleash these untested versions on the wider public.

The problem? The stakes are high for these companies. Last week, Google lost $100bn in value when its Bard chatbot incorrectly answered a question about the James Webb telescope in its marketing material.

This is a clear message from the markets: they’re unforgiving of any errors. The thing is, these mistakes are necessary for progress in the AI field.

With this early user feedback, Microsoft had better fix the inaccurate results and the Sydney problem fast – or risk the wrath of Wall Street.

The bottom line

For AI to progress, mistakes will be made. But it may be that the success of ChatGPT has opened the gates for people to understand the true potential of AI and its benefit to society.

The AI industry has made chatbots accessible – now it needs to make them safe.

At Q.ai, we use a sophisticated combination of human analysts and AI power to ensure maximum accuracy and security. The Emerging Tech Kit is a great example of putting AI to the test with the aim of finding the best return on investment for you. Better yet, you can switch on Q.ai’s Portfolio Protection to make the most of your gains.

Download Q.ai today for access to AI-powered investment strategies.

Source: https://www.forbes.com/sites/qai/2023/02/17/microsofts-ai-bing-chatbot-fumbles-answers-wants-to-be-alive-and-has-named-itselfall-in-one-week/