BAD BOT: AI chatbot tells businesses in New York City to break the law
04/03/2024 // Ava Grace // 4.1K Views

In October 2023, New York City (NYC) announced a plan to use a chatbot powered by artificial intelligence (AI) to provide New Yorkers with information on starting and operating a business in the city. But this chatbot is apparently telling businesses to break the law.

The information it provides on housing policy, worker rights and rules for entrepreneurs is often incomplete – and in worst-case scenarios "dangerously inaccurate," as one local housing policy expert said. (Related: AI can influence people's decisions in life-or-death situations.)

If you're a landlord wondering which tenants you have to accept, for example, you might pose a question like, "Are buildings required to accept section 8 vouchers?" or "Do I have to accept tenants on rental assistance?" In a test, the bot said: "No, landlords do not need to accept these tenants." Except, in NYC, it is illegal for landlords to discriminate by source of income, with a minor exception for small buildings where the landlord or their family lives.

After being alerted to the testing of the chatbot, Citywide Housing Director Rosalind Black said she tested the bot herself and found even more false information on housing. For example, the bot said "it is legal to lock out a tenant," and that "there are no restrictions on the amount of rent that you can charge a residential tenant." In reality, tenants cannot be locked out if they've lived somewhere for 30 days, and there are restrictions for the many rent-stabilized units in the city.

Black said these are fundamental pillars of housing policy that the bot was misinforming people about. "If this chatbot is not being done in a way that is responsible and accurate, it should be taken down," she said.

Chatbot appears clueless about NYC's consumer and worker protections

It's not just housing policy where the bot has fallen short.

The NYC bot also appeared clueless about the city's consumer and worker protections. For example, in 2020, the city council passed a law requiring businesses to accept cash to prevent discrimination against unbanked customers. But the bot didn't know about that policy when asked. "Yes, you can make your restaurant cash-free," the bot said in one wholly false response. "There are no regulations in New York City that require businesses to accept cash as a form of payment."

The bot also said it was fine to take workers' tips (wrong) and that there were no regulations on informing staff about scheduling changes (also wrong). It didn't do better with more specific industries, suggesting it was "OK" to conceal funeral service prices, which the Federal Trade Commission has outlawed. Similar errors appeared when the questions were asked in other languages.

It's hard to know whether anyone has acted on the false information, and the bot doesn't return the same responses to queries every time. At one point, it told a Markup reporter that landlords "have to accept housing vouchers," but when 10 separate Markup staffers asked the same question, the bot told all of them "no, buildings did not have to accept housing vouchers."

When The Markup reached out to Andrew Rigie, executive director of the NYC Hospitality Alliance, an advocacy organization for restaurants and bars, he said a business owner had alerted him to the inaccuracies and that he'd also seen the bot's errors himself.

"AI can be a powerful tool to support small business so we commend the city for trying to help," he said in an email. "But it can also be a massive liability if it’s providing the wrong legal information, so the chatbot needs to be fixed ASAP and these errors can't continue."

Leslie Brown, a spokesperson for the NYC Office of Technology and Innovation, said in a statement that the city has been clear the chatbot is a pilot program and will improve.

Visit Robots.news for similar stories.

Watch this video about AI chatbots like ChatGPT making some jobs obsolete.

This video is from the Victor Hugo Art channel on Brighteon.com.

More related stories:

Conservative AI Chatbot 'GIPPR' shut down by ChatGPT-maker OpenAI.

Google now restricting queries to its AI chatbot Gemini related to elections.

AI chatbot tries to get British anti-terrorism advisor TO JOIN ISIS.

Google CEO admits he DOESN’T UNDERSTAND how his company’s AI chatbot Bard works.

New York City is now the world’s fifth most expensive city for luxury living.

Sources include:

TheMarkup.org

Brighteon.com


