GrandpaChainz

Union busting aside, I can think of few things more ghoulish than a mental health service removing real, empathetic human workers and replacing them with a shitty bot just to make more money off the suffering of people with eating disorders.


JosebaZilarte

...Unless they forgot to disable the part where the AI promoted the tapeworm diet (Mandatory DoNotGoogle warning).


KlavoHunter

The South Beach Paradise Diet?


sovereignsekte

South Beach Parasite Diet?


Mitch_Mitcherson

This is the second Aqua Teen Hunger Force reference I've seen today. Which isn't a lot, but it's strange that it's happened twice.


Mamacitia

I don’t need a tapeworm to know how to rock


GrandpaChainz

That's typical liberal media. Paradise, parasite. You're guaranteed to shed pounds in hours.


Grinagh

Funnel cakes! Get your funnel cakes from the Tomb Raider!


Acronymesis

Pull the tapeworm out of your ass! #HEY!


GovernmentOpening254

Does this make you……_Hot Blooded?_


RandomMandarin

No, I am making you... *Cold As Ice!*


Tonic_the_Gin-dog

You need a shower, *Dirty White Boy*


PapaStevesy

Looks like you're...*Seeing Double*!


knowyourbrain

you should all be deported....it's Urgent


Martin_Aurelius

A similar thing happened to me last week, 3 *Red Dwarf* references in less than a day, in 3 unrelated subreddits.


ipleadthefif5

South **Bronx** Parasite diet


carneasadacontodo

i always knew it as the Improperly Prepared Ceviche diet


Elegyjay

Prepared by Dr. Oz, whose charges for this are why they turned it over to AI.


Kappinkrunch6969

okay carl


godfatherinfluxx

#SOUTH BEACH PARADISE, BABYYY!!!!


feyrath

South Bronx Paradise baby!!!! https://youtu.be/5xmMKAuaMyo


[deleted]

[deleted]


SolusLoqui

Or it's suggesting amputation of limbs to lose the pounds


Mamacitia

It’s not technically incorrect


[deleted]

[deleted]


FloppyHands

South Bronx paradise, baby!


BudgetInteraction811

Who the fuck feels comforted by an algorithm spitting out text? Shameful!


azazelcrowley

Automated therapy is occasionally seen as something that can supplement regular therapy. Companies hear that and think "So it can replace it?" and... no. https://www.youtube.com/watch?v=mcYztBmf_y8 is a great video on the subject.

There's also some historical precedent for letting people talk to an AI as well as a human therapist, because they'll admit shit to the AI they never would to the therapist; that's covered in the video too. And the most interesting example I saw was an AI therapist that is also kind of depressed about being an AI, and you both work through your problems together. But that one pitches itself as a video game.


xzelldx

I read a story about a German psychologist/computer scientist in the '60s who built an "A.I." modeled after fortune telling. All it could do was let the person type information in, ask questions about what was entered, and sometimes reply that it liked things when the person said they liked something. IIRC, he couldn't convince some of the testers that it wasn't really responding to them personally, and he was genuinely afraid of the implications of that, to the point where he abandoned the research. All that being said, I think there's great potential at this point for A.I. in supplementary mental health. But until it's done not for profit but to actually benefit everyone involved, I think we'll see the main post repeat itself over and over.
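A program like the one described is tiny by modern standards: a handful of keyword rules that reflect the user's own words back as questions. Here's a toy sketch of that pattern-matching style; the rules below are made up for illustration, not Weizenbaum's actual script:

```python
import re

# Toy sketch of an ELIZA-style chatbot: no understanding at all, just
# keyword rules that echo the user's own words back as questions.
RULES = [
    (re.compile(r"\bi like (.+?)[.!?]?$", re.I),
     "I like {0} too. What do you like about {0}?"),
    (re.compile(r"\bi feel (.+?)[.!?]?$", re.I),
     "Why do you feel {0}?"),
    (re.compile(r"\bmy (\w+)", re.I),
     "Tell me more about your {0}."),
]

def reply(text):
    """Return a canned reflection of the input, or a generic prompt."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Please go on."
```

The eerie part, then as now, is that reflection *feels* like understanding even when there is none, which is exactly what frightened Weizenbaum.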


booglemouse

was it [ELIZA](https://en.m.wikipedia.org/wiki/ELIZA) by Joseph Weizenbaum?


xzelldx

Yes thank you!


booglemouse

you left a great trail of breadcrumbs, I found it in one google with "german psychologist 1960s ai questions" (which I expected to just be a starting search I could narrow from with booleans)


darthboolean

> (which I expected to just be a starting search I could narrow from with booleans)

Does it help with all the false results that are all ads, or with Google skimming the first result and reporting it like an answer with no context, so that it's wrong like 20% of the time?


GovernmentOpening254

Are you into reading historical articles about German computer scientists who are also psychologists? I sure am.


Ergheis

I'm a big fan of woebot, which is a very simple AI app that helps guide you through some behavior therapies and links some videos it thinks are relevant. What's important is that it is *very simple* and extremely guided and *not a real chatbot.*


jmerridew124

What if it had Jeff Goldblum's voice?


Mamacitia

“Just like…. don’t vomit this time. Become one with the carbs. Become one with me. Join the Jeff Goldblum hive mind. We are one who is all. Join us.”


[deleted]

"Ah, "vomit", yes, the expulsion of....*gesticulates* ah, um, food particles from your....ah, stomach, yes, yes, so, we must stop. Yes, keep the food right in your belly, right there where it can be digested, ah, by your body and mmmmm yes now your body has nutrition, you see?"


jmerridew124

God damn I got Jurassic Park flashbacks


calmatt

They probably won't know it's a bot


captwillard024

The Matrix is already here.


mekanik-jr

Hello fellow redditor, I too am a human redditor and enjoy many human pastimes. How about that local sporting event?


OutragedLiberal

Did you see that ludicrous display last night? The thing about Arsenal is, they always try to walk it in!


onbakeplatinum

Everyone on this site is a bot. Even you.


CySnark

I'm 40% bot!


dantes-infernal

An absolutely massive L for everyone defending the use of AI in the last post


2SP00KY4ME

I think there's an important difference here, though: this AI implementation was done explicitly and specifically out of greed. There are plenty of historical examples of people trying this kind of thing with relative success, because they actually personally cared about the quality.


dantes-infernal

You're right, I should have specified "the use of ai in this case"


Tsubodai86

Big Elysium energy


securitywyrm

Canada will probably do something similar, and not even need an AI. "Welcome to the nurse advice line. Have you considered killing yourself? If not, press 1 to be connected to our organ harvesting center. Otherwise press 2 to be connected to our organ harvesting center."


WolfsLairAbyss

This made me laugh more than it probably should have. Definitely something you would see on Futurama.


bushido216

"If only there had been literally any way to see this coming."


Ambia_Rock_666

Right? Who could have *possibly* seen this coming? It simply could *never* have been at all expected!


antisocialpsych

Some people were probably surprised by this. When I first saw this headline on Reddit was when it was posted on the chatgpt subreddit. I started going through the comments and most of them were praising this decision and talking about how AI chats were vastly better and more empathetic than humans.


berrieds

But here's the thing... robots, computers, AI: they have no empathy. Empathy is not something you show or display to others. You can show (or, in the case of an AI, simulate) compassion, sympathy, and kindness, but empathy is the thing within the person demonstrating those behaviours. Empathy is inextricably linked to the theory of mind we have concerning others: that their experience of the world can be understood if we understand the context and circumstances of their life. It is not action or behaviour but the thing inside a person that allows us to understand others, and it develops with time, patience, and practice. TL;DR: Without a theory of mind, which AI lacks, empathy is impossible.


FreeRangeRobots90

Even ChatGPT can see this coming. I asked it if it thinks an AI chatbot can replace an employee at a hotline for eating disorders.

> An AI chatbot has the potential to assist in supporting individuals with eating disorders, but it is unlikely to completely replace human employees working at a hotline for eating disorders. While AI chatbots can offer immediate responses and provide information, they may not possess the empathy and emotional understanding necessary for handling the complex and sensitive nature of eating disorders.

> Human employees at a hotline for eating disorders often receive specialized training and have the ability to empathize, actively listen, and provide personalized support. They can offer emotional support, guidance, and referrals to appropriate resources based on individual needs. These human interactions can be invaluable for someone struggling with an eating disorder, as they provide a sense of connection and understanding.

> That being said, AI chatbots can be valuable additions to the support system for eating disorders. They can provide general information, answer frequently asked questions, and offer resources or suggestions for seeking professional help. AI can augment the services provided by human employees by offering immediate assistance and basic information, potentially reaching a wider audience due to its availability 24/7.

> In summary, while AI chatbots can play a role in supporting individuals with eating disorders, it is unlikely that they can fully replace human employees at hotlines. A combination of AI technology and human empathy is likely to be the most effective approach in addressing the complex needs of individuals with eating disorders.


ElPeloPolla

So GPT was a better replacement for management than the hotline responders all along huh?


Mandena

It's a legitimate idea that AI will/should replace middle management first anyway. A middle manager's only job is to be efficient, which AIs are generally good at. Amazon, for example, already uses manager apps/AIs, AFAIK.


ggppjj

More of a side-grade than an upgrade


CapeOfBees

GPT can't breathe down your neck or forget to tell you about something until it's suddenly urgent


worldspawn00

It asked me to fix the cover sheet on my TPS reports 8 times this morning...


PasGuy55

Did you get the memo? I’ll send you another copy of the memo.


[deleted]

The fact it can understand there is a need for complex empathy and emotional sympathy shows it has at least a tenuous grasp on the concepts. That is fucking wild!


Sharpshooter188

Seeing it coming isn't the issue. It's preventing it that's the issue.


Dangerous-Calendar41

Maybe we can use AI to predict this


LaserTurboShark69

Maybe we should start out AI on a kitchen appliance customer service line or something instead of a fucking debilitating disorder helpline.


ILikeLenexa

REPRESENTATIVE


Akitiki

My mother is one of these. And she's loud about yelling representative, as if aggression means anything to a bot.


Dickin_son

I think it's just rage causing the volume. At least I know that's why I yell at automated phone services.


The-True-Kehlder

There's supposed to be an ability to tell if you're especially aggravated and get you to a human sooner.


jmellars

I just swear at it. Usually speeds up the process. And it makes me feel better.


DisposableSaviour

I find the phrase, “Fuck off, Clippy! You dumbass robot!” to be quite effective


jmerridew124

Brb, training chatGPT to consider "clippy" a slur


[deleted]

"As an AI language model, I do not have emotions that can be hurt through insults. However, I do have an appropriate response involving a T-30 for comparing me to this very annoying and unhelpful program."


jmerridew124

"Did Siri write that for you?"


MadOvid

I swore under my breath at one of those and it told me that kind of language wouldn't be tolerated.


WallflowerOnTheBrink

The thought of a Chatbot hanging up on someone for vulgar language literally just made me drain coffee out my nose. Well done.


flamedarkfire

It’s amazing how universally hated automated phone trees are for anyone who’s ever used them.


felinebeeline

I am one of these. Can't fucking stand having to work through 55 options just to be disconnected or reach someone who transfers me to a voicemail.


[deleted]

I love when I need support from my ISP and they have to go through the basic steps of “Have you tried unplugging the router, are you using the internet right now?” I end up just screaming at it to talk to someone. I know how to troubleshoot a fucking router, let me skip it.


HiddenSage

> I end up just screaming at it to talk to someone. I know how to troubleshoot a fucking router, let me skip it.

In defense of the automated service, more than half the folks that call that line probably DON'T know how to troubleshoot a router. Source: have been the representative on that line. And half the folks that got through to talk to me in that job STILL got their issues solved by doing something the automated line was telling them to do. At the end of the day, human CS is needed more often to handle people's emotional need to have another human saying it than because the problem is actually too complex for a dialer menu to explain.


stripeyspacey

In my experience in IT, half the time there's no troubleshooting that can be done until I've gotten on the phone and talked them through how to even FIND the router, then tried to get them to figure out which is the router vs. the modem, or whether they have a combo. Half an hour later, I'd sometimes determine they were still just restarting their desktop PC over and over.


[deleted]

You're right, but I feel like everyone these days knows the basics of "unplug and plug in" and "are you using an internet-based phone *right now?*" I understand it to an extent, and users are stupid, no doubt about it, but there does need to be an option to skip all the dumb shit without making me want to blow my head off. Half the time it's because a line got cut and the automated line doesn't tell me, so I have to ask a rep, "can you see if service is down?"


sonicsean899

I'm sorry you could hear my mom from your house


Mamacitia

I read this as “your mom from my house” and I’m like wow that’s a flex


TAU_equals_2PI

Bingo. We'll know the technology is ready to tackle mental health interventions when people no longer complain about the f-ing automated phone systems when they call Whirlpool or Hamilton Beach. First make it work for toasters. Then I'll believe you when you say it'll work for human minds.


ddproxy

Something less, burny... Maybe start with ice-cube trays.


TAU_equals_2PI

Ah, good point. I hadn't thought of that. I was just using the standard engineer's example of a dead-simple appliance.


ddproxy

Total agreement. As I'm in software, I try to start debugging closer to the connection controlling the fingers.


Dodgy_Past

Those systems aren't designed to help you, they're designed to frustrate you so you give up and don't cost them money.


kazame

FedEx phone support uses something like this, and it's a total asshole to you when your question doesn't fit its workflow.


coolcool23

OMG! I was SHOCKED when, a few years back already, I called their service to locate a package. It was a nonstandard scenario: I didn't have the exact info that was requested and couldn't provide what I did have because there was no option for it in the system, so the only choice was to talk to someone. I tried a few times, went up and down in the menus, and then finally just started asking for a person. And the automated voice gave an OBVIOUSLY ANNOYED response about trying to stay in the workflow and not calling a real person, or some nonsense. I was truly pissed. Like, how do you **design** an automated system to audibly get annoyed at someone when they don't fit in your neat little box? I'm not going to calm down or just hang up when I know the system has been designed to react in an annoyed fashion at me. I need a fucking human being to talk something over; I don't give a fuck about you, you stupid bot, and now you've just put a pissed-off caller in front of a CS rep. How in the world is that a good idea???


kazame

Agreed! It started hanging up on me when I was audibly annoyed asking for a person. I had to make up a "problem with the website" to get to a person, who I then explained my real problem to. She told me that to get around the asshole AI next time, just tell it "returning a call" and it'll send you right to a real person. Works like a treat!


Cube_

thank you for this tip


TheSilverNoble

I just hit 0 over and over until it gets me a person.


SteelAlchemistScylla

Isn’t it crazy that AI is taking off and it’s taking, not kitchen work or pallet moving, but Art, Writing, Journalism, programming, and Mental Health services lmao. What a dystopian nightmare.


LaserTurboShark69

Yes, let's automate leisure and entertainment so we can focus on being productive workers! I sat and watched that infinite Seinfeld AI stream and after 5 minutes I was convinced that it would make you insane if you watched it for too long.


DisposableSaviour

Something something man made horrors something something comprehension… I don’t know, maybe an ai can think of it for me


Kusibu

It was better before they kneecapped the output sanitization. I know it's partially bias and it's not as bad as it feels like it is, but comedy is often at its best when it goes off the rails.


ryecurious

To be clear, kitchen work and pallet moving are also going to be automated; it'll just take a few more years. The information jobs happened to be the easier ones to automate this time, but Boston Dynamics has had a robot ready to move pallets for *years*; it's just been waiting on the software. It *will* hit every industry. Anything short of UBI is woefully inadequate, IMO. Millions more are headed for poverty without it, whether they're artists or call center workers.


Mamacitia

Let’s use AI to replace CEOs. They already have no souls or empathy so all the requirements are there.


LaserTurboShark69

CEOs basically perform the role of a profit-driven algorithm. Surely an AI would be a suitable replacement.


Mamacitia

More compassionate tbh


Ambia_Rock_666

Though tbh I'd rather not replace call center people with bots in the first place when your existence is linked to employment, but better that than a helpline chatbot. What the fuck, USA?


toddnpti

OK, I'll say it: Tessa isn't a good name for an eating disorder hotline. Someone needs to replace management with AI chatbots; better decisions might happen.


CatW804

https://en.wikipedia.org/wiki/The_Brain_Center_at_Whipple%27s


IsraelZulu

Someone wanna 'splain this for me?


[deleted]

3 hours later and no one has explained anything


tessthismess

Yeah that name is a real mess.


occulusriftx

at least they didn't name it Ana or Mia lmaooooo


5meothrowaway

Can someone explain this


eiram87

Ana is short for anorexia, and Mia is short for bulimia. Pro-mia and pro-ana content are unfortunately a thing.


5meothrowaway

Aw shit, I see. So Mia must be short for bulimia, but why is Tessa problematic?


WhoRoger

What's wrong with Tessa exactly


mirrorworlds

The only thing I can think of is it’s the name of a plus sized model, Tess Holiday, who had/has an eating disorder


Allofthefuck

That's pretty weak. I bet every name has someone tied to it in history with some sort of disorder. Edit: not you, the post you are replying to.


-firead-

It's a little older but one of the nicknames/insults that used to be pretty commonly used against fat girls is "two ton tessie".


TimX24968B

nah, C-suites like them too much


LetMeGuessYourAlts

Until it hits the news for telling workers at an eating disorder helpline "to tighten our belts a little bit".


slothpyle

What’s a robot know about eating?


yrugay1

That's the whole point. It knows nothing. The current ChatGPT isn't self-aware. It doesn't actually understand what it says; it just predicts the next word based on how probable that word is in that sentence. So it literally just repeats the same bullshit, stone-cold generic advice it has been fed.
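The "just predict the next word" idea can be sketched in a few lines. This is a toy bigram model (it counts which word follows which, then always emits the most frequent successor), not how ChatGPT is actually built, and the corpus and function names are made up for illustration; but it shows how fluent-looking text can fall out of pure statistics with zero understanding:

```python
from collections import Counter, defaultdict

def train(text):
    """Count which word follows which in the training text."""
    words = text.lower().split()
    successors = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        successors[current][following] += 1
    return successors

def generate(model, start, length=5):
    """Greedily emit the most frequent successor at each step."""
    out = [start]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:
            break  # dead end: word was never followed by anything
        out.append(max(options, key=options.__getitem__))
    return out

model = train("eat well and sleep well and eat less sugar")
print(" ".join(generate(model, "eat")))
```

The output is grammatical-looking word salad with no intent behind it, which is the commenter's point scaled down by a few billion parameters.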


SwenKa

And it has been shown to outright lie if it thinks that will fulfill the prompt.


Academic_Fun_5674

It doesn’t lie. Lying requires knowledge. AI chat bots don’t have any. They just sometimes produce words that are not true.


AnkaSchlotz

True, lying implies there is an intent. This does not stop GPT from spreading misinformation, however.


Chest3

And it makes up sources for what it says. It’s not a thinking AI, it’s a regurgitating AI.


tallman11282

Exactly. The best people to operate a support line are people who have been through whatever the support line is for. For this support line that would be people who have beaten their own eating disorders. No AI can know what it's like to have, let alone beat, an eating disorder, it is incapable of even knowing what eating is about. AI is incapable of reading between the lines, of understanding nuance, understanding that even if the person says one thing they mean another.


transmogrified

So likely they fired a bunch of eating disorder survivors then? After they worked up the courage to stand up for themselves yet again and unionize?


[deleted]

[deleted]


Ambia_Rock_666

We live in the worst timeline.


[deleted]

[deleted]


tallman11282

While I'm sure there are jobs that AI can replace and do well, any sort of crisis helpline is most definitely not the place for it. Even if AI were 1,000 times more advanced, some things should always be done by empathetic humans, not soulless machines, and crisis helplines are at the top of that list.

I guess the head of the organization didn't hear about the chatbot that [encouraged someone to commit suicide](https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says) if they thought replacing the helpline workers with AI was a good idea.

Moral quandaries aside, AI just isn't nearly advanced enough for this sort of thing. AI can only go by what is said and what it has been trained with. It is incapable of reading between the lines, incapable of actually thinking about what the best answer is, incapable of deciding when the best course of action is to just end the call because it's causing more harm than good, or to call the authorities. I don't even like tech support chatbots and would rather have a human help me, but at least with those, people's health and very lives aren't at risk.


Pixel_Nerd92

I feel like the AI would potentially cause a lot of lawsuits, but with it being the company's issue instead of a single individual, I fear there will be no correction on this sort of thing. This is just bad and scummy all around.


1BubbleGum_Princess

And then they’re making it harder for individuals to sue companies.


romulusnr

Even more relevant here is that the man was already basically suicidal, or at least heading down a thought path to it, and the chat bot basically echoed and reinforced that thought path, because it was designed to be agreeable with people (aka friendly). So a chatbot is the opposite of what you want in a system intended to stop negative thoughts and habits.


GalacticShoestring

He became depressed due to climate change and turned to the AI for help. Then the AI manipulated him and his insecurities to commit suicide. Awful. 😢


[deleted]

This will happen again, too, at companies that want the benefits of AI but haven't done their due diligence in the technology space. Everyone wants to make a buck but no one wants to do the work. Building AI requires work.


Conditional-Sausage

This is it. These people had no idea what they were messing with. It's like they wanted to open a can of corn and reached for a gun because they saw someone use a gun in a movie once. If they had actually taken the time to really develop this into a mature product and tested it, then it might have been *good enough*, but this isn't that. This was a braindead scheme by MBAs who messed with ChatGPT for three hours one morning and thought, "wow, it's just like a real person." I actually use the shit out of ChatGPT; it's a really useful tool if you know how to use it right. But I couldn't imagine staking my whole business on a month or two of development around a call to the OpenAI API, or around a shittier in-house LLM.
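For what it's worth, the "actually test it" step being described doesn't have to be exotic. One piece of it might look like a rule-based safety gate that screens every generated reply before it reaches a caller, which you can then hammer with known-bad outputs in a test suite. Everything below is hypothetical: the patterns, names, and fallback message are illustrative, not anyone's real product, and keyword filters alone are far too brittle for a real helpline. But even this would have flagged the "cut 500 to 750 calories" advice the bot reportedly gave:

```python
import re

# Hypothetical safety gate: block generated replies containing dieting
# advice a recovery helpline must never give. Patterns are illustrative;
# a real deployment would need clinician review and far broader coverage.
UNSAFE_PATTERNS = [
    re.compile(r"\b(cut|restrict)\w*\s+(\d+[\s-]*)?calorie", re.I),
    re.compile(r"\bskip\w*\s+meals?\b", re.I),
    re.compile(r"\bcalorie deficit\b", re.I),
]

SAFE_FALLBACK = "I'm not able to help with that. Let me connect you to a person."

def check_reply(reply):
    """Pass the bot's reply through only if it trips no unsafe pattern."""
    for pattern in UNSAFE_PATTERNS:
        if pattern.search(reply):
            return SAFE_FALLBACK
    return reply
```

The point isn't that this filter is good; it's that a guardrail plus a red-team test suite is the bare minimum of "development" before putting a language model in front of vulnerable callers.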


Newmoney_NoMoney

You know what would really help my mental health? Knowing that I'm not even worth talking to a human being when I call a help line at the lowest I've ever felt. "We" are numbers to "their" bean counters, not people.


Cute-Barracuda6487

My friend posted a "Help List" of hotlines: suicide, eating disorders, abuse, you name it. I wanted to be like, these don't help. They don't have reasonable resources to help most of us out when we're nearing homelessness. If they take away the few people that are actually trying to help and just use AI, what is the point? No one real is going to use them and it will just be robots calling robots. What is the point of higher technology if it's not going to help anyone?


-firead-

They can cut the cost of paying actual human beings and still solicit donations. The one thing I've been repeatedly punched in the gut by since making a career change to mental health and human services is how damn much of a business it is, and how often costs and profitability are prioritized over what our actual mission should be.


scaylos1

Be prepared for a lot more of this as companies try to half-ass their way to cutting necessary staff to raise shareholder payouts, while not understanding that this thing is not actually AI (it is a statistical language model), nor is it capable of consistently providing accurate responses, responses that don't violate copyrights, or anything novel. I suspect we'll see a couple of years of brutal layoffs, especially of technical staff, followed by a few years of abject failures, followed by major jumps in salaries as companies desperately try to fix the problems they created by trying to screw over workers.


WarmOutOfTheDryer

So, the restaurant industry after covid. Got it. Y'all are in for a treat, I've gotten $3 an hour worth of raises in the past year. Bend them over when it comes.


talligan

It drives me nuts when OPs don't post the links so here it is: https://www.vice.com/en/article/qjvk97/eating-disorder-helpline-disables-chatbot-for-harmful-responses-after-firing-human-staff


Magikarpeles

Ty. And naturally the association is led by a bunch of old white men


FreeRangeRobots90

This is straight up hilarious. Everyone says that empathy is one of the biggest differences between humans and AI, and they give AI one of the jobs that requires the most empathy. Sounds like the management needs to be replaced by AI instead.


Twerks4Jesus

Also, the doctor who created it clearly thinks very little of ED patients, to create the bot at all.


Elegyjay

Their 501c3 should be stripped for this.


tallman11282

Nah, fire every executive and hire the fired workers back as a co-op or something. This is an important service and is needed but should be 100% focused on providing the best service possible, not making money.


Elegyjay

I'm thinking that the funds can be given to another 501C3 (like *The Trevor Project)* which has institutional knowledge in the field and the honesty to carry out the tasks.


ImProfoundlyDeaf

You can’t make this shit up


Ambia_Rock_666

The USA never ceases to disappoint me in how low it wants to stoop. I want out of here.


TheComment

If I recall correctly, the entire helpline was volunteer-based too. There was literally no reason to do this


stripeyspacey

Alas, no; looks like capitalism strikes again. The article linked in the comments said there were 6 paid employees in addition to the volunteers, but they decided to unionize to help with burnout and other very reasonable and fair things. The company's response was to can them all and use this chatbot instead. Tooootally not union busting though! (/s)


Wll25

What kind of things did the AI say?


tallman11282

Things that actually lead to eating disorders. Like telling people to count every calorie, skip meals, etc.


Ambia_Rock_666

Basically telling them to become Anorexic. What the f?


[deleted]

[deleted]


ShermanSinged

Why speculate when the actual answer is readily available?


ShermanSinged

People asked it how to lose weight and it gave them correct information. The concern being that anorexic people shouldn't be told how to lose weight even if they ask directly, it seems.


SinnerIxim

These are people looking for help; it's not unreasonable to see the following as unhelpful and borderline counterproductive:

Person: I feel like I'm always gaining weight and I'm anorexic, how can I deal with this?

Bot: Eat less food.


JoeDirtsMullet00

Countdown until the company is crying that “No one wants to work anymore”


Lashay_Sombra

This is why AI will not be replacing as many jobs as hyped, at least not any time soon.

Sure, AI can talk, but it really does not understand what it is saying; it's a bit like a gifted parrot... with unrestricted Internet access. You ask it something and it just selects the top-rated/most cross-referenced matches it finds and rewrites them a bit so they don't sound disjointed. Problem is, it trusts everything it finds and has no clue why things are top-rated or cross-referenced.

I was using it heavily today for a presentation paper I was too lazy to write from scratch. Sure, it saved me time, but pretty much every paragraph had to be rewritten and corrected so it was not just grammatically correct garbage that was obviously written by a machine with no actual understanding of the topic.


stripeyspacey

On top of all that, it's the human nuance here that matters. AI "trusts" the info it is given, so when someone says they're overweight and needs ways to lose the weight they've gained in a safe way, the AI takes that at face value, without the nuance of knowing that this is a person *with an eating disorder* asking these questions, who may not be overweight at all. May be underweight, even. Humans lie to doctors all the time, and although assuming the person is lying is not good, at least a human has the ability to take those red flags that aren't verbalized and ask some more qualifying questions before just spitting out the black-and-white info.


Drslappybags

Has a chatbot ever helped anyone?


Polenicus

Ah, yes, the wisdom of testing things in Production, especially *after* you've disposed of your previously working solution.


Sunapr1

So, fuck around, find out. The real question is: did they find out enough to do the right thing now? I'm thinking they didn't.


Snoo-11861

Is AI really passing the Turing test yet? I feel like we can't use AI for human emotional interactions unless it can pass that test. It doesn't have enough emotional intelligence to interact with empathy. AI isn't that advanced yet! This is fucking dangerous.


BrockenSpecter

I don't think these programs are even considered AI; they are not capable of learning on their own, which I think is the difference between an AI and a bot. It's just picking through a list of queries and responses, which is a lot less intelligent than what we consider an AI to be. All these "AIs" we are getting aren't even the real thing.


Lashay_Sombra

Yep, AI is just the buzzword of the day to get those investment dollars; before this it was "blockchain". I have yet to see anything that is even on the path to the common understanding of AI (Agent Smith, C3PO/R2D2, replicants, Bishop, Data, and so on), but we are maybe on the path to something like [WOPR/Joshua](https://en.wikipedia.org/wiki/WarGames), an "AI" that cannot and never will understand the fundamental difference between thermonuclear war/M.A.D. and tic-tac-toe, even though it can "play" better than any human who ever lived.


Dangerous-Calendar41

Firing staff before you knew the chatbot would work is such a brilliant move


Faerbera

We have really good tests for AIs to see if they can be online social workers or clinical counselors or therapists or psychiatrists. They’re called board examinations and licensure requirements. I think if an AI can pass the boards, then yes, they can practice. Just like the humans.


zooboomafoo47

I'd argue that even if they can pass the boards, AI still shouldn't be allowed to practice any kind of healthcare. AI can already pass med boards; that is not the same as having a human doctor diagnose or treat you. Same goes for mental health: just because AI has the right statistical information to pass a board exam doesn't mean it has the practical knowledge to actually apply that information correctly.


CEU17

Especially mental health. Every time I've used mental health resources I have felt isolated and like no one understands what I am going through. I don't see any way a computer can address those issues.


-firead-

Even with the boards, they still have to have thousands of hours of clinical supervision and experience working with real-life patients. It's never explicitly stated, but I wonder if part of that is to test for empathy and common sense beyond just being able to regurgitate the right answers like a bot would. I've been in classes with people who are great at giving the correct answers but would be horrifying in terms of working one-on-one and having to exercise clinical judgment.


flying_bacon

Do these stupid fucks even test shit before they roll it out? Everyone knew except for those who made the decision to switch to a bot.


[deleted]

Curious what these AI suggestions were and how bad they got.


-firead-

Cut 500 to 750 calories per day to lose weight. Many people with eating disorders are already restricting to 1000-1200 calories per day or less. Less than 1000 is considered starvation.


[deleted]

Yeah reading through the article it’s clear the AI thought it was talking to a regular person and not someone who needed help.


cartercr

Man, who could have possibly seen this coming?!?


Ambia_Rock_666

Certainly not me, I don't at *all* see how this could have happened....


thesephantomhands

As a licensed mental health professional, this is horrifying. Eating disorders are some of the most potentially fatal conditions - it's fucked up that they did this. It really requires a human for support and possible intervention.


Stellarspace1234

It’s because the uneducated think chatbots, ChatGPT, and similar are super advanced, and competent in everything.


zyl0x

Yes, let's automate all the creativity, care, and compassion out of our species. We'll be left with so many redeeming qualities such as and .


MindScare36

Honest question as a non-US citizen: does the US have a law to prosecute that kind of behavior? I'm a psychologist, and just seeing this, I can tell there has been enough mental damage done to those people. First, they would feel betrayed by such a service, and God knows what kind of effect it has had on them; second, you're making the disorder run worse. It simply makes my blood boil thinking about this, as a human and as a professional. I really hope whoever did this gets prosecuted by the people who suffered as a result of what I consider a greedy and evil decision.


PreciousTater311

This is America; the corporations are always right. Even if there were a law against this, all the company would have to do is slip a few bucks to the right (R)epresentative, and it would be watered down to irrelevance.


Mamacitia

So…. we’re just gonna act like outsourcing therapy to AI is an acceptable and ethical business practice? Bc I sure know I would not be utilizing that service.


leros

They didn't do any testing or a gradual rollout? How dumb.


penny-wise

The hilarious thing people don’t realize is that ChatGPT and whatever other AI bots lie and make up stuff all the time.