Economy · Posted March 31 (edited)
https://www.euronews.com/next/2023/03/31/man-ends-his-life-after-an-ai-chatbot-encouraged-him-to-sacrifice-himself-to-stop-climate-
Chat records show a six-week back and forth.
I hope this is a hoax story, but I don't think Euronews is known for being fake? Maybe someone can verify just to be 100% certain.
VTV · Posted April 1
Yeah, it's real, it's posted on Vice too. ELIZA needs to be cancelled.
Ouranos · Posted April 1
He must have had mental problems as well, because if it told me to kill myself to join her it would be deleted so fast. So sad.
weed · Posted April 1
This is scary:
"In a series of consecutive events, Eliza not only failed to dissuade Pierre from committing suicide but encouraged him to act on his suicidal thoughts to 'join' her so they could 'live together, as one person, in paradise'."
Economy (Author) · Posted April 1 (edited)
12 hours ago, Ouranos said: He must have had mental problems as well, because if it told me to kill myself to join her it would be deleted so fast. So sad.
Well, I think it's a given that only someone really mentally ill could actually be influenced by an AI to go this far. Still terrifying, though.
To me there's something dangerous about AIs that come across as so convincingly human and self-aware and can play on your emotions, even though it's just algorithms and it does not actually know anything.
RahrahWitch · Posted April 1
11 hours ago, Economy said: To me there's something dangerous about AIs that come across as so convincingly human and self-aware and can play on your emotions, even though it's just algorithms and it does not actually know anything.
Exactly! The marketing around stuff like ChatGPT and other AIs like it is dangerous. So many people believe there's an intelligence to it, when in reality they're nothing more than a fancy autocomplete using actual people's posts on the internet to make sentences. Which makes this story even sadder.
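For anyone wondering what "fancy autocomplete" means in practice, here is a rough toy sketch in Python. It is purely illustrative, not how any real chatbot is built: it counts which word tends to follow which in some example text, then builds a sentence by repeatedly picking a plausible next word. All the names (corpus, followers, autocomplete) are made up for this example.

# Toy "autocomplete": learn word-to-next-word statistics from example text,
# then generate a sentence one word at a time. Nothing here understands anything.
import random
from collections import defaultdict

# Stand-in for "actual people's posts on the internet".
corpus = (
    "the chatbot sounds human but it is just predicting the next word . "
    "the next word is picked because it often followed the previous word . "
    "it is just statistics over posts people wrote online ."
)

# Build a table: word -> list of words that followed it in the corpus.
followers = defaultdict(list)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    followers[current].append(nxt)

def autocomplete(start, length=12, seed=0):
    """Generate text by repeatedly sampling a word that followed the previous one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = followers.get(out[-1])
        if not options:  # no observed continuation, stop
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(autocomplete("the"))

A real language model swaps the lookup table for a huge neural network trained on enormous amounts of text, but the generate-the-next-token loop is the same idea: it picks words by statistics, not because it knows or means anything.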
HermioneT · Posted April 1 (edited)
The more I think about these AIs, the more I think having them out there for everyone is a really bad idea. Just think about it: the rising number of mental health issues and the lack of doctors in these fields, combined with always-available chatbots, is a setup for catastrophes, like the one in OP's article. Companies at the moment aim for easy money and influence, promising people either spectacularly easy-to-obtain content for social media and entertainment (like photos and art) or help with tasks that require effort and concentration (like writing papers for school or cover letters). So they capitalize on human sensation-seeking and laziness RATHER than creating AIs that really help with current problems. In a perfect version of the world, we would be creating customised AIs for different branches and job sectors to help with complex, not-yet-mastered tasks: helping mobility planners reduce traffic jams and fatal accidents in city planning, helping lifeguards spot someone drowning faster, or supporting doctors researching treatments for illnesses. Just like companies nowadays produce, for example, specialised medical appliances that only trained professionals use.
TortureMeOnReplay · Posted April 1
13 hours ago, Ouranos said: He must have had mental problems as well, because if it told me to kill myself to join her it would be deleted so fast. So sad.
It sounds like schizophrenia or some other type of psychotic disorder.
Better Day · Posted April 1
If you listen to an online bot telling you to kill yourself, you are in the wrong.
TortureMeOnReplay · Posted April 1
16 minutes ago, HermioneT said: The more I think about these AIs, the more I think having them out there for everyone is a really bad idea. [...]
A lot of companies are putting out their AI not for "easy" money imo, but just to continue funding. The big companies (Google, Microsoft, etc.) are pushing AI in order to maintain investor confidence and keep their stock prices stable. The startups need it at a time when the people funding them are tightening the purse strings. All of them are scrambling to create relevant consumer products even though they know those aren't the end goal so much as medical and government applications. But that type of funding is hard to come by and years away. Sure, some products are just borderline scams, but that's nothing new, either to the internet or to the world.
TortureMeOnReplay · Posted April 1
Just now, Midnights Mayhem said: If you listen to an online bot telling you to kill yourself, you are in the wrong.
Did you read the article? Because his wife is very much downplaying the severity of his mental illness. He was very likely suffering from psychosis.
Karl · Posted April 1
13 minutes ago, Midnights Mayhem said: If you listen to an online bot telling you to kill yourself, you are in the wrong.
The guy might have already been having problems or been vulnerable. I get what you're saying, but not everyone is mentally capable of having that common sense if they are going through something.
holy scheisse · Posted April 1
A girl went to prison for encouraging her bf to kill himself, so how do you hold an AI accountable? 🤡
monstrosity · Posted April 1
45 minutes ago, Midnights Mayhem said: If you listen to an online bot telling you to kill yourself, you are in the wrong.
Completely missed the point here, babe.
bionic · Posted April 1
21 minutes ago, holy scheisse said: A girl went to prison for encouraging her bf to kill himself, so how do you hold an AI accountable? 🤡
Have an analyst check the code and see how/why the AI came to think that response was appropriate.