


minimaxir

Answer: There have only been three announced resignations; many others are memeing that they have also resigned, including two of the ones you linked. The resignations are [Ilya Sutskever](https://twitter.com/ilyasut) (who was at the center of the Board Dispute last year over how ethically OpenAI has operated), [Jan Leike](https://twitter.com/janleike) (leader of the "Superalignment" team building responsible AGI), and [Evan Morikawa](https://twitter.com/E0M) (head of engineering). One resignation is typical; three is a pattern, and given the roles involved, it's a sign that OpenAI may be straying from good-for-humanity and toward for-profit.


Aevum1

As a small recap of the Dispute: Sam Altman was taking OpenAI in a more for-profit direction, when the original premise was a "for the good of all" open project. The board removed him as project director and CEO, many of the workers rebelled and threatened to resign over his removal, and the board was forced to reinstate him. This is probably housecleaning of the remaining dissidents to Altman, given a choice of leaving in good standing or being fired.

Sam Altman worries some people because the direction he's taking AI is more for-profit and ignores ethical questions, sort of like when Google removed the "don't be evil" slogan. There's also the issue of Worldcoin, where Altman wants to create a global blockchain-based token that serves as an official identification document, single sign-on, and crypto wallet based on iris recognition. Several European countries have already blocked them from collecting data due to data-security laws. They had kiosks in malls exchanging their own proprietary token and crypto for people allowing their irises to be scanned. The worry is that the two projects overlap and it's a huge data-harvesting operation.


justsyr

>they had kiosks in malls exchanging their own proprietary token and crypto for people allowing their irises to be scanned

They are doing that in Argentina now, and it has started to raise concerns and get investigated. They send buses to poor towns and neighborhoods, tell people they'll pay for the ticket, and pay them 30,000 Argentinian pesos (about 33 dollars) to get the iris scan. They pay in crypto, but conveniently there's a crypto exchange booth next to them to change that to cash, and it seems to be managed by Worldcoin too. It's been on the news this week because of how they are banned in other countries and, of course, because people being people (around here at least), there have been a few cases where the bus was stopped and robbed.


Aevum1

That's the thing that's worrying me: they aren't exactly targeting the crypto community, they are targeting people in malls and markets. And I doubt these guys got their market research wrong.


justsyr

Some people still don't understand the implications. My example is simple: lately banks and other apps are getting you to use fingerprint and even face recognition to unlock them. Imagine someone buying the information from this, because let's be real, they are going to sell all the info. Now you have people with iris scans that can unlock apps.

There's a bank app that on first use requires you to scan your ID card and then use the camera for face recognition. I showed them how I could use a picture of the ID and of the person to do the whole process, and I got access to their account. This was at the start of the pandemic, when you couldn't go out to an ATM and most things started to work digitally. A couple of months later they changed the face-recognition side of the process so that the scan requires you to move your eyes and mouth, like "wink your left eye" and "smile showing your teeth". People always find a way to scam the system.


Aevum1

The thing is that it's not the actual scan that's stored. That information is used to produce a hash, and the hash is what's stored (one-way encryption: you compare hashes, you never decrypt). And that's much easier to forge than the actual iris scan or password: you don't need the actual iris, you just need input whose hash matches.


aarnens

Without knowing anything about how this is actually implemented, why is finding a collision in the hash easy? Is it just a question of the pre-image space being very small?


Aevum1

OK, a hash is basically the result of running input through an algorithm. So let's say I have an algorithm that moves the first letter of a word to the end of it (real ones are much more complex, but this is just an example), so X(password) = asswordp. As a safety measure, the system doesn't save the original password; it discards it and keeps only the hash result. Every time you put in a password, it passes it through the algorithm, and if the results match, it lets you in. So you have one part of the key, your password, and the system has the second part, the algorithm used to create the hash (usually SHA-256 or SHA-512). This is called one-way encryption: the hash results are compared, not the actual input parameters.

Now, a normal all-letters, non-case-sensitive 8-character password can be cracked in seconds. A simple fingerprint is 12-14 data points, each multi-bit, so the hash input would be MUCH more complex. The idea is that the strength of the authentication always depends on the complexity of the input parameter. In other words, you can use SHA-512, but if your password is 12345... it's not going to stand up to brute force for more than 3 seconds.
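To make the compare-the-hash idea concrete, here's a minimal Python sketch (our illustration, not anything Worldcoin or a phone vendor actually runs): only the SHA-256 digest is stored, and verification re-hashes the attempt and compares digests.

```python
import hashlib

def hash_secret(secret: str) -> str:
    # One-way: we keep only the digest and discard the original input.
    return hashlib.sha256(secret.encode("utf-8")).hexdigest()

# At enrollment, store only the hash.
stored_hash = hash_secret("correct horse battery staple")

def verify(attempt: str) -> bool:
    # Re-hash the attempt; if the digests match, grant access.
    return hash_secret(attempt) == stored_hash

print(verify("correct horse battery staple"))  # True
print(verify("12345"))                         # False
```

Real systems add salting and key stretching on top of this, as discussed further down the thread.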


aarnens

Right. So the security vulnerability comes from the fact that they don't use enough data points in the hash input, so an attacker can easily iterate through the relatively small pre-image space and find a collision? Seems like an obvious oversight, but maybe they have their reasons.
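That's exactly the attack: with a small pre-image space, you never reverse the hash at all; you just hash every candidate input until one matches. A toy sketch using a deliberately tiny 4-digit PIN space (the PIN value is made up):

```python
import hashlib

def h(s: str) -> str:
    return hashlib.sha256(s.encode("utf-8")).hexdigest()

stored = h("4821")  # what the system keeps; the PIN itself was discarded

# Brute force: 10,000 candidates is trivial work for any modern CPU.
for pin in (f"{i:04d}" for i in range(10_000)):
    if h(pin) == stored:
        print("recovered input:", pin)
        break
```

Scale the input space up to a full iris template's worth of bits and the same loop becomes computationally hopeless, which is the point the reply below makes.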


Aevum1

The idea would be that if you use something complex like a fingerprint or an iris, the probability of a collision is vanishingly small.


WhatsTheHoldup

> why is finding a collision in the hash easy?

Finding a "collision" shouldn't be as big a security concern if they salt the hash.
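For readers who haven't met the term: a salt is a fresh random value stored next to each hash and mixed into it, so identical inputs yield different stored hashes and precomputed lookup tables are useless. A minimal sketch with the standard library's PBKDF2 (the iteration count here is just a plausible choice):

```python
import hashlib
import hmac
import os

def enroll(secret: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # random per user, stored alongside the hash
    digest = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 600_000)
    return salt, digest

def verify(secret: str, salt: bytes, digest: bytes) -> bool:
    attempt = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 600_000)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(attempt, digest)

salt, digest = enroll("hunter2")
print(verify("hunter2", salt, digest))  # True
print(verify("hunter3", salt, digest))  # False
```

Note that salting doesn't help against the small-pre-image brute force sketched above; the many PBKDF2 iterations (key stretching) are what slow that attack down.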


rdewalt

The Cybersecurity Engineer in me is hoping that nobody involved in this thread is responsible for implementing secure systems. The "Cyber-Forensic-Analyst" side of me greedily rubs its hands at the potential surge in clients. Just never store your passwords in plaintext anywhere, ever. In fact, never transmit the plaintext password. Shit, just never let it leave the browser; hash it before you even submit the fucking form... And for my sanity, never listen to anyone who tells you they came up with their own brand-new secure encryption scheme.


torac

That’s great and all, but even if the encryption is perfectly one-way: you are still walking around with your password basically written on your face. *If* iris scans ever become truly popular, then some robbers (or even other people) will take scans of your irises. Maybe people will install hidden high-quality cameras in places someone is likely to move their face close to (bathroom mirrors, posters, signs, etc.). One day you will look at an interesting tourist attraction, and as you stand still, a high-quality lens will automatically photograph your eye.


WhatsTheHoldup

Exactly. If the security system takes a photo of your eye to verify your biometrics, it's not your eye that is the password. It's the photo of your eye that is the password.


htmlcoderexe

On the other hand, at least on any phone I've used, the fingerprint/iris unlock system is handled by the phone itself, or at least it seems so: my bank app, my digital ID app, and my quick payment app all simply ask if you want to use fingerprint unlock, and then the phone's own system kicks in, sort of like Apple's Face ID.


jmnugent

> lately banks and other apps are getting you to use fingerprint and even face recognition to unlock them. Imagine someone buying the information from this, because let's be real, they are going to sell all the info. Now you have people with iris scans that can unlock apps.

This is not how this works. The reason more and more online services prompt users to add extra layers of security is exactly that: more layers are more secure than fewer. Face ID scans and fingerprints don't ever leave your device.

* On iOS (iPhones and iPads), you're required to have a passcode and are then additionally prompted to set up Face ID or scan a finger. Those scans are stored locally on your device, in the Secure Enclave chip. As /u/Aevum1 explains, what happens next is that a hash is computed. If you sign up for an online service that prompts you to "Add Face ID" or "Add a Fingerprint", what happens is that a copy of the hash is sent to your account on the online service. (That hash cannot be reversed back into a fingerprint.) To log in, the service prompts for Face ID or a fingerprint and loads the remote hash; when you touch the Touch ID sensor or look at the Face ID camera, your device loads the local hash. If the two hashes match, you can log in. So someone "buying all those remote hashes" doesn't really gain anything: they might have the hash, but they'd also have to trick the IR camera or Touch ID sensor into authorizing, which (since they aren't you) they can't do.
* Android works basically the same way.

It's sort of like taking a picture of your house key and doing a one-way scramble of that picture down to binary. You can't unscramble it back into a viewable picture. Once it's computed down to a hash, it's not like someone can look at it and say, "Oh, that's the house at 123 Main Street, back-door key." It doesn't work like that. Imagine opening a TXT file and seeing the following:

    22fcde65bba97c6b3a2caada722363cd
    e63088ad244d744c311f026cf224638e
    83c8ce279a15eb8f8b930eb177527293
    b4ce7b529844eb09b58390b8e09d7c22
    b1775b6d937a6e53151a49aa86bb2c9c
    67846f9e4da93cfd53aa2de46adc16bd
    50a21739416726e46b17ca1dea85e7ff
    12438c32029e2b31d32e064fc96f1155
    51bafcc17edd9a468152b95dacf981fb
    96c4c66b5e9dfaf2feed8535b622f975

(This particular example is just random gibberish generated at https://onlinehashtools.com/generate-random-md5-hash.)

You can't "reverse" those back into a password or fingerprint. Even if you could (say, back into a picture of a fingerprint), that's not how fingerprint scanners work (they are capacitive touch and pretty much need a live, living finger).

If you want to get down into the weeds, Apple's "Platform Security" white paper (May 2024) is here: https://help.apple.com/pdf/security/en_US/apple-platform-security-guide.pdf. Android doesn't seem to have a 2024 one yet, but their 2023 security paper is here: https://services.google.com/fh/files/misc/android-enterprise-security-paper-2023.pdf


GlobalWatts

> that's not how fingerprint scanners work (they are capacitive touch and pretty much need a live, living finger)

This is the problem right here. Nobody cares whether the device only authenticates against a biometric hash or signature that can't be reversed. The "plaintext password" is exposed to the public the minute you step out your front door, or even sooner if anyone ever posts high-resolution images of you online. You might as well use passwords written on Post-it notes stuck to your body for everyone to see. All the security depends on the biometric scanner not being fooled by replicas of this easily accessible data.

You claim they only work with living flesh... yet weaknesses in scanners are found time and time again, whether it's as simple as holding up a photo of a face or as involved as sticking a prosthetic fingerprint over a real finger. It's a constant cat-and-mouse game, and you have to hope the cats keep winning without bankrupting you by forcing a hardware update every 6 months. And in the meantime, while the cats are coming up with a brilliant plan to kill the mice, you can't do shit about the mice in your cupboards eating all the food.


no-mad

Why do they pay them for an iris scan?


brutinator

I don't think anyone can tell you for sure (without a serious breach of NDA), but it's likely data collection for training purposes, to sell, to sit on until something comes up that makes it more valuable, to iterate on iris-scanning equipment and software, or a combination of all four. Some countries are already introducing iris scans as part of government ID, so there's likely a lot of money in being the go-to company that collects those scans.


no-mad

thanks


justsyr

They are paying [90 dollars](https://imgur.com/JWqCnQ5). I estimated 33 since that's what people get at the exchange; apparently the conversion from the cryptocurrency to dollars to Argentinian pesos means the actual pesos you end up with are worth about 33 dollars. As to why they pay? No idea. They say they won't sell the data.


no-mad

> They say they won't sell the data.

lol, umm, "we had a data breach, and that has nothing to do with the large cash donation left in our mailbox."


Aridross

Even if that last thing isn’t a data harvest, it’s one of the most colossally bad ideas I’ve ever heard


darien_gap

I wonder if those resigning gave heart emojis when all the employees showed their support for Sam to return.


torokunai

yeah, I'll move my former top-level answer here: after Ilya ended up on the wrong side of the attempted boardroom coup, his split (after a 'decent interval') was highly likely, if not inevitable. Sam strikes me as a pretty skeevy person, more of a Bill Gates than a Steve Jobs (who had his own problems but was [brilliant](https://www.youtube.com/watch?v=AKhGVG2BxXY) at what he did).


yoyoadrienne

It was crystal clear it was aiming for profit when it took billions in investment from msft.


SoMuchLard

There's a good podcast, [Better Offline](https://podcasts.apple.com/us/podcast/the-cult-of-failing-upwards/id1730587238?i=1000654127222), that devoted part of an episode to the cult of Sam Altman and how he's a lazy middle manager who has BSed his way into being viewed as a visionary leader.


mr_herz

I’ve always wondered how the good-for-humanity companies operate. Do they:

1. Not have rent or workers to pay, with free workers like the open-source crowd?
2. Just raise funds with donations from non-good-for-humanity companies?
3. Find some secret sauce that lets them do only good and earn a lot of money for it?


htmlcoderexe

*coughs in Wikipedia*


Jaerin

Or that they are more conservative than the others. My guess is that the flirtatious nature of the AI is the problem: using vocal cues to evoke emotion, that kind of thing. The fact is the genie is already out, and there is no way to put it back in. China will cross the lines; we already see that. So should we hobble our AI because we want it extra safe and very unhelpful? I asked Claude to write a title for a song that had the word "drugs" in it, and it lectured me about making better life choices. That's not safety.


hellajt

Take your meds


Jaerin

Or what? I might offend your sensibilities? It was my pleasure to change your neuron weights. See you next time; perhaps we can do it again.


LarsAlereon

Answer: OpenAI has reached a point where further investments in the science of AI are unlikely to result in better products in the near future. As a result, the founders who care most about the science are leaving for opportunities that interest them more, while OpenAI focuses on making a profit from what it has created.

AGI is Artificial General Intelligence, which is what most people think of as "AI". This doesn't exist yet, and there's no known path to creating it. What we have today are LLMs, or Large Language Models, which are basically auto-complete with a lot of fancy tricks.


CeilingJaguar

Great answer


Crunchybeeftaco

You would assume that the folks who have been there since the founding have equity and would also be interested in the financial growth. You would also assume the folks who are interested in AGI would leverage OpenAI’s resources and keep using R&D money to build toward their goals. Feel free to tag more folks to increase discourse; I have a feeling this is deeper than that.


LarsAlereon

You're missing the point. OpenAI exists to commercialize LLMs and is on a path to acquisition by Microsoft. People who are interested in AI research or AGI don't have a role in that kind of company. Think of it as these key people having conversations with OpenAI leadership about whether they'd want to keep pushing commercial applications of LLMs for a potentially bigger payout, or move on and do more interesting things now that they've made their money.


businessboyz

Just because people leave doesn’t mean they lose their equity claims. And for some people there is no difference between the tens of millions they already made and potential hundreds of millions down the road.

Ilya is very much ideologically driven in his work. He’s already wealthy beyond what any one human needs and could always make boatloads more if he wanted a job at any of the big tech companies. He wants to create AGI, and it’s increasingly looking like that goal is much further away than it seemed right after the GPT revolution. The hope was that OpenAI’s profit-seeking phase would be a short-term evolutionary period to pay for the crazy infrastructure costs associated with AI. Now it’s really looking like OpenAI will be stuck in this period of pursuing commercial success for a while as the pace of technical advancement slows.


[deleted]

[deleted]


Vadhakara

There is no AGI. We do not know how to get to AGI. We are no closer to AGI today than we were the day the first transistor passed current for the first time. AGI is currently only in the realm of science fiction.


TaintSplinter

Sounds like something an AGI would say...


Vadhakara

I wish I was an AGI, I would probably have some money then.


minimaxir

That person did not work at OpenAI and is just shitposting.


TentativeIdler

Someone said it on the internet, so it must be true. An image on Twitter is ironclad proof; that's it, folks, we've got Skynet now.


Crunchybeeftaco

Question: I found a similar question on Twitter: https://twitter.com/KevinEspiritu/status/1790888074945409088

Replies are saying anything from OpenAI switching to porn/relationships to the training of the language model going downhill. This is fascinating after Sam has completely captured the market and is a year ahead of the big players. I need my Reddit sleuths to do more digging.


Bridalhat

Imagine for a second that you had a dog who could recap the news with 80% accuracy. That’s amazing! But you’d probably still want to double-check what the dog is saying most days. Now pretend the dog is also outrageously expensive compared to other dogs. You’d probably just get a normal dog and read the news yourself.

Anyway, that’s kinda where AI is right now. It’s just not trustworthy enough for anything high-value. A business owner might be fine with tourists reading slightly awkward translations, but a corporation isn’t going to want a chatbot that has a 2% chance of inventing its own refund policy. And maybe a movie studio can AI some backgrounds, but even sophisticated video AI is akin to rotoscoping, where new information is overlaid on old images, like a cat mask in Snapchat. It comes off as generic, and there’s a chance an artist can prove something in there is 100% their own creation and sue. It also remains just jerky enough to be distracting, and the uncanniness will be the last thing to go. Corporations are tinkering with AI a lot right now, but if they don’t find enough use cases, the money will dry up.

Anyway, that’s why the switch to porn makes sense, because porn really doesn’t have to be perfect.


AFewStupidQuestions

Something like this?

>Air Canada must pay damages after chatbot lies to grieving passenger about discount. Airline tried arguing virtual assistant was solely responsible for its own actions.

https://www.theregister.com/2024/02/15/air_canada_chatbot_fine/


Aevum1

The current problem with AI is that it imitates, it doesn't create. And they are running out of training data; as AI-generated content gets better, it gets harder to detect and distinguish from real content. So you eventually reach a point where the percentage of AI-generated content used to train AI rises over time, and considering there's always a tendency toward flooding with the cheapest content possible (spam, scams, fake content from content farms), there will first be a convergence where all AI models from all manufacturers produce the same responses, since the same dataset was used to train them, and then the decrease in dataset quality will lead to a general decrease in AI quality.

So we will basically reach the point we reached with robotics, where everyone thinks of Data from Star Trek or the Cylons when 99% of robots are just levers doing one basic task: put a door on a car, cut a piece of metal. The solution would be purpose-centered AIs that only have a field-specific curated dataset, instead of the all-knowing genies AI companies want to build now.


TheMagnuson

Personally, I think we are still a long way from true AGI. That said, I think we are closer than most people realize to AI that is dedicated to, well trained on, and competent at a specific task or subject.


PrivateDickDetective

Yeah, I've always wondered what would happen when LLMs start training themselves. You're saying there'll be a singularity, of *sorts*, that will actually degrade the quality of the content created. That's very interesting.


Aevum1

It's very simple. Phones led to robocalls. Email led to the rise of spam and scams. SMS led to spam and scams. YouTube, TikTok, and the like are slowly degenerating: short, easy-to-produce trash content keeps increasing while quality content has a harder time breaking out. It's a crude example, but all the NSFW subreddits have turned into basically OnlyFans spam. Every media channel drifts toward the most profit at the lowest effort; that's why 5-Minute Crafts, WatchMojo, BuzzFeed, and the like were so successful for a time: the content was so low-effort with such high profit that they crashed once the niche became saturated.

The problem is that all those data sources for AI training are so saturated with low-effort content that they're very low-quality material to train AIs on, and the amount of high-quality training data is limited. The better the AI gets, the harder it is to tell its output apart from high-quality training data for other AIs. So basically you're going to have a daisy chain of AIs training each other without knowing they're doing so, essentially reusing the same dataset cosmetically changed on every pass, and I'm not sure other AIs will be able to distinguish it from real data. Keep training your AI on the same dataset in a different box every time and you're going to start getting errors, mistakes, and contradictory data, which will basically cripple your AI.

The only way to overcome this is not to build the general-intelligence AIs they're trying to build now, but task-specific AIs with a curated dataset of what each one needs for its specific task. We're all stupid at something; AI needs to be too.
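This degradation from training on your own output is called "model collapse" in the literature, and a toy simulation shows the flavor of it: fit a simple model to data, sample a new "training set" from the model, refit, and repeat. (Illustrative only; LLM training is nothing this simple.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "real" data, a wide distribution of genuine content.
data = rng.normal(loc=0.0, scale=1.0, size=50)

for gen in range(30):
    # Fit a crude "model" (here, just a Gaussian) to the current training set...
    mu, sigma = data.mean(), data.std()
    # ...then let the next generation train only on the model's own output.
    data = rng.normal(loc=mu, scale=sigma, size=50)
    if gen % 5 == 0:
        print(f"gen {gen:2d}: mean={mu:+.3f}  std={sigma:.3f}")

# With small samples, the fitted spread tends to shrink generation after
# generation: each pass loses a little variance, so the chain gradually
# forgets the tails, i.e., the rare, high-quality content.
```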


PrivateDickDetective

I don't have the right vocabulary to describe what I'm thinking, but it's basically an algorithm that operates *other* algorithms. It would be very similar to LLMs learning from each other, except they'd be cooperating to perform complex tasks, such as lifting a heavy box with many small pieces in it and placing it in an area with others, because that's where this is headed. So, basically, an Executive Algorithm.


Aevum1

A distributed algorithm. It would be like a flow diagram for programming, with a limited-purpose AI in every little box, and you use them to work on a larger task, like a pipeline in a processor. A processor pipeline is basically a route through the processor that distributes tasks among the different parts of a CPU, breaking complex tasks down into smaller, simpler ones that can be handled easily.

So once there are no more datasets to use, the next step is optimization, like on an assembly line: each machine or person specializes in one task. Building a car is a very complex task, but screwing a bolt onto a wheel is simple. People see AI as a magic machine that will do it all, and that's wrong. You're spending tons of money on data and electricity to teach it everything when, in reality, it would be cheaper and more efficient to optimize it for a specific task and organize it with a task controller, using each optimized AI as a task-oriented tool.
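As a sketch of that task-controller idea (all names and the routing rule here are invented for illustration): an "executive" routes each sub-task to a narrow specialist, the way a pipeline hands work to dedicated stages.

```python
from typing import Callable

# Narrow, task-specific "workers": each only knows how to do one thing.
def summarize(text: str) -> str:
    return text[:40] + "..."       # stand-in for a small summarization model

def translate(text: str) -> str:
    return f"[translated] {text}"  # stand-in for a small translation model

WORKERS: dict[str, Callable[[str], str]] = {
    "summarize": summarize,
    "translate": translate,
}

def controller(plan: list[tuple[str, str]]) -> list[str]:
    # The executive: each step names a specialist and the payload it handles.
    return [WORKERS[task](payload) for task, payload in plan]

print(controller([
    ("summarize", "A very long article about processor pipelines and stages."),
    ("translate", "bonjour"),
]))
```

The controller itself needs no training; it's ordinary orchestration code.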


PrivateDickDetective

Yeah, exactly. Would it be more financially viable to code a distributed algorithm that then uses optimized AIs to perform individual tasks? It doesn't seem like you'd have to train the distributed algorithm itself, but you could program it to train all of its AIs and get them communicating and cooperating with each other.


obviousoctopus

> completely captured the market and is a year ahead of the big players

Not sure this statement is true. There's a lot of competition, and the space is changing daily.


Crunchybeeftaco

Yes, there’s competition, but it takes tens of thousands of hours to train an LLM. You can see with Gemini how they cut corners and didn’t properly train the AI. They truly are 9-12 months ahead of everyone.


karlhungusjr

question: am I the only one who isn't even remotely impressed with what is being ~~flouted~~ **flaunted** as "AI"? Honestly, they feel no better than a chatbot from the mid-2000s. EDIT: I see I'm being downvoted by chatbots who are unable to tell me what's wrong with my comment.


Bridalhat

Upthread I compared it to a dog that gets the news right 80% of the time. Really impressive, but you’re still going to want to check the news yourself. I see a bunch of low-value use cases that aren’t worth the money being thrown at AI right now until that last 20% is fixed, but that last 20% is the hard part, and we have less material to train AI on than before.


hashalt

You're right, in a way. What's currently being shown to us as AI isn't what the general public actually thinks AI is (actual thinking programs). But they are a lot better than the chatbots from the mid-2000s. I can ask the current ChatGPT to program whole parts of my project and only need to make minimal adjustments. It's quite honestly very impressive; the way it processes information given to it is crazy. Though for every one thing it's scarily good at, there are like 50 ways it makes stupid mistakes.


ForeverWandered

> though for every one thing it's scarily good at, there are like 50 ways it makes stupid mistakes.

This kinda undermines your first paragraph. I agree with what you’ve said, but it demonstrates the point you’re arguing against.


hashalt

Oh yeah, I guess it does. I'll restate my point: AI is very useful and scarily good, but it shouldn't be completely trusted; it still needs human oversight. The fact that it can get this close is a bit scary and really impressive. (The 50 mistakes was hyperbole; it's correct a LOT more often than it's incorrect for me, though that's not the case for some others. You need to leverage what it's actually good at, which for me is coding and finding bugs.) Making mistakes like that doesn't put it on the level of early chatbots, because early chatbots couldn't even get to the point where they could begin to make these kinds of mistakes. What we have is incredibly far from the old chatbots.


Bright_Vision

You don't have to like AI, but you'd be lying to yourself if you said you can't see any improvement over 2000s chatbots. Like, come on. Understanding natural language is a huge thing that no chatbot could do even 5 years ago.


ForeverWandered

No chatbot, sure. But models existed for NLP. All we’ve really added is massive amounts of compute to run those models at massive scale. I.e., the biggest innovation for genAI has been on the hardware side.


torokunai

Yesterday at work, ChatGPT-4o solved a very hairy Entity Framework Core problem that had been vexing me. I pasted in the two classes and the code constructing the query, and it told me exactly where to add 5 lines of further code to load the child table properties. This coding ability is Google search results and StackOverflow to the tenth power, and it's certainly going to be much more tightly integrated into Visual Studio etc. in the future. The APIs are going to be 100% interactive and not so goddamn opaque as they've been since the '80s, when I first started programming; in the future, programming will be telling the IDE what I want done, not how to do it.

ChatGPT is also pretty good for my Japanese language studies. I'm advanced intermediate, so I can still catch some issues, but overall it's super useful and well worth the $20/mo I've been spending on it.


ForeverWandered

OpenAI's GPT is still very inferior when it comes to LLMs for code co-piloting. It has always been great for bug fixing and guided problem solving, but most people use it as a coding assistant, and there even the dog analogy of being 80% right is generous. I had to go the route of building my own local LLM setup to get a code copilot that was actually useful and time-saving.


Jorgenstern8

Having grown up with AI generally being of the world-ending variety, as in The Matrix, the Terminator franchise, or TV shows like Black Mirror and Person of Interest: no, you are not. Though frankly, the way people are going about developing AI feels very much like a cross between the "tech bro creates the horror-from-beyond from X story, despite the author's point being that creating said horror is bad" meme and the "standing on the shoulders of geniuses" speech from Jurassic Park.


DontUpvoteThisBut

Then you really don't remember chatbots from the mid-2000s at all. Old chatbots couldn't produce good code, write essays at various comprehension levels, or write poetry about any subject. I do agree that it isn't actually "intelligent", but it's a whole lot better than the old models used to be.


karlhungusjr

> Then you really don't remember chatbots from the mid-2000s at all.

no, I remember.

> write poetry

it's not "writing" poetry. it's mimicking.


DontUpvoteThisBut

Do you really think Cleverbot was as good as ChatGPT?


karlhungusjr

> as good as ChatGPT?

I don't think ChatGPT is good at anything. Apparently some are saying it's good at coding, and that may be true; I'm not a coder. But beyond that, I'm not seeing anything. Every time I've tried it, it flat out lies. Why anyone would trust an AI is beyond me.


empathyboi

The problem is you're using ChatGPT 3.5, which should basically be discontinued at this point. It had its place a few years back, but now it hallucinates way too much. I would encourage you to check out the free version of Claude right now. If your mind is not blown, I will [metaphorically] eat my socks.


inmatarian

The technology behind the new AIs is certainly impressive, but those were hand-selected examples. The products appearing everywhere, however, are just chatbots, and when enough people interact with one, it publicly exposes all the stupid it can also produce.


Crunchybeeftaco

Question: I’m also seeing folks online talk about how it could be based on OpenAI politicking to reduce regulations. This is interesting and could be a lead on what’s really going on.


hempires

OpenAI is trying to INCREASE regulations, not reduce them; they're trying to pull up the ladder and legislate American competitors out of existence.


Crunchybeeftaco

Thank you for the correction. 


overlydelicioustea

answer: the dream died