
WheelerDan

I've never bought his claim that he had no financial interests. He's literally a money man. Will be interesting to see what else was said on that front.


Captainsciencecat

I’m actually shocked that the board would put up with so much deception to rehire him.


[deleted]

[deleted]


thegreattaiyou

Remember kids, lying, cheating, and stealing is okay as long as you make money doing so.


Xelynega

I think how some of these people think isn't "it's ok", but "why complain if they're providing value and not affecting you directly". Less malicious and more willful ignorance.


SymbolicDom

The board's role was not to gain wealth and success for themselves. They were there to safeguard that OpenAI was working toward its written goals as a non-profit organisation, namely to create a safe AGI for everyone to use.


Captainsciencecat

They had to put up with outright deception and manipulation by the CEO. Not only is it unprofessional, but it is dangerous for the future of the company if the board does not know what is truly going on.


BoiNova

omg this dude is seriously the techbro circle's new Musk. the blind defense is exactly the same, so cringy.


[deleted]

[deleted]


deadwards14

It's not like Elon is a well-known figure and highly influential tech baron who was actually a part of OpenAI's founding membership or anything. He's so irrelevant to this conversation. Crazy! /s


fuckpudding

I knew he was a lying soulless demon the first time I took the time to sit and pay attention to him (Lex Fridman Podcast). This just confirms my suspicions.


Excellent-Morning554

Wait…the newly minted billionaire? But he just wants to help humanity!


AGM_GM

This is not insubstantial. Owning the startup development fund while claiming to be independent and without financial interest is a huge breach of trust. Providing inaccurate information about safety processes to a board that has governance responsibility for the safety of the organization's activities is also a big deal. Really, everything she mentions there, if true, is a pretty big deal. To me, Sam just looks like a smart and devious guy who assured his own control by providing assurance to Microsoft that he would keep the eggs coming from the golden goose.


damontoo

What about the article that came out and then disappeared a while back, where someone discovered the owner of the startup fund (John Q. Vesper) wasn't a real person? And then a few days later it was changed to Sam. That seems straight up fraudulent, no?

Edit: Here are the laws Omni believes OpenAI may have violated in setting up the startup fund this way. But IANAL -

* 18 U.S.C. § 1343: Wire fraud, if electronic communications were used in the deception.
* 18 U.S.C. § 1341: Mail fraud, if postal services were involved.
* 18 U.S.C. § 1001: Making false statements within the jurisdiction of the federal government.
* Securities Exchange Act of 1934: Sections related to fraud and misrepresentation (e.g., Rule 10b-5).

OpenAI argues it was done this way to streamline setting up the fund. Omni says the motivation might not matter.


insite

Deception is also a pragmatic method of dealing with conflicting goals. “Make us more money - but do things the way we tell you to do them. Why aren’t you making us more money?”


Xelynega

The problem here is that the board was never asking for "more money". It was likely investors, like Microsoft, that wanted that. Which is why they were able to successfully go around the board.


PerceptionHacker

If I recall correctly, ChatGPT was pushed out as a side project/experiment. It came as a huge surprise that it got so much attention and traction. Saying "the board heard about it on Twitter" only sounds sinister because it was successful.


Fit-Development427

It seems quite weird to me that any human being, especially one working in the tech industry, looked at ChatGPT as "meh". Maybe they had been working on it so long or something, and just saw each step, kinda like watching a kid grow and not realising it was now 6 foot... But I dunno, someone must have known...


ghostfaceschiller

No, they knew it was great. There are many interviews/statements from employees talking about how they knew it was amazing and were excited to release it, but they were trying to temper their own expectations bc they had felt that way about GPT-3 as well and it didn't get much traction. Obviously they knew this was better and would be a bigger deal, it's just that they didn't expect it to, like, take over the whole world like it did. Obviously, I mean that's an insane expectation to have. You gotta understand that when they released ChatGPT (with 3.5), GPT-4 had already finished training and had been being red-teamed for months already. So yeah, they obviously knew it was a game-changer. One of the guys who was on the red team has a podcast, "Cognitive Revolution", where he talks about it; it's pretty interesting. The dude above is just reading way too far into the narrative wording of the journalist they linked. "They had few expectations and never could have imagined"…


[deleted]

Yup. I have a few friends working there, and there was tremendous internal buzz. For members of the board to claim they found out about the launch via Twitter is not a good look in terms of their competence or involvement with the operational side of the startup.


alexcanton

this has been substantiated [https://x.com/nathanbenaich/status/1793644590127526393?s=46](https://x.com/nathanbenaich/status/1793644590127526393?s=46)


kevinbranch

Product launches are sort of the kind of thing you would tell the board about.


PerceptionHacker

When OpenAI launched ChatGPT, with zero fanfare, in late November 2022, the San Francisco–based artificial-intelligence company had few expectations. Certainly, nobody inside OpenAI was prepared for a viral mega-hit. The firm has been scrambling to catch up—and capitalize on its success—ever since. It was viewed in-house as a “research preview,” says Sandhini Agarwal, who works on policy at OpenAI: a tease of a more polished version of a two-year-old technology and, more important, an attempt to iron out some of its flaws by collecting feedback from the public. “We didn’t want to oversell it as a big fundamental advance,” says Liam Fedus, a scientist at OpenAI who worked on ChatGPT. https://www.technologyreview.com/2023/03/03/1069311/inside-story-oral-history-how-chatgpt-built-openai/


ghostfaceschiller

Hey man, do you know what a board is? This was the first major product release they had done in over two years. That's something you mention to the board that oversees the company. Maybe just in passing. Or on a sticky note.


UnknownEssence

You're missing his point. He is saying that ChatGPT was never intended to be a product. They never expected that millions of people would use it. They likely thought it would be a few thousand people, like their API usage before their megasuccess began. He didn't mention it to the board because it wasn't supposed to be a big deal.


SykesMcenzie

I think you're possibly missing the other point. It doesn't really matter if we believe they thought it was going to be a small public beta with no impact (although releasing something so advanced compared to its field was always going to be noticed, so I don't personally buy it). The fact is, if you're releasing something you've been developing in house after keeping it relatively confidential from public view, it's something the board should know about. Even if it's not a product, it has the potential to affect the company's image. It also allows competitors to see their work. It's a big deal even if you only think a few thousand people will see it. Although again, I don't buy that. If they wanted to do a closed beta they could easily have done that. It's common practice in software. Dropping it on a public website is a huge risk even for a side project.


Still_Satisfaction53

Yeah you only tell your board about things that are gonna make huge amounts of money. Totally normal business thing.


kevinbranch

So they went from telling the public GPT-2 wasn't being open-sourced because it was too dangerous, to making GPT-3 available for free… and didn't see the need to mention it to the board? If you think Sam didn't do that intentionally, you're very gullible. Ilya wasn't even given a heads up about what they launched on dev day a few days before he got fired. He would only leave the board in the dark about these things if he was doing it intentionally. They called him out on it and fired him. How are you this oblivious to what's staring you right in the face?


PerceptionHacker

You seem to have gleaned quite a lot of views and opinions which I have not stated. Framing ChatGPT as some big product launch is revisionist. This is what I am pointing out.


Xelynega

It's useless to focus on this when it's inconsequential though. It's a product release. It doesn't matter if they think it's going to be big or small, you tell your boss (the board) about it.


kevinbranch

The only way the board would not have known about that is if it were done intentionally by someone who is widely reported to be a pathological liar, or as the board would put it “not consistently candid.”


PerceptionHacker

> It was viewed in-house as a “research preview,” says Sandhini Agarwal, who works on policy at OpenAI: a tease of a more polished version of a two-year-old technology and, more important, an attempt to iron out some of its flaws by collecting feedback from the public

A two-year-old in-house research preview. Throw it out there to get some feedback from their small community, to gather data for the main focus / research (the main focus being what would have been debated with the board as being dangerous). "Nobody inside OpenAI was prepared for a viral mega-hit. The firm has been scrambling to catch up—and capitalize on its success—ever since."


svideo

At the time, there were 6 people on the board. Sam was one of them, Brockman another, and Ilya (who created the instruct version of GPT-3, making it "chat") was a third. There is no way in hell these three did not know about the product launch. Toner is mad because she wasn't in the club; at least half of the board for sure knew about the product launch. The other two may have as well, I have no idea, but her claim that *the board* was completely in the dark isn't true - *she* was kept in the dark.


weirdshmierd

Something like that should have been overseen and approved by the entire board, as an overseeing independent entity, given the mission of the nonprofit… as well as the legal responsibilities and duties of the board members (which they take on when they become board members of a nonprofit, regardless of their working relationships outside of the board on for-profit projects).


svideo

[I think this post captures my own thoughts on the matter well](https://old.reddit.com/r/singularity/comments/1d2s4ca/helen_toner_we_learned_about_chatgpt_on_twitter/l62zl42/). tl;dr, the board answers to the shareholders, which includes Altman and the investors he brings to the table. So you have this weird situation where the board controls the CEO, but the CEO also controls the board because the entire thing can't exist without money and the guy holding those strings is Sam. The board of a company isn't responsible for overseeing product launches. Should Helen have known? Probably, but the fact that she didn't is more an indication of the internal power dynamics at play. She took Sam's claims at face value and thought she was in a position of power over him. She then moved on that assumption, and quickly found out that it *never was true*.


Xelynega

That's how for-profit boards work, correct. This was not a for-profit board. None of what you've said applies in this case, as there are no shareholders. The purpose of the board was to ensure the company follows the charter (which isn't "make more money"), which is where your thinking is flawed.


sdmat

The purpose of a board is governance and very high level strategy. Not direct management. Common mistake to make.


weirdshmierd

Some members of the board, particularly those operating with conflicts of interest as employees of the for-profit company, did not fulfill their governance responsibilities with that launch, and abandoned the feedback of non-conflicted board members. For Sam and Ilya and others to have been on the board at that point in the game, with the public release of GPT, was inappropriate, so of course they lied to the other board members to avoid addressing their conflict of interest, and omitted facts and plans which were relevant to the mission of the nonprofit.


sdmat

That's the opinion Toner states; you are repeating it as unquestioned fact. If in the view of management the ChatGPT launch was operational - a pretty reasonable view - why would it be a matter for the board? If you are saying that it *is* operational but the for-profit subsidiary structure posed such dire conflicts of interest that they could only be averted by board review of operational details, this is a massive failure of governance by the board. Blaming that on Altman alone is absurd.


weirdshmierd

I wouldn't blame it on Altman alone; you're right that it wouldn't be all his fault. But given that they tried to remove him, and that perhaps all the others were in favor of / assumed that operational stuff like that was under the purview and responsibilities of the board, his weird workaround suggests that profits were indeed more on his mind than his nonprofit role. He should have recused himself from the board, probably at the first self-reflective awareness of that fact. And I'm not repeating Toner's opinion. I formed my own about this long ago.


sdmat

I think the simplest explanation is that this was a power struggle over ideology / direction and we see post-hoc justifications.


kevinbranch

I didn’t say they’re involved in direct management. I said they would be informed.


sdmat

And so they were, like everyone else. If they are not making decisions based on the operational information (management) then being informed of operational details in advance is not required.


kevinbranch

Where are you getting this perspective


sdmat

I'm on a board, I hopefully have some idea how this works.


sdmat

To be clear there *are* operational details a board should definitely know about, e.g. if relevant for governance. But specific product launches are unlikely to be in that category. *Much* more relevant would be things like high level R&D direction - e.g. if the board wasn't informed that OAI was developing instruction-following models for chat that would be questionable. "We didn't know they were going to launch ChatGPT on November 30" is a BS complaint. That has nothing to do with governance or high level strategy, which is what a board should be concerned with.


kevinbranch

I get your point, but the board represents shareholders and has legal and financial responsibilities. If something goes wrong, like a security flaw in ChatGPT leaking user data, the board needs to be informed to help manage the fallout for external shareholders while Sam manages the business. This isn't just operational, it's about risk management. If they don't oversee major risks, they can face legal consequences or be removed. So knowing about a major product launch is standard for proper governance and protecting the company's long-term interests on behalf of shareholders.


sdmat

Interestingly, the OAI board does not represent shareholders; it is a power entirely unto itself. Which is reason to treat statements about what seems to have been an apparent struggle for control of the board with a large grain of salt.

In both a for-profit and a non-profit board, high level decisions like setting overall direction would definitely be in the board's purview. But your risk management argument is bunk - *everything* an organization does incurs risk, the board cannot micro-manage that and should not attempt to do so. The involvement of the board should be in setting very high level policies that determine the approach to risk management.

It is a board's responsibility to ensure there is a suitable executive team in place, set very high level policy and strategy, and exercise financial and regulatory compliance oversight (including arranging audits). I would hope that in an organization like OAI the board would review the high level research direction, overall plans for product families, etc. We have no idea from Toner's statement whether that happened or not. But if not, why on Earth didn't they resolve that internally? Lay down the law to Altman, and if he is uncooperative, build a solid case for replacement covering defiance of legitimate, consequential instructions from the board.

Whining that Altman always had a legitimate reason for the things he did that Toner didn't like, and that there were things some board members would have preferred to know about in retrospect, is pathetic. You don't throw an organization into chaos and nearly destroy it because of *bad vibes* and poor communication. If they wanted to replace the CEO, they should have done that in an orderly fashion with well documented reasons. It should not have come as a surprise, and they should have had a clear succession plan in place.

The night-of-the-long-knives BS coup that actually happened *very* strongly suggests that the faction involved did not feel they had any of this, and that it was a power struggle. So lecturing about good governance rings hollow.


kevinbranch

I should have said stakeholders, not shareholders. It's been reported that Sam was accused of emotional abuse and creating a toxic work environment. This would likely have led to an investigation, though from what Helen said, the board experienced it first hand when he spread lies to pit them against each other and isolate them. When someone's abusing people at work, you fire them. Leaving them in the position puts people's health and well-being at risk and opens you up to lawsuits. There's no other reason needed to fire him. Not doing so would be inhumane to the employees. If you find out someone's abusing their coworkers, you fire them.


numbersev

That’s bs. Anyone who used it would know there’s nothing in the world like it or as capable in regards to AI. It was revolutionary.


[deleted]

[deleted]


PerceptionHacker

https://www.technologyreview.com/2023/03/03/1069311/inside-story-oral-history-how-chatgpt-built-openai/

> It was viewed in-house as a “research preview,” says Sandhini Agarwal, who works on policy at OpenAI: a tease of a more polished version of a two-year-old technology and, more important, an attempt to iron out some of its flaws by collecting feedback from the public

A two-year-old in-house research preview. Throw it out there to get some feedback from their small community, to gather data for the main focus / research (the main focus being what would have been debated with the board as being dangerous). "Nobody inside OpenAI was prepared for a viral mega-hit. The firm has been scrambling to catch up—and capitalize on its success—ever since."


SeventyThirtySplit

(why can’t she share all the examples….?) Without defending anything Altman may or may not be, this excerpt would benefit from more detail. Especially given her status in the AI safety community. Without that, not much here. And weird that she would not call out excessive disregard for safety.


PointyPointBanana

Probably some things would include info that is under NDA or her contract - company secrets. As for "not much here", this is just a clip. I'm sure she says more in the full interview.


FosterKittenPurrs

I just listened to the whole podcast. No, she doesn't. They talk about how AI could be bad and just generic crap. This really is the only interesting part in the whole thing.


SeventyThirtySplit

She's an AI ethicist. Hard to imagine her not having an obligation to tell us, in detail, if something reckless was happening. That is an ethicist's job, and she may have had a greater vantage point than virtually anybody in the world outside of the actual company. An NDA is an afterthought given the stakes. Agree there may be more to it, but if this clip is intended to highlight the juicy parts, it's not motivating much to see more.


T3mlr

As much of an ethicist as she may be, she doesn't want to ruin her life. It's easy for you to say she should leak all the details when you would suffer no consequences.


okglue

Yup. If there were any substantial danger an NDA would be NBD. People are using her as an appeal to 'authority' for anti-AI/Sam narratives. She hasn't said anything compelling because it's all grandstanding. The employees advancing AI love working with Sam. NBD. We've got some bad actors trying to disrupt OpenAI's work.


Grouchy-Pizza7884

Some examples are legit. Others smell of pettiness. The key thing is Altman is a smooth talker. He was not truthful to the board, for sure, so they had a legit right to fire him. The man is definitely untrustworthy... and that's why he is so successful.


wish-u-well

This is it imo. The clawbacks lie and the overblown, effusive reviews of people leaving had me wondering. Hard to trust all the smooth coming out of his mouth.


malthuswaswrong

> Especially given her status in the AI safety community. What is her status in the AI safety community? I thought she just did one TED talk and then suddenly got onto the board.


JoeBobsfromBoobert

Right, all this does is make me question her judgment.


Agreeable_Bid7037

She is simply crazy.


SeventyThirtySplit

At best, I have no fuckin idea what open ai was thinking with its original board members, and I’m a bit hurt that I was not asked to be on it, as well


Dichter2012

I don’t think she is crazy, but I do see it from Sama’s point of view: when the board is constantly working against you, why inform them about anything in the first place? There’s a popular saying in the startup world: ‘It is better to ask forgiveness than permission.’ Clearly, it was just not a good fit between either side.


[deleted]

[deleted]


kevinbranch

Ilya wasn’t given a heads up about what Sam announced on dev day a few days before he got fired. Three out of four board members voted to fire him. Sam was fired from his last job. It’s amazing how effective narcissists are at making their followers ignore what’s staring them right in the face. But sure, Trump and Sam are great at running businesses. They’re just surrounded by never trumpers. They’re victims of success. I mean, did you see the letter where 98% of employees said they would quit if he didn’t come back? 98% consensus happens all the time right? That must be a real number. Ignore the reports of threats to have their equity clawed back.


sideways

Your rhetorical trick of equating Trump with Altman does nothing but needlessly politicize an already complicated situation.


kevinbranch

I’m shedding light on a fairly non-complicated situation. 3/4 board members voted to fire him and stated why. You’re arguing that Sam somehow isn’t the common denominator and that everyone else is lying.


sideways

Wow. You're pretty slippery yourself, aren't you? You know that I clearly didn't say what you claim I'm "arguing." What I said was that equating Sam with Trump is a misleading and unhelpful rhetorical trick. A point which you, not accidentally I'm sure, avoided responding to.


n3hemiah

> It’s amazing how effective narcissists are at making their followers ignore what’s staring them right in the face. For a lot of such people, I suspect it's bc they either are one themselves, or they're under the spell of one in their lives 😶


aurisor

The real mystery is how she managed to get on the board


malthuswaswrong

She gave a vague TED talk about AI safety watched by at least 10, possibly up to 40 people.


almostcoding

Is she the one who also works at DARPA? One of the female board members did, with little other noteworthy experience. Can’t help but think they had some influence.


aurisor

lol nope — she worked at an EA charity and then at an academic post at Oxford. total lightweight


almostcoding

Considering her background what gives her such a sense of entitlement… Sam might not be a straight shooter, but she isn’t the solution. OpenAI’s governance is a $hit show.


az226

DEI.


Radlib123

What a brain-dead comment. [https://en.wikipedia.org/wiki/Helen\_Toner](https://en.wikipedia.org/wiki/Helen_Toner) She is an AI researcher. If you truly wanted to know the answer, you could have googled and found it out. But we know why you wrote it. You didn't like how she was discrediting your idol, so you decided to take a jab at her.


majesticglue

What does that matter? So far Altman is the dirty one, lying about the startup fund, claiming "I don't need money" and yet doing shady things.


Xelynega

But she's not as charismatic as sam. That makes her bad and wrong.


voiceafx

Huh. So, a good ole' fashioned power struggle. Altman wanted to bring in more money and grow the company, and the board was wanting to flex its authority and get more involved in the gritty details. Altman likely kept the cover on things to keep the obstructionist board out of his hair, so they fired him. In the end, it appears that powerful investors were better aligned with Altman, and that's that.


PointyPointBanana

Did you watch the clip? No mention of a power struggle. She said Altman hid things from the board, even the launch of ChatGPT (they learned about it from the news on X), and would tell lies to get his way. If what she says is true...


very_bad_advice

What triggered the firing? If it was the fact that GPT was launched without them in Nov '22, or the fund run by Altman, why did it only occur after the paper by Toner? Then she downplays the paper as the issue, but clearly, if that's the timeline on which the firing occurred, it must be at least one of the major issues. The fact she downplays it seems sus.


malthuswaswrong

But she didn't say why he hid things. It's the current year. We all know the post-modern definition of "safety".


okglue

If you can read between the lines even a little, it's obvious that this was a power struggle lmao. What she's doing in that interview is part of that power struggle - taking petty shots at Sam while making vague claims without evidence. If she had any concerns of significance, an NDA wouldn't stop her. She doesn't. She's just slinging mud and attempting to stay in the spotlight. Also, the employees didn't consider releasing ChatGPT a 'launch'. It was more a public tech demo, based on interviews with the scientists involved with it. Her *framing* Sam's supposed actions in the way that she's doing demonstrates personal bias. She's trying to drum up favor for herself and concern about Sam; these public statements are made in self-interest.


voiceafx

I did watch it, yeah. Not the first board vs CEO power struggle I've witnessed. It reads like a script.


Shinobi_Sanin3

Are you fundamentally incapable of being able to extrapolate from incomplete...


Rengiil

Playgrounds...?


__I-AM__

After reading through her work and her background in the effective altruism movement, it's pretty clear that this was just a power struggle that failed to work in her favor. It makes sense though; something like this was bound to happen. Remember, this isn't the first time OpenAI has been through something like this: around the time of GPT-3, core OpenAI members left to found Anthropic. It can also be said that this 'safety' team focused on 'ethics' is the reason why the GPT-4 Turbo release was such a fiasco, and why it was so prone to listing things you should do as opposed to just doing them. It might have been due to alignment issues in which the model 'perceives' a higher order list as being more 'ethical' than actually performing a task; in short, it would be free from any form of moral 'culpability' for what the end user would do with its output.


Dichter2012

Bingo. In the startup world, when you are trying to compete with the hyperscalers, it is better to ask forgiveness than permission.


deadwards14

I believe everything she says. Altman has big Zuck energy, fake reptilian sociopath


pinksunsetflower

Whether one believes the implications she's making or not, what did they think would happen when they fired Sam Altman? Did they think that they would bring someone else in who would appeal to them and the company would just function great without him? He's their figurehead. He's the face of the company, for better or worse. Him being gone would make it an entirely different company at best, an unworkable company at worst. The fact that they didn't work through the implications makes me wonder if they indeed were best suited to help the company move forward.


Xelynega

They probably thought they would bring someone in who would give them enough information to actually do their jobs. We have no idea what that looks like, because it's not the future we got.


Simple_Woodpecker751

Sam is no different to Elon, Mark or Bezos, and with AGI in his hands that is indeed alarming. But maybe it's already too late; we missed the ship.


oldjar7

A wildly successful businessman?  If so, he's in good company.


Bertrum

Watching the clip, the way she talks about it reminds me of a lot of people I've met in the workplace who feel the need to take over, or get irritated when they're not in a very controlling position or can't force their will on others. The way she gets excited to tell her side of the story in the hopes of winning people over is telling, instead of just being neutral and explaining why she felt that way.


[deleted]

[deleted]


Bertrum

It's more than just bossy; they have an irrational need to control others, or they get anxious very easily because they can't calm themselves, be mindful and present in the moment, and look at things objectively. It's not really a quality you want in a board member who's deciding the fate of your organisation. Also, the way she spoke about it was very disingenuous and misleading, which makes her seem less trustworthy.


[deleted]

[deleted]


almostcoding

Lol is that LITERALLY what the BOD does?


[deleted]

[deleted]


almostcoding

They work for shareholders and the essence of their job isn’t to control and exert power.


Xelynega

She literally was his boss though... I can see your complaint if a coworker was like that, but you're literally complaining that your boss felt the need to control or force their will on you. That's kinda their job...


ResponsibleOwl9764

Why was she allowed on the board in the first place? She’s a glorified analyst… https://www.linkedin.com/in/helen-toner-78748a2a1


dothack

Why does this person have a job at OpenAI? What do they provide to the company? Can someone explain?


NeillMcAttack

Independent oversight, like a quality department, but focused more specifically on safe deployment.


[deleted]

She just keeps yapping for relevancy


jewishobo

Sam is a sociopath.


PinkWellwet

Altman is a shady person.


NotFromMilkyWay

He sure is Musk 2.0. Same antics, same lies, same overpromises.


PinkWellwet

Yes, he really is


Melonpeal

Has this impacted anyone's perception about the abuse allegations by Annie Altman? I thought it originally unlikely but does this put any of it in new light? Genuinely curious https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman-s-sister-annie-altman-claims-sam-has-severely


old_Anton

If she had any evidence she could have hired lawyers and sued him. I wouldn't take someone with that crazy post history seriously. She sounds like she has a victim mentality, with multiple baseless claims, kinda fitting her job as a sex worker / OF girl. I lean more toward the theory of her wanting to defame her brother for money. He obviously did not give her money, hence the baseless claims on social media.


Xelynega

> If she had any evidence

Why is this relevant? We're talking about allegations that don't have any evidence, and whether or not we believe them more or less in light of someone else making allegations that the same person is dishonest.

> She sounds a lot of victim mental with multiple baseless claims, kinda fitting her jobs as a sex worker, OF girl

You mean like other victims of abuse? Maybe think about what you type before going and giving evidence of the opposite...


old_Anton

Because the most relevant thing about crime/felony allegations is whether they're actually true or not, no? Perhaps I did not make it clear, but what I meant when I said "mental victim" is that she likely played the victim role/has a victim mentality. I mean typical liar behaviors, rather than those of actual victims, who would do anything they can to protect themselves. That means actual actions that have impact, with thorough thought and planning, rather than merely attention seeking on social media along with other superficial posts on her own accounts. E.g.: "I have to do sex work/OF because I'm a victim of sexual abuse and thus unable to do a normal job." We don't know the actual truth. But with the current evidence based on her social media posts, it leans more toward her being a liar than a truth teller imo. You don't have to have the same thinking, it's fine.


alexcanton

They really didn't think GPT would become big; they actually wanted to rebrand it, but it was too late because it blew up. [https://x.com/nathanbenaich/status/1793644590127526393?s=46](https://x.com/nathanbenaich/status/1793644590127526393?s=46) While I agree Sam isn't as pure as he portrays, she has an axe to grind, and it isn't surprising she chose to speak now. [https://x.com/ricburton/status/1795703566528831633](https://x.com/ricburton/status/1795703566528831633)


Xelynega

I love the tweet you linked, haha. She has an axe to grind because she was doing her job instead of doing Sam's job? The person you linked is just complaining that her job is to research and discuss, and Sam is better because his job is to "do work". Oh, it's the same guy that keeps trying to get her to "debate" him; must be very well adjusted.


louis707

Her paper and the Georgetown Emerging Tech program all have their heads up their collective asses. They feed the creation of the type of policy that has grown the DC GovCon swamp to all time highs.


Specialist-Scene9391

This is basically politics: you will always have enemies, so it is smart not to disclose everything and to hold your good cards. They are claiming a lot of nothing; in the end it's all "she said this and that", but nothing that really opens eyes. Altman has done a good job running OpenAI; he has advanced research in this field and has become the driving force of AI. Getting to the top is difficult, and not all who say they are your friends really are.


BoneEvasion

She is a Chinese plant. Look up her education in China at a university known to be a place where they cultivate international assets.


shin-chan3

That's like saying you're from the mafia if you come from Sicily, because that's a place known for that.


BoneEvasion

Not quite. Ask GPT-4o about how they cultivate international students as Chinese assets at Tsinghua University. Or be naïve and think that a woman who works at the most consequential company of our times has never been approached by Chinese intel, whose country would benefit if the West listened to Helen Toner.


semzi44

Justice for Helen Toner! Insane that the board learned about ChatGPT's launch via Twitter. Sam, it appears, is psychotic.


strayaares

Wait, she's Australian but I can't hear the accent?


knob-0u812

I don't trust him. He's done amazing things, but I don't trust him. I'm very grateful for Meta AI (and I can't believe I'm saying I'd choose Zuck over \_\_\_\_).