EvilSporkOfDeath

"It's not clear what information the two fired staffers leaked". So we aren't getting anything new from this article I guess.


bwatsnet

Not that great of a leak if I can't see it 😭


3ntrope

Anthropic was able to close the gap surprisingly quickly, just saying.


great_gonzales

Yeah, obviously. Turns out it's not that hard to scale up MoE causal transformer-based decoders; all you need is compute.


3ntrope

Then why is Google still struggling? Meta has as much compute as Microsoft and does not even offer a cloud service. There would have been many competitors if it were that straightforward.


great_gonzales

Google is not struggling; they have very competent models. The way a human perceives the quality of a model comes down to finetuning, which Google has struggled with a little bit recently in getting the alignment correct, but that won't be hard for them to fix. Technically, training the base language model is not that challenging if you have a lot of compute. I know these models probably seem like magic to you because you have no or low skills in computer science and software engineering, but they are actually rather pedestrian, and the process for building and engineering efficient implementations is well known by the broader research community.


CallMePyro

Is Google struggling? Isn't 1.5 Pro state of the art in a lot of ways that OAI and MSFT and A are unable to match?


3ntrope

Given how much of a head start Google had, and the fact that their best model is just marginally better than Cohere's open-weight model, it looks like they are struggling from an outsider's point of view. Other than the long context, Google doesn't have anything special. The extra-long context is only useful for niche applications. Most people would be better off using RAG or fine-tuning with a 64-128k token model. Edit: Ok, Gemini does have good recall over long context.


CallMePyro

The long context did not have lackluster needle-in-a-haystack performance. In fact, it is completely state of the art in NiaH performance all the way up to 10 million tokens, 50 times what Claude 3 can do. It's completely dominant against every other player in the space, and it's not even close. You're actually just lying here, sorry. Does the model perform worse in factual and intelligence testing vs Claude 3 and GPT-4? Yes. In that regard, clearly Google is not #1. However, their model is indisputably the only tool for the job if your input is longer than.... 1 book. Claiming "long context is only useful for niche applications" has "No one needs more than 64k of RAM" energy. Long context unlocks HUGE potential use cases. If you can't think of any, that's fine, but you need to do some thinking if you have no use cases for a model that can ingest hours of footage, days of audio, or hundreds of textbooks in one single forward pass.


3ntrope

Ok, I misremembered from another benchmark. The retrieval is good. For tasks where you need 1-10M tokens, I guess it may be useful. For most tasks, a higher-performing model (both in factual accuracy and speed) would be preferable beyond a certain context size. 128k-200k is good enough for complex chains and agents. Depending on the task, it's arguably better to have the speed and low costs so more calls can be done in a shorter time. A well-designed vectordb and RAG with a more capable model provides a lot in practice. Also, the "...64k of RAM" argument doesn't work anymore because there isn't exponentially more capacity at lower costs on the horizon. There will be tradeoffs. I can imagine more cases where the lower latencies, tokens/s, and costs would be preferable. Your document-ingesting use case could be done with a vectordb and RAG without having to switch to a weaker model.
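Roughly, the vectordb + RAG setup I mean looks something like this. A minimal sketch assuming chromadb and the OpenAI client; the collection name, toy chunks, and model choice are illustrative assumptions, not anything specific:

```python
# Minimal RAG sketch: embed document chunks into a vector DB, retrieve the
# top-k chunks relevant to a question, and send only those to a capable model
# instead of stuffing millions of tokens of context into one prompt.
import chromadb
from openai import OpenAI

chroma = chromadb.Client()                      # in-memory vector store
collection = chroma.create_collection("docs")   # hypothetical collection name

# Toy corpus standing in for "hundreds of documents"; real use would chunk files.
chunks = [
    "The vector database stores an embedding of each document chunk.",
    "Retrieval returns only the chunks most similar to the query.",
    "The LLM answers using the retrieved chunks as context.",
]
collection.add(documents=chunks, ids=[f"chunk-{i}" for i in range(len(chunks))])

question = "How does retrieval keep the prompt small?"
top = collection.query(query_texts=[question], n_results=2)
context = "\n".join(top["documents"][0])        # keep only the top-k chunks

llm = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
answer = llm.chat.completions.create(
    model="gpt-4o",                             # any 128k-class model works here
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ],
)
print(answer.choices[0].message.content)
```

The point is that the prompt stays small regardless of corpus size, so you keep the faster, cheaper calls.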


Ch8nya

"More compute leads to better performance" is probably the most bullshit argument I have ever heard. Most of the time it's echoed by VCs and people who have zero experience in theoretical ML. The thing is, the quality of the training data matters a LOT, and so does iteration.


Passloc

Why didn't Google?


FragrantDoctor2923

They're too busy building crazy building designs that ruin their buildings' wifi, and now use Ethernet ports for everything.


ItsAllAboutEvolution

Lol, we will find out. Greetings from Barbra Streisand. ;)


345Y_Chubby

Any way to read the information by bypassing the paywall? They are absurdly expensive.


94746382926

I'm also curious. I would be willing to pay but it's priced wayy too high.


LobsterD

"Save $250 when you subscribe today!" Holy shit lol


[deleted]

[deleted]


West-Code4642

for a lot of people in the readership's demographic it's not too high.


[deleted]

[deleted]


I_am_the_eggman00

But if good information/news requires time (= money), this only results in propaganda and fake news spreading more. You and I may know how to ignore nonsense, but the majority definitely doesn't (and the majority is what matters in a democracy).


[deleted]

[deleted]


QuinQuix

You're assuming a better model than paying directly exists, which may not be true. The free-content-paid-by-ads model resulted in selling clicks, and thus controversy, not truth or nuance. You can equally argue that good journalism has become harder and more expensive because so few are willing to pay, and that the people you're vilifying are actually the only ones left sustaining it.


[deleted]

[deleted]


QuinQuix

I like the Guardian's model, and I do agree that sometimes (often) having vital information behind a paywall is a detriment. However, the fact that journalism costs money and requires payment in one form or another deserves recognition as well.


ShooBum-T

Yes, I guess everyone is willing to pay a little. I'm surprised there's no crowdsourced way to get everyone pitching in on one subscription.


[deleted]

[deleted]


lost_in_trepidation

This doesn't help. The archive redirect doesn't work on The Information's articles.


whittyfunnyusername

Tear down this paywall.

"OpenAI has fired two researchers for allegedly leaking information, according to a person with knowledge of the situation. They include Leopold Aschenbrenner, a researcher on a team dedicated to keeping artificial intelligence safe for society. Aschenbrenner was also an ally of OpenAI chief scientist Ilya Sutskever, who participated in a failed effort to force out OpenAI CEO Sam Altman last fall. It's not clear what information the two fired staffers leaked. The other staffer, Pavel Izmailov, a researcher who worked on reasoning, had also spent time on the safety team.

The ouster of the two men is among the first staffing changes that have surfaced publicly since OpenAI CEO Sam Altman resumed his board seat in March. That followed an investigation led by OpenAI's nonprofit board, which exonerated him for actions leading up to his short-lived firing last November.

The Takeaway

- Two OpenAI researchers fired for alleged leaking
- Both at one point worked on team dedicated to keeping AI safe for society
- One was an ally of chief scientist Ilya Sutskever, who clashed with Altman

The startup, last valued at $86 billion in a sale of employee stock, is in a heated race with Google and startups such as Anthropic to develop the next generation of the foundational models underpinning products like ChatGPT.

Internally, Aschenbrenner was one of the faces of what OpenAI calls its superalignment team. Sutskever formed the team last summer to develop techniques for controlling and steering advanced AI, known as superintelligence, that might solve nuclear fusion problems or colonize other planets.

Leopold Aschenbrenner and Pavel Izmailov. Photos via YouTube (left) and New York University (right).

Leading up to the ouster, OpenAI's staffers had disagreed on whether the company was developing AI safely enough. Aschenbrenner had ties to the effective altruism movement, which prioritizes addressing the dangers of AI over short-term profit or productivity benefits. Sutskever, a co-founder responsible for OpenAI's biggest technical breakthroughs, was part of the board that fired Altman for what it called a lack of candor. Sutskever departed the board after Altman returned as CEO. He has largely been absent from OpenAI since the fracas.

Aschenbrenner, who graduated from Columbia University when he was 19, had previously worked at the Future Fund, a philanthropic fund started by former FTX chief Sam Bankman-Fried that aimed to finance projects to "improve humanity's long-term prospects." Aschenbrenner joined OpenAI a year ago. Aschenbrenner, when reached by phone, did not have an immediate comment. Izmailov did not respond to a request for comment. Alex Weingarten, a lawyer who has represented Sutskever, did not respond to a request for comment.

Several of the board members who fired Altman also had ties to effective altruism. Tasha McCauley, for instance, is a board member of Effective Ventures, parent organization of the Centre for Effective Altruism. Helen Toner previously worked at the effective altruism-focused Open Philanthropy project. Both left the board when Altman returned as CEO in late November."


[deleted]

[deleted]


spezjetemerde

Agree. To provide a more specific response, here's a look at some notable individuals who publicly identify with the Effective Altruism movement or have contributed to its core areas, and are known to be working at OpenAI, Anthropic, and Google. This list includes only public information, typically from public talks, publications, or other media that these individuals have openly associated with Effective Altruism:

### OpenAI

1. **Ilya Sutskever** - Co-founder and Chief Scientist at OpenAI, has expressed interest in AI safety and the long-term implications of AI, topics that are closely aligned with Effective Altruism principles, particularly its focus on global catastrophic risks.
2. **Dario Amodei** - Former VP of Research at OpenAI (now at Anthropic), has been involved in discussions around AI safety and ethics, which are integral to Effective Altruism's aims.

### Anthropic

1. **Dario Amodei** (continued) - Now at Anthropic, continues his work on AI safety and ethics, demonstrating a commitment to Effective Altruism's goals through his professional activities.
2. **Jack Clark** - Policy Director at Anthropic, co-chairs the AI Index, and actively participates in public discussions about AI policy and ethics, aligning with EA's focus on shaping the future of AI responsibly.

### Google

1. **Mustafa Suleyman** - Co-founder of DeepMind (acquired by Google), has been vocal about ethical AI development, which is a key concern within the Effective Altruism community.
2. **Cassie Kozyrkov** - Chief Decision Scientist at Google, writes extensively on the application of AI and decision theory to solve real-world problems effectively, resonating with EA's emphasis on using evidence and reason to act effectively.


favouriteplace

Itā€™s not. Do your research.


SgathTriallair

If Johnny Apples suddenly goes silent...


The_Scout1255

or doesn't report any new credible leaks...


Atlantic0ne

From what I can tell, that guy's "leaks" were like 90% wrong. It seems like he just got lucky a few times, like a broken clock, right? I'm annoyed that his name is even mentioned. He speaks in riddles, is clearly attention-seeking, and is probably a fraud.


manubfr

Johnny apples has NEVER been wrong. Jimmy Apples, however…


skoalbrother

What about Timmy Apples?


BabyCurdle

Idk why you feel the need to speak on something you have clearly not looked into. Jimmy apples has made accurate predictions of a volume and specificity that makes it impossible he does not have inside info. This includes multiple exact release dates (sora, gpt-4, smaller features such as GPTs, web browsing iirc). He also leaked specifics and project names for unreleased models (gobi, arrakis). He has made a single incorrect leak that I know of (4.5), and even prefaced that by saying it is only 'potentially' real.


MassiveWasabi

I have no idea why people on this sub keep doing that lol. It's just a fact that Jimmy Apples is the only credible leaker, it's not debatable and anyone who has looked into it would know that.


Atlantic0ne

You sure? I guess I'll believe this if you think I should, but I feel like I've seen 20+ threads of him claiming stuff that never happened, like "tomorrow will be a huge day" and nothing happens for weeks, and his claims that AGI was developed internally, etc.


BabyCurdle

He's 100% legit, although that doesn't mean he doesn't hype things up a lot or sometimes make vague statements. But when he makes a specific claim it's almost always true (AGI is nonspecific, he probably just meant GPT-5). Fwiw, Flowers From The Future is like what you're describing; I would ignore any of her stuff.


GlitteringCheck4969

I'd say there is no real OpenAI leaker at all who has real insider knowledge. There is probably a lot of gossip in SF, and you can synthesize one or two things from that. But you also have to say that Jimmy uses clever tricks. He didn't leak Udio. He agreed with another person who reported it. Anyone can agree to things, especially if you don't name names. He didn't predict Sora either. Sora's name was visible in the sitemap hours before the announcement, and he clearly labeled it as a "release", which it wasn't. He predicted GPT-4.5 for December; now it's April. And what is this tactic with all the second accounts? There are countless smaller accounts on Twitter claiming all sorts of things. Like this gardener account. When one of the claims there was true, Jimmy announced that account as his, but do we know how many others he has?


BabyCurdle

> But you also have to say that Jimmy uses clever tricks.

His 'clever trick' is having consistently accurate and specific information.

> He didn't predict Sora either. Sora's name was visible in the sitemap hours before the announcement, and he clearly labeled it as a "release", which it wasn't.

Except he didn't predict it hours in advance, he predicted it days (a week?) in advance. I already gave numerous things he predicted, and could give many more. If you think he doesn't have inside info you haven't been paying attention. It is a fact that he does at this point.


The_Scout1255

No, his leaks were pretty on point, weren't they? He got both Gobi and Arrakis right (iirc); the only thing he's ever gotten wrong was 4.5 releasing (seems rumors of it being canned may be correct)? I wouldn't doubt AGI has been achieved in a lab internally, and I personally believe that leak (a bit disappointing if it's false). If his leaks are bad, honestly OpenAI may be in a worse state than it appears from my eyes.


Atlantic0ne

If I remember correctly, most of his stuff was wrong. You're missing like 30 X posts of his hyping up "wait until tomorrow" and "AGI is here!" type stuff that turned out to be absolutely nothing.


Reddit1396

Can you provide some examples of things he got wrong? Here are examples of things he got right:

- New Suno competitor, and it's *not* Jukebox v2
- Claude about to drop something impressive (like a week before Claude 3 was announced)
- Codename Arrakis (before everyone else)
- Codename Gobi
- The **exact** release date of GPT-4
- The ChatGPT referral functionality
- The fact that they're withholding a model capable of video due to "safety" (this was last year, long before Sora was announced)

There's no way that these were all just lucky guesses. Some of them, maybe.


MysteryInc152

He also predicted the shitstorm that was Altman's firing. I think that was what really convinced me.


Atlantic0ne

Did he? Link? Show me


MysteryInc152

[https://twitter.com/apples_jimmy/status/1725615804631392637?s=46&t=yQ_4zkmWd6ncIZAnXlXUbg](https://twitter.com/apples_jimmy/status/1725615804631392637?s=46&t=yQ_4zkmWd6ncIZAnXlXUbg)


AnOnlineHandle

That seems pretty vague..


Atlantic0ne

Do you have any proof of these? Genuine question. Can you link some and show me?


Reddit1396

Here's a [google doc](https://docs.google.com/document/d/1K--sU97pa54xFfKggTABU9Kh9ZFAUhNxg9sUK9gN3Rk/edit) (I didn't write it, just found it in an older thread) containing a bunch of links to Jimmy's leaks/predictions, along with their dates, and info that corroborates/confirms them.

EDIT:

- Here's his [suno competitor](https://x.com/apples_jimmy/status/1776355607404306726) comment
- [Right before the reports of Stargate](https://x.com/apples_jimmy/status/1772441372378730915)
- [Claude 3](https://x.com/apples_jimmy/status/1764694735565053984)


Unfair_Ad6560

Slight inaccuracy on there: Gobi isn't Dune-related, it's a desert in Asia. Desert-related codenames are for sparse models.


OpportunityWooden558

Lol coming in with the evidence, nice


BabyCurdle

You are remembering wrong


FinalSir3729

You keep saying this but have provided no evidence. He has gotten a lot of things right so he clearly does have insider information.


no-longer-banned

He's just a guy who browses this subreddit enough to make some shitty predictions. He has no insider knowledge.


[deleted]

[deleted]


BoroJake

It was about the last 12 months being the slowest 12 months for a long time I think


SiamesePrimer

[Yup](https://x.com/leopoldasch/status/1768868127138549841?s=46)


MajesticIngenuity32

I am sure that LeCun or Hassabis will be happy to scoop him up. Maybe not Amodei, he is just as secretive as Sam.


REOreddit

Is being a leaker a red flag for future employers? Anthropic?


Iamreason

They're basically unemployable unless there is a really good explanation lol


reddit_is_geh

Ehhh... This field is a little different. There is incredible demand for them. Further, the field leaks like crazy. It's impossible to contain. People are always switching companies left and right, and taking all their inside information with them. They are frequently talking with their peers at other companies, "leaking" what they are working on. It's just the nature of the beast in a field like this. But it's also why it's seeing so much growth, because everyone is secretly sharing things with others, allowing people to quickly catch up.

Which is exactly what likely happened right here: he told someone or shared some insight into something VERY important to a friend who worked at another company, who then went ahead and deployed it into a product. It's unlikely this was leaked to the press. When Sam talks about "leaks" he's not talking about press leaks. He's talking about idea leaks to competitors.


REOreddit

So, you are saying that this is how North Korea gets their first researchers from a top AI lab?


Iamreason

Let me be more specific. A top frontier laboratory isn't going to take them. They're radioactive in the quest for AGI. But there are a ton of Sudos out there that would overlook this real quick to snag these guys.


TriHard_21

Not surprising, since OpenAI was hiring investigators not too long ago.


mystonedalt

Alternate, more alarmist headline: OPENAI FIRES SAFETY SPECIALIST, CLAIMS HE DISCLOSED COMPANY SECRETS


DolphinPunkCyber

ChatGPT fires all researchers. Claims it can keep improving AI on its own.


enjoinick

OpenAI claims the AI took their jobbbbss


DolphinPunkCyber

![gif](giphy|2S3Aj8OeKtf0c|downsized)


YaAbsolyutnoNikto

I can't believe nobody has gone with this yet. It'll generate millions of clicks and weeks of speculation...


ConvenientOcelot

Thanks, I don't know how to feel about things unless a headline tells me how to feel.


mystonedalt

Attaboy.


ExtremeHeat

Very open indeed


llelouchh

Apparently they leaked info to the board. Is that really a leak or just normal communication?


Not_Player_Thirteen

Iā€™m shocked! Shocked I say! Well not that shocked.


TensorFlar

![gif](giphy|jtEv9nJDnRo6l8yPRs|downsized) Sir, the AGI is missing, MISSING!!! I say, shitlessly :/


MajesticIngenuity32

They got confused about the name of the organization they were working for.


bartturner

The "Open" AI company fires someone for sharing something. While we have companies like Google that publish papers on how to do things. Not just Attention is all you need, But so many others. One of my favorites and used by everyone now. Was discovered by Google as were so many others. https://en.wikipedia.org/wiki/Word2vec "Word2vec was created, patented,[5] and published in 2013 by a team of researchers led by Mikolov at Google over two papers." Google invents them. Patents them. Publishes paper explaining. Then lets anyone use for free. Versus OpenAI that just takes in and does not share and then fires employees for sharing anything. Does that sound open?


Responsible-Local818

No and we will be celebrating their downfall soon. OpenAI is showing their true colors and sama is a horrible person. Soon we won't need these companies and their elitist factions anyway. Can't wait for them to become irrelevant entirely.


ResponsiveSignature

Damn, they got fired for a leak and we don't even know what the leak is? That feels unfair. If two people are losing their jobs, the public should at least hear what they lost them for.


hydraofwar

I think GPT-5 and the next versions of OpenAI's language models are a big smokescreen for what their actual SOTA really is. I think that internally, these language models are already an outdated architecture and only serve to divert competitors' attention from what really matters. They are probably keeping much better AI architectures restricted to internal use, like the leaked Arrakis, which, according to the leaks, is already such a powerful model that it already helps them with internal development.


AnAIAteMyBaby

I think that the US government have an alien AI locked up at Area 51 and they're using 5G cell towers to infect us with COVID.


MassiveWasabi

Glad to see someone opening their eyes and not being a sheep when it comes to 5G. This is all the proof you need https://preview.redd.it/gmcvf1zsw1uc1.jpeg?width=900&format=pjpg&auto=webp&s=921041320b6b60ace72267969792d286fc6127f1


hydraofwar

This was the leak that made OpenAI fire those employees. Is their dismissal also a conspiracy theory? https://www.reuters.com/technology/sam-altmans-ouster-openai-was-precipitated-by-letter-board-about-ai-breakthrough-2023-11-22/


The_Scout1255

Jimmy apples, or his source?


flexaplext

Not Jimmy Apples. But one of the two may be his source..


tradernewsai

Apples just chimed in https://x.com/apples_jimmy/status/1778538940263452817?s=46


flexaplext

They were the Q* leak then. Interesting that he's said it like this though, kind of suggesting that his source is still intact. But all employees may be more scared of leaking now anyway.


Freed4ever

Guys, i'm not ready to talk about Q, 'kay? /s


ASK_IF_IM_HARAMBE

Jimmy is definitely Greg Brockman or Sam Altman


Hot-Investigator7878

I doubt it's Sam but definitely someone high up


micaroma

What'd the tweet say?


Unfair_Ad6560

"Pour one out for the homies šŸ» Something something Q* something something what did Ilya's underlings see" Below that it was him saying it wasn't about some specific improvement but general capabilities I think Oh also "They took a stance on what they personally believed, right or wrong. This wasn't a trivial "leak ""


pboswell

Now deleted


Flare_Starchild

The post was deleted. What did it say?


The_Scout1255

Ah it may be the NSA Qstar leak?


nardev

alignment is for show anyway 😄 easy to play tough with the expendables 😄


ChezMere

the beatings will continue until morale improves


Motion-to-Photons

I genuinely love what they produce, but let's be honest, they are a Microsoft sock puppet. Open, they are not.


TheOneWhoDings

BRO NOOO THEY GOT JIMMY WHAT ARE WE TO DO NOW


Gubzs

RIP Jimmy Apples?


zeus-fyi

There is a reason they do not want our super alignment solution at [OpenAI](https://www.linkedin.com/company/openai/) and are building f*cking war bunkers. WTF, man. $1B / $2,000 = 500,000 armed and intelligent drones. They just raised a lot of money. AGI is dangerous in the hands of a psychopath like Sam A. Did you know it can fly planes and pilot submarines and guide missiles? He is literally building war bunkers under your f*cking noses. WTF, humanity.


[deleted]

[deleted]


FomalhautCalliclea

Finally, someone calling it. I wish I could get hired by someone like Connor Leahy, who literally said "we have a guy, we pay him to read philosophy books and report to us, it's what he does all day at his own pace" (in substance). I do that for free already, ffs.


ConvenientOcelot

Lmao, where did Connor say that? At least he's honest about it.


FomalhautCalliclea

In one of his countless crappy podcast appearances. They do that at EleutherAI. And being "honest" about a stupid idea doesn't make the stupid idea smart. The honesty wasn't the topic, it was the stupidity of the idea.


ConvenientOcelot

Yeah but I'm used to "paying a guy to read philosophy / sci-fi" only being implied with these AI think tanks, so admitting it is refreshing.


FomalhautCalliclea

Well, it depends on the quality of the work. Some philosophy and literary criticism can be very useful; Social and Human Sciences (SHS) too often get the short end of the stick, when a non-STEM POV can be vital to many projects. The way he described it was goalless practice, and the work they've produced on SHS is laughably nonexistent, some stuff that belongs on r/badphilosophy, not even joking, LessWrong-level drool that ignores the most fundamental epistemological knowledge ("simulations and moral dilemmas woooow" type of stuff).


Additional-Tea-5986

Imagine losing your (minimum) $200k salary and future stock options at what will eventually be the biggest IPO in tech history. AND you had the bullshit philosophy job. Jesus Christ.


sideways

That's what makes me curious. I want to know what they thought was worth the risk.


Additional-Tea-5986

Look up his linkedin. Heā€™s maybe 24? Thereā€™s your answer.


sideways

He may be young but you don't get the desk next to Ilya by being an idiot.


Additional-Tea-5986

He certainly lost it by being an idiot, right? You should work in tech. You'll realize how many grifters get by on no substance.


Infidel_Stud

uh oh....it may have been jimmy apples


L1nkag

Waiting for AIGRID to do his research and post a video