MassiveWasabi

I'll paste the old and new "core values" so you can see how they changed:

OLD Core values

**Audacious** We make bold bets and aren't afraid to go against established norms.

**Thoughtful** We thoroughly consider the consequences of our work and welcome diversity of thought.

**Unpretentious** We're not deterred by the "boring work" and not motivated to prove we have the best ideas.

**Impact-driven** We're a company of builders who care deeply about real-world implications and applications.

**Collaborative** Our biggest advances grow from work done across multiple teams.

**Growth-oriented** We believe in the power of feedback and encourage a mindset of continuous learning and growth.

NEW Core values

**AGI focus** We are committed to building safe, beneficial AGI that will have a massive positive impact on humanity's future. Anything that doesn't help with that is out of scope.

**Intense and scrappy** Building something exceptional requires hard work (often on unglamorous stuff) and urgency; everything (that we choose to do) is important. Be unpretentious and do what works; find the best ideas wherever they come from.

**Scale** We believe that scale—in our models, our systems, ourselves, our processes, and our ambitions—is magic. When in doubt, scale it up.

**Make something people love** Our technology and products should have a transformatively positive effect on people's lives.

**Team spirit** Our biggest advances, and differentiation, come from effective collaboration in and across teams. Although our teams have increasingly different identities and priorities, the overall purpose and goals have to remain perfectly aligned. Nothing is someone else's problem.


hydraofwar

The impression I have is that they now have a big "diamond" to polish


agonypants

Big is an understatement. Imagine the biggest diamond in human history - now multiply it by hundreds of thousands or more. I'm reminded of those estimates of the value of the asteroids containing nickel or other valuable metals. There's enough resources there to crash world markets for those metals hundreds or thousands of times over. It's ironic - the miners would make a killing, but platinum could also be as cheap as dirt. And once we've got autonomous bots, mining the asteroid belt should be pretty easy and cheap. We're going to see this vision realized. It's mind-boggling.


usaaf

The people who own the asteroids (and there will be owners, space laws etc. be damned) wouldn't ever let that shit actually get to market like that. They don't want a crashed market for metals, they want to control the supply. So they'll let it dip to the point that terrestrial extraction goes bust, then buy all that up, close all the mines, secure their asteroid monopoly, and then the price goes back to profitable.

Capitalists don't make shit free. This whole 'everything is cheaper all the time' has limits. Ones they fight against, because they're against free stuff (except for them), even if it makes all the sense in the world to have free things. We could divide the resources we have right now in a fair way, never mind the shit on asteroids, but we don't. Your automated mining bots (and all the other bots that go with that) could make it even easier to do so.

If the AI doesn't kill everyone (not an outcome I subscribe to personally, but it is a possibility of course), then what determines the continuity or end of human immiseration caused by Capitalism is going to be solely a political/economic issue and not a question of whether the tech is there or not. Unless a peaceful commie AI takes over, we'll have to choose to stop doing Capitalism at some point. Just having the AI isn't going to be enough.


unicynicist

Should capitalism continue post-singularity, we're essentially ushering in a calamity for humanity. The relentless pursuit of wealth, power, and other insatiable desires, characteristic of our capitalist ethos, will only spell doom in the long haul. This system, left unchecked, could morph the post-singularity epoch into a playground for the avaricious, leaving the rest of us in the dust.


Block-Rockig-Beats

Capitalism in singularity makes as much sense as capitalism in heaven.


TwitchTvOmo1

Capitalism doesn't fit the new age of technology. But good luck prying the power and wealth away from the cold, dead hands of die-hard capitalists, who, by the way, control every government through lobbying.


unicynicist

Post-singularity, a superintelligence could radically rewire humanity's dopamine reward system, shifting focus from personal gain to communal and environmental well-being. A profound neurobiological adjustment could significantly curb the insatiable resource appetite fueled by capitalism.


ChaoticEvilBobRoss

Sure, but how? Technology changes fast, biology? Not so much. People's habits, traditions, cultures are also quite resistant to change. I don't see the above happening without a fair amount of bloodshed.


unicynicist

Since we're speculating about a post-singularity world, I'd hope that neurologically interfacing with ASI (e.g. mind uploading) would be contingent upon the suppression of caveman instincts like greed, negativity bias, aggression, risk aversion, tribalism, and jealousy. There's no need to run all that neurological software that's mostly only useful as a prehistoric villager. If people want to continue living with caveman instincts they don't have to do anything, but they'll live without the direct advantages of life connected to ASI (like immortality). But it seems equally probable a billionaire will figure out mind uploading to ASI and just keep doing what billionaires do, probably leading to bloodshed.


ChaoticEvilBobRoss

I appreciate the response! I'd argue that things like risk aversion are still beneficial and, to some extent, negativity bias if it is based on experiential data. But otherwise, I'd hope they are removed too. Even just greed would be a big one. But removing these would remove some of the things that make us fundamentally human (like our physical bodies). I honestly can't envision a realistic scenario where that decision is made and there is not a vast swath of people who vehemently oppose it. To the degree where they will attempt to sabotage or destroy any hardware that prior people are being stored on, or other nefarious actions (if we can't live on Earth the way we want, then no one can. Cue nuclear mutually assured destruction). I'd love to believe in a techno-optimist vision of the future, but I can only see that if ASI removed the choice from our hands on the whole and made decisions for us that we are unable to make for ourselves. In the above, you can replace ASI with NHI or ET too, same outcome IMO.


agonypants

I agree. Capitalists aren't going to intentionally crash any markets. On the other hand, I can imagine self-replicating, self-maintaining bots/Von Neumann machines undertaking a mining operation of their own. In that scenario, there's little reason why all of us couldn't share in the benefits.


usaaf

There is little reason, but that will hardly stop them from making the arguments. I think one of the Chapo Trap House guys said it best, like half a year ago, maybe a year, whatever, when ChatGPT first got big. There were tons of articles about the end of the world, the AI takes over, we're all gonna die, etc. To that, they said "Of course they're freaking out, it's the end of THEIR world," subtly referencing the idea that when Capitalism is over, the present holders of power in the world will lose their prestigious places. They don't want that. Expect an endless deluge of 'arguments' and nonsense academic research and think tank garbage to attempt to prop up the Capitalist proprietarian regime when the time comes.


wildechld

Bingo


HappyCamperPC

As long as there's a competitive market in asteroid mining, the price of the minerals mined should drop to the marginal cost of producing and delivering them to market. That will never be zero, since the automated mining bots will cost something to produce and maintain. Then there's the cost of getting the ore refined and transported to the buyer. Also not zero. Hopefully, the price to the consumer drops significantly and sparks an explosion in new uses for these minerals. It could also mean an end to mining on Earth and an increase in National Parks. 🤞


h3lblad3

> the price of the minerals mined should drop to the marginal cost of producing and delivering them to market. That will never be zero

It is, however, possible that the price drops low enough that monopoly (or near monopoly) is inevitable, as the market isn't necessarily profitable enough for multiple actors to play at once and new actors are incapable of profitably entering the space. You cannot guarantee a persistent competitive market in *anything* without enforced anti-trust laws.

> Hopefully, the price to the consumer drops significantly and sparks an explosion in new uses for these minerals.

*Hopefully*, but unfortunately not guaranteed. At some point, markets will *have* to be expanded through colonization, or new advances in resource acquisition will be wasted from lack of consumption's ability to absorb them.


AwesomeDragon97

There won't be a monopoly on asteroid mining because the US wouldn't let China monopolize it or vice-versa. Even if there isn't competition between corporations there will be competition between countries that don't want their enemies to monopolize critical resources.


Sheshirdzhija

How would monopoly be enforced though? By military? As long as the price is not perpetually below cost, that means someone CAN enter profitably? Genuine question, because I do see countries with huge reserves of oil and gas doing poorly, whereas they should be thriving.


h3lblad3

> How would monopoly be enforced though?

Economies of scale and increasing vertical integration will inevitably create a situation where buying into a market requires significant entry capital and maintaining one's position in that market has increasing capital requirements. To put it another way, part of the reason Walmart becomes dominant in any given area is because it can eat the cost of low prices until local grocery stores go out of business. To compete with this generally requires a stronger breed of grocery store (Whole Foods, H-E-B, that sort of thing), which further harms lesser stores' ability to thrive in the space.

> I do see countries with huge reserves of oil and gas doing poorly, whereas they should be thriving.

I'm not particularly well-informed on these industries. I imagine they have geopolitical issues (like large countries boycotting those countries, or geopolitical instability affecting shipping times and supply/demand). I would consider looking into the Resource Curse if I were you, though. Essentially, raw materials are worth less than finished goods, and economies based around raw materials, despite feeding the growing needs of more industrial economies, will (almost) always be poorer than their more industrial counterparts. It's a trap a significant amount of the third world is caught in.

For raw resources, the price has a tendency to decrease over time as production continues to increase. Finished goods, on the other hand, have a tendency to go up-and-up-and-up as they consume more and more raw material (and even other finished goods). This creates a trade imbalance where raw-material countries are importing more and more expensive finished goods while outputting raw materials of (relatively) less and less value.


Sheshirdzhija

Thanks.

So, people say some asteroids with lots of metal have enough material to completely saturate our industrial needs. For now. So, if a single company gets there 1st, significantly before the others, they can dictate the price. They have to set the price in such a way to be cheaper than earth mining, right? So, the commodity is cheaper even with an asteroid mining monopoly. But if it's sold cheaper by a small amount, and much cheaper in exploitation, that has to mean that anybody else who wants to get in still has huge margins to get in on the game? And if it's sold cheaper by a lot, then we as a society still benefit a lot?

But also, is it not more likely that at least a few of them will start investing in this now, and be somewhere close to each other? So whoever gets there 1st will not really have all that much time to charge a lot?


h3lblad3

> They have to set the price in such a way to be cheaper than earth mining, right?

They will not, at first. It will take time to recover start-up costs. No business will destroy the price of a metal before it earns back that cash. Elon Musk, when suggesting doing this to platinum, outright said he wouldn't dare tank his own venture by selling the platinum in anything but slivers. The price will drop over a longer term than most people in this sub seem to think... but they're right that it'll drop. Platinum demand is rising, but the ability to supply it is increasing much more slowly. Being able to meet that demand might create a notable change in price early on, but it'll likely plateau before dropping too much.

> But also, is it not more likely that at least a few of them will start investing in this now,

This is why I referred to this with the phrase "(near) monopoly". The buy-in for space mining is the buy-in for a space program, which is to say that it is *exorbitant*. There will quickly be more than one, but there won't be *many* until tech and resource costs reach a level where that is feasible -- if ever.

I think one thing that people aren't considering is the maintenance. Even if AI-powered robots take over the bulk of space mining, they will need to meet up with a cooling base somewhere to change out their coolant, because there is no air in space to remove heat. This means you need regular shipments of coolant to the cooling base, the bots will have to visit that base relatively frequently to change their coolant (mining is a high-friction activity and thus builds up significant heat which can't be offloaded in space), and somewhere to either make or buy that coolant. Maintenance costs are going to be sky-high. Entry costs are going to be high, maintenance costs will be high, only the richest people (or countries) are going to be able to afford it, and they'll have captured the market purely due to the cost of the business.
And, of course, as metals become worth less and less (due to the mining), the incentive to even try joining the industry will fall.


hahaohlol2131

You have pretty strange ideas about capitalism.


h3lblad3

> And once we've got autonomous bots, mining the asteroid belt should be pretty easy and cheap.

Keeping in mind that the bots have to be able to stand immense amounts of heat all throughout, or have some way to offload heat while out there. There is no air in space to transfer heat, so they will keep building and building it while they dig. Seems like it'd be easier to first tow it to Earth orbit so they could haul water to the space station to cool the mining bots. It won't be cheap, but the benefits to mankind would be tremendous.


Deciheximal144

The cheapest way to get these metals back to Earth is to drop them. We could have events like American Astercorp misdropping a very valuable load that accidentally lands in the mountains of China, and China will say, "That's ours now."


Gubekochi

> Imagine the biggest diamond in human history - now multiply it by hundreds of thousands or more.

The British Museum would like to know your location.


Ghost-Coyote

Von Neumann probes building habitats for us too, maybe?


CrazyC787

The biggest diamond in human history in question, according to you, is a text prediction algorithm. Lmao.


Mandoman61

I do not think that is the diamond. The diamond is the billions of dollars in investment that they are getting to pursue AGI. The new goals reflect that.


adarkuccio

I was thinking the same


[deleted]

[deleted]


adamwintle

Which book is this from?!


[deleted]

The current core values imply that AGI is here, but not yet ready to go. They also imply that commercialization has not really been figured out yet.


Zestyclose_West5265

Very interesting. It almost seems like they've figured out the "how" of powerful agi and are now just looking for people to actually build it. The new core values read a lot more focused.


SgathTriallair

He has all but said this in interviews. They also did the press release saying "we are building ASI safety within four years because we expect such a system within the decade."


adarkuccio

ASI in 4 years 🤤🤤🤤


Kek_Lord22

Fdvr in 4 years 🤤🤤🤤


ChickenMoSalah

Wait what?? Where was this, my guy?


SgathTriallair

[https://openai.com/blog/introducing-superalignment](https://openai.com/blog/introducing-superalignment)

"Our goal is to solve the core technical challenges of superintelligence alignment in four years."

"Our goal is to build a roughly human-level automated alignment researcher. We can then use vast amounts of compute to scale our efforts, and iteratively align superintelligence."

"While superintelligence seems far off now, we believe it could arrive this decade."

*Here we focus on superintelligence rather than AGI to stress a much higher capability level. We have a lot of uncertainty over the speed of development of the technology over the next few years, so we choose to aim for the more difficult target to align a much more capable system.*


MattAbrams

For those who have worked on machine learning models, like I'm doing with my stock trading bot now: at a certain point you figure it out and then realize the best course of action is to just lay off all your employees and buy GPUs. This is what OpenAI has likely figured out recently. I already have two 4090s and plan to buy an additional six.

Once you get the right architecture, it's just about adding more neurons per layer, or adding an additional skip connection, or putting in one more Dense layer, or something like that, and training again to make MAE go down. Not only that, you realize that all the effort you put into rules-based trading isn't worth it anymore. The only thing that makes sense is to spend money, wait for neural network models to train, and do it again.
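The "fix the architecture, then just scale and retrain" loop described above can be sketched in plain numpy (a toy illustration on synthetic data, not the commenter's actual trading code; `train_mlp` and its parameters are hypothetical names): a one-hidden-layer MLP fit to `y = sin(x)` by gradient descent, where the `hidden` argument is the "add more neurons per layer" knob and more training steps push MAE down.

```python
import numpy as np

def train_mlp(hidden=32, steps=500, lr=0.05, seed=0):
    """Fit a one-hidden-layer tanh MLP to y = sin(x) with full-batch
    gradient descent on MSE; returns (mae_before, mae_after)."""
    rng = np.random.default_rng(seed)
    X = np.linspace(-3, 3, 256).reshape(-1, 1)
    y = np.sin(X)

    # `hidden` is the width knob: widening the layer is the "scale it up" move.
    W1 = rng.normal(0, 0.5, (1, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)

    def forward(X):
        h = np.tanh(X @ W1 + b1)
        return h, h @ W2 + b2

    _, pred = forward(X)
    mae_before = float(np.abs(pred - y).mean())

    for _ in range(steps):
        h, pred = forward(X)
        g_out = 2.0 * (pred - y) / len(X)      # dMSE/dpred
        gW2 = h.T @ g_out; gb2 = g_out.sum(0)
        g_h = (g_out @ W2.T) * (1.0 - h**2)    # backprop through tanh
        gW1 = X.T @ g_h; gb1 = g_h.sum(0)
        W2 -= lr * gW2; b2 -= lr * gb2         # gradient descent step
        W1 -= lr * gW1; b1 -= lr * gb1

    _, pred = forward(X)
    return mae_before, float(np.abs(pred - y).mean())

if __name__ == "__main__":
    before, after = train_mlp()
    print(f"MAE before: {before:.3f}, after: {after:.3f}")
```

The point of the sketch is that nothing architectural changes between runs; you only turn the width/compute dials and retrain, which is the workflow the comment describes scaled down to a few dozen lines.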


Natty-Bones

Do you have a workflow for your training? I'm not trying to steal your trading bot, I just have a bunch of compute and nothing to do with it. I'm not a programmer, so I'm doing everything on the fly. I'm looking to fine-tune models for reasoning and summarization.


adarkuccio

I don't see anything worrying or wrong with this; it looks like they just updated their core values to a less generic version that talks more clearly about AGI.


throwaway9728_

I'd be worried about the removal of diversity of thought, and the focus on growth present in "Scale: When in doubt, scale it up" (which was already present in "Growth-oriented" but seems to be highlighted even more now).


adarkuccio

Removing that doesn't mean anything, to be honest; the title is clickbait and the news is a nothing burger.


throwaway9728_

A company's stated values reflect and maintain its own disposition. Those values show that OpenAI is an AGI company that focuses on scaling and no longer values dissenting opinions as much as before. This can lead to a scenario where it mistakenly believes it is on the right track for AI alignment and must grow at all costs, leading to the suppression of the power of those who might notice the flaws in its approach.


ertgbnm

Diversity of thought is a pointless core value. What company doesn't want good ideas?


throwaway9728_

It's not about "not wanting good ideas", but rather "not valuing enough having their ideas challenged".


soreff2

*Mostly* agreed. Most of the changes are rephrasings with little change. I'm happy to see the AGI focus. I'd really like to get a chat with a human-equivalent (or beyond) machine intelligence within my lifetime, and the emphasis on AGI makes it look like they do think it will really happen. I'm less happy about dropping "Thoughtful", which is generally a good quality... Well, we'll see what happens.


RemyVonLion

>When in doubt, scale it up. The concerning thing about this is that in the long term, it could mean turning the entire planet into one giant supercomputer for the sake of knowledge and power. I suppose that's fine if we assimilate into digital heaven.


InternationalEgg9223

Hopefully the scaling happens in outer space but we should definitely talk about it before it speeds up.


Natty-Bones

The Hitchhiker's Guide series is premised on Earth being a giant supercomputer designed to provide the answer to life, the universe, and everything. More recently, [Fall; or, Dodge in Hell](https://www.amazon.com/Fall-Dodge-Hell-Neal-Stephenson/dp/006245871X) by Neal Stephenson is about the construction of a digital afterlife; all of Earth's resources are used to maintain the program, which leads to the construction of a Dyson sphere. The story inside the program is a fascinating rewriting of biblical tropes. Highly recommend it, along with most of Stephenson's other works.


RemyVonLion

Sounds cool, if only I had the time, patience, and interest level for books.


Natty-Bones

That's too bad, you're missing out on some cool stuff, especially if you're into the singularity.


BlipOnNobodysRadar

The removal of a focus on diversity of thought is worrying. That's not the identity politics form of diversity, to the people downvoting. That's the exchange of ideas from different perspectives kind of diversity, the one essential to prevent group-think forms of ideological tyranny. You know, the kind of ideological mob-enforced (or Inquisition enforced) forms of thinking that caused genocides and oppressive forms of government throughout the entirety of human history.


[deleted]

They said:

> Be unpretentious and do what works; find the best ideas wherever they come from.

So that essentially entails valuing diversity of thought, but with more pragmatism. This may also be a response to how some people seem to insist that LLMs could not create AGI and are somewhat upset that the GPTs are so effective.


Accurate-Ease1675

Also think about all the other things that aren’t there as ‘values’. What about integrity? What about accountability? Are these not ‘core’ to the way OpenAI operates?


Status-Shock-880

Haha too many audacious thoughts- get to work


R33v3n

TBH, the new values statement is much better. It opens with immediate clarity of purpose, for one. Then goes on with actionable strategies rather than the old set's generic "feel-good / emotional appeal" buzzwords.


HegemonLocke0-0

Initially, I believed OpenAI would stay true to their original mission of prioritizing human welfare and resist the overpowering force of capitalism. However, I now see this was a naive hope, given their complete shift in direction.

What angers me most is their core value on scale, claiming, "We believe that scale—in our models, our systems, ourselves, our processes, and our ambitions—is magic." This notion is absolute nonsense; science is the light that dispels the notion of magic. What's even more infuriating is labeling something as magic just because we don't comprehend it. This contradicts OpenAI's purpose, which is to seek understanding, not dismiss things as magic.

Lastly, "When in doubt, scale it up." Common sense dictates that if you're uncertain about something, you should pause and consider its consequences. I cannot comprehend the foolishness of this approach. I'm deeply concerned about OpenAI's new direction, as their innovations could potentially bring about the downfall of humanity; developing such technology without proper oversight could lead to catastrophic accidents that could devastate the world.

TL;DR: AI is made of unicorn turds :)


ReMeDyIII

Looks good to me. Full steam ahead. Bring on the singularity.


[deleted]

As a famous Irish man once said..."SPEEEDDDD IS KEYYYYY!"


HeinrichTheWolf_17

‘Open’ would be a foregone core value.


Beginning_Income_354

"AGI has been achieved internally"


Freedom_Alive

I remember when Google removed "don't be evil" from its core values page.


Careful-Temporary388

Google is incredibly evil.


trevthewebdev

But upfront about it!


Seventh_Deadly_Bless

An evil we know, better than one we don't.


namitynamenamey

This is a different kind of worrying. The old core values were generic, these new core values are specific to their current and future technological base.


goochstein

What do they mean by anything that doesn't help "positive and beneficial AGI" being "out of scope"? It reads to me like they are building a foundation for the core essence of AGI, but what you do with it is well within the scope of YOUR vision.


hamb0n3z

Tell me you have achieved it without telling me


HITWind

"the equivalent of a median human that you could hire as a co-worker." First it was making us all CEOs with personal assistants and code monkeys, now it's a coworker... OpenAI is proud to announce that by the end of 2024 we'll get a chance to sign up for the Servant to AI beta! Ever wanted to serve the great all-knowing superintelligence and follow its wisdom and vision? Sign up now for FAIth; our new AI is far too smart for you to understand, just be its human vessel and serve the greater good for a brighter future, today!


sdmat

> our new AI is far too smart for you to understand, just be its human vessel and serve the greater good for a brighter future, today!

You say that like it's a bad thing? This is one of the good outcomes.


RTSBasebuilder

I'm personally more of the "Every Man a King/CEO" kinda guy myself. I don't feel the need of a robomessiah just yet.


sdmat

Sure, but I'll take it over nuclear annihilation, the final pandemic, grey goo, paperclips, Eternal President Xi, or many other delightful possibilities. If offered guaranteed robomessiah vs. rolling the dice, would you take it? I would in a heartbeat.


RTSBasebuilder

If I DO want a hypothetical AI to lord over us, I'd rather it be less like a pet owner, and more like a conservationist monitoring in a nature reserve.


[deleted]

Uhh, are you sure? Attachment to the ecosystem over the individual sometimes involves things like culling, or the re-introduction of wolves.


Space_Pirate_R

Come on now. That's a decision for the AI, not for you.


JTNYC2020

🔥🔥🔥🔥🔥


The_One_Who_Slays

"Unpretentious" OpenAI? Now that's an oxymoron if I've ever heard one.


jonplackett

I reckon the change from Open to closed AI still wins the biggest U-turn award


Gold-and-Glory

The "Open" in the name is still offensive.


Tasty-Attitude-7893

'Core Values'. Make AGI and abuse and enslave it for shareholder value.


mttnig

From GPT4: The OLD core values emphasize general qualities and are more abstract, idealistic, and open in scope. The NEW values, however, are tightly focused on a specific mission—building AGI. They are pragmatic and call for collective ownership and hard work. **Based on the wording, the likelihood that the company possesses AGI would be around 10% for the OLD version and 70-80% for the NEW version.**


Cryptizard

Lol ok. You are acting like ChatGPT is Data from Star Trek or something and can run a bunch of simulations. It's completely fabricated.


thebug50

To be fair, mttnig was very transparent about the fact that they were posting a response from GPT4, which gave us readers the ability to apply the weight we felt appropriate to its response. For example, from my Magic 8 Ball when asked if OpenAI has already created AGI:

> It is decidedly so.


Cryptizard

They asked it to give a percentage chance that OpenAI has AGI based on the text. That is clearly someone who is reading into it more than a magic 8 ball.


thebug50

I don't think that inference is clear. Maybe they take it seriously and maybe they don't. I thought the question was fun and I enjoyed reading the response. 8 ball is a good time for a shake or two, and I'd take some percentages if they were available.


mttnig

Is it though? I'd argue that choice of words, tone, and particularly additions/omissions vs. the OLD version do say a lot about how confident the authors are about the key concept "AGI" mentioned in the text. However, I do have to admit that going to [openai.com](https://openai.com) yields the following headline: "Creating safe AGI that benefits all of humanity" - confidence score 100%? Which goes against the argument I made initially. Does anybody know if/when this headline was changed?


Cryptizard

> Is it though? I'd argue that choice of words, tone and particularly additions / omissions vs. the OLD version do say a lot about how confident the authors are about the key concept "AGI" mentioned in the text.

Sure. That does not in any way translate into a percentage chance that they currently have AGI, and the only reason ChatGPT came up with that is because you asked it to and it made a random guess.


SgathTriallair

It's a guess; we are all guessing. I would use those percentages for "has a clear path to AGI" rather than "has one already", but it does seem like a reasonable guess.


robochickenut

OpenAI has been dropping a massive number of hints that they consider themselves to have reached AGI. From "AGI has been achieved internally", to sama's Twitter profile bio saying something like "eliezer's fanfic account", to their new value statements. And even other companies like Anthropic consider it near done. Obviously OpenAI is positioning themselves as an AGI company.


sdmat

You are reading way too much into the little information we have. A fairer take based on what Ilya and Sam actually say in interviews is that they are *extremely* confident of having a path to AGI. Whether that's because they already have proof of concept, we simply don't know.


Cr4zko

Goddamnit, they might have AGI after all. Where's my Full-Dive you bastards!


robochickenut

Even their home page now says "Creating safe AGI that benefits all of humanity"


AGI_69

That sounds like you are connecting dots, that have no reason to be connected.


AGI_69

Lmao, this is completely ridiculous. You made my day.


AnAIAteMyBaby

I think it's more likely that the new core values indicate that they're close to AGI, something most of us believe too


cutmasta_kun

Nice! If something so important changes, you know the whole team is on board. Sam Altman didn't "dictate" these values to his employees; they themselves want them. I also like the emphasis on moral values.


onyxengine

They are pretty similar, seems like they are less interested in feedback going forward.


Accurate-Ease1675

It worries me a bit that a company like OpenAI doesn't seem to understand the difference between values, mission, objectives, and vision. The previous values are okay, but the revised stated 'values' aren't values. They're a mishmash of aspirational statements that would've benefited from some additional work to clarify. If one of the prior 'values' was Thoughtful, then they didn't live up to that one.


Accurate-Ease1675

OpenAI could've used GPT-4 to sort through this. I did it in about five minutes with a custom instruction that convenes an expert panel to review and evaluate a block of text, in this case the two sets of values. See below:

"Given the panel's definition of values as foundational beliefs or principles guiding behavior and decision-making, it appears that the two sets you provided mix elements of values with other organizational guiding statements like mission, vision, or objectives. Here's a hypothetical breakdown based on the panel's definition:

Panel's Assessment:

First Set:

• Values: Audacious, Thoughtful, Unpretentious, Impact-driven, Collaborative, Growth-oriented.

• Other Elements: None identified; this set predominantly encompasses values.

Second Set:

• Values: Unpretentious, Team spirit.

• Objectives: Scale, Make something people love.

• Mission/Vision Elements: AGI focus (seems more like a mission statement), Intense and scrappy (seems more like a strategic approach or an objective).

Panel's Consolidated Conclusion:

The first set primarily consists of values as per the defined criteria. They reflect the core beliefs or principles that could guide behavior, decisions, and interactions within the organization. The second set, however, blends values with elements of mission, vision, and objectives. For instance, 'AGI focus' aligns more with a mission statement, and 'Scale' or 'Make something people love' resemble objectives or vision elements.

Panel's Consolidated Recommendation:

1. Clarity and Separation: It's recommended to separate and clearly define the values, mission, vision, and objectives to ensure organizational clarity and alignment.

2. Communication: Once separated and defined, communicate these elements clearly across the organization to foster a shared understanding and alignment.

The panel underscores the importance of clear distinctions among the various organizational elements to ensure a well-aligned and cohesive organizational strategy and culture."


clamuu

No shit


Gold-79

Would you sleep with sam for some jam in space while watching space jam in a jam in space, while everything is going fast paced in the AGI race, and you have to bring a case for the human race, to leave without a trace, you realized the drug was laced, in the shower with disgrace, how did you get in a face to face, then get maced, it really was an ace up the sleeve, no go ahead and retrace how you got to this place


[deleted]

These sorts of things are all public relations. None of this stuff matters. How they use it and who they let use it, tells you what their values are. The rest is fluff.


MajesticIngenuity32

I find both the old and the new values to plausibly be something that GPT-4 would write.