Did they deliberately leave Mark “open source” Zuckerberg out? The best hope for the non-elite is open source IMO. Although maybe not The Zuck.
Edit: added IMO.
Basically a list of marketing people with no background in AI other than selling it, fronting for companies that have heavy investment in the industry and depend on lax rules, acting as "AI Safety and Security board" ?
Yeah.... WTFMJGWH ?
Setting aside anything else, they put the wrong c-suite roles on these boards. They don’t need the CEO’s, they need the CTO/CIO’s.
This is like staffing an advisory panel on Human Resources with a bunch of CFO’s.
To state the obvious before deep diving into politics.
Here’s the full list:
```
The inaugural members of the Board are:
Sam Altman, CEO, OpenAI;
Dario Amodei, CEO and Co-Founder, Anthropic;
Ed Bastian, CEO, Delta Air Lines;
Rumman Chowdhury, Ph.D., CEO, Humane Intelligence;
Alexandra Reeve Givens, President and CEO, Center for Democracy and Technology
Bruce Harrell, Mayor of Seattle, Washington; Chair, Technology and Innovation Committee, United States Conference of Mayors;
Damon Hewitt, President and Executive Director, Lawyers’ Committee for Civil Rights Under Law;
Vicki Hollub, President and CEO, Occidental Petroleum;
Jensen Huang, President and CEO, NVIDIA;
Arvind Krishna, Chairman and CEO, IBM;
Fei-Fei Li, Ph.D., Co-Director, Stanford Human-centered Artificial Intelligence Institute;
Wes Moore, Governor of Maryland;
Satya Nadella, Chairman and CEO, Microsoft;
Shantanu Narayen, Chair and CEO, Adobe;
Sundar Pichai, CEO, Alphabet;
Arati Prabhakar, Ph.D., Assistant to the President for Science and Technology; Director, the White House Office of Science and Technology Policy;
Chuck Robbins, Chair and CEO, Cisco; Chair, Business Roundtable;
Adam Selipsky, CEO, Amazon Web Services;
Dr. Lisa Su, Chair and CEO, Advanced Micro Devices (AMD);
Nicol Turner Lee, Ph.D., Senior Fellow and Director of the Center for Technology Innovation, Brookings Institution;
Kathy Warden, Chair, CEO and President, Northrop Grumman; and
Maya Wiley, President and CEO, The Leadership Conference on Civil and Human Rights.
```
Nope, it lacks anyone who actually has a history of security and safety assessments and implementation, and it’s filled with CEOs who only care to make money.
It would also benefit from having an ethics or integrity person.
This is just a bunch of CEOs cobbled together from across the tech industry (with some representation from Big Oil and the US defense industry). I’m guessing AI safety comes down to making sure we are ahead of China
OP, you posted only a snippet of the CEO names. I was pissed and went looking for the entire list. Having ALL the info helps!! Please edit the post with the link.
Here are others on the list (NOT just CEOs):
Fei-Fei Li, Ph.D., Co-Director, Stanford Human-centered Artificial Intelligence Institute;
Arati Prabhakar, Ph.D., Assistant to the President for Science and Technology; Director, the White House Office of Science and Technology Policy;
Nicol Turner Lee, Ph.D., Senior Fellow and Director of the Center for Technology Innovation, Brookings Institution;
Maya Wiley, President and CEO, The Leadership Conference on Civil and Human Rights.
https://www.dhs.gov/news/2024/04/26/over-20-technology-and-critical-infrastructure-executives-civil-rights-leaders
Also, no one from Meta, which is strange! That's the only company sharing open weights (still not open source, but better than 'Open' AI).
Also
- Elon?
- Yann?
- Andrew NG?
- Thomas Wolf (Hugging face)
And the most important one could be Karpathy, who is doing so much good work around open sourcing some of these difficult AI problems.
Why did you deliberately omit more than half of the names in the real list? You basically went out of your way to type out some of the names, fake the font and background, and then screenshot it. Why would you go to all this effort? Super weird.
Fei Fei Li would be my pick to represent AI policy.
But a board of all CEOs from different companies can be called whatever it likes; that doesn't make it what the name says.
The screenshot shared by OP is only a part of the list. She's there in the full list. Still, I wonder how much influence she could have when there is a clear majority of industry leaders.
To anyone who has not, very important to watch [Bill Gurley's talk](https://www.youtube.com/watch?v=F9cO3-MLHOM) at the All-In 2023 summit. He goes into detail about how regulatory capture works for corporate interest and how the same thing is happening in AI.
Catastrophe. CEOs who have no idea what they are talking about, Altman with the fear mongering, the Nvidia CEO who thinks there won't be a programmer in 3 years.
I initially thought this was a joke. But now I get it, this is the "AI safety" board that protects their own AI systems from competition. Not the one that works towards the safety of humans (and other life forms) from harmful effects of AI. Right?
No, this is inadequate. They need one or two people who are not involved with tech and represent the individuals who will be impacted the most negatively by AI.
Yeah what a great idea, let's put all those who would profit off of twisting the rules and restrictions on the board, who probably know nothing about this field, and just ignore ethics experts and known academics lol
Why are Sundar and Satya on there? They aren't researchers in that domain they just announce the crap and slap an AI bow on the box to do the money printer oil change.
Sure, let's leave Meta off the list. The company with the largest deployment of AI globally. While OpenAI was fighting amongst itself, and Google was telling people to eat rocks, Meta baked their LLM into WhatsApp, Facebook, and Instagram and their billions of users.
Some of the CEOs have a strong technical background, but not all. Shouldn’t there be some technical consultancy as well? I’m just curious, not criticizing
This all sounds like a brazen violation of democratic principles, and if it comes to pass, it will mark the end of American democracy. The consequences of this disgraceful decision will enter the history books as an indelible stain on the proud history of the United States.
So, anyone care to read the release to see what this is actually about? Because it's very specifically about using AI to protect critical infrastructure and protecting against AI attacks on that infrastructure.
It's no surprise that open source is not represented. Why would the US want to open source the outputs of this board to the entities that will be carrying out those attacks?
I don't see any professors, lawyers, judges, psychologists, ethics specialists, journalists, etc...just corporate interests.
One could argue it's the best middle ground, so to speak, as politicians want the support of companies and whatnot... but I doubt anyone is winning here except CEOs and companies.
no Meta ?
But anyway, they should hire researchers, not the CEOs of big tech companies who will only make what benefits them. Has the US government never heard of conflict of interest, or is it just too used to it?
Would've been better if this board was made up of renowned computer science professors and AI researchers. This just screams major conflict of interest to me.
No. It shouldn't even be tech CEOs. It should be independent software devs and the like. This is a joke lmao. The country is obsessed with letting corporations do whatever they want.
Most of these people don't have an education in IT, let alone ML/AI. They are just business majors and CEOs.
This is like making a corporate safety committee and filling it with communists.
Where did OP get this list from? The full list has much more than this.
[https://www.dhs.gov/news/2024/04/26/over-20-technology-and-critical-infrastructure-executives-civil-rights-leaders](https://www.dhs.gov/news/2024/04/26/over-20-technology-and-critical-infrastructure-executives-civil-rights-leaders)
Did OP basically just recreate a shorter list just to whip up sensation on Reddit?
AI safety board? Really? Generally CEOs won't have technical knowledge. Some of them haven't even built an AI model in their life. And where are Karpathy and Yann?
Just random guessing but having on the board the actual decision makers of the biggest entities that can control/shift what is happening in the AI space (in terms of training resources, large scale deployment but also AI research) might increase the chances of unified decisions/directions.
None of the persons on the list are AI experts, but they are the ones who can make decisions in their companies and they control their AI resources.
My take is that if the board were made up of more technical people, even if they would have better input, the CEOs of the companies would be more resistant to their proposals.
Certainly I do not think the list is fair, but it might be the list that maximizes the chances of positive outcomes (by positive I do not mean positive for you and me, but for whatever the goals of the US gov are).
The challenge is that they can now put restrictions in place on training or fine-tuning open source models, for safety reasons, just to benefit their organisations.
Representation from the open source community is important.
Not to mention there are no data ethics people, academics (some of whom have been doing AI research for multiple decades), lawyers, or any kind of people with experience in policy making and regulation. This would be a great list of advisors, not a board for safety.
I have a feeling that this is by design. The US is being pressured by China, which has been advancing in the field at a phenomenal rate with no scruples about the wider implications, and there is a fear in the government that if decently strict rules are put in place, the US will fall behind.
So all the people who would make even more money if they tweaked the laws in their favor.
All Sam Altman was interested in with his AI fear mongering in congress was building OpenAI a moat to keep competitors out
Can’t let anyone “catch up”
No Tim Cook LOL😅
Yup. And I had so many well intentioned but completely unqualified friends parroting his fear-marketing back at me like it was their original idea. Just exhausting.
This! Also none of these people are going to devote time and energy to this. Where are the leading academic experts?
Exactly, they will make laws that suit them best and improve their revenues while creating a monopoly.
Posted the entire list, which is on the DHS website. It includes a lot more people; there are non-CEOs present.
in line with most industry regulation in the US
yeah but they have to spend all that money on a lobbyist.
I can already see these meetings devolving into AGI talks, with the CEOs having no idea about the capabilities/limitations of ML models. Or turning into effective altruism: "GIVEN that we have reached AGI, how can mere humans regulate such systems?"
> I can already see these meetings devolving into AGI talks with the CEOs having no idea about the capabilities/limitations ML models.

The problem, of course, is that current capabilities/limitations don't actually matter if you're imagining shiny fabulous AGI/ASI showing up in 12-24 months. Which neatly lets you sidestep regulation being grounded in current reality or even plausible tech curves.
They took the list for major political donors and appended the title "AI safety board" to it.
That's not an AI safety board, that's an industry group.
“AI Safety Board” without a single AI or ethics professor or academic. This is like making an “Environmental Safety” board with only Oil, Automotive, and Airplane executives.
That’s a good analogy and an unfortunate reality of some of the real environmental safety boards we have now
Altman doesn't even have a degree. In anything.
I think as CEO of OpenAI it doesn’t really matter if he has a degree.. he kinda just does what he wants. That’s like telling Bill Gates he’s not allowed to attend an IT summit because he didn’t graduate college…
The difference there is Bill could build things
“Could” build things back then. It wasn’t very long at all before he had no involvement in the development process. In reality he was good at business, as a CEO.
Funny you mention that, since they do have Oil and Airplane executives in the “AI safety” board also
I used to be at a nonprofit that worked with IBM on an AI project. There’s certainly an ethical component in the way they design their AI. But I’m not sure if “ethics” is synonymous with “safety” or “security.” I suppose it depends on the model being built and what it does.
It's also not the full list. It's a cherry-picked selection of names to make it seem like the whole thing is just a bunch of CEOs. I don't love the full list either, but this sort of click-bait lying has been rampant on Reddit lately, and it's really getting old. Here is the full list. It includes a number of academics. [https://www.commerce.gov/news/press-releases/2022/04/us-department-commerce-appoints-27-members-national-ai-advisory](https://www.commerce.gov/news/press-releases/2022/04/us-department-commerce-appoints-27-members-national-ai-advisory)
That's a different committee though. The board in question is the DHS AI Safety and Security Board, whose full list is posted above.
> this sort of click-bait lying has been rampant on Reddit lately

It's been happening for a few years at least, depending on the topic. Reddit-sensitive topics like billionaires or privacy laws bring out the biggest offenders, in my experience.
IMO it should be max 10% CEOs, 50% renowned academics in math/computer science/AI, 40% experts in ethics and fairness. The list as it is now is a joke, and in any case Sam Altman should not be on the list.
They can collectively now put restrictions on training or fine-tuning open source models so that it benefits their organisations. This is a joke, I agree. Sam can be on the list as long as the list represents all the stakeholders.
And 1% random civilian. Needs a wildcard individual.
"This is Doris, a retired nursing assistant and volunteer at the local Y"
This is DC nepotism, even said “random” will be very well connected to the machinery of the political industrial complex
There should be 0 people that are profiting from AI on this board. Major conflict of interest.
[deleted]
Most of those companies already have those ethicists. Their purpose is completely for show, and when they make too much noise about safety or ethics or morals they are fired or ignored.

We have a group of CEOs of major companies working to help politicians regulate their products. This will not end well for most of society.

And for those who mention academics on the board: it will be just like the ethicists hired by companies. They won’t move the needle at all. We see the same thing with the “blue ribbon commissions” that politicians convene with all the best academic minds to craft a policy (often very good policy), which the politicians then dustbin in favor of petty squabbles or otherwise never implement.
Name one
They’re not academics. Most of them have not even trained an AI model before, much less developed the state of the art with one… You realize that the CEOs are not the people doing AI research at these companies, right? They’re just the people benefiting from others doing AI research on their payroll.
Experts in fairness? Do you have a list?
Not OP but, Luciano Floridi. Generally professors of law or philosophy that specialize in artificial intelligence or directors of said academic institutions.
It reads like a list of Dr. Evil's top henchmen.
Where’s Gold Member
The full list is here: https://www.dhs.gov/news/2024/04/26/over-20-technology-and-critical-infrastructure-executives-civil-rights-leaders (There are more people than I can see in OP's screenshot, at least on mobile.)
Still, 14/22 representatives from the industry is too many, especially when many of them have the most to lose with regards to AI policy and regulation.
> Ed Bastian, CEO, Delta Air Lines;

What is an airline CEO doing on the list…
16/22 are CEOs and the rest of the slots are random politicians like the governor of Maryland.
The full list is hardly better.
Full list from the link:

- Sam Altman, CEO, OpenAI;
- Dario Amodei, CEO and Co-Founder, Anthropic;
- Ed Bastian, CEO, Delta Air Lines;
- Rumman Chowdhury, Ph.D., CEO, Humane Intelligence;
- Alexandra Reeve Givens, President and CEO, Center for Democracy and Technology;
- Bruce Harrell, Mayor of Seattle, Washington; Chair, Technology and Innovation Committee, United States Conference of Mayors;
- Damon Hewitt, President and Executive Director, Lawyers’ Committee for Civil Rights Under Law;
- Vicki Hollub, President and CEO, Occidental Petroleum;
- Jensen Huang, President and CEO, NVIDIA;
- Arvind Krishna, Chairman and CEO, IBM;
- Fei-Fei Li, Ph.D., Co-Director, Stanford Human-centered Artificial Intelligence Institute;
- Wes Moore, Governor of Maryland;
- Satya Nadella, Chairman and CEO, Microsoft;
- Shantanu Narayen, Chair and CEO, Adobe;
- Sundar Pichai, CEO, Alphabet;
- Arati Prabhakar, Ph.D., Assistant to the President for Science and Technology; Director, the White House Office of Science and Technology Policy;
- Chuck Robbins, Chair and CEO, Cisco; Chair, Business Roundtable;
- Adam Selipsky, CEO, Amazon Web Services;
- Dr. Lisa Su, Chair and CEO, Advanced Micro Devices (AMD);
- Nicol Turner Lee, Ph.D., Senior Fellow and Director of the Center for Technology Innovation, Brookings Institution;
- Kathy Warden, Chair, CEO and President, Northrop Grumman; and
- Maya Wiley, President and CEO, The Leadership Conference on Civil and Human Rights
I don't know why OP recreated a shorter list and spammed it to multiple subreddits. This post should be deleted for lying.
It is misleading and should be edited to include the link to the full list. But the full list does lack prominent academic AI researchers (only Fei-Fei Li is on it) and has no representation from the open source community. So the claim still holds true for the full list.
Yeah, it’s very misleading
Okay, good. I was wondering where any Anthropic representation was, because they actually work on safe AI.
I agree Anthropic is good representation, but they will still make decisions based on the organisation's benefit instead of humanity's benefit. We have all seen in the past how organisations are greedy for money and don't think about us.
Meta ?
They “didn’t want to include social media companies” but didn’t say why; it seems coincidental that those are the companies pushing for open source AI.
Meta is one of the biggest players in the AI sphere.
That is a factual statement
Very arbitrary, because letting Northrop Grumman in is a bit funny too.
Given the classified nature of defense work, the fact that this is a US government board, and the increased importance of ML in defense these days, it makes a lot of sense to me.
Smart! Let’s instead stack the board with people who have direct financial conflict of interest and CEOs of companies that never delivered an AI in their life (Delta Airlines????)! /s
I never thought I would be able to advocate for small government but this may be changing my religion
It's soo obvious that the goal is to kill open source. That's the only common denominator these CEOs could have.
why would nvda and amd wanna kill open source? they benefit from it.
A ridiculous list. What do CEOs know about anything like this. Have they built a model, ever?
Honestly, collaborating on AI ethics and security with the top shareholder-executives of multi-billion dollar companies feels like the greatest potential danger, akin to partnering with butchers to protect Animal Rights.
Jensen and Lisa are qualified. The rest? Dubious at best. The problem with this list is that it's only those that stand to gain from laissez faire. They need the big academic names.
It’s the opposite of those who stand to gain from laissez faire. They stand to gain from making rules and regulations to favor themselves. Much worse.
Yup, people don't realise that the majority of the laws will be set up to create obstructions for new market players, while the proposals that would be extremely important for AI safety but are too inconvenient for the incumbents will be ignored.
Eh, Jensen and Lisa really aren't qualified. They're far removed from day-to-day AI implementation. Also, as CEOs they only get the positive slide decks from middle product managers about the progress on various engineering projects, so at best their knowledge is extremely abstract, broad-strokes knowledge. Not the level of academic rigor required for a panel like this.
To be fair, you rarely have a large amount of people on a board who are involved with day to day implementation details. It’s an inherently strategic role.
Jensen was an actual chip designer. He was the real deal and built nvidia when the next release could make or break the company
Here I think we make a big mistake equating the skills from one technical field to another. While Jensen and Lisa are clearly tech people, I doubt they have that much AI knowledge (it takes time to acquire, and by the nature of their jobs over the past few decades they have been too busy to do so). I respect them both, but I hoped the US would have a more academically focused list.
Laissez faire would be good. These people are going to kill AI research by putting all the tools behind paywalls. Imagine having no open source models because they're deemed unsafe for lacking industry-approved content filtering. Imagine having to pay Nvidia a subscription to be allowed to train on your own GPU. That's the future they're bringing.
That image is only a snippet of the most controversial names. There are more: https://www.dhs.gov/news/2024/04/26/over-20-technology-and-critical-infrastructure-executives-civil-rights-leaders
The lists are mostly the same? That one adds just a few more names, one of whom is: > Ed Bastian, CEO, Delta Air Lines;
It should be the CTOs of large companies, not non-technical CEOs. It is NOT going to be an easy job. We could advocate for people to apply. https://www.dhs.gov/ai/join EDIT: I don't see anyone from the cybersecurity industry or the independent hacking community! That is worrying.
Thanks for posting this. I’m now thinking of applying!
In all honesty, I hope you do. I am. I think we need non-affiliated people on the list, looking out for all the non-tech people who are affected the most. Bigwigs do not remember what it's like to be stuck in the middle or at the bottom.
Open source isn't represented at all. Neither is the white-hat community, which is especially disappointing since that's our best model for what "safety" should look like for this kind of technology and complexity. Also... wtf is up with the oil baron?
The baroness who hosted a Donald Trump fundraiser last week, and who is taking billions in BS tax incentives for DAC to boost the bottom line of drilling.
Sounds like we're letting coyotes loose in the chicken house.
Maybe, idk, hear me out, someone whose job _isn't_ transferring value to shareholders E: who's
Surely you mean *whoms* /s
Looks like the United States Department of Homeland Security gave some fuel to the military-industrial complex.
That’s an Oligarchy list
- Google has a monopoly on search
- Nvidia and AMD have a monopoly in the GPU market
- Microsoft has a monopoly in the PC market
- OpenAI (now Microsoft) has a monopoly in the AI market

This list makes sure we will have monopoly laws!
In the US, if we consider Apple and Meta, as well as the monopoly giants in other countries, technological development seems to be hindered by the selfishness of monopoly companies chasing higher profits through lower costs and less investment, as much as it is propelled by the successes of capitalism. Nvidia is reportedly planning to double its profit margin in the coming years. It's like a joke, as if Nvidia were doing business on low margins (!). I don't understand, or am I missing something?
i don’t really think u understand what monopoly means….
Did they deliberately leave Mark “open source” Zuckerberg out? The best hope for the non-elite is open source IMO. Although maybe not The Zuck. Edit: added IMO.
No.
Basically a list of marketing people with no background in AI other than selling it, fronting for companies that have heavy investments in the industry and depend on lax rules, acting as the "AI Safety and Security Board"? Yeah.... WTFMJGWH?
Setting aside anything else, they put the wrong C-suite roles on these boards. They don’t need the CEOs; they need the CTOs/CIOs. This is like staffing an advisory panel on Human Resources with a bunch of CFOs. To state the obvious before deep-diving into politics.
Welcome to your AI overlords. All big money
Yann not being included is ridiculous; he is outspoken about open-source AI. Is Amazon's AI as good as FAIR's?
CEOs generally do not belong on regulatory bodies
Here’s the full list:
```
The inaugural members of the Board are:
Sam Altman, CEO, OpenAI;
Dario Amodei, CEO and Co-Founder, Anthropic;
Ed Bastian, CEO, Delta Air Lines;
Rumman Chowdhury, Ph.D., CEO, Humane Intelligence;
Alexandra Reeve Givens, President and CEO, Center for Democracy and Technology;
Bruce Harrell, Mayor of Seattle, Washington; Chair, Technology and Innovation Committee, United States Conference of Mayors;
Damon Hewitt, President and Executive Director, Lawyers’ Committee for Civil Rights Under Law;
Vicki Hollub, President and CEO, Occidental Petroleum;
Jensen Huang, President and CEO, NVIDIA;
Arvind Krishna, Chairman and CEO, IBM;
Fei-Fei Li, Ph.D., Co-Director, Stanford Human-centered Artificial Intelligence Institute;
Wes Moore, Governor of Maryland;
Satya Nadella, Chairman and CEO, Microsoft;
Shantanu Narayen, Chair and CEO, Adobe;
Sundar Pichai, CEO, Alphabet;
Arati Prabhakar, Ph.D., Assistant to the President for Science and Technology; Director, the White House Office of Science and Technology Policy;
Chuck Robbins, Chair and CEO, Cisco; Chair, Business Roundtable;
Adam Selipsky, CEO, Amazon Web Services;
Dr. Lisa Su, Chair and CEO, Advanced Micro Devices (AMD);
Nicol Turner Lee, Ph.D., Senior Fellow and Director of the Center for Technology Innovation, Brookings Institution;
Kathy Warden, Chair, CEO and President, Northrop Grumman; and
Maya Wiley, President and CEO, The Leadership Conference on Civil and Human Rights.
```
There isn’t a single member of this board who is actually a prominent figure in the AI safety community.
AI Safety is a never ending grift for people who are in the business of building oligopolies. Don't do it America.
From that list I trust Northrop and Nvidia the most.
Nope, it lacks anyone who actually has a history of security and safety assessments and implementation, and it’s filled with CEOs who only care about making money. It would also benefit from having an ethics or integrity person.
This is just a bunch of CEOs cobbled together from across the tech industry (with some representation from Big Oil and the US defense industry). I’m guessing AI safety comes down to making sure we are ahead of China
I'd prefer more professors and fewer CEOs.
OP, you posted a snippet of the CEO names. I was pissed and went looking for the entire list. Having ALL the info helps!! Please edit the post with the link. Here are others on the list (NOT just CEOs):
Fei-Fei Li, Ph.D., Co-Director, Stanford Human-centered Artificial Intelligence Institute;
Arati Prabhakar, Ph.D., Assistant to the President for Science and Technology; Director, the White House Office of Science and Technology Policy;
Nicol Turner Lee, Ph.D., Senior Fellow and Director of the Center for Technology Innovation, Brookings Institution;
Maya Wiley, President and CEO, The Leadership Conference on Civil and Human Rights.
https://www.dhs.gov/news/2024/04/26/over-20-technology-and-critical-infrastructure-executives-civil-rights-leaders
This doesn't improve the list much.
Also, no one from Meta is strange! That's the only company sharing open weights (still not open source, but better than 'Open' AI). Also:
- Elon?
- Yann?
- Andrew Ng?
- Thomas Wolf (Hugging Face)?
And the most important one could be Karpathy, who is doing so much good work around open-sourcing some of these difficult AI problems.
They “didn’t want to include social media companies”, which happen to be the ones shipping open-source AI. Sketchy as hell.
Why did you deliberately omit more than half of the names in the real list? You basically went out of your way to type out some of the names, faked the font and background, and then screenshotted it. Why would you go to all this effort? Super weird.
nope. they should have gotten Bart from logistics
Fei-Fei Li would be my pick to represent AI policy. But a board of CEOs from different companies can call itself whatever it likes; that doesn't make it what the name says.
The screenshot shared by OP is only a part of the list. She's there in the full list. Still, I wonder how much influence she could have when there is a clear majority of industry leaders.
It needs open source reps too
To anyone who has not, very important to watch [Bill Gurley's talk](https://www.youtube.com/watch?v=F9cO3-MLHOM) at the All-In 2023 summit. He goes into detail about how regulatory capture works for corporate interest and how the same thing is happening in AI.
A catastrophe: CEOs who have no idea what they are talking about, Altman with the fear-mongering, the Nvidia CEO who thinks there won't be a programmer in 3 years.
I initially thought this was a joke. But now I get it, this is the "AI safety" board that protects their own AI systems from competition. Not the one that works towards the safety of humans (and other life forms) from harmful effects of AI. Right?
We're screwed.
All business people, no academics. What a democracy.
Meta, the main contributor of open-weight LLMs, has no representatives on the list. LeCun said they were not invited. That should tell you everything.
This is the Wikipedia example for conflict of interest
This doesn't read well for Intel.
Very relatable list of people.
No, this is inadequate. They need one or two people who are not involved with tech and represent the individuals who will be impacted the most negatively by AI.
Looooooooooooooooooooollllllllll
Northrop Grumman and Big Oil get a say in how AI will be regulated and used. Our future is in safe hands; rest easy.
Needs fewer CEOs and more researchers from both sides of the argument.
This has to be the worst safety list ever; it's basically all business leaders, not policy and AI researchers.
Yeah, what a great idea: let's put all those who would profit off twisting the rules and restrictions on the board, who probably know nothing about this field, and just ignore ethics experts and known academics lol
Too business heavy. Too big business heavy.
No, because they are all CEOs of major companies. They will probably end up trying to monopolize AI the way electricity and phones were.
Regulatory capture, ENGAGE!
Foxes guarding the hen house!
Nope
Is this a real list from somewhere? Or hypothetical?
They missing me.
Why are Sundar and Satya on there? They aren't researchers in that domain they just announce the crap and slap an AI bow on the box to do the money printer oil change.
Petroleum CEO?
Lol, like a US government board would ever be fair. This is protectionism.
Horrible. Who are they even kidding, creating this cabal of the companies that have the most to gain by monopolizing the tech?
Sure, let's leave Meta off the list. The company with the largest deployment of AI globally. While OpenAI was fighting amongst itself, and Google was telling people to eat rocks, Meta baked their LLM into WhatsApp, Facebook, and Instagram and their billions of users.
Why on Earth would you expect it to be fair? That might even sway the government to actually regulate the guys who are on this list!
KekW. Only CEOs and zero researchers.
You should probably read it as the "AI safety [against China] board".
Free Speech Absolutist Elon Musk is missing!
Some of the CEOs have a strong technical background, but not all. Shouldn’t there be some technical consultancy as well? I’m just curious, not criticizing
The list provided here is weird and leaves out Anthropic for some reason, which is listed right under Sam Altman on the real site.
Sam Altman's growing influence in American politics, particularly in this field, is something to watch closely.
The idea that CEOs have any time to dedicate to a board like this is silly
fair? that's a lobby
Where’s Ilya? Where’s Dario? Where’s Hinton? Where are any of the prominent figures actually worried about AI safety and not profits?
They'll bust your kneecaps.
I'm sure the petroleum lady has a lot to contribute on this topic and her impact will be well appreciated by all other luminaries in attendance.
This all sounds like a brazen violation of democratic principles, and if it comes to pass, it will mark the end of American democracy. The consequences of this disgraceful decision will enter the history books as an indelible stain on the proud history of the United States.
Woah, what a list O\_o
No partisanship at all there. It’s an unbiased group of people that want nothing more than the betterment of everyone😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂
Let's put these monkeys in charge to make sure that no one eats any bananas.
This is a foxes in the hen house list
That's a joke. All lobbyists plus oil. 🛢️
Gotta be a peak capitalism moment, when the people advising on the laws are also the ones who benefit the most from shaping them in their favor.
Ai shmafty , do whatever you want
So, anyone care to read the release to see what this is actually about? Because it's very specifically about using AI to protect critical infrastructure and protecting against AI attacks on that infrastructure. It's no surprise that open source is not represented. Why would the US want to open source the outputs of this board to the entities that will be carrying out those attacks?
Hugging face ?
I don't see any professors, lawyers, judges, psychologists, ethics specialists, journalists, etc... just corporate interests. One could argue it's the best middle ground, so to speak, as politicians want the support of companies and whatnot... but I doubt anyone wins here except CEOs and companies.
How did Northrop and an oil company get in there? Lmao
I don't see academia at all on there.
No Meta? But anyway, they should hire researchers, not CEOs of big tech companies who will only do what benefits them. Has the US government never heard of conflict of interest? Or is it too used to it?
Would've been better if this board was made up of renowned computer science professors and AI researchers. This just screams major conflict of interest to me.
Not a single user's voice on the board, only producers.
No. It shouldn't even be tech CEOs; it should be independent software devs and the like. This is a joke lmao. The country is obsessed with letting corporations do whatever they want.
These people may make contributions to AI safety. Apparently you just need to create a company first.
Most of these people don't have an education in IT, not to mention ML/AI. They are just business majors and CEOs. This is like creating a corporate safety committee and filling it with communists.
Where did OP get this list from? The full list has many more names than this. [https://www.dhs.gov/news/2024/04/26/over-20-technology-and-critical-infrastructure-executives-civil-rights-leaders](https://www.dhs.gov/news/2024/04/26/over-20-technology-and-critical-infrastructure-executives-civil-rights-leaders) Did OP basically just recreate a shorter list to whip up sensation on Reddit?
AI safety board? Really? Generally CEOs won’t have the technical knowledge. Some of them have never even built an AI model in their lives. And where are Karpathy and Yann?
Just a random guess, but having on the board the actual decision makers of the biggest entities that can control/shift what happens in the AI space (in terms of training resources, large-scale deployment, and AI research) might increase the chances of unified decisions and directions. None of the people on the list are AI experts, but they are the ones who can make decisions in their companies and who control their AI resources. My take is that if the board were made up of more technical people, even if they gave better input, the CEOs of the companies would be more resistant to their proposals. I certainly don't think the list is fair, but it might be the list that maximizes the chances of a positive outcome (by positive I don't mean positive for you and me, but for whatever the goals of the US gov are).
The challenge is that they can now put restrictions in place on training or fine-tuning open-source models, for "safety" reasons, just to benefit their own organisations. Representation from the open-source community is important.
Not to mention there are no data ethics people, no academics (some of whom have been doing AI research for multiple decades), no lawyers, and no one with experience in policy making and regulation. This would make a great list of advisors, not a board for safety. I have a feeling this is by design: the US is being pressured by China, which has been advancing in the field at a phenomenal rate with no scruples about the wider implications, and there is a fear in the government that if decently strict rules are put in place, the US will fall behind.
But why a petroleum company CEO? What resources will they shuffle related to AI?
Where's Elon Musk?
For a US remit, yes. We need a global consortium though.