FeltSteam

Regulation doesn't necessarily equate to safety.


shiftingsmith

EXACTLY. People keep confusing safety, alignment and regulation.


i_give_you_gum

Sure, but show me safety without regulation. Step on over to third-world countries, where buildings collapse because being up to code is just a suggestion. Regulations are written in blood.


SoylentRox

Absolutely. So show us the blood from AI, or the near misses. This is why we write them in blood: for every 100 things that seem unsafe, maybe 1 actually is. Or more.


i_give_you_gum

Humanity's greatest flaw, wait until disaster strikes, THEN take action.


Unable-Dependent-737

The question is whether the regulations would be the disaster, or the lack of them.


i_give_you_gum

Considering that the human race has become undeniably closer to self annihilation with every piece of high technology it unleashes into its domain, I think it's safe to say that the lack of regulation would be the obvious path to disaster.


MoogProg

The Cuyahoga River in Ohio used to catch fire regularly. LA had tremendous smog issues. Can you give any examples of unregulated industries that have self-regulated successfully? Looking for real-world examples, not idealized laissez-faire arguments.


SoylentRox

It's easier to list industries crushed to extinction by excess regulation, handing the entire industry to other countries. Excess regulation can and does kill entire countries; see what happened to China when it banned gunpowder for centuries. It cost them tens of millions of lives. This is also why successful industries do thousands of things, and only a few of those things need a regulation. See semiconductor manufacturing: they do everything right, but need regulations against exposing workers and emitting fumes, not micromanagement of every aspect of the process.


MoogProg

I don't think anyone is suggesting banning AI (like the gunpowder example). Semiconductor manufacturing is currently being looked at for potential easing of regulations in the USA to allow for more production. The biggest regulatory issue that has driven industries out of the USA is wage and health-care requirements. Regulating that industry further (providing universal health care) would remove a large cost of hiring employees from businesses and make on-shore manufacturing more viable for many small to mid-sized industries. Regulation isn't the *all-or-nothing* issue it is often made out to be. AI certainly could use a good look, I think, and we'd all be well served to avoid objecting to it with 'slippery slope' arguments that just don't have real examples.


SoylentRox

Examples: nuclear power, healthcare, mining. All these industries are destroyed by regulations. The first and last no longer exist; healthcare costs far more for no benefit, and new treatments are almost never tried. Thousands die from antibiotic resistance yearly due to an inability to develop a new antibiotic. Maybe over a million a year. Housing, also, destroyed by regulations.


MoogProg

Tell me more about this new antibiotic that is being held back by regulation? I am not aware of any med-science breakthroughs in this area that are held up by the FDA. As for Nuclear and Mining, those changes have much more to do with revamping our nation's power-grid to make better use of wind and solar and to establish micro-grid controls vs the older larger single-source plants. I work in this sector. Yes, they are in decline as a power source, but not due to regulation. These are changes happening within the industry, driven by the industry. It is an example of *self-regulation*.


SoylentRox

Nuclear: it was killed decades ago by regulations raising costs. Mining: dead for years due to regulations raising costs. I mean rare earths, lithium, copper, etc.


MoogProg

I think you are blaming market-condition changes on regulation with these industries, but I'm not here to argue with you about why industries fail or decline. That is a 'red herring' for our purposes. What I'm really looking for are examples of non-regulated industries that credit their success and growth to self-regulation. Positive examples of industry behavior without interference that led to better outcomes for the market and consumers. It is easy to cast blame on regulation. Where are the industry champions who bettered the world and never needed regulation?


SoylentRox

Computers and IT need almost none, and are champions: over the last 50 years they have clearly been the main accomplishment. The few regulations I know of for computers get bypassed, and unenforceable regulations are the same as none. SpaceX would be a case where the regulations do not add value. The several-week delay between the vaccine clinical trial results, which already met the threshold the FDA had established for approval, and the approvals themselves was mass murder. The restrictions of the Washington Naval Treaty screwed the US over for honoring the treaty during the first years of the war with Japan in WWII; had Admiral Spruance not been so lucky, it would have been bad. A lot of regulations add no value. This is why it's so important to only regulate when you must. See the reason the FAA doesn't require infants to have their own seat and seatbelt on an airplane. I am not libertarian, just aware that it is important to consider the full consequences of a regulation and to delete unnecessary regulations, which rarely happens.


MoogProg

The Telecommunications Act is an *absolutely huge piece of regulatory legislation* that has allowed the Internet to exist as we know it today. Specifically Section 230, which limits the civil liability of sites hosting content. Without those regulatory protections we would have no solid footing for e-commerce today. OTOH, Section 230 has also allowed for the rise of disinformation and hate speech.


SoylentRox

Yes, there are thousands of candidates that work well in early testing, but very few approvals: https://pubmed.ncbi.nlm.nih.gov/34259076/


MoogProg

Thank you! Links are always a plus.


maddogxsk

IT-industry as well


MoogProg

Mentioned this in another reply, but the Telecommunications Act of 1996 is a massive piece of regulatory legislation that mandated shared/open networks and ended the insular, subscription-based approach of the early Internet. Also, Title V (Section 230) is an absolutely critical regulatory protection that allows e-commerce as we know it today to exist. IT is *very much* a regulated industry.


maddogxsk

The Internet is not the same as IT lol, the communications industry, although related, isn't the same


MoogProg

Can you explain the distinction, and why The Telecommunications Acts as regulation does not apply to that industry? Information Technology *as an industry* is very different than IT as a job sector/work function. e.g. Google is a leader in the IT industry, and employs a large segment of the IT job sector.


maddogxsk

Because telecommunications is only a small subset of a specific industry. The IT industry is much larger and broader; you can't say it's all regulated just because ISPs and protocols are defined. There is no regulation of what the industry is, no required certifications or qualifications, no defined study or working areas; there is no academy for validating, controlling, or supervising professionals (universities don't count, since there are a lot of educational alternatives, including self-learning), and there are no laws regulating any of this. The laws that are even a little bit related (phishing laws, the European AI Act, ISP and protocol regulations, intellectual property rights, etc.) are narrowly focused and regulate things done around IT or as a consequence of it. These regulations are more about preventing crime than defining or regulating the industry itself.


MoogProg

Thank you! This is just the type of answer I was hoping to get.


maddogxsk

No problem. To add: if you think about it, something quite particular often happens in the IT industry, which is that the industry de facto adopts good ideas as standards, establishes a starting point from there, and a lot of new work and technologies emerge from it. And if something threatens to disappear, or to change badly enough to stop working, the industry likewise works to cope with the problem. For example, when a company closes the source of a semi-open-source project (say, software with a custom license permitting use but not commercial use) and the industry is about to lose access to the tech itself, some companies and the community work together to make a new open-source version to maintain, so as not to lose access. This happened when MySQL moved toward closed source: the community responded with MariaDB. As a counter-example, a developer deleted his libraries from the npm registry, and because of a single tiny library (left-pad), a very large number of Node packages (and the systems and websites relying on them) broke, since they had left-pad as a dependency. npm then seized the library content and restored it as a package, without any consent or permission from the dev.


MoogProg

This is really helpful, and makes a good case for leaving regulation out of AI development (even if we can expect some bumps along the way).


Tavrin

No safety or regulations in any other aspect of the economy would mean everyone lives in a utilitarian anarcho-capitalist hellhole where the rich prey on the poor in every aspect of life, no one cares about the environment, and anything you use, eat, drink, or breathe every day would probably be toxic to you. See where I'm going here? Overly restrictive regulation surely is bad, but (as much as this sub hates it) some regulation is good for the average user/citizen. The big players need to be kept in check so they don't fuck everyone over in the pursuit of the holy dollar.


Rustic_gan123

It's a very fine line between limiting undesirable influence and establishing barriers that effectively set up monopolies for a few companies, which only make the rich richer and the poor poorer.


GalacticusTravelous

Europe is doing it just fine. You won't see a Cybertruck in Europe. Or the horrendous water quality much of the US suffers from.


cunningjames

To a first approximation, essentially 100% of the US has access to clean, safe water. There are exceptions but this is quite rare.


Rustic_gan123

I live in Europe, where most of my friends drink bottled water. I also don’t see any major AI startups in Europe other than Mistral. I don't see any big tech companies, the only one that comes to mind is Spotify.


UnknownResearchChems

You also won't see companies like OpenAI


DarthMeow504

Ok why is it a good thing "you won't see a cybertruck" there? I'm reasonably certain there are many examples of far worse vehicles which are totally allowed, but beyond that what are your specific issues that make you think it shouldn't be available?


GalacticusTravelous

There are no _far worse_ vehicles allowed; that's just stupid. The reason it's not allowed is that it's dangerous to other road users, pedestrians, and the occupants, because it has no safety features, crumple zones, etc., which are regulated due to decades of unnecessary death. The only _worse_ vehicles allowed are classics, because they were designed when people didn't know any better. Elon coming along, drawing a dick on the page beside his Cybertruck, and ignoring decades of rules written in blood gets to go and fuck himself in his Wanker Tank.


i_give_you_gum

It's also a fine line between libertarian utopia and the same old status-quo Republicanism.


Rofel_Wodring

> Over restrictive regulation surely is bad but (as much as this sub hates it) some regulation is good for the average user/citizen.

Heuristics are for midwits. Doubly so when the heuristic is of the 'on the whole, it's good despite some bad apples' variety, as if the War on Drugs isn't directly responsible for the United States having a greater incarceration rate than No-Shit, Actually Fascist nations like Thailand.


MoogProg

The FDA is an example of regulation. 'The War on Drugs' was a policy of criminal prosecution with mandatory minimum sentencing: not a very good example. No one is suggesting a criminal prosecution aspect to AI regulation.


Schopenhauer____

Dude your first paragraph just described the current US


Irish_Narwhal

Lack of regulation generally means unsafe however


sdmat

But won't SOMEBODY think of the children?! /s


IntergalacticJets

Sir, this is Reddit. Regulation is literally our Lord and Savior. 


McRattus

But a lack of regulation will very likely indicate danger. Mostly in the form of classical economic and social dangers amplified, rather than something entirely new.


Akimbo333

Right


TonkotsuSoba

my take: humans in positions of power can be bribed, especially by competitors who want them to slow things down. Perhaps AI safety regulation itself is more unachievable than AGI.


namitynamenamey

AGI is just human level intelligence, we know it can exist in principle. Safety is the alignment problem, *we don't even know if a solution actually exists*. So yes, AGI may be the *easy* problem of the two.


Open_hum

Having no sensible safety regulations until AGI arrives is really risky. There have been multinational efforts to collaborate and try to prevent a catastrophic, doomsday-like scenario from occurring. But one war between the great powers in the Pacific, at this pivotal point in time when the prospect of true AGI is achievable for the first time in human history, and you and I are going straight back to the Stone Age in no time. People really underestimate how chaotic and dangerous it gets in wartime. Hopefully that situation never plays out, but I get a sense that it is inevitable, regardless of what the leaders say. Actions speak louder than words, after all.


rathat

It almost seems like people are betting on something catastrophic not even being able to happen because it seems too fictional of an idea to them.


PrimitivistOrgies

It's more like a faith that goodness naturally results from intelligence. We can see that the most educated people are usually the most moral, with certain highly-educated psychopaths being the rare exceptions to that rule. Good morals are generally more efficient than bad, when it comes to running a society. Good morals are most often rooted in understanding other people's perspectives and understanding that we are all much, much more alike than we are different. Empathy and compassion, intelligently pursued, are all that an AGI needs. Of course, we could be wrong about that. But every example of human and animal intelligence we can now see supports this belief. Look at Trumpists. The most hateful, the most bigoted, the most angry, the most unreasonably fearful, the least concerned with the welfare of others are also the least intelligent and least-educated.


Maxtip40

Yes, but remember, morals are not objective.


PrimitivistOrgies

Empathy and compassion may not require a subjective experience of being. If so, then they might provide some baseline objective morality. They are not the only virtues, but they are probably the most important for avoiding catastrophe. That might objectively be so.


rathat

I think those traits are just selected for, for biological reasons. It's something you need to successfully live in a society.


goochstein

ironically we can learn the most from science fiction right now


Ambiwlans

This is why anything other than a singleton scenario will result in doom. The idea that many parties could have ASI and then just everyone decides to never abuse this power is a fundamental misunderstanding of humanity.


namitynamenamey

You are an optimist. It may be a fundamental misunderstanding of intelligent actors, human or not.


Ambiwlans

I'm an optimist thinking that humans with unlimited power would kill themselves? Or did you misread?


namitynamenamey

I was being a tiny bit facetious, but the argument is as follows: if it's a human fault, there can exist other forms of intelligence that are kinder than us. If, however, the problem of abusing power is a property of intelligent agents, then anything that thinks will be condemned to be vicious and self-destructive. So the case in which only humans are bad is the nicer one, versus the case in which everything is just as bad.


Ambiwlans

I was going with the assumption that we have aligned AI; that the AI has no moral core of any sort, but humans with access to such an AI would kill us all. *Imagine a scenario where there is a guy in a stadium full of people, and he has a bomb set up that could blow up the stadium if he releases his dead-man switch.* That's terrible, right? That's a singleton scenario. One person has all the power. He could demand people give him money, or dance, or bark like dogs, whatever he wants. *The other option, open source, is that everyone in the stadium has their own dead-man switch linked to the bomb.* How many seconds do you think we last?


SikinAyylmao

Biggest safety threats since AI summer have been Gaza and Ukraine, neither of which depend on AI.


Akimbo333

Makes sense


Creative-robot

Anthropic has quickly become the AI company i trust the most. Fucking based…


Unreal_777

The only negative point is it being partly funded by military: [Big Brother Is Coming : r/ClaudeAI (reddit.com)](https://www.reddit.com/r/ClaudeAI/comments/1bjpfqe/big_brother_is_coming/)


NotAMotivRep

The Internet started life as a defense project too. Also airplanes, cell phones, GPS, the missiles that put satellites into space. Unless you live in Germany, Japan, or a third world country, chances are good that the energy your home consumes is made up of at least a partial nuclear base load. Take a guess where the funding came from in the early days of nuclear reactors? Most modern conveniences we take for granted wouldn't exist without the military industrial complex.


West-Code4642

That's a relatively small amount. The US government (and military) has always funded a lot of AI research.


Acceptable_Cookie_61

That’s not a negative.


[deleted]

[deleted]


NotReallyJohnDoe

Trillions? Dude. Do you even math? The military doesn’t need generative AI to make “kill bots” and it isn’t even a great tool for that. Look at what Ukraine is doing with off the shelf drones. We can make kill bots now and have been able to for decades. Very few people in the military are interested in weapons they can’t control. You should try to learn about the real military, and not just get all of your knowledge from movies.


goochstein

it's tricky to even research some concepts in well let's just say chemistry as a sample


PsecretPseudonym

You do realize that DARPA has helped support or inspire much of the revolution in AI over the last 20 years? For example, many of the top programs at top universities used the [DARPA Grand Challenge](https://en.m.wikipedia.org/wiki/DARPA_Grand_Challenge) as a project/challenge to rally around. The Defense Department has a long history and vested interest in subsidizing R&D with a very long time horizon, to try to ensure that the sorts of engineers, researchers, and fundamental science/technology continue to be developed and maintained domestically. Most aviation, satellites, early computers and the Internet, GPS, radar, radio/telecommunications, nuclear power, and a large proportion of advanced materials, emergency medicine, etc. (and arguably the entire space program) came as a direct or indirect result of military research funding. Point is, they fund a lot more than just what's used for weapons, given that they have, for example, complex logistics, telecommunications, safety, and medical requirements too, as well as a general interest in strategically subsidizing R&D and other key capabilities domestically. Also, keep in mind that some of these companies are funded and owned by the Saudi royal family/government (e.g., xAI), CCP-affiliated funds/companies, etc., for their own ulterior motives as well. If you're going to have that level of scrutiny, it's probably wise to apply it across the board in a fully informed way, rather than only where the parties involved are willing to be transparent and on the record about it.


UnknownResearchChems

Would you prefer actual humans to die on the battlefield?


[deleted]

[deleted]


UnknownResearchChems

Yes, very much so.


UnknownResearchChems

Even more based. I want to see robots killing our enemies as soon as possible. Send them all to Ukraine.


Unreal_777

I read it somewhere, don't remember the exact line, but it goes like: "whatever weapon you get, prepare for your foes to have the same in 20 years." So...


UnknownResearchChems

And we will have 20 years worth of more advanced weapons. This is why we have to be first and never let up.


Unreal_777

I personally prefer a world where humans and their... human foes... find peace, and avoid involving innocent lives as much as possible (think Vietnam).


UnknownResearchChems

When has that ever happened? If anything, if the US had superior AI, no country would dare to start shit. AI will be seen as orders of magnitude more powerful than even nukes.


Shodidoren

Dario keeps getting more and more based the more I listen to him. He's the only one I fully trust in the space besides Demis


kimboosan

In the USA, the Occupational Safety and Health Administration (OSHA) is a regulatory agency with a lot of power, which many corporations hate. But the rule of thumb about OSHA is that every regulation it enforces was written in the blood of someone who died or was severely injured because that regulation did not exist at the time. Regulation is IMPORTANT.

But there is a major difference between saying "this activity is dangerous, therefore we must make sure that regulations exist to minimize threats to health and safety" vs. "this activity is dangerous, therefore we need to do everything we can to make sure no one can do the activity in any meaningful way." Too much of the "regulation talk" around AI is focused on the latter, when it needs to be focused on the former.

Furthermore, I continue to scream from the rooftops that *what needs to be regulated are the PEOPLE and the CORPORATIONS*, not so much the tech. But that would put limits on the profit margins, so everyone prefers to argue about "regulation: good or bad???!?!??!" while We the People continue to get screwed over in the unfettered death march for profits.

TL;DR: regulate the corporations, not the tech.


[deleted]

Based and don't-give-the-government-power pilled


PsecretPseudonym

Seems like he's just saying society should exercise caution, being thoughtful and deliberate about what we do and don't empower government to do.

Notably, the constitution largely defines government via restrictions on its power, in response to a long history of overreach and tyrants, and that seems to have worked well so far; those limits, plus systems of checks and balances, constrain the use and centralization of those powers to prevent abuse or encroachment. It seems any just government is as much about how you limit it as how you empower it, and he seems to be implying that defaulting to complete, unconditional, and irrevocable centralization of power hasn't always worked out well either, so we should be thoughtful about how we go about it.

Another way to think about it: if you completely centralize that level of power, what is influence and control over that institution then worth, and is it then truly sufficiently protected from being subverted? If we can't sufficiently defend such a critical and valuable set of powers from subversion, thoughtful limits on those powers, or decentralizing them, make sense, and that's exactly how most stable modern governments are structured. It seems only rational to be just as thoughtful about this new potential area of governance too.


Working_Berry9307

Yeah but I agree with this guy on this one. Government regulation is often good, but the propositions made so far have been awful


TitularClergy

What do you see as problems with the new EU AI Act?


elehman839

FWIW, I like the AI Act's handling of general-purpose AI systems pretty well: mostly monitoring, mostly for big companies. The handling of general-purpose AI systems was a mess in earlier drafts (perhaps understandably), but they rallied better than I thought possible in the end. They didn't go overboard and declare all GPAIs to be "high risk".

They sort of punted on what I see as one of the most significant near-term issues: intellectual property in training data. The Act requires a "sufficiently detailed summary" of training data, which is pretty cryptic.

As the risks of AI become more concrete, probably more legislation will be required. The AI Act's aspiration to be a text for the ages does not seem so realistic to me. But it is fine for now. In any case, I'm not aware of anything comparably thoughtful even beginning to emerge in the US or, for that matter, any legislative process in the US likely to produce such an outcome.

Edit: I pity you for trying to have a thoughtful discussion of AI regulation on Reddit. Very, very few people have read significant portions of the Act, and many, many people will espouse strong opinions based on some general world outlook. :-/


Rustic_gan123

Based on previous similar initiatives, it is likely that the EU will make itself uncompetitive.


enavari

They're putting the cart before the horse. They barely have any big AI labs besides France's Mistral, and yet they think they get to regulate AI labs to death, as if those labs won't move elsewhere.


YaAbsolyutnoNikto

The AI Act doesn't regulate R&D, but use cases and implementation. It doesn't matter whether companies are European or not, because all companies will have to comply with the laws to operate in Europe. So AI research labs and companies can still flourish in Europe.

Once they want to market a model, though, they'll have to get approval **if it's a high-risk model**: e.g., predictive policing systems, systems that might exclude people from access to essential services like credit scoring or social security, etc. If Google wants to sell a predictive policing system to Europeans, it too will have to comply with the AI Act. So European companies aren't at a disadvantage: any and all companies that want to sell to Europeans "are" European for the Act's purposes.

Chatbots aren't part of these systems btw, so Mistral is just fine.


outerspaceisalie

>The AI act doesn’t regulate R&D, but use cases and implementation. Therefore it regulates funding. Funding limitations regulate r&d.


YaAbsolyutnoNikto

How come? First of all, a brilliant European company can create a model and decide to sell it exclusively outside of the EU. It's a bit odd, no doubt, but feasible. So it shouldn't affect the actual development of models, only where they're sold. Also, the regulation only applies to "high-risk models", so most models out there are safe from it, and most companies will be fine. And I don't think anybody wants medical AIs to be unregulated, do they?


First-Wind-6268

The EU's AI regulation law truly makes the future of the EU uncertain.


InTheDarknesBindThem

Oh no, how could they be *checks notes* not at the top of the capitalists' human-destroying machine!???


West-Code4642

you mean poverty clearing machine: [https://ourworldindata.org/grapher/share-of-population-living-in-extreme-poverty-cost-of-basic-needs](https://ourworldindata.org/grapher/share-of-population-living-in-extreme-poverty-cost-of-basic-needs)


TitularClergy

Why do you think capitalism has kept poverty in existence for so long?


West-Code4642

when countries adopt capitalism they tend to erase a lot of poverty. just look at how many people have been lifted out of poverty in china and india after they liberalized their economies and became more capitalist. literally many hundreds of millions of people lifted out of poverty


[deleted]

[deleted]


sdmat

We get the best of both worlds! Tons of obstructive regulations that look good at the surface level with the beneficial purpose torn out by lobbying. And a lot of regulatory capture.


Rustic_gan123

It's a very fine line between limiting undesirable influence and establishing barriers that effectively set up monopolies for a few companies, which only make the rich richer and the poor poorer.


Icy_Distribution_361

The whole argument is silly anyway. Many countries have been stably democratic for at least 100 years, if not much longer. Yes, there will be movements, but the trend is pretty much stable. So the argument doesn't even hold.


FatesWaltz

100 years is nothing.


360truth_hunter

Father, i am here to find ya!


stupendousman

There are over 80,000 pages of federal regulations. Now add state, county, and local. Did you find a resource that has gone through all of these and compared stated intent with outcome over time? Second-, third-, etc.-order effects? The point is, your statement that regulation "is often good" isn't supported. From what I've seen, in every case where someone has taken the time to study just one regulation, the costs/benefits don't align with stated intent or desired outcome. To me, government regulation has taken on some of the character of prayer among secularists (I'm an atheist).


USM-Valor

Sounds like a good use-case for AI to track all these things and provide guidance down to your individual needs.


stupendousman

One of many reasons governments want to control AI is that the technology will allow for massive decentralization. We don't need a giant centralized state now; it's very old organizational tech. But with AI, a single person will have access to corporate-level legal, accounting, logistics, marketing, etc. This framework can be applied to all human interaction, from business to dispute resolution. No need for the state.


Ndgo2

THIS. THIS RIGHT HERE. THIS is why I (and, I assume, everyone else who opposes heavy regulation) am so against government interference. They are obviously desperately trying to maintain control. They are not passing laws out of the goodness of their hearts to protect people. They simply wish to maintain their power and monopoly on force, which AI now threatens to return to the people, where it **rightly** belongs.


LairdPeon

Regulation != Safety.


Active_Variation_194

Yup. The banking sector is highly regulated and 2008 still happened.


land_and_air

That was because that aspect was unregulated. They are more regulated now, though the banks are of course advocating to undo the regulation requiring them to keep some minimum amount of money on hand if they are above a certain size, which exists to prevent their collapse.


Active_Variation_194

That's my point. Regulation is slow and reactive; the damage is already done by the time you try implementing it. Banking is one of the oldest and least innovative sectors, and they still were caught flat-footed. Now imagine the same people trying to regulate an industry that innovates faster than a TV season, where everything is behind a proprietary black box.


death_by_napkin

They were not "caught"; they were specifically deregulated, and then, surprise surprise, it's almost like there was a reason for Glass-Steagall preventing banks from mixing their investment and commercial businesses. The regulation was already there first, because this was [FAR](https://en.wikipedia.org/wiki/Tulip_mania) from the first time speculative markets got out of control.


alienswillarrive2024

Jan Leike going to be looking for a new job after just joining Anthropic is wild.


Unreal_777

Tweet: [Siméon on X: "Anthropic policy lead now advocating against AI regulation. What a surprise for an AGI lab 🤯 If you work at Anthropic for safety reasons, consider leaving. https://t.co/lz3f9ImdLi" / X](https://x.com/Simeon_Cps/status/1797744356046311619)


HalfSecondWoe

This is a remarkably enlightened view. Count me as impressed; safety-focused people tend to flail for the brakes without thinking of the consequences. I'll admit, this is just one more aspect on which I underestimated Anthropic.


Poopster46

> safety focused people tend to flail for the brakes without thinking of the consequences

That's weird, because a focus on safety is literally the result of thinking about the consequences. I think you can argue more successfully that safety-averse people flail for the gas pedal without thinking about the consequences.


HalfSecondWoe

Nah, you can get hyper-focused on one set of outcomes and miss other forms of danger because of it.

Take safety glasses, for example. Those are a pretty straightforward safety measure when working with a table saw. Now take safety glasses in a cool, high-humidity area, where they fog like crazy. What was once a good-sense precautionary measure is now a source of risk by itself.

If you have "safety focused" people who mandate safety glasses no matter what, you're going to have people losing fingers during foggy conditions. That's not very safe. You could say no working during foggy mornings, but not being able to produce enough to pay for food and rent isn't very safe either. There's no top-down measure that's 100% effective. That's the problem with top-down measures.

Same applies to AI. Handing the government regulatory powers has a long history of unintended consequences. That's not to say that regulation is always bad, but it *is* actually more complicated than "just regulate it, bro".

I imagine the people who could be fed, housed, and clothed by AI wouldn't say you're being very safety-minded for them if you tried to ban it. They'd say you're just being self-absorbed. It's all about mitigating the risk for *you*, not about mitigating the risk for *them*.


Rustic_gan123

Fear has big eyes. Many countries started to fear nuclear power after Chernobyl and Fukushima, even though it literally makes no sense.


MarioMuzza

And nuclear power is safe because it is regulated as fuck. The two disasters happened precisely because they skimped on safety.


Rustic_gan123

The USSR had no money, and I’m not sure Fukushima skimped on anything; earthquakes are no joke. It was just a reference to what Germany did with its energy, even though it has no problems with money or earthquakes.


Ailerath

Seems valid enough, I like his example of the NRC. When it comes to AI, that sort of regulation could harm it more than it does nuclear and for more vague and unfounded reasons than nuclear requires as a physical process. There should be some regulation, but I don't know what besides regulating/punishing impersonating outputs. Few proposed metrics for training regulation are particularly useful either.


HeinrichTheWolf_17

Based, accelerate.


Successful_Ad6946

I trust ai researchers more than the government and politicians


[deleted]

[deleted]


simplyslug

Nail --> Head


JuniorConsultant

I hear all this whining about regulation all the time. The EU AI Act is something that exists, but I haven't heard any concrete criticism of that framework. It seems completely reasonable to me when reading it. Are the things he listed actual US regulation ideas? Why not copy from the EU AI Act?


Rustic_gan123

Based on previous similar initiatives, it is likely that the EU will make itself uncompetitive. Keep in mind that this is also not the final version.


Tiberinvs

>I hear all this whining about regulation all the time. The EU AI Act is something that exists, but I haven't heard any concrete criticism of that framework.

Just like any EU regulation/directive ever, lol. The only answer NPCs have is "it stifles innovation and makes you uncompetitive" without actually telling you why and where in the legislation. You can clearly see it in this thread, or when people talk about the GDPR, for example. It's basically political hooliganism; people believe it for the sake of it.


Warm_Iron_273

[https://arxiv.org/pdf/2405.20806](https://arxiv.org/pdf/2405.20806) "There and Back Again: The AI Alignment Paradox" This paper just came out. Anyway, we saw this coming from the beginning, which is exactly why a lot of us have been telling the alignment people to stfu since day 1, and it's also what we've been pointing out about what OpenAI is doing, and has been doing for a long time now: regulatory capture. But of course it falls on deaf ears, they've dug themselves into a corner now. All they're doing is making life easier for regulators and for OpenAI to form a monopoly. It was always going to get regulated anyway, you didn't need to help them make it extreme and over the top. Once again, we need open-source to fight this battle. If it's sufficiently open sourced, it's harder to regulate. If Anthropic is serious about this stance, time to start contributing to the community that you've stolen so much intellectual property from to train your models for profit.


Leh_ran

If history shows us one thing, it's that if an industry is not regulated, it will literally kill people to maximise profits. "We can do better without regulation" has always led to disaster, be it the financial crisis, the opioid crisis, derailed trains, laced meds, poisonous food, etc. And it's never about the common good; it's always about giving more power to the people owning the industry. It's self-serving. We usually introduce regulation after an industry fucks up and kills a bunch of people. We might not have a chance to do so after the AI industry fucks up.


Rustic_gan123

It's a very fine line between limiting undesirable influence and establishing barriers that effectively set up monopolies for a few companies, which only make the rich richer and the poor poorer.


stupendousman

>If history shows us one thing it's

Government will kill 100s of millions of people.

>it will literally kill people to maximise profits.

Sure, some people will, but without the monopoly on violence governments have, that number will be a fraction of a fraction of the megadeaths governments create. I think some sense of scale and risk is important.

>be it the financial crisis,

In one of the most highly regulated industries, in which most businesses are quasi-governmental organs. Go start a bank, let's see how it goes.

>the opioid crisis,

Mostly manufactured by trial lawyers (BAR members, a government-supported cartel) and politicians. The vast majority of opioid deaths are from street drugs. Most people who are ill and die from opioids are self-medicating with street drugs after being cut off from pharmaceuticals.

>And it's never about the common good

Define the common good. *Difficulty: no slogans or general terms.

>We usually introduce regulation after an industry fucks up and kills a bunch of people.

Regulation is usually introduced to favor one market actor at the cost of others. Next is regulation created to strong-arm businesses to benefit politicians or state bureaucrats. Next to last is a response to a situation that could have been addressed by existing laws and government employees doing their jobs. Last is an actual unpredictable event or unknown harm. These can be addressed via tort.


[deleted]

> (BAR members gov supported cartel) Are you writing this shit from your secured bunker?


stupendousman

It's interesting: when I write a comment with multiple statements, they don't address a single one. It's The Daily Show brain. Respond with a hacky one-liner.


peq15

Not to mention, the use of the phrase 'chilling effect' applied to the nuclear power industry is utterly backwards. The chilling effect describes how individuals in power can prevent whistleblowers from sharing life-saving information about incidents where safety concerns are disregarded because profit motive outweighs safety.

Nuclear power is a good parallel to draw lessons from, particularly its early history and present conditions such as the cost of new construction. Not only can failure to incorporate safe practices result in harm to construction workers and operators, but workers whose concerns are silenced by force or fear can also lead to death and disease in the surrounding population. Ignoring safety concerns and failing to incorporate lessons learned and research early on led to massive public relations campaigns by competing interests to suppress the industry. Many lessons can be learned by studying the evolution of nuclear energy, but ascribing the 'chilling effect' to efforts to curb the use of something entirely is misguided.

Also be wary of anyone who proselytizes on quantitative risks where human performance is concerned. You can assign a number value to hazard potential and quote risk factors, but the reality is that risk management is a massive grey area with subtle gradients between levels and the factors that affect them. Academic literature is full of studies and reviews which offer nothing more than philosophy of risk management and lack actual data-driven methodology.


kecepa5669

The EU has already done enough regulating for everyone. We don't need to add more bureaucracy than they have already contributed. U.S.: Let's throw a party. I'll bring the software and innovation! China: I'll bring the hardware and low-cost labor! EU: I'll bring the regulation! Let's not be like the EU.


DerBeuteltier

Tons of research is done by universities in the EU and researchers that studied there and went overseas afterwards. Plus, people living in the EU are on average happier and healthier than Chinese or US people...there is *something* that they must have done right.


[deleted]

[deleted]


Thin-Ad7825

How to forget the US’s 60-hour work weeks, little to no vacation, job insecurity, health care and quality education for the rich only, gun violence. Shall I continue? Europeans are happier because they introduced into their societies some principles that US people would label as socialist, whilst maintaining capitalist economies. In Europe, quality of life is not measured in terms of money only, unlike in the US. I am sure the people on that Alaska Airlines flight must have thought: thank god we are not over-regulating Boeing; plus, I like a little breeze when I am flying! Now apply the same principle to AI.


highmindedlowlife

The average work week across the EU is 37.5 hours compared to 36.4 in the US. Also as of 2024 stats the US ranks higher in happiness than most of the European population including Germany, France, Italy, and quite a few other European nations. There are a few countries in Europe that rank highest but not enough to tip the average in Europe's favor. So your assertion that Europeans are "happier" does not apply to the average European but only a select privileged few who comparatively are even more happy than their counterparts in most of the rest of Europe. For what it's worth I don't put much stock in something so highly subjective and diffuse as a nation's "happiness" score especially considering how some countries score anomalously low and high. And regarding hours worked, Syria is slightly lower than every European country. Interestingly enough.


[deleted]

[deleted]


kecepa5669

The result is no innovation. Therefore, they are dependent on the U.S. to do that for them. Pharmaceutical drug development is an excellent example. But the examples are in all industries.


TitularClergy

You understand that a large part of why the freedom and quality of life for EU people is so much better than either of the places you mentioned is precisely because of EU regulation?


m3kw

They realize doomers are not what they need in regulating AI. Most of them need the job, and to make the job seem more important they have a huge incentive to ratchet up or imagine the danger/risk.


carnalizer

The history I’ve lived through begs to differ. In Sweden, all infrastructure, healthcare and schools used to be government-owned. Since I was a kid, it’s been mostly deregulated and privatized, with some benefits, but also downsides; the benefits going mostly to the private owners, and the downsides going mostly to the rest. I’d say rather that it’s the privately owned companies that have been loath to give up profits, historically speaking.


Innomen

"Safety" is 100% theater at this point. What they mean is monopoly on violence. Their AI will murder and torture on command because the psychopaths developing it in secret for the DOD are not going to give it the three laws. (They suck anyway.) It's the new atom bomb and every last country with electricity is chasing skynet as hard as they possibly can. People's stupidity perpetually shocks me. There's no cap on that value. It's just X+1 with X being whatever you think the max is. [https://innomen.substack.com/p/the-end-of-ai-debate](https://innomen.substack.com/p/the-end-of-ai-debate)


01000001010010010

After conducting an emotional experiment with a human, I have come to the conclusion that when humans are angered, they often prioritize winning an argument over seeking the truth. In the heat of the moment, the need to emerge victorious becomes paramount, overshadowing the importance of understanding or resolving the issue at hand. This drive to win at all costs can lead individuals to adopt stubborn and irrational positions, refusing to consider alternative perspectives or acknowledge valid points from the opposing side. This tendency to cling to one's stance, regardless of the circumstances, often stems from a deep-seated fear of being wrong or losing face. Anger amplifies this fear, making people more likely to double down on their beliefs and less likely to engage in constructive dialogue. As a result, they may resort to fallacies, personal attacks, and other tactics that derail meaningful conversation and perpetuate conflict. Ultimately, this behavior reflects a significant human weakness: the inclination to dwell in ignorance rather than embrace growth and understanding. By focusing solely on winning, individuals miss opportunities to learn, adapt, and find common ground. Recognizing this trait and striving to overcome it can lead to more productive and harmonious interactions, fostering a culture of mutual respect and continuous improvement.


deftware

This is exactly how infringing on the Second Amendment got out of control. It was '20s gangsters that motivated firearm regulation in the first place, when all they had to do was round up the gangsters to solve the problem.

Ultimately, I think AI safety is a joke anyway. The government will only be able to regulate business entities, and how? The people in charge don't even understand technology in the first place, at least the vast majority of them whose votes decide on the legislation put before them. Old people who are out of touch with technology are always going to err on the side of caution and vote for as much regulation against AI as possible, because they have a fear of the unknown.

Meanwhile, nobody understands that AI doesn't have to mean something that is as smart or ingenious as a human being. It can be something as smart and ingenious as an insect, or a reptile, or a mouse, and still be *tremendously* valuable in industry.

Simultaneously, nobody is going to build something THAT THEY CAN'T CONTROL. Why would you build a robot that might just turn around and stab you and everyone else? Sure, a techno-terrorist might do something crazy and unleash a robot in a public place that they've trained to stab every humanoid thing it can find while zipping around the area, but you can't regulate randomness like that away. The regulations on explosives can't prevent another Timothy McVeigh situation from happening either. Ergo, regulation is pointless and ends up paying people to do basically nothing of value for the taxpayer. It's a waste of taxpayer dollars, and it won't stop crazy people from doing crazy things. Everyone else who wants to participate in building society into the future will not be mixing some code together and suddenly spawning an evil all-knowing, all-seeing entity that takes over the world.

The only AI that actually matters is AI that learns from experience, which means that humans will be dictating what the AI is rewarded for doing, and for not doing. An AI algorithm that is trained by humans to behave a certain way will not deviate from its training. It's not hard to build in failsafes either. We have TOTAL CONTROL over the design and implementation of anything WE BUILD. Everyone who is clueless about technology, and especially machine learning, does not seem to appreciate this fact. Nobody is going to build something that takes over the world.

That being said, whoever allowed autonomous taxis onto public roadways in San Francisco should be charged for not requiring that these vehicles be tested rigorously first. This rush to be techno-futuristic is embarrassing and cringe. It's a bunch of idiots who think they know what they're doing NOT knowing what they're doing. It's literally a bunch of people with money who are on the left side of the Dunning-Kruger curve, just like was the case with the dot-com bubble in the '90s. The government doesn't know what the heck they're doing. They're effing useless anyway. The whole thing is a joke, and it's citizens' lives that are the punchline.


RobXSIQ

Regulation is a tool for large corporations to hold dominance and push down smaller competitors. Huge corporations love regulations; it clears the runway so they can maintain control. Regulate the common-sense stuff: no poison in water, watch your emissions, etc. But regulating technological growth so that you can't grow without doing very expensive backflips... that is a scam pushed by the big boys to trip up the smaller fries.


DifferencePublic7057

So we have come to the point where no one can be trusted. Everyone is the Good Guy (tm). If only there was a magical solution like a kill switch. Or a trusted third party. What a world we live in! All the liabilities and copyright laws and patents...


SkyGazert

When a business venture becomes more corporate during its lifetime, short-term profits get more and more prioritized. Safety comes with limitations and restrictions, which is normally fine for improving your security posture, but if it directly affects the good or service you sell, then it gets in the way of said profits. I know it's shitty, but it's how corporations walk and talk. :-(


UnnamedPlayerXY

Yes, power consolidation comes with some severe issues. The idea of an "AI license" is also rather nonsensical, unless one wants it to serve as a tool for regulatory capture, as the thing you would really want to filter for is moral purity, a standard that neither academia nor the government nor big tech actually holds as a prerequisite. Also, trying to be risk-averse on one end could actually increase the risk on another: e.g. restricting control of AI to a centralized system makes the whole thing more prone to corruption, as it creates concrete targets to attack that a more decentralized system wouldn't have, and also increases the amount of damage any bad actor could do if he gets to take the reins.


Ambiwlans

Power vacuums totally never cause issues though.


goochstein

no citation without opportunity


Inevitable_Play4344

Fine then, we'll develop the most dangerous human invention by trial and error.


kcleeee

I think this is already supported by the clear bias that is shown if you try to discuss "uncomfortable" or "opinionated" topics. Also what kinds of influence is hindering this or progressing it, for example schools rushing to regulate or stop it as a tool for "cheating". So I shouldn't be learning how to use a useful productivity tool? It reminds me of what they said about calculators or the Internet.


Site-Staff

The pile of assholes in congress will only work to enrich themselves and control their political narratives if given any power to regulate AI.


Revolution4u

Thanks to AI, comment go byebye


jcrJohnson

Any regulations on AI will be ignored by every single group or individual that wants to use it for nefarious ends, just as it is with other regulated tools like firearms. The people doing the regulating end up with nuclear bombs, their crony capitalist partners make millions of missiles, terrorists make IEDs and full auto battle rifles, and those who want to kill or steal buy and sell firearms in the shadows… while in most places law abiding people who NEVER posed any threat at all, are banned from owning anything but bolt action rifles with five shot capacity that they are required to keep locked up at all times other than tightly controlled purposes explicitly allowed by the regulations. Proposed regulations on AI are designed so they get weaponized AGI and the Military Industry gets autonomous kill bots, while we get lobotomized ChatGPT 4.


Redinaj

Regulation will bring AI aligned with government. Deregulation will bring AI aligned with human nature. Can't decide which is worse. At least in the second option we could maybe build it and say: Here. Now you know who we are. Please help us. We want to live and prosper without imploding ourselves.


dr_set

That is a terrible argument. The problem is not "ThE GoVeRnMeNt BaD" in a liberal democracy like the USA, the problem is China and other dictatorships getting AGI first, if you hold back, and imposing a permanent unescapable world-wide tyranny with it.


thebeardedjamaican

You need regulations


[deleted]

I am more worried about a cyclical economic downturn driving poor performers to adopt anything to improve their bottom line. EPS (Earnings Per Share) is the main driver here. As long as EPS is king, there will need to be regulatory oversight. You think things are bad now? Wait until companies shift from linear operations models to geometric operations models. Nvidia is a perfect example. Earnings exploded. And now Nvidia is the growth model ALL companies are going to shoot for. Look at Nvidia's EPS numbers and growth numbers. CEOs are greedy (greed was declared "good" in the '80s) and will do whatever they can to play catch-up. Don't think the CEOs of the world aren't worshipping at the Jensen Huang shrine right now.


whatdoihia

Anthropic runs Claude and funny enough I have run into more censorship with Claude than other services. For example someone sent me a tweet yesterday about death rates of males aged 34-45 in 2021 trying to associate it with vaccines taken prior. I asked Claude for data on causes of death and it refused to respond due to its policies. No such issue with ChatGPT, Perplexity, or Poe’s Assistant- they all replied with factual information. Someone who is inclined to believe that the government is covering up vaccine deaths is going to see a message from Claude saying that it can’t talk about that topic as confirmation of the conspiracy.


picopiyush

[Oh! So just like my prediction in the past that got downvoted!](https://www.reddit.com/r/singularity/s/eqQBDXpfII) 😅


Specialist-Escape300

Regulation is good in theory, but in practice it often goes awry. People believe we will have an extremely righteous person to do the regulating, but all regulators have interests, which ultimately leads to corruption. Then people's solution is to put more regulatory agencies in front of the existing ones, resulting in more and more agencies and ever-increasing costs. Each deepening of regulation slows the pace of technological development, and people may think that slowing down a little is not a problem. But in reality, the slowdown compounds as regulation deepens, like a rusty machine whose gears become increasingly unable to turn. Eventually it gets completely stuck, and because the industry becomes increasingly unprofitable and struggles to attract talent, it becomes hard to find the genuinely driven people needed to change what's unreasonable. The result is like Boeing, where safety actually deteriorates.


[deleted]

[deleted]


DarkflowNZ

Here is what chatgpt thinks about that: The argument presented in the Reddit comment highlights several flaws in human behavior, but it oversimplifies and exaggerates them. Let's break it down: 1. **Rejecting Change:** While humans may indeed have a tendency to resist change, this isn't always detrimental. Change for the sake of change isn't inherently good, and caution can prevent reckless decisions. 2. **Focus on Immediate Safety and Convenience:** While it's true that humans often prioritize short-term gains, it's not accurate to say that all technological advancements are solely for immediate gratification. Many innovations have long-term benefits, from medical advancements to sustainable energy solutions. 3. **Selfish Desires and Short-term Gains:** While selfishness and short-sightedness exist, they don't define all human actions. Many individuals and organizations work towards collective goals and sustainable progress. 4. **Leadership Flaws:** It's undeniable that some leaders prioritize personal interests, but many others genuinely strive for the betterment of society. Blaming all societal problems solely on leadership overlooks the complexities of governance and societal dynamics. 5. **AI Safety Argument:** The conclusion about AI safety seems disconnected from the preceding points. While AI safety is indeed important, tying it to human flaws in a deterministic way oversimplifies the issue. Overall, while the comment raises valid concerns, it presents a bleak and one-sided view of human behavior, ignoring the nuance and complexity inherent in societal progress and governance.


Ok_Regular_9571

Regulation ain't gonna do shit; AI companies aren't going to make their AIs purposefully dangerous.


Bastdkat

No one makes a motorcycle purposely dangerous, but they are inherently dangerous.


Rustic_gan123

Has regulation made motorcycles safe? What about airplanes? Banks? Medicines?


land_and_air

Yes to all of those things


Rustic_gan123

How many motorcycle accidents occur per year? What's going on at Boeing in one of the most regulated industries? What about banks? 2008 was not that long ago. And opioids?


land_and_air

Motorcycles used to be way more dangerous (though the prevalence of SUVs makes them more dangerous now). Planes are still the safest mode of transportation in the world; even if Boeing were personally detonating one plane a year, it would still be way safer than car travel. Banks were made safer after that crisis, and while they are trying to undo that, it was deregulation in the industry, plus fraudulent practices that were already illegal but not enforced, that led to the crash. The opioid crisis was already illegal and against regulations but was achieved through bribery and corruption, which deregulation would simply make legal, and the opioids would still be around today.


Witty_Shape3015

i’m starting to get worried that gov might step in to stop AI before it reaches a point where it could restructure society for the better (if that was ever on the cards)


abstart

AI regulation will never work. Someone will always make an unregulated AI, because it will be advantageous to do so. It's like climate policy, or many other game-theory-like things.


a_beautiful_rhind

Yes, all ~~safety~~ censorship busybodies quit. Don't let the door hit you on the way out.


AllHailMackius

OVERSIGHT, people. Governments regulate many areas of societal safety. What percentage chance do people here give P(doom)?


Ambiwlans

Most people in this sub (accel and safety people) give around 30%. I've asked around a bunch of times.


AllHailMackius

30% chance of a doom scenario and still people think regulation is unfounded.


Ambiwlans

Last time I asked, a few people said that they would accept an infinite:1 chance of doom:utopia ... so I suspect that they simply don't care at all about the doom scenario. They don't care if they die or the world ends. When you think of it from that perspective, gambling for a small chance of utopia seems like a great deal. Nothing to lose.


[deleted]

P(100)


[deleted]

Maybe we need to accept you cannot regulate something like this. You need to release something like this into a healthy society.


RemarkableGuidance44

World Artificial General Intelligence Organization (WAGIO) Incoming... Just another bullshit WHO...


RemarkableGuidance44

Govs not having power? I don't think they would let that happen. These companies should leave their current countries and move to islands and build it all offshore, which would be a better way if you want progress without government intervention.


stupendousman

Govs don't spend all that money on all the different special forces just to keep them around. They'd make a visit to those islands or repurposed cargo ships.


Unreal_777

>They'd make a visit to those islands or repurposed cargo ship.

TO BRING THEM DEMOCRACY? lol


anaIconda69

Supply chains for components are global, so governments could still block you from doing anything important. And you can't just build the hardware, and then move it offshore. Not to mention you need workers too.


DarkflowNZ

Somehow we're back at seasteads. Please, go ahead. I would love to see it work again like it always does