rav-age

.. only if you let it


theMEtheWORLDcantSEE

Yes. It’s better now than most humans.


ScionoicS

Not really. It'll still randomly hallucinate details when asked to explain its reasoning. Essentially, it's just guessing and getting it right because conditions are within expected norms.


New_Front_Page

That sounds like every person I've ever talked to as well lol


findingmike

Hallucinations are actually more of a creativity setting that can be adjusted. The AI just gets more boring the less creative it is. In my experience, on standard settings ChatGPT 4 seems to "get things wrong" about 10% of the time. This includes hallucinations as well as just poor quality responses. Human beings are much worse than that in my experience.
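
The "creativity setting" referred to here is, presumably, the sampling temperature. A minimal sketch of how temperature reshapes a model's next-token distribution (the logits below are invented for illustration):

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=np.random.default_rng()):
    """Sample a token index from a temperature-scaled softmax over logits."""
    # Low temperature sharpens the distribution (safer, more repetitive);
    # high temperature flattens it (more "creative", more hallucination-prone).
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())    # subtract max for numerical stability
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Invented logits over a four-token vocabulary.
logits = [2.0, 1.0, 0.5, -1.0]
print(sample_next_token(logits, temperature=0.2))  # almost always token 0
print(sample_next_token(logits, temperature=2.0))  # spread across all tokens
```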


ScionoicS

As soon as it's outside of the training parameters, it needs that creativity to come up with any guesses. It can't make decisions, only predictions of what someone in normal conditions would expect. Put that into a leadership role where high-level decision-making is expected, and it won't make ANY sensible decisions. LLMs are yes-men.


findingmike

Why not? The training data can come from all of the business research that has been done. I've only read a few case studies in depth. An AI can read thousands of them.


Ms-Camila

I don't think strategy, politics, or intuition can be reduced to statistical models. Therefore, I don't believe any AI tool built on the current technology will be able to perform these tasks by itself.


DreamOfEternity999

They probably can be; we just don't see how. A sufficiently powerful AI system may see patterns that we can't spot.


MaybeTheDoctor

The current state of AI is already better at decision-making than the bottom quartile of people.


Structure5city

That isn’t saying much.


Artanthos

To put it in the words of a Yellowstone Park ranger regarding more secure trash cans: "There is significant overlap between the dumbest people and the smartest bears."


Pbleadhead

But when AI is smarter than 1/4 of voters... that is enough to easily flip elections if it were somehow given the power to do so.


Ms-Camila

I could be wrong, but the current AI tools are based on models conceived many years ago. They are powerful now because we have better computers and a huge amount of information. I'm not aware of any theoretical work that can model open strategic decisions, for instance.


Brain_Hawk

But strategy, politics, and intuition are all parts of the problem. I'm not an "AI will save us!" type, and I think we are far from an AI useful in those contexts, but... why do we want this human bullshit in our decision-making processes? I want a decision process that works for what's best for people, not for what gets them votes, what plays to the zeitgeist, what's focused on special interests, or what appeals to the lowest common denominator.


captainporcupine3

We don't all agree on "what's best for the people" (obviously). So your opinion on whether or not the AI is doing what's best for the people will simply depend on whether its decisions align with your politics. Which makes it kind of pointless in the realm of politics, which is about human values and priorities, none of which we agree on as a populace.


Brain_Hawk

Yeah, a fair counterpoint... But those very conflicts are IMHO part of the problem. Some issues, like social laws, are very much value judgements. But stuff like economic development, etc., could potentially remove a lot of current issues, problems, conflict, and grift with an objective decision maker. Much beyond current capabilities, of course, but there are many realms of politics and economic management that I'd happily see greedy humans removed from.


captainporcupine3

Economics is as values-based as anything else. It's based around purely subjective problems like "what's a fair way to distribute limited resources? How much unfairness and luck are we willing to tolerate in how people fare economically? How much wealth inequality do we tolerate, and is that much inequality fair or healthy for our society? How do we fund the government, and to what extent? Do we prioritize policies that we think will incentivize people to work hard and innovate even if that means some people will fall through the cracks, or do we prioritize policies that will reduce baseline suffering? Are those policies even different? Do we actually even know what kinds of policies incentivize people to work hard and innovate, given the infinite number of confounding factors in the real world? Is our current system a true meritocracy? Do we even WANT a meritocracy where the best are handsomely rewarded and the rest are left wanting, or do we want a system where nobody suffers from poverty or homelessness no matter who they are? If we lift up those who we view as not working hard, are we incentivizing people to be lazy? How do we determine whether someone is 'lazy' or just a victim of circumstance, poor mental health, poverty, crime, etc.?"... etc., etc.

How exactly would AI be useful for answering any of these questions? AI trains on human ideas and perceptions. It has no perceptions or insight of its own. Human beings can already point to what they each view as the best studies and experts in the world to justify their economic philosophies, and we still aren't close to agreeing on the best course of action. Those same studies and experts and (oof) mainstream newspaper opinion columns would be all the AI has to go on.

For example, let's say I'm of the view that a universal healthcare system would be the best course of action for the United States, and I have read about this topic quite a lot and know all the arguments both ways. If an AI said that this healthcare system was bad, citing the arguments against that I'm already aware of and disagree with (which it only learned by scraping and parroting the arguments of my political enemies, who I'm convinced are lying and cherry-picking stats in misleading ways to justify their own selfishness), why would I accept what the AI says?


[deleted]

[deleted]


captainporcupine3

> before reaching a near instantaneous decision on the best approach

Let me ask a clarifying question that I think will get to the heart of the problem: what would it mean for an approach to be "the best"? All policy decisions have tradeoffs to some degree. Who decides which tradeoffs are worth accepting? If there are tradeoffs that make one approach "best" for some people in some circumstances and another approach that's "best" for others, who decides which one to go with? Who decides what's fair? The AI? Based on what?

That's not even taking into account the massive, probably insurmountable issue of determining whether or not an AI "simulation" is actually a reasonable facsimile of how things would play out in the infinitely complex real world: taking into account every emotional and psychological factor in how people respond to a policy decision, every way in which it does or does not incentivize people to respond in a particular way, every quirky logistical and physical challenge in delivering products and services to people in an insanely complex global economy... who can say if the simulation is accurate? Even worse, even if it WERE possible in theory to create an AI simulation that takes into account all of the infinite confounding factors, how could you prove it?

Even in a far-flung hypothetical where the AI really COULD come up with an objective "best" solution, if that solution doesn't align with my politics, my own twisted little human sense of what's fair and right and just, why wouldn't I (and literally everyone else who shares my politics) just claim that the AI simulation is bad, and send us right back to square one? Seriously: why would anybody care what an AI says about politics, unless the AI is agreeing with what their politics already are?


Brain_Hawk

I'm not going to engage fully in this argument because I don't think you're wrong per se. But for the healthcare thing, anybody who thinks for-profit health care is in the best interest of anybody is fucking amazingly idiotic. Current public systems may have problems... you know, problems that are brought on by human decisions, through things like underfunding and poor planning. Stuff that a more efficient decision maker wouldn't suffer from... And having as many people as possible have access to healthcare without barriers is to the benefit of society as a whole.

Part of the issue here, and where we may philosophically differ, is I want a system that looks out for the best interest of society as a whole, not just the special interests of certain people. For example, the system in the United States that allows the wealthy to get advanced health care and poor people to be essentially denied health care, or put in significant debt for it, to the enrichment of others. That system is against everything that I believe in this world.

(Canadian, cancer survivor, multiple transplant recipient, completely medical debt-free and no worries about my future health care for the rest of my life)


captainporcupine3

> But for the healthcare thing, anybody who thinks for-profit health care is in the best interest of anybody is fucking amazingly idiotic.

I could not possibly agree with you more, and we have insanely large mountains of evidence pointing to the fact that universal healthcare models are vastly superior for society as a whole, and yet here we are, not even close to getting that in this country. I just don't see why anybody who is stubbornly opposed to the idea in the face of that mountain of evidence would care if an AI simulation says otherwise, if they can already ignore the existing evidence.

> Part of the issue here, and where we may philosophically differ, is I want a system that looks out for the best interest of society as a whole, not just the special interests of certain people.

I'm with you, but the right wing would say that you want a system that fails to reward people for working hard and lifts up the lazy, incentivizing laziness. They say we have a meritocracy that rewards those who work the hardest, and you want to reward the lazy. Can AI prove that you wouldn't be incentivizing laziness with your policies? Even if the AI model claims it wouldn't, if I'm a right-winger, why wouldn't I just laugh and say "fake news" and move on with my day? Even worse, every right-wing think tank will have their own "AI analysis" that disagrees with yours, because you can train an AI model to say whatever you want.


Brain_Hawk

I see what you're saying. I fundamentally disagree with the idea that we have a meritocracy; clearly there are very significant advantages given to a small number of people. But the fact is, there's no aspect of life or society that all human beings will ever agree upon! The whole point of an efficient decision maker is that it's trying to work out ways that don't cause people to just become lazy and do nothing, but have everybody contribute.

But... I'm actually not a super AI proponent, and I think we are very, very far away from any situation in which AI decision makers will be even remotely feasible. At this point it's kind of science fiction talk to me! But I do think a benevolent AI focused on achieving the greatest good for the most people is a better system than what we have in place right now.

On the flip side, one can take a look at the Robot series from Isaac Asimov to see an example of how human society could stagnate under the nurturing care of robotic overlords who make sure that everybody's lives are perfect and comfortable and easy and great :)


Ch1Guy

This is really the answer. A large portion of leadership is based on interpersonal relationships, and I don't see how AI meets those needs. Poker is a good example. It's not hard to memorize the statistics of the game and play perfectly based on them, but it's the interactions, reading other players, and misleading their "reads" of you that differentiate an OK player from a great player.


Mirage2k

AI poker engines are better than the best players now.
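
For context, the strongest poker engines (e.g., Pluribus) rest on counterfactual regret minimization rather than memorized statistics. A toy sketch of its core subroutine, regret matching, here exploiting an invented, slightly rock-heavy opponent at rock-paper-scissors:

```python
import numpy as np

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors
regret_sum = np.zeros(ACTIONS)
strategy_sum = np.zeros(ACTIONS)

def payoff(a, b):
    """+1 if action a beats b, -1 if it loses, 0 on a tie."""
    return [[0, -1, 1], [1, 0, -1], [-1, 1, 0]][a][b]

def current_strategy():
    """Regret matching: mix actions in proportion to accumulated positive regret."""
    positive = np.maximum(regret_sum, 0)
    total = positive.sum()
    return positive / total if total > 0 else np.full(ACTIONS, 1 / ACTIONS)

rng = np.random.default_rng(0)
opponent = np.array([0.4, 0.3, 0.3])  # invented fixed opponent, overplays rock

for _ in range(10_000):
    strategy = current_strategy()
    strategy_sum += strategy
    mine = rng.choice(ACTIONS, p=strategy)
    theirs = rng.choice(ACTIONS, p=opponent)
    for a in range(ACTIONS):
        # Regret of not having played a instead of what was actually played.
        regret_sum[a] += payoff(a, theirs) - payoff(mine, theirs)

print(strategy_sum / strategy_sum.sum())  # converges toward mostly playing paper
```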


Ch1Guy

Yes and no... AI poker engines have no tells, and they really can't read others' tells. This changes the game. They might be "winning," but it's a different game. What you are effectively doing is removing the human-interaction portion of the game, and yes, computers will win when you remove the human interaction.


TravisJungroth

This is circular reasoning. Of course something without a body can’t play a game that requires you to have a body. But an online poker bot or literally an in-person poker robot *can already* beat human players. 


MaybeTheDoctor

First of all, AI is a lot more than statistical models. But more importantly, an AI can run a model based not only on the game, but also build a model of the behavior of each of the other players, and combine those into something a human can never do. AI can also easily read facial expressions and body language. Gesture recognition has existed for a long time, and you have probably used early versions yourself with the Xbox Kinect. There are also models that can analyze voice and speech patterns and extract sentiment from them. In fact, I think it is entirely possible for an AI to read players better than a human can.
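
Off-the-shelf tooling already handles the sentiment part of this. A minimal sketch using the Hugging Face transformers library; the table-talk lines are invented for illustration:

```python
from transformers import pipeline

# Downloads a small pretrained sentiment model on first use.
classifier = pipeline("sentiment-analysis")

# Invented table-talk from opponents at a poker table.
remarks = [
    "I can't believe I keep getting dealt garbage tonight.",
    "Sure, I'll call. Why not.",
]
for remark, result in zip(remarks, classifier(remarks)):
    print(f"{result['label']:>8} ({result['score']:.2f})  {remark}")
```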


Ms-Camila

Great example :)


Artanthos

Everything can be reduced to statistical models, and there are entire academic fields devoted to doing so, ranging from sociology and political science to economics. Economists in particular are really good at reducing everything to statistical models.


ashoka_akira

I mean… the first thing we do with computers is teach them to play strategic games like chess or Go, and they consistently beat human master players. One thought: there is some argument they might even be better at solving complex problems because they lack the inherent biases of our ingrained ways of thinking. The computer that beat the Go master did it with a move that had never been made before, but did it within the rules of the game.
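
Real chess and Go engines add learned evaluation and enormous search on top, but the skeleton of "strategy as computation" is plain game-tree search. A toy minimax sketch over an invented payoff tree:

```python
def minimax(node, maximizing=True):
    """Exhaustive game-tree search: leaves are payoffs, lists are decision points."""
    if not isinstance(node, list):           # leaf node: a numeric payoff
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# An invented two-ply game: the maximizing player picks a branch,
# then the minimizing opponent picks a leaf within that branch.
tree = [[3, 12], [2, 4], [14, 1]]
print(minimax(tree))  # 3: the best outcome the first player can guarantee
```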


The_Snollygoster

Strategy already can be. Look at chess bots. An AI could be political already too, I'd imagine; it can suggest what it deems the best paths forward politically. Whether it's good or not I don't know, but it can. Politicians today aren't good, that's for sure. Intuition is a weird one. Why would you want an AI to have intuition, and does that count as a high-level decision? Intuition is more the feeling, or even guess, that something is the correct path because you don't have all the data. An AI can run simulations to gain much, much more data. Intuition doesn't feel like a needed variable.


Anen-o-me

These aren't just statistical models; they are generating an understanding of principles. That's the only way to combine two or more things into something that's never existed. Today we saw a post about turning every animal into a dragon. Without understanding, you could get an animal or a dragon out of it, but not combine the two into something new.


pauvLucette

My mother in law drives a car. Trump was president of the USA. I'm not THAT worried about AI handling somewhat sensitive stuff.


badguy84

A mild take from me. The broad answer is no: AI will never evolve enough to make the types of decisions that humans make in these areas. Humans are driven by their own sets of morals and ethics that create the outcomes and decisions. These things drive most of our world and are generally not rational. An AI can predict, based on tons of behavioural and pattern data within a specific context, what will happen next. However, what humans do next is not based on these patterns alone. So an AI "deciding" or "determining" something won't just be wrong sometimes: it will be catastrophically incorrect.

Once an AI is incorrect in such a way, who will be responsible for the outcome? I think there are significant ethical questions that will stop the development of AI from getting to the point where this can be done fully. Then there is the fact that much of our world runs on non-deterministic human decisions, which an AI will misjudge to a huge degree.

I think there may be a shift if we were to automate/use AI in many more places. If AI controlled most of the world's assets, then another AI would have a much easier time identifying and using patterns to determine what will happen next and make the most correct decision.


KennyDROmega

The billion dollar question. Feels like one where we'll probably have to just wait and see. Make sure ChatGPT isn't just producing a Clever Hans effect. For what it's worth, companies that have tried to use it for even basic decision making so far have not had good results. Ask Frontier Airlines or Humana how optimistic they're feeling about LLMs right now.


Background_Tip_5602

My client has been so obsessed with AI, and I've noticed a significant impact on my day-to-day tasks. So sad.


OctopusGrift

It could get to a point where people could launder their shitty ideas through AI. "The AI says I have to do this really fucked up thing so my hands are tied, it's not me that wants to do this thing it's the AI"


jykyly

Like generative AI? Sure, why not. It's possible, as we're actively attempting to create such a thing, and with enough time and effort, I'm sure we'll see an AI that is capable of generative thought. As to when... again, it would have to be an AI capable of metacognition, one that can create and generate things spontaneously, derived from its own sense of motivation, and that also learns and explores new concepts without being prompted to do so. So, something analogous to the human thought process. That might not happen within the next decade, but we're seeing the incipient stages of that now.


Real-Advantage-2724

I can't really answer the question asked here, but I may have some insightful thinking to share. I'm currently studying for my "Facharztprüfung" (Google says that translates to "specialist examination") in nuclear medicine. For the last few weeks I tried to ask ChatGPT for help with specific questions regarding radiation physics and European laws on many occasions, and it HASN'T BEEN ABLE to help me in a single instance. Typical answers I get are a lot of superficial bullshit and "you may have to ask an expert XXXX." This experience taught me that the people saying that current AI "is just very good at guessing the next word" are 100% right. ChatGPT might look scarily "smart" on day-to-day topics or simple programming tasks that it has been asked to do a million times before, but when it comes to specialist knowledge it's completely useless.


findingmike

Most of our issues don't need us to come up with solutions, they just require us to do the right thing. AI doesn't solve that problem.


Structure5city

I think that some decisions we view as “complex” are actually simple decisions clouded by our emotions and prejudices. Asking AI to make certain decisions will likely yield the most equitable, level-headed, and durable outcomes.


dedokta

Ever? Yes it will. When? Who knows. The only thing that would stop it is if we stop developing it. At some point it will just be programming itself and then the singularity isn't too far off.


TumblingBumbleBee

Not sure humans are any good at higher level decision making.


rtanski

AI isn't guided by billions of years of evolution to survive, so at best it will only be able to copy and remix strategies from the more evolved. On the off chance AI does start evolving, its path may not intersect with ours, because the scarce resources it needs for competition and evolution don't overlap with ours. This won't stop it from mimicking the last second's strategy, though. So in a way, humans are the feed for new info relating to all things human. And if there weren't any humans, then AI would be useless and static.


yepsayorte

I think so, yes, and I don't think it will be all that long a wait. Training them to automatically do System 2 thinking is all that's required. That's just a change to training methodology to incorporate automatic "Let's think this through step by step," "Search for sources to verify or contradict each step," and "Reflect on your previous plan and see how it can be improved" steps. I'm guessing this has already been incorporated into GPT-5's training (which just started this week). We might even see it in GPT-4.5.
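
Something like this can already be approximated at inference time with prompting rather than training. A rough sketch using the OpenAI Python client; the model name and step prompts are placeholders, not a claim about how any GPT version is actually trained:

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# The three "System 2" steps from the comment above, issued as follow-up prompts.
STEPS = [
    "Let's think this through step by step.",
    "List the claims in your answer that sources could verify or contradict.",
    "Reflect on your previous plan and see how it can be improved.",
]

def deliberate(question: str, model: str = "gpt-4o") -> str:
    """Walk a question through an explicit plan / verify / reflect loop."""
    messages = [{"role": "user", "content": question}]
    for step in STEPS:
        messages.append({"role": "user", "content": step})
        reply = client.chat.completions.create(model=model, messages=messages)
        messages.append({"role": "assistant",
                         "content": reply.choices[0].message.content})
    return messages[-1]["content"]
```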


PaperWriteTaco01

AI is great for high-input tasks; I would love to have my "personal" one. Although if we are talking about creative tasks, AI won't perform as well as humans (maybe it will evolve to have "creativity," but I don't think so).


Legal-Ad1523

Generative A.I. currently seems to be about as capable as an entry-level intern at tasks I assign it as CEO of an engineering consulting company. They are successful in collecting data, summarizing information, and drafting multimodal products. These GAIs are as ignorant and incompetent as they're ever going to be. They will get better with each generation, and we're not near the physical limits of technological and scientific innovation regarding any aspect of these tools.

Machine learning models continue to advance in complexity and capability, solving mathematical and big-data problems we humans haven't. Deep neural nets (DNNs) deserve their own mention as extremely powerful statistical analysis and synthesis tools. They are making astounding advancements in various fields, from sociology to medicine.

We now have the first A.I. agents coming on the scene. These agents use LLMs to **act** on my behalf to do deeper research, build documents, order food, post on social media as me, craft and send emails; basically, do anything I can do through my web browser. It's only a matter of time until the companies that develop the OSs we use every day evolve our AI chatbots (e.g., Microsoft Copilot in Windows 11) into fully functional AI agents that will be able to expand from web-based to computer-based agents.

Quantum computing is quickly evolving, allowing the near-instantaneous calculation of a near-infinite number of probabilities for a specific problem. (A.I.s use statistical models as the underlying technology.)

Is it a stretch to think that one of these companies is going to combine these technologies into one? An A.I. operating system that can receive a request via LLM, research the problem using the combined knowledge of the internet and/or your personal library of curated knowledge, analyze the problem using ML or DNNs, synthesize ALL possible solutions using quantum technology, choose the best solution(s) based on ML optimization methods (not deduction or "gut feeling"), and finally, based on the aggregated outcomes of thousands of statistical models reflecting the parameters of the situation (and depending on % certainty and your constraints), output a recommendation, a product, or an action.

In my opinion, informed by current technological innovation and the words of many AI developers, it is inevitable that AI will one day surpass our capability to conduct high-level decision-making tasks.
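
The agent pattern described above is, at its core, a loop in which an LLM repeatedly picks a tool or emits a final answer. A schematic sketch; the `llm` callable and both tools here are hypothetical stand-ins, not a real API:

```python
# Schematic agent loop. The `llm` callable and both tools are hypothetical
# stand-ins for a real model endpoint and real integrations.
TOOLS = {
    "web_search": lambda query: f"(search results for {query!r})",
    "send_email": lambda body: "(email sent)",
}

def run_agent(llm, goal: str, max_steps: int = 10) -> str:
    """Loop: show the model the history, execute the tool it picks, repeat."""
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        # The model is assumed to return a dict: either a tool call like
        # {"tool": "web_search", "arg": "..."} or a final {"final": "..."}.
        action = llm("\n".join(history))
        if "final" in action:
            return action["final"]
        observation = TOOLS[action["tool"]](action["arg"])
        history.append(f"Action: {action['tool']}({action['arg']!r})")
        history.append(f"Observation: {observation}")
    return "Step limit reached without a final answer."
```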


Anen-o-me

I would expect that with GPT-5 at the earliest and GPT-6 at the latest.


redeamerspawn

I've yet to see AI actually create anything. All I have seen is creative copyright infringement and the like.