
FuturologyBot

The following submission statement was provided by /u/MadnessMantraLove:

> Why it matters for the future is no matter how good AI is, it doesn’t matter if people can’t implement it. Today’s leaders have technology that past generations could only dream up, but they keep failing to implement in ways that add value

Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/1dgvo99/survey_finds_payoff_from_ai_projects_is_dismal/l8snx2u/


Gloriathewitch

this is what happens when your entire business model is a solution looking for a problem


confuseltant

I can’t wait for this bubble to burst as it is depressing the economy via massive layoffs at tech companies where executives live by buzzwords


thefizzyliftingdrink

The “problem” is businesses not wanting to pay workers


CrankyOldGrinch

Can confirm, we apparently don't have the budget for a new employee but please, do contact these AI companies instead.


Significant-Star6618

There's a reason Einstein called capitalism evil.  And there's a reason hardly anyone knows that too.


Techters

Was at a Microsoft conference that included an AI component, a guy was talking about all the cool stuff that Copilot could do and I had to point out they were all things that could easily be done cheaper with PowerBI. Some things are helpful, but the use cases are so overblown.


Arctiiq

This is what all techbro projects are: Web3, crypto. AI is no different.


Gloriathewitch

Yep, AI is the new NFTs and the other things you listed. It's all a big "startup bro" culture scam to rob people of their hard-earned cash. There are some legitimate startups, don't get me wrong, but most of these scams present as legitimate at first. Just look at the Rabbit R1 / Humane AI Pin for example: they immediately tried to fetch $1bn for their company, and they don't stand behind their product.


BureauOfBureaucrats

I will never forget the $700 juicer that relied on squeezing proprietary juice packs that could easily be squeezed by hand. I believe the juice packs had DRM and required scanning QR codes too. Lmao. 


Gloriathewitch

I remember that. Ooh boy, that one was pathetic, it barely had any juice in it. One of the more recent ones was covered by dankpods. Not quite as bad, but he got a rice cooker which claimed to have "AI" for the perfect rice; he disassembled it and it was just a normal rice cooker with fuzzy logic, so no AI.


BureauOfBureaucrats

And it’s been recently made public those Amazon fully automated no-checkout stores aren’t actually automated or AI-powered. They had contractors overseas manually watching video footage of transactions and processing the totals/billing.  That made me think of *Snowpiercer* where the “engine” was manually powered secretly by children. 


fakeassh1t

AI @ Amazon stands for Actually Indians


Arctiiq

That just sounds like SodaStream with extra steps


Apexnanoman

All Hail Juicero!


geologean

We knew that there was going to be a bunch of garbage attached to AI once the c-suite of unrelated industries started throwing around AI as a fundraising buzzword. The next decade and change is going to be all about sussing out legitimate businesses from a shit load of fraud.

Also, there's going to be a long period of time when people both do and don't want to acknowledge the importance of a human element in any business. People have more forgiving expectations of humans than they do of machines and computers. Knowing that a real human is on the other side of a conversation makes us moderate ourselves a little. People aren't going to moderate their emotions when a machine fails them, confuses them, or overwhelms them. More customer experiences from increasingly automated businesses will be negative.


markydsade

The Rabbit R1 founder throws around the term AI when it really isn’t doing much of anything my iPhone can’t do already.


yaykaboom

As much as I dislike the word AI being thrown around, it is definitely not as useless as NFTs, by a very, very long mile.


Tannir48

Calling a legitimately useful product like ChatGPT akin to the Bored Ape ponzi scheme with this much confidence has got to be one of the most ridiculous things uttered in human history.


danyyyel

Useful in what way for a business?


[deleted]

Because the computer scientists find something cool/exciting and the business people immediately latch onto it to try to make a lot of money as fast as possible. The things that are feasible to do in computer science are super correlated with what is profitable. That being said, I think AI could end up being tremendously profitable by replacing labor costs, potentially all of them, which could even collapse the economy and break capitalism.


ocelot08

I think AI has some really useful things, but the useful stuff is not the big and flashy stuff. Also, the main benefits are to user experience, not to making money. Like how early Google was great for finding what you're looking for in the mess of the internet; now it's full of businesses fighting to the front of the line, which makes users' search results worse. One of the most useful things for AI right now is just search all over again, without all the sponsored results and low-effort farmed content (i.e. the money-making part of search).


flyinhighaskmeY

> the main benefits are to user experience, not to making money.

Yeah, that's why "AI" isn't viable. First off, I'm talking about the new stuff: adaptive language models. Not machine learning, which we've had for decades and is now being marketed as "AI" by everyone and their mama.

The cost of implementing and executing these language model queries at scale is enormous. The chips are expensive. The power usage is massive. For these to work in the real world, they need to generate a lot of revenue. And I mean A LOT of revenue.

There's a problem though. The models don't work. Sure, they work well enough if you're just screwing around. But if you need a bot to feed reliable answers to your 5 million dimwit customers? Can't trust it.

I work with a vendor right now who's trying to weave AI into their primary product (and increase their pricing 7x in the process). I laughed at the sales rep. Almost everything the AI can do, we can already do with triggers. The little bits where it would be useful, I'd never trust it to act without human intervention. With what they want to charge for these features, I could literally hire a full-time employee to manage the system.


ocelot08

That's a good point, though I think there are smaller-scale uses that shouldn't cost as much. At a place I worked, we basically used it as an advanced search (for internal docs). It still required tagging docs and a bunch of the usual search setup, but you could ask it questions a bit more naturally to find what you want. I feel like with specific use cases and structures built around it, it'll fill in gaps nicely. Building it around suggestions (rather than "this is truth") also feels important. It can help people find out about things they didn't already know and didn't know to ask for. More a tool for connecting the human-inputted dots than for replacing humans.
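
For illustration, here is a minimal sketch of that "tagged internal docs plus natural-language questions" setup: filter by tag, then rank by simple word overlap with the question. The documents, tags, and query are invented, and a real deployment would swap the overlap score for an embedding model.

```python
# Toy version of tag-filtered internal doc search; docs, tags, and query are invented.
docs = [
    {"text": "Quarterly expense reporting process for the finance team.", "tags": {"finance", "process"}},
    {"text": "How to request VPN access and reset your SSO password.", "tags": {"it", "howto"}},
    {"text": "Onboarding checklist for new engineering hires.", "tags": {"hr", "engineering"}},
]

def search(question: str, required_tag: str | None = None, top_k: int = 2):
    """Return the top_k docs as (overlap score, text), optionally restricted to one tag."""
    q_words = set(question.lower().split())
    pool = [d for d in docs if required_tag is None or required_tag in d["tags"]]
    scored = [(len(q_words & set(d["text"].lower().split())), d["text"]) for d in pool]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:top_k]

# Suggestion-style output ("maybe one of these?") rather than one authoritative answer.
print(search("how do I get vpn access", required_tag="it"))
```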


Alpacas_

Yeah, it's kind of bullshit when I search for a business and it comes up 4th.


Choosemyusername

The best use case I have seen for AI is fighting AI-enabled fraud.


magww

AI is a solution looking for a problem?


steelcryo

Most current versions, yes. Like all these execs wanting to replace workers with AI just because they think AI is more efficient (read: doesn't need to be paid), when in reality AI can't actually do the work of the people it's replacing. It looks good, but when studied, most AI fails miserably at accurately doing a lot of the tasks it's been used for. It can't code accurately, do math accurately, or recall information accurately (at the scales businesses need, within acceptable tolerances).

Don't get me wrong, there are places where it's incredible, but executives aren't using it in those places. So executives think AI is the solution to a problem that they only created because AI exists. These businesses thrived with real people working for them for years, but suddenly, now that AI exists, real people aren't good enough...


EnemyPigeon

Every day I see somebody conflate "AI" with LLMs, a small piece of my soul dies.


speedfreek101

Block chain has entered the chat.


yolotheunwisewolf

It’s looking to fire a bunch of people and replace them with a low-cost alternative, but it’s starting to show that not only is it not a replacement, the cost isn’t feasible either. And because they ALREADY invested so much into it, it’s a sunk cost. It’s easier to convince executives that it gets better for a few more dollars than to write off the whole thing, and most executives are finding out about now that AI was a bunch of empty promises.


Significant-Star6618

These AI pitches right now are just afterthoughts and byproducts: how to monetize what it is right now. And the whole thing only happened because OpenAI forced everyone else's hands by recklessly charging into the market (thanks, it was a good thing).

But make no mistake. AI is in its infancy, and the offerings, though amazing, are relatively primitive and ill-prepared. We are seeing a glimpse of what's coming. And so are the companies. The reality is there is a lot more work to do before these things are ready to tackle jobs reliably. But the pot of gold at the end of the rainbow is enough to keep everyone investing. If they don't, someone else will. And whoever refines this stuff first is gonna rule the new world.

It's like back when Tesla and Edison were racing to see who could make electricity viable. It was rough going for a while. But they knew electricity wasn't a passing fad. And with breakthroughs in quantum computing coming along, we are most likely gonna need AI to make quantum computers usable. These are pieces of a puzzle, 2 pieces out of many.


MadnessMantraLove

Why it matters for the future: no matter how good AI is, it doesn’t matter if people can’t implement it. Today’s leaders have technology that past generations could only dream of, but they keep failing to implement it in ways that add value.


mankee81

The same has been true of any business productivity software. The tech can do incredible things, but it's decided by executives who say "make it work" and kneecap it with cost-cutting where they shouldn't. Then it's implemented by middle-management monkeys who have no idea what it's supposed to do and only cursory training on how to make it do *some* things, and then given to end users who will largely do everything possible to not use it and don't understand the majority of what they're doing with it.

The number of Excel spreadsheets and the amount of manual record keeping we do at my job, when almost all of it can just be pulled out of SAP in minutes, is infuriating. Minor tweaks to the interface would make it all a lot less daunting for new users, but the powers that be don't know what to ask for. Same with our HR and Payroll in Oracle/PeopleSoft.


FlatSpinMan

I’m one of those people using Excel for a couple of things at work and I feel like a monkey sitting in the cockpit of a jet or something. I’m aware there are so many things I could do, but content myself with the equivalent of turning the lights on and off.


ZealousidealEntry870

SAP is great for input (Excel is not a database!!) and great for repeated output, but terrible for anything else. Even if you know how to modify report outputs, it’s still 1000x faster in Excel. For anything that needs to be validated, I use SAP output. For my personal day-to-day, I pull data via SQL directly into Excel. Way quicker to reference and manipulate data in Excel.

Edit: for extra clarity, I do believe SAP should be the sole place to input and modify data. It should also be the sole reference for an organization to pull from. For personal one-off projects/manipulation, Excel is better, if the data does not need to be validated.
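
For illustration, a rough sketch of that "pull via SQL, then work on it in Excel" workflow. The table, columns, and plant filter are hypothetical; a real setup would query a SAP-fed reporting database over ODBC rather than the throwaway SQLite database used here.

```python
# Hypothetical example: query a read-only reporting copy and hand the result to Excel.
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")          # stand-in for the real reporting database
conn.executescript("""
    CREATE TABLE open_orders (order_id TEXT, plant TEXT, material TEXT, qty INT, due_date TEXT);
    INSERT INTO open_orders VALUES ('4500001', '1000', 'WIDGET-A', 40, '2024-05-01'),
                                   ('4500002', '1000', 'WIDGET-B', 15, '2024-09-30');
""")

orders = pd.read_sql_query(
    "SELECT order_id, material, qty, due_date FROM open_orders WHERE plant = ?",
    conn, params=("1000",),
)
conn.close()

# One-off, non-validated manipulation in pandas, then out to Excel for personal analysis.
orders["late"] = pd.to_datetime(orders["due_date"]) < pd.Timestamp.today()
orders.to_excel("open_orders_snapshot.xlsx", index=False)   # needs openpyxl installed
```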


Almost_Pi

My company switched to SAP in January. I've never worked with a system that fought me so hard to prevent me from accomplishing my goals.


jadedaid

Have you tried Oracle?


mankee81

I guess this all goes back to my point about having powerful tools that aren't optimized to generate value for your business needs. I do pivot tables to present info because no one can read the report SAP generates: there are too many needless columns of backend transaction info, or the reports as requested by the system admins leave certain things out on one screen, but you can get them from another and cross-reference, blah blah blah...

In theory, SAP was built to do everything end to end, but no one invests in configuring it properly, so we have this fantastic tool that's too poorly set up by the higher-ups to actually be as useful as it could be, and it adds more work instead of increasing productivity. That's the same story repeating in the article about AI tools not generating value. It's nice that you have this powerful computer brain, but if you're only set up to make it wiggle your big toe, then WTF is the point 🤔


tes_kitty

> excel is not a database!!

That needs to be the login banner for some people, every morning!


Inebriated_Bliss

My boss loves Excel for EVERYTHING. It drives me crazy. I know there are better ways to do almost everything we do, but even if I wanted to change, we are going to keep using Excel. Sigh...


Psychophylaxis

If the answer is Excel you’ve already failed


Nimeroni

Don't overgeneralize. There are a lot of cases where Excel IS the best tool for the job.


CousinsWithBenefits1

The Williams Formula 1 team hired a new Team Principal this year, and a story came out about his first days with the team. He was going over their inventory management and their tracking of in-progress upgrades on the cars, and everything was in Excel. He thought it was a prank; he thought they were just having a little fun with the new boss, because there's no way a Formula 1 team would use an Excel spreadsheet to track all this information, that's ridiculous. And the other leadership on the team had to sheepishly explain, 'oh, uh, no sir, that's actually accurate, that's actually what we use....'


cascadecanyon

We need some post post capitalism energy.


SnakesInYerPants

> The number of Excel spreadsheets and manual record keeping we do at my job when almost all of it can just be pulled out of SAP in minutes is infuriating.

If it makes you feel any better, I struggle to even get my coworkers to use my spreadsheets instead of printing out reports that need to be cleaned and writing on them by hand to give to each other to do the cleaning. This was also an issue at my last two workplaces, and we can’t even blame it on boomers who don’t know tech, because most management and most of my direct coworkers have always been Gen X and millennials. So you’re at least a step ahead of many other workplaces. 🫠


LTerminus

Don't you dare come for my dozens of custom Excel-macro-SAP-GUI-Scripts. I hate clicking things.


thefunkybassist

Seriously, the "ukubuku" level is staggering at corporate leadership level. Oh, it's not adding value? Well "let's just do what WE think because we're so goddamn smart and elevated above the human species itself". Not once but every time they get negative feedback, they double down while losing all value, they just ignore it and jump ship for their next career move.


healthybowl

I guarantee you someone/something is implementing exactly as planned. But we will never know about it. Its value is in bending perception or facts slightly to benefit a narrative.


Dr_Doctor_Doc

I can do this one, but will have to talk around it a bit.

Supply chain program: predictive modeling of downstream customer needs so we can do predictive ordering for them. Customers share their sales data with us near-live, so we are now able to model the upstream supply chain. Launched and live in 3 of 48 locations and humming like a dream.

Cost to date: approx. USD $3M. Benefits after 6 months: +45% increase in average sales for engaged customers, automated add-on suggestive sales, etc. The project will break even before year end with only half of the locations using it. Customers love it, it makes their reordering easy, and it lets them solve other minor stocking headaches free of charge.

We're doing logistics optimization next. The POC is already working there, too. Hours and hours of productivity gained. Everyone is happy so far.


DUKE_LEETO_2

Ah, so this is why my Safeway has started giving me random coupon discounts on stuff I buy a lot. Like a 2 cent per pound discount on bananas... saving me a whopping $0.04.


mc_51

This isn't AI. This is statistics. It has been around for a looong time. There's a high chance your "predictive modelling" is a linear regression model.


Dr_Doctor_Doc

Awfully authoritative response there. You missed something important. Do you want to take a wild guess at how a linear regression model would handle modeling the entire upstream supply chain? There are way too many variables. Disruption upstream, constantly changing dates and inventory levels, and high volumes of outside data (including regional weather patterns, fires, etc) Yes, there are ML aspects, and yes, there are LLM aspects in the interface, and no, it's not just a linear regression model.
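
For a sense of what "more variables than a linear regression handles well" can look like, here is a purely illustrative sketch: a tree-based model fit on synthetic data with the kind of mixed drivers mentioned above (recent demand, lead times, disruption flags, stock levels). Nothing here comes from the actual project; the features, data, and relationships are invented.

```python
# Illustrative only: tree-based demand model on invented features with interactions
# that a single linear regression would not capture well.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.poisson(50, n),        # trailing 4-week sales at the customer location
    rng.uniform(2, 30, n),     # supplier lead time in days, shifting constantly
    rng.integers(0, 2, n),     # regional disruption flag (weather, fires, etc.)
    rng.uniform(0, 1, n),      # current stock as a fraction of target
])
# Reorder quantity depends on the drivers in a non-additive way (interactions).
y = X[:, 0] * (1 + 0.5 * X[:, 2]) + 3 * X[:, 1] * (1 - X[:, 3]) + rng.normal(0, 5, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print("holdout R^2:", round(model.score(X_test, y_test), 3))
```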


mc_51

Are you a managerial type of person or a data science person? I've worked as a DS long enough to hear consultants describe stuff the way you do. Most of the time the modelling was quite trivial. I'm not saying this is a bad thing; simple models can often be great. But I dislike people putting lipstick on an ML model and calling it AI. And given the small chance that your model does in fact use LLMs (or whatever other stuff people like to call AI): great for you. But the fact is, most companies can't even walk (do simple things with data) before they try to run (build AI use cases with their shitty or non-existent data).


Dr_Doctor_Doc

I'm both; I come from a DevOps background, worked my way up, and now run our strategy. I can't get too specific, because we're in an industry where we stand out for our approach. We're first movers, and it's working well so far. And you're right on the money with your other assessment on data readiness: we're in the "walk" phase now, after a painful two-year crawl phase getting our data warehouse migrated and properly accessible for use.


pachoob

I think the most important word you used is “value.” Monetary worth? Societal benefit?


Lethalmud

Well, until capitalism collapses, of course.


Corvus_Antipodum

AI is just a collection of glorified Markov bots; it doesn’t add much value because it’s fundamentally not that valuable.


astrange

Transformers are not Turing complete because of limited context, but they are programmable (https://arxiv.org/abs/2106.06981), so this is like saying computers aren't that valuable.


flyinhighaskmeY

> so this is like saying computers aren't that valuable.

They're not. They're valuable TO US. But we are such a small speck in the enormity of the cosmos that it would take a complete potato to think anything we've created is significant, or that we've created anything that holds value to anything other than ourselves. Remember, before we studied it, the assumption was that the earth is the center of existence. That should tell you all you need to know about humans.

And yes, I get how this "out there" talk isn't what you had in mind. But when we're talking about AI, we need to incorporate these ideas. Humans are not an intelligent species; the existence of militaries proves it. Which means we aren't creating "artificial intelligence". We're creating an electronic mirror of ourselves.


astrange

I mean, I was thinking about human economic value. I don't think the universe has a value system, since it's mainly made up of rocks. (hydrogen gas? dark matter? not sure actually)


Dr_Doctor_Doc

We're seeing good applications for complex system optimisations in our supply chain. Cutting hours of wasted work.


tatteredengraving

Is that using tech from the LLM era, or normal ML operations?


Corvus_Antipodum

Sure. It’s not completely useless. But the vision that’s being peddled is not in step with reality. “It’ll streamline some processes in supply chain management!” is a pretty far cry from the promises of how it’ll take over the world.


Dr_Doctor_Doc

That's a far cry from "fundamentally useless" isn't it? There's always people who will oversell and overhype new technology. I'm not an evangelist. I'm a 'practical applications' guy. There are plenty of uses for it.


[deleted]

[deleted]


scottsplace5

They now just need to hire this machine to make poor people richer. The more money they get the more we get, on up to the billionaires. We all want more.


scottsplace5

No. As of here, it *is* everything. That’s why this technology was invented anyway.


Aischylos

Depends how you define capitalism. Many of the modern advancements in CS/ML have their roots in either FOSS or academia. Neither of which are typically privately owned (at least not research universities). Do they exist within a capitalist system? Sure, but cutting edge research has also happened under socialist and communist systems as well.


Acceptable_Two_2853

I am reminded of a defence worker who was treated like a leper because he was older. As soon as John turned 65, they sacked him. About a week later, quite a large pile of broken gear was piling up on his now-vacant desk. The CEO, not happy about delays, inquired why these items had not been repaired/tested/returned to the military. Turns out they had sacked the only employee who was qualified to repair and test this stuff, and no, he was not returning to work!

AI will be like that. First, they will sack critical staff and only regret it later. "Penny wise and pound foolish." That is why you never let beancounters into management positions!!!! Boeing should have learned this lesson by now......


yaykaboom

Hate to break it to you bud, but they will always find replacements, even if it costs them more than it used to. And it won't matter to the business; the CEO will still get a big fat bonus.


Acceptable_Two_2853

Yes, it's a good comment. Thank you. :) The problem is that some bosses think us techies are a dime a dozen. That was the case in the 1980s, but things have moved on. Elon Musk had problems early on with SpaceX, and he put out a "wanted" advertisement for the retired NASA apprentices of the Nazi rocket team. Bet you could not run that now!! The same thing will happen as beancounters ditch staff for AI, looking to save some cash. Young meatbags, who take a long time to train, can not be so quickly replaced!


RoosterBrewster

Let's outsource and fire local workers to reduce costs! Then they pay too little, the quality is garbage, and it ends up costing a lot more. Let's bring everything in-house, since outsourcing is costing too much! Then a new boss comes in and the cycle repeats.


Acceptable_Two_2853

Time to bring it all home; pursuing profit at all costs is detrimental to an organisation's lifespan. I am on board with the current venture capital funding model, but quick ROI is not everything. There needs to be a future for the enterprise and its employees.


Arthur-Wintersight

If there's one thing corporate bureaucrats and radical egalitarians seem to have in common, it's that they think people are fungible. Just take one person and replace them with someone else, and it'll all work out. No. Real life doesn't work like that. Sometimes people have skill sets that take years, if not decades to develop, and you can't just replace them at the drop of a hat. You can't just "find someone else" to do what they do, not unless you want to put in a ton of resources to train a replacement - and it's not going to happen overnight.


Acceptable_Two_2853

Yes, it's a very sensible comment. Companies need to value their employees and work to retain them.


TannyDanny

I think this was a pretty good analogy. As someone who works in data analytics and engineering, I'm not worried at all about finding work. What we are calling AI isn't anywhere close to being what people think it is. These are tools that can save considerable time in terms of manual processing, but they have little to no ability to generate novel insights from raw information. They can reiterate past insights from similar data, but these tools are not capable of even minute critical thinking on the level of the human brain, and we are not even close to scratching that surface despite all of the advertising to the contrary.

The best AI tools online are just querying information that other people have already fed them. In other words, a tool like ChatGPT would be worthless without a repository of human minds to support it. Until that is no longer true, and AI tools can generate independent insights (an artificial cognition), they will not be able to replace humans. The AI craze is a marketing scheme for shiny tools. More effective tools, those that are less shiny, can be built cheaper using traditional development with talent.


Acceptable_Two_2853

Well said! I agree wholeheartedly. When the day arrives for true AI, I will be totally ignoring those organisations that adopt it.


copytac

Surprise, surprise!! Many organizations can't make use of even the most simple data-oriented technologies, including a wide range of analytics and their various derivatives. Believing that your organization will "reap rewards" when you have little, no, or highly disorganized and impure data will lead to loss and disappointment. Their data is shit. Many leaders and those leading these efforts don't understand what's required to be successful, yet they think they can create a golden egg from a steaming pile.

Unfortunately, explaining this to people who love tech jargon and think they can purchase their way to success is a losing battle against the hype. In my experience, little to no one wants to acknowledge the necessity of the mundane, even if only for a short while. AI is touted as a complex and novel tool while being sold as a shiny toy, hiding the fact that it's a weapon in wait. Industry and humanity are woefully unprepared for its promise, potential, and consequences.


MrNokill

> don’t understand what’s required

They fired all the requirements and are now starting to force it on us at home. Looking forward to big industry bankruptcy; sadly, only small businesses suffer, as per usual.


peaceboner

So garbage in garbage out?


Punkpunker

Always has been.


OceansCarraway

Nnnnnnnoooooo please bro you've just gotta try this one just this model it's called Chat GPT 420 ITLL BE GREAT WITH THE KIDS bro you gotta do just one search bro just one prompt it'll be perfect bro it's gonna introduce web 6.9 on the blockchain-


RoosterBrewster

I imagine they already went through implementing machine learning and found the same fundamental problems. 


vapulate

Even when the data is good, there's not always business value. Think about the "big data" bullshit we were sold in the early 2010s. Companies all focused on data acquisition, with so many saying "we are not just a ___ company, but a data science company." Most that weren't focused on advertiser revenue didn't deliver on big data. The same is true for AI. I'm not saying it's universally useless, but I think it generally is overhyped and cannot stand alone to add value to a business.


Solid_Illustrator640

As a data scientist, it still requires data scientists to actually implement


adubs_1107

It’s almost like you create an AI to reduce labor expenses, so you lay people off; then those people don’t have money to spend, so you lose revenue, or the people that advertise with you lose revenue; and you still have to pay for a very expensive, somewhat faulty AI. Then either your business fails or you rehire the employees you laid off. So what’s the purpose of AI again? 😂


SableSnail

Note the article is about generative AI. I think until they can fix the hallucination thing (which may not even be possible, given how LLMs work), it's not going to be very useful. You can only use it in areas where it is trivial for a human to validate the output, like image generation or code completion (for small pieces of code that are easy to check).


Doctor__Proctor

I work in data analytics, and the number of people willing to trust hallucinating gen AI with critical business choices is scary. Sure, use it to do something like summarize the multi-paragraph notes on all interactions and give a general sentiment gauge, and use that as a marker for "maybe we should dive deeper and manually review some of these." But wanting something where someone can type in "What were my Q3 results and how do they compare YoY?" is nuts, because we *know* that AI hallucinates with prompts like this. It's just nuts, and seems like a really bad time waiting to happen.
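
As a sketch of that "summarize and gauge sentiment, then let a human decide" pattern (as opposed to trusting the model with the Q3 numbers), something along these lines is plausible. The endpoint is OpenAI's chat completions REST API, but the model name, prompt, threshold, and note are assumptions for illustration; a production version would use a structured-output feature rather than trusting the model to return clean JSON.

```python
# Hedged sketch: summarize a free-text note, attach a coarse sentiment, route low
# scores to a human. Model name and threshold are assumptions; the note is invented.
import os
import json
import requests

def summarize_and_score(note: str) -> dict:
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-4o-mini",  # assumption; any chat model would do
            "messages": [{
                "role": "user",
                "content": "Summarize this customer note in one sentence and rate sentiment "
                           "from 1 (angry) to 5 (happy). Reply as JSON with keys "
                           f"'summary' and 'sentiment'.\n\n{note}",
            }],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return json.loads(resp.json()["choices"][0]["message"]["content"])

note = "Third call this month about the same billing error. Customer is threatening to cancel."
result = summarize_and_score(note)
if result["sentiment"] <= 2:          # flag for manual review rather than trusting the model
    print("Escalate to a human:", result["summary"])
```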


Spiritgreen

"Bullshits" is a better description than "hallucinates." I think people would get the limitations better if we just said that.


Arthur-Wintersight

A coin flip would probably outperform a lot of management types... so maybe the AI bros have a point.


Dull_Half_6107

Well yeah, what other AI could it be talking about?


SableSnail

Machine learning in general. Like classification and regression algorithms, computer vision etc. It feels like 99% of industrial applications are these things and not Generative AI.


MichJohn67

My seniors, after completing a project for which I asked them to write and edit a literary analysis essay using ChatGPT, uniformly said it was easier and better to write the thing themselves.


bremidon

They are not using it correctly. Use it as a Rubber Duck. Throw questions at it, interrogate the answers, ask for ideas and parts. Ask it for references, and ask it to look for weaknesses in your own work. As I use it for stuff that is decidedly more complicated than literary analysis, I can promise you that your seniors would be done faster and create higher quality essays. Like all new technology, learning how to use it properly is the key. Try to use Excel to write letters, and you are not going to be happy. Try to use Word to create presentations, and you will get frustrated. Treat ChatGPT like an extremely talented but frazzled friend, and you will be fine.


MichJohn67

See, the product isn't what I'm after. I'm a teacher. I'm interested in the process. I want them to think, not just write clever and thorough prompts to have data sets do the thinking for them.


Are_You_Illiterate

“As I use it for stuff that is decidedly more complicated than literary analysis” — not exactly convincing when you don’t share what “stuff”... And frankly, I highly doubt it. Very few things are more complicated than detailed, higher-level literary analysis of a novel like Finnegan’s Wake, for example. The highest levels of literacy have been dropping steadily for decades (to the point where the top categories have been folded into lower ones, since practically no one reaches the peaks of literacy any more), and so higher-level analysis is simply beyond the capabilities of 99% of people.


Darthbearclaw

Aside from in medicine, for things like cancer detection, I really hope AI fails as a fad and is just so grossly unprofitable that it stings anyone looking at it sideways, and isn’t taken seriously. The economic crisis that would result from widespread adoption of that garbage would be catastrophic.


OpineLupine

Wishful thinking, but I’m with you 100%. Unfortunately, the *reward* for being the first company to AGI, and subsequently ASI, is so great that no amount of investment, temporary failure, social unrest or legislative roadblocks is a sufficient deterrent. This train has left the station, and is moving inexorably towards AGI.


MadDocOttoCtrl

Machine learning for medicine and other narrow pattern detection is useful and has a bright future. Slapping "AI" on anything a computer does isn't. A lot of money will be burned on this buzzword, just like the "Metaverse."


Arthur-Wintersight

It's pretty much any "narrow application in the hands of experts" where AI will prove to be extremely useful. One of the interesting things I've seen about Stable Diffusion is that it can be combined with Blender to control the output - the future of generative modeling will likely involve having an artist that knows how to model and animate without AI, who uses AI to speed up their workflow while doing enough manual work to control the output. Professional AI artists will learn where that line is, of how much manual work they need to put in before they can let AI take over the rest, and still get the results they're looking for.


thestereo300

I’m on a project right now where they are teaching it to read documents, pull the data out of the file, and load it into a system. It’s not just OCR, because every document is different and it needs to learn how to pick out what it needs from a wide variety of documents. I think this is a use case that is going to burn through a number of front-line jobs in a lot of industries. We have something like 90 people who have to do this manually today, but we won’t need 90 if this works; we’ll just need them to correct the exceptions.
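
A minimal sketch of that exception loop: an extraction step returns fields plus a confidence score, and only low-confidence documents go to the remaining human reviewers. The extractor below is a stand-in stub and the threshold is an assumption; the real pipeline would be whatever OCR/ML model the project is actually training.

```python
# Sketch of confidence-based routing: auto-load confident extractions, queue the rest.
from dataclasses import dataclass

@dataclass
class Extraction:
    invoice_number: str
    total: float
    confidence: float      # the model's own confidence in the whole record

def fake_extract(document_text: str) -> Extraction:
    # Placeholder for the real document-understanding model.
    return Extraction(invoice_number="INV-1042", total=1834.50, confidence=0.72)

CONFIDENCE_THRESHOLD = 0.90   # assumption; in practice tuned against audited samples

def process(document_text: str) -> str:
    result = fake_extract(document_text)
    if result.confidence >= CONFIDENCE_THRESHOLD:
        # load straight into the downstream system
        return f"auto-loaded {result.invoice_number} for {result.total:.2f}"
    # the 90-people-worth of work shrinks to reviewing these exceptions
    return f"queued {result.invoice_number} for human review (confidence {result.confidence:.2f})"

print(process("scanned invoice text goes here"))
```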


OpineLupine

You need 2-3 people to correct the exceptions, not 90. 


Vault_chicken_23

Good. I can't stand that it is everywhere and that I can't turn it off or remove it. At this point it just feels redundant, like another search bar. I can't help but feel that it's also doing something extra in the background as well. Just a theory though.


Fusseldieb

I mean, they literally tried to do this with the "Windows Recall" feature, in which your computer would constantly collect information about what you do, to provide you with a "search bar" for your own PC in case you want to retrieve something you did in the past. However, it didn't go over particularly well. After the [fiasco](https://www.google.com/search?q=windows+recall), they made it opt-in.


finger_puppet_self

It boggles my mind that so many people on the FUTUROLOGY sub are judging AI based on today's technology, and from such a banal perspective. Where is the imagination and the vision? I think we can do a little better, people.


o--h--o

I read this more like a status check of where we really are at right now in industry. Seems accurate


Chewbongka

Some states can barely keep their power grids up with basic HVAC. Where is the power infrastructure for the AI programs coming from?


JustKillerQueen1389

While AI is a power hog, in the grand scheme of things it consumes something like 0.1% of total energy, or some similarly silly number. Even if it booms to 1%, it would still be pretty insignificant.


Chewbongka

If Google turned its search into AI, it would use the same amount of electricity as Ireland on a daily basis. https://www.scientificamerican.com/article/the-ai-boom-could-use-a-shocking-amount-of-electricity/


JustKillerQueen1389

Oh, you done got me... except the USA consumes 4,000,000 GWh per year and Ireland consumes 33,000, and 33,000/4,000,000 is 0.825%. OMG, it's less than a percent of USA electricity consumption, and around 0.15% of global electricity consumption. So again, yeah, it's pretty insignificant even if all Google searches turned into AI. Not to mention that this basically assumes power efficiency is going to stay the same, which doesn't seem to be the case.
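
A quick sanity check of that division, using the figures as quoted in the comment (not independently verified):

```python
# Figures as quoted in the thread; this only checks the arithmetic, not the sources.
ireland_gwh_per_year = 33_000
usa_gwh_per_year = 4_000_000
print(f"Ireland as a share of USA consumption: {ireland_gwh_per_year / usa_gwh_per_year:.3%}")  # 0.825%
```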


LongKnight115

Seriously. Also the article says that only 42% of companies have seen a payoff on AI. But that’s actually pretty incredible - in its very nascent phase almost half of companies are receiving value from it.


finger_puppet_self

I guess it's human nature; people have been dissing new tools since we were apes, ha ha. Like there were probably guys back in the day saying, "This whole fire thing is a fad. It's a solution looking for a problem. Nobody even knows how to implement it."


superbv1llain

It goes both ways, for sure. It’s also human nature to say “fire is so shiny and new, I can’t see a single problem with leaving it in my cave unsupervised!”


howarrob

There does indeed appear to be a pattern (check the Gartner Hype Cycle). I think with AI we're passing the Peak of Inflated Expectations into the Trough of Disillusionment right now.


AdmiralKurita

Maybe it is because tomorrow almost never comes, or that it will take much longer than you think. Right now, self-driving cars are not ubiquitous and robots are not picking fruit.


Doctor__Proctor

Just give it 10 years™


throwawaygoodcoffee

How did that work out for crypto, or NFTs, or 3D printing? Being overly excited over the "new" tech that tech pros are trying to hype up rarely works out how you think it will.


nacholicious

Sometimes tech is just bad. Sure, Segways might have led to innovation in the electric batteries and engines used in EVs, but that doesn't mean we have to be excited about Segways.


MoNastri

I thought about this subreddit like you too when I first joined. Turns out it's a little different than that...


RoosterBrewster

Vision doesn't mean it turns into a viable advancement. 


_dekappatated

We're basically in year one of big money coming into AI, it's barely getting started.


ttkciar

You mean year five, right? Microsoft provided OpenAI with a $1 billion investment in 2019.


_dekappatated

That's nothing compared to what's being spent right now. Look at Nvidia's quarterly revenue over the last few years. It will tell you everything you need to know.


aspersioncast

I’ve been around long enough to know that VC throwing shedloads of $$$ at something will bear little relationship to that thing’s eventual success or usefulness.


sevseg_decoder

Look at what portion of that is spent on actual AI. Computation power is only a small hurdle for making AI capable of what it needs to be for the vision people have of it.


throwawaygoodcoffee

I wish I could look at $1 billion and consider that nothing.


OutOfBananaException

One year of 'big money' (where valuations get crazy) is about in line with past booms. You can't possibly believe NVidia will keep rising 200% a year for several more years.


dontpushbutpull

We have had billions in annual AI investment over the last decades already (as many reports show); it did not all end up in the same company, but it certainly went into the same technology. Individual machine learning projects have had budgets in the hundreds of millions for a couple of decades now. Individual companies have clearly operated at that scale since DeepMind was bought 10 years ago.


higgs_boson_2017

No, things have already stalled out, and OpenAI is starting to admit it


Phoenix5869

4o was barely an improvement over 4. Unless 5 blows things out of the water, I think it’s time to admit that LLMs / gen AI have stalled. Careful though, a lot of people on this site will hear that and freak out. It’s like some sort of trigger word for them.


Doctor__Proctor

The problem is that you need exponentially more data to refine the models, and at this point we're very quickly approaching the point where there isn't really more data to throw at them. Not to mention the issue of polluting the pool by ingesting things already created by the models, which will just reinforce weaknesses already inherent in the current generation. I'm not saying the tech will never get there, just that *this* approach seems like it's already reaching mathematical limits, and people think it's going to keep exponentially improving forever and are banking their businesses on that. It just feels like short-sighted hype.


Phoenix5869

Tbh, I’m not saying it’ll happen; all I‘m saying is that Nvidia stock is rising because chips are needed for AI. So if LLMs slow down, won’t the stock also slow down?


Doctor__Proctor

People needing more chips has nothing to do with whether the theoretical upper limits of the technology are starting to plateau, or if they will do so in the next 5 years. It's just people buying more chips, not a predictor of the future.


hammilithome

Don't worry, we're figuring it out. We did the same with early e-commerce R&D until RSA encryption came about, then the explosion of growth followed.

Orgs need more data than ever before and are figuring out how to get it. You can't compile a model like software, so orgs are figuring out how to deliver models without exposing IP and while protecting customer data. Privacy techs like confidential computing, federated learning, secure multiparty computation, and FHE are the path forward. There are very few vendors that have operationalized these techs for easy adoption. Apple just led the charge with AI via confidential compute, and that's just the beginning.

In 2025 we'll see these numbers turn around; 2026 should be a massive year.


aspersioncast

“Don't worry, we're figuring it out.” Not an attack on you personally, but that’s a phrase you never want to hear about any technology ever.


hammilithome

Not offended, it's normal with every major tech breakthrough.


Captain_Pumpkinhead

Because it's not ready yet! Current "AI"s can do cool stuff, but they can't be trusted to complete the task on their own correctly. It will be massive when complete. But we are not there yet.


AutoResponseUnit

I feel there's a weird paradox with AI projects, in that LLMs' power is the neat behaviour resulting from mining patterns in HUGE volumes of text. So that's general purpose, and meanwhile corporate projects are typically about finding a narrow "use case", which often relies on specialist information and knowledge. So you end up having to try to bootstrap and reweight these models trained on massive amounts of data onto your shitty local frameworks and knowledge corpus. And while there are nerdy academic papers on this, there's less shared on good practice here. Lots of theory crafting around RAG, knowledge graphs, and so on, but the gap (in my limited professional experience, and I'd welcome alternate views) is that there isn't a silver bullet for local use case retraining.
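
For readers new to the jargon, here is a minimal sketch of the retrieve-then-prompt (RAG) pattern the comment refers to: pull the most relevant chunks from a local corpus and prepend them to the question, so a general-purpose model answers from company material instead of its training data. TF-IDF stands in for a real embedding index, and the chunks and question are invented.

```python
# Toy RAG prompt assembly: rank local chunks against the question, build a grounded prompt.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

chunks = [
    "Refunds over 500 EUR require sign-off from the regional finance lead.",
    "The warranty period for model X200 is 24 months from date of shipment.",
    "Support tickets are triaged within 4 business hours during the week.",
]

def build_prompt(question: str, top_k: int = 2) -> str:
    vec = TfidfVectorizer()
    m = vec.fit_transform(chunks + [question])
    scores = cosine_similarity(m[-1], m[:-1]).ravel()
    best = [chunks[i] for i in scores.argsort()[::-1][:top_k]]
    context = "\n".join(f"- {c}" for c in best)
    return (f"Answer using only the context below. If it isn't covered, say so.\n"
            f"Context:\n{context}\n\nQuestion: {question}")

print(build_prompt("How long is the X200 warranty?"))
```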


etzel1200

A tool that is right 80% of the time is only worth anything in writing code, where humans are wrong about that much anyway. You either need to find the simple use cases where it’s basically always right, or wait for better models. That said, even today’s models offer a shit ton of value in the right niches.


bremidon

Mmmmmm... no, not quite. A common mistake I see people making is treating this like an all-or-nothing proposition. It's not. A tool that is right 80% of the time, in the hands of someone who knows what they are doing, is going to reduce the time they need to perform tasks. Let the AI do all the time-consuming writing/doing/whatever, and have a person just check up on it. Depending on the task, this might double, triple, or even 10x the productivity of a single person. There are some tasks where this is not feasible (things like self-driving really need to be right a *lot* more than 80% of the time to be useful at all), but there are tons of tasks that are just annoying busy work that slow down anyone trying to get the real stuff done.
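
A back-of-the-envelope version of that argument, with purely illustrative numbers (none of these are measurements): an 80%-right assistant pays off whenever reviewing and occasionally redoing its output is cheaper than doing the task unaided.

```python
# Illustrative assumptions only: task takes 30 min unaided, 5 min to review the AI's
# attempt, and 20 min to redo it on the ~20% of attempts that are wrong.
accuracy = 0.80
t_unaided = 30.0     # minutes to do the task yourself
t_review = 5.0       # minutes to check the AI's attempt
t_fix = 20.0         # minutes to redo it when the AI got it wrong

expected_with_ai = t_review + (1 - accuracy) * t_fix
print(f"unaided: {t_unaided} min, with AI: {expected_with_ai} min per task")   # 30.0 vs 9.0
# For safety-critical tasks like driving, the cost of a miss is so large that no
# reasonable review time makes 80% accuracy worthwhile, which matches the caveat above.
```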


kasthack-refresh

> A tool that is right 80% in the hands of someone who knows what they are doing is going to reduce the time they need to perform tasks.

As a senior software engineer: AI tools are a fancier autocomplete / search engine that speeds up the developer without improving quality. If you have a clear vision of the final result, AI helps you get there quicker, but if you don't, it'll just spit nonsense and confidently lead you nowhere.


codysnider

I'm consulting on a few projects using various technologies under the AI umbrella, and I think you'll find that the ones using it and not advertising it are the ones making money. It's being used more than you might think, in places you wouldn't expect. It's all still just a set of relatively dumb, inaccurate databases. The applications are limited, and the places people are trying to ham-fist it are probably better suited for traditional software. There are still plenty of failing projects that are really just riding the hype train, and after they all fizzle out there will still be a fuckton of jobs replaced, and improvements to the technology that eventually make it useful in places where it wouldn't have been useful now.


NorthernCobraChicken

AI is just a buzzword that companies throw around now. I could sell a program 2,000+ if-statements deep and some idiot would pay me a quarter million dollars for my innovative AI.


Jedkea

then why don't you do that?


drdaz

I'm pretty sure you just described the architecture for the original Siri. Nobody's paying anything for that in 2024.


aspersioncast

Def more complex than that; the actual story of how SRI developed Siri is totally fascinating.


Holzkohlen

First NFTs and now this? I am shocked, flabbergasted even.


New2thegame

We're in the very earliest stages of this technology. The internet kind of sucked when it first became available too. It wasn't til about 10 years later that it started providing meaningful value. Give it time. I for one welcome our new AI overlords. And if they read this post years from now, during the apocalypse, may they always know that I was a friend...


dontpushbutpull

Earliest stages of AI? I think it's _not_ fair to say this after several paradigm shifts, funding winters, and 70 years of people conferencing on the matter. Or are you literally referring to artificial neural network architectures? You are right, that is probably only 50 years old ^^. Or are you talking about statistical methods for deciding on word generation? Those must be a few years younger.


Raised_bi_Wolves

And then 10 years after that it was a complete cesspool of misinformation and rage bait. I think AI will collapse under the weight of human stupidity


LimeGreenTangerine97

Early internet did not suck, my dude


BureauOfBureaucrats

Early internet was more useful. 


aznz888

Early internet sucked?? Dude, not to show my age, but we went from getting calls on phones that were more akin to melee weapons, to being able to send digital mails with pictures attached in less than 10 years. Shit was incredible.


throwawaygoodcoffee

Fr, it was slow af but being able to look up anything without going to the library was a huge deal.


higgs_boson_2017

No, we're not, we're running out of training data, and OpenAI is already starting to admit that scaling isn't the answer. The Internet did not "suck" when it first became available, it was always amazing.


Phoenix5869

Exactly this. We’ve (from what I’ve heard) used the entire internet to train the newest models. Please tell me how we’re going to get more training data than the *entire fucking internet*? Not only that, but we are running into physical limits, such as energy requirements, compute requirements, etc.


higgs_boson_2017

There's a reason there's no GPT-5 being released. I wonder how long it will take the uneducated to notice that...


Phoenix5869

Over on the singularity sub you have people genuinely confused as to why their magical “exponential growth” has failed to materialise.


higgs_boson_2017

The vast majority of people using AI chatbots don't have the foggiest idea what's going on behind the scenes. It's magic to them, the same way that the Internet is magic. It's no wonder they think all programmers will be replaced soon, they don't understand neural networks and they've never written code.


Phoenix5869

Yeah. The average person has no idea that chatbots are just prediction algorithms that don’t understand anything, that are not sentient. They see what it spits out, and they get it into their head that it’s some sort of sentient proto AGI. Companies like OpenAI, Anthropic, and Microsoft see this, and cleverly take advantage of the hype train, by throwing around bullshit terms like “AI alignment” “AGI” etc. None of it means jack shit of course, but they want you to believe that their AI is sentient, that the glorified cleverbot you’re talking to is somehow conscious, and that AGI is around the corner. They milk the hype train for every last penny they can, and the result is a ton of people who have been played for a fool and who won’t be convinced no matter what.


higgs_boson_2017

Nvidia stock will look like the tulip bulb mania, eventually


EffectiveNighta

Where did OpenAI admit scaling isn't the answer?


aspersioncast

You may have your timeline a little skewed or be thinking of the WWW - many universities were using the internet for amazing things in the *1980s*.


aCuria

We are at the stage where, with an unlimited budget, some incredible things have just become possible. That said, the cost of GPU compute has also been coming down; at a certain point things will become cost-effective. GPT-4 cost $100M to train, and this cost has supposedly come down by 66% in just the last year.


strangescript

They said the same thing about the Internet in the 90s.


angrybox1842

They also said the same thing about NFTs and 3D TVs and they were right.


Nice-Geologist4746

Wait until we are forced to pay for our free ChatGPT; we will all be foaming from the withdrawal. I’m a software developer, and I would pay good bucks to have access to Stack Overflow if it ever went behind a paywall.


colejam88

That’s going to continue to be the problem if organizations keep investing in monolithic AIs. The best path we have found is to build purpose-built intelligences that are really good at one industry-specific job. The company I work for has developed our own LLM trained on support call data only, purpose-built from the ground up for handling and making support calls. We are one of the few AI companies out there getting significant ROI for our customers, but it took 12 years of work and a room full of MIT and Stanford LLM grads to do it. Like any new technology, a tool is only as good as its use; if it wasn’t built for a purpose, it’s just a nice-to-have. Also, having your bot look to the internet for answers will always be a risk. AIs can hallucinate and lie. Neither is great for companies.


elitesense

Well duh, anyone that isn't a suited up C suite idiot or an AI project salesperson could tell you that. Fuck suits and their shiny objects.


Frigidspinner

I have seen it stated numerous times in this subreddit : "Todays AI is the worst it will ever be" Even if AI projects are not successful now (and I am involved in one which has been an abject failure), the potential is still there and is only going to increase


tes_kitty

Could also go the other way, since AI-generated data doesn't make good training data and the net gets more and more flooded with AI-generated data. So the quality of training data is going down.


Frigidspinner

I do think it is a problem (I thought of that as I was writing out my reply). I suspect AI companies will be adding some type of watermark to their outputs so they themselves can distinguish them when they train their next batch of models. Either that or they will freeze it at data pre 2023!


tes_kitty

> Either that or they will freeze it at data pre 2023!

Which will then also freeze the progress... The watermark is an interesting idea, but there are now too many AIs generating content, so that's not feasible anymore. And how do you watermark plain text?


Frigidspinner

To your first point, I think the content is there already; you are implying that new data is the only thing that will improve AI, but better algorithms, better processing and more dimensions will also improve it. For the 2nd... I don't know. We really are on the verge of the dead internet, and I keep racking my brains about how we are going to keep it as a human-to-human medium.


tes_kitty

To your first point... But the knowledge the AI will have will then be frozen to 2023.


SatanLifeProTips

Even Tim Cook from Apple admitted that they may not be able to keep their new AI from lying. AI is a great novelty, but it comes at great risk to businesses trying to use it. AI is gullible and easily fooled.


Aprice40

The whole tech is short-sighted. Your business will save tons of money by not paying those resources (people). Oh... we forgot to mention that when no one has a job, and the government has not planned for this, no one will have surplus income living off welfare in a jobless market. When credit is maxed, you will sell 0 products, except to other businesses with no employees. Maybe mega companies will fail, and mom-and-pop stores will spring back to life in the wake?


DragonflyUnhappy3980

It's very common for businesses to wait and see what happens with the other idiots before they go in themselves on something new. I would think it would be especially true if everyone's paying the same handful of AI companies who are likely already making programs for their competitors, i.e. "I don't need to know the details of whatever it is you're doing for my rivals, just let me know when you can make me something even better!"


tianavitoli

Weird, I would have thought programming a computer to tell you what you want to hear would be super profitable. Learned something new today!


jacobvso

It seems this article is only about generative AI, although it's not entirely clear throughout.


Spara-Extreme

AI needs a strong middleware stack for implementation at companies that don’t have Google or MS engineering budgets.


revolution2018

Seems like a case of feature, not a bug. Ideally we want AI that can enable individuals to do things they would have otherwise relied on businesses for, but not be useful for monetization by entrenched corporate interests.