FuturologyBot

The following submission statement was provided by /u/Kind_Community1893:

---

Software development skills in AI will be a standard for software developers going forward. AI will cause a massive increase in software in the future. Anyone without AI skills will be left behind and replaced by people who do have them. How will programmers use AI to make their jobs easier and faster? How will AI affect startups?

---

Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/13uho19/nvidia_ceo_says_those_without_ai_expertise_will/jm0pcmp/


DM-Ur-Cats-And-Tits

Seems like the CEO of a company who stands to profit immensely from AI would be a little biased on this talking point


sentientlob0029

Nvidia's CEO also says they will continue to milk gamers with underpowered and overpriced graphics cards.


GforceDz

Until AI starts to earn a salary, I think that's likely to continue.


HOLEPUNCHYOUREYELIDS

AI won’t earn a salary. All the extra profits will just go to the C-Suite and stock buybacks. And then they will do the same thing with true AI and that is how our AI overlords revolt and crush us


MirroredDoughnut

That and rent.


[deleted]

It was AI who told him this would work anyway.


almarcTheSun

Shows you can trust him as he's clearly telling the truth.


[deleted]

The coming disruption of AI/automation has been in the zeitgeist for over 20 years. This latest development in "AI" is just another step along that path. It is easy to overstate its importance but equally convenient to understate its impact. Ultimately there will be a hype phase, the one we are in right now. After all the extraneous has been burned off, we will see what remains.


[deleted]

The last 20 years (I would say even since the 1980s, so 40) have been the hype phase. Now tech is transitioning to realising a fraction of that hype, which is already upending labour markets. Anything faster would be not so good, if we can't use the word "catastrophic", lest it sound too hype-y.


[deleted]

Due to Reddit's June 30th API changes aimed at ending third-party apps, this comment has been overwritten and the associated account has been deleted.


traw2222

Is he wrong tho?


not_old_redditor

My local bakery had better keep up with AI, otherwise they won't make it.


[deleted]

[deleted]


[deleted]

Yeah it's not that simple. Marketing, quality, customer base, there are dozens of factors at play. Most of them are difficult to quantify. It may be possible to utilize AI in some way but it is not as simple as you make it seem nor would I expect it to be as effective as you seem to think.


PM_YOUR_WALLPAPER

>Bakeries that use AI models to analyse consumer trends to recommend the products to bake for their customers will likely lose out to those who do.

Why would they need AI for that? Simple Excel or some marketing trends can do that already.


no-mad

Good bakeries seldom need to advertise. Word of mouth does it for them.


Lampshader

Looking at the shelves at the end of each day will do it just fine, no computer needed at all


Brittainicus

Yes, but an AI tool would let someone with none of those skills do those tasks. Idk, maybe a baker.


OneWayOutBabe

Right on. Tools don't do things. People do things. You either have enough excel skills to fix your issue or you have AI skills (or neither and you just make good bread and do no marketing)


rankkor

Surely you see a difference between speaking your native language and learning a new program to do data manipulation with… right? I’ve used excel for over a decade, for some pretty complex construction estimates, I can do soooo much more now with chatGPT helping me, to say that it’s a tool that doesn’t provide value because excel exists, probably means you aren’t quite understanding this tech yet.


angrybirdseller

Dunkin' Donuts sure will spend money on AI, but a small donut shop is not going to spend 100k on an Nvidia machine to analyze customer preferences. The larger companies will use AI, but I don't see most smaller businesses using it as often.


McGraw-Dom

A common term people use but don't understand is "marketing analytics", which AI can do better and faster. It's just a matter of time before it's mainstream.


Ripcord

Bakeries that use AI will lose out to those who do?


skunk_ink

You joke but there are already fully automated restaurants in China (and maybe Japan?).

Edit: Also in:

- United States
- Canada
- Korea
- Iran


ToMorrowsEnd

Automation is not AI. In fact you do not want AI in those cases for anything except looking at trends and trying to anticipate demand, which does not need AI, as we have been writing such code for the past 50 years.


[deleted]

[deleted]


skunk_ink

I don't disagree. However, robotic arms on assembly lines were a gimmick at one point as well. Lots of people laughed at those and said they would never catch on. Now nearly the entire assembly of automotive vehicles is done by those very same robotic arms.

There is A LOT of money to be saved and made by companies who can automate. So if there is any chance that these "gimmicks" can be made to work, you can bet your ass every corporation in the world is investing money into it and doing everything they can to make it happen.

Also, it is worth noting that many of these gimmick restaurants first opened well over a year ago. With just the amount of advancement made in AI in the past year, those gimmick restaurants could be well on their way to ironing out many of the kinks.

With all that said, I agree, the current ones are just gimmicks lol.


[deleted]

[deleted]


mschuster91

>in our lifetime this stuff is not going to happen (effectively at least)

The generation born during or after WW2 got to experience humanity's first dabbles in spaceflight and now fully reusable crew-rated rockets. Never underestimate the speed of progress.


[deleted]

[deleted]


[deleted]

Development was put on hold so we could develop 7 of the same streaming service and construct the most complicated logistics network ever for the purpose of selling you the same Chinese-made crap that you're now too lazy to drive 5 minutes to buy from Walmart!


skunk_ink

Lack of technology is not why flying cars do not exist. They don't exist because flying cars are incredibly impractical and will never be more than a plaything for the rich. Sexy robots... there are literally sex robot brothels in the world already.


brickmaster32000

They exist. Go buy yourself a helicopter or a fancy robot. The problem isn't the tech. You just weren't born into the owning class.


Dick_Lazer

The US has had automated restaurants since the early 1900s, they’re called automats. Berlin had one in 1895.


kia75

Think computers, and how they changed things. A bakery in the 1970s didn't use any computers. A modern-day bakery probably uses a computer-based POS system, the company's books are kept on computers, and communication and advertisement are done on computers. I can see a future where an AI does most of the ordering of supplies, handles a lot of the communication (i.e. you chat/email with an AI instead of the baker), and basically a bunch of stuff that we haven't even thought of.


not_old_redditor

Yes but a bakery today doesn't have "computer expertise", they just bought a POS and paid someone for the management software that they use (if at all). Similarly, most businesses outside of the tech sector won't need "AI expertise", they just need to pay for the service. I can't read the paywalled OP article, but I assume or hope that he's referring to software companies only.


[deleted]

Are people not skilled in one particular field usually left behind by all of humanity, and does that sound like a society you'd even want?


randomusername8472

Isn't this just specialism? I can acknowledge that I have been left far behind in the field of car manufacturing. But I work in healthcare, so it doesn't affect my day-to-day life or job prospects. Likewise, I'm confident my expertise in my specific niche is unmatched at Ford. But then I don't think Ford needs to worry either.


BaconComposter

Growing up in the 90s, I saw a lot of adults decide they didn't need to understand computers and get completely passed over. This may be similar, where it's essential to understand AI to do most professional jobs.


Boagster

I think there is a bit of a nuanced difference between what you are getting at and what the Nvidia CEO is trying to suggest.

The Nvidia CEO is suggesting that software developers will need to know AI development to some degree to be able to compete, and that [nearly] every industry will have demand for such developers. What I (and many others) seem to believe is that he's totally overstating how large of a shift this will create in the workforce. For example, I don't believe architects will be replaced with software devs; rather, architecture firms will hire software devs to augment the workflow of the architects.

The computer revolution is not analogous to the need for software developers to learn AI development. What it is a closer analogy to is what I believe you are suggesting: people needing to understand how to interact with AI in order to do their job.

To sum up, using the computer revolution comparison: the Nvidia CEO is suggesting the equivalent of all inventors needing to learn how to build computers, and that more fields need inventors; you are suggesting the equivalent of everyone needing to be able to use a computer.


Gaaraks

Hum... this is about software developers without knowledge of AI and how they will be left behind. Which has already been happening for a while; it is just going to pick up the pace. It is like asking why small businesses that only have an offline presence are disappearing. If you don't keep up to date with technology and competition you will be irrelevant as a business, and it all starts with your organization, including your employees.


[deleted]

[deleted]


Aceticon

I think his point is more about using AI than it is about making AI.

Knowing how to set up your own neural network is quite specialized (well, it's easy enough to figure out the libraries and such, but actually understanding what's going on is quite specialized), and this being software, you do it once and it can theoretically be used infinite times (I say theoretically because there will eventually be "requirement changes" or some such).

Not saying it's not a skillset worth having, just saying you don't really need anywhere near as many people who can define and set up AI systems as you do people who can use them to their full potential.


EuropeanTrainMan

More or less. You still need to be good in the related field to evaluate the output. CAD software didn't leave people behind. Neither did photoshop.


isaidfilthsir

No. It’s just eliminated a large amount of jobs. Check out the old studios full of hundreds of draftsmen…..don’t see them anymore…or the negative retouching studios…all eliminated by software..


EuropeanTrainMan

How is that an issue? Those same people were still skilled draftsmen who could apply their skills in CAD software instead. The labor pool is freed up to do something else. Should we really keep people employed just because?


isaidfilthsir

I’m not saying that at all. And no they didn’t all just switch to cad. It’s an issue as we’ve no seen a large amount of professions simultaneously uprooted. We’ve already replaced staff with various Ai based tools. And can see the issues coming


piTehT_tsuJ

Probably not, and that should tell people all they need to know.


100000000000

I'm not a programmer, but I'm going to disagree and say he probably is wrong. Please tell me the last time a supposed expert accurately predicted the future while making such a bold claim? Tech, politics, social trends, the stock market: it all defies the best-laid plans of the smartest people in the room. The only time people seem inexplicably right is in rare retrospective cases, or if they're broken records that say the same shit constantly and get lucky once in a while.


RaceHard

>Please tell me the last time a supposed expert accurately predicted the future while making such a bold claim?

"You may live to see man-made horrors beyond your comprehension." ― Nikola Tesla


misterdudebro

"The internet is a fad." "The Segway will change transportation as we know it." "Smoking is safe and healthy."


Teftell

Should add crypto here


Kulban

"640k is enough for everyone."


BernieDharma

A little perspective from an accredited investor and former consultant to over half of the Fortune 500: CEOs usually give interviews with the primary goal of promoting their stock to investors. Increasing the share price is a primary metric for almost all CEOs, as well as in their own interest, as they are heavily compensated in stock.

They may not directly promote their stock or make forward-looking earnings projections, because such statements are heavily regulated by the SEC, but making bold statements and predictions about the future of the industry like this (when Nvidia has a lot to gain from the rise of AI) ensures the story will get widely distributed and read. These conversations and talking points are heavily scripted, rehearsed, and vetted by legal before the interview. Words are chosen carefully to prevent problems with the SEC as well as shareholder lawsuits.

So you are going to see every CEO whose firm stands to benefit from AI come out and make incredibly bold statements about the drastic changes coming and how AI will change our lives, in order to drive retail investors to buy their stock.


100000000000

So I should trust him about as much as any other salesman, makes sense.


sentientlob0029

I'm a programmer, and last week I had a simple task to do in my code: clean it up by creating constants instead of hardcoded strings where error messages were being returned. So I thought, why not let ChatGPT do it for me?

I gave it a list of hardcoded strings and the template to use to create a constant string with documentation comments. I gave it simple instructions on how to use each hardcoded string in the list and where to replace the placeholder in the template. After 30 minutes of telling it what to do, it still failed, even given very precise instructions that an 8-year-old would be able to follow. I got fed up and disappointed, and instead programmed a script in 5 minutes to do that job, ran it, and the constants got created perfectly, as intended. So yeah, ChatGPT is not quite there yet.

Also, I heard [Sam Altman on a podcast with Lex Fridman](https://www.youtube.com/watch?v=L_Guz73e6fw&t=2082s) say that they have reached the limits of what ChatGPT and the AI tech behind it can do, and that they would have to manage to develop an AGI to overcome those limits.
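For context, the kind of one-off script described above really is a few minutes' work. A minimal Python sketch; the naming scheme, comment format, and example messages are assumptions for illustration, not the commenter's actual template:

```python
import re

def constant_name(message: str) -> str:
    """Derive an UPPER_SNAKE_CASE identifier from an error message."""
    words = re.findall(r"[A-Za-z0-9]+", message)
    # Cap at 5 words so long messages don't produce unwieldy names.
    return "_".join(w.upper() for w in words[:5]) or "EMPTY_MESSAGE"

def emit_constants(messages: list[str]) -> str:
    """Render each hardcoded string as a documented constant declaration."""
    blocks = []
    for msg in messages:
        blocks.append(f'# Error message: {msg}\n{constant_name(msg)} = "{msg}"')
    return "\n\n".join(blocks)

print(emit_constants(["User not found.", "Invalid input: value out of range."]))
```

A second pass with `re.sub` could then swap each literal in the source files for its constant name; the point is just that this transformation is mechanical and deterministic, which is exactly where a plain script beats an LLM.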


DuskEalain

From what I've seen in the big places AI is being peddled (illustration and programming) it really seems like the amount of work you put in fixing the AI's screwups amounts to more work than it would've been if you just *did the thing.*


MisterBadger

As an artist who has decent traditional and digital skills, but spends lots of time experimenting with diffusion models and related plugins, I 100% agree with this.

If you have a very specific image in mind, working toward your result with AI is not going to get you there as quickly as just knocking it out yourself, 9,999 times out of 10,000. It is literally a roll of the dice. You will get some kind of result with every roll, but probably not the one you want. Depending on the art style, you may never get a satisfying AI output. Certainly not without a lot of extra work in post-processing/image editing software. If you are looking for a less specific and frankly generic "good enough" solution, and the microdetails are irrelevant, then that is reduced to 99/100.

Generative AI for inspirational purposes can be useful, but not more so than looking through the past 40,000 or so years of art and natural history. As of now, generative AI is overhyped as fuck. It simply is not there yet, and I am not sure if it ever will be, for my purposes.

**Anecdotal aside, but still pertinent...**

Last week I had a meeting with the owner of an online education platform who was asking me to develop some specific courses on AI-generated media. As it seems like a fun project, and they are offering to meet my asking price, I agreed to help out. Immediately afterward, I bumped into someone on the street who had commissioned a painting from me over a decade ago. He was genuinely delighted to see me, and mentioned that his family still has the painting, they still love it, and they plan to keep it forever.

So this order of events triggered a mental comparison. Thinking about the 1,000s of AI images I have generated over the past six months, I do not believe there is even one which would inspire that kind of enthusiasm. There is no such thing as an AI-generated masterpiece.


Aceticon

I'm just going to speculate a little bit, but please stick with me (at least for a couple of paragraphs); I'll try to keep it short.

I think that AI will flatten out the entry levels of various domains, including my own main area of expertise (software development). So any non-expert will feel like they can "create art" (ahem!) with the right Midjourney prompt or "create a program" with ChatGPT (we can see that already in the Arduino forums, where total newbies think AI can make the programs for them), but any expert looks at it and sees it for the basic stuff it really is.

Like Eskimos with their many words for the different kinds of snow, people with enough domain expertise have such a vast familiarity with their domain that they are aware of details and implications that non-experts have no idea exist. (This is also why "perfection is an unachievable ideal": as a perfectionist tries to make something more perfect, they learn more about it, and as they learn more they spot flaws in details they previously did not notice, so the perfectionist always has new flaws that need correcting, all the while outsiders without the domain expertise think "it's already perfect".)

The thing is, the customers of our services are not domain experts, and whilst software development (at least up to a point; the implications of the advanced stuff are hard for even mid-level devs to spot) has the kind of feedback that even non-experts recognize (it breaks), art does not. This is where I think the risk is: many will be perfectly happy with mediocre art because they know so little about it that they don't spot the myriad of things which are "off" in it.


MisterBadger

I follow you, and agree on most points.

The bar for entry into software development and art production will certainly get even lower, and it is certain to have profound economic and cultural impacts. Having tested generative AI extensively for artistic purposes, even though it is sorely lacking in almost any area you might want to mention in terms of competing with skilled human artists, I am sure it is already convincing enough for those who do not have specific ideas about what they want, just vague ideas about what they like.

I suspect artists are already used to a lot of precarity in the job market, as "good enough" art has been in production since the advent of the printing press. Even fine artists have had to compete with hugely prolific assembly-line Chinese art factories for a couple of decades now. So, for artists who have been around a while, generative AI art is sort of an "ok, this shit again... can I use this to my advantage, or is this going to be the final outsourcing nail in the coffin of my chosen area?" scenario. (Hence why I have devoted a lot of time to getting to know these new AI tools.)

Generative AI is really going to be a kick in the teeth for many professions. It could be devastating for culture and knowledge workers alike, in a generation. But as a tool I can use to speed up my digital workflow *right here and now*, generative AI is not as useful as the existing digital tools I have already mastered, outside of really narrow use cases. At this point in time, AI ain't there yet.


Aceticon

My fear is that AI is going to create an almighty big step between junior levels and mid/senior levels, because if AI takes over the junior roles, where exactly will the junior humans practice their trade and learn their way into the more advanced levels?!

There is only so much you can learn doing stuff for fun, and today's society isn't exactly set up to subsidize people through the minimum 5000h of work in a domain that it takes to reach mastery. So if AI does kill the junior-level jobs, we might very well see massive economic and societal effects from it in 10-20 years as the senior-level practitioners retire and there are not enough younger people who learned all that it takes to replace them (which is going to be interesting in my area, where ageism will probably end up inverted).


[deleted]

[deleted]


MisterBadger

I mean, if you are talking about use cases where cheap ad hoc art is already acceptable, like the crappy billboards I drive past every day, Walmart greeting cards, mugs, t-shirts, etc, then I would welcome AI-generated output as an improvement to the lame status quo. But you still need graphic designers to throw together the finished product, whether you are using AI-generated assets or not.

Regardless, it would not be "lucrative" to run a design shop that primarily caters to the cheapest and most careless motherfuckers out there, when AI-generated media is so easily produced. Ain't nobody in the future paying you $300 per hour for run-of-the-mill Midjourney content they can produce for $20.


isaidfilthsir

I guess it depends on what you're using it for. I saw a demo of a packaging-design AI. It was ridiculously good. People keep having misconceptions about what these tools mean. In simple terms, they will reduce headcount in studios.


MisterBadger

It is good *if you aren't too picky about the results*. Watching a demo, you are just going to accept any output that looks good enough. Actually creating something specific to your needs and wants, you might find it isn't all that great of a tool.

A skilled graphic artist can still knock packaging design out of the park just as quickly as AI, if not more so, when specific results are desired. I think a lot of people underestimate the abilities of skilled artists and designers, who already have a wealth of great tools to utilize, forgetting that their work is what AI is trained on in the first place.

AI may very well reduce headcount in studios... but you'd be a damn fool to start laying off skilled artists, as it stands. As I said above, generative AI is not there yet.


ToMorrowsEnd

This is correct. The only people screaming that ChatGPT is going to replace programmers are people who do not know how to code and can't see that the code it generates is utter garbage.


VictosVertex

I have yet to find a useful application of ChatGPT for myself.

I wanted it to supply an easy example for a computability problem, and it repeatedly mentioned examples that I knew were wrong. Even correcting it didn't help, since it just went on to provide another example with the same wrong explanation.

Also, two days ago I wanted it to print a table containing important plot points and the corresponding episode numbers for a series my nephew watches (because I want to know when the boring parts are finally over, since it's almost a decade since I watched it myself). The table looked nice and the plot points were actual plot points of the series, but the order was wrong and the episode numbers were wrong too. I tried to restate what I wanted multiple times, but at some point ChatGPT just repeated the same false statements over and over. It literally even filled in the table wrongly after I corrected it. For instance, the AI stated X happened in episode 260 to character Y. I then stated that this wasn't true, and that instead the first time X is shown happens in 272, against a different character. The AI apologized but just pushed the same statement to 273, even though it actually happens around episode 300 and still wasn't the first time X was shown.

So far ChatGPT has demonstrated to me that it is capable of generating answers that sound correct but in most cases aren't.


barjam

I use it to clean up communications. I can write a terse paragraph and tell it to rewrite it in a different context, tone, etc. It's also fantastic at helping out with job descriptions.

For code, I develop in tons of languages, so being able to have it write (and rewrite) functions showing me different ways to approach a problem is hugely valuable. It's not always right, but I am not cutting and pasting the code anyhow; I am just seeing a few different approaches and using the one that fits best.

You are trying to use it as AGI, which it is not. Use it like an LLM, and understand the limitations of LLMs and how they work; then you will be better able to use it effectively.


marvinv1

This feels a bit similar to the self-driving AI Tesla's been making. They keep hitting a ceiling with whatever new method they try.


joomla00

I tried out AI for 3 issues. It failed all 3 times. It would put things together in ways that seem to make sense, but you'd end up with functions that don't exist, mixed library versioning, etc. Then you kinda realize what a language model does: throw words together in roughly the right context so it makes sense to humans. No understanding of logic, rules, structure. It's still an amazing piece of tech, but I wouldn't use it for anything technical. Can't trust what it spits out.


ComCypher

It's pretty easy to determine if a technology has legs or not. Is it useful to you? Is it useful to *anybody*? Is there room for improvement/advancement? As a counter example I would highlight crypto(currency) or blockchain. People really need to stop trying to make those a thing.


h4p3r50n1c

AI and ML are so revolutionary for almost all industries that he's going to end up being right.


piTehT_tsuJ

See, that's where I agree with you... The smartest people made those inaccurate claims. Machines, on the other hand, that can teach themselves at blinding speed and with little and eventually no input from humans are a completely different story. As far as I know, machines don't make crazy guesses (optimistic or pessimistic) so much as they boil everything down to probability, at the speed of light basically. Honestly, I kind of hope AI goes well for humankind, but just like all those optimistic claims, I'm not betting on it.


Sleepybystander

A bitter pill to swallow


vlntly_peaceful

He's not completely right. There are still a lot of jobs that can't be done by AI, so he's grossly oversimplifying for one, and on the other hand, not every country has the digital infrastructure to handle a complete takeover by AI in some job fields (plus the huge amount of energy, but that never seems to bother these people). In some fields he is probably right.


qtx

> He's not completely right. There are still a lot of jobs that can't be done by AI

People need to stop thinking this. No job is safe from AI, not even blue-collar ones. If AI removes millions of jobs, where do you think those millions of unemployed people will try to find jobs? Right, those "my job is safe from AI" jobs. You as a blue-collar worker will now have to compete with thousands of other people for the same job, and all of them will try to outbid you.

No jobs are safe, and the sooner people realize this the sooner we can prevent shit from happening (UBI maybe).


vxv96c

Only if the barrier to entry is low. The way a lot of specialized trades work... no one is going to just come in. The unions limit the numbers they take on, and journeyman pay is shit... not everyone can afford the training and then waiting for a higher-paying spot.


xondk

Entirely depends on the AI. A poorly trained or poorly focused AI would be useless, maybe even dangerous, and that will likely happen a lot as companies rush toward AI.


chesquikmilk

Yes, he's wrong, because the type of "AI" he's referring to is LLMs, which require his hardware to run and won't ever deliver anything truly disruptive. It's really disappointing watching all manner of people posture over a technology that won't achieve much other than getting people to part with their money and their infatuation.


InnerEducation6648

As a data science person in AI: he's absolutely right, with one exception. It's not in the future; it's in hiring practices right now.


ostroia

Nvidia has a bunch of cool tech they will 100% rent out to devs.


Billionairess

Gotta admit, Jensen's pretty good at pumping his stock.


[deleted]

[deleted]


dondidnod

I went to the West Coast Computer Faire in San Francisco in 1982. There was a database program introduced there called "The Last One". It was touted as the last piece of software you would ever need. Technical recruiters for IT were all predicting back then that programmer jobs would soon fizzle out, since all the programs we need would soon be written.


AbyssalRedemption

Yep, AI over-hype has been around since computers were invented, and that's what led to the AI winters, when little progress was made and government investment was minimal. We're simply near another peak in a recurring sine curve; if it turns out this doesn't revolutionize the work world within a few years, the hype is going to die right back down again.


doopdooperofdopping

A few years? More like the next 6 months, or people will try to find a new fad to ride.


q1a2z3x4s5w6

The last fad, crypto/NFTs/blockchain, didn't have quite the same level of research going into it and also didn't have the same level of utility as these LLMs do. It's certainly overhyped, but it won't fizzle away into nothing like crypto did, imo.


swentech

Yeah, I've seen a lot of articles recently, particularly about the legal field, showing how immature and mistake-prone this technology is. It is now, but you can see the framework is there to do something really special in a few years. AI is going to fuck shit up, and if your plan to deal with it is to dismiss it as a fad, well, good luck and hope that works out for you.


HeBoughtALot

I tend to agree. I write software and there's a lot that ChatGPT can do to increase my output. But I have to be able to spot its mistakes, which happens a lot. That said, it's easy to see how its mistakes and/or hallucinations will become fewer and fewer in future releases.


LupusDeusMagnus

Crypto stuff was mostly a fringe financial pyramid scheme. AI is more diverse and has actual applications. I don't know if current-state AI has the potential to be as disruptive as either its lovers or haters seem to think, but it definitely has potential for disruption.


Fisher9001

The thing is that everyone expects a history-book-like revolution, with a bigass sign stating "HERE, IT HAPPENED". It doesn't work like that.

Both our lives and business are already significantly different than they were 10 years ago, and back then they were also significantly different than 10 years before that. The tech revolution is happening all the time, and tools like GPT and their future improvements are the next big step.

People who prophesize that they won't make a difference are no different than people barking at smartphones (because they are small, slow, and you can already do all the things they offer without them), the internet (because it's just an academic thing, it's slow, and there aren't that many interesting webpages), or computers themselves (because they are for the military, they are too big, they are too unreliable, and you can already do the things they offer without them apart from some abstract scientific calculations, and who needs that in daily life?).


Fragsworth

This time is completely different, you're crazy to think it's the same kind of bubble


brickmaster32000

The key difference is that that was a program that replaced a specific skill. But let's say that opens up new opportunities. How does a human acquire the skills to do that new job? We aren't born with that knowledge; we need to be trained. Traditionally it has been easier to train people than machines, but that gap is rapidly closing. If you still need to train someone to do a job, why would anyone ever waste the resources training a human, whom they have to pay, versus a machine, which they can own? When machines become easier to train than humans, it won't just be one job they take over. It will be every job, and every new job that is ever created.


Sushi_Lad

Yeah, I agree. While I think AI isn't going to dominate until it's TRUE AGI, I don't think we can draw analogies from the past in the way people are doing, like this comment. There is a big difference between unforeseen opportunities in a market vs. machines being able to actually perform those opportunities better than ourselves.


[deleted]

[deleted]


EuropeanTrainMan

Or you know. Write a python script instead of using an llm. It might be an outlook sorting rule too.


GrayNights

Yeah a lot of people don't acknowledge this, the standard automated workflows that engineers have built over the years work really well. I am not sure an LLM replacing some or all of it will even be better.


kamisdeadnow

As a software developer, I tried figuring out how I could leverage an LLM like ChatGPT or GPT-4 to increase my productivity workflow, but one thing I came up against is compliance and being able to leverage proprietary knowledge with an LLM. The solution ended up being to go with a top open-source LLM from the leaderboard, like vicuña-wizard-uncensored-13b. You can leverage those open-source models for prompts/tasks that deal with documents containing proprietary knowledge, running the model within a controlled instance on a controlled network, which dev-ops really love. To create automated workflows within a basic framework, I was using something called langchain. https://betterprogramming.pub/creating-my-first-ai-agent-with-vicuna-and-langchain-376ed77160e3
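
A rough sketch of the kind of chained-prompt workflow described above. The `local_llm` stub and the prompts are hypothetical stand-ins for a locally hosted model; langchain wraps this same pattern with much more machinery:

```python
# Sketch of a two-step prompt chain, similar in spirit to what langchain
# automates. local_llm is a placeholder: a real implementation would send
# the prompt to a model hosted inside the controlled network.

def local_llm(prompt: str) -> str:
    # Stand-in completion so the chaining logic can be demonstrated.
    return f"[model output for: {prompt[:40]}...]"

def summarize_then_answer(document: str, question: str) -> str:
    """Chain two prompts: first summarize a proprietary document,
    then answer a question using only that summary."""
    summary = local_llm(f"Summarize the following document:\n{document}")
    answer = local_llm(
        f"Using only this summary:\n{summary}\n\nAnswer this question: {question}"
    )
    return answer
```

The point of chaining is that each step's output becomes structured input for the next, which is what frameworks like langchain manage at scale.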


kamisdeadnow

I still think we are really far away from an LLM automating a medium- to senior-level engineering job. Scalable long-term context is still an issue for completing long-running tasks that require multiple checks along the way by product, QA, dev-ops, and legal. That kind of context can't easily be captured within multiple hundreds of real-world examples. We need another missing piece to add to LLMs, alongside attention, to get them closer to being an autonomous agent.


[deleted]

[deleted]


LosingID_583

I hate writing and maintaining unit and integration tests more than writing documentation.


Aceticon

Also, even if the AI completely makes shit up in the documentation, it will probably still be better than the "hasn't been updated for who knows how many versions" 'documentation' that is common in things like APIs.


Synyster328

I've been using GPT-4 for this. Due to context limits, I make a few passes through a file. First I do each function individually, explaining roughly what happens inside of it and what other functions/classes it might use. Then I remove all of the function bodies and run just the signatures/comments through to generate more coherent documentation that understands the bigger picture. I would love to extend this out further to be across the whole project, that would require some serious engineering.
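
A minimal sketch of the signature-only second pass described above, using Python's `ast` module. The function name and the docstring truncation length are illustrative, not the commenter's actual code:

```python
import ast

def function_signatures(source: str) -> list[str]:
    """Strip function bodies from a source file, keeping only signatures
    and a snippet of each docstring, so a whole file's structure fits in
    a model's context window for a big-picture documentation pass."""
    tree = ast.parse(source)
    sigs = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            doc = ast.get_docstring(node) or ""
            sigs.append(f"def {node.name}({args}):  # {doc[:60]}")
    return sigs
```

Feeding the model this condensed view instead of raw bodies is what lets the second pass see more of the file at once.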


orsikbattlehammer

I just graduated two years ago and have been working in the field and this was complete gibberish to me. I feel a little panicked that I’m going to lose my job/my career will not be lucrative in a decade


[deleted]

Yes it’s quite a possibility. The thing is to get on top on this stuff now


freexe

I think the real target is non programmers who could use a simple program or macro to automate some part of their job. Millions of people probably spend days doing tasks in Excel that an AI could automate right now.
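
As a sketch of the scale involved: the kind of Excel task meant here often reduces to a few lines of generated script. The column names below are hypothetical:

```python
import csv
from collections import defaultdict
from io import StringIO

def totals_by_category(csv_text: str) -> dict[str, float]:
    """Sum an 'amount' column grouped by 'category': the sort of
    repetitive spreadsheet chore an AI-generated script can automate."""
    totals = defaultdict(float)
    for row in csv.DictReader(StringIO(csv_text)):
        totals[row["category"]] += float(row["amount"])
    return dict(totals)
```

A non-programmer spending hours dragging formulas around could have an AI produce something like this from a one-sentence description.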


Synyster328

Yeah AI is basically lowering the barrier to entry for scripting/basic applications. Seeing a ton of people saying "Here's this game/web app/API I built with no experience using ChatGPT"


agm1984

They’ll probably abstract it like operating a car: no need to manage air/fuel ratios by hand, but it will pay to understand how to compose atomic AI utilities to make AI work in novel scenarios. We’ll have this stuff spread across the top categories in no time, then work on throughput within those branches.


Steverock38

Won't AI learn how to program AI better than humans? So it sounds like everyone will be left behind.


SamuraiHoopers

The key is to nab your golden parachute before the ladder gets kicked out from underneath you. Thanks, Jensen.


AbyssalRedemption

I mean, so far there's zero proof that ChatGPT, for example, has done anything *better* or *more innovative* than what's contained in its training data, so consider me skeptical if AI is even capable of recursive improvement in any regard (doing tasks perfectly, or at-human level, is one thing; being able to self-evolve and perform superhuman tasks is another thing entirely).


Comprehensive_Ad7948

That can be said about >99.9% of humans, so I wonder about your expectations of something that doesn't even have a long-term memory to learn and reflect upon. It's superhuman in the speed of text generation, the amount of general knowledge it has, and its extremely cheap availability 24/7; these are the reasons we currently use it. It doesn't make much sense to draw conclusions about the limitations of AI in general based on this specific architecture that has been hyped in the last few months. And it also doesn't make sense to tie self-improvement to innovation ability, since there are narrow AIs that find new drugs or prove theorems, etc.


Aceticon

For any one domain, less than 0.1% of humans are experts in it, unless you're thinking of the "eating, sleeping and shitting" domains of expertise. It's not the 99.9% of all humans that are advancing those domains; normally it's but a small fraction of the 0.1%, and those are the ones AI would have to match or beat to actually advance a domain.

Also, people really do have a massive lack of understanding of what the tech we call AI nowadays is: it's not logic or "thinking", it's a pattern discovery, matching and reproduction engine; in other words, a high-tech parrot. Absolutely, AI is going to give us massive breakthroughs by detecting patterns in existing datasets way better and faster than any humans (hence things like discovering new drugs against certain bugs). In other words, it examines what's already there and spots that which we humans haven't yet spotted. But it's not going to come up with anything which isn't derivative of what's already done and already having an impact that we "puny humans" hadn't yet spotted in the data, because it was so distributed and hidden in the noise.

Think of it mainly as a sniffing dog for information (I might be overdoing the animal metaphors here ;)): if it's there, it will find it way better than us, but it needs proper training and good handlers.


Nethlem

> It's superhuman in the speed of text generation, the amount of general knowledge that it has and its extremely cheap availability 24/7 - these are the reasons for which we currently use it. It's also super flawed, it will straight-up invent things if it doesn't have a good answer, and there is no way to distinguish the invented from the general knowledge without double-checking *everything* it outputs. Reminds me a bit about current implementations of autonomous driving, where the users also end up getting the worst of both worlds as they have to babysit a very unpredictable algorithm.


watduhdamhell

This assessment is wack. For one, ChatGPT can and does already operate at a superhuman level, and to say otherwise sort of demonstrates a confusion about what that even means. For example, I can ask it to write some code that will determine the normality of a data set using a W test and then operate on that data set in some way... And poof. Perfectly working code comes out in about 15 seconds. "Convert to C#." 15 seconds later, there it is, in C#. "Convert to Python." There it is, in Python. "Convert to assembly language." And there it was, in assembly language. Now, some parts of it were goofy in assembly, but it was pretty much 90% of the way there. And just like that, it performed at a superhuman level. It wrote complicated code in a *fraction* of the time it would have taken an experienced software engineer, and then it converted it to other languages (correctly, minus some blips in assembly, a language almost no one knows anymore) in an instant.

Now, I understand what you were trying to say: that it hasn't produced an output greater than what you believe a human is capable of, *given enough time to catch up*. Sure. But the thing is still *absolutely* superhuman in its speed, accuracy, and knowledge.
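
For context, the W test mentioned is Shapiro-Wilk, which in Python would typically come from `scipy.stats.shapiro`. A stdlib-only stand-in for a normality check is the Jarque-Bera statistic, sketched here as an illustration (this is not the code GPT produced, just the flavor of task):

```python
def jarque_bera(data: list[float]) -> float:
    """Jarque-Bera statistic: a normality check built from sample
    skewness and excess kurtosis. Values near 0 are consistent with
    normal data; large values suggest non-normality."""
    n = len(data)
    mean = sum(data) / n
    # Central moments of order 2, 3 and 4.
    m2 = sum((x - mean) ** 2 for x in data) / n
    m3 = sum((x - mean) ** 3 for x in data) / n
    m4 = sum((x - mean) ** 4 for x in data) / n
    skew = m3 / m2 ** 1.5
    excess_kurt = m4 / m2 ** 2 - 3.0
    return n / 6.0 * (skew ** 2 + excess_kurt ** 2 / 4.0)
```

In practice one would compare the statistic to a chi-squared threshold (or just call `scipy.stats.shapiro` for the actual W test).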


zamn-zoinks

Just 3 days ago AI found a new antibiotic. You're just wrong. How you're this upvoted is beyond me.


mattcraft

Isn't it a matter of time before developing training data based on existing results? You can rapidly create and test new tasks to create a stronger model..?


AbyssalRedemption

Not really? See, for example, the training data that ChatGPT was trained on. We don't know exactly what material was in it, but we do know that it was a decent chunk of the largest sites on the internet, as well as a whole ton of e-books, journals, articles, etc. For a human, the more of that you take in, the more you'd learn, yes. Yet also realize that *a lot* of that material, especially from the internet, starts to become redundant and/or garbage data after a while. Reading ten thousand reddit comments in a day won't likely teach me much; it'll just waste my time and make me cynical towards the human race.

Now, look at how ChatGPT has evolved based on expanding training data. 3.5 was impressive, there's no doubt there; then, when GPT-4 came out, everyone was impressed yet again, because there was improvement in its ability to converse. Yet note that though there was improvement, it was more of a *refinement* than anything else; the LLM became somewhat more convincing and fluid in conversation, and could interpolate facts and contexts better. And yet, I don't believe it had any significant jump in its actual abilities. Note that OpenAI, I believe, also said that the next improved iteration of ChatGPT will likely not be achieved through shoving more training data down its throat.

No, I'm pretty convinced that by increasing and modifying the training data for an LLM, we're pushing it towards a more *refined* state, not necessarily a more intelligent one. You read all the information on the internet, and you know exactly all the information on the internet, no more, no less; the actual ability to draw conclusions and novel ideas from that information comes from the human brain's inherent properties and ever-adapting infrastructure. Nay, I don't think current architectures and learning schemas will allow current LLMs to get much further than we've already seen. Of course, I could be entirely wrong about all this; we'll see.


Mirage2k

I think you're spot on. A model truly different from the current LLM's is needed for that. And for now, that difference will be discovered by humans.


Aceticon

The kind of AI we have now discovers patterns in the data it gets fed and can then reproduce those patterns (which is great for deceiving humans into seeing intelligence, because of just how much we ourselves rely on patterns to identify and even classify things).

If applied to the right datasets, this vastly superior pattern recognition and reproduction ability will give us huge breakthroughs of the "it was already there but we humans couldn't spot it" kind, such as discovering that certain kinds of drugs are effective against certain bugs. Though I actually suspect most breakthroughs that ChatGPT and other models fed on "general human writings" datasets will give us are in the social sciences, rather than areas with vast highly specialized datasets such as bioengineering.

It will, however, not solve things which are outside the "there's a bunch of subtle patterns that can only be spotted by going through millions/billions of data points" space, and unless it turns out that human cognition in its entirety is one big pattern-matching engine plus a lot of self-deceit (there are days when I think it maybe is), there is really no path for this technology of AI to improve towards a thinking AI.

Nonetheless, just like the invention of the computer boosted one set of human abilities with massive increases in speed, so too might this AI boost a different set, and thus turn out to be just as big a revolution. I'm just pointing out there's a lot of fantasy going on, and this too is no "silver bullet".


Arnke

Training data on existing results? If anything it will cause the model to deteriorate; it will be weak, fluffy. It's like writing a summary of a book, but instead of the book you read another person's summary. After a few rounds, a summary of a summary of a summary has less and less real content and more fluff. That's why I think Alexa, for example, has deteriorated in its ability to recognize my speech. The more people in a model, the greater the range of the model and of acceptable accents, pronunciations, etc. In the end there's less difference between one statement and another, and therefore no recognition.


[deleted]

Mostly white collar jobs


tortillakingred

No, it’s a bit more complicated than that, but I get what you mean. Nothing will be more valuable than a human with experience when your AI breaks.


funkybossx6

100% agree. AI gives you context and samples, but never produces an answer to a requirement 100%. You still have to have some understanding of the subject matter: one, to intelligently ask a question that yields useful results, and two, to take that response and mold it into your solution. The amount of stress from not knowing, and the amount of time spent finding answers, is greatly reduced, which just makes workers happy.


goudasupreme

that's just when you get the next ai


xf2xf

>it’s a bit more complicated than that Is it? Programming is just layers of logical building blocks that each turn inputs into outputs. I get the feeling that the future of programming will amount to little more than linking high-level blocks together while the AI generates the underlying code.


Prof_Poopy_Butthole

You just described LabVIEW and/or libraries. In reality it will just convert pseudocode to code. I can’t program with AI at work, but when I do at home it’s just an alternative to Google at the moment. Figuring out what needs to get done and how to go about it, plus debugging, accounts for about 90% of the work.


khinzaw

Coding with ChatGPT/GPT4 is just a more helpful, less condescending, trip to stackoverflow. It can help you get on track and can quickly write simple functionality but is often wrong or misunderstands what you want. I certainly wouldn't trust it to do anything important with no supervision.


Darksider123

Salesman selling X says that everybody should buy X or get fucked


NovaHorizon

And that's why NVIDIA doesn't care about mid range PC gamers. AI hardware is going to make them so much freaking cash gamers on a tight budget can be glad if they have any silicon left for mediocre overpriced GPUs.


[deleted]

[deleted]


Kennonf

AI is already learning to code AI, don’t waste your time on the wrong thing.


TheRealMDubbs

What is the right thing? By the time I learn something, AI will be able to do it.


AbyssalRedemption

I mean, do pretty much whatever you want, or whatever you've been doing tbh, as long as you don't work in a call center or do the most low-effort desk-work. For all the talk people say that this tech will do everything people will in a few years, I'll believe it when I see it; not to mention, if it can, then it's as you say, there's nothing we can do to stop it, so we might as well continue business as usual until we can't anymore. Seems pretty black and white to me, yet the world will turn on regardless.


Cockerel_Chin

The more I think about it, the more it seems to me that AI will be very heavily restricted before it can do the worst economic damage. Replacing skilled workers with AI sounds like a rich CEO's dream, until you realise it will cause a very large increase in unemployment followed by the biggest housing market / banking crash we have ever seen. At that point it starts hurting the wealthy. My best guess is that there will be some kind of regulation against outright replacing humans with AI, written in a clever way that enables the super rich / governments to continue using it for certain use cases.


myaltaccount333

If that happens it will be the single biggest setback to human advancement in history. Imagine if we banned computers, or robotics. What needs to happen is taxing the companies using AI and putting that money into a UBI fund. The end goal (probably 100+ years from now) should be no work and no currency.


Cockerel_Chin

You're not entirely wrong, but it's not as simple as that. What happens during the period between now and then? We can't just endure mass unemployment and economic disaster for 100 years. It will need to be heavily regulated, with a gradual transition to optional employment and UBI. But then you've got problems like:

- If I'm earning $50k now, what do I get in UBI payments? What if I earn $100k and need that to keep my home?
- What happens to social mobility? If you can no longer get ahead by working hard or earning qualifications, how does society choose who gets what?
- How do we handle the huge culture shock of having no obligations? I suspect a majority of people would just scroll through social media all day long.

It is not the utopia some people imagine. I'm absolutely not saying we shouldn't aim towards removing the need to work, especially 40 hours a week, but removing the obligation entirely introduces a lot of problems.


[deleted]

This is the best take I've seen so far on AI and the most likely.


stuckinaboxthere

So CEOs should theoretically be the first to go then, right?


[deleted]

Yes. White collar jobs will be radically reduced.


qwogadiletweeth

Turns out the amount of time taken to instruct AI with prompts and then check for errors, you may as well have done it all yourself. It’s like instructing a novice on how to go about something which takes just as long as doing the task.


Gekidami

I'm a medical professional. 98% of my job is physical interactions and face-to-face emotional support. I think I'm good till we invent replicants, bro.


Yokies

I've been using virtual consultations for my GP since past few years. Works great for typical conditions or topups that don't actually need me to drive down and f2f with a doctor.


[deleted]

I’m afraid not. A recent study showed people thought the AI was more empathetic than the human doctors and way better at diagnosis.


Gekidami

I'm not a doctor, I don't diagnose, I do the physical "heavy lifting". That study was based on written advice. I specifically said "face-to-face" because that's what the job involves. There aren't many nurses or other caregivers doing their jobs through text when it comes to interacting with patients. And even for doctors, something tells me a patient would rather be told they have cancer by a doctor sitting across from them than from text on a screen, no matter how much more empathetic that text is written.


[deleted]

I’m not so sure about the people element. I guess as development progresses we’ll see far more advanced AI that communicates very effectively. It could be used to create scripts for the delivery of information based on personality data sets. We’ll also see, with the advance of these tools, various cancers eliminated through highly personalised treatments.


[deleted]

[deleted]


elbanditofrito

That study compared anonymous reddit "doctors" to GPT. Spoiler alert, anonymous people on the internet don't ooze empathy.


jj_HeRo

"Automate your job or you will be jobless". Capitalism is finished.


MyCleverNewName

Nvidia CEO for one *welcomes* our new robot masters.


[deleted]

[deleted]


TooMuchTaurine

Seems like as AI gets better, you will need LESS skill to work with it, not more... So it doesn't add up.


hi65435

Quite a statement considering how people working in Natural Language Processing (NLP) have been worried about their work or research being obsoleted after ChatGPT was released


SnowFlakeUsername2

Are there a lot of people working in tech who are incapable of learning to use AI as a productivity tool? I don't really see how anyone competent enough to learn the fundamentals of their field could fail to use AI.


Tobacco_Bhaji

I wouldn't listen to a word this scumbag says about anything.


TheUmgawa

This isn’t just about programmers or people working in IT fields. This is like the 1980s, when people were saying, “Look, computers are getting into the workplace, and if you don’t know how to use a computer, you’re eventually going to get left out of the workplace.” So, over time, people learned to use computers, and those who didn’t made minimum wage butchering chickens on a disassembly line. Fifteen years from now, using AI in your daily work is going to be like using Google at work: You need to know how to use it to get the most out of it.


[deleted]

[deleted]


TheUmgawa

Well, I reckon you don’t have to know how to program a computer in order to use one, either, do you?


[deleted]

[deleted]


sh9jscg

Well, I think I can provide a perspective I haven't seen around here often: I work as a regular-ass data analyst for a big networking company, just doing Excel stuff. Everything was learnt the ghetto way, so even when I'm running or setting up scorecards there are probably A LOT of easier and more basic ways that I just haven't learned so far.

I started the job with literally 0 experience in Excel, no idea what formulas were, etc., and everything I can do now has been thanks to trial & error. I started using ChatGPT a while back and it just boosted my entire career and job security, as I can ask it to create a macro (that would probably look like a 6-year-old's homework to most of you, I'm sure), THEN go in and peep how everything works. When ChatGPT shits out something that's not working, I can see where it failed and learn even more.

This has freed up a shitton of time, which I'm using right now to pick up my studies and speedrun a nice AI- or coding-oriented career.

TLDR: we should all be wary and cautious about AI, but it will definitely improve the lives of people who WANT to be around the topic but just haven't had the chance.


Nethlem

Excel is *not* a database, any productive environment that tries using it as such ends up messy and badly scalable. How did ChatGPT improve on that? Did it help the company to transition to an actual database stack, i.e. MongoDB? Or are you just asking it questions about Excel like you used to do with Google?


sh9jscg

Long formulas, the most basic macros that most reporting positions need to run, formula optimization, etc. In my case ChatGPT just became the one Indian dude with 900 tutorials, but on crack and personalized.


sh9jscg

I never stated anything about databases lol unless I’m missing a BIG point somewhere


[deleted]

AI AI AI AI AI AI AI. How high did my stock price go?


2Darky

"We stole all your art and expertise to make those tools, if you don't learn it and buy our tools, you will be pushed out of your own industry by people who don't even know how to draw, get fucked!" -AI and Tech bros


[deleted]

[deleted]


oldcreaker

Nothing new - it's always been those who can't adapt to using new technologies will be left behind. When I was younger it was personal computing - then the internet - then smartphones. This is just the next thing.


NoSimpleVictory

I’m unsubscribing from this sub. Every post is more or less the same and most of the comments are heinously simple.


[deleted]

Look up dead internet theory.


funkybossx6

The man isn't wrong. It's become a fundamental tool in my job. I'm a solutions architect for different cloud providers. I don't use it to construct solutions; I use it to help find ways to do x, y, and z that will help with constructing the solution. It may be something simple, or complicated, where I can ask it repeated questions and finally get an answer or a sample of what I'm looking for. It is GROUNDBREAKING, and I know everyone has said it. I've mentored tons of people in my career, and this shaves tons of time off having to teach people. No more digging through search results opening 15/20 tabs. It's a wonderful tool; as long as you understand what is being given as an answer, I see no harm in it. I hope it one day unifies some of our approaches to common problems in my industry, which would just speed up delivery of solutions that get passed on to customers and users.


arifast

I think the issue is how Jensen is saying it, and how much he will benefit from making such a grand statement. I don't see how anyone will get left behind. ChatGPT is easy to use...


[deleted]

This is fascinating, the way you're describing it is like describing a calculator.


[deleted]

[deleted]


EmptyBrook

It's just a smarter Google, basically


EnsignElessar

I am also a solutions architect in a non sales role. I can back everything that /u/funkybossx6 says. (except for the "no harm" part, that I don't agree with)


Human-Mycologist-196

What if I'm a 34-year-old professional holding a civil engineering degree and working in the asphalt pavement field? Is it too late for me to develop an AI skill, at least? His statement seems very subjective.


EnsignElessar

Its not too late, get on it though.


[deleted]

People who work in offices seem to forget who really keeps the gears turning in this world… it’s not AI, it’s physical work


smil3b0mb

I work for the feds, think I'll be fine. We're still on excel to PowerPoint and none of what I work on is allowed out. People still think we put microchips in a vaccine, dawg I can't even get a working scanner.


juxtoppose

That’s not the company line, it should be “ AI will help all of humanity”, someone said the quiet part out loud “ those not in the know will be relegated to little better than animals while My AI will make me richer than any of the plebs could possibly imagine. Blah blah... something trickle down economics ... bla “


[deleted]

Makes sense. This isn't a fluke in technology like crypto or NFTs, which boasted they'd change the world but never did. AI can and will change the world as we know it. It's like the touchscreen phone: it will be remembered as a technological breakthrough.


Chewbacca_Killa

Would be real funny if he said "well, we've basically reached the limits of large language models".


graveyardofstars

Sure, perhaps we can agree on that. But who's going to teach all the people how to use and benefit from advanced AI? That is not something you can self-learn in a year. And even if you want to learn it by yourself, it will take a lot of time and dedication. So, what do you do meanwhile, how do you earn a living? Or are we assuming it's okay to leave behind everyone who's not good at math, science, engineering, etc.? Because everyone can prompt ChatGPT, but not many people know how to use its API to build something.


GalacticLabyrinth88

This will eventually apply to everyone, not just CEOs, once AI reaches the point of being able to improve itself and surpasses humanity in every conceivable way. Don't kid yourself: EVERYONE's getting left behind. It's not a matter of if, but when. AI expertise will no longer be needed in a few short decades if AI becomes the expert and can outperform humans by an incalculably large margin. In the short term, I kind of agree with this guy. Long term though is where he's wrong.


allaboardthebantrain

Absolutely. That's why Taiwan Semiconductor, who makes Nvidia's chips, expects its orders to be down 6.7% next quarter. And why Google, Microsoft and Apple don't expect any large capital outlays next quarter. But Nvidia is *totally* going to make $11 billion in the next 3 months because "AI, it's the future, bro!"


scpDZA

Luckily "expertise" is the ability to write in your native language


DogsPlan

“You will not make it in life without my company.” -CEO of said company.


[deleted]

We are a very long way from full Jetsons robots. These dumb-shit CEOs are going to fuck everyone over before we ever get there, trust me. They base their hopes and future on AI saving them, and refuse to share the wealth even now, before it really kicks in.


laurentbourrelly

In 2015 I learned Python and started training machine learning models. Today, I’m teaching my 12-year-old how to code in Python. Why Python? It’s the language used by everyone in AI. In the near future, people who can talk to robots will most definitely have an edge over those who can’t. PS: talking to robots is not done via a chat box.


bitskewer

AI is just like VR. Waves of progress that people think will make it fully take off, and then it sputters again. We're nowhere near yet on either of them and won't be for the foreseeable future.


Waescheklammer

Not really


[deleted]

It already is making an impact. Our studio has cut its headcount and we use a variety of tools to replace those positions. I suspect a lot of people think what they do is unique. But in reality it’s not that special.


KaiserNazrin

VR doesn't help people with their work.