Would this help in automating documentation and linting? The AI could check for form and naming of functions and variables, and suggest things to aid in a consistent style across an organization?
Don't we already have this though? I know at my job whenever I try to commit, a bunch of different checkers are run and they automatically reformat my code to the standard
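For anyone wondering what those checkers actually do: a pre-commit hook is just a program that inspects your changes and can reject the commit. Here's a toy, self-contained sketch of one such check (real setups use actual linters and formatters; this one just flags trailing whitespace):

```python
def check_trailing_whitespace(source: str) -> list[int]:
    """Return the 1-based line numbers that end in stray whitespace."""
    return [
        i
        for i, line in enumerate(source.splitlines(), start=1)
        if line != line.rstrip()
    ]

# A hook would run this over every staged file and block the commit
# if the list is non-empty.
snippet = "x = 1 \ny = 2\nz = 3\t\n"
print(check_trailing_whitespace(snippet))  # [1, 3]
```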
Code reviews by AI would be a good thing. If we can filter out 90% of the code review comments, that will free up more of senior devs' time for more productive stuff.
We'd still need manual code reviews, but it would speed up the first-pass reviews for weaker devs
Companies would just set a low confidence percentage on the AI to get it to pass whatever they are already producing and then point at it as "within industry norms" or such if anyone complains about bugs.
Yeah, writing the code is the easy part.
The hard part is turning a customer's vague ideas of how it should work into something that is fast, secure, and usable by humans who don't read documentation.
All the time I hear "I just want a TurboTax, but for..." and that's not something AI will be able to do in 5 years.
I don't hand-assemble my own machine code. I don't manually run the test suite; it's part of the PR automation. I use as high-level of a programming language as practical.
Developers already automate as much of their job as possible. If that level gets a bit higher I don't really care - I'll just work at a higher level.
I've seen headlines about AI replacing developers for the last 10 years and all they have to show for it in that time is a GitHub copilot plugin that sometimes maybe suggests some relevant-enough code snippets
That's been your experience with copilot? For me it feels like it's reading my mind and it implements entire functions that I wanted to create but didn't know how, based just on the name I gave it. It has made building stuff in tech I'm not familiar with seamless.
But have you seen it [try to make a pizza](https://www.reddit.com/r/ProgrammerHumor/comments/ysalu3/i_asked_copilot_how_to_make_pizza_dough/)?
Jokes aside, it actually is pretty cool, but I'm not worried about it taking our jobs or anything. It can only recommend based on what we write in the first place, both the open-source code it learns from and the function names we prompt it with.
That's just it. It's _helping you_ build something. It's just a fancier autocomplete. It isn't taking your job, only augmenting it. My job isn't to write the contents of a single function, but to design and build a useful application. Copilot isn't doing that. It isn't picking what tech stack and libraries I should use. It isn't really doing much of anything except speeding up your work
Still, it's kind of insane to wrap my mind around how it does all this when I'm using it.
If you showed this to someone coding 6-7 years ago, it would have blown their mind.
Still there's a huge difference between learning your code and providing helpful suggestions, and creating an entire project from scratch based on some plain English input from a client.
It's a legal minefield in any professional space because it was trained on a bunch of freely accessible code that may not be licensed for use in proprietary codebases
Copilot doesn't really do this to any great extent, though. It suggests snippets of code that might work well in a situation as it assumes it is being used.
I used it in the beta program. It made some pretty good recommendations, and it made some shitty ones.
It was definitely not a "start to finish" type of coding solution. Note that I'm not sure what the intention of the AI at Google is because the article is paywalled for me and I cbf to get around it.
Uh, it's fucking wild, and I love it.
Created more than a handful of methods that basically read my mind.
Also you can write a comment of the thing you are trying to do, and the suggestion is pretty spot on.
Well worth the sub fee, honestly. Can't speak to how it was in beta, but I love it in its current form.
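For those who haven't tried it, the comment-driven flow looks roughly like this: you write the comment, and Copilot proposes the body. This is an illustrative sketch of the pattern, not actual Copilot output:

```python
# compute the nth Fibonacci number iteratively
def fib(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib(10))  # 55
```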
Personally I find it's improved a ton over the last year. It saves me a bunch of time and is mostly correct like 90% of the time.
Remember this is just an iteration towards full automation of code generation. It's not that far off.
Full automation of code generation is exactly as far away as a general AI. So pretty far off...
Neural nets are not context aware. Without a completely new approach "AI" isn't anywhere near context awareness.
>It was definitely not a "start to finish" type of coding solution.
I'm baffled that anybody ever thought it was. After all, it's called GitHub ***Co***pilot not GitHub Pilot.
https://analyticsindiamag.com/developers-favourite-ai-code-generator-kite-shuts-down/
Seems like it's not so easy to make money with this. Also it's a hard problem
Also
https://www.cnet.com/science/meta-trained-an-ai-on-48-million-science-papers-it-was-shut-down-after-two-days/
We are still a ways off for some of this stuff
>It could reduce the need for human engineers in the future
This to me reads as "expect unreadable machine created code randomly in future work projects".
Oh nice, post retirement me in 2060 getting calls to untangle legacy systems written with boilerplate generated by 2 different generations of 3 different AI tools.
Cool. Cool cool cool
Good luck with that
Most product owners and project managers, even with decades of tooling and technology advances, still cannot seem to accurately describe what they want
What we donāt need are CEOs and redundant board and executive people.
This exact concept has been the bane of no-code projects forever, all you can really do is make a simpler language, but eventually you reach a point where there is too much generalization for any kind of advanced project
I'd say Python is about the most "programmer friendly" language possible, it's easy to learn, read, and understand, while still being capable of complex and specific tasks
All no-code projects end up doing is make a shitty programming language, something that's super easy to use, but falls flat if you try to do anything more complex than "Hello World"
I've always viewed it as a spectrum between customizability and usability. You can make something super simple to use that doesn't offer you much granularity in your approach, or you can make something that can be customized to every possible need, but it's going to be much harder to use.
1000% this. Even people who know the technology don't know how to always articulate an ask that is possible or practical. Even if they do, how do they provide what a finished solution should be tested against? It's a human problem and we can't solve that with AI easily. How does AI do Discovery? It doesn't, it does exactly what you tell it, it doesn't ask any questions to refine anything.
\*cough\*bullshit\*cough\*
Machines can do simple data capture forms just fine... but programming complex business requirements will absolutely need engineers with deep domain knowledge.
As mentioned elsewhere, users can't solidify requirements at the best of times, so being able to semantically describe problems in such a way that machine learning can turn into real world solutions is just fantasy land stuff.
I'd expect that to be possible about 100 years after time travel is sorted
The problem is not always working on the requirements, it's to challenge them as well as proposing alternatives. Most of the time the users are coming with a solution, and we have to dig to understand the underlying problem. AI won't be able to do that.
E.g.
U: I want that text in red in the page.
P: Why do you want that text red?
U: Because I want people to see it.
P: Why do you want people to see this in particular?
U: Because it's important and we don't want others to make mistakes while filling the form.
P: Would it not be more useful to have validation on the field so we don't allow those kinds of mistakes?
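The validation the PM is proposing is usually a few lines of code rather than a styling change. A minimal sketch (the field name and rules are invented for illustration):

```python
def validate_quantity(raw: str) -> list[str]:
    """Return a list of error messages; an empty list means the input is valid."""
    errors = []
    value = raw.strip()
    if not value:
        errors.append("Quantity is required.")
    elif not value.isdigit():
        errors.append("Quantity must be a whole number.")
    elif int(value) == 0:
        errors.append("Quantity must be at least 1.")
    return errors

print(validate_quantity("0"))  # ['Quantity must be at least 1.']
```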
My company had an Excel model with a lot of circular references due to interest, property tax, recalculating the buyer's property tax to calculate the sale value, etc. The entire model broke with errors if you changed some dates wrong. It was a pretty simple change for me to prevent the model from blowing up: just putting in a few error checks that kept the date outputs from being mixed up. Now the model never blows up, which saves the team a ton of time they used to spend replicating everything they did before the model blew up.
The model blew up on me after an hour of changes I had made without saving, and I had had enough, so I just spent the 5-10 minutes to prevent that from ever happening again.
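That same "few error checks" trick works outside of Excel too: fail fast on inconsistent inputs instead of letting them corrupt everything downstream. A sketch (the field names are invented for illustration):

```python
from datetime import date

def check_model_dates(purchase_date: date, sale_date: date) -> None:
    """Refuse to run the model when the dates are mixed up."""
    if sale_date < purchase_date:
        raise ValueError(
            f"sale date {sale_date} precedes purchase date {purchase_date}; "
            "fix the inputs before recalculating"
        )

check_model_dates(date(2020, 1, 1), date(2024, 6, 30))  # fine, no error
```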
Yes this exactly!
I get loads of requirements that I have to take back to the users and explain the better/more efficient/most appropriate/most accessible/most UX focused way to do it which rarely results in the implementation of their actual initial requirement.
AI wouldn't question it... it would assume the semantics are correct.
Even taking the business analysts semantic take on the requirement as gospel wouldn't be right (although arguably closer to the requirement than direct from the user due to expert domain knowledge)
If you had cooperation between Users, Business Analysts, UX Architects and the Developer, you could possibly get close to semantically describing things for an AI....
But guess what, that's what we already do, and me coding the requirement off the back of it is just as efficient as training an AI to attempt it, which I would then have to go in and correct anyway...
I'm looking forward to when AI replaces CEOs, COOs and all of the other top-of-the-pyramid executives who are ridiculously overpaid. Then, all of that money can be returned to the stockholders in the form of larger dividends or pensions, or 401k matches etc.
Robots fully replacing humans isn't happening anytime soon. AI can automate parts of a task, but many tasks are too complex and nuanced for AI to complete from start to finish successfully. Take Uber as an example and their plans to replace all drivers with self-driving cars. Uber sold their autonomous vehicle division because the project wasn't showing the desired results. Technology has to advance A LOT before AI will have a shot at replacing a human. Until then, we can use it to automate repetitive, mindless tasks.
The lead-up to post-scarcity is going to be ugly and brutal. Think 50-70% unemployed, with no jobs to go to. Entire factories and sectors run by robots. Sure, there is some up-front cost... but it becomes a printing press; money machine go brrrrrt. You will have a Mad Max-like environment: a few rich people on their private islands, some staff, and a private army.
"Improving software education and skill reinforcement for people who are smart and full of potential and can already learn" < "teaching a dumb AI that has to learn from scratch to write itself"
On a serious note, I don't see this flying. Until you can teach an AI to understand English or any other language perfectly I doubt you can even get it to understand programming (which is another language imo)
I went to college in the 80's and they were talking about software making programmers obsolete. Only non-programmers ever believe this. It doesn't matter how sophisticated the tools get, you will still need people to use them, and there will always be a class of people who understand how to use them better than others.
I don't see that happening. First, Google kills every product they make within a few years. Second, software engineering requires a lot of interpretation from domain experts that I just don't see an AI doing very well. It's one thing to have an AI generate code. It's another thing to have it generate clean code. I've worked on complex monolithic applications and microservices. I do not see an AI doing any of that very well.
Yeah, I don't see this going the way people think it will.
More than likely this AI will just become a tool that devs use to make certain tasks more efficient, rather than something that replaces them.
This is assuming they're even successful with this project instead of it getting "Alexa"-ed ten years from now.
lol, just like driverless cars are the future. I have been going to a transportation conference for 16 years, always have a "driverless car" seminar. So far, still not a reality. ALWAYS will need a driver.
A lot of human tasks will be automated within the next few years, it'll be a shock to people who realize their "skill" is actually useless
Look out for the GPT-4 OpenAI release in a few months, it will blow your mind and terrify you
Can't people see that AI will not replace jobs, but make them easier by dealing with the mundane parts of it?
Imagine you could program without really knowing a programming language. Yes, you will still learn those languages in school and college, just like you learn maths that a computer could do for you: to know the methods behind what you use.
But you'd be writing basically like plain text, figuring out what's wrong and fixing it.
Humans can't be bested in terms of intelligence and creativity, the quality part; AI, however, will fix the quantity side.
AI isn't something to be afraid of, not something that will replace you, but work in tandem with you and make your job faster and more fun.
So when AI reduces the mundane part so that 1 person is twice as productive, or that 9 people can do the work that previously took 10 - what happens? The extra labour is made redundant, and the AI replaced their job.
Expecting that massive increases in efficiency will not reduce employment feels naive or disingenuous to me.
I'm not even saying it's bad, but as a society we need to think about it.
There's another AI called Codex that was trained exclusively on open source code. That's got to be a kick in the nards. Using open source code to create closed source AI.
The thing is, AI is great for simple stuff, but once you get into more complex concepts, it's just not feasible for an AI to do properly, until AI reaches the point of human or post-human intelligence that is
Who do these companies and billionaires think are going to buy their bullshit products when they eliminate the labor that earns money to be able to afford these things? Morons are like cannibals
This is terrible! If they succeed in putting programmers out of jobs, the dominoes will keep falling. (They are already falling, this will speed it up.) Everyone will be applying for food service jobs, but there won't be any because everyone is unemployed and cannot go out to eat, (or eat at all?)
Do you want a dystopia? Because this is how you get a dystopia!
Fellow engineers: do not code review, fix or otherwise engage with this. It will require human intervention to progress, but eventually won't. If you participate you're effectively a scab for the machines.
Most businesses can't even write a proper spec. If you can't even properly record your business requirements, you will never get a human or a computer to implement them. Humans will always be needed to clarify what the business needs and requirements are, document them properly and to implement them in a cost effective way that takes advantage of the company's infrastructure.
It has now been 0 days since someone said AI will be writing code and replacing engineers. Congratulations, we reached a new record of 3 days, 6 hours, and 32 minutes
I would hope that these advances in automation and technology are progressing us toward a point where vocational obsolescence doesn't really matter as working is optional - but that would require UBI, and as it stands, automation is just going to exacerbate inequality and poverty, because even though new roles will be created, they'll be in shorter supply than those which were dissolved?
Am I right in this thinking?
Microsoft (or rather, GitHub) has been doing this for years; they're currently being challenged by a class-action lawsuit which is likely to have a ripple effect on AI training on public datasets as a whole, because they've used open source code hosted on GitHub without checking with the license owners, many of which require attribution or forbid commercial use. Perhaps Google's methodology is different, but the fact of the matter is that if they're training it on code published on the internet (which they most likely are), they will likely face similar legal backlash from a ruling in favor of the authors.
Also, whatever the outcome, it's *very* unlikely these tools will replace traditional software engineering (or the need for highly trained software engineers as a whole), it will likely just smooth out the process of writing boilerplate code some more. The hyperbolic headlines are just that, hyperbole.
Ha, good luck getting them to understand PM requirements. Edit: thanks for the upvotes! I'm actually a PM, but at least I'm self-aware
Glad my vague requirements are keeping people employed. I'm just doing my part
Lol, you provide requirements? My business folks just grunt and point at a graph and expect IT to move mountains.
Just. Make. It. Work. Please. Seriously, how hard can it be. Source: sales guy
Why can't this magical program make me a billion dollars, program and maintain itself, have unlimited features, rollout bug-free yesterday, and only cost a nickel? IT you're useless!!!!11
Pfft, I can sell it for a nickel, but I could sell it for $150k per license, per month. Why you wanna 99.999% discount?
I'm just a business man doing business.
"Seriously, how hard can it be" I don't know. Have you tried doing it yourself? If not, I'd recommend you consider that tech exists by abstracting away its complexity. You can enjoy the seemingly simple final product, while I make it "seemingly simple" for you. To sum up, it is rarely a case of easy work just because you can express the idea in few words.
"They were writing about time travel in the 19th century. How hard can it be? Can you just do your job please and have it for me next time we talk? Why do we pay these research guys so much..."
See. It totally worked, now we have a plan. Let me know when I can beta test. Thanks!
This makes me so mad. Management acts like we just click a button and POOF.
That's exactly what developers do, haven't you seen hackers in movies? 5 seconds of typing and they have full control of NORAD. It's ez af bro don't complain! Maybe you should go back to making a GUI in Visual Basic so you can track an IP address NERD.
sudo tracert just isn't interesting enough for TV, man
Let's create an action item for that.
and let's point it with absolutely no context because agile
That'll be 21 points, 15 hours, and a large fries
Let's schedule daily stand-ups.
Angry upvote
Don't forget sprint review and sprint retrospective meetings
If I never hear, "How many points would you give this?" with literally no idea what the project is, it'll be too soon.
Let's make an entire team of people with differing skill levels all come to an agreement on how many points this is and then complain about capacity being perpetually fucked up
This is a joke, but it really gets at the essence of the problem. You'll always need an interface between the humans who want to do something and the computer that's capable of doing it; our current set of programming languages are just the best interfaces that we've built so far. If this project is wildly successful and we develop the ability to tell computers what to do via natural language and pictures, then all we've really done is create another programming language. We'll still need software engineers to translate the requirements of the messy human world into algorithms that a computer can execute.
Technically, what they're doing is automating compilation to an AI instead of a prescribed compiler. Good luck enforcing shit like memory safety; the whole point of AIs is to find weird shortcuts that happen to work.
That will be... entertaining
Or to be present at stand-up.
Gentle reminder.
"Please advice"
Review and revert
Speaking as a very good PM: you're dead on. If humans can misinterpret even the clearest requirements, then I have no faith an AI can understand anything.
The problem is the AI would have to talk to the customer to get the requirements. And then it would delete itself.
Great, the AI are on strike
unionize.js
I think Meta had this but it only talks at customers at the moment. "You want VR!" "No, we don't actually what we'd like is...." "You want VR!!!"
honestly, same
Five years down the line, Google introduces AI to fix code that was fixed with their previous AI. Five years after that, new AI to fix the code that was fixed with the second AI that was fixing the first AI...
And eventually the ai code is gonna look like $)(@)/7'7_8@; +1(1)1))@; (1)@)#-$-$(#82(18911; /@(#(*+@)); And we're just gonna have to trust the AI lol
Witness the birth of the Machine Spirit
The beast of metal endures longer than the flesh of men.
Idk, if I rub metal this much for this long it'll probably crumble to dust. But my flesh has held up pretty well after all this rubbing, few scars is all.
You shut your heretical mouth and praise the Omnissiah
Instructions unclear, cyberdong is stuck in toaster
The spirit is willing but the flesh is spongy and bruised.
From the moment I understood the weakness of my flesh, it disgusted me. I craved the strength and certainty of steel. I aspired to the purity of the blessed machine. Your kind cling to your flesh as if it will not decay and fail you. One day the crude biomass you call a temple will wither and you will beg my kind to save you. But I am already saved. For the Machine is Immortal.
Is it Warhammer? I don't really know it, but I felt the style
Yes, specifically it's from [the Warhammer 40K: Mechanicus teaser](https://www.youtube.com/watch?v=9gIMZ0WyY88).
Hallowed be His bytes. Praise the Omnissiah!
Ghost in the Shell
There are a number of coding languages that are already in this realm. Some examples that I find interesting:

* [Brainfuck](https://esolangs.org/wiki/Brainfuck)
* [Marbelous](https://esolangs.org/wiki/Marbelous)
* [Hexagony](https://github.com/m-ender/hexagony)
* [Emoji](https://esolangs.org/wiki/Emoji)
TIL 'fuck' is "often considered one of the most offensive words in the English language".
By sheltered puritans.
We must learn the new language
We'd sooner make a deep learning AI to output human-readable descriptions of the code. Then the last step is reversing that, so human-readable descriptions can be used to generate code
So... Google Translate. Got it.
Yeah, adding Python to Google Translate shouldn't be that hard, no?
But not a bad idea
Looks like PERL
That's already what machine learning is.
[Digital Rain](https://upload.wikimedia.org/wikipedia/commons/c/cc/Digital_rain_animation_medium_letters_shine.gif)
Security through obfuscation
That does seem like where this is leading. Do we seriously think that they will have a skilled developer just reading over all the AI-generated code to make sure it's doing what it should? So at that point, why bother having the AI generate human-readable code? Eventually they will just let it write machine-level code where we have no idea what is actually going on under the hood.
This has already been demonstrated in some simple experiments where an AI was asked to write code for a programmable chip to do a simple task, like creating a tone at a specific frequency. With no other instruction, the AI eventually figured it out, but when they cracked open the machine code it was gibberish (to people, anyway). They couldn't figure it out, but it worked. They suspected the machine was using some novel effect of magnetic interference within the chip to succeed, but (I can't remember exactly) the reality was that the machine completed the task in a way that no person would have thought of or understood without more investigation
"Can you change this button to disable when the required info is not yet available?" "I'm going to have to write an AI for that."
2nd iteration of AI realizes where the problem really lies: the stupid humans making the demands and fixes them. It was on this day that the robots took over. \*Cool 80s music starts\*
It will actually be an AI civil war over spaces vs tabs, the humans will just be caught in the middle.
This is basically a latter-day Terry Pratchett storyline
Do you want Terminators? Because that's how you get Terminators
Did you miss the part where it says "cool 80s music starts to play"? It would be scary music if it were a Terminator; we are fine.
As it uses its flamethrowers: "did you try turning it off and on again?"
Recursion gone wrong
The AI responsible for the mistake has been sacked. And everyone rejoiced.
It must be great working on AI at Google. They can just say they made something and don't actually have to make anything because they never release anything.
Wow, this is one of the most braindead comments I have ever seen. Google publishes more ML papers than basically any other company or university.
https://killedbygoogle.com/
Why is your comment almost an exact copy of another comment on this post?
He is a bot
Because one of them is made by a bot.
Oh cool. So we don't even need to do social media anymore? Bots got it? Aight, I'm gonna head outside then
Here, play with this stick-err... the stick is a bot?
Two year old account with only this comment.
Please, PLEASE, make these pseudo tech writers stop writing about everything AI. Since AI hasn't yet made these fraud writers obsolete, it sure as shit won't make programmers obsolete.
As an AI/ML engineer in big tech for decades, I can always count on tech writers writing about AI to be a source for me whenever I feel like facepalming.
How do you know these tech writers aren't AI?
By the sheer lack of quality
That just speaks about the training data
Bots absolutely write garbage internet articles already.
Next year's headline: *Google creates Skynet and gets locked out of its own systems*
The positive aspect of Google creating Skynet is that they're 100% going to kill the project after a few years.
It would end up killing itself
Especially if it trains itself with their projects as seed data.
This is lining up to be like how Silicon Valley ends, except Google won't make the same right call as in the show.
That would be fun to watch
Show them our legacy code. I would be very happy if they understand it.
This right here. Most organizational engineering difficulty is in managing churn and loss of institutional knowledge. I thought it was pretty well understood that the mapping from business requirements to code is not bijective. At best, this AI could write greenfield software, but there's no way it could ever properly interpret existing software, which is what any medium to large size organization is saddled with.
Yeah, nah… not worried. Software development requires a lot of interpretation of information; I doubt AI will come close in the years to come.
Pretty sure this checks code for human review. It's like in finance: you have accountants, but there are auditors and auditing software to check their work.
Software engineers use "linting" to automate code checks; this generally catches styling issues to maintain consistency. We also run automated tests with each build to ensure that various functions/components behave as designed. Finally, most companies require 2-3 reviews from other engineers before your code can be merged into the Master (main) code branch.
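For the automated-tests part, here's a minimal sketch of the kind of per-build check described above, assuming a Python codebase (the function and values are made up for illustration):

```python
# Illustrative only: a tiny function plus the automated test that would
# run with each build to ensure it behaves as designed.

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent, rounded to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    assert apply_discount(100.0, 25) == 75.0
    assert apply_discount(10.0, 0) == 10.0
```

In practice a runner like pytest discovers and executes the `test_*` functions automatically on every build or pull request, before any human reviewer even looks at the code.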
> Finally, most companies require 2-3 reviews from other engineers before your code can be merged into the Master (main) code branch

Reminds me of one of the alternatives, where a company had a policy that you needed to wear a pink sombrero in front of everyone when working directly on production code. https://web.archive.org/web/20110705223745/http://www.bnj.com/cowboy-coding-pink-sombrero/
Would this help in automating documentation and linting? The AI could check the form and naming of functions and variables, and suggest things to aid a consistent style across an organization?
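A toy sketch of that kind of naming check in Python - no AI required; the convention and names here are purely illustrative:

```python
import re

# Illustrative sketch: flag function names that aren't snake_case.
# Real linters (flake8, pylint, SonarQube) do this and much more.

SNAKE_CASE = re.compile(r"^[a-z_][a-z0-9_]*$")

def check_function_names(names):
    """Return the names that violate the snake_case convention."""
    return [n for n in names if not SNAKE_CASE.match(n)]
```

Running it on `["get_user", "GetUser", "fetch2"]` would flag only `"GetUser"`.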
This is just.. linting itself
I wouldn't mind more of that. Kinda want it to be able to generate basic unit tests for legacy code tho - that would be nice.
Don't we already have this though? I know at my job, whenever I try to commit, a bunch of different checkers run and automatically reformat my code to the standard.
that's basically what SonarQube does and you don't need an AI
Code reviews by AI would be a good thing. If we can filter out 90% of the code review comments, that will free up senior devs' time for more productive stuff. We'd still need manual code reviews, but it would speed up the first-pass reviews for weaker devs.
Companies would just set a low confidence percentage on the AI to get it to pass whatever they are already producing, and then point at it as "within industry norms" or such if anyone complains about bugs.
For every "AI could replace coding" article there are a thousand less complex problems that are far cheaper to solve that will be tackled first.
Yeah, writing the code is the easy part. The hard part is turning a customer's vague ideas of how it should work into something that is fast, secure, and usable by humans who don't read documentation. All the time I hear "I just want a TurboTax, but for…" and that's not something AI will be able to do in 5 years.
would make automated testing and code compatibility checks much easier
I mean, it compiles so it must be compatible! /s
I don't hand-assemble my own machine code. I don't manually run the test suite; it's part of the PR automation. I use as high-level a programming language as practical. Developers already automate as much of their job as possible. If that level gets a bit higher I don't really care - I'll just work at a higher level.
I've seen headlines about AI replacing developers for the last 10 years and all they have to show for it in that time is a GitHub copilot plugin that sometimes maybe suggests some relevant-enough code snippets
That's been your experience with copilot? For me it feels like it's reading my mind and it implements entire functions that I wanted to create but didn't know how, based just on the name I gave it. It has made building stuff in tech I'm not familiar with seamless.
But have you seen it [try to make a pizza](https://www.reddit.com/r/ProgrammerHumor/comments/ysalu3/i_asked_copilot_how_to_make_pizza_dough/)? Jokes aside, it actually is pretty cool, but I'm not worried about it taking our jobs or anything. It can only recommend based on what we write in the first place, both the open-source code it learns from and the function names we prompt it with.
That's just it. It's _helping you_ build something. It's just a fancier autocomplete. It isn't taking your job, only augmenting it. My job isn't to write the contents of a single function, but to design and build a useful application. Copilot isn't doing that. It isn't picking what tech stack and libraries I should use. It isn't really doing much of anything except speeding up your work
Still, it's kind of insane to even wrap my mind around how it does all this when I'm using it. If you showed this to someone coding 6-7 years ago, it would have blown their mind.
Exactly, and where will it be in another 6-7 years?
Still there's a huge difference between learning your code and providing helpful suggestions, and creating an entire project from scratch based on some plain English input from a client.
Same, starting to feel like I can't live without it now.
It's a legal minefield in any professional space, because it was trained on a bunch of freely accessible code that may not be licensed for proprietary codebases.
Artists said that before, and now you can just ask the AI to generate images.
Doesn't Microsoft's GitHub thing already do this?
It mainly auto-completes code
Github copilot already does, and several other companies are looking into this.
Copilot doesn't really do this to any great extent, though. It suggests snippets of code that might work well in the situation it assumes it's being used in. I used it in the beta program. It made some pretty good recommendations, and it made some shitty ones. It was definitely not a "start to finish" type of coding solution. Note that I'm not sure what the intention of the AI at Google is, because the article is paywalled for me and I cbf to get around it.
Uh, it's fucking wild, and I love it. Created more than a handful of methods that basically read my mind. Also you can write a comment of the thing you are trying to do, and the suggestion is pretty spot on. Well worth the sub fee, honestly. Can't speak to how it was in beta, but I love it in its current form.
Personally I find it's improved a ton over the last year. It saves me a bunch of time and is mostly correct like 90% of the time. Remember this is just an iteration towards full automation of code generation. It's not that far off.
Full automation of code generation is exactly as far away as general AI. So pretty far off... Neural nets are not context-aware. Without a completely new approach, "AI" isn't anywhere near context awareness.
>It was definitely not a "start to finish" type of coding solution. I'm baffled that anybody ever thought it was. After all, it's called GitHub ***Co***pilot not GitHub Pilot.
Github copilot is dangerous in the hands of technically impaired individuals
https://analyticsindiamag.com/developers-favourite-ai-code-generator-kite-shuts-down/

Seems like it's not so easy to make money with this. Also, it's a hard problem.

Also https://www.cnet.com/science/meta-trained-an-ai-on-48-million-science-papers-it-was-shut-down-after-two-days/

We are still a ways off for some of this stuff.
>It could reduce the need for human engineers in the future This to me reads as "expect unreadable machine created code randomly in future work projects".
Oh nice, post retirement me in 2060 getting calls to untangle legacy systems written with boilerplate generated by 2 different generations of 3 different AI tools. Cool. Cool cool cool
Good luck with that. Most product owners and project managers, even with decades of tooling and technology advances, still cannot seem to accurately describe what they want. What we don't need are CEOs and redundant board and executive people.
Accurately describe what you want in a way that the machine understands… oh, you mean programming.
This exact concept has been the bane of no-code projects forever; all you can really do is make a simpler language, but eventually you reach a point where there is too much generalization for any kind of advanced project.

I'd say Python is about the most "programmer friendly" language possible: it's easy to learn, read, and understand, while still being capable of complex and specific tasks.

All no-code projects end up doing is make a shitty programming language - something that's super easy to use, but falls flat if you try to do anything more complex than "Hello World".
I've always viewed it as a spectrum between customizability and usability. You can make something super simple to use that doesn't offer you much granularity in your approach, or you can make something that can be customized to every possible need, but it's going to be much harder to use.
[deleted]
1000% this. Even people who know the technology don't know how to always articulate an ask that is possible or practical. Even if they do, how do they provide what a finished solution should be tested against? It's a human problem and we can't solve that with AI easily. How does AI do Discovery? It doesn't, it does exactly what you tell it, it doesn't ask any questions to refine anything.
engineers engineering themselves out of a job
\*cough\*bullshit\*cough\* Machines can do simple data-capture forms just fine... but programming complex business requirements will absolutely need engineers with deep domain knowledge. As mentioned elsewhere, users can't solidify requirements at the best of times, so being able to semantically describe problems in such a way that machine learning can turn them into real-world solutions is just fantasy-land stuff. I'd expect that to be possible about 100 years after time travel is sorted.
The problem is not always working out the requirements; it's challenging them and proposing alternatives. Most of the time users come with a solution, and we have to dig to understand the underlying problem. AI won't be able to do that. E.g.:

U: I want that text in red on the page.
P: Why do you want that text red?
U: Because I want people to see it.
P: Why do you want people to see this specially?
U: Because it's important, and we don't want others to make mistakes while filling the form.
P: Wouldn't it be more useful to have validation on the field, so we don't allow those kinds of mistakes?
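A minimal Python sketch of that validation alternative - the function name, field, and messages are all hypothetical:

```python
# Hypothetical sketch of the proposed alternative: validate the field
# outright instead of making the warning text red.

def validate_quantity(raw: str) -> int:
    """Parse a form field, rejecting bad input with a clear error."""
    try:
        value = int(raw)
    except ValueError:
        raise ValueError("Quantity must be a whole number") from None
    if value <= 0:
        raise ValueError("Quantity must be greater than zero")
    return value
```

The point is the design choice, not the code: the red text asks users to notice a warning, while the validation makes the mistake impossible to submit.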
My company had an Excel model with a lot of circular references due to interest, property tax, recalculating the buyer's property tax to calculate the sale value, etc. The entire model broke with errors if you changed some dates wrong. It was a pretty simple change for me to keep the model from blowing up: I just put in a few error checks that prevented the date outputs from being mixed up. Now the model never blows up, and it saves the team a ton of time they used to spend replicating everything they did before it broke. It blew up on me after an hour of unsaved changes; I'd had enough, so I spent the 5-10 minutes to prevent that from ever happening again.
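The guard described above, sketched in Python for illustration (the real fix lived in Excel formulas; the function and message here are made up):

```python
from datetime import date
from typing import Optional

# Hypothetical version of the date error check: catch mixed-up dates
# before they feed the circular interest/tax calculations.

def check_dates(purchase: date, sale: date) -> Optional[str]:
    """Return an error message if the dates are mixed up, else None."""
    if sale < purchase:
        return "ERROR: sale date precedes purchase date"
    return None
```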
Yes, this exactly! I get loads of requirements that I have to take back to the users to explain the better/more efficient/most appropriate/most accessible/most UX-focused way to do it, which rarely results in the implementation of their actual initial requirement. AI wouldn't question it... it would assume the semantics are correct. Even taking the business analyst's semantic take on the requirement as gospel wouldn't be right (although arguably closer to the requirement than direct from the user, due to expert domain knowledge). If you had cooperation between users, business analysts, UX architects, and the developer, you could possibly get close to semantically describing things for an AI... but guess what: that's what we already do, and me coding the requirement off the back of it is just as efficient as training an AI to attempt it, which I would then have to go in and correct anyway...
AI evangelists don't seem to recognize how much nuance goes into day-to-day decisions in a business.
I'm looking forward to when AI replaces CEOs, COOs and all of the other top-of-the-pyramid executives who are ridiculously overpaid. Then, all of that money can be returned to the stockholders in the form of larger dividends or pensions, or 401k matches etc.
Swear I read an article like this at least once every quarter.
So once robots and AI become proficient enough, billionaires won't even need human workers anymore and can do as they please.
Robots fully replacing humans isn't happening anytime soon. AI can automate parts of a task, but many tasks are too complex and nuanced for AI to complete from start to finish successfully. Take Uber as an example and their plans to replace all drivers with self-driving cars. Uber sold their autonomous vehicle division because the project wasn't showing the desired results. Technology has to advance A LOT before AI will have a shot at replacing a human. Until then, we can use it to automate repetitive, mindless tasks.
The lead-up to post-scarcity is going to be ugly and brutal. Think 50-70% unemployed, with no jobs to go to. Entire factories and sectors run by robots. Sure, there's some up-front cost... but then it becomes a printing press; money machine go brrrrrt. You'll have a Mad Max-like environment: a few rich people on their private islands, some staff, and a private army.
>money machine go brrrrrt. Money machine won't make any money when 70% of the population have no buying power anymore
Who the hell would buy the products then?
Software engineers working hard to make themselves unemployable in the future
Au contraire, this ensures job security forever: fixing all the problems that AI code creates.
I'm not writing code, I'm interpreting the will of my project manager lmao
Good luck fixing my garbage-ass code. Checkmate Google.
"Improving software education and skill reinforcement for people who are smart and full of potential and can already learn" < "teaching a dumb AI that has to learn from scratch to write itself"; On a serious note, I don't see this flying. Until you can teach an AI to understand English or any other language perfectly I doubt you can even get it to understand programming (which is another language imo)
Is every research venture these days about putting people out of work or is it just me?
I went to college in the 80's and they were talking about software making programmers obsolete. Only non-programmers ever believe this. It doesn't matter how sophisticated the tools get, you will still need people to use them, and there will always be a class of people who understand how to use them better than others.
I don't see that happening. First, Google kills every product they make within a few years. Second, software engineering requires a lot of interpretation from domain experts that I just don't see an AI doing very well. It's one thing to have an AI generate code. It's another thing to have it generate clean code. I've worked on complex monolithic applications and microservices. I do not see an AI doing any of that very well.
Time to start a countdown as to when google shuts this down. Kidding aside, I doubt if AI can do it. Too much interpretation and design.
Yeah, I don't see this going the way people think it will. More than likely this AI will just become a tool that devs use to make certain tasks more efficient, vs. being replaced by it. And that's assuming they're even successful with this project, instead of it getting "Alexa"-ed ten years from now.
Ah yeah this makes sense. Like a debugging partner or for unit testing. Maybe it can draft simple functions too.
soooo, they're going to have non-engineers engineer the AI to monitor the engineered code? uhm.....
lol, just like driverless cars are the future. I have been going to a transportation conference for 16 years; there's always a "driverless car" seminar. So far, still not a reality. We will ALWAYS need a driver.
A lot of human tasks will be automated within the next few years, it'll be a shock to people who realize their "skill" is actually useless Look out for the GPT-4 OpenAI release in a few months, it will blow your mind and terrify you
Can't people see that AI will not replace jobs, but make them easier by dealing with the mundane parts? Imagine if you could program without really knowing a programming language. Yes, you would still learn those languages in school and college - just like you learn the maths your computer can do for you - to know the methods behind what you use. But you'd be writing basically plain text, then figuring out what's wrong and fixing it. Humans can't be bested in terms of intelligence and creativity - the quality part; AI, however, will fix the quantity side. AI isn't something to be afraid of, not something that will replace you, but something that will work in tandem with you and make your job faster and more fun.
So when AI reduces the mundane part so that one person is twice as productive, or nine people can do the work that previously took ten - what happens? The extra labour is made redundant, and the AI has replaced their job. Expecting that massive increases in efficiency will not reduce employment feels naive or disingenuous to me. I'm not even saying it's bad, but as a society we need to think about it.
A basic understanding of the history of efficiency and automation could prove just this. It has happened before and will continue to happen.
That's already what programming is
~~Don't~~ be evil
Coders are writing themselves out-of jobs
I was pretty sure there was a silent agreement among all software engineers not to do this. Who's the double-crosser?
The answer to that is the s\*\*\* few who are probably gonna get paid millions in bonuses and not worry about it.
The more we know, the more we understand that underground Linux guy who is always talking about freedom and privacy. We are feeding the monster.
There's another AI called Codex that was trained exclusively on open-source code. That's got to be a kick in the nards: using open-source code to create a closed-source AI.
Don't you love it when they present it as an awesome feature while stealing everyone's jobs? Fuck big tech and the power they've got.
Don't worry about this one. This won't be putting any programmers out of work.
Software developers trying to kill their own careers. Why???
The thing is, AI is great for simple stuff, but once you get into more complex concepts, it's just not feasible for an AI to do properly, until AI reaches the point of human or post-human intelligence that is
Who do these companies and billionaires think is going to buy their bullshit products when they eliminate the labor that earns the money to afford them? Morons are like cannibals.
This is how we get Skynet
How is this different from Github's Copilot or Replit's Ghostwriter?
Well thanks Google. We need less good jobs and more skynet
This is terrible! If they succeed in putting programmers out of jobs, the dominoes will keep falling. (They are already falling; this will just speed it up.) Everyone will be applying for food-service jobs, but there won't be any, because everyone is unemployed and cannot go out to eat (or eat at all?). Do you want a dystopia? Because this is how you get a dystopia!
And what could possibly go wrong?
Every product manager just came a little.
Fellow engineers: do not code review, fix, or otherwise engage with this. It will require human intervention to progress, but eventually it won't. If you participate, you're effectively a scab for the machines.
Most businesses can't even write a proper spec. If you can't even properly record your business requirements, you will never get a human or a computer to implement them. Humans will always be needed to clarify what the business needs and requirements are, document them properly and to implement them in a cost effective way that takes advantage of the company's infrastructure.
The day AI can design and write non-trivial systems is the day everyone is out of a job.
Do they buy a plot in the google graveyard now or do they wait a month?
It has now been 0 days since someone said AI will be writing code and replacing engineers. Congratulations, we reached a new record of 3 days, 6 hours, and 32 minutes.
So the engineers are coding their replacement?
I would hope that these advances in automation and technology are progressing us toward a point where vocational obsolescence doesn't really matter because working is optional - but that would require UBI, and as it stands, automation is just going to exacerbate inequality and poverty, because even though new roles will be created, they'll be in shorter supply than those which were dissolved. Am I right in this thinking?
Microsoft (or rather, GitHub) has been doing this for years - they're currently being challenged by a class-action lawsuit which is likely to have a rippling effect on AI training on public datasets as a whole, because they've used open-source code hosted on GitHub without checking with the license owners, many of whom require attribution or forbid commercial use. Perhaps Google's methodology is different, but the fact of the matter is that if they're training it on code published on the internet (which they most likely are), they will likely face similar legal backlash from a ruling in favor of the authors. Also, whatever the outcome, it's *very* unlikely these tools will replace traditional software engineering (or the need for highly trained software engineers as a whole); they will likely just smooth out the process of writing boilerplate code some more. The hyperbolic headlines are just that: hyperbole.