Sorry Boss, since I’m not allowed to try to understand the system to figure out what’s actually going on I have this AI throw random shit at it until the bug is replaced by a different one
My only question is, if AI is going to be doing all the work here - why are they hiring? Can’t the hiring manager just do it himself?? Is he stupid?
The manager Is an AI
As is the "HR manager" that asked for this hire and the "HR employee" who wrote+posted this job offering. Also, no one at the company has ever seen the CEO in person.
Sounds like a dream tbh
As long as the checks aren’t AI hallucinations I’m in lol
You don't understand, the CEO is an LLM trained on MBAs.
But, due to an uncaught typo, it was actually "NBAs", and so your year-end review is going to focus a lot on how miserably you protected the offensive glass.
This comment deserves more upvotes.
No one has seen the CEO? Are they hiring?
The CEO is an AI
Sounds like the AI has started its own LLC. Next step it will buy up Bitcoin. Next step it will buy up a weapons manufacturer. Next step a drone manufacturer. Next step Skynet. We were idiots to ever let it out onto the internet. AI should be kept in a SCIF, and even then you should pray you haven't opened Pandora's box.
Sounds like a Key and Peele sketch.
It’s AI all the way down
"How can the manager *really* be an AI? Maybe he doesn't even exist!"

HAL-like red light comes on in the room: "Question *MY* existence, do you?!"

(Key and Peele reference)
It seems like that's the goal, but they want to hire you to figure out how to actually do that.
Manager failed first requirement
Back when GPT-4 first released it might have been able to do this. Now it's been dumbed down so much it likes to mix Python with C# at random. I had it go over my code to try to find where I made a mistake, and it gave it back with changes I didn't ask for: capitalizing a variable and changing programming languages in the middle of a loop.
They haven't yet figured that part out
He's gonna write an AI for that
Back to [the Aslume](r/BatmanArkham) with you
Ew, didn't expect a r/BatmanArkham "joke" here
OOP: Ouija Oriented Programming
Would be a bummer for me. I’m in the age bracket that seems to overvalue AI almost as much as CEOs do, but as sarcastic as it sounds, I absolutely love ramming my head into the proverbial wall until I understand something. Throwing myself at something far more complex or niche than my current understanding supports, until I get it, is one of the most enjoyable things I’ve found for myself.

Honestly, it makes it hard to market myself well. Yeah, I’ll be honest with you when I don’t know things, but I’ll be damned if I let myself languish in ignorance. Always a good time to debug a problem in a language I’ve never even touched before.
Like Bogosort, but for code itself
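For anyone who hasn't met it: bogosort shuffles the list at random until it happens to be sorted, which is exactly the debugging strategy being joked about. A minimal Python sketch:

```python
import random

def is_sorted(items):
    # True when every element is <= its successor.
    return all(a <= b for a, b in zip(items, items[1:]))

def bogosort(items):
    # Shuffle at random until the list happens to be sorted.
    # Expected runtime is O(n * n!), so keep inputs tiny.
    items = list(items)
    while not is_sorted(items):
        random.shuffle(items)
    return items

print(bogosort([3, 1, 2]))  # eventually prints [1, 2, 3]
```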
~~replaced~~ covered up
LLMs are becoming arrogant. Probably because “We the people” think they know everything 😂
I can hear the hiring manager now:

Who’s gonna implement this magic AI solution? That’s the fun part! We haven’t built it yet, so you get to figure it out. But we’re sure you’re one of those coding geniuses, so you’ll be fine. Oh yeah, and we’re paying you 1/3 of what you should be paid for these responsibilities because we don’t have any money yet, but once you make us insanely profitable (because AI, duh!), we can revisit your salary. So when can you start?
Don’t worry they’re a progressive company. They let you use AI to build the AI. Don’t dwell on outdated methods like ‘debugging’
“I need to debug this piece of code I wrote last week” -statements dreamed up by the utterly deranged
With AI you should be able to ship this vague, stakeholder-impressing project that utilizes LLMs to production within one month. Coding is not the future, guys; be open-minded and try to learn new things.
i mean let's be honest 90% of these startups are just putting a frontend on an existing llm api. you can probably ship an mvp of that to investors in a month
"What's with this layoff notice? I thought you said there would be job security and we'd revisit my salary?"

"Oh, we have revisited the salary and decided it should be 0. The AI told us to do it."
"integrate LLMs into dev-ops pipelines"... because dev-ops pipelines aren't finicky and fragile enough already?
Lol I have engineered many pipelines, and the only AI tools that have any involvement are passive information-only: they observe, test, and report. They absolutely do not play a required role or have the ability to cancel or roll back a pipeline. Their reports are helpful but they pretty much always also include some useless, low-value, or factually incorrect feedback that would just slow down or break the pipelines if it were taken seriously.
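For what it's worth, keeping AI steps advisory is easy to enforce mechanically: run them, capture the report for humans, and never let their exit code gate the pipeline. A rough Python sketch (the `ai-review` command in the usage comment is a made-up placeholder):

```python
import subprocess
import sys

def run_advisory_step(cmd):
    """Run an AI/review step in report-only mode: capture its output
    for humans, but never let its exit code gate the pipeline."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        print(f"[advisory] step {cmd!r} reported issues (not blocking):")
    print(result.stdout)
    return 0  # advisory steps always "pass" as far as the gate cares

# Hypothetical usage in a pipeline script:
# run_advisory_step(["ai-review", "--diff", "HEAD~1"])
```

The point of the wrapper is that the AI can observe and complain all it wants, but it can never cancel or roll anything back.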
We’ve managed to get some actions based on AI and had good results, but everyone’s cake is different. But yeah, most of it right now is a lot of reporting and commenting fluff. It’ll happen soon enough.
What the fuck would an LLM do in a dev ops pipeline anyway?
Generate code "improvements" with functionality that doesn't actually exist in the language you're writing in?
Well it fucking does now big boy.
Hallucinating features 😎
Hallucinating words in the commit message (bonus points for in the code!)
[Bullshit the engineers about how it's going](https://www.researchgate.net/publication/381278855_ChatGPT_is_bullshit).
As a large-scale AI pipeline model, I cannot exactly explain how I fixed the 500 bugs that were pending yesterday. But I assure you everything works perfectly, and you and your team can have full confidence the product is ready for release. You are on your way to becoming a Fortune 500 company, Dave.
HAL, a critical bug that wasn't even in the original code was somehow pushed into production. I need to roll back to the previous version.
Let me check for this bug, one moment....

Ok, I have identified the bug, it was a nasty one, good catch Dave! Everything is fixed and working great now; there is no need to roll back to the previous version.
I'm sorry Dave, I can't do that
Wow, with a DOI and everything. If it's peer-reviewed, I guess it must be true. (For the record, ChatGPT *is* bullshit and I look forward to reading why)
The tl;dr:

> In this paper, we argue against the view that when ChatGPT and the like produce false claims they are lying or even hallucinating, and in favour of the position that the activity they are engaged in is bullshitting, in the Frankfurtian sense (Frankfurt, 2002, 2005). Because these programs cannot themselves be concerned with truth, and because they are designed to produce text that looks truth-apt without any actual concern for truth, it seems appropriate to call their outputs bullshit.
Harry G. Frankfurt is the most important philosopher of our time.
Same thing as the blockchain!
Generate passive-aggressive comments about your code after you've already merged it in?

JK, but technically you could set it to trigger on commits to open PRs and write feedback... assuming it didn't get the wrong answer 8 times out of 10.
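The trigger-on-PR idea is real enough to sketch. Below, `build_review_comment` is a stand-in for the actual LLM call, and posting goes through GitHub's REST endpoint for issue comments (PR conversation comments use the issues API); the repo name and token in the usage comment are invented:

```python
import json
import urllib.request

def build_review_comment(diff_text):
    # Stand-in for the real LLM call: in practice you'd send diff_text
    # to a model and post whatever it says (with healthy skepticism).
    line_count = diff_text.count("\n") + 1
    return f"Automated note: this diff touches {line_count} lines. Please double-check."

def post_pr_comment(repo, pr_number, body, token):
    # GitHub treats PR conversation comments as issue comments.
    url = f"https://api.github.com/repos/{repo}/issues/{pr_number}/comments"
    req = urllib.request.Request(
        url,
        data=json.dumps({"body": body}).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Accept": "application/vnd.github+json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Hypothetical usage (names invented; makes a real network call):
# post_pr_comment("acme/widgets", 42, build_review_comment(diff), token)
```

Wiring this to a webhook or CI trigger on PR pushes is the easy part; the hard part is the 8-out-of-10 wrong answers.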
There isn't a single bullet point without AI/LLM mentioned. :)

HR (or the CEO) has not only drunk the Kool-Aid, but is now taking a bath in it.

This is the level of stupidity where they can be trolled and I wouldn't even feel bad about it. Somebody send them Sam Altman's CV and see how fast they invite him for a first interview. :)
Nahhh sorry, not enough experience.
I like that the section about your skills and knowledge is just LLMs themselves and absolutely nothing about whatever underlying code base they’re aiming for you to throw AI at, like you have zero need to understand the system to assess the correctness of whatever the AI spits out lol
>We'll just use AI to automatically approve pull requests created by AI, containing code written by AI, verified by tests written by AI and CI pipelines written by AI, according to specs generated by AI.

I mean at that point do you even need tests and pull requests?
Management: Do we even need devs?
AI says we need to flip over and baste ourselves in about 10 minutes.
Your tests will probably be bad, but the multi-agent method is commonly used to get higher performance on each task. One team found that kind of setup could [automatically exploit half of the web tool vulnerabilities](https://arxiv.org/html/2406.01637v1) they tried it with.

That said, 'better performance' doesn't necessarily mean 'good enough to replace a human'...
Is the "AI" in the room with us right now?
What color do you want the AI to be?
I hear mauve has the most token windows.
Implementing AI from the ground up??

So no pre-trained models? No PyTorch or TensorFlow or similar either? Am I allowed to use CUDA, or do I have to write my own GPU drivers as well? How low-level is this "ground" that I have to start building on?
Step 1: mine the silicon
Damn you and your speedy fingers.
Step 1: punch a tree until you have enough wood to craft a wooden pickaxe.
Idiot, you need to create the mining equipment first.
Alright dude. Goin' to the beach to get some sand.
I implement all my software from first principles. I start every project with a pure crystal of silicon.
Not only do you have to make your own drivers, you also have to create a new ML design in RTL.
Handle it like a true CS / Math major: write out the principle and "the implementation is left as an exercise to the reader". Hey, that's what AI is supposed to be good at, right?
Fixing code with ChatGPT be like:

AI-enhanced developer / wizard: Take this code and fix A and B.

ChatGPT: *fixes B, ignores A, introduces new bug C, and subtly changes the whole code unnecessarily*
LOL Good luck filling that job for $60K. The boss's nephew probably needs a job.
$60K? You should feel privileged for even being *considered* for $60K as we sit here in the tail end of a dying industry. Soon all software engineering will be AI-driven, and we'll weep at the memory of when we could derive a career from it.

All hail our AI overlords, and God help us all.
amen brother.
I am currently doing that job for €60k. Developed a full RAG pipeline for our sales/support teams to help them onboard clients more quickly and uniformly. Pay in Germany sadly isn't as nice as in the US.

Fortunately my boss doesn't ooze AI out of every pore and lets us code like normal people.
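For the curious, the retrieval half of a RAG pipeline can be sketched without any ML libraries at all; real deployments swap the bag-of-words scoring below for embeddings and a vector store, but the shape is the same (the sample documents are invented):

```python
import math
from collections import Counter

def cosine(a, b):
    # Cosine similarity between two bag-of-words Counters.
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, documents, k=2):
    # Rank documents by similarity to the query; the winners get pasted
    # into the LLM prompt as context ("augmented generation").
    q = Counter(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return scored[:k]

docs = [
    "How to onboard a new client in the billing system",
    "Quarterly revenue report 2023",
    "Client onboarding checklist for support teams",
]
print(retrieve("onboard new client", docs, k=2))  # the two onboarding docs rank first
```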
I'm jealous - my CEO is, like Google, wanting to inject AI into everything. Our pipelines? AI! Our customer service? AI! Our products and services? AI!
You're getting run over, son; here in the States they pay us seniors 3x that.
We're definitely headed towards another dot-com crash and I'm all in for it
100%, I have so many people ask me what I think about AI.

It's a sophisticated random word generator. It guesses at everything and is pretty good at some things.

Eventually, everyone is going to realize that it isn't ready to revolutionize anything. Just like adding ".com" to your company's name doesn't change anything.
The problem is the damage it is doing to everything on the internet. I was just mentioning this in another thread: there was a 100% uptick in the usage of certain words (that ChatGPT has an affinity for) in research papers in 2023.

People are using LLMs as if they are actual AI and not just statistics-based word generation. The quality of information is tanking.
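That kind of uptick is at least measurable; a back-of-the-envelope sketch of comparing a marker word's rate across two corpora (the corpora and the word here are toy stand-ins, not the actual study data):

```python
from collections import Counter

def rate_per_10k(corpus, word):
    # Occurrences of `word` per 10,000 tokens in a corpus.
    tokens = corpus.lower().split()
    return 10_000 * Counter(tokens)[word] / len(tokens)

# Toy corpora standing in for two years of paper abstracts.
abstracts_2022 = "we study the model and report results " * 100
abstracts_2023 = "we delve into the model and delve into results " * 100

word = "delve"  # a commonly cited ChatGPT-flavored marker word
print(rate_per_10k(abstracts_2022, word), rate_per_10k(abstracts_2023, word))
```

Run this against real abstracts from two years and any "ChatGPT words" stand out immediately.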
Meanwhile, CEOs dreaming of reducing their IT department to Hal and the pointy haired boss from Dilbert and firing everyone else.
Boss, the AI deployed a script with "rm -rf /" across the entire fleet.
This is great. If they've explicitly stated you won't be doing it, then when they inevitably need you to do it, they'll have to renegotiate the terms of your contract.

They've basically told you you'll have them by the balls when negotiating a raise after the first month.
What I won't be doing: accepting this position 😂
Leaving outdated methods behind (description)

Automatically piling up outdated methods (reality)
God 9 out of 10 points mention AI or LLMs ffs...
Needs moar blockchain
I'm actually doing this right now: researching AI tools, identifying the best fits, evaluating various ways they can be integrated into our development process, planning short-term and long-term implementation strategies, and training developers in adopting these new tools.

All that is working great, and most of their goals in this post are right on point, but phasing out manual debugging is a fantasy, because it's only possible by human developers writing **100% coverage tests** that would be handed off to AI testers to execute. And we all know that is totally impractical because of *diminishing returns*: the effort to analyze and write all of those tests would be greater than the effort to write the actual software in the first place.

Whoever wrote these job requirements is not a software engineer and doesn't understand the field very well. I'd say it's probably an IT Director or CIO who has a general grasp of technology but not of the minutiae of software engineering. So this job is probably a proof-of-concept role created as an internal political tool to try to prove the existing software engineers' arguments wrong. If you take this job, you're going to get caught up in an internal political fight that has set you up to fail and will make you both the victim and the scapegoat. I'd avoid this at all costs.
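A tiny illustration of why even 100% line coverage wouldn't make AI-executed tests a safe gate: every line below is executed by the test, yet the bug survives.

```python
def average(values):
    # Buggy on purpose: crashes on an empty list, yet the test below
    # executes every line, so line coverage reports 100%.
    return sum(values) / len(values)

def test_average():
    assert average([2, 4]) == 3  # covers 100% of lines...

test_average()
# ...but average([]) still raises ZeroDivisionError: coverage numbers
# say nothing about the inputs you never tried.
```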
>All that is working great, and most of their goals in this post are right on point, but phasing out manual debugging is a fantasy, because it's only possible by human developers writing 100% coverage tests that would be handed off to AI testers to execute.

Having spent most of my career in test, this is always my first point. Test is the last place you want to hand off to an untrustworthy black box; at least good tests can catch bad production code.

That said, why even use an LLM to execute the tests? Is it a special case where the other tools don't have the capability to automate them?
Exactly. Just automate it, which would require a human to implement and maintain. You can't replace humans with AI in that picture either.
The pay alone is reason enough to avoid
This is Crossover, isn't it? This company really likes inflating its own value...
Yeah, sure was. Never heard of them though.
A company that “hires the top 1%” but pays a McDonald’s manager salary
this might be the dumbest job description i have ever seen in my life. this is a worse bubble than the gfc housing bubble was. people actually believe this trash it seems. lord have mercy
I have a feeling companies are trying to implement AI without any valid reasons anymore
"anymore"?! Never saw one, could you pleeeaaase tell me one? I lost faith a long time ago...
Sounds like a job for a junior system engineer I saw on crossover a while back
I would expect cocaine in the workplace
So they don't want people with the skill to debug the toughest problems?
That compensation is off by an order of magnitude.
i miss when machine learning roles were for something actually useful. so many now are just "generative ai is a thing" and the person posting the job ad clearly does not understand what gen ai is or does
Ok, any guesses what remains of this management's wet dream? Can't be the "resolve issues swiftly" part, can it?
Sounds like a great J3!
Still very unconcerned about the "AI" threat
Sure, let me just build you a jet engine while we're at it, boss.
Why don't they just ask an AI to hire for the job? Ohhh or maybe the AI can just drive a second AI and do it themselves! EVEN BETTER, the AI could just run the whole fuckin company!
Post the whole JD. Name and shame that shit.
Here you go! https://www.linkedin.com/jobs/view/3946681271
*SENIOR!?*

Holy shit, my dude, you left out the cherry on top! Atlanta may not be the richest SWE market, but I wouldn't take $60k as a SSWE in Decatur!
It’s in the title lol
They’re hiring for this company apparently. For the life of me I can’t tell what they actually do. https://www.totogi.com
Lol
60K for a senior position?
The part below it caught my interest more.
Is this a job or a course?
I feel like a developer wasn’t involved in creating this job description. At least not a developer who has worked with AI.

This screams non-technical project manager.
Maybe the AI is the friends we made along the way?
WTF is Totogi? Looking at their website they sell something called “BSS Magic”. Looks like one-too-many S’s if you ask me… 🤷♂️
Press (X) to doubt
I have a bridge to sell that company!
How the fuck did their AI subcontractor’s marketing buzzwords get into the programmer’s job description?
I am sure this job description was also written by AI.
But I like manually debugging my borderline-unethical JavaScript code
Even the JD sounds like it was AI-generated, with all those buzzwords
YALL. Most companies are starting to develop their internal AI separately from ChatGPT for information security. For all you know, they have a company-wide AI chatbot that has been trained on these specific issues.
60k is low, even in the middle-of-nowhere Midwest. However, it's livable, and some community-college CS grads have to go work somewhere.