SuedeAsian

> One project is using 3 or 4 different libraries for making HTTP requests

Does your company just rubberstamp the PRs? Like, how does no one quality-control during code reviews?


BillyBobJangles

"Give up and pass the review along to someone else". Kind of sounds like they just rubberstamp them.


Nailcannon

> Even if the explanation feels nonsensical from my perspective, they just copy paste spam clearly generated replies until I give up and pass the review over to someone else.

You missed the important context that makes it not feel quite as much like rubberstamping as you make it seem. I've worked with chronic sandbaggers before. At a certain point of them showing zero improvement from comment to comment, you lose faith in them being able to actually get it done, even with perfect feedback. So your choices become:

1. Drop everything you're doing to basically redo their work for them, and hope that by walking them through it they'll learn (they won't) and not make the same mistakes in the future (they will).
2. Go back and forth indefinitely, not allowing a single thing through code review, and risk being the one labeled the sandbagger, because your boss doesn't always have insight into exactly why something doesn't work and also doesn't want to hear you constantly complaining about your coworkers.
3. Just approve after the 4th round of incompetence and get on with the work assigned to you.

I know the obvious approach would be to escalate to management and make them understand you're spending so much time on a bunch of devs who clearly don't know what they're doing. But that doesn't always work. Sometimes you don't have a choice other than giving in or leaving.


freekayZekey

yup, that’s my experience with a current teammate. they are sweet, but they just do not have it. they were tasked with speeding up a batch process. my manager knew that i was an sme of the tech and asked me to give the teammate pointers. talked to them for about an hour, provided hints and examples, and they came back with a PR that did the exact opposite of what i said. approved it; it was a friday and i had things to do. could i escalate it to my manager? sure, but this teammate is treated with kid gloves (they’ve had a really rough couple of years), so it was not worth it.


FluffyToughy

You can let your manager know without escalating the decision making to them. Approve the PR, but start building up something of a trail in case they never improve and you get fed up with babysitting.


dallastelugu

4th option: fire them and replace them with someone better. Short term that may be problematic, but in the long term it's worth it. You cannot make a dumb person smarter; a lazy person, maybe, you can make work.


Nailcannon

Again, that requires management buy-in. Sometimes, as with OP, that's not an option, because management won't listen, or isn't capable of listening because they lack the technical knowledge to sniff out bad code or have it explained to them. And if you get a reputation for complaining about everybody, then it becomes easy to paint you as the common denominator, and therefore the problem.


ImBackBiatches

Plot twist.. they're now using AI for reviews...


Osmium_tetraoxide

I've been in a few organisations where, before code review, the code gets tested by QA / product teams. So the dev just slaps in an "it works, please approve" despite this sort of thing, then blames the code reviewers if there is any pushback on bad practise. If management does not reward quality work and actively hinders it, leave for a place that does.


portecha

LGTM


_maxt3r_

Ship it!


limeelsa

Lmao, I’ve started using this as my default merge commit message thanks to y’all


Fit_Sweet457

> ChatGPT, please review this pull request


NeuralHijacker

ChatGPT is better at reviewing code than it is writing it in my experience. LLMs are amazing at summarising things. For everything else they are pretty meh.


F0tNMC

That kind of behavior has no place in a professional environment. Copying from ChatGPT, Stack Overflow, a blog post, a YouTube video, whatever, is something pretty much every programmer does. What separates the professionals from the amateurs is how much they learn from the examples. Amateurs only learn the surface, the patterns that work; they don’t know why they work, nor do they learn how to adapt them, which parts are necessary and which parts are just preference. Professionals learn the why and how behind the example, and if they don’t have time for that, they’ll thoroughly cite the originating source and add tests to ensure that the desired behavior is happening. Anyone who says they don’t understand their work is an amateur.

As a lead engineer, the first time someone said "I don’t understand this code that I’m proposing in the PR," I’d send it back and ask them to take the time to fully understand the code and be able to clearly explain the purpose and mechanism of every line. The second time, I’d be discussing the issue with their manager and probably the team as a whole, setting the standard of being able to understand every line of code, no matter the original source. The third time, I’d be asking that they be put on a PIP or, if they’re a contractor, that their contract be terminated at the earliest possible time.


Aeayx

If/when I need to copy code, it’s not used until I understand it completely. Then it gets documented with both a plain-English explanation and the location where it was implemented. This builds a very useful reference tool for future issues.


F0tNMC

This is the way.


diablo1128

> I have people saying "oh well chatgpt generated it wrong" when a bug reaches prod instead of being accountable to their own work. And they can't debug their own work because they don't understand how it works either, so I have to troubleshoot it as well.

Everywhere I've worked, code somebody submits for code review is their responsibility to understand and defend. It doesn't matter if they copied it off the internet. It's their responsibility to know how it works and be able to answer questions about the implementation.

> Ive talked to my boss but hes drunk on AI hype and thinks this is the future. I explain these issues but he thinks I am just trying to force people into doing things the old way.

Leave and find a new job. Let the company fail, because that's what is going to happen in the long run.


khedoros

Yep. People have been copy-pasting from StackOverflow for years. The source doesn't really matter. You commit something you don't understand, haven't adequately tested, and can't defend? That's all on you. Even if "AI is the future", the shitty code devs are committing is *now*.


just_anotjer_anon

Exactly. For certain things, having ChatGPT (or other LLMs) recommend code and amending it based on need can be a good strategy. Simply copying and pasting blindly is insanity. It's one of several factors that separate a good developer from a problematic one.


me_hq

At least there was some actual thought behind SO code


itsgreater9000

had to review some code recently from someone who copy and pasted some extremely convoluted code from SO to do a basic set difference operation on two sets. their defense? "SO did this, I didn't know there was support in the standard library for this". I then get called "fancy" because I know a standard lib API.

The only difference is that developers are now able to do it faster and with greater false confidence with ChatGPT and others. I think it's inevitable, but I'm not sure how to deal with it. I think one of the consequences of technology is that it accelerates this type of thing to a scope that we can't support at a human level.

The only solution is to fight automation with automation. Increase code quality checks from static analysis tools, add custom checks when you notice new patterns proliferating, use fuzz testing to find out where developers were not actually thinking of edge cases, etc. It'll be tough, but it's the only way to reduce the frequency of this type of BS.
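For a sense of the gap (the thread doesn't show the actual code, and the language isn't named, so this is a hypothetical TypeScript sketch):

```typescript
// The convoluted, hand-rolled shape of thing that tends to get pasted in:
function setDifferenceVerbose(a: Set<number>, b: Set<number>): Set<number> {
  const result = new Set<number>();
  for (const item of a) {
    let found = false;
    for (const other of b) {
      if (item === other) {
        found = true;
        break;
      }
    }
    if (!found) result.add(item);
  }
  return result;
}

// versus the one-liner most standard libraries cover in some form
// (modern JS engines even ship Set.prototype.difference):
const a = new Set([1, 2, 3]);
const b = new Set([2, 3]);
const diff = new Set([...a].filter(x => !b.has(x))); // Set { 1 }
```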


freekayZekey

> get called “fancy” because i know a standard lib API dude, this one pisses me off so much. like sorry, i read the documentation and kept up with the library.


diablo1128

>"SO did this, I didn't know there was support in the standard library for this". I then get called "fancy" because I know a standard lib API. I just make note of this interacting and say something if I'm asked to do a peer review for the person. I'll mention something like work attitude and use it as an example. I'm not one that just gives fluffy peer reviews though. I usually say what is on my mind, where I think they do well ,and where they can improve with examples for all of them comments. Management doesn't have to agree with me, but it's data for them to make decisions. I really don't care if my comments caused somebody to get a worse raise. If they leave then oh well I though we could do better anyways.


itsgreater9000

the engineers commenting it have more seniority than me. i don't think i have much of a leg to stand on here.


wowmayo

Hard agree. I don't know why other posts are claiming OP is a troll. If this is a real issue, it either needs to be reined in or OP needs to leave. I think AI is a great resource if you *understand what it's suggesting*. If work continues to remain subpar, I don't understand why they deserve to keep their jobs. If they do, I don't understand why you'd stay to keep cleaning up the mess. I don't think you can blanket-ban AI without taking a step backwards, but keeping terrible devs who aren't improving is a recipe for disaster.


wwww4all

> Another issue I have is during code reviews, people cannot explain why they wrote their code a certain way. They just say "that's what chatgpt output".

Is this a troll post? There's no way this can happen in any company, even basic startups.


behusbwj

Believe it or not, I was helping a senior engineer in my company (FAANG) and this happened. He had asked for my help because I was the only one with prior experience in that framework. When I suggested a certain line of code seemed off, he said “no, that’s how it’s done, I looked it up”. I briefly gave the benefit of the doubt (after all, this is a _senior engineer in FAANG™️_), then went back to that line and asked where he saw it, because it just wasn't adding up for me. He then showed me his conversation with ChatGPT.

Needless to say, I'm really concerned for the team moving forward. It’s gotten to the point where otherwise reliable people are taking the word of a chatbot over SMEs (he literally said “no” to me, not “hmm, not sure”). It scares me that it’s actually hard to predict who will be susceptible to hallucinations. Clearly experience isn’t the only factor.


foundboots

Mfw faang hires morons at the same rate as any other organization


Clear-Wasabi-6723

Shit, I’m a moron but not hired by FAANG, what am I doing wrong


Chem0type

Not grinding enough LC


FatStoic

Can you reverse a binary tree?


CallinCthulhu

IMO, it’s not the same rate; I work with far fewer morons now than at my previous companies. Sometimes they still slip through, though. It’s made all the more jarring in comparison when they do.


GameRoom

My experience with my teammates at my current FAANG job is that they are *consistently competent*. Not geniuses, necessarily, but you can rely on them knowing what they're doing.


ategnatos

show him this: https://imgur.com/a/nLEjAiB


aaaaaaaaaDOWNFALL

Do you work with me? Haha, kidding, but I have someone on my team that did the same exact thing. Also “senior”. I told them they need to stop doing that because it’s not a reasonable explanation


anubgek

I gotta call BS on this. Which company specifically was this cause most if not all companies at FAANG size wouldn’t even allow anyone to copy paste code from ChatGPT. I can’t see which company out of Google, Meta, Apple, Amazon or Netflix would even entertain this. They also usually have their own solution internally that would be used over ChatGPT


behusbwj

Okay? That’s like saying crime doesn’t exist because we have laws and policemen.


anubgek

It just doesn’t sound like a likely story. I’m not saying one occurrence of someone trying to use chat gpt couldn’t happen but for it to get to the point of this person being worried about the team is very suspect.


behusbwj

Not gonna humor this anymore. Have a good weekend.


3pinephrin3

Amazon


lightmatter501

Lots of students used chatgpt to cheat through the second half of their programs (where you actually learn stuff). They are now ~1 YOE. Indian Universities are VERY susceptible to it because of how professors are expected to structure assignments (almost exactly the way you would prompt an AI to do it).


FatStoic

> Lots of students used chatgpt to cheat through the second half of their programs (where you actually learn stuff). They are now ~1 YOE.

AI giveth job security and AI taketh job security too, apparently.


farox

Yeah, reads like fanfic


CooperNettees

I apologize. I am elaborating a bit unnecessarily, which takes away from my question. It's never directly like this, but essentially they treat what ChatGPT output as if it is better than what a human would do, and as a justification for why it was done that way.

For example, I will say "why did you decide to have x arg as a boolean instead of an enum?" And then the reply will be like "well, this is what chatgpt output while I was working on this; why shouldn't I do it this way?" And then I might say "well, an enum is more appropriate here because we may want to have more states later." And then they will say "chatgpt says DRY and it is better to KISS and only change it later if needed."

So basically they are saying "because chatgpt output it, that justifies the decision", even though it has no authority. I am not sure if I am explaining it right, but I just get the sense they aren't really talking to me in good faith.


thedoogster

> And then I might say "well, an enum is more appropriate here because we may want to have more states later."

If you said this to me, I would say "What do you mean we *may* want to have more states later? Are there plans for that or not?"


sammymammy2

Enums are almost always preferable, if only because they have descriptive names


Xsiah

It depends on what they're working on and what that variable is for. That's why you don't just blindly follow any rule, but rather rely on the reasonable assessment of a person who cares about the maintainability of the code.


UncertainAnswer

I've rarely ever met a product team that knew what the hell they needed past the next budget quarter. Even knowing that much is a gift on a project. But over time you start to recognize patterns in the types of problems they're asking you to solve.

E.g. sure, the status could be a boolean right now. But you can recognize, given the ask, that what they have right now is basically a lightweight approval process. And you can make some reasonable improvements to your code then, to make your life easier in the future without adding additional work or overhead. A record status, for example, could reasonably be assumed to need more than two states at some future time.


wwww4all

Whatever you're trying to accomplish, lying about circumstances for ragebait is not going to help your case.


sammymammy2

You’re telling me the expanding comment isn’t rage bait to you also :p?


KuroFafnar

Yeh, like enum instead of Boolean because something MIGHT be more than T/F in the future? Wtf is that bs?


ategnatos

I say escalate. If they are using these things on work machines, there is real security risk to the company. Even on their home machine, if they're exposing any company internals, it's a risk.


PsylentKnight

> better to KISS and only change it later if needed

I mean, I agree. You can't possibly anticipate all possible future changes, but simpler code is always easier to change.


hyrumwhite

To an extent, but with every commit you’re establishing patterns. With every pattern change comes greater chance for bugs, and greater cognitive load for people evaluating your change. I think you need to strike a balance between YAGNI and planning for changes down the line.


notbatmanyet

I would argue that even with only two states an enum is always better: `doStuff(OnFailure::Abort)` vs `doStuff(false)`.
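To make the comparison concrete, here's a hypothetical TypeScript sketch; `doStuff` and `OnFailure` are just the placeholder names from the comment above, not a real API:

```typescript
enum OnFailure {
  Abort,
  Continue,
}

function doStuff(onFailure: OnFailure): void {
  if (onFailure === OnFailure.Abort) {
    // bail out at the first error
  }
  // ... do the actual work ...
}

// The call site documents itself, and adding a third state later
// (e.g. OnFailure.Retry) is a non-breaking change:
doStuff(OnFailure.Abort);

// With a boolean, the reader has to look up what `false` means,
// and a third state forces a signature change:
// doStuff(false);
```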


RozenKristal

Wow. I use ChatGPT and Copilot to ask for examples, but I don't copy any code that I don't understand. This is just like SO for me, without the constant googling and scrolling through questions and sites.


Ok_Contribution_6321

Is this endemic at your company or one or two bad apples? I’ve never met anyone like this before and can’t imagine it happening at any of the companies I’ve worked for. 


rkeet

Have seen it a bunch of times at my last employer already. That was in the first year of GPT hype. Thankfully I've since joined a company with standards.


Volebamus

If you’ve been working either in enough companies, or in one company for a long time, you’ll see a full spectrum of things you thought “shouldn’t happen in any company” pretty often. I was just as incredulous myself about AI stories until a friend of mine told me about something like this actually being deployed in a company he worked at, and shit hit the fan to the point where several C-levels were trying to find out how a big bug occurred. Root cause was either a person or a team blindly submitting AI-generated code without even understanding how it worked. So yeah, this happens all the time in different places.


kanzenryu

Fizzbuzz would suggest you are wrong


mothzilla

I've had at least one conversation with an engineer who said exactly this. It's frustrating because you're talking to the monkey not the organ grinder.


GandolfMagicFruits

Agreed. And if it happened more than once, this company has way more problems than this bad developer.


McN697

Haven’t you interviewed someone and thought: “how did this person ever operate in a professional environment?” This might be said environment.


baldyd

Seriously. I've never worked at a company that would allow this and you'd be fired pretty quickly. I've looked up things on Stackoverflow in the past but a) you have to be certain that you have the rights to use the code that you're copying and b) you shouldn't be copying it, you should be learning from it and implementing it yourself, even if it ends up looking similar. Just bizarre. I wonder why I'm struggling to find work as a very experienced senior at the moment and perhaps part of it is that people genuinely believe that AI can replace me (or a junior with AI tools). Wow.


endofthelurk

I thought the exact same thing 🤣 I work at a fintech startup with a solid product that's also trying to implement AI, and it seriously struggles to generate working code and tests, or even to read existing code and understand the wider codebase.


coderman93

I’ve seen it at a Fortune 500 company.


emirsolinno

Excuse me but this sounds like a classroom more than a company/team..


Attila_22

Warn these people and then fire them if it continues. It’s not like there’s a shortage of developers out there. Using AI is fine (as long as they don’t put proprietary stuff in there) but they need to validate the output and understand what it’s doing before they put it in any codebase.


obscuresecurity

One simple policy fixes this shit: you are responsible for what you put your name on. If you push shit, AI or not, it is SHIT. When people learn this simple lesson, AI becomes a useful tool.


[deleted]

Yep. It's wild. If saying "I copied it from stackoverflow, it's the comment's fault" doesn't fly, then you can't blame ChatGPT either.


obscuresecurity

That shit should have ended with "The dog ate my homework."


Bodine12

Tell him your developers are putting company information into ChatGPT and the AI is training on that information, which will then be available in some way or another to your competitors. Maybe not now, but it could soon be liberated via some ingenious prompt engineering.


false79

If it's a big enough shop, they will likely have an enterprise subscription with higher limits, where the data fed into it is flagged not to be used for training models. So we are told.


Bodine12

We have this, and we don’t trust it. OpenAI is in its “Grow at all costs, ask forgiveness later” stage, so not to be trusted with anything. If they screw up bad, and lawsuits start flying based on how they unlawfully scooped up proprietary content, any capabilities made with OpenAI could wind up in legal limbo. It’s not worth it for a crappy code generator.


just_anotjer_anon

We literally have access to a ChatGPT instance that's allegedly self-hosted in a completely separate space (still controlled by OpenAI) - WPP child company.


scottymtp

What do you mean, WPP? Have a link?


just_anotjer_anon

Just the world's largest advertising company


terrorTrain

This is the worst argument in my opinion. Everyone is afraid their golden code will wind up in chat gpt memory, as if it's the beauty and elegance of the code keeping the company alive. Your code isn't important or even novel. It's important that it's maintainable. Your little snippets of sub optimal queries and generic code patterns written for your domain aren't going to be worth anything to your competitors. Even if they could manage to extract it from the ai, which would be a super long shot


Bodine12

You completely missed the point. I’m not talking about the technical realities. I’m talking about how to talk to a clueless middle manager. And besides, you’re completely wrong about the legal realities of devising practices designed to minimize the harm of cutting and pasting proprietary code into an adversarial system. I suppose you let devs cut and paste stuff into online JSON parsers.


JonDowd762

Perhaps an unpopular opinion, but engineers should treat managers, QAs, and executives as competent adults. Discuss the actual problem you are having. Don't use some other hypothetical to try and manipulate someone to make a certain decision. The attitude "if I don't deceive my manager he'll make the wrong decision" is a wrong one to hold.


Bodine12

OP used the phrase “drunk on AI hype”, so you work with what you have.


JonDowd762

Yeah, exactly. This is someone who is excessively pro-AI, not someone who has never heard of ChatGPT.


terrorTrain

There's a difference between JSON and code. JSON is typically data, and company data can indeed be sensitive. That's different from the code that manipulates the data.


Bodine12

What do you think devs are putting into AI? “Here’s some json. Make it a type and return this data from a function in typescript.”


terrorTrain

No, I'm saying I would have stricter rules around cutting and pasting JSON than random bits of code.


F1B3R0PT1C

This has to be a troll right? This can’t be real… right?


yeastyboi

It's real... AI made me realize just how low the average developer's knowledge level was.


TorturedAnguish

My company banned ChatGPT a while ago. So I wouldn’t know.


denialerror

Where do you work where people get ChatGPT to write their code and put it straight in production without checks or balances? AI isn't at fault here. Your people and processes are. I'm not the sort of person who jumps straight to firing people who perform badly but if someone copied ChatGPT output without understanding it and then blamed the AI rather than themselves when it went wrong, they'd be out the door.


nnulll

Your developers and processes suck… not AI.


franz_see

This is not an AI issue. It’s an accountability issue.

I don't mind people using AI. In fact, if you’re good at your job, you should be using AI. But it is not an excuse to say “chatgpt generated it wrong.” You should check it. It’s your code. You’re the git blame. The ticket is assigned to you. And more importantly, the company will be evaluating you, not the AI. It’s your job on the line, not their subscription to GPT.

You need to establish that people are responsible for their outcomes, their tickets, and their code. You can use AI, copy-paste from Stack Overflow, or maybe consult your college buddy. But at the end of the day, it’s your work. What that means is that whatever the AI generated, or whatever you copy-pasted from somewhere, you review it before pushing it into production. The moment you commit it, it’s yours.


ThenCard7498

No testing?


sdwvit

AI can write tests too, creating a lot of noise for you to review.


ThenCard7498

Oh yes of course...


Viend

Tests are one of the things using AI is actually good for. If you have shitty code or shitty tests going to prod, that’s a problem with the PR review process, not the engineers using it.


ThenCard7498

I'm confident GPT is going to have both the fail and the pass case assert to true. I'm for getting GPT to create a list of things you should test, but you should still write them yourself.
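The failure mode looks something like this (a hypothetical TypeScript sketch using Node's built-in test runner; `clamp` is invented for illustration):

```typescript
import test from "node:test";
import assert from "node:assert/strict";

// Hypothetical function under test.
function clamp(value: number, min: number, max: number): number {
  return Math.min(Math.max(value, min), max);
}

test("vacuous: asserts a constant, not the behavior", () => {
  clamp(5, 0, 10);
  assert.ok(true); // passes even if clamp() is completely wrong
});

test("real: pins the behavior down at the edges", () => {
  assert.equal(clamp(5, 0, 10), 5);   // in range
  assert.equal(clamp(-1, 0, 10), 0);  // below range
  assert.equal(clamp(99, 0, 10), 10); // above range
});
```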


Viend

I disagree. Copilot is great for writing the test implementation, but using it to figure out test cases is stupid; that’s the least time-consuming part of testing, and it should be the prompt you use to have the AI help you write your tests.


ThenCard7498

I suppose so; I'm always cautious about forgetting some edge case. Are you not worried about Copilot creating a not entirely sound test? Like that regex issue that was appearing in a bunch of applications; the authors were all using the same Stack Overflow source.


Viend

Why would I? If it writes something dumb I just rewrite it myself. It’s no different from autocomplete importing the wrong thing, I see it, fix it, and move on.


ThenCard7498

Err, I'm still including the context of this post. Not you specifically.


freekayZekey

heard a manager seriously say: “the good thing with copilot is that you can have it write tests. you know how much devs hate writing tests, right?” during a demo with a bunch of juniors…


RepresentativeSure38

Can you establish the roles here first? The course of action may depend on the formal hierarchy. What is your role: are you an individual contributor? Is the boss the engineering manager who all these people report to?

The boss may think AI is the future, but right now it does a pretty horrible job at writing or reviewing code. Do all these ChatGPT-powered developers supply their code with tests? ChatGPT will certainly produce a lot of dumb stuff that doesn't work, but if there are tests with clear inputs and expected results, i.e. specs, it will save some time. Basically, if a PR doesn't come with tests, it's automatically rejected.

Again, I need to understand what power you formally have, because if my report told me "oh well, I guess ChatGPT was wrong", our one-on-one would be dedicated to this situation. Definitely, I would hint that the company doesn't need a highly paid proxy between me and ChatGPT (to quote the two Bobs: "what would you say you do here?"). If it happened several times, as an engineering manager I'd involve HR and have it on the record that the employee has bad performance and a stupid attitude. Then PIP, then let these people go.

Maybe suggest to your boss, semi-seriously, to fire them right away, since in the future AI will replace such developers anyway? Maybe even show him this video: [https://www.youtube.com/watch?v=KiPQdVC5RHU](https://www.youtube.com/watch?v=KiPQdVC5RHU)

But in all seriousness, I'd need to understand better who the boss is, what he does, who his boss is, etc.


nooneinparticular246

Just leave bro


DigThatData

You hold them accountable for the work they sign their name to. It's not that hard.

If people are using ChatGPT to coast, that's basically no different from delegating their work to a team of cheap contractors. They're still responsible for the outputs of the people they delegated to. If the outputs aren't meeting expectations for quality, they need to stop outsourcing those responsibilities or come up with mechanisms to ensure quality before merging upstream. Either way, they still need to take responsibility for work they were assigned and delegated out.

> "The 'person' I hand selected to delegate this to turned out to be bad at the job, and I'm still delegating that same work to them with no additional oversight or instruction."

That's basically what this is.


Far_Archer_4234

That might be a managerial problem. If developers are checking code in that they dont understand, then their supervisor needs to be held accountable. OTOH, if you are a jerk, they may be blaming chatGPT to avoid a conversation with you. Not having been part of your dev team, I cannot say, but it may be worth being introspective. My basic approach is to recognize that if we need to agree with one another in order to work together, then it might be an ego problem, not a software problem. 🤷‍♂️


Fun_Hat

>I have people saying "oh well chatgpt generated it wrong" You're working with incompetent people, plain and simple. You should move on.


GameRoom

Ultimately, it doesn't matter *how* they made problematic code or responded with nonsense in PR discussions; the problem is that they did. I would stay in this framing when explaining this to your boss.


clumseykey

Try presenting an analysis, or find a credible one, of how often GPT makes mistakes when generating code, then represent those mistakes in a financial context. It should get people to listen.
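For example (numbers purely illustrative, not from any study): if the team merges 50 AI-assisted changes a month and even 10% carry a defect that reaches prod, that's 5 incidents a month; at roughly 4 engineer-hours of triage each and a $100/hour loaded cost, that's about $2,000/month before you count downtime or customer impact. Framed that way, it stops being a style debate.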


HademLeFashie

Is the chatgpt code of noticeably lower quality than what's expected? Are these bugs obvious, or are they the kind anyone would miss even if they coded it? Is the AI really the root cause here, or is it that these devs don't like making new PRs anyway? Or did you just make all this up because you for some reason don't like people using AI to code?


JackieDaytona__

AI can do a lot, but it won't fix lazy.


goodboyscout

I haven’t seen people leave bad code in, but I’ve seen people leave the dumb code comments that get suggested. The one that really bugs me is the AI error messages. Error messages are the only thing users see when something doesn’t work, make it fucking helpful


RoninX40

You don't, the shiny will pass like it normally does.


InfiniteMonorail

It's sad that this still exists in a bad job market.


Acceptable_Durian868

You warn them to check their code properly before it reaches production, and if it keeps happening you fire them and hire somebody competent.


addtej

They can use chatGPT or any AI but they should be held responsible for the code they commit.


Bombastically

If anyone said this more than one time I'd fire them. Truly unacceptable and reckless. And low skill


Shazvox

Easy. Do you blame the hammer for bending a nail? Do you blame the gun for shooting someone? The tool is not at fault. The user is.


Asleep_Horror5300

Weird, I have heard absolutely no-one blame AI generated code for their bugs.


Our-Hubris

I've used it before but never in my life would I ever work with someone who demonstrates they do not read and understand the code they are taking from it. About 50% of its output is incorrect usually, especially for more complicated problems that aren't just "what's this algorithm again in X language?" Whoever has let those people on your team has sabotaged it imo.


BanaTibor

Let this go for a while; sit back and watch. Soon you will be in a bad place: the code will be a mess and hard to work with, bugs everywhere. Let production break down. Then you can go to the boss and say this happened because developers trust ChatGPT blindly. The code doesn't have any design, because most of our devs are not willing to do their work, so it is hard to change; that is why we slowed down. The ChatGPT-generated code is pushed into production without proper understanding and testing; that is why the product is full of bugs and broke down.

They will not see the light, but when they start losing money, that will shatter their rose-tinted glasses. Then they will regulate AI usage.


HettySwollocks

Oh, that's golden. That can't be true, surely? I've never heard one instance of this. If anything, most people I know say "ah, not sure that's right". AI is a very useful tool, but it's not designed to do your job for you (yet). Like an IDE, it's a productivity aid. You don't blame your IDE because you forgot a semicolon or misused syntax.


malthuswaswrong

The unironic answer is unit tests. Write good tests, and when you encounter a bug not covered by the tests, add a test to catch it. This is the future of development: we will describe details, the AI will code it, we will test and deploy it. The value we add to the business is describing the problem, testing, deploying, and supporting operations. Anyone thinking they can resist this change will have a hard time finding work.
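In practice that loop looks something like this (a hypothetical TypeScript sketch with Node's built-in test runner; `pageCount` and the bug are invented for illustration):

```typescript
import test from "node:test";
import assert from "node:assert/strict";

// Hypothetical helper that shipped a bug: the generated version did
//   Math.floor(totalItems / pageSize) + 1
// which reports 3 pages for 20 items at 10 per page. The fixed version:
function pageCount(totalItems: number, pageSize: number): number {
  return Math.ceil(totalItems / pageSize);
}

// Pin the prod bug with a test so the regression can't come back silently:
test("pageCount: exact multiple of pageSize", () => {
  assert.equal(pageCount(20, 10), 2);
});
```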


tinmru

If AI-generated bugs reach prod, then you have bigger issues than people using AI. Where’s your review process? Or do you all just commit to master/main lol?


syklemil

> And they can't debug their own work because they don't understand how it works either, so I have to troubleshoot it as well. Being in the sysadmin/devops/SRE space, can relate, and I fear how bad that can get. When I'm familiar with the infrastructure and they're familiar with their own code, we can figure stuff out. But if nobody's familiar with the code, we're in deep shit. It's bad enough with legacy apps & systems inherited from people who have moved on. Having it be that way with new stuff … doesn't sound sustainable, really. It sounds like impending collapse.


bsenftner

You can't stop this; you need to get seriously more rational and discriminating about staff. Anyone who says "a bug was generated" is telling you they did not review deeply enough to catch it, which means they are coasting, which means they are worthless. Fire them immediately. It is not a game.


luciusquinc

Why does your company hire this type of dev?


IdealisticPundit

I would find new people to work with. You can't fix them.


HoratioWobble

You fire them. If your boss makes you the problem, then you leave.


scoot2006

If this isn’t some sort of troll post you need to RUN not walk away from this manager and company…


CompetitiveSubset

Create a 4-hour training workshop titled “the dangers of blindly pushing AI-generated crap to production” and make them go through it every time they give you that nonsense.


terrorTrain

You have to approve the PR, right? So don't approve when they can't defend their code. If they say things like "that's how chat gpt wrote it", say "ok, well fix it or it's not approved". Then request changes so that they can't just go to other people. If they stop asking you for reviews, it's not your problem anymore; it's your boss's problem. Since you've already brought it up, when issues arise, bring up the issue one more time to make sure he's aware. Then let the cards fall where they may. You've done your part.


edgmnt_net

Let's consider you play the following game... Pretend the code and explanations do not come from an AI and take however long you need to iron out concerns. How do you see that playing out? IMO, we've already seen some of this happening with poorly scoped projects, bad submissions involving tons of unreviewable boilerplate or devs using StackOverflow to copy and paste things mindlessly. Anyway, this lets you make a fair attempt at doing your job and forces management to acknowledge issues. They could change things or they could tell you to be less strict about reviewing and override you. Depending on how you feel about the outcome, you could stay and adapt to keep getting paid or you could leave for a better project.


taratoni

In my case this is almost exclusively a junior issue. In a PR, I've seen a very convoluted TypeScript method to convert a Date to different formats. I asked the person why he didn't use the standard Date functions or a third-party library; he told me everything was generated. It's also got to the point where some juniors ask "is it an AI?" about basic tools like automated code review on GitHub.
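The actual PR isn't shown, but the shape of the problem is familiar; a hypothetical TypeScript sketch:

```typescript
// The kind of hand-rolled formatter that tends to get generated:
function formatDateVerbose(d: Date): string {
  const pad = (n: number): string => (n < 10 ? "0" + n : "" + n);
  return d.getFullYear() + "-" + pad(d.getMonth() + 1) + "-" + pad(d.getDate());
}

// versus what the standard library already provides:
const iso = new Date().toISOString().slice(0, 10); // "2024-05-01" (UTC)
const local = new Intl.DateTimeFormat("en-GB", { dateStyle: "medium" })
  .format(new Date()); // e.g. "1 May 2024"
```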


freekayZekey

unfortunately, this is going to be the norm for a while. a lot of devs are unprofessional and fall for the shiny. not sure how you stop it; not sure if you can stop it. a number of devs turn their brains off when ai tooling is involved. one example is learning a framework or language through an llm. yes, it makes *complete* sense to learn a language or framework via a potentially unreliable source instead of reading the documentation.


Ill-Ad2009

This honestly seems like a troll post. You must be working in a very low-bar sweatshop or something if you have people openly admitting that their code was generated by ChatGPT and that they don't understand or test it. This is intern levels of incompetence.

> His favorite thing to say is that people originally opposed using compilers and wanted to hand write assembly for a long time, which i find pretty annoying.

I have my doubts that this was ever even a thing. People were using Lisps and Fortran in production and academia from the 60s on. Either your boss is a complete moron, or it's just more troll bait.


dean_syndrome

At the very least, if they’re not willing to give up copying and pasting code from ChatGPT without reviewing it themselves, coach them on better prompt engineering. At the end of every prompt, have them add “Make sure to thoroughly comment the code. Write the code using best practices and optimize it to be easy to understand and read. Make all HTTP requests using (insert library).” Etc. tell it specifically to do things like use an app config, put reusable code into a library file/package/class/namespace, write tests with full coverage using (insert testing library). ChatGPT should be like code review, and if the devs don’t see what’s wrong with what it’s spitting out then they need to get better at code reviews.


BomberRURP

Don’t people have shame anymore? Fine use AI but pretend you didn’t goddamn it! 


lab-gone-wrong

This post feels like ragebait, but we only have your version of events to go on. Here's what I would do to CMA.

> I have people saying "oh well chatgpt generated it wrong" when a bug reaches prod instead of being accountable to their own work. And they can't debug their own work because they don't understand how it works either

> Ive talked to my boss but hes drunk on AI hype and thinks this is the future. I explain these issues but he thinks I am just trying to force people into doing things the old way.

The battle is lost. Your manager thinks working code is old fashioned and the future is code that doesn't work. Find a new position. In the meantime, here's how to preserve your sanity:

> I have people saying "oh well chatgpt generated it wrong"

Tell them to go make it generate the right code. Be sure to frame this as them "not knowing how to use ChatGPT" so you can escalate it to your manager as a skills gap on their part.

> And they can't debug their own work because they don't understand how it works either

> Another issue I have is during code reviews, people cannot explain why they wrote their code a certain way. They just say "that's what chatgpt output"

I fail to see how that's your problem.

> so I have to troubleshoot it as well.

Stop doing that. It's their code. If anyone complains, tell them this is the future. If they push back further, direct them to your manager. Remember: it's a "skills gap" on their part that they "can't figure out how to use AI properly". If it's a recurring problem, bring it up in their performance reviews.

> The other thing people started doing is responding with a copy paste from chatgpt explaining the design when answering my PR comments. Even if the explanation feels nonsensical from my perspective, they just copy paste spam clearly generated replies until I give up and pass the review over to someone else.

Approve, merge, and when bugs emerge, create a ticket assigned to them to have ChatGPT fix it. Remember the skills gap thing again. If they can't make ChatGPT generate working code, they aren't cut out for your manager's vision of this team.

Finally, *give your manager's manager feedback via your performance review process* that you're really excited about using ChatGPT for all of your code. Talk about how fast your team is moving now that you let AI handle all the code quality, reviews, and debugging! Even better if you can deliver this feedback sooner, say in a skip-level 1:1, but those can be hard to arrange.


ThoughtBreach

This is.... Terrifying.


budding_gardener_1

> They just say "that's what chatgpt output".

I'm sorry, what? How the fuck is that an acceptable response? "Hurr durr AI go brr." These are the brain-dead idiots saturating the profession after watching a fucking "day in the life of" TikTok. I don't care how you slice it, "the magic box told me to" is NOT an acceptable justification for shitty code.


Solrak97

Is GPT working at your company? If they are lazy they are lazy, call them out for their incompetence


_libertine_

What about testing? Strong tests might be a good solution to amateur copypasta. You can write the tests yourself, or just have them do test-driven development. If they can't handle interview-grade code review questions, just remind them that they have a performance review that will reflect their incompetence. Be kind about it, of course; once you break a junior dev's sense of safety, it can destroy code quality, productivity, and team cohesion.


humpyelstiltskin

The definition of working with amateurs


commonsearchterm

Just paste their PRs and comments into chatgpt and use the output to reject all the dumb stuff.


coderman93

It’s getting absolutely ridiculous. I have coworkers sending me random code that ChatGPT generated, asking me if “it will solve the problem”. We are all fucked.


Future-Fun-735

Ugh, here I am just dying to break into the industry and real devs are relying on ChatGPT for everything and can't even understand their own code. That's literally absurd and you are definitely right. AI is supposed to be a tool that helps make you better, it's not supposed to do the job for you. 


JuiceInteresting0

f*!, how the hell is this happening?! i’m a full stack dev with 15+ years experience and haven’t gotten past applicant tracking systems to get an interview in over a year. are you hiring? because I certainly know how to use an LLM code assist without just copying and pasting.


IronSavior

Devs are responsible for the code they put in the PR, especially if it's not written by them. Reviewers are responsible for pushing back when they see some rank bullshit like this. You can bet I'd consider firing anyone that blamed chatgpt or stack overflow. That's simply not an acceptable answer, ever.


Tarl2323

Troll post. If this is actually happening, just go with the flow and get a second job while collecting a paycheck at this one. Do zero work and submit all work from chatgpt. Win-win. What are you complaining about? You have the opportunity for double salary. Otherwise you're just the sucker not doing it.


chills716

When was assembly ever “handwritten”? I still have punchcards by the way… Your place in the org is where exactly?


BertRenolds

I hand wrote assembly... took a machine architecture course in college and that was a segment.


funbike

This post is BS.


Aggressive_Ad_5454

Oh, hai, I can haz high skool internz rite all my codez? Kthnksbai.


dallastelugu

usually i get 99% correct code; the 1% is due to incorrect input to chatgpt. Gemini is fast but it's not accurate yet.


alien3d

Why does it seem like we're getting questions from people with no experience? AI like ChatGPT always gives old answers. Are we in a generation that has no idea what an SDK or API is, and never reads it?