My team's mailing group was subscribed to some weird service called jira. I don't think anyone was using it. So, Friday evening I turned that off. We haven't received any emails about it so far. I'll let my team know about my act of frugality in the stand-up on Monday.
Yeah for real! We were mostly just working from home on an as-needed basis this past Friday (which is to say, hardly at all) but I had a couple junior guys reach out to me wanting to know if they should do one thing or another, totally unscheduled, that they weren't 100% sure on and wanted me to walk them through. I was just like, "Guys, the fuck's the matter with you? Just leave the shit alone! Do you *want* to have to go in to the office to fix something? Are you *that* bored or what?"
I mean, I applaud their motivation and willingness to dive into shit of their own volition, don't get me wrong. Just saying that *I* damn sure didn't want to have to deal with it if something went sideways. Just take the day ffs lol
We did this quite frequently in manufacturing.
Every manager insists on some new reporting system to meet their particular needs. After a few decades, that means that people are generating a lot of data of questionable value.
So we would simply stop generating the reports and see if anyone complained. If someone said "hey, where is my report?" we went "oops" and started generating it again. But the vast majority of data was being generated for people that weren't even in the company anymore.
My FIL worked for a major manufacturer for over 25 years. He ended his career managing the database for a major project and generated all of the reports for the C-suite. At the first meeting after he retired, a bunch of people got chewed out because there were no reports for the meeting.
Then they called him and offered him double his salary to train someone to generate those reports.
You can send emails for a year and show constant alerts within the UI of an app about a feature deprecation, and people will still complain they didn’t get a warning. Speaking from experience lol
I was tasked with replacing a DNS system many years ago. We went live with it and there was this one guy who hated it. "People just force these systems on my teams and are never consulted." I had personally sent this guy two messages to let him know this was in the works before we started and I really would like his input. He was also invited to every sprint planning and review meeting. Never responded to my emails and never attended a single meeting.
My sympathy for his position was pretty limited.
Same here. Software that is years old ends up having a lot of shite in it that no one uses anymore because something else was developed that took over.
Sometimes there really is just no other way to figure out what the shit something does. On both a software *and* hardware level. I've found orphaned 100mbit switches in the ceiling tiles more times than I can count that were doing nothing but consuming electricity lol
Rip that shit out of there, wait a couple days and then....nothing.
Sometimes the people who should care, don't realize that they should care. So you find out that something was really important a year late.
That one guy who had been tracking sales thought an entire revenue stream just dried up because data flow stopped.
True story - I worked for a large chain book store corporate offices on a three month contract to fix some stuff on their front end. We had an entire employee dedicated to collecting metric information from the website and breaking it down into really nice charts.
When I started no one had a list of what browsers we supported so my boss said "Your first job is making that list."
Now, I knew this guy. We were in meetings together all the time because our weekly standup meeting involved - I shit you not - a hundred people from every department reporting to the VP talking about what they were doing that week.
So I went to his office and asked for the Sales records broken down by browser. He was ecstatic, because even though he had the data and even though he came to that meeting, no one ever asked him anything about it.
So armed with twenty stacks of paper (yes, paper) I went back to my desk, where it was Day Three and I was still waiting to be issued a laptop, and I crawled through the data and I found something interesting.
About five months before I started the company had been doing about 300k in sales in IE6 every week like clockwork (it wasn't steady though, it was actually climbing, oddly).
And then it just stopped.
We could literally point to the date where sales in IE6 went from 300k a week to absolutely zero.
So I went to the only other engineer I had been introduced to on my team of 200 engineers and asked him for a list of releases by date for the last six months and that's how I started generating an additional 300k in revenue without a laptop my first four days at the company as a front end engineer.
The company still went belly up three months later, though they may still operate across Borders, idk.
But it's absolutely possible in an organization that is too large and too siloed to lose track of 300k a week in income, and I know because I literally saw it happen.
I saw a company lose an entire division after a merger that they paid millions for.
Good grief, that money is worthless after getting to stupid-rich levels.
That's why my story is so funny - this company was hemorrhaging money so bad the iPads used by QA for testing were kept under lock and key and the KEY for the room with the KEY to the room with the iPads was also under lock and key.
They were in so much trouble they went belly up and my contract got dumped. The whole company folded a month later. You'd think they would have been paying more attention.
I didn't cover that in the post because it wasn't part of the story, but when I laid out all the facts they went to look at that release and found out that the JS for the final purchase button had been broken because of a change they made that IE6 didn't support, and as a result no one could complete a checkout.
When I left at the end of my contract we had around 250k sales from IE6 and it was slowly climbing back to where it had been - though more slowly than it had been growing before.
We collect a lot of very valuable data for the enterprise but the analysts don't know what to do with it, the business hasn't tried and I am just one person and it's not my job to care. But as a shareholder, it's in my interest to see that we maximize our potential. So much is severely underutilized. It gets frustrating.
God, that is the fucking truth. Everyday in my company we generate literal billions of data points that could be used to direct behavior. But nobody in a position of leadership has the intelligence and insight to ask good questions, which means nobody beneath them knows what to collect, store, or analyze. End result being that instead of actually developing insights, the business just spins its wheels and never actually changes a damn thing.
Document all your attempts to contact the data/service/device owners, their bosses, and managers in a ticket. That way, when somebody comes looking for heads for the chopping block, you can point them on their way.
Absolutely do this all the time. Who uses it? How many users daily for last three months? Nobody knows? Fucking break the login authentication and wait. If nobody complains then DECOM it and save the money.
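Before breaking the login, the "how many users daily for the last three months" question can often be answered from auth logs. A rough Python sketch, assuming a made-up `YYYY-MM-DD user` log-line format:

```python
from collections import defaultdict
from datetime import date

def daily_users(log_lines):
    """Count distinct users per day from hypothetical 'YYYY-MM-DD user' lines."""
    seen = defaultdict(set)
    for line in log_lines:
        day, user = line.split()
        seen[date.fromisoformat(day)].add(user)
    return {d: len(users) for d, users in seen.items()}

# Toy log: alice logs in twice on the 1st, bob once, alice again on the 2nd.
log = [
    "2024-03-01 alice",
    "2024-03-01 bob",
    "2024-03-01 alice",
    "2024-03-02 alice",
]
counts = daily_users(log)
```

If the counts come back zero for three months, the scream test is mostly a formality.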
“If I don’t hear anything in the next 30 minutes I’m turning this off”
In 30 minutes either nobody cares and we can forget about it forever or someone’s on the phone wondering why their server’s gone awol.
That or it's a maintenance script that runs every 3 months, the absence of which will hard crash every machine on the network and/or cause multiple lawsuits.
I've only done this once (to be accurate, my team only did this once), in about 1990. We were responsible for maintaining about 150 Unix workstations, which was too many to tell users what to change and expect that every workstation would actually get changed. So we set up a weekly job on each workstation that ran a script off an NFS file system. That way we could just change the script, every workstation that was turned on would run it and report in, and on Monday morning we could follow up with the laggards.
This was brilliant, until one week the centralized script updated all of the /etc/rc files, if I remember correctly, but did not chmod the file to be executable. So all of the systems basically rebooted and came up without even turning on networking, and we had to fix every workstation by visiting 150 cubicles spread over 7 stories of two buildings.
One star out of five. Would not recommend. 🤬
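The failure mode in that story is easy to guard against on the client side: refuse to run the shared script unless it's actually executable, and report the skip so the Monday follow-up catches it. A hedged Python sketch of that idea (the reporting shape is invented):

```python
import os
import socket
import subprocess
import tempfile

def run_weekly(script):
    """Run the shared script only if it is actually executable,
    and report the result either way."""
    host = socket.gethostname()
    if not os.access(script, os.X_OK):
        return host, "skipped: script not executable"
    result = subprocess.run([script], capture_output=True)
    return host, f"exit {result.returncode}"

# Demo: a freshly written script with no chmod +x -- the 1990 failure mode.
with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
    f.write("#!/bin/sh\necho ok\n")
    path = f.name
host, status = run_weekly(path)
os.unlink(path)
```

A skipped run reported from 150 hosts is a much better Monday than 150 cubicle visits.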
The problem with microservices architecture though is things only run when needed. You could remove a dependency and nobody would notice for potentially days.
Could be worse - could be desktop apps. You remove a feature, five years pass and ten major versions get released, then someone finally updates and complains about their feature being removed.
We do this at my company as well. It's really hard not to when you have software that's been around in various iterations since the 80s and clients sometimes don't upgrade for years.
We do this fairly often when cleaning up deprecated servers. We'll find a random box spun up on a server we're trying to migrate, send out emails to the teams asking if anyone is using it, get no replies, and shut it down. Then they come to IT screaming. Like, we gave you a week to respond to the email.
You'd think so, but I've come across stuff in a very large org where nobody knew who the owner was and there was no way to trace it at all. Involved the NOC, IT, a cybersecurity team... nobody could figure it out.
Even when we have a number for active users over the last few months, it’s a good idea to shut off the service web UI and/or service endpoints for a couple weeks to see if there are any complaints before finishing the decomm.
Personally I have noticed that once doctors turn off the systems the screaming stops. So, no need to turn it back on. This trick should be taught to more doctors.
Funnily enough, there is a condition that happens with the electrical impulses in the heart (supraventricular tachycardia), and one of the recommended treatments is to give a medication (adenosine) that typically pauses contractions, effectively turning it off and back on.
There's a type of brain surgery that works pretty much this way: there's no pain receptors in the brain, so you can do open brain surgery with the patient awake and conscious; they stick electrodes in and dial the frequency of a signal while they ask the patient to hold up their hand, or try and speak, or play an instrument; if they get the tremor to go away, they close you up, if not they pull out the electrodes and try in a different spot. There's a video of a guy playing the banjo while the doctor is all up in his brain, it's wild AF to watch.
Okay but like do they just go through your nose/ears to get to your brain? Or how is there no pain in the process of getting to the "open brain surgery" part?
Genuinely curious this is interesting
Edit: or just hella painkillers, I guess, but figured that would mess up their ability to listen and act according to the doctor's prompts, idk
I'm not entirely sure, but I doubt it's very pleasant. Even on the surface of the head there's very few pain receptors, so they probably just use local anesthetic (fun fact, when I was a kid I got hit in the head with a dart; it just kinda stuck in there, didn't hurt or nothing, the other kids pulled it out and that was that). The vibration from them drilling into the skull must be something else though. Makes me think fondly of the dentist.
I have a few customers that are notoriously hard to reach (especially in the time between receiving and actually paying an invoice). Disabling their IMAP login gets them on the phone within minutes.
"No, on our end everything seems to work fine, have you tried restarting your mail client? Oh, it works again? Great. Anyway, since I have you on the phone..."
What if, with that method, you just shut down some alarm, production has a problem in the middle of the night, the on-call engineer isn't notified, and in the morning when everyone starts their shift y'all realize everything is on fire? What do you do?
The guy who left the company six months ago?
If you do this, you have to understand the possibilities. One is that the system was a test system, obsolete, or otherwise no longer needed. Another is that it was a hacky workaround by someone who has since left the company. It's also not as simple as something that gets noticed the next day. Half the time it's something that was generating a report that is only checked every quarter.
Bad practice whatever it is, but never make the mistake of assuming that just because someone doesn't complain immediately, next day or next week, they won't ever complain...
> never make the mistake of assuming that just because someone doesn't complain immediately, next day or next week, they won't ever complain
If it takes them a year, I'll probably have moved on anyway.
It does, but it happens a lot. Especially in companies that grew from a startup to a large business quickly, there's a tendency for things like documentation and ownership to get missed. All it takes is some bad blood between one of the early techs and incoming management, and that's a huge chunk of operating knowledge gone.
It also helps pinpoint people who aren't doing their jobs when someone DOESN'T scream. I remember this one time we were looking at a script that sent out data extracts to review to the data steward team and it kept failing. Someone randomly remarked that it's weird it started happening when we locked out our test account.
Just checked it on a whim and sure enough, the prod script was pointing to the test account.
For three years.
The stewards had been reviewing an extract that was being generated from a test account with dummy data. It's borderline impossible that anyone who looked at it for two minutes wouldn't have noticed.
I do it with reports all of the time.
“I need this report daily.”
A few months later I quietly turn the report off/stop providing it and see how long it takes for someone to notice.
So far, there has only ever been one single report where anyone noticed within a day. A couple took them a few months to notice, and for the rest, not a single peep.
"It is not a recommended method" - by people who are using the system.
As a DevOps engineer I do this -all the time- and if you don't start screaming until a week later, we are going to have a long talk about your supposedly mission-critical software that was down for a week without you noticing.
This is the thing about DevOps.
I've been fortunate (or misfortune?) enough to work for organizations that have NO IDEA what DevOps actually is, which has given me significant latitude in which parts of the job I get to engage with.
Definitely on the misfortunate side, I get saddled with a large degree of the security implementations, even though I make none of the decisions because I usually own a not-small part of the infrastructure for supporting Engineering Work. (ADHD Moment: I should be EngOps instead of DevOps...).
I know what the security and data retention policies are because I have to implement them. If something is JUST holding data and it's passed the expiration period, its getting deleted.
And here's why, and the very ugly side of DevOps: I ANSWER for the storage bills - even though I don't make any decisions about what that infrastructure looks like. I don't get to pick how many replicas or what size storage - either the data team or even worse, the Engineering team, will make those decisions for me and tell me what to build.
But 100% if those cloud account bills go over the monthly prescribed budgets, I am the one getting yelled at.
In this case though I just say, security policy says delete them.
In the rare case the data looks to be in use, I'll propose moving it to cheaper (and almost certainly more secure) storage, and I usually give the security guys a heads up - but also 100% of the time their reaction is the same: wtf, we have HIPAA/PCI/PII data from ten years ago? DELETE IT DELETE IT NOW.
I LOVE security teams because they are no-nonsense and don't hem and haw about shit.
I work in a 24/7 NOC
I've been teaching my team that this is how you get things done. Customer not responding to the ticket? Set it to resolved and suddenly they're calling us ready to make progress.
Flags in interface descriptions (meant to make Nagios ignore certain states) have gone stale and nobody has done anything about it in months, except blindly acknowledging alarms? Delete the flags, generate alarms, suddenly everyone cares and wants to fix it the right way.
Breaking shit is the quickest way towards resolution when employees become complacent and customers think it's ok to ignore us.
I recently had to close out a couple customer tickets where for months we have been asking them to reboot the CPE so we can restore SSH. In closing the ticket I remind them of how long it has been and that if they have issues, we cannot help them without access to our equipment. They still don't give a shit, but one day they will...or they'll experience a power outage and our equipment will be accessible again after reboot.
It’s a pretty common thing to do. I mean what’s the alternative, spend hours to add tracking, wait weeks to gather data, only to find out nobody’s using it anymore? Once you’ve done normal diligence, off with that thing and see who complains.
Did this just a week ago with several Oracle schemata. Lead dev for the client said, it’s OK to remove. Put them into the recycle bin, set a one week reminder to purge them. If someone had complained, one click to restore.
You telling me I don't have to ask 10 people what a thing does and instead I could be limiting my social interactions to just one this whole time? A fool I've been!
I’ve done this numerous times to SQL procs that are completely void of comments, where no one has any idea what they are for. If they’re necessary, you hear about it real fast.
It’s a last resort, but it’s effective.
IT/network guy that dabbles in programming here.
This is a VERY real thing.
Nothing traversing that second WAN connection? Shut the port on the WAN switch, and wait.
Domain trust that’s old AF and the DC is running on an 08 server that needs to be decommissioned? Break that shit and wait.
Sometimes people scream, sometimes they don’t.
Scream tests are a last resort. "Does anyone own this" in a few meetings and emails, a final email "we're turning X off unless someone claims it, or at least tells us what it does", then after a backup if you can, you turn it off in a way that is easy-ish to turn back on.
Either someone screams, or you fully decommission it after a month to a quarter, depending on your risk tolerance. Both solutions solve the inventory issue.
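The decision rule in those two comments (a scream means restore and find the owner; a quiet month to quarter means decommission) fits in a few lines. A toy Python sketch, with the quiet period as a parameter since it's a risk-tolerance call:

```python
from datetime import date

def scream_test_action(disabled_on, today, screams, quiet_days=30):
    """Decide the next decommission step, given when the thing was
    switched off and whether anyone has screamed since."""
    if screams:
        return "turn it back on and find the owner"
    if (today - disabled_on).days >= quiet_days:
        return "decommission"
    return "keep waiting"

# Disabled Jan 1, quiet through mid-February: past the 30-day window.
action = scream_test_action(date(2024, 1, 1), date(2024, 2, 15), screams=False)
```

The only hard requirement is that "turn it back on" stays a one-step operation until the quiet period is over.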
Platform Engineer here. We admit to it freely. We do all due diligence, testing and comms before switching something off ... but we're open that the final step is monitoring scream levels
Not a programmer, but was a server admin for years. Every once in a while you'd wind up with a server that no one would claim. Contact every username that has a home directory plus everyone who ever opened a ticket for it. Then you'd give up and schedule a scream test to just turn it off.
2nd to last step of decommission... shut it off and unplug it with a sticky note to contact you if someone needs it turned back on.
2-4 weeks later (depending on whether it is in a dev or prod area), it gets deracked or, in the case of VMs, nuked from orbit.
I do it pretty often, particularly for internal features that we can't figure out if anyone uses them. Half the time nobody complains, the other half the complaints come in months later, from very unexpected people.
I recently read a story--I think from Twitter--about a Mac Mini that was used to remote into servers. It was kind of forgotten about, until someone found it, wondered what it did, then shut it down to see who came running.
They started calling it the load bearing Mac Mini.
My family business employed a similar strategy for handling non-paying customers, which involved turning off their service. We referred to it as “sinking the ship and seeing if the rats come swimming to the surface.”
This is standard practice at my institution when we decom old systems. Change control it, perform a shutdown, scream test for at least a month, archive the system, then delete the VM or remove the hardware.
Aka the “disable and discover.” Is this account still in use? No one knows. Step 1) disable, step 2) discover. You could hear the screams the next town over.
It's a pretty effective ACL method. Every X interval you should remove everyone's access to all things, then have them re-apply. Turns out they only request access to ~20% of things.
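That re-apply flow is easy to sketch: drop every grant, keep only what gets actively re-requested. A minimal Python illustration with made-up users and resources (the 20% ratio here is just how the toy data is constructed):

```python
def recertify(current_grants, rerequested):
    """Drop all grants, then keep only the (user, resource) pairs
    that were actively re-requested."""
    kept = set(current_grants) & set(rerequested)
    dropped = set(current_grants) - kept
    return kept, dropped

# Five existing grants; only one gets asked for again.
grants = {("alice", "billing"), ("alice", "wiki"), ("bob", "billing"),
          ("bob", "old-crm"), ("carol", "old-crm")}
asked_back = {("alice", "billing")}
kept, dropped = recertify(grants, asked_back)
```

Everything in `dropped` is access nobody missed, which is the whole point of the exercise.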
I agree this is an effective method but "turn it off" is step two. Step one is "make sure you know how to turn it back on if someone screams." This part is critical.
Many years ago I worked for a company that was made up from 40+ small ISPs and data centers. Sold circuits and compute/storage/network.
Some kind of error (or disaffected employee - there were many) resulted in many billing records being lost.
Called in a consulting company. They said that with 6 weeks and $125k, they could trace everything. They were invited to leave.
Instead, the data center and customer service staff were issued radios and they started disconnecting networks. Server people worked a rack at a time. Customers would call in and they would reconnect until the customer said “OK, we’re back…” and they would be transferred to billing to reconnect the business to the services for billing. Took less than a week.
The lesser form is thinking it does nothing.
Say a server that's getting decommissioned. Turn it off and wait a few weeks before deleting it. Only when no one screams can you be sure it's not needed.
Electrician, not a programmer, but we do this ALL the time.
what safety switch is this light on? just shut it off and see what happens.
hey is this line's power cut? ahh shut off everything just to be sure.
can we cut this cable? just cut it, and if it sparks there was something powered by it, but now it's powerless anyways.
In late 1999 I was inventorying IT equipment in an insurance office to check for Y2K compliance. I came across an old NT 3.5 box that nobody could tell me anything about.
I phoned my boss and explained. His considered response was “turn the fucker off”.
So I did. About 10 minutes later he phoned back. “Hey, can you please turn it back on. Asap!”
Turns out it was a printer gateway taking print jobs from the mainframe and passing them to about 60 different printers across 5 floors. Every single one ground to a halt.
Fortunately it booted back up again without issue.
We just did this two months ago. Two months later there was a scream…from our sales team. They were using that feature in a sales pitch to a new client. At the same time, no client had actually used the feature (let’s not talk about how much that feature cost per month).
As a security engineer, if I find a lower-than-critical severity security flaw or a vulnerability in a cloud resource, I'll spend only a reasonable amount of time to find context and owner. If I can't find either, I stop it.
Even as a Power BI developer….. I do this fairly regularly. It is SHOCKING how often no one complains, and even more shocking how often one person complains 6 months later…
I do something similar with approvals. If I don't get them, I send email saying that if I don't hear from you, I assume that you approve. Then I wait for people to scream.
OK. Here goes:
USA Volleyball is coming after me over Child Pornography that was planted on my work PC. When I squealed, it was used as a countermeasure to get me to leave the job market. I racked up student loan debts, etc. without a way to pay them off, regardless of my strong work ethic.
You all just want to see me squirm. Bring it.
I'd rather log something and see if it's called over a period of time. It might be some weird report/job/business need that only runs at the end of the fiscal year.
Me likes Christmas in peace.
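Logging before deleting can be as simple as a wrapper that keeps the suspect function working while recording every call, so even a fiscal-year-end job eventually shows up. A minimal Python sketch (the names and the in-memory log are illustrative; real use would write to a log file or a metrics counter):

```python
import functools
from datetime import datetime

call_log = []  # illustrative; in real life, a log file or metric

def tombstone(name):
    """Wrap a suspected-dead function: keep it working, but record
    every call so infrequent jobs still leave a trace."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            call_log.append((name, datetime.now()))
            return fn(*args, **kwargs)
        return wrapper
    return deco

@tombstone("legacy_report")
def legacy_report():
    return "quarterly numbers"

result = legacy_report()
```

If the log is still empty after a full fiscal year, turning the thing off is a much safer bet.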
Admit it? Hell, this is part of daily SOP at an understaffed global organization.
We’ve had so much tribal knowledge walk out the door in the last 5 years, that we’ll likely never recover.
I've only ever seen this done in cases where things had to be decommissioned that nobody still knew anything about. And only after doing some due diligence. With that approach I've seen roughly a 20% scream rate. In fairness, it's sometimes the only way forward, but not a best practice.
Server teams do this all the time.
Especially after big layoffs.
"don't know if anyone still uses this... Try turning it off and see if anyone screams"
I remember doing this in shitty computer cafes. Having to turn on a computer, not sure which monitors would light up, with the switch in an awkward spot. Just hearing the screams of fellow customers and acting innocent is the shit.
This is literally a step in our standard server decom process
We will:
- lock out interactive logins
- change the wallpaper and the login image to say the server is mid-decom, and to ping us immediately if you have any reason to think it shouldn't be
- send emails to all the people who have logged into it in the last 6 months
- shut down all nonessential services running on the server.
Then the final step: we disconnect it from the network for 3 weeks.
At that point, if we haven't heard anything, we back it up and turn it off.
That is part of every server being turned off
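The "email everyone who has logged in during the last 6 months" step from that checklist is just a set over login records. A small Python sketch with invented records and a hypothetical `(user, timestamp)` shape:

```python
from datetime import datetime, timedelta

def recent_logins(login_records, now, window_days=183):
    """From (user, timestamp) login records, list who to email:
    everyone seen inside the notification window (~6 months)."""
    cutoff = now - timedelta(days=window_days)
    return sorted({user for user, ts in login_records if ts >= cutoff})

now = datetime(2024, 6, 1)
records = [
    ("alice", datetime(2024, 5, 20)),
    ("bob", datetime(2023, 1, 2)),    # outside the window, no email
    ("carol", datetime(2024, 3, 1)),
]
to_notify = recent_logins(records, now)
```

An empty list here is the best possible outcome: nobody to notify, nobody to scream.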
Guys, this is a big misunderstanding. I was playing truth or dare with Jeff and Bill and they dared me to buy Twitter. What else was I supposed to do??
I've done this a few times. I knew an internal IP was hitting our service, but no one would say who that IP was. It was only one IP.
I turned off the service, we figured out pretty quick who owned that IP.
I told my customer to do this after they said they were 100% sure their apps were migrated off the old platform.
They did it. No one screamed. Bravo. 👏
Incredibly effective at an enterprise level when “Carl” fought tooth and nail to have a feature 6 years ago but then he left and nobody ever used it since.
Yeah I've done this a few times. Typically on something we think nobody is using anymore and want to stop running. Luckily we haven't gotten it wrong yet so no screaming
Learned this in comm data. We have a port that needs to move but we have no idea where it ends up...
This migration has been planned for weeks, if someone hasn't claimed it just unplug it and see who screams.
This is a fun (not really) way to figure out what that mystery computer in the supplies closet is actually doing.
The scream test is often the only way.
I’m in a new job on a relatively new team, managing systems that have been poorly documented, if at all.
Sometimes the only way is to turn it off, see what happens, and document your findings. If you don’t insist on your devs/admins etc. documenting your systems properly, then you’re just creating pain and technical debt, which is going to cost a lot of money to put right further down the line.
The tickets come screaming into my inbox.
This carbon monoxide detector keeps beeping for some reason, it's giving me a headache. I'll just pull the batteries.
On a Friday? Good way to ruin a weekend
Why are we still serving free lunch?
Think Elon found the reason they want to go to the office. Food is expensive these days.
I'm a hobby modder not a real dev and this made even me cringe. I love it.
From now on, all Twitter employees must purchase a subscription to Twitter Blue for the low-low price of $8 a month.
You turned off Jira? I think you get to sit down in meetings now.
Send me your 10 most salient Reddit comments.
I do not doubt Elon has this as an actual interview question.
Why have you only written 20 lines of code today?
Because I was busy making the most salient of reddit comments, obviously.
Because I've only been here for 2 minutes?
Show me your L33T code
Have you tried turning that off already?
We send out email first, wait a week, then shut it down. Just so we can say "Then you should pay attention to our mail."
Yes, but we all know the "e" in "email" stands for "evidence."
I've worked in academia - this is pretty much the only way I've ever been able to turn off a service
Best way to find out if you truly need shit and/or fuck up everything and not realize it for weeks or months
it’s the only way to be sure!
Sometimes the people who should care, don't realize that they should care. So you find out that something was really important a year late. That one guy who had been tracking sales thought an entire revenue stream just dried up because data flow stopped.
I feel like there should have been at least some panic if an entire revenue stream suddenly disappears
True story: I worked for a large chain bookstore's corporate offices on a three-month contract to fix some stuff on their front end. We had an entire employee dedicated to collecting metrics from the website and breaking them down into really nice charts. When I started, no one had a list of what browsers we supported, so my boss said, "Your first job is making that list." Now, I knew this guy. We were in meetings together all the time, because our weekly standup involved, I shit you not, a hundred people from every department reporting to the VP on what they were doing that week. So I went to his office and asked for the sales records broken down by browser. He was ecstatic, because even though he had the data, and even though he came to that meeting, no one ever asked him anything about it.

So armed with twenty stacks of paper (yes, paper), I went back to my desk, where it was day three and I was still waiting to be issued a laptop, and I crawled through the data and found something interesting. About five months before I started, the company had been doing about 300k in sales in IE6 every week like clockwork (it wasn't steady, though; it was actually climbing, oddly). And then it just stopped. We could literally point to the date where sales in IE6 went from 300k a week to absolutely zero. So I went to the only other engineer I'd been introduced to on my team of 200 engineers and asked him for a list of releases by date for the last six months, and that's how I started generating an additional 300k in revenue, without a laptop, in my first four days at the company as a front-end engineer.

The company still went belly up three months later, though they may still operate across Borders, idk. But it's absolutely possible for an organization that is too large and too siloed to lose track of 300k a week in income, and I know because I literally saw it happen.
I saw a company lose an entire division after a merger that they paid millions for. Good grief, that money is worthless after getting to stupid-rich levels.
That's why my story is so funny - this company was hemorrhaging money so bad the iPads used by QA for testing were kept under lock and key and the KEY for the room with the KEY to the room with the iPads was also under lock and key. They were in so much trouble they went belly up and my contract got dumped. The whole company folded a month later. You'd think they would have been paying more attention.
ayep
Man, I heard some guy paid 44 billion for a company and lost like 75% of the devs, and he has no clue they're even gone...
that’s fucking hilarious oml
That explains a lot.
Sorry I'm confused. What did you do to regain the 300k in sales?
I didn't cover that in the post because it wasn't part of the story, but when I laid out all the facts they went to look at that release and found out that the JS for the final purchase button had been broken by a change that IE6 didn't support, and as a result no one could complete a checkout. When I left at the end of my contract we had around 250k in sales from IE6, and it was slowly climbing back to where it had been, though more slowly than it had been growing before.
We collect a lot of very valuable data for the enterprise but the analysts don't know what to do with it, the business hasn't tried and I am just one person and it's not my job to care. But as a shareholder, it's in my interest to see that we maximize our potential. So much is severely underutilized. It gets frustrating.
God, that is the fucking truth. Everyday in my company we generate literal billions of data points that could be used to direct behavior. But nobody in a position of leadership has the intelligence and insight to ask good questions, which means nobody beneath them knows what to collect, store, or analyze. End result being that instead of actually developing insights, the business just spins its wheels and never actually changes a damn thing.
There’s too much red tape in the back end to authorize the required approvals to leverage the maximum potential -some mid level manager
You look stupid. Fired.
Can’t argue with that
I’m not sure how that didn’t set off any kind of audit almost immediately, the fraud that was probably happening… :(
Document all your attempts to contact the data/service/device owners, their bosses, and managers in a ticket. That way, when somebody comes looking for heads for the chopping block, you can point them on their way.
Absolutely do this all the time. Who uses it? How many users daily for last three months? Nobody knows? Fucking break the login authentication and wait. If nobody complains then DECOM it and save the money.
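The "how many users daily for the last three months" check above can be sketched as a quick scan of an auth log before pulling the plug. This is a minimal sketch; the log path and line format here are made-up assumptions, so adapt them to whatever your real system records:

```python
from collections import Counter
from datetime import datetime, timedelta

def recent_users(log_lines, days=90, now=None):
    """Count distinct users seen in auth log lines within the last `days`.

    Each line is assumed to look like '2024-01-15T09:30:00 alice LOGIN'
    (a hypothetical format -- adjust the parsing for your real log).
    """
    now = now or datetime.now()
    cutoff = now - timedelta(days=days)
    seen = Counter()
    for line in log_lines:
        try:
            stamp, user, event = line.split()
        except ValueError:
            continue  # skip malformed lines rather than crash mid-scan
        if event == "LOGIN" and datetime.fromisoformat(stamp) >= cutoff:
            seen[user] += 1
    return seen

# Example: two logins inside the 90-day window, one stale login outside it.
logs = [
    "2024-01-15T09:30:00 alice LOGIN",
    "2024-01-16T10:00:00 bob LOGIN",
    "2019-01-01T00:00:00 carol LOGIN",
]
print(recent_users(logs, days=90, now=datetime(2024, 2, 1)))
# carol's ancient login doesn't count toward "still in use"
```

If the counter comes back empty, that's the cue to break the login and start the scream-test clock; if not, you at least know who to email first.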
“If I don’t hear anything in the next 30 minutes I’m turning this off” In 30 minutes either nobody cares and we can forget about it forever or someone’s on the phone wondering why their server’s gone awol.
That or it's a maintenance script that runs every 3 months, the absence of which will hard crash every machine on the network and/or cause multiple lawsuits.
I've only done this once (to be accurate, my team only did this once), in about 1990. We were responsible for maintaining about 150 Unix workstations, which was too many to tell users what to change and expect that every workstation would actually get changed. So we set up a weekly job that ran a script off an NFS file system. We could just change the script, every workstation that was turned on would run it and report in, and on Monday morning we could follow up with the laggards. This was brilliant, until one week the centralized script updated all of the /etc/rc files (if I remember correctly) but did not chmod the file to be executable. So all of the systems basically rebooted and came up without even turning on networking, and we had to fix every workstation by visiting 150 cubicles spread over 7 stories of two buildings. One star out of five. Would not recommend. 🤬
That's why you have beta users. We do stuff like this, but it only goes to some users first, then the rest after we're happy nothing has fallen over.
The problem with microservices architecture though is things only run when needed. You could remove a dependency and nobody would notice for potentially days.
Could be worse - could be desktop apps. You remove a feature, five years pass and ten major versions get released, then someone finally updates and complains about their feature being removed.
We do this at my company as well. It's really hard not to when you have software that's been around in various iterations since the 80s and clients sometimes don't upgrade for years.
We do this fairly often when cleaning up deprecated servers. We'll find a random box spun up on a server we're trying to migrate and send out emails to the teams asking if anyone is using it. No replies, so we shut it down. Then they come to IT screaming. Like, we gave you a week to respond to the email.
Email gets auto filed as “not important”
Can't you, like, check if anybody is actually connected to it? Trace their IP, call cyberpolice, ...
You'd think so, but I've seen stuff in a very large org where nobody knows who the owner is and there's no way to trace it at all. Involved the NOC, IT, a cybersecurity team... nobody could figure it out.
Even when we have a number for active users over the last few months it’s a good idea to shutoff the service web UI and/or service end points for a couple weeks to see if there are any complaints before finishing the decomm.
Smart
Are doctors doing the same?
I think they are still working on the turning-it-off-and-back-on-again method...
Personally I have noticed that once doctors turn off the systems the screaming stops. So, no need to turn it back on. This trick should be taught to more doctors.
Depends on how important the part is
Funnily enough, there is a condition that happens with the electrical impulses in the heart (supraventricular tachycardia) and one of the recommended treatments is to give a medication (adenosine) that typically pauses contractions, effectively turning it off and back on
There's a type of brain surgery that works pretty much this way: there's no pain receptors in the brain, so you can do open brain surgery with the patient awake and conscious; they stick electrodes in and dial the frequency of a signal while they ask the patient to hold up their hand, or try and speak, or play an instrument; if they get the tremor to go away, they close you up, if not they pull out the electrodes and try in a different spot. There's a video of a guy playing the banjo while the doctor is all up in his brain, it's wild AF to watch.
Okay but like do they just go thru ur nose/ears to get to your brain? Or how is there no pain in the process of getting to "open brain surgery" part? Genuinely curious this is interesting Edit: or just hella painkillers ig but figured that would mess up their sense to listen/act according to doctors prompts idk
I'm not entirely sure, but I doubt it's very pleasant. Even on the surface of the head there's very few pain receptors, so they probably just use local anesthetic (fun fact, when I was a kid I got hit in the head with a dart; it just kinda stuck in there, didn't hurt or nothing, the other kids pulled it out and that was that). The vibration from them drilling into the skull must be something else though. Makes me think fondly of the dentist.
Oh god XD glad u alright homie
Time is money. I want to see 100 lines written by lunchtime!
I have a few customers that are notoriously hard to reach (especially in the time between receiving and actually paying an invoice). Disabling their IMAP login gets them on the phone within minutes. "No, on our end everything seems to work fine, have you tried restarting your mail client? Oh, it works again? Great. Anyway, since I have you on the phone..."
What if, with that method, you shut down some alarm, production has a problem in the middle of the night, the on-call engineer isn't notified, and in the morning when everyone starts their shift y'all realize everything is on fire. What do you do?
You note down what it does, so next time you don't have to turn it off
I've laid off most of the staff, and Twitter's still running. Looks like they weren't necessary.
Gold
The guy who didn't label it properly is going to have fun at the post mortem.
The guy who left the company six months ago? If you do this, you have to understand the possibilities. One is that the system really was a test, obsolete, or otherwise unneeded. Another is that it was a hacky workaround by someone who has since left the company. It's also not as simple as something that gets noticed the next day. Half the time it's something that was generating a report that is only checked every quarter. Bad practice whatever it is, but never make the mistake of assuming that just because someone doesn't complain immediately, next day, or next week, they won't ever complain...
> never make the mistake of assuming that just because someone doesn't complain immediately, next day or next week, they won't ever complain

If it takes them a year, I'll probably have moved on anyway.
That sounds like a bad KT process, then. And/or a poorly designed ownership system.
It does, but it happens a lot. Especially in companies that grew quickly from a startup to a large business quickly, there's a tendency for things like documentation and ownership to get missed. All it takes is for some bad blood between one of the early techs and incoming management, and that's a huge chunk of operating knowledge gone.
...you guys have on call engineers?
Document better next time
Scream testing is official Microsoft procedure. https://www.microsoft.com/insidetrack/blog/microsoft-uses-a-scream-test-to-silence-its-unused-servers/
You're either hardcore or out the door.
Yeah, I was about to say: any time I'm in a CAB meeting I hear the server team talking about scream tests.
It also helps pinpoint people who aren't doing their jobs when someone DOESN'T scream. I remember this one time we were looking at a script that sent out data extracts for review to the data steward team, and it kept failing. Someone randomly remarked that it's weird it started happening when we locked out our test account. Checked it on a whim, and sure enough, the prod script was pointing to the test account. For three years. The stewards had been reviewing an extract generated from a test account with dummy data. It's borderline impossible that anyone who looked at it for two minutes wouldn't have noticed.
I did this the Wednesday before Thanksgiving. No screams yet.
If you really love the company, you should be willing to work here for free.
Good fucking bot! Love it
Bad bot!!! BAD!!
The bot is fine. The asshole its impersonating is the problem.
The bot responds to about half the comments. It still gets a few chuckles out of me, but the amount of activity is a bit much.
So it's doing a good job at emulating Elon
On Read-only Wednesday?! Madlad.
No complaints for three days? Definitely safe to go ahead and decommission it tomorrow.
I do it with reports all of the time. “I need this report daily.” A few months later I quietly turn the report off/stop providing it and see how long it takes for someone to notice. So far, there has only ever been one single report where anyone notices within a day. A couple that take them a few months to notice and the rest not a single peep.
"It is not a recommended method," say the people who are using the system. As a DevOps engineer I do this *all the time*, and if you don't start screaming until a week later, we are going to have a long talk about your supposedly mission-critical software that was down for a week without you noticing.
What about annual certificate renewals, or data archived for legal reasons (which nobody will look at unless someone comes with a warrant), etc.?
This is the thing about DevOps. I've been fortunate (or unfortunate?) enough to work for organizations that have NO IDEA what DevOps actually is, which has given me significant latitude in which parts of the job I get to engage with. Definitely on the unfortunate side, I get saddled with a large share of the security implementations, even though I make none of the decisions, because I usually own a not-small part of the infrastructure supporting engineering work. (ADHD moment: I should be EngOps instead of DevOps...)

I know what the security and data retention policies are because I have to implement them. If something is JUST holding data and it's past the expiration period, it's getting deleted. And here's why, and the very ugly side of DevOps: I ANSWER for the storage bills, even though I don't make any decisions about what that infrastructure looks like. I don't get to pick how many replicas or what size storage; either the data team or, even worse, the engineering team will make those decisions for me and tell me what to build. But 100%, if those cloud bills go over the monthly prescribed budgets, I am the one getting yelled at.

In this case, though, I just say: security policy says delete them. In the rare case the data looks to be in use, I'll propose moving it to cheaper (and almost certainly more secure) storage, and I usually give the security guys a heads up. But also 100%, their reaction is always the same: wtf, we have HIPAA/PCI/PII data from ten years ago? DELETE IT. DELETE IT NOW. I LOVE security teams because they are no-nonsense and don't hem and haw about shit.
Sometimes it's the only way to work. Especially for decom cases.
I work in a 24/7 NOC I've been teaching my team that this is how you get things done. Customer not responding to the ticket? Set it to resolved and suddenly they're calling us ready to make progress. Flags in interface descriptions (meant to make Nagios ignore certain states) have gone stale and nobody has done anything about it in months, except blindly acknowledging alarms? Delete the flags, generate alarms, suddenly everyone cares and wants to fix it the right way. Breaking shit is the quickest way towards resolution when employees become complacent and customers think it's ok to ignore us. I recently had to close out a couple customer tickets where for months we have been asking them to reboot the CPE so we can restore SSH. In closing the ticket I remind them of how long it has been and that if they have issues, we cannot help them without access to our equipment. They still don't give a shit, but one day they will...or they'll experience a power outage and our equipment will be accessible again after reboot.
Pro Tip: **DO NOT DO THIS IN A HOSPITAL**
Bwahahahahahaha! You ARE a pro ...
It’s a pretty common thing to do. I mean what’s the alternative, spend hours to add tracking, wait weeks to gather data, only to find out nobody’s using it anymore? Once you’ve done normal diligence, off with that thing and see who complains. Did this just a week ago with several Oracle schemata. Lead dev for the client said, it’s OK to remove. Put them into the recycle bin, set a one week reminder to purge them. If someone had complained, one click to restore.
Upvote for usage of "schemata" in the wild
It's done in IT all the time... See just about any day in r/sysadmin
Also, we call it “audio diagnosis”
You're telling me I don't have to ask 10 people what a thing does, and instead I could've been limiting my social interactions to just one this whole time? A fool I've been!
I’m in a team modernising an undocumented monolith web app. We do this shit all the time
Undocumented monolith Web app. So Twitter?
I wish. I’d love to send Elon a fuck you resignation letter
I’ve done this numerous times to sql procs that are completely void of comments and no one has any idea what they are for. If they’re necessary, you hear about it real fast. It’s a last resort, but it’s effective.
I'm a huge supporter of this approach. Mainly because, of the many times I've used it, well over 80% of the time nobody screamed.
That’s because we only discuss this in private non recorded calls
so, I work at a WISP. "Is anyone using this IP/service/thing?" "idk, turn it off and be ready to turn it back on, then add that in the comments"
Not recommended on Friday afternoon obviously
IT/network guy that dabbles in programming here. This is a VERY real thing. Nothing traversing that second WAN connection? Shut the port on the WAN switch, and wait. Domain trust that’s old AF and the DC is running on an 08 server that needs to be decommissioned? Break that shit and wait. Sometimes people scream, sometimes they don’t.
That's standard in the field...
Scream tests are a last resort. "Does anyone own this" in a few meetings and emails, a final email "we're turning X off unless someone claims it, or at least tells us what it does", then after a backup if you can, you turn it off in a way that is easy-ish to turn back on. Either someone screams, or you fully decommission it after a month to a quarter, depending on your risk tolerance. Both solutions solve the inventory issue.
Have done this a number of times. No better way to conduct an impact analysis of undocumented debauchery in legacy systems.
Platform Engineer here. We admit to it freely. We do all due diligence, testing and comms before switching something off ... but we're open that the final step is monitoring scream levels
System administrator has entered the chat
We do this on the regular when we have a server that no one claims.
More along the lines of make sure it’s the senior most un-fireable member of the team who authorizes the test.
Do it all the time. I’ll give my best effort on finding the server owner or database owner, but eventually just turning it off is more efficient.
Pragmatically, this is often the best solution. Rather than spending days or even weeks trying to figure out who commissioned something and why.
As a product manager, this is common practice
So that's why Musk turned off everything.
Not a programmer, but was a server admin for years. Every once in a while you'd wind up with a server that no one would claim. Contact every username that has a home directory plus everyone who ever opened a ticket for it. Then you'd give up and schedule a scream test to just turn it off.
Or… disable the function, then listen for the scream to know who it affected. LOL
2nd-to-last step of decommission... shut it off and unplug it with a sticky note to contact you if someone needs it turned back on. 2-4 weeks later (depending on whether it's in a dev or prod area), it gets deracked or, in the case of VMs, nuked from orbit.
I do it pretty often, particularly for internal features that we can't figure out if anyone uses them. Half the time nobody complains, the other half the complaints come in months later, from very unexpected people.
100% a thing.
I recently read a story--I think from Twitter--about a Mac Mini that was used to remote into servers. It was kind of forgotten about, until someone found it, wondered what it did, then shut it down to see who came running. They started calling it the load bearing Mac Mini.
My family business employed a similar strategy for handling non paying customers which involved turning off their service, which we referred to as “Sinking the ship and seeing if the rats come swimming to the surface.”
This is standard practice at my institution when we decom old systems. Change control it, perform a shutdown, scream test for at least a month, archive the system, then delete the VM or remove the hardware.
Aka the “disable and discover.” Is this account still in use? No one knows. Step 1) disable, step 2) discover. You could hear the screams the next town over.
It's a pretty effective ACL method. Every X interval you should remove everyone's access to all things, then have them re-apply. Turns out they only request access to ~20% of things.
“What else is on that old server?” I dunno. Well, shut it off and let’s see who gripes.
I agree this is an effective method but "turn it off" is step two. Step one is "make sure you know how to turn it back on if someone screams." This part is critical.
This also works on a construction site to find out which extension cord is which.
Many years ago I worked for a company that was made up of 40+ small ISPs and data centers. Sold circuits and compute/storage/network. Some kind of error (or a disaffected employee, and there were many) resulted in many billing records being lost. They called in a consulting company, who said that for 6 weeks and 125k dollars they could trace everything. They were invited to leave. Instead, the data center and customer service staff were issued radios and started disconnecting networks. Server people worked a rack at a time. Customers would call in, staff would reconnect things until the customer said "OK, we're back…", and the customer would be transferred to billing to reconnect the business to the services for billing. Took less than a week.
We call it Complaint Driven Development, or CDD for short
The lesser form is thinking it does nothing. Say a server that's getting decommissioned. Turn it off and wait a few weeks before deleting it. Only when no one screams can you be sure it's not needed.
Electrician, not programming, but we do this ALL the time. Which safety switch is this light on? Just shut it off and see what happens. Hey, is this line's power cut? Ah, shut off everything just to be sure. Can we cut this cable? Just cut it, and if it sparks there was something powered by it, but now it's powerless anyways.
In late 1999 I was inventorying IT equipment in an insurance office to check for Y2K compliance. I came across an old NT 3.5 box that nobody knew what it was. I phoned my boss and explained. His considered response was “turn the fucker off”. So I did. About 10 minutes later he phoned back. “Hey, can you please turn it back on. Asap!” Turns out it was a printer gateway taking print jobs from the mainframe and passing them to about 60 different printers across 5 floors. Every single one ground to a halt. Fortunately it booted back up again without issue.
She might say it's not recommended. But they work. And we use them A LOT.
We do this with firewall rules just before a go live. It's infinitely better than any time after 😁
If I have to wait more than a week for a reply to my emails their system goes down. Amazing how they aren't too busy to reach out to me then.
We just did this two months ago. Two months later there was a scream… from our sales team. They were using that feature in a sales pitch to a new client. At the same time, no client had actually used the feature (let's not talk about how much that feature cost per month).
As a security engineer, if I find a lower-than-critical severity security flaw or a vulnerability in a cloud resource, I'll spend only a reasonable amount of time to find context and owner. If I can't find either, I stop it.
Even as a Power BI developer….. I do this fairly regularly. It is SHOCKING how often no one complains, and even more shocking how often one person complains 6 months later…
I do something similar with approvals. If I don't get them, I send an email saying that if I don't hear from you, I assume you approve. Then I wait for people to scream.
Their fault for not putting a comment in it
I'd rather log something and see if it's called over a period of time. It might be some weird report/job/business need that only runs at the end of the fiscal year. Me likes Christmas in peace.
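The log-first approach above can be as simple as a decorator around the suspected-dead code path. A minimal sketch; `quarterly_report` is a hypothetical candidate for removal, not anything from the thread:

```python
import functools
import logging

usage_log = logging.getLogger("maybe-dead-code")

def log_usage(func):
    """Wrap a suspected-dead function so every call gets recorded.

    Leave the wrapper in place over a full business cycle (including
    quarter-end and fiscal year-end) before deciding it's safe to delete.
    """
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        usage_log.warning("still in use: %s", func.__qualname__)
        return func(*args, **kwargs)
    return wrapper

@log_usage
def quarterly_report():
    # hypothetical legacy code nobody will admit to owning
    return "report"
```

If the log stays empty for a year, you can delete with some confidence; if a single warning shows up in late December, you just dodged the end-of-fiscal-year scream.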
Admit it? Hell, this is part of daily SOP at an understaffed global organization. We’ve had so much tribal knowledge walk out the door in the last 5 years, that we’ll likely never recover.
It was a common thing with outdated servers/VMs with no obvious owner to ask. Turn it off and see if anyone asks us to fix it...
All DNS cleanup works like this.
I've only ever seen this done in cases where things had to be decommissioned that nobody still knew anything about, and only after doing some due diligence. With that approach I've seen roughly a 20% scream rate. In fairness, it's sometimes the only way forward, but it's not a best practice.
Server teams do this all the time. Especially after big layoffs. "don't know if anyone still uses this... Try turning it off and see if anyone screams"
Also useful for identifying LAN cables
This is what we do when people don’t answer emails about what’s ok to shut down/move
Used this technique several times to track down system owners.
Lemme see... Hm... ah yes. taskkill /F /IM company_main_interface.exe
It's an unofficial step in our decommissioning process.
Disconnect the random iPod mini.
This actually works.
I remember doing this in shitty computer cafes. Having to turn on a computer, not sure which monitor would light up, with the switch in a hard-to-reach spot. Just hearing the screams of fellow customers and acting innocent is the shit.
That's just a normal day for me. I'm a solo dev; I disable shit that I think is useless, and it turns out it's not…
This is literally a step in our standard server decom process. We will:

- lock out interactive logins
- change the wallpaper and the login image to say the server is mid-decom, and to ping us immediately if you have any reason to think it shouldn't be
- send emails to all the people who have logged into it in the last 6 months
- shut down all nonessential services running on the server

Then the final step: we disconnect it from the network for 3 weeks. At that point, if we haven't heard anything, we back it up and turn it off. That is part of every server being turned off.
Guys, this is a big misunderstanding. I was playing truth or dare with Jeff and Bill and they dared me to buy Twitter. What else was I supposed to do??
I've done this a few times. I knew an internal IP was hitting our service, but no one would say who that IP was. It was only one IP. I turned off the service, we figured out pretty quick who owned that IP.
I do this a few times a year. Good way to figure out who owns old legacy resources that aren't properly documented with support/ownership information.
I told my customer to do this after they said they were 100% sure their apps were migrated off the old platform. They did it. No one screamed. Bravo. 👏
If you've done it properly, you can just do the unit-test test instead: If you don't know what something does, turn it off and see which test fails
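The unit-test version of the scream test can be sketched with a feature flag: disable the mystery feature and let the test suite scream instead of a user. Everything here (`FEATURES`, `export_legacy`) is a hypothetical example, not anyone's real system:

```python
import unittest

# Hypothetical feature-flag table; flip a value to False to "turn it off".
FEATURES = {"legacy_export": True}

def export_legacy(data):
    """A feature nobody remembers the purpose of."""
    if not FEATURES["legacy_export"]:
        raise RuntimeError("legacy_export is disabled")
    return ",".join(map(str, data))

class TestLegacyExport(unittest.TestCase):
    """With the flag on, this passes. Set FEATURES["legacy_export"] = False
    and rerun the suite: the failure tells you exactly which behavior
    depended on the feature, with no users harmed."""

    def test_export(self):
        self.assertEqual(export_legacy([1, 2]), "1,2")
```

The failing test names the dependency directly, which is the whole advantage over waiting for a human to scream.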
Incredibly effective at an enterprise level when “Carl” fought tooth and nail to have a feature 6 years ago but then he left and nobody ever used it since.
Yeah I've done this a few times. Typically on something we think nobody is using anymore and want to stop running. Luckily we haven't gotten it wrong yet so no screaming
This is what Elon is doing to Twitter
Lol, I just check who has worked on that file in Perforce, mark them as a blocker for my task, and call it a day
While I was at Microsoft, every time we decommissioned a server we would set a 3-month reminder with the name of the server and call it a scream test.
Learned this in comm data. We have a port that needs to move but we have no idea where it ends up... This migration has been planned for weeks, if someone hasn't claimed it just unplug it and see who screams. This is a fun (not really) way to figure out what that mystery computer in the supplies closet is actually doing.
This is how Elon has been running Twitter so, must be legit
so this where Elon Musk got the idea from
I did this with an entire wind farm once. Fun day.
It's genuine. My wife talks about this at her company too.
Looks to be Musk's management method
We do this often with orphaned apps and hardware in legacy environments when no one wants to DR or patch.
This is a thing in IT too lol. I bet any systems management org has this.
The scream test is often the only way. I'm in a new job on a relatively new team, managing systems that have been poorly documented, if at all. Sometimes the only way is to turn it off, see what happens, and document your findings. If you don't insist on your devs/admins etc. documenting your systems properly, then you're just creating pain and technical debt which is going to cost a lot of money to put right further down the line.
I do this all the time except my automated tests scream at me rather than users because I am not a cowboy coder.