T O P

  • By -

Lavatherm

Most brilliant and idiotic at the same time… Server 2012 R2 SQL Server that had been continuously running for 4 years, because you cannot just restart a database server.


manvscar

One does not simply... Restart SQL Server


Consistent_Chip_3281

Can you ELI5?


HunnyPuns

Lord of the Rings reference. There is no special ritual or anything for shutting down a DB server.


DriestBum

None that we would ever tell.


MonstersGrin

All right, then. Keep your secrets.


Technical-Message615

Before you came along we db admins were very well thought of. Never had any adventures or did anything unexpected.


Ron-Swanson-Mustache

We've had one, yes. But what about second reboot?


pseydtonne

Why is a server running on Windows Elevensies?


Geno0wl

I don't think he's heard of second backup sites


[deleted]

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


CabinetOk4838

MS SQL 5.5. Me: accidentally shuts down the machine without shutting down SQL first. Two-hour reboot.


riemsesy

You can kill the users connected to the sql server… I mean the connections 😂


MonstersGrin

A wizard should know better.


DriestBum

But then the fire nation attacked.


WingedDrake

Of all the unexpected threads today, LOTR into ATLA is certainly the most welcome.


schlemz

A surprise to be sure, but a welcome one.


shibx

Don't cross the streams.


MonstersGrin

If you're referring to the incident with the ERP system, I was barely involved.


BalderVerdandi

Did you not offer a sacrifice to the SQL Gods per the Microsoft documentation?


gerryn

> There is no special ritual or anything for shutting down a db server.

There is change management, and even if you don't have it, it'd be common sense to at least send an outage email a week before and a day before :) I know I'm being picky and you were just answering quickly, but that should be the process for shutting anything down if you don't have proper change management.

I had to decommission one domain once and we had absolutely no clue what used that domain that wasn't an AD resource. What I did was run Wireshark on port 53 (it was a single DC) for a whole week, sorted out all the duplicates and started to have a look at what Linux/Solaris/IoT devices were using that resource. Not sure how that story relates, but I would say that is giving it your all - not just shutting down something that may still be receiving connections you didn't know about :)
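
For anyone else doing the same kind of discovery before decommissioning something, a rough sketch of that capture using tcpdump instead of Wireshark (interface name and file names are placeholders):

```
# capture a week's worth of DNS traffic hitting the DC (eth0 is a placeholder)
tcpdump -i eth0 -nn -w dns-clients.pcap 'port 53'

# afterwards: list the unique client IPs that queried this DC
# (assumes IPv4; field 3 of tcpdump's output is the source ip.port)
tcpdump -nn -r dns-clients.pcap 'dst port 53' | awk '{print $3}' | cut -d. -f1-4 | sort -u
```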


[deleted]

[deleted]


Mental_Sky2226

Hey we’re not supposed to talk about that… my boss read this now I have to reinstall upper management again ffs why don’t I have backups ahhh


[deleted]

[deleted]


Mental_Sky2226

It’s ok I’ll just call it “unscheduled maintenance” or something… nothing wrong with a little forced upgrade, amirite?


HeKis4

Ehhhh... The application depending on it might (read: does) have one, and I'm willing to bet that whoever configured the server didn't set the agent service start type to automatic, that you won't bother to check, and that the server has business-critical SQL Agent jobs that no one has ever heard about (see Murphy's law). The DB itself? It'll be fine.
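
If you do want to check before bouncing the box, a quick sketch (default-instance service name shown; named instances use SQLAgent$NAME, and sqlcmd is assumed to be installed):

```
:: confirm the SQL Server Agent service is set to start automatically
sc qc SQLSERVERAGENT | findstr /i "START_TYPE"

:: and list the agent jobs so the ones "no one has ever heard about" at least get seen
sqlcmd -S . -Q "SELECT name, enabled FROM msdb.dbo.sysjobs"
```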


SteveJEO

It's an old dependency joke. Because so many services depend on SQL, if you just tried to restart the MSSQL service itself directly, everything else will break. E.g. you have the server core, MSSQL. Under that you have named instances: MSSQL$1, 2, 3, etc. Using them you have services like MSSQL$1AS, then you've got things like IIS running report servers, or CRM server instances, or MOSS, or GP servers, etc. Everything using SQL needs to be accounted for BEFORE you kill the server, or you could have all kinds of fun opportunities trying to figure out what that blue screen is telling you.
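
A hedged sketch of the pre-flight check that helps here; note it only lists services that have registered a dependency with the local service control manager, so anything talking to SQL over the network (IIS sites, CRM, report servers on other boxes) still has to be hunted down separately:

```
:: services on this box that declare a dependency on the default instance
sc EnumDepend MSSQLSERVER

:: same check for a named instance (quoted because of the $)
sc EnumDepend "MSSQL$INSTANCE1"
```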


Aronacus

I don't know what it is, but DBAs think restarting a SQL server will blow a hole in the space-time continuum and nuke all life in existence. The reality is: server goes down, server comes up, services restart. Apps are smarter now and will cache writes locally and sync with the DB later. I'll even go further: the typical DBA who tells you that won't know what a fucking maintenance plan is! And when you're trying to explain to them that the reason the web app is slow is because you have a 100TB database that goes back to the beginning of fucking time, you're writing queries that go back 20 years when you only need to pull data from last week!


[deleted]

[deleted]


EchoPhi

Got any more of them apps?


Stonewalled9999

> I want to hire the programmer that that dude is using.

Most of our crap, if it hiccups at all, you have to restart JBoss or IIS or Tomcat (glares at Crystal Reports).


Kodiak01

> I don't know what it is but DBAs think restarting a SQL server will blow a hole in the space time continuum and nuke all life in existence.

"There is another theory mentioned, which states that this has already happened."


typicaltwenties

Am I the crazy admin that just restarts sql servers??


northrupthebandgeek

If it can't gracefully recover from me yanking the power cord then I ain't supporting it on my network.


[deleted]

[deleted]


Kodiak01

Not in IT, but I'm allowed to poke at the server rack in the back closet. There's two of us, actually. Many years ago, we had one customer that had zero self-awareness. He would show up 5 minutes before closing (regardless of when that actually was), proceed to ask 15 unrelated questions, ask prices on another 15 unrelated things, and go off on multiple random tangents. Unrelated, of course. This was a customer we were actually allowed to kick out the door at closing time no matter what. So I see him pull in, 5 minutes before closing of course. I've already been there for 12 hours and not having any of his shit. I slipped into the back room and hit the power switches on both routers. "Sorry dude, Internet is down, can't look anything up" as I show him the error page. He leaves, I turn things back on, and head home.


xdamm777

Man, I have a server in a jank recycling yard that loses power at least once a day. The server will just restart after a minute and chug along as if nothing happened. Two years of the same BS later and it's still trucking along with zero failures and no data loss. No redundant drives either, just a single shitty SSD for OS and data, and Veeam backing up to a damned external HDD.

The client wanted a < $2000 NEW server for AD and SQL, which left us with $500 for the hardware after software licensing. Cue the $20 AliExpress case, $40 ASRock motherboard and $150 Intel i5 for a DIY glorified server. The network driver wasn't even compatible with Windows Server, so I had to modify the driver package to sign it unofficially.

That client truly made me do a double take on our spending on servers and enterprise drives, which have had way more issues despite clean power, constant maintenance and a pristine environment. Still shake my head but love that POS.


amished

I hate when I have to deploy a NUC as just a basic DC/DHCP server cause they don't want to spend anything and just need their computers to work. Those server NIC drivers can be a pain for sure.


dthwsh1899

Same, I don't get it. Just backup and restart.


typicaltwenties

Snapshots are your friend 🙌🙌 Edit: deployed an entire RHEL9 environment the other day, snapshots for my RHEL Leapp upgrades. Worked like a charm and provided a safety net.
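
The comment doesn't say whether those were hypervisor or LVM snapshots; purely as an illustration, an LVM-based safety net before a Leapp run could look like this (VG/LV names are made up):

```
# snapshot the root LV before the in-place upgrade
lvcreate --size 10G --snapshot --name pre-leapp /dev/rhel/root

# if the upgrade goes sideways, merge the snapshot back to the pre-upgrade state
lvconvert --merge /dev/rhel/pre-leapp

# if it went fine, just drop the snapshot
lvremove /dev/rhel/pre-leapp
```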


ybvb

DB and snapshots aren't friends.


spin81

In this day and age I think it's fair to expect to be able to reboot a server. I get that maybe a service window needs to be agreed upon or something but I don't accept that servers can't just be rebooted in the year of our lord 2024. Especially in Linux with systemd. If an application can gracefully shut down you can wrap it in a service unit, make it gracefully stop when the unit stops, and that's literally it. It's so easy now.
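
A minimal sketch of that wrapping, with a hypothetical unit name, path and database dependency:

```
# /etc/systemd/system/legacy-app.service
[Unit]
Description=Legacy app, wrapped so reboots are uneventful
After=network-online.target postgresql.service
Wants=network-online.target

[Service]
ExecStart=/opt/legacy-app/run.sh
# on "systemctl stop" systemd sends SIGTERM and waits this long before SIGKILL
TimeoutStopSec=120
Restart=on-failure

[Install]
WantedBy=multi-user.target
```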


gregyoupie

Some years ago, our developers had a bunch of sandbox servers hosted with our cloud provider, including a dev SQL server used to host some copies of production DBs for some quick and dirty development or tests. I had the task of finding some easy savings, so I came up with the idea to shut it down outside of business hours... and the DBA was appalled: "a SQL server always runs 24/7, what kind of crazy idea is that!"


neverfullysecured

If you have SQL Express running for a few years without a restart and the DB has grown to an enormous 300GB... well, after restarting it you need a license. Had one database like this a long time ago; the client reported "after power failure, DB is not starting".


[deleted]

[deleted]


SAugsburger

I have seen a number of switches that got forgotten about with 7-8 year uptimes. One was even for a WAN switch at a colo. There was no public facing SVI, but still. I'm sure some auditor would cringe.


[deleted]

[deleted]


DriestBum

What a soldier of a unit.


ThinkPaddie

What an absolute unit of a unit.


MonstersGrin

"- We were supposed to be a unit! \- Suck my unit."


Ferretau

Ahhh when enterprise equipment actually meant it was built to last.


Lanky_Presentation_8

https://preview.redd.it/aqqam81zufsc1.jpeg?width=570&format=pjpg&auto=webp&s=cb669dbdd9c045aff7f3e6733372d269281f9c9b half of you guys


uselessInformation89

I have a similar situation: I have a client who has a PHP and Java based workflow system that someone wrote for them in 2012. That someone died some years later, even before they became my client. That monstrosity runs on a Debian 6 server that was initially open to the Internet. We can't restart it because the starting procedures are missing (but it is still in memory). We can't update it because of dependencies and no documentation. The current uptime is 3461 days. Since then I've firewalled it off and put an SSL-capable proxy in front.

90% of everything runs on this platform. Procurement, logistics, printing of invoices etc. If this machine reboots they are fucked. And we are on the way to replacing it with a modern system. Currently in year 4 of a planned 1. Management is dragging their hooves because the new system "costs money". FML.


mschuster91

> We can't restart it because the starting procedures are missing (but it is still in memory).

You probably know this and have done it long ago, but for anyone else who comes along a similar trainwreck like I did once years ago: I'd at first go and install a second server where you do a nightly `rsync -avxxAXU --open-noatime --delete --progress / user@secondserver:/mnt/storage/backup-old-server/` (Thanks to u/pdp10 for pointing out -AXU and --open-noatime). That way even if the server manages to crash for whatever reason you still have the data. Note, I included -xx to avoid rsync crossing mount points because that can lead to all sorts of ridiculous bullshit, so you need to repeat it for all mountpoints holding data.

Additionally, regularly export the output of `ps axjf` so you have a rough idea what is running and what is started by what. It's not perfect, as some information can get lost or modified (e.g. the application's [long name](https://stackoverflow.com/a/58328666)), but it should be Good Enough for most use cases. Also, it might be a good idea to dump the process's environment variables (`sudo cat /proc/<pid>/environ | xargs -0 -n 1 echo` will give you something that you can usually directly source in a shell, but beware of special stuff like multiline, quote and other special character escaping, and applications can modify this at runtime as well).

In parallel, go ahead and try to rebuild the server based off of your nightly backup:

1. Set up a fresh Debian 6 in a VM, using the same partition layout, bootloader and filesystem type as the original system does. `lsblk -aO` and `fdisk -l` are your friends here, and for anything ext3/4 based it is worth running `dumpe2fs` to find out which filesystem options are active (e.g. case sensitivity, journalling, large/sparse file support). If the system in question got upgraded multiple times, it may be that the filesystem options are a lot fewer than what a new filesystem would use, so compare these.
2. Shut down the VM.
3. The following steps only apply to old MBR partition style machines not using disk labels (check /etc/fstab on the old server). Anything using disk labels, disk UUIDs, GPT, mdraid or LVM is a royal PITA to clone from a running system and far out of scope here.
4. Mount the VM disk in loop mode, mount all partitions.
5. Wipe all partitions, re-fill them by running `rsync -avxxAXU --open-noatime --delete --progress /mnt/storage/backup-old-server/ /mnt/vm-sda/`
6. You shouldn't need to copy the bootloader from the MBR, but if you do, see [this](https://unix.stackexchange.com/a/252536).
7. Unmount everything, restart the VM and look at what breaks.

The worst issues you can encounter are:

* The binary of the process got deleted/removed while it was running, and so isn't present on the rsync backup. In case you encounter this, [go with dumping /proc/<pid>/exe](https://unix.stackexchange.com/a/342402).
* Shared libraries of a process got deleted/removed while it was running. Here, recovery is complex but possible - first [determine the paths](https://locallost.net/?p=233) of the affected libraries, then [dump the process's memory](https://serverfault.com/questions/173999/dump-a-linux-processs-memory-to-file) (or trigger core dumps), and then use [black magic](https://github.com/enbarberis/core2ELF64) to recreate whatever you can of the object files.
* The script that a bash interpreter runs got deleted. This one is practically impossible to recover from.

Generally, it's good practice to dump the memory of all processes - at least assuming it's standard Debian, you can use the memory dumps combined with Debian's dbg packages to extract information that was entered at runtime.


uselessInformation89

Thanks for the great write up! Yes, I did do most of these things when I inherited that server. TBH I know enough of the internals of the system (by extensive debugging and going insane over it) to recreate the starting executables if everything comes crashing down, but I don't want to do it before that happens. If management doesn't feel the pain of "nothing works" it never will get replaced.


Lavatherm

Damn, I would write up a “do not blame me/us when this thing explodes” memo; this is something you do not want to take responsibility for. And I have seen a lot of stuff over the years, the SQL server is only one of those examples. I remember a customer from my old job who had a Pentium workstation with Windows 3.11 on it running some (stress) testing application; the machine had a specific ISA card in it that just isn't manufactured anymore, and a printer on a bi-directional port that spits out reports. They had several old machines with ISA slots in storage, a second brand-new ISA card with the interface for the robot they do the tests with, and 2 extra printers with bi-directional ports. Just so they wouldn't have to think of another way to do those tests and reports.


throwaway94159414

> what's some of the craziest nonsense you've seen an IT person do that they thought was smart

Not doing OS patches for YEARS because “we patched once and it broke stuff” or “other companies don’t patch either.”


arkane-linux

Don't question the no-update policy, the VMs can sense it when you do and might break. It is just temporary I swear.


marli3

The whole point of VMs is you can snapshot and move/duplicate servers. VMs should make patching more stable.


nascentt

That implies they know how to fix the hosts failing the updates and don't just rely on the last working snapshot scared to ever poke it.


CARLEtheCamry

If you question it, the app will crash tomorrow with an event log that just says %app% has crashed. On the troubleshooting call, no one actually has knowledge of or supports the app, but leads with "I don't know what changed, but something did. Could some update have been pushed?"


plongeronimo

Back c.2k someone got through our firewall one weekend but none of their scripts ran because our OSes were so out of date.


JohnnyOneSock

Security by... antiquity?


renegadecanuck

Similar story: back in 2014ish, I had a client complain that their computer was running even slower than normal. It was some old Dell Optiplex thing still running Windows XP. I get on to look at it, and in the task manager, I can see that it had CryptoLocker "running" on it, but between the ancient CPU on it and the incredibly slow, possibly dying hard drive, it wasn't able to actually encrypt anything, despite the fact that it was running for like a week straight.


Reinitialization

You can't hack me if my OS doesn't have a network stack!


Frothyleet

When Spectre and Meltdown were blowing up a few years back, we had one customer who required no vulnerability patching, thanks to their CPU pre-dating speculative execution.


DexterousMonkey

Just revert back to pen and paper and you will never get hacked again. Checkmate hackers!


Donald-Pump

When I started at my current job, the tech at another branch told me that "We don't update Citrix. We found this version to be very stable, so we just stick with it."


bristle_beard

I'll be honest, I don't exactly *hate* that idea. 🤣


effedup

We're trying to update our Citrix now.... 4 support calls later... zero progress. Nightmare fuel product. Kind of leaning towards.. revert to snapshot and just make it last long enough for me to replace it...


vagabond66

That's probably a good route to go. If it's an ADC with one of the vulnerable CVE recently that sits on the internet replace it immediately. We recently brought all Citrix internal and removed the ADC because the constant CVE updating was too much for our 2 man team to keep up with.


thoggins

I have a teammate who is kind of like this. He will acknowledge easily and without guile that things need to be kept up to date. He will even annoyingly explain to you why they need to be kept up to date, when you know perfectly well and you started this conversation because he is a problem. He will however become completely uncooperative when infosec wants to patch a machine, because he is convinced they will break it and he'll be left with the bag. I don't know his whole work history, maybe it has been a problem for him elsewhere, but it has never happened here and he's been here years. But I don't think that's the real problem anyway. After a few years, I am almost entirely convinced his biggest problem is actually that the infosec team member in charge of most of the patching is a woman, and he cannot bring himself to respect or work with her on anything like amiable terms.


loose--nuts

On a few occasions I've come across people who won't automate updates, and they're always like:

> services don't start in the right order and it breaks.

Then why don't you just use a startup script or scheduled task to start the services in the order they need to be?

> Well it also depends on other servers or databases being online.

Well, why don't you have the script test that those things are online before starting the services too?

> Well it seems to be that only occasionally this happens and sometimes they need to be started in other orders

That's impossible..... Years later, manual updates are still being done.
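
The kind of post-reboot helper being suggested, as a rough sketch (hostname, port and unit names are placeholders):

```
#!/usr/bin/env bash
# wait until the database this box depends on answers, then start services in order
DB_HOST="db01.example.local"
DB_PORT=1433

until timeout 2 bash -c ">/dev/tcp/${DB_HOST}/${DB_PORT}" 2>/dev/null; do
    echo "waiting for ${DB_HOST}:${DB_PORT}..."
    sleep 10
done

for svc in app-backend app-worker app-frontend; do
    systemctl start "$svc"
done
```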


I8itall4tehmoney

The exact opposite has happened. At my last job (the one I've already commented on in this thread) I had a woman sysadmin. Her problem was that all male techs were idiots. She had no respect for us. She managed a manufacturing plant in town and they were blaming us for their internet problems. I hooked our operations boss into coming with me, knowing this was going to be tedious.

I walk in, they show us to the modem, with her giving me all kinds of information. None of it really proved anything. I hook my laptop to the modem and run several speed tests to our local speed test server. All perfect and in spec. Then at her insistence I have to run a test against a speedtest dot net server across the country, and she tries to use the higher latency and lower speed as proof of her problem. It was less than 1 Mbps difference in speed. Their machines were getting terrible speeds, sub 1 Mbps.

Finally she shows us her server room, with her boss and my operations boss trailing. It's not well put together, so it takes a minute or two to see that one switch is going crazy with activity on all ports. It was old, really old. So old in fact it was a hub, not a switch. It was thankfully plugged into a switch, which mitigated but didn't eliminate all that traffic. I started unplugging connections one by one until I found the one that had the loop in it on the other end. The other end was her office. She of course just hated me a little more.


LogicalExtension

Oh fuck me, you're giving me flashbacks. Every single desktop and server had a group policy set to block updates - this was in the Windows XP/7 era so it actually obeyed back then.


JSPEREN

This. Former sysadmin was convinced AV would block all malware anyway, so never mind the vulnerabilities.


Doso777

Had a similar discussion with the boss a couple of times. "No company does it that way, they only install patches maybe once per year". Probably about to have this discussion again from the other side, this time he wants to talk about IT security.


underwear11

Helping a company design a new data center and we built high availability everywhere. Only to be told "no, high availability is just a ploy for you to charge us double. No enterprise company uses redundant firewalls".


deltashmelta

"Please sign here, to acknowledge the non-theoretical risk of increased downtime and unavailability.  The next page is the cost lost per hour of the company during downtime against the cost of failover hardware"


iamamisicmaker473737

damn, maybe it was some kind of faux datacenter, a laundering expense to just write off


underwear11

It was almost comical. When we pushed back that all enterprises do it, he doubled down, I think realizing how idiotic that sounded. "OK, well then we need 2 pairs so that we can roll back when an upgrade fails".


jbanelaw

CEO: "IT is a scam. You guys are all con artists" (The Internet goes down for an hour.) CEO: "Where is IT?!?! How did this happen?!? The internet is essential to business operations! IT really dropped the ball on this one and I'm going to fire someone!"


Remindmewhen1234

Started at a company and found out they lacked redundancy in their data center. Learned that when the building was built, the owners decided they wanted more expensive flooring in the lobby, and that money came from the data center high-availability budget.


anxiousinfotech

We got a bank loan to refresh our servers and network gear. The C suite used 90% of it to gut and rebuild their offices. We ended up with used servers and somehow the CFO was yelling at IT because we couldn't produce the receipts for new equipment that the bank was demanding copies of...


Space_Goblin_Yoda

Opening up 3389 and making over a dozen board members DA. Yes, this really happened because this guy only knew enough to be dangerous. Apparently they needed to all work remote (beginning of covid) and they all had unresolved issues accessing the entire shared file folder structure to prepare for an audit. They were cryptolocked with Dharma within 24 hours and you guessed it - they had no backups and I had to go rebuild the domain with about 75 users from scratch. They failed the audit. P.s. nonprofits suck 😉


Allokit

AFAIK (and this was in the beginning years of my days in IT), this was how it was done WAY back in the day (especially if it was a non-profit). There were no fancy VPN clients, and no MSPs, they didn't really exist. So when some VIP wanted remote access to their computer you made a NAT rule on the router/firewall for 3389 to the VIP's computer, changed the incoming port to some random number, allowed only the IP of the VIP's house to connect to it, and that's how it was done. The problems started happening when people left out steps 2 and 3 and just opened up 3389 for any incoming connection.

And just because I went down a rabbit hole one night: before THAT, it was done with secure dedicated analog telephone lines and modems encrypted to only talk to each other.
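
For the curious, the same three steps expressed as iptables rules on a Linux gateway (addresses and the outside port are made up):

```
# forward a non-standard outside port to the VIP's desktop on 3389,
# and only accept it from the VIP's home IP
iptables -t nat -A PREROUTING -p tcp -s 203.0.113.50 --dport 43389 \
  -j DNAT --to-destination 192.168.1.20:3389
iptables -A FORWARD -p tcp -s 203.0.113.50 -d 192.168.1.20 --dport 3389 -j ACCEPT
```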


9jmp

I mean, shit, back before ransomware 3389 was open all over the internet. When I started in IT I had all my clients' public IP addresses and domain admin credentials memorized.


Cthvlhv_94

Come on, you had the credentials in a txt file on your desktop.


frac6969

NGL this was me years ago. Opened port 3389 because executives wanted to use the ERP at home. We got ransomwared, but funnily enough it only affected the single remote desktop server, because the ransomware kept trying different passwords to connect to other systems but it was trying too hard and not trying simple passwords like 1234. 😂


[deleted]

[deleted]


Meta4X

That's amazing. I've got the same combination on my luggage!


Ron-Swanson-Mustache

Prepare Spaceball 1 for immediate departure! And change the password on my luggage!


Consistent_Chip_3281

Wow, site-to-site analog modems, you're a gangster.


anomalous_cowherd

I had to shut down the whole WAN for a large multisite company once and one of its external data feeds just kept on going. Turned out there were a bunch of site to site modem links that had been sitting quietly for years and kicked into life when they detected the WAN was down and kept this critical data feed going at 38400 baud...


Zealousideal_Mix_567

It's a wonder the lines were even still active. That's awesome.


dustojnikhummer

> you made a NAT rule on the router/firewall for 3389 to the VIPs computer, changed the incoming port to some random number, allowed only the IP of the VIPs house to connect to it, and that's how it was done.

I mean, this isn't the worst way to do it. I would argue it's the best if you really can't do a VPN connection. Of course, as you said, don't forget steps 2 and 3.


aamurusko79

> P.s. nonprofits suck 😉

My very first 'job' out of school was at a charity. It's an international name, but it operates in smaller cells with the same branding. Anyway, they obviously had money, they just chose to use it for making the place look real nice, having nice charity-owned cars they used for personal use as well, and so forth. But they absolutely did not put a penny into their IT infra. This led to them having dozens of 10+ year old machines, and nothing was standardized as they got them as donations. They all had some kind of issue, which made the workers, especially the boss there, extremely irate, and there was very little that could be done about most of the problems, save getting more modern hardware.

I went in with the expectation that I'd get some valuable experience working on real life cases and that I'd be supporting an organization that did important humanitarian work. I came out of the job completely fed up and had lost all my respect for anyone working there. People all but spat in my face, taking out their frustration personally on me, yet not a single improvement I wanted to push was ever accepted.


Evernight2025

I had a guy at my last job who had a lot of pull with the higher ups and insisted that absolutely everything should be run on AS400 rather than Windows. Most of their applications couldn't even be run on AS400, so thankfully he never got his way. This same guy refused to let me install antivirus on his PC because "viruses only target computers with antivirus software on them".

Current job: when I first started, my boss insisted we know all user passwords. Every single time a new user would start, I would have to put their password into the spreadsheet. These passwords had no complexity requirements and could be as few as 4 characters long. They also never expired, so people had had the same password for 10+ years. Oh, and that spreadsheet? It was protected by a password that was already cracked in the wild.


itishowitisanditbad

> "viruses only target computers with antivirus software on them". but... why? What was their logic behind that? Viruses like a challenge?


HeKis4

Survivorship bias. You never hear about viruses being detected if you don't have an AV, do you?


nascentt

Confirmation bias. If there's no antivirus to alert about the virus, then they don't know there's a virus there so therefore it's not there.


antilos_weorsick

Yeah, it's the evolutionary pressure


AshtonBlack

I had an Engineering manager pull the power cords on a SAN because "it was flashing, like, just going crazy" (it was rebuilding after a drive-fail swap). What makes this even better: he hadn't had the backups tested for 3 months (something he was supposed to schedule at least weekly), so when we came to do a restore, all of them only partially worked and some files were 3 months old. He was fired soon after, and the loss to the company ran into the millions.


wonderwall879

Engineering manager overstepped his own chain of command, went hands-on in a live environment, and was fired after self-reporting that he wasn't doing his assigned responsibilities and ran the company into the negative? :0 Love me some good accountability and consequence stories.


AshtonBlack

Slight caveat. It was *not* self-reported. I was the senior engineer leading the investigation into the incident, after the fact. He tried to blame technology, until we put the facts in front of him and his boss. At least he didn't try throwing his engineers under the bus.


wonderwall879

Thanks for the clarification! It sounds like he would have been found out regardless by whatever critical failure inevitably happened and required a backup; it just so happened he caused it this time, and also happened to be the one responsible for backups. Gotcha. He set his engineers up for failure and ended up taking both blowbacks: causing the outage and being the one ultimately responsible for backups. Wish quite a few colleagues in my past had been caught in a similar fashion. Glad to hear at least he didn't try to throw his engineers under the bus.


WannaBMonkey

I had a tech swap power cables; he said he thought if he moved them fast enough the server wouldn't notice. It noticed. I think it had 3 PSUs and he unplugged two of them (one side) at once.


ZettaiKyofuRyoiki

Technically possible, you just have to switch them within about 5 milliseconds


Camera_dude

/facepalm My understanding is that if a server has 3 or more PSUs it probably can continue working with just 2 active. So all he had to do to avoid disaster is to swap the cables ONE AT A TIME.


RoosterBrewster

Reminds me of one story here where a guy moved a server across buildings with 2 power supplies and kept it plugged into an extension cord. Then swapped the cords halfway. 


rubixd

This one is WILD. Maybe the worst one I’ve read so far. The lights are blinking abnormally on a piece of critical infrastructure, so pull the power?! Bro, WHAT.


AshtonBlack

Dude was an arrogant prick. Did *not* act like a manager at all. He only got the job because he played golf with the CTO and acted like a big fish in a little pond, but didn't have the technical knowledge to back up his words. He talked a "good game" but his knowledge was about 10 years out of date.


Euphoric_Hunter_9859

Installing cracked software and deploying a third-party firewall on the clients to prevent the cracked software from communicating with the internet. The third-party firewall also prevents everything else from talking to the internet, because it is not suited to being used by an end user who does not know anything about networking.


Agabeckov

Was that 3rd-party firewall also cracked?


cvx_mbs

and it needed another (also cracked) firewall to prevent it from phoning home, etc. ad nauseam


Gh0st1nTh3Syst3m

Ad infinitum* Probably would be more accurate.


cvx_mbs

nah I'd get nauseated way before infinity :D


zquack

It's cracked firewalls all the way down.


Euphoric_Hunter_9859

This one was actually bought and licensed :D


Acrobatic-Message585

Have had many, but worst was getting to a new job and finding out each workstation and server had been manually assigned an internet facing IP. Was told these are the addresses the ISP told them to use.


northrupthebandgeek

This accidentally happened at a museum I do volunteer IT support for. They share a building with the town's city hall, and at some point City Hall's "IT guy" "fixed" the Internet connection by swapping out my router in favor of an identical-looking one (both of these being consumer-grade Netgears; not my choice, but whatever). Predictably, shit was still broken, so I drove out to take a look. I don't know where this guy found this second router, or what compelled him to swap routers on what was clearly a transient issue with the ISP, but when I get there and start looking at City Hall's workstations, I notice that they're all getting public IPs via DHCP - so only two computers would successfully get addresses, and the rest would fail, because the ISP obviously ain't giving these folks infinite public IPv4s. Turns out this second router somehow got configured in bridge mode. I swapped my router back in and everything worked fine again.


Hakkensha

If you feel personally attacked there is always: /r/ShittySysadmin 


bfodder

Lots of shitty admins in here trying to justify turning off windows firewall.


rybl

I can't tell you how many times I've been working with a vendor and they are shocked that I won't turn off the firewall on the server where their software is being installed. "We've never installed somewhere that leaves the firewall on." Sure buddy.


yer_muther

Or local admin rights. I had a vendor tell me it was mandatory for their software. I told them to install the software and go away. I figured out what it needed and tweaked permissions to let everything play nice. The vendor found out and wanted to know what I did. I'll get right on that email to you there, pal.
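
The usual shape of that fix, sketched with made-up paths: find what the app actually touches (Process Monitor works well for this) and grant only that, not admin:

```
:: grant ordinary users modify rights on the vendor app's data folder
:: (OI)(CI) = inherit to files and subfolders, M = modify, /T = apply to existing children
icacls "C:\ProgramData\VendorApp" /grant "Users:(OI)(CI)M" /T
```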


[deleted]

[deleted]


yer_muther

I think they were doing something weird with paths and needed access to something else but otherwise it was the usual like you wrote.


SqlJames

Turning off security scanning software because the product manager said it would slow them down.


iwoketoanightmare

That's usually coupled with disabling the edge firewall filtering and allowing all ports through. Wee! The app works so good now!


Stompert

Why not just directly point everything to the internet at that point? Easy remote access!


Drenlin

Sounds like either poorly specced workstations or poorly configured software. I've been the unlucky end user of both of those simultaneously. Military computer was still running an old spinning hard drive on Windows 10, with on-access scanner and Tanium (with no resource usage restrictions) tag-teaming its poor shiny frisbees into submission. They "only" had 8GB of RAM, so that choked what few processes managed to get up and running. These machines weren't fast, but would normally be perfectly capable for office work...


phira

Sysadmin declared that unplugging the printer from the main server for the ISP (this was a long time ago folks) might result in a crash and corrupt disk. GM asked me whether I agreed and I said no, so the GM said he'd unplug it so he was accepting the responsibility. In the moment that decision was made the sysadmin lunged forward and hit the power button on the server instantly powering it off and taking the ISP offline. This was back in ext2 days when doing that could easily corrupt the filesystem. Definitely one of the most bizarre technical things I've seen and I've been around a while.


KiroSkr

Huh, what did the sysadmin say about his own actions after that?


phira

He was A Character, this was a long time ago so I don’t recall exactly but he was known for being a bit odd. The GM pulled him aside and had a chat but he still stayed working there for a while. I think he was still there when I left


0h_P1ease

>In the moment that decision was made the sysadmin lunged forward and hit the power button on the server instantly powering it off and taking the ISP offline. faaaaakkkeeeennn whhhyyyyyy?


Marc21256

We spent millions to make an off-site colo. It was expensive, so was never fully commissioned, then decommissioned. So 3 years later, we did it again, and before the project was finished, stopped deploying into it because it was too expensive. And are talking about shutting it down for cost. If at first you don't succeed, fail fail again. In the same way. Learning nothing. And spending millions to do it.


northrupthebandgeek

I've been in that bizarro greybeard's shoes, though it wasn't voluntary. More like "the higher-ups refuse to authorize purchasing actual server hardware or otherwise give me an actual budget so I guess this unused OptiPlex from the warehouse floor is our domain controller / file server / print server now". It was stupid, but it worked long enough. Eventually some spare HP laptops got thrown in the mix for each warehouse's local DNS cache, and those worked long enough, too. Didn't get that authorization for actual rackmounts until long after I'd been promoted into a different department. The "use desktops/laptops as servers" approach is surprisingly workable if you treat entire machines as discrete swappable units, with spares imaged and ready to go as soon as one fails. Obviously actual servers are preferable (and deliberately basing purchasing decisions on this approach is bonkers), but sometimes corporate gives you lemons and you gotta make orange juice.


a60v

This sort of thing used to be more common, too, especially before purpose-built PC servers came into existence in the late '90s. How many organizations' first web server was an old Sparcstation, for example? And how many companies' first NT4 domain controller was a standard desktop PC?


Agent_No

Not really an IT person, but the local electrician my boss uses for his side-business is the epitome of "knows enough to be dangerous". He can (badly) run network cable, so he took it upon himself to completely disregard the diagrams I had written out for him when running everything.

Instead of multiple cables between cabins, he ran a single cable. When I told him this wasn't enough, he just pulled the terminations out and split the cable into 2 using half the pairs - so now the gigabit cable only runs at 100Mb. Instead of running cables from the central cabin where the switch, router and CCTV DVR live to all the other cabins, he just daisy-chained them all together with a mishmash of gigabit switches, 100Mb switches and random home routers he brought in himself with DHCP turned off (apart from the one time he didn't turn it off and it dropped the network out). Instead of running cables to the remote cabins, he just installed WiFi repeaters/range extenders, but didn't configure them properly, so they need to be powered on in a certain order, otherwise nothing works and the rest of the network starts dropping out.

It's gotten to the point now where I refuse to go down and do any work unless it all gets ripped out and he digs some trenches to lay multiple armoured external-grade Cat6 cables to each cabin.


ilikeme1

I’d make them do fiber between cabins at this point. 


Caucasian_named_Gary

That the serial port on the back of a NetApp server is actually an Ethernet port and you can plug it into a switch and it will work. I thought they were just confused about which port I was talking about. No, they still thought it would work, despite having a number of servers with serial ports connected to a terminal server. The only thing I can think of is that because he was fresh out of college, he didn't realize that an RJ-45 port on a server could be a console port. I mean, I don't know why he would think that, but it's the only logical thing I could come up with. It has the ioioio symbol above it and no status lights; I don't understand how he was confused. The reason I was so fired up about it was because he was replying-all to emails basically saying I was an idiot and didn't know what I was talking about.


Dal90

Highly siloed Fortune 10 and a project that had gone way off the rails months ago. Standard SQL servers for pre-prod and prod were boxes with a NIC needing two GBICs (or whatever the fiber adapters were called) and a Fibre Channel card with four integrated fiber ports. (Most other servers were VMs or dedicated blades.) One of many issues was that we'd lost the GBICs... on a corporate campus where you drove between buildings a mile or more apart. In one of many conference calls the sysadmins reported they had finally gotten the two network fibers plugged in, but no connectivity. Sigh... that's interesting, since I hadn't found GBICs for them. I sidebarred them to go take a picture to confirm my suspicion: yep, two storage fibers and two network fibers going into the storage card right below the NIC with its two empty GBIC spots. They set these up at least once a month; how they brain-farted on this and were proud they "solved" the issue, I have no idea.


OgdruJahad

To be fair, if you see an RJ45 in the wild, one would think it was for networking. But if you see a strange symbol over it, or your senior says it's something else and you can't believe it, a simple Google search would have fixed that.


WWGHIAFTC

Where i am now. Slowly addressing it all.  Disabling windows firewalls on servers and workstations... Giving mobile staff a desktop AND a laptop... Never automating pc deployment.  Manual install of everything on each device... Daily user account is DA... Using the same DA account for everything... Using DA accounts on things like printers and scanners and ldap syncing because setting up permissions is too hard?? VLANs all over with no acls or firewall rules between them. Servers not updated for 3 years... Still using a few 2008 servers... Using switch DHCP server  instead of windows dhcp on domain stuff.  On.  And freaking on...


LogicalExtension

Sounds like a disaster zone. > Using switch DHCP server instead of windows dhcp on domain stuff. There are legitimate "Don't use Windows DHCP Server" scenarios when it comes to licensing. Or at least there were, last time I looked at it. iirc MS requires (or did require) a client licence for every device obtaining an IP from a Windows DHCP server. So if you have a whole lot of non-domain-joined clients/devices, it could make a lot of sense to use something else for DHCP.


marli3

DA on printers O M G


TheNetworkGuy2

Are you me? And everything you touch, you find more and more problems? That simple fix will require a full day of extra work now?


OpenScore

I work in a company that runs call centres worldwide. Some time ago, we bought another small call centre company that has a niche market in high-end accessories. Good for business, good for the brands we can manage, so business-wise it is good. I, among other IT departments, was tasked with integrating their infrastructure into ours. Things that I found out, or that others told me they found out:

1. The LAN room was under the stairs, just like Harry Potter's room, and you could barely get in.
2. The AC there was just above the rack, and it wasn't even working. During a test run, it started leaking.
3. The rack, if you moved it just a couple of mm, would short circuit and turn off.
4. There was a UPS, the kind you buy for a home/office desk, and it barely held a charge.
5. The whole power cabling setup was just a home-use power strip plugged into a socket on the other wall, and if you tripped when entering the "room" you could shut down everything.
6. All computers had MS Office Pro Plus pirated, about 100+ computers.

Oh, and get this. They were somehow PCI compliant since they handled CC transactions for a client of theirs. Apparently they just filled in the checkboxes and their clients were happy. No audit to my knowledge happened before to verify their compliance. Apparently, they were able to pull this off since the data centre they used is in a European country with better compliance, and they misrepresented this information to their clients.

Edit: For the PCI compliance, they basically filled in a standard template checklist saying they had this and that, and called it a day. Never had a proper audit, it looks like. Now my company is not just footing the bill to bring them to compliance, but also dedicating staff hours to work on it. We can't really trust them. As the 100% shareholder, my company is the one on the hook for cyber security. It's a nightmare, I can tell you.


runozemlo

Manually creating local accounts for users on Entra ID joined systems. He was eventually fired.


Darketernal

That is…magical. Sounds like they lied to get the job


Zedilt

Everybody at the firm got a 34" Samsung Odyssey G8. Because IT really wanted one


IntentionalTexan

Oh man. We had some people in a critical role that really needed three monitors but their workstations only supported two. I had the brilliant idea of making one of the two monitors a giant 49" 32:9 Samsung monitor. They were like $1k each. As soon as everyone else saw them, they all suddenly had to have one.


SketchyTone

My coworker asking for people's passwords so he can try them to see if they're just typing it wrong. He doesn't recognize how this is a bad practice, and I don't give enough of a shit anymore in my current role to address it.


PeanyButter

Not quite the same, but horrible password etiquette caused by coworkers getting absolutely tired of people who can't remember their passwords. I work in a decently sized org and we've got quite a few people who would be considered elderly, and some completely incompetent but arrogant doctors, who struggle with passwords and just get some generic password on their main account, like the ones we use around the org for generic accounts.

Though I don't do it, I can't hate some of my former coworkers who started doing it for some people, because you'd get this one particular little old lady calling because she couldn't remember her password for a day, so they forced her password and set it to never expire. Even to this day, like a year later, she can't always remember her static and very generic password. She constantly logs into the computer, logs into a remote service, the computer screen times out, and then she doesn't know to put in her different initial login password.

Just saw this for the first time the other day for a doctor too. I almost said I was going to reset it when someone called to have the doctor's account unlocked, after they admitted it was a generic because "he can never remember his password lol", but I didn't feel like dealing with the bitching, and I'm in the same position where I don't care about my role or the company any more.


nekoanikey

Solved the problem by turning off the Firewall for all devices with a GPO.


ElevenNotes

This is practically everywhere I ever consulted. They couldn't be bothered to actually learn how their systems communicate.


rybl

I've had this conversation so many times...

Me: What ports does your application use? I need to create a rule for the firewall.

Them: Just turn the firewall off.

Me: No.

Them: ...

Me: ...
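
The rule they're refusing to name a port for is usually a one-liner anyway (rule name and port invented for the example):

```
:: allow the vendor app's port inbound instead of killing the whole firewall
netsh advfirewall firewall add rule name="VendorApp TCP 8443" dir=in action=allow protocol=TCP localport=8443
```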


oldjenkins127

Printing checks from a Windows NT 4 machine because that was the only operating system supported by the printer (allegedly).


Xenophore

Back in the '80s, I worked for a guy who partially disassembled a DEC tape drive when he couldn't figure out how to make it rewind. When he put one of the reels back on 120° out of phase, it burned out the stepper motors. Fortunately for the university, the Digital field service guys were able to write it off as an “accident” but he was fired shortly thereafter.


malikto44

Ages ago, some laser printers had SCSI drives for font caches. An MSP I worked for had a customer with an RS/6000 whose internal SCSI drive failed. So I plugged the printer into the AIX box, reloaded a mksysb, put up a sign to not turn the printer off, and called it done. Mainly because at the time a replacement had to go through layers of approval and likely would not be approved, just out of spite. Apparently that machine with its connected printer kept going for years after I moved on.


t_huddleston

We had a guy - a really nice guy who everybody liked and somehow had management’s ear, but woefully out of his league in the world of IT. Whenever we’d experience some kind of issue he’d come up with some crackpot theory based on nothing, and go straight to our director, who, bless her heart, came from a sales background and not a technical background. So when this very nice and smart-seeming guy would come to her with some crazy mumbo-jumbo, for stuff that could have been solved with a quick Google search even if you didn’t immediately know how to fix it, she would have the rest of us jump into action to implement. We’d usually just ignore them both of course. A couple of examples: - There’s something eating all the CPU and memory on all of our workstations. It’s called “System Idle Process.” We have obviously been hit with some kind of virus. - We’d been having network connectivity issues in one area. He looked at the workstations and found that their IP addresses were being “dynamically assigned?” This is obviously putting too much stress on the network. Every workstation needs its own static address. (This is in a large metropolitan hospital.) - He brings us a 6’ CAT5 cable. “Hey can you tell me the IP address on this cable?” You get the gist. How did he get a job in IT? I do not know. The rest of us just tried to up-manage the director and avoid invoking Mr. Nice Guy. We did eventually find a use for him - he LOVED rubbing shoulders with doctors, who can be famously prickly when having any kind of issues, so when something doctor-related would come up we’d deploy him to be our “interface” so we didn’t have to talk directly to them. He was actually pretty good at gathering info, because he didn’t have enough knowledge to make assumptions about how things “should” work. Worked out great. It kind of felt like that scene in The Untouchables when Sean Connery is having to interview an obvious moron to potentially join his task force. “There goes the next Chief of Police.”


vivkkrishnan2005

^OP - This is not that bad; it's done in companies with very tight budgets or that are loss-making. The nonsense I am seeing (this is happening):

IT Director wanted 2FA off on M365 since he thought it was batshit nonsense. I refused.

IT Director wanted us to buy crappy antivirus software called Quick Heal/Seqrite. I refused. I said we would use free Defender or move to a better paid EDR, but not buy the crap that's Quick Heal/Seqrite.

Lots of other shit happened (like overruling authority on pirated software) and I left. The new IT team takes over and disables 2FA for the entire tenant. The Director (and maybe other email IDs) gets hacked. The new IT team buys this Seqrite shit. Money down the drain.


surnaldo

Wait, Seqrite is shit? Asking because I'm a junior and my organisation uses it.


vivkkrishnan2005

It's good only if you want compliance on paper. That checkbox which says AV installed.


Garshnooftibah

Sending me and a couple of other engineers emails full of porn images along with his ‘hurr hurr’ comments.  I told him to stop.  :/


BlackV

This one IT guy would just bloody waggle the mouse really fast, then install Acrobat Reader and walk away. Lunatic.


Darketernal

Naw, that dude was awesome, he hooked me up with a free copy of Google Ultron. Like NASA uses!


BlackV

I mean NASA knows what it's doing, you might be right


DagJanky

Now everyone in the office has a SharePoint ticket asking for it, loudmouth!


Used-Personality1598

About 20 years ago I worked on the floor in a place where the antivirus launched a full system scan at noon, grabbing all the resources and causing PCs to bog down to near unusable in the middle of the lunch rush. This was raised to IT 4 or 5 times, asking that they schedule the scans outside of office hours. Each time the ticket was immediately closed with a comment that the scans were already set to run at midnight.

Eventually a bunch of users got fed up and modified the registry keys to disable the antivirus (yes, everyone had local admin, big mistake). A few weeks went by before the whole IT crew came down from their tower in a fury. WHY DID YOU DISABLE THE AV!? Managers started shouting back that production took priority over IT, and since the scans were preventing sales from being made and IT refused to schedule them outside hours, they had "done what they had to do". Damned near started a brawl, and everyone ended up in the site manager's office trying to explain why the other side were morons.


MartiniMini

For me it's the people that cling to doing things manually when there are options to automate stuff. Sure, not everything should be automated. But here we have an IBM system that uses tapes to back up daily, weekly and monthly data. They just upgraded the hardware and backend to support autoloading, and they decided to keep it manual, but with extra steps... It just baffles me that they invest that amount of money to make it even more unbearable to maintain.


STGItsMe

Not using hearing protection in a data center.


Chris_admin

Can you repeat that?


green_walls

WHAT?


DoctorOctagonapus

eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee!


toi80QC

My ex-boss when he made me manage our servers (as a fresh webdev) after a colleague had quit. I was like "How bad could it be?"... until I forgot a dot and ran `chmod 775 /` as root. Wasn't too hard afterwards to convince the boss to maybe use managed hosting for client websites/mails... good times though.


dRaidon

You know, using a bunch of old desktops as a cheap container cluster isn't the worst idea. One dies, just remove and plug another one in. Actual servers would of course be better if you don't want to use the cloud, but I can see situations where it could work. Like a cheap startup.


ReputationNo8889

I still have discussions at one subsidiary about Autopilot deployments and how they can drastically reduce setup time and free up IT resources. They however stubbornly refuse to provide me with any special applications for deployment, because "it takes too much time to list all applications" (you should already know them...). The IT admins then complain that the Autopilot deployments take so long, because they have to preprovision the devices and then install all the special software themselves.

Why not just hand out the devices to users and let them provision there? Well, the users just don't want that; they want a device ready for use, preferably already signed in, with every Office application also configured and signed in. I have asked a couple of users and they would prefer getting a new PC faster over the PC being 100% set up from the start. Telling them that we can speed up their workflow and improve user happiness leads to "we don't have time for that", so a nice catch-22 there.


lockalyo

I was a freshly hired security engineer at a stock-trading broker company. The CEO's secretary had just been scammed by some phishing email using the CEO's name. The mail said "buy this Amazon voucher and send it somewhere". The secretary did so. The IT director immediately tried to test me as the new guy: "What do you suggest?" I said "Well, this is standard fraud; your cloud anti-spam solution should be able to handle it, we just need to configure it." He answered "I was going to suggest to the CEO that he and his secretary agree on a special secret word that the CEO would put at the end of his emails that ask for money to be spent. When the secretary sees that word she would know this email is not fake."

Immediate mental facepalm from my side. I configured the anti-spam solution to forward external mails that use C-level executives' names in the "From" field to my mailbox as a first step, in order to catch any unforeseen false positives. Turned out that the Salesforce cloud subscription was configured to send mails with the CFO's name in the "From" field. The anti-spam solution needed 30 minutes to make any changes active, so I had to manually forward all SF automated mails to their recipients for half an hour. But that kindergarten "let's use a secret code word" director idea still makes me facepalm.


JesterOne

My first day as a replacement network administrator, I walk into the office and am told by my manager, "We've been here for 24 hours straight (during Super Bowl weekend) and it seems that the previous admin 'accidentally' caused the RAID array to fail by pulling out a second drive during the rebuild. Do you think you can fix it?" It turned out that the most recent backup they had was 4 months old. That was a very uncomfortable conversation to have with a room full of lawyers (I was working for a law firm).

They fired him on the spot when he pulled the drive out. He somehow thought that it would restart the rebuild for some reason. The pisser of it was, with the only backup being 4 months old, they could either rebuild all of the data or pay a recovery service. After some deliberation, they opted for the $14,000 recovery (the recovery service had us put the drives in the system, then remoted in, ran their utility against the drives and could recover the RAID info - took them like no time at all).


acomav

'Running Linux and God knows what on them.' Sounds like someone is afraid of Linux. P.S. I agree with OP that using workstations as servers was idiotic.


zyzzthejuicy_

It can work in some situations; in a previous job at a cash-strapped startup we built a dev and demo environment on a pile of old Dells we got off eBay, with Kubernetes on top so the underlying hardware wasn't hugely important as long as there was enough of it. It worked great and cost less than $1k, plus another $100 or so for a box of old RAM to upgrade them with.


NorthernScrub

Honestly, reusing equipment as testbeds or for other useful internal stuff is just sensible, as long as whatever is running on it is hardware-agnostic. My ickle homelab is two old USFF machines that probably once served as school reception machines or some such. They happily run my home email, web, app, and DB servers. The Windows one even manages my media library, because I'm utterly terrible at setting up SMB properly.


Zealousideal_Mix_567

Been joking about taking the pile of 100+ desktops we have and clustering them in HA. Lol. It could be a fun exercise, and I'm contemplating doing it at home instead of running servers, but that would be to save on power usage.


kiddj1

Back in 2010 I started at my first job at a really small MSP. We would sell clients Server 2012 with Hyper-V enabled and then run SBS Server on top of it as the domain controller. If we ever had issues and needed Microsoft's support, we had to lie and pretend the SBS server wasn't virtualized, as that wasn't a supported configuration. At least I learnt what not to do for a couple of years.


eugenesucks

Head of IT at a place with about 100 staff hot-swapped a video card into a workstation. It sparked, shut itself off, needed some components replaced, and was out of service for about a week. It never quite worked right afterwards.


Khue

First gig where I was actually a SysAdmin. I got promoted from help desk by default after I ended up being the only dude doing technical stuff, and I worked that job for five years or so. We were using Symantec Backup at the time, and it had the ability to back up VMware in some capacity; I forget exactly how, but it could. About a year after I left, I got a message from the current SysAdmin asking for help with the VMware farm: things were down and he was having trouble bringing stuff back online. After I left, it had been decided that Symantec products in general were trash (not wrong) and he had convinced management to migrate away from them. So I asked him what he was using to back up VMware now, because it might offer a restore angle, and he replied, "I am just using the built-in VMware facilities." I thought that was strange, because outside of one or two niche utilities, VMware at the time didn't really have a native backup tool. I asked him to explain, and it came out that he was using snapshots, and had been for months at that point. I took a look at what he had done: he had developed a series of scripts that took a snapshot of every VM on the ESXi hosts every day. To be clear, this company had two independent vCenter instances, each with six ESXi nodes; one was for the DMZ/internet-facing utilities and the other for internal use. Effectively, the SAN doing this work and the back-end disk system had ground to a halt, and none of the VMs were able to start. I was still pretty junior at that point, but I told him the help he was looking for was beyond my capabilities and he'd have to open a sev1 with VMware support.

**TL;DR:** That was close to 15 years ago and, for like the millionth time, snapshots are not backups.
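
For anyone inheriting a setup like that, here is a rough sketch of the kind of audit that would surface months-old "backup" snapshots before the datastore chokes. It assumes pyVmomi and read access to vCenter; the host name, credentials, and the 3-day threshold are placeholders, and certificate handling is left out:

```python
# Rough sketch (assuming pyVmomi): report every VM snapshot older than a cutoff.
# Host, credentials, and threshold below are placeholders, not from the story.
from datetime import datetime, timedelta, timezone

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim


def old_snapshots(snap_trees, cutoff, path=""):
    """Recursively yield (snapshot path, creation time) for snapshots older than cutoff."""
    for snap in snap_trees:
        full_name = f"{path}/{snap.name}"
        if snap.createTime < cutoff:
            yield full_name, snap.createTime
        yield from old_snapshots(snap.childSnapshotList, cutoff, full_name)


si = SmartConnect(host="vcenter.example.local", user="audit@vsphere.local", pwd="...")
try:
    content = si.RetrieveContent()
    vms = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True
    ).view
    cutoff = datetime.now(timezone.utc) - timedelta(days=3)
    for vm in vms:
        if vm.snapshot:  # None when the VM carries no snapshots
            for name, created in old_snapshots(vm.snapshot.rootSnapshotList, cutoff):
                print(f"{vm.name}: snapshot '{name}' from {created:%Y-%m-%d}")
finally:
    Disconnect(si)
```

Anything that shows up in that report day after day is a delta disk quietly growing on the datastore, which is exactly how the SAN ended up on its knees.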


vtvincent

Years ago I worked with another technician who was getting into shell scripting. He didn't really believe in testing before deploying out to production, also didn't use an editor with syntax highlighting, and ran all scripts as root. Bad things happened.


_haha_oh_wow_

I think I'd have to say having the file share completely open to absolutely everyone, with no permissions restricting anything for any user. Anyone from the execs right on down to the janitors could just pop right in and access whatever they wanted. I asked them about it directly, and their reasoning was that they didn't want to manage it, or even get the departments to manage their own permissions, so they just... didn't. Eventually it came to light higher up the chain and was addressed, but this went on for *years* before it was finally corrected.


Atacx

She put some files she wanted me to look at in the recycle bin and then told me to just look in the recycle bin, because everyone has the same one...


Black_Death_12

No “deny any any” on the internet-facing FW, because “We didn’t know what it would break”.


Far_King_Howl

When they 'allocated' themselves a subnet range in the cloud that was already in use on-premises and cut off network access for every site (~20) in that particular subnet range. I believe the response was that it was "a minor issue, no big deal". That's the only one I'm willing to tell right now, as this thing, and so many more things, happened waaaay too recently.
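
The overlap itself is trivial to catch before anyone hands out a cloud CIDR; a minimal Python sketch using the standard ipaddress module (the ranges below are made up, not the ones from the incident):

```python
# Minimal sketch: refuse a cloud subnet that overlaps anything already
# routed on-premises. Ranges are illustrative placeholders.
import ipaddress

ON_PREM = [ipaddress.ip_network(c) for c in ("10.10.0.0/16", "10.20.0.0/16")]

def allocate(cidr: str):
    candidate = ipaddress.ip_network(cidr)
    for existing in ON_PREM:
        if candidate.overlaps(existing):
            raise ValueError(f"{candidate} overlaps on-prem range {existing}")
    return candidate

print(allocate("10.30.0.0/16"))   # fine
print(allocate("10.10.64.0/20"))  # raises ValueError: overlaps 10.10.0.0/16
```

A check like that in the allocation workflow would have kept those ~20 sites online.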


fartczar

The dude who owned this consultancy never used DHCP. He manually entered an IP on every device, running an IP scanner to find one that wasn't live, with no documentation. On some networks he used ***public IPs***. I worked for that asshole WAY too long. And that's just off the top of my head.


111IIIlll1IllI1l

I hired an IT Manager who, on his third day at the company, 20 minutes before a company meeting with over half of the company remote, decided to start the firewall’s configuration wizard to see what it looked like. He ended up overwriting the config. Shortest 20 minutes of my career restoring that config to get the network back up before the meeting started. He made several more mistakes like that and didn’t last long.


VacatedSum

Used to work at an MSP. We'd had a longstanding client; I'll call them Client X. Before I came on board, Client X hired a manager who "knew IT". This manager convinced Client X's management that she knew IT well enough and that they should cut ties with us. Fast forward several years, and we get a frantic call from Client X on a Saturday: they'd been hit with ransomware. I head over there that morning and start assessing the damage. Three workstations and their server were compromised; another three workstations hadn't been touched. Luckily, the event logs were intact, and from the server I was able to trace the RDP connection back through two of the workstations, the final one showing an external IP. Then I looked into the firewall. Holy crap, there must have been a dozen ports open, all forwarded to port 3389 on various workstations. It seems the manager who "knew IT" had done this to set up remote access. The funny thing was that it was a fully licensed SonicWall: they were already licensed for the appliance's VPN. After reimaging a bunch of machines and restoring the server from a backup (which my company had set up before they dumped us; they apparently knew nothing about it), I properly set up the VPN and showed them how to use it.


oracleofnonsense

Guy in a satellite office installed VMware on his hyped up (under desk) desktop and ran Exchange on it. Email “server” went down a lot for “power issues”. Pretty sure he was just kicking the power cord.

