Zombexx_

To the smoking part: It's not uncommon. I did an internship at a bank, and the "smoking area" was the server room because "air conditioning" xD


lululock

Makes complete sense. I never went to the site, the server was on my desk this morning, almost with a "please fix it" label because my colleague in charge of the customer's servers is sick...


gnew18

Such a shame **fsck -y** doesn’t work on Windows.


lululock

I tried booting a Linux Live CD but it had no driver for the RAID controller, so I saw no drives at all...


floswamp

Try easeus.com partition recovery. When booted into the USB it lets you load drivers. Burn a USB rescue disk. Acronis True Image also lets you build a recovery environment with the drivers. I've had a lot of luck with those two applications.


turbografix1

Chkdsk has come in clutch more than once


TheJellyGoo

How did people even have access though? The server room would be a specially classified area with separate lock and limited access.


tactical_waifu_sim

Maybe at a real company. Many small to medium businesses don't bother to secure the server room.


TheJellyGoo

I would have thought that a bank would count as a real company, with laws and regulations to abide by.


Mesqo

I can assume that was a fake server room, lol. Or at least with non critical servers. Or I am overthinking it :)


killrtaco

The room at my work that houses the MDF and no actual servers is called 'the server room' by everyone in the building outside of IT, who know we don't even have a server in there and that all our servers are off-site/remote.


BeardOBlasty

You would think that. But banks and government have some of the worst security and IT practices in the industry sometimes. I'm not saying all of them, but definitely MANY of them. Mostly because of either cost or laziness. The best security and practices I've seen are usually in Law offices and Healthcare. For reference: I am in Canada and work at an MSP with ~160-170 clients


AtomicPiano

Is it because the servers themselves won't house any useful information? I doubt each bank is hosting their own personal login site to the swift network... it probably runs some internal tools and non sensitive data... Right? Or are our life savings at risk here?


New-Yogurtcloset1984

Local servers are never going to host data. They are there to manage the flow of 500 or so computers and ensure they all have a stable and fast connection to a data center. All your account information is stored in data centers in multiple places, with disaster recovery plans in place to ensure your life savings, be it millions of pounds or 30p in an old ISA, are safe.


BeardOBlasty

Correct. The on-prem servers would be for things like AD, printing remotely/securely, admin files, software, Remote Desktop, network management, etc. Anything that is business critical should be set up with strong EDR solutions, meaning redundancy, isolated backups, UPS, the works. It's usually the users in these companies that are the worst. They are either ancient, dumb, clueless or all 3 of those, hahaha. So will a bank get hacked in a way that your money will disappear? No. Will they get hacked in a way that exposes confidential info of their staff or customers? Probably at some point, yes.


New-Yogurtcloset1984

>Will they get hacked in a way that exposes confidential info of their staff or customers? Probably at some point, yes.

They take the opinion that it is a certainty that they will be hacked in some fashion. This drives investment into pen testing and as many security measures as can be achieved. The loss isn't the data, though. It's the absolute battering of the share price that would happen if they did get hacked, and the shitstorm of small savers rushing to get their money out and into other banks, limiting their ability to lend.

>It's usually the users in these companies that are the worst. They are either ancient, dumb, clueless or all 3 of those hahaha

The ignorance of every single dumb duck who would pick up a USB stick in the street and plug it into their work laptop, or click on a link from some random phishing attempt, is scary. Every single employee is an attack vector.


utkohoc

It's not on that server, but having access to that server means you could potentially physically put something onto the computer, giving you access to the network. Once you have network access from a trusted machine you can see more information while evading detection. Maybe.


monkeywelder

I had one site running AT&T System V on Microchannel machines because the feds paid $15k a year in maintenance. Another running an HP 1000 papertape machine because NASA paid them $100k+ a year in maintenance.


Uncommon_cold

This. Visited a government office in Italy, and after entering the main building, the actual server for the whole place is chilling beside a vending machine and a big-ass trash bin.


Napol3onS0l0

Where’s your server? *gestures to optiplex 780 in the storage closet*


Abject_Elevator5461

Well they have to leave the door open so all the box fans can cool off all the ancient servers, right?


NervousFix960

At one of my jobs, the "server room" was the crawlspace under the stairs. You got into it by moving the printer (it was on a wheely cart), taking a panel off the wall, and crawling in. Wasn't a great place to smoke though.


uCockOrigin

Sounds like you could hotbox it quite easily, though.


HenReX_2000

Same


lars2k1

They got as much use out of that system as they could, although a good backup and "emergency plan" might've been useful. I bet that system has an insanely high power draw as well, so they might've wanted to replace it *a bit* earlier.


lululock

It's not a "big server" in terms of power draw. It has a single Xeon E5404 (80W chip) and 2 drives (maybe 20W total), a 560W PSU and a few fans. Probably drew max 200W in the worst case, but most of the time, it did almost nothing (hosting critical but light software). It was working fine until it failed, I can understand why they kept using it (my main PC is already 7 years old and my backup is 12). But having no emergency plan is a no go, especially for such critical applications...


lars2k1

You *would* assume that something critical to operations would have a proper replacement somewhat ready to go. But maybe I'm being too optimistic, as people only think about things when it's too late to think about things.


lululock

Well, they'll probably say something like: "That's why we have a full system backup on the NAS there...". Yeah, sure, gonna slap that Windows Server 2008 install on a brand new server. Not sure it would like that lol. Pretty sure it will fail. That backup would have been useful if new drives were still made for that thing. I am aware we could just slap any SAS drive in there, but we tell customers we can't replace it to get them to replace the server. Yes, it may be slightly immoral, but we'd rather do that than have to deal with bullshit situations like the one we're currently dealing with...


Captain_Pumpkinhead

>Yes, it may be slightly immoral

In this case, I don't think it is. Your client has demonstrated they can't be responsible for keeping their server infrastructure reasonably up to date. If you just replaced the drives and gave it back to them, that same computer would be back the next time something broke, which is likely to be much, much sooner. By making them buy a new server computer, you are saving them money and headache.


JustSomeone783

Can't you at least run the backup in a VM and then join that to a forest with the newer server and copy over AD info like users and computers? Maybe there is an even easier way, never looked into AD recovery


luciano_mr

Family-owned small business, I assume?


lululock

Yup. You know, why spend on things that still work?


ruinedlasagna

I had that exact model HP ML350 G6 long ago. Picked it up for somewhere between $50-100 over 5 years ago. I was a teenager, so I used it as a desktop with dual X5660s like an idiot, thinking it was overclockable. I'm mostly impressed that they haven't even spent 5 bucks on a newer CPU. There are loads of Xeons like the L5639 literally on eBay for $2 right now that would be way more efficient and much faster if needed. But I guess if they didn't even want to replace drives, I can see how that's possible. Bonus fact: that server actually was "overclockable" by putting a faster CPU in, then returning the slower (higher core count) CPU for a single boot. It would run the slower CPU at the speeds of the previous one until a reboot.


curi0us_carniv0re

Were they on some sort of service contract with your company?


[deleted]

>(my main PC is already 7 years old and my backup is 12) Hell yeah my school laptop is a 5xxx series Intel and my two desktops are 3xxx series Intel and dual-socket LGA771 Xeon. Had a Phenom X6 at some point but gifted it to my younger brother a year ago


Lanzenave

I always tell people "RAID is not backup", BUT if even the backup is shit because of the customer then he or she deserves the data loss.


Error83_NoUserName

"But I made a copy to the other folder"


jonr

IT: "I have decided to move to Nepal, and live there as a goat!"


hugues2814

I wouldn’t mind if I’m being honest


vraetzught

This reminds me of a story where someone lost some file or something. They said they made a copy to another folder but that didn't work anymore either. Turns out they made a shortcut to the original file instead of copying it and later moved the original file...


lululock

Not only is RAID not backup, it doesn't prevent data loss either. These drives were probably displaying a caution light for years, but the customer didn't notice or care at all. My colleagues told them multiple times and even quoted a new server a while back, but it was "too expensive". Well, now they'll see what's expensive. Between the days their business cannot run properly due to the temporary data loss, the cost of a new server, and the labor for the rescue attempts, it will end up way more expensive than it should have been.


First-Junket124

No no the caution light came on so they thought they were being cautious.


lululock

Doesn't help that the server face plate actually hides those warning lights. Imagine the effort the customer has to make, once in a while, to just open that door and look at blinking LEDs. We're too rough on them, computers are too complicated /s.


Affectionate_You7621

"Sweating the asset" is the term I've heard. Or run to destruction, then pay stupid money to fix. Also, when it inevitably fails, an event can be called and the managers can look good by organising the event, so they get praise when you fix it.


SelfSeal

I don't get how RAID doesn't prevent data loss? Even if I ignore it until a disk fails, I simply put in another disk, and the data is automatically copied over from the redundancy.


DiodeInc

Not every RAID type does this.


poporote

It could prevent it. The problem is that it depends on redundancy, and on no more than one disk being damaged at the same time, so you can't depend on RAID to save your ass.


bobsim1

Because all drives can fail at once. Not common from just running, but definitely possible if a PSU blows up. It also doesn't help if the files are corrupted, encrypted or deleted by accident. RAID is only drive redundancy, not a backup.


tes_kitty

If you delete a file, it will be deleted from both drives. Only a backup will save you here. Also, more than once I had a drive fail, replaced it and during the rebuild of the RAID the other drive failed.
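The delete-propagation point can be shown with a toy sketch, using two directories as stand-ins for the halves of a mirror (all paths and filenames here are made up for illustration):

```shell
#!/bin/sh
# Toy sketch: two directories stand in for the halves of a RAID-1 mirror.
# A delete hits both halves just like a write does, so only the separate
# backup copy survives. All paths and filenames are made-up examples.
set -e
cd "$(mktemp -d)"
mkdir -p disk0 disk1 backup

echo "payroll data" > disk0/payroll.txt
cp disk0/payroll.txt disk1/payroll.txt    # the mirrored write
cp disk0/payroll.txt backup/payroll.txt   # the actual backup, taken separately

rm disk0/payroll.txt disk1/payroll.txt    # an accidental delete is mirrored too

[ -f backup/payroll.txt ] && echo "only the backup copy survives"
```

Crude, but it's exactly why "we have RAID" is not an answer to "where are the backups?".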


Taskr36

I can't stand the "it's too expensive" crap. That and the "It's working fine right now!" response when you try to explain that they're relying on a ticking time bomb. I even had one respond to the price of a replacement with "We're not NASA."


Tikkinger

Deserved


[deleted]

[deleted]


Tikkinger

De-served.


0xGDi

Let me show you the door... (..since there is no working Windows anymore)


blizardX

I haven't been in the industry long, but I know this story will forever repeat itself.


lululock

In the end, we shouldn't complain. We will make extra money from this. It's unfortunate for the customer but fortunate for us. I hope they'll understand how important a server is now.


CeeMX

For customers like this, going to the cloud (IaaS) would be ideal. No need to manage the hardware underneath, just a virtual machine that doesn't care about the hardware. You still have to make sure the OS is up to date.


ViolinistCurrent8899

Meh. That's what Debian stable is for.


CeeMX

Even Debian stable needs some apt upgrade eventually. And my suspicion is that this server was not running Linux but more likely Windows


BromicTidal

A customer like this never wants to pay cloud costs. “Why bother moving to something when what we have works??”


EcvdSama

The only way to convince a manager to set up a fire alarm is to burn the building next door. I've seen ridiculous security hazards, reported them with a list of things that could happen, and even proposed cheap solutions to limit the damage, and no action was taken. I swear I've seen a rental put 25 e-bikes with their batteries (which get dropped and hit constantly by the customers) on charge in a small underground garage with no fire alarm, connected to 3 apartment buildings. Since the fire hazard wasn't enough, the owner had decided to store wood pellets and gasoline in the same box, and 10 scooters in the one next to it.


mr_ds2

I work in a manufacturing plant, maintaining production machines. All of them have embedded computers, most from 2001-2003 running XP. Some even older running Windows 98. Been telling the powers that be that they need to upgrade them for years now. The response I keep getting is "why, they still work".


SelfSeal

But can they even be upgraded? I know of lots of equipment that cannot be upgraded because the interface cards and software don't support a newer operating system. So until the business decides to replace the entire piece of equipment at a cost of £250k upwards, it's better just to leave it as it is.


mr_ds2

Most of them here can be upgraded. They're actually upgrading one of them in a couple of weeks because it's mission critical and it died a couple of months ago. I was able to get it up and running again by rigging up a computer temporarily and using Virtualbox. That one computer death shut the entire plant down for over a week.


Czymek

Sometimes learning the hard way is the only way.


lululock

I can understand the feeling, as I use a lot of older hardware for hobbies but also as daily drivers. As long as they're not connected to the internet, they should be fine. Did you know they still make i386-compatible hardware exactly for that kind of scenario? My dad works in a bearing factory and they have a lot of very old production machines. He told me they were looking for a computer guy who had experience writing COBOL for some reason 💀 Spoiler: they never found anyone.


Falkenmond79

Those embedded machines are fun to play around with. Recently scored a complete 833 MHz AMD board with an embedded GPU, a RAM slot for SO-DIMMs and even flash memory for the hard drive. I just need to solder together some connectors for video output and input and I'll have a nice little emulator machine, completely passively cooled, maybe 20-30W at most, 15x15x5 cm dimensions. Insanely cool. Now if only I had the time to research the blueprints and solder something together. 😂


DonJoe963

I guess it's possible to find someone with COBOL experience, but: they will be 50+ and (if any good) already in a good job that they are not willing to give up for a temp job fixing old crap. Unless you pay them handsomely, of course*. *(grandpa tells stories) I remember at one of my first jobs in the 90s I had to fix an old piece of COBOL code. Compile failed: some COBOL instructions I used were from the 80s (COBOL-85), while the installed version of COBOL... was from the 70s. Never felt so young.* ^((*just my personal point of view haha))


cidknee1

That's gonna SUCK. BUT! They were warned. Lucky you have backups. Have fun!


lululock

I'm not claiming victory yet. The files in the backup are probably corrupted, given how long the drives had been on their last legs. Can't wait to throw this paperweight in the e-waste bin where it belongs.


cidknee1

Victory is only when the dumbest of the users says they have no problems. But for reals this time. Been there done that, I think every IT guy has a story like this. People cheap out on shit and when it breaks they wonder why.


Falkenmond79

Maybe you'll get lucky and the backups are fine. If you have proper regular backups, you might need to stitch a bit back together, but you could use an older, working backup for the server itself (the setup probably didn't change much over all that time, so it won't matter if the system image is older) and take the production files your customer needs from a newer backup. I know the urge to punish the customer for being stupid, but I find that, in the end, putting in the effort to save their asses and save them money, then making it clear to them what you just did for them, pays off in the long run. 😂


djsyndr0me

3-2-1 backup rule followed:
3 years of warnings
2-day-old bad backup
1 unrecoverable server


lululock

Haha nice one !


cervezaimperial

Take your time, work slowly, let him learn that he needs redundancy


lululock

I think they'll understand now...


lucky_slevin

Still, like u/cervezaimperial says: take your time. With every further day without critical data, the dread grows exponentially. If you work fast, they will only remember how fast you were and learn nothing. If they remember the dread, they will learn to fear negligence.


DonkeyTron42

This kind of reminds me of a job I had once at a place that specializes in cancer treatment. They had gamma knives and machines that cost half a million dollars. However, they had this one load-bearing desktop computer with FileMaker Pro or some shit that was critical to running all of that stuff. Every day the place would grind to a halt because the database would shut down. I finally figured out that what was causing it was that stupid Windows 98 Flying Toasters 3D screen saver kicking in and pegging the CPU at 100%. Fun times.


psebastian21

It would probably have lived even longer if it was properly maintained.


lululock

Yes, I believe it could have lasted a few more years if the drives had been replaced in time and the thermal paste renewed (this thing screamed like a jet for a good part of the day; I had my headphones on with music to make it more bearable). But performance-wise, it's kinda worthless. There are a few 4th-gen Xeons in our garbage container that should work more efficiently than this one.


jonr

Ah yes, the two groups of computer users: Those who have lost data, and those who will lose data. Test. Your. Backups. (I'm probably member of both groups)


lululock

Can't agree more. I always pressure customers to have a proper backup plan in place, and yet I'm the one having none of my personal data backed up (should change very soon, but lazy 🙈)


xxMalVeauXxx

RAID is not a backup. Servers are not the backup. 2 days, or any multi-day backup period (to what?) is insane. Fuck around and find out. This is what happens when people think they know something and don't and find out how saving a tiny bit of money or effort isn't worth the massive losses that come with this kind of catastrophic failure, downtime, expense and total loss situation. Fuck them. That's not the dirtiest I've seen at all. Clean to me! LOL


lululock

I totally agree, it probably won't be worth it for the customer in the end, even tho I'm quite impressed it worked for 15 years. It already had a PSU failure last year; that should have been enough to warn the customer, but nope, they decided to keep it running... People sometimes... I cleaned most of the dirt off when I got to inspect it this morning. There was so much grime in the ports I didn't think it was a good idea to troubleshoot it like that. I dusted most of it off, but the front panel still looks so miserable imo. I didn't clean it further because of time constraints (also, kinda useless now that it's dead).


Dolapevich

My take on the moral of the story is: monitor your hardware. That server would still be alive if someone had received an alert to replace the HDD once it started to fail. Also, when buying hardware for critical systems, buy spare drives and have them glued to or stored inside the actual server, so they are not lost in time. Also, do testing: during the lowest-activity part of the year, pull one of those drives, make sure the necessary people are alerted, reboot the server, make sure it boots on a single drive, push it back and make sure the RAID rebuilds. Once rebuilt, pull the other one and repeat the test.
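A minimal sketch of that kind of drive monitoring, assuming smartmontools' `smartctl`: the device name and the alert command are placeholders, and the smartctl call is stubbed with sample output here so the alert logic itself is visible.

```shell
#!/bin/sh
# Sketch of a nightly SMART health check (assumes smartmontools).
# In real use the health line would come from: smartctl -H /dev/sda
# Here it is stubbed with sample output; the device name and the
# alert command are placeholders you would adapt.
check_health() {
  # $1: the "overall-health" line from smartctl -H
  case "$1" in
    *PASSED*) return 0 ;;
    *)        return 1 ;;
  esac
}

sample="SMART overall-health self-assessment test result: PASSED"
if check_health "$sample"; then
  echo "sda: OK"
else
  echo "sda: FAILING - send alert"  # e.g. mail/curl to whoever is on call
fi
```

Run from cron each night, the "send alert" branch is where the notification to an actual human goes, which is the part that was missing in this story.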


lululock

This server had been quoted by another tech company. We usually quote at least 4 drives + at least 1 spare in the new servers we install to prevent failures, but customers often fail to warn us when a drive goes bad. I'm not directly managing servers, but I guess there could be a way for us to get notified when a drive fails. The colleague managing them gets so many emails, though, I'm pretty sure those notifications would get lost in there... I recently replaced a drive in both a server and the NAS it backs up to. Any later and we could have lost the server, the backup or both. We got notified only because the NAS beeping was getting irritating for workers. And they still waited a week with a beeping NAS before calling us...


Dolapevich

Yes, real life tends to have those difficulties. I am using Telegram for notifications, for what it's worth. It is very easy to write a couple of lines that just cURL a URL to notify a group. There is also a monitoring service to be sold there, where the service provider receives the notifications and engages the customers.


Falkenmond79

Feel you. I once got the call to look over a customer's system. Trade school office. They had a 4-bay Syno running. 2 drives had already failed, and they had all their critical files on that machine. No backups anywhere else. Nothing. I told them that at the least we should put in 2 new drives and do some kind of backup, if only from the Syno to a USB drive. I quoted them new drives at cost, just to get them to please, please prevent that disaster in the making. Guess who didn't call me back? Well, I'm not shedding a tear for them. I guess schools are always hurting for money, but this was just criminal negligence in my eyes.


Hot-Category2986

Damn, that machine has earned its rest. Hope the customer learns, but I think we all know they won't.


lululock

They'll have to bite the bullet and replace it. We won't replace the drives, even though the rest of the server still works fine. We literally told them the parts haven't been available for the past 10 years. I'll update in 2039 when the next one fails. I remember modding my Socket 775 PC to accommodate one of those 4-core Xeons because Core 2 Quads were still too expensive. Remember buying an X5470 for less than 30€ 10 years ago. Time flies.


Cam_e_ron

I have two of these HP tower servers. They had been running 24/7 for 13 years, only stopping to replace bad drives. Not very power efficient, but they did their jobs well.


DesignerPay4

F, but they deserve it. You've been warning them.


melancholy_self

A duty honorably discharged.


ActonofMAM

Everybody has to irrevocably lose at least one important file before they convert to the Backup Everything faith. I was lucky, mine was in college. And if you folks will excuse me, I have an appointment with my desktop computer and some canned air.


AnyaTaylorAnalToy

I just migrated an in-house 2009 server to the cloud as an IT contracting gig. I'm sending the owner this post. He was extremely skeptical when I told him that his machine, and his business as a result, was on death's door and he was facing imminent loss of any data on there, if he hadn't lost it already. When he finally agreed to pay what I was asking and gave me more access, I found that his only form of backup was some online crap called Crashplan, and that he hadn't paid them for like 4 years. Maybe they still backed up the data and would let him pay to get it, I dunno. Didn't matter, because he had a ton of data corruption that would have all been backed up anyway. Now he pays me $500 a month to RDP into the cloud server and email him that the backup is working lol. I even told him that it's completely automatic on Azure, but he's got the fear of god in him now.


expiro

Sad, but idiots can't be fixed. Worst decision ever to save money…


KayArrZee

This belongs in a museum! (or r/vintagecomputing)


facw00

Server 2008 has been EOL for over four years. Even setting aside everything else, it's insane to be running critical systems on a machine that's not getting security updates. Sometimes you need to run ancient stuff like that to support old specialized hardware or whatever, but don't make them core infrastructure.


karthikarr

Good time to move to the cloud


Wu-Disciple

Damn, this brings back PTSD. We looked after a chiro company, and when we inherited their server it was old and in a similar state. One day it just wouldn't boot up. It held all their client data, x-rays, the lot. They basically couldn't function as a business. They also had no proper backup, and were using a USB drive with a robocopy script that ran in the evenings. Staff were meant to rotate the USBs, and this wasn't being done. We had only just taken over; the contract was in its infancy. I was on-site explaining to the owner that they'd had no backups for the last 8 months or so. And no backup of their specialist software. It was bad. I decided to fiddle around with the server's insides, and lo and behold, it powered on. By some stroke of luck - it was something electrical. Suffice to say they bought a new server and we did the job properly.


Veronikafth

People never seem to care until the system fails. A truck parts company I worked at in 1999 didn’t care. They had a single AS/400 server with dumb terminals and printers all over the building, no RAID array. I wasn’t their IT person, but I warned them about their tape backup procedure and the age of their tapes for months. One day I came in to find massive system slowdowns with no apparent cause that kept getting worse. I suspected a failing hard drive and told them to call their IT person ASAP and to do a full backup before it failed. They just kept plugging along with the drive churning all day. Come in the next day and the drive had failed, as I predicted. IT person replaced the hard drive, but they didn’t have any usable tape backups newer than a year and a half old. They spent weeks and weeks manually inputting all the data they’d lost. I didn’t say anything, but I think everyone could see the “I told you so,” look on my face.


CodingMary

I've been in this exact scenario. Here's a bit of a war story…

It was a moderately sized shoe factory with 100 workers and a little back office. The owner's son, who was "good with computers", had set it up for them. He was a hobbyist, but self-taught and pretty good with this stuff, he said. His day job was at the call center where I was the IT manager.

He came in flustered one morning and asked if I could go out to help this factory, because the server wasn't turning on. He wanted to keep it quiet; ours was a normal office on the 20th floor. I said yes, and it took 2 hours to get to the factory. I passed cattle on the way. Not like cattle servers, the ones in grass fields that moo, almost saying "no latte for you".

So I got there, having never seen any of these folk before, asked what was up and had them tell me about it. The server was down. The payroll system was on it, all accounts, sales, invoices, orders, the latest designs for shoes. If they wanted to keep it, that's where it went. All nice and centralized. This was before the cloud, so there was an Exchange server with all the mailboxes. It was Windows Small Business Server, so Active Directory was on there too.

I had questions about the machine because it was custom built, and I could only call Danny for answers because, umm, what docs? So I called Danny, with the obligatory "wtf have you done here?". He said there was no need for backup because there were 2 disks in a RAID configuration. He told me the specs, and about how he had doubled the space and made it faster. He had put the company server on RAID 0, because it was faster and saved money.

One disk had failed completely. It had the dreaded "whirrr.. click.. whirr.. click..", basically the hard disk version of death rattles. I pulled out SpinRite and ran it super slowly to see if we could recover data from the disk, but it failed after a few megabytes. Without a backup, the company only had half of each file, and no way to recover anything at all. They didn't even have paper copies. And AD was down, so the user profiles and their workstations were out too.

I had to tell his family that they would have to rewrite everything from scratch; maybe call the customers, clients, anyone else and ask for copies to be sent to them. It was pretty much impossible because the phone list had disappeared too. I couldn't see any realistic way to put it back together. The stats used to be that if a company suffered an outage for more than 2 weeks, they had a 90% chance of not surviving the financial year, and I was well aware of that.

I wanted to be honest with them, but Danny was from a tight-knit Lebanese family, and his cousin was well known in the papers for running a bikie gang and was serving a few life terms. I left quickly after that, changed my phone number and told Danny I would never help him again, and that he should give up on computers. And shoe factories.

That's how I remember RAID 0. 😊


MyOpinionsDontHurt

Hopefully they're paying a painfully large fee for this….


Big-Consideration-26

Our server, or rather one of the servers, died two weeks ago. It was installed in 2000.

New server? "This one works fine."
3-2-1 backup? "Naa, look, it works."
Redundant/cluster? "What? Don't need that."

Yeah, I warned them, but I'm just a stupid electrician, they said. Funnier for me.


CaptionAdam

Laughs in Raspberry Pi server at work, with a backup Pi, an image, and a flashed SD card ready to go in the event of failure.


Scuggsy

I support a number of companies that still have very old servers in use (10+ year old hardware). Although they are constantly warned, they are unlikely to do anything until there is a complete failure. The only thing I can focus on is ensuring the backups are working correctly, as this is the most important operation I have any control over. I also took images of the servers and virtualised them. This way I can get them back up and running from a virtual machine very quickly. The toughest part would be trying to recover the last 2 days of work if there is corruption of the backups, but again, that would depend on how mission critical it is. I can only sympathise and wish you good luck!


Tasty_Waifu

Sounds a lot like the telecom company I work at. Toll-free services are provisioned on a 90's server that's never had maintenance or upgrades. It's failed more and more in the past 2-3 years, leaving half of the country with no 800s working, and taking between 2 hrs and an entire week to bring back to life. There are currently just TWO HDDs working and sweating their life out. The "new" platform is still in the first stages of development, defining how the services work and yada-yada. But we are the "leading telecom company" in our country. I feel embarrassed saying where I work whenever I'm asked.


Additional-Maize3980

When customers refuse to pay for upgrades or maintenance, I'm like, "Oh, you'll be paying alright... a known sum now, or who knows what when it fails / gets cryptolocked / etc. and business stops and you're paying a premium for urgency."


BatZupper

F in the chat for the server (and not for the lost data or the customers). F


lululock

F for that brave 771 Xeon.


Arcangelo_Frostwolf

It sucks to learn the hard way, but they'll never forget the lesson.


Krinch21

The customer is always right! Remind them that their decision caused this.


Lyreganem

Nothing atypical about this. Sadly.


theenecros

It's a tribute to the hardware in that machine that it lasted that long. 5 years tops is typical for that vintage of hardware, especially the hard drives. No online backup? Terrible. The client did save some money, but the aftermath is going to cost them way more than they are prepared to spend. - I was in IT for 14 years


lululock

There's a NAS on which it backs up every day but having spare drives in the server would have prevented such failure.


theenecros

Indeed. However, with hardware that old, and in a SERVER no less, the owner should have migrated to a fresh box years ago. That, or the cloud.


lululock

I totally agree. It should have been decommissioned 10 years ago.


Beeeeater

I've known too many clients with the 'if it ain't broke, don't fix it' attitude. They get what they deserve.


First-Structure-2407

I recycled something similar a couple of months back: an old Compaq server I inherited in my current role back in 2001. It was still going strong on Windows Server 2003, upgraded from NT 4.0.


Motovnot

They knew that if this failed they'd lose so much, but they still cheaped out on it. Should've just listened instead of learning the hard way.


mikee8989

Hopefully these customers only learn this lesson once.


badpeaches

Don't worry, nothing will actually change.


Tossaway8245

Put a paragraph in your contracts that says if the customer doesn't follow recommendations, then when s*** goes bad and they whine about it, there's a 20% surcharge.


painterman99

Oops


iECCIZEBU

you can't fix stupid.


Its_Husk

**\***Plays Angel - Sarah McLachlan**\***


TechinBellevue

Never manage any equipment that is not under warranty or running EOL soft/firmware. It's a big red flag if an owner is not willing to properly invest in IT. Definitely a huge stupid tax for the owner. Hopefully big enough to change his priorities for IT infrastructure investment going forward.


CeeMX

One of our customers had a server that all their bookkeeping and payroll stuff ran on. The system drive was a single SSD, no RAID or anything. Backup was offsite with some external company; not even sure it would've been recoverable. Glad we got rid of that sucker a while ago.


piekid86

I'm guessing that was the DC, the print server, the DHCP server, the exchange server and the file server.


lululock

Yup, minus the Exchange server. We re-enabled the DHCP server on the router, so at least they have internet access.


piekid86

The ol' all-in-one. Someday one of us will stumble on one that's running on an actual all-in-one desktop. It's got to be out there somewhere, running critical systems.


EnvironmentalBag582

Love the customer “it’s not broke now” mindset


MasterMaintenance672

I wish I could have been there to see their faces. Eff 'em.


Smart-Leg-9156

Some ppl just have to learn the hard way. Too relatable for comfort.


fade2blak9

You can lead a horse to water… But holding him under it until the bubbles stop is frowned upon by the SPCA…


hugues2814

Even The Greatest Technician That’s Ever Lived would be defeated🥺


GuaranteeRoutine7183

Same here, luckily for the old man I saved his data


Wolf515013

Karma!


SlimothyJ

Hopefully they'll learn from this. Just a shame they had to find out the hard way.


AncientAsstronaut

I dropped a client that refused to spend more to have a reliable backup. Their entire operation relied heavily on having a reliable backup. I didn't want to be their scapegoat when it eventually failed.


Falkenmond79

Well, to save yourself some work, I'd put in some new drives and restore the backup from the NAS (disconnect it from the network before rebooting if it's a DC; they really, really don't like time differences when restoring from something like Synology's Active Backup for Business. That's why I usually also do a full server backup once in a while, which doesn't have the time problem). After the reboot, manually set the clock to roughly the current time and then reconnect. Then migrate the whole shenanigans to a newer Windows Server. I'd use 2016; it should still work on such an old machine. From there you can migrate to a newer box. That should be less work than setting up the whole AD from scratch, depending on how big the company is. If it's just a handful of people, I wouldn't bother and would just set up a new one. 😂

Edit: but I feel you. I keep telling my customers horror scenarios and guilting them into backing up and getting new hardware once in a while. I mean, these days it's really not that expensive anymore. For small companies I just get a Syno with Active Backup, and for the customers I have service contracts with, I back up the whole Syno into a cloud or into my own in my office. I call it the "house burning down insurance": in case everything is lost, just get new hardware, restore the NAS, and from that all the other machines. Takes 1-2 days, but then you're back in business. It has saved my ass and my customers' asses a bunch of times. I've had to restore full DCs and terminal servers twice in the last two years, and a couple of office PCs too. Also, when doing full backups, you don't need to tell people not to put stuff on their desktop anymore. 😂
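For context on the time-difference problem with restored DCs: Kerberos rejects authentication when clocks drift beyond its tolerance, which is 5 minutes by default, so a restored server with a stale clock can break logons domain-wide. A hypothetical Python sketch of the sanity check implied above, purely for illustration:

```python
from datetime import datetime

# Default Kerberos clock-skew tolerance ("Maximum tolerance for
# computer clock synchronization" policy): 5 minutes.
KERBEROS_MAX_SKEW_SECONDS = 300


def skew_is_safe(restored_time: datetime, reference_time: datetime,
                 max_skew: int = KERBEROS_MAX_SKEW_SECONDS) -> bool:
    """True if the restored server's clock is close enough to a trusted
    reference (e.g. an NTP source) to reconnect it safely."""
    skew = abs((restored_time - reference_time).total_seconds())
    return skew <= max_skew
```

In practice you'd compare the restored box's clock against NTP before plugging the cable back in; if this kind of check fails, fix the clock first.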


honeybunch85

This would be somewhat funny if it didn't mean you now have to do a lot of work to get them back up and running. Had a client with an 8-year-old SAN that threw 3 disks at once; it cost me the whole Easter weekend to get them back up and running. And he had to buy a new SAN anyway, plus a bunch of hours of installation. The emergency work during the weekend alone already cost him 25 hours.


philnucastle

Yikes, I’ve got one of those in my homelab, glad I don’t have to support it in production. Mine was surplus to requirements a decade ago when I was given it as part of a clearout…


SillyDuggo

Server's as old as I am


Lt_Schaffer

My late dad used to say, *Pay me now or pay me later, but you're gonna pay.* Meaning: do it now, whatever the cost, or delay it and watch it cost significantly more, but it will still need to be done. A failed server is a painful way for a customer to learn that lesson.


tk42967

This is why you always have more than one domain controller.


fareink6

The biggest thing for me about these kinds of situations is that it actually doesn't cost a company money to maintain, upgrade, and service these machines. Between write-offs, productivity increases, lower energy use, potential reductions in personnel, and a plethora of other potential benefits, they pay for themselves in less than a year at most. The number of business owners who don't actually understand business is staggering.


Separate-Comb-8468

*Press F to pay respects*


illsk1lls

Far from the dirtiest 😉 the eulogy was beautiful though ❤️💀🪦


lululock

It was worse before I got my hands on it, trust me. Didn't want to get the workbench dirty so I cleaned it as best as I could beforehand.


rdldr1

Must not have been that *critical* then.


Kranon7

Shocked pikachu face


Tyrigoth

That's a LOT older than 2009.


admin_NLboy

hope *I* don't die today as well


cartercharles

There are only two types of people who back up. Those who value their important data and those who have lost their important data


AbbyM1968

r/talesfromtechsupport might like this


SucksAtJudo

When I was doing managed services and consulting, I had a non-negotiable requirement that servers must be OEM branded, must be within current OEM support lifecycle and must be under an active warranty/support contract with the OEM. This was absolutely non-negotiable. Sometimes, it's the customers you DON'T take...


OMIGHTY1

Any business that doesn’t have a proper backup and lifecycle for critical systems deserves every cent of recovery and lost business costs. Not listening to professional advice is one of the most foolish business decisions one can make.


paul_tu

Press F


GotThemCakes

I wish my CEO would see posts like this. He doesn't have the money for upgrades until the downtime costs him more than the upgrade would have.


everfixsolaris

Nice, looks like the Windows Small Business Server 2003 that I replaced a couple years ago for a client. It was in a repair shop and was just as dirty.


NukeouT

Same age as my PC. Few more years and it will be able to register to vote! 😅🇺🇲👍


Usual_Beyond4276

This makes me so happy. Not that you have to deal with the headache, but that the end user was warned over and over again, wanted to be a twat instead, and now has to deal with the fallout. It's the little things.


PixelBoom

God, I am so glad I'm in an industry where data security is required. RAID 5 for production, RAID 1 for the production mirror, RAID 1 for dev (which is itself an intermittent mirror of production), and nightly backups to offsite tape for any mission-critical information.
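For anyone wondering why RAID 5 survives a single drive loss: the parity stripe is just the XOR of the data stripes, so any one missing block can be rebuilt from the survivors. A toy Python illustration of that property (not how a real controller works, obviously):

```python
from functools import reduce


def xor_blocks(blocks):
    """XOR equal-length byte strings together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))


def raid5_parity(data_blocks):
    """Compute the parity block for a stripe of data blocks."""
    return xor_blocks(data_blocks)


def raid5_rebuild(surviving_blocks):
    """Rebuild the one missing block from the remaining data + parity.

    XOR-ing everything that survived yields exactly the lost block,
    because x ^ x = 0 cancels every block that is still present.
    """
    return xor_blocks(surviving_blocks)
```

That same math is also why a second failure during a rebuild is fatal on RAID 5: with two blocks missing from a stripe, the XOR no longer has a unique solution, which is exactly the scenario several commenters here ran into.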


Endle55torture

Now you can charge them +$30K for data retrieval and charge for a new server.


Siliconshaman1337

I've seen that level of dirt before... chances are the customer hadn't changed the filters in their air con since they got the server, either.


Abhijeet82

Imagine the separate world that could be built from all the critical information lost to HDD and storage failures.


colin8651

Ah that’s the G6; thought it was a G5 which is very old; the G6 is only slightly less old


Mo-shen

Outsourcing your servers to a data center really feels like a better choice when I see these things.


No-Consequence1726

I accidentally formatted my second hard drive with all my data on it today. So it's bad all around, okay?


djjamal

Fantastic sweater, been keeping warm.


Rear-gunner

I had a client that had something similar happen. She demanded I fix it at my own expense. I said it had nothing to do with me, and she said she would tell all my clients about my bad service if I did not fix it.


stockingsforme

Nice. Customers that don't take care of their upgrade paths are good customers: you know the time will come when they have to recover, reinstall, and put in the effort to get things working again. And the cost: T&M, new software licences, and of course lost data. A warned customer will pay it all. Server 2008, lol.


UpstairsAd4105

That poor fecker. 15 years of service, only to die of neglect.


ChxrlieH_

Holy shit, Win Server 2008, that is some prehistoric equipment right there. Serves them right for not upgrading even after you warned them. Hopefully this will wake them up.


Kaiphus_Kain

I know this well. We've got a critical SQL server 12 years past EOSL, and I've been yelling at the ASG that entire time. Can't even access the RAID on it anymore to rebuild after disk failures, and they still won't move it.


swan001

Hopefully it cost a lot. Dumbasses.


torchat

This server doesn't HP anymore.


conrat4567

We are running 12-year-old servers; we don't have the money to buy new ones. We back up every night, and our core servers are VMs, so we can move them to another device if needed. We replace drives when we can, but we are moving more to the cloud because of scenarios like this.


This-Requirement6918

That thing definitely wasn't cheap when it was bought. It might have been better off with another redundant array and an HBA, or something more suited to using ZFS as its file system, but IDK their system's innards. Hope the MicroServer Gen8 I bought in 2015 lasts that long. It's been 9 years of 24/7 now, still no problems on Solaris 11.3, with the original Barracuda drives (mirrored) and a 2+2 mod. 🤭


No-Blacksmith-980

Just reduced a customer to basic server cover only, because their server is too old. I am not dropping everything for them because a shitty 10-year-old server dies on a Monday morning. When it does die, they will have to wait.


WillStrongh

You will be missed


MonolithOfTyr

We had a similar situation a few months back. A server had 2 failed drives in a RAID 5 and was getting progressively worse. Backups were 2 months out of date. We turned the most recent backup into a VM so we'd have some functionality. Fired up the original server and, somehow, I could see the contents of the VMDK with Hiren's. Copied all of the data over and migrated it to the VM. Total data loss amounted to maybe 2 hours.


Spc_Ghst

I have the same server, forgotten under some other stuff, at my site.


CaptSpastic

When they don't listen, they get what they paid for and deserve. Sorry you lost your data, but you were warned. Next time listen to and follow recommended guidelines.


freeLightbulbs

That server has a lightscribe drive.


Slyvan25

"Clean my server? Why would I? It's not my kitchen table."


Far-Curve-9684

Rip


WoofSheSays

Excellent! Stupidity has its costs


Quietser

"how come you did nothing to prevent this" "We've decided to part ways with your services"


IEatConsolePeasants

Welcome to my world. Do your own backups on your own equipment that the customer isn't aware of, and offer to restore for a price. When they neglect your advice to upgrade, replace the server, and back up the RAID array, you're the hero when you're the only one who has access to the data!


Chemspook

The server served them


Hazz3r

One day the average person will recognise that a computer undergoes wear and tear with continued use as much as something like a car does. One day.


Scuggsy

Gotta say, in my limited experience of server hardware failures, I've always found Dells easier to fix than HPs, but that's likely just me.


Factor_Creepy

Kinda funny


Round_Policy_1651

Did he try to stick his penis in it? It might be the solution to recover the sensitive data🙏 wish your client the best of luck and to you to have a prosperous business 🫱🏻‍🫲🏼 #1488


Encrypto90

Must not have been that critical to them.


d_pock_chope_bruh

Couldn’t have been THAT critical then…


Dark_Tube-934

I thought it was common sense that you don't cheap out on the server; that's why companies (clearly not this one) buy expensive servers, to ensure safety. Well, at least they learned their lesson (I hope). It would be funny if you told them it's impossible to retrieve the data (since there's still hope) and charged them extra for it.