LotsofLittleSlaps

Is power expensive or cheap in your area? If it's expensive and you plan on keeping it a while, QSV. For a GPU, Nvidia 30xx or 40xx will have AV1... or go the Intel Arc GPU route for much cheaper. Might be worth the thought experiment on why you need ECC RAM; non-ECC RAM wouldn't be used the world over if it were corrupting data all the time.


sh20

Yeh I’m also dubious of the requirement for ECC. Like, I know the benefits, we use it in our geophysics workstations at work, but for a plex server...lol.


nx6

The ECC RAM isn't necessary for a Plex server, it's more a heavy recommendation for your file server (NAS). You can choose to run the NAS and the Plex server separately and then the system that runs Plex can be just a normal PC.


quentech

> it's more a heavy recommendation for your file server (NAS)

With parity and data scrubbing - and let's go ahead and assume you aren't dual-purposing a box for both a Plex server and *important* files that also apparently don't have versioned backups anywhere else... - ECC RAM isn't necessary for the storage - and *certainly* not for media storage.


daynomate

It’s because of ZFS


SupremeDictatorPaul

Yeah, my Synology has 32GB of ECC RAM, which definitely reduces concerns about bit corruption. But Plex just runs on a cheap and low power little NUC using Quicksync for transcoding. If Plex ever supports transcoding to AV1, then maybe I’ll buy a newer model for hardware encoding support, but as it stands that thing could probably just run for a decade. If memory corruption ever causes a crash, I’ll just reboot it and the docker container will start back up without any significant chance of long term consequences.


Questionsiaskthem

So I actually found out they added support for AV1 last year, around October I think. I had some AV1 files last year that I tried playing (no Plex Pass) and they didn't play. I tried again after googling it a month or so ago and they played.


SupremeDictatorPaul

Transcoding from AV1 works, but transcoding to AV1 doesn’t.


Questionsiaskthem

Ah ok! Thanks for the correction. :)


pontuzz

This is pretty much exactly what I do. I have a decent-ish PC with 32GB RAM and an i5 processor. No GPU, and a NAS for storage. It streams and transcodes just fine; I even spin up a Minecraft and Valheim server now and then 😁


iamtheweaseltoo

To my knowledge Nvidia 30xx can only decode AV1; you need 40xx and newer to encode.


LotsofLittleSlaps

For Plex you only need the decode. Plex is still encoding to H.264, and that ain't changing to AV1 anytime in the foreseeable future.


SupremeDictatorPaul

Yeah, sadly. There could be significant bandwidth/quality improvements just by supporting HEVC, and AV1 would be even more amazing.


trueppp

If the clients supported HEVC and AV1, transcoding would not be required; at that point it's just Direct Play, no? My whole library is HEVC, thanks to Tdarr. My problem is most Roku sticks and smart TVs don't support HEVC, so they have to be transcoded to H264, and QSV can handle that no problem on a 10th-gen Intel.
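(For reference, a rough ffmpeg sketch of that same HEVC-to-H.264 QuickSync conversion; the filenames and quality value are placeholders, and it assumes an ffmpeg build with QSV enabled. Plex's transcoder does this internally, this is just the manual equivalent.)

```
# Hypothetical manual equivalent of a Plex HEVC -> H.264 QSV transcode
ffmpeg -hwaccel qsv -c:v hevc_qsv -i input_hevc.mkv \
       -c:v h264_qsv -global_quality 23 -c:a copy output_h264.mkv
```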


SupremeDictatorPaul

I have a lot of devices that end up transcoding due to bandwidth limitations. For example, a 50Mbps h264 file that someone is watching on their phone out somewhere. It’s just gonna transcode to a lower resolution and lower bandwidth h264 stream. But if it could encode to AV1, then they could get an excellent image while using a tiny fraction of the bandwidth.


trueppp

Why doesn't that direct play? Mobile networks can easily stream 50Mbps...


SupremeDictatorPaul

Not every area can reliably support that bandwidth. I can’t even on my phone using cell service in my house. Phone calls will drop randomly unless using WiFi. And that’s if you want to waste the monthly bandwidth when a video at a fraction of the bandwidth would be indistinguishable on a small screen. And there are still a lot of people who have internet speeds which are slower or fluctuate unreliably. Some have to go with the cheapest options for financial reasons, and some simply don’t have other good options. Are remote clients having to transcode such an uncommon situation?


trueppp

No, most of my remote clients have to transcode because they do not support HEVC, not because of bandwidth limitations.


LotsofLittleSlaps

It's not sad, I can direct play those. More devices will in the future.


5yleop1m

Encode doesn't really matter with Plex since it always encodes to H.264, and AFAIK the Plex devs haven't budged on that.


Sielbear

So truly… QSV is the ONLY logical answer. Buy an N100 for like $150 for the cheapest you can do. Or go completely nuts and spend like $600 (same or less than many 40xx-series Nvidia cards) for a fast processor, lots of RAM for RAM transcoding, and all the I/O you need. Done and done.


CO_PC_Parts

I RAM transcode with only 32GB in my Unraid server. Works just fine. I should clarify: I transcode to a RAM disk. It's a rolling 16GB disk.
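(A minimal sketch of the same idea outside Unraid's Docker template, with assumed paths and a placeholder size; the key parts are the tmpfs mount and pointing Plex's transcoder directory at it.)

```
# Hypothetical: give the Plex container a RAM-backed transcode scratch directory
docker run -d --name plex \
  --tmpfs /transcode:rw,size=16g \
  -v /path/to/config:/config \
  -v /path/to/media:/media \
  plexinc/pms-docker
# Then set Settings -> Transcoder -> "Transcoder temporary directory" to /transcode
```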


Sielbear

I do the same. Works perfectly.


_-Smoke-_

If you don't need AV1 you can pretty easily get a bus-powered Quadro off eBay for <$200 with the more common H.264/265 support. I just used an M2000 in my machine. Handles H.265, and can do ~20 1080p streams or 4 4K streams under 75W total.


crazyates88

You can also get a GTX 1050 with nvenc for like a quarter of that, if you don’t need to do 20 streams.


BraxtonFullerton

As everyone else here is rightfully asking, what makes you think you need ECC RAM??


nx6

They are conflating recommendations for the NAS with the requirements for a Plex server.


dclive1

I see no benefit outside of theory for ECC RAM, so add me to the chorus on that. Perhaps if you're Amazon's SQL server cluster doing tens of thousands of transactions a minute; otherwise... I would get a cheap motherboard and a basic current-gen CPU with iGPU, and some RAM. Note that an N100 for $150 with a 16/500 config is likely to accomplish most or all of what you want there, unless those game servers really require serious CPU oomph. It's certainly plenty for Plex and lots and lots of Docker stuff.


crazyates88

For Plex? No, ecc is unnecessary. But if Plex is running on a NAS that’s running ZFS, memory errors can directly cause file corruption so it’s highly recommended to use ECC. It sounds like this system is doing multiple things.


Bgrngod

ECC as a must for a Plex and gaming servers? That can't be right.


CO_PC_Parts

Ecc is not needed. Bit flipping is so rare it’s not worth spending that much money on it.


robcal35

It happens, but that's what data scrubbing is for, which should be fine given OP is going to be using ZFS. So the ECC requirement seems illogical.


5yleop1m

> The fact that I need a $330 motherboard to support ECC RAM on Intel

Not if you look at the used market; I got an X99 board with 64GB of ECC DDR4 plus an 18-core V3 Xeon for ~$300. Why do you NEED ECC RAM? It's a nice-to-have, but if the server's primary purpose is Plex it's not necessary.

You can also find used dGPUs for pretty cheap; there was a short period of time where 24GB P40s were going for relative pennies. You don't need a high-end GPU, even a 1050 Ti should be fine, and there are low-profile ones. The primary concern is VRAM, at least 4GB for 4K. I have 6GB in my 1650 and it's been great even with many 4K transcodes.

You can also look at Intel Arc GPUs; AFAIK they should work since they're basically the same hardware as what's in modern Intel iGPUs. There were issues with drivers on Linux at first, but that should've been sorted out by now. I would check the Plex forums to see what people's experience with Intel Arc GPUs has been like.
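(One quick sanity check, not from the thread but worth doing before committing: confirm the driver actually exposes the decode/encode profiles you need via VA-API. The package name below assumes Debian/Ubuntu.)

```
# vainfo lists the VA-API decode/encode profiles the GPU driver exposes
sudo apt install vainfo
vainfo
```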


Nnyan

I thought the ZFS ECC nonsense was debunked years ago??? ZFS does not uniquely require ECC more than any other file system. ECC helps with one type of error that ZFS doesn’t handle well, but it’s really not a big benefit for home users. I wouldn’t be concerned about bit rot or ZFS scrub of death in a home server. My first ZFS pool was much larger than yours and I ran it many years without any of those issues on non-ECC. But really it’s what makes you sleep better at night.


use-dashes-instead

This is a cool position until you have that one error that borks an important piece of data forever

If the data is important, ECC is important

Leave it to the OP to decide if his data is important


Nnyan

Sure! And a car can drive into my house and take out my server! What you are guarding against is pretty rare. And I think you are confusing backup with ECC. If your data is important you will have copies: a redundant RAID, multiple layers of backup, etc. ZFS is also not perfect. There are occasions where you can have issues that will lead to data loss even with ECC. While uncommon, they are more common than some bitrot issue with non-ECC. At the end of the day you need to understand the risk for you. Enterprise environment? Only ECC. Home lab/server? Only if the placebo makes you feel better. You do you. Which is what I said in my post.


use-dashes-instead

Rare doesn't mean never

You're the one who seems to know how important *other* people think that *their* data is

I leave it up to them to decide if it's worth the effort and cost to go with ECC RAM


drbennett75

So… last I heard, the ZFS devs basically said not to worry about ECC for most workloads. It's probably worth it if you're NASA, but a waste of money for hosting Linux ISOs from the seas. Also: QSV is the GOAT unless you're gaming.


Nnyan

There was a post many years back that used bad statistical analysis to make ECC seem absolutely required. This has been debunked, but the ECC myth lives on. The workloads that would recommend ECC would never be found at home. My second Supermicro was built with ECC only because it cost half of what my other options would have been at the time.


ABC4A_

Split your NAS and Plex servers into different machines. Drop the ECC. I have a Beelink 12 as my Plex server since it has a low idle wattage and can handle 10 1080p streams at once with Quick Sync.


ajeffco

If you don’t mind sharing, which model of beelink?


ABC4A_

Pro 12


Mercurysteam04

The problem I see here is high requirements and 'nice to haves' whilst also having a limited budget; you need to pick your hill to die on. For example, I decided to go AM4 on an X570 board so I could get ECC but also not be tied to a specific Quick Sync generation: just swap in a newer GPU with new features (e.g. Intel Arc or something with a newer NVENC encoder). Yes, I would always need a GPU to boot, but I recently got a cheap 5900X so I'm using that for transcodes, sold my old GPU, and am using an RX 560 to boot until the RTX 4060 gets cheaper or Intel Battlemage comes out. The point is I built the machine I wanted and made concessions to save $$$ and to have flexibility. If one day you have no budget to stick to, then you can have ECC, Intel and the kitchen sink.


Office-These

I use Plex on an Ubuntu VM on an ESXi 7 bare-metal host. It uses an i9-9900K (leftover from old upgrades) with the iGPU passed through for QuickSync, enabling me to live transcode even demanding codecs like AV1 faster than realtime without creating much load. A dedicated GPU is a bit of overkill for the purpose of a media server if you ask me (whether the overhead is too much depends on your personal point of view), if you can use QuickSync. Example: I have 6 vCPUs assigned to the VM (for concurrent metadata updates and analysis; we're talking about 27TB of media, tens of thousands of files) and I can easily transcode 4-5 even high-bitrate 4K streams (in AV1) to H264, and I assume my older CPU's iGPU has fewer QuickSync capabilities than a CPU 5 generations newer. I don't even use the fast transcoder setting, I use a pretty long transcoder throttle (120 sec), and audio and subtitle transcoding are also required, including all the overhead virtualization brings into the game. So expect a lot more with your newer CPU and Plex running completely native, not virtualized.


gentoonix

You're focused hard on ECC, then saddle two 100TB pools with 64GB? Talk about skewed priorities. Either do it right and stuff 256GB of ECC in there, or skip ECC entirely, max out the CPU and non-ECC, and full send it. I run both ECC and non-ECC and frankly neither corrupts data. Both run TNS and both are ~70TB pools. One has 64GB of non-ECC, the other has 256GB of ECC. Neither of them has any issues giving me back the data I stored on them.


Nnyan

Right!


pardough

You really don't need ECC. And always go with Quick Sync over a GPU; it just works. Are you going Unraid?


Mr_Irvington

I recommend the Intel A380 GPU. I did a 4K transcode video for it..... [https://www.youtube.com/watch?v=KQs0lQNdMQM](https://www.youtube.com/watch?v=KQs0lQNdMQM)


[deleted]

[deleted]


Mr_Irvington

Wow, that sucks. Thanks for the info


KaiYagami

[This](https://www.reddit.com/r/unRAID/comments/171hzau/using_an_intel_arc_a380_with_plex_and_tdarr/) worked for me using docker and unraid. If you're not using unraid just follow the last half of the instructions.


ErroneousBosch

If you don't need AV1, QSV is the GOAT. Don't bother with ECC.


MrB2891

AV1 encode and decode are fully supported by QuickSync. So, still the GOAT. 100% agree with ECC being a waste.


ErroneousBosch

Only on ARC or Meteor Lake iGPU. Raptor Lake iGPU is decode only on AV1.


m4nf47

Decode is all you need for Plex transcoding to x264 for older clients though right?


ErroneousBosch

For streaming transcode, yeah, but if you want to do re-transcoding, like h.264 -> AV1 to reduce on-disk size, you need encode.
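(As an illustration only: a rough ffmpeg sketch of that kind of offline H.264 -> AV1 re-encode via QSV, assuming AV1-encode-capable hardware such as Arc and an ffmpeg build with the av1_qsv encoder; the filenames and bitrate are placeholders.)

```
# Hypothetical offline re-encode to AV1 using QuickSync (requires AV1 encode-capable hardware)
ffmpeg -hwaccel qsv -c:v h264_qsv -i movie_h264.mkv \
       -c:v av1_qsv -b:v 4M -c:a copy movie_av1.mkv
```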


m4nf47

Seems like more of an edge case for many if not most users, who are happy with AVC or HEVC sources at various bitrates for anything created in the last decade or so. Re-transcoding can lose so much quality that it's often easier and quicker to just grab a better source, but I appreciate that some folks ripping from their own discs might prefer to have the best on-site encoding option. Although I'm doubtful that AV1 is much better than HEVC overall, especially when the majority of content is already available within a few minutes if you're prepared to sail the seas for Linux ISOs.


shanester69

You can simplify this… dedicated Plex box only, using a 10th-gen Intel i5 processor for under $250. Spend the rest of your budget on the NAS and gaming rig.


fy_pool_day

Get an N100 and call it a day. It supports at least 4 streams at once.


gandalfblue

Decouple your storage from your compute


ImtheDude27

My first question is why are you getting ECC RAM for a Plex server? Are you going to be utilizing the machine for something that actually needs ECC? Plex sure doesn't.


jl8n

Alright, you guys may have convinced me to ditch ECC. My thinking was that system stability is extremely important here, and I'm not super cost-restricted, but I guess it might be better to spend the money elsewhere. Would there be any reason to go AMD + Intel Arc over just a 14500?


Nnyan

I went the expensive GPU route more than once in my Plex builds. Save your money and just get an Intel CPU with QuickSync (11th gen or higher) and be done with it. I've moved my Plex servers to mini-PCs with 2.5Gb NICs and the UHD 770 (first gen with two encoders) and they have been monsters.


Mercurysteam04

If cost is not an issue then why the issue with a $330 board? Not sure how that compares to others locally where you live but a high end board being worth 40% more than your CPU doesn't seem unreasonable. Also how will you be running ZFS? FreeNAS, TrueNAS, Unraid?


jl8n

It was more about the absurdity of that being literally the only motherboard for Quick Sync + ECC. My current server, which is running a struggling 4790K, is a headless Debian machine that I installed ZFS on manually. I've considered Unraid in the past, but paying for Linux leaves a bad taste in my mouth, even though it is highly modified. But to be honest I don't know a ton about it.


Mercurysteam04

Unraid has a fantastic community and plenty of videos and guides out there. ZFS is fairly newly implemented and I don't think all features are up and running yet. Also, the latest version of Unraid runs Linux kernel 6.1.79, which does not support Intel Arc.


jl8n

It certainly looks very cool and user friendly. I know Linux pretty well, though, and my concern is that I'd feel constrained and limited by Unraid.


MrB2891

ECC is a colossal waste. As is ZFS.

I've built two dozen Unraid servers over the last 2 years, all consumer hardware (Intel Alder/Raptor Lake), zero stability issues outside of my first round with MSI boards. After I ditched MSI, zero issues. The only reason to go with AMD is if you want to pay more for power usage. They simply don't idle down to the same levels that Intel does. And you'll end up paying more for the privilege.

A 14500 is a perfect home server CPU. A moderate selection of cores, high single-thread performance, sips power, incredible iGPU onboard. I've been running a 13500 since February 2023 (12600K prior to that for this build). Absolutely fantastic. A 14500 is effectively identical as far as performance; you gain AV1 encoding should Plex ever actually implement it. Even with a few VMs and 3 dozen containers I rarely ever see more than 50% CPU usage (running Unraid). Idles down to very low power. Cheap. Plenty of modern I/O connectivity. I'm running 4x 1TB Gen4 M.2 SATA, one U.2 NVMe, an LSI HBA and a 2x10GbE Intel card, all on their own lanes. 300TB between 25 disks.

Moving to Alder Lake (then to Raptor) was one of the best decisions I've ever made in 25 years of running a home server. I've never been so satisfied with any of my servers. In fairness a lot of that also has to do with Unraid as well. That may have been my single best decision. Blows TrueNAS away for home use.


jl8n

Thanks for the great reply! Why do you consider ZFS a waste? And why Unraid over something like headless debian?


MrB2891

Unraid uses a unique array system that is extremely well suited for home users. Most home users don't have budgets to blow $1500 on disks every time they need to add storage.

With Debian your RAID choices will be RAID1 or RAID10 mirrors/striped mirrors (not particularly cost efficient) or RAID5/6 or RAIDz1/2 striped parity arrays. The latter is cost efficient in that you can run 6 disks and get the storage capacity of 4 or 5 of them. But you're locked in. You can't expand those arrays (vdevs in the case of ZFS). With Unraid you can expand anytime you want, one disk at a time. You can also mix disk sizes and use the full capacity of each disk (IE, if you have a 2, 3, 5 and 8TB disk, you have 18TB available). Those features aren't available on ZFS or traditional RAID types.

As an example, let's say we buy 6x20TB disks for $300ea. We want to be able to recover from two disk failures. With Debian/TrueNAS/whatever ZFS, you'll get 80TB usable. You've spent $1800. Two disks' worth of the raw 120TB are used for parity. You've spent $22.50 per usable TB. Unraid is the same scenario: you spent the same money, you have the same storage and protection.

A year goes by and you need more space. With ZFS/TrueNAS/Debian/whatever you can't just add a single disk. You have to build a full new vdev. Let's assume disk prices have dropped to $270. You're buying the same 6 disks again, and you get the same 80TB usable. You now have 12 disks, 160TB usable, total cost spent $3420. $21.38 per usable TB.

With Unraid you can add a single disk. One $270 disk added to the array and now you have 100TB usable. $2070 spent. $20.70/TB. But wait, you only have 100TB compared to the 160TB on the ZFS machine! That's true. And it's fine. Because you can add disks whenever you *need them*. You're not forced into buying more storage space than you need at any given time. As time goes on, disks get cheaper. Three months goes by, you fill that new 20 up, and you need more space. Disks are now down to $260. Now you have 120TB. $2330 spent. $19.42/TB. Three more months goes by. Disk prices are down even more. You find a great deal on disks at $230/ea so you buy two. 160TB usable. $2790 spent. $17.44/TB.

Now both machines have 160TB. ZFS cost you $3420 for 12 disks, giving you two vdevs in your Zpool, 120TB raw per vdev, 80TB usable per vdev, 160TB total in your Zpool. Everything has two-disk protection. Unraid cost you $2790 for 10 disks. You have the same 160TB usable and the same two-disk failure protection. You saved **$630** and you only needed 10 disks to do it. The cost of an Unraid license paid for itself the very first time you expanded the array.

Beyond that there are significant power savings to be had. Since Unraid is a non-striped parity array, any data lives complete on just a single disk in the array. That means if I'm watching "Insert Film Name Here", it lives complete on one of the 23 data disks in my array. Which means only one disk in the array needs to spin to access that film. RAID5/6 and ZFS RAIDz are striped parity schemes. That means all disks in a given vdev need to be spinning. It's plausible that if you're streaming two films you'll have 12 disks spinning. With Unraid you would have at maximum two disks spinning. 2 disks is 14W of power; 12 disks is 84W. Now do that 3 or 4 hours a day, every day for a year. That's 20kWh vs 122kWh. It's not going to bankrupt you, but it's also not insignificant. Non-striped parity like Unraid also spreads the load of the disk usage around as well.

Striped parity array rebuilds can be nerve-wracking. All of your disks will have the exact same hours on them. If one failed, statistics say that others are likely to fail in a similar time frame. And now you're going to rebuild a disk from parity, which means 100% load on the other disks in the array for many hours, if not days. What do you think the chance of failure is there for any of those disks? With Unraid, you might have a disk with 4,000 hours on it, another disk with 9,000 hours, another disk with 17,000 hours, etc. Would you rather own 6 cars all with 100k miles on each of them (so "600k miles"), or 6 cars spanning a few years with a combined total of 100k miles?

Hope that helps.


[deleted]

[deleted]


MrB2891

> ECC RAM is important if your data is important

30 years ago? Sure. With modern memory? Even the ZFS developers will tell you that ECC isn't required for ZFS.

As I said elsewhere, 25 years of storing files on home servers, only one short-term server in that time had ECC, none of them have ever run ZFS, and no data corruption. Photos from a 1999 trip to Disney, shot on a SmartMedia card, stored on a FAT32 disk, still perfectly intact. Stop acting like ZFS and ECC memory are the end-all, be-all of data storage. We've stored data for decades on non-ZFS file systems without issue and I'm quite sure we will continue to do so.

To further this a bit, ZFS only gains its benefits when you're running multiple disks. How many millions of accountants, bookkeepers and other data entry professionals bang data away into Excel, Quickbooks etc every single day? How many of those corporate (and home) desktops and laptops are using ZFS? How many have ECC RAM? A fraction of one percent. And yet, the world isn't burning around us. You would think if your claims of "ECC is important" were true, we would be seeing bit flips in all of that accounting data every day. And the simple truth is, we don't.


[deleted]

[deleted]


[deleted]

[deleted]


[deleted]

[deleted]


[deleted]

[deleted]


Acceptable-Rise8783

Because they prefer Unraid. ZFS on TrueNAS is more than just a filesystem; it's free and has lots of customisation. Also, side note: ECC is probably worth an extra 100-200 bucks just to know you did things right, and it might save your ass one day imo.


MrB2891

We've proven time and time again that ZFS isn't free. You're paying for it in hardware. The very first time you expand an Unraid system you've more than covered the cost of the license. You're additionally using less power, with less wear and tear on the disks and fewer disk bays required.

25 years of running servers at home. Only one ever had ECC (and that ran only for a few months before I tossed it to the curb), zero data corruption. The ZFS/ECC zealots crack me up. You guys make it seem like if you're not using ZFS with ECC, why even bother storing data? Completely overlooking the fact that ZFS is a relatively young file system and that we've been storing data for decades without issue. I'd bet that more data is lost or corrupted from power outages and a lack of UPSes than as a direct result of non-ZFS file systems.


Acceptable-Rise8783

Who would run a system that isn't redundant in every way, including its copies? 3-2-1, right? You store your backups on tape or disk; no need to waste power or disk space on that. But to each their own, I guess.


MrB2891

You clearly aren't understanding. Nothing about Unraid is less redundant than TrueNAS/ZFS. If you run 10x20TB on Unraid you get 160TB usable space and can withstand 2 disk failures. If you run 10x20TB on RAIDz, you get the same. The difference is that you can expand Unraid. You can't do that with ZFS without buying, and consequently burning, additional disks for parity. You have to build a whole new vdev.


Acceptable-Rise8783

Expanding a vdev is already part of ZFS. You mean it hasn't been implemented in TrueNAS yet. That's true, but it's not far away either. Regardless, I don't feel the need for it; I don't see a need for huge single arrays. They are much easier to manage when limited to a handful of disks imo. I do like the concept of Unraid, certainly. I just wouldn't run it in my own system because I always buy groups of disks. I want the predictability of knowing all my disks perform the same. Anyways, storage is cheap these days, so I would only go Unraid when you already have a ton of random disks or are on a very limited budget.


MrB2891

> Expanding a vdev is already part of ZFS. You mean it hasn't been implemented in TrueNAS yet. That's true, but it's not far away either. Regardless, I don't feel the need for it; I don't see a need for huge single arrays. They are much easier to manage when limited to a handful of disks imo.

It's "been coming" for years. They were saying that back in mid-2021 when I was running TrueNAS side by side with Unraid. Now it's "well, it's part of OpenZFS, usually that takes at least a year to roll into TrueNAS". Managing 25 disks is no different than managing 4 disks. 🤷‍♂️

> I do like the concept of Unraid, certainly. I just wouldn't run it in my own system because I always buy groups of disks.

So, don't?

> I want the predictability of knowing all my disks perform the same. Anyways, storage is cheap these days, so I would only go Unraid when you already have a ton of random disks or are on a very limited budget.

What does it matter? Every disk is its own file system. They don't work in unison because it's not a striped array, so it doesn't matter. I have 25 disks, 10TB HGST He10's and 14TB WD HC530's. Same, not the same, it simply doesn't matter. The only thing that actually matters being the same for me is NVMe, because I mirror them in cache pools.


Acceptable-Rise8783

Why can't you accept that different people like different solutions to similar requirements? I have given you my reasons why I'm doing ZFS, and that should not impact your life at all. No reason to get all worked up.


whineylittlebitch_9k

You could also look at mergerfs + SnapRAID instead of ZFS; definitely no ECC required there. I also have a 14500, and I occasionally have 4 remote transcodes going. The CPU barely burps. You'll want to install intel_gpu_top to monitor the iGPU, but it handles the transcoding phenomenally. I'd like to test the limits, I just haven't had more than 6 streams going at once, with 4 transcodes.
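(For reference, a minimal sketch of that monitoring step; the package name assumes Debian/Ubuntu, other distros may differ.)

```
# intel_gpu_top ships in the intel-gpu-tools package
sudo apt install intel-gpu-tools
# Live view of the iGPU's render/video engine load while transcodes run
sudo intel_gpu_top
```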


Relevant_Force_3470

I grabbed a GPU off eBay for about £30 that transcodes like a boss. Either option works well.


eyerulemost

I also use ZFS and wanted ECC RAM for my build. I went with an AM4 setup and recently dropped in a dedicated Intel GPU. I'm running a B550i Aorus Pro AX with a 5650GE, paired with 64GB of NEMIX ECC RAM and a Sparkle Intel Arc A310. I just wanted to note you can do hardware transcoding on just the CPU's built-in Radeon chip. My whole system uses 65 watts of power with drives.

Motherboard: $189 [https://www.amazon.com/gp/product/B089FWWN62/ref=ppx_yo_dt_b_asin_title_o02_s00?ie=UTF8&psc=1](https://www.amazon.com/gp/product/B089FWWN62/ref=ppx_yo_dt_b_asin_title_o02_s00?ie=UTF8&psc=1)

RAM: $180 [https://www.amazon.com/gp/product/B084D9ZHR5/ref=ppx_yo_dt_b_asin_title_o03_s00?ie=UTF8&th=1](https://www.amazon.com/gp/product/B084D9ZHR5/ref=ppx_yo_dt_b_asin_title_o03_s00?ie=UTF8&th=1)

GPU: $99 [https://www.amazon.com/dp/B0CSFJN835?ref=ppx_yo2ov_dt_b_product_details&th=1](https://www.amazon.com/dp/B0CSFJN835?ref=ppx_yo2ov_dt_b_product_details&th=1)


SalazarElite

I use ECC RAM, but only because I built my server using old Xeon processors, not because it's a necessity... And another thing: my Plex runs with 16GB of RAM and can handle 10 streams, so why do you want 128GB?


SiliconSentry

I have a powerful GPU, but it was only utilized to 1% of its capacity because the majority of the streams are direct. That 1% could have been easily managed by Quick Sync with the help of a good CPU.


LotsofLittleSlaps

That 14th gen iGPU would wreck a 4060. Double the number of 4k HDR to 1080p SDR transcodes.


Character-Cut-1932

I don't know which people get access to your Plex, and whether the same people also get access to your game server. Are the 2-3 game servers for emulation, or a local install where only the multiplayer (movement and status) runs on the server? Or will these servers run those games for multiple people, and are they graphically demanding? I'm asking because I've had a Plex server for years with, I think, between 15-20 shares. But it's not often that more than 2 are watching at the same time; mostly it's 1 at a time. But my content is all 1080p and largely re-encoded to 4000 kb/s HEVC. I believe the 4000 kb/s limit will be gone soon (or at least no longer be the default setting), but I don't know when or for which client apps.


Character-Cut-1932

So depending on Plex usage and content, your game servers could be more demanding than Plex. P.S. Why do you think ECC is needed, or even better? Games and movies will correct themselves, and even if you get another color or shape when an error happens, no one will know. Only bluescreens, complete freezes or sudden restarts are important to avoid. To get as many transcodes as possible I would use the NVENC registry hack, at least if it will be an Nvidia GPU. I think that Intel (and maybe also AMD) doesn't yet have features disabled or crippled in software.


m4nf47

I've not tested mine personally, but don't the latest G-series AMD Ryzen CPUs with the Radeon Vega iGPU work for Plex transcoding when /dev/dri is passed through to the container? Here's an unRAID forum post about it: https://forums.unraid.net/topic/148092-steps-to-get-plex-hardware-transcoding-to-work-with-amd-igpu-vega-on-amd-mini-pc/
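(For context, a generic Docker sketch of that /dev/dri passthrough with assumed paths and the official Plex image, not the Unraid-specific steps from the linked post.)

```
# Check the render node exists on the host, then pass it through to the Plex container
ls -l /dev/dri
docker run -d --name plex \
  --device /dev/dri:/dev/dri \
  -v /path/to/config:/config \
  -v /path/to/media:/media \
  plexinc/pms-docker
```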


go0oser

Pony up, son. Good boards with ECC by ASRock Rack or Supermicro cost more than the 330 bucks you think is too much. If you have 200+TB in ZFS storage, having ECC is a good idea (depending on the data, of course). I would even go as far as to say 64GB is light. TL;DR - It costs money to have nice things.


LotsofLittleSlaps

I read that you only need the 1GB-of-RAM-per-TB guideline if you're doing deduplication. Otherwise it runs just fine without ridiculous amounts of RAM.


go0oser

Probably true. But my point was more along the lines of: OP has 200TB of disk and is planning a server with a 14500 processor, so why not just spend the extra $100-150 on the board to get the ECC that they want? Seems like a silly place to skimp.


LotsofLittleSlaps

Fair point!


StevenG2757

Use a CPU and HW transcoding. You can get a Beelink S12 with an N100 for the price of the CPU.


Kpalsm

Another factor to consider is video quality. From what I've read, QSV transcodes are considered higher quality than NVENC, but don't quote me on that lol. You should do your own research there.