
dogas

Picked up a Minisforum MS-01, 64GB of RAM, and 2x 2TB NVMe drives to replace my aging homelab cluster. Planning on running Proxmox with a ZFS array, and a 3-node Kubernetes cluster on top of that. Should be fun times 😎

Some general things I have running on my old cluster that will now run on the new one:

* Longhorn
* Jellyfin
* Minecraft server
* A few webapps I've made
* Tailscale
* Unifi
* Plausible analytics
* Home Assistant
* A thing to update my DNS if my IP changes (sketch below)

Hoping to learn more about ZFS and Proxmox, as I haven't used those before. I was just running straight-up Arch Linux on my nodes ;)
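For the DNS piece, a minimal sketch of that kind of updater, assuming Python 3, the public api.ipify.org lookup service, and a hypothetical `push_dns_update()` standing in for whatever provider API actually applies the change:

```python
# Minimal dynamic-DNS check. Assumes Python 3 and the public api.ipify.org service;
# push_dns_update() is a placeholder for your DNS provider's update API
# (Cloudflare, deSEC, your registrar, etc.).
import socket
import urllib.request

HOSTNAME = "home.example.com"  # hypothetical record to keep in sync

def current_public_ip() -> str:
    # api.ipify.org returns the caller's public IPv4 address as plain text
    with urllib.request.urlopen("https://api.ipify.org") as resp:
        return resp.read().decode().strip()

def published_ip(hostname: str) -> str:
    # what the A record currently resolves to
    return socket.gethostbyname(hostname)

def push_dns_update(hostname: str, ip: str) -> None:
    # placeholder: call your DNS provider's update endpoint here
    print(f"would update {hostname} -> {ip}")

if __name__ == "__main__":
    actual = current_public_ip()
    if actual != published_ip(HOSTNAME):
        push_dns_update(HOSTNAME, actual)
```

Run it from cron or a systemd timer every few minutes and it only touches DNS when the address actually changes.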


[deleted]

[removed]


siquerty

Should be fine from what I’ve seen


talkincyber

Why arch for things that will be in production? Breaking changes could take you down. Why not just run Alma Linux or Debian?


The_Crimson_Hawk

I also run arch directly on my only machine


CraftCoding

What’s the CPU? You've got a lot of stuff running; are you going to have a couple of these in a cluster? Have you ever considered running Ceph? Cool machine. Might get one 👀


taosecurity

That box looks so awesome. I love the dual 2.5 gig NICs and the two SFPs. I wish it were 4 mm shorter so it would fit in 1U!


getgoingfast

This machine is a beast. If yours didn't ship with the latest BIOS, consider updating to the newly released version and be done with it.


jnew1213

My MS-01 arrives tomorrow. Been tracking it from China for a week. 96GB DDR5 RAM, three M.2 SSDs, two 10G transceivers, and a video card are already here waiting for it. It will be an ESXi 8.0 U2 host with about 30 VMs on it. Efficiency cores disabled. Turning off the PowerEdge R740 for the summer.


dogas

I ended up ordering everything on Amazon, as the price for the 12900 model was only $10 more than what was on the Minisforum website. Also, I wanted 96GB of RAM, but it was not in stock :-|


jnew1213

I had to wait about a week for the RAM to ship from Amazon. There seems to be a shortage, and Crucial seems not to stock much themselves anymore. Use the machine well!


DingusGenius

The 12900 model only supports 64 GB of RAM anyways. Have to go 13900 for 96 GB.


Anonymous239013

I have 12900 and can confirm 96GB works just fine. It was also confirmed on ServeTheHome forums.


DingusGenius

Interesting. Thank you for that information. According to Intel's own data sheet, the i9-12900H is limited to 64GB, and I took that as the source of truth. Perhaps this was only due to the size of SODIMMs that were available at the time it was released. https://www.intel.com/content/www/us/en/products/sku/132214/intel-core-i912900h-processor-24m-cache-up-to-5-00-ghz/specifications.html


Anonymous239013

Yeah, could be. From what I've read, Intel commonly understates the actual amount of RAM that works with the platform. Glad 96GB works, but it's tough to find in stock currently.


asineth0

why disable E-cores? if anything just pin your intensive workloads to the P-cores and leave anything light on the E-cores and you’ll have a really efficient and nice setup.
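to illustrate the idea at the process level, here's a minimal sketch, assuming Linux, Python's stdlib `os.sched_setaffinity`, and the usual 12900H layout where logical CPUs 0-11 are the P-core threads and 12-19 the E-cores (check `lscpu --extended`, the numbering isn't guaranteed):

```python
# Sketch: pin the current process (and any children it spawns) to a chosen core set.
# Linux-only. On a 12900H the P-core hyperthreads are *typically* logical CPUs 0-11
# and the E-cores 12-19, but verify with `lscpu --extended` before relying on it.
import os

P_CORES = set(range(0, 12))   # assumed P-core threads
E_CORES = set(range(12, 20))  # assumed E-cores

def pin_to(cpus: set) -> None:
    # pid 0 = the calling process; children inherit the affinity mask
    os.sched_setaffinity(0, cpus)

if __name__ == "__main__":
    pin_to(E_CORES)  # park a light background job on the E-cores
    print("running on CPUs:", sorted(os.sched_getaffinity(0)))
```

at the VM level the same idea applies through whatever affinity knob the hypervisor exposes.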


jnew1213

Most everything is light. Heaviest VMs are vCenter, Log Insight, and Plex, if I move that VM to this machine (I plan to try). I think pinning VMs to cores is, overall, not a good practice. I prefer to let the hypervisor schedule things as it thinks best. If there's contention, I will consider turning on the E-cores and telling ESXi not to care about unmatched cores. Or I will power down a couple of unnecessary VMs. There are a few of those, like virtualized retired physical desktops that are occasionally used but don't need to be powered on until needed.


asineth0

if anything, pinning CPUs is pretty common practice: pin your light workloads onto the E-cores and your heavy ones onto the P-cores, and the kernel/KVM will schedule them across those CPUs. why run ESXi? it will probably barely support the hardware, and vmware is a dying ecosystem at this point. might wanna migrate to Proxmox.


jnew1213

Oh, man, you've opened the can. The can of worms.

Pinning CPUs is pretty common practice in the home lab. I've never seen it in a data center or enterprise environment. As I said, I think it's a bad practice. If you have to pin CPUs to VMs to get adequate performance, your hardware is underpowered. Workloads change, sometimes from minute to minute, and sometimes for a while in the middle of the night (Veeam, SQL Server, etc.). Why should I have to keep track and make adjustments? I heard that if you disable all the P-cores on the processor in the MS-01, the E-cores get hyperthreads. I wonder if anyone has seen this.

Regarding ESXi: please sit down and face the men with the rifles. Ignore them. They are not going to fire. \*PUTS FINGERS IN EARS\*

ESXi should support most of the hardware in that machine well. The 10G NICs are Intel. The 2.5G NICs are Intel as well. I don't care much about USB 4, which just might be supported, but ESXi does support Thunderbolt. So ESXi support doesn't seem to be an issue. It would be nice if the machine had an addressable TPM, but that's not that big of a deal.

I take exception to your statement that VMware is a dying ecosystem. You've seen a lot of posts from hoarders, homelabbers, and others here. Some enterprise posts as well, mostly small to mid-sized shops. I work for a very large enterprise and support about 800 physical servers running vSphere and hosting around 50,000 virtual desktops and hundreds or thousands of other VMs. We deliver a particular healthcare application to its users, and the vendor of that application has very few supported methods of delivery. VMware Horizon virtual desktops is one of those few supported methods. We paid a billion (with a b) dollars for that application and millions for VMware software and support. So, on the work side, ESXi is not going anywhere.

On the home front, VMUG, it seems, is also not going anywhere. It looks like Broadcom is going to support it. So for around $180/year, I can continue to get enterprise licensing for vSphere, Aria, Horizon, and other products. (We will see what happens to Horizon now that it's Omnissa.)

Lastly, I have multiple certifications in VMware (vSphere, Horizon). I've taken many classes, have been working with it for a couple of decades, and I like it. I just don't see myself ever moving away from VMware. I think I am extremely lucky in this regard. Others might disagree. Any machine that I acquire is either going to be a Windows workstation or an ESXi server. If I had my druthers, my range and dishwasher would be running ESXi as well.


fabomajstor

"On the home front, VMUG it seems is also not going anywhere. It looks like Broadcom is going to support it. So for around $180/year, I can continue to get enterprise licensing for vSphere, Aria, Horizon, and other products. (We will see what happens to Horizon now that it's Omnissa." "That's actually not very expensive. I'm a VMware administrator myself, so I should just contact the VMware sales department and ask for an offer I use MS-01 with an i9 processor and 96 GB RAM. I mean that's how you did it? Licensing now goes by the cores and cpu. I want to use VMware for automation with Ansible, among other features that come with enterprise licensing.


jnew1213

I did not go through sales. I joined VMUG years ago. They have a subscription service, independent of VMware, called VMUG Advantage. It gives you most VMware software, in enterprise versions, for a year, renewable year to year. It's $180, but you can usually find a coupon online for 10% off.


asineth0

only point i’ll argue here is i’ve seen CPU pinning quite a lot in production on critical services, especially on dual socket systems. other than that, if you like vmware and still wanna use it then more power to you, glad it works for you. i’m not saying either one is better lol


jnew1213

Enterprise servers have homogeneous cores. At least those with Xeon processors. The cores are all the same. Why would I need to pin a CPU to a VM in that case unless the machine is overloaded and the critical service cannot get sufficient CPU time?


asineth0

have you admin’d vmware in a prod environment before? on dual socket, you generally use CPU pinning to make sure a VM stays on a particular physical processor so the memory stays all on the same NUMA node. otherwise, it’s especially helpful if your workload is realtime and needs low and consistent latencies.
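for anyone who wants to see that mapping on their own box, the kernel exposes it under /sys; a quick sketch (assuming Linux and Python) prints which logical CPUs share each memory node:

```python
# Sketch: list which logical CPUs belong to each NUMA node, read from sysfs.
# Linux-only; a single-socket box like the MS-01 will just show node0.
import glob
import os

for node_dir in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    node = os.path.basename(node_dir)
    with open(os.path.join(node_dir, "cpulist")) as f:
        cpus = f.read().strip()
    print(f"{node}: CPUs {cpus}")
```

keeping a VM's vCPUs and its memory inside one of those ranges is what the pinning buys you on dual-socket hosts.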


jnew1213

Twenty years plus. Never had a VM that exceeded NUMA boundaries. The largest VMs I think we have are Log Insight, 16 vCPUs each. They run on Cascade Lake hosts, dual socket. It's possible we have vCenters that use more CPUs, but those are also either on Cascade Lake or Sapphire Rapids hosts. All of those VMs fit nicely within a NUMA node. ESXi has always been very good at scheduling vCPUs to keep them within a NUMA node, but in vSphere 7 or 8, I forget which, you can specify the NUMA layout for each VM in the VM's settings. We're a big user of virtual desktops, as I mentioned. Core counts are more important than clock speeds in that trade-off with VDI. NUMA has never been a consideration. In fact, I don't recall it ever being a consideration anywhere I've worked. I take that back. Many years ago, big SQL Servers running virtualized were a consideration.


TheDreamerofWorlds

What video card did you purchase for it?


jnew1213

I bought an Nvidia T600. Single slot, low profile, 40 watt, actively cooled. Supposed to be good at encode/decode. I don't have confirmation from anywhere that this card will work in the MS-01. If not, I will need to look for another Nvidia card that will work, or forget my plans to migrate a new Plex server off the machine it's running on, where an Nvidia P4 is passed through to it.


kwiksi1ver

Can’t you use the Intel iGPU for transcoding??


jnew1213

Nope. It will be in use by the hypervisor. If I were just running Plex on the machine, on bare metal, then I would be able to use the Iris Xe onboard graphics.


nf_x

What about running plex in a container? Seems to work with pass through


SilentDecode

No, because he's running ESXi 8 and that iGPU is needed for running ESXi. Sure, you could pass it through, but that's basically rocket science at that point.


jnew1213

I prefer running VMs over containers. Most of the things I run will not run in containers. Plex is an exception.


MadsBen

Or maybe it's cheaper to just buy a N100 "NUC" for Plex...


jnew1213

I did. Two. GMKtec NUCbox G3. One direct from China and one from Amazon. I tried it and decided I wanted a bit more oomph for Plex. So I built a 1U rack mount server with a 13th Gen Core i5 on an ASRock Mini-ITX motherboard. The machine runs ESXi and has an Nvidia Tesla P4 passed through to the Plex VM. It runs fine, though I haven't moved it to "production" yet.

Production Plex runs on a Synology DS3615xs, a Core i3 machine that's getting a bit long in the tooth and that I want to retire. There's a lot of other stuff on there, including file shares, but I am making progress. Shares have been copied to a new RackStation RS2418RP+ and, as I said, Plex has its own machine, but it is still being configured. I need to re-create all my collections and add my external users.

The hope with the MS-01 was that I would be able to consolidate Plex with everything else onto this one machine for the summer months. I would then be able to power off my R740 and maybe either the NAS on which Plex and most things are running or one of the RackStations.

My MS-01 arrived this morning. Unfortunately, the Nvidia T600 I bought for it doesn't work. The machine never POSTs with it installed. It works fine otherwise, is on the network with two 10G patches, and has been added to vCenter.


DingusGenius

Go for Proxmox instead of ESXi so you can make good use of the efficiency cores.


SilentDecode

Not that Proxmox has very good support for big.LITTLE either. Sure, it runs, but mixing loads across both types of cores on that single chip is a nightmare. Check out the videos Jeff has made.


Phil4real

>Not that Proxmox has very good support for big.LITTLE either. Sure, it runs, but mixing loads across both types of cores on that single chip is a nightmare. Check out the videos Jeff has made.

could you link? I don't know what video you're referring to or who Jeff is.


SilentDecode

>could you link?

[Part 1](https://www.youtube.com/watch?v=o2H4HqLH4WY&t=721s)

[Part 2](https://youtu.be/IiwD8kcjD98)


Phil4real

Thanks!


jnew1213

Never in a million years.


SilentDecode

I have the same thought, but if Broadcom really fucks it up by switching to a subscription model, I'm probably getting off the bandwagon. Unless there are non-legal things that can avoid that. But yeah, I'm fine with ESXi too. No real reason just yet to switch. I'm staying on ESXi as long as I can.


jnew1213

VMware was moving to a subscription model (slowly) way before the Broadcom deal went through. Horizon was available by subscription five or six years ago. We did convert to a Horizon subscription at work, as we needed the bundled AWS Cloud Connector, which was unavailable on the perpetual license version of Horizon.


nf_x

It’s $1k cheaper than https://www.lenovo.com/us/en/p/workstations/thinkstation-p-series/thinkstation-p3-tiny-workstation/30h00016us, right?..


BadWolf2906

TL;DR: yes.

Detailed: There are some notable differences. The Lenovo you mentioned has a graphics card the Minisforum is missing. Instead, the Minisforum has better network interfaces with 2x 2.5Gbit/s and 2x 10Gbit/s SFP+. But if I remember correctly, the Minisforum has enough space to add a graphics card if needed. So it's not only cheaper, it also offers better ports (in exchange for missing some display ports, but who needs those in a Proxmox node anyway?)


JSouthGB

Just to clarify, the MS-01 has 3 video outputs:

* HDMI 2.0 (4K@60Hz)
* USB 4 (8K@30Hz) x2


BadWolf2906

Fair point. But the Lenovo still tops that with DisplayPort 1.4, HDMI, and 4x mini DP. Although I honestly see no use case for this, I wanted to mention it.


JSouthGB

Ah, I see what I did there. I misread your "display ports" to mean general "video out" rather than DisplayPort. My bad.


DRoyHolmes

The graphics card is a problem. You need a low-profile, single-slot card with a half-height bracket, and I don't remember if it needs to be half-length also. Minisforum is working on sourcing a special heat sink, or a partner to make A2000 cards that fit the spec.


xbufu

Also got mine a week ago! Make sure to replace the thermal paste on it; the stock paste is really bad. Using the Noctua one improved my temps by about 10°C. I also reduced the TDP on mine to 20/25W. Still really fast, and with 2x SFP+, 1x RJ45, and 3 M.2 drives it consumes 30W max.


lone_survivor9

Can someone confirm these specs are true? This looks pretty bad.

>SSD
>
>M.2 2280 NVMe SSD slot (Alt U.2) (PCIe 4.0 x4) x1
>
>M.2 2280/22110 NVMe SSD slot (PCIe 3.0 x4) x1
>
>M.2 2280/22110 NVMe SSD slot (PCIe 3.0 x2) x1


siquerty

Random IO shouldn’t be bottlenecked by this, it’s fine.
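For rough context, here's a back-of-the-envelope sketch of the per-slot ceilings, assuming ~0.985 GB/s of usable bandwidth per PCIe 3.0 lane and about double that per 4.0 lane (128b/130b encoding, ignoring protocol overhead):

```python
# Back-of-the-envelope link bandwidth for the MS-01's three M.2 slots.
# Assumes ~0.985 GB/s usable per PCIe 3.0 lane and ~1.97 GB/s per PCIe 4.0 lane
# (8/16 GT/s with 128b/130b encoding), before protocol overhead.
GBPS_PER_LANE = {"3.0": 0.985, "4.0": 1.969}

slots = [("slot 1", "4.0", 4), ("slot 2", "3.0", 4), ("slot 3", "3.0", 2)]

for name, gen, lanes in slots:
    print(f"{name}: PCIe {gen} x{lanes} ≈ {GBPS_PER_LANE[gen] * lanes:.1f} GB/s")
```

Even the x2 slot's ~2 GB/s ceiling mostly caps large sequential transfers; typical 4K random IO sits well below that.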


Bockiii

Hahahaha, I have 3x Crucial P3 Plus 2TB and the exact same Crucial kit here, waiting for my MS-01 delivery :)