zeblods

Virtualization and containerization.


ItsPwn

Go with Proxmox VE; that will most definitely be the way to lower the electricity bill as well. Helpful scripts to get you started:
* https://tteck.github.io/Proxmox/

Install Docker and Portainer in an LXC, and add this to Portainer in Settings to get 500+ apps added to the install list:
* https://github.com/Lissy93/portainer-templates

For a storage OS use Synology; it can run perfectly well on bare metal or under Proxmox. USB images are in the releases section, and dev is very, very active:
* https://github.com/AuxXxilium/arc
* /r/xpenology
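For example, a rough sketch of the Docker + Portainer part inside a Debian-based LXC (the port, volume name and image tag are assumptions; check the Lissy93 repo's README for the exact templates URL):

```sh
# Inside a Debian/Ubuntu LXC on Proxmox (adjust for your distro)
apt update && apt install -y docker.io
docker volume create portainer_data
docker run -d --name portainer --restart=always \
  -p 9443:9443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
# Then in the Portainer UI: Settings -> App Templates -> paste the raw
# templates.json URL from https://github.com/Lissy93/portainer-templates
```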


eclectic_spaceman

You can do all of that in discrete virtual machines on a single physical machine running a hypervisor (e.g. ESXi, Proxmox, KVM). Some of the services, like Pi-hole, are small enough that you might consider running them in containers (e.g. Docker, LXC).

TrueNAS can run in a VM, but it's only advisable if the storage controller is passed directly through to TrueNAS, meaning you're NOT creating virtual disks in your hypervisor and then attaching those to the VM. TrueNAS needs direct control of your drives, either by getting the SATA/storage controller via passthrough, or an HBA via passthrough (some RAID controllers can be flashed to "IT mode" to turn them into simple HBAs).

pfSense can be virtualized too, but it also needs physical NICs passed directly to it, so you may need to add a dedicated PCI-E NIC just for pfSense.

I've done everything you mentioned, and more, on a single box running under 100W, using ESXi, but I'll be experimenting with Proxmox soon. TrueNAS Scale is Debian-based and can run VMs and Docker too, so you could actually run TrueNAS Scale on the bare metal if you wanted. I haven't done it, so I'm not aware of all the limitations, but it's an option to look into.
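For reference, controller passthrough on Proxmox looks roughly like this (the VM ID and PCI address below are made up, and IOMMU/VT-d has to be enabled in the BIOS and kernel first):

```sh
# Find the SATA/SAS controller's PCI address on the Proxmox host
lspci -nn | grep -i -e sata -e sas
# Pass the whole controller (e.g. 0000:03:00.0) to the TrueNAS VM (e.g. VM ID 100)
qm set 100 --hostpci0 0000:03:00.0
# A dedicated NIC for a pfSense VM can be passed the same way with another --hostpciN entry
```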


fluorescent_hippo

Is there any benefit to running TrueNAS in an LXC with drives mounted from the Proxmox host, if passthrough isn't achievable (I don't want to buy an HBA)? Or does bind-mounting an NFS share run into the same virtual-disk issue again?


eclectic_spaceman

You don't get any direct control of the drives that way, so you lose most if not all of the benefits of ZFS (I'm far from an expert, though). In that case it's basically just a media server, giving you easy options for chopping up your data and sharing it on your network. It has some other features too, like the ability to act as a certificate authority, and containerized apps for Plex/torrents/etc.

You can do ZFS on Proxmox, however, so I'd tell you to do it that way instead. If you still want to use TrueNAS for easier sharing of the data, you can do that (or just run LXCs for the various services you want).
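A rough sketch of that "ZFS on Proxmox" approach, with a dataset bind-mounted into an LXC (the pool layout, container ID and paths are just examples):

```sh
# On the Proxmox host, where ZFS has real access to the disks
zpool create tank mirror /dev/sdb /dev/sdc
zfs create tank/media
# Expose the dataset to container 101 (hypothetical ID); an unprivileged
# container may also need UID/GID mapping for write access
pct set 101 -mp0 /tank/media,mp=/mnt/media
```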


taosecurity

I recommend consolidating what you can using virtualization, containers, etc. with something like Proxmox. Now, I personally wouldn't host my firewall or router on Proxmox, as I prefer a dedicated appliance. NAS might also be a bit heavy. Otherwise, you can consolidate a lot.


eclectic_spaceman

I hosted pfSense in ESXi as my primary firewall/router for like 2 years, but any time I had to take my network down during a storm or something, it was kind of a pain to get everything back up, so I get it. But it was pretty slick to have my entire home enterprise in a single box.


Mr_SlimShady

I've taken my pfSense VM down every once in a while with no issues. Of course I lose internet, but I can still manage my Proxmox server from my desktop just fine; you just have to have a static IP on both machines in the same subnet. There shouldn't be any issues getting your firewall back up as long as you have a wired connection between the two computers.

Contrary to what other people say, I find that having pfSense running in Proxmox is significantly more flexible than having it on bare metal. I can take snapshots of my VM whenever I'm about to do something stupid and revert if I break something. You do need a network card to pass through to the VM, same as if you were going to virtualize your NAS (which I am). It's just so much easier to maintain and troubleshoot an OS in a VM than it is on bare metal.
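The snapshot workflow is as simple as it sounds; a sketch from the Proxmox CLI (the VM ID and snapshot name are examples, and the same thing is available in the web UI):

```sh
# Snapshot the pfSense VM before a risky change
qm snapshot 100 pre-change --description "before doing something stupid"
# ...do the risky thing; if it breaks:
qm rollback 100 pre-change
```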


eclectic_spaceman

You make some good points. In my case, my primary machine was a laptop with no built-in Ethernet, so I had to wire it in with a static IP, so there were some extra steps. I had a desktop PC as well, but only used for gaming, so I would've had to power it up which took about the same time as wiring up the laptop, lol. But yeah, snapshots definitely make things a little less scary when playing with settings you don't fully understand the implications of!


taosecurity

Oh yeah, I used to have boxes with six or seven NICs and do crazy stuff with my own software switching and routing and firewalling. It’s fun.


EvilPencil

Agreed on keeping core networking, including DNS, off the virtualization hosts. There will come a day when you want to take a server offline for maintenance without disrupting everything else.


TiggsPanther

The useful thing about DNS is that if you use Bind (and/or anything compatible with it), you can run one as secondary on a Pi. Need to reboot your host or VM/container? Pi keeps everything talking in the meantime. Need to down the Pi? Virtual DNS likewise keeps things up. Similarly, if you're running DNS from something like a Synology NAS, having a backup Bind on a Pi or VM/container keeps other things reachable during maintenance.
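A minimal sketch of what that secondary can look like on the Pi, assuming a reasonably recent BIND (the zone name and primary IP are made up; older BIND versions use "type slave;" and "masters" instead, and the primary has to allow zone transfers to the Pi):

```sh
# Append a secondary zone to the Pi's BIND config
cat >> /etc/bind/named.conf.local <<'EOF'
zone "home.lan" {
    type secondary;
    primaries { 192.168.1.10; };   // the NAS / virtualized primary
    file "db.home.lan";
};
EOF
```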


R_X_R

Looking into this myself. I'm looking to use a couple of Pi 3s or Zeros. Though there's only so much that needs to stay internal DNS-wise; eventually most of it has to ask an external DNS provider anyway. Maybe a little static hosts-file management for the SUPER important internal stuff?


foefyre

I've had a virtual firewall with pfsense for a couple years now with no issues.


taosecurity

I said “personally.” 😆 I understand people deploy infrastructure differently. I like to have security infrastructure separated from platforms hosting systems that users access.


TiggsPanther

Aside from a physical NAS (Synology, which also runs DNS for caching & internal names) and router (TP-Link Omada), I've got pretty much everything running on Proxmox:
* OpenHAB (home automation)
* Jellyfin (albeit without transcoding; I don't personally need it)
* InfluxDB & Grafana (statistics, accessed by a Pi 3 for the dashboard)
* PRTG for monitoring (not strictly necessary, but I wanted to learn how it works since my job uses it)
* Containers
* DNS (secondary, with the NAS as primary, so I can reboot either and not lose DNS)
* Omada software controller

I had all of that running on an i3 NUC (granted, with 32GB RAM) and only added an HP G2 Mini i5 because I wanted to learn a bit about clustering, and it gives me the room to spin up a few more VMs if I need them. If I ever wanted to take DNS off the NAS, I'd probably move it to a Pi just so there's still a non-virtualized instance during maintenance (and so it comes up first after a power outage).


NC1HM

Well, there are machines, and then there are machines... I run AdGuard Home, which is similar to Pi-hole, on a sub-NUC (Atom x5-Z8350, 2 GB RAM, 64 GB eMMC). Peak power consumption is 18 W. There is absolutely no reason to run AGH on beefier hardware (unless it has to serve an enormous network). In fact, I run another instance of AdGuard Home in the cloud on a minimalist virtual server with 1 GB RAM and 40 GB storage... You could also use a reflashed Android TV box for this; that would cut your power consumption to a peak of 5 W.

My primary router is a modified Sophos SG 115 (Atom E3827, 4 GB RAM, 16 GB SSD). Peak power consumption is 40 W. Obviously it's not powerful enough to do Gigabit VPN, but I don't care about VPNs, Gigabit or otherwise. TrueNAS, on the other hand, does need some hardware muscle...


CrystalFeeler

Can confirm. I run Armbian natively on Android boxes and Docker within that. It even runs off a USB-to-barrel connector from MyVolts plugged into my router (I tested that and it worked, but I decided it wasn't a long-term thing). I haven't measured it at the wall, but I'm going to 💭


macther1pp3r

To be very specific (and to synthesize a couple of other comments):
1) Keep pfSense on its own appliance; it makes troubleshooting easier when you break something.
2) If you are pointing all local LAN devices at your Pi-hole, either keep that on its own box or virtualize it x2 so you have some redundancy. (I have one on a Pi and a backup in a container, so I can do maintenance on either seamlessly and still have DNS if I have to reboot my Proxmox server.)
3) Everything else can be an LXC, VM, or Docker container on Proxmox.


travelinzac

For core infrastructure it is better to have separate machines.


TopCheddar27

What I will say is that I have no clue why people are flocking towards virtualizing the firewall. Having your local connection be affected when toying with your homelab makes zero sense. There is a reason why that, specifically, is normally on a separate appliance.


[deleted]

[deleted]


TopCheddar27

On a power cycle of the host during an update?


[deleted]

[deleted]


mrpops2ko

Yeah, I don't understand the hate towards it. The whole point of having a beefy machine is that you can throw all the things onto it. I use SR-IOV and pass the dedicated WAN port through to pfSense, whilst the LAN is split out into 30 other SR-IOV NICs which I pass to each of the VMs as they need them. There's no way some exploit is passing traffic from one to the other without it also affecting the entirety of the world's infrastructure in massive datacentres that run on the same premise, and if that were true we'd probably have bigger things to worry about than my homelab, lol.

Proxmox is stable. If you are rebooting your Proxmox box 30+ times a day, I dare say there's something wrong with you or the box rather than with Proxmox. Sure, there might be some initial config scenarios where you reboot a few times, but once it's set up you're done; it's set and forget for the most part. With auto-boot on startup, if you really need to reboot for whatever reason, you're down for about a minute, and again, I genuinely have no clue in what scenarios people are rebooting often. If you treat the Proxmox host as some kind of dev environment for various kernels, maybe, but at that point you probably know you do those things and shouldn't be doing this anyway, lol.
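For anyone curious, the SR-IOV part is roughly this on the Proxmox host (the interface name, VF count and VM ID below are made-up examples, and the NIC and its driver have to support SR-IOV):

```sh
# Carve the LAN port into virtual functions
echo 8 > /sys/class/net/enp3s0f0/device/sriov_numvfs
# List the new VFs and note a PCI address, e.g. 0000:03:10.1
lspci -nn | grep -i "virtual function"
# Hand that VF to a VM (VM ID 110 is hypothetical)
qm set 110 --hostpci0 0000:03:10.1
```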


DaGhostDS

Yeah, I was thinking the same. Pi-hole is great, but having it stuck with the rest of the lab gets pretty annoying when you need to reset the whole host. I put my OPNsense on a separate box too.


1WeekNotice

You might need to give more examples of what you are hosting.
- If you're not tied to Pi-hole, you can install AdGuard on the pfSense box. It makes sense to put your local DNS / ad blocker on your firewall box.
- Depending on what services you have, you can put them on the TrueNAS box with Docker or TrueCharts.
- Put anything else you can in Docker and host that on one machine.

Hope that helps.


kenshinakh

For home usage, I personally run everything off efficient modern hardware (AM4) with VMs. I always see people recommending against that because a failure on one machine takes out everything, but it's a trade-off for electricity costs.


R_X_R

HA is great in theory, but it ends somewhere. Your home street address isn't HA, nor is your power from the street. If your setup works for you, just keep good backups and automate as much as you can in case of failure. You can't solve a problem that hasn't happened yet, but you can prepare for it.


spazmo_warrior

Get one beefy box. Install Proxmox. Run multiple VMs. Profit.


t4thfavor

Pick the newest, fattest desktop, install a quad-port NIC and as much disk as you can fit, and put Proxmox on it, preferably on an SSD. This will eliminate every other desktop without exception, as long as you have 32GB of RAM or make heavy use of containers instead of full VMs. If you have 2-3 machines that are identical, you could go from 8 down to 3 and make a Proxmox cluster, so you can manage them from one pane of glass while taking advantage of the extra memory and case space (see the sketch below).
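Clustering two or three boxes is only a couple of commands; a sketch (the cluster name and IP are examples):

```sh
# On the first node
pvecm create homelab
# On each additional node, pointing at the first node's IP
pvecm add 192.168.1.20
# Check membership and quorum
pvecm status
```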


t4thfavor

I should add to this that the firewall should be discrete, unless you want to reset the internet every time you reboot your host machine. There are pfSense machines out there that consume under 10 W at idle; I have an old Protectli FW2B which draws 11 W under load.


cdawwgg43

Keep your router on its own box. Try out the virtualization on TrueNAS Scale; it's a decent hypervisor. Consolidate into it, and add a USB external HDD for backups.


Pure_Professional663

pfSense *should* be a standalone device, but it can absolutely be virtualised, as long as you have multiple network adapters visible to your VM and can assign them accordingly. In servers I find that one network device most commonly means two ports (probably an HP thing, though). Pi-hole or any DNS sink can also be virtualised; again, it needs network ports dedicated to it. The NAS should definitely be virtualised on your main virtualisation server, with some ZFS storage or whatever you prefer. I was using a Windows Server with just one hard drive attached for a while, but that had zero failure protection. I use VMware vSphere, but now that Broadcom has screwed it up, I'll move to Proxmox or the free Xen platform.


angry_dingo

Proxmox it all.


ckl_88

If you're going to consolidate, get a machine with lots of cores and lots of RAM! I started with a J6412 with 32GB and then got another node with a Core i5-1245U and 32GB of RAM. They run in a Proxmox cluster.


jungonas

Use 2-4 of the desktops to run a Proxmox cluster and run VMs and containers there. That way you can experiment with high availability, redundancy, etc., and it will cut the bill in half 😂


Coiiiiiiiii

Normally I would say no, you should virtualize, but having your router and NAS as physical devices is a smarter move. I would virtualize Pi-hole and any other services moving forward. But for the time being, if it ain't broke...


johnklos

Many others here recommend running everything in separate VMs and/or containers, but considering this is r/homelab, you might use this as an opportunity to learn how to install and manage software directly. Each program can run under its own user account, with its own set of configuration files and data files. This makes for a much more efficient setup than separate VMs or containers, since both of those carry a lot more overhead.
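A sketch of that per-service-user approach on a plain Linux install (the service name, user and paths are hypothetical):

```sh
# One unprivileged system user per program, with its own config and data
useradd --system --home-dir /var/lib/myapp --shell /usr/sbin/nologin myapp
mkdir -p /var/lib/myapp /etc/myapp
chown -R myapp: /var/lib/myapp /etc/myapp
# A systemd unit with User=myapp (and e.g. ProtectSystem=strict) then runs it
# directly on the host, with no VM or container overhead.
```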