Page 1 of 2
Results 1 to 10 of 18
  1. #1
    Untangler
    Join Date
    Mar 2018
    Location
    Toronto, Ontario
    Posts
    99

    Default Untangle VM under Proxmox

    Any users out there? I have a Lenovo ThinkCentre Tiny desktop with a gigabit PCIe Mini card replacing the Wi-Fi card.

    I'm trying to virtualize Untangle under Proxmox again. Thanks.

  2. #2
    Untangler
    Join Date
    Sep 2009
    Posts
    43

    Default

    Yep, works just fine. You will have to manually install the QEMU guest agent.
    I've been running with VirtIO networking with no issues.
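    For reference, a minimal sketch of the manual agent install inside the Untangle (Debian-based) guest, assuming the standard Debian package name:

    ```shell
    # Inside the Untangle guest:
    apt-get update
    apt-get install -y qemu-guest-agent

    # Start the agent now and enable it at boot
    systemctl enable --now qemu-guest-agent

    # Check that it is running
    systemctl status qemu-guest-agent
    ```

    Remember to also enable the "QEMU Guest Agent" option in the VM's Options tab in Proxmox, then power-cycle the VM so the virtio serial device appears.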
    dashpuppy likes this.

  3. #3
    Untangler sheck's Avatar
    Join Date
    May 2020
    Posts
    64

    Default

    I run test environments on my Proxmox server all the time. It works just as well as, if not better than, ESXi or bare metal.

    I don't know the specs of your machine, but as long as it has two NICs you should be OK; all the setup is done in the hypervisor anyway. You just add a WAN and a LAN interface to the VM you'll install the Untangle ISO on.
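    As a sketch of that hypervisor-side setup, assuming the two NICs are already attached to bridges vmbr0 (WAN) and vmbr1 (LAN) on the host, and a VM ID of 100 (both hypothetical):

    ```shell
    # On the Proxmox host: give the VM a WAN and a LAN interface,
    # each on its own bridge, using the paravirtualized VirtIO NIC model
    qm set 100 --net0 virtio,bridge=vmbr0   # WAN side
    qm set 100 --net1 virtio,bridge=vmbr1   # LAN side
    ```

    Untangle then sees two virtual NICs at install time and you assign WAN/LAN roles in its setup wizard as usual.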
    Last edited by sheck; 02-09-2022 at 01:03 PM.
    dashpuppy and donhwyo like this.

  4. #4
    Untangler
    Join Date
    May 2008
    Posts
    600

    Default

    Quote Originally Posted by sheck View Post
    I run test environments on my Proxmox server all the time. It works just as well as, if not better than, ESXi or bare metal.

    I don't know the specs of your machine, but as long as it has two NICs you should be OK; all the setup is done in the hypervisor anyway. You just add a WAN and a LAN interface to the VM you'll install the Untangle ISO on.
    I agree, it works great. It would be nice if there were a wiki with some best practices. I have replied to a few threads with what works for me; Advanced Search might find them.

  5. #5
    Untangler
    Join Date
    Mar 2018
    Location
    Toronto, Ontario
    Posts
    99

    Default

    Thanks, everyone. Kicking myself for not doing it earlier for home use. I'm an ESXi fan and wasn't satisfied with Proxmox 6 performance with Untangle a few years ago.

    Installed Proxmox 7, created the Untangle VM, restored settings, and everything worked well.

    Some things I optimized:
    * CPU type: host, not an emulated CPU like kvm64. This is a standalone host; I'm not going to cluster or do vMotion/live migration.
    * Chipset: Q35, to enable PCIe passthrough of the NIC.
    * Optional: installed the latest Linux kernel, 5.15.x instead of 5.13.x.
    * Optional: installed the QEMU guest agent manually.
    * Optional: enabled 1 GB of zswap on the host, so small swap usage goes to compressed memory rather than disk.
    * Enabled weekly fstrim; I have NVMe/SSD storage. FYI, it's disabled in Untangle by default. That's no fault of Untangle or Debian (there's a reason for it), but it should be enabled on modern NVMe/SSD drives.
    * Lastly, enabled PCIe passthrough of the hardware WAN NIC. This means the WAN interface is dedicated to the Untangle VM, which has total control of the hardware NIC; the host cannot use it, and the second internal hardware NIC is bridged. This is for performance (I still have to quantify that) and security (VMs or containers can't accidentally use the WAN).
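    For anyone following along, the Proxmox-side settings above can be sketched roughly like this (the VM ID 100 and the NIC's PCI address 0000:02:00.0 are hypothetical; adjust to your system):

    ```shell
    # On the Proxmox host:
    qm set 100 --cpu host          # host CPU type instead of emulated kvm64
    qm set 100 --machine q35       # Q35 chipset, needed for PCIe passthrough
    qm set 100 --hostpci0 0000:02:00.0,pcie=1   # dedicate the WAN NIC to the VM
    qm set 100 --agent enabled=1   # tell Proxmox the guest agent is installed

    # zswap on the host is a kernel command-line option: add "zswap.enabled=1"
    # (pool size is set as a percentage of RAM via zswap.max_pool_percent)
    # to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then:
    update-grub && reboot

    # Weekly fstrim inside the guest (Debian ships a systemd timer for this):
    systemctl enable --now fstrim.timer
    ```

    PCIe passthrough also requires IOMMU (VT-d/AMD-Vi) enabled in the BIOS and on the host kernel command line; see the Proxmox passthrough docs for the full checklist.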
    Last edited by balrog; 02-10-2022 at 07:02 AM.
    dashpuppy likes this.

  6. #6
    Untangler
    Join Date
    Sep 2009
    Posts
    43

    Default

    My setup is based on a micro-ATX motherboard with a four-port server NIC with SR-IOV enabled. Performance seems to be as good as bare metal; very satisfied with it. If you have multi-socket CPUs, then I would suggest enabling NUMA too.
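    A rough sketch of what that looks like on the Proxmox side (the VM ID 100, the interface name eth0, and the virtual function's PCI address are all hypothetical; SR-IOV also needs IOMMU enabled in the BIOS and on the kernel command line):

    ```shell
    # On the Proxmox host: create virtual functions on an SR-IOV capable NIC
    # (here, 4 VFs on the physical function behind eth0)
    echo 4 > /sys/class/net/eth0/device/sriov_numvfs

    # Pass one VF through to the Untangle VM
    qm set 100 --hostpci0 0000:03:10.0,pcie=1

    # On multi-socket hosts, enable NUMA awareness for the VM
    qm set 100 --numa 1
    ```

    With a VF passed through, the guest talks to the NIC hardware directly, which is why it can approach bare-metal throughput.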

  7. #7
    Master Untangler
    Join Date
    Jul 2010
    Posts
    920

    Default

    Quote Originally Posted by balrog View Post
    Thanks, everyone. Kicking myself for not doing it earlier for home use. I'm an ESXi fan and wasn't satisfied with Proxmox 6 performance with Untangle a few years ago.

    Installed Proxmox 7, created the Untangle VM, restored settings, and everything worked well.

    Some things I optimized:
    * CPU type: host, not an emulated CPU like kvm64. This is a standalone host; I'm not going to cluster or do vMotion/live migration.
    * Chipset: Q35, to enable PCIe passthrough of the NIC.
    * Optional: installed the latest Linux kernel, 5.15.x instead of 5.13.x.
    * Optional: installed the QEMU guest agent manually.
    * Optional: enabled 1 GB of zswap on the host, so small swap usage goes to compressed memory rather than disk.
    * Enabled weekly fstrim; I have NVMe/SSD storage. FYI, it's disabled in Untangle by default. That's no fault of Untangle or Debian (there's a reason for it), but it should be enabled on modern NVMe/SSD drives.
    * Lastly, enabled PCIe passthrough of the hardware WAN NIC. This means the WAN interface is dedicated to the Untangle VM, which has total control of the hardware NIC; the host cannot use it, and the second internal hardware NIC is bridged. This is for performance (I still have to quantify that) and security (VMs or containers can't accidentally use the WAN).
    ESXi fan myself, running 7.0. It is very nice to run pfSense/Untangle in a VM. My issue was doing maintenance and cleanup of my server: if I had to turn it off and pull it out myself to dust it (it sits in the garage all year), the internet in the WHOLE house went down. I loved the fact that if I screwed up something in the firewall, I could restore it within seconds and be back up and running.

    So I have ditched the firewall-in-a-VM on the main server and went bare metal on a dedicated box; that way, if I have to do anything to my server, I can without affecting the house's internet.

    Before you say the server should be up and running 100% of the time: yes, that's true, but when you run enterprise hardware, it needs updates and patching too, e.g. firmware for motherboards and PSUs, or ESXi itself needing patching.

  8. #8
    Untangle Ninja sky-knight's Avatar
    Join Date
    Apr 2008
    Location
    Phoenix, AZ
    Posts
    26,546

    Default

    Yeah, I love me some virtual Untangle, but I still keep an appliance around for maintenance. I just leave it on the appliance, and the VM is there if that thing breaks.

    Also, Proxmox has come a LONG way in the last decade. I'll be moving off Hyper-V to it sometime in the next year or so, because MS wants me to do the TPM thing to run Server 2022, which isn't happening. This server I have is just getting warmed up, and I don't mind the performance loss due to mitigations.
    dashpuppy likes this.
    Rob Sandling, BS:SWE, MCP
    NexgenAppliances.com
    Phone: 866-794-8879 x201
    Email: support@nexgenappliances.com

  9. #9
    Master Untangler
    Join Date
    Jul 2010
    Posts
    920

    Default

    Quote Originally Posted by sky-knight View Post
    Yeah, I love me some virtual Untangle, but I still keep an appliance around for maintenance. I just leave it on the appliance, and the VM is there if that thing breaks.

    Also, Proxmox has come a LONG way in the last decade. I'll be moving off Hyper-V to it sometime in the next year or so, because MS wants me to do the TPM thing to run Server 2022, which isn't happening. This server I have is just getting warmed up, and I don't mind the performance loss due to mitigations.
    I found Hyper-V a pain. I mean, it's not hard to use or run; I just like the fact that ESXi uses fewer resources on the box. Then again, my box is way overbuilt, but still :P

  10. #10
    Untangle Ninja sky-knight's Avatar
    Join Date
    Apr 2008
    Location
    Phoenix, AZ
    Posts
    26,546

    Default

    Quote Originally Posted by dashpuppy View Post
    I found Hyper-V a pain. I mean, it's not hard to use or run; I just like the fact that ESXi uses fewer resources on the box. Then again, my box is way overbuilt, but still :P
    Yeah, but patching Hyper-V is so much easier! And I can recover VMs with a Windows 10 install USB.
    Rob Sandling, BS:SWE, MCP
    NexgenAppliances.com
    Phone: 866-794-8879 x201
    Email: support@nexgenappliances.com
