• 1 Post
  • 5 Comments
Joined 2 years ago
Cake day: June 23rd, 2023

  • Pulling around 200W on average.

    • 100W for the server: Xeon E3-1231v3 with 8 spinning disks + HBA, and a couple of SATA SSDs
    • ~80W for the UniFi PoE 48 Pro switch. Most of this is PoE power for half a dozen cameras, downstream switches and APs, and a couple of Raspberry Pis
    • ~20W for the Protectli Vault running OPNsense
    • Total usage measured via the Eaton UPS
    • Subsidised during the day with solar power (Enphase)
    • Tracked in Home Assistant
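
    For reference, a minimal sketch of how those readings could be pulled out of Home Assistant via its REST API. The URL, token, and sensor entity IDs below are placeholders/assumptions — the real entity names depend on how the UPS and Enphase integrations expose them:

    ```python
    import requests

    HA_URL = "http://homeassistant.local:8123"   # assumed default Home Assistant address
    TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"       # created under Profile -> Security in HA

    # Hypothetical entity IDs -- substitute whatever your UPS/Enphase integrations actually expose
    UPS_LOAD = "sensor.eaton_ups_output_power"
    SOLAR = "sensor.enphase_current_power_production"

    def read_watts(entity_id: str) -> float:
        """Fetch the current state of a power sensor via Home Assistant's REST API."""
        resp = requests.get(
            f"{HA_URL}/api/states/{entity_id}",
            headers={"Authorization": f"Bearer {TOKEN}"},
            timeout=10,
        )
        resp.raise_for_status()
        return float(resp.json()["state"])

    load = read_watts(UPS_LOAD)
    solar = read_watts(SOLAR)
    print(f"Rack load: {load:.0f} W, solar: {solar:.0f} W, net from grid: {max(load - solar, 0.0):.0f} W")
    ```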



  • Thanks, I’ll need to have a look at how the chipset link works, and how the southbridge combines incoming PCIe lanes to reduce the number of connections from the 24 in my example down to the 4 available. Even so, given these devices are typically PCIe 3.0, running them all at max spec could swamp the link with roughly 3x the data it has bandwidth for (24 lanes of PCIe 3.0 is 23.64 GB/s, vs 4 lanes of PCIe 4.0 at 7.88 GB/s).
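
    For reference, the back-of-the-envelope maths behind those figures (assuming the usual ~0.985 GB/s per PCIe 3.0 lane and ~1.97 GB/s per PCIe 4.0 lane after 128b/130b encoding overhead):

    ```python
    # Per-lane throughput in GB/s after 128b/130b encoding overhead.
    GBPS_PER_LANE = {
        "3.0": 8e9 * (128 / 130) / 8 / 1e9,    # ~0.985 GB/s
        "4.0": 16e9 * (128 / 130) / 8 / 1e9,   # ~1.969 GB/s
    }

    def bandwidth(gen: str, lanes: int) -> float:
        """One-direction aggregate bandwidth in GB/s for a PCIe generation and lane count."""
        return GBPS_PER_LANE[gen] * lanes

    devices = bandwidth("3.0", 24)   # 24 lanes of PCIe 3.0 devices behind the chipset
    uplink = bandwidth("4.0", 4)     # x4 PCIe 4.0 chipset-to-CPU link

    print(f"Devices: {devices:.1f} GB/s, uplink: {uplink:.1f} GB/s, "
          f"oversubscription: {devices / uplink:.1f}x")
    # -> Devices: 23.6 GB/s, uplink: 7.9 GB/s, oversubscription: 3.0x
    ```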


  • I hadn’t considered AMD, really only due to the high praise I’m seeing around the web for QuickSync, and AMD falling behind both Intel and Nvidia in hwaccel. I’ll certainly consider it if there isn’t a viable QuickSync option anyway.

    And you’re right, the southbridge provides additional PCIe connectivity (on both AMD and Intel), but bandwidth has to be considered. Connecting an HBA (x8), 2x M.2 SSDs (x8 total), and a 10Gb NIC (x8) over the same x4 link for something like a TrueNAS VM (ignoring other VM I/O requirements), you’re going to be hitting the NIC and the HBA and/or SSDs (think ZFS cache/logging) at max simultaneously, saturating the link and creating a significant bottleneck, no?
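
    Putting rough, hypothetical numbers on that worst case (these are assumed peak demands, not measurements; the HBA and NVMe drives are assumed to be PCIe 3.0, and the NIC is capped by its 10Gb line rate rather than its slot width):

    ```python
    # Assumed peak concurrent demand from chipset-attached devices vs the x4 uplink.
    PCIE3_LANE_GBPS = 0.985            # per-lane PCIe 3.0 throughput after encoding overhead
    UPLINK_GBPS = 4 * 1.969            # x4 PCIe 4.0 chipset-to-CPU link, ~7.9 GB/s

    demand_gbps = {
        "HBA (x8 PCIe 3.0)": 8 * PCIE3_LANE_GBPS,
        "2x M.2 NVMe (x4 PCIe 3.0 each)": 2 * 4 * PCIE3_LANE_GBPS,
        "10Gb NIC": 10 / 8,            # ~1.25 GB/s line rate
    }

    total = sum(demand_gbps.values())
    for name, gbps in demand_gbps.items():
        print(f"{name}: {gbps:.1f} GB/s")
    print(f"Total peak demand: {total:.1f} GB/s vs uplink {UPLINK_GBPS:.1f} GB/s "
          f"({total / UPLINK_GBPS:.1f}x oversubscribed)")
    ```

    In practice the spinning disks behind the HBA won’t come close to x8 PCIe 3.0, but the point stands: any combination of NIC plus fast flash can saturate the shared x4 link well before the devices themselves are the limit.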