Friday 24 April 2020

MediaServer 8.3 - Bifurcation Edition


Some hardware updates, some minor mods and lots of cabling later, I've managed to get my unRAID server hosting eleven (or thirteen, depending on how you count) usable PCIe/PCI expansion cards, which I'm using to manage storage devices and populate VMs with passthrough GPU, USB and utility adapters.

All with two slots to spare on my five-slot mainboard.

Here's how...


Having upgraded to a Threadripper 2950X based system, and having been tenacious about consolidating physical machines into VMs, I've been constantly hitting the physical limits of my system when it comes to the number of devices available for passthrough.

I have plenty of cores, just not enough expansion slots.

I've recently written about using a PCIe mining expansion system to solve the problem of exposing discrete USB adapters to VMs, but the challenge of GPUs and other devices remains. For example, I run a whole-house A/V infrastructure based around Emby, SqueezeServer & Kodi. The VM driving this needs a pair of PCI audio cards and a bunch of DVB-S2 tuners passed through; where can I fit those?

The X399 architecture typically provides up to 5 physical PCIe slots. My Taichi X399 provides a pair of x16 slots, a pair of x8 slots (x16 physical), and an x1 (which is actually IOMMU-bundled with a set of other onboard devices, so is pretty useless for passthrough).

I've had really good luck using M.2 NGFF SSD to PCIe 4X converter adapter cards to add extra PCIe cards, but those M.2 ports are useful for other things, like storage, so I was keen to find another solution.

I've been following the bifurcation thread over at Hard Forum with interest for a while. Bifurcation is the process by which a single x16 or x8 PCIe slot can be split into smaller lane groups for the purpose of driving multiple cards from the same slot.

For example, a single x16 slot can be split into two x8 links (8x8) or four x4 links (4x4x4x4). To achieve this, the motherboard needs to support bifurcation; it's usually set in the BIOS. My MB allows me to split either of the x16 PCIe slots.

That's not the whole story though. To make use of a bifurcated slot, you need a PCIe adapter or riser cable that physically divides the single slot into two or four slots as appropriate.

The cool thing about bifurcation is that it creates two or four genuinely separate slots. Unlike the mining risers and expanders that use PLX switch technology, where the downstream devices share the upstream lanes and effectively take turns on the link, a bifurcated slot is, for all intents and purposes, truly split, with every connected device having full access to the lanes it is allocated.
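One nice thing about this is how easy it is to verify from the unRAID console: Linux exposes each device's negotiated link width through sysfs. Something like this little Python sketch (nothing MaxCloudOn-specific, just standard sysfs attributes, so treat it as a rough sketch) will print it for every device:

    #!/usr/bin/env python3
    # Minimal sketch: print the negotiated PCIe link width/speed of every device,
    # read straight from sysfs, to confirm how a bifurcated slot has enumerated.
    from pathlib import Path

    def read(dev, attr):
        # Not every device exposes the link attributes, so fall back gracefully.
        try:
            return (dev / attr).read_text().strip()
        except OSError:
            return "n/a"

    for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
        width = read(dev, "current_link_width")   # expect "4" behind a 4x4x4x4 split
        speed = read(dev, "current_link_speed")   # e.g. "8.0 GT/s PCIe"
        ids = f'{read(dev, "vendor")}:{read(dev, "device")}'
        print(f"{dev.name}  [{ids}]  x{width} @ {speed}")

Each card sitting behind the riser should report x4 here, while a card in an un-split x16 slot reports x16.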

A challenge is that, while a motherboard may support bifurcation, finding a compatible riser can be difficult as they are pretty rare. There's a guy on the Hard Forum thread who has an online store selling his custom designs. However, I found another site almost by accident, and I really liked the look of the design and a lot of the features, so I invested in a MaxCloudOn.com x16 to 4x4x4x4 riser.

NOTE: The vendor webshop hasn't opened yet, but you can order on eBay.


I really like this approach for a number of reasons:

  • The child PCIe cards are separate from each other, which allows for more flexible placement within the server.
  • The system uses SFF-8087 cables, similar to those used by SAS expanders, so there's good re-usability potential.
  • They are white!

On receipt, I popped it right into one of my PCIe x16 slots, which I'd bifurcated as 4x4x4x4 in the BIOS, and started adding in various cards, including GPUs. Everything worked perfectly, so much so that it was time to start planning my layout.

The Core X9 I have is a generous case, but even so, with all the hardware I need to fit, it was going to be a tight squeeze. I thought about reverting to a setup I had a few years ago, where I'd externalised my StarTech PCIe expander and some hard drives to an external case. The concept worked well at the time, but I had a lot of trailing wires between the two boxes. I wanted to look at this again, but do it right.

I again turned to my Silverstone Grandia GD07 as the external chassis, but this time I gave it a dedicated PSU and eliminated power tethering by using a PSU jumper bridge that permits the secondary system to power up without motherboard connections.

The systems are tethered using a couple of Dual Port Mini SAS SFF-8088 to SAS 36Pin SFF-8087 PCBA adapters and a DVI cable connecting the offboard innards of my trusty StarTech expansion system with its onboard host adapter (re-located to a bifurcated slot).

This quick 'n' dirty diagram shows the general setup of the system across my main system and secondary chassis, with a focus on PCIe slot usage (click to expand to full size):


You'll see that connected to my bifurcation adapter in motherboard PCIe slot 1 are:

1. An old HD5450 GPU.
Since this is connected to the first breakout slot, it's essentially the first GPU enumerated by the system and therefore becomes my unRAID boot display device. Prior to this, I had an RX 570 in this slot. My MB insists on booting with the first GPU it finds, and so it used that card, which I'd later need to pass through to a VM, thus losing the unRAID console. Problem now solved. (There's a quick driver-binding check sketched after this list.)

2. Startech Expansion Adapter
This is the x1 PCIe card that connects to my (now externalised) StarTech expansion board. That board provides legacy PCI slots for my M-Audio cards, which drive whole-house audio, and an x1 PCIe Digital Connection DVB host adapter. All of these are passed through to a 'homecontrol' Win10 VM (headless).

3. AMD RX 570 GPU
This is passed through to an OSX workstation VM (with occasional Win10 substitution). Yes, in this configuration it's only running in an effective PCIe 3.0 x4 slot, but as the VM is not used for graphics-intensive purposes or gaming beyond Cities Skylines, that's not a significant problem.

4. x1 PCIe expansion adapter
Connects via a USB 3.0 cable to a PLX expansion board populated with three discrete USB 3.0 adapters that are each passed through to different VMs (detailed in my previous post). The 4th slot on the expander is now populated by an Intel RES2SV240 SAS expander. This doesn't actually use the PCIe bus; it's simply piggybacking on the slot for power. It's connected to an LSI 9211-8i RAID controller that lives in the MB PCIe2 slot, and between them, the Dual Port Mini SAS adapters and the MB SATA ports provide support for up to 28 SATA drives across both chassis.
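As mentioned in item 1, a quick way to sanity-check which cards the host has kept and which are staged for passthrough is to look at the kernel driver each PCI device is bound to. A minimal Python sketch along those lines (again assuming a Linux/unRAID host and plain sysfs, nothing unRAID-specific):

    #!/usr/bin/env python3
    # Sketch: show which kernel driver each PCI device is currently bound to.
    # Cards staged for passthrough should report vfio-pci; the boot GPU
    # (the HD5450 in my case) stays on its normal host driver.
    from pathlib import Path

    for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
        drv_link = dev / "driver"
        driver = drv_link.resolve().name if drv_link.exists() else "(none)"
        pci_class = (dev / "class").read_text().strip()  # e.g. 0x030000 = VGA controller
        print(f"{dev.name}  class={pci_class}  driver={driver}")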


There's a further RX 570 GPU in motherboard PCIe4 running at x16. PCIe5 (x8) is free for future expansion!

Altogether, there are 14 expansion cards in this system, or 16 if you count the two DuoFlex DVB-S2 daughter-cards associated with the Octopus. Three of these are expanders of one kind or another, so in total there are 11 (or 13) functional expansion cards occupying 3 of the 5 available motherboard slots.

And so far, so stable.

In my research, I've seen occasional references to a limit of 7 PCIe devices in the Threadripper 2xxx architecture. I haven't hit this limit in any practical way yet. I believe this is because 2 of the expanders are essentially PLX devices, so the cards behind those only count as one in this limit calculation. So, in my current configuration, I've only used 6 of my available 7 devices (LSI 9211-8i, RX 570 (1), HD5450, StarTech expansion card, RX 570 (2), PCIe expander).

Overall, I'm delighted with this setup. The MaxCloudOn bifurcation riser is an excellent addition, and unlocks the full potential of the rest of my hardware.


The only fault I have with it is that the downstream PCIe boards, while each carrying a single physical slot, occupy two spaces' worth of width on a PCI mounting bracket. This is fine when using dual-width GPUs, but when populating them with single-width cards it's a little less optimal. A narrower pitch when configuring them side by side in a case would be beneficial.

I've had some fun with the placement of the downstream bifurcated PCIe slots, involving woodwork and case modification. Too much for this article though. Stay tuned, I'll document it in the coming days. Here's a teaser:




Here's a breakdown of my IOMMU groups, courtesy of the VFIO-PCI Config plugin, for anyone interested (ACS override disabled).
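If you'd rather pull the same information from a terminal (over SSH, say) than from the plugin, the groups live under /sys/kernel/iommu_groups. A minimal Python sketch, again just reading sysfs on a Linux host:

    #!/usr/bin/env python3
    # Sketch: print each IOMMU group and the PCI devices it contains, read from sysfs.
    # Roughly the same data the VFIO-PCI Config plugin shows, minus the friendly names.
    from pathlib import Path

    groups = Path("/sys/kernel/iommu_groups")
    for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
        print(f"IOMMU group {group.name}:")
        for dev in sorted((group / "devices").iterdir()):
            vendor = (dev / "vendor").read_text().strip()
            device = (dev / "device").read_text().strip()
            print(f"  {dev.name}  [{vendor}:{device}]")

It won't resolve friendly device names the way the plugin does, but it's enough for a quick check of what's grouped with what.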




6 comments:

Kevin said...

This is what I have been dreaming about doing, and you have only opened up the possibilities further. May I inquire more on how you did this? It will be a ways to go till I can purchase a system like this for experimentation but I will do it! Age 17

MediaServer8 said...

Hi Kevin. Thanks for your comment. It’s hard to know where to start with how I did this. For me, nothing is ever really done. It’s constant evolution. If you look back over some of the older posts on this blog, you’ll see a thread of experimentation and progress - that’s really the truth of how it’s done. Lots of time and making baby steps.

Starting out, focus on second-hand hardware. You'll find a lot of value if you don't insist on being at the bleeding edge. For example, I'm currently running an X399 Threadripper 2950 system, which I will admit costs a lot of money. But I'm 30 years older than you and have a decent job.

However, I learned all about virtualization on my previous setup, an AMD FX8550 system. You can pick up that kind of hardware very cost-effectively these days, and it's perfect for learning. Good luck!

Unknown said...

Hello! First of all, this was a great read and seeing how you maximized your slot efficiency was amazing. I'm looking to build a pretty beefy rendering rig for use with Redshift and Octane so I went ahead and bought the same risers from maxcloudon. However, I seem to have run into a bit of an issue and figured I'd ask you since you've already figured this out. I have a Zenith Extreme (first iteration) board from asus and I tried putting the bottom most slot (x8) in pcie raid mode x4+x4 but when I get into Windows, the whole thing goes black. And to even get to that it takes like 10 minutes to get to the sign in screen. Any ideas as to what might be causing this? The riser I'm using is a x4x4x4x4 and I only have the first two of the 8087 ports filled since it's only an x8 slot on the motherboard. Could that be part of the issue?

MediaServer8 said...

Hi Unknown

You don’t indicate what devices you’ve populated in your expansion slots. I’m guessing GPUs as you reference screen behavior?

It’s not an issue I’ve encountered. But remember, I’m running unRAID as my main OS (Linux), and run Windows in virtual machines. In this context, I have no trouble when I pass through bifurcated devices. I’ve never actually used them directly with Windows as the main OS.

Here are a few things to try (if you haven’t already):

First, the cables that come with the risers are unidirectional as they have proprietary wiring - it’s important to use only the supplied cables, and ensure they are connected the right way around.

Next, I’d try the riser in a different slot - you never know.

The issue of a 4x4x4x4 riser in an x4+x4 slot might be the cause, but I’d be unsure. I’ve only used these in x16 slots as they are the only ones my motherboard can bifurcate.

I’d suggest reaching out to maxcloudon directly. It’s a very small outfit, and I’ve found him to be very responsive and helpful whenever I’ve been in touch. Do come back and let me know how you get on.

Anonymous said...

Hi,

I also bought from maxcloudon recently, but the x16 -> 2x8 version.
I was wondering if you can share some info (as I didn't see it in your blog post) on how you power the additional expansion cards sitting in the child PCIe adapters.

Do you simply rely on the PCIe power from the motherboard, or do you plug in 6-pin PCIe power from the PSU's 6-pin VGA power cables? (I can't seem to find detailed info on maxcloudon's website either.)

thank you!

MediaServer8 said...

Regarding powering the child expanders, I connect a 6-pin feed from the PSU. As I use several, for some I use Molex 4-pin to 6-pin adapters.