I have this huge case in which none of the 5.25" bay slots are being used. My board could fit 3 GPUs without a PCIe extender, if I didn't care that two of them would be really close to each other. That got me thinking: what if I used an extender and put my third GPU in the top bay slot, which conveniently has ventilation at the top?
Then I stumbled on this
https://www.bplus.com.tw/Adapter/PE4F.html
Curious as to people's thoughts?
(05-11-2017, 06:51 PM)elidell Wrote: [ -> ]I have this huge case in which none of the 5.25" bay slots are being used. My board could fit 3 GPUs without a PCIe extender, if I didn't care that two of them would be really close to each other. That got me thinking: what if I used an extender and put my third GPU in the top bay slot, which conveniently has ventilation at the top?
Then I stumbled on this
https://www.bplus.com.tw/Adapter/PE4F.html
Curious as to people's thoughts?
That web page says it is compliant with PCI Express 2.0. Is there a newer model that supports PCI Express 3.0?
Also, it only carries a PCIe x4 signal. It seems to me that you would want PCIe x16 for a GPU.
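To put some rough numbers on the x4 vs. x16 concern, here's a quick back-of-envelope calculation of per-direction PCIe throughput by generation and lane count (using the published line rates and encoding overheads; real-world numbers are a bit lower due to protocol overhead):

```python
# Approximate per-direction PCIe bandwidth.
# (generation: (transfer rate in GT/s, line-coding efficiency))
GENS = {
    "1.1": (2.5, 8 / 10),     # 8b/10b encoding
    "2.0": (5.0, 8 / 10),     # 8b/10b encoding
    "3.0": (8.0, 128 / 130),  # 128b/130b encoding
}

def bandwidth_gbs(gen: str, lanes: int) -> float:
    """Theoretical GB/s for one direction, ignoring packet overhead."""
    gts, eff = GENS[gen]
    return gts * eff * lanes / 8  # GT/s -> Gb/s -> GB/s, times lane count

print(f"PCIe 2.0 x4 : {bandwidth_gbs('2.0', 4):.2f} GB/s")   # ~2.00 GB/s
print(f"PCIe 3.0 x16: {bandwidth_gbs('3.0', 16):.2f} GB/s")  # ~15.75 GB/s
```

So that adapter's x4 link at gen 2 gives you roughly an eighth of what a gen 3 x16 slot can do. Whether that matters depends entirely on how much data the workload actually moves over the bus.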
Instead of this, why not just stack your 3 GPUs after removing the backplate on the two that are close to each other? If thermals are a concern, you can mount a flexible fan inside your case that blows air directly on the two cards that sit right next to each other.
I had a fan like this one on a previous build and it did the job.
https://www.newegg.com/Product/Product.a...6835209044
It mounts with a single screw. I don't know if Antec still makes that fan model so get it while you can.
I wasn't suggesting that I get that particular device, just really using the bay to house a card.
That said, can you point me to a tutorial on removing the backplates? I thought they were there to help draw away heat, like a heat sink. No?
(05-12-2017, 02:14 PM)elidell Wrote: [ -> ]I wasn't suggesting that I get that particular device, just really using the bay to house a card.
Interesting device! I've seen risers but never anything for drive bays.
Thread is getting old, but I thought I would confirm:
Many GPU-intensive applications don't need a fast bus. I can put a modern card in my old Mac Pro, which only has PCIe 1.1 slots, and certain apps run at almost full speed. Of course, if there is a lot of data going back and forth to the card, this won't work well.
Isn't hashcat mostly running in-GPU, with relatively little data going over the bus?
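Pretty much, for mask and rule attacks at least. A rough sketch of the reasoning, using purely illustrative numbers (the candidate rate and per-candidate size below are assumptions, not measurements):

```python
# Back-of-envelope: PCIe traffic if every candidate password had to be
# shipped to the GPU over the bus. Both numbers are illustrative assumptions.
candidates_per_sec = 1e9   # assume the GPU tests 1 billion candidates/s
bytes_per_candidate = 8    # assume ~8 bytes per candidate on the wire

traffic_gbs = candidates_per_sec * bytes_per_candidate / 1e9
print(f"Required bus traffic: {traffic_gbs:.1f} GB/s")  # 8.0 GB/s

# 8 GB/s would swamp a PCIe 2.0 x4 link (~2 GB/s). But with mask or rule
# attacks, hashcat generates the candidates on the GPU itself, so the bus
# carries almost nothing and even a slow x4 link shouldn't bottleneck it.
```

Straight wordlist attacks that stream candidates from the host are the case where a narrow link could hurt, though those tend to be host-limited anyway.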