I'm building a new server for hash cracking. I've done a fair amount of searching to decide which GPU is better; many say the GTX 980 owns them all, but in raw numbers the R9 290X seems to have more to offer. Setting power consumption aside, which is really better in terms of price per hash/second, given the option of running multiple cards?
Thanks to everyone in advance.
R9 280X cards are very cheap and overclock very nicely. If you want to save even more money, look for two or three of the older 7970 models, or maybe 7950 models; you can buy four used 7950s on eBay for the price of one 290X. What motherboard do you want to use for your project? Chassis? Cooling?
The R9 290X has the best perf/$ and excels in single-hash brute-force performance.
The GTX 980 has the best perf/watt and the best multihash performance, and excels at anything involving memory. It also excels at not having shitty drivers.
Have a look at
https://hashcat.net/forum/thread-3949-pos...l#pid22449
Let's take NTLM for example: the 290X gives approximately 21 billion tries and the GTX 980 approximately 15 billion. What would the numbers be for multihash? Like I said, I don't care about power consumption; it's a comparison of perf/price now. If, say, I can get two 290Xs for the price of one 980 and get double or better performance out of them, I'll go for the two 290Xs. And what is memory's role in all this? Yeah, and I agree about the drivers.
And I'm not stuck with those two cards. If I get better performance per dollar from other cards (an ATI 6990 for example, or four 7950s for the price of one 290X with better total performance), I would go for that!
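The perf/price comparison above boils down to hash rate divided by cost. As a rough sketch in shell, using the NTLM speeds quoted above and purely hypothetical street prices (not figures from this thread; substitute real prices before drawing conclusions):

```shell
#!/bin/sh
# Hypothetical prices (USD) and the approximate NTLM speeds quoted
# above (GH/s). Output is megahashes per second per dollar spent.
awk 'BEGIN {
    r290x_price  = 300; r290x_speed  = 21   # assumed used-market price
    gtx980_price = 550; gtx980_speed = 15   # assumed new price
    printf "290X: %.1f MH/s per dollar\n", r290x_speed * 1000 / r290x_price
    printf "980:  %.1f MH/s per dollar\n", gtx980_speed * 1000 / gtx980_price
}'
```

With these assumed prices the 290X comes out well ahead on single-hash perf/$, which is why the multihash and cooling caveats below matter.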
I'm just a hobbyist, and for me the choice comes down to price/performance. As you said, two 290Xs cost the same as one 980 and double your performance. I don't run my system for extended periods of time, so power consumption wouldn't be an issue either. The driver differences aren't enough to outweigh the cost.
My choice would have to be the 290x
(01-06-2015, 09:20 PM)Serpent Wrote: lets take the NTLM for example, the 290x gives approximately 21 billion tries and the GTX 980 approximately 15 billion, what would the numbers be for multihash ?
Since you specified NTLM, here are some NTLM benchmarks:
Code:
Attack                            R9 290X    GTX 980   (MH/s)
Brute force, 1 hash               22957.6    21521.5
Brute force, 10k hashes           11489.4    12101.2
Brute force, 1M hashes             1986.3     3113.7
rockyou+d3ad0ne, 1 hash            5676.2     5974.2
rockyou+d3ad0ne, 10k hashes        4443.1     4737.0
rockyou+d3ad0ne, 1M hashes         1872.7     2879.1
rockyou combinator, 1 hash         8913.6     8759.8
rockyou combinator, 10k hashes     7345.0     8426.4
rockyou combinator, 1M hashes      1948.5     2840.7
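The multihash trend is easier to see as ratios. A quick check of the 1M-hash rows, using the numbers from the table above (units as reported there):

```shell
#!/bin/sh
# GTX 980 speedup over the R9 290X at 1M hashes, computed from the
# benchmark numbers quoted above.
awk 'BEGIN {
    printf "brute force, 1M hashes: 980 is %.2fx the 290X\n", 3113.7 / 1986.3
    printf "rules, 1M hashes:       980 is %.2fx the 290X\n", 2879.1 / 1872.7
    printf "combinator, 1M hashes:  980 is %.2fx the 290X\n", 2840.7 / 1948.5
}'
```

At single-hash the 290X leads slightly, but by 1M hashes the 980 is roughly 1.5x faster across all three attack modes.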
GTX 980 easily keeps pace with -- and in several cases outperforms -- the 290X, even though it has 768 fewer cores and draws only half the power of the 290X.
(01-06-2015, 09:20 PM)Serpent Wrote: like i said i dont care for the power consumption
You should. Power == heat. Heat is difficult to deal with, especially in multi-GPU setups, and especially if you buy OEM-design cards.
(01-06-2015, 09:20 PM)Serpent Wrote: its a comparison between perf/price now,that is if i get 2 290x for the price of one 980 for example and i get double or better performance in the 2 290x ill go for 2 290x.
Sure, if money is a factor and you can actually power & cool two GPUs, then go for 2x 290X.
Thanks a lot, guys, especially epixoip; your answers in all the threads are perfect.
But what I'm confused about is the benchmarks atom gave me (hashcat.net/forum/thread-3687.html): the numbers differ by about 5 billion from what you said for NTLM, for example. Which benchmark should I believe?
atom's benchmarks are with stock clocks & without tuning PowerMizer. The benchmarks I provided above are with a mild & usable overclock on both the 290X and 980, with optimal PowerTune and PowerMizer settings respectively.
For the 290X:
Code:
od6config --set core=1050,mem=1375,power=50,temp=95,fan=100
For the 980:
Code:
nvidia-settings -a GPUPowerMizerMode=1 \
-a GPUFanControlState=1 \
-a GPUCurrentFanSpeed=100 \
-a GPUGraphicsClockOffset[3]=350 \
-a GPUMemoryTransferRateOffset[3]=400
For Nvidia you'll have to add the following options to xorg.conf to make that nvidia-settings command work:
Code:
Option "Coolbits" "12"
Option "RegistryDwords" "PerfLevelSrc=0x2222"
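For context, those options go inside the Device section of xorg.conf; a minimal sketch (the Identifier is an arbitrary placeholder, and your Device section may carry additional lines for your setup):

```
Section "Device"
    Identifier     "GPU0"
    Driver         "nvidia"
    Option         "Coolbits" "12"
    Option         "RegistryDwords" "PerfLevelSrc=0x2222"
EndSection
```

With multiple cards, each needs its own Device section carrying these options.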
When it's 2018 and there are 1080 Tis and Titan Xps...