This is a static archive of the hashcat.net forum from October 2020.

hashcat Forum

Full Version: NVidia RTX 2080
Pages: 1 2 3 4 5 6 7 8 9 10
The new Tesla T4, based on the same TU104 chip as the upcoming RTX 2080, delivers its ~8 TFLOPS at a sensationally low 75 W TDP. Since the only differences from the RTX 2080 are a few more (activated) shaders and the raytracing cores, the latter must account for most of the difference in power draw, given that the RTX 2080 FE has a TDP of 225 W.
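As a rough illustration of the perf-per-watt gap described above, here's a small sketch. The RTX 2080 FE's ~10 TFLOPS FP32 figure is my assumption for illustration, not a number from the post:

```python
# Rough perf-per-watt comparison between the Tesla T4 and the RTX 2080 FE,
# using the TDP figures quoted above. The 2080 FE's FP32 throughput
# (~10 TFLOPS) is an assumed figure, not from the post.
cards = {
    "Tesla T4":    {"tflops": 8.1,  "tdp_w": 75},
    "RTX 2080 FE": {"tflops": 10.0, "tdp_w": 225},  # assumed FP32 figure
}

for name, c in cards.items():
    gflops_per_watt = c["tflops"] * 1000 / c["tdp_w"]
    print(f"{name}: {gflops_per_watt:.0f} GFLOPS/W")
```

On these (partly assumed) numbers the T4 comes out at more than twice the GFLOPS per watt of the 2080 FE, which is the point the poster is making.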

This could mean hashcat will no longer push NVidia GPUs to their power limit, since the raytracing and tensor cores are not being used.
NVidia seems to be introducing binning with the new Turing GPUs. They select chips by energy consumption and overclocking headroom and mark the better ones with an A, e.g. TU104A. OEMs may factory-overclock the A-chips, but this will be forbidden for the other, cheaper ones. Manual overclocking by users will still be possible, although A-chips will usually be much more promising.

After reading more articles and leaked benchmarks, here are my speculated MD5 speeds:
GTX 1080: 27.5 GH/s
RTX 2070 FE: ~30-32 GH/s
GTX 1080 Ti: 37.4 GH/s
RTX 2080 FE: ~40-43 GH/s
Tesla V100: 56.6 GH/s
RTX 2080 Ti FE: ~55-58 GH/s
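The speculated numbers above imply roughly a 50% generational uplift. A quick sketch, using midpoints of the speculated ranges (the midpoints are my choice, not the poster's):

```python
# Implied generational uplift from the speculated MD5 speeds (GH/s).
# Midpoints are used for the "~X-Y" ranges quoted above.
speeds = {
    "GTX 1080": 27.5,        # measured
    "GTX 1080 Ti": 37.4,     # measured
    "RTX 2080 FE": 41.5,     # midpoint of speculated ~40-43
    "RTX 2080 Ti FE": 56.5,  # midpoint of speculated ~55-58
}

for new, old in [("RTX 2080 FE", "GTX 1080"),
                 ("RTX 2080 Ti FE", "GTX 1080 Ti")]:
    uplift = (speeds[new] / speeds[old] - 1) * 100
    print(f"{new} vs {old}: +{uplift:.0f}%")
```

Both comparisons land at roughly +51%, which is why the later posts call these estimates exaggerated compared with the ~30% gaming deltas.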
After reading a few articles regarding Turing architecture, I can say for sure that hashcat can't use RT cores or Tensor cores.

RT cores leverage FP numbers, which are useless to hashcat, and Tensor cores leverage INT4 and INT8 low-precision integers, which are useless for hashcat too.

Regarding CUDA performance I find your speculated speeds to be extremely exaggerated.

I looked at real gaming benchmarks from the Final Fantasy online benchmark database, and the performance differences are:

2080 Ti vs 1080 Ti ~ 30%

2080 vs 1080 ~ 30%

2080 vs 1080 Ti ~ 5%

So, I expect the raw speed difference in hashcat to be even lower than this, because new architectures include optimizations aimed specifically at games, which don't help raw execution speed.

After all this, I maintain that the RTX cards are one of nVidia's biggest failures of all time, for two reasons: performance and price.

What a disastrous combination for RTX cards!
Yes, it's still only ...
(09-18-2018, 01:21 PM)Flomac Wrote: (...) speculated (...)

No real results from the Hashcat guys. We need to wait :)
(09-18-2018, 07:55 PM)Nikos Wrote: INT4 and INT8 low precision integers

wtf does that even mean?
INT4 = 4-bit precision integers

INT8 = 8-bit precision integers

No different in principle from FP16 or FP32.
My speculation is also based on the V100, which exists in real hardware and has been benchmarked many times. The chip designs of Volta and Turing seem to be nearly identical, except for the missing RT cores on the V100 and the bigger L2 cache on Turing.

Let's see when the first hashcat benchmarks come out, and then you can blame me. But last time, from Maxwell to Pascal, I guessed pretty accurately, even a bit too conservatively in the end ;)
Looking at the prices now, we can get two GTX 1080 Tis for the same price as one RTX 2080 Ti, so I think it's better to buy 2x 1080 Ti.
In typical games the 2080 Ti is better than the 1080 by 10-30%. As expected. Skipping.
Benchmarks are finally here!

https://twitter.com/Chick3nman512/status...2338024449

https://gist.github.com/Chick3nman/d03c0...4256801a9e

It's performing around the speed of a 1080 Ti, as expected. This is with release-day drivers, though, so it's definitely not running at full tilt and could see slight performance gains as new drivers come out and as hashcat's tuning is corrected.