
Expected hashrates with -m 22000
#1
Hi all.

I'm unsure whether I'm able to push higher hashrates than I'm getting at the moment. The wordlists I'm using range from roughly 20 million to over 1 billion words, but wordlist size does not seem to affect hashrates much, and I'm only trying to crack a single hash.

I'm primarily referring to the hashrates I get when running these commands:

Code:
hashcat -a 0 -m 22000 myhccapx.hccapx mywordlist.txt

or

Code:
hashcat -a 0 -m 22000 mypmkid.16800 mywordlist.txt

or using a mask attack, for instance:

Code:
hashcat -a 3 -m 22000 mypmkid.16800 ?d?d?d?d?d?d?d?d

I'm aware of -w 3 and -O, but I'm more concerned with whether I'm producing enough work to keep the GPU fully utilized. The above commands produce hashrates in the range of ~240-250 kH/s.
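For reference, a run combining those two flags with the first command above might look like this (just a sketch; note that -O switches to the optimized kernels, which limit the maximum supported password length):

Code:
hashcat -a 0 -m 22000 -w 3 -O myhccapx.hccapx mywordlist.txt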

My main question is whether I can reach higher hashrates by feeding input differently, by applying rules, or by doing something else entirely. Or maybe I'm already doing it right and am simply limited by the hardware, and just need someone knowledgeable to confirm that.

I've experimented with:

Code:
hashcat --stdout mywordlist.txt -r rules/best64.rule | hashcat -m 22000 myhccapx.hccapx

This seemed to produce slightly lower hashrates (~210 kH/s).

Output of hashcat -I:

Code:
CUDA Info:
==========

CUDA.Version.: 11.0

Backend Device ID #1 (Alias: #2)
  Name...........: GeForce GTX 1650 SUPER
  Processor(s)...: 20
  Clock..........: 1755
  Memory.Total...: 3908 MB
  Memory.Free....: 3553 MB

OpenCL Info:
============

OpenCL Platform ID #1
  Vendor..: NVIDIA Corporation
  Name....: NVIDIA CUDA
  Version.: OpenCL 1.2 CUDA 11.0.228

  Backend Device ID #2 (Alias: #1)
    Type...........: GPU
    Vendor.ID......: 32
    Vendor.........: NVIDIA Corporation
    Name...........: GeForce GTX 1650 SUPER
    Version........: OpenCL 1.2 CUDA
    Processor(s)...: 20
    Clock..........: 1755
    Memory.Total...: 3908 MB (limited to 977 MB allocatable in one block)
    Memory.Free....: 3520 MB
    OpenCL.Version.: OpenCL C 1.2
    Driver.Version.: 450.66

Output of hashcat -m 22000 -b:

Code:
CUDA API (CUDA 11.0)
====================
* Device #1: GeForce GTX 1650 SUPER, 3557/3908 MB, 20MCU

OpenCL API (OpenCL 1.2 CUDA 11.0.228) - Platform #1 [NVIDIA Corporation]
========================================================================
* Device #2: GeForce GTX 1650 SUPER, skipped

Benchmark relevant options:
===========================
* --optimized-kernel-enable

Hashmode: 22000 - WPA-PBKDF2-PMKID+EAPOL (Iterations: 4095)

Speed.#1.........:  278.2 kH/s (73.41ms) @ Accel:32 Loops:128 Thr:1024 Vec:1

Let me know if you need the output of any other command; I've tried to include the output I thought would be relevant.
#2
If your GPU is consistently showing 100% utilization, you're supplying enough work to keep it busy. For a slower hash like the WPA family, speeds like you're showing aren't unreasonable, but there may be a little more room for improvement.
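One way to check that (assuming an NVIDIA card with the standard driver tools; the thread doesn't say how utilization was measured) is to watch nvidia-smi in a second terminal while hashcat is running:

Code:
watch -n 1 nvidia-smi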

With a straight wordlist, I get ~350 kH/s on GTX 1080s, but the GPUs aren't consistently at 100%. Supplying a small rules file like best64.rule gets me ~430 kH/s and consistent 100% usage.
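As a sketch of that rule-based invocation, using the same placeholder file names as the commands earlier in the thread:

Code:
hashcat -a 0 -m 22000 myhccapx.hccapx mywordlist.txt -r rules/best64.rule

Applying rules with -r this way lets hashcat expand candidates on the device instead of pushing every candidate over stdin, which is why it tends to be faster than the --stdout pipe.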
#3
(09-23-2020, 08:13 AM)royce Wrote: If your GPU is consistently showing 100% utilization, you're supplying enough work to keep it busy. For a slower hash like the WPA family, speeds like you're showing aren't unreasonable, but there may be a little more room for improvement.

With a straight wordlist, I get ~350 kH/s on GTX 1080s, but the GPUs aren't consistently at 100%. Supplying a small rules file like best64.rule gets me ~430 kH/s and consistent 100% usage.

Thanks a lot for your assistance, royce. The GPU is consistently showing 100% utilization, so I guess it is what it is. Applying no rules gave me ~244 kH/s, whereas applying rules gave me ~254 kH/s, so only a tiny difference.

It might just be the limit for this GPU in -m 22000.