Download is here:
https://hashcat.net/oclhashcat/
Version 1.36 is a wild mix of nice updates
Still, all oclHashcat versions back to v1.33 share the same AMD driver dependencies. If you have a working oclHashcat v1.33 or newer, then v1.36 will work too, without a driver update.
Most important changes:
- Added new hash mode -m 11300 = Bitcoin/Litecoin wallet.dat
- Added new hash mode -m 11600 = 7-Zip
- Fixed a bug in NVidia multihash kernels: MD5, NTLM, IPB2
- The parameters --show / --left now work with both halves of LM hashes (if they are 32 hex chars long)
- Optimized final round flushing (reduces the time spent in the last few percent of progress, where speed drops)
- Optimized rejection handling (for example, passwords longer than 8 characters when cracking DEScrypt, or shorter than 8 when cracking WPA/WPA2, etc.)
- The speed in the status display is no longer divided by the number of uncracked salts
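As a side note, the length-based rejection mentioned above can be sketched as a simple host-side filter. This is illustrative only, not hashcat's actual code; the function name and mode labels are made up for the example:

```shell
# Illustrative sketch of candidate rejection, not hashcat's real code.
# DEScrypt only uses the first 8 characters of a password, so longer
# candidates are redundant; WPA/WPA2 passphrases must be 8-63 chars.
keep_candidate() {
  pw="$1"; mode="$2"
  len=${#pw}
  case "$mode" in
    descrypt) [ "$len" -le 8 ] ;;                       # reject > 8
    wpa)      [ "$len" -ge 8 ] && [ "$len" -le 63 ] ;;  # reject < 8 or > 63
    *)        true ;;                                   # no restriction
  esac
}
```

Rejecting such candidates on the host means the GPU never wastes cycles hashing words that cannot possibly match.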
Don't forget to visit the new FAQ pages; they're worth a read:
https://hashcat.net/wiki/doku.php?id=fre..._questions
Full changelog v1.35 -> v1.36
Quote:
Type: Driver
File: Kernel
Desc: Added support for NV ForceWare 346.59 driver
Type: Feature
File: Kernel
Desc: Added new hash mode -m 11400 = SIP digest authentication (MD5)
Trac: #539
Type: Feature
File: Kernel
Desc: Added new hash mode -m 11300 = Bitcoin/Litecoin wallet.dat
Trac: #434
Type: Feature
File: Kernel
Desc: Added new hash mode -m 11500 = CRC32
Trac: #532
Type: Feature
File: Kernel
Desc: Added new hash mode -m 11600 = 7-Zip
Trac: #532
Type: Feature
File: Host
Desc: Optimized final round flushing (reduces the time spent in the last few percent of progress, where speed drops)
Type: Feature
File: Host
Desc: Optimized rejection handling (for example, passwords longer than 8 characters when cracking DEScrypt, etc.)
Type: Feature
File: Host
Desc: Added parameter --bitmap-min to help loading huge hashlists faster
Type: Feature
File: Host
Desc: In the status display, if a single hash is longer than 40 chars, truncate it and append "..."
Type: Change
File: Host
Desc: Renamed -m 3810 = md5($salt.$pass.$salt) to -m 3800 = md5($salt.$pass.$salt)
Type: Change
File: Host
Desc: Renamed -m 4710 = sha1($salt.$pass.$salt) to -m 4900 = sha1($salt.$pass.$salt)
Type: Change
File: Host
Desc: The speed in the status display is no longer divided by the number of uncracked salts
Type: Change
File: Host
Desc: If all hashes bound to a salt are cracked, reduce the progress count of one salt from the total progress
Type: Change
File: Host
Desc: --show/--left now works with both halves of -m 3000 = LM hashes if they are 32 hex chars long
Trac: #448
Type: Bug
File: Kernels
Desc: Fixed a bug in NVidia multihash kernels: MD5, NTLM, IPB2
Type: Bug
File: Host
Desc: Added additional checks for hexadecimal values supplied in masks by using the --hex-charset switch
Trac: #610
Type: Bug
File: Host
Desc: Fixed a bug in NVidia workload balancing
Type: Bug
File: Host
Desc: Fixed a bug when a single rule is applied to each word from the left dictionary
Type: Bug
File: Host
Desc: Fixed a problem with "," character escaping in .hcmask files
Type: Bug
File: Host
Desc: Fixed a bug in -m 101 that showed a wrong cracked plaintext
Thank you to everyone involved
Nice work!
I was just about to ask when this would be available. I was anticipating RAR support, but 7z support is still great!
Now, if only this .rar file would repack itself as .7z :p
Running the subset of benchmarks I use to avoid cudaHashcat crashing on Windows, I get a funny result for hash-type=10700.
The tail end of v1.35 gives:
>cudaHashcat64.exe --benchmark --hash-type=10600
cudaHashcat v1.35 starting in benchmark-mode...
Device #1: GeForce GTX 970, 4096MB, 1240Mhz, 13MCU
Hashtype: PDF 1.7 Level 3 (Acrobat 9)
Workload: 1024 loops, 256 accel
Speed.GPU.#1.: 1155.8 MH/s
Started: Sat Apr 25 16:21:03 2015
Stopped: Sat Apr 25 16:21:19 2015
>cudaHashcat64.exe --benchmark --hash-type=10700
cudaHashcat v1.35 starting in benchmark-mode...
Device #1: GeForce GTX 970, 4096MB, 1240Mhz, 13MCU
Hashtype: PDF 1.7 Level 8 (Acrobat 10 - 11)
Workload: 64 loops, 8 accel
Speed.GPU.#1.: 14537 H/s
Started: Sat Apr 25 16:21:19 2015
Stopped: Sat Apr 25 16:21:35 2015
But now in v1.36 I get:
>cudaHashcat64.exe --benchmark --hash-type=10600
cudaHashcat v1.36 starting in benchmark-mode...
Device #1: GeForce GTX 970, 4096MB, 1240Mhz, 13MCU
Hashtype: PDF 1.7 Level 3 (Acrobat 9)
Workload: 1024 loops, 256 accel
Speed.GPU.#1.: 1163.8 MH/s
Started: Sat Apr 25 16:18:41 2015
Stopped: Sat Apr 25 16:18:58 2015
>cudaHashcat64.exe --benchmark --hash-type=10700
cudaHashcat v1.36 starting in benchmark-mode...
Device #1: GeForce GTX 970, 4096MB, 1240Mhz, 13MCU
Hashtype: PDF 1.7 Level 8 (Acrobat 10 - 11)
Workload: 64 loops, 8 accel
Speed.GPU.#1.: 0 H/s
Started: Sat Apr 25 16:18:58 2015
Stopped: Sat Apr 25 16:19:24 2015
Is there some problem with the NVIDIA kernel or some other problem?
(04-25-2015, 11:28 PM)Kgx Pnqvhm Wrote: Is there some problem with the NVIDIA kernel or some other problem?
Nah, default workload is just too large.
Some quirks of the current 7z implementation:
- Not all hashes produced by 7z2john can be loaded (e.g. archives with many files and/or different compression methods).
- Since the hash itself doesn't contain any real data for verification, the cat won't stop cracking even after a password has been found, because it treats these passwords as pseudocollisions. If you get a password like this, you'll have to verify it manually against your archive.
- Archives with unencrypted file names cannot currently be cracked.
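For point 2, one way to manually verify a candidate password is to test the real archive with the 7z CLI: "7z t" attempts to decrypt and check the archive and exits non-zero on a wrong password. The helper function below is a hypothetical sketch (it assumes the p7zip "7z" binary is installed; the archive name is illustrative):

```shell
# Hypothetical helper: check whether a cracked candidate actually opens
# the archive, to weed out the pseudocollisions described above.
# "7z t" tests archive integrity; a wrong password makes it exit non-zero.
verify_7z_password() {
  archive="$1"
  candidate="$2"
  if 7z t -p"$candidate" "$archive" > /dev/null 2>&1; then
    echo "password OK: $candidate"
  else
    echo "pseudocollision, keep cracking: $candidate"
  fi
}
```

You would call it as, for example, verify_7z_password secret.7z hunter2 after each candidate the cat reports.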
Points 1 and 2 shouldn't be a problem in the future, because I've developed a Perl script (7z2hashcat.pl) that should be able to deal with those archive types too, i.e. it does not have the limitations that 7z2john.py has...
This tool is currently in the testing phase (by me and atom), but I plan to release it on GitHub soon.
I also think that with this new info and knowledge we could help improve 7z2john too... so maybe I will also provide a "patch" for 7z2john in the future...
(btw further discussion about 7z2hashcat should probably have its own forum thread in the future... maybe after we release 7z2hashcat!?)
(04-26-2015, 06:03 AM)epixoip Wrote: (04-25-2015, 11:28 PM)Kgx Pnqvhm Wrote: Is there some problem with the NVIDIA kernel or some other problem?
Nah, default workload is just too large.
Running the benchmarks in previous versions worked, so is this something the hashcat devs should change, so that benchmarks can be compared fairly?
To re-run the benchmarks for 10700 what should I use?
Use -m 10700 -b --benchmark-mode 0 -u 1024 -n 1, then increase the -n value to find the sweet spot.
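That advice can be scripted as a small sweep. This is a hypothetical helper: the binary name and flags come from this thread, but the sequence of -n values to try is just a guess, and on a real system you would run the printed commands and compare the reported speeds:

```shell
# Hypothetical sweep for finding the -n (accel) sweet spot for -m 10700.
# This only prints the commands to run; execute them one by one and keep
# the -n value with the highest reported Speed.GPU line.
sweep_accel() {
  for n in 1 2 4 8 16 32; do
    echo "cudaHashcat64.exe -m 10700 -b --benchmark-mode 0 -u 1024 -n $n"
  done
}
sweep_accel
```

Starting from -n 1 and doubling keeps each run short while still bracketing the sweet spot quickly.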