

oclHashcat-Lite and ?h mask
Hi,

I have been fiddling around with different character sets in hashcat. When I use characters from the ?h character set, hashcat returns incorrect password values.

As an example:

Cracking an NTLM password: the password is êêêêê and the resulting NTLM hash is fdc960a5a41047a551af345d9a273293. When I run this through oclHashcat-lite with the following arguments:
cudaHashcat-lite64.exe fdc960a5a41047a551af345d9a273293 --hash-type=1000 --pw-min=5 --pw-max=5 -1 ?h ?1?1?1?1?1

It returns the password as follows:

fdc960a5a41047a551af345d9a273293:ΩΩΩΩΩ

I feel like I am overlooking something basic.
It's just the way your terminal is representing those characters.
Here's how to get it straight in this case.

Code:
$ echo ΩΩΩΩΩ | iconv -t cp437 | iconv -f cp1252
êêêêê

I always wondered how you tell hashcat what codepage to assume for input when converting to Unicode. What if the password was ккккк (in Russian) or κκκκκ (in Greek)? They look the same but they are totally different in UTF-16. And like ΩΩΩΩΩ and êêêêê, both can be represented by 0xEA in some 8-bit codepage.
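For illustration (assuming a UTF-8 terminal and GNU iconv), the same single byte 0xEA decodes to a different character under each of those codepages:

Code:
$ printf '\xea' | iconv -f cp1252 -t utf-8
ê
$ printf '\xea' | iconv -f cp437 -t utf-8
Ω
$ printf '\xea' | iconv -f cp1251 -t utf-8
к
$ printf '\xea' | iconv -f cp1253 -t utf-8
κ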
hashcat cheats for Unicode algorithms like NTLM; it just inserts zeros.
(09-05-2013, 02:46 PM)atom Wrote: hashcat cheats for Unicode algorithms like NTLM; it just inserts zeros.

Ooh, that doesn't sound like a proper solution :D Please, could you explain why you made it this way?
All the GPGPU cracking tools that support fast hashes do it this way, because it does no harm for the ASCII-based 95-character keyspace. Doing it this way gives a lot more speed, but it's hard to put a number on it.
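To sketch what that cheat actually computes: NTLM is MD4 over the UTF-16LE-encoded password, and zero-insertion turns the matching ?h candidate (0xEA five times) into the byte string ea 00 ea 00 ea 00 ea 00 ea 00. Assuming an OpenSSL build with MD4 enabled (OpenSSL 3 only ships it in the legacy provider), this should reproduce the hash from the first post:

Code:
$ printf '\xea\x00\xea\x00\xea\x00\xea\x00\xea\x00' | openssl dgst -md4
MD4(stdin)= fdc960a5a41047a551af345d9a273293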
(09-05-2013, 08:27 PM)Kuci Wrote: Ooh, that doesn't sound like a proper solution

Well, it is 100% proper for converting full 8-bit ISO-8859-1 -> UTF-16. Just not for any other 8-bit encoding.
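Both halves of that can be shown with iconv (again assuming GNU iconv): converting 0xEA from ISO-8859-1 to UTF-16LE yields exactly the zero-extended byte, while the same byte read as cp1251 (к) maps to a completely different code unit:

Code:
$ printf '\xea' | iconv -f iso-8859-1 -t utf-16le | od -An -tx1
 ea 00
$ printf '\xea' | iconv -f cp1251 -t utf-16le | od -An -tx1
 3a 04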

FWIW I have a proper UTF-8->UTF-16 implementation on GPU (as well as conversion from some other codepages), in NTLMv2 in JtR. It was mostly an experiment. The trick is it's only used when needed. It's trivial code, and I'm sure it's slow, but right now that format is slow anyway because it lacks password generation on GPU.
You can trick a bit with NTLM by using MD4 instead of NTLM and using the tricks explained in rurapenthe's latest blog post: https://www.rurapenthe.me/2013/09/crackin...guage.html

Just note that you'd push zero bytes whenever required. It's the idea that counts.
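A rough sketch of what that looks like (hash.txt is a hypothetical file holding the NTLM hash; assuming your oclHashcat-plus build supports -m 900 for raw MD4 and --hex-charset, as used in that blog post): put the target codepage byte(s) in one custom charset, 0x00 in another, and interleave them in the mask so every odd position is a zero byte:

Code:
cudaHashcat-plus64.exe -m 900 -a 3 --hex-charset -1 ea -2 00 hash.txt ?1?2?1?2?1?2?1?2?1?2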