Hello,
I've got a bunch of plains. About 1,500 of them are 8 characters long, mostly random, issued at account creation and never changed since. The other 12,000 plains are user-picked after account creation, ranging from 9 to 49 characters long.
What would be the best tool(s) to derive interesting things from these plains, like rules/masks/statistics? Anything that would help me optimize cracking sessions. (These plains are not the end goal: I plan to run a password audit by cracking their hash counterparts, and I've got about 24,000 more hashed passwords from the same source.)
I've given PACK a try.
I've also tried Pipal and Passpal. Pipal is interesting, but on the vocabulary side it fails to split passphrases into dictionary words. For 10-year-old dumps that's not so important, but for recent dumps it's a problem.
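To illustrate what I mean, here is a minimal Python sketch of that kind of splitting, as a greedy longest-match against a wordlist (the wordlist path and the example plain are hypothetical):
Code:
def load_words(path="words.txt"):
    # hypothetical wordlist, one word per line
    with open(path, encoding="utf-8", errors="ignore") as f:
        return {w.strip().lower() for w in f if w.strip()}

def split_passphrase(plain, words, max_len=20):
    # greedy longest-match segmentation of a plain into dictionary words
    plain = plain.lower()
    parts, i = [], 0
    while i < len(plain):
        for j in range(min(len(plain), i + max_len), i, -1):
            if plain[i:j] in words:
                parts.append(plain[i:j])
                i = j
                break
        else:
            # no dictionary word starts here: keep the character as-is
            parts.append(plain[i])
            i += 1
    return parts

# split_passphrase("correcthorsebattery1", load_words())
# -> ['correct', 'horse', 'battery', '1'] if those words are in the list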
thanks,
pat
Try expander from hashcat-utils.
Normally what I do is expand the plains I've already cracked with expander, then attack the hashes using the expanded output, either straight, with combinator, rules, etc.
See
hashcat-utils wiki
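Roughly, the idea is to cut each cracked plain into short substrings and feed the deduplicated result back in as a wordlist. A minimal Python sketch of that idea (the real expander is a C utility with its own fixed length cap, so treat the cap here as an assumption):
Code:
import sys

MAX_SUB = 4  # assumed cap on substring length; the real tool has its own compile-time limit

def expand(word, max_sub=MAX_SUB):
    # yield every substring of the word up to max_sub characters long
    for length in range(1, min(len(word), max_sub) + 1):
        for start in range(len(word) - length + 1):
            yield word[start:start + length]

seen = set()
for line in sys.stdin:
    for sub in expand(line.rstrip("\n")):
        if sub not in seen:
            seen.add(sub)
            print(sub)
Something like python expand_sketch.py < cracked.plains > expanded.dict would then give you a dictionary to use straight, with rules, or with combinator.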
Best regards,
Azren
(06-12-2016, 09:05 PM)patpro Wrote: I've given PACK a try.
I've also tried Pipal and Passpal. Pipal is interesting, but on the vocabulary side it fails to split passphrases into dictionary words. For 10-year-old dumps that's not so important, but for recent dumps it's a problem.
thanks,
pat
I had the same issue, and PACK was my solution. Why didn't you find it useful in your case?
(06-12-2016, 11:23 PM)azren Wrote: Try expander from hashcat-utils.
Normally what I do is expand the plains I've already cracked with expander, then attack the hashes using the expanded output, either straight, with combinator, rules, etc.
Thanks azren. That's not exactly what I'm looking for, but any tool/helper is welcome.
Full disclosure: I don't use hashcat often because I'm running almost only FreeBSD boxes (no GPU) and old Mac OS X boxes (lame GPUs). I'm currently building a multi-GPU box to run Linux and Windows, and hopefully I'll start using hashcat alongside JtR soon.
(06-13-2016, 01:23 PM)kiara Wrote: (06-12-2016, 09:05 PM)patpro Wrote: I've given PACK a try.
I've also tried Pipal and Passpal. Pipal is interesting, but on the vocabulary side it fails to split passphrases into dictionary words. For 10-year-old dumps that's not so important, but for recent dumps it's a problem.
thanks,
pat
I had the same issue, and PACK was my solution. Why didn't you find it useful in your case?
I had the feeling that PACK was not enough, but after trying again yesterday I understood that I have a dictionary problem, and that PACK doesn't work as it should on my FreeBSD servers.
PACK is definitely a great tool, but any other tool is good to know about, and more importantly, PACK is a bit old now.
(06-14-2016, 06:43 AM)patpro Wrote: PACK is definitely a great tool, but any other tool is good to know about, and more importantly, PACK is a bit old now.
Old but working great. You can use policygen to create mask files; to know which masks to make, you can use StatsGen.
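For the statistics side, the core of what StatsGen computes is simple enough to sketch: map each character of a plain to a hashcat mask token and count how often each mask shows up. A minimal Python sketch (the input file name is hypothetical, and everything non-alphanumeric is lumped under ?s, which the real tool handles more carefully):
Code:
from collections import Counter
import string

def mask_of(plain):
    # translate a plain into a hashcat-style mask (?l ?u ?d ?s)
    out = []
    for c in plain:
        if c in string.ascii_lowercase:
            out.append("?l")
        elif c in string.ascii_uppercase:
            out.append("?u")
        elif c in string.digits:
            out.append("?d")
        else:
            out.append("?s")  # simplification: all other chars end up here
    return "".join(out)

counts = Counter()
with open("plains.txt", encoding="utf-8", errors="ignore") as f:
    for line in f:
        plain = line.rstrip("\n")
        if plain:
            counts[mask_of(plain)] += 1

total = sum(counts.values())
for mask, n in counts.most_common(25):
    print(f"{mask}  {n}  ({100.0 * n / total:.2f}%)")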
By the way, I've got a problem with PACK, though it's not directly caused by PACK. Depending on your OS, packaging, etc., enchant can come with myspell, hunspell, aspell, or other backends, making results highly unreliable across OSes.
On the same plain list, here is the result of rulegen.py on FreeBSD (enchant bound to aspell):
Code:
[*]Top 10 rules
[+] : - 4675 (1.00%)
[+] T0 - 2272 (0.00%)
[+] ] - 575 (0.00%)
[+] $1 - 315 (0.00%)
[+] l ] - 221 (0.00%)
[+] l $1 - 199 (0.00%)
[+] $e - 184 (0.00%)
[+] i4i o5l - 172 (0.00%)
[+] o4i $l - 170 (0.00%)
[+] i2u o5u - 149 (0.00%)
[*]Top 10 words
[+] marine - 125 (0.00%)
[+] solely - 123 (0.00%)
[+] dodo - 108 (0.00%)
[+] dodos - 106 (0.00%)
[+] lulu - 99 (0.00%)
[+] couch - 99 (0.00%)
[+] sesame - 91 (0.00%)
[+] bidon - 90 (0.00%)
[+] lollop - 88 (0.00%)
[+] sole - 88 (0.00%)
And here is the result on OS X, with enchant bound to myspell:
Code:
[*] Top 10 rules
[+] : - 4579 (2.00%)
[+] T0 - 1907 (0.00%)
[+] ] - 547 (0.00%)
[+] $1 - 297 (0.00%)
[+] o3d o4o o5u - 225 (0.00%)
[+] l ] - 217 (0.00%)
[+] l $1 - 182 (0.00%)
[+] o3l o4o o5u - 175 (0.00%)
[+] o4i o5l - 168 (0.00%)
[+] o0m - 150 (0.00%)
[*]Top 10 words
[+] Toulouse - 123 (0.00%)
[+] julienne - 121 (0.00%)
[+] solely - 120 (0.00%)
[+] marine - 111 (0.00%)
[+] double - 101 (0.00%)
[+] doughy - 97 (0.00%)
[+] douser - 97 (0.00%)
[+] bidon - 90 (0.00%)
[+] solemn - 90 (0.00%)
[+] sesame - 89 (0.00%)
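For reference, here is a quick way to check which backend pyenchant actually picked on a given box (PACK's rulegen relies on enchant for its word detection; the en_US tag below is just an example language):
Code:
import enchant

broker = enchant.Broker()
# list every spelling provider pyenchant can see on this system
print("available providers:", [p.name for p in broker.describe()])

# ask which provider ends up serving a given language tag
d = broker.request_dict("en_US")
print("en_US served by:", d.provider.name)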