
hashcat Forum

Getting Stats on Rules and Masks for Analysis
I went through the wiki again just now to make sure this isn't already covered, and I couldn't find it specifically answered (not to say it isn't there). I'm sure many of you do this regularly. To date I've been doing analysis on known passwords to generate rules and masks, but I haven't been analyzing the efficiency of those rules and masks themselves.

So what I'm wondering is whether HC (or another tool with a compatible rule/mask format) can generate a stats file, ideally CSV, on the performance of a rule or mask against known passwords.

Instead of using the rules and masks to generate hashes for comparison against other hashes, I would like it to compare the plaintexts each rule and/or mask generates against a wordlist (known passwords) and output:
combinations tried,words matched,% efficiency (matched/tried)
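
Something like this rough sketch is what I have in mind (Python; the wordlist, rule file, and known-password file names are just placeholders, it assumes hashcat is on PATH, and it uses --stdout so nothing is hashed):

#!/usr/bin/env python3
"""Rough sketch: score one rule file against a list of known passwords."""
import subprocess

WORDLIST = "wordlist.txt"        # input dictionary (placeholder)
RULES = "myrules.rule"           # rule file to evaluate (placeholder)
KNOWN = "known_passwords.txt"    # plaintexts recovered previously (placeholder)

# Load the known plaintexts into a set for fast membership tests.
with open(KNOWN, encoding="utf-8", errors="ignore") as f:
    known = {line.rstrip("\n") for line in f}

tried = 0
matched = 0

# `hashcat --stdout` prints every candidate the attack would generate,
# without hashing anything, so we can count candidates and hits directly.
proc = subprocess.Popen(
    ["hashcat", "--stdout", "-r", RULES, WORDLIST],
    stdout=subprocess.PIPE, text=True, errors="ignore",
)
for candidate in proc.stdout:
    tried += 1
    if candidate.rstrip("\n") in known:
        matched += 1
proc.wait()

efficiency = matched / tried if tried else 0.0
print("combinations tried,words matched,% efficiency")
print(f"{tried},{matched},{efficiency:.6%}")
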

For example, the "8 character u-l-d-s compliant" mask set is one I'd like to analyze, geared toward WPA. Since WPA speeds are so slow, it isn't practical to run the whole set of masks. But with a report on which masks had the highest efficiency against a wordlist of known passwords, I could whittle the keyspace down a lot. Off the top of my head, I would not be surprised if a group of masks representing less than 10% of the total keyspace accounted for at least 20 or 30% of the matches.
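For the mask side, a rough sketch of the kind of report I mean, without generating any candidates at all: map each known password to its ?u/?l/?d/?s mask, tally hits per mask, and compare against each mask's keyspace. The file names are placeholders, and it only handles masks built from the built-in character classes (no custom charsets or literal characters):

#!/usr/bin/env python3
"""Rough sketch: rank the masks in a .hcmask file by known-password coverage
relative to keyspace. Nothing is generated; each known password is mapped to
its own mask and tallied."""
import string
from collections import Counter

MASKFILE = "masks-to-test.hcmask"   # mask set to evaluate (placeholder)
KNOWN = "known_passwords.txt"       # recovered plaintexts (placeholder)

# Charset sizes for the built-in hashcat mask classes.
CLASS_SIZE = {"?l": 26, "?u": 26, "?d": 10, "?s": 33, "?a": 95}

def mask_of(word):
    """Map a plaintext to its ?u/?l/?d/?s mask, or None if it has other chars."""
    out = []
    for c in word:
        if c in string.ascii_lowercase:
            out.append("?l")
        elif c in string.ascii_uppercase:
            out.append("?u")
        elif c in string.digits:
            out.append("?d")
        elif c in string.punctuation or c == " ":
            out.append("?s")
        else:
            return None
    return "".join(out)

def keyspace(mask):
    """Keyspace of a simple mask built only from the built-in classes."""
    tokens = [mask[i:i + 2] for i in range(0, len(mask), 2)]
    n = 1
    for t in tokens:
        n *= CLASS_SIZE.get(t, 1)
    return n

# Tally how many known passwords fall under each exact mask.
hits = Counter()
with open(KNOWN, encoding="utf-8", errors="ignore") as f:
    for line in f:
        m = mask_of(line.rstrip("\n"))
        if m:
            hits[m] += 1

# Score every mask in the file and print a CSV sorted by efficiency.
rows = []
with open(MASKFILE, encoding="utf-8") as f:
    for line in f:
        mask = line.strip()
        if not mask or mask.startswith("#"):
            continue
        ks = keyspace(mask)
        rows.append((mask, ks, hits[mask], hits[mask] / ks))

print("mask,keyspace,words matched,% efficiency")
for mask, ks, h, eff in sorted(rows, key=lambda r: r[3], reverse=True):
    print(f"{mask},{ks},{h},{eff:.3e}")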

I'm pretty sure this could be done by writing a shell script to feed OHC one mask at a time from that mask file, but I was hoping something like this already exists before I try to reinvent the wheel.

I read the wiki again; I may have missed something. I saw where you can output matched rules, so I will give that a try and see what it does.

So I think creating a script, be it Python or shell, is probably the best way to handle this, since I have a variety of separate rules and attacks I want to analyze at once.

What I'm wondering is whether anyone has built a script of some type that does reports/analysis while cracking. I figure if something is already out there it would give me a head start, and I would actually like to start collecting this data during cracking as well, not just for analysis against known words.

What I'd like to do is have several types of attacks, e.g. combinator, regular dictionary, dictionary + rules, etc., then run those as a batch and capture the key metrics as each completes, like attempts, hits, and hashes per second.

Right now everything I'm doing is more one-off, and I'm just going off memory of what is working, or at best writing some notes. I'd like to have all this info output to a CSV or SQLite DB so I can analyze the success of various attacks, start optimizing, and set priorities.
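
Roughly the kind of batch runner I have in mind (a sketch only; the hash file, hash mode, and attack list below are placeholders, and it takes per-attack crack counts from a dedicated --outfile rather than parsing the status screen, so attempts and hashes per second would still have to come from hashcat's status output):

#!/usr/bin/env python3
"""Rough sketch of a batch runner: run a list of hashcat attacks back to back
and write one CSV row per attack (name, wall time, cracks found)."""
import csv
import subprocess
import time
from pathlib import Path

HASHES = "target.hccapx"   # hash list (placeholder)
MODE = "2500"              # e.g. WPA/WPA2 (placeholder)
REPORT = "attack_report.csv"

# Each attack: a label plus the extra hashcat arguments for that attack.
ATTACKS = [
    ("dict",        ["wordlist.txt"]),
    ("dict+best64", ["wordlist.txt", "-r", "best64.rule"]),
    ("combinator",  ["-a", "1", "left.txt", "right.txt"]),
    ("mask-uldds",  ["-a", "3", "?u?l?l?l?d?d?d?s"]),
]

with open(REPORT, "w", newline="") as rpt:
    writer = csv.writer(rpt)
    writer.writerow(["attack", "seconds", "cracked", "cracked_per_hour"])
    for name, args in ATTACKS:
        outfile = Path(f"cracked_{name}.txt")
        cmd = ["hashcat", "-m", MODE, HASHES, "-o", str(outfile)] + args
        start = time.time()
        subprocess.run(cmd)          # let hashcat run to completion
        elapsed = time.time() - start
        cracked = 0
        if outfile.exists():
            with outfile.open() as f:
                cracked = sum(1 for _ in f)
        per_hour = round(cracked / (elapsed / 3600), 2) if elapsed else 0
        writer.writerow([name, f"{elapsed:.0f}", cracked, per_hour])
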

If anyone has done anything like this and is willing to share, please post a link to the code. If not, I'm going to start on it as time allows and will post the code when it's ready.
HM (Hash Manager) has a bunch of command-line tools, some of which might do part of what you want, but you will have to make everything work together yourself with some scripts. Look at GetTopPasswords to get stats on the most common items in your list (be it words, rules, etc.) and produce a final sorted list with the most common on top. I don't remember where I put it, but at one point I was also testing attacks by timing the length of the attack versus the number of cracks (CountPasswords). Bottom line, it's worth doing your own analysis to become a more efficient cracker.
I may be misunderstanding the question a bit, but if you are trying to see which masks would have the highest success rate against already-cracked wordlists, you could run Pipal against them. It spits out hashcat-compliant masks and tells you the % of passwords each mask would have covered.

If that isn't what you meant, ignore me.