
hashcat Forum

Best practice dealing with dictionary intersections
Hi!

My friend tends to run many small, tightly targeted dictionaries based on the hash's original location (geo), the hash author's sex, hardware (if any), and so on. If none of those work, it's time to run the common, bigger dictionaries. Obviously, the small dictionaries often overlap with the big ones (e.g., local phone numbers with ?d masks).

He failed to google/man any built-in way to exclude intersections using hashcat (e.g., an "exclude words from these files" parameter). Did he miss it? What's the best practice here? Thanks in advance!
There are countless tools, but I guess the best is to just stick to the rli/rli2 tools from hashcat-utils:
- https://hashcat.net/wiki/doku.php?id=hashcat_utils#rli
- https://hashcat.net/wiki/doku.php?id=hashcat_utils#rli2 (less memory usage, but dictionaries need to be sorted and uniqued)
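
For example (a sketch from memory; the file names are placeholders), to strip every candidate that appears in small_geo.txt or small_phones.txt out of big.txt:

    rli big.txt big_cleaned.txt small_geo.txt small_phones.txt

rli2 does the same against a single remove-file with far less memory, but, as noted above, both inputs need to be sorted and uniqued first, and it writes to stdout:

    sort -u big.txt > big.sorted
    sort -u small_geo.txt > small_geo.sorted
    rli2 big.sorted small_geo.sorted > big_cleaned.txt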
This is genius stuff. I usually work the same way, and I have accumulated such huge dictionaries that I am surely retrying duplicates all the time.

With this rli tool I might be able to clean all my dictionaries and ensure I'm not wasting time retrying the same candidates.

Anyways, thanks Phil :D
(07-30-2017, 07:34 AM)philsmd Wrote: There are countless tools

Well, yes, that's kinda what my friend does now. But is it really the best practice? You have to manipulate those dictionaries all the time, tracking every single one of them in relation to a particular set of hashes, not to mention the noticeable storage-space overhead... it's a mess. And it makes masks harder to use. A built-in --exclude-words option could save so much time in several scenarios. Eh.
(07-30-2017, 02:46 PM)fromdusktillpwn Wrote: [...] A built-in --exclude-words option could save so much time in several scenarios.

Such exclude files can be of arbitrary size, and in the case of very large files they would have a negative impact on cracking performance. Therefore it's better to pre-process the input wordlist using external software. As an alternative, you can run hashcat in stdin mode and feed it from your own exclude-words tool.
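
A minimal sketch of such a stdin filter, assuming Python (the script name filter_words.py and all file names below are just placeholders):

    #!/usr/bin/env python3
    # filter_words.py - print stdin candidates not present in the exclude lists.
    # Loads the exclude words into a set, so memory grows with the small lists,
    # not with the big wordlist streamed on stdin.
    import sys

    exclude = set()
    for path in sys.argv[1:]:
        with open(path, "rb") as f:
            for line in f:
                exclude.add(line.rstrip(b"\r\n"))

    for line in sys.stdin.buffer:
        if line.rstrip(b"\r\n") not in exclude:
            sys.stdout.buffer.write(line)

Then pipe the big dictionary through it into hashcat; with no wordlist argument, hashcat in attack mode 0 reads candidates from stdin (hash mode and file names are placeholders):

    ./filter_words.py small_geo.txt small_phones.txt < big.txt | hashcat -m 0 hashes.txt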