Please note, this is a STATIC archive of website hashcat.net from October 2020, cach3.com does not collect or store any user information, there is no "phishing" involved.

hashcat Forum

Full Version: | attacks using hashcat-utils
I have been using hashcat-utils (combinator.bin) to combine two dictionaries and piping the output into oclHashcat-plus so I can apply rules to the combinations. This works fine; however, it would be nice to have a status screen that could be refreshed while it is running so I know how long it is going to take, etc. When I start the attack it just says it is starting in stdin mode, and it is not until I stop the attack that I am presented with a summary screen. Is a status screen possible today, or is that something that could be considered for a future release?

If it has to wait for a future release, it would be nice to see hashcat support this natively rather than via the pipe. I know I could use combinator to write the combined dictionaries to disk and then run that through plus with rules, but that takes a ton of disk space.
The problem here is the communication between two completely different processes (combinator.bin and oclHashcat-plus64.bin)... the pipe symbol reflects this very well... (look at | as a wall between the two processes through which only data can flow)....
The oclHashcat process does not know how the data is being generated, nor how much data remains... to make it clear, not even combinator.bin knows exactly a priori how "much" data there is and when exactly it will be done... think of it this way: "it just finishes when it needs to stop - when everything is done"

(To make it clear, oclHashcat cannot influence or query the state of the other process(es) that make up the pipe, and won't be able to do that in future versions either... this is the standard way to separate processes while still letting them interact: only allow data to flow through, without being able to control the other process, beyond consuming its data slower or faster.)
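The one-way nature of the pipe can be illustrated with a tiny sketch (Python here purely for illustration): a consumer reading from a stream can only count what it has already received; there is no way to ask the stream how much the producer still intends to send.

```python
import io

def consume(stream):
    # In the position of oclHashcat reading candidates from a pipe:
    # we can count what has already arrived, but nothing on `stream`
    # tells us how many lines the producer (e.g. combinator.bin) has left.
    seen = 0
    for _ in stream:
        seen += 1
    return seen  # the total is only known once the producer closes the pipe

# Stand-in for the pipe; in real use this would be sys.stdin.
fake_pipe = io.StringIO("password1\nletmein!\nqwerty99\n")
print(consume(fake_pipe))  # -> 3, but only knowable after EOF
```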

That said, there are some workarounds that you can try....
A nice tool for this is pipe viewer (pv, sudo apt-get install pv):
Code:
./combinator.bin rockyou.txt huge_wordlist.txt | pv -l | ./oclHashcat-plus64.bin -m 0 m0000.txt --quiet -r rules/best64.rule
This shows the amount of data flowing through the pipe and the current rate (with -l, pv counts lines rather than bytes). pv cannot know the total size of the keyspace on its own, but if you compute the expected number of combinations (lines in the first wordlist times lines in the second) and pass it with pv -l -s <total>, it can also show a percentage and an ETA.
This may not be perfectly accurate, but it gives you an idea.
Note: pv only sits in the middle of the pipe, measuring (and, if asked, throttling) the data that flows through it; it does not control combinator.bin or oclHashcat themselves.

Another way to do the same could be process substitution, <(./combinator.bin rockyou.txt huge_wordlist.txt) (which bash implements via named pipes / /dev/fd)... the problem is that oclHashcat does not accept this, because outside of stdin mode it needs some information about the start and end of its input, and this information is not available a priori for a pipe....

... back to topic: this is the main reason why oclHashcat does not (allow to) show the status screen during an attack in stdin mode... it simply does not have the information the status screen requires (e.g. the total number of password candidates/combinations to try, and therefore progress and ETA)...
Of course you could try to convince the devs (by opening a trac ticket) to show, while in stdin mode, a special version of the status screen with only what is available: the speed (pipe speed and/or cracking speed, which should be roughly the same, since the pipe is in most cases the bottleneck) and other known info.

I think the 'pv' workaround could help you somewhat. Let me know if it works for you... or, if not, how you think oclHashcat could do better (besides my proposed feature request, a very special/reduced/limited version of the status screen).
Interesting. I will give pv a try; it looks like it could help me get more info.

Back to the enhancement portion. Instead of a special/limited screen for pipes, what about building on the combinator functionality that already exists in oclHashcat? For example, we could start oclHashcat with -a 1 and then pass it a rule set with -r rules/best64.rule. I know this would be added functionality, and I am not sure of the specifics of the logic required. I do know that in the past a rule could be passed with the combinator attack, but it didn't do anything; in the current release of oclHashcat that combination is disallowed, via a trac ticket I had opened, since it wasn't supported anyway. Thoughts? Is this worth opening a trac ticket?
Please keep in mind that while these two things may seem to you to end up doing the same, or at least to have several points in common, ... the feature to natively support combinations + rules is very different from using a pipe (stdin mode). I mean, within stdin mode you could "send" oclHashcat very different inputs (password candidates), not just ones generated by combinator.bin or the like...

Therefore, in theory we need to consider to open at least 2 different trac tickets:
- stdin mode: show more info, show status screen with all the information you have (and yes, we know oclHashcat does not have the info about the upper bound - when it should stop)
- -a1 should work together w/ -r

Before you open any ticket it would be good to check whether there was such a (forum/trac) suggestion/discussion/ticket before... and if there really was, whether the devs should consider it again, etc. I didn't check it myself.

UPDATE: <removed> - added in new post below
Agreed. They are two different things, as stdin could be used in other ways I am not listing here, so the special status screen would still prove useful.

Just searched for both and am not seeing anything related to either, except for the previous ticket I created to throw an error when rules are applied with the combinator attack. Nothing for the stdin status screen either. I will go ahead and create tickets for both and see if I can prove my case.
UPDATE: let's consider -a 1 w/ -r.
The problem here is that the devs would need to come up with a nice and very fast way to do both the combination and the rule application, ideally within the GPU kernel (otherwise it isn't really fast)... this would of course slow down runs that have few or no rules... => therefore, separate kernels just for -a 1 + -r would need to be coded (and maintained!!!) for each and every hash type... that sounds like a lot of work... it is even more work than you might think :(
So -a 1 together with -r should do: combine the two dicts and then apply the rules, right?

An alternative of course could be to let the CPU also do some of the work (and use only the fast rule engine on the GPU device, i.e. the CPU does the combination and the GPU applies rules + cracks, like in -a 0)... I don't know if this makes any sense in the end, since the CPU is slow, it would still need to be implemented, and in the end the pipe might even be faster than this CPU+GPU approach.
Note: with combinator.bin piped into oclHashcat + -r, the result is the same: CPU for the combination, GPU for rules + cracking....
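A minimal sketch of the semantics being discussed (combine on the CPU, then multiply each combination by a rule set, as a GPU rule engine would): the rule functions below are simplified stand-ins for hashcat's compact rule syntax, not the real rule engine.

```python
from itertools import product

# Simplified stand-ins for hashcat rules (real rules use a compact
# syntax like ':', 'c', '$1'); these only illustrate the data flow.
RULES = [
    lambda w: w,               # ':'  do nothing
    lambda w: w.capitalize(),  # 'c'  capitalize
    lambda w: w + "1",         # '$1' append character
]

def combine(left, right):
    # CPU side, what combinator.bin does: every word from the first
    # list concatenated with every word from the second.
    for a, b in product(left, right):
        yield a + b

def apply_rules(candidates, rules=RULES):
    # The stage a GPU rule engine would run: each rule applied to each
    # candidate, multiplying the keyspace by len(rules).
    for cand in candidates:
        for rule in rules:
            yield rule(cand)

candidates = list(apply_rules(combine(["pass", "root"], ["123", "!"])))
print(len(candidates))  # 2 * 2 combinations, * 3 rules = 12 candidates
```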

So both approaches may be either:
- difficult to code/maintain and/or
- slow (because cpu is involved)

Therefore, we need to look at a more clever solution.
The combinator.bin process may already combine dictionaries as fast as, or faster than, we need. I ran it for one minute on my test machine combining two small dictionaries, and it created a file 8.5 GB in size with 579,424,960 lines/combinations. Doing this with no rules would of course be ineffective, as that roughly equates to only 9,657,083 tries per second; that is fine for a slow algorithm like WPA2, but a fast one like NTLM would slow way down waiting on combinations from the CPU.

Now if rules are applied (depending on the length of the rule set, of course), each combination the CPU pushes is multiplied by the number of rules the GPU tries. So look at it this way: 9,657,083 combinations per second pushed from the combinator engine on the CPU, with a rule set applied to each of those combinations.

We can use a fairly lengthy one, as shorter ones would be less effective; passwordspro.rule has around 3,200 rules. So 9,657,083 * 3,200 equals 30,902,665,600 attempts per second, which would be enough to keep four 7970s busy cracking NTLM.

This in my mind isn't bad. Of course, for the guys running eight 7970s, four GPUs would basically sit idle, which is undesirable, but it would be a start. This assumes we can use the CPU for combining, then the GPU to apply rules + crack. The key would be using larger rule sets when cracking a fast algorithm or when you have a lot of hardware.

I suppose the other option would be to break the wordlists into chunks and run multiple combinator threads on the CPU to feed the GPUs faster if needed. Depending on the CPU, and possibly the I/O of the machine, I would think 3 or 4 cores could be kept busy fairly easily. Then we could push 9,657,083 * 4 = 38,628,332 combinations per second; add passwordspro.rule to that and we have 123,610,662,400 attempts per second, which would keep more than eight 7970s busy.
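The arithmetic above checks out (figures taken from the one-minute test run; the 4-core number assumes combinator throughput scales linearly across cores, which disk I/O may not allow in practice):

```python
# 579,424,960 combinations were produced in one minute on the test machine.
combos_per_sec = round(579_424_960 / 60)
print(f"{combos_per_sec:,}")                  # 9,657,083 combinations/s

rules = 3_200                                 # approx. size of passwordspro.rule
print(f"{combos_per_sec * rules:,}")          # 30,902,665,600 attempts/s, one core

cores = 4                                     # assumes linear scaling across cores
print(f"{combos_per_sec * cores:,}")          # 38,628,332 combinations/s
print(f"{combos_per_sec * cores * rules:,}")  # 123,610,662,400 attempts/s
```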

It seems to me that using the CPU and GPU in conjunction would be the best route, and maybe not that hard to implement compared with developing and maintaining separate kernels just for this attack.

Let me know thoughts or if my logic is wrong.