
hashcat Forum

bulk extracting hashes
I was wondering if there are scripts that could be run recursively against a directory of files (doc, ppt, xls, docx, pdf, zip, 7z, rar, etc.) and call the appropriate extractor (office2john.py, for example) to pull the hashes from a large number of files. The output could then be passed to hashid or a similar tool to build sorted lists to feed into hashcat.
You could easily write a bash script. Do something along the lines of:
Code:
find ~/officedocs \( -name '*.pdf' -o -name '*.doc' -o -name '*.7z' \) -print0 | xargs -0 office2john.py

Just add each kind of document you want to search for to the find options, then pipe it all over to xargs office2john.py.
Or pass each file through a case statement so you can execute the proper tool for its type (e.g. 7z2john, rar2john, zip2john, etc.), as sketched below.
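
Something like this minimal sketch of the case-statement idea, if it helps. The tool names used here (office2john.py, pdf2john.pl, zip2john, rar2john, 7z2john.pl) are assumptions based on a typical John the Ripper install and are assumed to be on your PATH; adjust them to whatever your copy actually ships.
Code:
#!/bin/bash
# Walk a directory (first argument, default: current dir) and run the
# matching *2john extractor for each file, collecting all hashes in one file.
# Tool names/paths are assumptions; check your John the Ripper install.
find "${1:-.}" -type f -print0 | while IFS= read -r -d '' f; do
    ext="${f##*.}"                  # grab the file extension
    case "${ext,,}" in              # lowercase it so .PDF matches too (bash 4+)
        doc|docx|xls|xlsx|ppt|pptx) office2john.py "$f" ;;
        pdf)                        pdf2john.pl "$f" ;;
        zip)                        zip2john "$f" ;;
        rar)                        rar2john "$f" ;;
        7z)                         7z2john.pl "$f" ;;
        *)                          ;;   # ignore anything we don't recognize
    esac
done > all_hashes.txt

From there you can run the combined all_hashes.txt through hashid (or sort by the hash prefix) to split it into per-type lists for hashcat.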

Hope this helps get you in the right direction.