Hi
Just wondered if anyone knew of a way to remove bad hashes, or text that is not a hash, from a corrupted or mixed hash list and dump the good ones into a new clean file?
Especially if the user knows what sort of hash they are looking for. Can hashcat or hashcat-plus do this already?
Thanks.
(07-07-2012, 07:08 PM)undeath Wrote: use grep.
Ah, thank you undeath. I always look forward to your long-winded and inordinately verbose replies!
(only joking)
Anything for the humble Windows user?
Could do it with Excel as well. Create a character-count column, then sort by number.
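If you'd rather stay on the command line, the same length check can be done with awk. A minimal sketch (the file names here are made up for the example):

```shell
# sample mixed list (hypothetical)
printf '%s\n' \
  '5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8' \
  'not a hash' > hashes.txt

# keep only lines that are exactly 40 characters long (SHA1 length)
awk 'length($0) == 40' hashes.txt > cleanlist.txt
```

Like the Excel trick, this only checks length, not content.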
(07-08-2012, 06:20 AM)radix Wrote: Could do it with Excel as well. Create a character-count column, then sort by number.
Thank you for your help.
That's a good idea, but I was wondering if there was a more sophisticated way of checking for a hash than just line length.
I am thinking about the recent LinkedIn list, where the first five characters were zeros. Those lines would pass the line-length test, but they were not actually real hashes, if you know what I mean.
I was wondering if atom had already made a feature where hashcat could delete these lines, or better still, separate them out and dump them into their own text file. I just couldn't find it.
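As far as I know there's no built-in hashcat option for this, but grep can do the split for you. A sketch, assuming a hypothetical dump file linkedin.txt:

```shell
# hypothetical LinkedIn-style dump: one hash with its first five chars zeroed
printf '%s\n' \
  '000001e4c9b93f3f0682250b6cf8331b7ee68fd8' \
  '5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8' > linkedin.txt

# split the list: zeroed hashes to one file, everything else to another
grep '^00000' linkedin.txt > zeroed.txt
grep -v '^00000' linkedin.txt > normal.txt
```

grep also ships with Windows ports (Cygwin, GnuWin32), so this works there too.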
I appreciate we will never know it is a genuine hash unless it is cracked, but I'm just wondering....
Thanks for your reply.
With the LinkedIn list it would be very difficult (practically impossible) to detect that those hashes were not crackable, because by all accounts they are valid hashes. They were the right length and contained the correct characters.
For mixed hashes it's even more difficult, because it's impossible to distinguish between e.g. LM, NT, MD4, MD5, double MD5, MD5 of SHA1, whatever.
If you know for certain that you have a list of SHA1 hashes plus maybe some other garbage, the best you can do is something like:
egrep '^[a-f0-9]{40}$' list > cleanlist
But that wouldn't help in a LinkedIn-type scenario.
Thank you for your help
Hmm... I see the problem; it is difficult, isn't it?
I guess the only solution is "maybe" to suspect that hashes with more than a certain number of repeated characters are less likely to be real.
So more than x sequential identical characters = bad hash?
I suppose a rough way to filter LinkedIn, for example, would be to say that more than four sequential zeros (0000) means a bad hash?
I am still fumbling around with regex; I got this far:
^(00000)
but I still get hashes with six zeros (000000).
Now I'm trying to work out how to select any line that starts with 00000, but not if the sixth character is also a 0.
Nothing important; I am just trying to learn more about regex, as I can see how powerful it is. It just needs a better user!
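One way to express "starts with exactly five zeros" is to require that the sixth character is not a zero. A sketch with a made-up sample file:

```shell
# sample: one hash with exactly five leading zeros, one with six
printf '%s\n' \
  '00000abcdef0123456789abcdef0123456789abc' \
  '000000bcdef0123456789abcdef0123456789abc' > sample.txt

# ^0{5} anchors five zeros at the start; [^0] insists the sixth char is not 0
grep -E '^0{5}[^0]' sample.txt
```

The [^0] character class is what excludes the six-zero lines that ^(00000) alone lets through.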
o_O
grep -E '^[a-f0-9]{40}$' list | grep -v '^00000'