04-08-2015, 04:41 PM
04-08-2015, 05:35 PM
I think there should be another contest!!
04-08-2015, 05:49 PM
I'm in favor of a new contest, but it should be against a larger corpus of non-uniqued hashes from multiple sources.
best30 & best100 make sense, but only as long as best30 is simply the top 30 rules from best100. You'd then run best30 against slow hashes and the full best100 against fast hashes.
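Assuming best100.rule is kept sorted by efficiency (a sketch of the idea, not an official workflow), deriving best30 from it is a one-liner:

$ head -n 30 best100.rule > best30.rule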
04-08-2015, 07:01 PM
yes and agreed with @epixoip
04-10-2015, 02:02 PM
Why not double it instead and make a best120.rule? A best30 would only be a small piece of best64 and makes no sense for slow algos.
04-11-2015, 01:47 AM
Wouldn't some of these "best rules" already be in the various rule sets distributed with the hashcats?
04-11-2015, 03:15 AM
I vote yes to this too. As to Kgx's question: probably, but so what? Passwords in use change over time, and the rules need to be updated against them.
04-11-2015, 08:54 AM
How about a best512, sorted by efficiency?
That way the list can be cut to suit any need and still remain efficient.
My 2 cents.
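One way to produce such an efficiency-sorted list, sketched under the assumption of an oclHashcat build with --debug-mode support (file names here are placeholders): log which rule produced each crack, then rank the rules by crack count:

$ ./oclHashcat64.bin -m 0 -a 0 -r candidates.rule --debug-mode=1 --debug-file=matched.rules hashes.txt wordlist.txt
$ sort matched.rules | uniq -c | sort -rn | sed 's/^ *[0-9]* //' | head -n 512 > best512.rule

The sed strips the count column that uniq -c prepends, so only the rules themselves end up in best512.rule.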
04-11-2015, 07:53 PM
FYI, the reason for the original best64 was to generate just enough material that, when used as an amplifier for fast hashes, it runs at nearly the theoretical maximum performance. Of course this has totally changed with the current oclHashcat version.
Sorting by occurrence, or by efficiency, is an idea I really like. The dive.rule and the generated*.rule are ordered the same way. However, from what I've seen, people don't do this kind of stuff. It's the opposite: they even do stuff like $ cat rules/* > all.rule
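If people insist on a combined file, an order-preserving dedup would at least keep an efficiency-sorted prefix intact; a minimal sketch (sort -u would destroy the ordering, this awk idiom keeps the first occurrence of each rule in place):

$ awk '!seen[$0]++' rules/*.rule > all.rule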
If the goal of the challenge is to find the best XX rules for slow hashes, we'd typically end up with the usual suspects like $1, $1$2$3 etc. (append "1", append "123"). This probably makes sense, as those really are the best rules, but it is maybe not what we are looking for.
Finally, more important than the question of how many rules we want is how we do it and which reference hashes we use.
04-11-2015, 08:02 PM
(04-11-2015, 07:53 PM)atom Wrote: Finally, more important than the question of how many rules we want is how we do it and which reference hashes we use.
The last contest led to rather dump-specific results. I think the new one should have at least three different wordlists.
rockyou and linkedin seem to be good targets.