Please note, this is a STATIC archive of website hashcat.net from October 2020, cach3.com does not collect or store any user information, there is no "phishing" involved.

hashcat Forum

Full Version: Limiting the consecutive occurrence
(05-29-2012, 04:02 AM)ntk Wrote: [ -> ]the one with 3p is virgin (UK)

mp64 runs on the CPU, not the GPU. Can it be run in more than one instance without reducing speed, if we have enough space? The CPU supports SSE2; can that be used somehow? Has it been used already?

Yes, the speed is worrying me too, and the size; I have to delete a lot of files to make space for the n line ...

mp64 ain't the problem, that tool is super quick. It's sed that really needs multi-thread support, and I'm surprised that it hasn't got it.

(05-29-2012, 04:02 AM)ntk Wrote: [ -> ]4gZaY34e is BT (UK) router WPA2/CCMP

Never seen a key like this before; this would really be hard to crack. 62^8 = 218,340,105,584,896. Oh my god!!!

M@LIK, I need your sed wisdom again Tongue This is more for my curiosity than for these lists, as I think it would strip out too much.

So, how would sed delete all lines that don't have at least one duplicate character anywhere in the whole line?

Hash-IT, what do you think of this? Too much? You did say...

(05-28-2012, 03:30 PM)Hash-IT Wrote: [ -> ]optimized brute force attack. It is not meant to find everything but it aims to find most in the shortest time. .... hopefully. Big Grin

And if we are still sticking to the assumption, based on what we have observed, that we won't need keys without at least one duplicate, it would make the lists a lot smaller.


EDIT:
Also, with the current filter, if I've worked this out right, each character will be 56.79GB or 58,162MB with 6,098,751,256 passwords. That means the full A-Z will be 1.44TB or 1,476GB with a total of 158,567,532,656 passwords, saving/removing 50,259,531,920, or 24.06%. Big Grin

And on your GPU, Hash-IT, that's about 6 days and 12 hours you will save.

Although I don't know how to work out how long it will take to generate. It's just done over 5GB, so only another 51.79GB to go. Undecided
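For anyone who wants to double-check those figures, the arithmetic can be verified in a shell. This is just a sanity check, and it assumes the unfiltered keyspace is 26^8 uppercase 8-character keys; the "passwords kept" total is taken from the post above:

```shell
# Sanity check: unfiltered keyspace is 26^8 uppercase 8-character keys
total=1
for i in 1 2 3 4 5 6 7 8; do total=$((total * 26)); done
echo "unfiltered: $total"             # 208827064576

kept=158567532656                     # passwords left after filtering (figure above)
removed=$((total - kept))
echo "removed:    $removed"           # 50259531920, roughly 24% of the keyspace
```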
(05-29-2012, 12:12 AM)ntk Wrote: [ -> ]after we have got the right command. There is one more thing to consider:

abcdefgh
hgfedcba

is exactly the same, just mirrored.
I overlooked this, well spotted ntk. We should include this.

Code:
/\(DCBA\|EDCB\|FEDC\|GFED\|HGFE\|IHGF\|JIHG\|KJIH\|LKJI\|MLKJ\|NMLK\|ONML\|PONM\|QPON\|RQPO\|SRQP\|TSRQ\|UTSR\|VUTS\|WVUT\|XWVU\|YXWV\|ZYXW\)/d

Also, is this the right command?
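One way to answer that question empirically is to try the pattern on a couple of made-up keys. A minimal sketch, assuming GNU sed (whose BRE supports the `\|` alternation used here):

```shell
# ABXHGFEZ contains the descending run HGFE and should be deleted;
# QWERTYUI contains no descending 4-letter run and should survive.
printf 'ABXHGFEZ\nQWERTYUI\n' | \
  sed '/\(DCBA\|EDCB\|FEDC\|GFED\|HGFE\|IHGF\|JIHG\|KJIH\|LKJI\|MLKJ\|NMLK\|ONML\|PONM\|QPON\|RQPO\|SRQP\|TSRQ\|UTSR\|VUTS\|WVUT\|XWVU\|YXWV\|ZYXW\)/d'
```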
Yikes !!! I get some sleep and return to this thread and it has moved on more than I could imagine !!

@Pixel and ntk, you are doing some great work and have some good ideas. I will have to answer briefly I'm afraid, as there is too much to write about this, and if I don't get a reply in quickly things will move on so fast I may as well not bother !!! Big Grin

@Ntk

I understand your concerns about trying to predict what users would choose as a password, but this thread was not considering “users”; it has a more specific target: machine-chosen default SKY passwords. However, I think the principle of a less brutal password list is a good one for other applications.

@Pixel
(05-29-2012, 09:00 AM)Pixel Wrote: [ -> ]So, how would sed delete all lines that don't have at least one duplicate character anywhere in the whole line?

Hash-IT, what do you think of this? Too much? You did say...

I personally believe so; I think we are making a huge assumption based on a few known keys. You made a good observation, but it may be down to chance that the keys we have do this. If we don't add this extra filtering then the lists may be of more use for other things too.

Your calculations are very interesting, they seem a little more accurate than the ones I did when I first thought of this. It is a significant drop in cracking time and I hope atom adopts this idea as it will save days !!!

I am generating Z at the moment and it has been running for 14 hours or so, the output file is only 3.76MB !!!!!!!!

I don’t think SED is fast enough, this needs to be on GPU.

Another interesting thing to consider is I used ULM’s regular expressions to do some of this filtering on a text file and it does it much faster. I am purely guessing it would be done in 5 hours or so.

I think filtering an existing list may be faster than generating one then pushing it through SED.

I think none of us should seriously get into generating these lists yet, until we are all happy that the filter is OK and we have heard from atom. Think about it: it takes many hours, possibly days, to generate; we will then have to upload / download all the lists and then find somewhere to store them. Unless you have a huge empty drive they will have to be stored zipped, which will mean unzipping them every time you need a character, then moving on to the next.

This filtering is a sound idea and worth pursuing but not without GPU or being able to do it on the fly, without atom I believe my idea will die.
(05-29-2012, 11:31 AM)Hash-IT Wrote: [ -> ]Yikes !!! I get some sleep and return to this thread and it has moved on more than I could imagine !!

@Pixel and ntk, you are doing some great work and have some good ideas. I will have to answer briefly I'm afraid, as there is too much to write about this, and if I don't get a reply in quickly things will move on so fast I may as well not bother !!! Big Grin

Sorry if I seem to be rushing you, Hash-IT, I don't mean to....
I've just been trying to do this for a long time and had a feeling sed could do what I wanted; it's just that I don't have a clue how to use sed properly. So it's great when I find someone who (I think) has the same goal as me: you. As well as someone who can use sed properly: M@LIK.

(05-29-2012, 11:31 AM)Hash-IT Wrote: [ -> ]Another interesting thing to consider is I used ULM’s regular expressions to do some of this filtering on a text file and it does it much faster. I am purely guessing it would be done in 5 hours or so.

I think filtering an existing list may be faster than generating one then pushing it through SED.

Well, can't we convert the sed commands over to regular expressions? (That's another one that confuses me.)
(05-29-2012, 12:26 PM)Pixel Wrote: [ -> ]Well, can't we convert the sed commands over to regular expressions? (That's another one that confuses me.)

I have done 2 of them for use with ULM...

Consecutive characters...
Code:
(.)\1

Characters n times in a given line...

Code:
(.).*\1

The bad thing is I cannot get them to work together, so you have to do one pass with each separate command.
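As an illustration of that one-pass-per-filter approach, here is a sketch using grep as a stand-in for ULM (the same basic regular expressions apply; sample keys are made up):

```shell
# Pass 1 - consecutive duplicate, the (.)\1 rule: only AABBCCDD matches
printf 'AABBCCDD\nABABABAB\nABCDEFGH\n' | grep '\(.\)\1'
# Pass 2 - any repeated character, the (.).*\1 rule: AABBCCDD and ABABABAB match
printf 'AABBCCDD\nABABABAB\nABCDEFGH\n' | grep '\(.\).*\1'
```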

You will need to ask M@LIK about converting the other stuff as he is much better at this than I am. I don't know anyone else that can do this sort of thing other than perhaps TAPE.

Just to give you an update, the Z generating is only at 3.95MB at the moment. This is crazy slow !!

How fast are you doing them ?
(05-29-2012, 12:45 PM)Hash-IT Wrote: [ -> ]Just to give you an update, the Z generating is only at 3.95MB at the moment. This is crazy slow !!
How fast are you doing them ?

I started it at 7.45pm yesterday; it's done 7GB up to now.
(05-29-2012, 01:01 PM)Pixel Wrote: [ -> ]I started it at 7.45pm yesterday; it's done 7GB up to now.

Crikey !

I think it was about 9pm yesterday when I started, so that's just over 15 hours for me and I only have 4.05MB !!!!!!!!!!

There must be something wrong here; how come yours is so much faster? Are you using all the commands?

This machine I am generating on is an AMD dual-core 3GHz. I thought that would be enough. I have a 4-core being used for other things, but I doubt that would be better, as sed is single-core stuff.

I am tempted to stop it until we hear from atom.
All the commands I used are in THIS post.
ntk Wrote: [ -> ]Has anyone an idea to generate the first half, then use a rule via OCLplus (GPU) to reverse it to form the missing half?

Disagree.

Pixel Wrote: [ -> ]So, how would sed delete all lines that don't have at least one or more duplicate character any where in the whole line?

This can be done using the command below; add it, slow it, love it xD:
Code:
/\(.\).*\1/!d
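A quick check of this command on two sample keys (a sketch; the keys are made up):

```shell
# ABCDEFGA has a repeated character (A) and is kept;
# ABCDEFGH has no repeats, so the !d deletes it.
printf 'ABCDEFGH\nABCDEFGA\n' | sed '/\(.\).*\1/!d'
```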


Pixel Wrote: [ -> ]I overlooked this, well spotted ntk. We should include this.

Code:
/\(DCBA\|EDCB\|FEDC\|GFED\|HGFE\|IHGF\|JIHG\|KJIH\|LKJI\|MLKJ\|NMLK\|ONML\|PONM\|QPON\|RQPO\|SRQP\|TSRQ\|UTSR\|VUTS\|WVUT\|XWVU\|YXWV\|ZYXW\)/d

Also, is this the right command?

These kinds of commands slow it down the most! That's why I said I couldn't do it in the first place (when you asked for an alphabetical-order rule).


Hash-IT Wrote: [ -> ]I am generating Z at the moment and it has been running for 14 hours or so, the output file is only 3.76MB !!!!!!!!

I don’t think SED is fast enough, this needs to be on GPU.

According to this: "AMD dual core 3GHz". It's your CPU :(
I'm on an i7 2.4, and I can generate 1GB within a couple of hours.


Hash-IT Wrote: [ -> ]Another interesting thing to consider is I used ULM’s regular expressions to do some of this filtering on a text file and it does it much faster. I am purely guessing it would be done in 5 hours or so.

I think filtering an existing list may be faster than generating one then pushing it through SED.

I'll check that.

Hash-IT Wrote: [ -> ]I think none of us should seriously get into generating these lists yet, until we are all happy that the filter is OK and we have heard from atom. Think about it: it takes many hours, possibly days, to generate; we will then have to upload / download all the lists and then find somewhere to store them. Unless you have a huge empty drive they will have to be stored zipped, which will mean unzipping them every time you need a character, then moving on to the next.

Agree!


Here are all the commands as regular expressions:
Code:
(.)\1\1
(.).*\1.*\1
(.).*\1.*(.).*\2
(.).*(.).*\1.*\2
(.).*(.).*\2.*\1
(ABCD|BCDE|CDEF|DEFG|EFGH|FGHI|GHIJ|HIJK|IJKL|JKLM|KLMN|LMNO|MNOP|NOPQ|OPQR|PQRS|QRST|RSTU|STUV|TUVW|UVWX|VWXY|WXYZ)
All the rules so far.
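For reference, here is one way the whole rule set could be chained as drop-filters over a candidate list. This is only a sketch using GNU grep (whose -E mode accepts back-references, unlike strict POSIX ERE), not a claim about how ULM applies the rules; the sample keys are made up:

```shell
# Drop-filters, in order: triple repeat; char 3+ times; a..a..b..b;
# a..b..a..b; a..b..b..a; ascending 4-letter runs.
printf 'AAAQWRTY\nABCDWXYZ\nQWERTQWE\nQAZWSXED\n' \
  | grep -vE '(.)\1\1' \
  | grep -vE '(.).*\1.*\1' \
  | grep -vE '(.).*\1.*(.).*\2' \
  | grep -vE '(.).*(.).*\1.*\2' \
  | grep -vE '(.).*(.).*\2.*\1' \
  | grep -vE 'ABCD|BCDE|CDEF|DEFG|EFGH|FGHI|GHIJ|HIJK|IJKL|JKLM|KLMN|LMNO|MNOP|NOPQ|OPQR|PQRS|QRST|RSTU|STUV|TUVW|UVWX|VWXY|WXYZ'
# of the four samples, only QAZWSXED survives every filter
```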
(05-29-2012, 01:47 PM)M@LIK Wrote: [ -> ]According to this: "AMD dual core 3GHz". It's your CPU :(
I'm on an i7 2.4, and I can generate 1GB within a couple of hours.

Oh dear, and I thought this was a good one !! Sad It looks like I am out of the game, chaps, as I cannot generate these lists in a reasonable time. I would love to know why the new processors are so much faster !

(05-29-2012, 01:47 PM)M@LIK Wrote: [ -> ]Here are all the commands as regular expressions:
Code:
(.)\1\1
(.).*\1.*\1
(.).*\1.*(.).*\2
(.).*(.).*\1.*\2
(.).*(.).*\2.*\1
(ABCD|BCDE|CDEF|DEFG|EFGH|FGHI|GHIJ|HIJK|IJKL|JKLM|KLMN|LMNO|MNOP|NOPQ|OPQR|PQRS|QRST|RSTU|STUV|TUVW|UVWX|VWXY|WXYZ)
All the rules so far.

You are doing a fantastic job there, M@LIK. Have you been able to run more than one filter at a time with ULM? I have tried using "|" to separate them but it didn't work for me.

Just a warning for people copying M@LIK's regular expressions: there seems to be a formatting problem, so you need to copy and paste the code into Notepad first. Then look at "PQRS". You might find a "?" there, as in "PQRS?", which dramatically affects the output of ULM.
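On the "|" separator not working: one plausible explanation (an assumption about ULM's regex engine, not something verified here) is that joining two patterns into one renumbers the capture groups, so each alternative needs its own back-reference number. A sketch with GNU grep -E:

```shell
# (.)\1 and (.).*\1 joined naively as '(.)\1|(.).*\1' misbehaves, because in
# the joined pattern the second (.) is group 2, not group 1.
# With the back-reference renumbered, the join works:
printf 'AABCDEFG\nABCADEFG\nABCDEFGH\n' | grep -E '(.)\1|(.).*\2'
# AABCDEFG matches the first alternative, ABCADEFG the second;
# ABCDEFGH has no repeated character and is filtered out.
```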