Got this error with a 400 MB list. Version 0.07 worked well with files of this size. With smaller lists, everything seems normal.
What could be the reason for this error?
Using Catalyst 12.3 (I was on 11.12 with 0.07), W7 x64.
0.08 added support for SHA512. The SHA512 digest is union'ed with the SHA256 digest, which is half its size. That means each hash requires twice as much GPU and host memory as with 0.07.
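Roughly, a sketch in C of what such a union looks like (names are illustrative, not hashcat's actual code):

Code:
#include <stdint.h>
#include <stdio.h>

/* Illustrative only: a digest buffer sized for SHA512 that SHA256
 * results are stored into via a union. Names are made up, not
 * taken from hashcat's source. */
typedef union
{
  uint64_t sha512[8];  /* 8 x 64-bit words = 64 bytes */
  uint32_t sha256[8];  /* 8 x 32-bit words = 32 bytes */
} digest_t;

int main (void)
{
  /* the union is as large as its largest member, so every hash
   * slot costs 64 bytes instead of the 32 bytes SHA256 alone
   * needed in 0.07 -- on the GPU and on the host */
  printf ("sizeof (digest_t) = %zu bytes\n", sizeof (digest_t));
  return 0;
}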
Ok, I split the file and the problem disappears.
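In case it helps anyone, here is roughly what I did, as a minimal C sketch (the input name and chunk size are arbitrary, adjust to taste):

Code:
#include <stdio.h>

/* Minimal sketch of the workaround: split a big hash list into
 * chunks small enough to load. */
int main (void)
{
  const long lines_per_chunk = 5000000;
  FILE *in = fopen ("hashes.txt", "r");
  if (in == NULL) return 1;

  char  line[1024];
  long  n    = 0;
  int   part = 0;
  FILE *out  = NULL;

  while (fgets (line, sizeof (line), in))
  {
    if (n % lines_per_chunk == 0)
    {
      if (out) fclose (out);
      char name[64];
      snprintf (name, sizeof (name), "hashes.%03d.txt", part++);
      out = fopen (name, "w");
      if (out == NULL) { fclose (in); return 1; }
    }
    fputs (line, out);
    n++;
  }

  if (out) fclose (out);
  fclose (in);
  return 0;
}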
However, among the features still announced on https://hashcat.net/oclhashcat-plus/, this one:
"Multi-Hash (up to 24 million hashes)"
is probably correct for 0.07, but not for later versions, right? (Just asking for confirmation.)
Yeah, you're right. I will update it with the release of 0.08. I played around a bit; it looks like my hd7970 (3gb ram) is still able to load 15 million hashes.
(04-15-2012, 10:30 AM)atom Wrote: Yeah, you're right. I will update it with the release of 0.08. I played around a bit; it looks like my hd7970 (3gb ram) is still able to load 15 million hashes.
I wonder, then, why 2x hd6990 (8gb ram) were not able to load a little less than that, if it is only a question of total card ram?! Or perhaps a system with only 4GB of ram is too low for this? Just curious, because the solution was to split the file, but it looks like the cards' ram isn't totally used (at least with the 69xx series)...
The problem is that AMD's ram management is not that good. You lose a lot of performance on I/O. If you run millions of hashes frequently, better to buy some nvidia cards.
Each GPU's RAM must hold the same data, so you effectively have 4gb only, not 8gb. The hd7970 has 3gb; that fits with your observation that the hd6990s were not able to load a little less than the hd7970 could.
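As a rough back-of-envelope check in C (the per-hash figure is a guess, not an exact number from the code):

Code:
#include <stdint.h>
#include <stdio.h>

/* Back-of-envelope estimate: every GPU must hold the full list, so
 * capacity is bounded by a single GPU's ram. The per-hash cost below
 * is a guess (digest plus index/bitmap overhead) picked to line up
 * with the ~15 million I observed on the hd7970, not a number from
 * the code. */
int main (void)
{
  const uint64_t per_gpu_ram    = 3ULL * 1024 * 1024 * 1024; /* hd7970: 3gb */
  const uint64_t usable         = per_gpu_ram * 80 / 100;    /* headroom    */
  const uint64_t bytes_per_hash = 160;                       /* assumed     */

  printf ("estimated capacity: %llu million hashes\n",
          (unsigned long long) (usable / bytes_per_hash / 1000000));
  return 0;
}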