
hashcat Forum

Hashtopus - distributed solution
Hashtopus - distributed GPU hashcat wrapper

Download: https://hashtopus.nech.me/beta (just grab the file with the highest number)
Install guide: https://www.youtube.com/watch?v=cazDoJhJvTM
Github: https://github.com/curlyboi?tab=repositories


Architecture
- Computing agent in C#.NET 2.0 running on Windows or under Mono on Linux
- PHP web server + MySQL
- PHP web admin
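
For a rough idea of how these pieces talk to each other, here is a minimal C# sketch of the agent's poll loop. The server URL, query parameters and the "NONE" reply are made up for illustration; the real protocol is described in the manual.

Code:
// Minimal sketch of the agent's poll loop; endpoint, parameters and
// the "NONE" reply are hypothetical, not the real Hashtopus protocol.
using System;
using System.Net;
using System.Threading;

class AgentLoop
{
    static void Main()
    {
        WebClient client = new WebClient();
        while (true)
        {
            // Ask the PHP server for the next chunk of work.
            string reply = client.DownloadString(
                "http://server.example/hashtopus/agent.php?action=getchunk&id=42");
            if (reply == "NONE")
            {
                Thread.Sleep(30000);   // nothing queued, poll again later
                continue;
            }
            // ...run hashcat on the assigned chunk and stream results back...
        }
    }
}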

It has too many features to cover here. Check the manual inside the installation package, or at least watch the video.
Well, it seems like a good idea. However, making it for Window$ is not a good idea. If you think that making it for unix-based systems is beyond your skills, then just learn how to develop for it Big Grin

And one more piece of advice: learn C++ or Java instead of C#. They're much more useful.
Like I said in the first post, it is aimed at gamers, 99% of whose systems run Windows, although with Steam on Linux that might change soon. I am not a programmer, so learning a new programming language is not very appealing to me, and reaching a level where I could produce software of the same quality I am now developing in C# would take a lot of time, which I don't have.
Nonetheless, I am writing it against .NET 2.0, so it might as well run under Mono.
Gentlemen, I am standing before a tough problem. I have built Hashtopus with great agent instability in mind, because it is designed for agent deployment on computers which are not dedicated to hash cracking. Basically, I expect the agent could disconnect without warning at any second. That is why I transfer cracked hashes to the server almost in real time (using a small buffer) and why I dispatch chunks worth a relatively small amount of computing time (default 5 minutes).
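
To illustrate the idea, a minimal C# sketch of such a small cracked-hash buffer; the flush threshold and the upload call are illustrative assumptions, not the actual agent code.

Code:
// Sketch of a small cracked-hash buffer flushed almost in real time.
using System.Collections.Generic;

class CrackedHashBuffer
{
    private readonly List<string> buffer = new List<string>();
    private const int FlushThreshold = 10;   // flush every 10 cracked hashes

    // Called for every "hash:plain" line hashcat prints.
    public void Add(string crackedLine)
    {
        buffer.Add(crackedLine);
        if (buffer.Count >= FlushThreshold)
            Flush();
    }

    // Also called when the chunk ends or the agent shuts down.
    public void Flush()
    {
        if (buffer.Count == 0)
            return;
        UploadToServer(buffer);   // POST the batch to the PHP server
        buffer.Clear();
    }

    private void UploadToServer(List<string> lines) { /* HTTP POST */ }
}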
That is also why I wanted to implement protection against mid-chunk interruption. Basically, with each flush of the cracked-hash buffer, I would also report which part of the keyspace I am currently cracking, so the server could keep track of how much of an incomplete chunk was already performed. Should the agent die mid-cracking, the reassign-to-another-agent feature (already implemented) wouldn't have to reassign the whole chunk, but only the remaining part.
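
The arithmetic behind such a partial reassignment would be simple: treat a chunk as a (skip, length) slice of the keyspace, in the sense of hashcat's -s/--skip and -l/--limit options, and move the skip forward by the last reported progress. A hypothetical sketch:

Code:
// Hypothetical sketch: a chunk as a (skip, length) keyspace slice and
// the remainder left when an agent dies after reporting `progress`
// candidates done. Maps onto hashcat's -s/--skip and -l/--limit.
struct Chunk
{
    public long Skip;     // keyspace offset (-s / --skip)
    public long Length;   // candidates in this chunk (-l / --limit)

    // The part a replacement agent still has to crack.
    public Chunk Remainder(long progress)
    {
        Chunk rest;
        rest.Skip = Skip + progress;
        rest.Length = Length - progress;
        return rest;
    }
}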
But recently I have discovered that if the base loop of the keyspace (which depends, for example, on the -n parameter and the first letter of the brute-force mask) is too big, the .restore file might not get updated during the whole 5-minute chunk. That means I have to find another way to track individual chunk progress.
One way that popped into my mind was to have hashcat periodically output its [s]tatus, but since I normally crack with the --quiet parameter and read the process output to achieve instant, event-driven capturing, I would have to rewrite this to be file-based, which would create even bigger problems, since no virtual files exist on Windows.
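
For illustration, one variant that would keep the capture event-driven is to drop --quiet, run with --status --status-timer, and pick the status lines out of the same stdout stream; whether that mixes cleanly with the cracked-hash output in practice is exactly the open question. A rough C# sketch (the binary name, arguments and "Progress" line prefix are assumptions):

Code:
// Rough sketch: run hashcat without --quiet and with periodic status
// output, keeping the existing event-driven stdout capture.
using System;
using System.Diagnostics;

class StatusCapture
{
    static void Main()
    {
        ProcessStartInfo psi = new ProcessStartInfo(
            "oclHashcat64.exe",
            "-m 0 hashes.txt -a 3 ?a?a?a?a?a?a --status --status-timer=10");
        psi.UseShellExecute = false;
        psi.RedirectStandardOutput = true;

        Process p = Process.Start(psi);
        p.OutputDataReceived += delegate(object sender, DataReceivedEventArgs e)
        {
            // Status blocks and cracked "hash:plain" lines arrive mixed
            // on stdout here; telling them apart reliably is the hard part.
            if (e.Data != null && e.Data.StartsWith("Progress"))
                ReportProgress(e.Data);   // current keyspace position -> server
        };
        p.BeginOutputReadLine();
        p.WaitForExit();
    }

    static void ReportProgress(string line) { /* HTTP POST to server */ }
}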

I am therefore asking whether any of you sees a solution to this that I am missing.
Since you say that you dispatch small chunks (5 minutes), I don't think it's a big deal to simply re-crack the chunk from the start. Otherwise, you can set --restore-timer=60 so that it saves the restore file every minute.
(02-25-2014, 07:32 PM)mastercracker Wrote: Since you say that you dispatch small chunks (5 minutes), I don't think it's a big deal to simply re-crack the chunk from the start. Otherwise, you can set --restore-timer=60 so that it saves the restore file every minute.

Thank you for this reaction. Unfortunately, even if you force --restore-timer=1, the file gets rewritten every second, but its contents only change when the base loop finishes, which in many cases means every few minutes or so... Just try it yourself.

As for the chunk size: if I knew I could update the chunk position on the server every time I submit the hash buffer, I could make much longer chunks.

My goal is simply to eliminate duplicate work.
Hi, I would like to share some screenshots from the web GUI development. Please excuse the shitty design; I am in no way a web designer.

Agent list:
[Image: htp_agents.png]

Agent detail:
[Image: htp_agentdetail.png]

Hashlist list:
[Image: htp_hashlists.png]

Hashlist detail:
[Image: htp_hashlistdetail.png]

Hashlist hashes:
[Image: htp_hashes.png]

Task list:
[Image: htp_tasks.png]

Task detail:
[Image: htp_taskdetail.png]

New hashlist:
[Image: htp_newhashlist.png]

New task:
[Image: htp_newtask.png]
Very nice job
awesome dude, cool job.
keep us up2date