
hashcat Forum

CPU vs GPU
Which is better? CPU or GPU?

I've tried using both and the CPU seems faster, but when I run hashcat it only uses 1/4 of the maximum power (1024 MB out of 4048 MB).

Is there a way I can allocate more memory to hashcat, and could I allocate more GPU memory to hashcat?

If it helps, I'm using a 2015 MacBook Air. I would use my desktop, which has more CPU power, but I don't know how to allocate more power there, so I'm just using the MacBook Air.
GPU is usually better.

There's no way to use more memory at the hashcat level. Some background on the 25% memory cap is here. I have an open enhancement request with NVIDIA to investigate, but there's no guarantee that they can do anything about it.
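
If you want to see what hashcat thinks it can allocate per device, v3.x has an OpenCL info switch; something like the following should list each detected platform and device along with its allocatable memory (exact output varies by machine):

Code:
$ hashcat -I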
If the GPU is better...

why is my CPU faster at cracking the hashes?
I don't know a lot about Mac hardware, but I would assume that this is because the GPU in that unit is slower than the CPU? In my experience, the GPU is usually better, though, so someone else may know more.

Can you post an MD5 benchmark (-d -m 0) for both CPU and GPU?

EDIT: My apologies - I meant (-b -m 0)!
It's telling me that it's an invalid argument
That definitely works. Orient yourself with your command line.

Code:
$ hashcat -b -m 0
hashcat (v3.30-317-g778f568) starting in benchmark mode...

OpenCL Platform #1: NVIDIA Corporation
======================================
* Device #1: GeForce GTX 970, 1009/4036 MB allocatable, 13MCU
* Device #2: GeForce GTX 750 Ti, 500/2000 MB allocatable, 5MCU

Hashtype: MD5

Speed.Dev.#1.....: 10260.8 MH/s (84.99ms)
Speed.Dev.#2.....:  3661.7 MH/s (91.61ms)
Speed.Dev.#*.....: 13922.6 MH/s

Started: Sun Feb 26 06:54:49 2017
Stopped: Sun Feb 26 06:54:53 2017
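
If you want a clean CPU-vs-GPU comparison on one box, you can also run the benchmark twice and restrict the device type each time. On v3.x the -D (--opencl-device-types) option takes 1 for CPU and 2 for GPU, so a rough sketch would be:

Code:
$ hashcat -b -m 0 -D 1    # benchmark CPU devices only
$ hashcat -b -m 0 -D 2    # benchmark GPU devices only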
Also note that the "maximum power" you mentioned early is memory size, which has no relationship to speed.
On my Desktop: 
CPU:
OpenCL Platform #1: Apple
=========================
* Device #1: Intel(R) Core(TM) i3 CPU         550  @ 3.20GHz, 2047/12288 MB allocatable, 4MCU
* Device #2: ATI Radeon HD 5670, skipped

Hashtype: MD5

Speed.Dev.#1.....: 60611.3 kH/s (69.10ms)

GPU:
OpenCL Platform #1: Apple
=========================
* Device #1: Intel(R) Core(TM) i3 CPU         550  @ 3.20GHz, skipped
* Device #2: ATI Radeon HD 5670, 128/512 MB allocatable, 5MCU

Hashtype: MD5

Speed.Dev.#2.....:   753.3 MH/s (52.96ms)

On my Laptop:

CPU: 

OpenCL Platform #1: Apple
=========================
* Device #1: Intel(R) Core(TM) i5-5250U CPU @ 1.60GHz, 1024/4096 MB allocatable, 4MCU
* Device #2: Intel(R) Iris(TM) Graphics 6100, skipped

Hashtype: MD5

Speed.Dev.#1.....: 51787.3 kH/s (80.91ms)

GPU:
OpenCL Platform #1: Apple
=========================
* Device #1: Intel(R) Core(TM) i5-5250U CPU @ 1.60GHz, skipped
* Device #2: Intel(R) Iris(TM) Graphics 6100, 384/1536 MB allocatable, 48MCU

Hashtype: MD5

Speed.Dev.#2.....:   427.9 MH/s (57.82ms)

Nothing compared to yours, of course.
In the posted benchmarks, your GPU performance is roughly an order of magnitude better than your CPU performance.
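Working it out from your numbers: on the desktop, 60611.3 kH/s is about 60.6 MH/s, so the Radeon's 753.3 MH/s is roughly 12x faster; on the laptop, 51787.3 kH/s is about 51.8 MH/s against the Iris' 427.9 MH/s, roughly 8x.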
So what does that mean?