
hashcat Forum

Kernel outputting CL_UNKNOWN_ERROR
Hello, I'm developing a mode for an old file format. At some point in my code, I do something like:
Code:
struct mytype
{
  uchar value1[8];
  uchar value2[8];
  uint  key[52];
  uint  bufleft;
};

// byte is presumably an 8-bit typedef (e.g. uchar)
void function3 (byte *ptr)
{
  uint16  v1 = 10;
  uint16 *ptr2 = (uint16 *) ptr;

  *ptr2 = v1 >> 8; // <-- the failing line
}

void function2 (struct mytype *m)
{
  uchar *ptr = m->value2;

  function3 (ptr);
}

__kernel void function1 ()
{
  struct mytype m;

  function2 (&m);
}


This code fails at
Code:
*ptr2 = v1 >> 8;
with clWaitForEvents(): CL_UNKNOWN_ERROR. If I remove this single assignment, the code runs without errors. Does anybody have any tips on how to solve this problem?
From a C perspective it's fine (if we assume byte* is some 8-bit datatype), but in OpenCL everything is different. What I mean is that you have to find workarounds that do the same thing you're trying to do but are easier for the compiler to understand.

You could try to use uchar* instead in the function declaration. But it's more likely that you cannot make use of 8-bit datatypes, since they do not exist natively on a GPU. There are only 32-bit registers, and that's it. So, for example, you can use a combination of div, mod and switch() to emulate what you do with the cast. Anyway, welcome to my world :)
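
For what it's worth, here is a minimal sketch of the div/mod approach: emulating an 8-bit store with nothing but 32-bit operations. The helper name store_byte is made up for illustration, and u32 is assumed to be hashcat's 32-bit typedef; this is not code from the thread:
Code:
// emulate an 8-bit store using only 32-bit words, via div, mod and shifts
void store_byte (u32 *buf, const u32 ofs, const u32 val)
{
  const u32 idx   = ofs / 4;         // div: which 32-bit word holds the byte
  const u32 shift = (ofs % 4) * 8;   // mod: bit offset of the byte in that word

  // clear the target byte, then merge in the new value
  buf[idx] = (buf[idx] & ~((u32) 0xff << shift)) | ((val & 0xff) << shift);
}

A switch () over the four possible values of ofs % 4, with pre-shifted masks, would do the same job if the compiler handles the variable shift poorly.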

Thanks for the tips, atom. I'll try to rewrite it. The code's behavior still doesn't make much sense to me, but I'll try your suggestions.
Problem solved. I just changed my code from using the aligned data types to the unaligned data types, e.g., u8a to u8.
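
For readers hitting the same thing, a sketch of what that change looks like on the struct from the first post, assuming hashcat's OpenCL typedefs (u8a being the aligned 8-bit type, u8 the plain one); not verbatim code from the thread:
Code:
struct mytype
{
  // before: u8a value1[8]; u8a value2[8]; (aligned 8-bit type)
  // after:  plain, unaligned 8-bit type
  u8  value1[8];
  u8  value2[8];
  u32 key[52];
  u32 bufleft;
};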