
Hintze

macrumors member
Original poster
Apr 29, 2010
Hi, I have been spending many hours setting up GLFW with Xcode and I believe I finally got it to work, since I can create a window with it. I am very new to OpenGL and I have followed several tutorials on how to draw a single triangle on the screen, but no matter what I try I can't get it to draw the triangle: it draws the window, but nothing more.
I am not sure if it is of importance, but when running these,
Code:
    glfwWindowHint(GLFW_SAMPLES, 1);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
the program breaks on glewInit() with EXC_BAD_ACCESS, but without them it works just fine.

So if anyone could take a look at my code and see whether it is my code that is wrong, or whether it might be my linking and setup, I would appreciate it.


Code:
#include <iostream>
#include <GL/glew.h>
#include <GL/glfw3.h>


GLuint vboId;

void CreateVertexBuffer()
{
    float triangle [] ={-1.0f, -1.0f, 0.0f,
                        1.0f, -1.0f, 0.0f,
                        0.0f,  1.0f, 0.0f};
    
    glGenBuffers(1, &vboId);
    glBindBuffer(GL_ARRAY_BUFFER,vboId);
    glBufferData(GL_ARRAY_BUFFER,sizeof(triangle),triangle,GL_STATIC_DRAW);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0,3,GL_FLOAT,GL_FALSE,0,0);
    glBindBuffer(GL_ARRAY_BUFFER,0);
}

int main(int argc,char **argv)
{
    GLFWwindow* window;
    if (!glfwInit())
        return -1;
    glfwWindowHint(GLFW_SAMPLES, 1); // antialiasing samples
    glfwWindowHint(3,1);
    window = glfwCreateWindow(1024, 768, "Hello World", NULL, NULL);
    glfwMakeContextCurrent(window);
    glewExperimental=GL_TRUE;
    
    if (glewInit () != GLEW_NO_ERROR)
    {
        std::cout << "Failed to initialize GLEW... " << std::endl;
        return -1;
    }
    CreateVertexBuffer();
    
    while (!glfwWindowShouldClose(window))
    {
        glClearColor(.1, .1, .1, .1);
        glClear(GL_COLOR_BUFFER_BIT);
        glBindBuffer(GL_ARRAY_BUFFER, vboId);
        glDrawArrays(GL_TRIANGLES, 0, 3);
        glfwSwapBuffers(window);
        glfwPollEvents();
    }
    
}
Thanks for your time!
//Hintze
 
You need to add:

Code:
 glfwWindowHint (GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);

before
Code:
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

You should now no longer get EXC_BAD_ACCESS.
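
Putting it together, the hint sequence before glfwCreateWindow would then look something like this (a sketch assuming you still want the 3.3 core-profile context from your first post):

Code:
    glfwWindowHint(GLFW_SAMPLES, 1);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE); // required on OS X for core contexts above 2.1
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);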

Good Luck! ;)
 
Thanks a lot! That definitely did the trick.
I have followed the tutorial here:
http://www.opengl-tutorial.org

I get a nice window, but I can't get anything to draw, and I have no idea where to start. I have just finished writing a program that can read a height map, apply a box filter, and do dynamic lighting, but it uses freeglut and OpenGL on Windows. I would use freeglut on OS X if that did not force me to use OpenGL 2.1. So if anyone has any idea what could be wrong, I would really appreciate it, because I would love to be able to work on my project on OS X and not be forced to boot into Windows.

Thanks for your time!

//Hintze
 

You didn't provide any details about your problem, so it is impossible to assess what is wrong.

I have not gotten FreeGLUT to work on OS X with OpenGL 4 despite the fact that it should support it, so I use GLFW instead.

If you are using OS X Mavericks, GLFW supports context creation up to OpenGL 4.1. To get started with OpenGL 4 and GLFW I would take a look at Anton Gerdelan's OpenGL 4 Tutorials.
It takes a little bit of reading to get started with these tutorials on the Mac, but they are very helpful (at least to me they were... and still are).
 
Hi, and thanks for your response! I will look into the tutorials you sent me. I am starting to think that it is a problem with loading my shaders, but I will follow the tutorial and hopefully I will soon have it up and running on OS X :)

I did as they do in the beginning of the tutorial, defining my shaders in my main class, and then it works just as it should, so the problem I am having is loading the shaders from files. I am using the exact same class that I use on Windows, which is the one used in the examples of the OpenGL Programming Guide, 8th edition, so if you could shed some light on how you load in your shaders, that would be really nice :)

/Hintze
 
Okay, so loading shaders is no longer a problem, sort of.
It throws this error when I try to compile my shader. I added glGetString(GL_SHADING_LANGUAGE_VERSION); and glGetString(GL_VERSION); so you can see I am not trying to use an OpenGL version that is not supported:

Code:
Renderer: 4.10
OpenGL version supported 4.1 NVIDIA-8.18.22 310.40.05f01
VertexShader.glsl:ERROR: 0:1: '' :  version '120' is not supported
ERROR: 0:2: '' :  #version required and missing.
ERROR: 0:3: '' :  #version must occur before any other statement in the program
ERROR: 0:10: ';' : syntax error syntax error
Program ended with exit code: 0
and these are my shaders:
Code:
#version 410

out vec4 frag_colour;

void main () {
  frag_colour = vec4 (0.5, 0.0, 0.5, 1.0);
};

Code:
#version 410

in vec3 vp;

void main () {
  gl_Position = vec4 (vp, 1.0);

};

again, thanks for your time!

/Hintze


Okay, it is obviously not a problem with OpenGL itself, because it works when I define the shaders in the main class, so the problem must lie in how I read my shaders from file. Maybe this works differently on OS X and Windows, because something is not being read as it should.
 

Take off the 0 from your version.
 
There is no need to remove the 0 from the version. I load my shaders like this:

Code:
#include <cstdio>  // FILE, fopen, fseek, ftell, rewind, fread, fclose
#include <cstdlib> // malloc

const char* load_shader(const char* path){
    FILE *shaderInputFile = fopen(path, "r");
    fseek(shaderInputFile, 0, SEEK_END); //place the position indicator at the end of the file
    long shaderSize = ftell(shaderInputFile); //returns distance from beginning in bytes
    rewind(shaderInputFile); //put the position indicator back at the beginning
    char *shaderContent = (char*)malloc(shaderSize + 1); //returns pointer to memory block. casting is from (void*)
    fread(shaderContent, 1, (size_t)shaderSize, shaderInputFile);
    shaderContent[shaderSize] = '\0'; //add null terminating character
    fclose(shaderInputFile); //close the file once its contents are in memory
    return shaderContent;
}

and then:

Code:
 const char* fragmentShaderFile = load_shader("type fragment shader path");
    
    
    const char* vertexShaderFile = load_shader("type vertex shader path");
    
    
    unsigned int vs = glCreateShader (GL_VERTEX_SHADER);
    glShaderSource (vs, 1, &vertexShaderFile, NULL);
    glCompileShader (vs);
    
    unsigned int fs = glCreateShader (GL_FRAGMENT_SHADER);
    glShaderSource (fs, 1, &fragmentShaderFile, NULL);
    glCompileShader (fs);
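
After compiling, the shaders still have to be attached to a program object, linked, and bound before drawing. A minimal sketch of that step, reusing the vs and fs handles from above (the shaderProgram name is just an example):

Code:
    GLuint shaderProgram = glCreateProgram ();
    glAttachShader (shaderProgram, vs);
    glAttachShader (shaderProgram, fs);
    glLinkProgram (shaderProgram);
    
    // in the render loop, before glDrawArrays:
    glUseProgram (shaderProgram);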
 
I assume you mean writing it like this:
Code:
#version 41
in vec3 vp;
void main () {
  gl_Position = vec4 (vp, 1.0);
};

That does not make much difference, and I know the shader does compile if I define it in my main class (so that I don't load from file), so I don't think it has anything to do with how my shader is written, unless I'm missing something here.
I do appreciate your help though :)
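
(For reference, by defining the shader in the main class I mean keeping the GLSL source as a plain string literal instead of reading it from a file, roughly like this:)

Code:
    const char* vertexShaderFile =
        "#version 410\n"
        "in vec3 vp;\n"
        "void main () {\n"
        "  gl_Position = vec4 (vp, 1.0);\n"
        "}\n";
    glShaderSource (vs, 1, &vertexShaderFile, NULL);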


Thanks, I will try this and report back :)
 
Hi again!
I tried your way of loading the shaders and it seems like it loads the file just fine, but I get no results. Is there an easy way of telling if the shaders compile correctly? Wouldn't the program break if a shader is incorrect? Because at the moment I get no results whatsoever :(
[Screenshot: console output showing the vertex and fragment shader source loaded from file]


It clearly looks like it loads the files correctly, right?

I really appreciate that you guys are trying to help me:)
 

I don't have enough information to figure out what is wrong. If you could upload your Xcode project or show your source, I can take a look.
 
I hope you can figure out what is wrong :)
And again, thanks a lot for your help, it means a lot to me :)

https://www.dropbox.com/s/3kelff4py607i6r/Opengltest2.zip

There is a simple mistake in your shaders. You are adding a semicolon after the closing brace of
Code:
void main () {}
Remove the semicolon in both the vertex and fragment shader to get the purple triangle and a blue background :cool: .
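
With that change the fragment shader, for example, reads:

Code:
#version 410

out vec4 frag_colour;

void main () {
  frag_colour = vec4 (0.5, 0.0, 0.5, 1.0);
}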

For future reference use this to get any shader compilation errors (that's what I did):

Code:
void _print_shader_info_log (unsigned int shader_index) {
    int max_length = 2048;
    int actual_length = 0;
    char log[2048];
    glGetShaderInfoLog (shader_index, max_length, &actual_length, log);
    printf ("shader info log for GL index %i:\n%s\n", shader_index, log);
}
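
If you would rather have the program detect the failure itself instead of you spotting it in the log, you can also query the compile status right after glCompileShader and only print the log on failure, along these lines (a small sketch using the vs handle from the earlier snippet as an example):

Code:
    int params = -1;
    glGetShaderiv (vs, GL_COMPILE_STATUS, &params);
    if (GL_TRUE != params) {
        fprintf (stderr, "ERROR: GL shader index %i did not compile\n", vs);
        _print_shader_info_log (vs);
    }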
 
Haha, thanks a lot man :) I really appreciate it, it works perfectly now :D
I will use that function in the future :)

again, thanks a lot.
/Hintze
 