Laboration 1

Introduction to OpenGL

Goal: In this lab, you will get acquainted with how OpenGL is designed. At the end of the lab, you should be able to display a 3D object.

If you run into problems, you can either look in the textbook or visit http://www.opengl.org. There you will, among many other things, find an on-line version of the OpenGL Programming Guide (an old edition) and, most importantly, the OpenGL 3.2 Quick Reference Card.

Note that you should write down answers to all questions before you get examined!

We will use the C language for this lab, plus GLSL for shader programs.

0) Some important notes about C and GLSL

NOTE: For 2023 we will also provide C++ versions of the labs. They will be mostly the same but you can use operator overloading for many matrix and vector operations, which can be quite convenient.

Some notes about the C/C++ language

We are using C++ for the labs. Most of the time, we stay pretty close to straight C, using only the parts of C++ that really help the labs, like operator overloading.

& takes the address of a variable. C has no real pass-by-reference; when a function needs to modify a variable, you pass a pointer to it.

An array name and a pointer to its first element behave the same way in most expressions.

* is used both for declaring pointer variables and dereferencing them:

int *a; declares the variable "a" as a pointer to an int.

b = *a; dereferences the variable "a" and gets the value it points to and assigns it to b.

#define and #include are preprocessor directives. Be careful with them. A #define that just makes a constant is safe, but a macro that takes arguments quickly gets out of hand.
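
For example (the names are chosen just for illustration):

    #define SIZE 100        /* fine: a plain constant */
    #define SQUARE(x) x*x   /* risky: SQUARE(a+1) expands to a+1*a+1, not (a+1)*(a+1) */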

You never pass entire arrays to functions. You always pass pointers.

int a[5]; declares an array of five ints.

GetSomeData(a); passes a to a function. No & is needed: since a is an array, what is passed is really a pointer.

Initially, we will pass arrays into OpenGL using glUniformMatrix4fv. This really means that we pass the array as a pointer.

However, this bare-metal approach is only used in lab 1. After that, to avoid passing matrices as raw pointers, we package them in a struct named mat4 that contains the array. You can therefore access the array of a matrix myMatrix as myMatrix.m. We also wrap the upload itself in VectorUtils calls like uploadMat4ToShader().

VectorUtils defines several types that work like their GLSL counterparts. The two most important ones are:

vec3 p; is a struct representing a 3D vector.

mat4 m; is a struct representing a 4x4 matrix.

For C++, these can be initialized with C++ constructors.
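
For example (a sketch using the C++ version; T() and Rz() are VectorUtils helpers that build translation and rotation matrices, see VectorUtils4.h for the exact set available):

    vec3 p = vec3(1.0f, 2.0f, 0.0f);           // a 3D vector
    mat4 m = T(0.5f, 0.0f, 0.0f) * Rz(0.3f);   // translation times rotation about Z (C++ operator overloading)
    // In the C version, use Mult(T(0.5f, 0.0f, 0.0f), Rz(0.3f)) instead.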

We will provide you with examples wherever we see a need. Ask us if you feel you need one.

Notes about the new C++ versions:

We do not use much of the large and complex C++ language. The main difference from the C code is that you get the very convenient operator overloading, so you can do things like multiplying matrices with the * symbol, just like you can in the shaders.

In order to make the common code as portable as possible, VectorUtils and LittleObjLoader are here given as header-only units. That means that all code is in the .h file, but the implementation is compiled in only one of the units that include it: the one that defines the “MAIN” symbol. All other units see only the declarations.
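
In practice, exactly one of your source files (typically the one containing main()) should define MAIN before the includes, something like:

    #define MAIN
    #include "VectorUtils4.h"
    #include "LittleOBJLoader.h"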

Both the C and C++ versions are available. Note that the C version currently contains the more conventional version of “common” where .c and .h files are separate. We will most likely phase out the C version, but please let us know if it is of interest to you.

Shader programs

Every OpenGL program must include at least one vertex and one fragment shader. These are small program kernels executed on the GPU as part of rendering, for specifying the behaviour of the vertex and fragment processing steps.

Vertex shaders perform per-vertex calculations. That's where vertices are transformed, per-vertex lighting calculations are done, and where skeletal animation systems do most of their work.

Fragment shaders perform per-pixel calculations. That's where texture and lighting colours are combined into one final pixel colour value.

GLSL

GLSL, OpenGL Shading Language, is the shader language used by OpenGL. GLSL code is similar to C/C++ code, but with a strong emphasis on computation.

Most GLSL code performs floating-point calculations. Common datatypes used are float, vec2, vec3, vec4, mat3 and mat4. These datatypes represent scalars, 2D, 3D, 4D vectors, 3x3 and 4x4 matrices. Arithmetic operations can be performed directly on these datatypes.

For integer calculations (such as counting loop iterations), int is available. The bool datatype is also available.

A small GLSL function can look like:

vec4 applyDirectionalLight(vec3 normal, vec4 originalColor)
{
  vec3 lightDirection = normalize(vec3(0.5, 0.8, 0.7));
  float strength = dot(lightDirection, normal);
  if (strength < 0.0)
    strength = 0.0;
  vec4 color = originalColor.xyxx * strength;
  return color;
}

vec3(0.5, 0.8, 0.7) constructs a new vec3 from three floating-point values.

dot() calls a predefined math function.

originalColor.xyxx performs "swizzling" on the original vector: the result is a vec4 whose XYZW elements are taken from the X, Y, X and X elements of originalColor, respectively.

You can find a complete list of built-in mathematical functions in the GLSL Language Specification.

GLSL program structure and variables

The code for a shader program is enclosed in its main() function. It takes no arguments and returns nothing. Communication between OpenGL, the vertex shader and the fragment shader is done by reading/writing global variables.

Variables can have a few different qualifiers:

uniform - the value is constant over an entire polygon (in fact over the whole draw call); it is set from the OpenGL program, and is read-only for vertex and fragment shaders.

in/out - input and output. In vertex shaders, the "in" variables are "attributes"; they can be unique for every vertex (passed as arrays). The resulting color from a fragment shader is an "out" variable.

in/out between shaders - "out" from the vertex shader, "in" to the fragment shader; these are the traditional "varying" variables: the value is interpolated over the surface of the polygon, written in the vertex shader and read in the fragment shader.

All variables whose names begin with "gl_" are predefined by OpenGL. They are always present and can be used without being declared first. For now, you only need to care about gl_Position, a vec4 holding the resulting vertex from your vertex shader after transformation and projection. Writing to it in your vertex shader is mandatory.

OpenGL will take the output from the vertex shader, interpolate the resulting values over the surface of any neighboring polygons, and then run the fragment shader once for every pixel that the polygon covers. Any extra "out" variables in the vertex shader will also be interpolated over the polygon, and the result is available to the fragment shader in "in" variables.
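
As a minimal sketch (assuming GLSL 1.50 to match OpenGL 3.2; the names in_Position, shade and out_Color are chosen just for this example), a vertex/fragment shader pair with one interpolated value could look like:

    // vertex shader
    #version 150
    in vec3 in_Position;
    out float shade;           // extra "out": interpolated over the polygon
    void main(void)
    {
        shade = in_Position.y + 0.5;          // just something that varies per vertex
        gl_Position = vec4(in_Position, 1.0); // mandatory output
    }

    // fragment shader
    #version 150
    in float shade;            // the interpolated value from the vertex shader
    out vec4 out_Color;        // the resulting fragment color
    void main(void)
    {
        out_Color = vec4(shade, shade, shade, 1.0);
    }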

You can find a full list of pre-defined variables in the GLSL Language Specification. And, of course, the course book also holds more information.

Debugging shaders

Debugging a shader is a story of its own. We don't have any full shader debugger installed in the lab, so we have to resort to other methods. On the positive side, shaders are often very simple (especially in this lab). However, debugging takes some special tricks.

Compilation errors are reported to stdout. This is your main source of information.

You can also play some tricks in the shaders. If your shader is running, but produces the wrong data, you can use its output for extra information. For example, you can output bright red to signal the result of some test comparison.
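
For example, in a fragment shader (a sketch; someValue stands for whatever you want to inspect, and out_Color for your output variable):

    if (someValue > 0.5)
        out_Color = vec4(1.0, 0.0, 0.0, 1.0); // bright red where the test passes
    else
        out_Color = vec4(0.0, 0.0, 0.0, 1.0); // black elsewhere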

1) Setup and getting acquainted with the lab shell

Download the lab package below and unpack it at a suitable location. We start from the first example from Chapter 3. All material for all labs is in one archive.

New version 2024: New MicroGlut for Mac, cleaned up junk files (._), added makefiles for Mac and Windows (contributed by tersi295 and phiwe030). I have not had time to test these makefiles but they look OK.

TSBK07-labs-cpp.tar.xz

TSBK07-labs-cpp.zip

New version 2023-02-05. Bug fixes, some files had wrong names! Not tested under Linux yet.

C++ version:

TSBK07-labs-cpp-v5.tar.gz

C version:

TSBK07-labs-v3.tar.gz

Previous versions; some bugs have been fixed since these, but in case something went wrong in the update they may still help:

TSBK07-labs-cpp-v4.tar.gz

TSBK07-labs-cpp-v3.tar.gz

TSBK07-labs-v2.tar.gz

TSBK07-labs-cpp-v2.tar.gz

TSBK07-labs.tar.gz

TSBK07-labs-cpp.tar.gz


2022 and older:

lab1.tar.gz

common.tar.gz

common-jan22.tar.gz

common-sep-21.tar.gz (2021 version)


There are several files included in the lab environment:

makefile - contains rules for how the executable should be built; read by make.

lab1-1.cpp - the actual lab code; this is where the main program resides.

lab1-1.vert - Minimal vertex shader.

lab1-1.frag - Minimal fragment shader.

The folder "common" contains a set of reusable utility files

GL_utilities.c - utilities for loading shaders and more.

MicroGlut.c - a package that provides an API similar to the classic user interface library GLUT, but smaller and in a single file, in order to improve code transparency and avoid obsolete code. It also adds a few convenient features. Note: MicroGlut is not GLUT; you cannot expect GLUT documentation found online to be relevant.

VectorUtils4.h - Simple header-only vector/matrix package. It is similar to the library glm but, like MicroGlut, small and transparent.

LittleOBJLoader.h - Header-only loader for "OBJ" models.

LoadTGA.c - Loader for "TGA" images (for textures), used from Lab 2 and onward.

All these files are relatively small and fairly self-explanatory (except parts of LittleOBJLoader). Throughout the lab material, we strive for code transparency: code that can easily be edited by you any time you need to modify something, change a behavior or add a missing feature, and, not least, any time you want to go "behind the scenes" and see how things work.

You will be using makefiles, lab1-1.cpp, shaders and respective .h files directly.

Compile the test program by entering the lab1 directory and performing make on the command line. This should produce a new executable file called lab1-1.

Run lab1-1 by typing ./lab1-1 on the command-line. It should show a white triangle against a dark background.

Open lab1-1.cpp and have a look inside it. There are two functions of interest to you, init() and display(). init() is called once during program startup and display() is called every time it is time to render a new frame of graphics.

Currently, display() does three things:

* Clears the screen and the Z-buffer

* Renders a triangle using OpenGL rendering commands

* Swaps the front and back buffers

The init() function does work critical for rendering:

* Sets the background color and activates the Z-buffer

* Uploads the vertex list to the GPU

* Loads the vertex and fragment shaders

The upload of the vertex list may require some explanation. We will return to it in section 4 (color shading).

Try changing the triangle data, by moving the vertices.

Change the color of the triangle and the background.

For those of you on other systems, you may need other makefiles or project files. Let us know if you need them.

Questions:

Where is the origin placed in the on-screen coordinate system?

Which direction are the X and Y axes pointing in the on-screen coordinate system?

2) Transformations in the vertex shader

Goal: To transform your polygon with 2D transforms defined by matrices.

Copy lab1-1.cpp to lab1-2.cpp and add a new entry to the makefile. Make this section's changes to lab1-2.cpp. Also copy the shaders similarly.

Define transformation matrices, somewhat like this:

GLfloat myMatrix[] = {  1.0f, 0.0f, 0.0f, 0.5f,
                        0.0f, 1.0f, 0.0f, 0.0f,
                        0.0f, 0.0f, 1.0f, 0.0f,
                        0.0f, 0.0f, 0.0f, 1.0f };

What does this matrix do? Define other 2D transformations.

Use the following call to send your matrix to your shaders.

    glUniformMatrix4fv(glGetUniformLocation(program, "myMatrix"), 1, GL_TRUE, myMatrix);

The "program" variable is a reference to your shaders, returned when you first loaded them.

In your vertex shader, declare your matrices and apply them to your vertices as you see fit. For the example above, there should be a matrix declared like this:

uniform mat4 myMatrix;
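
Applied to the incoming vertex, the whole vertex shader could then look something like this (a sketch; it assumes the vertex attribute is named in_Position, as in the code in part 6):

    #version 150

    in vec3 in_Position;
    uniform mat4 myMatrix;

    void main(void)
    {
        gl_Position = myMatrix * vec4(in_Position, 1.0);
    }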

Questions:

What is the purpose of the "in", "out" and "uniform" modifiers? Be more specific than saying “input” and “output”.

What is the output of the fragment shader?

What does the function glUniformMatrix4fv do?


3) Simple animation

Goal: To add time-based rotation/translation of the object.

Copy lab1-2.cpp to lab1-3.cpp and add a new entry to the makefile. Make this section's changes to lab1-3.cpp. Also copy the shaders similarly.

You can get the current time using

GLfloat t = (GLfloat)glutGet(GLUT_ELAPSED_TIME);

The function returns an integer, a milliseconds value. We cast it to float to avoid truncation when scaling it.

In order to render new images repeatedly, you should use

glutRepeatingTimer(d)

where d is an integer. This will run a timer that will cause a redisplay every d milliseconds. You call it once during the startup of the program.

Modify matrices using a time-varying variable to produce an animation. Note that you will now need to upload your updated matrices in the display() callback, not in init().

When animating objects, you may want to use the sin() and cos() functions. To do that, you should include this header file:

#include <math.h>

and link with the math library using -lm.
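
Putting this together, a time-varying transformation in display() could look roughly like this (a sketch; it assumes the uniform is still called "myMatrix"):

    GLfloat t = (GLfloat)glutGet(GLUT_ELAPSED_TIME);
    GLfloat a = t / 1000.0f;  // convert milliseconds to seconds
    GLfloat rotationMatrix[] = { cos(a), -sin(a), 0.0f, 0.0f,
                                 sin(a),  cos(a), 0.0f, 0.0f,
                                 0.0f,    0.0f,   1.0f, 0.0f,
                                 0.0f,    0.0f,   0.0f, 1.0f };
    glUniformMatrix4fv(glGetUniformLocation(program, "myMatrix"), 1, GL_TRUE, rotationMatrix);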

Questions:

What is the frame rate of the animation?


4) Color shading

Goal: To interpolate data between vertices.

Copy lab1-3.cpp to lab1-4.cpp and add a new entry to the makefile. Make this section's changes to lab1-4.cpp. Also copy the shaders similarly.

Now we are going to modify the data upload, so let us look deeper into what actually happens.

The uploading of the vertex list is more complex than you might expect. First, you need a vertex array object (VAO). This is merely a container that refers to a number of buffers which are to be used together. It is created by glGenVertexArrays(), activated by glBindVertexArray(), and there should be one per model.

Then you need one or more buffers. For now, we only care about vertex buffers, that is, the actual data provided per vertex. This includes the vertex coordinates we already have, but it is not limited to that, as we shall see here. These buffers are fed to the vertex shader one item at a time, to the "in" variable of your choice.

A buffer is referred to by a vertex buffer object (VBO), allocated by glGenBuffers() and activated - thereby bound to the active VAO - with glBindBuffer(). This is just a reference, but we can upload data to it with glBufferData(). Then we must connect it to the "in" variable in the shader, which is done with glVertexAttribPointer(). Notice that you specify both the type and the amount of data to send for each activation of the shader, in our case 3 GL_FLOAT, which fits a vec3 in the shader. Finally, we make sure that this attribute array is active with glEnableVertexAttribArray().
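
As a recap, the calls described above appear in roughly this pattern in init() (a sketch with made-up names; vertices stands for the triangle data array and program for the shader reference; check lab1-1.cpp for the exact names used there):

    GLuint vertexArrayObjID, vertexBufferObjID;

    glGenVertexArrays(1, &vertexArrayObjID);            // create the VAO...
    glBindVertexArray(vertexArrayObjID);                // ...and make it active

    glGenBuffers(1, &vertexBufferObjID);                // create a VBO...
    glBindBuffer(GL_ARRAY_BUFFER, vertexBufferObjID);   // ...and bind it to the active VAO
    glBufferData(GL_ARRAY_BUFFER, 9*sizeof(GLfloat), vertices, GL_STATIC_DRAW); // upload 3 vertices
    glVertexAttribPointer(glGetAttribLocation(program, "in_Position"), 3, GL_FLOAT, GL_FALSE, 0, 0); // connect to the "in" variable
    glEnableVertexAttribArray(glGetAttribLocation(program, "in_Position"));     // activate the attribute array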

Add a new buffer, similar to the vertex buffer, but this time for colors. Each vertex should have its own color.

Upload this array to the shaders just like you did with the vertices.

Pass the colors to "out" variables in the vertex shader, and as "in" variables in the fragment shader.

Use the interpolated color for the fragment color.
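
On the shader side, the pass-through could look something like this (a sketch; the names in_Color, exColor and out_Color are our choice, and you should of course keep your matrix multiplication from the earlier parts):

    // vertex shader
    #version 150
    in vec3 in_Position;
    in vec3 in_Color;          // per-vertex color from the new buffer
    out vec3 exColor;          // passed on to the fragment shader
    void main(void)
    {
        exColor = in_Color;
        gl_Position = vec4(in_Position, 1.0);
    }

    // fragment shader
    #version 150
    in vec3 exColor;           // interpolated over the triangle
    out vec4 out_Color;
    void main(void)
    {
        out_Color = vec4(exColor, 1.0);
    }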

Interpolating shades over the surface like this can be considered a kind of Gouraud shading (although that usually refers to light).

Questions:

Did you need to do anything different when uploading the color data?

The "in" and "out" modifiers are now used for something different. What?


5) Building a pyramid, visible surface detection

Goal: To build a pyramid, using more 3D data.

Copy lab1-4.cpp to lab1-5.cpp and add a new entry to the makefile. Make this section's changes to lab1-5.cpp. Also copy the shaders similarly.

Build a pyramid by creating six triangles (4 sides and a square bottom made from two triangles). Keep the coordinates within +/- 0.5 units from the origin.

Set all vertices for each triangle to the same color (that is, different for each triangle).

Use a transformation as in part 3 to rotate the model. Does something look strange?

It is likely that it looks strange in some orientations. We need some kind of visible surface detection (VSD). We will try one of the most widely used VSD methods: Z buffering. To use that, you need to do three things:

1) Request a Z buffer when the display mode is set up:

    glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);

This call is part of the context configuration, so it must be made before the OpenGL context is created, whereas GL calls can only be made after. So, where in the code should this be?

2) Activate the Z buffer:

    glEnable(GL_DEPTH_TEST);

3) Erase the Z buffer before rendering (modify existing call):

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

Try enabling and disabling Z-buffering, and compare the difference.

Now, turn off Z-buffering and enable back-face culling:

    glDisable(GL_DEPTH_TEST);

    glEnable(GL_CULL_FACE);

Hint: In order to see all parts of the pyramid, it can be good to rotate it around the X axis.

Any problems this time? How can you fix them?

When you are done, your pyramid should render correctly with back-face culling enabled!

Questions:

What problems did you encounter while building the pyramid?

How do you change the facing of a polygon?


6) Load a 3D model from disc

Goal: To render a complex 3D model read from disc.

Copy lab1-5.cpp to lab1-6.cpp and add a new entry to the makefile. Make this section's changes to lab1-6.cpp. Also copy the shaders similarly.

LittleOBJLoader will load a Wavefront OBJ file from disc. Include "LittleOBJLoader.h" in your source to use it. C++: also #define MAIN before including LittleOBJLoader.h (in exactly one file).

Model *m;
m = LoadModel("bunny.obj");

However, from there we need to upload it to the GPU ourselves, using a Vertex Array Object as main reference. We also need a few Vertex Buffer Objects temporarily.

unsigned int bunnyVertexArrayObjID;
unsigned int bunnyVertexBufferObjID;
unsigned int bunnyIndexBufferObjID;
unsigned int bunnyNormalBufferObjID;

Uploading it is similar to what we did with simpler models before:

    glGenVertexArrays(1, &bunnyVertexArrayObjID);
    glGenBuffers(1, &bunnyVertexBufferObjID);
    glGenBuffers(1, &bunnyIndexBufferObjID);
    glGenBuffers(1, &bunnyNormalBufferObjID);

    glBindVertexArray(bunnyVertexArrayObjID);

    // VBO for vertex data
    glBindBuffer(GL_ARRAY_BUFFER, bunnyVertexBufferObjID);
    glBufferData(GL_ARRAY_BUFFER, m->numVertices*3*sizeof(GLfloat), m->vertexArray, GL_STATIC_DRAW);
    glVertexAttribPointer(glGetAttribLocation(program, "in_Position"), 3, GL_FLOAT, GL_FALSE, 0, 0);
    glEnableVertexAttribArray(glGetAttribLocation(program, "in_Position"));

    // VBO for normal data
    glBindBuffer(GL_ARRAY_BUFFER, bunnyNormalBufferObjID);
    glBufferData(GL_ARRAY_BUFFER, m->numVertices*3*sizeof(GLfloat), m->normalArray, GL_STATIC_DRAW);
    glVertexAttribPointer(glGetAttribLocation(program, "in_Normal"), 3, GL_FLOAT, GL_FALSE, 0, 0);
    glEnableVertexAttribArray(glGetAttribLocation(program, "in_Normal"));

    // Index buffer
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, bunnyIndexBufferObjID);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, m->numIndices*sizeof(GLuint), m->indexArray, GL_STATIC_DRAW);

Don't forget to make error checks.

You can draw the model like this:

    glBindVertexArray(bunnyVertexArrayObjID);    // Select VAO
    glDrawElements(GL_TRIANGLES, m->numIndices, GL_UNSIGNED_INT, 0L);

Note: Now we draw with glDrawElements instead of glDrawArrays. More about the difference in the upcoming lectures.

There are no colors, so you need to edit your shaders. You can use the normal vector in any way you like (be creative!) to select colors by vertex. Then these colors should be interpolated over the triangles.
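
One simple possibility (a sketch only; feel free to invent something else) is to derive a color from the normal in the vertex shader and pass it on as before:

    #version 150
    in vec3 in_Position;
    in vec3 in_Normal;
    out vec3 exColor;
    void main(void)
    {
        exColor = abs(in_Normal);             // e.g. map the normal to an RGB color
        gl_Position = vec4(in_Position, 1.0);
    }

The fragment shader can then output the interpolated exColor just as in part 4.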

Finally, disable back-face culling for now. (Without projection it will not be correct. We return to that later.)

glDisable(GL_CULL_FACE);

Questions:

Why do we need normal vectors for a model?

What did you do in your fragment shader?

Should a normal vector always be perpendicular to a certain triangle? If not, why?

Now we are using glBindBuffer and glBufferData again. They deal with buffers, but in what way?


That concludes lab 1. Good work! In the next lab, you will experiment with texture mapping, scenes containing multiple objects, and camera placement.

This page is maintained by Ingemar Ragnemalm