What are the three support libraries introduced in this lecture that simplify OpenGL application development, and what is each one used for?
GLFW → handles window creation, OpenGL context, and input (keyboard, mouse, events).
GLEW → loads OpenGL extensions and gives access to modern OpenGL functions.
GLM → a math library that helps build matrices and vectors for transformations.
What are the two main types of shader programs in OpenGL, and what is each one responsible for?
The two main types of shader programs are:
1) Vertex shader: runs once per vertex and handles vertex transformations (e.g., applying model/view/projection matrices).
2) Fragment shader: runs once per fragment (potential pixel) and determines the final color and lighting of each pixel.
In modern OpenGL, what are the two processors involved in running a graphics program, and what role does each play?
CPU (Central Processing Unit): runs your OpenGL program — sets up data, creates buffers, compiles shaders, and sends commands/data to the GPU.
GPU (Graphics Processing Unit): runs your GLSL shader programs — performs the actual vertex and fragment processing in parallel to render the scene.
These two programs operate mostly independently, with the CPU sending data and instructions, and the GPU doing the heavy graphical computation.
What is a Vertex Array Object (VAO), and why is it useful in OpenGL?
A Vertex Array Object (VAO) stores all the state needed to specify vertex data — that is, which buffers (like vertex and index buffers) to use and how the vertex attributes (position, normal, color, etc.) are laid out in memory.
It’s useful because it packages all the vertex configuration into a single object, so you don’t have to re-specify all the buffer bindings and attribute pointers every time you draw. You just bind the VAO once, and OpenGL knows how to render your geometry.
What is the difference between glBufferData and glBufferSubData? When would you use each?
Both glBufferData() and glBufferSubData() are used to put data into a buffer on the GPU, but they work slightly differently:
1) glBufferData(): Allocates memory and can optionally copy data into the buffer. It’s usually used when you first create the buffer or want to replace its entire contents.
2) glBufferSubData(): Copies new data into a portion of an existing buffer (it does not allocate new memory). You use it when you already have a buffer and just want to update part of it — for example, adding normals after the vertices.
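A sketch of the "normals after the vertices" pattern (not runnable on its own — it assumes an active GL context, a bound GL_ARRAY_BUFFER, and hypothetical `positions` / `normals` arrays):

```cpp
// Allocate room for positions AND normals up front, passing NULL for the data:
glBufferData(GL_ARRAY_BUFFER,
             sizeof(positions) + sizeof(normals),  // total size
             NULL,                                 // no data yet, just allocate
             GL_STATIC_DRAW);

// Fill the two regions separately with glBufferSubData (no reallocation):
glBufferSubData(GL_ARRAY_BUFFER, 0,                 sizeof(positions), positions);
glBufferSubData(GL_ARRAY_BUFFER, sizeof(positions), sizeof(normals),  normals);
```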
In the CPU–GPU workflow, what is the purpose of glVertexAttribPointer()?
glVertexAttribPointer() tells OpenGL how to interpret the data in a vertex buffer and link it to a variable (an attribute) in your vertex shader. For example, it defines:
- Which attribute (like vPosition, vNormal, or vColor) the data goes to.
- How many components each vertex has (e.g., 2 for position x/y, 3 for normal x/y/z).
- The data type (e.g., GL_FLOAT).
- How far apart each vertex’s data is in memory (the stride).
- Where in the buffer the data starts (the offset).
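A sketch for an interleaved [x, y, z, nx, ny, nz] layout (assumes an active GL context with a bound VBO; attribute locations 0 and 1 for vPosition/vNormal are assumptions):

```cpp
// Attribute 0 (vPosition): 3 floats, starting at byte 0 of each vertex.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE,
                      6 * sizeof(float),            // stride: bytes per vertex
                      (void*)0);                    // offset: positions start at 0
glEnableVertexAttribArray(0);

// Attribute 1 (vNormal): 3 floats, starting right after x, y, z.
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE,
                      6 * sizeof(float),
                      (void*)(3 * sizeof(float)));  // offset: skip the position
glEnableVertexAttribArray(1);
```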
What does the function glDrawElements() do, and why do we use indices with it instead of just drawing the vertex list directly?
glDrawElements() uses the currently bound VAO and index buffer to render shapes (triangles, lines, or points).
Why do we use indices (EBO):
Instead of listing every vertex multiple times, we can store each vertex once in a vertex array and then use an index array (in the element buffer) to say which vertices make up each triangle.
What’s the purpose of double buffering (using a front and back buffer) when rendering in OpenGL?
Double buffering means we use two image buffers:
- Front buffer: what’s currently being displayed on the screen.
- Back buffer: where OpenGL draws the next frame off-screen.
Once drawing is finished, the buffers are swapped — the back buffer becomes the new front buffer and is displayed, while the old front buffer is used for the next drawing cycle.
- Gives smooth, flicker-free animation.
In the first example, the GPU shader programs were stored in .vs and .fs files.
What do these file extensions stand for, and what does each type of shader do?
.vs → Vertex Shader
- Runs once per vertex.
- Handles transformations (like applying the model, view, and projection matrices).
- Outputs the transformed vertex position to the next pipeline stage.
.fs → Fragment Shader
- Runs once per fragment (potential pixel).
- Computes the final color and lighting effects that appear on the screen.
In Example 2, the triangle starts rotating.
Which GLM function is used to create the rotation transformation matrix, and what are its parameters?
glm::rotate() is used to construct a rotation transformation matrix. Parameters:
- Base matrix — usually glm::mat4(1.0f) (the identity matrix).
- Rotation angle — in radians.
- Rotation axis — a 3D vector indicating which axis to rotate around (e.g., (0,0,1) for the z-axis).
What kind of shader variable is used to send a transformation matrix (like model) from the CPU to the GPU, and which OpenGL function is used to upload it?
Transformation matrices (like model, view, or projection) are sent from the CPU to the GPU as uniform variables — because they have the same value for all vertices during a single draw call. (glUniformMatrix4fv)
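A minimal upload sketch (not runnable on its own — assumes a linked shader `program` declaring `uniform mat4 model;`, a GLM matrix `model` on the CPU side, an active GL context, and `<glm/gtc/type_ptr.hpp>` for glm::value_ptr):

```cpp
glUseProgram(program);                       // uniforms belong to a program
GLint loc = glGetUniformLocation(program, "model");
glUniformMatrix4fv(loc,
                   1,                        // one matrix
                   GL_FALSE,                 // no transpose: GLM is column-major
                   glm::value_ptr(model));   // pointer to the 16 floats
```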
In Example 3, lighting is introduced.
What new per-vertex data is added to support lighting calculations, and what does it represent?
In Example 3, the new per-vertex data is the normal vector at each vertex (the normal matrix comes later, in Example 4). A normal represents the direction perpendicular to the surface at that vertex. The fragment shader uses it to calculate how much light hits each point (via the dot product of the normal and the light direction).
In Example 3, the fragment shader computes the light intensity using dot(N, L). What does this operation represent physically?
dot(N, L) computes the diffuse lighting intensity — how much light strikes the surface, which depends on the angle between the surface and the light source (Lambert's cosine law).
- N = the normalized surface normal vector (the direction the surface is facing).
- L = the normalized light direction vector (the direction the light is coming from).
- The dot product gives the cosine of the angle between N and L: 1 when the light hits head-on, 0 when it grazes the surface, and negative (usually clamped to 0) when the light is behind it.
In Example 4, 3D viewing is introduced.
Which GLM function is used to create the perspective projection matrix, and what are its main parameters?
glm::perspective() builds the perspective projection matrix, which defines the viewing frustum — the 3D region visible to the camera. Its parameters are:
1) fov – the field of view angle (in radians).
2) aspect – the aspect ratio of the window (width / height).
3) near – the near clipping plane distance.
4) far – the far clipping plane distance.
Also in Example 4, the glm::lookAt() function is introduced. What does this function do, and what are its three main parameters?
glm::lookAt() constructs the view matrix, which defines the camera’s position and orientation in the scene — essentially, it moves the world so that the camera is at the origin looking down the -Z axis. Its parameters are:
1) Eye position → where the camera is located (glm::vec3(eyex, eyey, eyez)).
2) Center (look-at target) → the point the camera is looking at.
3) Up vector → defines which direction is “up” for the camera (usually (0, 1, 0)).
In Example 4, normals are transformed differently from vertices.
What matrix is used to correctly transform normal vectors, and why is it needed?
The normal matrix — a 3×3 matrix derived from the model–view transformation — is used to correctly transform normal vectors. Why we need it: normals are directions, not positions, so they must ignore translation; and under non-uniform scaling, transforming a normal with the plain model–view matrix would leave it no longer perpendicular to the surface. Using the inverse transpose of the model–view's upper-left 3×3 preserves perpendicularity.
Finally — when running shader programs, what happens if there’s a syntax or compilation error inside your vertex or fragment shader? Where do you see the error message?
If there’s a syntax or compilation error in your vertex or fragment shader, you won’t see the error in Visual Studio — the shaders are compiled at runtime by the GPU driver, not by the C++ compiler. To see the message, your program must query the compile status with glGetShaderiv(GL_COMPILE_STATUS), fetch the message with glGetShaderInfoLog(), and print it — typically to the console window while the program runs.
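The usual error-checking pattern right after glCompileShader() — a sketch, assuming a shader object `shader` and an active GL context:

```cpp
GLint ok = GL_FALSE;
glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
if (ok != GL_TRUE) {
    char log[1024];
    glGetShaderInfoLog(shader, sizeof(log), NULL, log);
    std::cerr << "shader compile error:\n" << log << "\n";  // shows in console
}
```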
What are a VBO, VAO, and EBO in OpenGL, and how do they work together when rendering a 3D object?
- VBO (Vertex Buffer Object): GPU memory holding the raw vertex data (positions, normals, colors).
- EBO (Element Buffer Object): GPU memory holding the indices that say which vertices form each triangle.
- VAO (Vertex Array Object): records which VBO/EBO are bound and how the attributes are laid out, so binding the VAO alone is enough to draw.
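A sketch of how the three objects are typically wired together (not runnable on its own — assumes an active GL context and hypothetical `vertices` / `indices` arrays; attribute location 0 is also an assumption):

```cpp
GLuint vao, vbo, ebo;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);                      // VAO starts recording state

glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);          // VBO: raw vertex data
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

glGenBuffers(1, &ebo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);  // EBO: triangle indices
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);                // describe the layout

// Later, once per frame:
glBindVertexArray(vao);                      // one bind restores everything
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);
```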
How does a uniform variable get its value?
Look up its location with glGetUniformLocation(), then upload the value with the matching glUniform*() call (e.g., glUniformMatrix4fv() for a mat4).
What is a view matrix?
The matrix that transforms world coordinates into camera (eye) space. glm::lookAt(eye, center, up) builds it.
What is a model-view matrix?
The product view * model. Combined with the projection matrix, it gives the final matrix sent to the shader to transform 3D vertices into screen space.
- model = object’s local transform (position, scale, rotation).
- view = camera transformation.
- projection = perspective.
Order matters — matrices apply right to left: projection * (view * model).
What is a normal matrix?
Used to correctly transform normal vectors (directions) instead of positions.
Normals shouldn’t be affected by translation, and must stay perpendicular to surfaces even under scaling or rotation.
It’s computed as the inverse transpose of the top-left 3×3 part of the Model–View matrix: normalMatrix = transpose(inverse(mat3(modelView))).
What are the projection and model matrices?
Projection: Projects 3D coordinates into 2D screen space — it defines the shape of the camera’s lens. It gives the scene depth perspective, making distant objects appear smaller.
Model: Moves an object from its local (object) coordinates into the world.
Every object starts centered at its own origin (0,0,0) — the model matrix tells OpenGL where to place it in the world, and how to scale or rotate it.