Graphics Flashcards

(230 cards)

1
Q

Define a vertex.

A

A 3d point in world coordinates.

2
Q

How do you rotate or scale about a point P=(a,b,c) instead of the origin?

A

Use the sandwich pattern:
T(a,b,c) M T(-a,-b,-c)
where M is the rotation or scaling matrix: translate P to the origin, apply M, then translate back.
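The sandwich pattern can be sketched in Python (illustrative only; the matrix helpers here are ad hoc, not from the course materials):

```python
import math

def mat_mul(A, B):
    """Multiply two 4x4 matrices (row-major nested lists)."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translate(a, b, c):
    return [[1, 0, 0, a], [0, 1, 0, b], [0, 0, 1, c], [0, 0, 0, 1]]

def rotate_z(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def apply(M, p):
    """Apply a 4x4 matrix to a homogeneous point [x, y, z, 1]."""
    return [sum(M[i][k] * p[k] for k in range(4)) for i in range(4)]

# Sandwich pattern: rotate 90 degrees about P = (1, 1, 0) instead of the origin.
P = (1, 1, 0)
M = mat_mul(translate(*P),
            mat_mul(rotate_z(math.pi / 2), translate(-P[0], -P[1], -P[2])))

# The point (2, 1, 0) lies one unit to the right of P,
# so after a 90-degree rotation it ends up one unit above P.
result = apply(M, [2, 1, 0, 1])
```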

3
Q

How do you write points and vectors in homogeneous coordinates?

A

Point (x,y,z): [x y z 1]^T
Direction vector (x,y,z):
[x y z 0]^T

4
Q

If you write M=ABC, which matrix is applied first?

A

The rightmost matrix acts first.

5
Q

Define a primitive.

A

A point, line or triangle built from vertices.

5
Q

How do you invert a model matrix made from translations, rotations, and scalings?

A

If a model matrix is built as a product of affine transforms
M = A B C D, then its inverse is
M^-1 = D^-1 C^-1 B^-1 A^-1.
Use these rules:
1. Translation: T(t_x, t_y, t_z)^-1 = T(-t_x, -t_y, -t_z)
2. Rotation: R(theta)^-1 = R(-theta)
3. Scaling: S(s_x, s_y, s_z)^-1 = S(1/s_x, 1/s_y, 1/s_z)
4. A transform about a point P inverts as:
(T(P) M T(-P))^-1 = T(P) M^-1 T(-P)
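The reversal rule can be checked numerically; a minimal Python sketch (helper functions are my own, not from the course):

```python
import math

def mat_mul(A, B):
    """Multiply two 4x4 matrices (row-major nested lists)."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translate(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def rotate_z(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def scale(sx, sy, sz):
    return [[sx, 0, 0, 0], [0, sy, 0, 0], [0, 0, sz, 0], [0, 0, 0, 1]]

# M = T R S, so by the reversal rule M^-1 = S^-1 R^-1 T^-1,
# with each factor inverted by its own rule.
M = mat_mul(translate(1, 2, 3), mat_mul(rotate_z(0.7), scale(2, 4, 5)))
M_inv = mat_mul(scale(1/2, 1/4, 1/5),
                mat_mul(rotate_z(-0.7), translate(-1, -2, -3)))

identity_check = mat_mul(M, M_inv)  # should be (numerically) the identity
```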

6
Q

Define a fragment.

A

The 3D counterpart of a pixel, produced by the rasteriser; it carries the same attributes as a pixel (position, RGBA colour) plus depth and other interpolated values.

7
Q

Define pixel.

A

A 2D point on the display, arranged in a grid, with an RGBA colour.

8
Q

State the geometric pipeline.

A
  1. Vertex
  2. Vertex processor
  3. Clipper and primitive assembler
  4. Rasterizer
  5. Fragment processor
  6. Pixels
9
Q

At what point in the geometric pipeline does the image go from 3D to the desired 2D state?

A

After the fragment processor step. The image is now made of pixels rather than vertices and primitives.

10
Q

What is the vertex processor step?

A
  • Takes each vertex one at a time.
  • Applies model, view, and projection transforms.
  • Computes normals, lighting inputs, texture coords.
  • Outputs the vertex in clip space for the next pipeline stage.
11
Q

What are the coordinate transformations done during vertex processing?

A

World coords -> camera coords -> display coords

12
Q

What is the clipper and primitive assembler step in geometric pipelines?

A
  • Primitive assembler: groups vertices into triangles/lines/points.
  • Clipper: removes parts outside the view frustum.
  • Result: only visible primitives go to rasterisation.
13
Q

What is rasterisation?

A
  • Takes the primitives.
  • Converts into fragments which are in image coordinates.
  • Will later be shaded in the fragment shader.
14
Q

What is fragment processing?

A
  • Generates a 2D raster image.
  • Eliminates hidden surfaces, alters fragment colours, and blends pixel colours.
15
Q

What are raster images?

A

Images made out of pixels.

16
Q

What is a pinhole camera?

A

A simple camera model in which light passes through a single small hole onto an image plane, capturing photographs in a similar manner to the human eye.

17
Q

Define the camera model.

A
  • Position
    The camera’s location (centre of projection).
  • Orientation
    Which way the camera is pointing.
  • Focal length
    Controls how “zoomed in” the view is.
  • Film plane / image plane
    The surface where the 3D scene is projected.
18
Q

What is focal length?

A

Controls the portion of the world the camera sees: a longer focal length narrows the field of view (zooms in).

19
Q

What is film plane?

A

The plane the image is projected onto; its width and height determine the extent of the image.

20
Q

What are light sources defined by?

A

Location
Strength
Color
Directionality

21
Q

What is used to denote scalars?

A

Greek letters: 𝛼, 𝛽, 𝛾, …

22
Q

What is used to denote points?

A

Uppercase letters: P, Q, R, …

23
Q

What is used to denote vectors?

A

Lowercase letters: u, v, w, …

24
What is used to denote matrices?
Bold letters: 𝑻, 𝑺, 𝑹, 𝑴, …
25
What are scalars?
Quantities defined by a single number (specifically the real numbers when used in graphics). Used to manipulate points and vectors: think negation, addition, and multiplication.
26
What is the additive identity element?
0.
27
What is the multiplicative identity element?
1.
28
What are vectors?
Vectors are used to represent points and line segments. Think of a vector as a directed line segment: the offset between two points. Two vectors can only be added or subtracted if their dimensions match.
29
What is vector transposition?
Turns row vectors into column vectors and vice versa.
30
What is inner/dot product?
Vector multiplication that produces a scalar: u · v = u₁v₁ + u₂v₂ + … Dimensions must agree.
31
What is outer/cross product?
a × b creates a vector of the same dimension as a and b, perpendicular to both (defined in 3D).
32
How do you find the length of a vector?
For a vector v = (x, y, z):
- Pythagoras: |v| = √(x² + y² + z²)
- Dot product: v · v = x² + y² + z², so |v| = √(v · v)
33
How do you find the normal of a plane?
- Pick three points on the plane: A, B, C.
- Form two vectors: u = B − A, v = C − A
- Take the cross product: n = u × v
This gives a vector perpendicular to the plane.
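The three steps above translate directly to code; a minimal Python sketch (helper names are my own):

```python
def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def plane_normal(A, B, C):
    """Normal of the plane through three points: n = (B - A) x (C - A)."""
    return cross(sub(B, A), sub(C, A))

# Three points in the z = 0 plane give a normal along the z axis.
n = plane_normal((0, 0, 0), (1, 0, 0), (0, 1, 0))
```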
34
What are affine transformations?
- Transformations that preserve parallel lines.
- Add a fourth number to our 3D representation, where a 1 marks a point and a 0 a vector (homogeneous coordinates), allowing us to differentiate them.
- Points have a location (an offset from the origin), whereas vectors do not.
35
Is matrix multiplication associative? Commutative?
1. Associative: ABC = (AB)C = A(BC)
2. Not commutative: AB != BA
36
How do we write chained transformations?
- Written "backwards": the first transformation to be applied is the one closest to the object being transformed.
- e.g. P' = c3 c2 c1 P: c1 is applied first. We can first compute c2 c1, then transform by c3.
37
What is the translation matrix?
T =
[1 0 0 a_x]
[0 1 0 a_y]
[0 0 1 a_z]
[0 0 0 1]
Adds an offset vector [a_x a_y a_z]^T to a point P.
38
What is the scaling matrix?
S =
[b_x 0 0 0]
[0 b_y 0 0]
[0 0 b_z 0]
[0 0 0 1]
- Changes each point with respect to the origin.
- Uniform scaling is when b_x = b_y = b_z; it preserves shape (though it is not a rigid-body transformation, since distances change).
39
What is the rotation matrix for the x axis?
Rx =
[1 0 0 0]
[0 cos𝜃 −sin𝜃 0]
[0 sin𝜃 cos𝜃 0]
[0 0 0 1]
40
What is the ry matrix?
Ry =
[cos𝜃 0 sin𝜃 0]
[0 1 0 0]
[−sin𝜃 0 cos𝜃 0]
[0 0 0 1]
41
What is the rz matrix?
Rz =
[cos𝜃 −sin𝜃 0 0]
[sin𝜃 cos𝜃 0 0]
[0 0 1 0]
[0 0 0 1]
42
What is the shear matrix?
Hx =
[1 cot𝜃 0 0]
[0 1 0 0]
[0 0 1 0]
[0 0 0 1]
43
What is the projection plane?
The plane located in front of the camera onto which the object is projected.
44
What is COP?
- Centre of Projection.
- It locates the camera.
- Light passes through the projection plane to the COP.
45
What is DOP?
- Direction of Projection.
- A specified direction: a direct line from each point to the COP.
46
What are the types of projections?
1. Parallel
- Parallel lines remain parallel after projection.
- COP infinitely far from the projection plane.
- e.g. orthographic, axonometric and oblique.
2. Perspective
- Parallel lines may not remain parallel after projection.
- COP located at a finite distance from the projection plane.
- e.g. one-point, two-point and three-point perspective, with three-point being the most widely used type.
47
In computer-based viewing, what do we align the camera with?
- World axes: objects are positioned relative to each other in world coordinates.
- We then position a camera freely in the same coordinates and point it in some direction.
48
Clipping is an important part of the rendering pipeline. Clipping is trivial to do after rasterisation, but we do it before rasterisation. Why?
Clipping before rasterisation means the rasteriser never generates fragments for geometry outside the view volume, which saves computation; it also simplifies the pipeline, since later stages can assume every primitive is at least partly visible.
49
What are normalised device coordinates (NDC) and what is their range?
WebGL and similar APIs represent visible space in terms of normalised coordinates that range between -1 and 1 in each dimension.
50
When would you use left-handed coordinates?
Left-handed coordinates are traditionally used for computer graphics, and are needed for hardware operations such as z-buffer.
51
When would you use right-handed coordinates?
Right-handed coordinates are often used by convention in geometry and physics, and many people prefer to work in them.
52
What is an attribute in GLSL?
- A per‑vertex input to the vertex shader.
- Different for each vertex (e.g., position, normal, UV).
- Set by the CPU before drawing.
- Used only in the vertex shader.
53
What is a varying in GLSL?
- Data passed from the vertex shader to the fragment shader.
- The GPU interpolates it across the primitive.
- Used for things like interpolated colour, UVs, normals.
- Read‑only in the fragment shader.
54
What is a uniform in GLSL?
- A read‑only variable shared by all vertices and all fragments.
- Set once per draw call by the CPU.
- Used to pass world information into shaders.
55
What is the straight-line distance between P1 and P2?
v = P2 − P1 = [−1, 3, 0, 1]^T − [2, 2, 1, 1]^T = [−3, 1, −1, 0]^T
d = ||v|| = √(v · v) = √((−3)² + 1² + (−1)² + 0²) = √11 ≈ 3.3166
56
Homogeneous vs Cartesian forms.
Homogeneous is an extended representation of Cartesian coordinates that adds an extra dimension (called the homogeneous coordinate).
57
How do you convert between homogeneous and Cartesian coordinates?
Cartesian (x, y) → Homogeneous (x, y, 1)
Homogeneous (x, y, w) → Cartesian (x/w, y/w)
58
How do you calculate the area of a triangle given three points (P1 P2 P3) in 3D space?
1. Compute two side vectors: s1 = P2 − P1, s2 = P3 − P1
2. Take the cross product: c = s1 × s2
3. Find its magnitude: ||c|| = √((c_x)² + (c_y)² + (c_z)²)
4. Divide by 2 to get the triangle's area: A = 0.5||c||
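The four steps above, as a minimal Python sketch (helper names are my own):

```python
import math

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def triangle_area(P1, P2, P3):
    """Half the magnitude of the cross product of two side vectors."""
    c = cross(sub(P2, P1), sub(P3, P1))
    return 0.5 * math.sqrt(c[0] ** 2 + c[1] ** 2 + c[2] ** 2)

# A right triangle with two unit-length legs has area 1/2.
area = triangle_area((0, 0, 0), (1, 0, 0), (0, 1, 0))
```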
59
Define a 4 × 4 matrix T that moves P1 by two units along the x axis.
To move a point by +2 units along the x‑axis, the translation matrix T is:
T =
[1 0 0 2]
[0 1 0 0]
[0 0 1 0]
[0 0 0 1]
60
Define a 4 × 4 matrix R that rotates P1 around the z axis by π/4.
To rotate a point P1 around the z‑axis by an angle of π/4 (45°), we use the standard homogeneous 4×4 rotation matrix:
Rz =
[cos(π/4) −sin(π/4) 0 0]
[sin(π/4) cos(π/4) 0 0]
[0 0 1 0]
[0 0 0 1]
61
Calculate a 4 × 4 matrix M1 that combines the two operations (translation followed by rotation).
M1 = RT =
[cos𝜃 −sin𝜃 0 2cos𝜃]
[sin𝜃 cos𝜃 0 2sin𝜃]
[0 0 1 0]
[0 0 0 1]
(with 𝜃 = π/4)
62
What is 𝑀𝑐?
𝑀𝑐 is the camera transform - it describes how to rotate/translate the camera itself in world space so that it points at the target. It’s essentially the camera’s pose. In rendering, we don’t move the camera. Instead, we transform the entire scene so that it looks as if the camera moved. This is called the view transform 𝑀𝑣.
63
Calculate a camera matrix Mc that will rotate the camera so that it points at P1 = [2, 1, 0]T.
By convention, the camera starts at the origin looking down the negative z‑axis. Because the target lies in the x–y plane, two rotations suffice to point the camera exactly at (2, 1, 0):
- R_x: tilt forward into the x–y plane
- R_z: rotate within that plane toward the target
Final rotation: M_c = R_z R_x
64
How do we get the view matrix from the camera matrix?
𝑀𝑣 = 𝑀𝑐^−1
65
How do we build a camera matrix that moves the camera to (0,−1,0) and makes it look at 𝑃1=(2,1,0)?
1. Rotate: align the camera's forward direction with the vector toward P1. This fixes the orientation.
2. Translate: move the camera to (0, −1, 0). This fixes the position.
Final matrix: M_c = T R, where
- R = rotation that makes the camera look at P1
- T = translation to (0, −1, 0)
66
How do we find the location of 𝑃2=(−10,1,0)𝑇 in camera coordinates, given the camera at 𝑃𝑐=(0,−1,0)𝑇 pointing at 𝑃1?
The camera matrix M_c describes how the camera is moved in world space. To transform world points into camera space, we need the view matrix:
M_v = M_c^−1
Then multiply the world point by M_v.
67
How do we build the perspective projection matrix P for a frustum?
We use the standard off‑center frustum matrix (right‑handed, OpenGL‑style). It scales x and y based on the near plane and the frustum width/height, and maps z into clip space.
P =
[2·near/(right−left)   0   (right+left)/(right−left)   0]
[0   2·near/(top−bottom)   (top+bottom)/(top−bottom)   0]
[0   0   −(far+near)/(far−near)   −2·far·near/(far−near)]
[0   0   −1   0]
68
How to know if points will get removed during clipping?
Any point that has a coordinate greater than 1 or less than -1 in any dimension will get removed during clipping.
69
What is the equation to normalize a vector?
Normalisation means making a vector v = (x, y, z) have length 1 (a unit vector). We do this by dividing the vector by its own magnitude:
||v|| = √(x² + y² + z²)
v_norm = v/||v|| = (x/||v||, y/||v||, z/||v||)
This keeps the direction the same but scales the length to 1.
70
How do we check that point 𝑃4=(−0.25,0,−0.5)𝑇 lies on the triangle defined by 𝑃1,𝑃2,𝑃3?
If P4 is on the triangle's surface, its normal will match the triangle's normal. We can check this in two ways:
1. Compute a new normal using P1, P2, P4 and check that it matches the original triangle's normal.
2. Verify that the vector from P2 to P4 is orthogonal to the original triangle's normal: a dot product of 0 confirms orthogonality.
71
How do we calculate the normalized light vector 𝑙 that points from a surface point 𝑃4 toward a light source at 𝑃𝑙=(1,1,1)𝑇?
The light vector l points from the surface point toward the light source:
l = P_l − P4
We then normalise l so its length is 1. Normalisation makes dot products efficient and avoids computing cosines directly.
72
Calculate the vector v that points from P4 toward the camera (Pc).
The view vector v points from the surface point toward the camera position:
v = P_c − P4
As with light vectors and normals, we normalise v so its length is 1.
73
How do we calculate the perfect reflection vector 𝑟 of a light source at 𝑃𝑙, reflected at point 𝑃4?
The reflection vector r is the direction a light ray takes after bouncing off a surface.
Formula (Phong model): r = 2(l · n)n − l
Requires normalised vectors:
- l: light direction (from P4 to P_l)
- n: surface normal at P4
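The reflection formula is short enough to sketch directly; a minimal Python example (function name is my own):

```python
import math

def reflect(l, n):
    """Phong reflection: r = 2(l . n)n - l, with l and n unit vectors."""
    d = l[0] * n[0] + l[1] * n[1] + l[2] * n[2]
    return (2 * d * n[0] - l[0], 2 * d * n[1] - l[1], 2 * d * n[2] - l[2])

# Light arriving at 45 degrees onto a surface with upward normal (0, 1, 0)
# is mirrored about the normal.
s = 1 / math.sqrt(2)
r = reflect((s, s, 0), (0, 1, 0))
```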
74
How do we calculate the diffuse light intensity 𝐼𝑑 from point 𝑃4 reaching the camera?
The diffuse term uses the Lambertian model: intensity scales with cos 𝜃 = l · n. Distance falloff is controlled by (a + bd + cd²). Requires normalised l and n so the dot product equals the cosine.
I_d = (k_d L_d / (a + b·d + c·d²)) · max(l · n, 0)
75
How do we calculate the specular light intensity 𝐼𝑠 reflected toward the camera from 𝑃4, with shininess coefficient 𝛼=20?
Specular reflection models the "mirror‑like" highlight.
Formula (Phong model): I_s = k_s L_s (r · v)^𝛼
Requires normalised vectors:
- r: reflection vector
- v: view vector
The dot product r · v gives the cosine of the angle between them.
76
Where do we calculate illumination in WebGL for diffuse/specular light, Gouraud shading, and Phong shading?
- Diffuse/Specular: either vertex or fragment shader.
- Gouraud shading: vertex shader → illumination at vertices, rasteriser interpolates.
- Phong shading: fragment shader → illumination per fragment using interpolated normals.
77
What texture coordinates (s, t) do we use for: (a) top‑left of a cube face, (b) one‑third down a cylinder, (c) bottom of a diamond?
1. Cube top‑left (mathematical convention, origin at bottom‑left): (s, t) = (0, 1). Bottom‑right of the same face: (1, 0).
2. Cylinder, one‑third down from the top: s spans 0→1 around the circumference; t measures height (0 bottom, 1 top). One‑third down from the top = two‑thirds up from the bottom, so t = 2/3, with s any value in [0, 1] (depends on where around the wrap).
3. Diamond bottom (pole): pole mapping is degenerate; a naive mapping gives t = 0, s arbitrary. Better in practice: add vertices near the pole and split seams to avoid distortion.
78
How are fractional texture coordinates handled in WebGL?
- Nearest texel mapping: round to the nearest integer texel (e.g., (102.4, 682.7) → (102, 683)).
- Bilinear filtering: interpolate between the four nearest texels using weighted averages of the colour components.
79
What extra textures make a surface look more 3D?
- Bump map: grayscale image → height per texel → adjusts normals indirectly.
- Normal map: RGB image → normals per texel → direct control of surface orientation.
- Parallax map: bump map (height) + normal map → shifts texel lookups for a depth illusion.
80
Why is normal mapping preferable to bump mapping in hardware rendering?
- Bump mapping: normals recalculated per fragment in real time → expensive.
- Normal mapping: normals pre‑computed and stored in a texture → faster, avoids the runtime cost.
81
How does the Cohen–Sutherland algorithm decide if a line requires clipping, and how do we know which edges to clip against?
- Outcodes: each endpoint gets a 4‑bit code → (top, bottom, right, left).
- Trivial accept: if both outcodes = 0000 → line fully inside → no clipping.
- Trivial reject: if (outcode1 & outcode2) ≠ 0 → both points outside the same boundary → discard the line.
- Clipping needed: if the outcodes differ and (outcode1 & outcode2) = 0 → the line crosses a boundary → clip.
- Which edges: bits set to 1 in an outcode show which boundaries (top, bottom, left, right) the point lies outside. Those edges are used for clipping.
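The outcode tests can be sketched in a few lines of Python (the bit layout and window values are illustrative choices, not from the course):

```python
# Bit layout chosen for this sketch: top=8, bottom=4, right=2, left=1.
TOP, BOTTOM, RIGHT, LEFT = 8, 4, 2, 1

def outcode(x, y, xmin, ymin, xmax, ymax):
    code = 0
    if y > ymax: code |= TOP
    if y < ymin: code |= BOTTOM
    if x > xmax: code |= RIGHT
    if x < xmin: code |= LEFT
    return code

def classify(p1, p2, window=(0, 0, 10, 10)):
    """Trivial accept / trivial reject / needs clipping, via outcode bit tests."""
    xmin, ymin, xmax, ymax = window
    c1 = outcode(*p1, xmin, ymin, xmax, ymax)
    c2 = outcode(*p2, xmin, ymin, xmax, ymax)
    if c1 == 0 and c2 == 0:
        return "accept"   # both endpoints inside the window
    if c1 & c2:
        return "reject"   # both outside the same boundary
    return "clip"         # the line may cross the window
```

For example, a segment entirely inside the window is accepted, one entirely to the left is rejected, and one crossing the left edge needs clipping.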
82
How do we calculate clipped points using a parametric line representation?
Represent the line segment with parameter 0 ≤ 𝛼 ≤ 1:
P(𝛼) = (1 − 𝛼)P1 + 𝛼P2
To clip against a boundary (e.g., x = x_min or y = y_max):
𝛼 = (boundary − coordinate of P1) / (coordinate of P2 − coordinate of P1)
Plug 𝛼 back into P(𝛼) to get the intersection point. Repeat for each boundary → new clipped endpoints (e.g., P3, P4).
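The parametric intersection can be computed directly; a minimal Python sketch (function names and the example segment are my own):

```python
def clip_alpha(p1, p2, boundary, axis):
    """Parameter alpha where P(a) = (1-a)P1 + aP2 crosses a boundary.

    axis = 0 for a vertical boundary (x = const), 1 for a horizontal one.
    """
    return (boundary - p1[axis]) / (p2[axis] - p1[axis])

def point_at(p1, p2, a):
    return ((1 - a) * p1[0] + a * p2[0], (1 - a) * p1[1] + a * p2[1])

# Segment from (-2, 0) to (2, 4) clipped against the boundary x = 0:
p1, p2 = (-2.0, 0.0), (2.0, 4.0)
a = clip_alpha(p1, p2, 0.0, axis=0)   # (0 - (-2)) / (2 - (-2)) = 0.5
p3 = point_at(p1, p2, a)              # the intersection point on x = 0
```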
83
How does a pipeline clipper work to clip a line segment?
- Consists of four sequential clippers: top, bottom, right, left.
- Each clipper checks whether an endpoint lies outside its boundary: if yes → compute the intersection point and replace the endpoint; if no → pass the line unchanged.
- After all four stages, the line is clipped to the window; it is always tested against all edges in sequence.
84
How do we convert NDC coordinates to integer fragment positions in image space?
Step 1 – Normalise: shift and scale NDC [−1, 1] into [0, 1]:
P_norm = P_NDC/2 + (0.5, 0.5)
Step 2 – Scale to resolution: multiply by the image size (W, H):
P_image = P_norm ⊙ (W, H)
Step 3 – Integer positions: apply floor to get pixel indices (0 to W−1, 0 to H−1).
Convention: mathematically y points up; in image coordinates y points down → flip after mapping if needed.
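The three steps above, as a minimal Python sketch (the clamp at the upper edge is my own addition, since NDC value +1.0 would otherwise map to index W):

```python
import math

def ndc_to_pixel(ndc, width, height):
    """Map NDC in [-1, 1] to integer pixel indices in [0, W-1] x [0, H-1]."""
    px = math.floor((ndc[0] / 2 + 0.5) * width)
    py = math.floor((ndc[1] / 2 + 0.5) * height)
    # The edge value +1.0 would map to W (or H), so clamp into range.
    return (min(px, width - 1), min(py, height - 1))

corner = ndc_to_pixel((-1.0, -1.0), 100, 100)
centre = ndc_to_pixel((0.0, 0.0), 100, 100)
edge = ndc_to_pixel((1.0, 1.0), 100, 100)   # clamped to the last pixel
```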
85
What is ⊙?
Component‑wise multiplication of vectors/matrices (not dot product, not matrix multiplication).
Example: (a, b) ⊙ (x, y) = (a·x, b·y)
In graphics: used to scale coordinates by resolution or apply per‑component operations.
86
How do we rasterise a line segment using the DDA algorithm?
DDA rasterises by stepping along the dominant axis and incrementing the other proportionally to the slope, rounding to integer pixel positions at each step.
Step 1 – Compute the slope: m = (y2 − y1)/(x2 − x1)
Step 2 – Choose the driving axis: if |m| ≤ 1, step in x by 1 pixel and compute Δy = m·Δx; if |m| > 1, step in y by 1 pixel and compute Δx = (1/m)·Δy.
Step 3 – Iterate: start at (x1, y1), increment by the chosen step until reaching (x2, y2), and round each computed coordinate to the nearest integer pixel position.
Step 4 – Result: a sequence of integer pixel coordinates approximating the line, which can be verified visually with a sketch.
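For the |m| ≤ 1 case, the stepping loop is short; a minimal Python sketch (the example endpoints are my own):

```python
def dda_line(x1, y1, x2, y2):
    """Rasterise a line with |slope| <= 1 by stepping one pixel in x."""
    m = (y2 - y1) / (x2 - x1)
    pixels = []
    y = float(y1)
    for x in range(x1, x2 + 1):
        pixels.append((x, int(y + 0.5)))  # round y to the nearest pixel row
        y += m                            # increment y by the slope per x-step
    return pixels

# Gentle slope m = 0.5 from (0, 0) to (4, 2):
line = dda_line(0, 0, 4, 2)
```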
87
How does the winding number test determine if a point is inside a triangle (or polygon)?
Step 1 – Place the viewpoint: imagine standing at the test point, looking toward one vertex.
Step 2 – Traverse the polygon edges: follow the vertices in order (clockwise or counter‑clockwise).
Step 3 – Measure rotations: for each edge, calculate the angle needed to rotate from one vertex direction to the next. Counter‑clockwise rotation → positive angle; clockwise rotation → negative angle.
Step 4 – Sum the angles: total = 0 → point is outside; total = ±2π → point is inside.
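The signed-angle summation can be sketched with atan2; a minimal Python example (not from the course, and the square polygon is my own test case):

```python
import math

def winding_angle(point, polygon):
    """Sum of signed angles from the point to consecutive polygon vertices."""
    total = 0.0
    n = len(polygon)
    for i in range(n):
        ax = polygon[i][0] - point[0]
        ay = polygon[i][1] - point[1]
        bx = polygon[(i + 1) % n][0] - point[0]
        by = polygon[(i + 1) % n][1] - point[1]
        # Signed angle between the two direction vectors (CCW positive).
        total += math.atan2(ax * by - ay * bx, ax * bx + ay * by)
    return total

square = [(0, 0), (2, 0), (2, 2), (0, 2)]        # counter-clockwise
inside = winding_angle((1, 1), square)            # approx +2*pi
outside = winding_angle((5, 5), square)           # approx 0
```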
88
What totals can the winding number test yield for a triangle versus a general polygon?
Triangles: only ever yield 0 (outside) or ±2π (inside). Complex polygons: may yield multiples of 2π depending on the winding.
89
How can we represent a human arm (shoulder → fingertips) using a tree structure?
Arm
└── Shoulder
    └── Upper Arm
        └── Elbow
            └── Forearm
                └── Wrist
                    ├── Thumb
                    │   └── Thumb Tip
                    ├── Index Finger
                    │   └── Index Tip
                    ├── Middle Finger
                    │   └── Middle Tip
                    ├── Ring Finger
                    │   └── Ring Tip
                    └── Little Finger
                        └── Little Tip
90
What geometry information must be stored at a scene graph node?
- Vertex array → defines the object's shape
- Index array → defines primitives (triangles, lines, etc.)
91
What attribute information must be stored at a scene graph node?
- Colours (per vertex or per fragment)
- Normals (for lighting calculations)
- Texture coordinates (for mapping images to surfaces)
92
What texture information must be stored at a scene graph node?
- Texture images (diffuse maps)
- Normal maps (surface detail)
- Bump maps (height displacement)
93
What transformation information must be stored at a scene graph node?
Model matrix M_model:
- Position
- Rotation
- Scale
94
What other information might be stored at a scene graph node?
- Material properties (shininess, reflectivity, transparency)
- Shader references (programs for rendering)
- Child node links (hierarchical structure for complex models)
95
How can we represent the Sun and the first four planets in a tree structure?
Solar System
└── Sun
    ├── Mercury
    ├── Venus
    ├── Earth
    │   └── Moon
    └── Mars
        └── Moons
            ├── Phobos
            └── Deimos
96
What is an appropriate model matrix for a moon node?
Model matrix M_moon:
- Built from the parent planet's transform (position, rotation, scale).
- Apply a translation to place the moon at its orbital distance from the planet.
- Apply a rotation to simulate the orbit around the planet.
- Apply scaling to set the moon's size relative to the planet.
Final matrix: M_moon = M_planet · T_orbit · R_orbit · S_moon
97
How does the hierarchical structure help (moon example)?
Hierarchical structure benefits:
1. Inheritance: the moon automatically follows the planet's movement (e.g., if the planet orbits the Sun, the moon follows).
2. Local transforms: the moon's orbit and rotation are defined relative to the planet, not global space.
3. Efficiency: changes propagate down the tree; update the planet once and all child moons update consistently.
4. Flexibility: easy to add/remove moons or adjust orbits without recalculating global positions.
98
What is an OBJ file and what does it contain?
Purpose: standard text format for 3D models.
Geometry:
- v → vertices (x, y, z)
- vt → texture coordinates (s, t)
- vn → normals (x, y, z)
Structure:
- f → faces (indices into the v, vt, vn arrays)
- o → object name
Materials:
- mtllib → external material file (.mtl)
- usemtl → select material
Other:
- s → smooth shading (on/off)
99
How can we tell if a texture is meant to be a cubemap?
- Texture coordinates: when plotted onto a 1×1 square, they form an unwrapped cube layout.
- Pattern: the coordinates correspond to six square faces arranged in a cross‑like, unfolded pattern.
- Clue: more texture coordinates than vertices → indicates reuse across cube faces.
- Result: each face of the cube is assigned a portion of the texture, confirming it's a cubemap.
100
How does Euler’s method estimate particle positions over time?
- Assume velocity and acceleration stay constant during each timestep h.
- Update position: P(t + h) = P(t) + h·v(t)
- Update velocity: v(t + h) = v(t) + h·a(t)
- Repeat frame by frame → approximate motion.
101
What information defines the initial state of particles in Euler’s method?
- Particle positions (e.g., A_x, A_y, B_x, B_y)
- Velocities (Ȧ_x, Ȧ_y, Ḃ_x, Ḃ_y)
- Forces acting (e.g., spring force)
- Masses of the particles → acceleration = force ÷ mass
102
How is spring force between two particles calculated?
f = −k_s · (|d| − s) · d/|d|
- d = displacement vector between the particles
- |d| = distance
- s = resting length
- Equal and opposite forces act on each particle.
103
How do we calculate acceleration from spring force?
a = f/m
- Lighter particle → larger acceleration
- Heavier particle → smaller acceleration
- Acceleration direction = same as the force direction
104
How are positions and velocities updated each frame in Euler’s method?
- Position: P(t+1) = P(t) + v(t)·h
- Velocity: v(t+1) = v(t) + a(t)·h
- Errors accumulate because Euler uses start‑of‑frame values only.
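The spring force and Euler update from the last few cards combine into a short simulation step; a minimal Python sketch (the particle setup and constants are my own, for illustration):

```python
def spring_force(pa, pb, ks, rest):
    """Spring force on particle A from a spring to B: f = -ks(|d| - s) d/|d|."""
    dx, dy = pa[0] - pb[0], pa[1] - pb[1]
    dist = (dx * dx + dy * dy) ** 0.5
    scale = -ks * (dist - rest) / dist
    return (scale * dx, scale * dy)

def euler_step(pos, vel, force, mass, h):
    """One Euler step: position uses v(t), velocity uses a(t) = f/m."""
    new_pos = (pos[0] + h * vel[0], pos[1] + h * vel[1])
    new_vel = (vel[0] + h * force[0] / mass, vel[1] + h * force[1] / mass)
    return new_pos, new_vel

# A particle at rest, 2 units from an anchored particle; resting length 1.
# The stretched spring pulls it in the -x direction.
A, vA = (2.0, 0.0), (0.0, 0.0)
f = spring_force(A, (0.0, 0.0), ks=10.0, rest=1.0)
A, vA = euler_step(A, vA, f, mass=1.0, h=0.1)
```

Note how the position does not move on the first step (the start-of-frame velocity was zero) even though the velocity changes: exactly the source of Euler's accumulating error.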
105
What does an interpolating cubic polynomial curve look like when defined by four points?
Definition: an interpolating cubic polynomial passes through all given control points.
Shape:
- Smooth curvature (no sharp corners).
- Continuous slope and tangent direction.
- The curve bends naturally to connect each point.
Properties:
- A degree‑3 polynomial can flex enough to fit four points exactly.
- Unlike Bézier curves (which approximate), interpolating curves guarantee exact passage through each control point.
Sketch (conceptual): plot the four points; the curve smoothly "weaves" through them in order, adjusting its slope at each point to maintain continuity. No straight line segments: always gently curved.
106
How is the control point matrix 𝑝 defined for a cubic interpolating curve in 3D?
The control point matrix compactly stores the 3D coordinates of the four points used to define the cubic curve.
Structure:
- p is a 4×3 matrix.
- Each row = one control point in 3D space.
- Each column = the x, y, or z coordinates.
General form:
p =
[P1x P1y P1z]
[P2x P2y P2z]
[P3x P3y P3z]
[P4x P4y P4z]
107
How do we calculate the coefficient matrix 𝑐 for an interpolating cubic curve?
The coefficient matrix c is obtained by multiplying the interpolating geometry matrix M_I with the control point matrix p. This transforms control points into polynomial coefficients that generate the smooth interpolating curve.
Step 1 – Define the control points: store the 4 points in the 4×3 matrix p.
Step 2 – Use the interpolating geometry matrix M_I: a fixed 4×4 matrix that encodes the interpolation weights.
Step 3 – Multiply: c = M_I · p. The result c is a 4×3 matrix; each column holds the polynomial coefficients for one of the x, y, z components.
Step 4 – Interpretation: c defines the cubic polynomial curve in parametric form, used to evaluate curve points for parameter u ∈ [0, 1].
108
How do we calculate the position of a point at parameter 𝑢 along a cubic interpolating curve?
Evaluating a cubic curve at parameter u means multiplying the polynomial basis [1, u, u², u³] with the coefficient matrix c. This yields the interpolated coordinates of the point on the curve.
Step 1 – Formula: P(u) = [1 u u² u³] · c, where c is the 4×3 coefficient matrix.
Step 2 – Input: choose a parameter u ∈ [0, 1]. Example: u = 0.25 → one quarter along the curve.
Step 3 – Multiply: the row vector [1, u, u², u³] times c gives the [x, y, z] coordinates of the curve point.
Step 4 – Interpretation: the output is the exact interpolated position at that fraction of the curve; works for any u between 0 and 1.
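The c = M_I · p and P(u) = [1, u, u², u³] · c steps can be sketched together. The M_I below is the standard interpolating geometry matrix for parameter values u = 0, 1/3, 2/3, 1 (assumed here; other parameterisations give a different M_I), and the control points are my own example:

```python
# Interpolating geometry matrix for u = 0, 1/3, 2/3, 1.
M_I = [
    [1.0, 0.0, 0.0, 0.0],
    [-5.5, 9.0, -4.5, 1.0],
    [9.0, -22.5, 18.0, -4.5],
    [-4.5, 13.5, -13.5, 4.5],
]

def coefficients(points):
    """c = M_I . p for a 4x3 control point matrix p."""
    return [[sum(M_I[i][k] * points[k][j] for k in range(4)) for j in range(3)]
            for i in range(4)]

def evaluate(c, u):
    """P(u) = [1, u, u^2, u^3] . c"""
    basis = [1.0, u, u * u, u ** 3]
    return [sum(basis[i] * c[i][j] for i in range(4)) for j in range(3)]

points = [[0, 0, 0], [1, 2, 0], [2, 2, 0], [3, 0, 0]]
c = coefficients(points)
# An interpolating curve passes through the control points at u = 0, 1/3, 2/3, 1.
start = evaluate(c, 0.0)
second = evaluate(c, 1 / 3)
```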
109
Where do Bézier and B‑spline curves lie relative to their control points?
Both Bézier and cubic B‑splines are contained inside the convex hull of their control points. Convex hull = smallest polygon enclosing all control points. Curves cannot pass outside this hull.
110
Why can’t the interpolated point 𝑃(0.25) be generated by Bézier or B‑spline?
In this example, the convex hull is a square with 𝑥min⁡=0. The interpolated point 𝑃(0.25) has 𝑥<0. Since it lies outside the convex hull, it cannot be produced by Bézier or B‑spline.
111
What is the framebuffer?
A block of memory that stores the final image. It holds one value per pixel (colour, depth, etc.). The GPU writes to it during rendering.
112
What is the difference between fixed‑function and programmable pipelines?
- Fixed‑function: hardware decides how graphics are processed.
- Programmable: we write shaders to control the pipeline.
113
How does the GPU remove hidden surfaces?
It uses a depth buffer (z‑buffer). Only the closest fragment at each pixel is kept.
114
What is blending?
Combining a new fragment’s colour with the existing pixel colour. Used for transparency.
115
What is the viewport?
The rectangle on the canvas where WebGL draws. It maps NDC (−1 to +1) into pixel coordinates.
116
What are the steps to use a shader?
- Create shader
- Add source code
- Compile
- Attach to program
- Link program
- Use program
117
Why do we use homogeneous coordinates?
They let us express translation using matrix multiplication. Points use w = 1. Vectors use w = 0.
118
What is the difference between a coordinate transform and a geometric transform?
- Coordinate transform: change the frame (e.g., world → camera). - Geometric transform: move the object (translate, rotate, scale).
119
How do we rotate around a point P?
- Translate the object so P is at the origin
- Rotate
- Translate back
120
What does lookAt() compute?
A view matrix from:
- eye position
- target point
- up direction
121
What are axonometric projections?
Parallel projections where the image plane is rotated. Types: isometric, dimetric, trimetric.
122
What is an oblique projection?
A parallel projection where projectors hit the image plane at an angle.
123
What is the viewing pipeline?
Model → View → Projection → Clip → NDC → Rasterise.
124
What makes perspective projection different from parallel projection?
Projectors meet at the COP, so distant objects appear smaller.
125
What is the frustum?
The 3D viewing volume defined by near, far, left, right, top, bottom.
126
What is culling?
Removing faces pointing away from the camera.
127
How are shadows a projection?
A shadow is the projection of an object onto a surface using the light as the COP.
128
What is the difference between global and local lighting?
Global includes light bouncing between objects. Local only considers direct light on one object.
129
What are the four main light types?
Ambient, point, spotlight, directional.
130
What is light attenuation?
Intensity falloff: 1 / (a + bd + cd²)
131
What is Lambertian diffuse reflection?
Diffuse light = kd × max(l · n, 0)
132
What is specular reflection?
Shiny highlight = ks × (r · v)ᵅ
α = shininess.
133
What is the Phong lighting equation?
Ambient + Diffuse + Specular:
I = kaLa + kdLd(l·n) + ksLs(r·v)ᵅ
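The three terms can be evaluated directly; a minimal Python sketch (the coefficients are my own example values, and I clamp both dot products with max, as the diffuse card does):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def phong(ka, La, kd, Ld, ks, Ls, l, n, r, v, alpha):
    """I = ka*La + kd*Ld*max(l.n, 0) + ks*Ls*max(r.v, 0)^alpha.

    All direction vectors are assumed normalised."""
    diffuse = kd * Ld * max(dot(l, n), 0.0)
    specular = ks * Ls * max(dot(r, v), 0.0) ** alpha
    return ka * La + diffuse + specular

# Light directly above the surface, camera also directly above:
# every dot product is 1, so I = ka*La + kd*Ld + ks*Ls.
l = n = r = v = (0.0, 1.0, 0.0)
I = phong(ka=0.1, La=1.0, kd=0.6, Ld=1.0, ks=0.3, Ls=1.0,
          l=l, n=n, r=r, v=v, alpha=20)
```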
134
What are Li and Ri?
Li = illumination matrix (light colour). Ri = reflection matrix (material properties).
135
How do we approximate curved objects?
Use triangle meshes (cone, cylinder, sphere).
136
How do you compute a flat surface normal?
Cross product of two edges: n = (p2 − p1) × (p1 − p0)
137
How do you compute a sphere normal?
n = p (normalised), because sphere = x² + y² + z² = 1.
138
What is the reflection vector formula?
r = 2(l·n)n − l
139
What is flat shading?
One normal per polygon → one colour per polygon.
140
What is Gouraud shading?
Normal per vertex → colour per vertex → colours interpolated.
141
What is Phong shading?
Normal per vertex → normals interpolated → lighting per fragment.
142
What is the normal matrix?
N = (Mvᵀ)⁻¹
Used to correctly transform normals.
143
What is a buffer?
A 2D grid of values (pixels). Examples: colour buffer, depth buffer, stencil buffer.
144
What is a texel?
A texture pixel. Stored in texture coordinates (s, t).
145
What does texture mapping do?
Uses an image to change fragment colour during shading.
146
Why do we need texture coordinates (s, t)?
They tell the shader which texel belongs to each vertex.
147
What causes aliasing in textures?
Mapping many texels to few pixels (minification) or vice‑versa (magnification).
148
What is bilinear filtering?
Interpolates between 4 neighbouring texels to smooth the result.
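A minimal sketch of bilinear sampling on a single‑channel texture (illustrative `bilinear`, assuming (s, t) in [0,1] and clamp‑to‑edge addressing):

```python
def bilinear(tex, s, t):
    """Sample `tex` (list of rows of floats) at continuous coords (s, t)."""
    h, w = len(tex), len(tex[0])
    x, y = s * (w - 1), t * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)  # clamp at the edge
    fx, fy = x - x0, y - y0
    # Blend horizontally on both rows, then vertically between the rows
    top = tex[y0][x0] * (1 - fx) + tex[y0][x1] * fx
    bot = tex[y1][x0] * (1 - fx) + tex[y1][x1] * fx
    return top * (1 - fy) + bot * fy
```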
149
What is a mipmap?
A pyramid of pre‑scaled textures used to reduce aliasing when objects shrink.
150
Why must mipmaps be power‑of‑two sized?
WebGL’s mipmap generator requires power‑of‑two dimensions.
151
What is an environment map?
A texture representing the surrounding world (e.g., skybox).
152
What is bump mapping?
Modifies surface normals using a height map to fake bumps.
153
How does reflection mapping work?
Compute reflection vector r, then sample the cubemap in direction r.
154
What is normal mapping?
Stores normals directly in a texture (RGB = XYZ normal).
155
What is parallax mapping?
Offsets texture coordinates based on view direction + height map to fake depth.
156
What is displacement mapping?
Actually moves geometry using a height map (needs tessellation shaders).
157
What is an FBO (Framebuffer Object)?
An off‑screen buffer you can render into (e.g., for mirrors, portals, shadow maps).
158
What is shadow mapping?
Render scene from light → store depth → compare with camera view to detect shadows.
159
What is a shadow volume?
A 3D volume extruded from object silhouettes to determine shadowed regions.
160
What is clipping?
Removing parts of primitives outside the view volume.
161
What is an outcode?
A 4‑bit (2D) or 6‑bit (3D) code describing which side of the clipping region a point lies on.
162
What is the Cohen–Sutherland algorithm?
A fast line‑clipping method using outcodes + bit tests.
163
What are the 3 outcomes of line clipping?
Fully inside → accept
Fully outside → reject
Partially inside → compute intersection(s)
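The three outcomes follow from two outcode bit tests, sketched here for 2D (illustrative names; the intersection computation itself is omitted):

```python
LEFT, RIGHT, BOTTOM, TOP = 1, 2, 4, 8

def outcode(x, y, xmin, ymin, xmax, ymax):
    """4-bit code: which side(s) of the clip rectangle the point lies on."""
    code = 0
    if x < xmin:
        code |= LEFT
    elif x > xmax:
        code |= RIGHT
    if y < ymin:
        code |= BOTTOM
    elif y > ymax:
        code |= TOP
    return code

def classify(p0, p1, rect):
    c0 = outcode(*p0, *rect)
    c1 = outcode(*p1, *rect)
    if c0 == 0 and c1 == 0:
        return "accept"   # both endpoints inside
    if c0 & c1:
        return "reject"   # both outside on the same side
    return "clip"         # needs intersection test(s)
```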
164
Why do we use bounding boxes?
Quick reject test before expensive clipping.
165
Why is Bresenham’s algorithm preferred?
Uses only integers → fast for hardware.
166
What is the odd–even rule?
Count edge crossings; odd = inside, even = outside.
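A sketch of the odd–even test for a polygon given as a vertex list (illustrative `inside`; the ray is cast toward +x):

```python
def inside(point, poly):
    """Odd-even rule: cast a ray to +x and count edge crossings."""
    x, y = point
    crossings = 0
    for i in range(len(poly)):
        (x0, y0), (x1, y1) = poly[i], poly[(i + 1) % len(poly)]
        if (y0 > y) != (y1 > y):  # edge straddles the scanline through y
            xi = x0 + (y - y0) * (x1 - x0) / (y1 - y0)  # crossing x
            if xi > x:
                crossings += 1
    return crossings % 2 == 1  # odd = inside
```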
167
Why does the painter’s algorithm fail?
Cyclic overlaps and piercing polygons break depth sorting.
168
What is an instance transformation?
A transform applied to a symbol to place it in the scene (scale → rotate → translate).
169
Why do we need hierarchical models?
Complex objects have moving parts; each part needs its own transform.
170
Difference between tree and DAG for modelling?
Tree duplicates geometry per instance; DAG shares geometry and stores transforms on edges.
171
What is constructive solid geometry (CSG)?
Combining primitives using union, intersection, and difference.
172
What problem do BSP trees solve?
Fast visibility ordering by recursively partitioning space with planes.
173
What does a 3D scene model include?
Objects, lights, cameras, environment.
174
What is the MTL file used for?
Material properties (Ka, Kd, Ks, maps).
175
What is glTF?
A modern JSON/binary format storing full scenes, animations, materials.
176
What is a particle system?
A model where many small particles simulate non‑rigid phenomena (smoke, fire, cloth).
177
What defines a particle’s state?
Position and velocity.
178
What is Hooke’s law for springs?
f = -k_s(|d| - s) d/|d|
where d is the vector from one endpoint to the other, |d| its length, s the rest length, and k_s the spring constant.
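In code (a sketch; `spring_force` returns the force on particle a, which pulls it toward b when the spring is stretched):

```python
import math

def spring_force(p_a, p_b, rest_len, ks):
    """Hooke's law: f = -ks * (|d| - s) * d/|d|, with d = p_a - p_b."""
    d = tuple(a - b for a, b in zip(p_a, p_b))
    length = math.sqrt(sum(c * c for c in d))
    mag = -ks * (length - rest_len)  # negative when stretched past rest length
    return tuple(mag * c / length for c in d)
```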
179
Why add damping to springs?
To stop infinite oscillation.
180
Why subdivide space into cells?
To reduce force calculation from O(n²) to O(n).
181
What are agent‑based models?
Particles with behavioural rules (flocking, avoidance, attraction).
182
What is a generative grammar?
A set of symbol‑replacement rules used to generate shapes.
183
What is the Koch curve rule?
F → F L F R R F L F
184
What is fractal dimension?
A measure of how detail scales with magnification.
185
What is midpoint displacement used for?
Generating natural‑looking random curves and terrains.
186
What is a Hilbert curve?
A recursive space‑filling curve.
187
What is the Mandelbrot set?
Points where the recurrence z_{k+1}=z_k^2+c does not diverge.
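A membership-test sketch (escape time with the standard |z| > 2 bail‑out; the iteration cap is arbitrary):

```python
def in_mandelbrot(c, max_iter=100):
    """z_{k+1} = z_k^2 + c stays bounded iff |z| never exceeds 2."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False  # diverged: c is outside the set
    return True
```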
188
What are the three curve representations?
Explicit, implicit, parametric.
189
Why are parametric curves preferred in graphics?
Easy to differentiate → tangent + normal for shading.
190
What defines a cubic parametric curve?
p(u)=C_0+C_1u+C_2u^2+C_3u^3
191
How do you relate motion in time to motion along a Bézier curve?
Use the chain rule:
dp/dt = (dp/du) · (du/dt)
where p′(u) = dp/du is the Bézier derivative and du/dt is how fast the parameter moves in time.
192
What is the derivative of a cubic Bézier curve with control points B_0,B_1,B_2,B_3?
p'(u)=3(1-u)^2(B_1-B_0)+6u(1-u)(B_2-B_1)+3u^2(B_3-B_2)
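Both the curve and its derivative can be evaluated directly from these formulas (illustrative helpers; control points are tuples of any dimension):

```python
def bezier(b, u):
    """Cubic Bezier: p(u) = (1-u)^3 B0 + 3u(1-u)^2 B1 + 3u^2(1-u) B2 + u^3 B3."""
    w = ((1 - u) ** 3, 3 * u * (1 - u) ** 2, 3 * u * u * (1 - u), u ** 3)
    return tuple(sum(wi * bi[k] for wi, bi in zip(w, b))
                 for k in range(len(b[0])))

def bezier_deriv(b, u):
    """p'(u) = 3(1-u)^2 (B1-B0) + 6u(1-u)(B2-B1) + 3u^2 (B3-B2)."""
    w = (3 * (1 - u) ** 2, 6 * u * (1 - u), 3 * u * u)
    d = [tuple(b[i + 1][k] - b[i][k] for k in range(len(b[0])))
         for i in range(3)]
    return tuple(sum(wi * di[k] for wi, di in zip(w, d))
                 for k in range(len(b[0])))
```

Note p′(0) = 3(B_1 − B_0): the start tangent points along the first control leg.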
193
What is a Hermite curve defined by?
Two endpoints + two endpoint derivatives.
194
What is a Bézier curve defined by?
Four control points; endpoints interpolated, middle points shape the curve.
195
What is a B‑spline?
A smooth curve that does not interpolate control points; guarantees C2 continuity.
196
What is a parametric surface?
A function p(u,v) mapping a 2D parameter domain to 3D space.
197
How is a bicubic patch defined?
16 control points (a 4×4 grid) → 48 coefficients (16 points × 3 coordinates).
198
What is a Bézier surface patch?
A 2D extension of Bézier curves; interpolates only the four corners.
199
What is a NURBS surface?
A rational B‑spline using homogeneous coordinates.
200
What is the formula for a cubic Bézier curve with control points P_1,P_2,P_3,P_4?
p(u)=(1-u)^3P_1+3u(1-u)^2P_2+3u^2(1-u)P_3+u^3P_4
201
How do you draw a smooth cubic curve on a raster display without gaps or double‑drawing pixels?
- Parametrize: use u ∈ [0,1] as the curve parameter.
- Step size: choose small increments Δu (e.g. 0.01) for smoothness.
- Evaluate points: compute p(u) = [x(u), y(u)] at each step.
- Connect points: draw line segments between consecutive points p(u) and p(u+Δu).
- Avoid gaps: keep Δu small enough that adjacent points map to neighbouring pixels.
- Avoid duplicates: draw segments (scanline/line algorithm) rather than plotting individual points.
202
What are the endpoint derivatives for the two cubic Bézier segments?
For cubic Bézier (B_0,B_1,B_2,B_3):
p′(0) = 3(B_1 − B_0)
p′(1) = 3(B_3 − B_2)
203
How do we enforce C¹ continuity at P_4? What is P_5?
C¹ continuity → tangents match at the join: p_1′(1) = p_2′(0).
So 3(P_4 − P_3) = 3(P_5 − P_4) ⇒ P_4 − P_3 = P_5 − P_4 ⇒ P_5 = 2P_4 − P_3.
204
What are the second derivatives at the join for the two cubic Bézier segments?
For cubic Bézier (B_0,B_1,B_2,B_3):
p″(0) = 6(B_0 − 2B_1 + B_2)
p″(1) = 6(B_1 − 2B_2 + B_3)
205
What equation enforces C² continuity at the join?
C² continuity → second derivatives match at the join: p_1''(1)=p_2''(0)
206
What is Delaunay triangulation used for?
Converting scattered height data into a good triangular mesh.
207
What is ray tracing?
Follow rays from the camera, computing reflections, refractions, and shadows.
208
What is a shadow ray?
A ray from a surface point to a light source to test visibility.
209
What is a ray tree?
The branching structure of reflected and refracted rays.
210
What surfaces are easy to intersect analytically?
Planes and quadrics (spheres, cylinders, cones).
211
What is radiosity?
A global illumination method for diffuse surfaces using form factors.
212
What is path tracing?
Monte‑Carlo sampling of many random light paths to approximate the rendering equation.
213
What limitation does WebGL have?
Only vertex + fragment shaders; no tessellation, compute, or ray tracing.
214
What does Vulkan provide?
Low‑level control, multi‑threading, explicit memory management.
215
What does a tessellation control shader do?
Decides how much to subdivide a patch.
216
What does a tessellation evaluation shader do?
Computes actual vertex positions on the subdivided patch.
217
What does a geometry shader do?
Takes one primitive and outputs zero or more new primitives.
218
What is a mesh shader?
A programmable stage that replaces the entire vertex/tessellation/geometry pipeline.
219
What is a compute shader used for?
General GPU computation (physics, particles, simulation).
220
What new shaders does ray tracing add?
Ray‑gen, intersection, hit, miss.
221
How do you choose the up‑vector for a look‑at camera?
Use the world’s up direction (module convention): w_up = (0,0,1)^T. It must not be parallel to the forward vector.
222
How do you compute the camera’s right axis?
r = (f x w_up) / ||f x w_up||
223
Why does the view matrix use -f instead of f?
In camera space, the camera looks along the negative z‑axis. So the third row of the rotation matrix is -f.
224
How do you build the rotation matrix for the view transform?
R =
[ r_x   r_y   r_z  0 ]
[ u_x   u_y   u_z  0 ]
[ -f_x  -f_y  -f_z 0 ]
[ 0     0     0    1 ]
225
How do you combine rotation and translation to get the view matrix?
M_view = R · T, where T = T(−eye); the rightmost matrix acts first, so the eye is translated to the origin before rotating into camera axes.
226
How do you construct a view matrix from an eye point, a target point, and an up vector?
- Compute the forward/view direction f from eye to target (normalised)
- Choose world‑up (module convention): w_up = (0,0,1)^T
- Compute the right axis r = (f x w_up) / ||f x w_up||
- Compute the true up axis u = r x f
- Build the rotation matrix R (camera looks along −z in camera space, so its third row is −f)
- Build the translation matrix T(−eye)
- Final view matrix: M_view = R · T(−eye)
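Putting the steps together (a pure‑Python sketch using this module's w_up = (0,0,1) convention; `look_at` is an illustrative name):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def look_at(eye, target, w_up=(0, 0, 1)):
    f = normalize(tuple(t - e for t, e in zip(target, eye)))  # forward
    r = normalize(cross(f, w_up))                             # right
    u = cross(r, f)                                           # true up
    # Rows of R are the camera axes; camera looks along -z, so row 3 is -f
    rot = [list(r) + [0], list(u) + [0],
           [-c for c in f] + [0], [0, 0, 0, 1]]
    # Fold in translation by -eye: M_view = R * T(-eye)
    for row in rot[:3]:
        row[3] = -sum(row[k] * eye[k] for k in range(3))
    return rot
```

For eye = (0,−5,0) looking at the origin, the eye maps to (0,0,0) and the target lands 5 units along −z, as expected.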
227
How do shadow maps relate to projection matrices?
To create a shadow map, the scene is rendered from the light’s point of view. The light is treated like a camera, using:
- a view matrix (light’s position + direction)
- a projection matrix (defines how the light “sees” the world)
The depth buffer produced by this render becomes the shadow map. During real rendering, each fragment is transformed into the light’s clip space using the same projection matrix to test whether it is in shadow.
228
How do I know which projection matrix to use for each type of light?
- Directional light → orthographic projection: rays are parallel, no foreshortening.
- Point light → perspective projection: rays diverge from a single point, so foreshortening occurs.
Rule of thumb: parallel rays → orthographic; diverging rays → perspective.
229
How does the fragment shader use a shadow map to work out lighting?
1. Get light‑space position: transform point P by the light’s view + projection matrices, then do the perspective divide → UV + depth.
2. Depth test: read the stored depth from the shadow map; if P’s depth is greater → in shadow, else → lit.
3. Apply lighting: ambient is always added; diffuse + specular only if the point is not in shadow for that light.
4. Final colour = ambient + lit diffuse/specular from L1 + lit diffuse/specular from L2.
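The depth‑compare step can be sketched like this (illustrative `shadow_test`, single‑channel depth map, with a small bias against shadow acne):

```python
def shadow_test(frag_lightspace, shadow_map, bias=0.005):
    """frag_lightspace: clip-space (x, y, z, w) from the light's view*projection."""
    x, y, z, w = frag_lightspace
    # Perspective divide, then remap from [-1, 1] to [0, 1]
    u, v, depth = (x / w + 1) / 2, (y / w + 1) / 2, (z / w + 1) / 2
    h, wd = len(shadow_map), len(shadow_map[0])
    stored = shadow_map[min(int(v * h), h - 1)][min(int(u * wd), wd - 1)]
    return depth - bias > stored  # True -> fragment is in shadow
```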