What is a norm?
What is the Euclidean norm exactly?
Also called the standard norm
What is an L^p norm?
What are the three interesting cases?
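The "three interesting cases" are commonly p = 1, 2, and ∞ (an assumption about which cases the lecture means). A minimal sketch computing all three for a vector:

```python
def lp_norm(x, p):
    """L^p norm of a sequence of numbers (p = 1, 2, or float('inf'))."""
    if p == float("inf"):
        # L^inf norm: the largest component in absolute value
        return max(abs(xi) for xi in x)
    return sum(abs(xi) ** p for xi in x) ** (1 / p)

x = [3.0, -4.0]
print(lp_norm(x, 1))             # 7.0  (sum of absolute values)
print(lp_norm(x, 2))             # 5.0  (Euclidean norm)
print(lp_norm(x, float("inf")))  # 4.0  (maximum norm)
```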
What are the relative and absolute error approximate formulas?
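One common convention (an assumption about which formulas the lecture uses), writing x for the true value and x̂ for its approximation:

```latex
\text{absolute error} = |\hat{x} - x|,
\qquad
\text{relative error} = \frac{|\hat{x} - x|}{|x|} \quad (x \neq 0).
```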
What are the two broad types of errors?
What is a mantissa? And how do computers represent floating-point numbers?
What is the standard implementation of a floating-point number?
WHY DOES THIS LEAD TO ERRORS?
NEED TO KNOW BASE 2, BASE 10 CONVERSION
e.g. 0.3 + 0.3 + 0.3 != 0.9
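A quick demonstration of this example in CPython (IEEE 754 double precision): 0.3 has no exact base-2 representation, so each term is already rounded and the sum accumulates the roundoff.

```python
import math

total = 0.3 + 0.3 + 0.3
print(total)         # 0.8999999999999999
print(total == 0.9)  # False

# A safer comparison uses a tolerance instead of exact equality:
print(math.isclose(total, 0.9))  # True
```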
Do we need to worry about roundoff errors?
What shouldn’t you do with floating-point numbers?
Why does subtracting nearly equal floating-point numbers cause a catastrophic cancellation error?
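A sketch of cancellation using one standard illustration (my choice of example, not necessarily the lecture's): (1 - cos x)/x² should tend to 1/2 as x → 0, but for tiny x, cos(x) rounds to exactly 1.0 in double precision, so the subtraction wipes out every significant digit.

```python
import math

x = 1e-9
# Naive form: 1 - cos(x) subtracts two nearly equal numbers.
naive = (1 - math.cos(x)) / x**2
print(naive)   # 0.0 -- all significant digits were cancelled

# Algebraically equivalent form with no subtraction, via 1 - cos x = 2 sin^2(x/2):
stable = 0.5 * (math.sin(x / 2) / (x / 2)) ** 2
print(stable)  # 0.5
```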
ADD TO THIS
What are we interested in about the calculation of any algorithm?
Why do we care about computation cost when looking at error formulas?
Generally, what is big-O notation?
LOOK INTO THIS MORE
What is the definition of Big-O notation?
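A standard statement of the definition (written here for h → 0, the form relevant to step-size error analysis; the n → ∞ version for computational cost is analogous):

```latex
f(h) = O\bigl(g(h)\bigr) \text{ as } h \to 0
\iff
\exists\, C > 0,\ h_0 > 0 : \ |f(h)| \le C\,|g(h)| \ \text{ for all } 0 < h < h_0.
```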
How do I make a Grid from a continuous interval?
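A minimal sketch (assuming a uniform grid is meant): discretise [a, b] into n equal subintervals, giving n + 1 grid points x_i = a + i·h with spacing h = (b - a)/n.

```python
def make_grid(a, b, n):
    """Uniform grid of n + 1 points on [a, b] with spacing h = (b - a) / n."""
    h = (b - a) / n
    return [a + i * h for i in range(n + 1)]

grid = make_grid(0.0, 1.0, 4)
print(grid)  # [0.0, 0.25, 0.5, 0.75, 1.0]
```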
What is a finite-difference approximation (forward difference)?
What are the other finite-difference approximations?
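Sketches of the three standard finite-difference approximations to f'(x) (assuming "the others" are backward and central; step size h > 0):

```python
import math

def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h             # error O(h)

def backward_diff(f, x, h):
    return (f(x) - f(x - h)) / h             # error O(h)

def central_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)   # error O(h^2)

h = 1e-4
print(forward_diff(math.sin, 1.0, h))  # ~cos(1) = 0.5403...
print(central_diff(math.sin, 1.0, h))  # much closer to cos(1)
```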
How do we use the Taylor series in the finite-difference approximation? Why do we use it?
Check why we use it!
From the Taylor series representation, how do we find the absolute error between the true derivative and the forward-difference approximation?
What are the absolute errors of all the finite-difference approximations?
This is useful for two reasons:
1) Compare which algorithm is better, e.g. for small h an O(h^2) error is smaller than an O(h) error
2) Check that my own implementation is correct: the measured error should match the predicted scaling
(Check the scaling law discussed in the lecture recording)
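One way to check the scaling law empirically (a sketch, using f = sin with known derivative cos): shrinking h by 10× should shrink the forward-difference error ~10× (O(h)) and the central-difference error ~100× (O(h²)).

```python
import math

f, df, x = math.sin, math.cos, 1.0
fwd_errs, cen_errs = [], []

for h in [1e-1, 1e-2, 1e-3]:
    fwd_errs.append(abs((f(x + h) - f(x)) / h - df(x)))
    cen_errs.append(abs((f(x + h) - f(x - h)) / (2 * h) - df(x)))
    print(f"h={h:.0e}  forward={fwd_errs[-1]:.2e}  central={cen_errs[-1]:.2e}")

# Forward error shrinks ~10x per decade of h (consistent with O(h));
# central error shrinks ~100x per decade (consistent with O(h^2)).
```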