Numerical Methods Flashcards

(223 cards)

1
Q

In which of the following categories can we put Bisection Method?
A. Bracketing Solutions
B. Empirical Solution
C. Graphical Solutions
D. Trial Solutions

A

Bracketing Solutions

2
Q

The convergence of bisection is
A. very slow
B. very fast
C. quadratic
D. exponential

A

very slow

3
Q

For the bisection, the convergence is
A. linear
B. quadratic
C. third power
D. quartic

A

linear
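The bisection behavior in the three cards above can be sketched in a few lines of Python (an illustrative snippet, not part of the deck; the function name `bisect` and the test equation x² − 2 = 0 are my own choices):

```python
def bisect(f, a, b, tol=1e-10, max_iter=200):
    """Bracketing method: repeatedly halve [a, b], keeping the half
    where f changes sign. The error shrinks by a fixed factor of 0.5
    per step, which is why convergence is linear (and slow)."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    m = (a + b) / 2
    for _ in range(max_iter):
        m = (a + b) / 2
        if b - a < tol:
            break
        if f(a) * f(m) < 0:
            b = m          # root is in the left half
        else:
            a = m          # root is in the right half
    return m

root = bisect(lambda x: x * x - 2, 1.0, 2.0)   # illustrative: approximates sqrt(2)
```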

4
Q

The basic principle of Regula Falsi is:
A. Divide interval repeatedly into halves
B. Linear interpolation between two points where function changes sign
C. Use tangent line at one point
D. Use polynomial approximation

A

Linear interpolation between two points where the function changes sign.

5
Q

Which statement correctly describes how the False Position Method improves upon the Bisection Method?
A. It can be used to find complex roots, which Bisection cannot.
B. It does not require two initial guesses.
C. It has a quadratic convergence rate, whereas Bisection is linear.
D. It guarantees that the error is reduced by a factor greater than 0.5 in every step.

A

It guarantees that the error is reduced by a factor greater than 0.5 in every step.

6
Q

The initial bracketing condition f(xl)*f(xr) <0 for the False Position Method is a direct application of which fundamental theorem of calculus?
A. The Extreme Value Theorem
B. The Fundamental Theorem of Calculus
C. The Mean Value Theorem
D. The Intermediate Value Theorem (IVT)

A

The Intermediate Value Theorem (IVT)
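A minimal Python sketch of the Regula Falsi / False Position idea from cards 4–6 (illustrative only; the cubic test function is my own choice): the next estimate is the x-intercept of the chord, and the IVT-based sign condition f(xl)·f(xr) < 0 guarantees a root stays bracketed.

```python
def false_position(f, a, b, tol=1e-12, max_iter=200):
    """Linear interpolation between (a, f(a)) and (b, f(b));
    keep the sub-interval where the function changes sign (IVT)."""
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(max_iter):
        c = b - fb * (b - a) / (fb - fa)   # x-intercept of the chord
        fc = f(c)
        if abs(fc) < tol:
            break
        if fa * fc < 0:
            b, fb = c, fc
        else:
            a, fa = c, fc
    return c

root = false_position(lambda x: x ** 3 - x - 2, 1.0, 2.0)
```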

7
Q

What is the key necessary condition for the Fixed-Point Iteration, x_{i+1} = g(x_i), to converge to a fixed point near the initial guess?
A. The function g(x) must be greater than zero.
B. |g’(x)| must be less than 1 near the fixed point.
C. |g’(x)| must be exactly 1 near the fixed point.
D. g(x) must be a polynomial of degree 1

A

|g’(x)| must be less than 1 near the fixed point.
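The convergence condition in this card can be demonstrated with a short sketch (my own illustration, not from the deck): g(x) = cos x has |g’(x)| = |sin x| < 1 near its fixed point, so the iteration converges.

```python
import math

def fixed_point(g, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{i+1} = g(x_i); converges locally when |g'(x)| < 1
    near the fixed point (linear convergence with rate ~ |g'|)."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

p = fixed_point(math.cos, 1.0)   # illustrative: fixed point of cos(x), about 0.739
```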

9
Q

The Newton-Raphson method is also called the:
A. Tangent method
B. Secant Method
C. Chord Method
D. Diameter Method

A

tangent method

10
Q

For decreasing the number of iterations in the Newton-Raphson method:
A. The value of f’(x) must be increased
B. The value of f’’(x) must be decreased
C. The value of f’(x) must be decreased
D. The value of f’’(x) must be increased

A

The value of f’(x) must be increased

11
Q

The points where the Newton Raphson method fails are called?
A. floating
B. non-stationary
C. continuous
D. stationary

A

stationary

12
Q

The convergence of which of the following method depends on initial assumed value?
A. False position
B. Gauss Seidel Method
C. Newton Raphson Method
D. Euler Method

A

Newton Raphson Method

13
Q

If a and (a + h) are two consecutive approximate roots of the equation f(x) = 0 obtained by Newton’s method, then h is equal to:

A. f(a)/f’(a)
B. f’(a)/f(a)
C. -f’(a)/f(a)
D. -f(a)/f’(a)

A

-f(a)/f’(a)
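The Newton-Raphson cards above can be sketched as follows (an illustrative snippet, not from the deck; names are mine). Note that the per-step correction is exactly the h = −f(a)/f’(a) asked about in card 13, and that the method breaks down at stationary points where f’(x) = 0.

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=100):
    """Tangent method: each step adds h = -f(x)/f'(x), so two
    consecutive approximations differ by exactly that h.
    Fails at stationary points, where f'(x) = 0."""
    x = x0
    for _ in range(max_iter):
        h = -f(x) / fprime(x)
        x += h
        if abs(h) < tol:
            break
    return x

root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)  # illustrative: sqrt(2)
```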

14
Q

What is the order of convergence of the Secant Method?
A. 1.5
B. 1.26
C. 1.62
D. 1.66

A

1.62

15
Q

The Secant Method is also called the:
A. 2-point method
B. 3-point method
C. 4-point method
D. 5-point method

A

2-point method

16
Q

What is the type of convergence of Secant Method?
A. linear
B. quadratic
C. super linear
D. none of these

A

super linear
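The secant cards can be illustrated with a short sketch (my own example, not from the deck): it is a two-point method, and replacing Newton's derivative with a difference quotient drops the convergence order from 2 to the golden ratio, about 1.62 (superlinear).

```python
def secant(f, x0, x1, tol=1e-10, max_iter=100):
    """Two-point method: Newton's derivative is replaced by the slope
    of the line through the last two iterates; convergence is
    superlinear, of order about 1.62."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return x1

root = secant(lambda x: x * x - 2, 1.0, 2.0)  # illustrative: sqrt(2)
```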

17
Q

Muller’s method is primarily used for:
A. Differentiation of functions
B. Integration of functions
C. Root finding of equations
D. Solving linear systems

A

Root finding of equations

18
Q

Muller’s method generalizes which other method?
A. Newton-Raphson method
B. Secant method
C. Bisection method
D. Regula Falsi method

A

Secant method

19
Q

Instead of using a line through two points, Muller’s method uses:
A. A tangent line
B. A parabola through three points
C. A cubic polynomial
D. A straight line approximation

A

A parabola through three points

20
Q

How many initial approximations are required for Muller’s Method?
A. One
B. Two
C. Three
D. Four

A

Three

21
Q

The parabola in Muller’s method is constructed using which points?

A

(x0, f(x0)), (x1, f(x1)), (x2, f(x2))

22
Q

The next approximation in Muller’s method is:
A. The midpoint of the interval
B. The x-intercept of the parabola
C. The derivative of the function
D. Always equal to x2

A

The x-intercept of the parabola

23
Q

The iterative process of Muller’s method continues until:
A. The function diverges
B. Desired accuracy is achieved
C. Three roots are found
D. The derivative becomes zero

A

Desired accuracy is achieved

24
Q

What is the convergence rate of Muller’s method?
A. 1.0
B. 1.62
C. 1.84
D. 2.0

A

1.84

25
Which method has a slightly faster convergence than Muller's method? A. Newton's method B. Secant method C. Bisection method D. False position method
Newton's method
26
Compared to the secant method, Muller's method is: A. Slower B. Faster C. Equally fast D. Divergent
Faster
27
Muller's method does not require which calculation? A. Function values B. Parabolas C. Derivatives D. Approximate roots
Derivatives
28
This makes Muller's method preferable over Newton's method in cases where: A. Roots are rational B. The function is linear C. The derivative is difficult or impossible to compute D. Convergence is guaranteed
The derivative is difficult or impossible to compute
29
Which type of roots can Muller's method also find? A. Only real roots B. Only Irrational roots C. Complex roots D. Rational roots
Complex roots
30
Muller's method is a significant advantage over: A. Secant and Newton's method B. Bisection and Secant method C. Gauss-Seidel method D. Taylor series method
Bisection and Secant method
31
Bisection and Secant methods are typically restricted to finding A. Complex roots B. Irrational roots C. Real roots D. Repeated roots
Real roots
32
The interpolation used in Muller's method is based on: A. Linear approximation B. Quadratic approximation C. Cubic approximation D. None of the above
Quadratic approximation
33
Muller's method can still converge when: A. Starting with only one guess B. Starting with only two guesses C. Starting with real initial guesses and the root is complex D. Derivatives are easily available
Starting with real initial guesses and the root is complex
34
What is the main reason Muller's method is faster than the secant method? A. It uses derivatives B. It avoids iterations C. It uses a parabola instead of a straight line D. It requires fewer starting points
It uses a parabola instead of a straight line
35
The newly found root in Muller's method replaces: A. Only x0 B. Only x1 C. One of the previous three points D. All the three points
One of the previous three points
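The Muller's method cards above can be gathered into one sketch (illustrative, not from the deck; names are mine): three starting points, a parabola through them, the x-intercept of the parabola as the next estimate, replacement of one old point, and complex roots from real guesses via complex arithmetic.

```python
import cmath

def muller(f, x0, x1, x2, tol=1e-12, max_iter=100):
    """Fit a parabola through three points and step to its nearer
    x-intercept. Using cmath lets real starting guesses converge
    to complex roots; no derivatives are needed."""
    for _ in range(max_iter):
        f0, f1, f2 = f(x0), f(x1), f(x2)
        h1, h2 = x1 - x0, x2 - x1
        d1, d2 = (f1 - f0) / h1, (f2 - f1) / h2
        a = (d2 - d1) / (h2 + h1)
        b = a * h2 + d2
        c = f2
        disc = cmath.sqrt(b * b - 4 * a * c)
        # choose the sign giving the larger denominator (the nearer root)
        denom = b + disc if abs(b + disc) > abs(b - disc) else b - disc
        x3 = x2 - 2 * c / denom
        if abs(x3 - x2) < tol:
            return x3
        x0, x1, x2 = x1, x2, x3   # new estimate replaces one old point
    return x2

r = muller(lambda x: x * x + 1, 0.0, 0.5, 1.0)  # complex root of x^2 + 1 from real guesses
```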
36
The Chebyshev method is an extension of: A. Secant method B. Newton-Raphson method C. Bisection method D. Muller's method
Newton-Raphson method
37
The Chebyshev method requires how many derivatives of the function? A. Only function values B. Only first derivative C. First and second derivatives D. Third derivative
First and second derivatives
38
The Chebyshev method has which order of convergence? A. Linear (order 1) B. Quadratic (order 2) C. Cubic (order 3) D. Exponential
Cubic (order 3)
39
Compared to Newton's method, the Chebyshev method converges: A. Slower B. At the same rate C. Faster (cubic vs quadratic) D. Divergently
Faster (cubic vs quadratic)
40
The Chebyshev method is especially efficient when: A. The function is linear B. The function is constant C. The root is simple (not multiple root) D. The derivative is undefined
The root is simple (not multiple root)
41
If the initial guess is close to the actual root, the Chebyshev method will: A. Diverge B. Converge rapidly C. Converge linearly D. Fail to converge
Converge rapidly
42
The Chebyshev method is not suitable when: A. The function is smooth B. The second derivative is not available C. The initial guess is close to the root D. Function evaluations are cheap
The second derivative is not available
43
The correction term involving the second derivative in the Chebyshev method is used to: A. Slow down convergence B. Approximate linear behavior C. Improve accuracy and speed of convergence D. Eliminate the first derivative
Improve accuracy and speed of convergence
44
The Chebyshev method requires computation of: A. Only one function per iteration B. Only one derivative per iteration C. Function, first derivative, and second derivative per iteration D. Only constants
Function, first derivative, and second derivative per iteration
45
The main advantage of the Chebyshev method over Newton's method is: A. Simplicity B. Faster convergence (order 3 vs 2) C. No derivatives required D. It works without an initial guess
Faster convergence (order 3 vs 2)
46
The Chebyshev method is effective when: A. Function is discontinuous B. Derivatives are easy to compute C. Roots are irrational D. Function is constant
Derivatives are easy to compute
47
The Chebyshev method may fail to converge if: A. The initial guess is very close to the root B. The initial guess is far from the root C. The function is differentiable D. The function is quadratic
The initial guess is far from the root
48
The Chebyshev method converges faster than: A. Newton's method and Secant method B. Newton's, Secant, and Bisection methods C. Only Secant method D. Only Bisection method
Newton's, Secant, and Bisection methods
49
In each iteration, the Chebyshev method requires evaluation of:
f(x), f'(x), and f''(x)
50
The Chebyshev method is not commonly used in practice because: A. It is too slow B. Requires second derivative evaluation, which may be costly C. It diverges for all roots D. It works only for complex roots
Requires second derivative evaluation, which may be costly
51
For simple roots, the Chebyshev method achieves: A. Linear convergence B. Quadratic convergence C. Cubic convergence D. No convergence
Cubic convergence
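The Chebyshev-method cards above can be sketched as follows (illustrative, not from the deck; the iteration formula used here is the standard one: Newton's step plus a second-derivative correction):

```python
def chebyshev(f, df, d2f, x0, tol=1e-12, max_iter=50):
    """Newton's correction u = f/f' plus a second-derivative term,
    giving cubic (order-3) convergence for simple roots; the price
    is evaluating f, f', and f'' every iteration."""
    x = x0
    for _ in range(max_iter):
        u = f(x) / df(x)
        step = u + (d2f(x) / (2 * df(x))) * u * u
        x -= step
        if abs(step) < tol:
            break
    return x

# Illustrative: root of x^2 - 2, with f' = 2x and f'' = 2
root = chebyshev(lambda x: x * x - 2, lambda x: 2 * x, lambda x: 2.0, 1.0)
```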
52
Aitken's Δ² method is primarily used for A. Finding integrals B. Solving differential equations C. Accelerating the convergence of iterative methods D. Finding derivatives
Accelerating the convergence of iterative methods
53
Aitken's Δ² method is also called: A. Bisection method B. Aitken's acceleration process C. Newton's method D. Muller's method
Aitken's acceleration process
54
The Aitken Δ² method improves convergence of which type of sequences? A. Divergent sequences B. Linearly convergent sequences C. Quadratically convergent sequences D. Constant sequences
Linearly convergent sequences
55
The basic idea of Aitken's Δ² method is A. Interpolating a parabola B. Extrapolating the limit of a sequence C. Using derivatives D. Bracketing the root
Extrapolating the limit of a sequence
56
Aitken's method requires how many successive iterates? A. One B. Two C. Three D. Four
Three
57
In the formula of Aitken's, Δ represents: A. Derivative B. Forward difference C. Backward difference D. Integration
Forward difference
58
The method is applied when the sequence {x_n} converges: A. Divergently B. Slowly (linearly) C. Very rapidly D. Not at all
Slowly (linearly)
59
Aitken's Δ² method transforms: A. Quadratic convergence → cubic B. Linear convergence → quadratic C. Cubic convergence → linear D. Exponential convergence → linear
Linear convergence → quadratic
60
Which type of root-finding methods can benefit from Aitken's Δ² method process? A. Bisection B. Newton's Method C. Fixed-point Iteration D. Secant Method
Fixed-point iteration
61
The main advantage of Aitken's method is A. No derivatives needed B. Bracketing guarantee C. Faster convergence from fixed-point iteration D. Handles complex roots directly
Faster convergence from fixed-point iteration
62
The Aitken's Δ² method is especially useful when A. The function is discontinuous B. Newton's method converges too fast C. The fixed-point iteration converges very slowly D. The function has no real roots
The fixed-point iteration converges very slowly
63
Aitken's Δ² method is often combined with A. Bisection method B. Steffensen's method C. Newton-Raphson method D. Gauss elimination
Steffensen's method
64
Steffensen's method is essentially A. A type of Newton method B. Fixed-point iteration + Aitken's Δ² acceleration C. Bisection method + Aitken's Δ² D. Muller's method
Fixed-point iteration + Aitken's Δ² acceleration
65
Which is a limitation of Aitken's Δ² method? A. Requires derivatives B. Needs three successive iterates C. Cannot be used with fixed-point iteration D. Works only for complex roots
Needs three successive iterates
66
Aitken's Δ² is useful in numerical root finding because it A. Ensures bracketing B. Reduces number of derivatives C. Reduces the number of iterations needed D. Works without function evaluations
Reduces the number of iterations needed
67
The order of convergence achieved after applying Aitken's Δ² is generally A. Linear B. Quadratic C. Cubic D. Exponential
Quadratic
68
Aitken's Δ² process is mostly applied when A. Newton's method is diverging B. Fixed-point iteration converges too slowly C. Bisection cannot be applied D. Derivatives are easy to compute
Fixed-point iteration converges too slowly
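The Aitken Δ² cards can be demonstrated in a few lines (illustrative, not from the deck): three successive iterates of a slow, linearly convergent fixed-point map are extrapolated toward the limit using forward differences.

```python
import math

def aitken(x0, x1, x2):
    """Aitken's Δ² extrapolation from three successive iterates:
    x_hat = x0 - (Δx0)^2 / (Δ²x0), with Δ the forward difference."""
    return x0 - (x1 - x0) ** 2 / (x2 - 2 * x1 + x0)

# Three iterates of the slowly (linearly) convergent map x -> cos(x):
x0 = 1.0
x1 = math.cos(x0)
x2 = math.cos(x1)
accelerated = aitken(x0, x1, x2)
limit = 0.7390851332151607   # the true fixed point of cos(x)
```

The extrapolated value is markedly closer to the limit than the plain third iterate, which is the whole point of the acceleration.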
69
The aim of elimination steps in Gauss elimination method is to reduce the coefficient matrix to A. diagonal B. identity C. lower triangular D. upper triangular
upper triangular
70
The reduced form of the Matrix in Gauss Elimination method is also called A. Column Echelon Form B. Row-Column Echelon Form C. Column-Row Echelon Form D. Row Echelon Form
Row Echelon Form
71
The procedure adopted in the Gauss-Jordan method in solving linear simultaneous equation is A. It is required to assume initial approximate values of the variables B. It reduces the given system of equations to a diagonal matrix C. It reduces the given system of equations to an equivalent triangular system D. The given matrix is factored into lower and upper triangular matrices
It reduces the given system of equations to a diagonal matrix
72
The Gauss-Jordan method transforms a given matrix into: A. Upper triangular form B. Lower triangular form C. Reduced Row Echelon Form (RREF) D. Diagonal form only
Reduced Row Echelon Form (RREF)
73
In Gauss-Jordan, the pivot element is: A. Always equal to zero B. The element used to eliminate other entries in its column C. Chosen arbitrarily D. Only in the last column
The element used to eliminate other entries in its column
74
The augmented matrix in Gauss-Jordan consists of: A. Only coefficients of variables B. Coefficients and constants of the system C. Constants only D. Identity matrix
Coefficients and constants of the system
75
After applying Gauss-Jordan, the coefficient matrix is transformed into A. A triangular matrix B. A diagonal matrix with arbitrary values C. The identity matrix D. A null matrix
The identity matrix
76
The main advantage of Gauss-Jordan method over Gaussian elimination is: A. Fewer computations B. Direct solution without back-substitution C. No need for pivoting D. Works for nonlinear equations
Direct solution without back-substitution
77
Pivoting in Gauss-Jordan method is used to: A. Increase computation time B. Avoid division by very small numbers C. Reduce the number of equations D. Make the system inconsistent
Avoid division by very small numbers
78
The Gauss-Jordan method may fail or give inaccurate results if: A. The matrix is sparse B. The matrix is large C. The pivot element is zero or very small D. The system has a unique solution
The pivot element is zero or very small
79
The Gauss-Jordan method eliminates entries: A. Only below the pivot B. Both above and below the pivot C. Only above the pivot D. Only in the last row
Both above and below the pivot
80
The reduced row echelon form (RREF) has: A. Zeroes in lower triangle only B. Zeroes in upper triangle only C. Zeroes in both above and below pivots, pivots = 1 D. Arbitrary pivot values
Zeroes in both above and below pivots, pivots = 1
81
The Gauss-Jordan method is most efficient for: A. Extremely large systems B. Systems requiring iterative refinement C. Small to medium-sized systems D. Nonlinear equations
Small to medium-sized systems
82
Which step is not part of Gauss-Jordan method? A. Row interchanging B. Row scaling C. Back-substitution D. Row elimination
Back-substitution
83
The Gauss-Jordan method can also be used to: A. Find eigenvalues B. Compute the inverse of a matrix C. Solve differential equations D. Perform polynomial interpolation
Compute the inverse of a matrix
84
If the system has infinitely many solutions, the Gauss-Jordan result will show: A. No solution exists B. Free variables in the reduced system C. A unique solution only D. An inconsistent equation
Free variables in the reduced system
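The Gauss-Jordan cards above can be gathered into one sketch (illustrative, not from the deck; names and the 2×2 example system are mine): reduce the augmented matrix to RREF with partial pivoting and read the solution off directly, with no back-substitution.

```python
def gauss_jordan(A, b):
    """Reduce the augmented matrix [A | b] to RREF: scale each pivot
    to 1 and eliminate entries both above and below it, with partial
    pivoting to avoid dividing by a zero or very small pivot."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]                # partial pivoting
        piv = M[col][col]
        M[col] = [v / piv for v in M[col]]         # pivot becomes 1
        for r in range(n):
            if r != col:
                factor = M[r][col]
                M[r] = [vr - factor * vc for vr, vc in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

# Illustrative system: 2x + y = 3, x + 3y = 5
x = gauss_jordan([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])
```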
85
In LU decomposition, a matrix A is decomposed into: A. Upper triangular × diagonal B. Diagonal x diagonal C. Lower triangular (L) x Upper triangular (U) D. Identity x matrix
Lower triangular (L) x Upper triangular (U)
86
The main advantage of LU decomposition is: A. It avoids pivoting B. It allows solving multiple right-hand sides efficiently C. It eliminates forward substitution D. It converges iteratively
It allows solving multiple right-hand sides efficiently
87
The matrix L in LU decomposition is: A. Diagonal B. Upper triangular C. Lower triangular D. Symmetric
Lower triangular
88
The matrix U in LU decomposition is: A. Lower triangular B. Upper triangular C. Symmetric D. Diagonal
Upper triangular
89
LU decomposition is not possible if the matrix: A. Is square B. Has real entries C. Is singular D. Has positive pivots
Is singular
90
To improve stability in LU decomposition, we often use: A. Bisection method B. Partial pivoting C. Gauss-Seidel method D. Relaxation
Partial pivoting
91
LU decomposition is particularly useful when: A. Only one right-hand side exists B. Multiple right-hand sides exist C. The system is nonlinear D. The system is inconsistent
Multiple right-hand sides exist
92
Which factorization is similar in spirit to LU decomposition? A. QR factorization B. SVD C. Cholesky decomposition (for symmetric positive definite matrices) D. Jacobi method
Cholesky decomposition (for symmetric positive definite matrices)
93
The Gauss elimination method with back-substitution is equivalent to: A. QR decomposition B. LU decomposition C. Bisection method D. Power method
LU decomposition
94
In Doolittle's method of LU decomposition, the diagonal entries of L are: A. Zero B. One C. Arbitrary D. Equal to the pivots
One
95
In Crout's method of LU decomposition, the diagonal entries of U are: A. Zero B. One C. Arbitrary D. Equal to pivots
One
96
LU decomposition can fail without pivoting when: A. The matrix is positive definite B. The matrix is symmetric C. Zero appears as a pivot element D. The system has a unique solution
Zero appears as a pivot element
97
For a 3 × 3 system, LU decomposition produces how many matrices? A. 1 B. 2 (L and U only) C. 2 (L and U, but can include permutation matrix P if pivoting is used) D. 4
2 (L and U, but can include permutation matrix P if pivoting is used)
98
LU decomposition can also be used to compute: A. Eigenvalues B. Determinant of a matrix C. Inverse Laplace transform D. Fourier transform
Determinant of a matrix
99
The determinant of a matrix A from LU decomposition is given by: A. Product of L's diagonal entries B. Product of U's diagonal entries C. Product of diagonal entries of U (since L's diagonal = 1 in standard form) D. Sum of diagonal entries
Product of diagonal entries of U (since L's diagonal = 1 in standard form)
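The LU cards above can be sketched with a Doolittle factorization (illustrative, not from the deck; names and the 2×2 example are mine): L gets a unit diagonal, and the determinant falls out as the product of U's diagonal.

```python
def lu_doolittle(A):
    """Doolittle factorization A = LU with 1's on L's diagonal.
    No pivoting here, so a zero pivot (U[i][i] == 0) makes it fail."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):       # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):   # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

A = [[4.0, 3.0], [6.0, 3.0]]
L, U = lu_doolittle(A)
det = U[0][0] * U[1][1]   # det(A) = product of U's diagonal (L's diagonal is all 1)
```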
100
Cholesky's method decomposes a matrix A into: A. A = LU B. A = QR C. A = LLᵀ D. A = UUᵀ
A = LLᵀ
101
Cholesky decomposition applies only to: A. Any square matrix B. Any triangular matrix C. Symmetric positive definite matrices D. Singular matrices
Symmetric positive definite matrices
102
The matrix L in Cholesky factorization is: A. Upper triangular B. Lower triangular C. Diagonal D. Identity
Lower triangular
103
The transpose of L in Cholesky's method is: A. Also lower triangular B. Upper triangular C. Diagonal D. Singular
Upper triangular
104
Compared to LU decomposition, Cholesky's method requires about: A. Twice as many operations B. Half as many operations C. The same number of operations D. Exponentially more operations
Half as many operations
105
The number of square root operations required in Cholesky's method for an n × n matrix is A. n² B. n C. 2n D. n³
n
106
Which of the following is an advantage of Cholesky's method? A. Works for all square matrices B. Requires fewer computations than LU C. No square root operations D. Works for non-symmetric systems
Requires fewer computations than LU
107
Cholesky decomposition can also be used to compute: A. Eigenvalues B. Determinants of matrices C. Inverse Laplace transforms D. Fourier transforms
Determinants of matrices
108
The determinant of a matrix using Cholesky decomposition is: A. Product of all entries of L B. Product of all entries of U C. Square of the product of diagonal entries of L D. Sum of diagonal entries of L
Square of the product of diagonal entries of L
109
The main difference between LU and Cholesky is that Cholesky: A. Works for any square matrix B. Is restricted to symmetric positive definite matrices C. Avoids multiplication D. Does not involve square roots
Is restricted to symmetric positive definite matrices
110
Cholesky's method is most often used in: A. Iterative refinement B. Numerical linear algebra and optimization problems C. Fourier series expansion D. Polynomial interpolation
Numerical linear algebra and optimization problems
111
In solving Ax = b using Cholesky, the process is:
Forward substitution with Ly = b, then backward substitution with Lᵀx = y
112
Cholesky's factorization produces a matrix L with A. Zeros everywhere B. Real positive diagonal entries C. Negative diagonal entries D. Arbitrary diagonal values
Real positive diagonal entries
113
Compared to Gaussian elimination, Cholesky's method is: A. Less stable and more expensive B. Equally stable and same cost C. More efficient for symmetric positive definite matrices D. Only for non-square systems
More efficient for symmetric positive definite matrices
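The Cholesky cards above can be gathered into one sketch (illustrative, not from the deck; names and the 2×2 symmetric positive definite example are mine): only L is computed, with one square root per diagonal entry, and the determinant is the squared product of L's diagonal.

```python
import math

def cholesky(A):
    """A = L·Lᵀ for a symmetric positive definite matrix. Only L is
    computed (about half the work of LU), with n square roots in
    total -- one per diagonal entry."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)   # fails if A is not positive definite
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

A = [[4.0, 2.0], [2.0, 3.0]]
L = cholesky(A)
det = (L[0][0] * L[1][1]) ** 2   # det(A) = (product of L's diagonal)^2
```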
114
Crout's method is a form of: A. Bisection method B. Gauss-Seidel method C. LU decomposition D. Newton's method
LU decomposition
115
In Crout's decomposition, matrix A is expressed as: A. A = L + U B. A = UL C. A = LU D. A = LᵀL
A = LU
116
In Crout's method, the matrix L is: A. Upper triangular with diagonal 1 B. Lower triangular with arbitrary diagonal values C. Diagonal only D. Symmetric
Lower triangular with arbitrary diagonal values
117
In Crout's method, the matrix U is: A. Lower triangular with arbitrary diagonals B. Upper triangular with diagonal entries = 1 C. Symmetric D. Diagonal
Upper triangular with diagonal entries = 1
118
Compared to Doolittle's method, Crout's method differs in: A. Using partial pivoting B. Which triangular matrix has 1's on the diagonal C. Complexity order D. Number of iterations
Which triangular matrix has 1's on the diagonal
119
Crout's method is most efficient for: A. Nonlinear systems B. Small to medium-sized linear systems C. Polynomial fitting D. Eigenvalue computation
Small to medium-sized linear systems
120
In solving Ax = b with Crout's method, the steps are: A. Back substitution only B. Matrix inversion C. Forward substitution (Ly = b), then backward substitution (Ux = y) D. Trial and error
Forward substitution (Ly = b), then backward substitution (Ux = y)
121
Which is true about Crout's decomposition? A. Works only for diagonal matrices B. Works for square, non-singular matrices C. Works only for symmetric matrices D. Works for singular matrices
Works for square, non-singular matrices
122
Crout's decomposition is unstable when: A. The system has a unique solution B. The matrix is sparse C. Pivot element is zero or very small D. The system is consistent
Pivot element is zero or very small
123
Stability in Crout's method is improved by: A. Iterative refinement B. Partial pivoting C. Increasing system size D. Matrix inversion
Partial pivoting
124
In Crout's decomposition, the number of matrices produced is: A. One B. Two (L and U) C. Three (L, U, and P) D. Four
Two (L and U)
125
If pivoting is applied, the decomposition becomes A. Cholesky method B. Doolittle's method C. LUP decomposition D. Jacobi method
LUP decomposition
126
Crout's method is essentially a systematic way of performing A. Interpolation B. Polynomial factorization C. Gaussian elimination D. Iteration
Gaussian elimination
127
Crout's method requires solving how many substitution steps after decomposition? A. None B. One C. Two D. Three
Two
128
Crout's method can also be used to compute: A. Fourier transforms B. Inverse Laplace transforms C. Determinants of matrices D. Taylor expansions
Determinants of matrices
129
The determinant of a matrix using Crout's decomposition is equal to: A. Product of diagonal entries of U B. Product of diagonal entries of L C. Sum of diagonal entries of L D. Product of off-diagonal entries
Product of diagonal entries of L
130
Gauss-Seidel is an improvement of which method? A. Newton-Raphson B. Jacobi method C. Bisection method D. Cholesky method
Jacobi method
131
Compared to Jacobi, Gauss-Seidel generally A. Converges slower B. Converges faster C. Requires fewer equations D. Never converges
Converges faster
132
Convergence of the Gauss-Seidel and Jacobi method is guaranteed if the coefficient matrix is: A. Symmetric B. Sparse C. Strictly diagonally dominant D. Singular
Strictly diagonally dominant
133
Another sufficient condition for Gauss-Seidel and Jacobi method convergence is if the coefficient matrix is: A. Rectangular B. Ill-conditioned C. Symmetric positive definite D. Singular
Symmetric positive definite
134
Gauss-Seidel method is best suited for: A. Dense large systems B. Sparse linear systems C. Nonlinear systems D. Single-variable equations
Sparse linear systems
135
The iterative formula for Gauss-Seidel differs from Jacobi in that it: A. Uses inverse of A directly B. Immediately substitutes the latest computed values C. Uses random guesses D. Requires determinant calculation
Immediately substitutes the latest computed values
136
Initial guess in Gauss-Seidel: A. Must be zero B. Can be arbitrary C. Must equal the exact solution D. Must be symmetric
Can be arbitrary
137
Which of the following is an advantage of Gauss-Seidel? A. Always converges regardless of matrix type B. Faster convergence than Jacobi in many cases C. Requires no initial guess D. Exact in finite steps
Faster convergence than Jacobi in many cases
138
If the matrix is strictly diagonally dominant, the convergence of Gauss-Seidel is
Guaranteed
139
The Gauss-Seidel method is especially suitable for solving: A. Systems with a unique diagonal matrix B. Systems with ill-conditioned matrices C. Large sparse systems from discretized PDEs D. Nonlinear algebraic systems
Large sparse systems from discretized PDEs
140
If the spectral radius ρ(G) of the iteration matrix in Gauss-Seidel and Jacobi satisfies ρ(G) < 1, then: A. The method diverges B. The method converges C. The method oscillates D. The method stops immediately
The method converges
141
In the Jacobi method, each new variable is computed using: A. The latest available values B. Only values from the previous iteration C. Random guesses D. Partial pivoting
Only values from the previous iteration
142
Compared to Gauss-Seidel, the Jacobi method generally: A. Converges faster B. Converges slower C. Never converges D. Requires no iterations
Converges slower
143
The Jacobi method requires: A. No initial guess B. An initial guess vector C. The exact solution D. Eigenvalues of A
An initial guess vector
144
If the coefficient matrix is not diagonally dominant, the Jacobi method: A. Always converges B. May fail to converge C. Gives the exact solution in one step D. Works only with Gauss elimination
May fail to converge
145
The Jacobi method is especially useful for: A. Small dense systems B. Large sparse systems C. Nonlinear equations D. Matrix inversion
Large sparse systems
146
The Jacobi method is considered a: A. Direct decomposition method B. Approximation-free method C. Stationary iterative method D. Root-finding method
Stationary iterative method
147
In Jacobi, all components of x^(k+1) are computed using: A. Random subsets of equations B. Only the values from x(k) C. A mixture of new and old values D. Pivoted LU decomposition
Only the values from x(k)
148
The speed of convergence in Jacobi depends on: A. Only the right-hand side vector B. Spectral radius of the iteration matrix C. Determinant of A D. Initial guess alone
Spectral radius of the iteration matrix
149
Jacobi method is widely applied in: A. Eigenvalue problems B. Fourier transforms C. Iterative solutions of discretized PDEs D. Laplace transforms
Iterative solutions of discretized PDEs
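The Jacobi/Gauss-Seidel cards can be contrasted in one sketch (illustrative, not from the deck; names and the 2×2 strictly diagonally dominant example are mine): Jacobi uses only the previous iterate, while Gauss-Seidel immediately substitutes the latest computed values.

```python
def jacobi(A, b, x0, iters):
    """Jacobi: every component of the new iterate uses only values
    from the previous iteration x^(k)."""
    n = len(A)
    x = x0[:]
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

def gauss_seidel(A, b, x0, iters):
    """Gauss-Seidel: each component immediately substitutes the
    latest computed values, which usually speeds up convergence."""
    n = len(A)
    x = x0[:]
    for _ in range(iters):
        for i in range(n):
            x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
    return x

# Strictly diagonally dominant matrix, so both iterations are guaranteed to converge.
A = [[4.0, 1.0], [2.0, 5.0]]
b = [9.0, 9.0]                      # exact solution: x = [2, 1]
xj = jacobi(A, b, [0.0, 0.0], 50)
xg = gauss_seidel(A, b, [0.0, 0.0], 50)
```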
150
The forward difference formula is based on: A. Taylor series about x_{n-1} B. Taylor series about x_n C. Trapezoidal rule D. Simpson's rule
Taylor series about x_n
151
The forward difference method is best suited for: A. Boundary points at the right end B. Boundary points at the left end C. Midpoints D. All equally
Boundary points at the left end
152
Which finite difference operator is used in forward difference? A. Backward operator ∇ B. Forward operator Δ C. Central operator δ D. Differential operator D
Forward operator Δ
153
For step size h, forward difference is exact for: A. Constant functions B. Linear functions C. Quadratic functions D. All functions
Linear functions
154
Forward difference approximation is generally: A. More accurate than central difference B. Exact for polynomials of any degree C. Less accurate than central difference D. Independent of step size
Less accurate than central difference
155
The backward difference method is mainly applied at: A. Left boundary B. Right boundary C. Midpoints D. Random points
Right boundary
156
Backward difference formula uses which operator? A. Forward operator Δ B. Backward operator ∇ C. Central operator δ D. Shift operator E
Backward operator ∇
157
For step size h, the backward difference is exact for: A. Quadratic functions B. Linear functions C. Cubic functions D. Trigonometric functions
Linear functions
158
Backward difference table is generally used in A. Newton's forward interpolation B. Newton's backward interpolation C. Lagrange interpolation D. Simpson's rule
Newton's backward interpolation
159
Central difference is generally: A. Less accurate than forward difference B. More accurate than forward/backward difference C. Exact only for constants D. Independent of step size
More accurate than forward/backward difference
160
Backward difference is typically used in numerical differentiation at: A. Any random point B. Endpoints (last data points) C. Midpoints only D. Nodes with symmetry
Endpoints (last data points)
161
The central difference approximation is symmetric about: A. Left point B. Right point C. Central point D. Any random point
Central point
162
For smooth functions, central difference is: A. First-order accurate B. Second-order accurate C. Third-order accurate D. Fourth-order accurate
Second-order accurate
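The accuracy orders claimed in these cards can be checked numerically; in the sketch below (hypothetical helper names, using f(x) = sin x) halving h roughly halves the forward-difference error but quarters the central-difference error:

```python
import math

# Forward difference: O(h) truncation error.
def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h

# Central difference: O(h^2) truncation error.
def central_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)

x, exact = 1.0, math.cos(1.0)   # exact derivative of sin at x = 1
err_fwd = [abs(forward_diff(math.sin, x, h) - exact) for h in (0.1, 0.05)]
err_cen = [abs(central_diff(math.sin, x, h) - exact) for h in (0.1, 0.05)]
# err_fwd shrinks by ~2 when h is halved; err_cen shrinks by ~4.
```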
163
Central difference is not suitable at: A. Interior points B. Endpoints of data range C. Symmetric nodes D. Equally spaced points
Endpoints of data range
164
Central difference approximation is commonly used in: A. Curve fitting B. Root finding C. Numerical differentiation and PDE discretization D. Eigenvalue problems
Numerical differentiation and PDE discretization
165
Central difference and Higher-order numerical differentiation methods are derived using: A. Monte Carlo simulation B. Taylor series expansion C. Gaussian quadrature D. Newton-Raphson method
Taylor series expansion
166
A 5-point central difference formula improves accuracy to: A. First order B. Second order C. Fourth order D. Sixth order
Fourth order
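As a sketch of that claim (the formula is assumed in its standard form), halving h should shrink the error of the 5-point central formula by about 2⁴ = 16:

```python
import math

# 5-point central difference, fourth-order accurate:
# f'(x) ≈ (f(x-2h) - 8 f(x-h) + 8 f(x+h) - f(x+2h)) / (12 h)
def five_point(f, x, h):
    return (f(x - 2*h) - 8*f(x - h) + 8*f(x + h) - f(x + 2*h)) / (12 * h)

exact = math.cos(1.0)                       # derivative of sin at x = 1
e1 = abs(five_point(math.sin, 1.0, 0.1) - exact)
e2 = abs(five_point(math.sin, 1.0, 0.05) - exact)
# Fourth order: e1 / e2 should be close to 16.
```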
167
The main goal of higher-order formulas in differentiation is to: A. Increase computational cost B. Reduce truncation error C. Eliminate step size D. Avoid interpolation
Reduce truncation error
168
The trade-off in higher-order differentiation methods is: A. Higher accuracy and lower cost B. Higher accuracy but more computational cost C. Lower accuracy and higher cost D. Instability always
Higher accuracy but more computational cost
169
Higher-order differentiation methods are widely applied in: A. Polynomial interpolation B. Numerical solutions of PDEs C. Root finding D. Matrix decomposition
Numerical solutions of PDEs
170
Newton-Cotes formulas are based on: A. Taylor series expansion B. Polynomial interpolation C. Root finding D. Gaussian quadrature
Polynomial interpolation
171
The simplest Newton-Cotes formula is: A. Simpson's 3/8 rule B. Trapezoidal rule C. Boole's rule D. Weddle's rule
Trapezoidal rule
172
The error in Newton-Cotes formulas generally depends on: A. Random approximation B. Degree of polynomial used C. Eigenvalues of the system D. Condition number of matrix
Degree of polynomial used
173
Simpson's 1/3 rule is a Newton-Cotes formula with: A. One interval B. Two subintervals (degree 2 polynomial) C. Three subintervals D. Four subintervals
Two subintervals (degree 2 polynomial)
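Simpson's 1/3 rule fits a quadratic through the two endpoints and the midpoint, yet by symmetry it is exact even for cubics; a minimal sketch (hypothetical function name):

```python
# Simpson's 1/3 rule over [a, b]: two subintervals, one parabola
# through (a, m, b). Exact for polynomials up to degree 3.
def simpson(f, a, b):
    m = (a + b) / 2          # midpoint of the interval
    h = (b - a) / 2          # width of each subinterval
    return (h / 3) * (f(a) + 4 * f(m) + f(b))

val = simpson(lambda x: x**3, 0.0, 2.0)   # exact integral of x^3 on [0,2] is 4
```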
174
The Newton-Cotes formula of order n integrates exactly all polynomials of degree: A. n + 2 B. n C. n - 1 D. 2n
n
175
A drawback of higher-order Newton-Cotes formulas is: A. Always unstable B. Runge's phenomenon (oscillations) C. High truncation error only D. Cannot be applied to smooth functions
Runge's phenomenon (oscillations)
176
Gaussian quadrature is based on choosing: A. Equally spaced points B. Optimal non-uniform nodes C. Random sampling D. Midpoints only
Optimal non-uniform nodes
177
Gaussian quadrature integrates exactly: A. Only linear polynomials B. Polynomials of degree ≤ n C. Polynomials of degree ≤ 2n-1 D. All polynomials
Polynomials of degree ≤ 2n-1
178
The standard Gaussian quadrature uses which polynomials as weight functions? A. Chebyshev polynomials B. Legendre polynomials C. Hermite polynomials D. Lagrange polynomials
Legendre polynomials
179
Gauss-Legendre quadrature applies to the interval: A. [0, 1] B. [-1, 1] C. [0, ∞) D. (-∞, ∞)
[-1, 1]
180
Gaussian quadrature with 2 nodes is exact for: A. Degree 1 polynomial B. Degree 3 polynomial C. Degree 2 polynomial D. Degree 4 polynomial
Degree 3 polynomial
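The 2-node rule uses nodes ±1/√3 with unit weights; as the cards state, with n = 2 nodes it is exact up to degree 2n − 1 = 3. A small sketch (hypothetical function name):

```python
import math

# Two-point Gauss-Legendre rule on [-1, 1]:
# nodes ±1/sqrt(3), weights both equal to 1.
def gauss2(f):
    t = 1.0 / math.sqrt(3.0)
    return f(-t) + f(t)

# ∫_{-1}^{1} (x^3 + x^2 + 1) dx = 0 + 2/3 + 2 = 8/3, reproduced exactly.
val = gauss2(lambda x: x**3 + x**2 + 1.0)
```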
181
A major advantage of Gaussian quadrature over Newton-Cotes is: A. Simpler implementation B. Higher accuracy with fewer nodes C. Uses uniform spacing D. Always exact for any function
Higher accuracy with fewer nodes
182
The weights in Gaussian quadrature are chosen to ensure: A. Symmetry only B. Exactness for high-degree polynomials C. Minimum computation time D. Random error cancellation
Exactness for high-degree polynomials
183
The nodes in Gauss-Legendre quadrature are: A. Endpoints of the interval B. Roots of Legendre polynomials C. Midpoints of the interval D. Random values in [-1,1]
Roots of Legendre polynomials
184
Romberg integration is based on: A. Gaussian quadrature B. Richardson extrapolation on trapezoidal rule C. Taylor expansion D. Forward differences
Richardson extrapolation on trapezoidal rule
185
Romberg integration improves accuracy by: A. Decreasing step size only B. Combining trapezoidal approximations with extrapolation C. Random sampling D. Ignoring error terms
Combining trapezoidal approximations with extrapolation
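A minimal Romberg table can be sketched as follows (hypothetical function name; column 0 holds composite trapezoidal values with h, h/2, h/4, …, and each later column is a Richardson extrapolation of the previous one):

```python
import math

def romberg(f, a, b, levels=4):
    R = [[0.0] * levels for _ in range(levels)]
    R[0][0] = (b - a) * (f(a) + f(b)) / 2          # coarsest trapezoid
    for i in range(1, levels):
        n = 2 ** i                                  # subintervals at this level
        h = (b - a) / n
        # Reuse the previous trapezoid; add only the new midpoints.
        R[i][0] = R[i-1][0] / 2 + h * sum(
            f(a + (2*k - 1) * h) for k in range(1, n // 2 + 1))
        # Richardson extrapolation cancels error terms column by column.
        for j in range(1, i + 1):
            R[i][j] = R[i][j-1] + (R[i][j-1] - R[i-1][j-1]) / (4**j - 1)
    return R[levels-1][levels-1]

approx = romberg(math.sin, 0.0, math.pi)   # exact value is 2
```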
186
Romberg integration achieves high accuracy because: A. Uses Gaussian weights B. Cancels error terms progressively C. Uses midpoint rule D. Ignores higher derivatives
Cancels error terms progressively
187
The first column of the Romberg table contains: A. Midpoint values B. Successive trapezoidal approximations C. Gaussian quadrature results D. Lagrange coefficients
Successive trapezoidal approximations
188
Romberg integration can be seen as a refinement of: A. Newton-Cotes formulas B. Trapezoidal rule C. Gaussian quadrature D. Midpoint method
Trapezoidal rule
189
Which of the following is true about Romberg integration? A. Requires only one function evaluation B. Cannot be automated C. Builds accuracy systematically with tabular extrapolation D. Works only for polynomials
Builds accuracy systematically with tabular extrapolation
190
Romberg integration is particularly efficient for: A. Discontinuous functions B. Random noisy data C. Smooth functions with continuous derivatives D. Piecewise constant functions
Smooth functions with continuous derivatives
191
Euler's method is primarily used to solve: A. Algebraic equations B. Initial value problems for ODEs C. Partial differential equations D. Boundary value problems
Initial value problems for ODEs
192
Euler's method is classified as: A. Exact B. First-order method C. Second-order method D. Fourth-order method
First-order method
193
A major disadvantage of Euler's method is: A. Difficult implementation B. High computational cost C. Low accuracy and instability for stiff equations D. Cannot be applied to linear ODEs
Low accuracy and instability for stiff equations
194
Euler's method improves approximation by: A. Increasing h B. Decreasing step size h C. Using random step size D. Ignoring slope
Decreasing step size h
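Euler's first-order behaviour is easy to demonstrate: doubling the number of steps (halving h) roughly halves the error. A minimal sketch (hypothetical function name) for y′ = y, y(0) = 1, whose exact value at x = 1 is e:

```python
import math

# Explicit Euler: step along the tangent at the left endpoint.
def euler(f, x0, y0, x_end, n):
    h = (x_end - x0) / n
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

err_n  = abs(euler(lambda x, y: y, 0.0, 1.0, 1.0, 100) - math.e)
err_2n = abs(euler(lambda x, y: y, 0.0, 1.0, 1.0, 200) - math.e)
# First order: err_n / err_2n should be close to 2.
```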
195
Which of the following is an improved version of Euler's method? A. Gauss-Seidel method B. Newton-Raphson method C. Heun's method (modified Euler) D. Jacobi method
Heun's method (modified Euler)
196
The most commonly used Runge-Kutta method is: A. RK1 B. RK2 C. RK4 D. RK5
RK4
197
The classical fourth-order Runge-Kutta (RK4) requires: A. 2 function evaluations per step B. 4 function evaluations per step C. 6 function evaluations per step D. 8 function evaluations per step
4 function evaluations per step
198
RK2 method is also called: A. Trapezoidal rule B. Midpoint method C. Gauss quadrature method D. Romberg method
Midpoint method
199
The Runge-Kutta methods avoid explicit use of: A. Differential equations B. Boundary conditions C. Higher-order derivatives D. Step size
Higher-order derivatives
200
RK methods achieve higher accuracy by: A. Using smaller step size only B. Evaluating function slopes at multiple points within the step C. Using random nodes D. Modifying Taylor expansion directly
Evaluating function slopes at multiple points within the step
201
Compared to Euler's method, RK4 is: A. Less accurate B. Equally accurate C. Much more accurate with same step size D. Unstable for all equations
Much more accurate with same step size
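The classical RK4 scheme from these cards, four slope evaluations combined with weights 1, 2, 2, 1, can be sketched as follows (hypothetical function names); even with a modest step size its error is far below Euler's:

```python
import math

# One classical RK4 step: four slope evaluations per step.
def rk4_step(f, x, y, h):
    k1 = f(x, y)
    k2 = f(x + h/2, y + h/2 * k1)
    k3 = f(x + h/2, y + h/2 * k2)
    k4 = f(x + h, y + h * k3)
    return y + (h / 6) * (k1 + 2*k2 + 2*k3 + k4)

def rk4(f, x0, y0, x_end, n):
    h = (x_end - x0) / n
    x, y = x0, y0
    for _ in range(n):
        y = rk4_step(f, x, y, h)
        x += h
    return y

# y' = y, y(0) = 1 integrated to x = 1 with only 10 steps.
err = abs(rk4(lambda x, y: y, 0.0, 1.0, 1.0, 10) - math.e)
```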
202
The Adams-Bashforth method is classified as: A. Single-step explicit B. Multi-step explicit C. Multi-step implicit D. Predictor-corrector implicit
Multi-step explicit
203
Adams-Bashforth methods use previously computed values of: A. Only y B. Function evaluations f(x, y) C. Taylor expansion coefficients D. Random points
Function evaluations f(x, y)
204
The 2-step Adams-Bashforth method is of order: A. 1 B. 2 C. 3 D. 4
2
205
A key advantage of Adams-Bashforth methods is: A. Stability for stiff equations B. Fewer function evaluations per step compared to RK C. No starting values needed D. Always exact
Fewer function evaluations per step compared to RK
206
A drawback of Adams-Bashforth is: A. Needs fewer points B. Requires starting values from another method (e.g., RK4) C. Cannot handle explicit functions D. High cost per step
Requires starting values from another method (e.g., RK4)
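The 2-step Adams-Bashforth formula, y_{n+1} = y_n + h(3f_n − f_{n−1})/2, illustrates both points above: one new function evaluation per step, but two starting values are needed. In this sketch (hypothetical function name) the second starting value is supplied exactly; in practice a single-step method such as RK4 provides it:

```python
import math

# Two-step Adams-Bashforth: explicit, second order.
def ab2(f, x0, y0, y1, x_end, n):
    h = (x_end - x0) / n
    ys = [y0, y1]                      # two starting values required
    for i in range(1, n):
        x_prev = x0 + (i - 1) * h
        x_curr = x0 + i * h
        ys.append(ys[i] + h * (3 * f(x_curr, ys[i]) - f(x_prev, ys[i-1])) / 2)
    return ys[-1]

n = 100
h = 1.0 / n
# y' = y, y(0) = 1; exact second value exp(h) used as the starter.
approx = ab2(lambda x, y: y, 0.0, 1.0, math.exp(h), 1.0, n)
err = abs(approx - math.e)             # second order: err ~ O(h^2)
```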
207
The Adams-Bashforth formula is derived from: A. Gaussian quadrature B. Polynomial interpolation of past derivatives C. Fourier expansion D. Backward difference
Polynomial interpolation of past derivatives
208
Adams-Bashforth methods are commonly used for: A. Stiff ODEs B. Boundary value problems C. Non-stiff IVPs with smooth solutions D. Random data fitting
Non-stiff IVPs with smooth solutions
209
The Adams-Moulton method is classified as: A. Explicit multi-step B. Implicit multi-step C. Predictor-only D. Single-step
Implicit multi-step
210
The implicit nature of Adams-Moulton means: A. No equations to solve B. Same as Euler's method C. Requires solving equations at each step D. Always unstable
Requires solving equations at each step
211
Adams-Moulton methods generally have: A. Lower accuracy than Adams-Bashforth B. Higher accuracy and stability than Adams-Bashforth C. Equal accuracy to Euler's method D. Zero error
Higher accuracy and stability than Adams-Bashforth
212
The 1-step Adams-Moulton method reduces to: A. Euler's method B. Trapezoidal rule C. Simpson's rule D. RK2
Trapezoidal rule
213
Adams-Moulton formulas use interpolation of: A. Past function values only B. Both past and current function evaluations C. Derivatives only D. Random points
Both past and current function evaluations
214
Adams-Moulton methods are more suitable than Adams-Bashforth for: A. Fast computations B. Approximate answers C. Stiff ODE problems D. Systems with noise
Stiff ODE problems
215
The predictor in predictor-corrector schemes is usually: A. Adams-Moulton B. Adams-Bashforth C. Gauss-Seidel D. Newton-Raphson
Adams-Bashforth
216
The main drawback of Adams-Moulton is: A. Unstable always B. High number of function evaluations C. Requires solving implicit equations D. Cannot be used for stiff problems
Requires solving implicit equations
217
Predictor-Corrector methods combine: A. Two explicit methods B. One explicit predictor and one implicit corrector C. Two implicit methods D. Random approximations
One explicit predictor and one implicit corrector
218
The corrector step: A. Predicts again B. Refines the predicted solution C. Decreases step size automatically D. Solves backward equations
Refines the predicted solution
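The simplest predictor-corrector pair illustrates this structure: explicit Euler predicts, and the implicit trapezoidal rule corrects using the predicted value (a minimal sketch with hypothetical names; the corrector may be swept more than once, as card 274 notes):

```python
import math

def predictor_corrector(f, x0, y0, x_end, n, sweeps=2):
    h = (x_end - x0) / n
    x, y = x0, y0
    for _ in range(n):
        y_new = y + h * f(x, y)                        # predictor: explicit Euler
        for _ in range(sweeps):                        # corrector: trapezoidal rule,
            y_new = y + h * (f(x, y) + f(x + h, y_new)) / 2   # refined iteratively
        x, y = x + h, y_new
    return y

# y' = y, y(0) = 1 integrated to x = 1 (exact value: e).
err = abs(predictor_corrector(lambda x, y: y, 0.0, 1.0, 1.0, 100) - math.e)
```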
219
Predictor-Corrector methods improve: A. Storage B. Accuracy and stability C. Randomness of approximation D. Step size selection only
Accuracy and stability
220
A drawback of Predictor-Corrector methods is: A. Zero convergence B. Requires multiple function evaluations per step C. No use for IVPs D. Lack of accuracy
Requires multiple function evaluations per step
221
The Milne-Simpson method is an example of: A. Explicit RK method B. Predictor-Corrector scheme C. Taylor expansion method D. Gaussian quadrature
Predictor-Corrector scheme
222
In predictor-corrector schemes, the process may be repeated: A. Never B. Iteratively until convergence C. Randomly chosen D. Only once per problem
Iteratively until convergence
223
The corrector is applied to reduce: A. Step size B. Local truncation error C. Memory requirement D. Function evaluations
Local truncation error