Algorithm Specification
A specification precisely expresses the task to be done and consists of three parts: the name, the initial condition (input), and the final condition (output).
Name
The name of the algorithm and its arguments.
Initial Condition (Input)
Specifies what constitutes “correct” input data.
Final Condition (Output)
Specifies the desired result or state after the algorithm finishes.
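The three parts above can be sketched as a docstring on a small routine (a hypothetical example; `array_sum` is an illustrative name, not from the notes):

```python
def array_sum(arr):
    """
    Name: array_sum(arr)
    Initial Condition (Input): arr is a (possibly empty) list of numbers.
    Final Condition (Output): returns arr[0] + arr[1] + ... + arr[n-1]
        (0 for an empty list).
    """
    total = 0
    for v in arr:
        total += v
    return total
```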
Total Correctness
An algorithm is totally correct if, for any correct input data, it stops and returns the correct output.
Partial Correctness
An algorithm is partially correct if, for any correct input data, whenever it stops the result is guaranteed to be correct. Partial correctness does not guarantee that the algorithm actually stops.
Stop Property
The requirement that an algorithm will certainly finish after a finite number of iterations for any correct input data.
Prove the Stop Property
Often shown by identifying a loop variable that moves toward a constant termination bound by a fixed increment or decrement each iteration.
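As a sketch, a hypothetical linear search whose loop variable moves by a fixed increment toward a constant bound, which gives the stop property:

```python
def linear_search(arr, key):
    # Stop property: i starts at 0 and increases by exactly 1 each
    # iteration, so after at most len(arr) iterations the loop
    # condition i < len(arr) fails and the algorithm stops.
    i = 0
    while i < len(arr):
        if arr[i] == key:
            return i
        i += 1
    return -1
```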
Prove Partial Correctness
Usually involves using a Loop Invariant.
Loop Invariants Definition
A logical predicate that, if satisfied before an iteration of a loop, remains satisfied after that iteration.
Example: In a “find maximum” algorithm, a typical invariant is that the current candidate $x$ is the maximum of all array elements visited so far ($\forall_{0\le j<i}\; x\ge Arr[j]$).
Loop Invariants Application
To prove an algorithm’s goal, you combine the loop invariant with the loop’s termination condition.
Example: In the “find maximum” algorithm, the loop terminates when $i = n$; combining this with the invariant $\forall_{0\le j<i}\; x\ge Arr[j]$ yields $\forall_{0\le j<n}\; x\ge Arr[j]$, i.e., $x$ is the maximum of the whole array.
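A minimal sketch of a hypothetical find-maximum routine; the `assert` states the invariant from the example, and the final comment combines it with the termination condition:

```python
def find_max(arr):
    # Assumes arr is non-empty (the initial condition).
    x = arr[0]
    i = 1
    while i < len(arr):
        # Invariant: x >= arr[j] for all 0 <= j < i.
        assert all(x >= arr[j] for j in range(i))
        if arr[i] > x:
            x = arr[i]
        i += 1
    # At termination i == len(arr), so the invariant gives
    # x >= arr[j] for every j: x is the maximum of the whole array.
    return x
```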
Measuring Algorithm Speed
Method: The speed of an algorithm is measured by counting its basic (elementary) operations rather than measuring execution time in seconds.
Independence: This approach ensures the measure is independent of specific programming languages, hardware, or software platforms.
Internal Property: It treats complexity as an internal property of the algorithm itself.
Assessment Prerequisites
Determine the Dominating Operation Set: Identify the operations that represent the bulk of the work.
Determine the Data Size: Identify what in the input (usually denoted as $n$) influences the number of operations.
Dominating Operations
Definition: Dominating operations are those performed most frequently and whose count is proportional to the total amount of work performed by the algorithm.
Requirement: These operations must be performable in constant time.
Examples: In a search algorithm, the comparison arr[i] == key or the index increment i++ is a typical dominating operation.
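A sketch counting the dominating operation in a hypothetical linear search (`search_with_count` is an illustrative name): when the key is absent, every element is compared, so the count equals $n$.

```python
def search_with_count(arr, key):
    # Count the dominating operation: the comparison arr[i] == key.
    comparisons = 0
    for i in range(len(arr)):
        comparisons += 1
        if arr[i] == key:
            return i, comparisons
    # Key absent: all n elements were compared (the worst case).
    return -1, comparisons
```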
Time Complexity
The number of dominating operations executed by the algorithm expressed as a function of the data size $n$.
Space Complexity ($S(n)$)
The number of units of memory (e.g., machine words) used by the algorithm as a function of data size. Usually, memory used for input data is excluded.
Pessimistic (Worst-case) Time Complexity ($W(n)$)
The maximum number of dominating operations performed for any input of size $n$; it is the “supremum” of the work across all possible datasets of that size.
Average Time Complexity ($A(n)$)
The expected value of the number of dominating operations, calculated by multiplying the number of operations for each case by its probability of occurrence.
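For example, assuming the key is present and equally likely at each of the $n$ positions of a linear search (an assumption added here, not stated in the notes), finding it at position $k$ costs $k$ comparisons, and the expected value works out to $(n+1)/2$:

```python
from fractions import Fraction

def average_comparisons(n):
    # Each position k (1-based) has probability 1/n and costs
    # k comparisons; sum k * (1/n) over k = 1..n gives A(n).
    return sum(Fraction(k, n) for k in range(1, n + 1))

# average_comparisons(n) equals (n + 1) / 2.
```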
Asymptotic Notation
Purpose: It provides a way to express complexity while neglecting unimportant details such as additive or multiplicative constants (e.g., $2.1n - 1$ is simply seen as having linear rank).
“Big O” ($O$)
Defines an upper bound. $f(n) = O(g(n))$ if there exist constants $c > 0$ and $n_0$ such that $f(n) \le c \cdot g(n)$ for all $n \ge n_0$. Intuitively, it corresponds to “$\le$”.
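A finite numeric sanity check (not a proof) of Big-O witnesses for the $2.1n - 1$ example; `witnesses_hold` is a hypothetical helper:

```python
def witnesses_hold(f, g, c, n0, upto=1000):
    # Check f(n) <= c * g(n) for all n0 <= n <= upto -- a finite
    # spot check of the constants c and n0, not a proof.
    return all(f(n) <= c * g(n) for n in range(n0, upto + 1))

# f(n) = 2.1n - 1 is O(n): the witnesses c = 2.1, n0 = 1 work,
# since 2.1n - 1 <= 2.1n for every n >= 1.
```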
“Big Theta” ($\Theta$)
Defines the same rank. $f(n) = \Theta(g(n))$ if $f(n) = O(g(n))$ and $g(n) = O(f(n))$. Intuitively, it corresponds to “$=$”.
“Big Omega” ($\Omega$)
Defines a lower bound. $f(n) = \Omega(g(n))$ means $g(n) = O(f(n))$; intuitively, it corresponds to “$\ge$”.
Small “o” ($o$) and Small “omega” ($\omega$)
These represent strict inequalities, intuitively corresponding to “$<$” and “$>$”, respectively.
Common Ranks of Functions (from lowest to highest growth)
Constant: $\Theta(1)$
Logarithmic: $\Theta(\log n)$
Linear: $\Theta(n)$
Linear-logarithmic: $\Theta(n \log n)$
Square (Quadratic): $\Theta(n^2)$
Cubic: $\Theta(n^3)$
Exponential: $\Theta(2^n)$
Factorial: $\Theta(n!)$
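A quick check that these ranks already separate at a modest $n$ (a single data point, not an asymptotic argument):

```python
import math

# Evaluate each rank at n = 16 (one illustrative point).
n = 16
ranks = [("1", 1),
         ("log n", math.log2(n)),
         ("n", n),
         ("n log n", n * math.log2(n)),
         ("n^2", n ** 2),
         ("n^3", n ** 3),
         ("2^n", 2 ** n),
         ("n!", math.factorial(n))]
# The values appear in strictly increasing order,
# matching the list above from lowest to highest growth.
```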