Task dependency graph:
DAG with nodes being tasks and edges being dependencies
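A task DAG can be sketched as a small dictionary mapping each task to the tasks it depends on; a topological sort then gives an order in which every task runs after its dependencies. The task names and the `ready_order` helper below are hypothetical, just to illustrate the idea.

```python
from collections import defaultdict, deque

# Hypothetical task DAG: each task maps to the tasks it depends on.
deps = {
    "load": [],
    "parse": ["load"],
    "analyze": ["parse"],
    "report": ["analyze", "load"],
}

def ready_order(deps):
    """Return tasks in an order where every task follows its dependencies."""
    dependents = defaultdict(list)
    indegree = {t: len(d) for t, d in deps.items()}
    for t, ds in deps.items():
        for d in ds:
            dependents[d].append(t)
    queue = deque(t for t, n in indegree.items() if n == 0)
    order = []
    while queue:
        t = queue.popleft()
        order.append(t)
        for u in dependents[t]:
            indegree[u] -= 1
            if indegree[u] == 0:
                queue.append(u)
    return order

print(ready_order(deps))
```

Tasks with no remaining dependencies (indegree 0) are exactly the ones that could be scheduled in parallel at any point.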
Steps in the parallelization:
Possible decompositions into tasks:
Granularity of decomposition:
Fine grain:
Each task computes a single element, lots of tasks
Coarse grain:
Each task computes multiple elements, few tasks
Decomposition techniques:
Speculative decomposition approaches:
Speculative decomposition optimistic approach:
Scheduling tasks even if they might be dependent, and rolling back if the speculation turns out to be wrong
Speculative decomposition conservative approach:
Scheduling a task only once it is guaranteed to have no unresolved dependencies
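A minimal sketch of the optimistic approach: run a task against a checkpointed copy of the state and roll back if a conflict is detected afterwards. All names here (`speculate`, the conflict predicate) are hypothetical.

```python
def speculate(task, state, conflict):
    snapshot = dict(state)       # checkpoint so the speculative work can be undone
    task(state)                  # execute before dependencies are confirmed
    if conflict(state):          # a real dependency existed after all
        state.clear()
        state.update(snapshot)   # roll back to the checkpoint
        return False
    return True

state = {"x": 0}
ok = speculate(lambda s: s.update(x=s["x"] + 1), state, lambda s: s["x"] > 10)
print(ok, state)
```

The conservative approach would instead never call `task` until the scheduler has proven no conflict can occur, trading lost parallelism for never having to roll back.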
Characteristics of tasks:
Task sizes:
Task size:
Amount of work and time a task takes
Data size examples with the input, output, and computation size:
Characteristics of task interactions:
Orthogonal classification of tasks:
Goal of mapping:
Factors that determine the mapping technique:
Execution:
Alternating computation and interaction
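The alternation can be illustrated with a toy two-process step: each "process" first computes on its local data, then the processes exchange boundary values (the data and the swap rule are invented for illustration):

```python
local = [[1, 2], [3, 4]]   # two "processes", each holding local data

def step(local):
    # Computation phase: every process updates its own elements.
    computed = [[x + 1 for x in part] for part in local]
    # Interaction phase: neighbors exchange their boundary elements.
    computed[0][-1], computed[1][0] = computed[1][0], computed[0][-1]
    return computed

print(step(local))
```

Real programs repeat this compute/interact cycle many times, which is why interaction overhead matters so much for the choice of mapping.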
Schemes for static mapping:
Hypercube:
N-dimensional analogue of a square and a cube; the binary labels of adjacent nodes differ in exactly 1 bit
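Since adjacent nodes differ in exactly one bit, the neighbors of a node in a d-dimensional hypercube are found by flipping each of its d bits in turn:

```python
def neighbors(node, d):
    """Neighbors of `node` in a d-dimensional hypercube: flip each bit once."""
    return [node ^ (1 << bit) for bit in range(d)]

# Node 0b101 (= 5) in a 3-cube is adjacent to 0b100, 0b111, 0b001.
print(neighbors(0b101, 3))
```

Each node thus has exactly d neighbors, and any two of the 2^d nodes are at most d hops apart.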
Hierarchical mapping:
Task graph mapping at the top level and data partitioning within each lower level
Schemes for dynamic mapping:
Chunk scheduling:
A process picks up multiple tasks at once
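Chunk scheduling can be sketched with a shared queue from which idle workers grab a whole chunk of tasks at a time, amortizing the cost of accessing the queue. The chunk size, task list, and squaring "work" below are all made up for illustration.

```python
from queue import Queue, Empty
from threading import Thread

CHUNK = 4
tasks = list(range(20))            # 20 independent task ids
work_queue = Queue()
for i in range(0, len(tasks), CHUNK):
    work_queue.put(tasks[i:i + CHUNK])   # enqueue one chunk per entry

results = []

def worker():
    while True:
        try:
            chunk = work_queue.get_nowait()   # pick up several tasks at once
        except Empty:
            return
        results.extend(t * t for t in chunk)  # "compute" each task in the chunk

threads = [Thread(target=worker) for _ in range(2)]
for th in threads:
    th.start()
for th in threads:
    th.join()
print(sorted(results))
```

Larger chunks mean fewer queue accesses but worse load balance near the end of the run; a chunk size of 1 degenerates to plain self-scheduling.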