_ involves processing instructions one at a time, using only a single processor, without distributing tasks across multiple processors.
Serial computing
or sequential computing
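A minimal sketch (our own illustration, not from the source) of the serial model: one processor executes one instruction at a time, and each step waits for the previous one to finish.

```python
# Serial computing sketch: a single instruction stream, executed in order.
results = []
for n in range(5):              # each iteration runs only after the last completes
    results.append(n * n)       # one operation at a time, no parallelism
print(results)  # [0, 1, 4, 9, 16]
```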
_ was introduced as computer science evolved to address the slow speeds of serial computing.
Parallel computing
_ is a method where parallel programming enables computers to run processes and perform calculations simultaneously.
Parallel processing
_ is a process where large computing problems are broken down into smaller problems that multiple processors can solve simultaneously.
Parallel computing
Also known as parallel programming
Multiple processors working simultaneously on different parts of a task.
Example: the UK Met Office’s new weather-forecasting supercomputer
Real-world applications of parallel computing span diverse domains, from scientific simulations to big data analytics and high-performance computing.
Parallel computing architectures enable efficient processing and analysis of large datasets, sophisticated simulations, and complex computational tasks.
_ consist of multiple processing units, or ‘cores,’ on a single integrated circuit (IC). This structure facilitates parallel computing, which enhances performance while potentially reducing power consumption.
Multicore processors
The need for higher performance, faster response times, increased functionality, and energy efficiency has never been more pressing.
With multiple cores, a system can perform multiple tasks at once.
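A minimal sketch (an assumption for illustration, not from the source) of multiple cores working at once, using Python's `multiprocessing.Pool` to spread a hypothetical CPU-bound `square` function across worker processes:

```python
# Parallel computing sketch: a pool of worker processes splits the
# inputs across multiple cores and computes them simultaneously.
from multiprocessing import Pool

def square(n):
    # Stand-in for CPU-bound work done independently on each input
    return n * n

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        # map() distributes the inputs across the 4 worker processes
        results = pool.map(square, range(8))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```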
Parallel Computing Benefits
PARALLEL COMPUTING IS A VERSATILE TOOL APPLIED IN MANY DIFFERENT AREAS OF INDUSTRY, INCLUDING:
Parallel computing plays a vital role in addressing complex problems and enabling advancements in various fields. It provides the computational power necessary for scientific research, data analysis, machine learning, high-performance computing, and other demanding applications
Named after the Hungarian
mathematician John von Neumann
Von Neumann Architecture
The Von Neumann Architecture was named after the Hungarian mathematician _
John von Neumann
A _ computer uses the
stored-program concept
von Neumann
The CPU executes a stored program that specifies a sequence of read and write operations on memory.
The _ gets the instructions and/or data from the memory, decodes the instructions, and then sequentially performs them.
CPU
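The fetch–decode–execute cycle above can be sketched with a toy stored-program machine (our own illustration; the instruction format is hypothetical), where a single memory holds both instructions and data:

```python
# Toy von Neumann machine: instructions and data share one memory.
# Hypothetical instruction format: ("LOAD", addr), ("ADD", addr), ("HALT",)
memory = {
    0: ("LOAD", 100),   # acc = memory[100]
    1: ("ADD", 101),    # acc += memory[101]
    2: ("HALT",),
    100: 40,            # data lives in the same memory as code
    101: 2,
}

pc, acc = 0, 0          # program counter and accumulator
while True:
    instr = memory[pc]          # fetch the next instruction from memory
    op = instr[0]               # decode its opcode
    if op == "LOAD":            # execute, one instruction at a time
        acc = memory[instr[1]]
    elif op == "ADD":
        acc += memory[instr[1]]
    elif op == "HALT":
        break
    pc += 1                     # advance sequentially to the next instruction

print(acc)  # 42
```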
One of the more widely used
classifications, in use since 1966.
Flynn’s Classical Taxonomy
Distinguishes multi-processor computer architectures according to how they can be classified along two independent dimensions of instruction and data.
Flynn’s Classical Taxonomy
According to Flynn’s Classical Taxonomy, each dimension can have only one of two possible states: _ or _.
single or multiple
In Flynn’s matrix (X axis = data, Y axis = instruction), the four classifications are:
SISD | SIMD
MISD | MIMD
SISD - Single Instruction, Single Data
SIMD - Single Instruction, Multiple Data
MISD - Multiple Instruction, Single Data
MIMD - Multiple Instruction, Multiple Data
Flynn’s SISD
Pipelining can be implemented, but only one instruction will be executed at a time.
A single instruction is executed on multiple different pieces of data.
Flynn’s SIMD
Instructions can be performed sequentially, taking advantage of pipelining, or in parallel using multiple processors.
GPUs, which contain vector processors and array processors, are commonly SIMD systems.
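A conceptual SIMD sketch in plain Python (our own illustration, not real vector hardware): one logical instruction, here addition, is applied in lockstep to every element of two data vectors, as a vector processor would do in a single step.

```python
# SIMD concept: single instruction (add), multiple data elements.
def simd_add(xs, ys):
    # One logical "add" applied across all data lanes at once
    return [x + y for x, y in zip(xs, ys)]

print(simd_add([1, 2, 3, 4], [10, 20, 30, 40]))  # [11, 22, 33, 44]
```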
Multiple processors work on the same data performing different instructions at the same time.
Flynn’s MISD
Example: Space shuttle flight control system
Autonomous processors perform operations on different pieces of data, either independently or as part of shared memory.
Flynn’s MIMD
Several different instructions can be executed at the same time using different data streams.
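A minimal MIMD sketch (an illustration of the idea, not from the source): independent worker processes execute different instruction streams (`summer` and `maxer` are hypothetical names) on different data at the same time.

```python
# MIMD concept: multiple instruction streams, multiple data streams.
from multiprocessing import Process, Queue

def summer(data, out):
    out.put(("sum", sum(data)))    # one instruction stream: summing its data

def maxer(data, out):
    out.put(("max", max(data)))    # a different stream: finding the maximum

if __name__ == "__main__":
    out = Queue()
    workers = [Process(target=summer, args=([1, 2, 3], out)),
               Process(target=maxer, args=([7, 5, 9], out))]
    for w in workers:              # both processes run concurrently
        w.start()
    for w in workers:
        w.join()
    print(dict(out.get() for _ in workers))  # {'sum': 6, 'max': 9} (order may vary)
```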
Main reasons for using parallel programming
The word “distributed” in distributed computing is similar to which terms?
distributed system
distributed programming
distributed algorithm