Category: Analysis of parallel algorithms

Parallel RAM
In computer science, a parallel random-access machine (parallel RAM or PRAM) is a shared-memory abstract machine. As its name indicates, the PRAM is intended as the parallel-computing analogy to the random-access machine (RAM).
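A textbook PRAM example is summing n numbers in O(log n) time by halving the number of active processors each round. The C/OpenMP sketch below is only an approximation of that idea, assuming a power-of-two array size; OpenMP threads stand in for the PRAM's synchronous processors, with the implicit barrier after each parallel loop playing the role of the model's lockstep rounds.

```c
/* PRAM-style logarithmic sum reduction, approximated with OpenMP. */
#include <stdio.h>

#define N 16  /* assumed power of two for this sketch */

int main(void) {
    double a[N];
    for (int i = 0; i < N; i++)
        a[i] = i + 1.0;               /* 1 + 2 + ... + 16 = 136 */

    /* Each round, "processor" i adds a[i + stride] into a[i];
       the number of rounds is log2(N), as on an EREW PRAM. */
    for (int stride = 1; stride < N; stride *= 2) {
        #pragma omp parallel for
        for (int i = 0; i < N; i += 2 * stride)
            a[i] += a[i + stride];
        /* the implicit barrier here emulates the PRAM's synchronous step */
    }

    printf("sum = %g\n", a[0]);       /* prints 136 */
    return 0;
}
```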
Granularity (parallel computing)
In parallel computing, granularity (or grain size) of a task is a measure of the amount of work (or computation) which is performed by that task. Another definition of granularity takes into account the communication overhead between multiple processors or processing elements.
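One common way to make this quantitative, assuming T_comp denotes a task's computation time and T_comm its communication time, is the computation-to-communication ratio

\[ G = \frac{T_{\mathrm{comp}}}{T_{\mathrm{comm}}}, \]

where small G indicates fine-grained tasks (little work between communication events) and large G indicates coarse-grained ones.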
Karp–Flatt metric
The Karp–Flatt metric is a measure of parallelization of code in parallel processor systems. This metric exists in addition to Amdahl's law and Gustafson's law as an indication of the extent to which a particular computer code is parallelized.
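Concretely, given a measured speedup \(\psi\) on p processors, the metric yields the experimentally determined serial fraction

\[ e = \frac{\frac{1}{\psi} - \frac{1}{p}}{1 - \frac{1}{p}}. \]

For example, a speedup of 4 on 8 processors gives e = (1/4 - 1/8)/(1 - 1/8) ≈ 0.14, i.e. about 14% of the execution behaves as if it were serial.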
Speedup
In computer architecture, speedup is a number that measures the relative performance of two systems processing the same problem. More technically, it is the improvement in speed of execution of a task executed on two similar architectures with different resources.
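Speedup is the ratio of execution times; for a parallel program on p processors it is conventionally written against the single-processor time, with efficiency as the speedup per processor:

\[ S = \frac{T_{\mathrm{old}}}{T_{\mathrm{new}}}, \qquad S_p = \frac{T_1}{T_p}, \qquad E_p = \frac{S_p}{p}. \]

For example, a task taking 120 s sequentially and 40 s on 4 processors has S_4 = 3 and E_4 = 0.75.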
Data dependency
A data dependency in computer science is a situation in which a program statement (instruction) refers to the data of a preceding statement. In compiler theory, the technique used to discover data dependencies among statements (or instructions) is called dependence analysis.
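The three classic kinds of data dependency are flow (read-after-write), anti (write-after-read), and output (write-after-write). A minimal C sketch, with illustrative variable names:

```c
/* The three classic data dependencies between statements. */
#include <stdio.h>

int main(void) {
    int a, b, c;

    a = 3;         /* S1 */
    b = a + 1;     /* S2: flow (RAW) dependence on S1 -- reads a after S1 writes it */
    a = 7;         /* S3: anti (WAR) dependence on S2 -- writes a after S2 reads it */
    a = a * 2;     /* S4: output (WAW) dependence on S3 -- both write a */

    c = b + a;     /* depends on the values produced by S2 and S4 */
    printf("%d\n", c);  /* prints 18 */
    return 0;
}
```

None of S1-S4 can be reordered freely; dependence analysis is what tells a parallelizing compiler which reorderings preserve these constraints.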
Parallel external memory
In computer science, a parallel external memory (PEM) model is a cache-aware, external-memory abstract machine. It is the parallel-computing analogy to the single-processor external memory (EM) model.
Analysis of parallel algorithms
In computer science, the analysis of parallel algorithms is the process of finding the computational complexity of algorithms executed in parallel – the amount of time, storage, or other resources needed to execute them.
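The two central quantities are the work T_1 (total operations) and the span T_\infty (critical-path length). Brent's law then brackets the running time on p processors under a greedy scheduler:

\[ \frac{T_1}{p} \;\le\; T_p \;\le\; \frac{T_1}{p} + T_\infty, \]

so the achievable speedup is limited by the parallelism \(T_1 / T_\infty\).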
Scalable parallelism
Software is said to exhibit scalable parallelism if it can make use of additional processors to solve larger problems, i.e. this term refers to software for which Gustafson's law holds. Consider a program whose execution time is dominated by one or more loops, each of which updates every element of an array, as sketched below.
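A minimal C/OpenMP sketch of such a loop, assuming a hypothetical problem size n passed on the command line: every iteration is independent, and doubling n doubles the parallel work available, which is exactly the regime Gustafson's law describes.

```c
/* Scalable parallelism: parallel work grows with the problem size n. */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    long n = (argc > 1) ? atol(argv[1]) : 1000000;  /* illustrative default */
    double *x = malloc(n * sizeof *x);
    if (!x) return 1;

    /* Independent iterations: the loop parallelizes over any thread count. */
    #pragma omp parallel for
    for (long i = 0; i < n; i++)
        x[i] = 0.5 * (double)i;

    printf("x[n-1] = %g\n", x[n - 1]);
    free(x);
    return 0;
}
```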
Load-Hit-Store
A Load-Hit-Store, sometimes abbreviated as LHS, is a data dependency in a CPU in which a memory location that has just been the target of a store operation is loaded from. The CPU may then need to wait until the store has completed before it can perform the load, causing a pipeline stall.
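A hedged C illustration of the pattern follows; it is not tied to any particular CPU, and without the volatile qualifier an optimizing compiler would likely keep the value in a register and eliminate the round trip entirely.

```c
/* Load-hit-store pattern: store to memory, then immediately load it back.
   The volatile qualifier forces the store and load to actually happen. */
float roundtrip(volatile int *scratch, int v) {
    *scratch = v;            /* store */
    return (float)*scratch;  /* load from the same address: load-hit-store */
}

/* Keeping the value in a register avoids the memory round trip. */
float direct(int v) {
    return (float)v;
}
```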
Amdahl's law
In computer architecture, Amdahl's law (or Amdahl's argument) is a formula which gives the theoretical speedup in latency of the execution of a task at fixed workload that can be expected of a system whose resources are improved.
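In its usual form, if a fraction p of the task benefits from the improvement and that part is sped up by a factor s, the overall speedup is

\[ S(s) = \frac{1}{(1 - p) + \frac{p}{s}}. \]

For example, with p = 0.9 the speedup can never exceed 1/(1 - 0.9) = 10, no matter how large s grows.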
Parallel slowdown
Parallel slowdown is a phenomenon in parallel computing where parallelization of a parallel algorithm beyond a certain point causes the program to run slower (take more time to run to completion). Parallel slowdown is typically the result of a communications bottleneck: as more processor nodes are added, each node spends progressively more time on communication than on useful computation.
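A toy cost model shows why, assuming per-node computation W/p and a coordination overhead that grows linearly as c·p (both assumptions for illustration only):

\[ T(p) = \frac{W}{p} + c\,p, \qquad \frac{dT}{dp} = -\frac{W}{p^2} + c = 0 \;\Longrightarrow\; p^{*} = \sqrt{W/c}, \]

so beyond roughly p^{*} processors, adding more makes the program slower.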
Gustafson's law
In computer architecture, Gustafson's law (or Gustafson–Barsis's law) gives the speedup in the execution time of a task that theoretically gains from parallel computing, using a hypothetical run of the task on a single-core machine as the baseline.
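With s the serial fraction of the workload as measured on the parallel system and N the number of processors, the law reads

\[ S = s + (1 - s)\,N = N + (1 - N)\,s. \]

For example, s = 0.1 and N = 64 give S = 64 - 63 \times 0.1 = 57.7, far above the fixed-workload speedup Amdahl's law would predict for the same serial fraction.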