Computer Architecture Cheatsheet
A concise reference to key concepts in computer architecture, covering instruction sets, memory hierarchies, pipelining, and parallel processing techniques.
Instruction Set Architecture (ISA)
Instruction Formats
Zero-Address Instructions | Uses a stack architecture; operands are taken implicitly from the top of the stack.
One-Address Instructions | Uses an accumulator as an implicit operand and destination.
Two-Address Instructions | Two explicit operands; one serves as both a source and the destination.
Three-Address Instructions | Three explicit operands: two sources, one destination.
RISC vs CISC | RISC (Reduced Instruction Set Computing) uses a small set of simple, fixed-length instructions; CISC (Complex Instruction Set Computing) uses a larger set of more complex, often variable-length instructions.
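To make the zero-address (stack) format concrete, here is a minimal sketch of a stack machine evaluating a = (b + c) * d. The instruction names (PUSH, ADD, MUL, POP) and the memory model are illustrative, not taken from any particular ISA.

```python
# Minimal stack-machine sketch: zero-address instructions take their
# operands implicitly from the top of the stack.
# Instruction names (PUSH/ADD/MUL/POP) are illustrative only.

def run(program, memory):
    stack = []
    for op, *args in program:
        if op == "PUSH":              # push memory[addr] onto the stack
            stack.append(memory[args[0]])
        elif op == "ADD":             # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":             # pop two values, push their product
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "POP":             # pop the result into memory[addr]
            memory[args[0]] = stack.pop()
    return memory

# a = (b + c) * d, with b=2, c=3, d=4
mem = {"a": 0, "b": 2, "c": 3, "d": 4}
prog = [("PUSH", "b"), ("PUSH", "c"), ("ADD",),
        ("PUSH", "d"), ("MUL",), ("POP", "a")]
print(run(prog, mem)["a"])  # 20
```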
Addressing Modes
Immediate Addressing | The operand is a constant value encoded directly in the instruction.
Direct Addressing | The instruction contains the memory address of the operand.
Indirect Addressing | The instruction points to a memory location that holds the address of the operand.
Register Addressing | The operand is held in a register.
Register Indirect Addressing | A register contains the memory address of the operand.
Displacement Addressing | The effective address is the sum of a register's contents and a constant offset.
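A small Python sketch can mimic each mode above with a word-addressed memory list and a register dictionary; the names (mem, regs) and the mode functions are illustrative, not part of any real ISA.

```python
# Toy model of addressing modes: memory is a list of words, registers a dict.
mem = [0] * 16
regs = {"R1": 5, "R2": 8}
mem[5] = 42        # word at address 5
mem[8] = 5         # word at address 8 holds a pointer to address 5

def immediate(value): return value                  # operand is the constant itself
def direct(addr): return mem[addr]                  # instruction holds the operand's address
def indirect(addr): return mem[mem[addr]]           # memory holds the address of the operand
def register(r): return regs[r]                     # operand is in a register
def register_indirect(r): return mem[regs[r]]       # register holds the operand's address
def displacement(r, offset): return mem[regs[r] + offset]  # base register + constant offset

print(immediate(7))              # 7
print(direct(5))                 # 42
print(indirect(8))               # 42  (mem[8] == 5, mem[5] == 42)
print(register("R1"))            # 5
print(register_indirect("R1"))   # 42
print(displacement("R1", 0))     # 42
```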
Memory Hierarchy
Levels of Memory Hierarchy
The memory hierarchy is organized to provide a cost-effective balance between speed and size: registers (fastest, smallest) → cache → main memory → secondary storage (slowest, largest).
Cache Memory
Cache Hit | The requested data is found in the cache.
Cache Miss | The requested data is not in the cache, requiring an access to main memory (or the next cache level).
Cache Line | The fixed-size block of data transferred between the cache and main memory.
Cache Mapping Techniques | Direct Mapping, Fully Associative Mapping, Set-Associative Mapping.
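The sketch below simulates a tiny direct-mapped cache to show how an address maps to a line and when hits and misses occur; the line size, line count, and address trace are made-up illustrative values.

```python
# Tiny direct-mapped cache simulation (illustrative sizes).
LINE_SIZE = 4      # bytes per cache line
NUM_LINES = 8      # number of lines in the cache

cache_tags = [None] * NUM_LINES    # tag stored in each line, None = empty

def access(addr):
    block = addr // LINE_SIZE      # which memory block the address belongs to
    index = block % NUM_LINES      # direct mapping: each block maps to one fixed line
    tag = block // NUM_LINES       # remaining high-order bits identify the block
    if cache_tags[index] == tag:
        return "hit"
    cache_tags[index] = tag        # on a miss, fetch the block and replace the line
    return "miss"

trace = [0, 1, 4, 32, 0, 4]        # byte addresses
for a in trace:
    print(a, access(a))
# 0 -> miss, 1 -> hit (same line as 0), 4 -> miss,
# 32 -> miss (maps to the same line as 0), 0 -> miss (conflict: evicted by 32), 4 -> hit
```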
Cache Replacement Policies
LRU (Least Recently Used) | Evicts the line that has not been accessed for the longest time (see the sketch below).
FIFO (First-In, First-Out) | Evicts the line that has been resident in the cache the longest, regardless of how recently it was used.
LFU (Least Frequently Used) | Evicts the line with the fewest accesses.
Random Replacement | Selects a line to evict at random.
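As a minimal sketch of the LRU policy, here is a fully associative cache model built on Python's OrderedDict; the capacity and the access trace are arbitrary.

```python
from collections import OrderedDict

# Minimal LRU sketch for a fully associative cache with `capacity` lines.
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()          # key = block address, order = recency

    def access(self, block):
        if block in self.lines:
            self.lines.move_to_end(block)   # mark as most recently used
            return "hit"
        if len(self.lines) >= self.capacity:
            self.lines.popitem(last=False)  # evict the least recently used line
        self.lines[block] = True
        return "miss"

cache = LRUCache(capacity=2)
for block in [1, 2, 1, 3, 2]:
    print(block, cache.access(block))
# 1 miss, 2 miss, 1 hit, 3 miss (evicts 2), 2 miss (was evicted)
```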
Pipelining
Basic Pipeline Concepts
Pipelining divides instruction execution into stages (e.g., fetch, decode, execute, memory access, write-back) and overlaps the execution of multiple instructions, improving throughput without shortening the latency of any single instruction (see the space-time diagram sketch below).
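The following sketch prints a simple space-time diagram for an idealized 5-stage pipeline with one-cycle stages and no stalls; the stage names and instruction count are illustrative.

```python
# Space-time diagram for an idealized 5-stage pipeline, no stalls.
# F=Fetch, D=Decode, X=Execute, M=Memory, W=Write-back.
STAGES = ["F", "D", "X", "M", "W"]

def diagram(num_instructions):
    total_cycles = len(STAGES) + num_instructions - 1
    for i in range(num_instructions):
        row = ["."] * total_cycles
        for s, stage in enumerate(STAGES):
            row[i + s] = stage            # instruction i enters stage s in cycle i + s
        print(f"I{i}: " + " ".join(row))

diagram(4)
# I0: F D X M W . . .
# I1: . F D X M W . .
# I2: . . F D X M W .
# I3: . . . F D X M W
```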
Pipeline Hazards
Data Hazard | An instruction depends on the result of a previous instruction that is still in the pipeline (e.g., a read-after-write dependency).
Control Hazard (Branch Hazard) | The pipeline does not know which instruction to fetch next because the outcome of a branch instruction is not yet known.
Structural Hazard | Two instructions need the same hardware resource in the same cycle.
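As a concrete illustration of a data hazard, this sketch checks adjacent instructions for a read-after-write (RAW) dependency; the instruction tuples and register names are hypothetical.

```python
# Detect read-after-write (RAW) hazards between adjacent instructions.
# Each instruction is (destination_register, source_registers); names are illustrative.
program = [
    ("R1", ("R2", "R3")),   # R1 = R2 + R3
    ("R4", ("R1", "R5")),   # R4 = R1 + R5  -> reads R1 written by the previous instruction
    ("R6", ("R2", "R7")),   # R6 = R2 + R7  -> no dependency on the previous instruction
]

for prev, curr in zip(program, program[1:]):
    dest, _ = prev
    _, sources = curr
    if dest in sources:
        print(f"RAW hazard: next instruction reads {dest} before it is written back")
    else:
        print("no hazard between this pair")
```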
Pipeline Performance Metrics
Throughput | Number of instructions completed per unit of time.
Latency | Time taken to execute a single instruction from start to finish.
Speedup | Ratio of execution time without pipelining to execution time with pipelining (see the worked example below).
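A worked example of all three metrics under the usual idealized assumptions (k one-cycle stages, n instructions, no stalls); the stage count, instruction count, and cycle time are illustrative.

```python
# Pipeline performance metrics under idealized assumptions (no stalls).
k = 5               # pipeline stages (illustrative)
n = 100             # instructions (illustrative)
cycle_time = 1e-9   # 1 ns per stage (illustrative)

time_unpipelined = n * k * cycle_time          # each instruction occupies the datapath alone
time_pipelined = (k + n - 1) * cycle_time      # fill the pipeline, then one completion per cycle

latency = k * cycle_time                       # time for one instruction, start to finish
throughput = n / time_pipelined                # instructions completed per second
speedup = time_unpipelined / time_pipelined    # approaches k as n grows

print(f"latency    = {latency:.1e} s")
print(f"throughput = {throughput:.2e} instr/s")
print(f"speedup    = {speedup:.2f}")           # 500 / 104 ≈ 4.81
```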
Parallel Processing
Parallelism Types
Instruction-Level Parallelism (ILP) | Executing multiple instructions simultaneously within a single processor (e.g., superscalar or out-of-order execution).
Data-Level Parallelism (DLP) | Performing the same operation on many data elements simultaneously.
Thread-Level Parallelism (TLP) | Executing multiple threads simultaneously on multiple processors or cores.
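To contrast DLP and TLP concretely, here is a small Python sketch: the same operation applied across a data set (DLP in spirit), and two independent tasks run on separate threads (TLP in spirit); the data and task function are made up.

```python
from concurrent.futures import ThreadPoolExecutor

# Data-level parallelism: the same operation applied to every element of a data set.
data = list(range(8))
doubled = [x * 2 for x in data]      # conceptually one operation over all elements (SIMD-style)
print(doubled)

# Thread-level parallelism: independent tasks run concurrently on separate threads.
# (In CPython the GIL limits true CPU parallelism; the threads still illustrate the model.)
def task(name, values):
    return f"{name}: sum = {sum(values)}"

with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(task, "even", data[::2]),
               pool.submit(task, "odd", data[1::2])]
    for f in futures:
        print(f.result())
```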
Multiprocessor Architectures
Shared Memory Multiprocessors | All processors share a common address space and communicate through ordinary loads and stores.
Distributed Memory Multiprocessors | Each processor has its own private memory; processors communicate via message passing.
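As a rough programming-model analogy (not a hardware description), the sketch below contrasts threads sharing one address space with processes exchanging messages through a queue; the counter and worker function are illustrative.

```python
import threading
from multiprocessing import Process, Queue

shared = {"counter": 0}
lock = threading.Lock()

def bump():
    with lock:                       # shared state needs synchronization
        shared["counter"] += 1

def worker(q, value):
    q.put(value * value)             # no shared state: results travel as messages

if __name__ == "__main__":
    # Shared-memory style: threads read and write the same data structure directly.
    threads = [threading.Thread(target=bump) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("shared counter:", shared["counter"])   # 4

    # Message-passing style: separate processes communicate only through a queue.
    q = Queue()
    procs = [Process(target=worker, args=(q, v)) for v in (2, 3)]
    for p in procs:
        p.start()
    results = sorted(q.get() for _ in procs)      # receive one message per worker
    for p in procs:
        p.join()
    print("messages received:", results)          # [4, 9]
```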
Flynn's Taxonomy
SISD (Single Instruction, Single Data) | Traditional sequential computers.
SIMD (Single Instruction, Multiple Data) | Vector processors, GPUs.
MISD (Multiple Instruction, Single Data) | Rarely used in practice.
MIMD (Multiple Instruction, Multiple Data) | Multiprocessors and multicomputers.