The Memory Management Glossary
M

Our aim is for these entries to be accurate, comprehensible, and useful, and also to have an entry for all common memory management terms. If you can't find the term you're looking for, if our definition doesn't help you, or if you'd like to suggest corrections or additions, please let us know via our feedback page.

For an explanation of the structure of the entries, and information on how to link to definitions, please see the glossary help page.


machine word (for full details, see word)

Almost all processor architectures have a characteristic data size that is handled most efficiently. This is known as the word size, and data of that size are known as words. The word size is usually a power-of-two multiple of bytes(2).

main memory (also known as memory(3), primary storage)

The main memory (or primary storage) of a computer is memory(1) that is wired directly to the processor, consisting of RAM and possibly ROM.

These terms are used in contrast to mass storage devices and cache memory (although we may note that when a program accesses main memory, it is often actually interacting with a cache).

Main memory is the middle level of the memory hierarchy: it is slower and cheaper than caches(1), but faster and more expensive than backing store.

It is common to refer only to the main memory of a computer; for example, "This box has 16 MB of memory" and "Word for Windows® requires 32 MB".

Historical note: Main memory used to be called core; it is now often called RAM.

Similar terms: RAM; core; physical memory(1).

malloc

A function in the standard C library that performs dynamic allocation of memory(2).

Many people use "malloc" as a verb to mean "allocate dynamically".

Similar terms: allocate.
Opposites: free(2).
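
For example, a minimal C program that allocates a block dynamically, checks for failure, and releases it with free:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        double *samples = malloc(100 * sizeof *samples);
        if (samples == NULL)          /* malloc returns NULL on failure */
            return EXIT_FAILURE;

        samples[0] = 3.14;            /* use the block like any array */
        printf("%f\n", samples[0]);

        free(samples);                /* release it when no longer needed */
        return EXIT_SUCCESS;
    }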

manual memory management

In some systems or languages, it is up to the application program to manage all the bookkeeping details of allocating memory(2) from the heap and freeing it when no longer required; this is known as manual memory management.

Manual memory management may be appropriate for small programs, but it does not scale well in general, nor does it encourage modular or object-oriented programming.

To quote Ian Joyner's C++??: A Critique of C++:

This is the most difficult bookkeeping task C++ programmers face, that leads to two opposite problems: firstly, an object can be deallocated prematurely, while valid references still exist (dangling pointers); secondly, dead objects might not be deallocated, leading to memory filling up with dead objects (memory leaks). Attempts to correct either problem can lead to overcompensation and the opposite problem occurring. A correct system is a fine balance.
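
Both problems are easy to demonstrate in C. In this minimal sketch (the function and variable names are illustrative only), the first function leaves a dangling pointer and the second leaks:

    #include <stdlib.h>

    void premature_deallocation(void)
    {
        char *owner = malloc(16);
        char *alias = owner;      /* a second, still-valid reference */
        free(owner);              /* deallocated prematurely... */
        /* alias now dangles; any use of it, e.g. alias[0] = 'x',
           is undefined behavior */
        (void)alias;
    }

    void memory_leak(void)
    {
        char *block = malloc(16);
        block = NULL;             /* the only reference is overwritten:
                                     the block is dead but never freed */
        (void)block;
    }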

Historical note: Manual memory management was common in early languages, but garbage collection has been around since the late 1950s, in languages like Lisp. Most modern languages use automatic memory management, and some older languages have conservative garbage collection extensions.

Opposites: automatic memory management.

mapped (also known as committed)

A range of virtual addresses is said to be mapped (committed on Windows®) if there is physical memory(2) associated with the range.

Note that, in some circumstances, the virtual memory(1) system could actually overcommit mapped memory.

Opposites: unmapped.
See also: mapping; memory mapping; mmap.

mapping

A mapping is a correspondence between a range of virtual addresses and some memory(1) (or a memory-mapped object). The physical location of the memory will be managed by the virtual memory(1) system.

Each page in a mapping could be paged out or paged in, and the locations it occupies in main memory and/or swap space might change over time.

The virtual address space can contain a complex set of mappings. Typically, parts of the address space are mapped (have a mapping assigned), others are reserved but unmapped, and most of it is entirely unmapped.

Diagram: Virtual memory with different kinds of mappings

See also: backing store.

mark-compact

Mark-compact collection is a kind of tracing garbage collection that operates by marking reachable objects, then compacting the marked objects (which must include all the live objects).

The mark phase follows reference chains to mark all reachable objects; the compaction phase typically performs a number of sequential passes over memory(2) to move objects and update references. As a result of compaction, all the marked objects are moved into a single contiguous block of memory (or a small number of such blocks); the memory left unused after compaction is recycled.

Mark-compact collection can be regarded as a variation of mark-sweep collection, with extra effort spent to eliminate the resulting fragmentation. Compaction also allows the use of more efficient allocation mechanisms, by making large free blocks available.
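
A minimal sketch of the compaction passes, assuming a contiguous heap of variable-sized objects, each beginning with a header that carries its size, a mark bit set by the mark phase, and room for a forwarding pointer (all names and the layout are illustrative; this is the three-pass "sliding" scheme often called the LISP 2 algorithm):

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    typedef struct Header {
        size_t size;       /* object size in bytes, header included */
        bool   marked;     /* set by the mark phase */
        char  *forward;    /* new address, assigned in pass 1 */
    } Header;

    extern char *heap_start, *heap_end;   /* bounds of the heap */

    #define HDR(p) ((Header *)(p))

    /* Pass 1: scan in address order, giving each marked object its
       post-compaction address. */
    static void compute_forwarding(void)
    {
        char *to = heap_start;
        for (char *p = heap_start; p < heap_end; p += HDR(p)->size) {
            if (HDR(p)->marked) {
                HDR(p)->forward = to;
                to += HDR(p)->size;
            }
        }
    }

    /* Pass 2 (not shown): visit every reference in the roots and in
       the marked objects, replacing each pointer q by HDR(q)->forward. */

    /* Pass 3: slide each marked object down to its new address. The
       size is saved before the move because source and destination
       may overlap. */
    static void relocate(void)
    {
        char *p = heap_start;
        while (p < heap_end) {
            Header *h = HDR(p);
            size_t size = h->size;
            if (h->marked) {
                h->marked = false;        /* reset for the next cycle */
                memmove(h->forward, p, size);
            }
            p += size;
        }
    }

After pass 3, all surviving objects occupy one contiguous block at the bottom of the heap, and everything above the last forwarding address is recycled as a single large free block.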

mark-sweep, mark-and-sweep

Mark-sweep collection is a kind of tracing garbage collection that operates by marking reachable objects, then sweeping over memory(2) and recycling objects that are unmarked (which must be unreachable), putting them on a free list.

The mark phase follows reference chains to mark all reachable objects; the sweep phase performs a sequential (address-order) pass over memory to recycle all unmarked objects. A mark-sweep collector(1) doesn't move objects.
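
A toy sketch of the two phases in C, assuming each object carries its own mark bit and is threaded onto a list of all allocations (the layout and names are illustrative only; recycled objects are returned to the C allocator here for brevity, rather than put on the collector's own free list):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdlib.h>

    typedef struct Object {
        bool marked;
        struct Object *ref[2];    /* outgoing references (may be NULL) */
        struct Object *next;      /* threads every allocated object */
    } Object;

    static Object *all_objects;   /* maintained by the allocator (not shown) */

    /* Mark phase: follow reference chains from a root. A production
       collector would use an explicit mark stack rather than deep
       recursion. */
    static void mark(Object *obj)
    {
        if (obj == NULL || obj->marked)
            return;
        obj->marked = true;
        mark(obj->ref[0]);
        mark(obj->ref[1]);
    }

    /* Sweep phase: one pass over every allocated object; unmarked
       objects are unreachable and are recycled. No object moves. */
    static void sweep(void)
    {
        Object **link = &all_objects;
        while (*link != NULL) {
            Object *obj = *link;
            if (obj->marked) {
                obj->marked = false;   /* clear the mark for next time */
                link = &obj->next;
            } else {
                *link = obj->next;     /* unlink the dead object */
                free(obj);
            }
        }
    }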

Historical note: This was the first garbage collection algorithm, devised by John McCarthy for Lisp.

See also: mark-compact.

marking

Marking is the first phase ("the mark phase") of the mark-sweep algorithm or mark-compact algorithm. It follows all references from a set of roots to mark all the reachable objects.

Marking follows reference chains and makes some sort of mark for each object it reaches.

Marking is often achieved by setting a bit in the object, though any conservative representation of a predicate on the location of the object can be used. Note, however, that storing the mark bits within the objects themselves can lead to poor locality of reference.
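
For instance, marks can be kept in a side bitmap, away from the objects themselves, so the sweep reads the marks sequentially without touching (and possibly paging in) dead objects. A minimal sketch, with all sizes assumed:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define HEAP_SIZE (1u << 20)   /* a 1 megabyte heap (assumed) */
    #define GRAIN     16           /* smallest object alignment (assumed) */

    extern uintptr_t heap_base;    /* start address of the heap */
    static uint8_t mark_bits[HEAP_SIZE / GRAIN / 8];

    static size_t grain_of(void *p)
    {
        return ((uintptr_t)p - heap_base) / GRAIN;
    }

    static void set_mark(void *p)
    {
        mark_bits[grain_of(p) / 8] |= (uint8_t)(1u << (grain_of(p) % 8));
    }

    static bool is_marked(void *p)
    {
        return (mark_bits[grain_of(p) / 8] >> (grain_of(p) % 8)) & 1;
    }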

See also: sweep; compact.

MB (for full details, see megabyte)

A megabyte is 1024 kilobytes, or 1048576 bytes(1).

megabyte (also known as MB)

A megabyte is 1024 kilobytes, or 1048576 bytes(1).

See byte(1) for general information on this and related quantities.

memoization (for full details, see caching(3))

Caching is a heuristic that stores answers to questions asked in the past in a cache or a table, in order that they may be more quickly answered in the future. This process is also called memoization and tabling (by the Prolog community).
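
The classic example is memoizing a recursive function. Here, Fibonacci numbers in C, where a zero entry marks a question not yet answered (the table bound of 93 comes from the 64-bit range):

    #include <stdint.h>

    static uint64_t table[94];    /* fib(93) is the largest Fibonacci
                                     number that fits in 64 bits */

    uint64_t fib(unsigned n)
    {
        if (n < 2)
            return n;             /* base cases, never cached */
        if (n > 93)
            return 0;             /* would overflow uint64_t */
        if (table[n] == 0)        /* miss: compute and remember */
            table[n] = fib(n - 1) + fib(n - 2);
        return table[n];          /* hit: answered from the table */
    }

Without the table, the naive recursion recomputes the same answers exponentially often; with it, each value is computed exactly once.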

memory(1) (also known as storage, store(2))

Memory or storage (or store) is where data and instructions are stored. For example, caches(1), main memory, floppy and hard disks are all storage devices.

These terms are also used for the capacity of a system to store data, and may be applied to the sum total of all the storage devices attached to a computer.

Historical note: "Store" is old-fashioned, but survives in expressions such as "backing store".

memory(2)

Memory refers to storage that can be accessed by the processor directly (using memory addressing instructions).

This could be real memory(1) or virtual memory(1).

memory(3) (for full details, see main memory)

The main memory (or primary storage) of a computer is memory(1) that is wired directly to the processor, consisting of RAM and possibly ROM.

memory(4)

A memory location; for example, "My watch has 256 memories."

memory bandwidth

Memory bandwidth (by analogy with the term bandwidth from communication theory) is a measure of how quickly information (expressed in terms of bits) can be transferred between two places in a computer system.

Often the term is applied to a measure of how quickly the processor can obtain information from the main memory (for example, "My new bus design has a bandwidth of over 400 Megabytes per second").

memory cache (for full details, see cache(1))

A processor's memory cache is a small piece of fast, but more expensive memory, usually static memory(1), used for copies of parts of main memory. The cache is automatically used by the processor for fast access to any data currently resident there. Access to the cache typically takes only a few processor clock cycles, whereas access to main memory may take tens or even hundreds of cycles.

memory hierarchy (for full details, see storage hierarchy)

A typical computer has several different levels of storage. Each level of storage has a different speed, cost, and size. The levels form a storage hierarchy, in which the topmost levels (those nearest the processor) are fastest, most expensive and smallest.

memory leak, space-leak (also known as leak, space leak)

A memory leak occurs when allocated memory(2) is not freed although it is never used again.

In manual memory management, this usually occurs because objects become unreachable without being freed.

In tracing garbage collection, this happens when objects are reachable but not live.

In reference counting, this happens when objects are referenced but not live. (Such objects may or may not be reachable.)

Repeated memory leaks cause the memory usage of a process to grow without bound.

memory location (also known as location)

Each separately-addressable unit of memory(2) in which data can be stored is called a memory location. Usually, these hold a byte(2), but the term can refer to words.

memory management (also known as storage management)

Memory management is the art and the process of coordinating and controlling the use of memory(1) in a computer system.

Memory management can be divided into three areas:

  1. Memory management hardware (MMUs, RAM, etc.);
  2. Operating system memory management (virtual memory(1), protection);
  3. Application memory management (allocation, deallocation, garbage collection).

Memory management hardware consists of the electronic devices and associated circuitry that store the state of a computer. These devices include RAM, MMUs (memory management units), caches(1), disks, and processor registers. The design of memory hardware is critical to the performance of modern computer systems. In fact, memory bandwidth is perhaps the main limiting factor on system performance.

Operating system memory management is concerned with using the memory management hardware to manage the resources of the storage hierarchy and allocating them to the various activities running on a computer. The most significant part of this on many systems is virtual memory(1), which creates the illusion that every process has more memory than is actually available. OS memory management is also concerned with memory protection and security, which help to maintain the integrity of the operating system against accidental damage or deliberate attack. It also protects user programs from errors in other programs.

Application memory management involves obtaining memory(2) from the operating system, and managing its use by an application program. Application programs have dynamically changing storage requirements. The application memory manager must cope with this while minimizing the total CPU overhead, interactive pause times, and the total memory used.

While the operating system may create the illusion of nearly infinite memory, it is a complex task to manage application memory so that the application can run most efficiently. Ideally, these problems should be solved by tried and tested tools, tuned to a specific application.

The Memory Management Reference is mostly concerned with application memory management.

See also: automatic memory management; manual memory management.
Other links: Beginner's Guide.

Memory Management Unit (for full details, see MMU)

The MMU (Memory Management Unit) is a hardware device responsible for handling memory(2) accesses requested by the main processor.

memory manager

The memory manager is that part of the system that manages memory(2), servicing allocation requests and recycling memory, either manually or automatically.

The memory manager can have a significant effect on the efficiency of the program; it is not unusual for a program to spend 20% of its time managing memory.

Similar terms: allocator; collector(1).
See also: memory management.

memory mapping (also known as file mapping)

Memory mapping is the technique of making a part of the address space appear to contain an "object", such as a file or device, so that ordinary memory(2) accesses act on that object.

The object is said to be mapped to that range of addresses. (The term "object" does not mean a program object. It comes from UNIX® terminology on the mmap(2) man page.)

Diagram: An address space with a range mapped to part of an object

Memory mapping uses the same mechanism as virtual memory(1) to "trap" accesses to parts of the address space, so that data from the file or device can be paged in (and other parts paged out) before the access is completed.

Historical note: File mapping is available on most modern UNIX® systems, and also on recent versions of the Windows® operating system such as Windows 95® and Windows NT®. However, it has a much longer history. In Multics, it was the primary way of accessing files.

See also: mapped.

memory protection (for full details, see protection)

Many operating systems support protection of memory(2) pages. Individual pages may be protected against a combination of read, write or execute accesses by a process.

misaligned (for full details, see unaligned)

An address is unaligned or misaligned if it does not comply with some alignment constraint on it.

miss

A miss is a lookup failure in any form of cache(3), most commonly at some level of a storage hierarchy, such as a cache(1) or virtual memory(1) system.

The cost of a miss in a virtual memory system is considerable: it may be five orders of magnitude more costly than a hit. In some systems, such as multi-process operating systems, other work may be done while a miss is serviced.

Opposites: hit.
See also: miss rate.

miss rate

At any level of a storage hierarchy, the miss rate is the proportion of accesses which miss.

Because misses are very costly, each level is designed to minimize the miss rate. For instance, in caches(1), miss rates of about 0.01 may be acceptable, whereas in virtual memory(1) systems, acceptable miss rates are much lower (say 0.00005). If a system has a miss rate which is too high, it will spend most of its time servicing the misses, and is said to thrash.
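
The arithmetic behind this is simple; with assumed round-number latencies:

    /* Average access time: a hit costs hit_time cycles, and a miss
       costs the penalty on top of that. */
    double effective_access_time(double hit_time, double miss_rate,
                                 double miss_penalty)
    {
        return hit_time + miss_rate * miss_penalty;
    }

    /* For a 2-cycle cache with a 100-cycle miss penalty, a miss rate
       of 0.01 gives 2 + 0.01 * 100 = 3 cycles on average; at 0.5 it
       would be 52 cycles, dominated by servicing misses. */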

Miss rates may also be given as a number of misses per unit time, or per instruction.

Opposites: hit rate.

mmap

mmap is a system call provided on many UNIX® systems to create a mapping for a range of virtual addresses.
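
A minimal example of its use on a POSIX system, mapping the first page of a file read-only (the file name is arbitrary, and the file is assumed to exist and be non-empty):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("example.txt", O_RDONLY);
        if (fd < 0)
            return 1;

        size_t len = 4096;    /* one typical page */
        char *p = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED)
            return 1;

        /* An ordinary memory access: the virtual memory system pages
           the file's data in on first touch. */
        putchar(p[0]);

        munmap(p, len);
        close(fd);
        return 0;
    }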

MMU (also known as Memory Management Unit)

The MMU (Memory Management Unit) is a hardware device responsible for handling memory(2) accesses requested by the main processor.

This typically involves translation of virtual addresses to physical addresses, cache(1) control, bus arbitration, memory protection, and the generation of various exceptions. Not all processors have an MMU.

See also: virtual memory(1); page fault; segmentation violation.

mostly-copying garbage collection, mostly copying garbage collection

A type of semi-conservative tracing garbage collection which permits objects to move if no ambiguous references point to them.

The techniques used are a hybrid of copying garbage collection and mark-sweep.

Mostly-copying garbage collectors share many of the benefits of copying collectors, including compaction. Since they support ambiguous references, they are additionally suitable for use with uncooperative compilers, and may be an efficient choice for multi-threaded systems.

mostly-exact garbage collection (for full details, see semi-conservative garbage collection)

A variant of conservative garbage collection which deals with exact references as well as ambiguous references.

mostly-precise garbage collection (for full details, see semi-conservative garbage collection)

A variant of conservative garbage collection which deals with exact references as well as ambiguous references.

moving garbage collector (also known as moving memory manager)

A memory manager (often a garbage collector) is said to be moving if allocated objects can move during their lifetimes.

Relevance to memory management: In the garbage collecting world this will apply to copying collectors and to mark-compact collectors. It may also refer to replicating collectors.

Similar terms: copying garbage collection.

moving memory manager (for full details, see moving garbage collector)

A memory manager (often a garbage collector) is said to be moving if allocated objects can move during their lifetimes.

mutable

Any object which may be changed by a program is mutable.

Opposites: immutable.

mutator

In a garbage-collected system, the mutator is the part that executes the user code: it allocates objects and modifies, or mutates, them.

For purposes of describing incremental garbage collection, the system is divided into the mutator and the collector(2). These can be separate threads of computation, or interleaved within the same thread.

The user code issues allocation requests, but the allocator code is usually considered part of the collector. Indeed, one of the major ways of scheduling the other work of the collector is to perform a little of it at every allocation.

While the mutator mutates, it implicitly frees storage by overwriting references.

Historical note: This term is due to Dijkstra et al.

Opposites: collector(2).
