The Memory Management Glossary
S


A B C D E F G H I J K L M N O P Q R S T U V W X Y Z - Help

Our aim is for these entries to be accurate, comprehensible, and useful, and also to have an entry for all common memory management terms. If you can't find the term you're looking for, if our definition doesn't help you, or if you'd like to suggest corrections or additions, please let us know via our feedback page.

For an explanation of the structure of the entries, and information on how to link to definitions, please see the glossary help page.


sbrk

sbrk is a UNIX® library function that adjusts the limit of the data segment; this limit is known as the break.

sbrk returns the previous value of the break, so sbrk(0) is a common idiom for getting the current value.

Note that, if you use brk, you probably can't safely use sbrk as well, because it may store the last value of the break in a private variable.

scalar data type

A scalar data type is a type that is representable in a single dimension and whose objects have only magnitude as value.

Examples of scalar data types include: integers, floating-point numbers, enumerations, and characters.

Relevance to memory management: The objects of a scalar data type are leaf objects. Scalar data types with bounded magnitude can be represented compactly using value objects.

Historical note: Because compact representation solves many memory management issues, many older programming languages only offered bounded scalar data types. For example, the int type in C is defined to have a magnitude that can be represented by a word.

See also: vector data type; algebraic data type; value object; leaf object.

scan

The examination of an object or an area of memory(2) to find references, typically as part of tracing.

Scanning examines memory that has been determined not to be garbage, in order to find references to objects that have been condemned.

scavenging garbage collection (for full details, see copying garbage collection)

Copying garbage collection is a kind of tracing garbage collection that operates by relocating reachable objects (this is sometimes called scavenging) and then reclaiming objects that are left behind, which must be unreachable and therefore dead.

SDRAM

Synchronous Dynamic Random Access Memory. A high-performance variant of DRAM.

SDRAM uses an external clock signal to synchronize its data input and output. It is capable of achieving very high data rates for linear access to memory.

segmentation violation

A segmentation violation occurs when an attempt is made to access memory(2) whose address is well-formed, but to which access cannot be granted. This might be due to either a protection fault or an invalid page fault.

The term is sometimes used more loosely as a synonym for any memory access error, including a bus error.

Similar terms: general protection fault; read fault; write fault.

segmented addressing

In segmented addressing, addresses are in two parts: a segment identifier and an offset into that segment.

Each segment has a base address and a limit. If the offset is greater than the limit, the address is invalid (see segmentation violation). Otherwise, the offset is added to the segment's base address, giving the unsegmented address. Segment identifiers may be implicit; for instance, they may be obtained from a current segment register.

Segmentation may be layered on top of virtual memory(1), in which case the unsegmented address is a virtual address, or not, in which case it is a physical address.

Note that, in segmented architectures, you can have a two-dimensional address space.

Segments are a feature of some processor architectures and operating systems. This description does not cover all possible variations on segmentation.

Historical note: Segment terminology may be used on unsegmented systems for historical reasons. For instance, UNIX processes have text segments, even when running on an unsegmented system.

Opposites: linear addressing.

segregated fit

One of the segregated free list class of allocation mechanisms. There is an array of free lists, each holding free blocks of a particular range of sizes. The allocator identifies the appropriate free list and allocates from it (often using a sequential fit mechanism such as first fit). If this fails, a larger block is taken from another list and split.

The details of the mechanism depend on the division of sizes between free lists. See exact segregated fit and strict segregated fit.

This implements a good fit allocation policy.

See also: segregated free list; allocation mechanism; free list; exact segregated fit; strict segregated fit.

segregated free list, segregated free-list

A class of allocation mechanism which divides the free list into several subsets, according to the size of the free blocks. A freed or coalesced block is placed on the appropriate list. An allocation request is serviced from the appropriate list.

This class of mechanism implements a good fit or best fit policy.

Variations within this class include simple segregated storage, segregated fit, and buddy systems.

semi-conservative garbage collection (also known as mostly-precise garbage collection, mostly-exact garbage collection)

A variant of conservative garbage collection which deals with exact references as well as ambiguous references.

For example, references from the root set might be ambiguous, but objects on the heap might be fully described and precisely scanned.

See also: mostly-copying garbage collection.

semi-space

When an area of memory(2) is divided into two parts for the purposes of copying garbage collection, the parts are known as semi-spaces, or sometimes just spaces.

Each semi-space is a contiguous area of memory. Semi-spaces are usually used for two space collection, but can be used for generational collection.

The semi-space where objects reside at the start of the collection is known as the old semi-space; the new semi-space is where objects will reside, and where new objects will be allocated, when the collection is complete.

See also: two space collector.

semi-space collector (for full details, see two-space collector)

A two-space collector(1) is a simple form of a copying garbage collector. The available memory(2) is divided into two halves, called semi-spaces. Objects are allocated in one semi-space until it is full. The reachable objects are then copied into the other semi-space (usually using a Cheney scan) and the old semi-space is reclaimed. Allocation continues in the new semi-space until it is full, at which point the process is repeated in reverse.

sequential fit

A class of allocation mechanisms that maintain the free list as a single linear list of free blocks (a free block chain). Sequential fit mechanisms include first fit and next fit.

To quote Dynamic Storage Allocation: A Survey and Critical Review:

The list is often doubly-linked and/or circularly linked. Typically, sequential fit algorithms use Knuth's boundary tag technique, and a doubly-linked list to make coalescing simple and fast. ... In considering sequential fits, it is probably most important to keep strategy and policy issues in mind. The classic linear-list implementations may not scale well to large heaps, in terms of time costs; as the number of free blocks grows the time to search the list may become unacceptable. More efficient and scalable techniques are available, using totally or partially ordered trees, or segregated fits.

See also: bitmapped fit; indexed fit.

sequential store buffer (also known as SSB)

A sequential store buffer is a technique for dividing the cost of a write-barrier by remembering which objects are modified and updating remembered sets (and so on) at a later stage.

This turns out to be extremely efficient on pipelined architectures with branch prediction.

shared memory

Memory locations are shared if they are in the range of multiple address spaces.

simple object

In the PostScript® language, simple objects are the unboxed objects.

Unlike a composite object, a simple object contains all its data in the object itself.

Similar terms: unboxed.
Opposites: composite object.

simple segregated storage

A segregated free list allocation mechanism which divides storage into pages or other areas and only allocates objects of a single size, or small range of sizes, within each area. This makes allocation fast and avoids headers, but may lead to high external fragmentation, as unused parts of areas cannot be reused for other object sizes.

smart pointer

A smart pointer is an instance of a C++ class that encapsulates a pointer and performs reference counting.

By overloading certain operators it is possible for the class to present the illusion of being a pointer, so that operator*, operator->, etc. can be used as normal. Reference counting allows the objects that are referred to using the smart pointer class to have their storage automatically reclaimed when they are no longer referenced. It is a common technique used when trying to solve memory management problems in C++ applications.

However, reference counting is not always an appropriate memory management technique and smart pointers can be hard to implement properly in C++. A tracing garbage collector might be worth considering.

snap-out (also known as transport snap-out)

In a copying collector, when there is a reference to an object that was condemned, but has been transported, snap-out is the adjustment of that reference to point to the preserved copy.

Typically the first transport leaves a forwarding pointer that enables the snap-out.

Diagram: Snap-out

See also: broken heart.

snapshot-at-the-beginning, snapshot at the beginning

Snapshot-at-the-beginning algorithms for incremental, tracing garbage collection note the changes the mutator makes to the graph of objects, and update the collector(2) state so that it traces relevant edges that the mutator deletes.

In order for the collector to miss a reachable object, the following two conditions need to hold at some point during tracing:

  1. The mutator stores a reference to a white object into a black object.
  2. All paths from any gray objects to that white object are destroyed.

Snapshot-at-the-beginning algorithms ensure the second condition cannot occur, by causing the collector to process any reference that the mutator overwrites and that might be part of such a path.

They are so called because they keep track of references that existed at the beginning of the collection cycle. Note that this does not mean all modifications need to be seen by the collector, only those needed to complete tracing without missing a reachable object (see Barrier techniques for incremental tracing for details), nor does it mean that it won't trace some references created during the collection.

Historical note: This distinction between incremental-update and snapshot-at-the-beginning was first introduced for write-barrier algorithms, but it applies to any type of tracing algorithm.

Opposites: incremental-update.
See also: tri-color marking; weak tri-color invariant; barrier(1).

soft reference

In Java™ terminology, soft reference is used to mean a reference encapsulated in a reference object of class SoftReference.

Soft references form one of three kinds of weak reference(1) in Java. They are handy for building caches(3) that are automatically flushed when memory is low.

See also: softly reachable.
Other links: Java spec for class SoftReference; Reference Objects and Garbage Collection.

softly reachable

In Java™, an object is softly reachable if it is not strongly reachable and there is a path from the roots to it that contains at least one soft reference but no weak(2) or phantom references.

When the Java collector(1) determines that an object is softly reachable, it has the option of clearing the soft references involved, which will usually allow the object to be recycled. The idea is that they will only be cleared if the process is running short of memory(2). If this is done, all the soft references involved are cleared, so that the object is no longer softly reachable, and any affected reference objects that are registered with a queue are enqueued.

See also: reachability; weakly reachable; phantom reachable.
Other links: Java spec for class SoftReference; Reference Objects and Garbage Collection.

space leak (for full details, see memory leak)

A memory leak is where allocated memory(2) is not freed although it is never used again.

spaghetti stack (for full details, see cactus stack)

A cactus stack is a stack with branches. When diagrammed, its shape resembles that of a saguaro cactus.

split

To divide a free block into two smaller free blocks in the process of satisfying an allocation request.

Deciding when to split a block is an important aspect of an allocation policy.

Opposites: coalesce.
See also: coalesce; allocation policy; free block.

SRAM (for full details, see static memory(1))

Static memory(1) or static RAM (SRAM) is a type of physical memory(2) that does not need to be refreshed periodically to avoid losing state.

SSB (for full details, see sequential store buffer)

A sequential store buffer is a technique for dividing the cost of a write-barrier by remembering which objects are modified and updating remembered sets (and so on) at a later stage.

stack

A stack is a LIFO (last in, first out) collection: objects may be pushed onto the stack, and popped off it in reverse order of pushing.

When people say "the stack", they usually mean the control stack supported by the OS and/or the processor.

Relevance to memory management: Stack allocation is an important technique. Control stacks are central to the performance of the system and often require special handling.

Historical note: The terms "stack", "push", and "pop" are taken from the spring-loaded dish stack found in cafeterias and salad bars where removing the top plate causes the others to rise up, exposing the next one, and adding a plate causes the spring to compress, leaving only that plate accessible.

So originally, the latest item was the "top", "down the stack" meant towards earlier items, and "up" towards later ones, but today many use "up" and "down" in the opposite sense.

Similar terms: control stack.
See also: data stack; cactus stack.

stack allocation

Stack allocation means run-time allocation and deallocation of storage in last-in/first-out order.

Typically, stack allocation is performed on top of the main stack, but one can have a separate data stack for this purpose as well, as in Forth, or even multiple ones, as in the PostScript® language.

Allocation and deallocation are typically fast, since they can be done simply by adding or subtracting the size of the block from the stack pointer.

Using only stack allocation, without heap allocation, is somewhat restrictive, as only objects whose size is known at compile-time can be returned from a procedure.

Some programming languages (such as some versions of Lisp and C) provide program-controlled stack allocation and deallocation of dynamic extent objects for efficiency, despite its being unsafe.

Similar terms: automatic storage duration.
Opposites: heap allocation; static allocation.
See also: region inference; dynamic extent.

stack frame (also known as stack record)

A stack frame or record is an activation record that is stored on the stack.

In a register-based architecture, where the current activation record may be partially stored in registers, there may be hardware instructions that facilitate storing registers on the stack when another activation record is made current. Such instructions may prescribe a particular layout for activation records.

Relevance to memory management: Hardware support for saving and restoring registers, for stacks and for stack addressing may limit or otherwise prescribe the size and type of data that can be stored in a stack frame. Knowledge of the layout of each stack frame may assist a garbage collector in finding roots.

Similar terms: activation record.
See also: stack.

stack record (for full details, see stack frame)

A stack frame or record is an activation record that is stored on the stack.

static allocation

Static allocation means allocation of storage before the program starts and retention until the end.

The locations of objects are basically decided at compile-time, although they might be relocated at load-time. This implies the sizes of the objects must be known then.

Using only static allocation is restrictive, as sizes of data structures can't be dynamically varied, and procedures cannot be recursive. However, it is also fast and eliminates the possibility of running out of memory. For this reason, this scheme is sometimes used in real-time systems.

Historical note: The first high-level language, Fortran, only had static allocation to begin with. Later languages usually offer heap and/or stack allocation, but static allocation is often available as an option.

Similar terms: static storage duration.
Opposites: stack allocation; heap allocation.
See also: region inference; static memory(2).

static memory(1) (also known as static RAM, SRAM)

Static memory(1) or static RAM (SRAM) is a type of physical memory(2) that does not need to be refreshed periodically to avoid losing state.

Static memory is typically faster than dynamic memory, or requires essentially no power to preserve its state, but rarely both. These benefits result in static RAM being used for cache(1) memory, and also in portable, low-power applications (such as PDAs). It is, however, more expensive than dynamic RAM and requires more transistors, making dynamic RAM the choice for large amounts of memory (the main memory of desktop machines, for example).

Opposites: dynamic memory.

static memory(2)

The memory(2) where statically allocated objects are stored is sometimes known as static memory. In the context of garbage collection, the term is used to mean the memory used to store static objects.

See also: static storage duration.

static object

A static object is non-moving. That is, it is not relocated by a memory manager; its address does not change.

static RAM (for full details, see static memory(1))

Static memory(1) or static RAM (SRAM) is a type of physical memory(2) that does not need to be refreshed periodically to avoid losing state.

static storage duration

In C and C++, the static keyword applied to a file scope variable or function means it is local to the file; applied to a block scope variable, it means the variable is allocated and initialized once only.

Objects declared locally in blocks with the static keyword are allocated in static memory(2), and initialized once (usually by the compiler/linker) instead of each time the block is entered.

Static variables within functions retain their value between function invocations, and therefore must form part of the root set of any collector(1).

Opposites: automatic storage duration.
See also: lifetime.

sticky reference count (for full details, see limited-field reference count)

A reference counting technique whereby the field used to store the number of references to an object has a limited size. In particular, the field is not large enough to represent the maximum possible number of references to an object.

storage (for full details, see memory(1))

Memory or storage (or store) is where data and instructions are stored. For example, caches(1), main memory, floppy and hard disks are all storage devices.

storage hierarchy (also known as memory hierarchy)

A typical computer has several different levels of storage. Each level of storage has a different speed, cost, and size. The levels form a storage hierarchy, in which the topmost levels (those nearest the processor) are fastest, most expensive and smallest.

Levels typically include processor registers, possibly some levels of cache(1), main memory, and possibly some levels of backing store.

Each level is commonly used as a cache(2) for the next level. For instance, virtual memory(1) systems use main memory as a cache for backing store.

Diagram: Storage hierarchy with relative speed, cost, and typical size

storage level

One level in a storage hierarchy, for instance a cache(1), main memory, backing store, and so on.

See also: storage hierarchy.

storage management (for full details, see memory management)

Memory management is the art and the process of coordinating and controlling the use of memory(1) in a computer system.

store(1)

To transfer data from a processor's registers to memory(2).

Store can also be used in the more general sense of transferring data from a part of the memory hierarchy that is fast to access to one that is slow to access.

STORE (or an abbreviation) is also commonly used in many processor architectures as the mnemonic for the machine code instructions that store data into memory.

Opposites: load.

store(2) (for full details, see memory(1))

Memory or storage (or store) is where data and instructions are stored. For example, caches(1), main memory, floppy and hard disks are all storage devices.

strict segregated fit

A segregated fit allocation mechanism which has only one block size on each free list. A requested block size is rounded up to the next provided size, and the first block on that list is returned. The sizes must be chosen so that any block of a larger size can be split into a number of smaller sized blocks. Buddy systems are a special case of strict segregated fit allocators.

See also: buddy system; segregated fit; segregated free list; allocation mechanism.

strong reference

In a tracing garbage collector, a strong reference is a reference that keeps the object it refers to alive.

A strong reference is the usual sort of reference; the term is usually used to draw a contrast with weak reference(1).

Opposites: weak reference(1).
See also: strong root.

strong root

A strong root is a root such that all references in it are strong references.

A strong root is the usual sort of root; the term is usually used to draw a contrast with weak root.

Opposites: weak root.

strong tri-color invariant, strong tri-colour invariant, strong tricolor invariant, strong tricolour invariant

The strong tri-color invariant is the property of a reference graph that there is no edge from a black node to a white node.

By preserving this property throughout tri-color marking, a tracing algorithm can ensure that the collector(2) will not miss reachable objects, even if the mutator manipulates the graph during the collection. This invariant can also be used to ensure that a copying garbage collector doesn't confuse the mutator. Mutator actions might need to change the color of the nodes affected in order to preserve the invariant (see Barrier techniques for incremental tracing for details).

Algorithms using this invariant are incremental-update algorithms.

Similar terms: tri-color invariant.
See also: barrier(1); weak tri-color invariant.

strongly reachable

In Java™, an object is strongly reachable if there is a path from the roots to it that contains only strong references, that is, no reference objects.

See also: reachability; softly reachable; weakly reachable; phantom reachable.
Other links: Reference Objects and Garbage Collection.

suballocator

A suballocator is an allocator functioning on top of another allocator.

Suballocators work by allocating large blocks and splitting them for use, or by recycling blocks locally.

Application programmers sometimes write their own suballocators when faced with an inefficient or inadequate memory manager. Suballocators can take advantage of special knowledge of program behavior, but are less efficient in general than fixing the underlying allocator, mainly because memory management is a global issue for an application, and a global strategy can make a big difference. For example, different suballocators can interact catastrophically with each other and with the virtual memory(1) system, causing the application's memory requirements to grow unnecessarily due to fragmentation.

subgraph

A subgraph S of a graph G is a graph such that all the nodes in S are also in G and all the edges in S are also in G; that is, it is a part of a graph.

sure reference (for full details, see exact reference)

An exact or precise or sure reference is a value the collector(1) knows is a reference.

swap space

Backing store used by a swapping system.

See also: swapping; backing store.

swapped in

A process or page is swapped in if it is available in physical memory(1). This usually applies to the entire program image.

Similar terms: paged in.
Opposites: swapped out.
See also: swapping.

swapped out

A process or page is swapped out if it is not available in physical memory(1). This usually applies to the entire program image.

Similar terms: paged out.
Opposites: swapped in.
See also: swapping.

swapping

Historically, swapping was the technique of moving entire program images to disk (or drum) and back into physical memory(1), an early form of virtual memory(1). Nowadays, it is used as a synonym for paging.

Similar terms: paging.
See also: swapped in; swapped out.

sweeping

Sweeping is the second phase ("the sweep phase") of the mark-sweep algorithm (q.v.). It performs a sequential (address-order) pass over memory to recycle unmarked blocks.

Sweeping typically gathers all unmarked blocks into one or more free lists.

See also: marking.
