Memory Management Reference

Memory Management Glossary: D

dangling pointer

A dangling pointer is a surviving reference to an object that no longer exists at that address.

In manual memory management, dangling pointers typically arise from one of:

  1. A premature free, where an object is freed(1) but a reference is retained;
  2. A retained reference to a stack-allocated object after the relevant stack frame has been popped.

Dangling pointers can occur under automatic memory management, because of a garbage collection bug (such as premature collection, or moving without updating all references), but this is much rarer because garbage collection code is usually a single common core of reused code in which these bugs can be fixed systematically.

data stack

A stack used to manage the storage of stack-allocated objects, other than activation records, often under program control.

Because of the limitations that may be imposed on the control stack, or to support stack-like semantics for certain data structures, some language implementations manage additional data stacks in software for storing objects that have dynamic extent but that do not fit within the constraints of the control stack.

See also

control stack.

dead

An object is dead if it is not live; that is, when the mutator cannot reach any state in which it accesses the object.

It is not possible, in general, for garbage collectors to determine exactly which objects are dead and which are live. Instead, they use some approximation to detect objects that are provably dead, such as those that are unreachable.

Opposite term

live.

See also

garbage, undead, free(3).

deallocate

See

free(1).

debugging pool

In the MPS

A pool that performs extra checking in order to find errors in the client program. It uses fenceposts to detect overwriting errors and it writes patterns over reclaimed blocks in order to detect use after free or missing references during scanning.

deferred coalescing

Deferred coalescing is a policy which coalesces free blocks some time after the blocks are freed, as opposed to coalescing free blocks immediately as they are freed.

Adjacent free blocks can be coalesced to form larger free blocks; deferred coalescing is a catch-all for policies which perform this coalescing sometime after the blocks were freed.

Given this rather flexible definition there are a number of choices for when to coalesce: as the free list is traversed during allocation, when the allocation cannot be satisfied from the free list, periodically, and so on. In addition there are choices to be made regarding how much coalescing to perform at any one time.

deferred reference counting

Deferred reference counting reduces the cost of maintaining reference counts by not adjusting them when references are stored on the stack.

On many systems, the majority of stores are made into local variables, which are kept on the stack. Deferred reference counting leaves those out and counts only references stored in heap objects. This requires compiler support, but can lead to substantial performance improvements.

Objects cannot be reclaimed as soon as their reference count becomes zero, because there might still be references to them from the stack. Such objects are added to a zero count table (ZCT) instead. If a reference to an object with a count of zero is stored into the heap, then the object is removed from the ZCT. Periodically the stack is scanned, and any objects in the ZCT which were not referenced from the stack are reclaimed.

Deferred reference counting has been used successfully with several languages, notably Smalltalk. However, since it fails to collect objects with cyclic references, it is often used alongside a tracing garbage collector.

dependent object

In the MPS

In AWL (Automatic Weak Linked), each object in the pool can have a dependent object. While scanning an object, the MPS ensures that the dependent object is unprotected so that it can be updated. This feature supports the implementation of weak-key and weak-value hash tables. See Dependent objects.

derived pointer
destructor(1)

A destructor is a function or a method that performs the explicit deallocation of an object. It may also perform clean-up actions.

Opposite term

constructor(1).

destructor(2)

In C++, a destructor is a member function that is used to clean up when an object is being deallocated.

When an object is being destroyed (by delete or automatically), the appropriate destructor is called, and then the actual deallocation of memory(2) is performed by operator delete or the run-time system (for static and stack allocation).

See also

constructor(2).

DGC
direct method

Direct methods of automatic memory management maintain information about the liveness of each object, detecting garbage directly.

Such bits of information, for example, reference counts, are typically stored within the objects themselves.

Direct garbage collection can allow memory(2) to be reclaimed as soon as it becomes unreachable. However, the stored information must be updated as the graph of objects changes; this may be an expensive operation, especially in distributed garbage collection where it can lead to intensive communication between processors, and make garbage collection less robust to network failures.

Opposite term

indirect method.

dirty bit

A dirty bit is a flag indicating that a page (or similar) has been written to since it was last examined.

Dirty bits are used by caches(2) to determine which pages must be written out, and by garbage collectors in conjunction with write barriers.

distributed garbage collection

Also known as

DGC.

Distributed garbage collection is garbage collection in a system where objects might not reside in the same address space or even on the same machine.

Distributed garbage collection is difficult to achieve in widely distributed systems (over wide-area networks) because of the costs of synchronization and communication between processes. These costs are particularly high for a tracing garbage collector, so other techniques, including weighted reference counting, are commonly used instead.

double buddies

A buddy system allocation mechanism using a pair of binary buddy systems with staggered size classes.

One system is a pure binary buddy, with powers-of-two classes (2, 4, 8, …). The other uses some fixed multiple of powers-of-two (for example, 3, 6, 12, …). This resembles weighted buddies, but the two buddy systems are treated independently: blocks cannot be split or coalesced from one to the other.

double free

A double free occurs when an attempt is made to free(1) a memory(2) block that has already been freed.

This usually occurs in manual memory management when two parts of a program believe they are responsible for the management of the same block.

Many manual memory managers have great trouble with double frees, because they cannot cheaply determine that deallocated blocks were already free. Instead, they corrupt their free block chain, which leads to mysterious problems when the same block is subsequently allocated.

See also

premature free.

doubleword

Also known as

longword.

A doubleword is a unit of memory consisting of two adjacent words.

Historical note

On the Intel 80386, 80486, and Pentium processors, the doubleword of 32 bits is actually the natural word size, but the term word is still used for the 16-bit unit, as it was on earlier processors of this series.

See also

quadword.

doubly weak hash table
A hash table that is both weak-key and weak-value.
DRAM
dynamic allocation
dynamic extent

An object has dynamic extent if its lifetime is bounded by the execution of a function or some other block construct.

Objects of dynamic extent are usually stack-allocated.

Opposite term

indefinite extent.

dynamic memory

Also known as

dynamic RAM, DRAM.

Dynamic memory, or dynamic RAM (DRAM, pronounced “dee ram”), is a type of RAM.

Dynamic memory requires periodic refreshing to avoid losing its contents (as opposed to static memory(1), the contents of which are preserved without any need for refreshing). The refreshing is performed by additional “refresh hardware” usually external to the dynamic memory package itself, sometimes by the main CPU. Dynamic memory is cheap and compact and is the choice for large amounts of relatively fast memory, such as the main memory of PCs. Dynamic memory often comes packaged in SIMMs or DIMMs.

See also

static memory(1), SDRAM.

dynamic RAM