machine word
See word.
main memory
Also known as: memory, primary storage.
The main memory (or primary storage) of a computer is memory (1) that is wired directly to the processor, consisting of RAM and possibly ROM. These terms are used in contrast to mass storage devices and cache memory (although we may note that when a program accesses main memory, it is often actually interacting with a cache).
Main memory is the middle level of the memory hierarchy: it is slower and cheaper than caches (1), but faster and more expensive than backing store.
It is common to refer only to the main memory of a computer; for example, "This server has 128 GB of memory" and "macOS High Sierra requires at least 2 GB of memory".
Main memory used to be called core, and is now likewise often called RAM.
Similar terms: core, physical memory (1), RAM.
malloc
A function in the standard C library that performs dynamic allocation of memory (2).
Many people use "malloc" as a verb to mean "allocate dynamically".
Similar term: allocate.
See also: free (2).
manual memory management
In some systems or languages, it is up to the application program to manage all the bookkeeping details of allocating <allocate> memory (2) from the heap and freeing <free (1)> it when no longer required; this is known as manual memory management.
Manual memory management may be appropriate for small programs, but it does not scale well in general, nor does it encourage modular or object-oriented programming.
To quote Joyner (1996) <JOYNER96>:
In C++ the programmer must manually manage storage due to the lack of garbage collection. This is the most difficult bookkeeping task C++ programmers face, that leads to two opposite problems: firstly, an object can be deallocated <free (1)> prematurely, while valid references still exist (dangling pointers); secondly, dead objects might not be deallocated, leading to memory filling up with dead objects (memory leaks). Attempts to correct either problem can lead to overcompensation and the opposite problem occurring. A correct system is a fine balance.
Manual memory management was common in early languages, but garbage collection has been around since the late 1950s, in languages like Lisp. Most modern languages use automatic memory management, and some older languages have conservative garbage collection extensions.
Opposite term: automatic memory management.
mapped
Also known as: committed.
A range of virtual addresses is said to be mapped (committed on Windows) if there is physical memory (2) associated with the range.
Note that, in some circumstances, the virtual memory system could actually overcommit mapped memory.
Opposite term: unmapped.
See also: mapping, memory mapping, mmap.
mapping
A mapping is a correspondence between a range of virtual addresses and some memory (1) (or a memory-mapped <memory mapping> object). The physical location of the memory will be managed by the virtual memory system.
Each page in a mapping could be paged out or paged in, and the locations it occupies in main memory and/or swap space might change over time.
[Figure: Virtual memory with different kinds of mappings.]
The virtual address space can contain a complex set of mappings. Typically, parts of the address space are mapped (have a mapping assigned), others are reserved but unmapped, and most of it is entirely unmapped.
See also: backing store.
mark-compact
Mark-compact collection is a kind of tracing garbage collection that operates by marking reachable objects, then compacting <compaction> the marked objects (which must include all the live objects).
The mark phase follows reference chains to mark all reachable objects; the compaction phase typically performs a number of sequential passes over memory (2) to move objects and update references. As a result of compaction, all the marked objects are moved into a single contiguous block of memory (or a small number of such blocks); the memory left unused after compaction is recycled.
Mark-compact collection can be regarded as a variation of mark-sweep collection <mark-sweep>, with extra effort spent to eliminate the resulting fragmentation. Compaction also allows the use of more efficient allocation mechanisms, by making large free blocks available.
Related publication: Edwards <EDWARDS>.
mark-sweep
Also known as: mark-and-sweep.
Mark-sweep collection is a kind of tracing garbage collection that operates by marking reachable objects, then sweeping over memory (2) and recycling <recycle> objects that are unmarked (which must be unreachable), putting them on a free list.
The mark phase follows reference chains to mark all reachable objects; the sweep phase performs a sequential (address-order) pass over memory to recycle all unmarked objects. A mark-sweep collector (1) doesn't move objects.
This was the first garbage collection algorithm, devised by John McCarthy for Lisp.
See also: mark-compact.
Related publication: McCarthy (1960) <MCCARTHY60>.
marking
Marking is the first phase ("the mark phase") of the mark-sweep algorithm or mark-compact algorithm. It follows all references from a set of roots to mark all the reachable objects.
Marking follows reference chains and makes some sort of mark for each object it reaches.
Marking is often achieved by setting a bit in the object, though any conservative representation of a predicate on the memory location of the object can be used. In particular, storing the mark bit within the object can lead to poor locality of reference and to poor cache performance, because the marking phase ends up setting the dirty bit on all pages in the working set. An alternative is to store the mark bits separately: see bitmap marking.
See also: compact <compaction>, sweep <sweeping>.
MB
See megabyte.
megabyte
Also known as: MB.
A megabyte is 1024 kilobytes, or 1048576 bytes <byte (1)>.
See byte (1) for general information on this and related quantities.
memoization
See caching (3).
memory (1)
Also known as: storage, store.
Memory or storage (or store) is where data and instructions are stored. For example, caches (1) <cache (1)>, main memory, floppy and hard disks are all storage devices.
These terms are also used for the capacity of a system to store data, and may be applied to the sum total of all the storage devices attached to a computer.
"Store" is old-fashioned, but survives in expressions such as "backing store".
memory (2)
Memory refers to memory (1) that can be accessed by the processor directly (using memory addressing instructions).
This could be real memory (1) or virtual memory.
memory (3)
See main memory.
memory (4)
A memory location; for example, "My digital watch has 256 memories."
memory bandwidth
Memory bandwidth (by analogy with the term bandwidth from communication theory) is a measure of how quickly information (expressed in terms of bits) can be transferred between two places in a computer system.
Often the term is applied to a measure of how quickly the processor can obtain information from the main memory (for example, "My new bus design has a bandwidth of over 400 Megabytes per second").
memory cache
See cache (1).
memory hierarchy
See storage hierarchy.
memory leak
Also known as: leak, space leak, space-leak.
A memory leak is where allocated memory (2) is not freed (1) although it is never used again.
In manual memory management, this usually occurs because objects become unreachable without being freed (1).
In tracing garbage collection, this happens when objects are reachable but not live.
In reference counting, this happens when objects are referenced but not live. (Such objects may or may not be reachable.)
Repeated memory leaks cause the memory usage of a process to grow without bound.
memory location
Also known as: location.
Each separately-addressable <address> unit of memory (2) in which data can be stored is called a memory location. Usually, these hold a byte (2), but the term can refer to words.
memory management
Also known as: storage management.
Memory management is the art and the process of coordinating and controlling the use of memory (1) in a computer system.
Memory management can be divided into three areas:
- memory management hardware (MMUs, RAM, etc.);
- operating system memory management (virtual memory, protection);
- application memory management (allocation <allocate>, deallocation <free (1)>, garbage collection).
Memory management hardware consists of the electronic devices and associated circuitry that store the state of a computer. These devices include RAM, MMUs (memory management units), cache (1), disks, and processor registers. The design of memory hardware is critical to the performance of modern computer systems. In fact, memory bandwidth is perhaps the main limiting factor on system performance.
Operating system memory management is concerned with using the memory management hardware to manage the resources of the storage hierarchy and allocating them to the various activities running on a computer. The most significant part of this on many systems is virtual memory, which creates the illusion that every process has more memory than is actually available. OS memory management is also concerned with memory protection and security, which help to maintain the integrity of the operating system against accidental damage or deliberate attack. It also protects user programs from errors in other programs.
Application memory management involves obtaining memory (2) from the operating system, and managing its use by an application program. Application programs have dynamically changing storage requirements. The application memory manager must cope with this while minimizing the total CPU overhead, interactive pause times, and the total memory used.
While the operating system may create the illusion of nearly infinite memory, it is a complex task to manage application memory so that the application can run most efficiently. Ideally, these problems should be solved by tried and tested tools, tuned to a specific application.
The Memory Management Reference is mostly concerned with application memory management.
See also: automatic memory management, manual memory management.
Memory Management Unit
See MMU.
memory manager
The memory manager is that part of the system that manages memory (2), servicing allocation <allocate> requests, and recycling <recycle> memory, either manually <manual memory management> or automatically <automatic memory management>.
The memory manager can have a significant effect on the efficiency of the program; it is not unusual for a program to spend 20% of its time managing memory.
Similar terms: allocator, collector (1).
See also: memory management.
memory mapping
Also known as: file mapping.
Memory mapping is the technique of making a part of the address space appear to contain an "object", such as a file or device, so that ordinary memory (2) accesses act on that object.
The object is said to be mapped to that range of addresses. (The term "object" does not mean a program object. It comes from Unix terminology on the mmap man page.)
[Figure: An address space with a range mapped to part of an object.]
Memory mapping uses the same mechanism as virtual memory to "trap" accesses to parts of the address space, so that data from the file or device can be paged in (and other parts paged out) before the access is completed.
File mapping is available on most modern Unix and Windows systems. However, it has a much longer history. In Multics, it was the primary way of accessing files.
See also: mapped.
memory protection
See protection.
message
message queue
message type
misaligned
See unaligned.
miss
A miss is a lookup failure in any form of cache (3) <caching (3)>, most commonly at some level of a storage hierarchy, such as a cache (1) or virtual memory system.
The cost of a miss in a virtual memory system is considerable: it may be five orders of magnitude more costly than a hit. In some systems, such as multi-process operating systems, other work may be done while a miss is serviced.
Opposite term: hit.
See also: miss rate.
miss rate
At any level of a storage hierarchy, the miss rate is the proportion of accesses which miss.
Because misses are very costly, each level is designed to minimize the miss rate. For instance, in caches (1), miss rates of about 0.01 may be acceptable, whereas in virtual memory systems, acceptable miss rates are much lower (say 0.00005). If a system has a miss rate which is too high, it will spend most of its time servicing the misses, and is said to thrash.
Miss rates may also be given as a number of misses per unit time, or per instruction.
Opposite term: hit rate.
mmap
mmap is a system call provided on many Unix systems to create a mapping for a range of virtual addresses.
MMU
Also known as: Memory Management Unit.
The MMU (Memory Management Unit) is a hardware device responsible for handling memory (2) accesses requested by the main processor.
This typically involves translation of virtual addresses to physical addresses, cache (1) control, bus arbitration, memory protection, and the generation of various exceptions. Not all processors have an MMU.
See also: page fault, segmentation violation, virtual memory.
mostly-copying garbage collection
A type of semi-conservative <semi-conservative garbage collection> tracing garbage collection which permits objects to move <moving garbage collector> if no ambiguous references point to them.
The techniques used are a hybrid of copying garbage collection and mark-sweep.
Mostly-copying garbage collectors share many of the benefits of copying collectors, including compaction. Since they support ambiguous references they are additionally suitable for use with uncooperative compilers, and may be an efficient choice for multi-threaded systems.
Related publications: Bartlett (1989) <BARTLETT89>, Yip (1991) <YIP91>.
mostly-exact garbage collection
See semi-conservative garbage collection.
mostly-precise garbage collection
See semi-conservative garbage collection.
moving garbage collector
Also known as: moving memory manager.
A memory manager (often a garbage collector) is said to be moving if allocated objects can move during their lifetimes.
In the garbage collecting world this will apply to copying <copying garbage collection> collectors and to mark-compact collectors. It may also refer to replicating <replicating garbage collector> collectors.
Similar term: copying garbage collection.
Opposite term: non-moving garbage collector.
mutable
Any object which may be changed by a program is mutable.
Opposite term: immutable.
mutator
Also known as: client program.
In a garbage-collected <garbage collection> system, the part that executes the user code, which allocates objects and modifies, or mutates, them.
For purposes of describing incremental garbage collection, the system is divided into the mutator and the collector (2). These can be separate threads of computation, or interleaved within the same thread.
The user code issues allocation requests, but the allocator code is usually considered part of the collector. Indeed, one of the major ways of scheduling the other work of the collector is to perform a little of it at every allocation.
While the mutator mutates, it implicitly frees <free (1)> memory (1) by overwriting references.
This term is due to Dijkstra et al. (1976) <DLMSS76>.
Opposite term: collector (2).