Memory management

  1. Details

      Dynamic memory allocation · Efficiency · Implementations · Fixed-size blocks allocation · Buddy blocks · Slab allocation · Stack allocation · Automatic variables · Garbage collection

  2. Systems with virtual memory

  3. See also

  4. Notes

  5. References

  6. Further reading

  7. External links

{{Redirect|Memory allocation|memory allocation in the brain|Neuronal memory allocation}}{{about|memory management at the application level|memory management at the operating system level|Memory management (operating systems)}}{{More footnotes|date=April 2014}}{{OS}}Memory management is a form of resource management applied to computer memory. The essential requirement of memory management is to provide ways to dynamically allocate portions of memory to programs at their request, and free it for reuse when no longer needed. This is critical to any advanced computer system where more than a single process might be underway at any time.[1]

Several methods have been devised that increase the effectiveness of memory management. Virtual memory systems separate the memory addresses used by a process from actual physical addresses, allowing separation of processes and increasing the size of the virtual address space beyond the available amount of RAM using paging or swapping to secondary storage. The quality of the virtual memory manager can have an extensive effect on overall system performance.

Details

{{expand section|date=November 2016}}

Application-level memory management is generally categorized as either automatic memory management, usually involving garbage collection, or manual memory management.

{{Anchor|DYNAMIC|HEAP|ALLOCATION}}Dynamic memory allocation

{{See also|C dynamic memory allocation}}

The task of fulfilling an allocation request consists of locating a block of unused memory of sufficient size. Memory requests are satisfied by allocating portions from a large pool of memory called the heap or free store.{{efn|Not to be confused with the unrelated heap data structure.}} At any given time, some parts of the heap are in use, while some are "free" (unused) and thus available for future allocations.

Several issues complicate the implementation, such as external fragmentation, which arises when free memory is split into many small gaps between allocated blocks, so that no single gap is large enough to satisfy an allocation request even though the total free space may be. The allocator's metadata can also inflate the size of (individually) small allocations; this overhead is often reduced by chunking. The memory management system must track outstanding allocations to ensure that they do not overlap and that no memory is ever "lost" (i.e. that there are no "memory leaks").
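
As a concrete illustration, the sketch below shows how a heap allocator of this kind might satisfy requests from a free list of variable-sized blocks. The arena size, block-header layout, and first-fit policy are assumptions made for the example, not a description of any particular allocator.

<syntaxhighlight lang="c">
#include <stdalign.h>
#include <stddef.h>
#include <stdio.h>

/* Illustrative first-fit allocator over a fixed arena.  Each block carries a
 * header recording its size and whether it is free; this header is the
 * allocator metadata that inflates small allocations. */
typedef struct block {
    size_t size;            /* payload size in bytes          */
    int free;               /* nonzero if available for reuse */
    struct block *next;     /* next block, in address order   */
} block_t;

static alignas(max_align_t) unsigned char arena[64 * 1024];
static block_t *head;

static void arena_init(void) {
    head = (block_t *)arena;
    head->size = sizeof arena - sizeof(block_t);
    head->free = 1;
    head->next = NULL;
}

/* First fit: take the first free block that is large enough, splitting it
 * when the remainder can still hold a header plus a small payload. */
static void *arena_alloc(size_t n) {
    n = (n + 7) & ~(size_t)7;                    /* keep headers aligned */
    for (block_t *b = head; b != NULL; b = b->next) {
        if (!b->free || b->size < n)
            continue;
        if (b->size >= n + sizeof(block_t) + 8) {
            block_t *rest = (block_t *)((unsigned char *)(b + 1) + n);
            rest->size = b->size - n - sizeof(block_t);
            rest->free = 1;
            rest->next = b->next;
            b->size = n;
            b->next = rest;
        }
        b->free = 0;
        return b + 1;                            /* payload follows the header */
    }
    return NULL;            /* exhausted, or free space too fragmented */
}

static void arena_free(void *p) {
    if (p == NULL)
        return;
    block_t *b = (block_t *)p - 1;
    b->free = 1;
    if (b->next != NULL && b->next->free) {      /* coalesce with the right neighbour */
        b->size += sizeof(block_t) + b->next->size;
        b->next = b->next->next;
    }
}

int main(void) {
    arena_init();
    void *a = arena_alloc(100);
    void *b = arena_alloc(200);
    arena_free(a);
    void *c = arena_alloc(80);                   /* fits in the gap left by a */
    printf("a=%p b=%p c=%p\n", a, b, c);
    return 0;
}
</syntaxhighlight>

The per-block header is the metadata overhead mentioned above, and a request that fails despite plenty of total free space is exactly the external fragmentation case.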

Efficiency

The specific dynamic memory allocation algorithm implemented can impact performance significantly. A study conducted in 1994 by Digital Equipment Corporation illustrates the overheads involved for a variety of allocators. The lowest average instruction path length required to allocate a single memory slot was 52 (as measured with an instruction level profiler on a variety of software).[2]

Implementations

Since the precise location of the allocation is not known in advance, the memory is accessed indirectly, usually through a pointer reference. The specific algorithm used to organize the memory area and allocate and deallocate chunks is interlinked with the kernel, and may use any of the following methods:

{{Anchor|FIXED-SIZE}}Fixed-size blocks allocation
{{main|Memory pool}}

Fixed-size blocks allocation, also called memory pool allocation, uses a free list of fixed-size blocks of memory (often all of the same size). This works well for simple embedded systems where no large objects need to be allocated, but it suffers from fragmentation because memory is wasted whenever requested sizes do not match the fixed block size. However, due to the significantly reduced overhead, this method can substantially improve performance for objects that need frequent allocation and de-allocation, and it is often used in video games.
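
A minimal sketch of such a pool, with invented names: the free list is threaded through the unused blocks themselves, so both allocation and deallocation are constant-time pointer operations with no per-block header.

<syntaxhighlight lang="c">
#include <stddef.h>
#include <stdio.h>

/* Illustrative memory pool: BLOCK_SIZE-byte blocks carved from a static
 * arena, chained into a free list through their own first bytes. */
#define BLOCK_SIZE  64
#define BLOCK_COUNT 256

typedef union pool_block {
    union pool_block *next;          /* valid only while the block is free */
    unsigned char payload[BLOCK_SIZE];
} pool_block_t;

static pool_block_t pool[BLOCK_COUNT];
static pool_block_t *free_list;

static void pool_init(void) {
    for (size_t i = 0; i < BLOCK_COUNT - 1; i++)
        pool[i].next = &pool[i + 1];
    pool[BLOCK_COUNT - 1].next = NULL;
    free_list = &pool[0];
}

static void *pool_alloc(void) {      /* O(1): pop the head of the free list */
    pool_block_t *b = free_list;
    if (b != NULL)
        free_list = b->next;
    return b;
}

static void pool_free(void *p) {     /* O(1): push the block back */
    pool_block_t *b = p;
    b->next = free_list;
    free_list = b;
}

int main(void) {
    pool_init();
    void *a = pool_alloc();
    void *b = pool_alloc();
    pool_free(a);
    void *c = pool_alloc();          /* reuses the block just released */
    printf("a=%p b=%p c=%p (a == c: %d)\n", a, b, c, a == c);
    return 0;
}
</syntaxhighlight>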

Buddy blocks
{{details|Buddy memory allocation}}

In this system, memory is allocated from several pools instead of just one, where each pool holds blocks of a certain power-of-two size, or blocks of some other convenient size progression. All blocks of a particular size are kept in a sorted linked list or tree. To satisfy a request, the allocator starts with the smallest sufficiently large free block, to avoid needlessly breaking up blocks; if that block is still larger than requested, it is split into two "buddies", one half is selected, and the process repeats until the block matches the request, with the newly formed blocks added to their respective pools for later use. When a block is freed, it is compared to its buddy: if both are free, they are combined and placed in the correspondingly larger-sized buddy-block list.
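
The defining property of the scheme is that a block's buddy can be found arithmetically: for a block of size 2^k at offset off from the start of the arena, the buddy lies at off XOR 2^k. The short sketch below illustrates that arithmetic and the split bookkeeping; it is not a complete allocator, and the orders and offsets used are arbitrary examples.

<syntaxhighlight lang="c">
#include <stddef.h>
#include <stdio.h>

/* Buddy arithmetic sketch: offsets are measured from the start of the arena,
 * and every block of order k has size (1 << k) bytes.  Splitting a block of
 * order k yields two buddies of order k-1; a block and its buddy differ only
 * in one bit of their offset, so the buddy is found with a single XOR. */
static size_t buddy_of(size_t offset, unsigned order) {
    return offset ^ ((size_t)1 << order);
}

int main(void) {
    /* Split a 1 KiB block (order 10) at offset 0 down to a 256-byte block. */
    size_t offset = 0;
    for (unsigned order = 10; order > 8; order--) {
        size_t half = (size_t)1 << (order - 1);
        printf("split order %u: keep offset %zu, put buddy %zu on the order-%u free list\n",
               order, offset, offset + half, order - 1);
    }
    /* On free, a block is recombined with its buddy only if the buddy is free too. */
    printf("buddy of offset 256 at order 8 is %zu\n", buddy_of(256, 8));
    printf("buddy of offset 0   at order 8 is %zu\n", buddy_of(0, 8));
    return 0;
}
</syntaxhighlight>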

Slab allocation
{{main|Slab allocation}}

This memory allocation mechanism preallocates memory chunks suitable to fit objects of a certain type or size.[3] These chunks are called caches and the allocator only has to keep track of a list of free cache slots. Constructing an object will use any one of the free cache slots and destructing an object will add a slot back to the free cache slot list. This technique alleviates memory fragmentation and is efficient as there is no need to search for a suitable portion of memory, as any open slot will suffice.
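
A sketch of the idea with invented names: a cache is pre-carved into slots sized for one object type, and a stack of free slot indices records which slots are open, so no search over variable-sized regions is ever needed.

<syntaxhighlight lang="c">
#include <stddef.h>
#include <stdio.h>

/* Illustrative slab-style cache for one object type.  Constructing an object
 * pops a free slot, destructing pushes the slot back. */
typedef struct {
    double x, y, z;
} particle_t;                       /* the object type this cache serves */

#define SLOTS 128

static particle_t slab[SLOTS];      /* preallocated chunk of suitably sized slots */
static int free_slots[SLOTS];       /* indices of currently unused slots */
static int free_top;

static void cache_init(void) {
    for (int i = 0; i < SLOTS; i++)
        free_slots[i] = SLOTS - 1 - i;
    free_top = SLOTS;
}

static particle_t *cache_get(void) {
    if (free_top == 0)
        return NULL;                /* cache exhausted; a real slab allocator grows here */
    return &slab[free_slots[--free_top]];
}

static void cache_put(particle_t *p) {
    free_slots[free_top++] = (int)(p - slab);
}

int main(void) {
    cache_init();
    particle_t *a = cache_get();
    particle_t *b = cache_get();
    cache_put(a);
    particle_t *c = cache_get();    /* any open slot suffices; here it is a's old slot */
    printf("a=%p b=%p c=%p\n", (void *)a, (void *)b, (void *)c);
    return 0;
}
</syntaxhighlight>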

Stack allocation
{{main|Stack-based memory allocation}}{{expand section|date=November 2016}}

Automatic variables

{{main|Automatic variable}}

In many programming language implementations, all variables declared within a procedure (subroutine, or function) are local to that function; the runtime environment for the program automatically allocates memory for these variables on program execution entry to the procedure, and automatically releases that memory when the procedure is exited. Special declarations may allow local variables to retain values between invocations of the procedure, or may allow local variables to be accessed by other procedures. The automatic allocation of local variables makes recursion possible, to a depth limited by available memory.
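
In C, for example, variables declared inside a function without the static keyword are automatic: each call gets its own instance, which is what makes the recursion below work, while a static local occupies a single storage location that persists across invocations.

<syntaxhighlight lang="c">
#include <stdio.h>

/* 'n' and 'result' are automatic: every active call to factorial has its own
 * copy, created on entry and released on return, which is what makes the
 * recursion work.  'calls' is static and therefore retains its value between
 * invocations for the whole run of the program. */
static unsigned long factorial(unsigned n) {
    static unsigned calls = 0;
    calls++;
    printf("invocation %u: n = %u\n", calls, n);
    unsigned long result = (n <= 1) ? 1UL : n * factorial(n - 1);
    return result;
}

int main(void) {
    printf("5! = %lu\n", factorial(5));
    return 0;
}
</syntaxhighlight>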

Garbage collection

{{main|Garbage collection (computer science)}}

Garbage collection is a strategy for automatically detecting memory allocated to objects that are no longer usable in a program, and returning that allocated memory to a pool of free memory locations. This method is in contrast to "manual" memory management where a programmer explicitly codes memory requests and memory releases in the program. While automatic garbage collection has the advantages of reducing programmer workload and preventing certain kinds of memory allocation bugs, it does require memory resources of its own and can compete with the application program for processor time.
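
One common collection strategy is mark-and-sweep. The toy collector below (the object layout and root handling are invented for the example) marks every object reachable from a root and then sweeps the remainder back to the system; real collectors differ greatly in how they find roots and reclaim memory.

<syntaxhighlight lang="c">
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

/* Toy mark-and-sweep collector over objects with at most two references. */
typedef struct object {
    bool marked;
    struct object *ref[2];      /* outgoing references held by this object     */
    struct object *next;        /* list of all allocated objects, for the sweep */
} object_t;

static object_t *all_objects = NULL;

static object_t *gc_new(void) {
    object_t *o = calloc(1, sizeof *o);
    if (o == NULL)
        abort();
    o->next = all_objects;
    all_objects = o;
    return o;
}

/* Mark phase: everything reachable from a root is still usable. */
static void mark(object_t *o) {
    if (o == NULL || o->marked)
        return;
    o->marked = true;
    mark(o->ref[0]);
    mark(o->ref[1]);
}

/* Sweep phase: unmarked objects are unreachable and can be reclaimed. */
static void sweep(void) {
    object_t **link = &all_objects;
    while (*link != NULL) {
        object_t *o = *link;
        if (!o->marked) {
            *link = o->next;
            free(o);            /* return the memory to the free pool */
        } else {
            o->marked = false;  /* reset for the next collection      */
            link = &o->next;
        }
    }
}

int main(void) {
    object_t *root = gc_new();
    root->ref[0] = gc_new();    /* reachable through root      */
    gc_new();                   /* never referenced: garbage   */
    mark(root);
    sweep();
    for (object_t *o = all_objects; o != NULL; o = o->next)
        printf("survivor %p\n", (void *)o);
    return 0;
}
</syntaxhighlight>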

Systems with virtual memory

{{Main|Memory protection|Shared memory (interprocess communication)}}

Virtual memory is a method of decoupling the memory organization from the physical hardware. Applications operate on memory via virtual addresses: each attempt by the application to access a particular virtual memory address is translated to an actual physical address. This level of indirection gives the operating system fine-grained control over each process's view of memory and the ways in which it may be accessed.
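
A much-simplified single-level translation (real hardware uses multi-level page tables and a TLB; the page size and structure names below are assumptions for the example): the virtual address is split into a page number and an offset, the page number indexes the table, and the frame number found there is recombined with the offset.

<syntaxhighlight lang="c">
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative single-level page table with 4 KiB pages. */
#define PAGE_SHIFT 12
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define NUM_PAGES  1024                           /* covers a 4 MiB virtual space */

typedef struct {
    bool present;                                 /* is a physical frame mapped? */
    uint32_t frame;                               /* physical frame number       */
} pte_t;

static pte_t page_table[NUM_PAGES];

/* Translate a virtual address, or return false on a "page fault". */
static bool translate(uint32_t vaddr, uint32_t *paddr) {
    uint32_t vpn    = vaddr >> PAGE_SHIFT;        /* virtual page number */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);
    if (vpn >= NUM_PAGES || !page_table[vpn].present)
        return false;                             /* unmapped: the OS must intervene */
    *paddr = (page_table[vpn].frame << PAGE_SHIFT) | offset;
    return true;
}

int main(void) {
    page_table[3].present = true;                 /* map virtual page 3...    */
    page_table[3].frame   = 42;                   /* ...to physical frame 42  */
    uint32_t p;
    if (translate(0x3ABC, &p))                    /* page 3, offset 0xABC     */
        printf("virtual 0x3ABC -> physical 0x%X\n", p);
    if (!translate(0x8000, &p))
        printf("virtual 0x8000 -> page fault\n");
    return 0;
}
</syntaxhighlight>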

In virtual memory systems the operating system limits how a process can access memory. This feature, called memory protection, can be used to prevent a process from reading or writing memory that has not been allocated to it, so that malicious or malfunctioning code in one program cannot interfere with the operation of another.
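
On POSIX systems, for instance, a process can ask the operating system to change the protection on its own pages with mprotect; after permissions are revoked, the hardware traps the offending access and the kernel delivers a fault to the process. A minimal illustration (error handling kept brief):

<syntaxhighlight lang="c">
#define _DEFAULT_SOURCE          /* for MAP_ANONYMOUS on glibc with -std=c11 */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Ask the OS for one page, then use memory protection to make it read-only.
 * After the mprotect call, any store to the page is trapped by the hardware
 * and delivered to the process as a fault (SIGSEGV on POSIX systems). */
int main(void) {
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    char *p = mmap(NULL, page, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(p, "writable for now");
    printf("%s\n", p);

    if (mprotect(p, page, PROT_READ) != 0) { perror("mprotect"); return 1; }
    printf("page is now read-only: %s\n", p);   /* reading is still allowed */
    /* p[0] = 'X'; */                           /* a write here would fault */

    munmap(p, page);
    return 0;
}
</syntaxhighlight>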

Even though the memory allocated for specific processes is normally isolated, processes sometimes need to be able to share information. Shared memory is one of the fastest techniques for inter-process communication.
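
As an illustration, POSIX shared memory lets cooperating processes map the same object into their address spaces; the object name "/demo_shm" below is arbitrary, error handling is abbreviated, and a real program would add synchronization between the processes (older glibc versions also require linking with -lrt).

<syntaxhighlight lang="c">
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Create (or open) a named shared-memory object and map it.  A second
 * process opening the same name sees the same bytes, which is why shared
 * memory is among the fastest IPC mechanisms: after setup, communication
 * is ordinary loads and stores with no kernel round trip. */
int main(void) {
    const char *name = "/demo_shm";
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, 4096) != 0) { perror("ftruncate"); return 1; }

    char *region = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(region, "hello from process A");   /* visible to any process mapping /demo_shm */
    printf("wrote: %s\n", region);

    munmap(region, 4096);
    close(fd);
    shm_unlink(name);                          /* remove the name when done */
    return 0;
}
</syntaxhighlight>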

Memory is usually classified by access rate into primary storage and secondary storage. Memory management systems, among other operations, also handle the moving of information between these two levels of memory.

See also

{{Portal|Computer science}}
  • Dynamic array
  • Out of memory

Notes

{{notelist}}

References

1. ^{{cite journal|last=Gibson|first=Steve|author1-link=Steve Gibson (computer programmer)|title=Tech Talk: Placing the IBM/Microsoft XMS Spec Into Perspective|url=https://books.google.com/books?id=ZzoEAAAAMBAJ&pg=PA34|magazine=InfoWorld|date=August 15, 1988}}
2. ^{{Cite journal | doi = 10.1002/spe.4380240602| title = Memory allocation costs in large C and C++ programs| journal = Software: Practice and Experience| volume = 24| issue = 6| pages = 527–542| date=June 1994 | last1 = Detlefs | first1 = D. | last2 = Dosser | first2 = A. | last3 = Zorn | first3 = B. | url = http://www.eecs.northwestern.edu/~robby/uc-courses/15400-2008-spring/spe895.pdf| citeseerx = 10.1.1.30.3073}}
3. ^{{cite book|author1=Abraham Silberschatz|authorlink1=Abraham Silberschatz|author2=Peter B. Galvin|title=Operating system concepts|publisher=Wiley|year=2004|isbn=0-471-69466-5}}


  • Donald Knuth. Fundamental Algorithms, Third Edition. Addison-Wesley, 1997. {{ISBN|0-201-89683-4}}. Section 2.5: Dynamic Storage Allocation, pp. 435–456.
  • Simple Memory Allocation Algorithms{{webarchive |url=https://web.archive.org/web/20160305050619/http://buzzan.tistory.com/m/post/428 |date=5 March 2016}} (originally published on OSDEV Community)
  • {{Cite book | doi = 10.1007/3-540-60368-9_19| chapter = Dynamic storage allocation: A survey and critical review| title = Memory Management| volume = 986| pages = 1–116| series = Lecture Notes in Computer Science| year = 1995| last1 = Wilson | first1 = P. R. | last2 = Johnstone | first2 = M. S. | last3 = Neely | first3 = M. | last4 = Boles | first4 = D. | isbn = 978-3-540-60368-9| citeseerx = 10.1.1.47.275}}
  • {{Cite book | doi = 10.1145/378795.378821| chapter = Composing High-Performance Memory Allocators| title = Proceedings of the ACM SIGPLAN 2001 conference on Programming language design and implementation| conference = PLDI '01| pages = 114–124| date=June 2001 | last1 = Berger | first1 = E. D. | last2 = Zorn | first2 = B. G. | last3 = McKinley | first3 = K. S. | author3-link = Kathryn S. McKinley| isbn = 1-58113-414-2| url = http://www.cs.umass.edu/%7Eemery/pubs/berger-pldi2001.pdf| citeseerx = 10.1.1.1.2112}}
  • {{Cite book | doi = 10.1145/582419.582421| chapter = Reconsidering Custom Memory Allocation| title = Proceedings of the 17th ACM SIGPLAN conference on Object-oriented programming, systems, languages, and applications| conference = OOPSLA '02| pages = 1–12| date=November 2002 | last1 = Berger | first1 = E. D. | last2 = Zorn | first2 = B. G. | last3 = McKinley | first3 = K. S. | author3-link = Kathryn S. McKinley| isbn = 1-58113-471-1| url = http://people.cs.umass.edu/~emery/pubs/berger-oopsla2002.pdf| citeseerx = 10.1.1.119.5298}}

Further reading

{{Wikibooks}}
  • {{cite|last1=Wilson|first1=Paul R.|last2=Johnstone|first2=Mark S.|last3=Neely|first3=Michael|last4=Boles|first4=David|date=September 28–29, 1995|title=Dynamic Storage Allocation: A Survey and Critical Review |url=http://www.cs.northwestern.edu/~pdinda/icsclass/doc/dsa.pdf |access-date=2017-06-03|institution=Department of Computer Sciences University of Texas|publication-place=Austin, Texas}}

External links

  • "Generic Memory Manager" C++ library
  • [https://code.google.com/p/arena-memory-allocation/downloads/list Sample bit-mapped arena memory allocator in C]
  • TLSF: a constant time allocator for real-time systems
  • [https://users.cs.jmu.edu/bernstdh/web/common/lectures/slides_cpp_dynamic-memory.php Slides on Dynamic memory allocation]
  • Inside A Storage Allocator
  • The Memory Management Reference
  • The Memory Management Reference, Beginner's Guide Allocation
  • Linux Memory Management
  • Memory Management For System Programmers
  • VMem - general malloc/free replacement. Fast thread safe C++ allocator
  • A small old site dedicated to memory management
  • [https://www.embedded-office.com/en/blog/dynamic-memory-iec-61508.html Dynamic Memory in IEC61508 Systems]
{{Memory management navbox}}{{Authority control}}

Categories: Memory management | Computer architecture
