Concurrency (computer science)

Contents

  1. History
  2. Issues
  3. Theory
      Models
      Logics
  4. Practice
  5. See also
  6. References
  7. Further reading
  8. External links

{{For|a more practical discussion|Concurrent computing}}

In computer science, concurrency refers to the ability of different parts or units of a program, algorithm, or problem to be executed out of order or in partial order without affecting the final outcome. This allows for parallel execution of the concurrent units, which can significantly improve the overall speed of execution on multi-processor and multi-core systems. In more technical terms, concurrency refers to the decomposability of a program, algorithm, or problem into order-independent or partially ordered components or units.[1]
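For illustration only (a minimal sketch in Go; neither the language nor the code comes from the article), two order-independent units can be executed concurrently and still produce the same final outcome, because each unit writes to its own slot:

<syntaxhighlight lang="go">
package main

import (
	"fmt"
	"sync"
)

// Two order-independent units of work: each writes to its own slot, so they
// can run in either order, or in parallel, without changing the final outcome.
func main() {
	results := make([]int, 2)
	var wg sync.WaitGroup

	wg.Add(2)
	go func() { defer wg.Done(); results[0] = 1 * 10 }()
	go func() { defer wg.Done(); results[1] = 2 * 10 }()
	wg.Wait()

	fmt.Println(results) // always prints [10 20], regardless of schedule
}
</syntaxhighlight>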

A number of mathematical models have been developed for general concurrent computation, including Petri nets, process calculi, the parallel random-access machine model, the actor model, and the Reo Coordination Language.

History

As Leslie Lamport (2015) notes, "While concurrent program execution had been considered for years, the computer science of concurrency began with Edsger Dijkstra's seminal 1965 paper that introduced the mutual exclusion problem. ... The ensuing decades have seen a huge growth of interest in concurrency—particularly in distributed systems. Looking back at the origins of the field, what stands out is the fundamental role played by Edsger Dijkstra".[2]

Issues

Because computations in a concurrent system can interact with each other while being executed, the number of possible execution paths in the system can be extremely large, and the resulting outcome can be indeterminate. Concurrent use of shared resources can be a source of indeterminacy leading to issues such as deadlock and resource starvation.[3]
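The following is an illustrative sketch (in Go, chosen here for concreteness; not part of the article) of such indeterminacy: two goroutines update a shared counter without coordination, so their read-modify-write steps interleave unpredictably and different runs may print different totals.

<syntaxhighlight lang="go">
package main

import (
	"fmt"
	"sync"
)

// Two goroutines increment a shared counter with no coordination. The final
// value is indeterminate: interleaved read-modify-write steps can overwrite
// one another, so different runs may print different totals (a data race).
func main() {
	counter := 0
	var wg sync.WaitGroup

	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 100000; j++ {
				counter++ // unsynchronized access to shared state
			}
		}()
	}
	wg.Wait()
	fmt.Println(counter) // often less than 200000, and varies between runs
}
</syntaxhighlight>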

Design of concurrent systems often entails finding reliable techniques for coordinating their execution, data exchange, memory allocation, and execution scheduling to minimize response time and maximize throughput.[4]
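One common coordination technique is mutual exclusion. As a hedged sketch (again illustrative Go, not the article's own example), guarding the shared counter from the previous sketch with a mutex makes the outcome deterministic:

<syntaxhighlight lang="go">
package main

import (
	"fmt"
	"sync"
)

// The same shared counter, now coordinated with a mutex so that each
// read-modify-write step executes atomically with respect to the others.
func main() {
	counter := 0
	var mu sync.Mutex
	var wg sync.WaitGroup

	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 100000; j++ {
				mu.Lock()
				counter++ // critical section guarded by mu
				mu.Unlock()
			}
		}()
	}
	wg.Wait()
	fmt.Println(counter) // deterministically prints 200000
}
</syntaxhighlight>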

Theory

Concurrency theory has been an active field of research in theoretical computer science. One of the first proposals was Carl Adam Petri's seminal work on Petri nets in the early 1960s. In the years since, a wide variety of formalisms have been developed for modeling and reasoning about concurrency.

Models

A number of formalisms for modeling and understanding concurrent systems have been developed, including:[5]

  • The parallel random-access machine[6]
  • The actor model
  • Computational bridging models such as the bulk synchronous parallel (BSP) model
  • Petri nets
  • Process calculi
    • Calculus of communicating systems (CCS)
    • Communicating sequential processes (CSP) model
    • π-calculus
  • Tuple spaces, e.g., Linda
  • Simple Concurrent Object-Oriented Programming (SCOOP)
  • Reo Coordination Language

Some of these models of concurrency are primarily intended to support reasoning and specification, while others can be used through the entire development cycle, including design, implementation, proof, testing and simulation of concurrent systems. Some of these are based on message passing, while others have different mechanisms for concurrency.
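For example, the message-passing style of CSP and the π-calculus is reflected directly in languages such as Go. The sketch below (illustrative only, not drawn from the article) shows two processes that share no memory and interact solely by exchanging messages over a channel:

<syntaxhighlight lang="go">
package main

import "fmt"

// A producer and a consumer that interact only by message passing over a
// channel, in the spirit of CSP-style models of concurrency.
func main() {
	messages := make(chan int)

	go func() {
		for i := 1; i <= 3; i++ {
			messages <- i // send a message
		}
		close(messages) // signal that no more messages will arrive
	}()

	for m := range messages { // receive until the channel is closed
		fmt.Println("received", m)
	}
}
</syntaxhighlight>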

The proliferation of different models of concurrency has motivated some researchers to develop ways to unify these different theoretical models. For example, Lee and Sangiovanni-Vincentelli have demonstrated that a so-called "tagged-signal" model can be used to provide a common framework for defining the denotational semantics of a variety of different models of concurrency,[7] while Nielsen, Sassone, and Winskel have demonstrated that category theory can be used to provide a similar unified understanding of different models.[8]

The Concurrency Representation Theorem in the actor model provides a fairly general way to represent concurrent systems that are closed in the sense that they do not receive communications from outside. (Other concurrency systems, e.g., process calculi, can be modeled in the actor model using a two-phase commit protocol.[9]) The mathematical denotation denoted by a closed system S is constructed from increasingly better approximations of an initial behavior <math>\bot_S</math>, using a behavior-approximating function <math>\mathrm{progression}_S</math>, to construct a denotation (meaning) for S as follows:[10]

<math>\mathbf{Denote}_S \equiv \bigsqcup_{i \in \omega} \mathrm{progression}_S^{\,i}(\bot_S)</math>

In this way, S can be mathematically characterized in terms of all its possible behaviors.
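Spelled out, the denotation is the least upper bound of an ω-chain of successively better approximations; the following simply restates the formula above:

<math>\bot_S \;\sqsubseteq\; \mathrm{progression}_S(\bot_S) \;\sqsubseteq\; \mathrm{progression}_S^{\,2}(\bot_S) \;\sqsubseteq\; \cdots, \qquad \mathbf{Denote}_S = \bigsqcup_{i \in \omega} \mathrm{progression}_S^{\,i}(\bot_S)</math>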

Logics

Various types of temporal logic[11] can be used to help reason about concurrent systems. Some of these logics, such as linear temporal logic and computation tree logic, allow assertions to be made about the sequences of states that a concurrent system can pass through. Others, such as action computational tree logic, Hennessy–Milner logic, and Lamport's temporal logic of actions, build their assertions from sequences of actions (changes in state). The principal application of these logics is in writing specifications for concurrent systems.[3]
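As a brief illustration (the formulas below are example properties chosen here, not taken from the article), the liveness requirement "every request is eventually granted" can be written in linear temporal logic and in computation tree logic as:

<math>\text{LTL: } \Box(\mathit{request} \rightarrow \Diamond\,\mathit{grant}), \qquad \text{CTL: } \mathrm{AG}(\mathit{request} \rightarrow \mathrm{AF}\,\mathit{grant})</math>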

Practice

{{Unreferenced section|date=April 2007}}

Concurrent programming encompasses programming languages and algorithms used to implement concurrent systems. Concurrent programming is usually considered to be more general than parallel programming because it can involve arbitrary and dynamic patterns of communication and interaction, whereas parallel systems generally have a predefined and well-structured communication pattern. The basic goals of concurrent programming include correctness, performance, and robustness. Concurrent systems such as operating systems and database management systems are generally designed to operate indefinitely, including automatic recovery from failure, and not terminate unexpectedly (see Concurrency control). Some concurrent systems implement a form of transparent concurrency, in which concurrent computational entities may compete for and share a single resource, but the complexities of this competition and sharing are shielded from the programmer.

Because they use shared resources, concurrent systems in general require the inclusion of some kind of arbiter somewhere in their implementation (often in the underlying hardware), to control access to those resources. The use of arbiters introduces the possibility of indeterminacy in concurrent computation which has major implications for practice including correctness and performance. For example, arbitration introduces unbounded nondeterminism which raises issues with model checking because it causes explosion in the state space and can even cause models to have an infinite number of states.

Some concurrent programming models include coprocesses and deterministic concurrency. In these models, threads of control explicitly yield their timeslices, either to the system or to another process.
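A rough sketch of this style (illustrative Go, not the article's own example): the two routines below alternate in a fixed, deterministic order because each one explicitly yields control to the other over unbuffered channels before continuing.

<syntaxhighlight lang="go">
package main

import "fmt"

// Two coprocess-style routines that take turns deterministically: each does
// one step of work and then explicitly yields by passing a token to the
// other over unbuffered channels.
func main() {
	toB := make(chan struct{})
	toA := make(chan struct{})
	done := make(chan struct{})

	go func() { // routine A
		for i := 0; i < 3; i++ {
			fmt.Println("A", i)
			toB <- struct{}{} // yield to B
			<-toA             // wait until B yields back
		}
		close(done)
	}()

	go func() { // routine B
		for i := 0; i < 3; i++ {
			<-toB // wait for A to yield
			fmt.Println("B", i)
			toA <- struct{}{} // yield back to A
		}
	}()

	<-done // output is always A 0, B 0, A 1, B 1, A 2, B 2
}
</syntaxhighlight>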

See also

  • Client–server network nodes
  • Clojure
  • Cluster nodes
  • Concurrency control
  • Concurrent computing
  • Concurrent object-oriented programming
  • Concurrency pattern
  • Chu space
  • Distributed system nodes
  • Go (programming language)
  • Rust (programming language)
  • Elixir (programming language)
  • Gordon Pask
  • OpenMP
  • Parallel computing
  • Partitioned global address space
  • Processes
  • Ptolemy Project
  • International Conference on Concurrency Theory
  • Sheaf (mathematics)
  • Threads
  • X10 (programming language)

References

1. ^{{cite journal|last1=Lamport|first1=Leslie|title=Time, Clocks, and the Ordering of Events in a Distributed System|journal=Communications of the ACM|volume=21|issue=7|pages=558–565|date=July 1978|url=http://research.microsoft.com/en-us/um/people/lamport/pubs/time-clocks.pdf|accessdate=4 February 2016|doi=10.1145/359545.359563}}
2. ^{{cite web|url=http://cacm.acm.org/magazines/2015/6/187316-turing-lecture-the-computer-science-of-concurrency/fulltext |author=Lamport, Leslie |title=Turing Lecture: The Computer Science of Concurrency: The Early Years (Communications of the ACM, Vol. 58 No. 6, June 2015) |publisher=ACM |accessdate=22 Mar 2017 }}
3. ^{{cite journal|last=Cleaveland|first=Rance|author2=Scott Smolka|title=Strategic Directions in Concurrency Research|journal=ACM Computing Surveys|volume=28|issue=4|date=December 1996|pages=607|doi=10.1145/242223.242252}}
4. ^{{cite book | first1=Colin | last1=Campbell | first2=Ralph | last2=Johnson | first3=Ade | last3=Miller | first4=Stephen | last4=Toub | title=Parallel Programming with Microsoft .NET | url=http://msdn.microsoft.com/en-us/library/ff963542.aspx |date=August 2010 | publisher=Microsoft Press | isbn=978-0-7356-5159-3 }}
5. ^{{cite book|last=Filman|first=Robert|author2=Daniel Friedman|title=Coordinated Computing - Tools and Techniques for Distributed Software|year=1984|publisher=McGraw-Hill|isbn=978-0-07-022439-1|url=http://ic.arc.nasa.gov/people/filman/text/dpl/dpl.html|deadurl=yes|archiveurl=https://web.archive.org/web/20070516135233/http://ic.arc.nasa.gov/people/filman/text/dpl/dpl.html|archivedate=2007-05-16|df=}}
6. ^{{cite book | first=Jörg | last=Keller |author2=Christoph Keßler |author3=Jesper Träff | title=Practical PRAM Programming | publisher=John Wiley and Sons | year=2001}}
7. ^{{cite journal|last=Lee|first=Edward|author2=Alberto Sangiovanni-Vincentelli| title= A Framework for Comparing Models of Computation|journal=IEEE Transactions on CAD|volume=17|issue=12|pages=1217–1229|date=December 1998|doi=10.1109/43.736561}}
8. ^{{cite conference|author = Mogens Nielsen |author2=Vladimiro Sassone |author3=Glynn Winskel| title = Relationships Between Models of Concurrency |booktitle = REX School/Symposium|year = 1993|url = http://citeseer.ist.psu.edu/article/nielsen94relationships.html}}
9. ^Frederick Knabe. A Distributed Protocol for Channel-Based Communication with Choice PARLE 1992.
10. ^{{Cite journal|author=William Clinger|authorlink=William Clinger (computer scientist)|title=Foundations of Actor Semantics|publisher=MIT|version=Mathematics Doctoral Dissertation|date=June 1981|hdl=1721.1/6935}}
11. ^{{cite book|first=Colin|last=Stirling|title=Modal and Temporal Properties of Processes|publisher=Springer|year=2001|isbn=978-0-387-98717-0}}

Further reading

  • {{cite book |last=Lynch |first=Nancy A. |title=Distributed Algorithms |publisher=Morgan Kaufmann |year=1996 |isbn=978-1-55860-348-6}}
  • {{cite book |last1=Tanenbaum |first1=Andrew S. |last2=Van Steen |first2=Maarten |title=Distributed Systems: Principles and Paradigms |publisher=Prentice Hall |year=2002 |isbn=978-0-13-088893-8}}
  • {{cite book |last=Kurki-Suonio |first=Reino |title=A Practical Theory of Reactive Systems |publisher=Springer |year=2005 |isbn=978-3-540-23342-8}}
  • {{cite book |last=Garg |first=Vijay K. |title=Elements of Distributed Computing |publisher=Wiley-IEEE Press |year=2002 |isbn=978-0-471-03600-5}}
  • {{cite book |last=Magee |first=Jeff |author2=Kramer, Jeff |title=Concurrency: State Models and Java Programming |publisher=Wiley |year=2006 |isbn=978-0-470-09355-9}}
  • Distefano, S., & Bruneo, D. (2015). Quantitative assessments of distributed systems: Methodologies and techniques (1st ed.). Somerset: John Wiley & Sons. {{ISBN|9781119131144}}
  • Bhattacharyya, S. S. (2013). Handbook of signal processing systems (2nd ed.). New York, NY: Springer. doi:10.1007/978-1-4614-6859-2. {{ISBN|9781461468592}}
  • Wolter, K. (2012). Resilience assessment and evaluation of computing systems (1st ed.). London; Berlin: Springer. {{ISBN|9783642290329}}

External links

  • [https://web.archive.org/web/20060128114620/http://vl.fmnet.info/concurrent/ Concurrent Systems] at The WWW Virtual Library
  • Concurrency patterns presentation given at scaleconf

[[Category:Concurrency (computer science)]]
[[Category:Edsger W. Dijkstra]]
