Barrier (computer science)

  1. Implementation

      Sense-Reversal Centralized Barrier
      Combining Tree Barrier
      Hardware Barrier Implementation

  2. See also

  3. References

In parallel computing, a barrier is a type of synchronization method. A barrier for a group of threads or processes in the source code means that any thread/process must stop at that point and cannot proceed until all other threads/processes reach the barrier.
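For illustration only, the following minimal sketch shows the concept using C++20's std::barrier; the thread count, lambda, and printed output are illustrative and not part of the implementations cited below.

#include <barrier>
#include <cstdio>
#include <thread>
#include <vector>

int main()
{
    const int num_threads = 4;            // illustrative group size
    std::barrier sync_point(num_threads); // one barrier shared by the whole group

    std::vector<std::thread> workers;
    for (int id = 0; id < num_threads; ++id)
    {
        workers.emplace_back([&, id]
        {
            std::printf("thread %d: before barrier\n", id);
            sync_point.arrive_and_wait();  // block until all threads have arrived
            std::printf("thread %d: after barrier\n", id);
        });
    }
    for (auto& t : workers) t.join();
}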

Many collective routines and directive-based parallel languages impose implicit barriers. For example, a parallel do loop in Fortran with OpenMP will not be allowed to continue on any thread until the last iteration is completed. This is in case the program relies on the result of the loop immediately after its completion. In message passing, any global communication (such as reduction or scatter) may imply a barrier.
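As a rough C/C++ analogue of the Fortran example (a sketch assuming an OpenMP-capable compiler; the function and array names are illustrative), each worksharing loop ends with an implicit barrier, which can be removed with the nowait clause:

void scale_and_shift(float* a, float* b, int n)
{
    #pragma omp parallel
    {
        #pragma omp for           // implicit barrier at the end of this loop
        for (int i = 0; i < n; ++i)
            a[i] = a[i] * 2.0f;

        // every thread sees the fully updated array a at this point

        #pragma omp for nowait    // 'nowait' removes the implicit barrier
        for (int i = 0; i < n; ++i)
            b[i] = b[i] + 1.0f;
    }
}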

Implementation

The basic barrier has mainly two variables: one records the pass/stop state of the barrier, and the other keeps the total number of threads that have entered the barrier. The barrier state is initialized to "stop" by the first threads coming into the barrier. Whenever a thread enters, it checks the number of threads already in the barrier; only if it is the last one does the thread set the barrier state to "pass", so that all the threads can get out of the barrier. Otherwise, the incoming thread is trapped in the barrier and keeps testing whether the barrier state has changed from "stop" to "pass"; it gets out only when the barrier state changes to "pass". The following C++ code demonstrates this procedure.[1][2]

#include <mutex>

struct barrier_type
{
    // how many processors have entered the barrier
    // initialize to 0
    int arrive_counter;
    // how many processors have exited the barrier
    // initialize to p
    int leave_counter;
    int flag;
    std::mutex lock;
};

// barrier for p processors
void barrier(barrier_type* b, int p)
{
    b->lock.lock();
    if (b->leave_counter == p)
    {
        if (b->arrive_counter == 0) // no other threads in barrier
        {
            b->flag = 0; // first arriver clears flag
        }
        else
        {
            b->lock.unlock();
            while (b->leave_counter != p); // wait for all to leave before clearing
            b->lock.lock();
            b->flag = 0; // first arriver clears flag
        }
    }
    b->arrive_counter++;
    int arrived = b->arrive_counter;
    b->lock.unlock();
    if (arrived == p) // last arriver sets flag
    {
        b->arrive_counter = 0;
        b->leave_counter = 1;
        b->flag = 1;
    }
    else
    {
        while (b->flag == 0); // wait for flag
        b->lock.lock();
        b->leave_counter++;
        b->lock.unlock();
    }
}
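A minimal driver for the code above might look as follows; the thread count and the printed phases are illustrative, and the fields must be initialized as the comments above require (arrive_counter = 0, leave_counter = p). Note that the spin loops above read plain int fields, which is formally a data race under the C++ memory model; a production implementation would make them std::atomic.

#include <cstdio>
#include <thread>
#include <vector>

int main()
{
    const int p = 4;        // number of participating threads (illustrative)
    barrier_type b;
    b.arrive_counter = 0;   // initialize to 0, as required above
    b.leave_counter = p;    // initialize to p, as required above
    b.flag = 0;

    std::vector<std::thread> threads;
    for (int id = 0; id < p; ++id)
    {
        threads.emplace_back([&, id]
        {
            std::printf("thread %d: phase 1\n", id);
            barrier(&b, p); // no thread starts phase 2 before all finish phase 1
            std::printf("thread %d: phase 2\n", id);
        });
    }
    for (auto& t : threads) t.join();
}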

The potential problems are as follows:

  1. When sequential barriers using the same pass/block state variable are implemented, a deadlock could happen in the first barrier whenever a thread reaches the second barrier while some threads have still not got out of the first one.
  2. Because all the threads repeatedly access the same global variable for pass/stop, the communication traffic is rather high, which decreases the scalability.

The following Sense-Reversal Centralized Barrier is designed to resolve the first problem. The second problem can be resolved by regrouping the threads and using a multi-level barrier, e.g. a Combining Tree Barrier. Hardware implementations may also have the advantage of higher scalability.

Sense-Reversal Centralized Barrier

A Sense-Reversal Centralized Barrier solves the potential deadlock problem that arises when sequential barriers are used. Instead of using the same value to represent pass/stop, sequential barriers use opposite values for the pass/stop state. For example, if barrier 1 uses 0 to stop the threads, barrier 2 will use 1 to stop threads, and barrier 3 will use 0 to stop threads again, and so on.[3] The following C++ code demonstrates this.[1][4][2]

#include <atomic>
#include <mutex>

struct barrier_type
{
    int counter; // initialize to 0
    int flag;    // initialize to 0
    std::mutex lock;
};

thread_local int local_sense = 0; // private per processor (thread)

// barrier for p processors
void barrier(barrier_type* b, int p)
{
    local_sense = 1 - local_sense;
    b->lock.lock();
    b->counter++;
    int arrived = b->counter;
    if (arrived == p) // last arriver sets flag
    {
        b->lock.unlock();
        b->counter = 0;
        // memory fence to ensure that the change to counter
        // is seen before the change to flag
        std::atomic_thread_fence(std::memory_order_seq_cst);
        b->flag = local_sense;
    }
    else
    {
        b->lock.unlock();
        while (b->flag != local_sense); // wait for flag
    }
}
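Because each thread flips its own local_sense on every call, the same barrier object can be reused in consecutive synchronization episodes without the deadlock described above. A minimal sketch of such reuse follows (thread count and step count are illustrative; the same caveat about spinning on plain int fields applies):

#include <cstdio>
#include <thread>
#include <vector>

int main()
{
    const int p = 4;   // number of participating threads (illustrative)
    barrier_type b;
    b.counter = 0;     // initialize to 0, as required above
    b.flag = 0;        // initialize to 0, as required above

    std::vector<std::thread> threads;
    for (int id = 0; id < p; ++id)
    {
        threads.emplace_back([&, id]
        {
            for (int step = 0; step < 3; ++step) // back-to-back barrier episodes
            {
                std::printf("thread %d: step %d\n", id, step);
                barrier(&b, p); // sense reversal keeps reuse deadlock-free
            }
        });
    }
    for (auto& t : threads) t.join();
}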

Combining Tree Barrier

A Combining Tree Barrier is a hierarchical way of implementing a barrier that addresses the scalability problem by avoiding the case where all threads spin on the same location.[3]

In a k-Tree Barrier, all threads are equally divided into subgroups of k threads, and a first round of synchronization is done within these subgroups. Once all subgroups have finished their synchronization, the first thread in each subgroup enters the second level for further synchronization. In the second level, as in the first, the threads form new subgroups of k threads and synchronize within groups, sending one thread from each subgroup on to the next level, and so on. Eventually, in the final level there is only one subgroup left to be synchronized. After the final-level synchronization, the release signal is transmitted back through the levels and all threads get past the barrier.[4][5]
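The following is a minimal sketch of such a combining tree, assuming C++11 atomics; the node layout, field names, and recursive climb are illustrative rather than the exact algorithm of the cited references. The last thread to arrive in each subgroup climbs toward the root, and the release then propagates back down node by node. Building the tree (allocating nodes, linking parents, setting each node's fan_in, and giving each thread a leaf plus a thread-local sense) is left to setup code.

#include <atomic>

// One node of the combining tree; a subgroup of up to k threads
// (or k child representatives) synchronizes at each node.
struct tree_node
{
    std::atomic<int> count{0};   // arrivals at this node in the current episode
    std::atomic<int> sense{0};   // release flag for this node's subgroup
    tree_node* parent = nullptr; // nullptr at the root
    int fan_in = 0;              // subgroup size k (possibly smaller at the root)
};

// Called by the single representative of a subgroup once its whole subgroup
// has arrived; climbs toward the root, then propagates the release downward.
static void combine_up(tree_node* node, int my_sense)
{
    tree_node* p = node->parent;
    if (p == nullptr)
    {
        node->sense.store(my_sense);               // root reached: start the release
        return;
    }
    if (p->count.fetch_add(1) + 1 == p->fan_in)
    {
        p->count.store(0);                         // last arriver at the parent
        combine_up(p, my_sense);                   // keep climbing
    }
    else
    {
        while (p->sense.load() != my_sense) {}     // wait for the release from above
    }
    node->sense.store(my_sense);                   // release this node's subgroup
}

// Each thread calls this with its own leaf node and its thread-local sense.
void tree_barrier(tree_node* leaf, int& my_sense)
{
    my_sense = 1 - my_sense;                       // sense reversal, as above
    if (leaf->count.fetch_add(1) + 1 == leaf->fan_in)
    {
        leaf->count.store(0);
        combine_up(leaf, my_sense);                // last in the subgroup climbs
    }
    else
    {
        while (leaf->sense.load() != my_sense) {}  // others spin on their own node
    }
}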

Hardware Barrier Implementation

The hardware barrier uses hardware to implement the above basic barrier model.[1]

The simplest hardware implementation uses dedicated wires to transmit the signals needed to implement a barrier. The dedicated wire performs OR/AND operations to act as the pass/block flag and thread counter. For small systems, such a model works and communication speed is not a major concern. In large multiprocessor systems, however, this hardware design can give the barrier implementation high latency. Connecting the processors through a network is one implementation used to lower the latency, which is analogous to the Combining Tree Barrier.[6]

See also

  • Fork–join model
  • Rendezvous (Plan 9)
  • Memory barrier

References

1. Solihin, Yan (2015). Fundamentals of Parallel Multicore Architecture (1st ed.). Chapman & Hall/CRC. ISBN 978-1482211184. http://dl.acm.org/citation.cfm?id=2855046
2. "Implementing Barriers". Carnegie Mellon University. http://15418.courses.cs.cmu.edu/spring2013/article/43
3. Culler, David (1998). Parallel Computer Architecture: A Hardware/Software Approach. ISBN 978-1558603431.
4. Nanjegowda, Ramachandra; Hernandez, Oscar; Chapman, Barbara; Jin, Haoqiang H. (2009). Evolving OpenMP in an Age of Extreme Parallelism. In Müller, Matthias S.; de Supinski, Bronis R.; Chapman, Barbara M. (eds.). Lecture Notes in Computer Science. Springer Berlin Heidelberg. pp. 42–52. doi:10.1007/978-3-642-02303-3_4. ISBN 9783642022845.
5. Nikolopoulos, Dimitrios S.; Papatheodorou, Theodore S. (1999). "A Quantitative Architectural Evaluation of Synchronization Algorithms and Disciplines on ccNUMA Systems: The Case of the SGI Origin2000". Proceedings of the 13th International Conference on Supercomputing (ICS '99). New York, NY, USA: ACM. pp. 319–328. doi:10.1145/305138.305209. ISBN 978-1581131642. https://pure.qub.ac.uk/portal/en/publications/a-quantitative-evaluation-of-synchronization-algorithms-and-disciplines-on-ccnuma-systems-the-case-of-the-sgi-origin2000(2b9278d2-85fc-4ce9-96ed-725524379f9a).html
6. N.R. Adiga, et al. "An Overview of the BlueGene/L Supercomputer". Proceedings of the Conference on High Performance Networking and Computing, 2002.
7. "Parallel Programming with Barrier Synchronization" (March 2012). http://blogs.sourceallies.com/2012/03/parallel-programming-with-barrier-synchronization/
