Join-based tree algorithms
In computer science, join-based tree algorithms are a class of algorithms for self-balancing binary search trees. The algorithmic framework is based on a single operation join.[1] Under this framework, the join operation captures all balancing criteria of the different balancing schemes, and all other functions have generic implementations across the different balancing schemes. The join-based algorithms can be applied to at least four balancing schemes: AVL trees, red-black trees, weight-balanced trees and treaps.

The join operation takes as input two balanced binary trees {{math|t1}} and {{math|t2}} of the same balancing scheme, and a key {{mvar|k}}, and outputs a new balanced binary tree {{mvar|t}} whose in-order traversal is the in-order traversal of {{math|t1}}, then {{mvar|k}}, then the in-order traversal of {{math|t2}}. In particular, if the trees are search trees, which means that the in-order traversal of the trees maintains a total ordering on keys, the input must satisfy the condition that all keys in {{math|t1}} are smaller than {{mvar|k}} and all keys in {{math|t2}} are greater than {{mvar|k}}.

== History ==

The join operation was first defined by Tarjan[2] on red-black trees, where it runs in worst-case logarithmic time. Later Sleator and Tarjan[3] described a join algorithm for splay trees which runs in amortized logarithmic time. Later Adams[4] extended join to weight-balanced trees and used it for fast set-set functions including union, intersection and set difference. In 1998, Blelloch and Reid-Miller extended join to treaps, and proved the bound of the set functions to be O(m log(n/m + 1)) for two trees of sizes m and n (m ≤ n), which is optimal in the comparison model. They also brought up parallelism in Adams' algorithm by using a divide-and-conquer scheme. In 2016, Blelloch et al. formally proposed the join-based algorithms, and formalized the join algorithm for four different balancing schemes: AVL trees, red-black trees, weight-balanced trees and treaps. In the same work they proved that Adams' algorithms on union, intersection and difference are work-optimal on all four balancing schemes.

== Join algorithms ==

The function join handles all rebalancing, and thus depends on the balancing scheme of its input. If the two trees are balanced with respect to each other, join simply creates a new node with left subtree {{math|t1}}, root {{mvar|k}} and right subtree {{math|t2}}. Suppose that {{math|t1}} is heavier (this "heavier" depends on the balancing scheme) than {{math|t2}} (the other case is symmetric). Join follows the right spine of {{math|t1}} until it reaches a node {{mvar|c}} which is balanced with {{math|t2}}. At this point a new node with left child {{mvar|c}}, root {{mvar|k}} and right child {{math|t2}} is created to replace {{mvar|c}}. The new node may invalidate the balancing invariant; this is fixed with rotations. The following are the join algorithms for the different balancing schemes.

The join algorithm for AVL trees:

 '''function''' joinRightAVL(TL, k, TR)
     (l, k', c) = expose(TL)
     '''if''' (h(c) <= h(TR) + 1)
         T' = Node(c, k, TR)
         '''if''' (h(T') <= h(l) + 1) '''then''' '''return''' Node(l, k', T')
         '''else''' '''return''' rotateLeft(Node(l, k', rotateRight(T')))
     '''else'''
         T' = joinRightAVL(c, k, TR)
         T'' = Node(l, k', T')
         '''if''' (h(T') <= h(l) + 1) '''return''' T''
         '''else''' '''return''' rotateLeft(T'')
 
 '''function''' joinLeftAVL(TL, k, TR)
     /* symmetric to joinRightAVL */
 
 '''function''' join(TL, k, TR)
     '''if''' (h(TL) > h(TR) + 1) '''return''' joinRightAVL(TL, k, TR)
     '''if''' (h(TR) > h(TL) + 1) '''return''' joinLeftAVL(TL, k, TR)
     '''return''' Node(TL, k, TR)

Here h(v) denotes the height of a node v. expose(v)=(l,k,r) means to extract a tree node v's left child l, the key k of the node, and the right child r. Node(l,k,r) means to create a node with left child l, key k, and right child r.
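For concreteness, the following is a minimal Python sketch of the AVL join above. The class Node and the helpers height, rotate_left and rotate_right are illustrative names introduced here (they are not taken from any library); nodes store their height so the balance checks take constant time, and rotations rebuild nodes so the stored heights stay consistent.

<syntaxhighlight lang="python">
# A sketch of join for AVL trees, mirroring the pseudocode above.
# Node, height, rotate_left and rotate_right are illustrative helpers.

class Node:
    def __init__(self, left, key, right):
        self.left = left
        self.key = key
        self.right = right
        self.height = 1 + max(height(left), height(right))

def height(t):
    # Height of a possibly empty subtree; an empty tree has height 0.
    return t.height if t is not None else 0

def rotate_left(t):
    # Single left rotation; returns the new root of the rotated subtree.
    r = t.right
    return Node(Node(t.left, t.key, r.left), r.key, r.right)

def rotate_right(t):
    # Single right rotation; returns the new root of the rotated subtree.
    l = t.left
    return Node(l.left, l.key, Node(l.right, t.key, t.right))

def join_right_avl(tl, k, tr):
    # Precondition: height(tl) > height(tr) + 1. Walk down tl's right spine
    # until a subtree c balanced with tr is found, attach (c, k, tr) there,
    # and rebalance on the way back up.
    l, kp, c = tl.left, tl.key, tl.right
    if height(c) <= height(tr) + 1:
        t1 = Node(c, k, tr)
        if height(t1) <= height(l) + 1:
            return Node(l, kp, t1)
        return rotate_left(Node(l, kp, rotate_right(t1)))  # double rotation
    t1 = join_right_avl(c, k, tr)
    if height(t1) <= height(l) + 1:
        return Node(l, kp, t1)
    return rotate_left(Node(l, kp, t1))

def join_left_avl(tl, k, tr):
    # Symmetric to join_right_avl, walking down tr's left spine.
    l, kp, r = tr.left, tr.key, tr.right
    if height(l) <= height(tl) + 1:
        t1 = Node(tl, k, l)
        if height(t1) <= height(r) + 1:
            return Node(t1, kp, r)
        return rotate_right(Node(rotate_left(t1), kp, r))  # double rotation
    t1 = join_left_avl(tl, k, l)
    if height(t1) <= height(r) + 1:
        return Node(t1, kp, r)
    return rotate_right(Node(t1, kp, r))

def join(tl, k, tr):
    # All keys in tl must be smaller than k, and all keys in tr greater.
    if height(tl) > height(tr) + 1:
        return join_right_avl(tl, k, tr)
    if height(tr) > height(tl) + 1:
        return join_left_avl(tl, k, tr)
    return Node(tl, k, tr)
</syntaxhighlight>

Joining two AVL trees in this way costs time proportional to the difference of their heights.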
The join algorithm for red-black trees:

 '''function''' joinRightRB(TL, k, TR)
     '''if''' (r(TL) = ⌊r(TR)/2⌋ × 2)
         '''return''' Node(TL, ⟨k, red⟩, TR)
     '''else'''
         (L', ⟨k', c'⟩, R') = expose(TL)
         T' = Node(L', ⟨k', c'⟩, joinRightRB(R', k, TR))
         '''if''' (c' = black) '''and''' (T'.right.color = T'.right.right.color = red)
             T'.right.right.color = black
             '''return''' rotateLeft(T')
         '''else''' '''return''' T'
 
 '''function''' joinLeftRB(TL, k, TR)
     /* symmetric to joinRightRB */
 
 '''function''' join(TL, k, TR)
     '''if''' (⌊r(TL)/2⌋ > ⌊r(TR)/2⌋)
         T' = joinRightRB(TL, k, TR)
         '''if''' (T'.color = red) '''and''' (T'.right.color = red)
             T'.color = black
         '''return''' T'
     '''else if''' (⌊r(TR)/2⌋ > ⌊r(TL)/2⌋)
         /* symmetric */
     '''else if''' (TL.color = black) '''and''' (TR.color = black)
         '''return''' Node(TL, ⟨k, red⟩, TR)
     '''else''' '''return''' Node(TL, ⟨k, black⟩, TR)

Here r(v) denotes the rank of a node v, defined from its black height: a black node has rank twice its black height, and a red node has rank one more than twice the black height of its (black) children, so ⌊r(v)/2⌋ recovers the black height in either case. expose(v)=(l,⟨k,c⟩,r) means to extract a tree node v's left child l, the key k of the node, the color c of the node and the right child r. Node(l,⟨k,c⟩,r) means to create a node with left child l, key k, color c and right child r.

The join algorithm for weight-balanced trees:

 '''function''' joinRightWB(TL, k, TR)
     (l, k', c) = expose(TL)
     '''if''' (balance(|TL|, |TR|))
         '''return''' Node(TL, k, TR)
     '''else'''
         T' = joinRightWB(c, k, TR)
         (l1, k1, r1) = expose(T')
         '''if''' (balance(|l|, |T'|))
             '''return''' Node(l, k', T')
         '''else if''' (balance(|l|, |l1|) '''and''' balance(|l|+|l1|, |r1|))
             '''return''' rotateLeft(Node(l, k', T'))
         '''else'''
             '''return''' rotateLeft(Node(l, k', rotateRight(T')))
 
 '''function''' joinLeftWB(TL, k, TR)
     /* symmetric to joinRightWB */
 
 '''function''' join(TL, k, TR)
     '''if''' (heavy(TL, TR)) '''return''' joinRightWB(TL, k, TR)
     '''if''' (heavy(TR, TL)) '''return''' joinLeftWB(TL, k, TR)
     '''return''' Node(TL, k, TR)

Here balance(x, y) means that the two weights x and y satisfy the weight-balance criterion, and heavy(T1, T2) means that T1 is heavier than T2 under that criterion. expose(v)=(l,k,r) means to extract a tree node v's left child l, the key k of the node and the right child r. Node(l,k,r) means to create a node with left child l, key k and right child r.

== Join-based algorithms ==

In the following, expose(v)=(l,k,r) means to extract a tree node v's left child l, the key k of the node and the right child r. Node(l,k,r) means to create a node with left child l, key k and right child r. right(v) and left(v) extract the right child and the left child of a tree node v, respectively, and k(v) extracts the key of a node v. "s1 || s2" means that the two statements s1 and s2 can run in parallel.

=== Split ===

To split a tree into two trees, those with keys smaller than x and those with keys larger than x, we first trace a path from the root by searching for x in the tree. Along this path, all values less than x are found to its left, and all values greater than x to its right. By applying join, all the subtrees on the left side are merged bottom-up, using the keys on the path as intermediate nodes, to form the left tree; the right tree is formed symmetrically. For some applications, split also returns a boolean value denoting whether x appears in the tree. The cost of split is O(log n), the order of the height of the tree. The split algorithm is as follows:

 '''function''' split(T, k)
     '''if''' (T = nil) '''return''' (nil, false, nil)
     (L, m, R) = expose(T)
     '''if''' (k = m) '''return''' (L, true, R)
     '''if''' (k < m)
         (L', b, R') = split(L, k)
         '''return''' (L', b, join(R', m, R))
     '''if''' (k > m)
         (L', b, R') = split(R, k)
         '''return''' (join(L, m, L'), b, R')
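As a concrete counterpart to the pseudocode, here is a minimal Python sketch of split. It assumes the Node class and a scheme-specific join(tl, k, tr) such as the AVL join sketched earlier; nothing in split itself depends on the balancing scheme.

<syntaxhighlight lang="python">
# A sketch of join-based split; assumes `join` (and its Node class) from the
# AVL sketch above, but any join with the same interface can be plugged in.

def split(t, k):
    # Returns (lt, found, gt): the tree of keys < k, whether k was present,
    # and the tree of keys > k.
    if t is None:
        return (None, False, None)
    l, m, r = t.left, t.key, t.right
    if k == m:
        return (l, True, r)
    if k < m:
        lt, found, gt = split(l, k)
        # Subtrees hanging to the right of the search path are joined back
        # together, using the keys on the path (here m) as middle keys.
        return (lt, found, join(gt, m, r))
    lt, found, gt = split(r, k)
    # Symmetric case for the left side of the search path.
    return (join(l, m, lt), found, gt)
</syntaxhighlight>

The costs of the joins performed along the search path telescope, which is what gives the O(log n) bound stated above.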
=== Join2 ===

This function is defined similarly to join but without the middle key. It first splits out the last key k of the left tree, and then joins the rest of the left tree with the right tree using k. The algorithm is as follows:

 '''function''' splitLast(T)
     (L, k, R) = expose(T)
     '''if''' (R = nil) '''return''' (L, k)
     (T', k') = splitLast(R)
     '''return''' (join(L, k, T'), k')
 
 '''function''' join2(L, R)
     '''if''' (L = nil) '''return''' R
     (L', k) = splitLast(L)
     '''return''' join(L', k, R)

The cost is O(log n) for a tree of size n.

=== Insert and delete ===

The insertion and deletion algorithms, when making use of join, can be independent of the balancing scheme. For an insertion, the algorithm compares the key to be inserted with the key in the root, inserts it into the left/right subtree if the key is smaller/greater than the key in the root, and joins the two subtrees back with the root. A deletion compares the key to be deleted with the key in the root. If they are equal, return join2 on the two subtrees. Otherwise, delete the key from the corresponding subtree, and join the two subtrees back with the root. The algorithms are as follows:

 '''function''' insert(T, k)
     '''if''' (T = nil) '''return''' Node(nil, k, nil)
     (L, k', R) = expose(T)
     '''if''' (k < k') '''return''' join(insert(L, k), k', R)
     '''if''' (k > k') '''return''' join(L, k', insert(R, k))
     '''return''' T
 
 '''function''' delete(T, k)
     '''if''' (T = nil) '''return''' nil
     (L, k', R) = expose(T)
     '''if''' (k < k') '''return''' join(delete(L, k), k', R)
     '''if''' (k > k') '''return''' join(L, k', delete(R, k))
     '''return''' join2(L, R)

Both insertion and deletion require O(log n) time for a tree of size n.

=== Set-set functions ===

Several set operations have been defined on weight-balanced trees: union, intersection and set difference. The union of two weight-balanced trees {{math|t1}} and {{math|t2}} representing sets {{mvar|A}} and {{mvar|B}} is a tree {{mvar|t}} that represents {{math|A ∪ B}}. The following recursive function computes this union:

 '''function''' union(t1, t2)
     '''if''' (t1 = nil) '''return''' t2
     '''if''' (t2 = nil) '''return''' t1
     (t<, b, t>) = split(t2, t1.root)
     nl = union(left(t1), t<) || nr = union(right(t1), t>)
     '''return''' join(nl, t1.root, nr)

Similarly, the algorithms for intersection and set difference are as follows:

 '''function''' intersection(t1, t2)
     '''if''' (t1 = nil '''or''' t2 = nil) '''return''' nil
     (t<, b, t>) = split(t2, t1.root)
     nl = intersection(left(t1), t<) || nr = intersection(right(t1), t>)
     '''if''' (b) '''return''' join(nl, t1.root, nr)
     '''else''' '''return''' join2(nl, nr)
 
 '''function''' difference(t1, t2)
     '''if''' (t1 = nil) '''return''' nil
     '''if''' (t2 = nil) '''return''' t1
     (t<, b, t>) = split(t2, t1.root)
     nl = difference(left(t1), t<) || nr = difference(right(t1), t>)
     '''return''' join2(nl, nr)

The complexity of each of union, intersection and difference is O(m log(n/m + 1)) for two weight-balanced trees of sizes m and n (m ≤ n). This complexity is optimal in terms of the number of comparisons. More importantly, since the recursive calls to union, intersection or difference are independent of each other, they can be executed in parallel with a parallel depth O(log m log n).[1] When m = 1, the join-based implementation applies the same computation as in a single-element insertion or deletion if the root of the larger tree is used to split the smaller tree.

=== Build ===

The algorithm for building a tree can make use of the union algorithm, using a divide-and-conquer scheme:

 '''function''' build(A[], n)
     '''if''' (n = 0) '''return''' nil
     '''if''' (n = 1) '''return''' Node(nil, A[0], nil)
     L = build(A, n/2) || R = build(A+n/2, n-n/2)
     '''return''' union(L, R)

This algorithm costs O(n log n) work and has O(log<sup>3</sup> n) depth. A more efficient algorithm makes use of a parallel sorting algorithm:

 '''function''' buildSorted(A[], n)
     '''if''' (n = 0) '''return''' nil
     '''if''' (n = 1) '''return''' Node(nil, A[0], nil)
     L = buildSorted(A, n/2) || R = buildSorted(A+n/2+1, n-n/2-1)
     '''return''' join(L, A[n/2], R)
 
 '''function''' build(A[], n)
     A' = sort(A, n)
     '''return''' buildSorted(A', n)

This algorithm costs O(n log n) work and has O(log n) depth, assuming the sorting algorithm has O(n log n) work and O(log n) depth.
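To illustrate how these routines compose, here is a minimal Python sketch of the join-based insert and union described above, reusing the Node, join and split from the earlier sketches. The two recursive calls in union are written sequentially here; a parallel implementation would fork them, as the "||" in the pseudocode indicates.

<syntaxhighlight lang="python">
# Sketches of join-based insert and union; assumes Node, join and split from
# the earlier sketches. Neither function depends on the balancing scheme.

def insert(t, k):
    # Recurse toward k's position and rebuild the path with join;
    # the balancing work is entirely hidden inside join.
    if t is None:
        return Node(None, k, None)
    l, kp, r = t.left, t.key, t.right
    if k < kp:
        return join(insert(l, k), kp, r)
    if k > kp:
        return join(l, kp, insert(r, k))
    return t  # key already present

def union(t1, t2):
    # Split t2 by t1's root key, union the halves with t1's subtrees
    # recursively, and join the results around the root key.
    if t1 is None:
        return t2
    if t2 is None:
        return t1
    lt, _, gt = split(t2, t1.key)
    nl = union(t1.left, lt)   # independent of the next call, so parallelizable
    nr = union(t1.right, gt)
    return join(nl, t1.key, nr)
</syntaxhighlight>

Starting from an empty tree (None), repeated calls to insert build a balanced search tree, and union merges two such trees without inspecting their internal balance information.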
=== Filter ===

This function selects all entries in a tree satisfying a predicate f, and returns a tree containing all selected entries. It recursively filters the two subtrees and joins them with the root if the root satisfies f; otherwise it applies join2 to the two subtrees:

 '''function''' filter(T, f)
     '''if''' (T = nil) '''return''' nil
     L = filter(left(T), f) || R = filter(right(T), f)
     '''if''' (f(k(T))) '''return''' join(L, k(T), R)
     '''else''' '''return''' join2(L, R)

This algorithm costs O(n) work and has polylogarithmic depth on a tree of size n, assuming f has constant cost.

== Used in libraries ==

The join-based algorithms are used to support the interfaces for sets, maps, and augmented maps[5] in libraries such as Hackage, SML/NJ, and PAM.[6]

== Notes ==

{{notelist}}

== References ==

1. ^ {{citation | last1 = Blelloch | first1 = Guy E. | last2 = Ferizovic | first2 = Daniel | last3 = Sun | first3 = Yihan | contribution = Just Join for Parallel Ordered Sets | doi = 10.1145/2935764.2935768 | isbn = 978-1-4503-4210-0 | pages = 253–264 | publisher = ACM | title = Proc. of the 28th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA 2016) | year = 2016 | url = https://arxiv.org/pdf/1602.02120}}
2. ^ {{citation | last1 = Tarjan | first1 = Robert Endre | title = Data Structures and Network Algorithms | pages = 45–56 | publisher = SIAM | year = 1983}}
3. ^ {{citation | last1 = Sleator | first1 = Daniel Dominic | last2 = Tarjan | first2 = Robert Endre | contribution = Self-adjusting binary search trees | title = Journal of the ACM (JACM) | publisher = ACM | year = 1985}}
4. ^ {{citation | last1 = Adams | first1 = Stephen | title = Implementing sets efficiently in a functional language | publisher = Citeseer | year = 1992 | url = http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.501.8427}}
5. ^ {{citation | last1 = Blelloch | first1 = Guy E. | last2 = Ferizovic | first2 = Daniel | last3 = Sun | first3 = Yihan | contribution = PAM: parallel augmented maps | pages = 290–304 | title = Proceedings of the 23rd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming | publisher = ACM | year = 2018}}
6. ^ {{citation | last1 = Blelloch | first1 = Guy E. | last2 = Ferizovic | first2 = Daniel | last3 = Sun | first3 = Yihan | contribution = PAM: parallel augmented maps | pages = 290–304 | title = Proceedings of the 23rd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming | publisher = ACM | year = 2018}}