Parallel Graph Representations

The parallelization of graph problems faces significant challenges: data-driven computations, unstructured problems, poor locality, and a high data access to computation ratio[1][2]. The graph representation used for parallel architectures plays a significant role in addressing those challenges. Poorly chosen representations may unnecessarily drive up the communication cost of the algorithm, which decreases its scalability. In the following, shared and distributed memory architectures are considered.

Shared memory

In the case of a shared memory model, the graph representations used for parallel processing are the same as in the sequential case[3], since parallel read-only access to the graph representation (e.g. an adjacency list) is efficient in shared memory.
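
As an illustration, the following Python sketch (not taken from the cited sources) stores a small DAG in a compressed sparse row (CSR) adjacency structure and lets several threads read it concurrently, e.g. to compute out-degrees; because the access is read-only, no synchronization is needed.

from concurrent.futures import ThreadPoolExecutor

# Example DAG 0->1, 0->2, 1->3, 2->3 in CSR form: offsets[v]..offsets[v+1]
# index the outgoing edges of vertex v in the targets array.
offsets = [0, 2, 3, 4, 4]
targets = [1, 2, 3, 3]

def out_degree(v):
    # Pure read access to the shared arrays; no locking required.
    return offsets[v + 1] - offsets[v]

with ThreadPoolExecutor(max_workers=4) as pool:
    degrees = list(pool.map(out_degree, range(len(offsets) - 1)))
print(degrees)  # [2, 1, 1, 0]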

Distributed Memory

In the distributed memory model, the usual approach is to partition the vertex set V of the graph into p sets V_0, ..., V_{p-1}. Here, p is the amount of available processing elements (PE). The vertex set partitions are then distributed to the PEs with matching index, in addition to the corresponding edges. Every PE has its own subgraph representation, where edges with an endpoint in another partition require special attention. For standard communication interfaces like MPI, the ID of the PE owning the other endpoint has to be identifiable. During computation in a distributed graph algorithm, passing information along these edges implies communication.[3]
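
The following Python sketch is a hypothetical illustration of one PE's local subgraph under a contiguous block partition of the vertex IDs; the names owning_pe and LocalGraph are illustrative and not taken from the cited sources. Each PE keeps the edges whose source it owns and records the cut edges whose target belongs to another PE, since these are the edges along which messages must be exchanged.

from math import ceil

def owning_pe(v, n, p):
    """PE that owns global vertex v when the n vertex IDs are split into p contiguous blocks."""
    return v // ceil(n / p)

class LocalGraph:
    def __init__(self, rank, n, p, edges):
        self.rank = rank
        self.adj = {}                       # outgoing edges of locally owned vertices
        for u, v in edges:
            if owning_pe(u, n, p) == rank:
                self.adj.setdefault(u, []).append(v)
        # Edges whose target is owned by another PE; information passed along
        # these edges has to be communicated (e.g. as MPI messages).
        self.cut_edges = [(u, v) for u, nbrs in self.adj.items()
                          for v in nbrs if owning_pe(v, n, p) != rank]

edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
g0 = LocalGraph(rank=0, n=4, p=2, edges=edges)    # PE 0 owns vertices 0 and 1
print(g0.adj)        # {0: [1, 2], 1: [3]}
print(g0.cut_edges)  # [(0, 2), (1, 3)]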

Partitioning the graph needs to be done carefully - there is a trade-off between low communication and evenly sized partitions[4]. But partitioning a graph is an NP-hard problem, so it is not feasible to compute optimal partitions. Instead, the following heuristics are used.

1D partitioning: Every processor gets n/p vertices and the corresponding outgoing edges. This can be understood as a row-wise or column-wise decomposition of the adjacency matrix. For algorithms operating on this representation, this requires an all-to-all communication step as well as O(m) message buffer sizes, as each PE potentially has outgoing edges to every other PE.[5] A small sketch illustrating both decompositions is given after the 2D variant below.

2D partitioning: Every processor gets a submatrix of the adjacency matrix. Assume the processors are aligned in a rectangle p = p_r × p_c, where p_r and p_c are the amount of processing elements in each row and column, respectively. Then each processor gets a submatrix of the adjacency matrix of dimension (n/p_r) × (n/p_c). This can be visualized as a checkerboard pattern in a matrix.[5] Therefore, each processing unit can only have outgoing edges to PEs in the same row and column. This bounds the amount of communication partners for each PE to p_r + p_c − 1 out of p = p_r × p_c possible ones.
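
Both decompositions can be illustrated with a small, hypothetical Python sketch (the helper names owner_1d and owner_2d are not from the cited sources): under 1D partitioning a vertex's row of the adjacency matrix is owned by a single PE, while under 2D partitioning each matrix entry is owned by one PE of a p_r × p_c grid, so a PE only exchanges messages with PEs in its grid row and column.

from math import ceil

def owner_1d(u, n, p):
    # PE owning row u (i.e. vertex u and its outgoing edges) in a 1D block partition.
    return u // ceil(n / p)

def owner_2d(u, v, n, p_r, p_c):
    # Grid coordinates (row, col) of the PE owning adjacency-matrix entry A[u][v].
    return (u // ceil(n / p_r), v // ceil(n / p_c))

n = 8
print(owner_1d(5, n, p=4))              # vertex 5 is owned by PE 2 of 4
print(owner_2d(3, 6, n, p_r=2, p_c=4))  # entry A[3][6] lies on grid position (0, 3)
# With p = p_r * p_c = 8 PEs, each PE communicates with at most p_r + p_c - 1 = 5 others.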

Parallel Topological Sorting

An algorithm for parallel topological sorting on distributed memory machines parallelizes the algorithm of Kahn for a DAG G = (V, E)[3]. On a high level, the algorithm of Kahn repeatedly removes the vertices of indegree 0 and adds them to the topological sorting in the order in which they were removed. Since the outgoing edges of the removed vertices are also removed, there will be a new set of vertices of indegree 0, where the procedure is repeated until no vertices are left. This algorithm performs D + 1 iterations, where D is the longest path in G. Each iteration can be parallelized, which is the idea of the following algorithm.
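
For reference, a short sequential version of Kahn's algorithm in Python (a standard textbook formulation, not code from the cited sources); the parallel algorithm below distributes exactly this iteration structure over the PEs.

from collections import deque

def kahn_topological_sort(vertices, edges):
    indegree = {v: 0 for v in vertices}
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        indegree[v] += 1
    queue = deque(v for v in vertices if indegree[v] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:            # removing u also removes its outgoing edges
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    return order                    # shorter than vertices iff the graph has a cycle

print(kahn_topological_sort([0, 1, 2, 3], [(0, 1), (0, 2), (1, 3), (2, 3)]))  # [0, 1, 2, 3]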

In the following it is assumed that the graph partition is stored on p processing elements (PE) which are labeled 0, ..., p − 1. Each PE initializes a set of local vertices Q_j^1 with indegree 0, where the upper index represents the current iteration. Since all vertices in the local sets Q_0^1, ..., Q_{p-1}^1 have indegree 0, i.e. they are not adjacent, they can be given in an arbitrary order for a valid topological sorting. To assign a global index to each vertex, a prefix sum is calculated over the sizes of Q_0^1, ..., Q_{p-1}^1. So in each step, there are Σ_{i=0}^{p-1} |Q_i| vertices added to the topological sorting.
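
The offset computation can be illustrated as follows; in an actual distributed implementation the prefix sum would be computed collectively (e.g. with MPI_Exscan) rather than from a local list of sizes, so the snippet below is only a sketch.

from itertools import accumulate

local_sizes = [2, 1, 3]                       # |Q_0|, |Q_1|, |Q_2| in one step
offsets = [0] + list(accumulate(local_sizes))[:-1]
total = sum(local_sizes)
# PE j labels its vertices offsets[j], ..., offsets[j] + |Q_j| - 1 in this step.
print(offsets, total)                         # [0, 2, 3] 6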

Execution of the parallel topological sorting algorithm on a DAG with two processing elements.

In the first step, PE j assigns the indices Σ_{i=0}^{j-1} |Q_i^1|, ..., (Σ_{i=0}^{j} |Q_i^1|) − 1 to the local vertices in Q_j^1. These vertices in Q_j^1 are removed, together with their corresponding outgoing edges. For each outgoing edge (u, v) with endpoint v in another PE l, the message (u, v) is posted to PE l. After all vertices in Q_j^1 are removed, the posted messages are sent to their corresponding PE. Each message (u, v) received updates the indegree of the local vertex v. If the indegree drops to zero, v is added to Q_j^2. Then the next iteration starts.

In step k, PE j assigns the indices a_{k-1} + Σ_{i=0}^{j-1} |Q_i^k|, ..., a_{k-1} + (Σ_{i=0}^{j} |Q_i^k|) − 1, where a_{k-1} is the total amount of processed vertices after step k − 1. This procedure repeats until there are no vertices left to process, hence Σ_{i=0}^{p-1} |Q_i^{D+1}| = 0. Below is a high level, single program, multiple data pseudo code overview of this algorithm.

Note that the prefix sum for the local offsets can be efficiently calculated in parallel.

p processing elements with IDs from 0 to p-1
Input: G = (V, E) DAG, distributed to PEs, PE index j = 0, ..., p - 1
Output: topological sorting of G 
function traverseDAGDistributed
    δ incoming degree of local vertices V
    Q = {v ∈ V | δ[v] = 0}                     // All vertices with indegree 0
    nrOfVerticesProcessed = 0
    do                 
         global build prefix sum over size of Q     // get offsets and total amount of vertices in this step
         offset = nrOfVerticesProcessed + Σ_{i=0}^{j-1} |Q_i|      // j is the processor index
         foreach u ∈ Q                                       
              localOrder[u] = offset++;
              foreach (u, v) ∈ E do post message (u, v) to PE owning vertex v
         nrOfVerticesProcessed += Σ_{i=0}^{p-1} |Q_i|
         deliver all messages to neighbors of vertices in Q  
         receive messages for local vertices V
         remove all vertices in Q
         foreach message (u, v) received:
              if --δ[v] = 0
                   add v to Q
    while global size of Q > 0
    return localOrder
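
The following Python sketch simulates the SPMD pseudocode above in a single process: the p PEs are represented by per-index data structures and posting a message is a list append. It is a hypothetical illustration of the control flow under a block partition of the vertex IDs, not an MPI implementation, and all helper names are illustrative.

from math import ceil
from itertools import accumulate

def parallel_kahn_simulated(n, edges, p):
    block = ceil(n / p)
    owner = lambda v: v // block               # block partition of vertex IDs
    # Per-PE state: local indegree δ, local adjacency, current frontier Q.
    delta = [dict() for _ in range(p)]
    adj = [dict() for _ in range(p)]
    for v in range(n):
        delta[owner(v)][v] = 0
        adj[owner(v)][v] = []
    for u, v in edges:
        adj[owner(u)][u].append(v)
        delta[owner(v)][v] += 1
    Q = [[v for v in delta[j] if delta[j][v] == 0] for j in range(p)]
    local_order = [dict() for _ in range(p)]
    processed = 0
    while sum(len(q) for q in Q) > 0:
        sizes = [len(q) for q in Q]
        offsets = [0] + list(accumulate(sizes))[:-1]     # "global" prefix sum
        inbox = [[] for _ in range(p)]
        for j in range(p):                               # every PE, conceptually in parallel
            index = processed + offsets[j]
            for u in Q[j]:
                local_order[j][u] = index
                index += 1
                for v in adj[j][u]:                      # post message (u, v)
                    inbox[owner(v)].append((u, v))
        processed += sum(sizes)
        for j in range(p):                               # receive messages, build next frontier
            Q[j] = []
            for _, v in inbox[j]:
                delta[j][v] -= 1
                if delta[j][v] == 0:
                    Q[j].append(v)
    order = {}                                           # merge per-PE results
    for j in range(p):
        order.update(local_order[j])
    return [v for v, _ in sorted(order.items(), key=lambda kv: kv[1])]

print(parallel_kahn_simulated(4, [(0, 1), (0, 2), (1, 3), (2, 3)], p=2))  # [0, 1, 2, 3]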

The communication cost depends heavily on the given graph partition. As for runtime, on a CRCW-PRAM model that allows fetch-and-decrement in constant time, this algorithm runs in O((m + n)/p + D(Δ + log n)), where D is again the longest path in G and Δ the maximum degree.[3]

References

  1. Bader, David; Meyerhenke, Henning; Sanders, Peter; Wagner, Dorothea (2013). Graph Partitioning and Graph Clustering. Contemporary Mathematics. Vol. 588. American Mathematical Society. doi:10.1090/conm/588/11709. ISBN 978-0-8218-9038-7.
  2. Lumsdaine, Andrew; Gregor, Douglas; Hendrickson, Bruce; Berry, Jonathan (March 2007). "Challenges in Parallel Graph Processing". Parallel Processing Letters. 17 (1): 5–20. doi:10.1142/s0129626407002843. ISSN 0129-6264.
  3. Sanders, Peter; Mehlhorn, Kurt; Dietzfelbinger, Martin; Dementiev, Roman (2019). Sequential and Parallel Algorithms and Data Structures: The Basic Toolbox. Springer International Publishing. ISBN 978-3-030-25208-3.
  4. "Parallel Processing of Graphs" (PDF).
  5. "Parallel breadth-first search on distributed memory systems". Proceedings of 2011 International Conference for High Performance Computing, Networking, Storage and Analysis. doi:10.1145/2063384.2063471. Retrieved 2020-02-06.