Okulmus, C. (2022). Parallel Computation of Structural Decompositions [Dissertation, Technische Universität Wien]. reposiTUm. https://doi.org/10.34726/hss.2023.108704
Identifying tractable problems and providing polynomial-time algorithms for them has long been a key focus in the study of computational complexity. More recently, attention has shifted to identifying tractable fragments of problems that are intractable in the general case. One such line of work studies the structural properties of problems such as constraint satisfaction and conjunctive query evaluation. The structure of these problems is naturally expressed as hypergraphs, and it is a long-standing result that instances whose underlying hypergraph structure is acyclic are solvable in polynomial time. This led to generalisations of acyclicity in the form of hypergraph decompositions, together with the notion of width, which measures the complexity of the hypergraph being decomposed. Finding decompositions of low width, or quickly determining that none exists, is thus of key importance for identifying and solving tractable fragments of constraint satisfaction problems and conjunctive query evaluation. In this thesis, we advance the ability to provide such low-width decompositions, or proofs of their non-existence, for these important problems.

As a first step, we focus on the type of hypergraph decomposition known as generalized hypertree decompositions (GHDs). For this class, we provide the first parallel algorithm, called BalancedGo, that makes use of balanced separators. We also provide a number of general optimisations in the form of pre-processing steps and prove that they do not affect the correctness of any algorithm making use of them. We further show the potential of combining multiple distinct algorithms into one, and demonstrate the practicality of this by implementing a hybrid algorithm that combines our novel parallel algorithm with an older sequential one. This hybrid system turns out to combine the best of both worlds, drawing on the strengths of each individual method. We conclude this first study with an extensive experimental evaluation of our methods against other state-of-the-art methods for computing GHDs and show that our hybrid system outperforms all other approaches.

Following this, we explore another type of hypergraph decomposition, namely (regular) hypertree decompositions (HDs). These form a more restricted class than GHDs, but have the desirable property of being computable in polynomial time for any fixed width. On the other hand, HDs are rooted, whereas our GHD algorithm constructs fragments of the decomposition and depends heavily on re-rooting these fragments when combining them. To enable the use of balanced separators for constructing HDs, we introduce an entirely new idea: guessing pairs of nodes, each consisting of a parent node and a child node. This allows us to construct an HD in arbitrary order and thus to implement a parallel algorithm that uses balanced separators to split the hypergraph into subproblems and work on them concurrently. We also describe how a series of optimisations reduces the computational cost of guessing these pairs of nodes. We again conclude this line of study with an extensive experimental evaluation, this time focusing on state-of-the-art decomposition methods for HDs, and show that our novel method log-k-decomp, as part of a more involved hybrid algorithm, provides the best results.
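Both BalancedGo and log-k-decomp build on balanced separators: a set of edges whose removal splits the hypergraph into components, each containing at most half of its edges. The following is a minimal illustrative sketch in Go (the language of BalancedGo) of such a check; the edge-list representation and all names are our assumptions and do not reflect the actual BalancedGo code.

```go
package main

import "fmt"

// Edge is a named hyperedge over integer vertices.
type Edge struct {
	Name     string
	Vertices []int
}

// isBalancedSeparator reports whether the edges named in sep form a balanced
// separator of h: after removing the separator edges and all vertices they
// cover, every remaining connected component contains at most half of the
// edges of h. Two edges are connected if they share an uncovered vertex.
func isBalancedSeparator(h []Edge, sep map[string]bool) bool {
	covered := map[int]bool{}
	var rest []Edge
	for _, e := range h {
		if sep[e.Name] {
			for _, v := range e.Vertices {
				covered[v] = true
			}
		} else {
			rest = append(rest, e)
		}
	}
	shares := func(a, b Edge) bool {
		for _, u := range a.Vertices {
			if covered[u] {
				continue
			}
			for _, w := range b.Vertices {
				if u == w {
					return true
				}
			}
		}
		return false
	}
	seen := make([]bool, len(rest))
	for i := range rest {
		if seen[i] {
			continue
		}
		// BFS to collect one connected component of the remaining edges.
		size, queue := 0, []int{i}
		seen[i] = true
		for len(queue) > 0 {
			cur := queue[0]
			queue = queue[1:]
			size++
			for j := range rest {
				if !seen[j] && shares(rest[cur], rest[j]) {
					seen[j] = true
					queue = append(queue, j)
				}
			}
		}
		if 2*size > len(h) { // component too large: not balanced
			return false
		}
	}
	return true
}

func main() {
	// A path-like hypergraph e1, e2, e3, e4; removing e2 covers vertices
	// 2 and 3, splitting the rest into {e1} and {e3, e4}, both of which
	// contain at most half of the 4 edges.
	h := []Edge{
		{Name: "e1", Vertices: []int{1, 2}},
		{Name: "e2", Vertices: []int{2, 3}},
		{Name: "e3", Vertices: []int{3, 4}},
		{Name: "e4", Vertices: []int{4, 5}},
	}
	fmt.Println(isBalancedSeparator(h, map[string]bool{"e2": true})) // true
}
```

Splitting a subproblem at such a separator bounds each recursive call to half the size of its parent, which is what makes the divide-and-conquer parallelisation effective.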
Finally, we also explore the use of highly distributed systems for the computation of hypergraph decompositions. Our focus on distributed systems is motivated by two observations: first, the commonly used benchmarks contain very large hypergraphs that no existing state-of-the-art method has been able to solve optimally; second, the most computationally powerful systems today are highly distributed machines, composed of a large number of interconnected computers. Currently proposed methods cannot make use of such distributed systems, as they require shared-memory architectures, limiting them to a single machine. Building on key ideas from the recent work of Gottlob, Lanzinger, Pichler and Razgon (JACM, 2021), we propose a novel algorithm composed of several individual programs that work concurrently. This distributed system is built on the notion of a block, which encodes a subhypergraph that still needs to be decomposed, and allows workers to independently search for balanced separators for all the subhypergraphs that must be explored to find a decomposition. The resulting blocks are then processed at a central location to determine whether a decomposition of the input hypergraph has been found. This distributed design has the property that the search for balanced separators can, in principle, use an unlimited number of machines and CPUs and thus tackle ever larger instances. We also develop a first prototype of this distributed algorithm and report on preliminary experimental results.
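To make the block-based architecture concrete, the following is a minimal sketch in Go of the worker/coordinator pattern described above: independent workers pull subhypergraph tasks and emit blocks, which a central loop collects. The channel-based protocol, the trivial splitting logic, and all names are illustrative assumptions, not the prototype's actual implementation.

```go
package main

import (
	"fmt"
	"sync"
)

// Block pairs a candidate separator with one component (a subhypergraph,
// here just a list of edge names) that still needs to be decomposed.
type Block struct {
	Separator []string
	Component []string
}

// worker repeatedly takes a subhypergraph and splits it into blocks.
// A real worker would enumerate candidate balanced separators; here we
// simply pretend the first edge separates the rest into two halves.
func worker(tasks <-chan []string, blocks chan<- Block, wg *sync.WaitGroup) {
	defer wg.Done()
	for comp := range tasks {
		if len(comp) < 2 {
			continue // nothing left to split
		}
		sep, rest := comp[:1], comp[1:]
		mid := len(rest) / 2
		blocks <- Block{Separator: sep, Component: rest[:mid]}
		blocks <- Block{Separator: sep, Component: rest[mid:]}
	}
}

func main() {
	tasks := make(chan []string, 4)
	blocks := make(chan Block, 16)
	var wg sync.WaitGroup

	for i := 0; i < 3; i++ { // workers could run on any number of machines/CPUs
		wg.Add(1)
		go worker(tasks, blocks, &wg)
	}
	tasks <- []string{"e1", "e2", "e3", "e4", "e5"}
	close(tasks)
	go func() { wg.Wait(); close(blocks) }()

	// Central location: collect blocks and check (in the real system)
	// whether they can be assembled into a decomposition of the input.
	for b := range blocks {
		fmt.Printf("block: separator=%v component=%v\n", b.Separator, b.Component)
	}
}
```

Because workers share no state and communicate only through tasks and blocks, the same pattern carries over from goroutines on one machine to processes spread across a cluster, which is the property the distributed design exploits.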