2016

Seminar on Algorithms, Data Structures, and Applications, SOS

Talks are usually given on Thursdays, 14:45 h, CAB H 52

If you wish to receive email announcements of upcoming events, please subscribe to our SOS mailing list.

Talks given in 2016

  • 13.12.16, 10:00, CAB H 52, Raphael Kübler: The Transactional Commit Problem. Abstract: The transactional commit problem arises in transactional systems whenever two or more concurrent transactions conflict on a data item. While the standard solution to such conflicts is to immediately abort one of the transactions, some systems consider the alternative of delaying the conflict resolution for a short interval, which may allow one of the transactions to commit. The challenge in the transactional commit problem is to choose the optimal length of this delay interval so as to minimize the sum of running times of transactions.
    This talk will present optimal online algorithms for the transactional commit problem. Specifically, it discusses two variants of this problem which arise in real system implementations, namely requester wins and requester aborts. For each case, we provide a deterministic and a randomized algorithm that yield the optimal competitive ratio with respect to the expected cost. One case is solved by reducing it to the ski rental problem and applying a known algorithm. The other case leads to a new version of the ski rental problem, for which we derive an optimal algorithm. Further, we solve both instances in the setting where additional information is provided in terms of the means of the underlying distribution of transaction lengths. We validate our algorithmic results empirically on synthetic data. (A sketch of the classic ski rental strategy follows this abstract.)
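    A minimal sketch of the deterministic break-even strategy for the classic ski rental problem that the reduction above builds on (the function name and interface are illustrative, not from the talk):

      def ski_rental_cost(num_days, buy_price):
          """Rent while the total rent paid stays below buy_price, then buy.
          This classic strategy pays at most 2x the offline optimum."""
          cost = 0
          for day in range(1, num_days + 1):
              if day < buy_price:
                  cost += 1          # renting one more day is still cheaper
              else:
                  cost += buy_price  # break-even point reached: buy and stop
                  break
          return cost

      # If num_days < buy_price, we pay exactly the optimum; otherwise we pay
      # (buy_price - 1) + buy_price, i.e., less than twice the optimum.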
  • 08.12.16, 14:45, CAB H 52, Daniel Graf: Dynamic Connectivity on Forests with Link-Cut Trees.
    Abstract: We want to maintain a dynamically changing set of rooted trees. In each update, we add or delete an edge, i.e., connect or disconnect two trees. In between those updates, we want to ask connectivity queries, i.e., are vertices u and v in the same tree?
    In the seminar, I will present the link-cut tree data structure by Sleator and Tarjan (1983) [1,2] that supports both updates and queries in O(log n) amortized time. I will present a nice version of it that builds on top of splay trees, following the lecture by Demaine [3]. (A sketch of the link-cut interface, with a deliberately naive implementation, follows the references below.) Note: This talk is part of a series we're doing this semester on data structures. Please check here for more details: http://people.inf.ethz.ch/aldan/dss/main.html
    [1] D. D. Sleator, R. E. Tarjan: A data structure for dynamic trees. Journal of Computer and System Sciences 26.3 (1983): 362-391. [2] R. E. Tarjan: Data Structures and Network Algorithms. Vol. 44. SIAM, 1984. [3] Erik Demaine: Lecture 19, http://courses.csail.mit.edu/6.851/spring14/lectures/
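    To make the operations concrete, here is a sketch of the link-cut interface implemented naively with parent pointers (O(n) worst case per operation); the splay-based structure presented in the talk supports the same interface in O(log n) amortized time:

      class NaiveForest:
          """Rooted forest with the link-cut tree interface, implemented
          naively; only the semantics, not the speed, match link-cut trees."""
          def __init__(self, n):
              self.parent = [None] * n   # parent[v] is None iff v is a root

          def find_root(self, v):
              while self.parent[v] is not None:
                  v = self.parent[v]
              return v

          def link(self, v, w):
              assert self.parent[v] is None  # v must be the root of its tree
              self.parent[v] = w             # connect v's tree below w

          def cut(self, v):
              self.parent[v] = None          # split off the subtree rooted at v

          def connected(self, u, v):
              return self.find_root(u) == self.find_root(v)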
  • 10.11.16, 14:15, CAB H 52, Stefano Leucci: Multiple-Edge-Fault-Tolerant Approximate Shortest-Path Trees (and Oracles).
    Abstract: Let G be an n-node and m-edge positively real-weighted undirected graph. For any given integer f ≥ 1, we study the problem of designing a sparse f-edge-fault-tolerant (f-EFT) σ-approximate single-source shortest-path tree (σ-ASPT), namely a subgraph of G having as few edges as possible and which, following the failure of a set F of at most f edges in G, contains paths from a fixed source that are stretched by a factor of at most σ. To this end, we provide an algorithm that efficiently computes an f-EFT (2|F|+1)-ASPT of size O(fn). Our structure improves on a previous related construction designed for unweighted graphs, having the same size but guaranteeing a larger stretch factor of 3(f+1), plus an additive term of (f+1)log n. Then, we show how to convert our structure into an efficient f-EFT single-source distance oracle (SSDO), that can be built in Õ(fm) time, has size O(fn log^2 n), and is able to report, after the failure of the edge set F, in O(|F|^2 log^2 n) time a (2|F|+1)-approximate distance from the source to any node, and a corresponding approximate path in the same amount of time plus the path's size. Such an oracle is obtained by handling another fundamental problem, namely that of updating a minimum spanning forest (MSF) of G after a batch of k simultaneous edge modifications (i.e., edge insertions, deletions and weight changes) is performed. For this problem, we build in O(m log^3 n) time a sensitivity oracle of size O(m log^2 n) that reports in O(k^2 log^2 n) time the (at most 2k) edges either exiting from or entering into the MSF. As a result of independent interest, it is worth noticing that our MSF oracle can be employed to handle arbitrary sequences of o(n^(1/4)/log n) (non-simultaneous) updates with a worst-case time per update of o(n^(1/2)). Thus, for relatively short sequences of updates, our oracle is preferable to the best-known (in a worst-case sense) fully-dynamic MSF algorithm, which requires O(n^(1/2)) time per update.
  • 27.10.16, 15:00, CAB H 16, Yves Frank: De Novo Sequencing with Retention Time Prediction.
    Abstract: We are given an alphabet such that every character in this alphabet and every string over this alphabet has a mass. Further, a mass M, a set of masses X and a scoring function score(s, X) are given. X mainly consists of prefix and suffix masses of one string over the given alphabet. The scoring function measures how well the prefix and suffix masses of s match the set X.
    The de novo sequencing problem is to find a string s among all strings with mass M that maximizes the scoring function score(s, X). In this presentation we are looking at a variant where additionally every character and every string also has a time. The times of characters and strings are predicted by a time function, as the true underlying function is unknown. The de novo sequencing with time prediction problem is, given a mass M, a set of masses X, a time T and some ε ≥ 0, to find a string s that maximizes some scoring function score(s, X) among all strings with mass M and predicted time T' such that |T − T'| ≤ ε.
    This problem is motivated by a biological experiment called tandem mass spectrometry with liquid chromatography (LC-MS/MS), used to identify the amino acid sequence a peptide is built of. LC-MS/MS is one of the most powerful and most used tools for peptide identification. In contrast to previous de novo sequencing algorithms we use the additionally given time information of the experiment to return a more accurate solution.
    We propose two (de novo) algorithms to find the maximum scoring solution to two versions of this problem. The two versions differ in the kind of function used to predict the time. One considers a simple additive time function, and the other a more complex time function that also takes the directly neighboring characters in a string into account. The algorithms are capable of finding all solutions with a score of at least α times the highest score. (A dynamic-programming sketch for the ideal, time-free version of the problem follows this abstract.)
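    For the ideal, time-free version of the problem, a simple dynamic program already conveys the flavor of such algorithms. A sketch assuming integer masses and a score that counts how many prefix masses of s appear in X (names and conventions are mine, not the thesis's):

      def best_score(M, X, masses):
          """masses: character -> integer mass. Returns the best score among
          strings of total mass M, scoring 1 per prefix mass contained in X."""
          NEG = float("-inf")
          dp = [NEG] * (M + 1)   # dp[m] = best score over strings of mass m
          dp[0] = 0              # the empty string
          for m in range(1, M + 1):
              gain = 1 if m in X else 0        # prefix mass m is explained
              for mc in masses.values():
                  if mc <= m and dp[m - mc] != NEG:
                      dp[m] = max(dp[m], dp[m - mc] + gain)
          return dp[M]

      # Example: alphabet {a: 2, b: 3}, target mass 7, observed masses {2, 5, 7}.
      print(best_score(7, {2, 5, 7}, {"a": 2, "b": 3}))  # 3, e.g. via "aba"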
  • 27.10.16, 14:15, CAB H 52, Przemysław Uznański: Lowest Common Ancestor and Range Minimum Queries in O(1) Time and O(n) Space.
    Abstract: In 1984, Harel and Tarjan showed how to preprocess a tree in O(n) space so that Lowest Common Ancestor queries for any pair of its nodes can be answered in O(1) time (essentially the best possible solution asymptotically). However, the solution was far from simple. Further simplifications came in the papers of Berkman et al. (1989) and Bender and Farach-Colton (2000). We will look into the algorithm from the last paper, which elegantly exploits two-way reductions from and to the Range Minimum Queries problem. (A sketch of the sparse-table part of the solution follows this abstract.)
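    A sketch of the sparse-table half of that solution: O(1)-time range-minimum queries after O(n log n) preprocessing; the full result then shrinks the space to O(n) with an additional block decomposition:

      def build_sparse_table(a):
          n = len(a)
          table = [list(range(n))]  # table[j][i] = argmin of a[i .. i + 2^j - 1]
          j = 1
          while (1 << j) <= n:
              prev, row = table[j - 1], []
              for i in range(n - (1 << j) + 1):
                  l, r = prev[i], prev[i + (1 << (j - 1))]
                  row.append(l if a[l] <= a[r] else r)
              table.append(row)
              j += 1
          return table

      def rmq(a, table, i, j):
          """Index of a minimum of a[i..j] (inclusive) in O(1): cover the
          range by two overlapping power-of-two blocks."""
          k = (j - i + 1).bit_length() - 1
          l, r = table[k][i], table[k][j - (1 << k) + 1]
          return l if a[l] <= a[r] else r

      a = [3, 1, 4, 1, 5, 9, 2, 6]
      t = build_sparse_table(a)
      print(rmq(a, t, 2, 7))  # 3 (a[3] = 1)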
  • 13.10.16, 14:15, CAB H 52, Stephan Eidenbenz: Computing at Los Alamos: A Brief Overview and Details on Codesign Performance Prediction through Just-in-time-compiled Parallel Discrete Event Simulation.
    Abstract: Los Alamos National Laboratory (LANL) has a diverse computational and computer science project portfolio that helps feed the high performance computers of the US Department of Energy. The talk will give a brief overview touching upon quantum computing, astrophysics, programming models, machine learning and cyber security. In a second part, we will look at a specific project in more detailed: Building a performance prediction capability to assess algorithmic variations on novel hardware architectures ideally without having the source code of the application nor an example of (future) hardware architectures. While this challenge cannot be solved in general (due to e.g., the halting problem), we are working towards a practical performance prediction toolkit (PPT) for a number of computational physics methods and codes and novel, emerging architectures, including quantum architectures. The key for scalability lies in parallel discrete event simulation, which serves as an illustration of the power of just-in-time compilation.
  • 11.10.16, 14:15, CAB H 52, Valentin Venzin: Improving the Accuracy of De Novo Peptide Sequencing using Retention Time Prediction.
    Abstract: Identifying the amino acid sequence of a peptide is crucial to determine its functional properties. In a liquid chromatography tandem mass spectrometry (LC-MS/MS) experiment, a sample containing multiple copies of different peptides traverses in a first step a liquid chromatograph. The chromatograph separates the different peptides chronologically by their hydrophobicity. During this process, the retention time that each peptide needs to traverse the chromatograph is measured. In a second step, the mass spectrometer measures the mass of the peptide. Thirdly, multiple copies of the same peptide are randomly split at different positions into prefix and suffix fragments. The mass spectrometer measures the masses of these fragments. De novo peptide sequencing asks for an amino acid sequence that matches the mass of the peptide and which explains the measured prefix and suffix fragment masses as accurately as possible. De novo peptide sequencing has been studied for decades. Our goal is to improve the results of de novo peptide sequencing algorithms by including retention time information about the peptides. Given are the mass M, the measured retention time T, and the set of measured fragment masses X of a peptide and a parameter ε. The de novo RT problem asks for the amino acid sequence with mass M that explains as many masses of X as possible and whose predicted retention time deviates by at most ε from T. We study two retention time prediction models to predict the retention times of peptides and propose two algorithms to solve the de novo RT problem. We conducted experiments on a dataset of 342 synthesized peptides and found that using retention time prediction improved the results of our de novo peptide sequencing algorithms.
  • 06.10.16, 14:15, CAB H 52, Tomáš Gavenčiak: WAVL trees: The Best from AVL and Red-Black Trees.
    Abstract: The AVL trees, discovered in 1962, are among the best-known balanced tree data structures, and also among the simplest to describe and implement. The Red-Black trees, discovered in 1972 and 1978, have several advantages over AVL trees (namely, they guarantee few rotations) and are widely used in practice today, but their operations are rather complicated. Until recently, only a few minor simplifications of the many Red-Black tree cases had been proposed.
    In 2015, the remarkably simple rank-based WAVL trees were discovered, combining the best of both Red-Black and AVL trees. In the seminar, I will present the WAVL trees in detail and briefly recall AVL trees, Red-Black trees (with some variants) and 2-4-trees, compare them and discuss their connections. A basic knowledge of (balanced) search trees is recommended and suffices for this talk. (A small checker for the WAVL rank rules follows the note below.)
    Literature: Haeupler, Sen, Tarjan: Rank-Balanced Trees, 2015 http://sidsen.azurewebsites.net/papers/rb-trees-talg.pdf
    Note: This talk is part of a series we're doing this semester on data structures. Please check here for more details: http://people.inf.ethz.ch/aldan/dss/main.html
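    A small checker for the WAVL rank invariants as described in the paper: every rank difference is 1 or 2, missing children count as rank -1, and leaves have rank 0. The node layout here is mine:

      class Node:
          def __init__(self, key, rank=0, left=None, right=None):
              self.key, self.rank = key, rank
              self.left, self.right = left, right

      def is_wavl(node):
          """Check the WAVL rank invariants on the subtree rooted at node."""
          if node is None:
              return True
          for child in (node.left, node.right):
              child_rank = child.rank if child is not None else -1
              if node.rank - child_rank not in (1, 2):
                  return False   # every rank difference must be 1 or 2
          if node.left is None and node.right is None and node.rank != 0:
              return False       # leaves must have rank 0
          return is_wavl(node.left) and is_wavl(node.right)

      # A rank-1 root with two rank-0 leaves is a valid WAVL tree.
      print(is_wavl(Node(2, 1, Node(1), Node(3))))  # True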
  • 21.09.16, 14:00, CAB H 53, Przemysław Uznański: Sublinear-space distance labeling using hubs.
    Abstract: A distance labeling scheme is an assignment of bit-labels to the vertices of an undirected, unweighted graph such that the distance between any pair of vertices can be decoded solely from their labels. We propose a series of new labeling schemes within the framework of so-called hub labeling (HL, also known as landmark labeling or 2-hop-cover labeling), in which each node $u$ stores its distance to all nodes from an appropriately chosen set of hubs $S(u) \subseteq V$. For a queried pair of nodes $(u,v)$, the length of a shortest $u-v$-path passing through a hub node from $S(u)\cap S(v)$ is then used as an upper bound on the distance between $u$ and $v$.
    We present a hub labeling which allows us to decode exact distances in sparse graphs using labels of size sublinear in the number of nodes. For graphs with at most $n$ nodes and average degree $\Delta$, the tradeoff between label bit size $L$ and query decoding time $T$ for our approach is given by $L = O(n \log \log_\Delta T / \log_\Delta T)$, for any $T \leq n$. Our simple approach is thus the first sublinear-space distance labeling for sparse graphs that simultaneously admits small decoding time (for constant $\Delta$, we can achieve any $T=\omega(1)$ while maintaining $L=o(n)$), and it also provides an improvement in terms of label size with respect to previous slower approaches.
    By using similar techniques, we then present a $2$-additive labeling scheme for general graphs, i.e., one in which the decoder provides a 2-additive approximation of the distance between any pair of nodes. We achieve the same label size-time tradeoff $L = O(n \log^2 \log T / \log T)$, for any $T \leq n$. To our knowledge, this is the first additive scheme with constant absolute error to use labels of sublinear size. The corresponding decoding time is then small (any $T=\omega(1)$ is sufficient). (A sketch of the hub-label decoding step follows below.)
    Joint work with Adrian Kosowski and Paweł Gawrychowski, to be presented at DISC 2016.
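    The decoding step of hub labeling is a one-liner over the common hubs. A sketch with hypothetical labels (a valid hub labeling guarantees that some common hub lies on a shortest path, making the minimum exact):

      def hub_distance(label_u, label_v):
          """Each label maps hub -> distance from the labeled node."""
          common = label_u.keys() & label_v.keys()
          if not common:
              return float("inf")
          return min(label_u[h] + label_v[h] for h in common)

      # Hypothetical labels for two nodes u and v:
      print(hub_distance({"a": 2, "b": 5}, {"b": 1, "c": 4}))  # 6, via hub "b"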
  • 13.09.16, 15:30, CAB H 53, Paolo Penna: Bribeproof Mechanisms for Two-Values Domains.
    Abstract: Schummer (Journal of Economic Theory 2000) introduced the concept of a bribeproof mechanism which, in a context where monetary transfer between agents is possible, requires that manipulations through bribes are ruled out. Unfortunately, in many domains, the only bribeproof mechanisms are the trivial ones which return a fixed outcome.
    This work presents one of the few constructions of non-trivial bribeproof mechanisms for these quasi-linear environments. Though the suggested construction applies to rather restricted domains, the results obtained are tight: For several natural problems, the method yields the only possible bribeproof mechanism and no such mechanism is possible on more general domains.
    Joint work with Matus Mihalak and Peter Widmayer.
  • 05.09.16, 14:30, CNB 100.5, Raffael Buff: Using Additional Information in Streaming Algorithms.
    Abstract: Streaming problems are algorithmic problems that are mainly characterized by their massive input streams. Because of these data streams, the algorithms for these problems are forced to be space-efficient, as the input stream length generally exceeds the available storage. In this thesis, the two streaming problems most frequent item and number of distinct items are studied in detail with respect to their algorithmic complexities, and we compare whether the verification of solution hypotheses has lower algorithmic complexity than computing a solution from the data stream. For this analysis, we introduce some concepts to prove space complexity lower bounds for an approximative setting and for hypothesis verification.
    For the most frequent item problem, which consists in identifying the item with the highest number of occurrences within the data stream, we can prove a linear space complexity lower bound for the deterministic and the probabilistic setting. This implies that, in practice, this streaming problem cannot be solved in a satisfactory way, since every algorithm has to exceed any reasonable storage limit. For some settings, the upper and lower bounds are almost tight, which implies that we have designed an almost optimal algorithm. Even for small approximation ratios, we can prove a linear lower bound, but not for larger ones. Nevertheless, we are not able to design an algorithm that solves the most frequent item problem space-efficiently for large approximation ratios. Furthermore, if we want to verify whether a hypothesis of the highest frequency count is true or not, we get exactly the same space complexity lower bounds, which leads to the conclusion that we are likely not able to profit from a stated hypothesis.
    The number of distinct items problem counts all different elements of the input stream. If we want to solve this problem exactly (in a deterministic or probabilistic setting) or approximately with a deterministic algorithm, we require once again linear storage, which is tight to the upper bound. However, for the approximative and probabilistic setting, we can enhance an already known space-efficient algorithm such that it is usable for arbitrarily small approximation ratios and arbitrarily good success probabilities. The hypothesis verification leads once again to the same lower bounds. (A sketch of a textbook estimator in this spirit follows below.)
    However, there are some streaming problems that are able to profit from additional information such as hypotheses; the median problem is one example.
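    A sketch of a textbook space-efficient estimator of this kind, the k-minimum-values estimator for the number of distinct items (a standard construction, not necessarily the algorithm enhanced in the thesis):

      import hashlib
      import heapq

      def estimate_distinct(stream, k=64):
          """Track the k smallest distinct hash values; the k-th smallest is
          about k/(n+1) for n distinct items, using only O(k) space."""
          def h(x):  # hash each item to a pseudo-uniform value in (0, 1]
              d = hashlib.sha256(str(x).encode()).digest()
              return (int.from_bytes(d[:8], "big") + 1) / 2.0 ** 64

          heap = []        # max-heap (via negation) of the k smallest hashes
          members = set()  # the same k values, for O(1) duplicate checks
          for item in stream:
              v = h(item)
              if v in members:
                  continue
              if len(heap) < k:
                  heapq.heappush(heap, -v)
                  members.add(v)
              elif v < -heap[0]:
                  members.discard(-heapq.heappop(heap))
                  heapq.heappush(heap, -v)
                  members.add(v)
          if len(heap) < k:
              return float(len(heap))   # fewer than k distinct items: exact
          return (k - 1) / (-heap[0])   # standard unbiased KMV estimate

      print(round(estimate_distinct(i % 500 for i in range(10**5))))  # roughly 500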
  • 23.08.16, 15:30, CAB H 53, Jerry Li: Robust Estimators in High Dimensions without the Computational Intractability.
    Abstract: We study high-dimensional distribution learning in an agnostic setting where an adversary is allowed to arbitrarily corrupt an ε-fraction of the samples. Such questions have a rich history spanning statistics, machine learning and theoretical computer science. Even in the most basic settings, the only known approaches are either computationally inefficient or lose dimension-dependent factors in their error guarantees. This raises the following question: is high-dimensional agnostic distribution learning even possible, algorithmically?
    In this talk, we present the first computationally efficient algorithms with dimension-independent error guarantees for agnostically learning several fundamental classes of high-dimensional distributions: (1) a single Gaussian, (2) a product distribution on the hypercube, (3) mixtures of two product distributions (under a natural balancedness condition), and (4) mixtures of spherical Gaussians. Our algorithms achieve error that is independent of the dimension, and in many cases scales nearly optimally with the fraction of adversarially corrupted samples. Moreover, we develop a general recipe for detecting and correcting corruptions in high dimensions that may be applicable to many other problems. Based on joint work with Ilias Diakonikolas, Gautam Kamath, Daniel Kane, Ankur Moitra, and Alistair Stewart, to appear in FOCS 2016.
  • 22.08.16, 15:30, CAB H 53, Daniel Graf: Watch them fight! Creativity Task Tournaments of the Swiss Olympiad in Informatics.
    Abstract: As part of the qualification process for the Swiss Olympiad in Informatics, the contestants are confronted each year with one "Creativity Task". Unlike typical problems in programming competitions, creativity tasks usually do not have an optimal solution, and are often adaptations of popular board or computer games. After receiving all submissions, a tournament is organized, where the students can watch how their programs play interactively against each other, and points are awarded to the authors according to the tournament ranking. I will present and discuss this task format in general, but mainly present five of our past creativity tasks with their visualizations to show you what these tournaments look and feel like. Moreover, I will describe our task selection process and criteria, and the experience gained.
  • 18.08.16, 15:30, CAB H 53, Kateřina Böhmová: Robust Routing in Urban Public Transportation (Part II).
    Abstract: Given an urban public transportation network and historic delay information, we consider the problem of computing reliable journeys. We perform an experimental evaluation of the methods introduced in Part I using real-world delay data from Zürich, Switzerland, and compare them to natural approaches. We also use a method originally proposed by Buhmann et al. (2013) to measure the similarity of recorded timetables, and demonstrate how this measure relates to the predictive quality of the individual methods. In particular, if the past observations are typical, then the learning-based methods are able to produce solutions that perform well on typical days, even in the presence of strong delays.
  • 18.08.16, 15:00, CAB H 53, Tobias Pröger: Robust Routing in Urban Public Transportation (Part I).
    Abstract: We study the problem of robust routing in urban public transportation networks of large cities that lack clear hierarchical structure and contain services that run with high frequencies. In order to propose solutions that are robust for typical delays, we assume that we have past observations of real traffic situations available. In particular, we assume that we have "daily records" containing the observed travel times in the whole network for a few past days. We introduce a new concept to express a solution that is feasible in any record of a given public transportation network, and discuss algorithms that allow its application for finding a robust journey from a given source to a given destination.
  • 18.08.16, 14:30, CAB H 53, Thomas Tschager: A Better Scoring Model for De Novo Peptide Sequencing: The Symmetric Difference between Explained and Measured Masses.
    Abstract: Given a peptide as a string of amino acids, the masses of all its prefixes and suffixes can be found by a trivial linear scan through the amino acid masses. The inverse problem is the ideal de novo peptide sequencing problem: given all prefix and suffix masses, determine the string of amino acids. In biological reality, the given masses are measured in a lab experiment, and measurements by necessity are noisy. The (real, noisy) de novo peptide sequencing problem therefore has a noisy input: a few of the prefix and suffix masses of the peptide are missing and a few others are given in addition. For this setting we ask for an amino acid string that explains the given masses as accurately as possible. Past approaches interpreted accuracy by searching for a string that explains as many masses as possible. We feel, however, that not only is it bad to leave an observed mass unexplained, but also to explain a mass that was not observed. That is, we propose to minimize the symmetric difference between the set of given masses and the set of masses that the string explains. For this new optimization problem, we propose an efficient algorithm that computes both the best and the k best solutions. Experiments on measurements of 342 synthesized peptides show that our approach leads to better results than finding a string that explains as many given masses as possible. (A sketch of the symmetric-difference objective follows this abstract.)
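    A sketch of the symmetric-difference objective, evaluated for one fixed candidate string (the algorithm in the talk optimizes this over all strings of the right mass; names are mine):

      def explained_masses(s, masses):
          """Prefix and complementary suffix masses of s (proper ones only)."""
          total = sum(masses[c] for c in s)
          explained, acc = set(), 0
          for c in s[:-1]:
              acc += masses[c]
              explained.add(acc)          # a prefix mass ...
              explained.add(total - acc)  # ... and its complementary suffix mass
          return explained

      def symmetric_difference_score(s, X, masses):
          """Smaller is better: unexplained masses plus spurious explanations."""
          return len(explained_masses(s, masses) ^ X)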
  • 05.08.16, 13:00, CAB H 52, Andreas Bärtschi: On computing the total displacement number via weighted Motzkin paths.
    Abstract: Counting the number of permutations of a given total displacement is equivalent to counting weighted Motzkin paths of a given area (Guay-Paquet and Petersen, 2014). The former combinatorial problem is still open. In this work we show that this connection allows us to construct efficient algorithms for counting and for sampling such permutations. These algorithms provide a tool to better understand the original combinatorial problem. A by-product of our approach is a different way of counting based on certain "building sequences" for Motzkin paths, which may be of independent interest. (A sketch of the underlying style of path-counting DP follows this abstract.)
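    The counting rests on a dynamic program over path length, height, and accumulated area. A sketch for plain Motzkin paths, taking "area" as the sum of heights after each step (the paper works with a specific weighted variant, so this only illustrates the style of DP, under my convention):

      from collections import defaultdict

      def motzkin_by_area(n):
          """Return a map area -> number of Motzkin paths of length n."""
          dp = {(0, 0): 1}  # (height, area) -> number of partial paths
          for _ in range(n):
              nxt = defaultdict(int)
              for (h, a), cnt in dp.items():
                  for step in (1, 0, -1):   # up, flat, down
                      nh = h + step
                      if nh >= 0:           # never dip below the x-axis
                          nxt[(nh, a + nh)] += cnt
              dp = nxt
          return {a: c for (h, a), c in dp.items() if h == 0}

      print(motzkin_by_area(3))  # areas {0: 1, 1: 2, 2: 1}: 4 paths, the 3rd Motzkin number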
  • 20.07.16, 14:00, CAB H 52, Martin Hoefer: Efficient Algorithms for Unknown Markets.
    Abstract: This talk surveys some of our recent work on revealed preference problems in classic market models without information about the number of agents, their individual utilities and endowments. Here we only rely on queries that return aggregate demand for goods in order to, e.g., learn utility functions or compute a market equilibrium.
    As a main result, we design a simple ascending-price algorithm to compute a (1+eps)-approx. equilibrium in exchange markets with weak gross substitute (WGS) property. It runs in time polynomial in market parameters and log(1/eps). It is the first efficient algorithm for this broad class of markets which is easy to implement and avoids heavy machinery such as the ellipsoid method. Moreover, we show several extensions of our technique to markets with spending-constraint utilities and budget-additive utilities.
  • 20.07.16, 11:00, CAB H 52, Prof. Niklaus Wirth and Paul Reed: The Oberon System Implemented on an FPGA.
    Abstract: Developed from scratch at ETH by Niklaus Wirth and Juerg Gutknecht, Oberon is both an entire graphical workstation operating system and a high-level programming language in the Algol and Pascal tradition. Recently, the Oberon system and compiler have been adapted to a 32-bit RISC processor designed by Prof. Wirth, and implemented by him and Paul Reed on a low-cost board containing a field-programmable gate array (FPGA) with 1MB external memory and an SD-Card as filesystem.
    Project Oberon's primary goal was to design and implement an entire system from scratch, and to structure it in such a way that it can be described, explained, and understood as a whole. It is already taught as a case study in the System Construction course at ETH, and it is available in its entirety in open-source form (software and hardware) for study and use, equally by educators in systems design and engineering, or by industry practitioners, at http://projectoberon.com.
    In this short talk, Prof. Wirth and Paul Reed will be demonstrating the Oberon system and describing its structure and development. Its small resource footprint, clear design and self-hosted programming environment provide an excellent starting point for building new systems from scratch - as useful in the field of security, for example, as it is in education.
  • 11.07.16, 15:30, CAB H 53, Tomáš Gavenčiak: Cops and robber games on intersection graphs.
    Abstract: Pursuit games on graphs have been studied in many forms since the 1980s, some for their theoretical applications (e.g. graph parameters), some motivated by security or robotics, and some just for their elegance. I will present the development of winning strategies and upper bounds for one of the oldest such games - Cops and robber - on geometric intersection graphs, from interval and planar graphs to string graphs and beyond.
    The game of Cops and robber is played on a given graph G by two players, one controlling k cops, the other one a single robber. All figures move on the vertices of G, and both players see the full game information. The cops start by choosing their initial positions, then the robber chooses one starting vertex. The players play in alternating turns; on a player's turn, each of her tokens may move to a vertex at distance at most 1. The cops win by capturing the robber after finitely many moves, the robber by escaping indefinitely. (A sketch of the classic one-cop test follows this abstract.)
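    For k = 1 the winner can be decided by a classic elimination procedure; a minimal sketch (for finite undirected graphs, with the usual convention that players may also stay put):

      def is_cop_win(adj):
          """One cop catches the robber iff the graph is dismantlable
          (Nowakowski-Winkler / Quilliot): repeatedly delete a corner u whose
          closed neighborhood lies inside that of another vertex v."""
          verts = set(adj)
          adj = {u: set(adj[u]) for u in verts}      # defensive copy
          closed = {u: adj[u] | {u} for u in verts}
          while len(verts) > 1:
              corner = next((u for u in verts
                             if any(closed[u] <= closed[v]
                                    for v in verts if v != u)), None)
              if corner is None:
                  return False   # no corner left: the robber can evade forever
              verts.remove(corner)
              for v in verts:    # remove the corner from the graph
                  adj[v].discard(corner)
                  closed[v].discard(corner)
          return True

      # A path is cop-win; a 4-cycle is not.
      print(is_cop_win({0: {1}, 1: {0, 2}, 2: {1}}))                   # True
      print(is_cop_win({0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}))  # False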
  • 27.06.16, 15:30, CAB H 53, Daniel Graf: Collaborative Delivery with Energy-Constrained Mobile Robots.
    Abstract: We consider the problem of collectively delivering some message from a specified source to a designated target location in a graph, using multiple mobile agents. Each agent has a limited energy which constrains the distance it can move. Hence multiple agents need to collaborate to move the message, each agent handing over the message to the next agent to carry it forward. Given the positions of the agents in the graph and their respective budgets, the problem of finding a feasible movement schedule for the agents can be challenging. We consider two variants of the problem: in non-returning delivery, the agents can stop anywhere; whereas in returning delivery, each agent needs to return to its starting location, a variant which has not been studied before.
    We first provide a polynomial-time algorithm for returning delivery on trees, which is in contrast to the known (weak) NP-hardness of the non-returning version. In addition, we give resource-augmented algorithms for returning delivery in general graphs. Finally, we give tight lower bounds on the required resource augmentation for both variants of the problem. In this sense, our results close the gap left by previous research.
  • 20.06.16, 15:30, CAB G 51, Prof. Dr. Thomas Schwentick (TU Dortmund): Dynamic Complexity: Recent Updates.
    Abstract: In most real-life databases, data changes frequently, which makes efficient query answering challenging. Auxiliary data might help to avoid computing query answers from scratch all the time. One way to study this incremental maintenance scenario is from the perspective of dynamic algorithms, with the goal to reduce (re-)computation time.
    Another option is to investigate it from the perspective of low-level parallel computational complexity [1] or parallelizable database queries [2]. As the "lowest" complexity class AC^0 (with a suitable uniformity condition) and the core of the standard database query language SQL both coincide with first-order predicate logic, one naturally arrives at the question of which queries can be answered/maintained dynamically with first-order predicate logic (DynFO).
    The talk will give an introduction to dynamic complexity, sketch the proof of the recent result that the reachability problem can be maintained in DynFO, discuss its applications for the maintainability of regular path queries, and report on ongoing work on complex changes.
    [1] Sushant Patnaik and Neil Immerman. Dyn-FO: A parallel, dynamic complexity class. J. Comput. Syst. Sci., 55(2):199--209, 1997.
    [2] Guozhu Dong and Jianwen Su. Incremental and decremental evaluation of transitive closure by first-order queries. Inf. Comput., 120(1):101--106, 1995.
  • 17.06.16, 16:15, CAB H 53, Kateřina Böhmová: Sequence Hypergraphs.
    Abstract: Motivated by fault-tolerance concerns in public transportation networks, we introduce sequence hypergraphs, where every hyperedge is defined as a sequence of vertices (imagine a directed path). We study the complexity of some classic algorithmic problems. In particular, we consider the problem of finding a minimum set of hyperedges that "connects" $s$ to $t$ (allows travel from $s$ to $t$), finding a minimum set of hyperedges whose removal "disconnects" $t$ from $s$, or finding two disjoint sets of hyperedges, both connecting $s$ to $t$. Many of these problems turn out to be APX-hard, even in acyclic sequence hypergraphs or with hyperedges of constant length. We then consider special settings, and for some we obtain polynomial-time algorithms. For example, we show that if all the hyperedges are of length at most 2, these problems become polynomially solvable. (A sketch of the basic connection problem follows below.)
    Joint work with Jérémie Chalopin, Matúš Mihalák, Guido Proietti, and Peter Widmayer.
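    For the basic connection problem, a breadth-first search in which "boarding" a hyperedge costs 1 and lets one travel forward along its sequence gives the minimum number of hyperedges. A sketch under my reading of the travel semantics (the APX-hard variants above concern cuts and disjoint solutions):

      from collections import deque

      def min_hyperedges(hyperedges, s, t):
          """hyperedges: list of vertex sequences. Minimum number of
          hyperedges needed to travel from s to t, or None if impossible."""
          dist = {s: 0}
          queue = deque([s])
          while queue:
              v = queue.popleft()
              if v == t:
                  return dist[v]
              for seq in hyperedges:
                  for i, u in enumerate(seq):
                      if u == v:
                          for w in seq[i + 1:]:  # ride forward along the sequence
                              if w not in dist:
                                  dist[w] = dist[v] + 1
                                  queue.append(w)
          return None

      print(min_hyperedges([("s", "a", "b"), ("a", "t")], "s", "t"))  # 2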
  • 13.06.16, 15:30, CAB H 53, Adrian Kosowski: Discrete Lotka-Volterra Population Protocols: Convergence, Majority Amplification, and Equity for Under-Represented Minorities.
    Abstract: In this talk we consider a natural class of population protocols whose dynamics are modeled by the discrete version of Lotka-Volterra equations. In such protocols, when an agent a of type (species) i interacts with an agent b of type (species) j with a as the initiator, then b's type becomes i with probability P_{ij}. In such an interaction, we think of a as the predator, b as the prey, and the type of the prey is either converted to that of the predator or stays as is. Such protocols capture the dynamics of some opinion spreading models and generalize the well-known Rock-Paper-Scissors discrete dynamics.
    We start by considering the convergence time and show that any Lotka-Volterra-type protocol on an n-agent population converges to some absorbing state in time polynomial in n, w.h.p., when any pair of agents is allowed to interact. By contrast, when the interaction graph is a star, even the Rock-Paper-Scissors protocol requires exponential time to converge. We then study threshold effects exhibited by Lotka-Volterra-type protocols with 3 or more species under interactions between any pair of agents, giving examples of protocols which achieve "majority amplification" and "coin-flip consensus" effects in a distributed system. Some of our techniques may be of independent value. (A small simulation sketch of such protocols follows this abstract.)
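    A small simulation sketch of such a protocol under uniformly random pairwise interactions, directly following the model above (Rock-Paper-Scissors is recovered by a cyclic conversion matrix; names are mine):

      import random

      def simulate(types, P, steps, rng=random.Random(0)):
          """types: agent types (ints); P[i][j]: probability that a prey of
          type j adopts type i when a type-i initiator interacts with it."""
          n = len(types)
          for _ in range(steps):
              a, b = rng.sample(range(n), 2)   # initiator a, responder b
              i, j = types[a], types[b]
              if rng.random() < P[i][j]:
                  types[b] = i                 # the prey adopts the predator's type
          return types

      # Rock-Paper-Scissors: type i always converts type (i + 1) mod 3.
      P = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
      print(simulate([0, 1, 2] * 10, P, steps=10_000))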
  • 13.06.16, 10:30, CAB H 53, Andreas Feldmann: The Complexity Landscape of Fixed-Parameter Directed Steiner Network Problems.
    Abstract: Given a directed graph G and a list (s_1, t_1), ..., (s_k, t_k) of terminal pairs, the Directed Steiner Network problem asks for a minimum-cost subgraph of G that contains a directed s_i → t_i path for every 1 ≤ i ≤ k. The special case Directed Steiner Tree (when we ask for paths from a root r to terminals t_1, ..., t_k) is known to be fixed-parameter tractable parameterized by the number of terminals, while the special case Strongly Connected Steiner Subgraph (when we ask for a path from every t_i to every other t_j) is known to be W[1]-hard parameterized by the number of terminals. We systematically explore the complexity landscape of directed Steiner problems to fully understand which other special cases are FPT or W[1]-hard. Formally, if H is a class of directed graphs, then we look at the special case of Directed Steiner Network where the list (s_1, t_1), ..., (s_k, t_k) of requests forms a directed graph that is a member of H. Our main result is a complete characterization of the classes H resulting in fixed-parameter tractable special cases: we show that if every pattern in H has the combinatorial property of being "transitively equivalent to a bounded-length caterpillar with a bounded number of extra edges," then the problem is FPT, and it is W[1]-hard for every recursively enumerable H not having this property. This complete dichotomy unifies and generalizes the known results showing that Directed Steiner Tree is FPT [Dreyfus and Wagner, Networks 1971], Strongly Connected Steiner Subgraph is W[1]-hard [Guo et al., SIAM J. Discrete Math. 2011], and Directed Steiner Network is solvable in polynomial time for a constant number of terminals [Feldman and Ruhl, SIAM J. Comput. 2006], and moreover reveals a large continent of tractable cases that were not known before.
    Joint work with Dániel Marx.
  • 25.05.16, 11:25, CAB H 52, Dan Alistarh: Molecular Computation: An Algorithmic Perspective.
    Abstract: In recent years, there has been tremendous interest in understanding the computational power of biological and chemical systems. Such systems have been modelled in various ways (e.g., chemical reaction networks, population protocols), and by different communities (e.g., systems biology, theoretical computer science, control theory).
    The goal of this talk is to give an overview of molecular computation, modelled as chemical reaction networks / population protocols, from an algorithmic point of view. In particular, I will focus on algorithms and impossibility results related to two simple, yet fundamental tasks: majority computation and leader election. I will show that deterministically solving these tasks in systems with constant space per molecule is bound to be slow: it will take time linear in the number of participating molecules to stabilize. On the positive side, I will describe algorithms with fast (poly-logarithmic) convergence time, within a poly-logarithmic space budget. (A simulation sketch of a classic fast protocol of this kind follows this abstract.)
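    As one example of a fast protocol in this area, here is a simulation sketch of the classic three-state approximate-majority protocol of Angluin, Aspnes and Eisenstat (shown for flavor; not necessarily the talk's algorithm):

      import random

      def approximate_majority(n_x, n_y, rng=random.Random(1)):
          """Three states: opinions "X", "Y" and blank "B". Converges to the
          initial majority opinion w.h.p., in a poly-logarithmic number of
          parallel rounds."""
          pop = ["X"] * n_x + ["Y"] * n_y
          while len(set(pop)) > 1:
              a, b = rng.sample(range(len(pop)), 2)   # initiator a, responder b
              if {pop[a], pop[b]} == {"X", "Y"}:
                  pop[b] = "B"       # clashing opinions: the responder blanks out
              elif pop[b] == "B":
                  pop[b] = pop[a]    # a blank responder adopts the initiator's opinion
          return pop[0]

      print(approximate_majority(60, 40))  # "X" with high probability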
  • 23.05.16, 15:15, CAB H 52, Anita Schöbel: Approaches for integrated planning in public transportation.
    Abstract: Planning of a public transportation system is usually done in a sequential way: after the network design, the lines and their frequencies are planned. Based on these, the timetable is set up, and later on the vehicles' schedules and the drivers' schedules. From an optimization point of view, such a sequential planning procedure can be regarded as a greedy approach: in each planning stage one aims at the best one can do. This usually leads to suboptimal solutions. On the other hand, many of these single steps are already NP-hard, such that solving the integrated problem to optimality seems out of scope.
    In this talk we briefly review line planning, timetabling and vehicle scheduling, and argue that public transportation will benefit from integrated planning. We sketch ideas on exact algorithms for integrated planning (mainly suitable for special cases) and propose an iterative approach which re-optimizes the line plan, the timetable, or the vehicle schedule while the other input is fixed. We show that this leads to new problem instances, illustrate these by re-optimizing the line plan and timetable, and discuss questions about the convergence of such an approach.
  • 19.05.16, 14:00, CAB H 53, Nicolas Kirchmayr: Mechanism Design without Money: The Kidney Exchange Problem.
    Abstract: In 1986, surgeon Felix Rapaport first theorised an exchange of kidneys between incompatible patient-donor pairs. After the first successful kidney exchange conducted in 2000, Alvin Roth, Tayfun Sönmez and Utku Ünver suggested creating kidney exchange programs to organise this market.
    In this talk, I will introduce the Top Trading Cycles and Chains (TTCC) mechanism that can be used to find a feasible allocation for a given kidney exchange problem. It turns out that this mechanism has some desirable properties: it is truthful for patient-donor pairs even though it does not use payments, and the computed outcome is Pareto efficient. Despite truthfulness for the pairs, the TTCC mechanism is not individually rational for hospitals. If time permits, I will show additional problems with the TTCC mechanism and solutions proposed in recent papers. (A sketch of plain Top Trading Cycles follows this abstract.)
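    A sketch of plain Top Trading Cycles on patient-donor pairs, the cycles-only core of TTCC (chains and waiting lists are omitted; the preference lists are hypothetical, and each list is assumed to contain the pair itself as a fallback):

      def top_trading_cycles(prefs):
          """prefs[p]: pairs ordered by how much p's patient likes their donor;
          p listing itself means p may keep its own donor. Returns for each
          pair the pair whose donor kidney its patient receives."""
          remaining = set(prefs)
          assignment = {}
          while remaining:
              points_to = {p: next(q for q in prefs[p] if q in remaining)
                           for p in remaining}
              p = next(iter(remaining))   # walk pointers until a node repeats;
              seen = set()                # the repeated node lies on a cycle
              while p not in seen:
                  seen.add(p)
                  p = points_to[p]
              cycle, q = [p], points_to[p]
              while q != p:
                  cycle.append(q)
                  q = points_to[q]
              for r in cycle:             # trade along the cycle and remove it
                  assignment[r] = points_to[r]
                  remaining.discard(r)
          return assignment

      print(top_trading_cycles({1: [2, 1], 2: [1, 3, 2], 3: [1, 3]}))
      # {1: 2, 2: 1, 3: 3}: pairs 1 and 2 swap donors, pair 3 keeps its own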
  • 02.05.16, 15:15, CAB H 52, Dan Alistarh: Data Structures of the Future: Concurrent, Optimistic, and Relaxed.
    Abstract: A central computing trend over the last decade has been the need to process increasingly larger amounts of data as efficiently as possible. This development is challenging both software and hardware design, and is altering the way data structures and algorithms are constructed, implemented, and deployed.
    In this talk, I will present some examples of such new data structure design ideas and implementations. In particular, I will discuss some inherent limitations of parallelizing classic data structures, and then focus on approaches to circumvent these limitations. The first approach is to relax the software semantics, to allow for approximation, randomization, or both. The second is to modify the underlying hardware architecture to unlock more parallelism. Time permitting, I will also cover results showing that both approaches can improve real-world performance, and touch upon some of the major open questions in the area.
  • 20.04.16, 11:15, CAB H 52, Przemysław Uznański: Randomized Algorithms for Majority Detection.
    Abstract: Given $n$ colored balls, we want to detect if more than $\lfloor n/2 \rfloor$ of them have the same color, and if so find one ball with such majority color. We are only allowed to choose two balls and compare their colors, and the goal is to minimize the total number of such operations. A well-known exercise is to show how to find such a ball with only $2n$ comparisons while using only a logarithmic number of bits for bookkeeping. The resulting algorithm is called the Boyer--Moore majority vote algorithm. It is known that any deterministic method needs $\lceil 3n/2\rceil-2$ comparisons in the worst case, and this is tight.  
    However, it is not clear what the required number of comparisons is if we allow randomization. We construct a randomized algorithm which always correctly finds a ball of the majority color (or detects that there is none) using, with high probability, only $7n/6+o(n)$ comparisons. We also prove that the expected number of comparisons used by any such randomized method is at least $1.038n$. Joint work with Paweł Gawrychowski and Jukka Suomela. (A sketch of the deterministic Boyer-Moore baseline follows this abstract.)
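    For reference, a sketch of the deterministic baseline mentioned above: the Boyer-Moore majority vote with its verification pass, using about $2n$ comparisons and O(log n) bits of bookkeeping:

      def majority(balls):
          """Return an element occurring more than len(balls)//2 times, or None."""
          candidate, count = None, 0
          for b in balls:            # pass 1: the only possible majority candidate
              if count == 0:
                  candidate, count = b, 1
              elif b == candidate:
                  count += 1
              else:
                  count -= 1
          # pass 2: verify that the candidate really is a strict majority
          if candidate is not None and balls.count(candidate) > len(balls) // 2:
              return candidate
          return None

      print(majority(list("aabacaa")))  # 'a'
      print(majority(list("abcabd")))   # None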
  • 19.04.16, 14:00, CAB H 53, Juhani Karhumäki: On Complexity of K-Abelian Classes.
    Abstract: K-abelian equivalence gives an approximation of the equality of words. It is defined by counting the numbers of occurrences of factors of length k in a word. It is an equivalence relation, in fact a congruence. We consider two complexity issues for these equivalence classes: their number and their sizes for words of length n. A crucial observation is the characterization of these classes in terms of rewriting, which allows us to find canonical representatives of the classes. These turn out to form a rational language, thus showing that the complexity function is rational for all values of k over any alphabet. More precisely, this allows one to determine its exact value, in theory; in practice the automata are unmanageable even for relatively small values of the parameters. We also consider singleton equivalence classes, and observe that evaluating their number is connected to an old problem on necklaces. (A sketch of a standard k-abelian equivalence test follows this abstract.)
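    A sketch of a standard test for k-abelian equivalence, using the known characterization via equal multisets of length-k factors together with equal prefixes of length k-1:

      from collections import Counter

      def k_abelian_equivalent(u, v, k):
          if len(u) < k or len(v) < k:
              return u == v   # for very short words the relation is equality
          factors = lambda w: Counter(w[i:i + k] for i in range(len(w) - k + 1))
          return u[:k - 1] == v[:k - 1] and factors(u) == factors(v)

      print(k_abelian_equivalent("aabab", "abaab", 2))  # True: same 2-factors
      print(k_abelian_equivalent("abab", "abba", 2))    # False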
  • 11.04.16, 15:15, CAB G 56, Martin Fürer: Multi-Clique-Width, a Powerful New Width Parameter.
    Abstract: Multi-clique-width is a width parameter for graphs. It is obtained by a simple modification in the definition of clique-width. It has the advantage of providing a natural extension of tree-width. Unlike clique-width, it does not explode exponentially compared to tree-width. Efficient algorithms based on multi-clique-width are still possible for interesting tasks like computing the size of a maximum independent set (or even the independent set polynomial) or computing the chromatic number. For these tasks, the running time as a function of the multi-clique-width is the same as the running time of the fastest known algorithm as a function of the clique-width. This results in an exponential speed-up for some graphs, if the corresponding graph generating expressions are given. The reason is that the multi-clique-width is never bigger than the clique-width, and is exponentially smaller for many graphs.
    Tree-width and clique-width will be introduced.

  • 21.03.16, 15:30, CAB H 53, Paolo Penna: Game dynamics and (non)reversible Markov chains.
    Abstract: I will describe some simple game dynamics in which players repeatedly try to improve their own utilities, though sometimes they make *mistakes*. It turns out that some of these dynamics have a very simple and elegant description of their long-run equilibria. The purpose of this talk is to give an idea of the mathematical analysis and its relation to Markov chains through the following examples (a small simulation sketch follows the list):
    (1) Classical "Ising model" dynamics for potential games;
    (2) A "fully-parallel" dynamics;
    (3) Simple variants where the process is "non-reversible" and long-run equilibria are difficult to determine.
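    A minimal simulation sketch of such mistake-prone dynamics: logit (noisy best) response in a two-player coordination game, where the parameter beta controls how rarely mistakes happen (the setup is mine, for illustration only):

      import math
      import random

      def logit_step(profile, utility, strategies, beta, rng):
          """A random player revises her strategy, picking s with probability
          proportional to exp(beta * utility); beta -> infinity means no mistakes."""
          i = rng.randrange(len(profile))
          options = strategies[i]
          weights = []
          for s in options:
              trial = list(profile)
              trial[i] = s
              weights.append(math.exp(beta * utility(i, trial)))
          profile[i] = rng.choices(options, weights=weights)[0]
          return profile

      # Coordination game: both players get 1 if they match, 0 otherwise.
      utility = lambda i, prof: 1.0 if prof[0] == prof[1] else 0.0
      rng = random.Random(0)
      state = ["A", "B"]
      for _ in range(100):
          state = logit_step(state, utility, {0: ["A", "B"], 1: ["A", "B"]}, 3.0, rng)
      print(state)  # coordinated, e.g. ['A', 'A'], with high probability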
  • 07.03.16, 15:20, CAB H 53, Paul Dütting: Tools for the Design and Analysis of Economic Mechanisms.
  • 29.02.16, 15:30, CAB H 53, Paul Dütting: Algorithms Against Anarchy.
    Abstract: Which algorithms, when combined with simple payment rules such as first-price payments, yield mechanisms whose equilibria are close to optimal?
    We address this question by providing a tight characterization of a (possibly randomized) mechanism's Price of Anarchy provable via smoothness that applies in single-parameter settings. The characterization assigns a unique value to each allocation algorithm; this value provides an upper and a matching lower bound on the Price of Anarchy of a derived mechanism provable via smoothness. The characterization also applies to the sequential or simultaneous composition of single-parameter mechanisms. Importantly, the factor that we identify is typically not in one-to-one correspondence with the approximation guarantee of the algorithm. Rather, it is usually the product of the approximation guarantee and the degree to which the mechanism is loser independent.
    We apply our characterization to show the optimality of greedy mechanisms for single-minded combinatorial auctions, whether these mechanisms are polynomial-time computable or not. We also use it to establish the optimality of a non-greedy, randomized mechanism for independent set in interval graphs and show that it is strictly better than any other deterministic mechanism.
    Joint work with Thomas Kesselheim.
  • 03.02.16, 11:15, CAB G 11, Giuseppe F. Italiano (Univ. of Rome "Tor Vergata", Roma, Italy): Biconnectivity in Directed Graphs.
    Abstract: Edge and vertex connectivity are fundamental concepts in graph theory with numerous practical applications. Given an undirected graph G=(V,E), an edge is a bridge if its removal increases the number of connected components of G. Graph G is 2-edge-connected if it has no bridges. The 2-edge-connected components of G are its maximal 2-edge-connected subgraphs. Two vertices v and w are 2-edge-connected if there are two edge-disjoint paths between v and w. Equivalently, by Menger's Theorem, v and w are 2-edge-connected if the removal of any edge leaves them in the same connected component. Analogous definitions can be given for 2-vertex connectivity. While edge and vertex connectivity have been thoroughly studied in the case of undirected graphs, surprisingly not much has been investigated for directed graphs. In this talk, we survey some very recent work on 2-edge and 2-vertex connectivity in directed graphs, both from the theoretical and the practical viewpoint. (A sketch of the classic undirected bridge-finding baseline follows this abstract.)
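    To make the undirected notions concrete, a sketch of the classic DFS low-link computation of all bridges (the directed 2-connectivity results surveyed in the talk are substantially harder):

      def bridges(adj):
          """adj: vertex -> list of neighbors (undirected). Returns all bridges."""
          disc, low, out = {}, {}, set()
          timer = [0]

          def dfs(u, parent_edge):
              disc[u] = low[u] = timer[0]
              timer[0] += 1
              for v in adj[u]:
                  if v not in disc:
                      dfs(v, (u, v))
                      low[u] = min(low[u], low[v])
                      if low[v] > disc[u]:
                          out.add((u, v))   # no back edge escapes v's subtree
                  elif (v, u) != parent_edge:
                      low[u] = min(low[u], disc[v])

          for u in adj:
              if u not in disc:
                  dfs(u, None)
          return out

      # Two triangles joined by a single edge: that edge is the only bridge.
      adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
      print(bridges(adj))  # {(2, 3)}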
