## Joel Rybicki

Postdoctoral researcher, DSc. (Tech.)

Institute of Science and Technology Austria (IST Austria)

joel.rybicki at ist.ac dot at

## Research interests

My background is in theoretical computer science, but I have also done research in theoretical ecology. Broadly speaking, I have been working on questions related to distributed and self-organising systems.

• Theoretical computer science:
distributed algorithms, self-stabilisation, fault-tolerance, graph algorithms
• Theoretical ecology:
spatial models, metacommunities, habitat loss and fragmentation

## Publications

### Preprints

• Efficient load-balancing through distributed token dropping
Sebastian Brandt, Barbara Keller, Joel Rybicki, Jukka Suomela and Jara Uitto arXiv preprint

arXiv (2020)

###### Abstract

We introduce a new graph problem, the token dropping game, and we show how to solve it efficiently in a distributed setting. We use the token dropping game as a tool to design an efficient distributed algorithm for stable orientations and more generally for locally optimal semi-matchings. The prior work by Czygrinow et al. (DISC 2012) finds a stable orientation in $O(\Delta^5)$ rounds in graphs of maximum degree $\Delta$, while we improve it to $O(\Delta^4)$ and also prove a lower bound of $\Omega(\Delta)$.

• Input-dynamic distributed graph algorithms for congested networks
Klaus-Tycho Foerster, Janne H. Korhonen, Ami Paz, Joel Rybicki and Stefan Schmid arXiv preprint

arXiv (2020)

###### Abstract

Consider a distributed system, where the topology of the communication network remains fixed, but local inputs given to nodes may change over time. In this work, we explore the following question: if some of the local inputs change, can an existing solution be updated efficiently, in a dynamic and distributed manner?

To address this question, we define the batch dynamic CONGEST model, where the communication network $G=(V,E)$ remains fixed and a dynamic edge labelling defines the problem input. The task is to maintain a solution to a graph problem on the labelled graph under batch changes. We investigate, when a batch of $\alpha$ edge label changes arrives,

• how much time, as a function of $\alpha$, we need to update an existing solution, and
• how much information the nodes have to keep in local memory between batches in order to update the solution quickly.

We give a general picture of the complexity landscape in this model, including a general framework for lower bounds. In particular, we prove non-trivial upper bounds for two selected, contrasting problems: maintaining a minimum spanning tree and detecting cliques.

### 2019

• What can observational data reveal about metacommunity processes?
Otso Ovaskainen, Joel Rybicki and Nerea Abrego Ecography 2019

Ecography (2019) • doi:10.1111/ecog.04444

###### Abstract

A key challenge for community ecology is to understand to what extent observational data can be used to infer the underlying community assembly processes. As different processes can lead to similar or even identical patterns, statistical analyses of non-manipulative observational data never yield indisputable causal inference on the underlying processes. Still, most empirical studies in community ecology are based on observational data, and hence understanding under which circumstances such data can shed light on assembly processes is a central concern for community ecologists. We simulated a spatial agent-based model that generates variation in metacommunity dynamics across multiple axes, including the four classic metacommunity paradigms as special cases. We further simulated a virtual ecologist who analysed snapshot data sampled from the simulations using eighteen output metrics derived from beta-diversity and habitat variation indices, variation partitioning and joint species distribution modelling. Our results indicated two main axes of variation in the output metrics. The first axis of variation described whether the landscape has patchy or continuous variation, and thus was essentially independent of the properties of the species community. The second axis of variation related to the level of predictability of the metacommunity. The most predictable communities were niche-based metacommunities inhabiting static landscapes with marked environmental heterogeneity, such as metacommunities following the species sorting paradigm or the mass effects paradigm. The most unpredictable communities were neutral-based metacommunities inhabiting dynamic landscapes with little spatial heterogeneity, such as metacommunities following the neutral or patch dynamics paradigms. The output metrics from joint species distribution modelling yielded generally the highest resolution to disentangle among the simulated scenarios. Yet, the different types of statistical approaches utilised in this study carried complementary information, and thus our results suggest that the most comprehensive evaluation of metacommunity structure can be obtained by combining them.

• Self-stabilising Byzantine Clock Synchronisation is Almost as Easy as Consensus
Christoph Lenzen and Joel Rybicki Journal of the ACM 2019

Journal of the ACM, volume 66, number 5 (2019) • doi:10.1145/3339471

###### Abstract

We give fault-tolerant algorithms for establishing synchrony in distributed systems in which each of the $n$ nodes has its own clock. Our algorithms operate in a very strong fault model: we require self-stabilisation, i.e., the initial state of the system may be arbitrary, and there can be up to $f\lt n/3$ ongoing Byzantine faults, i.e., nodes that deviate from the protocol in an arbitrary manner. Furthermore, we assume that the local clocks of the nodes may progress at different speeds (clock drift) and communication has bounded delay. In this model, we study the pulse synchronisation problem, where the task is to guarantee that eventually all correct nodes generate well-separated local pulse events (i.e., unlabelled logical clock ticks) in a synchronised manner.

Compared to prior work, we achieve exponential improvements in stabilisation time and the number of communicated bits, and give the first sublinear-time algorithm for the problem:

• In the deterministic setting, the state-of-the-art solutions stabilise in time $\Theta(f)$ and have each node broadcast $\Theta(f \log f)$ bits per time unit. We exponentially reduce the number of bits broadcasted per time unit to $\Theta(\log f)$ while retaining the same stabilisation time.
• In the randomised setting, the state-of-the-art solutions stabilise in time $\Theta(f)$ and have each node broadcast $O(1)$ bits per time unit. We exponentially reduce the stabilisation time to $\mathrm{polylog} f$ while each node broadcasts $\mathrm{polylog} f$ bits per time unit.

These results are obtained by means of a recursive approach reducing the above task of self-stabilising pulse synchronisation in the bounded-delay model to non-self-stabilising binary consensus in the synchronous model. In general, our approach introduces at most logarithmic overheads in terms of stabilisation time and broadcasted bits over the underlying consensus routine.

• Byzantine Approximate Agreement on Graphs
Thomas Nowak and Joel Rybicki DISC 2019

33rd International Symposium on Distributed Computing (2019) • doi:10.4230/LIPIcs.DISC.2019.29

###### Abstract

Consider a distributed system with $n$ processors out of which $f$ can be Byzantine faulty. In the approximate agreement task, each processor $i$ receives an input value $x_i$ and has to decide on an output value $y_i$ such that

• the output values are in the convex hull of the non-faulty processors' input values,
• the output values are within distance $d$ of each other.

Classically, the values are assumed to be from an $m$-dimensional Euclidean space, where $m \ge 1$.

In this work, we study the task in a discrete setting, where the input values have structure expressible as a graph. Namely, the input values are vertices of a finite graph $G$ and the goal is to output vertices that are within distance $d$ of each other in $G$, but still remain in the graph-induced convex hull of the input values. For $d=0$, the task reduces to consensus and cannot be solved with a deterministic algorithm in an asynchronous system even with a single crash fault. For any $d \ge 1$, we show that the task is solvable in asynchronous systems when $G$ is chordal and $n > (\omega+1)f$, where $\omega$ is the clique number of $G$. In addition, we give the first Byzantine-tolerant algorithm for a variant of lattice agreement. For synchronous systems, we show tight resilience bounds for the exact variants of these and related tasks over a large class of combinatorial structures.
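For intuition, the graph-induced convex hull can be read as geodesic convexity: close the input set under vertices lying on shortest paths between members of the set. Below is a small single-machine sketch under that assumption (the paper's exact convexity notion may differ in its details); it assumes a connected graph given as an adjacency dictionary.

```python
from collections import deque
from itertools import combinations

def bfs_dist(adj, s):
    """Single-source shortest-path distances in an unweighted graph via BFS."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def geodesic_hull(adj, inputs):
    """Smallest geodesically convex set containing `inputs`: repeatedly add
    every vertex that lies on a shortest path between two hull members."""
    hull = set(inputs)
    dist = {v: bfs_dist(adj, v) for v in adj}  # all-pairs distances (small graphs)
    changed = True
    while changed:
        changed = False
        for a, b in combinations(sorted(hull), 2):
            for v in adj:
                # v is on a shortest a-b path iff d(a,v) + d(v,b) = d(a,b)
                if v not in hull and dist[a][v] + dist[v][b] == dist[a][b]:
                    hull.add(v)
                    changed = True
    return hull

# Directed-distance-free example: on the path 0-1-2-3-4,
# the hull of the two endpoints is the whole path.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(sorted(geodesic_hull(path, {0, 4})))  # [0, 1, 2, 3, 4]
```

For $d \ge 1$ the outputs may then be any set of hull vertices with pairwise distance at most $d$; the fixed-point closure above is the naive quadratic construction, adequate for illustration.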

• Brief Announcement: Does Preprocessing Help under Congestion?
Klaus-Tycho Foerster, Janne H. Korhonen, Joel Rybicki and Stefan Schmid PODC 2019

38th ACM Symposium on Principles of Distributed Computing (2019) • doi:10.1145/3293611.3331581

###### Abstract

This paper investigates the power of preprocessing in the CONGEST model. Schmid and Suomela (ACM HotSDN 2013) introduced the SUPPORTED CONGEST model to study the application of distributed algorithms in Software-Defined Networks (SDNs). We show that a large class of lower bounds in the CONGEST model still hold in the SUPPORTED model, highlighting the robustness of these bounds. This also raises the question of how much preprocessing helps in the CONGEST model.

• Habitat fragmentation and species diversity in competitive communities
Joel Rybicki, Nerea Abrego and Otso Ovaskainen Ecology Letters 2019

Ecology Letters (2019) • doi:10.1111/ele.13450

###### Abstract

Habitat loss is one of the key drivers of the ongoing decline of biodiversity. However, ecologists still argue about how fragmentation of habitat (independent of habitat loss) affects species richness. The recently proposed habitat amount hypothesis posits that species richness only depends on the total amount of habitat in a local landscape. On the other hand, different empirical studies report contrasting patterns: some find positive and others negative effects of fragmentation per se on species richness. To explain this apparent disparity, we devise a stochastic, spatially-explicit model of competitive species communities in heterogeneous habitats. The model shows that habitat loss and fragmentation have a non-monotone and non-linear effect on the species diversity in competitive communities. When the total amount of habitat is large, fragmentation per se tends to increase species diversity, but if the total amount of habitat is small, the situation is reversed: fragmentation per se decreases species diversity.

• Near-optimal self-stabilising counting and firing squads
Christoph Lenzen and Joel Rybicki Distributed Computing 2019

Distributed Computing, volume 32, pages 339--360 (2019) • doi:10.1007/s00446-018-0342-6

###### Abstract

Consider a fully-connected synchronous distributed system consisting of $n$ nodes, where up to $f$ nodes may be faulty and every node starts in an arbitrary initial state. In the synchronous $C$-counting problem, all nodes need to eventually agree on a counter that is increased by one modulo $C$ in each round for given $C>1$. In the self-stabilising firing squad problem, the task is to eventually guarantee that all non-faulty nodes have simultaneous responses to external inputs: if a subset of the correct nodes receive an external “go” signal as input, then all correct nodes should agree on a round (in the not-too-distant future) in which to jointly output a “fire” signal. Moreover, no node should generate a “fire” signal without some correct node having previously received a “go” signal as input. We present a framework reducing both tasks to binary consensus at very small cost. For example, we obtain a deterministic algorithm for self-stabilising Byzantine firing squads with optimal resilience $f \lt n/3$, asymptotically optimal stabilisation and response time $O(f)$, and message size $O(\log f)$. As our framework does not restrict the type of consensus routines used, we also obtain efficient randomised solutions.

### 2018

• Model of bacterial toxin-dependent pathogenesis explains infective dose
Joel Rybicki, Eva Kisdi and Jani V. Anttila Proceedings of the National Academy of Sciences 2018

Proceedings of the National Academy of Sciences (2018) • doi:10.1073/pnas.1721061115

###### Abstract

The initial amount of pathogens required to start an infection within a susceptible host is called the infective dose and is known to vary to a large extent between different pathogen species. We investigate the hypothesis that the differences in infective doses are explained by the mode of action in the underlying mechanism of pathogenesis: Pathogens with locally acting mechanisms tend to have smaller infective doses than pathogens with distantly acting mechanisms. While empirical evidence tends to support the hypothesis, a formal theoretical explanation has been lacking. We give simple analytical models to gain insight into this phenomenon and also investigate a stochastic, spatially explicit, mechanistic within-host model for toxin-dependent bacterial infections. The model shows that pathogens secreting locally acting toxins have smaller infective doses than pathogens secreting diffusive toxins, as hypothesized. While local pathogenetic mechanisms require smaller infective doses, pathogens with distantly acting toxins tend to spread faster and may cause more damage to the host. The proposed model can serve as a basis for the spatially explicit analysis of various virulence factors also in the context of other problems in infection dynamics.

### 2017

• Large cuts with local algorithms on triangle-free graphs
Juho Hirvonen, Joel Rybicki, Stefan Schmid and Jukka Suomela Electronic Journal of Combinatorics 2017

Electronic Journal of Combinatorics, volume 24, number 4, paper #P4.21 (2017)

###### Abstract

We study the problem of finding large cuts in $d$-regular triangle-free graphs. In prior work, Shearer (1992) gives a randomised algorithm that finds a cut of expected size $(1/2 + 0.177/\sqrt{d})m$, where $m$ is the number of edges. We give a simpler algorithm that does much better: it finds a cut of expected size $(1/2 + 0.28125/\sqrt{d})m$. As a corollary, this shows that in any $d$-regular triangle-free graph there exists a cut of at least this size. Our algorithm can be interpreted as a very efficient randomised distributed algorithm: each node needs to produce only one random bit, and the algorithm runs in one synchronous communication round. This work is also a case study of applying computational techniques in the design of distributed algorithms: our algorithm was designed by a computer program that searched for optimal algorithms for small values of $d$.
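As a sanity check on the baseline this result improves over: if every node independently flips one fair coin to pick its side, each edge crosses the cut with probability 1/2, so the expected cut size is exactly $m/2$. The sketch below verifies this by exhaustive averaging on a 6-cycle, a 2-regular triangle-free graph (the paper's stronger one-round rule, which gains the $0.28125/\sqrt{d}$ term, is not reproduced here):

```python
from itertools import product

def cut_size(edges, side):
    """Number of edges crossing the bipartition given by side[] in {0, 1}."""
    return sum(1 for u, v in edges if side[u] != side[v])

# 6-cycle: 2-regular and triangle-free, with m = 6 edges.
n = 6
edges = [(i, (i + 1) % n) for i in range(n)]

# One-round baseline: every node independently flips one fair coin.
# Averaging over all 2^n coin outcomes gives the exact expectation.
total = sum(cut_size(edges, bits) for bits in product([0, 1], repeat=n))
expected = total / 2 ** n
print(expected)  # 3.0, i.e. m/2 -- each edge is cut with probability 1/2
```

The improved algorithm keeps the same one-bit, one-round structure but decides each node's side from its own and its neighbours' bits; the gain over $m/2$ comes entirely from that local rule.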

• Efficient counting with optimal resilience
Christoph Lenzen, Joel Rybicki and Jukka Suomela SIAM Journal on Computing 2017

SIAM Journal on Computing, volume 46, number 4, pages 1473--1500 (2017) • doi:10.1137/16M107877X

###### Abstract

Consider a complete communication network of $n$ nodes, where the nodes receive a common clock pulse. We study the synchronous $c$-counting problem: given any starting state and up to $f$ faulty nodes with arbitrary behavior, the task is to eventually have all correct nodes labeling the pulses with increasing values modulo $c$ in agreement. Thus, we are considering algorithms that are self-stabilizing despite Byzantine failures. In this work, we give new algorithms for the synchronous counting problem that (1) are deterministic, (2) have optimal resilience, (3) have a linear stabilization time in $f$ (asymptotically optimal), (4) use a small number of states, and, consequently, (5) communicate a small number of bits per round. Prior algorithms either resort to randomization, use a large number of states and need high communication bandwidth, or have suboptimal resilience. In particular, we achieve an exponential improvement in both state complexity and message size for deterministic algorithms. Moreover, we present two complementary approaches for reducing the number of bits communicated during and after stabilization.

• Deterministic subgraph detection in broadcast CONGEST
Janne H. Korhonen and Joel Rybicki OPODIS 2017

21st International Conference on Principles of Distributed Systems (2017) • doi:10.4230/LIPIcs.OPODIS.2017.4

###### Abstract

We present simple deterministic algorithms for subgraph finding and enumeration in the broadcast CONGEST model of distributed computation:

• For any constant $k$, detecting $k$-paths and trees on $k$ nodes can be done in $O(1)$ rounds.
• For any constant $k$, detecting $k$-cycles and pseudotrees on $k$ nodes can be done in $O(n)$ rounds.
• On $d$-degenerate graphs, cliques and $4$-cycles can be enumerated in $O(d + \log n)$ rounds, and $5$-cycles in $O(d^2 + \log n)$ rounds.

In many cases, these bounds are tight up to logarithmic factors. Moreover, we show that the algorithms for $d$-degenerate graphs can be improved to optimal complexity $O(d/\log n)$ and $O(d^2/\log n)$, respectively, in the supported CONGEST model, which can be seen as an intermediate model between CONGEST and the congested clique.

###### Proceedings

Proceedings of the 21st International Conference on Principles of Distributed Systems, 18--20 December 2017, Lisboa, Portugal, pages 4:1--4:16, 2017

• Self-stabilising pulse synchronisation is almost as easy as consensus
Christoph Lenzen and Joel Rybicki DISC 2017

31st International Symposium on Distributed Computing (2017) • doi:10.4230/LIPIcs.DISC.2017.32

###### Abstract

We give fault-tolerant algorithms for establishing synchrony in distributed systems in which each of the $n$ nodes has its own clock. Our algorithms operate in a very strong fault model: we require self-stabilisation, i.e., the initial state of the system may be arbitrary, and there can be up to $f \lt n/3$ ongoing Byzantine faults, i.e., nodes that deviate from the protocol in an arbitrary manner. Furthermore, we assume that the local clocks of the nodes may progress at different speeds (clock drift) and communication has bounded delay. In this model, we study the pulse synchronisation problem, where the task is to guarantee that eventually all correct nodes generate well-separated local pulse events (i.e., unlabelled logical clock ticks) in a synchronised manner. Compared to prior work, we achieve exponential improvements in stabilisation time and the number of communicated bits, and give the first sublinear-time algorithm for the problem:

• In the deterministic setting, the state-of-the-art solutions stabilise in time $\Theta(f)$ and have each node broadcast $\Theta(f \log f)$ bits per time unit. We exponentially reduce the number of bits broadcasted per time unit to $\Theta(\log f)$ while retaining the same stabilisation time.
• In the randomised setting, the state-of-the-art solutions stabilise in time $\Theta(f)$ and have each node broadcast $O(1)$ bits per time unit. We exponentially reduce the stabilisation time to $\mathrm{polylog} f$ while each node broadcasts $\mathrm{polylog} f$ bits per time unit.

These results are obtained by means of a recursive approach reducing the above task of self-stabilising pulse synchronisation in the bounded-delay model to non-self-stabilising binary consensus in the synchronous model. In general, our approach introduces at most logarithmic overheads in terms of stabilisation time and broadcasted bits over the underlying consensus routine.

• LCL problems on grids
Sebastian Brandt, Juho Hirvonen, Janne H. Korhonen, Tuomo Lempiäinen, Patric R.J. Östergård, Christopher Purcell, Joel Rybicki, Jukka Suomela and Przemysław Uznański • PODC 2017

ACM Symposium on Principles of Distributed Computing (2017) • doi:10.1145/3087801.3087833

###### Abstract

LCLs or locally checkable labelling problems (e.g. maximal independent set, maximal matching, and vertex colouring) in the LOCAL model of computation are very well-understood in cycles (toroidal 1-dimensional grids): every problem has a complexity of $O(1)$, $\Theta(\log^* n)$, or $\Theta(n)$, and the design of optimal algorithms can be fully automated. This work develops the complexity theory of LCL problems for toroidal 2-dimensional grids. The complexity classes are the same as in the 1-dimensional case: $O(1)$, $\Theta(\log^* n)$, and $\Theta(n)$. However, given an LCL problem it is undecidable whether its complexity is $\Theta(\log^* n)$ or $\Theta(n)$ in 2-dimensional grids. Nevertheless, if we correctly guess that the complexity of a problem is $\Theta(\log^* n)$, we can completely automate the design of optimal algorithms. For any problem we can find an algorithm that is of the normal form $A' \circ S_k$, where $A'$ is a finite function, $S_k$ is an algorithm for finding a maximal independent set in the $k$th power of the grid, and $k$ is a constant. Finally, partially with the help of automated design tools, we classify the complexity of several concrete LCL problems related to colourings and orientations.

###### Proceedings

Proceedings of the ACM Symposium on Principles of Distributed Computing - PODC '17, pages 101--110, 2017

### 2016

• Deterministic local algorithms, unique identifiers, and fractional graph colouring
Henning Hasemann, Juho Hirvonen, Joel Rybicki and Jukka Suomela Theoretical Computer Science 2016

Theoretical Computer Science, volume 610, pages 204--217 (2016) • doi:10.1016/j.tcs.2014.06.044

###### Abstract

We show that for any $\alpha > 1$ there exists a deterministic distributed algorithm that finds a fractional graph colouring of length at most $\alpha (\Delta + 1)$ in any graph in one synchronous communication round; here $\Delta$ is the maximum degree of the graph. The result is near-tight, as there are graphs in which the optimal solution has length $\Delta+1$. The result is, of course, too good to be true. The usual definitions of scheduling problems (fractional graph colouring, fractional domatic partition, etc.) in a distributed setting leave a loophole that can be exploited in the design of distributed algorithms: the size of the local output is not bounded. Our algorithm produces an output that seems to be perfectly good by the usual standards but it is impractical, as the schedule of each node consists of a very large number of short periods of activity. More generally, the algorithm shows that when we study distributed algorithms for scheduling problems, we can choose virtually any trade-off between the following three parameters: $T$, the running time of the algorithm, $\ell$, the length of the schedule, and $\kappa$, the maximum number of periods of activity for any single node. Here $\ell$ is the objective function of the optimisation problem, while $\kappa$ captures the “subjective” quality of the solution. If we study, for example, bounded-degree graphs, we can trivially keep $T$ and $\kappa$ constant, at the cost of a large $\ell$, or we can keep $\kappa$ and $\ell$ constant, at the cost of a large $T$. Our algorithm shows that yet another trade-off is possible: we can keep $T$ and $\ell$ constant at the cost of a large $\kappa$.

• Synchronous counting and computational algorithm design
Danny Dolev, Keijo Heljanko, Matti Järvisalo, Janne H. Korhonen, Christoph Lenzen, Joel Rybicki, Jukka Suomela and Siert Wieringa Journal of Computer and System Sciences 2016

Journal of Computer and System Sciences, volume 82, number 2, pages 310--332 (2016) • doi:10.1016/j.jcss.2015.09.002

###### Abstract

Consider a complete communication network on $n$ nodes. In synchronous 2-counting, the nodes receive a common clock pulse and they have to agree on which pulses are "odd" and which are "even". Furthermore, the solution needs to be self-stabilising (reaching correct operation from any initial state) and tolerate $f$ Byzantine failures (nodes that send arbitrary misinformation). Prior algorithms either require a source of random bits or a large number of states per node. In this work, we give fast state-optimal deterministic algorithms for the first non-trivial case $f=1$. To obtain these algorithms, we develop and evaluate two different techniques for algorithm synthesis. Both are based on casting the synthesis problem as a propositional satisfiability (SAT) problem; a direct encoding is efficient for synthesising time-optimal algorithms, while an approach based on counter-example guided abstraction refinement discovers non-optimal algorithms quickly.

• Near-optimal self-stabilising counting and firing squads
Christoph Lenzen and Joel Rybicki SSS 2016

18th International Symposium on Stabilization, Safety, and Security of Distributed Systems (2016) • doi:10.1007/978-3-319-49259-9_21

###### Abstract

Consider a fully-connected synchronous distributed system of $n$ nodes, where up to $f$ nodes may be faulty and every node starts in an arbitrary initial state. In the synchronous counting problem, all nodes need to eventually agree on a counter that is increased by one modulo some $C$ in each round. In the self-stabilising firing squad problem, the task is to eventually guarantee that all non-faulty nodes have simultaneous responses to external inputs: if a subset of the correct nodes receive an external 'go' signal as input, then all correct nodes should agree on a round (in the not-too-distant future) in which to jointly output a 'fire' signal. Moreover, no node should generate a 'fire' signal without some correct node having previously received a 'go' signal as input. We present a framework reducing both tasks to binary consensus at very small cost: we maintain the resilience of the underlying consensus routine, while the stabilisation time and message size are, up to constant factors, bounded by the sum of the cost of the consensus routine for $f$ faults and recursively applying our scheme to $f' \lt f/2$ faults. For example, we obtain a deterministic algorithm for self-stabilising Byzantine firing squads with optimal resilience $f \lt n/3$, asymptotically optimal stabilisation and response time $O(f)$, and message size $O(\log f)$. As our framework does not restrict the type of consensus routines used, we also obtain efficient randomised solutions, and it is straightforward to adapt our framework to allow for $f \lt n/2$ omission or $f \lt n$ crash faults. Our results resolve various open questions on the two problems, most prominently whether (communication-efficient) self-stabilising Byzantine firing squads or sublinear-time solutions for either problem exist.

###### Proceedings

Proceedings of the 18th International Symposium on Stabilization, Safety, and Security of Distributed Systems (SSS 2016), volume 10083 of Lecture Notes in Computer Science, pages 263--280, 2016

• A lower bound for the distributed Lovász local lemma
Sebastian Brandt, Orr Fischer, Juho Hirvonen, Barbara Keller, Tuomo Lempiäinen, Joel Rybicki, Jukka Suomela and Jara Uitto STOC 2016

48th Annual ACM SIGACT Symposium on Theory of Computing (2016) • doi:10.1145/2897518.2897570

###### Abstract

We show that any randomised Monte Carlo distributed algorithm for the Lovász local lemma requires $\Omega(\log \log n)$ communication rounds, assuming that it finds a correct assignment with high probability. Our result holds even in the special case of $d = O(1)$, where $d$ is the maximum degree of the dependency graph. By prior work, there are distributed algorithms for the Lovász local lemma with a running time of $O(\log n)$ rounds in bounded-degree graphs, and the best lower bound before our work was $\Omega(\log^* n)$ rounds [Chung et al. 2014].

###### Proceedings

Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing - STOC 2016, pages 479--488, 2016

### 2015

• Large-scale habitat corridors for biodiversity conservation: A forest corridor in Madagascar
Tanjona Ramiadantsoa, Otso Ovaskainen, Joel Rybicki and Ilkka Hanski • PLOS ONE 2015

PLOS ONE, volume 10, number 7, paper e0132126 (2015) • doi:10.1371/journal.pone.0132126

###### Abstract

In biodiversity conservation, habitat corridors are assumed to increase landscape-level connectivity and to enhance the viability of otherwise isolated populations. While the role of corridors is supported by empirical evidence, studies have typically been conducted at small spatial scales. Here, we assess the quality and the functionality of a large 95-km long forest corridor connecting two large national parks (416 and 311 km²) in the southeastern escarpment of Madagascar. We analyze the occurrence of 300 species in 5 taxonomic groups in the parks and in the corridor, and combine high-resolution forest cover data with a simulation model to examine various scenarios of corridor destruction. At present, the corridor contains essentially the same communities as the national parks, reflecting its breadth which on average matches that of the parks. In the simulation model, we consider three types of dispersers: passive dispersers, which settle randomly around the source population; active dispersers, which settle only in favorable habitat; and gap-avoiding active dispersers, which avoid dispersing across non-habitat. Our results suggest that long-distance passive dispersers are most sensitive to ongoing degradation of the corridor, because increasing numbers of propagules are lost outside the forest habitat. For a wide range of dispersal parameters, the national parks are large enough to sustain stable populations until the corridor becomes severely broken, which will happen around 2065 if the current rate of forest loss continues. A significant decrease in gene flow along the corridor is expected after 2040, and this will exacerbate the adverse consequences of isolation. Our results demonstrate that simulation studies assessing the role of habitat corridors should pay close attention to the mode of dispersal and the effects of regional stochasticity.

• Efficient counting with optimal resilience
Christoph Lenzen and Joel Rybicki DISC 2015

29th International Symposium on Distributed Computing (DISC 2015) (2015) • doi:10.1007/978-3-662-48653-5_2

###### Abstract

In the synchronous $c$-counting problem, we are given a synchronous system of $n$ nodes, where up to $f$ of the nodes may be Byzantine, that is, have arbitrary faulty behaviour. The task is to have all of the correct nodes count modulo $c$ in unison in a self-stabilising manner: regardless of the initial state of the system and the faulty nodes' behaviour, eventually rounds are consistently labelled by a counter modulo $c$ at all correct nodes. We provide a deterministic solution with resilience $f \lt n/3$ that stabilises in $O(f)$ rounds and every correct node broadcasts $O(\log^2 f)$ bits per round. We build and improve on a recent result offering stabilisation time $O(f)$ and communication complexity $O(\log^2 f/ \log \log f)$ but with sub-optimal resilience $f = n^{1-o(1)}$ (PODC 2015). Our new algorithm has optimal resilience, asymptotically optimal stabilisation time, and low communication complexity. Finally, we modify the algorithm to guarantee that after stabilisation very little communication occurs. In particular, for optimal resilience and polynomial counter size $c=n^{O(1)}$, the algorithm broadcasts only $O(1)$ bits per node every $\Theta(n)$ rounds without affecting the other properties of the algorithm; communication-wise this is asymptotically optimal.

###### Proceedings

Proceedings of the 29th International Symposium on Distributed Computing (DISC 2015), Tokyo, Japan, October 7--13, 2015, volume 9363 of Lecture Notes in Computer Science, pages 16--30, 2015

• Exact bounds for distributed graph colouring
Joel Rybicki and Jukka Suomela SIROCCO 2015

22nd International Colloquium on Structural Information and Communication Complexity (2015) • doi:10.1007/978-3-319-25258-2_4

###### Abstract

We prove exact bounds on the time complexity of distributed graph colouring. If we are given a directed path that is properly coloured with $n$ colours, by prior work it is known that we can find a proper 3-colouring in $\log^*(n) \pm O(1)$ communication rounds. We close the gap between upper and lower bounds: we show that for infinitely many $n$ the time complexity is precisely $\log^* n$ communication rounds.

###### Proceedings

Structural Information and Communication Complexity: Post-proceedings of the 22nd International Colloquium (SIROCCO 2015), volume 9439 of Lecture Notes in Computer Science, pages 46--60, 2015

• Towards optimal synchronous counting
Christoph Lenzen, Joel Rybicki and Jukka Suomela PODC 2015

34th ACM Symposium on Principles of Distributed Computing (2015) • doi:10.1145/2767386.2767423

###### Abstract

Consider a complete communication network of $n$ nodes, in which the nodes receive a common clock pulse. We study the synchronous $c$-counting problem: given any starting state and up to $f$ faulty nodes with arbitrary behaviour, the task is to eventually have all correct nodes count modulo $c$ in agreement. Thus, we are considering algorithms that are self-stabilising despite Byzantine failures. In this work, we give new algorithms for the synchronous counting problem that (1) are deterministic, (2) have linear stabilisation time in $f$, (3) use a small number of states, and (4) achieve almost-optimal resilience. Prior algorithms either resort to randomisation, use a large number of states, or have poor resilience. In particular, we achieve an exponential improvement in the state complexity of deterministic algorithms, while still achieving linear stabilisation time and almost-linear resilience.

###### Proceedings

Proceedings of the 34th ACM Symposium on Principles of Distributed Computing (PODC 2015), pages 441--450, 2015

### 2013

• Species-area relationships and extinctions caused by habitat loss and fragmentation
Joel Rybicki and Ilkka Hanski • Ecology Letters 2013

Ecology Letters, volume 16, number S1, pages 27--38 (2013) • doi:10.1111/ele.12065

###### Abstract

The species–area relationship (SAR) has been used to predict the numbers of species going extinct due to habitat loss, but other researchers have maintained that SARs overestimate extinctions and instead one should use the endemics–area relationship (EAR) to predict extinctions. Here, we employ spatially explicit simulations of large numbers of species in spatially heterogeneous landscapes to investigate SARs and extinctions in a dynamic context. The EAR gives the number of species going extinct immediately after habitat loss, but typically many other species have unviable populations in the remaining habitat and go extinct soon afterwards. We conclude that the EAR underestimates extinctions due to habitat loss, the continental SAR (with slope ~0.1 or somewhat less) gives a good approximation of short-term extinctions, while the island SAR calculated for discrete fragments of habitat (with slope ~0.25) predicts the long-term extinctions. However, when the remaining area of land-covering habitat such as forest is roughly less than 20% of the total landscape and the habitat is highly fragmented, all current SARs underestimate extinction rate. We show how the ‘fragmentation effect’ can be incorporated into a predictive SAR model. When the remaining habitat is highly fragmented, an effective way to combat the fragmentation effect is to aggregate habitat fragments into clusters rather than to place them randomly across the landscape.
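
Under the power-law SAR, $S = cA^z$, the fraction of species persisting after habitat loss depends only on the fraction of area remaining and the slope $z$; a small illustrative computation with the slopes quoted above (the numbers are hypothetical, and this sketch ignores the fragmentation effect the abstract warns about):

```python
def sar_fraction_remaining(area_fraction, z):
    """Power-law SAR: S = c * A**z, so S_new / S_old = (A_new / A_old)**z."""
    return area_fraction ** z

# With 20% of the habitat remaining:
continental = sar_fraction_remaining(0.20, z=0.1)   # ~0.85: short-term, ~15% of species lost
island = sar_fraction_remaining(0.20, z=0.25)       # ~0.67: long-term, ~33% of species lost
```

As the abstract stresses, once remaining habitat falls below roughly 20% and is highly fragmented, both slopes underestimate the losses, which is what the proposed fragmentation-corrected SAR addresses.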

• Species-fragmented area relationship
Ilkka Hanski, Gustavo A. Zurita, M. Isabel Bellocq and Joel Rybicki • Proceedings of the National Academy of Sciences 2013

Proceedings of the National Academy of Sciences, volume 110, number 31, pages 12715--12720 (2013) • doi:10.1073/pnas.1311491110

###### Abstract

The species-area relationship (SAR) gives a quantitative description of the increasing number of species in a community with increasing area of habitat. In conservation, SARs have been used to predict the number of extinctions when the area of habitat is reduced. Such predictions are most needed for landscapes rather than for individual habitat fragments, but SAR-based predictions of extinctions for landscapes with highly fragmented habitat are likely to be biased because SAR assumes contiguous habitat. In reality, habitat loss is typically accompanied by habitat fragmentation. To quantify the effect of fragmentation in addition to the effect of habitat loss on the number of species, we extend the power-law SAR to the species-fragmented area relationship. This model unites the single-species metapopulation theory with the multispecies SAR for communities. We demonstrate with a realistic simulation model and with empirical data for forest-inhabiting subtropical birds that the species-fragmented area relationship gives a far superior prediction than SAR of the number of species in fragmented landscapes. The results demonstrate that for communities of species that are not well adapted to live in fragmented landscapes, the conventional SAR underestimates the number of extinctions for landscapes in which little habitat remains and it is highly fragmented.

• Synchronous counting and computational algorithm design
Danny Dolev, Janne H. Korhonen, Christoph Lenzen, Joel Rybicki and Jukka Suomela SSS 2013

15th International Symposium on Stabilization, Safety, and Security of Distributed Systems (2013) • doi:10.1007/978-3-319-03089-0_17

###### Abstract

Consider a complete communication network on $n$ nodes, each of which is a state machine with $s$ states. In synchronous 2-counting, the nodes receive a common clock pulse and they have to agree on which pulses are 'odd' and which are 'even'. We require that the solution is self-stabilising (reaching the correct operation from any initial state) and that it tolerates $f$ Byzantine failures (nodes that send arbitrary misinformation). Prior algorithms are expensive to implement in hardware: they require a source of random bits or a large number of states $s$. We use computational techniques to construct very compact deterministic algorithms for the first non-trivial case of $f = 1$. While no algorithm exists for $n \lt 4$, we show that as few as 3 states are sufficient for all values $n \ge 4$. We prove that the problem cannot be solved with only 2 states for $n = 4$, but there is a 2-state solution for all values $n \ge 6$.

###### Proceedings

Proceedings of the 15th International Symposium on Stabilization, Safety, and Security of Distributed Systems (SSS 2013), volume 8255 of Lecture Notes in Computer Science, pages 237--250, 2013

### 2012

• Deterministic local algorithms, unique identifiers, and fractional graph colouring
Henning Hasemann, Juho Hirvonen, Joel Rybicki and Jukka Suomela SIROCCO 2012

19th International Colloquium on Structural Information and Communication Complexity (2012) • doi:10.1007/978-3-642-31104-8_5

###### Abstract

We show that for any $\alpha > 1$ there exists a deterministic distributed algorithm that finds a fractional graph colouring of length at most $\alpha (\Delta + 1)$ in any graph in one synchronous communication round; here $\Delta$ is the maximum degree of the graph. The result is near-tight, as there are graphs in which the optimal solution has length $\Delta+1$. The result is, of course, too good to be true. The usual definitions of scheduling problems (fractional graph colouring, fractional domatic partition, etc.) in a distributed setting leave a loophole that can be exploited in the design of distributed algorithms: the size of the local output is not bounded. Our algorithm produces an output that seems to be perfectly good by the usual standards but it is impractical, as the schedule of each node consists of a very large number of short periods of activity. More generally, the algorithm shows that when we study distributed algorithms for scheduling problems, we can choose virtually any trade-off between the following three parameters: $T$, the running time of the algorithm, $\ell$, the length of the schedule, and $\kappa$, the maximum number of periods of activity for any single node. Here $\ell$ is the objective function of the optimisation problem, while $\kappa$ captures the “subjective” quality of the solution. If we study, for example, bounded-degree graphs, we can trivially keep $T$ and $\kappa$ constant, at the cost of a large $\ell$, or we can keep $\kappa$ and $\ell$ constant, at the cost of a large $T$. Our algorithm shows that yet another trade-off is possible: we can keep $T$ and $\ell$ constant at the cost of a large $\kappa$.

###### Proceedings

Proceedings of the 19th International Colloquium on Structural Information and Communication Complexity (SIROCCO 2012), volume 7355 of Lecture Notes in Computer Science, pages 48--60, 2012

### 2009

• A local 2-approximation algorithm for the vertex cover problem
Matti Åstrand, Patrik Floréen, Valentin Polishchuk, Joel Rybicki, Jukka Suomela and Jara Uitto DISC 2009

23rd International Symposium on Distributed Computing (2009) • doi:10.1007/978-3-642-04355-0_21

###### Abstract

We present a distributed 2-approximation algorithm for the minimum vertex cover problem. The algorithm is deterministic, and it runs in $(\Delta+1)^2$ synchronous communication rounds, where $\Delta$ is the maximum degree of the graph. For $\Delta=3$, we give a 2-approximation algorithm also for the weighted version of the problem.
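
For context, the classical argument behind any 2-approximation of this kind is that the endpoints of a maximal matching form a vertex cover at most twice the optimum. A sequential, centralised sketch of that argument (deliberately not the paper's distributed $(\Delta+1)^2$-round algorithm):

```python
def vertex_cover_2approx(edges):
    """Take both endpoints of a greedily built maximal matching.

    Any vertex cover must contain at least one endpoint of each matched
    edge, so this cover has at most twice the minimum size.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:  # edge still unmatched
            cover.update((u, v))
    return cover
```

The interesting difficulty addressed by the paper is obtaining the same guarantee deterministically when every node must decide based only on a bounded number of communication rounds with its neighbours.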

###### Proceedings

Proceedings of the 23rd International Symposium on Distributed Computing (DISC 2009), volume 5805 of Lecture Notes in Computer Science, pages 191--205, 2009

### Manuscripts

• Local algorithms in (weakly) coloured graphs
Matti Åstrand, Valentin Polishchuk, Joel Rybicki, Jukka Suomela and Jara Uitto Manuscript.

Manuscript (2010)

###### Abstract

A local algorithm is a distributed algorithm that completes after a constant number of synchronous communication rounds. We present local approximation algorithms for the minimum dominating set problem and the maximum matching problem in 2-coloured and weakly 2-coloured graphs. In a weakly 2-coloured graph, both problems admit a local algorithm with the approximation factor $(\Delta+1)/2$, where $\Delta$ is the maximum degree of the graph. We also give a matching lower bound proving that there is no local algorithm with a better approximation factor for either of these problems. Furthermore, we show that the stronger assumption of a 2-colouring does not help in the case of the dominating set problem, but there is a local approximation scheme for the maximum matching problem in 2-coloured graphs.

Last updated: September 16, 2020