
January notes

A few notes for January 2019.


[Photo: illuminated species in the Jardin des Plantes in Paris.]

Maximal matching lower bound

They did it! A key problem of distributed computing on graphs has been solved by a team supervised by Jukka Suomela. The preprint was uploaded to arXiv recently and has not been reviewed yet, so this should be taken with a grain of salt, but given the list of authors, and what I have read of the paper, there is little doubt it is correct.

The setting can be explained the following way. There are jobs available, and people looking for jobs. Every job has at most $\Delta$ candidates, and every person is interested in at most $\Delta$ jobs. There is no central entity, so the decisions of which job is matched to which person have to be made through communication between the “jobs” and the people. At the end, we want that (1) every person either has been assigned a job, or all the jobs (s)he was interested in have been assigned to someone else, and (2) every job either is assigned to someone, or all the candidates for this job have been assigned another job. Note that there are no preference lists, and that this problem would be very easy if we had a central entity managing the job market.

A strategy to solve the problem is the following: first, every unmatched person chooses one job among the ones (s)he is interested in and sends a proposal to it; second, every “job” chooses one of the people who sent it a proposal (if any). After this first round, some people and jobs are already matched, and they leave the game. The other jobs and people continue, until we reach a situation that fulfils conditions (1) and (2) above.

This strategy requires $O(\Delta)$ phases. The new result tells us that this is basically optimal.
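As a toy illustration, here is a minimal centralized simulation of this proposal strategy (all names are mine, not from the paper; a real distributed implementation would of course use message passing instead of shared state):

```python
import random

def propose_and_match(interests):
    """Toy, centralized simulation of the proposal strategy.
    interests: dict mapping each person to the set of jobs
    (s)he is interested in. Returns a dict person -> job."""
    matched = {}   # person -> job
    taken = set()  # jobs that have already been assigned
    active = {p: set(jobs) for p, jobs in interests.items()}
    while active:
        # Phase 1: every remaining person proposes to one remaining job.
        proposals = {}  # job -> list of proposers
        for person, jobs in active.items():
            jobs -= taken  # forget jobs assigned in earlier rounds
            if jobs:
                proposals.setdefault(random.choice(sorted(jobs)), []).append(person)
        # Phase 2: every job accepts one of its proposers (if any).
        for job, proposers in proposals.items():
            matched[random.choice(proposers)] = job
            taken.add(job)
        # Matched people, and people with no job left, leave the game.
        active = {p: jobs for p, jobs in active.items()
                  if p not in matched and jobs - taken}
    return matched

print(propose_and_match({"ada": {"baker", "carpenter"}, "bob": {"baker"}}))
```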

Search vs decision

My PhD was about distributed decision, and a sentence that often appears in papers on this topic is some variation of “unlike in the centralized setting, search and decision are very different in the distributed setting”. This makes me a bit uncomfortable because I don’t have a very strong statement to support the claim that in centralized computing search and decision are similar.

Lance Fortnow blogged about this earlier this month.

Fragile complexity

An interesting recent line of research goes “beyond worst-case complexity”, that is, it considers measures of complexity/efficiency other than the running time on the worst input, as the latter can be inadequate for many purposes. There are very nice concepts there, such as smoothed complexity. See also the top 10 ideas in this area, by Tim Roughgarden.

A new measure was introduced in an arXiv preprint this month: fragile complexity. I have only read the abstract, and I don’t know why it is called “fragile”, but I understand the definition for sorting: the fragile complexity of a sorting algorithm is the maximum number of comparisons that a single element of the list can take part in.
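A quick way to get a feel for the definition is to instrument the comparisons. Here is a small sketch (my own toy code, based only on my reading of the abstract):

```python
import random
from collections import Counter

def fragile_complexity(sort_fn, items):
    """Run sort_fn on items with instrumented comparisons, and return
    the maximum number of comparisons any single element took part in."""
    counts = Counter()

    class Tracked:
        def __init__(self, value):
            self.value = value
        def __lt__(self, other):  # called by the sorting algorithm
            counts[id(self)] += 1
            counts[id(other)] += 1
            return self.value < other.value

    sort_fn([Tracked(x) for x in items])
    return max(counts.values())

# For Python's built-in sort, on a shuffled list of 1000 elements:
data = list(range(1000))
random.shuffle(data)
print(fragile_complexity(sorted, data))
```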

LIPIcs without logos

I like the LIPIcs LaTeX class, which is now used by many conferences (basically the ones that switched to open access). I would like to use it for my preprints (because it looks nice, and because if the conference uses LIPIcs, it avoids extra work). One problem is that preprints should not carry the official LIPIcs logos, and other features that are relevant only for conference publications. Following the example of a colleague, I used to use a modified version of the class file. This is not necessary any more: the class now provides the command \hideLIPIcs, which hides the conference-only features. (See the author guidelines, page 8.)
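Concretely, the preamble of a preprint would look something like this (the exact class file name depends on the version you downloaded):

```latex
\documentclass[a4paper,USenglish]{lipics-v2019}
% Hide the logos, DOI line and other conference-only features
% for the preprint version (needs a recent version of the class).
\hideLIPIcs
\title{My nice preprint}
```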

PAC learning and deep learning

It seems that deep learning is becoming popular among almost everybody (at least from a scientific point of view), except theoretical computer scientists, who criticize it for its lack of proven guarantees. As a reaction, there are several efforts to better understand ML from a theoretical point of view (for example in Alexander Mądry’s group and in the ML theory group at Princeton).

This month on the Theory Dish blog, Amit Daniely and Roy Frostig blogged about the performance of deep learning on well-defined theoretical tasks, in the context of PAC learning.

Incidentally, Leslie Valiant published a book about PAC learning (which he defined in the eighties), and its relation to nature and evolution.

Wind turbines, machine learning and algorithms

Lance Fortnow also blogged about wind turbines and ML.

Basically, a problem for wind turbines is to predict sudden changes in the wind, which could cause damage. One could use complex fluid simulations, but these are slow, so there is an increasing trend toward using ML for these predictions.

So there is a way to do things in a “deterministic”, well-defined way, but one prefers the ML version because it is quicker. This reminds me of a Stack Exchange question that basically asked: will algorithms really be useful in the future, knowing that most of the time we are interested in “good enough” solutions, that we do not even want approximation guarantees, or, even worse, that we want solutions that are good with the definition of “good” given only by examples? Unfortunately there were not many answers…

Power diagrams to represent fluids

The 11011110 blog links to nice fluid simulations that use power diagrams to picture units of fluid. Power diagrams are generalizations of Voronoi diagrams, but special cases of weighted Voronoi diagrams.
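As a reminder, in a power diagram every site $s_i$ carries a weight $w_i$, and a point $x$ belongs to the cell of the site minimizing $\|x-s_i\|^2 - w_i$ (with all weights equal, this is the usual Voronoi diagram). A brute-force sketch to rasterize the cells (my own toy code, unrelated to the linked simulations):

```python
import numpy as np

sites = np.array([[0.2, 0.3], [0.7, 0.8], [0.5, 0.2]])
weights = np.array([0.00, 0.05, 0.01])  # one weight per site

# Evaluate the power of every grid point with respect to every site.
xs, ys = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200))
points = np.stack([xs, ys], axis=-1)                    # shape (200, 200, 2)
power = ((points[..., None, :] - sites) ** 2).sum(-1) - weights
cells = power.argmin(-1)  # index of the site owning each grid point
```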

Order types

Suppose you have a configuration of four points in the plane, such that no three of them are collinear. Then either all four points are extreme points, and they form a convex quadrilateral, or three of the points form a triangle, and the last point is inside this triangle. These two cases are the order types (of four points in the plane). As you can expect, this gets more tricky when you look at more points.
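For four points, the two order types can be told apart with a few orientation tests; here is a small sketch (hypothetical helper names, my own code):

```python
def orient(p, q, r):
    """Sign of the cross product: +1 if the path p -> q -> r turns
    left, -1 if it turns right, 0 if the three points are collinear."""
    v = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (v > 0) - (v < 0)

def order_type_of_four(pts):
    """Classify four points in general position (no three collinear):
    'triangle' if one point lies inside the triangle formed by the
    other three, 'convex' if all four points are extreme."""
    for inner in pts:
        triangle = [p for p in pts if p is not inner]
        signs = {orient(triangle[i], triangle[(i + 1) % 3], inner)
                 for i in range(3)}
        if signs in ({1}, {-1}):  # inner on the same side of all three edges
            return "triangle"
    return "convex"

print(order_type_of_four([(0, 0), (2, 0), (1, 2), (1, 0.5)]))  # triangle
print(order_type_of_four([(0, 0), (2, 0), (2, 2), (0, 2)]))    # convex
```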

There was a paper at SODA 2019 about this topic.