Sliced Gromov-Wasserstein. Vayer, T., Flamary, R., Tavenard, R., Chapel, L., & Courty, N. (2019).
Paper: http://arxiv.org/abs/1905.10124
Recently used in various machine learning contexts, the Gromov-Wasserstein distance (GW) allows for comparing distributions that do not necessarily lie in the same metric space. However, this Optimal Transport (OT) distance requires solving a complex non-convex quadratic program which is usually very costly in both time and memory. Unlike GW, the Wasserstein distance (W) enjoys several properties (e.g. duality) that permit large-scale optimization. In particular, the Sliced Wasserstein (SW) distance exploits the direct solution of W on the line, which only requires sorting discrete samples in 1D. This paper proposes a new divergence based on GW akin to SW. We first derive a closed form for GW when dealing with 1D distributions, based on a new result for the related quadratic assignment problem. We then define a novel OT discrepancy that can deal with large-scale distributions via a slicing approach, and we show how it relates to the GW distance while being $O(n^2)$ to compute. We illustrate the behavior of this so-called Sliced Gromov-Wasserstein (SGW) discrepancy in experiments where we demonstrate its ability to tackle similar problems as GW while being several orders of magnitude faster to compute.
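To make the slicing idea concrete, here is a minimal NumPy sketch of the two ingredients the abstract describes: the closed-form 1D GW (sort both samples and compare the same-order and opposite-order alignments) and the sliced average over random projection directions. The function names (`gw_1d`, `sgw`), the zero-padding used so one direction can project both clouds, and the toy data are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def gw_1d(x, y):
    """Closed-form 1D Gromov-Wasserstein between two equal-size samples
    with uniform weights and squared loss.

    Per the paper's quadratic-assignment result, the optimal assignment
    sorts both samples either in the same order or in opposite orders;
    we evaluate both candidates and keep the smaller cost. Each
    evaluation is O(n^2), matching the complexity quoted above.
    """
    x = np.sort(x)
    y = np.sort(y)
    dx = np.abs(x[:, None] - x[None, :])   # pairwise distances within x
    best = np.inf
    for y_cand in (y, y[::-1]):            # same-order vs. anti-order
        dy = np.abs(y_cand[:, None] - y_cand[None, :])
        best = min(best, float(np.mean((dx - dy) ** 2)))
    return best

def sgw(X, Y, n_proj=50, seed=0):
    """Sliced GW: average 1D GW over random projection directions.

    X is (n, p), Y is (n, q) with p <= q; the lower-dimensional cloud
    is zero-padded so a single unit direction theta in R^q projects
    both (a simple choice of the paper's padding map; other uplifting
    maps are possible).
    """
    rng = np.random.default_rng(seed)
    p, q = X.shape[1], Y.shape[1]
    X_pad = np.hstack([X, np.zeros((X.shape[0], q - p))])
    total = 0.0
    for _ in range(n_proj):
        theta = rng.standard_normal(q)
        theta /= np.linalg.norm(theta)     # uniform direction on the sphere
        total += gw_1d(X_pad @ theta, Y @ theta)
    return total / n_proj

# Toy usage: a 2D point cloud vs. the same cloud lifted into 3D.
# SGW should be small, since the two clouds are isometric.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 2))
    Y = np.hstack([X, np.zeros((200, 1))])
    print(sgw(X, Y))
```

Note the design point the abstract hinges on: each slice costs only a sort plus an $O(n^2)$ cost evaluation over two candidate alignments, whereas general GW requires iterating on a non-convex quadratic program over full coupling matrices.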
@article{vayerSlicedGromovWasserstein2019,
  archivePrefix = {arXiv},
  eprinttype = {arxiv},
  eprint = {1905.10124},
  primaryClass = {cs, stat},
  title = {Sliced {{Gromov}}-{{Wasserstein}}},
  url = {http://arxiv.org/abs/1905.10124},
  abstract = {Recently used in various machine learning contexts, the Gromov-Wasserstein distance (GW) allows for comparing distributions that do not necessarily lie in the same metric space. However, this Optimal Transport (OT) distance requires solving a complex non-convex quadratic program which is usually very costly in both time and memory. Unlike GW, the Wasserstein distance (W) enjoys several properties (e.g. duality) that permit large-scale optimization. In particular, the Sliced Wasserstein (SW) distance exploits the direct solution of W on the line, which only requires sorting discrete samples in 1D. This paper proposes a new divergence based on GW akin to SW. We first derive a closed form for GW when dealing with 1D distributions, based on a new result for the related quadratic assignment problem. We then define a novel OT discrepancy that can deal with large-scale distributions via a slicing approach, and we show how it relates to the GW distance while being $O(n^2)$ to compute. We illustrate the behavior of this so-called Sliced Gromov-Wasserstein (SGW) discrepancy in experiments where we demonstrate its ability to tackle similar problems as GW while being several orders of magnitude faster to compute.},
  urldate = {2019-05-29},
  date = {2019-05-24},
  keywords = {Statistics - Machine Learning,Computer Science - Machine Learning},
  author = {Vayer, Titouan and Flamary, Rémi and Tavenard, Romain and Chapel, Laetitia and Courty, Nicolas}
}