Hamilton-Jacobi equations on graphs with applications to semi-supervised learning and data depth

Jeff Calder, Mahmood Ettehad

Research output: Contribution to journal › Article › peer-review


Abstract

Shortest path graph distances are widely used in data science and machine learning, since they can approximate the underlying geodesic distance on the data manifold. However, the shortest path distance is highly sensitive to the addition of corrupted edges in the graph, either through noise or an adversarial perturbation. In this paper we study a family of Hamilton-Jacobi equations on graphs that we call the p-eikonal equation. We show that the p-eikonal equation with p = 1 is a provably robust distance-type function on a graph, and the p → ∞ limit recovers shortest path distances. While the p-eikonal equation does not correspond to a shortest-path graph distance, we nonetheless show that the continuum limit of the p-eikonal equation on a random geometric graph recovers a geodesic density-weighted distance in the continuum. We consider applications of the p-eikonal equation to data depth and semi-supervised learning, and use the continuum limit to prove asymptotic consistency results for both applications. Finally, we present experiments with data depth and semi-supervised learning on real image datasets, including MNIST, FashionMNIST, and CIFAR-10, which show that the p-eikonal equation offers significantly better results than shortest path distances.
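The abstract does not spell out the graph operator, so the following is only a minimal sketch for intuition. It assumes the p-eikonal equation takes the form sum_y w_xy * max(u(x) - u(y), 0)^p = f(x) for x outside a seed set, with u = 0 on the seeds; this sign convention, the function name solve_p_eikonal, and the Gauss-Seidel sweep with a bisection local solve are all illustrative assumptions, not the authors' formulation or code.

    import numpy as np

    def solve_p_eikonal(W, seeds, f, p=1.0, tol=1e-6, max_sweeps=1000):
        """Gauss-Seidel solver for the assumed p-eikonal equation
            sum_y W[x, y] * max(u[x] - u[y], 0)**p = f[x],  with u = 0 on seeds.
        W: (n, n) symmetric nonnegative weight matrix; seeds: seed indices;
        f: (n,) positive right-hand side.
        """
        n = W.shape[0]
        seed_set = set(seeds)
        free = [i for i in range(n) if i not in seed_set]
        u = np.full(n, np.inf)
        u[list(seed_set)] = 0.0

        def local_solve(i):
            # Solve F(t) = sum_j w_j * max(t - u_j, 0)**p = f[i] by bisection;
            # F is nondecreasing in t, and only finite neighbor values contribute.
            nbrs = np.nonzero(W[i])[0]
            nbrs = nbrs[np.isfinite(u[nbrs])]
            if nbrs.size == 0:
                return np.inf
            w, v = W[i, nbrs], u[nbrs]
            F = lambda t: np.sum(w * np.maximum(t - v, 0.0) ** p)
            lo = v.min()                                  # F(lo) = 0 <= f[i]
            hi = v.max() + (f[i] / w.sum()) ** (1.0 / p)  # F(hi) >= f[i]
            for _ in range(80):
                mid = 0.5 * (lo + hi)
                if F(mid) < f[i]:
                    lo = mid
                else:
                    hi = mid
            return 0.5 * (lo + hi)

        # Sweep until the values stop decreasing (they start at +inf and only decrease).
        for _ in range(max_sweeps):
            change = 0.0
            for i in free:
                new = local_solve(i)
                if new < u[i]:
                    change = np.inf if np.isinf(u[i]) else max(change, u[i] - new)
                    u[i] = new
            if change <= tol:
                break
        return u

    # Toy usage: a path graph with unit weights, one seed at node 0, f = 1.
    # For p = 1 the solution grows linearly away from the seed: [0, 1, 2, 3, 4, 5].
    n = 6
    W = np.zeros((n, n))
    for i in range(n - 1):
        W[i, i + 1] = W[i + 1, i] = 1.0
    print(solve_p_eikonal(W, seeds=[0], f=np.ones(n), p=1.0))

Heuristically, with p = 1 every incident edge contributes to the local update, so no single corrupted edge can dominate the value at a node; as p grows, the largest term dominates and the update behaves like a shortest-path (Dijkstra-style) relaxation, which is consistent with the p → ∞ limit described in the abstract.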

Original language: English (US)
Article number: 138
Journal: Journal of Machine Learning Research
Volume: 23
State: Published - Oct 1 2022

Bibliographical note

Publisher Copyright:
© 2022 Jeff Calder and Mahmood Ettehad.

Keywords

  • Data depth
  • Graph learning
  • Hamilton-Jacobi equation
  • Robust statistics
  • Semi-supervised learning
  • Discrete to continuum limits
  • Partial differential equations
  • Viscosity solutions
