Representation Learning on Graphs with Jumping Knowledge Networks. Xu, K., Li, C., Tian, Y., Sonobe, T., Kawarabayashi, K., & Jegelka, S. 6 2018.
@unpublished{Xu-2018-ID310,
  title     = {Representation Learning on Graphs with Jumping Knowledge Networks},
  abstract  = {Recent deep learning approaches for representation learning on graphs
               follow a neighborhood aggregation procedure. We analyze some important
               properties of these models, and propose a strategy to overcome those. In
               particular, the range of "neighboring" nodes that a node's representation
               draws from strongly depends on the graph structure, analogous to the spread
               of a random walk. To adapt to local neighborhood properties and tasks, we
               explore an architecture -- jumping knowledge ({JK}) networks -- that
               flexibly leverages, for each node, different neighborhood ranges to enable
               better structure-aware representation. In a number of experiments on
               social, bioinformatics and citation networks, we demonstrate that our model
               achieves state-of-the-art performance. Furthermore, combining the {JK}
               framework with models like Graph Convolutional Networks, Graph{SAGE} and
               Graph Attention Networks consistently improves those models' performance.},
  author    = {Xu, Keyulu and Li, Chengtao and Tian, Yonglong and Sonobe, Tomohiro and
               Kawarabayashi, Ken-ichi and Jegelka, Stefanie},
  year      = {2018},
  month     = jun,
  url       = {http://arxiv.org/abs/1806.03536},
  url_pdf   = {http://arxiv.org/pdf/1806.03536},
  arxiv     = {1806.03536},
  keywords  = {cs.LG},
  file      = {FULLTEXT:pdfs/000/000/000000310.pdf:PDF}
}