A paper titled "Hyperbolic Geometric Latent Diffusion Model for Graph Generation," by Xingcheng Fu, a faculty member of the School of Computer Science, has been accepted with high review scores (8, 7, 6, 6) at ICML 2024. The International Conference on Machine Learning (ICML) is one of the three most authoritative conferences in machine learning, alongside NeurIPS and ICLR, and is rated CCF-A on the China Computer Federation (CCF) recommended conference list.
This paper is a collaboration between the School of Computer Science and Engineering at Guangxi Normal University and Beihang University (BUAA), with our institute as the primary contributing institution. The first author is Xingcheng Fu of our institute; the corresponding authors are Professor Xianxian Li of our institute and Professor Jianxin Li of the School of Computer Science at BUAA.
Affiliated Institutions:
Guangxi Normal University
Beihang University (BUAA)
Paper Link:
https://arxiv.org/pdf/2405.03188v1
Code Repository:
https://github.com/RingBDStack/HypDiff
Abstract:
The goal of graph generation is to produce new graph structures that resemble the distribution of observed graph data. It helps reveal key information in graph data and plays an important role in fields such as molecular science, protein research, and social networks. This work proposes a latent diffusion model based on hyperbolic geometry to better generate graph topology. First, drawing on differential geometry, an approximate diffusion process in hyperbolic space is proposed, enabling a low-distortion continuous diffusion process directly on non-Euclidean structures; this substantially improves computational efficiency on sparse, discrete topologies, and the method is proven to achieve an error lower bound equivalent to that of the Klein model. Second, two geometric constraints are designed from the geometric intuition of hyperbolic space, addressing the anisotropic learning problem of non-Euclidean structures in latent space. Experiments show that, compared with existing discrete graph diffusion models, the proposed method reduces memory consumption by roughly a factor of 5 at the same generation quality, and by up to a factor of 10 with minimal loss of precision on large-scale sparse graphs.
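To make the idea of diffusing in a hyperbolic latent space concrete, the sketch below illustrates one common construction: mapping points between the Poincaré ball and its tangent space at the origin, and running a standard DDPM-style forward noising step in that tangent space. This is a minimal illustration of the general technique only, not the paper's actual algorithm (HypDiff uses an approximate hyperbolic diffusion process with geometric constraints); all function names here are hypothetical, and NumPy is assumed.

```python
import numpy as np

def exp0(v, eps=1e-9):
    # Exponential map at the origin of the Poincare ball (curvature -1):
    # exp_0(v) = tanh(||v||) * v / ||v||.
    n = np.linalg.norm(v, axis=-1, keepdims=True)
    return np.tanh(n) * v / np.maximum(n, eps)

def log0(x, eps=1e-9):
    # Logarithm map at the origin (inverse of exp0):
    # log_0(x) = artanh(||x||) * x / ||x||.
    n = np.linalg.norm(x, axis=-1, keepdims=True)
    return np.arctanh(np.clip(n, 0.0, 1.0 - eps)) * x / np.maximum(n, eps)

def tangent_diffuse(x, alpha_bar, rng):
    # One forward-diffusion step carried out in the tangent space:
    # map hyperbolic points to the tangent space at the origin,
    # add Gaussian noise with a DDPM-style schedule, then map back.
    v = log0(x)
    noise = rng.standard_normal(v.shape)
    v_t = np.sqrt(alpha_bar) * v + np.sqrt(1.0 - alpha_bar) * noise
    return exp0(v_t)

rng = np.random.default_rng(0)
# Hypothetical node embeddings on the 2-D Poincare disk.
x = exp0(rng.standard_normal((4, 2)) * 0.3)
x_t = tangent_diffuse(x, alpha_bar=0.5, rng=rng)
# The noised points remain valid hyperbolic points (inside the unit ball).
assert np.all(np.linalg.norm(x_t, axis=-1) < 1.0)
```

Working in the tangent space is what lets a continuous Gaussian diffusion be applied to non-Euclidean latents at all; the paper's contribution is to make this step low-distortion and efficient for sparse graph structure, which this toy sketch does not attempt.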