Where to Go Next: A Spatio-Temporal Gated Network for Next POI Recommendation

Pengpeng Zhao, Anjing Luo, Yanchi Liu, Jiajie Xu, Zhixu Li, Fuzhen Zhuang, Victor S. Sheng, Xiaofang Zhou

Research output: Contribution to journal › Article › peer-review

73 Scopus citations


Next Point-of-Interest (POI) recommendation, which is of great value to both users and POI holders, is a challenging task, since complex sequential patterns and rich contexts are hidden in extremely sparse user check-in data. Recently proposed embedding techniques have shown promising results in alleviating the data-sparsity issue by modeling context information, and Recurrent Neural Networks (RNNs) have proved effective for sequential prediction. However, existing next POI recommendation approaches train the embedding and the network model separately, and therefore cannot fully exploit rich contexts. In this paper, we propose a novel unified neural network framework, named NeuNext, which leverages POI context prediction to assist next POI recommendation through joint learning. Specifically, a Spatio-Temporal Gated Network (STGN) is proposed to model personalized sequential patterns of users' long- and short-term preferences for next POI recommendation. For POI context prediction, rich contexts on the POI side are used to construct a graph that enforces smoothness among neighboring POIs. Finally, we jointly train the POI context prediction and next POI recommendation tasks to fully leverage labeled and unlabeled data. Extensive experiments on real-world datasets show that our method outperforms other approaches for next POI recommendation in terms of accuracy and MAP.
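To make the STGN idea concrete, the sketch below shows an LSTM-style cell step extended with a time gate and a distance gate, so that the time interval and spatial distance between consecutive check-ins modulate how much the new check-in updates the cell state. This is an illustrative reconstruction only: the gate wiring, parameter names (`Wt`, `vt`, `Wd`, `vd`, etc.), and the single time/distance gate per step are assumptions, not the paper's exact equations.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def stgn_step(x, h_prev, c_prev, dt, dd, p):
    """One step of a spatio-temporal gated recurrent cell (illustrative).

    x       : embedding of the current check-in
    h_prev  : previous hidden state
    c_prev  : previous cell state
    dt, dd  : time interval and spatial distance since the last check-in
    p       : dict of weights (hypothetical parameterization)
    """
    z = np.concatenate([x, h_prev])
    i = sigmoid(p["Wi"] @ z + p["bi"])        # input gate
    f = sigmoid(p["Wf"] @ z + p["bf"])        # forget gate
    o = sigmoid(p["Wo"] @ z + p["bo"])        # output gate
    g = np.tanh(p["Wg"] @ z + p["bg"])        # candidate cell state
    # Spatio-temporal gates: the interval dt and distance dd enter the
    # gate pre-activations, so large gaps can damp the new input's
    # contribution (illustrative wiring, not the paper's exact form).
    T = sigmoid(p["Wt"] @ z + p["vt"] * dt + p["bt"])
    D = sigmoid(p["Wd"] @ z + p["vd"] * dd + p["bd"])
    c = f * c_prev + i * T * D * g            # gated state update
    h = o * np.tanh(c)
    return h, c

# Minimal usage with random parameters (hidden size 4, input size 3).
rng = np.random.default_rng(0)
n, m = 4, 3
p = {k: rng.standard_normal((n, n + m)) for k in ["Wi", "Wf", "Wo", "Wg", "Wt", "Wd"]}
p.update({k: rng.standard_normal(n) for k in ["bi", "bf", "bo", "bg", "bt", "bd", "vt", "vd"]})
h, c = stgn_step(rng.standard_normal(m), np.zeros(n), np.zeros(n), dt=2.5, dd=1.2, p=p)
```

The design intuition is that a long temporal or spatial gap between check-ins makes the previous context less predictive, so the extra gates let the model learn to discount the candidate update in exactly those cases rather than treating all transitions uniformly.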

Original language: English
Pages (from-to): 2512-2524
Number of pages: 13
Journal: IEEE Transactions on Knowledge and Data Engineering
Issue number: 5
State: Published - May 1, 2022


Keywords:
  • Next POI recommendation
  • POI context prediction
  • joint learning


