Awesome Urban-computing-papers
<div align="center"> <img border="0" src="https://camo.githubusercontent.com/54fdbe8888c0a75717d7939b42f3d744b77483b0/687474703a2f2f6a617977636a6c6f76652e6769746875622e696f2f73622f69636f2f617765736f6d652e737667" /> <img border="0" src="https://camo.githubusercontent.com/1ef04f27611ff643eb57eb87cc0f1204d7a6a14d/68747470733a2f2f696d672e736869656c64732e696f2f7374617469632f76313f6c6162656c3d254630253946253843253946266d6573736167653d496625323055736566756c267374796c653d7374796c653d666c617426636f6c6f723d424334453939" /> <a href="https://github.com/SuperSupeng"> <img border="0" src="https://camo.githubusercontent.com/41e8e16b771d56dd768f7055354613254961d169/687474703a2f2f6a617977636a6c6f76652e6769746875622e696f2f73622f6769746875622f677265656e2d666f6c6c6f772e737667" /> </a> <a href="https://github.com/Knowledge-Precipitation-Tribe/Urban-computing-papers/issues"> <img border="0" src="https://img.shields.io/github/issues/Knowledge-Precipitation-Tribe/Urban-computing-papers" /> </a> <a href="https://github.com/Knowledge-Precipitation-Tribe/Urban-computing-papers/network/members"> <img border="0" src="https://img.shields.io/github/forks/Knowledge-Precipitation-Tribe/Urban-computing-papers" /> </a> <a href="https://github.com/Knowledge-Precipitation-Tribe/Urban-computing-papers/stargazers"> <img border="0" src="https://img.shields.io/github/stars/Knowledge-Precipitation-Tribe/Urban-computing-papers" /> </a> <img alt="Creative Commons license" src="https://camo.githubusercontent.com/75335faf011cd2856b147fa63e9ee383cc15a0a3/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f6c6963656e73652d434325323042592d2d4e432d2d5341253230342e302d6c6967687467726579" data-canonical-src="https://img.shields.io/badge/license-CC%20BY--NC--SA%204.0-lightgrey" style="max-width:100%;"> <a href="https://github.com/Knowledge-Precipitation-Tribe/Urban-computing-papers/blob/master/wechat.md"> <img border="0" src="https://camo.githubusercontent.com/013c283843363c72b1463af208803bfbd5746292/687474703a2f2f6a617977636a6c6f76652e6769746875622e696f2f73622f69636f2f7765636861742e737667" /> </a> </div>
I no longer have the energy to maintain this project. If you are interested in maintaining it, please contact me.
Introduction
This project is a collection of recent research in areas such as new infrastructure and urban computing, including white papers, academic papers, AI labs, datasets, and more.
Contribution
Contributions are always welcome! Make an individual pull request for each suggestion.
Content
- <a href = "#New-infrastructure">New infrastructure</a>
- <a href = "#WhitePaper">WhitePaper</a>
- <a href = "#Expert">Expert</a>
- <a href = "#AI-Lab">AI Lab</a>
- <a href = "#Dataset">Dataset</a>
- <a href = "#Sensor-data">Sensor-data</a>
- <a href = "#Trajectory-data">Trajectory data</a>
- <a href = "#Demand-data">Demand data</a>
- <a href = "#Public-transportation-system-transaction-records">Public transportation system transaction records</a>
Method summary
- <a href = "#Mind-map">Mind map</a>
- <a href = "#Spatial-dependence-modeling">Spatial dependence modeling</a>
- <a href = "#Temporal-dependence-modeling">Temporal dependence modeling</a>
- <a href = "#External-factors">External factors</a>
- <a href = "#Tricks">Tricks</a>
Relevant papers
- <a href = "#Survey">Survey</a>
- <a href = "#GNN">GNN</a>
- <a href ="#Long-term-Dependencies">Long-term Dependencies</a>
- <a href = "#gnn-papers-on-traffic-forecasting">1. GNN methods on Traffic forecasting</a>
- <a href = "#other-method-on-traffic-forecasting">2. Other methods on Traffic forecasting</a>
- <a href = "#Flows-Prediction">3. Flows Prediction</a>
- <a href = "#Demand-Prediction">4. Demand Prediction</a>
- <a href = "#Travel-time-or-Arrive-time">5. Travel time or Arrive time</a>
New infrastructure
[1] What is new infrastructure
[2] Baidu AI new infrastructure layout
[3] Inventory of new infrastructure projects
[4] Map of new infrastructure enterprises
WhitePaper
[1] Baidu City Brain White Paper (百度城市大脑白皮书)
[2] White Paper on Blockchain Empowering New Smart Cities (区块链赋能新型智慧城市白皮书)
[3] JD Cloud Intelligent City White Paper 2019 (京东云智能城市白皮书2019)
[4] Research on China's Intelligent City Development Strategy and Tactics (中国智能城市发展战略与策略研究)
[5] Urban Transportation Digital Transformation White Paper (城市交通数字化转型白皮书)
[9] New Infrastructure Policy White Paper (新基建政策白皮书)
[10] New Infrastructure Development White Paper (新基建发展白皮书)
[11] White Paper on the New Infrastructure Development Potential of China's Provinces and Cities (我国各省市新基建发展潜力白皮书)
[12] China Urban Artificial Intelligence Development Index Report (中国城市人工智能发展指数报告)
[13] Research Report on the Integrated Development of Artificial Intelligence and Industry (人工智能与工业融合发展研究报告)
[14] 2020 China Smart City Development Research Report (2020年中国智慧城市发展研究报告)
[15] The Rise of Data Productivity: New Momentum, New Governance (数据生产力崛起:新动能 新治理)
Expert
[1] Yu Zheng: link
[2] Yanhua Li: link
[3] Xun Zhou: link
[4] Yaguang Li: link
[5] Zhenhui Jessie Li: link
[6] David S. Rosenblum: link
[7] Huaiyu Wan: link
[8] Junbo Zhang: link
[9] Shiming Xiang: link
AI Lab
[1] iFLYTEK: link
[2] JD City: link
[3] Alibaba: link
[4] Huawei: link
[5] ByteDance: link
[6] Alibaba DAMO Academy: link
[7] Tencent: link
[8] Microsoft: link
[9] Intel: link
[10] Facebook: link
[11] Google: link
[12] National Laboratory of Pattern Recognition: link
[13] Baidu: link
[14] JD Cloud: link
[15] Urban Computing Foundation Interactive Landscape: link
Dataset
[1] GAIA Open Dataset: link
[2] 智慧足迹 (Smart Steps): link
Sensor data
[1] UK traffic flow datasets: link
[2] Illinois traffic flow datasets: link
[3] PeMS: link, Baidu Netdisk password: jutw | PeMS Guide
Trajectory data
[1] Chengdu: link
[2] Xian: link
Others
[1] Weather and events data: link
[2] Weather and climate data: link
[3] NSW POI data: link
[4] Road network data: link
[5] NYC OpenData: link
[6] METR-LA: link, Baidu Netdisk password: xsz5
[7] TaxiBJ: link, Baidu Netdisk password: sg4n
[8] BikeNYC: link, Baidu Netdisk password: lmwj
[9] NYC-Taxi: link, Baidu Netdisk password: 022y
[10] NYC-Bike: link
[11] San Francisco taxi: link
[12] Chicago bike: link
[13] BikeDC: link
Method summary
Mind map
Spatial dependence modeling
Reference | Modules | Description | Architecture |
---|---|---|---|
<a href = "#threeone">[3.1]</a> | CNN | The city is first partitioned into a grid, and a CNN is then used to capture spatial dependencies; the receptive field is enlarged by stacking convolutional layers. | |
<a href = "#threeone">[3.1]</a> | GCN | A traffic network is naturally organized as a graph, so it is natural and reasonable to formulate road networks as graphs mathematically. Graph convolution is applied directly to the graph-structured data to extract meaningful patterns and features in the spatial domain (a minimal sketch follows this table). | |
Temporal dependence modeling
Reference | Modules | Description | Architecture |
---|---|---|---|
<a href = "#oneone">[1.1]</a> | causal convolution | Predicts a future value $y$ from past observations while respecting temporal order during the convolution; modeling a long sequence requires stacking more convolutional layers. | |
<a href = "#onefive">[1.5]</a> | dilated causal convolution | Addresses the vanishing/exploding gradients and training difficulty that deep stacks of causal convolutions suffer on long sequences. By skipping part of the input, dilated convolution achieves a larger receptive field with fewer convolutional layers (see the sketch after this table). | |
<a href = "#twotwo">[2.2]</a> | LSTM | A Long Short-Term Memory (LSTM) network captures the sequential temporal dependency; it was proposed to address the exploding and vanishing gradient issues of the traditional Recurrent Neural Network (RNN). | |
<a href = "#oneseven">[1.7]</a> | GRU | Gated Recurrent Units (GRU), a simple yet powerful variant of the RNN. | |
External factors
Reference | Modules | Description | Architecture |
---|---|---|---|
<a href = "#threeone">[3.1]</a> | External Component | Mainly considers weather, holiday events, and metadata (i.e., DayOfWeek, Weekday/Weekend). To predict flows at time interval $t$, the holiday events and metadata can be obtained directly; for the weather, either the forecast for interval $t$ or the observed weather at interval $t-1$ can be used. | |
<a href = "#twofour">[2.4]</a> | External Factor Fusion | First incorporates temporal factors including time features, meteorological features, and the SensorID that specifies the target sensor; the forecast weather at time interval $t$ can be used. Most of these factors are categorical and cannot be fed to neural networks directly, so each categorical attribute is transformed into a low-dimensional vector by a separate embedding layer (a minimal sketch follows this table). | |
Tricks
Reference | Modules | Description | Architecture |
---|---|---|---|
<a href = "#threeone">[3.1]</a> | Residual connection | As plain networks deepen, training accuracy eventually degrades, and this is not caused by overfitting (with overfitting, training accuracy would remain high). To address this, the authors proposed the deep residual network, whose shortcut connections allow the network to be made much deeper (see the sketch after this table). | |
Attention | |||
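A minimal PyTorch-style sketch of the residual connection described in the table above. The class name is illustrative; the wrapped module is assumed to preserve the input shape.

```python
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Wrap a sub-network f with an identity shortcut: y = x + f(x).
    The shortcut lets gradients bypass f, which is what allows very deep
    networks to keep training instead of degrading with depth."""

    def __init__(self, inner: nn.Module):
        super().__init__()
        self.inner = inner  # inner must map x to a tensor of the same shape

    def forward(self, x):
        return x + self.inner(x)
```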
Relevant papers
All papers have been sorted into folders. If a paper cannot be downloaded from its link, please help yourself to the shared folder. ➡ Link (Code: HC8C)
Survey
[1] Urban Computing: Concepts, Methodologies, and Applications. ACM Transactions on Intelligent Systems and Technology 2014. paper
Yu Zheng, Licia Capra, Ouri Wolfson, Hai Yang
[2] A Comprehensive Survey on Graph Neural Networks. IEEE Transactions on Neural Networks and Learning Systems 2020. paper
Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, Philip S. Yu
[3] Batman or the Joker? The Powerful Urban Computing and its Ethics Issues. SIGSPATIAL 2019. paper
Kaiqun Fu, Abdulaziz Alhamadani, Taoran Ji, Chang-Tien Lu
[4] Deep Learning for Spatio-Temporal Data Mining: A Survey. arXiv paper
Senzhang Wang, Jiannong Cao, Philip S. Yu
[5] Urban flow prediction from spatiotemporal data using machine learning: A survey. Information Fusion 2020. paper
Peng Xie, Tianrui Li, Jia Liu, Shengdong Du, Xin Yang, Junbo Zhang
[6] How to Build a Graph-Based Deep Learning Architecture in Traffic Domain: A Survey. arXiv paper
Jiexia Ye, Juanjuan Zhao, Kejiang Ye, Chengzhong Xu
[7] A Survey on Modern Deep Neural Network for Traffic Prediction: Trends, Methods and Challenges. TKDE 2020. paper
David Alexander Tedjopurnomo, Zhifeng Bao, Baihua Zheng, Farhana Murtaza Choudhury, Kai Qin
[8] A Survey of Hybrid Deep Learning Methods for Traffic Flow Prediction. ICAIP 2019. paper
Yan Shi, Haoran Feng, Xiongfei Geng, Xingui Tang, Yongcai Wang
GNN
[1] Graph Attention Networks. ICLR 2018. paper
Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, Yoshua Bengio
[2] AM-GCN: Adaptive Multi-channel Graph Convolutional Networks. SIGKDD 2020. paper
Xiao Wang, Meiqi Zhu, Deyu Bo, Peng Cui, Chuan Shi, Jian Pei
[3] Heterogeneous Graph Neural Network. SIGKDD 2019. paper
Chuxu Zhang, Dongjin Song, Chao Huang, Ananthram Swami, Nitesh V. Chawla
[4] Adaptive Graph Convolutional Neural Networks. AAAI 2018. paper
Ruoyu Li, Sheng Wang, Feiyun Zhu, Junzhou Huang
[5] Temporal Graph Networks for Deep Learning on Dynamic Graphs. arXiv 2020. paper
Emanuele Rossi, Ben Chamberlain, Fabrizio Frasca, Davide Eynard, Federico Monti, Michael Bronstein
[6] Geom-GCN: Geometric Graph Convolutional Networks. ICLR 2020. paper
Hongbin Pei, Bingzhe Wei, Kevin Chen-Chuan Chang, Yu Lei, Bo Yang
[7] Investigating and Mitigating Degree-Related Biases in Graph Convolutional Networks. CIKM 2020. paper
Xianfeng Tang, Huaxiu Yao, Yiwei Sun, Yiqi Wang, Jiliang Tang, Charu Aggarwal, Prasenjit Mitra, Suhang Wang
[8] TinyGNN: Learning Efficient Graph Neural Networks. KDD 2020. paper
Bencheng Yan, Chaokun Wang, Gaoyang Guo, Yunkai Lou
[9] Graph Neural Architecture Search. IJCAI 2020. paper
Yang Gao, Hong Yang, Peng Zhang, Chuan Zhou, Yue Hu
[10] A Practical Guide to Graph Neural Networks. arXiv 2020. paper
Isaac Ronald Ward, Jack Joyner, Casey Lickfold, Stash Rowe, Yulan Guo, Mohammed Bennamoun
Long-term Dependencies
[1] Learning Long-term Dependencies Using Cognitive Inductive Biases in Self-attention RNNs. PMLR 2020. paper
Giancarlo Kerg, Bhargav Kanuparthi, Anirudh Goyal, Kyle Goyette, Yoshua Bengio, Guillaume Lajoie
[2] Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting. AAAI 2021. note. paper, github
Models | Modules | Architecture | conclusion |
---|---|---|---|
Informer |
Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, Wancai Zhang
1. GNN papers on Traffic forecasting
<p id="oneone">[1.1]</p>Spatio-Temporal Graph Convolutional Networks: A Deep Learning Framework for Traffic Forecasting. IJCAI 2018. note paper, github, code 密码:j6ak.
Models | Modules | Architecture | conclusion |
---|---|---|---|
STGCN | GCN, Gated CNN | This paper uses GCN to model spatial dependence and causal convolution with a GLU gating mechanism to model temporal dependence (see the sketch after this entry). A bottleneck strategy is used in the structure to achieve feature compression. This paper is also the first application of GCN in the field of transportation. |
Bing Yu, Haoteng Yin, Zhanxing Zhu
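A minimal PyTorch-style sketch of a GLU-style gated temporal convolution, in the spirit of the gating mechanism mentioned in the conclusion above rather than a reproduction of STGCN's released code; the class name and kernel size are illustrative.

```python
import torch
import torch.nn as nn


class GatedTemporalConv(nn.Module):
    """Gated temporal convolution (GLU): the convolution outputs twice the
    channels, the result is split into P and Q, and the output is
    P * sigmoid(Q), so the sigmoid branch gates which features pass through."""

    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(channels, 2 * channels, kernel_size)

    def forward(self, x):
        # x: (batch, channels, time); output time length shrinks by kernel_size - 1
        p, q = self.conv(x).chunk(2, dim=1)
        return p * torch.sigmoid(q)
```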
<p id = "onetwo">[1.2]</p>
Dynamic Graph Convolution Network for Traffic Forecasting Based on Latent Network of Laplace Matrix Estimation. TITS 2020. paper
Models | Modules | Architecture | conclusion |
---|---|---|---|
Kan Guo, Yongli Hu, Zhen Qian, Yanfeng Sun, Junbin Gao, Baocai Yin
<p id = "onethree">[1.3]</p>
Spatio-Temporal Graph Structure Learning for Traffic Forecasting. AAAI 2020. paper.
Models | Modules | Architecture | conclusion |
---|---|---|---|
SLC | SLCNN, P3D | This paper proposes a new graph convolution formulation. It argues that a model should learn not only the feature information on the graph but also the structure of the graph, which changes dynamically. P3D is used to model the temporal dependence. |
Qi Zhang, Jianlong Chang, Gaofeng Meng, Shiming Xiang, Chunhong Pan
<p id = "onefour">[1.4]</p>
GMAN: A Graph Multi-Attention Network for Traffic Prediction. AAAI 2020. paper, github, code (password: 4fdh).
Models | Modules | Architecture | conclusion |
---|---|---|---|
GMAN | Encoder-Decoder, ST-Attention, Trans Attention | This paper proposes a spatial-temporal attention mechanism with gated fusion to model complex spatial-temporal correlations. |
Chuanpan Zheng, Xiaoliang Fan, Cheng Wang, Jianzhong Qi
<p id = "onefive">[1.5]</p>
Graph WaveNet for Deep Spatial-Temporal Graph Modeling. IJCAI 2019. paper, github, code (password: acfw).
Models | Modules | Architecture | conclusion |
---|---|---|---|
GWN | GCN with adaptive matrix, Gated TCN | Building on DCRNN, this paper proposes a diffusion convolution with a self-adaptive adjacency matrix, again emphasizing that the graph structure changes dynamically; two node-embedding tables are used to learn the graph structure (see the sketch after this entry). Causal convolution is used to model the temporal dependence, and the overall architecture is similar to WaveNet. |
Zonghan Wu, Shirui Pan, Guodong Long, Jing Jiang, Chengqi Zhang
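A minimal PyTorch-style sketch of a self-adaptive adjacency matrix learned from two node-embedding tables, in the spirit of the mechanism mentioned above rather than a reproduction of Graph WaveNet's released code; the class name, embedding size, and exact normalization are illustrative assumptions.

```python
import torch
import torch.nn as nn


class AdaptiveAdjacency(nn.Module):
    """Learn an adjacency matrix from two node-embedding tables:
    A_adp = softmax(relu(E1 @ E2^T)); relu prunes weak links and the
    row-wise softmax normalizes the result into transition weights."""

    def __init__(self, num_nodes, emb_dim=10):
        super().__init__()
        self.e1 = nn.Parameter(torch.randn(num_nodes, emb_dim))
        self.e2 = nn.Parameter(torch.randn(num_nodes, emb_dim))

    def forward(self):
        return torch.softmax(torch.relu(self.e1 @ self.e2.t()), dim=1)
```

Because the embeddings are trained end to end, the learned adjacency can capture dependencies that are missing from the predefined road-network graph.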
<p id = "onesix">[1.6]</p>
Spatial-Temporal Synchronous Graph Convolutional Networks: A New Framework for Spatial-Temporal Network Data Forecasting. AAAI 2020. paper, github, code (password: 3jkd).
Models | Modules | Architecture | conclusion |
---|---|---|---|
STSGCN | Spatial-Temporal Embedding, STSGCM | This paper proposes a localized spatial-temporal graph: the graph structures of adjacent time slices are combined into one local spatial-temporal graph, yielding a new adjacency matrix that can capture spatial and temporal dependence simultaneously. |
Chao Song, Youfang Lin, Shengnan Guo, Huaiyu Wan
<p id = "oneseven">[1.7]</p>
Diffusion Convolutional Recurrent Neural Network: Data-Driven Traffic Forecasting. ICLR 2018. paper, github, code (password: ba0q).
Models | Modules | Architecture | conclusion |
---|---|---|---|
DCRNN | Diffusion Convolutional Layer, encoder-decoder, GRU | This paper proposes a random-walk-based diffusion convolution to model spatial dependence (see the sketch after this entry), and uses GRU to model the temporal dependence. |
Yaguang Li, Rose Yu, Cyrus Shahabi, Yan Liu
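A minimal PyTorch-style sketch of a K-step diffusion convolution, in the spirit of the random-walk diffusion mentioned above rather than a reproduction of DCRNN's released code; the class name and number of steps are illustrative, and the bidirectional variant would additionally diffuse along the reversed graph.

```python
import torch
import torch.nn as nn


class DiffusionConv(nn.Module):
    """K-step diffusion convolution: sum_k (D^-1 A)^k X W_k, i.e. node
    features diffused along the random-walk transition matrix, with a
    separate linear map per diffusion step."""

    def __init__(self, in_dim, out_dim, k_steps=2):
        super().__init__()
        self.k_steps = k_steps
        self.weights = nn.ModuleList(
            [nn.Linear(in_dim, out_dim, bias=False) for _ in range(k_steps + 1)]
        )

    def forward(self, x, adj):
        # x: (num_nodes, in_dim); adj: (num_nodes, num_nodes) raw adjacency
        p = adj / adj.sum(dim=1, keepdim=True).clamp(min=1e-6)  # random-walk matrix D^-1 A
        out = self.weights[0](x)  # k = 0 term (no diffusion)
        h = x
        for k in range(1, self.k_steps + 1):
            h = p @ h  # diffuse one more step along the graph
            out = out + self.weights[k](h)
        return out
```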
<p id = "oneeight">[1.8]</p>
Attention Based Spatial-Temporal Graph Convolutional Networks for Traffic Flow Forecasting. AAAI 2019. paper, github, code (password: nbje).
Models | Modules | Architecture | conclusion |
---|---|---|---|
ASTGCN | Spatial attention, Temporal attention, GCN, TCN | The model combines the spatial-temporal attention mechanism with spatial-temporal convolution, including graph convolutions in the spatial dimension and standard convolutions in the temporal dimension, to simultaneously capture the dynamic spatial-temporal characteristics of traffic data. |
Shengnan Guo, Youfang Lin, Ning Feng, Chao Song, Huaiyu Wan
<p id = "onenine">[1.9]</p>
ST-GRAT: A Novel Spatio-temporal Graph Attention Network for Accurately Forecasting Dynamically Changing Road Speed. CIKM 2020. paper
Models | Modules | Architecture | conclusion |
---|---|---|---|
ST-GRAT | Encoder-Decoder, Embedding, Spatial Attention, Temporal Attention | This paper presents ST-GRAT, which uses novel spatial and temporal attention for accurate traffic speed prediction. Spatial attention captures the spatial correlation among roads using graph structure information, while temporal attention captures the temporal dynamics of the road network by directly attending to features in long sequences. |
Cheonbok Park, Chunggi Lee, Hyojin Bahng, Yunwon Tae, Seungmin Jin, Kihwan Kim, Sungahn Ko, Jaegul Choo
<p id = "oneten">[1.10]</p>
Temporal Multi-Graph Convolutional Network for Traffic Flow Prediction. TITS 2020. paper
Models | Modules | Architecture | conclusion |
---|---|---|---|
Mingqi Lv, Zhaoxiong Hong, Ling Chen, Tieming Chen, Tiantian Zhu, Shouling Ji
<p id = "oneoneone">[1.11]</p>
Hybrid Spatio-Temporal Graph Convolutional Network: Improving Traffic Prediction with Navigation Data. SIGKDD 2020. paper
Models | Modules | Architecture | conclusion |
---|---|---|---|
Rui Dai, Shenkun Xu, Qian Gu, Chenguang Ji, Kaikui Liu
<p id = "oneonetwo">[1.12]</p>
Multi-Range Attentive Bicomponent Graph Convolutional Network for Traffic Forecasting. AAAI 2020. paper
Models | Modules | Architecture | conclusion |
---|---|---|---|
Weiqi Chen, Ling Chen, Yu Xie, Wei Cao, Yusong Gao, Xiaojie Feng
<p id = "oneonethree">[1.13]</p>
LSGCN: Long Short-Term Traffic Prediction with Graph Convolutional Networks. IJCAI 2020. paper
Models | Modules | Architecture | conclusion |
---|---|---|---|
Rongzhou Huang, Chuyin Huang, Yubao Liu, Genan Dai, Weiyang Kong
<p id = "oneonefour">[1.14]</p>
Optimized Graph Convolution Recurrent Neural Network for Traffic Prediction. TITS 2020. paper
Models | Modules | Architecture | conclusion |
---|---|---|---|
Kan Guo, Yongli Hu, Zhen Qian, Hao Liu, Ke Zhang, Yanfeng Sun, Junbin Gao, Baocai Yin
<p id = "oneonefive">[1.15]</p>
Dynamic Graph Convolution Network for Traffic Forecasting Based on Latent Network of Laplace Matrix Estimation. TITS 2020. paper
Models | Modules | Architecture | conclusion |
---|---|---|---|
Kan Guo, Yongli Hu, ZhenQian, Yanfeng Sun, Junbin Gao, Baocai Yin
<p id = "oneonesix">[1.16]</p>
GSTNet: Global Spatial-Temporal Network for Traffic Flow Prediction. IJCAI 2019. paper
Models | Modules | Architecture | conclusion |
---|---|---|---|
Shen Fang, Qi Zhang, Gaofeng Meng, Shiming Xiang, Chunhong Pan
<p id = "oneoneseven">[1.17]</p>
Short-Term Traffic Flow Forecasting Method With M-B-LSTM Hybrid Network. TITS 2020. paper
Models | Modules | Architecture | conclusion |
---|---|---|---|
Qu Zhaowei, Li Haitao, Li Zhihui, Zhong Tao
<p id = "oneoneeight">[1.18]</p>
Traffic Graph Convolutional Recurrent Neural Network: A Deep Learning Framework for Network-Scale Traffic Learning and Forecasting. TITS 2019. paper
Models | Modules | Architecture | conclusion |
---|---|---|---|
Zhiyong Cui, Kristian Henrickson, Ruimin Ke, Ziyuan Pu, Yinhai Wang
2. Other methods on Traffic forecasting
<p id = "twoone">[2.1]</p>Urban Traffic Prediction from Spatio-Temporal Data Using Deep Meta Learning. SIGKDD 2019. paper
Models | Modules | Architecture | conclusion |
---|---|---|---|
Zheyi Pan , Yuxuan Liang , Weifeng Wang, Yong Yu, Yu Zheng, Junbo Zhang
<p id = "twotwo">[2.2]</p>
Revisiting Spatial-Temporal Similarity: A Deep Learning Framework for Traffic Prediction. AAAI 2019. paper, github, code (password: 7hu9)
Models | Modules | Architecture | conclusion |
---|---|---|---|
STDN | CNN, LSTM, Attention, FGM | This paper operates on grid data, using CNN for spatial dependence modeling, LSTM for temporal dependence modeling, and introducing an attention mechanism to model periodic changes in time. |
Huaxiu Yao, Xianfeng Tang, Hua Wei, Guanjie Zheng, Zhenhui Li
<p id = "twothree">[2.3]</p>
Deep Spatial–Temporal 3D Convolutional Neural Networks for Traffic Data Forecasting. TITS 2019. paper
Models | Modules | Architecture | conclusion |
---|---|---|---|
Shengnan Guo, Youfang Lin, Shijie Li, Zhaoming Chen, and Huaiyu Wan
<p id = "twofour">[2.4]</p>
GeoMAN: Multi-level Attention Networks for Geo-sensory Time Series Prediction. IJCAI 2018. paper, github, code (password: vavc)
Models | Modules | Architecture | conclusion |
---|---|---|---|
GeoMAN | Spatial Attention, Temporal Attention, LSTM, Encoder-decoder | This paper applies local and global spatial attention mechanisms to capture dynamic correlations between sensors, and temporal attention to adaptively select the relevant time steps for prediction. In addition, the proposed model accounts for the influence of external factors through a fusion module. |
Yuxuan Liang, Songyu Ke, Junbo Zhang, Xiuwen Yi, Yu Zheng
<p id = "twofive">[2.5]</p>
Preserving Dynamic Attention for Long-Term Spatial-Temporal Prediction. SIGKDD 2020. paper
Models | Modules | Architecture | conclusion |
---|---|---|---|
Haoxing Lin, Rufan Bai, Weijia Jia, Xinyu Yang, Yongjian You
<p id = "twosix">[2.6]</p>
Self-Attention ConvLSTM for Spatiotemporal Prediction. AAAI 2020. paper
Models | Modules | Architecture | conclusion |
---|---|---|---|
SA-ConvLSTM | self-attention, ConvLSTM, Self-Attention Memory Module | This paper captures long-term spatial and temporal dependence with a self-attention memory module, which is combined with ConvLSTM for spatiotemporal prediction. |
Zhihui Lin, Maomao Li, Zhuobin Zheng, Yangyang Cheng, Chun Yuan
3. Flows Prediction
<p id = "threeone">[3.1]</p>Deep Spatio-Temporal Residual Networks for Citywide Crowd Flows Prediction. AAAI 2017. paper
Models | Modules | Architecture | conclusion |
---|---|---|---|
ST-ResNet | CNN, Extract key frames | This paper operates on grid data, using a CNN with residual connections to model spatial dependence and key-frame extraction to model the trend, periodicity, and closeness components in the time dimension, which are fused with learnable weights (see the sketch after this entry); external factors are also considered. |
Junbo Zhang, Yu Zheng, Dekang Qi
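A minimal PyTorch-style sketch of the parametric, element-wise fusion of the closeness/period/trend branches mentioned above, in the spirit of ST-ResNet rather than a reproduction of its released code; the class name and tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn


class WeightedFusion(nn.Module):
    """Element-wise parametric fusion of the three temporal branches:
    X = W_c * X_c + W_p * X_p + W_t * X_t, where the W's are learnable
    weight maps with the same shape as each branch's output map."""

    def __init__(self, map_shape):
        super().__init__()
        self.w_c = nn.Parameter(torch.randn(*map_shape))
        self.w_p = nn.Parameter(torch.randn(*map_shape))
        self.w_t = nn.Parameter(torch.randn(*map_shape))

    def forward(self, x_c, x_p, x_t):
        # each input: (batch, *map_shape) output of the closeness/period/trend branch
        return self.w_c * x_c + self.w_p * x_p + self.w_t * x_t
```

Learning a separate weight per location lets the model decide, region by region, how much recent, daily, and weekly history should contribute to the prediction.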
<p id = "threetwo">[3.2]</p>
UrbanFM: Inferring Fine-Grained Urban Flows. SIGKDD 2019. paper
Models | Modules | Architecture | conclusion |
---|---|---|---|
UrbanFM | CNN, Upsampling, SubPixel Block | This paper is analogous to super-resolution for images: it aims to infer fine-grained urban flow in a city from coarse-grained observations. |
Yuxuan Liang, Kun Ouyang, Lin Jing, Sijie Ruan, Ye Liu, Junbo Zhang, David S. Rosenblum, Yu Zheng
<p id = "threethree">[3.3]</p>
DeepSTD: Mining Spatio-Temporal Disturbances of Multiple Context Factors for Citywide Traffic Flow Prediction. TITS 2019. paper
Models | Modules | Architecture | conclusion |
---|---|---|---|
Chuanpan Zheng, Xiaoliang Fan, Chenglu Wen, Longbiao Chen, Cheng Wang, Jonathan Li
<p id = "threefour">[3.4]</p>
Dynamic Spatial-Temporal Representation Learning for Traffic Flow Prediction. TITS 2020. paper
Models | Modules | Architecture | conclusion |
---|---|---|---|
Lingbo Liu, Jiajie Zhen, Guanbin Li, Geng Zhan, Zhaocheng He, Bowen Du, Liang Lin
<p id = "threefive">[3.5]</p>
AutoST: Efficient Neural Architecture Search for Spatio-Temporal Prediction. SIGKDD 2020. paper
Models | Modules | Architecture | conclusion |
---|---|---|---|
Ting Li, Junbo Zhang, Kainan Bao, Yuxuan Liang, Yexin Li, Yu Zheng
<p id = "threesix">[3.6]</p>
Flow Prediction in Spatio-Temporal Networks Based on Multitask Deep Learning. TKDE 2020. paper
Models | Modules | Architecture | conclusion |
---|---|---|---|
Junbo Zhang, Yu Zheng, Junkai Sun, Dekang Qi
<p id = "threeseven">[3.7]</p>
Multi-Graph Convolutional Network for Short-Term Passenger Flow Forecasting in Urban Rail Transit. IET Intelligent Transport Systems 2020. paper
Models | Modules | Architecture | conclusion |
---|---|---|---|
Jinlei Zhang, Feng Chen, Yinan Guo, Xiaohong Li
<p id = "threeeight">[3.8]</p>
Revisiting Convolutional Neural Networks for Citywide Crowd Flow Analytics. arXiv 2020. paper
Models | Modules | Architecture | conclusion |
---|---|---|---|
Yuxuan Liang, Kun Ouyang, Yiwei Wang, Ye Liu, Junbo Zhang, Yu Zheng, David S. Rosenblum
<p id = "threenine">[3.9]</p>
Citywide Traffic Flow Prediction Based on Multiple Gated Spatio-temporal Convolutional Neural Networks. TKDD 2020. paper
Models | Modules | Architecture | conclusion |
---|---|---|---|
Cen Chen, Kenli Li, Sin G. Teo, Xiaofeng Zou, Keqin Li, Zeng Zeng
<p id = "threeten">[3.10]</p>
Physical-Virtual Collaboration Modeling for Intra-and Inter-Station Metro Ridership Prediction. arXiv 2020. paper
Models | Modules | Architecture | conclusion |
---|---|---|---|
Lingbo Liu, Jingwen Chen, Hefeng Wu, Jiajie Zhen, Guanbin Li, Liang Lin
<p id = "threeoneone">[3.11]</p>
Predicting Citywide Crowd Flows in Irregular Regions Using Multi-View Graph Convolutional Networks. TKDE 2020. paper
Models | Modules | Architecture | conclusion |
---|---|---|---|
Junkai Sun, Junbo Zhang, Qiaofei Li, Xiuwen Yi, Yuxuan Liang, Yu Zheng
<p id = "threeonetwo">[3.12]</p>
Spatial-Temporal Convolutional Graph Attention Networks for Citywide Traffic Flow Forecasting. CIKM 2020. note, paper, code
Models | Modules | Architecture | conclusion |
---|---|---|---|
ST-CGA | | i) captures the multiple granularity-aware temporal factors that govern the dynamic transition regularities of traffic flow; ii) models the high-order spatial relation structures with a channel-aware convolutional graph learning model; iii) integrates the collaborative signals from spatial, temporal, and semantic dimensions. |
Xiyue Zhang, Chao Huang, Yong Xu, Lianghao Xia
4. Demand Prediction
<p id = "fourone">[4.1]</p>Deep Multi-View Spatial-Temporal Network for Taxi Demand Prediction. AAAI 2018. paper
Models | Modules | Architecture | conclusion |
---|---|---|---|
Huaxiu Yao, Fei Wu, Jintao Ke, Xianfeng Tang, Yitian Jia, Siyu Lu, Pinghua Gong, Jieping Ye, Zhenhui Li
<p id = "fourtwo">[4.2]</p>
Origin-Destination Matrix Prediction via Graph Convolution: a New Perspective of Passenger Demand Modeling. SIGKDD 2019. paper
Models | Modules | Architecture | conclusion |
---|---|---|---|
Yuandong Wang, Hongzhi Yin, Hongxu Chen, Tianyu Wo, Jie Xu, Kai Zheng
<p id = "fourthree">[4.3]</p>
STG2Seq: Spatial-temporal Graph to Sequence Model for Multi-step Passenger Demand Forecasting. IJCAI 2019. paper
Models | Modules | Architecture | conclusion |
---|---|---|---|
Lei Bai, Lina Yao , Salil.S Kanhere, Xianzhi Wang, Quan.Z Sheng
<p id = "fourfour">[4.4]</p>
Taxi Demand Prediction Using Parallel Multi-Task Learning Model. TITS 2020. paper
Models | Modules | Architecture | conclusion |
---|---|---|---|
Chizhan Zhang, Fenghua Zhu, Xiao Wang, Leilei Sun, Haina Tang, Yisheng Lv
<p id = "fourfive">[4.5]</p>
Traffic Demand Prediction Based on Dynamic Transition Convolutional Neural Network. TITS 2020. paper
Models | Modules | Architecture | conclusion |
---|---|---|---|
Bowen Du, Xiao Hu, Leilei Sun, Junming Liu, Yanan Qiao, Weifeng Lv
5. Travel time or Arrival time
<p id = "fiveone">[5.1]</p>HetETA: Heterogeneous Information Network Embedding for Estimating Time of Arrival. SIGKDD 2020. paper github, code 密码:eag2
Models | Modules | Architecture | conclusion |
---|---|---|---|
HetETA | GatedCNNs, GCN | In this paper, the traffic network structure is built by mining deeper semantic information of the road network. HetETA combines gated convolutional neural networks and graph neural networks to capture the correlations in the spatiotemporal information. |
Huiting Hong, Yucheng Lin, Xiaoqing Yang, Zang Li, Kun Fu, Zheng Wang, Xiaohu Qie, Jieping Ye
<p id = "fivetwo">[5.2]</p>
CompactETA: A Fast Inference System for Travel Time Prediction. KDD 2020. paper
Models | Modules | Architecture | conclusion |
---|---|---|---|
Kun Fu, Fanlin Meng, Jieping Ye, Zheng Wang
<p id = "fivethree">[5.3]</p>
Spatiotemporal Multi-Graph Convolution Network for Ride-hailing Demand Forecasting. AAAI 2019. paper.
Models | Modules | Architecture | conclusion |
---|---|---|---|
STMGCN | GCN, CGRNN | This paper models complex spatial relationships by constructing multiple graphs, captures temporal dependencies with a contextual gated RNN, and captures spatial dependencies with GCN. |
Xu Geng, Yaguang Li, Leye Wang, Lingyu Zhang, Qiang Yang, Jieping Ye, Yan Liu
Contributors
<a href="https://github.com/SuperSupeng"><img src="https://avatars2.githubusercontent.com/u/20471278?s=460&u=f62611f65c6c368293c0fd73b92aac7d7219b71&v=4" width=98px></img></a> <a href="https://github.com/chenwangnatsukashii"><img src="https://avatars1.githubusercontent.com/u/32348047?s=460&v=4" width=98px></img></a> <a href="https://github.com/zhangjunming123"><img src="https://avatars2.githubusercontent.com/u/38833798?s=460&v=4" width=98px></img></a> <a href="https://github.com/Sylvia822"><img src="https://avatars0.githubusercontent.com/u/63226742?s=460&u=f8485c2378d4454cdedb483e35d9aa603e687e78&v=4" width=98px></img></a> <a href="https://github.com/makeittrue"><img src="https://avatars2.githubusercontent.com/u/39159570?s=460&u=af6cc2ccda8ea0fbf1ab0a440f74887e65b6d18d&v=4" width=98px></img></a>