Appendix B: References
Antonov, M., Csárdi, G., Horvát, S., Müller, K., Nepusz, T., Noom, D.,
Salmon, M., Traag, V., Welles, B. F., and Zanini, F. (2023),
“igraph enables fast and robust network analysis across
programming languages,” arXiv preprint arXiv:2311.10260.
https://doi.org/10.48550/arXiv.2311.10260.
Bahdanau, D., Cho, K., and Bengio, Y. (2016), “Neural machine translation by
jointly learning to align and translate,” arXiv preprint arXiv:1409.0473.
Bergstra, J., and Bengio, Y. (2012), “Random search for
hyper-parameter optimization,” Journal of Machine
Learning Research, 13, 281–305.
Chen, S. (2015), “Beijing PM2.5,” UCI Machine
Learning Repository.
Chen, Y., Hao, Y., Rakthanmanon, T., Zakaria, J., Hu, B., and Keogh, E.
(2015), “A general framework for never-ending learning from time
series streams,” Data Mining and Knowledge Discovery, USA: Kluwer
Academic Publishers, 29, 1622–1664. https://doi.org/10.1007/s10618-014-0388-4.
Falbel, D. (2025), luz: Higher level 'API' for 'torch'. https://doi.org/10.32614/CRAN.package.luz.
Falbel, D., and Luraschi, J. (2025), torch: Tensors and neural networks
with 'GPU' acceleration. https://doi.org/10.32614/CRAN.package.torch.
Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley,
D., Ozair, S., Courville, A., and Bengio, Y. (2014), “Generative adversarial
networks,” arXiv preprint arXiv:1406.2661.
He, X., Deng, K., Wang, X., Li, Y., Zhang, Y., and Wang, M. (2020),
“LightGCN: Simplifying and powering graph convolution network for
recommendation,” arXiv preprint arXiv:2002.02126.
Hochreiter, S., and Schmidhuber, J. (1997), “Long short-term
memory,” Neural Computation, MIT Press, 9, 1735–1780.
Kang, W.-C., and McAuley, J. (2018), “Self-attentive sequential
recommendation,” arXiv preprint arXiv:1808.09781.
Kipf, T. N., and Welling, M. (2017), “Semi-supervised classification
with graph convolutional networks,” arXiv preprint arXiv:1609.02907.
Kuhn, M. (2025), modeldata: Data sets useful for modeling
examples. https://doi.org/10.32614/CRAN.package.modeldata.
LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. (1998),
“Gradient-based learning applied to document recognition,”
Proceedings of the IEEE, 86, 2278–2324. https://doi.org/10.1109/5.726791.
Maas, A. L., Daly, R. E., Pham, P. T., Huang, D., Ng, A. Y., and Potts,
C. (2011), “Learning word vectors
for sentiment analysis,” in Proceedings of the 49th
annual meeting of the association for computational linguistics: Human
language technologies, Portland, Oregon, USA: Association for
Computational Linguistics, pp. 142–150.
Pennington, J., Socher, R., and Manning, C. D. (2014), “GloVe: Global vectors
for word representation,” in Proceedings of the 2014 Conference on Empirical
Methods in Natural Language Processing (EMNLP), pp. 1532–1543.
Press, O., Smith, N. A., and Lewis, M. (2022), “Train short, test long:
Attention with linear biases enables input length
extrapolation,” arXiv preprint arXiv:2108.12409.
Ren, S., He, K., Girshick, R., and Sun, J. (2017), “Faster R-CNN:
Towards real-time object detection with region proposal
networks,” IEEE Transactions on Pattern Analysis and Machine
Intelligence, 39, 1137–1149. https://doi.org/10.1109/TPAMI.2016.2577031.
Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., and Chen, L.-C. (2019),
“MobileNetV2: Inverted residuals and linear bottlenecks,” arXiv
preprint arXiv:1801.04381.
Simmler, M., and Brouwers, S. P. (2024), “triact package for R: Analyzing
the lying behavior of cows from accelerometer data,” PeerJ, 12, e17036.
https://doi.org/10.7717/peerj.17036.
Simonyan, K., and Zisserman, A. (2015), “Very deep convolutional networks
for large-scale image recognition,” arXiv preprint arXiv:1409.1556.
Smith, L. N. (2017), “Cyclical learning rates for
training neural networks,” arXiv preprint arXiv:1506.01186.
Sun, F., Liu, J., Wu, J., Pei, C., Lin, X., Ou, W., and Jiang, P.
(2019), “BERT4Rec: Sequential recommendation with bidirectional encoder
representations from transformer,” arXiv preprint arXiv:1904.06690.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez,
A. N., Kaiser, L., and Polosukhin, I. (2017), “Attention is all you
need,” CoRR, abs/1706.03762.
Wickham, H. (2014), “Tidy data,” Journal of Statistical
Software, 59, 1–23. https://doi.org/10.18637/jss.v059.i10.