26. References for "Positioning of This Research (1)(2)" (Part 1: Papers)
[SIFT]Distinctive Image Features from Scale-Invariant Keypoints
https://www.robots.ox.ac.uk/~vgg/research/affine/det_eval_files/lowe_ijcv2004.pdf
[AlexNet]ImageNet Classification with Deep Convolutional Neural Networks
https://www.cs.toronto.edu/~kriz/imagenet_classification_with_deep_convolutional.pdf
[ResNet]Deep Residual Learning for Image Recognition
https://arxiv.org/abs/1512.03385
[VGG]Very Deep Convolutional Networks for Large-Scale Image Recognition
https://arxiv.org/abs/1409.1556
[YOLO]You Only Look Once: Unified, Real-Time Object Detection
https://arxiv.org/abs/1506.02640
[SSD]SSD: Single Shot MultiBox Detector
https://arxiv.org/abs/1512.02325
[DQN]Playing Atari with Deep Reinforcement Learning
http://www.cs.toronto.edu/~vmnih/docs/dqn.pdf
[Rainbow]Rainbow: Combining Improvements in Deep Reinforcement Learning
https://arxiv.org/abs/1710.02298
[R2D2]R2D2: Repeatable and Reliable Detector and Descriptor
https://arxiv.org/abs/1906.06195
[LDA]D. Blei, A. Ng, and M. Jordan, "Latent Dirichlet Allocation", Journal of Machine Learning Research, vol. 3, 2003, pp. 993-1022.
http://jmlr.csail.mit.edu/papers/v3/blei03a.html
[Word2Vec]word2vec Explained: Deriving Mikolov et al.'s Negative-Sampling Word-Embedding Method
https://arxiv.org/pdf/1402.3722.pdf
[BERT]BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://arxiv.org/abs/1810.04805