時系列データ3 (Time Series Data 3)
graySpace999 · 2,110 views · 6 slides
1.
Time Series Data Analysis 3: Time Dependence in Time Series and the Autoregressive Model
2.
Expressing time dependence:
- What is the correlation between data points shifted in time? → Autocorrelation
- What is the correlation between points two or more steps apart (with the intermediate points controlled for)? → Partial autocorrelation

[Q&A]
Q: How can we check whether a series has a time-dependent structure?
A: Shift the series in time and measure its correlation with itself (i.e., examine its autocorrelation).

[Definitions]
- Lag: the time shift applied when measuring autocorrelation.
- Correlogram: a plot of the autocorrelation coefficient against the lag.
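The autocorrelation check described above can be sketched with numpy alone; the `acf` helper and the example series below are illustrative, not from the slides:

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation coefficients for lags 0..max_lag."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    denom = np.sum(xc ** 2)
    return np.array([np.sum(xc[k:] * xc[:n - k]) / denom
                     for k in range(max_lag + 1)])

rng = np.random.default_rng(0)
trend = np.cumsum(rng.normal(size=500))  # random walk: strong time dependence
noise = rng.permutation(trend)           # same values, time order destroyed

print(acf(trend, 1)[1])  # lag-1 autocorrelation: close to 1
print(acf(noise, 1)[1])  # close to 0 once the ordering is shuffled
```

Plotting these coefficients against the lag gives exactly the correlogram the slide defines.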
3.
Statistical hypothesis tests for autocorrelation:

No. | Null hypothesis                          | Test
1   | The series has no autocorrelation        | Ljung-Box test
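The Ljung-Box Q statistic behind that test is Q = n(n+2) Σₖ ρ̂ₖ²/(n−k) over lags 1..h, compared against a χ² critical value with h degrees of freedom. A minimal numpy sketch (function name and example data are mine, not from the slides):

```python
import numpy as np

def ljung_box_q(x, h):
    """Ljung-Box Q statistic for lags 1..h (null: no autocorrelation)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    denom = np.sum(xc ** 2)
    rho = np.array([np.sum(xc[k:] * xc[:n - k]) / denom
                    for k in range(1, h + 1)])
    return n * (n + 2) * np.sum(rho ** 2 / (n - np.arange(1, h + 1)))

CHI2_95_DF10 = 18.307  # chi-squared 95th percentile, 10 degrees of freedom

rng = np.random.default_rng(1)
white = rng.normal(size=300)             # i.i.d. noise: null plausible
walk = np.cumsum(rng.normal(size=300))   # random walk: strong autocorrelation

# Reject the null when Q exceeds the critical value.
print(ljung_box_q(walk, 10) > CHI2_95_DF10)   # autocorrelated series: rejected
print(ljung_box_q(white, 10))                 # typically small for white noise
```

Production code would normally use `statsmodels.stats.diagnostic.acorr_ljungbox` instead of hand-rolling the statistic.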
4.
Stationarity of time series data. Under the premise that "the data are independently drawn samples," time dependence cannot be studied, so we relax that constraint and look for suitable weaker conditions.

Weak stationarity:
1. The mean is constant over time.
2. The variance is constant over time.
3. The autocovariance depends only on the lag h (it is constant over time).

White noise:
1. The mean is 0.
2. The variance is constant.
3. The autocovariance is 0 (at every nonzero lag).
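The contrast between white noise (weakly stationary) and a non-stationary series can be checked empirically by estimating the variance at different time points across many simulated paths; a rough numpy sketch under those assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps = 2000, 200
eps = rng.normal(size=(n_paths, n_steps))  # white noise paths
walk = np.cumsum(eps, axis=1)              # random walks built from them

# Variance across paths at a fixed time point:
# roughly constant for white noise, growing ~ t for the random walk.
print(eps[:, 190].var() / eps[:, 10].var())    # near 1
print(walk[:, 190].var() / walk[:, 10].var())  # much greater than 1
```

The random walk violates condition 2 of weak stationarity (its variance grows with time), which is why it needs differencing before standard stationary-model tools apply.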
5.
The autoregressive model: Rt = μ + Φ·Rt−1 + εt
1. εt is white noise → it has no autocorrelation (it does not depend on past values).
2. |Φ| < 1 is the condition for Rt to be stationary.
3. When Φ = 1, the series has a unit root.
Note: check first whether the target data has a unit root.
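A quick simulation of the AR(1) form above illustrates the role of |Φ| < 1 versus Φ = 1 (function and variable names are mine, not from the slides):

```python
import numpy as np

def simulate_ar1(mu, phi, n, rng):
    """R_t = mu + phi * R_{t-1} + eps_t, with standard normal eps_t."""
    r = np.zeros(n)
    for t in range(1, n):
        r[t] = mu + phi * r[t - 1] + rng.normal()
    return r

rng = np.random.default_rng(3)
stationary = simulate_ar1(0.0, 0.5, 5000, rng)  # |phi| < 1: stationary
unit_root = simulate_ar1(0.0, 1.0, 5000, rng)   # phi = 1: random walk

# The stationary series fluctuates around mu / (1 - phi) with bounded
# spread; the unit-root series wanders with ever-growing spread.
print(stationary.mean())                    # close to 0
print(stationary.std() < unit_root.std())   # True
```

This is why the slide's final note matters: a unit-root series (Φ = 1) must be detected (e.g. with an ADF test) and typically differenced before fitting a stationary AR model.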