Journal: Jurnal Nasional Teknik Elektro dan Teknologi Informasi (JNTETI), Vol. 6, No. 1 (2017)
Implementasi Q-Learning dan Backpropagation pada Agen yang Memainkan Permainan Flappy Bird
By: Ardiansyah Ardiansyah, Ednawati Rainarli, JNTETI
Created: 2017-02-10, with 1 file
Keywords: Flappy Bird, Q-Learning, Value-Function Approximation, Artificial Neural Network, Backpropagation
Url : http://ejnteti.jteti.ugm.ac.id/index.php/JNTETI/article/view/287
Document source: Web
This paper shows how to implement a combination of Q-learning and backpropagation for an agent that learns to play the Flappy Bird game. Q-learning and backpropagation are combined to predict the value function of each action, a technique known as value-function approximation. Value-function approximation is used to reduce both the learning time and the number of weights stored in memory; previous studies using regular reinforcement learning alone required longer learning times and stored more weights. The artificial neural network (ANN) architecture used in this study is one ANN per action. The results show that combining Q-learning and backpropagation reduces the agent's learning time to play Flappy Bird by up to 92% and reduces the weights stored in memory by up to 94%, compared to regular Q-learning alone. Although the learning time and the stored weights are reduced, Q-learning combined with backpropagation has the same ability as regular Q-learning to play the Flappy Bird game.
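As an illustration of the approach summarized in the abstract, the sketch below shows how Q-learning can be combined with backpropagation using one small neural network per action as the value-function approximator. This is a minimal sketch under stated assumptions, not the authors' code: the state features, network sizes, learning rate, discount factor, and the placeholder random transitions are all illustrative, and a real agent would query a Flappy Bird simulator instead of the dummy transitions shown here.

```python
import numpy as np

class ActionValueNet:
    """One small MLP per action: approximates Q(s, a) for a fixed action a."""

    def __init__(self, n_inputs, n_hidden=8, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_inputs, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, 1))
        self.b2 = np.zeros(1)
        self.lr = lr

    def predict(self, s):
        """Forward pass: return the Q estimate and the hidden activations."""
        h = np.tanh(s @ self.W1 + self.b1)
        q = float(h @ self.W2 + self.b2)
        return q, h

    def train(self, s, target):
        """One backpropagation step on the squared TD error 0.5 * (Q - target)^2."""
        q, h = self.predict(s)
        err = q - target                            # dLoss/dQ
        dW2 = np.outer(h, err)                      # output-layer gradients
        db2 = np.array([err])
        dh = err * self.W2[:, 0] * (1.0 - h ** 2)   # tanh' = 1 - tanh^2
        dW1 = np.outer(s, dh)                       # hidden-layer gradients
        db1 = dh
        self.W2 -= self.lr * dW2
        self.b2 -= self.lr * db2
        self.W1 -= self.lr * dW1
        self.b1 -= self.lr * db1


def epsilon_greedy(nets, s, rng, epsilon=0.1):
    """Pick a random action with probability epsilon, else the greedy one."""
    if rng.random() < epsilon:
        return int(rng.integers(len(nets)))
    return int(np.argmax([net.predict(s)[0] for net in nets]))


def q_update(nets, s, a, r, s_next, done, gamma=0.95):
    """Q-learning target r + gamma * max_a' Q(s', a'), regressed by backprop."""
    target = r if done else r + gamma * max(net.predict(s_next)[0] for net in nets)
    nets[a].train(s, target)


if __name__ == "__main__":
    # Hypothetical 3-feature state (e.g. pipe distances and bird velocity);
    # actions: 0 = do nothing, 1 = flap.
    nets = [ActionValueNet(n_inputs=3, seed=i) for i in range(2)]
    rng = np.random.default_rng(0)
    s = rng.normal(size=3)
    for step in range(1000):
        a = epsilon_greedy(nets, s, rng)
        # Placeholder transition; a real agent would take the next state,
        # reward, and terminal flag from the Flappy Bird simulator here.
        s_next = rng.normal(size=3)
        r = 1.0 if rng.random() > 0.1 else -1.0
        done = r < 0.0
        q_update(nets, s, a, r, s_next, done)
        s = rng.normal(size=3) if done else s_next
```

Training one separate network per action keeps each network tiny (a single scalar output), which matches the paper's motivation of storing fewer weights than a tabular Q-learning agent; the TD target is computed from the maximum prediction across all per-action networks.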
| Property | Value |
|---|---|
| Publisher ID | gdlhub |
| Organization | JNTETI |
| Contact Name | Herti Yani, S.Kom |
| Address | Jln. Jenderal Sudirman |
| City | Jambi |
| Region | Jambi |
| Country | Indonesia |
| Phone | 0741-35095 |
| Fax | 0741-35093 |
| Administrator E-mail | elibrarystikom@gmail.com |
| CKO E-mail | elibrarystikom@gmail.com |
Contributors:
- Editor: Calvin
Download:
Download is available to members only.
File: 287-446-1-SM.pdf (1479852 bytes)