Please use this identifier to cite or link to this item:
https://www.um.edu.mt/library/oar/handle/123456789/70594
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kain, Verena | - |
dc.contributor.author | Hirlander, Simon | - |
dc.contributor.author | Goddard, Brennan | - |
dc.contributor.author | Velotti, Francesco Maria | - |
dc.contributor.author | Zevi Della Porta, Giovanni | - |
dc.contributor.author | Bruchon, Niky | - |
dc.contributor.author | Valentino, Gianluca | - |
dc.date.accessioned | 2021-03-08T10:47:01Z | - |
dc.date.available | 2021-03-08T10:47:01Z | - |
dc.date.issued | 2020-12 | - |
dc.identifier.citation | Kain, V., Hirlander, S., Goddard, B., Velotti, F. M., Zevi Della Porta, G., Bruchon, N., & Valentino, G. (2020). Sample-efficient reinforcement learning for CERN accelerator control. Physical Review Accelerators and Beams, 23(12), 124801. | en_GB |
dc.identifier.uri | https://www.um.edu.mt/library/oar/handle/123456789/70594 | - |
dc.description.abstract | Numerical optimization algorithms are already established tools to increase and stabilize the performance of particle accelerators. These algorithms have many advantages, are available out of the box, and can be adapted to a wide range of optimization problems in accelerator operation. The next boost in efficiency is expected to come from reinforcement learning algorithms, which learn the optimal policy for a certain control problem and hence, once trained, can do without the time-consuming exploration phase needed for numerical optimizers. To investigate this approach, continuous model-free reinforcement learning with up to 16 degrees of freedom was developed and successfully tested at various facilities at CERN. The approach and algorithms used are discussed, and the results obtained for trajectory steering at the AWAKE electron line and LINAC4 are presented. The necessary next steps, such as uncertainty-aware model-based approaches, and the potential for future applications at particle accelerators are addressed. | en_GB |
dc.language.iso | en | en_GB |
dc.publisher | American Physical Society | en_GB |
dc.rights | info:eu-repo/semantics/openAccess | en_GB |
dc.subject | Reinforcement learning | en_GB |
dc.subject | Particle accelerators | en_GB |
dc.title | Sample-efficient reinforcement learning for CERN accelerator control | en_GB |
dc.type | article | en_GB |
dc.rights.holder | The copyright of this work belongs to the author(s)/publisher. The rights of this work are as defined by the appropriate Copyright Legislation or as modified by any successive legislation. Users may access this work and can make use of the information contained in accordance with the Copyright Legislation provided that the author must be properly acknowledged. Further distribution or reproduction in any format is prohibited without the prior permission of the copyright holder. | en_GB |
dc.description.reviewed | peer-reviewed | en_GB |
dc.identifier.doi | 10.1103/PhysRevAccelBeams.23.124801 | - |
dc.publication.title | Physical Review Accelerators and Beams | en_GB |
Appears in Collections: | Scholarly Works - FacICTCCE |
Files in This Item:
File | Description | Size | Format
---|---|---|---
PhysRevAccelBeams.23.124801.pdf | | 1.41 MB | Adobe PDF
Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.