Abed_Abud_2023_J._Inst._18_P04034.pdf (14.93 MB)
Highly-parallelized simulation of a pixelated LArTPC on a GPU
journal contribution
posted on 2023-06-10, 07:10 authored by A Abed Abud, B Abi, R Acciarri, M R Adames, G Adamov, M Adamowski, Lily Asquith, Clark Griffith, Simon Peeters, Kate Shaw, K Wawrowska, Iker De Icaza Astiz, others

The rapid development of general-purpose computing on graphics processing units (GPGPU) is allowing the implementation of highly-parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for the simulation of a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented with an end-to-end set of GPU-optimized algorithms. The algorithms have been written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speed-up of four orders of magnitude compared with the equivalent CPU version: the simulation of the current induced on 10^3 pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype.
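To illustrate the Numba approach the abstract describes, the following is a minimal sketch of how a Python function is compiled into a CUDA kernel and launched over many threads. It is not the paper's actual simulation code: the kernel, array names, and sizes here are hypothetical, standing in for the per-pixel charge accumulation the paper parallelizes.

import numpy as np
from numba import cuda

@cuda.jit
def accumulate_charge(pixel_signal, charges, pixel_ids):
    # One thread per ionization charge deposit.
    i = cuda.grid(1)  # global thread index
    if i < charges.size:
        # Atomic add avoids races when many charges land on the same pixel.
        cuda.atomic.add(pixel_signal, pixel_ids[i], charges[i])

# Host side: 10^6 hypothetical charge deposits onto 10^3 pixels.
n_pixels, n_charges = 1_000, 1_000_000
charges = np.random.rand(n_charges).astype(np.float32)
pixel_ids = np.random.randint(0, n_pixels, n_charges).astype(np.int32)
pixel_signal = cuda.to_device(np.zeros(n_pixels, dtype=np.float32))

threads_per_block = 256
blocks = (n_charges + threads_per_block - 1) // threads_per_block
accumulate_charge[blocks, threads_per_block](pixel_signal, charges, pixel_ids)
result = pixel_signal.copy_to_host()

Because @cuda.jit compiles the decorated function at first call, the same Python source runs on the GPU with one thread per deposit, which is the kind of channel-level parallelism that suits a pixelated readout.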
History
Publication status
- Published
File Version
- Published version
Journal
Journal of Instrumentation
ISSN
1748-0221
Publisher
Institute of Physics
External DOI
Volume
18
Page range
P04034 1-35
Department affiliated with
- Physics and Astronomy Publications
Full text available
- Yes
Peer reviewed?
- Yes