Description
At present, there is no ‘ideal’ detector. As a consequence, the electrical signals induced by ‘events’ in a detector can sometimes be recorded incorrectly, or not recorded at all. For scintillators, for example, acquiring an event by integrating the electrical charge requires a minimum integration and recording time. During this integration time, one or more other events may occur; they are then summed and recorded as a single event. This type of recording is commonly called ‘pile-up’. Moreover, recording an event and returning to the standby configuration requires a minimum duration, commonly called ‘dead time’. Correcting for these effects has always posed some difficulty. With old analogue acquisitions, for example, it was common to limit the count rate so that the fraction of pile-up events became negligible and no correction had to be applied. Other methods can of course be used, such as the pulser method to correct for dead time and pile-up, but their implementation is generally complex and cumbersome, and they have not been adopted in many applications.

The development of digital acquisition since the 2000s has changed practice. For example, digitizing all the signals makes it possible to identify pile-up events in post-processing and to apply corrections. However, this method suffers from two serious flaws. Firstly, it produces large amounts of data, which require extensive hard-disk storage and, more seriously, can limit the acceptable count rate depending on the acquisition specifications. Secondly, when the two signals are too close in time, or when one signal is too small, it may be impossible to identify the pile-up. Furthermore, estimating the dead time can also be difficult when the acquisition time of an event varies.

Digital acquisition, however, offers another correction possibility, with or without recording the waveform, based on a Monte Carlo method using probabilities. Indeed, for a stable measurement configuration, the event timing can be used to deduce the ‘true’ count rate from the statistical distribution of events. This provides a first correction possibility. The count rate can also be used to carry out a Monte Carlo random sampling of event arrival times and thus simulate a measurement that is statistically equivalent, in its time sampling, to the true one. In the same way, the study of real individual events can be used to build a set of statistical data for Monte Carlo random sampling. The aim is then to randomly assign to each simulated event recording parameters that are consistent with the measurement. In this way, the measurement can be simulated completely, and the pile-up and dead-time corrections deduced by studying the simulated events. The method generally requires an iterative algorithm, since the initial statistical data are not perfect. At the end of the analysis, these corrections can be compared with the parameters of the true measurement using some physical observables.

To use digital acquisitions in neutron metrology, the Laboratory for micro-irradiation, neutron metrology and dosimetry (LMDN, IRSN/Cadarache/France) has developed such a model based on Monte Carlo simulation. The model uses a random selection of the arrival time and the PSD parameters of each event. Some information must be known in advance (such as the signal shape) or deduced from the experiment itself (such as the temporal description and the PSD matrix).
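As a rough illustration of the first step mentioned above, deducing the ‘true’ count rate from the event timing, the following Python sketch assumes a stationary Poisson process and a single fixed, non-extending dead time per recorded event; the function name, the dead-time model and the variable names are illustrative assumptions, not the LMDN implementation.

    import numpy as np

    def true_rate_from_intervals(timestamps_s, dead_time_s):
        """Estimate the 'true' count rate from recorded event times.

        Assumes a stationary Poisson process: intervals between true events
        are exponential with mean 1/true_rate. With a fixed, non-extending
        dead time after each recorded event, the recorded intervals are the
        same exponential shifted by the dead time, so
            mean(recorded interval) = dead_time + 1/true_rate.
        """
        intervals = np.diff(np.sort(np.asarray(timestamps_s, dtype=float)))
        mean_interval = intervals.mean()
        recorded_rate = 1.0 / mean_interval
        true_rate = 1.0 / (mean_interval - dead_time_s)
        return recorded_rate, true_rate

Under these assumptions, the ratio true_rate / recorded_rate gives a first correction factor for counting losses; pile-up within the integration gate is not treated in this fragment.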
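The simulation and iteration steps can be sketched in the same spirit. The fragment below samples Poisson arrival times at a trial rate, applies a deliberately simplified acquisition model (a fixed integration gate that merges piled-up events and a fixed dead time after each recorded event), and adjusts the trial rate until the simulated recorded rate matches the measured one; all names, parameters and the acquisition model itself are assumptions made for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_acquisition(true_rate, duration, gate, dead_time):
        """Simulate one acquisition with a simplified detector model.

        true_rate : trial 'true' event rate (1/s)
        duration  : simulated measurement time (s)
        gate      : integration window; later events inside it pile up (s)
        dead_time : time to record an event and return to standby (s)
        Returns (recorded_count, pileup_count).
        """
        n_true = rng.poisson(true_rate * duration)
        arrivals = np.sort(rng.uniform(0.0, duration, n_true))
        recorded, pileups = 0, 0
        busy_until = -1.0
        i = 0
        while i < len(arrivals):
            t = arrivals[i]
            if t < busy_until:            # event lost during dead time
                i += 1
                continue
            j = i + 1                     # sum every later event inside the gate
            while j < len(arrivals) and arrivals[j] - t < gate:
                pileups += 1
                j += 1
            recorded += 1
            busy_until = t + gate + dead_time
            i = j
        return recorded, pileups

    def fit_true_rate(measured_recorded_rate, duration, gate, dead_time,
                      n_iter=20):
        """Iteratively rescale the trial rate so that the simulated
        recorded rate reproduces the measured one (fixed-point update)."""
        rate = measured_recorded_rate     # first guess: no correction
        for _ in range(n_iter):
            recorded, _ = simulate_acquisition(rate, duration, gate, dead_time)
            rate *= measured_recorded_rate / (recorded / duration)
        return rate

In the complete analysis described here, each simulated event would in addition receive recording parameters (for example PSD values) drawn from the measured distributions, and the resulting corrections would be checked against the physical observables of the real measurement.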
This model does not require routine digitization of the waveform. It was successfully tested on a white neutron spectrum, during a time-of-flight measurement carried out at NFS (GANIL/France) with a stilbene scintillator coupled to a digital acquisition system. The model was used to estimate corrections of 10% for dead time and pile-up. Comparison of the measurements with the simulation allowed the uncertainty of this correction to be estimated at around 0.2%, i.e. less than the other measurement uncertainties. Since waveforms are not required, the amount of data to be recorded is divided by a factor of 100 in our case, which considerably increases the admissible measurement statistics while reducing the experiment time. The method requires a few obvious conditions to be possible and effective, such as having statistics sufficiently representative of the data, and it can be adapted to most detectors and measurements. In this presentation, the model implemented for the NFS measurements with the scintillator and digital acquisition will be presented in detail, together with the optimization methodology and the comparisons used to validate the effectiveness of the method.