Erwin Alhassan | Paul Scherrer Institute
Papers by Erwin Alhassan
WPEC meeting SG46 May 2018, 2018
Any calculated quantity is practically meaningless without estimates of the uncertainty of the obtained results, not least when it comes to, e.g., safety parameters in a nuclear reactor. One of t ...
In the Total Monte Carlo (TMC) method [1] developed at the Nuclear Research and Consultancy Group for nuclear data uncertainty propagation, model calculations are compared with differential experimental data and a specific a priori uncertainty is assigned to each model parameter. By varying all model parameters together within their uncertainties, a full covariance matrix is obtained, with its off-diagonal elements if desired [1]. In this way, differential experimental data serve as a constraint on the model parameters used in the TALYS nuclear reactions code for the production of random nuclear data files. These files are processed into usable formats and used in transport codes for reactor calculations and for uncertainty propagation to reactor macroscopic parameters of interest. Even though differential experimental data together with their uncertainties are included (implicitly) in the production of these random nuclear data files in the TMC method, wide spreads in ...
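As an illustration of the TMC sampling step described above, the following is a minimal Python sketch, assuming a toy two-parameter model in place of an actual TALYS calculation; the parameter values, uncertainties and energy grid are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative stand-in for a TALYS calculation: maps model parameters
# to a vector of cross sections on a coarse energy grid (hypothetical).
def toy_model(params, energies):
    a, b = params
    return a * np.exp(-b * energies)

energies = np.linspace(0.1, 20.0, 50)     # MeV, illustrative grid
prior_center = np.array([2.0, 0.1])       # a priori parameter values
prior_sigma = 0.1 * prior_center          # assumed 10% parameter uncertainty

# Vary all parameters together within their uncertainties (TMC step),
# producing one "random file" (here: one cross-section curve) per draw.
n_files = 1000
samples = rng.normal(prior_center, prior_sigma, size=(n_files, 2))
random_files = np.array([toy_model(p, energies) for p in samples])

# The spread over random files yields a full covariance matrix,
# including off-diagonal (energy-energy correlation) elements.
cov = np.cov(random_files, rowvar=False)
print("std in one energy bin:", np.sqrt(np.diag(cov))[2])
```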
For the successful deployment of advanced nuclear systems and the optimization of current reactor designs, high-quality nuclear data are required. Before nuclear data can be used in applications, they m ...
Progress in Nuclear Energy, 2017
Integral experiments can be used to adjust ND-libraries and consequently the uncertainty response in important applications. In this work we show how we can use integral experiments in a consiste ...
EPJ Web of Conferences, 2020
In this work, an overview of the relevance of nuclear data (ND) uncertainties to Light Water Reactor (LWR) neutron dosimetry is presented. The paper summarizes the results of several studies carried out at the LRT laboratory of the Paul Scherrer Institute over the past decade. The studies used the base LRT calculation methodology for dosimetry assessments, which involves a representation of the neutron source distribution, obtained from validated CASMO/SIMULATE core-follow calculation models, and subsequent neutron transport simulations with the MCNP® software. The methodology was validated against reference data from numerous measurement programs performed at Swiss NPPs. Specifically, the following experimental programs are considered in this overview: PWR “gradient probes” and the post-irradiation examination (PIE) of BWR fast neutron fluence (FNF) monitors. For both cases, assessments of the nuclear data related uncertainties were performed. When a ...
Development of Drag-MOC: A tool for the study of uncertainty analysis through the deterministic OpenMOC transport code
Annals of Nuclear Energy, 2020
In this work, a method is proposed for combining differential and integral benchmark experimental data within a Bayesian framework for nuclear data adjustments and multi-level uncertainty propagation using the Total Monte Carlo method. First, input parameters to basic nuclear physics models implemented within the state-of-the-art nuclear reactions code, TALYS, were sampled from uniform distributions and randomly varied to produce a large set of random nuclear data files. Next, a probabilistic data assimilation was carried out by computing the likelihood function for each random nuclear data file, based first on differential experimental data only (1st update) and then on integral benchmark data (2nd update). The individual likelihood functions from the two updates were then combined into a global likelihood function, which was used for the selection of the final 'best' file. The proposed method has been applied to the adjustment of 208Pb in the fast neutron energy region below 20 MeV. The 'best' file from the adjustments was compared with available experimental data from the EXFOR database as well as with evaluations from the major nuclear data libraries, and was found to compare favourably.
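A minimal sketch of the two-step likelihood combination described above, assuming Gaussian likelihoods and hypothetical per-file chi-square values in place of actual comparisons with EXFOR and benchmark data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_files = 500

# Hypothetical per-file chi-squares against differential data (1st update)
# and against integral benchmark keff values (2nd update); in the actual
# method these come from comparing each random ND file with EXFOR data
# and with benchmark simulations, respectively.
chi2_diff = rng.chisquare(df=30, size=n_files)
chi2_int = rng.chisquare(df=5, size=n_files)

# Gaussian likelihoods for each update (up to a constant factor).
L_diff = np.exp(-0.5 * chi2_diff)
L_int = np.exp(-0.5 * chi2_int)

# Global likelihood: product of the two individual likelihoods.
L_global = L_diff * L_int

best = np.argmax(L_global)
print(f"'best' file index: {best}")
```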
Research Journal of Applied Sciences, Engineering and Technology, 2013
An important parameter in the design and analysis of a nuclear reactor is the reactivity worth of the control rod, which is a measure of the efficiency of the control rod in absorbing excess reactivity. During reactor operation, the control rod worth is affected by factors such as fuel burnup, xenon concentration, samarium concentration and the position of the control rod in the core. This study investigates the effect of fuel burnup on the control rod worth by comparing results for a fresh and an irradiated core of Ghana's Miniature Neutron Source Reactor, for both the HEU and LEU cores. Two codes were used: BURNPRO for the fuel burnup calculation, and MCNP5, which uses the actinide densities of the irradiated fuel obtained from BURNPRO. Results showed a decrease of the control rod worth with burnup for the LEU core, while the rod worth increased with burnup for the HEU core. The average thermal flux in both the inner and outer irradiation sites also decreased significantly with burnup for both cores.
Annals of Nuclear Energy, 2016
For several decades, reactor design has been supported by computer codes for the investigation of reactor behavior under both steady-state and transient conditions. The use of computer codes to simulate reactor behavior enables the investigation of various safety scenarios, saving time and cost. In recent times there has been an increase in the development of in-house (local) codes by various research groups for the preliminary design of specific or targeted nuclear reactor applications. These codes must be validated and calibrated against experimental benchmark data as they evolve and improve. Given the large number of benchmarks available, selecting benchmarks for reactor calculations and for the validation of simulation codes for specific or target applications can be rather tedious and difficult. In the past, a traditional approach based on expert judgement, using information provided in various handbooks, has been used for the selection of these benchmarks. This approach has been criticized because it introduces a user bias into the selection process. This paper presents a method for selecting benchmarks for reactor calculations for specific reactor applications based on the Total Monte Carlo (TMC) method. First, nuclear model parameters are randomly sampled within a given probability distribution and a large set of random nuclear data files is produced using the TALYS code system. These files are processed and used to analyze a target reactor system and a set of criticality benchmarks. The similarity between the target reactor system and one or several benchmarks is quantified using a similarity index. The method has been applied to the European Lead-Cooled Training Reactor (ELECTRA) and a set of plutonium- and lead-sensitive criticality benchmarks, using the effective multiplication factor (k_eff). From the study, strong similarities in k_eff were observed between ELECTRA and some plutonium- and lead-sensitive criticality benchmarks. Also, for validation purposes, simulation results for a list of selected criticality benchmarks, simulated with the MCNPX and SERPENT codes using different nuclear data libraries, were compared with experimentally measured benchmark k_eff values.
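One plausible way to quantify such a similarity index is the correlation of k_eff between the target system and a benchmark across the shared set of random nuclear data files; the sketch below assumes this Pearson-correlation definition, which is illustrative and not necessarily the index used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_files = 300

# Hypothetical keff results per random ND file for the target system
# (e.g., ELECTRA) and for one criticality benchmark. A shared nuclear
# data sensitivity shows up as correlated variation across the files.
common = rng.normal(0.0, 400e-5, n_files)   # shared ND-driven variation
keff_target = 1.0000 + common + rng.normal(0.0, 100e-5, n_files)
keff_bench = 0.9990 + 0.8 * common + rng.normal(0.0, 150e-5, n_files)

# Illustrative similarity index: Pearson correlation of keff over files.
r = np.corrcoef(keff_target, keff_bench)[0, 1]
print(f"similarity index (assumed definition): {r:.3f}")
```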
Progress in Nuclear Energy, 2017
The Total Monte Carlo methodology (TMC) for nuclear data (ND) uncertainty propagation has been subject to some critique because the nuclear reaction parameters are sampled from distributions which have not been rigorously determined from experimental data. In this study, it is thoroughly explained how TMC and Unified Monte Carlo-B (UMC-B) are combined to include experimental data in TMC. Random ND files are weighted with likelihood function values computed by comparing the ND files to experimental data, using experimental covariance matrices generated from information in the experimental database EXFOR and a set of simple rules. A proof that such weights give a consistent implementation of Bayes' theorem is provided. The impact of the weights is mainly studied for a set of integral systems/applications, e.g., a set of shielding fuel assemblies which shall prevent aging of the pressure vessels of the Swedish nuclear reactors Ringhals 3 and 4. In this implementation, the impact of the weighting is small for many of the applications. In some cases, this can be explained by the fact that the distributions used as priors are too narrow to be valid as such. Another possible explanation is that the integral systems are highly sensitive to resonance parameters, which effectively are not treated in this work. In other cases, only a very small number of files get significantly large weights, i.e., the region of interest is poorly resolved. This convergence issue can be due, for example, to the parameter distributions used as priors or to model defects. Further, some parameters used in the rules for the EXFOR interpretation have been varied. The observed impact from varying one parameter at a time is not very strong. This can partially be due to the general insensitivity to the weights seen for many applications, and there can be strong interaction effects. The automatic treatment of outliers has a quite large impact, however. To approach more justified ND uncertainties, the rules for the EXFOR interpretation shall be further discussed and developed, in particular the rules for rejecting outliers, and random ND files that are intended to describe prior distributions shall be generated. Further, model defects need to be treated.
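A minimal sketch of the likelihood weighting, assuming a hypothetical experimental covariance matrix built from an uncorrelated (statistical) and a fully correlated (normalization) component; the actual rules for constructing covariances from EXFOR are more involved.

```python
import numpy as np

rng = np.random.default_rng(2)
n_files, n_points = 200, 8

# Hypothetical experimental data with a covariance assembled from an
# uncorrelated statistical part plus a fully correlated normalization part.
exp_values = rng.uniform(1.0, 2.0, n_points)
stat = np.diag((0.03 * exp_values) ** 2)               # 3% statistical
norm = np.outer(0.05 * exp_values, 0.05 * exp_values)  # 5% normalization
cov_exp = stat + norm
cov_inv = np.linalg.inv(cov_exp)

# Hypothetical model predictions from each random ND file.
predictions = exp_values + rng.normal(0.0, 0.06, (n_files, n_points))

# Likelihood weight per file: w_i ~ exp(-chi2_i / 2), with
# chi2_i = (T_i - E)^T C^-1 (T_i - E).
resid = predictions - exp_values
chi2 = np.einsum("ij,jk,ik->i", resid, cov_inv, resid)
weights = np.exp(-0.5 * (chi2 - chi2.min()))   # shift for numerical stability
weights /= weights.sum()

# A small effective sample size signals that only a few files get
# significantly large weights, i.e., a poorly resolved region of interest.
print("effective sample size:", 1.0 / np.sum(weights**2))
```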
Progress in Nuclear Energy, 2016
The nuclear data uncertainties currently observed in reactor safety parameters for some nuclides raise safety concerns, especially with respect to the design of GEN-IV reactors, and must therefore be reduced significantly. In this work, uncertainty reduction using criticality benchmark experiments within the Total Monte Carlo methodology is presented. The random nuclear data libraries generated are processed and used to analyze a set of criticality benchmarks. Since the calculated results differ for each random nuclear data library used, an algorithm was used to select (or assign weights to) the libraries which give a good description of the experimental data in the analyses of the benchmarks. The selected or weighted libraries were then used to analyze the ELECTRA reactor. Using random nuclear data libraries constrained with only differential experimental data as our prior, the observed uncertainties were further reduced by constraining the files with integral experimental data, to obtain a posteriori uncertainties on k_eff. Two approaches are presented and compared: a binary accept/reject method and a method of assigning file weights based on the likelihood function. Significant reductions in the 239Pu and 208Pb nuclear data uncertainties in k_eff were observed after implementing the two methods with some criticality benchmarks for the ELECTRA reactor.
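The two approaches can be sketched as follows, assuming hypothetical k_eff values for one benchmark and a 2-sigma acceptance window for the binary method; the window choice and the Gaussian likelihood form are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_files = 1000

# Hypothetical calculated keff per random ND file for one benchmark,
# and the measured benchmark value with its uncertainty.
keff_calc = rng.normal(1.0030, 600e-5, n_files)
keff_exp, sigma_exp = 1.0000, 200e-5

# Approach 1: binary accept/reject -- keep files within, say, 2 sigma.
accepted = np.abs(keff_calc - keff_exp) < 2 * sigma_exp
post_std_binary = keff_calc[accepted].std()

# Approach 2: file weights from the likelihood, w_i ~ exp(-chi2_i / 2).
chi2 = ((keff_calc - keff_exp) / sigma_exp) ** 2
w = np.exp(-0.5 * chi2)
w /= w.sum()
mean_w = np.sum(w * keff_calc)
post_std_weighted = np.sqrt(np.sum(w * (keff_calc - mean_w) ** 2))

print(f"prior std:     {keff_calc.std():.1e}")
print(f"binary post:   {post_std_binary:.1e}")
print(f"weighted post: {post_std_weighted:.1e}")
```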
Annals of Nuclear Energy, 2014
In this work, we present the results of fuel depletion analyses performed for a potential LEU core of Ghana's Miniature Neutron Source Reactor (GHARR-1) using the Monte Carlo N-Particle eXtended (MCNPX) neutron transport code. The depletion calculation was carried out for the reactor core from the Beginning of Life (BOL) to the End of Life (EOL), which corresponds to 10 years of reactor operation. The amounts of uranium and plutonium actinides were estimated at the BOL and EOL of the core. The decay heat removal rate for the MNSR after reactor shutdown was investigated owing to its significance for reactor safety. The inventory of fission products produced as a result of burnup was also calculated. The results show that a maximum discharge burnup equivalent to the consumption of 0.568% of the U-235 was reached at EOL, equivalent to operating the reactor for 200 Effective Full Power Days (EFPD), while the amount of Pu-239 produced was not significant. Also, the decay heat decreased exponentially after reactor shutdown, confirming that decay heat will be removed from the system by natural circulation after shutdown and thus guaranteeing the safety of the reactor.
Nuclear Science and Techniques
In this work, we explore the use of an iterative Bayesian Monte Carlo (iBMC) method for nuclear data evaluation within a TALYS Evaluated Nuclear Data Library (TENDL) framework. The goal is to probe the model and parameter space of the TALYS code system to find the optimal model and parameter sets that reproduce selected experimental data. The method involves the simultaneous variation of many nuclear reaction models as well as their parameters included in the TALYS code. The 'best' model set with its parameter set was obtained by comparing model calculations with selected experimental data. Three experimental data types were used: (1) reaction cross sections, (2) residual production cross sections, and (3) elastic angular distributions. To improve the fit to experimental data, we update our 'best' parameter set (the file that maximizes the likelihood function) in an iterative fashion. Convergence was determined by monitoring the evolution of the maximum likelihood estimate (MLE) ...
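A minimal sketch of the iterative loop, assuming a one-dimensional toy observable in place of TALYS and a single experimental point; the resampling and narrowing schedule are illustrative, not the scheme used in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical 1-D stand-in: a single model parameter, a toy "TALYS"
# observable, and one experimental point. The real method varies many
# models and parameters simultaneously.
def observable(theta):
    return np.sin(theta) + theta / 5.0

exp_val, exp_sig = 1.2, 0.05
center, width = 0.0, 2.0      # initial sampling distribution

for it in range(5):
    # Sample candidate parameter sets around the current 'best' estimate.
    thetas = rng.normal(center, width, 500)
    chi2 = ((observable(thetas) - exp_val) / exp_sig) ** 2
    # The file maximizing the likelihood becomes the new 'best' set.
    best = thetas[np.argmin(chi2)]
    mle = np.exp(-0.5 * chi2.min())
    print(f"iteration {it}: best theta = {best:.4f}, MLE = {mle:.4f}")
    # Recenter and narrow the sampling for the next iteration; convergence
    # is monitored via the evolution of the MLE.
    center, width = best, width * 0.5
```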
Annals of Nuclear Energy
The Total Monte Carlo methodology (TMC) for nuclear data (ND) uncertainty propagation has been subject to some critique because the nuclear reaction parameters are sampled from distributions which have not been rigorously determined from experimental data. In this study, it is thoroughly explained how random ND files are weighted with likelihood function values computed by comparing the ND files to experimental data, using experimental covariance matrices generated from information in the experimental database EXFOR and a set of simple rules. A proof that such weights give a consistent implementation of Bayes' theorem is provided. The impact of the weights is mainly studied for a set of integral systems/applications, e.g., a set of shielding fuel assemblies which shall prevent aging of the pressure vessels of the Swedish nuclear reactors Ringhals 3 and 4. For many applications, the weighting does not have much impact, something which can be explained by too narrow prior distribut ...
Annals of Nuclear Energy
Criticality, reactor physics and shielding benchmarks are expected to play important roles in GEN-IV design and safety analysis and in the validation of the analytical tools used to design these reactors. For existing reactor technology, benchmarks are used for validating computer codes and for testing nuclear data libraries. Given the large number of benchmarks available, selecting benchmarks for specific applications can be rather tedious and difficult. Until recently, the selection process has usually been based on expert judgement, which depends on the expertise and experience of the user and thereby introduces a user bias into the process. This approach is also not suitable for the Total Monte Carlo methodology, which places strong emphasis on automation, reproducibility and quality assurance. In this paper, a method for selecting these benchmarks for reactor calculations and for nuclear data uncertainty reduction based on the Total Monte Carlo (TMC) method is presented. For ...
Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment
In methodologies for nuclear data (ND) uncertainty assessment and propagation based on random sampling, likelihood weights can be used to incorporate experimental information into the distributions for the ND. As the number of correlated experimental points included grows large, the computational time for the matrix inversion involved in obtaining the likelihood can become a practical problem. In this study, a way to estimate the likelihood which avoids matrix inversion is investigated; instead, the experimental correlations are included by sampling of systematic errors. It is shown that the model underlying the sampling methodology (using univariate normal distributions for random uncertainties and systematic errors) implies a multivariate Gaussian for the experimental points (i.e., the conventional model). It is also shown that the likelihood estimates obtained through sampling of systematic errors approach the likelihood obtained with matrix inversion as the sample size for the systemati ...
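A minimal sketch of the inversion-free estimate, assuming a single fully correlated systematic component: the systematic shift is sampled, the remaining covariance is diagonal, and averaging the conditional (diagonal) likelihood over the samples approaches the exact multivariate Gaussian result.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(5)
n_points = 6

# Conventional model: experimental points are multivariate Gaussian with
# covariance = diagonal (random) part + systematic (correlated) part.
truth = np.linspace(1.0, 2.0, n_points)
sig_rand = 0.03 * truth                 # uncorrelated uncertainties
sig_sys = 0.05 * truth                  # one fully correlated systematic
cov = np.diag(sig_rand**2) + np.outer(sig_sys, sig_sys)

model = truth * 1.02                    # hypothetical ND-file prediction
exp_data = truth + rng.multivariate_normal(np.zeros(n_points), cov)

# Reference: likelihood via the explicit covariance (inversion inside).
L_exact = multivariate_normal.pdf(exp_data, mean=model, cov=cov)

# Inversion-free estimate: marginalize the systematic error by sampling.
# For each sampled systematic shift, the remaining covariance is diagonal.
n_samp = 20000
xi = rng.normal(0.0, 1.0, n_samp)       # systematic error draws
shifted = model[None, :] + xi[:, None] * sig_sys[None, :]
log_diag = -0.5 * np.sum(((exp_data - shifted) / sig_rand) ** 2
                         + np.log(2 * np.pi * sig_rand**2), axis=1)
L_sampled = np.mean(np.exp(log_diag))

print(f"matrix inversion: {L_exact:.4e}")
print(f"sampled estimate: {L_sampled:.4e}")
```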