Notice of Availability of Interim Staff Guidance Documents for Fuel Cycle Facilities
A Notice by the Nuclear Regulatory Commission on 12/06/2004
Document Details
Information about this document as published in the Federal Register.
 Publication Date: 12/06/2004
 Agency: Nuclear Regulatory Commission
 Document Type: Notice
 Document Citation: 69 FR 70475
 Pages: 70475-70480 (6 pages)
 Document Number: 04-26688

Table of Contents
 AGENCY:
 ACTION:
 FOR FURTHER INFORMATION CONTACT:
 SUPPLEMENTARY INFORMATION:
 I. Introduction
 II. Summary
 III. Further Information
 Draft—Division of Fuel Cycle Safety and Safeguards Interim Staff Guidance-10; Justification for Minimum Margin of Subcriticality for Safety
 Issue
 Introduction
 Discussion
 Benchmark Similarity
 System Sensitivity
 Neutron Physics of the System
 Rigor of Validation Methodology
 Margin in System Parameters
 Normal vs. Abnormal Conditions
 Statistical Arguments
 Regulatory Basis
 Technical Review Guidance
 Recommendation
 References
 Footnotes

Shorter Document URL: https://www.federalregister.gov/d/0426688



AGENCY:
Nuclear Regulatory Commission.
ACTION:
Notice of availability.
FOR FURTHER INFORMATION CONTACT:
Wilkins Smith, Project Manager, Technical Support Group, Division of Fuel Cycle Safety and Safeguards, Office of Nuclear Material Safety and Safeguards, U.S. Nuclear Regulatory Commission, Washington, DC 20005-0001. Telephone: (301) 415-5788; fax: (301) 415-5370; email: wrs@nrc.gov.
SUPPLEMENTARY INFORMATION:
I. Introduction
The Nuclear Regulatory Commission (NRC) plans to issue Interim Staff Guidance (ISG) documents for fuel cycle facilities. These ISG documents provide clarifying guidance to the NRC staff when reviewing either a license application or a license amendment request for a fuel cycle facility under 10 CFR part 70. The NRC is soliciting public comments on the ISG documents, which will be considered in the final versions or subsequent revisions.
II. Summary
The purpose of this notice is to provide the public an opportunity to review and comment on a draft Interim Staff Guidance document for fuel cycle facilities. Interim Staff Guidance-10 provides guidance to NRC staff for determining whether the minimum margin of subcriticality (MoS) is sufficient to provide adequate assurance of subcriticality for safety, demonstrating compliance with the performance requirements of 10 CFR 70.61(d).
III. Further Information
The document related to this action is available electronically at the NRC's Electronic Reading Room at http://www.nrc.gov/readingrm/adams.html. From this site, you can access the NRC's Agencywide Documents Access and Management System (ADAMS), which provides text and image files of NRC's public documents. The ADAMS accession number for the document related to this notice is ML043290270. If you do not have access to ADAMS, or if there are problems in accessing the document located in ADAMS, contact the NRC Public Document Room (PDR) Reference staff at 1-800-397-4209 or 301-415-4737, or by email to pdr@nrc.gov.
This document may also be viewed electronically on the public computers located at the NRC's PDR, O 1 F21, One White Flint North, 11555 Rockville Pike, Rockville, MD 20852. The PDR reproduction contractor will copy documents for a fee. Comments and questions should be directed to the NRC contact listed above by January 5, 2005. Comments received after this date will be considered if it is practical to do so, but assurance of consideration cannot be given to comments received after this date.
Dated at Rockville, Maryland, this 24th day of November 2004.
For the Nuclear Regulatory Commission.
Melanie A. Galloway,
Chief, Technical Support Group, Division of Fuel Cycle Safety and Safeguards, Office of Nuclear Material Safety and Safeguards.
Draft—Division of Fuel Cycle Safety and Safeguards Interim Staff Guidance-10; Justification for Minimum Margin of Subcriticality for Safety
Issue
Technical justification for the selection of the minimum margin of subcriticality (MoS) for safety, as required by 10 CFR 70.61(d).
Introduction
10 CFR 70.61(d) requires, in part, that licensees demonstrate that “under normal and credible abnormal conditions, all nuclear processes are subcritical, including use of an approved margin of subcriticality for safety.” To demonstrate subcriticality, licensees perform validation studies in which critical experiments similar to actual or anticipated calculations are chosen and are then used to establish a mathematical criterion for subcriticality for all future calculations. This criterion is expressed in terms of a limit on the maximum value of the calculated k_{eff}, which will be referred to in this ISG as the upper subcritical limit (USL). The USL includes allowances for bias and bias uncertainty as well as an additional margin which will be referred to hereafter as the minimum margin of subcriticality (MoS). This MoS has been variously referred to within the nuclear industry as subcritical margin, arbitrary margin, and administrative margin. The term MoS will be used throughout this ISG for consistency, but these terms are frequently used interchangeably. This MoS is an allowance for any unknown errors in the calculational method that may bias the result of calculations, beyond those accounted for explicitly in the calculation of the bias and bias uncertainty.
There is little guidance in the fuel facility Standard Review Plans (SRPs) as to what constitutes an acceptable MoS. NUREG1520, Section 5.4.3.4.4, states that the MoS should be preapproved by the NRC and that the MoS must “include adequate allowance for uncertainty in the methodology, data, and bias to assure subcriticality.” However, there is little guidance on how to determine the amount of MoS that is appropriate. Partly due to the historical lack of guidance, there have been significantly different margins of subcriticality approved for different fuel cycle facilities over time. In addition, the different ways of defining the MoS and calculating k_{eff} limits significantly compound the potential for confusion. The MoS can have a significant effect on facility operations (e.g., storage capacity and throughput) and there has therefore been considerable recent interest in decreasing the margins of subcriticality below what has been accepted historically. These two factors—the lack of guidance and the increasing interest in reducing margins of subcriticality—make clarification of what constitutes acceptable justification for the MoS necessary. In general, consistent with a riskinformed approach to regulation, smaller margins of subcriticality require more substantial technical justification.
The purpose of this ISG therefore is to provide guidance on determining whether the MoS is sufficient to provide an adequate assurance of subcriticality for safety, in accordance with 10 CFR 70.61(d).
Discussion
The neutron multiplication factor of a fissile system (k_{eff}) depends, in general, on many different physical variables. The factors that can affect the calculated value of k_{eff} may be broadly divided into the following categories: (1) Geometric form; (2) material composition; and (3) neutron distribution. The geometric form and material composition of the system determine, together with the underlying nuclear data (e.g., ν, χ(E), and the set of cross section data), the spatial and energy distribution of neutrons in the system (i.e., flux and energy spectrum). An error in the nuclear data or in the modeling of these systems can produce an error in the calculated value of k_{eff}. This difference between the calculated and true values of k_{eff} is referred to as the bias,^{[1]} defined by the following equation:

β = k_{calc} − k_{true}
The bias of a critical experiment may be known with a high degree of confidence because the true (experimental) value is known a priori (k_{true} ≈ 1). Because both the experimental and the calculational uncertainty are known, there is a determinable uncertainty associated with the bias. The bias for a calculated system other than a critical experiment is not typically known with this same high degree of confidence, because k_{true} is not typically known. The MoS is therefore an allowance for any unknown errors that may affect the calculated value of k_{eff}, beyond those accounted for explicitly in the bias and bias uncertainty. An MoS is needed because the critical experiments chosen will, in general, exhibit somewhat different geometric forms, material compositions, and neutron spectra from those of actual system configurations, and the effect of these differences is difficult to quantify. Bias and bias uncertainty are estimated by calculating the k_{eff} of critical experiments with geometric forms, material compositions, and neutron spectra similar to those of actual or anticipated calculations. However, because of the many factors that can affect the bias, it must be recognized that this is only an estimate of the true bias of the system; it is not possible to guarantee that all sources of error have been accounted for during validation. Thus, use of a smaller MoS requires a greater level of assurance that all sources of uncertainty and bias have been taken into account and that the bias is known with a high degree of accuracy. The MoS should be large compared to known uncertainties in the nuclear data and limitations of the methodology (e.g., modeling approximations, convergence uncertainties). It should be noted that this MoS is only needed when subcritical limits are based on the use of calculational methods, including computer and hand calculations.
The MoS is not needed when subcritical limits are based on other methods, such as experiment or published data (e.g., widely accepted handbooks or endorsed industry standards).
Because the nuclear industry has employed widely different terminology regarding validation and margin, it is necessary to define the following terms as used in this ISG. These definitions are for clarity only and are not meant to prescribe any particular terminology.
Bias: The difference between the calculated and true values of k_{eff} for a fissile system or set of systems.
Bias Uncertainty: The calculated uncertainty in the bias as determined by a statistical method.
Margin of subcriticality (MoS): Margin in k_{eff} applied in addition to bias and bias uncertainty to ensure subcriticality (also known as subcritical, arbitrary, or administrative margin). This term is shorthand for “minimum margin of subcriticality”.
Margin of safety: Margin in one or more system parameters that represents the difference between the value of the parameter at which it is controlled and the value at which the system becomes critical. (This represents an additional margin beyond the MoS.)
Upper Subcritical Limit: The maximum allowable k_{eff} value for a system. Generally, the USL is defined by the equation USL = 1 − bias − bias uncertainty − MoS.
Subcritical Limit: The value of a system parameter at which it is controlled to ensure criticality safety, and at which k_{eff} does not exceed the USL (also known as safety limit).
Operating Limit: The value of a system parameter at which it is administratively controlled to ensure that the system will not exceed the subcritical limit.^{[2] }
If the USL is defined as described above, then the MoS represents the difference between the average calculated k_{eff} (including uncertainties) and the USL, thus:
MoS = (1 − bias − bias uncertainty) − USL.
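As a concrete illustration, the USL and MoS relationships above can be sketched in a few lines of Python. All numeric values below are hypothetical, not approved margins from any licensing action:

```python
# Sketch of the USL/MoS arithmetic defined above.
# All numeric values are hypothetical illustrations.

def upper_subcritical_limit(bias, bias_uncertainty, mos):
    """USL = 1 - bias - bias uncertainty - MoS (as defined above).

    `bias` here is the magnitude of any nonconservative bias; per the
    later discussion, no credit is taken for a bias in the conservative
    direction (a conservative bias is treated as zero).
    """
    return 1.0 - max(bias, 0.0) - bias_uncertainty - mos

def margin_of_subcriticality(bias, bias_uncertainty, usl):
    """MoS = (1 - bias - bias uncertainty) - USL."""
    return (1.0 - max(bias, 0.0) - bias_uncertainty) - usl

usl = upper_subcritical_limit(bias=0.005, bias_uncertainty=0.010, mos=0.05)
print(round(usl, 4))                                          # 0.935
print(round(margin_of_subcriticality(0.005, 0.010, usl), 4))  # 0.05
```

Note that the two functions are simply inverses of one another: given any three of bias, bias uncertainty, MoS, and USL, the fourth is fixed.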
There are many factors that can affect the code's ability to accurately calculate k_{eff} and that can thus impact the analyst's confidence in the estimation of the bias. Some of these factors are described in detail below.
Benchmark Similarity
Because the bias of calculations is estimated based on critical benchmarks with similar geometric form, material composition, and neutronic behavior to the systems being evaluated, the degree of similarity between benchmarks and actual or anticipated calculations is a key consideration in determining the appropriate MoS. The more closely the benchmarks represent the characteristics of systems being validated, the more confidence exists in the calculated bias and bias uncertainty.
Comparing the chosen benchmarks to actual or anticipated calculations requires that both the experiments and the calculations be described in sufficient detail to permit independent verification of results. This may be accomplished by submitting input decks for both benchmarks and calculations, or by providing detailed drawings, tables, or other such data to the NRC to permit a detailed comparison of system parameters.
In evaluating benchmark similarity, some parameters are obviously more significant than others. The parameters that can have the greatest effect on the calculated k_{eff} of the system are those that are most significant. Historically, some parameters have been used as trending parameters because these are the parameters that are expected to have the greatest effect on the bias. They include the moderator-to-fuel ratio (e.g., H/U, H/X, V_{m}/V_{f}), isotopic abundance (e.g., ^{235}U, ^{239}Pu, or overall Pu content), and parameters characterizing the neutron spectrum (e.g., energy of average lethargy causing fission (EALF), or average energy group (AEG)). Other parameters, such as material density or overall geometric shape, are generally considered to be of less importance. Care should be taken that, when basing justification for a reduced MoS on the similarity of benchmarks to actual or anticipated calculations, all important system characteristics that can affect the bias have been taken into consideration. There are several ways to demonstrate that the chosen benchmarks are sufficiently similar to actual or anticipated calculations:
1. NUREG/CR-6698, “Guide for Validation of Nuclear Criticality Safety Calculational Methodology,” Table 2.3, contains a set of screening criteria for determining benchmark applicability. As is stated in the NUREG, these criteria were arrived at by consensus among experienced NCS specialists and may be considered conservative. The NRC staff considers agreement on all screening criteria to be sufficient justification for demonstrating benchmark similarity. However, less conservative (i.e., broader) screening ranges may be used if appropriately justified.
2. Use of an analytical method that systematically quantifies the degree of similarity between benchmarks and design applications, such as Oak Ridge National Laboratory's TSUNAMI code in the SCALE 5 code package.
TSUNAMI calculates a correlation coefficient indicating the degree of similarity between each benchmark and calculation in pairwise fashion. The appropriate threshold value of this parameter indicating a sufficient degree of similarity is an unresolved issue with the use of this method. However, the NRC staff currently considers a correlation coefficient c_{k} ≥ 0.95 to be indicative of a strong degree of similarity. Conversely, a correlation coefficient < 0.90 should not be used as demonstration of benchmark similarity without significant additional justification. These thresholds are tentative and are based on the staff's observation that benchmarks and calculations having a correlation of at least 95% also appear to be very similar based on a traditional comparison of system parameters. Due to the evolving nature of this tool, TSUNAMI should not be used as a “black box,” but it may be used to inform the benchmark selection process.
3. Sensitivity studies may be employed to demonstrate that the system k_{eff} is highly insensitive to a particular parameter. In such cases, a significant error in the parameter will have a small effect on the system bias. One example is when the number density of certain trace materials can be shown to have a negligible effect on k_{eff}. Another example is when the presence of a strong external absorber has only a slight effect on k_{eff}. In both cases, such a sensitivity study may be used to justify why agreement with regard to a given parameter is not important for demonstrating benchmark similarity.
4. Physical arguments may be used to demonstrate benchmark similarity. For example, the fact that oxygen and fluorine are almost transparent to thermal neutrons (i.e., their cross sections are very low) may be used as justification for why the differences in chemical form between UO_{2}F_{2} and UO_{2} may be ignored.
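The correlation-coefficient screening in item 2 can be sketched as follows. The benchmark names and c_{k} values are invented for illustration; the thresholds are the staff positions stated above:

```python
# Hypothetical sketch of screening benchmarks by a TSUNAMI-style
# correlation coefficient c_k. Benchmark names and values are invented.

CK_STRONG = 0.95   # c_k >= 0.95: strong degree of similarity
CK_FLOOR = 0.90    # c_k < 0.90: not similarity evidence on its own

def classify(ck):
    """Bin a benchmark by its c_k against the staff's stated thresholds."""
    if ck >= CK_STRONG:
        return "similar"
    if ck >= CK_FLOOR:
        return "marginal"    # usable only with additional justification
    return "dissimilar"

benchmarks = {"exp-A": 0.97, "exp-B": 0.93, "exp-C": 0.82}
for name, ck in sorted(benchmarks.items()):
    print(name, classify(ck))
```

In practice such a screen would only inform, not replace, a parameter-by-parameter comparison, consistent with the caution above against using TSUNAMI as a “black box.”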
A combination of the above methods may also prove helpful in demonstrating benchmark similarity. For example, TSUNAMI may be used to identify the parameters to which k_{eff} is most sensitive, or a sensitivity study may be used to confirm TSUNAMI results or justify screening ranges. Care should be taken to ensure that all parameters which can measurably affect the bias are considered when comparing chosen benchmarks to calculations. For example, comparison should not be based solely on agreement in the ^{235}U fission spectrum if ^{238}U or ^{10}B absorption or ^{1}H scattering has a significant effect on the calculated k_{eff}. A method such as TSUNAMI that considers the complete set of reactions and nuclides present should be used rather than relying on a comparison of only the fission spectra. That all important parameters have been included can be determined based on a study of the k_{eff} sensitivity, as discussed in the next section. It is especially important that all materials present in calculations that can have more than a negligible effect on the bias are included in the chosen benchmarks. In addition, if the parameters associated with calculations fall outside the range of the benchmark data, the effect of extrapolating the bias must be taken into account in setting the USL. This should be done by making use of trends in the bias; both the trend and the uncertainty in the trend should be extrapolated using an established mathematical method.
Some questions that should be asked in evaluating the chosen benchmarks include:
 Are the critical experiments chosen all high-quality benchmarks from reliable (e.g., peer-reviewed and widely accepted) sources?
 Are the benchmarks chosen taken from independent sources?
 Do the most important benchmark parameters cover the entire range needed for actual or anticipated calculations?
 Is the number of benchmarks sufficient to establish trends in the bias across the entire range? (The number depends on the specific statistical method employed.)
 Are all important parameters that could affect the bias adequately represented in the chosen benchmarks?
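The bias-trend extrapolation discussed above can be sketched with a simple least-squares fit. This is pure Python with invented data values; a complete treatment would also extrapolate the uncertainty in the trend using an established statistical method:

```python
# Hypothetical sketch: fit a linear trend in the bias (k_calc - k_true)
# against a trending parameter (here EALF), so the trend itself, rather
# than a constant bias, is extrapolated when calculations fall outside
# the benchmark range. All data values are invented for illustration.

def fit_line(xs, ys):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# bias vs. EALF (eV) for a hypothetical set of benchmarks
ealf = [0.1, 0.2, 0.4, 0.8, 1.6]
bias = [-0.002, -0.003, -0.005, -0.009, -0.017]

m, b = fit_line(ealf, bias)
# Extrapolated bias at EALF = 3.2 eV, outside the benchmark range:
print(round(m * 3.2 + b, 4))
```

The fitted trend replaces a single pooled bias only where a statistically significant trend actually exists; otherwise the pooled bias and its uncertainty apply across the validated range.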
System Sensitivity
Sensitivity of the calculated k_{eff} to changes in system parameters is a concept closely related to benchmark similarity. This is because those parameters to which k_{eff} is most sensitive should weigh more heavily in evaluating benchmark similarity. If k_{eff} is highly sensitive to a given parameter, an error in that parameter could be expected to have a significant impact on the bias. Conversely, if k_{eff} is very insensitive to a given parameter, then an error would be expected to have a negligible impact on the bias. In the latter case, agreement with regard to that parameter is not important to establishing benchmark similarity.
Two major ways to determine the system's k_{eff} sensitivity are:
1. The TSUNAMI code in the SCALE 5 code package can be used to calculate the sensitivity coefficients for each nuclide-reaction pair present in the problem. TSUNAMI calculates both an integral sensitivity coefficient (i.e., summed over all energy groups) and a sensitivity profile as a function of energy group. The sensitivity coefficient is defined as the fractional change in k_{eff} for a 1% change in the nuclear cross section. It must be recognized that TSUNAMI only evaluates the k_{eff} sensitivity to changes in the nuclear data, and not to other parameters that could affect the bias and should be considered.
2. Direct sensitivity calculations can also be used to perturb the system and gauge the resulting effect on k_{eff}. Perturbation of the atomic number densities can also be used to confirm the integral sensitivity coefficients calculated by TSUNAMI (as when there is doubt as to convergence of the adjoint flux).
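The direct perturbation approach in item 2 can be sketched as follows. The k_{eff} "solver" here is a stand-in for an actual transport calculation, and its functional form is invented purely for illustration:

```python
# Hypothetical sketch of a direct sensitivity calculation: perturb one
# system parameter, recompute k_eff, and form the sensitivity
# coefficient S = (dk/k) / (dp/p).

def keff_model(h_to_x):
    # Stand-in for a criticality code: a made-up smooth dependence of
    # k_eff on the moderator-to-fuel ratio H/X.
    return 0.90 + 0.02 * (h_to_x / 100.0)

def sensitivity(param, rel_step=0.01):
    """Fractional change in k_eff per fractional change in the parameter."""
    k0 = keff_model(param)
    k1 = keff_model(param * (1.0 + rel_step))
    return ((k1 - k0) / k0) / rel_step

s = sensitivity(200.0)
# A small |S| suggests that errors in this parameter have little
# effect on the bias, supporting a relaxed similarity requirement
# for that parameter.
print(round(s, 4))
```

With a Monte Carlo code, the perturbation must of course be large compared to the statistical uncertainty in k_{eff} for the difference to be meaningful.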
The relationship between the k_{eff} sensitivity and confidence in the bias is the reason that high-enriched uranium fuel facilities have historically required a greater MoS than low-enriched uranium facilities. High-enriched systems tend to be much more sensitive to changes in the underlying system parameters, and in such systems, the effect of any errors on the bias would be greatly magnified. For this same reason, systems involving weapons-grade plutonium would also be more susceptible to undetected errors than low-assay mixed oxide (i.e., a few percent Pu). The appropriate amount of MoS should therefore be commensurate with the sensitivity of the system to changes in the underlying parameters.
Some questions that should be asked in evaluating the k_{eff} sensitivity include:
 How sensitive is k_{eff} to changes in the underlying nuclear data (e.g., cross sections)?
 How sensitive is k_{eff} to changes in the geometric form and material composition?
 Is the MoS large compared to the expected magnitude of changes in k_{eff} resulting from errors in the underlying system parameters?
Neutron Physics of the System
Another consideration that may affect the appropriate MoS is the extent to which the physical behavior of the system is known. Fissile systems which are known to be subcritical with a high degree of confidence do not require as much MoS as systems where subcriticality is less certain. An example of a system known to be subcritical would be a finished fuel assembly. These systems typically can only be made critical when highly thermalized, and due to extensive analysis and reactor experience, the flooded case is known to be subcritical in isolation. In addition, the thermal neutron cross sections for materials in finished reactor fuel have been measured with an exceptionally high degree of accuracy (as opposed to the unresolved resonance region). Other examples may include systems consisting of very simple geometry or other idealized situations, in which there is strong evidence that the system is subcritical based on comparisons with highly similar systems in published references such as handbooks or standards. In these cases, the amount of MoS needed may be significantly reduced.
An important factor in determining that the neutron physics of the system is well known is ensuring that the configuration of the system is fixed. For example, a finished fuel assembly is subject to tight quality assurance checks and has a form that is well-characterized and highly stable. A solution or powder process with a complex geometric arrangement would be much more susceptible to having its configuration change to one whose neutron physics is not well understood. Experience with similar processes may also be credited.
Some questions that should be asked in evaluating the neutron physics of the system include:
 Are the geometric form and material composition of the system rigid and unchanging?
 Are the geometric form and material composition of the system subject to strict quality assurance?
 Are there other reasons besides criticality calculations to conclude that the system will be subcritical (e.g., handbooks, standards, reactor fuel studies)?
 How well known are the cross sections in the energy range of interest?
Rigor of Validation Methodology
Having a high degree of confidence in the estimated bias and bias uncertainty requires both that there be a sufficient quantity of well-behaved benchmarks and that there be a sufficiently rigorous validation methodology. If either the data or the methodology is not adequate, a high degree of confidence in the results cannot be attained. The validation methodology must also be suitable for the data analyzed. For example, a statistical methodology relying on the data being normally distributed about the mean k_{eff} would not be appropriate for analyzing data that are not normally distributed. A linear regression fit to data that have a nonlinear bias trend would similarly not be appropriate.
Having a sufficient quantity of well-behaved benchmarks means that: (1) There are enough (applicable) benchmarks to make a statistically meaningful calculation of the bias and bias uncertainty; (2) the benchmarks span the entire range of all important parameters, without gaps requiring extrapolation or wide interpolation; and (3) the benchmarks do not display any apparent anomalies. Most of the statistical methods used rely on the benchmarks being normally distributed. To test for normality, there must be a statistically significant number of benchmarks (which may vary depending on the test employed). If there are insufficient data to verify normality to at least the 95% confidence level, then a nonparametric technique should be used to analyze the data. In addition, the benchmarks should provide a continuum of data across the entire validated range so that any variation in the bias as a function of important system parameters may be observed. Anomalies that may cast doubt on the results of the validation may include the presence of discrete clusters of experiments having a lower calculated k_{eff} than the set of benchmarks as a whole, an excessive fluctuation in k_{eff} values (e.g., having a χ^{2}/N ≫ 1), or discarding an unusually high number of benchmarks as outliers (i.e., more than 1-2%).
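The nonparametric route mentioned above can be illustrated with the standard order-statistics result: the confidence that the smallest of n benchmark k_{eff} values bounds at least 95% of the population is 1 − 0.95^n, which is why roughly 59 samples are needed for a 95%/95% statement. A minimal sketch:

```python
# Sketch of the one-sided nonparametric tolerance-limit confidence:
# the probability that the sample minimum of n values bounds at least
# `coverage` of the population is 1 - coverage**n.

def nonparametric_confidence(n, coverage=0.95):
    """Confidence that the smallest of n samples covers `coverage`."""
    return 1.0 - coverage ** n

for n in (20, 59, 100):
    print(n, round(nonparametric_confidence(n), 3))
```

With fewer samples than the 95%/95% threshold, either more benchmarks must be added or an additional penalty applied, since the minimum observed k_{eff} then bounds the population with less than 95% confidence.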
Having a sufficiently rigorous validation methodology means having a methodology that is appropriate for the number and distribution of benchmark experiments, that calculates the bias and bias uncertainty using an established statistical methodology, that accounts for any trends in the bias, and that accounts for all apparent sources of uncertainty in the bias (e.g., the increase in uncertainty due to extrapolating the bias beyond the range covered by the benchmark data).
In addition, confidence that the code's performance is wellunderstood means the bias should be relatively small (i.e., bias ≲ 2%), or else the reason for the bias should be known, and no credit must be taken for positive bias. If the absolute value of the bias is very large (especially if the reason for the large bias is unknown), this may indicate that the calculational method is not very accurate, and a larger MoS may be appropriate.
Some questions that should be asked in evaluating the data and the methodology include:
 Is the methodology consistent with the distribution of the data (e.g., normal)?
 Are there enough benchmarks to determine the behavior of the bias across the entire area of applicability?
 Does the assumed functional form of the bias represent a good fit to the benchmark data?
 Are there discrete clusters of benchmarks for which the overall bias appears to be nonconservative (especially consisting of the most applicable benchmarks)?
 Has additional margin been applied to account for extrapolation or wide interpolation?
 Have all apparent bias trends been taken into account?
 Has an excessive number of benchmarks been discarded as statistical outliers?
Performance of an adequate code validation alone is not sufficient justification for any specific MoS, because determination of the bias and bias uncertainty is separate from selection of an appropriate MoS.
Margin in System Parameters
The MoS is a reflection of the degree of confidence in the results of the validation analysis; the MoS is a margin in k_{eff} to provide a high degree of assurance that fissile systems calculated to be subcritical are in fact subcritical. However, there are other types of margin that can provide additional assurance of subcriticality; these margins are frequently expressed in terms of the system parameters rather than k_{eff}. It is generally acknowledged that the margin to criticality in system parameters (termed the margin of safety) is a better indication of the inherent safety of the system than margin in k_{eff}. In addition to establishing subcritical limits on controlled system parameters, licensees frequently establish operating limits to ensure that subcritical limits are not exceeded. The difference between the subcritical limit and the operating limit (if used) of a system parameter represents one type of margin that may be credited in justifying a lower MoS than would be otherwise acceptable. This difference between the subcritical limit and the operating limit should not be confused with the MoS. Confusion often arises, however, because systems in which k_{eff} is highly sensitive to changes in process parameters may require both: (1) A large margin between subcritical and operating limits, and (2) a large MoS. This is because systems in which k_{eff} is highly sensitive to changes in process parameters are highly sensitive to normal process variations and to any potential errors. Both the MoS and the margin between the subcritical and operating limits are thus dependent on the k_{eff} sensitivity of the system.
In addition to the margin between the subcritical and operating limits, there is also usually a significant amount of conservatism in the facility's technical practices with regard to modeling. In criticality calculations, controlled parameters are typically analyzed at their subcritical limits, whereas uncontrolled parameters are analyzed at their worst-case credible condition. In addition, tolerances must be conservatively taken into account. These technical practices generally result in conservatism of at least several percent in k_{eff}. Examples of this conservatism may include assuming optimum concentration in solution processes, neglect of neutron absorbers in structural materials, or requiring at least a 1-inch, tight-fitting reflector around process equipment. The margin due to this conservatism may be credited in justifying a smaller MoS than would otherwise be found acceptable. However, in order to take credit for this as part of the basis for the MoS, it should be demonstrated that the technical practices committed to in the license application will result in a predictable and consistent amount of conservatism in k_{eff}. If this modeling conservatism will not always be present, it should not be used as justification for the MoS.
Some questions that should be asked in evaluating the margin in system parameters include:
• How much margin in k_{eff} is present due to conservatism in the modeling practices?
• Will this margin be present for all normal and credible abnormal condition calculations?
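The interplay between the MoS and the other margins discussed above can be illustrated with the usual USL formulation described in NUREG/CR-6698 (listed in the References). The following is a minimal sketch only; the function names and all numeric values are illustrative assumptions, not values drawn from this guidance:

```python
def upper_subcritical_limit(bias, bias_uncertainty, mos):
    """Typical USL formulation (cf. NUREG/CR-6698):
    USL = 1.0 + bias - bias_uncertainty - MoS.
    A positive bias is conservatively not credited."""
    credited_bias = min(bias, 0.0)
    return 1.0 + credited_bias - bias_uncertainty - mos

def is_acceptably_subcritical(k_calc, sigma_calc, usl):
    """Typical acceptance check: k_calc + 2*sigma_calc <= USL."""
    return k_calc + 2.0 * sigma_calc <= usl

# Illustrative values only.
usl = upper_subcritical_limit(bias=-0.005, bias_uncertainty=0.010, mos=0.05)
print(round(usl, 4))
print(is_acceptably_subcritical(0.920, 0.002, usl))
```

Note that lowering the MoS raises the USL directly; this is why any credit taken for margin in system parameters or modeling conservatism must be shown to be consistently present.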
Normal vs. Abnormal Conditions
Historically, several licensees have distinguished between normal and abnormal condition k_{eff} limits, in that they have a higher k_{eff} limit for abnormal conditions. Separate limits for normal and abnormal condition k_{eff} values are permissible but are not required.
There is a certain likelihood associated with the MoS that processes calculated to be subcritical will in fact be critical. A somewhat higher likelihood is permissible for abnormal than for normal condition calculations. This is because the abnormal condition should be at least unlikely to occur, in accordance with the double contingency principle. That is, achieving the abnormal condition requires at least one contingency to have occurred and is likely to be promptly corrected upon detection. In addition, there is often additional conservatism present in the abnormal condition because uncontrolled parameters are analyzed at their worstcase credible conditions.
As stated in NUREG-1718, the fact that abnormal conditions meet the standard of being at least unlikely from the standpoint of the double contingency principle may be used to justify having a lower MoS than would be permissible for normal conditions. In addition, the increased risk associated with the less conservative MoS should be commensurate with and offset by the unlikelihood of achieving the abnormal condition. That is, the likelihood that a process calculated to be subcritical will be critical increases when going from a normal to a higher abnormal condition k_{eff} limit. If the normal condition k_{eff} limit is acceptable, then the abnormal limit will also be acceptable provided this increased likelihood is offset by the unlikelihood of going to the abnormal condition because of the controls that have been established. If a single k_{eff} limit is used (i.e., no credit for unlikelihood of the abnormal condition), then it must be determined to be acceptable to cover both normal and credible abnormal conditions.
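The two-tier limit structure described above can be sketched as follows; the limit values and the function name are illustrative assumptions, not limits from this guidance:

```python
def applicable_keff_limit(condition, normal_limit=0.94, abnormal_limit=0.96,
                          abnormal_is_unlikely=False):
    """Select the k_eff limit for a calculation. A higher abnormal-condition
    limit is defensible only when the abnormal condition is at least unlikely
    under the double contingency principle; otherwise a single limit must
    bound both normal and credible abnormal conditions."""
    if condition == "abnormal" and abnormal_is_unlikely:
        return abnormal_limit
    return normal_limit
```

With no double contingency credit, `applicable_keff_limit("abnormal")` falls back to the single (normal) limit, reflecting the last sentence above.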
Statistical Arguments
Historically, the argument has been used that the MoS can be estimated based on comparing the results of two statistical methods. In the USLSTATS code issued with the SCALE code package there are two methods for calculating the USL: (1) The Confidence Band with Administrative Margin Approach, which calculates USL1, and (2) the Lower Tolerance Band Approach, which calculates USL2. The MoS is an input parameter to the Confidence Band Approach but is not included explicitly in the Lower Tolerance Band Approach. Justification that the MoS chosen in the Confidence Band Approach is adequate has been based on a comparison of USL1 and USL2 (i.e., the condition that USL1, including the chosen MoS, is less than USL2). However, this justification is not sufficient.
The condition that USL1 < USL2 is necessary, but not sufficient, to show that an adequate MoS has been selected. These methods are two different statistical treatments of the data, and a comparison between them can only demonstrate whether the MoS is sufficient to bound statistical uncertainties included in the Lower Tolerance Band Approach but not included in the Confidence Band Approach. There may be other statistical or non-statistical errors in the calculation of k_{eff} that are not handled in the statistical treatments. Therefore, the NRC does not consider this an acceptable justification for selection of the MoS.
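The point can be made concrete with a small sketch of the comparison. USL-2 is simply taken as a given value here rather than computed, since the Lower Tolerance Band statistics are internal to USLSTATS; the function names and all numbers are illustrative assumptions:

```python
def usl1_confidence_band(lower_confidence_band, mos):
    """USL-1 (Confidence Band with Administrative Margin): the chosen MoS
    is subtracted from the lower confidence band on the benchmark fit.
    lower_confidence_band stands in for the USLSTATS fitted band."""
    return lower_confidence_band - mos

def usl_comparison_passes(usl1, usl2):
    """USL1 < USL2 is a necessary condition only: it shows the MoS bounds
    the additional statistical spread captured by the Lower Tolerance Band,
    not that the MoS is adequate against non-statistical errors."""
    return usl1 < usl2

usl1 = usl1_confidence_band(lower_confidence_band=0.99, mos=0.05)  # roughly 0.94
print(usl_comparison_passes(usl1, usl2=0.95))
```

Passing this check says nothing about modeling or cross-section errors outside the statistical treatments, which is why the NRC does not accept the comparison alone as justification for the MoS.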
Regulatory Basis
In addition to complying with paragraphs (b) and (c) of this section, the risk of nuclear criticality accidents must be limited by assuring that under normal and credible abnormal conditions, all nuclear processes are subcritical, including use of an approved margin of subcriticality for safety. [10 CFR 70.61(d)]
Technical Review Guidance
Determination of an adequate MoS is strongly dependent upon the specific processes and conditions at the facility being licensed, which is largely the reason that different facilities have been licensed with different limits. Judgement and experience must be employed in evaluating the adequacy of the proposed MoS. Historically, however, an MoS of 0.05 in k_{eff} has generally been found acceptable for a typical low-enriched fuel fabrication facility. This will generally be the case provided there is a sufficient quantity of well-behaved benchmarks and a sufficiently rigorous validation methodology has been employed. For systems involving high-enriched uranium or plutonium, additional MoS may be appropriate to account for the increased sensitivity of k_{eff} to changes in system parameters. There is no consistent precedent for such facilities, but the amount of increased MoS should be commensurate with the increased k_{eff} sensitivity of these systems. Therefore, an MoS of 0.05 in k_{eff} for low-enriched fuel facilities or an MoS of 0.1 for high-enriched or plutonium fuel facilities must be justified but will generally be found acceptable, with the caveats discussed above.[3]
For facility processes involving unusual materials or new process conditions, the validation should be reviewed in detail to ensure that there are no anomalies associated with unique system characteristics.
Reducing the MoS below 0.05 for low-enriched processes or 0.1 for high-enriched or plutonium processes requires substantial additional justification; in any case, the MoS should not be reduced below a minimum of 0.02. Such justification may include:
1. An unusually high degree of similarity between the chosen benchmarks and anticipated normal and credible abnormal conditions being validated.
2. Demonstration that the system k_{eff} is highly insensitive to changes in underlying system parameters, such that the worst credible modeling or cross section errors would have a negligible effect on the bias.
3. Demonstration that the system being modeled is known to be subcritical with a high degree of confidence. This requires that there be other strong evidence in addition to the calculations that the system is subcritical (such as comparison with highly similar systems in published references such as handbooks or standards).
4. Demonstration that the validation methodology is exceptionally rigorous, so that any potential sources of error have been accounted for in calculating the USL.
5. Demonstration that there is a dependable and consistent amount of conservatism in k_{eff} due to the conservatism in modeling practices.
In addition, justification of the MoS for abnormal conditions may include:
6. Demonstration that the increased likelihood of a process calculated as subcritical being critical is offset by the unlikelihood of achieving the abnormal condition.
This list is not all-inclusive; other technical justification demonstrating that there is a high degree of confidence in the calculation of k_{eff} may be used.
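The screening logic of this section can be summarized in a short sketch. The function name and return strings are illustrative assumptions; a result of "requires substantial additional justification" means the items enumerated above (or equivalent) must be provided:

```python
def screen_proposed_mos(mos, fuel_category):
    """Screen a proposed MoS per this ISG.
    fuel_category: 'low-enriched' or 'high-enriched/Pu'.
    Thresholds from the text: 0.05 (low-enriched) / 0.1 (high-enriched
    or plutonium) are generally acceptable with caveats; smaller values
    need substantial additional justification; 0.02 is a hard floor."""
    baseline = 0.05 if fuel_category == "low-enriched" else 0.10
    if mos < 0.02:
        return "not acceptable: below the 0.02 minimum"
    if mos < baseline:
        return "requires substantial additional justification"
    return "generally acceptable, with caveats"
```

For example, an MoS of 0.03 for a low-enriched process falls between the 0.02 floor and the 0.05 baseline, so it triggers the substantial-justification path.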
Recommendation
The guidance in this ISG should supplement the current guidance in the NCS chapters of the fuel facility SRPs (NUREG-1520 and NUREG-1718). In addition, NUREG-1718, Section 6.4.3.3.4, should be revised to remove the following sentence: "A minimum subcritical margin of 0.05 is generally considered to be acceptable without additional justification when both the bias and its uncertainty are determined to be negligible."
References
NUREG-1520, "Standard Review Plan for the Review of a License Application for a Fuel Cycle Facility"
NUREG-1718, "Standard Review Plan for the Review of an Application for a Mixed Oxide (MOX) Fuel Fabrication Facility"
NUREG/CR-6698, "Guide for Validation of Nuclear Criticality Safety Calculational Methodology"
NUREG/CR-6361, "Criticality Benchmark Guide for Light-Water-Reactor Fuel in Transportation and Storage Packages"
Approved:
Date:
Director, FCSS
Footnotes
1. There are many different ways of computing bias as used in calculation of the USL. This may be an average bias, a least-squares fitted bias, a bounding bias, etc., as described in the applicant's methodology.
2. Not all licensees have a separate subcritical and operating limit. Use of administrative operating limits is optional, because the subcritical limit should conservatively take parametric tolerances into account.
3. NUREG-1718, Section 6.4.3.3.4, states that the applicant should submit justification for the MoS, but then states that an MoS of 0.05 is "generally considered to be acceptable without additional justification when both the bias and its uncertainty are determined to be negligible." These statements are inconsistent. The statement about 0.05 being generally acceptable without additional justification is in error and should be removed from the next revision to the SRP.
[FR Doc. 04-26688 Filed 12-3-04; 8:45 am]
BILLING CODE 7590-01-P