Although Monte Carlo simulation (MCS) can be computationally expensive due to the large number of required model calls, it has a major advantage over other sampling-based structural reliability methods, besides being rather robust: After a conducted MCS, the uncertainty about the probability of failure $p_f$ can be quantified analytically and does not depend on the particular shape of the limit state function. That is, a full probabilistic description of $p_f$ can be obtained conditional on a single MCS run, knowing only the total number $K$ of samples and the number $H$ of samples for which the failure event was observed. MCS is the only sampling-based reliability method with this property; the performance of all other sampling-based reliability methods is influenced by the particular shape of the limit state function.
Monte Carlo simulation of rare events
In MCS, $K$ independent samples $\mathbf{x}_{(1)},\ldots,\mathbf{x}_{(K)}$ of the input random vector $\mathbf{X}$ are generated, and for each sample $\mathbf{x}_{(i)}$ the limit state function $g\left(\mathbf{x}_{(i)}\right)$ is evaluated. The number $H$ of occurrences of the failure event $\mathcal{F}$, for which $g\left(\mathbf{x}\right)\le 0$, is counted; i.e., $H$ expresses for how many samples the modeled performance of the system of interest is unsatisfactory. An unbiased point estimate $\widehat{p}$ of $p_f$ is obtained as:
$$
p_f \approx \widehat{p} = \frac{H}{K}\,.
$$
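The estimator can be sketched in a few lines of Python. The limit state function $g$ and its input distribution below are purely illustrative assumptions; in practice, $g$ would wrap a call to the actual structural model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative assumption: two independent standard normal inputs and the
# limit state function g(x) = 6 - x1 - x2, with failure defined by g(x) <= 0.
def g(x):
    return 6.0 - x[:, 0] - x[:, 1]

K = 10**6                        # total number of samples
x = rng.standard_normal((K, 2))  # K independent samples of X
H = int(np.sum(g(x) <= 0))       # number of samples with observed failure
p_hat = H / K                    # unbiased point estimate of p_f
print(f"H = {H}, p_hat = {p_hat:.2e}")
```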
Bayesian post-processing for MCS
Using Bayesian statistics, the (posterior) uncertainty about $p_f$ can be quantified through a Beta distributed random variable $P_f$ with parameters $H+1$ and $K-H+1$, and probability density function $$
f_{P_f|H,K}(p) = \frac{p^{H}\cdot\left(1-p\right)^{K-H}}{\operatorname{B}\left(H+1,K-H+1\right)}\,,
$$ where $\operatorname{B}(\cdot,\cdot)$ denotes the beta function, defined as $\operatorname{B}(\alpha,\beta) = \int_0^1 t^{\alpha-1}(1-t)^{\beta-1}\,\mathrm{d}t$.
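As a sketch, this posterior can be handled directly with `scipy.stats.beta`; the counts $H$ and $K$ below are illustrative:

```python
from scipy import stats

H, K = 11, 10**6  # illustrative counts

# Posterior of P_f given H failures in K samples (uniform prior):
# P_f | H, K ~ Beta(H + 1, K - H + 1)
posterior = stats.beta(H + 1, K - H + 1)
print(posterior.pdf(H / K))  # posterior density at the point estimate H/K
```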
The posterior mean of $P_f$ is:
$$
\operatorname{E}\left[P_f|H,K\right] = \frac{H+1}{K+2}\,.
$$ The posterior variance of $P_f$ is:
$$
\operatorname{Var}\left[P_f|H,K\right] = \frac{\left(H+1\right)\left(K-H+1\right)}{\left(K+2\right)^2(K+3)}\,.
$$ The posterior coefficient of variation of $P_f$ is:
$$
\operatorname{CV}\left[P_f|H,K\right] = \sqrt{\frac{K-H+1}{\left(H+1\right)(K+3)}}\,.
$$
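These closed-form moments can be cross-checked against a generic Beta distribution implementation, e.g. with `scipy` (the counts are again illustrative):

```python
import numpy as np
from scipy import stats

H, K = 11, 10**6  # illustrative counts

# Closed-form posterior moments of P_f ~ Beta(H + 1, K - H + 1):
mean = (H + 1) / (K + 2)
var = (H + 1) * (K - H + 1) / ((K + 2)**2 * (K + 3))
cov = np.sqrt((K - H + 1) / ((H + 1) * (K + 3)))

# Cross-check against scipy's Beta distribution:
posterior = stats.beta(H + 1, K - H + 1)
assert np.isclose(mean, posterior.mean())
assert np.isclose(var, posterior.var())
assert np.isclose(cov, posterior.std() / posterior.mean())
```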
The prior distribution underlying the stated posterior is the uniform distribution on $[0,1]$, which, for the problem at hand, is the maximum entropy probability distribution. This choice of prior is particularly suitable for quantifying upper credible intervals for MCS.
Credible intervals for MCS
Based on the posterior distribution stated in the previous section, credible intervals can be evaluated. Upper credible intervals are of particular relevance for quantifying the uncertainty about the probability of failure $p_f$ in reliability analysis, as they quantify how plausible it is that the value of $p_f$ is smaller than an associated upper bound. For example, the $95\%$ upper credible interval gives the threshold $u_{95\%}$ for which there is a $95\%$ probability that the value of $P_f$ is smaller than $u_{95\%}$; i.e., $\Pr\left[P_f\le u_{95\%} \right] = 95\%$. As the posterior $P_f$ follows a Beta distribution, upper credible intervals for $P_f$ are straightforward to evaluate, for example by means of our Online Tool for post-processing of MCS.
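In Python, for example, the $95\%$ upper credible interval is simply the $95\%$ quantile of the Beta posterior, which a sketch might obtain as follows (illustrative counts again):

```python
from scipy import stats

H, K = 11, 10**6  # illustrative counts

# 95% upper credible interval: the threshold u_95% with Pr[P_f <= u_95%] = 0.95,
# i.e., the 95% quantile of the Beta(H + 1, K - H + 1) posterior.
u_95 = stats.beta.ppf(0.95, H + 1, K - H + 1)
print(f"u_95% = {u_95:.2e}")
```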
The presented Bayesian approach yields meaningful credible intervals even when no failed samples were observed; i.e., for $H=0$.
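Indeed, for $H=0$ the posterior reduces to a Beta distribution with parameters $1$ and $K+1$, whose CDF is $\Pr\left[P_f\le p\right] = 1-(1-p)^{K+1}$, so that the $95\%$ upper credible interval is available in closed form: $$
u_{95\%} = 1 - 0.05^{\frac{1}{K+1}} \approx \frac{3}{K+1}\,,
$$ since $-\ln(0.05)\approx 3$; this recovers the well-known rule of three.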