API Reference
MLMC Estimation
MultilevelMonteCarlo.mlmc_estimate — Function
mlmc_estimate(levels, qoi_functions, samples_per_level, draw_parameters;
parallel=false) -> Vector{Float64}

Compute MLMC estimates of E[qⱼ] for each quantity of interest, using pre-specified sample counts per level. Statistics are accumulated online.
The estimator for each QoI qⱼ is the telescoping sum:
\[\hat{Q}_j = \frac{1}{N_1}\sum_{i=1}^{N_1} q_j\bigl(L_1(p^{(i)})\bigr) + \sum_{l=2}^{L} \frac{1}{N_l}\sum_{i=1}^{N_l} \Bigl[q_j\bigl(L_l(p^{(i)})\bigr) - q_j\bigl(L_{l-1}(p^{(i)})\bigr)\Bigr]\]
For a single level this reduces to standard Monte Carlo.
Arguments
- levels: [L₁, …, Lₗ] — model evaluators, one per fidelity level. Each Lₗ(params) -> model_output.
- qoi_functions: [q₁, …, qQ] — QoI extractors. Each qⱼ(model_output) -> scalar.
- samples_per_level: [N₁, …, Nₗ] — sample counts at each level.
- draw_parameters: () -> params.
Keyword Arguments
parallel::Bool = false: Use threading within each level.
Returns
Vector{Float64} of length Q with estimates [E[q₁], …, E[qQ]].
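The telescoping sum above can be sketched in a few lines of plain Julia. This is an illustrative stand-alone implementation for a single scalar QoI (with naive accumulation, not the package's online statistics), using hypothetical toy levels whose bias shrinks as 1/2^l:

```julia
using Random

# Minimal sketch of the MLMC telescoping sum for one scalar QoI.
function mlmc_estimate_sketch(levels, qoi, samples_per_level, draw_parameters)
    est = 0.0
    for (l, N) in enumerate(samples_per_level)
        acc = 0.0
        for _ in 1:N
            p = draw_parameters()                     # same parameters at both levels
            fine = qoi(levels[l](p))
            coarse = l == 1 ? 0.0 : qoi(levels[l-1](p))
            acc += fine - coarse                      # correction (plain MC at level 1)
        end
        est += acc / N
    end
    return est
end

# Toy "model" at level l: returns the parameter plus a resolution bias 1/2^l,
# so the exact MLMC mean is 0 + 1/2 - 1/4 - 1/8 = 1/8.
Random.seed!(1)
toy_levels = [p -> p + 1/2^l for l in 1:3]
est = mlmc_estimate_sketch(toy_levels, identity, [4000, 2000, 1000], () -> randn())
```

With deterministic corrections as here, only the level-1 term carries Monte Carlo noise, which is why MLMC concentrates most samples on the cheap coarse level.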
MultilevelMonteCarlo.mlmc_estimate_adaptive — Function
mlmc_estimate_adaptive(levels, qoi_functions, draw_parameters,
cost_per_level, tolerance;
variance_qoi=nothing, initial_samples=100,
parallel=false, max_iterations=20)
-> (estimates, samples_per_level)

Adaptively compute MLMC estimates by running pilot samples, computing an optimal allocation, and iteratively adding samples until the variance budget implied by the tolerance is met.
All QoI means and the allocation variance are maintained as online accumulators, so new batches are merged without reprocessing earlier data.
Algorithm
1. Run initial_samples at every level to obtain pilot variance estimates.
2. Use optimal_samples_per_level to compute target sample counts.
3. Draw additional samples at any level where the current count is below target.
4. Re-estimate variances from all accumulated samples and repeat from step 2.
5. Converge when no level needs more samples.
Arguments
- levels: Model evaluators [L₁, …, Lₗ], one per fidelity level.
- qoi_functions: QoI extractors [q₁, …, qQ], each q(model_output) -> scalar.
- draw_parameters: () -> params.
- cost_per_level: Cost per sample at each level.
- tolerance: Target RMSE.
Keyword Arguments
- variance_qoi: Scalar v(model_output) -> scalar for allocation. Default: v(out) = Σᵢ qᵢ(out)².
- initial_samples::Int = 100: Pilot samples per level (≥ 2).
- parallel::Bool = false: Thread samples within each level.
- max_iterations::Int = 20: Safety limit on refinement iterations.
Returns
(estimates, samples_per_level) where estimates::Vector{Float64} has length Q and samples_per_level::Vector{Int} is the final allocation.
MultilevelMonteCarlo.mlmc_estimate_vector_qoi — Function
mlmc_estimate_vector_qoi(levels, qoi_function, samples_per_level, draw_parameters;
parallel=false) -> Vector{Float64}

MLMC estimator for a single vector-valued QoI function. The function must return a vector of the same length for every evaluation. Statistics are accumulated online per component using the standard telescoping sum.
Arguments
- levels: [L₁, …, Lₗ] — model evaluators, one per fidelity level.
- qoi_function: q(model_output) -> AbstractVector — a single QoI that returns a fixed-length vector.
- samples_per_level: [N₁, …, Nₗ] — sample counts at each level.
- draw_parameters: () -> params.
Keyword Arguments
parallel::Bool = false: Use threading within each level.
Returns
Vector{Float64} of the same length as qoi_function's output — the component-wise MLMC mean estimate.
MultilevelMonteCarlo.evaluate_monte_carlo — Function
evaluate_monte_carlo(n_samples, draw_parameters, level, qoi_functions;
parallel=false) -> Vector{Float64}

Standard (single-level) Monte Carlo estimator for multiple QoIs.
Arguments
- n_samples: Number of samples.
- draw_parameters: () -> params.
- level: Model evaluator L(params) -> model_output.
- qoi_functions: [q₁, …, qQ] — QoI extractors.
Returns
Vector{Float64} of length Q with estimates [E[q₁], …, E[qQ]].
evaluate_monte_carlo(n_samples, draw_parameters, qoi_function;
parallel=false) -> Float64

Standard Monte Carlo estimator of E[Q] for a single scalar QoI.
MultilevelMonteCarlo.evaluate_monte_carlo_vector_qoi — Function
evaluate_monte_carlo_vector_qoi(n_samples, draw_parameters, level, qoi_function;
parallel=false) -> Vector{Float64}

Standard (single-level) Monte Carlo estimator for a single vector-valued QoI.
Arguments
- n_samples: Number of samples.
- draw_parameters: () -> params.
- level: Model evaluator L(params) -> model_output.
- qoi_function: q(model_output) -> AbstractVector — must return a fixed-length vector.
Returns
Vector{Float64} — the component-wise MC mean estimate.
Sample Allocation
MultilevelMonteCarlo.variance_per_level — Function
variance_per_level(levels, qoi_functions, samples_per_level, draw_parameters;
variance_qoi=nothing, parallel=false)
-> (variances_corrections, variances)

Estimate the variance of a scalar QoI and its multilevel corrections at each level. Statistics are computed online (Welford's algorithm).
The returned variances drive optimal sample allocation via optimal_samples_per_level. A single scalar variance_qoi is used for allocation; the individual qoi_functions are only needed to construct the default variance_qoi.
Arguments
- levels: Vector of model evaluators [L₁, …, Lₗ], one per fidelity level. Each Lₗ(params) -> model_output runs the forward model.
- qoi_functions: Vector of QoI extractors [q₁, …, qQ]. Each qⱼ(model_output) -> scalar. Used to construct the default variance_qoi.
- samples_per_level: Pilot sample counts, one per level.
- draw_parameters: () -> params.
Keyword Arguments
- variance_qoi: v(model_output) -> scalar whose correction variance drives allocation. Default: v(out) = Σᵢ qᵢ(out)².
- parallel::Bool = false: Use threading within each level.
Returns
(variances_corrections, variances):

- variances_corrections[l] = Var[v(Lₗ₊₁(p)) − v(Lₗ(p))] for l = 1…L−1
- variances[l] = Var[v(Lₗ(p))] for l = 1…L

For a single-level ensemble, variances_corrections is an empty vector.
MultilevelMonteCarlo.optimal_samples_per_level — Function
optimal_samples_per_level(variances_corrections, variances, cost_per_level, tolerance)

Compute optimal sample counts using the classical MLMC allocation formula that minimises total work for a target mean-square-error budget.
Arguments
- variances_corrections: Var[correction] at levels 2…L (length L−1).
- variances: Var[QoI] at each level (length L). Only variances[1] is used directly; levels 2+ draw from variances_corrections.
- cost_per_level: Cost per sample at each level (length L).
- tolerance: Target RMSE. Variance budget = tolerance² / 2.
Returns
Vector of optimal sample counts (integers).
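The classical allocation sets Nₗ proportional to sqrt(Vₗ/Cₗ). A stand-alone sketch under the tolerance²/2 variance budget described above (the package's rounding may differ):

```julia
# Sketch of the classical MLMC allocation: minimise Σ Nₗ·Cₗ subject to
# Σ Vₗ/Nₗ ≤ tol²/2, which gives Nₗ ∝ sqrt(Vₗ/Cₗ).
function optimal_samples_sketch(variances_corrections, variances, cost, tol)
    V = vcat(variances[1], variances_corrections)   # per-level variance terms
    budget = tol^2 / 2                              # variance half of the MSE
    S = sum(sqrt(v * c) for (v, c) in zip(V, cost))
    return [ceil(Int, sqrt(v / c) * S / budget) for (v, c) in zip(V, cost)]
end

# Cheap noisy level 1, expensive but small correction at level 2.
N = optimal_samples_sketch([0.01], [1.0, 1.1], [1.0, 4.0], 0.05)
```

Note that variances[2] is ignored, as documented: only the level-1 variance and the correction variances enter the formula.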
Sample Collection & Storage
MultilevelMonteCarlo.MLMCSamples — Type
MLMCSamples

Container holding raw QoI samples from an MLMC run.
Fields
- fine::Vector{Matrix{Float64}}: fine[lvl] is (n_qois × N_lvl) — QoI values evaluated at the fine level.
- coarse::Vector{Matrix{Float64}}: coarse[lvl] is (n_qois × N_lvl) — QoI values evaluated at the coarse level (lvl-1). For level 1 this is a matrix of zeros (no coarser level exists).
- corrections::Vector{Matrix{Float64}}: corrections[lvl] = fine[lvl] - coarse[lvl].
- n_levels::Int
- n_qois::Int
MultilevelMonteCarlo.mlmc_sample — Function
mlmc_sample(levels, qoi_functions, samples_per_level, draw_parameters;
parallel=false, netcdf_path=nothing) -> MLMCSamples

Run an MLMC sampling campaign and return (or save) all raw QoI samples.
For each level l and each sample i, the model is evaluated at the fine level l and the coarse level l−1 (with the same parameters), and every QoI extractor is applied to both outputs.
Arguments
- levels, qoi_functions, samples_per_level, draw_parameters: same as mlmc_estimate.
Keyword Arguments
- parallel::Bool = false: thread samples within each level.
- netcdf_path::Union{String,Nothing} = nothing: if a file path is given, all samples are written to a single NetCDF-4 file in addition to being returned.
Returns
An MLMCSamples struct containing fine, coarse, and corrections matrices for every level.
MultilevelMonteCarlo.mlmc_estimate_from_samples — Function
mlmc_estimate_from_samples(samples::MLMCSamples) -> Vector{Float64}

Compute the MLMC telescoping-sum estimate from pre-collected samples.
MultilevelMonteCarlo.read_mlmc_samples_netcdf — Function
read_mlmc_samples_netcdf(path::String) -> MLMCSamples

Read an MLMCSamples struct from a NetCDF file previously written by mlmc_sample.
PDF & CDF Estimation
MultilevelMonteCarlo.estimate_pdf_mlmc_kernel_density — Function
estimate_pdf_mlmc_kernel_density(samples::MLMCSamples, qoi_index::Int;
bandwidth=nothing) -> Function

Construct a kernel density estimate of the PDF for QoI qoi_index from MLMC samples, using the multilevel telescoping-sum structure.
The estimator is:
\[\hat{f}(u) = \frac{1}{N_1} \sum_{i=1}^{N_1} K_h(u - Q_1^{(i)}) + \sum_{l=2}^{L} \frac{1}{N_l} \sum_{i=1}^{N_l} \bigl[K_h(u - Q_l^{(i)}) - K_h(u - Q_{l-1}^{(i)})\bigr]\]
where $K_h(x) = \frac{1}{h} \varphi(x/h)$ is a Gaussian kernel with bandwidth $h$.
Arguments
- samples: An MLMCSamples struct.
- qoi_index: Which QoI (row index) to estimate the PDF for.
Keyword Arguments
- bandwidth: Kernel bandwidth. If nothing, Silverman's rule of thumb is used based on the finest-level fine samples.
Returns
A callable f(u) -> Float64 that evaluates the estimated PDF at point u.
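A stand-alone sketch of this telescoping KDE, operating on plain per-level sample vectors rather than the MLMCSamples layout, may clarify the structure (fine[1] holds the level-1 samples; coarse[1] is unused):

```julia
# Gaussian kernel K_h(x) = φ(x/h)/h, as in the estimator above.
ϕ(x) = exp(-x^2 / 2) / sqrt(2π)
K(u, x, h) = ϕ((u - x) / h) / h

function mlmc_kde_sketch(fine, coarse, h)
    function f(u)
        # Level-1 term: plain kernel density estimate.
        est = sum(K(u, x, h) for x in fine[1]) / length(fine[1])
        # Correction terms: paired fine/coarse kernel differences.
        for l in 2:length(fine)
            Nl = length(fine[l])
            est += sum(K(u, fine[l][i], h) - K(u, coarse[l][i], h) for i in 1:Nl) / Nl
        end
        return est
    end
    return f
end

# With identical fine and coarse samples at level 2, the correction vanishes
# and the estimate reduces to the level-1 KDE.
f = mlmc_kde_sketch([[0.0, 1.0], [0.5, 0.5]], [Float64[], [0.5, 0.5]], 1.0)
```

Unlike a plain KDE, the correction terms can make the estimate locally negative; the MaxEnt estimators below avoid this by construction.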
MultilevelMonteCarlo.estimate_cdf_mlmc_kernel_density — Function
estimate_cdf_mlmc_kernel_density(samples::MLMCSamples, qoi_index::Int;
bandwidth=nothing) -> Function

Construct a kernel CDF estimate for QoI qoi_index from MLMC samples.
Uses the same MLMC telescoping sum as the PDF estimator, but with the Gaussian CDF kernel $\Phi((u - x)/h)$ instead of the PDF kernel.
Returns
A callable F(u) -> Float64 that evaluates the estimated CDF at point u.
MultilevelMonteCarlo.estimate_pdf_maxent — Function
estimate_pdf_maxent(samples::MLMCSamples, qoi_index::Int;
R::Int=10, maxiter::Int=100, tol::Float64=1e-12,
support=nothing) -> (pdf_func, λ, a, b)

Estimate the PDF via the Maximum Entropy method using moments estimated from MLMC samples.
The PDF is approximated as:
\[\tilde{f}_U(u) = \exp\!\left(\sum_{k=0}^{R} \lambda_k \phi_k\!\left(\frac{2(u-a)}{b-a} - 1\right)\right)\]
where $\phi_k$ are orthonormal Legendre polynomials on $[-1,1]$, and $\lambda_k$ are determined by matching generalized moments via Newton's method. The Jacobian is computed using ForwardDiff.jl.
Arguments
- samples: MLMCSamples struct.
- qoi_index: Which QoI to estimate the PDF for.
Keyword Arguments
- R::Int = 10: Number of polynomial moments (polynomial degree).
- maxiter::Int = 100: Maximum Newton iterations.
- tol::Float64 = 1e-12: Newton convergence tolerance (‖residual‖₂).
- support: Tuple (a, b) for the support interval. If nothing, estimated from the sample range with 10% padding.
Returns
A tuple (pdf_func, λ, a, b) where:

- pdf_func(u) evaluates the MaxEnt PDF at u (in original coordinates).
- λ is the coefficient vector of length R+1.
- a, b are the support bounds.
MultilevelMonteCarlo.estimate_cdf_maxent — Function
estimate_cdf_maxent(samples::MLMCSamples, qoi_index::Int; kwargs...)
-> (cdf_func, λ, a, b)

Estimate the CDF via the Maximum Entropy method. First computes the MaxEnt PDF via estimate_pdf_maxent, then integrates it numerically.
All keyword arguments are forwarded to estimate_pdf_maxent.
Returns
A tuple (cdf_func, λ, a, b) where cdf_func(u) evaluates the CDF at u.
2-D PDF & CDF Estimation
MultilevelMonteCarlo.estimate_pdf_mlmc_kernel_density_2d — Function
estimate_pdf_mlmc_kernel_density_2d(samples::MLMCSamples,
qoi_indices::NTuple{2,Int}; bandwidth=nothing) -> Function

Construct a 2-D kernel density estimate of the joint PDF for two QoIs using the MLMC telescoping sum with a bivariate Gaussian kernel:
\[K_{\mathbf{h}}(\mathbf{x}) = \frac{1}{h_1 h_2} \varphi\!\left(\frac{x_1}{h_1}\right) \varphi\!\left(\frac{x_2}{h_2}\right)\]
Arguments
- samples: An MLMCSamples struct.
- qoi_indices: Tuple (j₁, j₂) of the two QoI row indices.
Keyword Arguments
- bandwidth: Per-dimension bandwidths (h₁, h₂). Default: Silverman's rule per dimension using finest-level fine samples.
Returns
A callable f(x₁, x₂) -> Float64 evaluating the estimated joint PDF.
MultilevelMonteCarlo.estimate_cdf_mlmc_kernel_density_2d — Function
estimate_cdf_mlmc_kernel_density_2d(samples::MLMCSamples,
qoi_indices::NTuple{2,Int}; bandwidth=nothing) -> Function

Construct a 2-D kernel CDF estimate for two QoIs using product Gaussian CDF kernels and the MLMC telescoping sum:
\[\hat{F}(x_1, x_2) = \frac{1}{N_1} \sum_{i=1}^{N_1} \Phi\!\left(\frac{x_1 - Q_{1,1}^{(i)}}{h_1}\right) \Phi\!\left(\frac{x_2 - Q_{1,2}^{(i)}}{h_2}\right) + \sum_{l=2}^{L} \frac{1}{N_l}\sum_{i=1}^{N_l}\bigl[\cdots\bigr]\]
Returns
A callable F(x₁, x₂) -> Float64 evaluating the estimated joint CDF.
MultilevelMonteCarlo.estimate_pdf_maxent_2d — Function
estimate_pdf_maxent_2d(samples::MLMCSamples, qoi_indices::NTuple{2,Int};
R::Int=4, maxiter::Int=200, tol::Float64=1e-10,
support=nothing) -> (pdf_func, Λ, (a₁,b₁), (a₂,b₂))

Estimate the 2-D joint PDF via Maximum Entropy using tensor-product Legendre moments estimated from MLMC samples.
The PDF is:
\[f(x,y) = \frac{4}{(b_1-a_1)(b_2-a_2)} \exp\!\left(\sum_{k=0}^{R}\sum_{l=0}^{R} \Lambda_{kl}\, P_k(\xi_1)\, P_l(\xi_2)\right)\]
where $\xi_i = 2(x_i - a_i)/(b_i - a_i) - 1$ maps to $[-1,1]$.
Arguments
- samples: MLMCSamples.
- qoi_indices: Tuple (j₁, j₂).
Keyword Arguments
- R::Int = 4: Polynomial degree per dimension. Total parameters = (R+1)².
- maxiter, tol: Newton iteration controls.
- support: ((a₁,b₁), (a₂,b₂)). Default: sample range + 10% padding.
Returns
(pdf_func, Λ, (a₁,b₁), (a₂,b₂)) where pdf_func(x₁,x₂) evaluates the PDF.
MultilevelMonteCarlo.estimate_cdf_maxent_2d — Function
estimate_cdf_maxent_2d(samples::MLMCSamples, qoi_indices::NTuple{2,Int};
kwargs...) -> (cdf_func, Λ, (a₁,b₁), (a₂,b₂))

Estimate the 2-D joint CDF via the Maximum Entropy method. Computes the MaxEnt PDF via estimate_pdf_maxent_2d and integrates it numerically.
Returns
(cdf_func, Λ, bounds₁, bounds₂) where cdf_func(x₁,x₂) evaluates the CDF.
Rank Histograms
MultilevelMonteCarlo.ml_gregory_resample — Function
ml_gregory_resample(samples::MLMCSamples, qoi_index::Int,
number_of_resamples::Int) -> Vector{Float64}

Generate resampled values from an MLMC ensemble using Gregory's multilevel inverse-CDF method.
Fine and coarse QoI samples are sorted independently at each level to form per-level empirical quantile functions, which are then combined via the MLMC telescoping sum:
\[\hat{Q}^{-1}(u) = \hat{F}_1^{-1}(u) + \sum_{l=2}^{L}\bigl[\hat{F}_l^{-1}(u) - \hat{G}_l^{-1}(u)\bigr]\]
where $\hat{F}_l^{-1}$ and $\hat{G}_l^{-1}$ are the empirical quantile functions of the fine and coarse samples at level $l$. Random draws $u \sim \mathrm{Uniform}(0,1)$ are passed through this inverse CDF to produce resamples.
Arguments
- samples: An MLMCSamples struct.
- qoi_index: Which QoI (row index) to resample.
- number_of_resamples: How many i.i.d. resampled values to generate.
Returns
Vector{Float64} of resampled QoI values.
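The inverse-CDF combination can be sketched on plain per-level sample vectors. This version uses a nearest-rank empirical quantile as an assumption; the interpolation rule the package uses is not specified here:

```julia
using Random

# Nearest-rank empirical quantile of a pre-sorted sample (an assumed rule).
emp_quantile(sorted, u) = sorted[clamp(ceil(Int, u * length(sorted)), 1, length(sorted))]

function gregory_resample_sketch(fine, coarse, n; rng=Random.default_rng())
    sf = [sort(v) for v in fine]          # per-level fine quantile functions
    sc = [sort(v) for v in coarse]        # per-level coarse quantile functions
    out = Vector{Float64}(undef, n)
    for i in 1:n
        u = rand(rng)                     # one uniform draw through the whole telescope
        x = emp_quantile(sf[1], u)
        for l in 2:length(fine)
            x += emp_quantile(sf[l], u) - emp_quantile(sc[l], u)
        end
        out[i] = x
    end
    return out
end

# When fine and coarse coincide at level 2, the correction cancels and the
# resamples are simply draws from the level-1 empirical distribution.
r = gregory_resample_sketch([[1.0, 2.0, 3.0], [5.0, 6.0]], [Float64[], [5.0, 6.0]], 100)
```

Note that the same u is reused at every level of one resample; this coupling is what makes the telescoping sum of quantile functions valid.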
MultilevelMonteCarlo.rank_histogram_gregory — Function
rank_histogram_gregory(observations, levels, qoi_functions,
samples_per_level, draw_parameters;
number_of_resamples=1000, parallel=false)
-> Vector{Int}

Compute a rank histogram using Gregory's MLMC inverse-CDF resampling for a given list of observations.
For each observation $y$ in observations:
- Run a fresh MLMC sampling campaign via mlmc_sample.
- Generate number_of_resamples values from the MLMC ensemble using ml_gregory_resample.
- Sort the resampled array and record the rank of $y$.
If the MLMC ensemble correctly represents the distribution of the observations, the returned ranks are approximately $\mathrm{Uniform}\{1,\ldots,N_r+1\}$ where $N_r$ = number_of_resamples.
Arguments
- observations: Vector{<:Real} of observed values to rank.
- levels: Model evaluators [L₁, …, L_L].
- qoi_functions: [q(model_output) -> scalar] — a vector with a single QoI.
- samples_per_level: Sample counts for each MLMC ensemble.
- draw_parameters: () -> params.
Keyword Arguments
- number_of_resamples::Int = 1000: Resampled values per ensemble.
- parallel::Bool = false: Thread MLMC sampling within each ensemble.
Returns
Vector{Int} of ranks, each in $\{1,\ldots,N_r+1\}$.
rank_histogram_gregory(levels, qoi_function, draw_parameters,
number_of_rank_samples, samples_per_level;
number_of_resamples=1000, parallel=false)
-> Vector{Int}

Convenience wrapper that draws number_of_rank_samples observations from the finest level and then calls the observation-based rank_histogram_gregory.
MultilevelMonteCarlo.rank_histogram_cdf — Function
rank_histogram_cdf(observations, levels, qoi_functions,
samples_per_level, cdf_method, draw_parameters;
parallel=false) -> Vector{Float64}

Compute a probability-integral-transform (PIT) histogram using an MLMC-based CDF estimate for a given list of observations.
For each observation $y$ in observations:
- Run a fresh MLMC sampling campaign.
- Estimate the CDF $\hat{F}$ from the MLMC samples using cdf_method.
- Record $\hat{F}(y)$.
If the CDF estimate is well calibrated for the observations, the returned PIT values are approximately $\mathrm{Uniform}(0,1)$.
Arguments
- observations: Vector{<:Real} of observed values to evaluate.
- levels: Model evaluators [L₁, …, L_L].
- qoi_functions: [q(model_output) -> scalar] — a vector with a single QoI.
- samples_per_level: Sample counts for each MLMC ensemble.
- cdf_method: (samples::MLMCSamples, qoi_index::Int) -> cdf_func where cdf_func(x)::Float64. Pass estimate_cdf_mlmc_kernel_density for KDE, or (s, i) -> first(estimate_cdf_maxent(s, i; R=6)) for MaxEnt.
- draw_parameters: () -> params.
Keyword Arguments
parallel::Bool = false: Thread MLMC sampling within each ensemble.
Returns
Vector{Float64} of PIT values, each ideally in $[0, 1]$.
rank_histogram_cdf(levels, qoi_function, draw_parameters,
number_of_rank_samples, samples_per_level,
cdf_method; parallel=false) -> Vector{Float64}

Convenience wrapper that draws number_of_rank_samples observations from the finest level and then calls the observation-based rank_histogram_cdf.
Multivariate Rank Histogram
MultilevelMonteCarlo.estimate_cdf_multivariate_mlmc_kernel_density — Function
estimate_cdf_multivariate_mlmc_kernel_density(samples::MLMCSamples,
qoi_indices::AbstractVector{<:Integer}; bandwidth=nothing) -> Function

Estimate the joint multivariate CDF for the specified QoIs from MLMC samples using a product of Gaussian-CDF kernels with the MLMC telescoping sum.
The estimator uses a product of univariate Gaussian CDF kernels:
\[\hat{F}(\mathbf{x}) = \frac{1}{N_1} \sum_{i=1}^{N_1} \prod_{k=1}^{d} \Phi\!\left(\frac{x_k - Q_{1,k}^{(i)}}{h_k}\right) + \sum_{l=2}^{L} \frac{1}{N_l} \sum_{i=1}^{N_l} \left[\prod_{k} \Phi\!\left(\frac{x_k - F_{l,k}^{(i)}}{h_k}\right) - \prod_{k} \Phi\!\left(\frac{x_k - C_{l,k}^{(i)}}{h_k}\right)\right]\]
where $\Phi$ is the standard normal CDF and $h_k$ is the per-dimension bandwidth (Silverman's rule by default).
Arguments
- samples: An MLMCSamples struct.
- qoi_indices: Which QoI rows to include as dimensions of the joint CDF.
Keyword Arguments
- bandwidth: Per-dimension bandwidths as a vector. If nothing, Silverman's rule is applied per dimension using finest-level fine samples.
Returns
A callable F(x::AbstractVector) -> Float64 evaluating the estimated joint CDF.
MultilevelMonteCarlo.ml_bootstrap_resample_multivariate — Function
ml_bootstrap_resample_multivariate(samples::MLMCSamples,
qoi_indices::AbstractVector{<:Integer},
number_of_resamples::Int) -> Matrix{Float64}

Generate multivariate resamples from an MLMC ensemble using a bootstrap-style telescoping sum.
For each resample, a random sample index is drawn independently at each level (shared across all QoI dimensions within that level), and the MLMC telescoping sum is formed:
\[\hat{\mathbf{x}} = \mathbf{F}_1^{(i_1)} + \sum_{l=2}^{L} \bigl[\mathbf{F}_l^{(i_l)} - \mathbf{C}_l^{(i_l)}\bigr]\]
This preserves within-sample correlations between QoI dimensions at each level.
Arguments
- samples: An MLMCSamples struct.
- qoi_indices: Which QoI rows to include as dimensions.
- number_of_resamples: How many resampled vectors to generate.
Returns
Matrix{Float64} of size (d, number_of_resamples).
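The level-wise index sharing can be sketched on plain per-level (d × N) matrices of fine and coarse QoI values; one column index is drawn per level and applied to all dimensions:

```julia
using Random

function ml_bootstrap_sketch(fine, coarse, n; rng=Random.default_rng())
    d = size(fine[1], 1)
    out = Matrix{Float64}(undef, d, n)
    for j in 1:n
        # Level-1 draw: a full d-dimensional sample vector.
        x = fine[1][:, rand(rng, 1:size(fine[1], 2))]
        for l in 2:length(fine)
            i = rand(rng, 1:size(fine[l], 2))        # index shared across dimensions
            x .+= fine[l][:, i] .- coarse[l][:, i]   # paired fine/coarse correction
        end
        out[:, j] = x
    end
    return out
end

# Toy data where every level-2 correction column equals [1, -1], so each
# resample is a level-1 column shifted by that constant vector.
fine = [[0.0 1.0; 0.0 1.0], [2.0 3.0; 2.0 3.0]]
coarse = [zeros(2, 0), [1.0 2.0; 3.0 4.0]]
out = ml_bootstrap_sketch(fine, coarse, 50)
```

Drawing one index per level (rather than per dimension) is what preserves the within-sample correlations noted above.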
MultilevelMonteCarlo.multivariate_rank_histogram — Function
multivariate_rank_histogram(observations, levels, qoi_functions,
samples_per_level, draw_parameters;
number_of_resamples=1000, cdf_method=nothing, parallel=false)
-> Vector{Float64}

Compute a multivariate rank histogram (MRH) using the MLMC-estimated multivariate CDF for a given matrix of observations.
Implements the approach of Gneiting et al. (2008): the multivariate rank of an observation $\mathbf{x}_0$ relative to an ensemble is
\[\mathrm{rank}_{\mathrm{MRH}}(\mathbf{x}_0, \hat{F}_m) = \hat{F}_{G}\bigl(G(\mathbf{x}_0)\bigr)\]
where $G(\mathbf{x}) = \hat{F}(\mathbf{x})$ is the multivariate CDF and $\hat{F}_G$ is the empirical CDF of the $G$-values over the ensemble.
For each observation column in observations:
- Run mlmc_sample to obtain MLMC samples.
- Estimate $G = \hat{F}$ via the chosen CDF method.
- Generate an ensemble via ml_bootstrap_resample_multivariate.
- Compute $g_0 = G(\mathbf{y})$ and $g_j = G(\mathbf{x}_j)$ for each ensemble member.
- Record the PIT value $\hat{F}_G(g_0) = \frac{1}{m}\#\{j : g_j \le g_0\}$.
If the MLMC ensemble is well calibrated, the returned PIT values are approximately $\mathrm{Uniform}(0,1)$.
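The final step above is just the empirical CDF of the ensemble G-values evaluated at the observation's G-value; a one-line sketch (pit_value is a hypothetical helper name, not part of the package API):

```julia
# Fraction of ensemble G-values at or below the observation's G-value.
pit_value(g0, gs) = count(g -> g <= g0, gs) / length(gs)

p = pit_value(0.4, [0.1, 0.3, 0.5, 0.7])   # 2 of 4 ensemble values ≤ 0.4
```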
Arguments
- observations: Matrix{<:Real} of size (d, n) — each column is one d-dimensional observation.
- levels: Model evaluators [L₁, …, L_L]. Each returns a d-dimensional output (or an object from which qoi_functions extract d scalars).
- qoi_functions: [q₁, …, q_d] — d QoI extractors, each qⱼ(model_output) -> scalar. Currently must be exactly 2 QoIs.
- samples_per_level: MLMC sample counts per level.
- draw_parameters: () -> params.
Keyword Arguments
- number_of_resamples::Int = 1000: Ensemble size for each rank evaluation.
- cdf_method: (samples, (j₁,j₂)) -> F̂(x₁,x₂), a callable that takes MLMCSamples and a tuple of QoI indices and returns a 2-D CDF function. Default: estimate_cdf_mlmc_kernel_density_2d. For MaxEnt pass (s, idx) -> first(estimate_cdf_maxent_2d(s, idx; R=4)).
- parallel::Bool = false: Thread MLMC sampling within each ensemble.
Returns
Vector{Float64} of PIT values, each ideally in $[0, 1]$.
multivariate_rank_histogram(levels, qoi_functions, draw_parameters,
number_of_rank_samples, samples_per_level;
number_of_resamples=1000, cdf_method=nothing, parallel=false)
-> Vector{Float64}

Convenience wrapper that draws number_of_rank_samples observations from the finest level and then calls the observation-based multivariate_rank_histogram.
Online Statistics (Internals)
MultilevelMonteCarlo.OnlineAccumulator — Type
OnlineAccumulator()

Welford's online accumulator for computing running mean and variance in a single pass without storing individual samples.
Two accumulators can be combined with merge (Chan's parallel algorithm), making this safe for threaded use with per-thread instances.
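The update and merge rules can be sketched stand-alone (with hypothetical field names; the package's struct layout may differ):

```julia
# Welford update and Chan merge, as used by online mean/variance accumulators.
mutable struct Acc
    n::Int
    mean::Float64
    m2::Float64          # sum of squared deviations from the running mean
end
Acc() = Acc(0, 0.0, 0.0)

function update_sketch!(a::Acc, x)
    a.n += 1
    δ = x - a.mean
    a.mean += δ / a.n
    a.m2 += δ * (x - a.mean)     # uses the *updated* mean
    return a
end

function merge_sketch(a::Acc, b::Acc)
    n = a.n + b.n
    n == 0 && return Acc()
    δ = b.mean - a.mean
    Acc(n, a.mean + δ * b.n / n, a.m2 + b.m2 + δ^2 * a.n * b.n / n)
end

variance(a::Acc) = a.n < 2 ? 0.0 : a.m2 / (a.n - 1)

# Merging two half-stream accumulators matches accumulating the whole stream,
# which is why per-thread instances can be combined safely.
xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
a, b, c = Acc(), Acc(), Acc()
foreach(x -> update_sketch!(a, x), xs[1:4])
foreach(x -> update_sketch!(b, x), xs[5:8])
foreach(x -> update_sketch!(c, x), xs)
m = merge_sketch(a, b)
```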
MultilevelMonteCarlo.update! — Function
Update the accumulator with a new observation (Welford's algorithm).
MultilevelMonteCarlo.get_variance — Function
Return the sample variance, or 0.0 if fewer than 2 observations.
Base.merge — Method
Merge two accumulators using Chan's parallel algorithm.
MultilevelMonteCarlo.merge! — Function
Merge src into dest in-place.
MultilevelMonteCarlo.accumulate_samples — Function
accumulate_samples(f, n; parallel=false) -> OnlineAccumulator

Call f() exactly n times and accumulate scalar return values online. When parallel=true, per-thread accumulators are merged via Chan's algorithm.

acc = accumulate_samples(1000; parallel=true) do
    my_expensive_computation()
end

MultilevelMonteCarlo.accumulate_samples! — Function
Like accumulate_samples but merges into an existing accumulator.
MultilevelMonteCarlo.accumulate_vector_samples — Function
accumulate_vector_samples(body!, n, k; parallel=false) -> Vector{OnlineAccumulator}

Execute body!(accs) exactly n times, where accs is a k-element vector of OnlineAccumulators. The caller should call update!(accs[j], val) inside body! to record observations. When parallel=true each thread gets its own accumulator vector; results are merged afterwards.

This avoids per-sample allocation — only the accumulators are mutated.

accs = accumulate_vector_samples(1000, 2; parallel=true) do accs
    x = rand()
    update!(accs[1], x)
    update!(accs[2], x^2)
end

MultilevelMonteCarlo.accumulate_vector_samples! — Function
Like accumulate_vector_samples but merges into existing accumulators.
MultilevelMonteCarlo._default_variance_qoi — Function
_default_variance_qoi(qoi_functions)

Return a callable out -> Σᵢ qᵢ(out)² used as the default scalar QoI for variance-based sample allocation when multiple QoIs are present.
MultilevelMonteCarlo._legendre_polynomials — Function
_legendre_polynomials(x, R)

Evaluate orthonormal Legendre polynomials $P_0(x), …, P_R(x)$ on $[-1,1]$ via the three-term recurrence. Returns a vector of length R+1.
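A stand-alone sketch of the recurrence, normalised so that ∫₋₁¹ φₖ² dx = 1 (i.e. φₖ = sqrt((2k+1)/2)·Pₖ; whether the package uses exactly this normalisation is an assumption):

```julia
# Standard Legendre three-term recurrence, then L²([-1,1]) normalisation.
function legendre_sketch(x, R)
    P = Vector{Float64}(undef, R + 1)
    P[1] = 1.0                     # P₀(x) = 1
    R >= 1 && (P[2] = x)           # P₁(x) = x
    for k in 1:R-1                 # (k+1)·P_{k+1} = (2k+1)·x·Pₖ − k·P_{k−1}
        P[k+2] = ((2k + 1) * x * P[k+1] - k * P[k]) / (k + 1)
    end
    return [sqrt((2k + 1) / 2) * P[k+1] for k in 0:R]
end

φ = legendre_sketch(0.5, 3)
```

At x = 0.5 the standard values are P₀ = 1, P₁ = 0.5, P₂ = −0.125, P₃ = −0.4375 before normalisation.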