gensbi.recipes.unconditional_pipeline#

Pipeline for training and using an unconditional model for simulation-based inference.

Classes#

UnconditionalPipeline

Model-agnostic unconditional pipeline parameterized by a GenerativeMethod.

Module Contents#

class gensbi.recipes.unconditional_pipeline.UnconditionalPipeline(model, train_dataset, val_dataset, dim_obs, method, ch_obs=1, params=None, training_config=None)[source]#

Bases: gensbi.recipes.pipeline.AbstractPipeline

Model-agnostic unconditional pipeline parameterized by a GenerativeMethod.

Unlike the old method-specific pipeline classes, this class works with any generative method and any user-provided model that conforms to the UnconditionalWrapper interface.

Parameters:
  • model (nnx.Module) – The model to be trained.

  • train_dataset (iterable) – Training dataset yielding x_1 batches (not tuples).

  • val_dataset (iterable) – Validation dataset.

  • dim_obs (int) – Dimension of the data space.

  • method (GenerativeMethod) – Strategy object (e.g. FlowMatchingMethod(), DiffusionEDMMethod(), ScoreMatchingMethod()).

  • ch_obs (int, optional) – Number of channels per token. Default is 1.

  • params (optional) – Model parameters (stored but not used directly).

  • training_config (dict, optional) – Training configuration.

Examples

>>> from gensbi.core import FlowMatchingMethod
>>> pipeline = UnconditionalPipeline(
...     model=my_model,
...     train_dataset=train_ds,
...     val_dataset=val_ds,
...     dim_obs=9,
...     method=FlowMatchingMethod(),
... )
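Several methods below accept a use_ema flag. As a rough stand-alone illustration of what an exponential moving average (EMA) of parameters looks like (a generic sketch, not gensbi's implementation):

```python
import numpy as np

def ema_update(ema_params, params, decay=0.99):
    """One EMA step: blend the current parameters into the running average."""
    return {k: decay * ema_params[k] + (1.0 - decay) * params[k]
            for k in params}

# Toy example: the EMA lags behind a parameter that jumps from 0 to 1.
ema = {"w": np.array(0.0)}
for _ in range(100):
    ema = ema_update(ema, {"w": np.array(1.0)}, decay=0.99)

print(float(ema["w"]))  # 1 - 0.99**100, roughly 0.634
```

Sampling and log-probability evaluation from the EMA weights is usually smoother than from the raw training weights, which is why use_ema defaults to True below.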
abstractmethod _make_model()[source]#

Create and return the model to be trained.

_wrap_model()[source]#

Wrap the model for evaluation (using the UnconditionalWrapper interface).

classmethod get_default_params(*args, **kwargs)[source]#
Abstractmethod:

Return the default model parameters for this pipeline.
get_log_prob_fn(use_ema=True, **kwargs)[source]#

Get a log-probability function.

Parameters:
  • use_ema (bool, optional) – Whether to use the EMA model. Default is True.

  • **kwargs – Forwarded to method.build_log_prob_fn.

Returns:

log_prob_fn(x_1) -> log_prob

Return type:

Callable
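The returned callable maps a batch of samples to per-sample log-densities. As a stand-in illustration of that contract (a fixed standard normal density rather than a trained model; the factory name is hypothetical):

```python
import numpy as np

def make_gaussian_log_prob_fn(dim):
    """Stand-in for get_log_prob_fn(): standard-normal log-density."""
    const = -0.5 * dim * np.log(2.0 * np.pi)
    def log_prob_fn(x_1):
        # x_1: (batch, dim) -> (batch,) log-probabilities
        return const - 0.5 * np.sum(x_1**2, axis=-1)
    return log_prob_fn

log_prob_fn = make_gaussian_log_prob_fn(dim=9)
x = np.zeros((4, 9))
print(log_prob_fn(x).shape)  # (4,)
```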

get_loss_fn()[source]#

Return the loss function for training/validation.

get_sampler(use_ema=True, **sampler_kwargs)[source]#

Get a sampler function.

Parameters:
  • use_ema (bool, optional) – Whether to use the EMA model. Default is True.

  • **sampler_kwargs – Forwarded to method.build_sampler_fn.

Returns:

sampler(key, nsamples) -> samples

Return type:

Callable
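The returned sampler takes a PRNG key and a sample count and returns an array of draws. A stand-in obeying the same signature (numpy plays the role of jax.random here; the factory name is hypothetical):

```python
import numpy as np

def make_gaussian_sampler(dim_obs, ch_obs=1):
    """Stand-in for get_sampler(): draws from a standard normal."""
    def sampler(key, nsamples):
        rng = np.random.default_rng(key)  # key plays the role of a PRNG key
        return rng.standard_normal((nsamples, dim_obs, ch_obs))
    return sampler

sampler = make_gaussian_sampler(dim_obs=9)
samples = sampler(0, nsamples=100)
print(samples.shape)  # (100, 9, 1)
```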

classmethod init_pipeline_from_config(*args, **kwargs)[source]#
Abstractmethod:

Initialize the pipeline from a configuration file.

Parameters:
  • train_dataset (iterable) – Training dataset.

  • val_dataset (iterable) – Validation dataset.

  • dim_obs (int) – Dimension of the data space.

  • config_path (str) – Path to the configuration file.

  • checkpoint_dir (str) – Directory for saving checkpoints.

Returns:

pipeline – An instance of the pipeline initialized from the configuration.

Return type:

AbstractPipeline

log_prob(x_1, use_ema=True, *, key=None, **kwargs)[source]#

Compute log-probability of x_1.

Parameters:
  • x_1 (array-like) – Data samples to evaluate.

  • use_ema (bool, optional) – Use the EMA model. Default is True.

  • key (jax.random.PRNGKey, optional) – Required when exact_divergence=False (Hutchinson).

  • **kwargs – Forwarded to get_log_prob_fn().

Returns:

Log-probabilities.

Return type:

Array
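The key is required because, with exact_divergence=False, the divergence term inside the log-probability computation is estimated stochastically via Hutchinson's trace estimator. A minimal numpy sketch of that estimator, independent of gensbi:

```python
import numpy as np

def hutchinson_trace(A, n_probes, seed=0):
    """Estimate trace(A) as E[z^T A z] using Rademacher probe vectors z."""
    rng = np.random.default_rng(seed)
    d = A.shape[0]
    z = rng.choice([-1.0, 1.0], size=(n_probes, d))
    # Per-probe quadratic form z^T A z, averaged over probes.
    return np.mean(np.einsum("ni,ij,nj->n", z, A, z))

A = np.array([[2.0, 0.1],
              [0.1, 3.0]])
est = hutchinson_trace(A, n_probes=20000)
print(est)  # close to trace(A) = 5.0
```

In the exact-divergence case no randomness is involved, which is why key may be omitted there.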

sample(key, nsamples=10000, use_ema=True, **sampler_kwargs)[source]#

Draw samples from the model.

Parameters:
  • key (jax.random.PRNGKey) – Random key.

  • nsamples (int, optional) – Number of samples. Default is 10000.

  • use_ema (bool, optional) – Use the EMA model. Default is True.

  • **sampler_kwargs – Forwarded to get_sampler().

Returns:

Samples of shape (nsamples, dim_obs, ch_obs).

Return type:

Array

abstractmethod sample_batched(*args, **kwargs)[source]#

Generate samples from the trained model in batches.

Parameters:
  • key (jax.random.PRNGKey) – Random number generator key.

  • nsamples (int) – Number of samples to generate.

  • chunk_size (int, optional) – Size of each batch for sampling. Default is 50.

  • show_progress_bars (bool, optional) – Whether to display progress bars during sampling. Default is True.

  • args (tuple) – Additional positional arguments for the sampler.

  • kwargs (dict) – Additional keyword arguments for the sampler.

Returns:

samples – Generated samples of shape (nsamples, dim_obs, ch_obs).

Return type:

array-like
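The batching itself follows a standard pattern: split nsamples into chunks of chunk_size, sample each chunk with a fresh key, and concatenate. A schematic version (pure numpy, not gensbi's implementation; the toy sampler is hypothetical):

```python
import numpy as np

def sample_batched(sampler, key, nsamples, chunk_size=50):
    """Draw nsamples in chunks of chunk_size and concatenate the results."""
    sizes = [chunk_size] * (nsamples // chunk_size)
    if nsamples % chunk_size:
        sizes.append(nsamples % chunk_size)
    # Use a distinct key per chunk so chunks are independent draws.
    chunks = [sampler(key + i, n) for i, n in enumerate(sizes)]
    return np.concatenate(chunks, axis=0)

# Toy sampler: standard-normal draws of shape (n, 9, 1).
def toy_sampler(key, n):
    return np.random.default_rng(key).standard_normal((n, 9, 1))

out = sample_batched(toy_sampler, key=0, nsamples=120, chunk_size=50)
print(out.shape)  # (120, 9, 1)
```

Chunking keeps peak memory bounded when nsamples is large, at the cost of a few extra sampler invocations.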

loss_obj[source]#

method[source]#

path[source]#