Draft
Changes from all commits
90 commits
ae4edb9
Rename Stan problems directory to remove spaces
alecksphillips Apr 29, 2022
7cc36e3
Made a start on calling redding-stan to evaluate Stan models
alecksphillips Apr 29, 2022
ca9e100
Add Reversible-Jump MCMC (RJMCMC)
alecksphillips May 3, 2022
2c8614b
Change prior on mixture components for RJMCMC example
alecksphillips May 26, 2022
0c4cd82
Moved random walk proposal to separate function and added NUTS as alt…
PhilClemson Aug 11, 2022
4696430
Rename Stan problems directory to remove spaces
alecksphillips Apr 29, 2022
3d4e9ac
Made a start on calling redding-stan to evaluate Stan models
alecksphillips Apr 29, 2022
f18bcee
Add Reversible-Jump MCMC (RJMCMC)
alecksphillips May 3, 2022
38841fb
Change prior on mixture components for RJMCMC example
alecksphillips May 26, 2022
1949a6d
Moved random walk proposal to separate function and added NUTS as alt…
PhilClemson Aug 11, 2022
cf434fc
Started replacing redding-stan with bridge-stan (currently broken)
alecksphillips Sep 20, 2022
52532f1
Update bridgestan submodule
alecksphillips Oct 3, 2022
9367840
Fix evaluating with bridgestan
alecksphillips Oct 4, 2022
76feb91
Fix flake errors
alecksphillips Oct 4, 2022
6c73782
feat: get num. parameters from bridgestan
alecksphillips Oct 13, 2022
95b29fe
style: fix flake8 errors
alecksphillips Oct 13, 2022
c9c3fc2
Added grid search and fixed rjmcmc algo
PhilClemson Oct 17, 2022
5c058eb
Adding changes that should have been in c9c3fc2
PhilClemson Oct 17, 2022
0dc4875
Merged calling-redding-stan into bridge-stan
PhilClemson Oct 18, 2022
30677cd
Updated RJMCMC examples
PhilClemson Oct 20, 2022
d7b1129
Added step size adaptation to NUTS
PhilClemson Nov 3, 2022
b055a02
Restructured continuous variable proposals
PhilClemson Nov 4, 2022
85716a4
Added RJMCMC inside SMC
PhilClemson Nov 17, 2022
c77b6af
chore: add bridgestan to requirements.txt
alecksphillips Nov 18, 2022
4ea14a5
refactor: move stan models to examples/
alecksphillips Nov 18, 2022
6ab2d49
chore: fix bridgestan install cmd in requirements.txt
alecksphillips Nov 18, 2022
3ed892d
chore: add cmdstanpy to requirements.txt
alecksphillips Nov 18, 2022
071ef9e
refactor: remove cmdstan submodule, add cmdstanpy as requirement
alecksphillips Nov 18, 2022
a528c27
docs: Add cmdstan installation instructions to README
alecksphillips Nov 18, 2022
cba365b
fix: bridgestan submodule
alecksphillips Nov 18, 2022
791b0e6
chore: Update github workflow to accomodate bridgestan
alecksphillips Nov 18, 2022
07074aa
Add bridgestan installation instructions to README
alecksphillips Nov 18, 2022
4ae203f
Updated examples and minor fixes
PhilClemson Nov 22, 2022
c4f6691
Resolved flake8 issues
PhilClemson Nov 22, 2022
5256896
Undoing changes to files accidentally staged
PhilClemson Nov 22, 2022
dd7afee
chore: update bridgestan to latest version
alecksphillips Dec 1, 2022
9cd0a9f
chore: remove plotting from stan_model_example.py
alecksphillips Dec 1, 2022
2afc332
chore: update to bridgestan v1.0.1
alecksphillips Feb 8, 2023
9935fc8
fix: np.unique on ragged arrays is deprecated
alecksphillips Feb 1, 2023
92bcd57
Merge branch 'main' into bridge-stan_refactor
alecksphillips Jul 24, 2023
6c4392d
refactor: (WIP) refactoring bridgestan branch to match work in main
alecksphillips Jul 24, 2023
45e3ffa
fix: fix call to rng.randomMvNormal in continuous_samplers
alecksphillips Jul 24, 2023
c67d2eb
fix: base type proposal rng
alecksphillips Jul 25, 2023
5ac87c5
chore: rename tests
alecksphillips Jul 25, 2023
db26632
fix: base initial proposal
alecksphillips Jul 25, 2023
c57bd17
test: add tests
alecksphillips Jul 25, 2023
fc87831
style: long import line
alecksphillips Jul 25, 2023
657e463
tests: add tests/test_random.py
alecksphillips Jul 26, 2023
20f3e61
tests: add test_variable_size_redistribution
alecksphillips Jul 26, 2023
3e97327
tests: add dt metric tests
alecksphillips Jul 26, 2023
2bd467f
fix: additive structure proposal heuristic
alecksphillips Jul 26, 2023
c2ffc3b
tests: add tests for additive structure proposal
alecksphillips Jul 26, 2023
b345f9c
fix: bell number calculation
alecksphillips Jul 26, 2023
b0623f4
tests: add tests for additive structure numbers
alecksphillips Jul 26, 2023
22015e7
fix: AdditiveStructureTarget.evaluatePrior
alecksphillips Jul 27, 2023
f1242ad
tests: add unit tests for additive structure
alecksphillips Jul 27, 2023
2783319
fix: executor rank
alecksphillips Jul 27, 2023
a790362
tests: add tests for executor_MPI
alecksphillips Jul 27, 2023
95306d2
test: add test for generic encode/decode
alecksphillips Jul 27, 2023
14d02c9
test: fix test_smc
alecksphillips Jul 27, 2023
b40690a
test: add tests for executor MPI
alecksphillips Jul 27, 2023
839e762
test: fix test for executor_MPI.cumsum
alecksphillips Jul 27, 2023
2716986
test: change test case for Executor_MPI.redistribute
alecksphillips Jul 27, 2023
d22e178
fix: DiscreteVariable decode
alecksphillips Jul 28, 2023
81614f1
fix: tree equality
alecksphillips Jul 28, 2023
8e2447f
test: add tests for decision tree + distributions
alecksphillips Jul 28, 2023
95ab31d
test: add tests for optimal L
alecksphillips Jul 28, 2023
4277534
test: add tests for check_stability
alecksphillips Jul 28, 2023
8dbde0a
test: mark executor_MPI tests
alecksphillips Jul 28, 2023
a89208a
test: skip parallel optimal L
alecksphillips Jul 29, 2023
76dc41e
test: fix numerical issues in tests/test_smc_components.py
alecksphillips Jul 29, 2023
3cbaf57
chore: add coverage to workflow
alecksphillips Jul 28, 2023
a21734f
fix: rjmcmc example
alecksphillips Aug 8, 2023
c314299
chore: remove bridgestan submodule
alecksphillips Aug 9, 2023
9fedf78
Merge branch 'tests/coverage' into bridge-stan-test-rebase
alecksphillips Aug 9, 2023
e6bcc89
fix: stan model example
alecksphillips Sep 19, 2023
708745c
feat: add randomNormal to RNG
alecksphillips Sep 19, 2023
dc388a8
fix: sample_offsets
alecksphillips Sep 19, 2023
4f7f20b
fix: random numbers in continuous_samplers with RNG
alecksphillips Sep 19, 2023
2f78fde
fix: sonar_example
alecksphillips Sep 20, 2023
82286f3
fix: sonar_example_smc
alecksphillips Sep 20, 2023
607ec57
refactor: move reversible_jump.py to base
alecksphillips Sep 20, 2023
af9824a
refactor: rename stan_model -> StanModel
alecksphillips Sep 20, 2023
2dbb649
refactor: restructuring continuous samplers (WIP)
alecksphillips Sep 21, 2023
f3d3fd7
Merge branch 'main' into refactor/bridge-stan-w-tests
alecksphillips Sep 25, 2023
5074660
chore: tidy up imports in examples
alecksphillips Sep 25, 2023
7257ba9
chore: tidy up imports in examples
alecksphillips Sep 25, 2023
8ea18ca
Merge branch 'main' into refactor/bridge-stan-w-tests
alecksphillips Oct 5, 2023
197cf7d
fix: update linear_array stan model
alecksphillips Oct 5, 2023
9ed27a4
fix: NUTS and RandomWalk inputs to RJMCMC
alecksphillips Oct 5, 2023
74 changes: 74 additions & 0 deletions README.md
@@ -19,6 +19,80 @@ Copyright (c) 2023, University of Liverpool.
- pandas
- scipy
- scikit-learn (for examples)
- [bridgestan](https://github.com/roualdes/bridgestan) (for models with continuous parameters specified with Stan)
- cmdstanpy (for models with continuous parameters specified with Stan)

## Continuous Parameters and Stan
For models with continuous parameters, the continuous model conditional on the discrete
parameters can be specified as a Stan model.
Log density and gradient information are extracted from these models using
[bridgestan](https://github.com/roualdes/bridgestan).
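As a rough illustration of what this looks like in Python, the sketch below queries a model's log density and gradient through bridgestan. The file paths are placeholders, and the calls shown (`StanModel`, `param_unc_num`, `log_density_gradient`) follow bridgestan's Python interface, whose details may differ between versions:

```python
import numpy as np
import bridgestan

# Placeholder paths: a bridgestan-compiled model library and its JSON data file
model = bridgestan.StanModel("linear_array_model.so", "data.json")

n = model.param_unc_num()                     # number of unconstrained parameters
theta = np.zeros(n)                           # a point in unconstrained space
lp, grad = model.log_density_gradient(theta)  # log density and gradient at theta
```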



## Installation

Install with pip:
```bash
pip install git+https://github.com/DiscreteVariablesTaskForce/DiscreteSamplingFramework.git
```

or

```bash
git clone --recurse-submodules https://github.com/DiscreteVariablesTaskForce/DiscreteSamplingFramework.git
pip install -e DiscreteSamplingFramework
```

### bridgestan

[bridgestan](https://github.com/roualdes/bridgestan) does not need a conventional install; it only needs to be discoverable on disk.
It is included as a submodule of this repo, so if you installed by cloning the repo (with `--recurse-submodules`)
you can point an environment variable at the submodule:
```bash
export BRIDGESTAN=<path/to/this/repo/>/bridgestan
```


Alternatively, you can clone bridgestan elsewhere and set an environment variable:
```bash
git clone https://github.com/roualdes/bridgestan.git <path/to/bridgestan>
export BRIDGESTAN=<path/to/bridgestan>
```

If this environment variable is not set, discretesampling will attempt to set it to
the bridgestan submodule.


### Cmdstan
[cmdstan](https://github.com/stan-dev/cmdstan) is also required in order to query Stan models through bridgestan.

#### Installation with cmdstanpy
The simplest way to install cmdstan is using cmdstanpy:
```python
import cmdstanpy
cmdstanpy.install_cmdstan()
```
which downloads and builds the latest version of cmdstan under `~/.cmdstan` in your
home directory and sets the `CMDSTAN` environment variable to that location.
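As a quick sanity check, cmdstanpy can report the cmdstan installation it has resolved (`cmdstan_path` is part of cmdstanpy's public API):

```python
import cmdstanpy
print(cmdstanpy.cmdstan_path())  # raises an error if no cmdstan installation can be located
```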

#### Manual installation
Alternatively, cmdstan can be installed manually by following the instructions for [installing with conda](https://mc-stan.org/docs/cmdstan-guide/cmdstan-installation.html#conda-install) or [building from source](https://mc-stan.org/docs/cmdstan-guide/cmdstan-installation.html#installation-from-github).

The `CMDSTAN` environment variable should then be set:
```bash
export CMDSTAN=path/to/cmdstan
```
or with cmdstanpy:
```python
import cmdstanpy
cmdstanpy.set_cmdstan_path('path/to/cmdstan')
```

#### Windows
On Windows the cmdstan installation has an additional step to link the TBB libraries, and this step does not always complete correctly. If it fails, bridgestan may later report that it cannot find the model `.so` file or one of its dependencies. The fix is to copy `tbb.dll` from `cmdstan\stan\lib\stan_math\lib\tbb` into the folder containing the compiled Stan model.
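For example, from a Git Bash or similar shell (both paths are placeholders for wherever cmdstan and the model actually live):

```bash
cp /path/to/cmdstan/stan/lib/stan_math/lib/tbb/tbb.dll /path/to/stan/model/dir/
```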


### Cloning and installing from github

306 changes: 306 additions & 0 deletions discretesampling/base/algorithms/continuous/NUTS.py
@@ -0,0 +1,306 @@
import copy
import numpy as np
from scipy.stats import multivariate_normal
from discretesampling.base.algorithms.continuous.base import ContinuousSampler

MAX_TREEDEPTH = 10


class NUTS(ContinuousSampler):
# We need to adapt the NUTS parameters before we sample from a specific model as defined by the discrete parameters.
# Each time we jump to a new model, initialise NUTS to adapt the mass matrix and step size parameters and then store these
# for later use.
def __init__(self, do_warmup, model, data_function, rng, adapt_delta=0.9, warmup_iters=100):
self.NUTS_params = {}
self.current_stepsize = None
self.do_warmup = do_warmup
self.stan_model = model
self.data_function = data_function
self.rng = rng
self.delta = adapt_delta
self.warmup_iters = warmup_iters

# Perform a single leapfrog step
def Leapfrog(self, current_data, theta, r, epsilon):
[L, dtheta] = self.stan_model.eval(current_data, theta)
r1 = r + epsilon*np.array(dtheta)/2.0
theta1 = theta + epsilon*r1  # position takes a full step using the half-updated momentum
[L1, dtheta_list] = self.stan_model.eval(current_data, theta1)
r1 = r1 + epsilon*np.array(dtheta_list)/2.0

return theta1, r1, L1, L

# Recursive part of NUTS algorithm which decides direction of tree growth
def BuildTree(self, current_data, theta, r, u, v, j, epsilon, treedepth, init_energy, init_potential, M):
treedepth += 1
if treedepth > MAX_TREEDEPTH:
print("max tree depth exceeded")
return 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1
Delta_max = 1000
n1 = 0
s1 = 0
if j == 0:
# take one leapfrog step
[theta1, r1, L1] = self.Leapfrog(current_data, theta, r, v*epsilon)[0:3]
energy1 = L1 - 0.5 * np.dot(np.linalg.solve(M, r1), r1) - init_potential
if u <= np.exp(energy1):
n1 = 1
if u == 0:
# stop log(0) error
s1 = 1
else:
if energy1 > (np.log(u) - Delta_max):
s1 = 1
theta_n = theta1
r_n = r1
theta_p = theta1
r_p = r1
alpha1 = np.exp(energy1 - init_energy)
if alpha1 > 1:
alpha1 = 1
n_alpha1 = 1
else:
# build the left and right subtrees using recursion
[theta_n, r_n, theta_p, r_p, theta1, n1, s1, alpha1, n_alpha1, r1, depth_exceeded] = \
self.BuildTree(current_data, theta, r, u, v, j-1, epsilon,
treedepth, init_energy, init_potential, M)
if depth_exceeded == 1:
return 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1
if s1 == 1:
if v < 0:
[theta_n, r_n, _, _, theta2, n2, s2, alpha2, n_alpha2, r2, depth_exceeded] = \
self.BuildTree(current_data, theta_n, r_n, u, v, j-1, epsilon,
treedepth, init_energy, init_potential, M)
else:
[_, _, theta_p, r_p, theta2, n2, s2, alpha2, n_alpha2, r2, depth_exceeded] = \
self.BuildTree(current_data, theta_p, r_p, u, v, j-1, epsilon,
treedepth, init_energy, init_potential, M)
if depth_exceeded == 1:
return 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1
if n1 != 0 and n2 != 0:
u1 = self.rng.uniform(0, 1)
if u1 < n2/(n1+n2):
theta1 = theta2
r1 = r2
alpha1 = alpha1 + alpha2
n_alpha1 = n_alpha1 + n_alpha2
arg = theta_p - theta_n
I1 = 0
if np.dot(arg, r_n) >= 0:
I1 = 1
I2 = 0
if np.dot(arg, r_p) >= 0:
I2 = 1
s1 = s2*I1*I2
n1 = n1 + n2

return theta_n, r_n, theta_p, r_p, theta1, n1, s1, alpha1, n_alpha1, r1, 0

def NUTS(self, current_continuous, current_discrete, M, epsilon):
current_data = self.data_function(current_discrete)
param_length = self.stan_model.num_unconstrained_parameters(current_data)

# calculate likelihood for starting state
L = self.stan_model.eval(current_data, current_continuous[0:param_length])[0]

# randomly sample momenta (no mass-matrix adaptation is implemented, so M is the identity)
r0 = self.rng.randomMvNormal(np.zeros([param_length]), np.identity(param_length))

# calculate energy
init_energy = -0.5 * np.dot(np.linalg.solve(M, r0), r0)
# more numerically stable if the energy is normalised relative to the initial
# logp value (which then needs to be passed into BuildTree());
# note there should also be a "- np.log(np.linalg.det(M))" term here, but
# it's constant and so doesn't affect the algorithm
u = self.rng.uniform(0, np.exp(init_energy))
theta_n = copy.deepcopy(current_continuous[0:param_length])
theta_p = copy.deepcopy(current_continuous[0:param_length])
p_params = copy.deepcopy(current_continuous)
r_n = r0
r_p = r0
j = 0
s = 1
n = 1
treedepth = 0

# start building tree
while s == 1:
v_j = self.rng.uniform(-1, 1)
if v_j < 0:
[theta_n, r_n, _, _, theta1, n1, s1, alpha, n_alpha, r1, depth_exceeded] = \
self.BuildTree(current_data, theta_n, r_n, u, v_j, j, epsilon, treedepth, init_energy, L, M)
else:
[_, _, theta_p, r_p, theta1, n1, s1, alpha, n_alpha, r1, depth_exceeded] = \
self.BuildTree(current_data, theta_p, r_p, u, v_j, j, epsilon, treedepth, init_energy, L, M)
if depth_exceeded == 1:
# max tree depth exceeded, restart from beginning
r0 = self.rng.randomMvNormal(np.zeros([param_length]), np.identity(param_length))
init_energy = -0.5 * np.dot(np.linalg.solve(M, r0), r0)
u = self.rng.uniform(0, np.exp(init_energy))
theta_n = copy.deepcopy(current_continuous[0:param_length])
theta_p = copy.deepcopy(current_continuous[0:param_length])
p_params = copy.deepcopy(current_continuous)
r_n = r0
r_p = r0
j = 0
s = 1
n = 1
treedepth = 0
else:
if s1 == 1:
u1 = self.rng.uniform(0, 1)
if n1/n > u1:
p_params[0:param_length] = theta1
n += n1
u1 = 0
if np.dot((theta_p - theta_n), r_n) >= 0:
u1 = 1
u2 = 0
if np.dot((theta_p - theta_n), r_p) >= 0:
u2 = 1
s = s1*u1*u2
j += 1

return p_params, r0, r1, alpha, n_alpha

def FindReasonableEpsilon(self, current_continuous, current_discrete):
epsilon = 1
current_data = self.data_function(current_discrete)
param_length = self.stan_model.num_unconstrained_parameters(current_data)

# randomly sample momenta
r = self.rng.randomMvNormal(np.zeros([param_length]), np.identity(param_length))

[_, _, L1, L] = self.Leapfrog(current_data, current_continuous[0:param_length], r, epsilon)

# work out whether we need to increase or decrease step size
u = 0
if (L1 - L) > np.log(0.5):
u = 1
a = 2*u - 1

# iteratively adjust step size until the acceptance probability is approximately 0.5
while a*(L1 - L) > -a*np.log(2):
epsilon = (2**a)*epsilon
theta1 = self.Leapfrog(current_data, current_continuous[0:param_length], r, epsilon)[0]
L1 = self.stan_model.eval(current_data, theta1)[0]

return epsilon

def adapt(self, current_continuous, current_discrete, log_epsilon, log_epsilon_bar, M, H_bar, n):
mu = np.log(10) + log_epsilon
gamma = 0.05
t0 = 10
kappa = 0.75
[proposed_continuous, r0, r1, alpha, n_alpha] = self.NUTS(current_continuous, current_discrete, M,
np.exp(log_epsilon))
H_bar = (1 - 1 / (n + t0)) * H_bar + (self.delta - alpha / n_alpha) / (n + t0)
log_epsilon = mu - np.sqrt(n) * H_bar / gamma
n_scaled = n ** (-kappa)
log_epsilon_bar = n_scaled * log_epsilon + (1 - n_scaled) * log_epsilon_bar

return proposed_continuous, r0, r1, log_epsilon, log_epsilon_bar, H_bar

def warmup(self, current_continuous, current_discrete, log_epsilon, M):
# Step 2: Adapt step size using dual averaging
H_bar = 0
log_epsilon_bar = 0
proposed_continuous = copy.deepcopy(current_continuous)
for n in range(1, self.warmup_iters + 1):  # run exactly warmup_iters adaptation steps (and at least one)
[proposed_continuous, r0, r1, log_epsilon, log_epsilon_bar, H_bar] = self.adapt(proposed_continuous,
current_discrete, log_epsilon,
log_epsilon_bar, M, H_bar, n)

return proposed_continuous, r0, r1, log_epsilon_bar

def continual_adaptation(self, current_continuous, current_discrete, log_epsilon, log_epsilon_bar, M, H_bar, n):
[proposed_continuous, r0, r1, log_epsilon, log_epsilon_bar, H_bar] = self.adapt(current_continuous, current_discrete,
log_epsilon, log_epsilon_bar, M,
H_bar, n)

return proposed_continuous, r0, r1, log_epsilon, log_epsilon_bar, H_bar

def init_NUTS(self, current_continuous, current_discrete):
current_data = self.data_function(current_discrete)
param_length = self.stan_model.num_unconstrained_parameters(current_data)
M = np.identity(param_length) # currently no mass matrix adaptation (assume identity matrix)

# Step 1: Get initial estimate for optimal step size
if self.current_stepsize is None:
log_epsilon = np.log(self.FindReasonableEpsilon(current_continuous, current_discrete))
else:
log_epsilon = np.log(self.current_stepsize)

if self.do_warmup:
# adapt parameters in one go so no need to save the dual averaging variables
[proposed_continuous, r0, r1, log_epsilon_bar] = self.warmup(current_continuous, current_discrete, log_epsilon, M)
if hasattr(current_discrete.value, "__iter__"):
self.NUTS_params[','.join(current_discrete.value)] = (M, np.exp(log_epsilon_bar))
else:
self.NUTS_params[str(current_discrete.value)] = (M, np.exp(log_epsilon_bar))
else:
# do one iteration of adaptation
H_bar = 0
log_epsilon_bar = 0
n = 1
[proposed_continuous, r0, r1, log_epsilon, log_epsilon_bar, H_bar] = \
self.continual_adaptation(current_continuous, current_discrete, log_epsilon, log_epsilon_bar, M, H_bar, n)
if hasattr(current_discrete.value, "__iter__"):
self.NUTS_params[','.join(current_discrete.value)] = (M, np.exp(log_epsilon_bar), log_epsilon, H_bar, n + 1)
else:
self.NUTS_params[str(current_discrete.value)] = (M, np.exp(log_epsilon_bar), log_epsilon, H_bar, n + 1)

# save step size for initialising other parameter spaces
self.current_stepsize = np.exp(log_epsilon_bar)

return proposed_continuous, r0, r1

def sample(self, current_continuous, current_discrete):
# build the lookup key the same way init_NUTS stores it (iterable values are comma-joined)
if hasattr(current_discrete.value, "__iter__"):
key = ','.join(current_discrete.value)
else:
key = str(current_discrete.value)
if key in self.NUTS_params:
if self.do_warmup:
# warmup has already been run for this model; reuse the stored mass matrix and step size
if hasattr(current_discrete.value, "__iter__"):
[M, epsilon] = self.NUTS_params[','.join(current_discrete.value)]
else:
[M, epsilon] = self.NUTS_params[str(current_discrete.value)]
[proposed_continuous, r0, r1] = self.NUTS(current_continuous, current_discrete, M, epsilon)[0:3]
else:
# continually optimise NUTS (e.g. for SMC)
if hasattr(current_discrete.value, "__iter__"):
[M, epsilon, log_epsilon, H_bar, n] = self.NUTS_params[','.join(current_discrete.value)]
[proposed_continuous, r0, r1, log_epsilon, log_epsilon_bar, H_bar] = \
self.continual_adaptation(current_continuous, current_discrete,
log_epsilon, np.log(epsilon), M, H_bar, n)
self.NUTS_params[','.join(current_discrete.value)] = (M, np.exp(log_epsilon_bar),
log_epsilon, H_bar, n + 1)
else:
[M, epsilon, log_epsilon, H_bar, n] = self.NUTS_params[str(current_discrete.value)]
[proposed_continuous, r0, r1, log_epsilon, log_epsilon_bar, H_bar] = \
self.continual_adaptation(current_continuous, current_discrete,
log_epsilon, np.log(epsilon), M, H_bar, n)
self.NUTS_params[str(current_discrete.value)] = (M, np.exp(log_epsilon_bar),
log_epsilon, H_bar, n + 1)
else:
[proposed_continuous, r0, r1] = self.init_NUTS(current_continuous, current_discrete)

return proposed_continuous, r0, r1

def eval(self, current_continuous, current_discrete, proposed_continuous, current_r, proposed_r):
# pull mass matrix from the NUTS parameters (needed to calculate kinetic energy)
if hasattr(current_discrete.value, "__iter__"):
M = self.NUTS_params[','.join(current_discrete.value)][0]
else:
M = self.NUTS_params[str(current_discrete.value)][0]
current_data = self.data_function(current_discrete)
param_length = self.stan_model.num_unconstrained_parameters(current_data)
init_L = self.stan_model.eval(current_data, current_continuous[0:param_length])[0]
L = self.stan_model.eval(current_data, proposed_continuous[0:param_length])[0]

log_acceptance_ratio = L - 0.5 * np.dot(np.linalg.solve(M, proposed_r), proposed_r) \
- init_L + 0.5 * np.dot(np.linalg.solve(M, current_r), current_r)

if log_acceptance_ratio > 0:
log_acceptance_ratio = 0

return log_acceptance_ratio
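For orientation (this is not part of the PR), a minimal sketch of how the sampler's interface fits together, using hypothetical stand-ins for the model, RNG, and discrete-variable objects. The only assumptions are the methods the class above actually calls (`eval`, `num_unconstrained_parameters`, `randomMvNormal`, `uniform`, and a `.value` attribute):

```python
import numpy as np
from discretesampling.base.algorithms.continuous.NUTS import NUTS

class GaussianModel:
    """Stand-in for the StanModel wrapper: eval returns (log density, gradient)."""
    def eval(self, data, theta):
        theta = np.asarray(theta, dtype=float)
        return -0.5 * float(theta @ theta), (-theta).tolist()

    def num_unconstrained_parameters(self, data):
        return 2

class SimpleRNG:
    """Stand-in exposing the two RNG methods NUTS uses."""
    def __init__(self, seed=0):
        self._rng = np.random.default_rng(seed)

    def uniform(self, low, high):
        return self._rng.uniform(low, high)

    def randomMvNormal(self, mean, cov):
        return self._rng.multivariate_normal(mean, cov)

class DummyDiscrete:
    value = 3  # non-iterable, so the parameter cache is keyed on str(3)

sampler = NUTS(do_warmup=True, model=GaussianModel(),
               data_function=lambda discrete: None, rng=SimpleRNG(),
               warmup_iters=20)
theta, r0, r1 = sampler.sample(np.zeros(2), DummyDiscrete())
print("proposed:", theta)
```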