Conversation

@smejak commented Oct 21, 2025

Summary

This PR introduces GLMPPSeq, a generalized version of PPSeq that incorporates a Generalized Linear Model (GLM) for modeling time-varying background firing rates. This extends the original PPSeq framework to handle non-stationary neural data where background activity varies over time.

Key Features

GLMPPSeq Model:

  • Time-varying background: Models background firing rates using a GLM with RBF (Radial Basis Function) covariates, replacing the constant background rate in vanilla PPSeq
  • Flexible covariate system: Uses smooth temporal basis functions to capture slow changes in population activity
  • Sparsity regularization: Supports L1 penalties on both sequence amplitudes (l1_amp) and template neuron participation (l1)
  • Configurable RBF basis: the rbf_width parameter controls the temporal smoothness of the background (a scale factor relative to the basis spacing, default 0.2); see the sketch after this list
  • Efficient Newton-Raphson updates: Vectorized GLM parameter optimization with numerical stability safeguards
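For reference, a minimal sketch of how Gaussian RBF covariates over time could be built. The helper name make_rbf_covariates, the number of basis functions, and the exact normalization are illustrative assumptions, not the implementation in this PR:

```python
import torch

def make_rbf_covariates(T, num_bases=10, rbf_width=0.2, device="cpu"):
    """Build a (T, num_bases) matrix of Gaussian RBF covariates over time bins.

    Centers are evenly spaced over [0, T]; the kernel width is rbf_width times
    the spacing between centers (the "scale factor relative to basis spacing"
    described above).
    """
    t = torch.arange(T, dtype=torch.float32, device=device)          # time-bin axis
    centers = torch.linspace(0.0, float(T), num_bases, device=device)
    spacing = T / max(num_bases - 1, 1)                              # distance between centers
    width = rbf_width * spacing                                      # Gaussian std. dev.
    # phi[t, k] = exp(-(t - c_k)^2 / (2 * width^2))
    return torch.exp(-0.5 * ((t[:, None] - centers[None, :]) / width) ** 2)
```

Under a Poisson GLM with a log link (an assumption about the link function), the background rate for neuron n at time bin t would then be exp(phi[t] @ beta[n]).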

Implementation Details

  • EM algorithm with three update steps (the background step is sketched after this list):

    1. E-step: Update sequence amplitudes given current parameters
    2. M-step (background): Update GLM coefficients via Newton-Raphson
    3. M-step (templates): Update Gaussian template parameters via moment matching
  • Proper GLM initialization in initialize_random() and initialize_default()

  • Consistent use of phi for covariates throughout the class

  • Device-aware tensor operations for GPU acceleration
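For concreteness, here is a self-contained sketch of the background M-step for one neuron, i.e. a damped Newton-Raphson update for a Poisson GLM with a log link. The function name, the ridge term, and the exact clipping values are assumptions that mirror the safeguards described above; this is not the code in the PR:

```python
import torch

def newton_glm_step(beta, phi, counts, step_size=0.1, clip=10.0):
    """One damped Newton-Raphson step for a Poisson GLM with rate = exp(phi @ beta).

    beta:   (B,)   GLM coefficients for one neuron
    phi:    (T, B) RBF covariate matrix
    counts: (T,)   background spike counts for this neuron
    """
    eta = torch.clamp(phi @ beta, max=clip)                  # linear predictor, clipped for stability
    rate = torch.exp(eta)                                    # Poisson mean per time bin
    grad = phi.T @ (counts - rate)                           # gradient of the log-likelihood
    hess = phi.T @ (rate[:, None] * phi)                     # Fisher information (negative Hessian)
    hess = hess + 1e-6 * torch.eye(beta.shape[0], device=beta.device)  # small ridge for stability
    delta = torch.linalg.solve(hess, grad)                   # Newton direction
    delta = torch.clamp(delta, min=-clip, max=clip)          # guard against huge steps
    if not torch.isfinite(delta).all():
        return beta                                          # bail out on NaN/Inf, as in the PR
    return beta + step_size * delta                          # damped update (cf. the 0.1 factor below)
```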

Backward Compatibility

  • PPSeq and CAVI classes remain unchanged
  • New model is in separate GLMPPSeq class to avoid breaking existing code

Testing Recommendations

After merge, users should verify the following (a minimal usage sketch is included after the list):

  • Model initializes without errors across different parameter settings
  • fit() runs successfully on both CPU and GPU
  • GLM background rates properly capture temporal dynamics
  • L1 regularization produces expected sparsity
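A minimal usage sketch for the CPU/GPU check; the constructor arguments and import path below are hypothetical and should be adjusted to the actual GLMPPSeq signature:

```python
import torch
# from ppseq.model import GLMPPSeq   # hypothetical import path; adjust to the package layout

devices = ["cpu"] + (["cuda"] if torch.cuda.is_available() else [])
for device in devices:
    model = GLMPPSeq(
        num_neurons=100,     # hypothetical arguments; check the GLMPPSeq docstring
        num_templates=3,
        rbf_width=0.2,       # temporal smoothness of the GLM background
        l1=0.1,              # sparsity on template neuron participation
        l1_amp=0.1,          # sparsity on sequence amplitudes
        device=device,
    )
    model.fit(spikes)        # `spikes`: your spike data in the format PPSeq expects
    print(device, "ok")
```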

@xanderladd left a comment

this looks good.

a few things to consider going forward:

  • are we clipping eta (line 129) and delta (line 270) too aggressively? too leniently? we can set these thresholds empirically
  • same as above for scales = torch.relu(scales - self.l1)
  • if we add inference mode, T_for_rbf will be annoying - the RBF basis won't have the correct time axis
  • do we ever want to add an RBF basis with a different lengthscale / width? (one option sketched below)
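re: the last point - if we ever want mixed lengthscales, one lightweight option is to concatenate bases built at different widths (a sketch, reusing the hypothetical make_rbf_covariates helper from the description above):

```python
import torch

# sketch: multi-scale background by stacking RBF bases with different widths
# (reuses the hypothetical make_rbf_covariates() sketched in the description)
phi_slow = make_rbf_covariates(T=5000, num_bases=5, rbf_width=0.5)
phi_fast = make_rbf_covariates(T=5000, num_bases=20, rbf_width=0.2)
phi = torch.cat([phi_slow, phi_fast], dim=1)   # (5000, 25) covariate matrix
```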

print("Warning: NaN or Inf detected in GLM update, stopping iteration")
break

self.beta = self.beta + 0.1 * delta

we should allow this 0.1 to be configured - it sets the step size for the Newton updates (sketch below)
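something like this, so the damping becomes a hyperparameter rather than a constant (the name newton_step_size is just a suggestion, not existing code):

```python
def glm_newton_update(beta, delta, step_size=0.1):
    """Damped Newton update with a configurable step size (sketch).

    Exposing step_size (default 0.1) as a model hyperparameter lets users
    tune the damping instead of relying on the hard-coded 0.1.
    """
    return beta + step_size * delta
```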

```python
N = self.num_neurons

# Store T for RBF basis functions
self.T_for_rbf = T
```

If there is a better way to handle T for the RBF basis, it would be nice - if you want to do inference, it needs to be set again, which can cause some issues... the model isn't currently set up for inference, but we can maybe implement this in a cleaner way next time.
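one possible pattern, sketched below with the hypothetical make_rbf_covariates helper from the description: rebuild the covariates from an explicit time axis whenever one is passed, and only fall back to the cached T_for_rbf otherwise.

```python
def background_covariates(model, T=None):
    """Sketch: rebuild the RBF covariates for an explicit time axis T (e.g. for
    inference on a recording of a different length), falling back to the T cached
    during fitting. model.num_bases, model.rbf_width, model.device, and
    model.T_for_rbf are assumed attribute names, not the actual API.
    """
    T = T if T is not None else model.T_for_rbf
    return make_rbf_covariates(T, num_bases=model.num_bases,
                               rbf_width=model.rbf_width, device=model.device)
```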
