Add GLMPPSeq: PPSeq with GLM Background Model #5
Conversation
add PPSEQ GLM file and IBL notebook
add basis fxn + empirical GLM, test both
xanderladd left a comment
this looks good.
a few things to consider going forward:
- are we clipping eta (line 129) and delta (line 270) too aggressively? too leniently? we can set these thresholds empirically (see the sketch after this list)
- same as above for scales = torch.relu(scales - self.l1)
- if we add inference mode, T_for_RBF will be annoying - the RBF basis won't have the correct time axis
- do we ever want to add an RBF basis with different lengthscales / widths
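A minimal sketch of how those thresholds could be exposed as configurable arguments instead of hard-coded values; the names eta_clip, delta_clip, and the default bounds shown are assumptions for illustration, not values from this PR:

```python
import torch

def apply_thresholds(eta, delta, scales, eta_clip=(-10.0, 10.0), delta_clip=5.0, l1=0.01):
    # Clip the linear predictor so exp(eta) cannot overflow.
    eta = torch.clamp(eta, eta_clip[0], eta_clip[1])
    # Cap the norm of the Newton step rather than clipping element-wise.
    step_norm = torch.linalg.norm(delta)
    if step_norm > delta_clip:
        delta = delta * (delta_clip / step_norm)
    # Same soft-threshold as scales = torch.relu(scales - self.l1), but with a tunable level.
    scales = torch.relu(scales - l1)
    return eta, delta, scales
```

With the thresholds pulled out like this, they could be tuned empirically per dataset rather than fixed in the source.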
print("Warning: NaN or Inf detected in GLM update, stopping iteration")
break

self.beta = self.beta + 0.1 * delta
we should allow this 0.1 to be configurable - it sets the learning rate for the Newton updates
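A small sketch of how that could look; newton_lr is a hypothetical argument whose default of 0.1 would reproduce the current behaviour:

```python
def newton_step(beta, delta, newton_lr=0.1):
    """Damped Newton update on the GLM weights; newton_lr replaces the hard-coded 0.1."""
    return beta + newton_lr * delta
```

Smaller values should trade convergence speed for stability when the update is poorly conditioned.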
N = self.num_neurons

# Store T for RBF basis functions
self.T_for_rbf = T
If there is a better way to handle T for the RBF basis, that would be nice - if you want to do inference, it needs to be set again, which can cause some issues... though the model isn't currently set up for inference. but we can maybe implement this in a cleaner way next time
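One possible cleaner pattern is a free function that takes the time axis explicitly, so the basis can be rebuilt at inference time instead of relying on the stored self.T_for_rbf. The function name, argument names, and the Gaussian form below are assumptions, not the code in this PR:

```python
import torch

def make_rbf_basis(T, num_basis, rbf_width=0.2, device="cpu"):
    """Build a (T, num_basis) design matrix of Gaussian bumps covering [0, T)."""
    t = torch.arange(T, dtype=torch.float32, device=device)
    centers = torch.linspace(0.0, float(T - 1), num_basis, device=device)
    spacing = centers[1] - centers[0] if num_basis > 1 else torch.tensor(float(T), device=device)
    width = rbf_width * spacing  # width expressed as a scale factor of the center spacing
    return torch.exp(-0.5 * ((t[:, None] - centers[None, :]) / width) ** 2)

# The model could store the arguments rather than T itself and rebuild the basis
# whenever it sees data with a new time axis:
# phi_train = make_rbf_basis(T_train, num_basis=20)
# phi_test  = make_rbf_basis(T_test,  num_basis=20)  # weights only transfer if the bases are consistent
```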
Summary
This PR introduces GLMPPSeq, a generalized version of PPSeq that incorporates a Generalized Linear Model (GLM) for modeling time-varying background firing rates. This extends the original PPSeq framework to handle non-stationary neural data where background activity varies over time.

Key Features
GLMPPSeq Model:
- L1 regularization on amplitudes (l1_amp) and template neuron participation (l1)
- rbf_width parameter controls the temporal smoothness of the background (a scale factor relative to basis spacing, default=0.2); see the worked example below
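For instance, if the RBF basis centers happen to be spaced 1 s apart (an illustrative spacing, not a value taken from this PR), the default rbf_width=0.2 yields Gaussian basis functions about 0.2 s wide.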
Implementation Details
- EM algorithm with three update steps
- Proper GLM initialization in initialize_random() and initialize_default()
- Consistent use of phi for covariates throughout the class
- Device-aware tensor operations for GPU acceleration (see the sketch below)
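As a rough sketch of how phi and the GLM weights could combine into the time-varying background (assuming a log-link, Poisson-style GLM, which the summary does not spell out; all shapes and names below are illustrative):

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

T, B, N = 1000, 20, 50                   # time bins, basis functions, neurons (illustrative)
phi = torch.randn(T, B, device=device)   # covariate / design matrix of RBF features
beta = torch.zeros(N, B, device=device)  # per-neuron GLM weights

# Time-varying background intensity, one column per neuron (shape (T, N)).
background_rate = torch.exp(phi @ beta.T)
```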
Backward Compatibility
- PPSeq and CAVI classes remain unchanged
- New GLM functionality is contained in the GLMPPSeq class to avoid breaking existing code

Testing Recommendations
After merge, users should verify:
- fit() runs successfully on both CPU and GPU (a smoke-test sketch follows)
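A minimal smoke test along these lines could cover that check; the import path, constructor keywords, fit() signature, and the beta attribute are all assumptions that would need to match the actual GLMPPSeq API:

```python
import torch
# from ppseq import GLMPPSeq   # hypothetical import path

def smoke_test(model_cls, spikes, num_neurons):
    """Fit on CPU and, when available, on GPU, and check the learned GLM weights are finite."""
    devices = ["cpu"] + (["cuda"] if torch.cuda.is_available() else [])
    for device in devices:
        model = model_cls(num_neurons=num_neurons, device=device)  # assumed constructor kwargs
        model.fit(spikes.to(device))                               # assumed fit() signature
        assert torch.isfinite(model.beta).all(), f"non-finite GLM weights on {device}"
```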