Conversation

@MikeSchutzman

Created Nifti Zarr file

Used Volume Logic from Zarr but modelTransform logic from Nifti

@balbasty
Collaborator

Cool!

Did you check that when you load the demo file with the niftizarr:// protocol on top of this state, both layers are aligned?

@aaronkanzer

@balbasty @MikeSchutzman could you share a valid, converted nifti-zarr? I just wanted to stress-test this on my own locally with a valid asset -- thanks!

@balbasty
Collaborator

@aaronkanzer See the last post in the brainhack issue: brainhack-boston/brainhack-boston.github.io#50

There are links to two nifti-zarr assets (the first one should be used, as it has a single channel), and links to two neuroglancer scenes: one that just uses the default zarr reader, and one where I manually applied the correct "nifti" transform on top of it. The asset read with the niftizarr protocol should be aligned with the second scene.
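For context, a "nifti" transform here is a voxel-to-world affine, and alignment of the two scenes amounts to the same voxel index mapping to the same world coordinate under each layer's transform. A minimal sketch of applying such a 4x4 affine (illustrative names and numbers, not Neuroglancer's API):

```python
# Minimal sketch: apply a 4x4 NIfTI-style homogeneous affine
# (voxel index -> world coordinate) to a 3D voxel index.
# The matrix values below are made up for illustration.

def apply_affine(affine, voxel):
    """Map a 3D voxel index through a 4x4 homogeneous affine."""
    x, y, z = voxel
    v = (x, y, z, 1.0)
    return tuple(
        sum(affine[row][col] * v[col] for col in range(4))
        for row in range(3)
    )

# Example: 0.5 mm isotropic spacing with a -10 mm origin offset,
# in the style of a NIfTI sform matrix.
affine = [
    [0.5, 0.0, 0.0, -10.0],
    [0.0, 0.5, 0.0, -10.0],
    [0.0, 0.0, 0.5, -10.0],
    [0.0, 0.0, 0.0, 1.0],
]

print(apply_affine(affine, (20, 20, 20)))  # -> (0.0, 0.0, 0.0)
```

Two layers are aligned exactly when this mapping agrees for every voxel, which is what the visual check against the second scene verifies.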

@aaronkanzer

aaronkanzer commented Apr 30, 2024

@MikeSchutzman were you able to confirm this working on your local dev environment? e.g. in my local dev server, there is no data source for niftizarr://

[Two screenshots (Apr 30, 2024) of the local dev server's data source list, with no niftizarr:// entry]

There seems to be additional setup/files that Neuroglancer expects in order to fully support a file type end-to-end -- e.g. see how zarr is wired in here: https://github.com/google/neuroglancer/blob/master/package.json#L356-L373 -- so I'm curious how you were able to validate the code here successfully

Wanted to double-check these details with you first to ensure I'm not missing something regarding setup...
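The "no data source for niftizarr://" symptom is consistent with a missing protocol registration. The general pattern, sketched below with hypothetical names (Neuroglancer's real registry lives in its TypeScript frontend and is wired up through the build configuration linked above), is a map from URL scheme to a provider factory:

```python
# Illustrative sketch of the protocol-registration pattern.
# Names are hypothetical, not Neuroglancer's actual API.

class DataSourceRegistry:
    def __init__(self):
        self._providers = {}

    def register(self, scheme, provider):
        """Associate a URL scheme (e.g. 'zarr') with a provider factory."""
        self._providers[scheme] = provider

    def get(self, url):
        """Resolve a 'scheme://path' URL to a data source."""
        scheme, _, path = url.partition("://")
        if scheme not in self._providers:
            # This is the failure mode seen in the screenshots:
            # an unregistered scheme yields "no data source".
            raise KeyError(f"no data source registered for {scheme}://")
        return self._providers[scheme](path)

registry = DataSourceRegistry()
registry.register("zarr", lambda path: f"zarr source for {path}")
registry.register("niftizarr", lambda path: f"niftizarr source for {path}")
print(registry.get("niftizarr://bucket/volume"))
```

Until an equivalent registration exists for the new scheme in the frontend build, `niftizarr://` URLs cannot resolve regardless of how correct the reader code itself is.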

@aaronkanzer

@MikeSchutzman could you also merge main back in here when you have the chance? The maintainers of neuroglancer recently changed their frontend buildpack from vite to webpack, so I wanted to ensure we don't miss any inconsistencies with that -- thanks again for your work here!

@aaronkanzer

aaronkanzer commented Apr 30, 2024

@MikeSchutzman @balbasty @kabilar Just another thought here, as there seems to be a decent amount of code redundancy here vs. the zarr equivalent in https://github.com/lincbrain/neuroglancer/tree/master/src/datasource/zarr

Where are we validating that the nifti -> zarr conversion is pixel-to-pixel accurate (i.e. that what we see rendered is truly correct)? My estimation is that we need to confirm this both computationally and, as it pertains to this PR, in a rendered environment.

Let me know your thoughts -- happy to brainstorm here, but without proper validation, this presents risk
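The computational half of that validation can be as simple as comparing voxel arrays element-wise. A self-contained sketch (in practice the arrays would be loaded with nibabel and zarr-python, which is an assumption outside this PR; plain nested lists stand in here so the sketch runs on its own):

```python
# Minimal sketch: check that two volumes agree element-wise within a
# tolerance. In a real check, `nifti_vox` would come from the source
# NIfTI (e.g. via nibabel) and `zarr_vox` from the converted store.

def volumes_match(a, b, tol=1e-6):
    """True if two 3D volumes have identical shape and values within tol."""
    if len(a) != len(b):
        return False
    for plane_a, plane_b in zip(a, b):
        if len(plane_a) != len(plane_b):
            return False
        for row_a, row_b in zip(plane_a, plane_b):
            if len(row_a) != len(row_b):
                return False
            if any(abs(x - y) > tol for x, y in zip(row_a, row_b)):
                return False
    return True

nifti_vox = [[[0.0, 1.0], [2.0, 3.0]], [[4.0, 5.0], [6.0, 7.0]]]
zarr_vox = [[[0.0, 1.0], [2.0, 3.0]], [[4.0, 5.0], [6.0, 7.0]]]
print(volumes_match(nifti_vox, zarr_vox))  # -> True
```

The rendered-environment half is separate: even with identical voxels, a wrong model transform would still misplace the layer, which is why the alignment check against the reference scene matters too.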

@MikeSchutzman
Author

> @MikeSchutzman were you able to confirm this working on your local dev environment? e.g. in my local dev server, there is no data source for niftizarr://
>
> [Two screenshots (Apr 30, 2024) of the local dev server's data source list, with no niftizarr:// entry]
>
> There seems to be additional setup/files that Neuroglancer expects in order to fully support a file type end-to-end -- e.g. see how zarr is wired in here: https://github.com/google/neuroglancer/blob/master/package.json#L356-L373 -- so I'm curious how you were able to validate the code here successfully
>
> Wanted to double-check these details with you first to ensure I'm not missing something regarding setup...

Haven't tested yet. Once I test, I'll change the PR from draft to ready for review.

@MikeSchutzman
Author

> @MikeSchutzman @balbasty @kabilar Just another thought here, as there seems to be a decent amount of code redundancy here vs. the zarr equivalent in https://github.com/lincbrain/neuroglancer/tree/master/src/datasource/zarr
>
> Where are we validating that the nifti -> zarr conversion is pixel-to-pixel accurate (i.e. that what we see rendered is truly correct)? My estimation is that we need to confirm this both computationally and, as it pertains to this PR, in a rendered environment.
>
> Let me know your thoughts -- happy to brainstorm here, but without proper validation, this presents risk

Yes, great point... not sure how to confirm this is actually working as it's supposed to.

@MikeSchutzman
Author

Just to update with status: the errors are caused by these two statements:

```ts
staticAnnotations: makeDataBoundsBoundingBoxAnnotationSet(box),
```

It wants `volumeZarr.modelSpace.bounds`,

and

```ts
modelTransform: {
  sourceRank: info.rank,
  rank: info.rank,
  inputSpace,
  outputSpace,
  transform: info.transform,
},
```

it wants `modelTransform: makeIdentityTransform(volumeNifti.modelSpace),`

@MikeSchutzman
Author

Still not sure how to make this work. I've been playing around with it. It breaks when I use getNiftiVolumeInfo() instead of MultiscaleVolumeChunkSource, but that's the entire point of this combined niftizarr file...

@MikeSchutzman
Author

@aaronkanzer @balbasty any ideas on the last two comments?

@MikeSchutzman
Author

I didn't create a combined niftizarr backend -- that's the issue...

@MikeSchutzman
Author

> still not sure how to make this work. Been playing around with it. It breaks when i use getNiftiVolumeInfo() instead of MultiscaleVolumeChunkSource but that's the entire point of this combo niftizarr file...

Update here: it's the call to getNiftiVolumeInfo() that breaks it. Originally I thought the issue was saving its result to a variable and passing that variable into the later transform, but that was wrong. Just calling the function breaks the code, even if I pass the zarr info into the transform later on.

@kabilar kabilar removed their request for review October 9, 2025 15:07
