Documentation + Fine Tuning + Neuroglancer #1

@william-silversmith

Description

Hi Brian et al,

I'm the author of CloudVolume, a Python client for reading and writing Neuroglancer-compatible data on cloud storage services. It's used extensively by my lab, and some other labs are starting to adopt it.

I've been curious about this algorithm and have started playing around with it (please let me know if you'd like me to update how you're cited in that PR). I have some questions about how to use it.

  1. There are three parameters for compression: data, res, and steps. I believe data is a 3D numpy array and res is data.shape[:3], but what is steps? Is it the size of the template, e.g. Figure 2 in the paper?
  2. If I'm correct that it's the template size, does Compresso support 3D windows? There's a z parameter there. I can experiment, but is there a recommended template size? Does the template size dramatically affect performance, and is there a reasonable default?
  3. Have you thought about creating a neuroglancer plugin for Compresso? If I can get the plugin's encoding/decoding performance comparable to gzipping numpy arrays, this codec could become very popular with my users. Without neuroglancer support, however, the use cases will be highly restricted to e.g. data transfer.
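For concreteness, here's a minimal sketch of how I'm guessing the call is meant to look, based on my reading above. The argument names, the meaning of steps as a 3D template size, and the compresso.compress signature are all my assumptions; please correct me where I'm wrong:

```python
import numpy as np

# A toy 3D segmentation volume (label IDs, not image intensities).
labels = np.zeros((128, 128, 16), dtype=np.uint64)
labels[32:96, 32:96, :] = 1  # one toy segment

# My guess: res is just the volume dimensions.
res = labels.shape[:3]  # (128, 128, 16)

# My guess: steps is the (x, y, z) template/window size from Fig. 2,
# with z = 1 for a 2D window per slice.
steps = (8, 8, 1)

# Hypothetical call -- is this the intended usage?
# compressed = compresso.compress(labels, res, steps)
```

If that's roughly right, I'd be happy to wire something like this into CloudVolume behind a codec flag.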

Thanks for reading and thanks for your efforts.
Will S.
