
Training for better embeddings in a self-supervised fashion with masked language modeling (MLM) on my own dataset #161

@Frank-LIU-520

Description


I have a dataset of millions of sequences, generated in the lab with a specific method, without any labels. I would like to train on those sequences, starting from a ProtTrans pretrained model, to obtain better embeddings for these unlabeled sequences. What should I do?
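A minimal sketch of what continued MLM pretraining from a ProtTrans checkpoint could look like with the Hugging Face `transformers` and `datasets` libraries is shown below. The checkpoint name `Rostlab/prot_bert`, the example sequences, and all hyperparameters are assumptions for illustration, not from the original issue.

```python
import re

from datasets import Dataset
from transformers import (
    BertForMaskedLM,
    BertTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Assumed checkpoint: the BERT-based ProtTrans model on the Hugging Face Hub.
model_name = "Rostlab/prot_bert"
tokenizer = BertTokenizer.from_pretrained(model_name, do_lower_case=False)
model = BertForMaskedLM.from_pretrained(model_name)

def format_sequence(seq: str) -> str:
    # ProtBert expects uppercase residues separated by spaces,
    # with rare amino acids (U, Z, O, B) mapped to X.
    seq = re.sub(r"[UZOB]", "X", seq.upper())
    return " ".join(seq)

# Placeholder sequences; replace with your own unlabeled dataset
# (e.g. parsed from a FASTA file).
raw_sequences = ["MKTAYIAKQR", "GSHMSLFDFFK"]
dataset = Dataset.from_dict(
    {"text": [format_sequence(s) for s in raw_sequences]}
)

def tokenize(batch):
    # Truncation length is a placeholder; choose it to fit your sequences.
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# Dynamic masking for the MLM objective (15% of tokens, the usual default).
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

args = TrainingArguments(
    output_dir="protbert-mlm-continued",  # placeholder output path
    per_device_train_batch_size=8,
    num_train_epochs=1,
    learning_rate=5e-5,
    fp16=True,  # assumes a GPU with mixed-precision support
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```

After this continued pretraining, embeddings for the unlabeled sequences could be taken from the encoder's hidden states (for example by mean-pooling the last hidden layer over residue positions), in the same way as with the original ProtTrans checkpoint.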
