
[Q]: Quantize a Pre-Trained Model Using QLoRA or LoRA (PEFT Technique) #13

@deep-matter

Description


Hey all, I hope you're having a good day.
I'd like to ask a question, please:

Q: How can I quantize a pre-trained model using QLoRA or LoRA, the PEFT technique?

Specifically, how can I use QLoRA or Parameter-Efficient Fine-Tuning with a model that is not registered on Hugging Face but is instead based on OFA?

Here is the repo of the model: GitHub

I am trying to quantize the Tiny version, but I don't know whether I need LoRA, or in which way to apply Parameter-Efficient Fine-Tuning to it.
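For context on what LoRA actually does to a layer (independent of Hugging Face registration), here is a minimal sketch of the LoRA update in plain NumPy. This is an illustrative assumption, not OFA code and not the `peft` library: the class name `LoRALinear` and all parameter choices (`r`, `alpha`) are hypothetical. The key idea is that the frozen base weight `W` is left untouched and only a low-rank pair `A`, `B` is trained, with `B` zero-initialized so the adapted layer starts out identical to the base layer.

```python
import numpy as np

class LoRALinear:
    """Minimal sketch of a LoRA-adapted linear layer (hypothetical example).

    Forward pass: y = W x + (alpha / r) * B (A x)
    Only A and B would be trained; the frozen pre-trained weight W is
    untouched, which is what makes LoRA parameter-efficient.
    """

    def __init__(self, W, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        out_dim, in_dim = W.shape
        self.W = W                                         # frozen base weight
        self.A = rng.standard_normal((r, in_dim)) * 0.01   # trainable down-projection
        self.B = np.zeros((out_dim, r))                    # trainable up-projection, zero-init
        self.scale = alpha / r

    def __call__(self, x):
        # Base output plus the scaled low-rank correction.
        return self.W @ x + self.scale * (self.B @ (self.A @ x))

# At initialization B == 0, so the adapted layer matches the base layer exactly.
W = np.arange(6, dtype=float).reshape(2, 3)
x = np.ones(3)
layer = LoRALinear(W)
assert np.allclose(layer(x), W @ x)
```

In practice, since OFA's tiny model is an ordinary `torch.nn.Module`, you would wrap its attention/linear layers with adapters like this (or point a PEFT-style `target_modules` list at their module names) rather than reimplementing the math yourself; the quantization step (as in QLoRA) applies to the frozen `W` only, while `A` and `B` stay in higher precision.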
