LoRA & DoRA in TinyGrad
This project demonstrates the implementation of LoRA (Low-Rank Adaptation) and DoRA (Weight-Decomposed Low-Rank Adaptation) in TinyGrad. Both techniques enable parameter-efficient fine-tuning of deep learning models by injecting small, trainable low-rank adapters into linear layers while the pretrained weights stay frozen, as sketched below.
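The core idea can be sketched in a few lines of tinygrad. The snippet below is a minimal, illustrative sketch rather than the project's actual code: the class names (`LoRALinear`, `DoRALinear`), default rank, scaling factor, and initialization details are assumptions, and it assumes the standard `tinygrad.Tensor` and `tinygrad.nn.Linear` APIs.

```python
# lora_dora_sketch.py -- illustrative only; names and hyperparameters are assumptions.
from tinygrad import Tensor
from tinygrad.nn import Linear

class LoRALinear:
  """Frozen Linear layer plus a trainable low-rank update: y = W0 x + (alpha/r) * B A x."""
  def __init__(self, base: Linear, rank: int = 8, alpha: float = 16.0):
    self.base = base
    self.base.weight.requires_grad = False              # freeze the pretrained weight
    if self.base.bias is not None: self.base.bias.requires_grad = False
    out_features, in_features = base.weight.shape       # tinygrad stores Linear weight as (out, in)
    self.lora_A = Tensor.kaiming_uniform(rank, in_features)  # small random init
    self.lora_B = Tensor.zeros(out_features, rank)            # zero init -> adapter starts as a no-op
    self.lora_A.requires_grad = True
    self.lora_B.requires_grad = True
    self.scale = alpha / rank

  def __call__(self, x: Tensor) -> Tensor:
    # frozen path plus scaled low-rank update
    return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scale

class DoRALinear(LoRALinear):
  """DoRA re-parameterizes the merged weight as magnitude * direction:
  W' = m * (W0 + B A) / ||W0 + B A||_col, with the magnitude vector m trainable."""
  def __init__(self, base: Linear, rank: int = 8, alpha: float = 16.0):
    super().__init__(base, rank, alpha)
    # m is initialized to the column-wise norm of the pretrained weight
    self.m = base.weight.square().sum(axis=0, keepdim=True).sqrt().detach()
    self.m.requires_grad = True

  def __call__(self, x: Tensor) -> Tensor:
    merged = self.base.weight + (self.lora_B @ self.lora_A) * self.scale   # (out, in)
    col_norm = merged.square().sum(axis=0, keepdim=True).sqrt()            # (1, in)
    w = self.m * (merged / col_norm)                                       # rescale each column by m
    out = x @ w.T
    return out + self.base.bias if self.base.bias is not None else out

if __name__ == "__main__":
  base = Linear(512, 512)              # stands in for a pretrained layer
  layer = LoRALinear(base, rank=8)
  x = Tensor.randn(4, 512)
  print(layer(x).shape)                # (4, 512); only lora_A and lora_B receive gradients
```

Because `lora_B` is zero-initialized, the adapted layer is exactly equivalent to the base layer at the start of training; only the adapter matrices (and, for DoRA, the magnitude vector `m`) are updated.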
The project showcases how low-rank approximations can sharply reduce the number of trainable parameters while maintaining performance.
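To make the savings concrete, here is a rough back-of-the-envelope comparison; the layer size (4096x4096) and rank (8) are hypothetical and not taken from the project.

```python
# Trainable-parameter comparison for a single linear layer (hypothetical sizes).
d_in, d_out, r = 4096, 4096, 8
full_ft = d_in * d_out               # 16,777,216 params updated in full fine-tuning
lora_ft = r * (d_in + d_out)         # 65,536 params updated with a rank-8 adapter
print(f"LoRA trains {lora_ft / full_ft:.2%} of the layer's parameters")  # ~0.39%
```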