Hey all, I am in the process of testing several models for fine-tuning, and this question cropped up.
I would like to add new facts to a foundation model and then train it for instruction following. The problem is, I will regularly have new data to add. I was wondering if there is a chance I could do a single LoRA for the instruction tuning and reapply it each time I finish a new round of fact fine-tuning?
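For what it's worth, here's roughly the workflow I'm imagining, sketched with the PEFT library (the paths are just placeholders, and I'm assuming the adapter stays compatible across base checkpoints, which is exactly what I'm unsure about):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Base model after the latest round of fact fine-tuning (placeholder path)
base = AutoModelForCausalLM.from_pretrained("./base-after-fact-finetune")

# Reapply the instruction-tuning LoRA adapter that was trained once
# on an earlier checkpoint of the same base model (placeholder path)
model = PeftModel.from_pretrained(base, "./instruct-lora-adapter")

# Optionally merge the adapter weights into the base for deployment
model = model.merge_and_unload()
```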
The author of the UE5 LoRA doesn't seem too convinced himself: https://github.com/bublint/ue5-llama-lora/issues/7#issuecomment-1612001607
The xFinance one, on the other hand, seems to have been evaluated with positive results.