AI/ML LAB
Approximate contribution of each factor to model quality:

| Factor | Approximate Weight (%) |
|---|---|
| Diverse training language | 30 |
| Model size | 15 |
| Fine-tuning | 15 |
| Regularization | 10 |
| Optimizer and learning rate | 10 |
| Context size | 10 |
| Libraries | 5 |
| Model architecture | 3 |
| Model initialization | 2 |
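The table above can be expressed as a quick sanity check. This is a minimal sketch: the factor names and percentages come directly from the table, and the only added logic is verifying the percentages total 100 and ranking the factors.

```python
# Approximate contribution of each factor to model quality, per the table above.
factor_weights = {
    "Diverse training language": 30,
    "Model size": 15,
    "Fine-tuning": 15,
    "Regularization": 10,
    "Optimizer and learning rate": 10,
    "Context size": 10,
    "Libraries": 5,
    "Model architecture": 3,
    "Model initialization": 2,
}

# The weights are percentages, so they should sum to 100.
assert sum(factor_weights.values()) == 100

# Rank the factors by weight, highest first.
ranked = sorted(factor_weights.items(), key=lambda kv: kv[1], reverse=True)
for factor, weight in ranked:
    print(f"{factor}: {weight}%")
```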
Implement Vicuna (a ChatGPT-style chat assistant)
- Download the LLaMA base model (the LLM weights):
magnet:?xt=urn:btih:b8287ebfa04f879b048d4d4404108cf3e8014352&dn=LLaMA
- Convert the base model by applying the Vicuna weight deltas:
https://github.com/lm-sys/FastChat#vicuna-weights
- FastChat source code:
https://github.com/lm-sys/FastChat
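The steps above can be sketched as a shell session. This is an assumption-laden outline, not a verified recipe: it assumes the downloaded LLaMA weights have already been converted to Hugging Face format, and every local path is a placeholder to replace with your own.

```shell
# Install FastChat (the repository linked above).
pip install fschat

# Apply the Vicuna delta weights to the LLaMA base model.
# /path/to/llama-13b-hf is a placeholder for the converted LLaMA checkpoint.
python3 -m fastchat.model.apply_delta \
    --base /path/to/llama-13b-hf \
    --target /path/to/vicuna-13b \
    --delta lmsys/vicuna-13b-delta-v1.1

# Chat with the resulting model from the command line.
python3 -m fastchat.serve.cli --model-path /path/to/vicuna-13b
```

The delta-weight step exists because Vicuna is distributed as a diff against LLaMA rather than as full weights, for licensing reasons; see the "Vicuna weights" link above for the authoritative instructions.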