AI/ML LAB
| Factor                       | Approximate Weight (%) |
|------------------------------|------------------------|
| Diverse training language    | 30                     |
| Model size                   | 15                     |
| Fine-tuning                  | 15                     |
| Regularization               | 10                     |
| Optimizer and learning rate  | 10                     |
| Context size                 | 10                     |
| Libraries                    | 5                      |
| Model architecture           | 3                      |
| Model initialization         | 2                      |
Implement Vicuna (ChatGPT-style chatbot)
- Download the LLaMA base model (LLM weights): magnet:?xt=urn:btih:b8287ebfa04f879b048d4d4404108cf3e8014352&dn=LLaMA
- Convert the base weights to Vicuna by applying the Vicuna weights as described in the FastChat README (a short loading sketch follows this list): https://github.com/lm-sys/FastChat#vicuna-weights
- Code: https://github.com/lm-sys/FastChat
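
The FastChat README linked above distributes Vicuna as weights that are applied on top of the downloaded LLaMA weights; once converted, the checkpoint loads like any Hugging Face causal language model. Below is a minimal loading-and-generation sketch, assuming the conversion has already produced a model directory at ./vicuna-7b (a placeholder path) and that torch, transformers, and accelerate are installed.

```python
# Sketch: load a converted Vicuna checkpoint and generate one reply.
# Assumes the Vicuna weights have already been applied to the base LLaMA
# weights per the FastChat README, producing a Hugging Face-format model
# directory at ./vicuna-7b (placeholder path).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_DIR = "./vicuna-7b"  # placeholder: path to the converted weights

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_DIR,
    torch_dtype=torch.float16,  # half precision to fit consumer GPUs
    device_map="auto",          # requires `accelerate`; places layers on available devices
)

# A simple Vicuna-style prompt; the exact template depends on the Vicuna version.
prompt = "USER: Explain what Vicuna is in one sentence. ASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=128,
        temperature=0.7,
        do_sample=True,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

For interactive testing, the FastChat repository also ships a command-line chat client (documented in its README, at the time of writing invoked as `python3 -m fastchat.serve.cli --model-path <path>`), which is usually the quicker way to verify a converted checkpoint.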