author: Daniel Hládek
From [How to scale the BERT training with NVIDIA GPUs](https://medium.com/nvidia-ai/how-to-scale-the-bert-training-with-nvidia-gpus-c1575e8eaf71):
When the mini-batch size n is multiplied by k, some theories suggest multiplying the starting learning rate η by the square root of k. However, experiments by multiple researchers show that linear scaling gives better results: multiply the starting learning rate by k instead.
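A minimal sketch of the two scaling rules described above (the helper `scale_lr` is illustrative, not from the article):

```python
import math

def scale_lr(base_lr: float, k: float, rule: str = "linear") -> float:
    """Scale a starting learning rate for a k-times larger mini-batch."""
    if rule == "linear":   # works better in practice, per the article
        return base_lr * k
    if rule == "sqrt":     # what some theories suggest
        return base_lr * math.sqrt(k)
    raise ValueError(f"unknown rule: {rule}")

# Growing the batch 8x: linear scaling raises the learning rate 8x,
# square-root scaling only ~2.8x.
print(scale_lr(1e-4, 8, "linear"))
print(scale_lr(1e-4, 8, "sqrt"))
```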
| Model      | Parameters |
|------------|------------|
| BERT Large | 330M       |
| BERT Base  | 110M       |
Larger input vector size => smaller batch size => smaller learning rate => slower training
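The chain above (longer inputs force a smaller batch under a fixed memory budget, and linear scaling then forces a smaller learning rate) can be sketched as follows; the memory model and the helper `adjusted_hparams` are simplifying assumptions, not from the source:

```python
def adjusted_hparams(base_seq_len: int, base_batch: int,
                     base_lr: float, new_seq_len: int):
    """Adjust batch size and learning rate for a longer input sequence.

    Assumes activation memory grows roughly as seq_len * batch, so the
    batch shrinks in proportion to the sequence-length increase; the
    learning rate then follows the linear scaling rule.
    """
    new_batch = max(1, base_batch * base_seq_len // new_seq_len)
    new_lr = base_lr * new_batch / base_batch  # linear scaling rule
    return new_batch, new_lr

# Quadrupling the sequence length from 128 to 512 quarters the batch
# and, via linear scaling, quarters the learning rate.
print(adjusted_hparams(128, 256, 1e-4, 512))
```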
## Completed tasks