Why do RNNs share the same weights?

https://stackoverflow.com/questions/47865034/recurrent-nns-whats-the-point-of-parameter-sharing-doesnt-padding-do-the-tri

 

Recurrent NNs: what's the point of parameter sharing? Doesn't padding do the trick anyway?

The main purpose is to reduce the number of parameters that need to be learned: if the weights were not shared, each time step would need its own weight matrices, so the parameter count would grow with the sequence length.
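A minimal PyTorch sketch of this point (my own illustration, not from the linked answer): it compares the parameter count of a standard shared-weight nn.RNN with a hypothetical "unshared" version that gives every time step its own weight matrices. The sizes (input_size=100, hidden_size=128, seq_len=50) are arbitrary.

```python
import torch.nn as nn

# Arbitrary sizes for illustration.
input_size, hidden_size, seq_len = 100, 128, 50

# Standard RNN: one W_ih and one W_hh, reused at every time step.
shared = nn.RNN(input_size, hidden_size, batch_first=True)

# Hypothetical "unshared" version: a separate pair of weight matrices
# for every time step (only built here to count parameters).
unshared = nn.ModuleList([
    nn.ModuleDict({
        "ih": nn.Linear(input_size, hidden_size),
        "hh": nn.Linear(hidden_size, hidden_size),
    })
    for _ in range(seq_len)
])

def count(m):
    return sum(p.numel() for p in m.parameters())

print("shared RNN parameters:", count(shared))    # 29,440
print("unshared parameters:  ", count(unshared))  # 1,472,000 (50x more)
```

The shared RNN stays at about 29K parameters regardless of sequence length, while the unshared version grows linearly with seq_len (here 50x larger), which is exactly what weight sharing avoids.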
