Feature/while op sentiment analysis#6282
Conversation
A v2 API like data feeder for book demos. We can feed data directly from reader.
…op_sentiment_analysis
```python
class DynamicRNN(object):
    BEFORE_RNN = 0
    IN_RNN = 1
    AFTER_RNN = 2
```
So, can we use nested dynamic RNNs? Should the statuses be IN_RNN and OUT_RNN?
```python
    type='max_sequence_len',
    inputs={'RankTable': self.lod_rank_table},
    outputs={"Out": self.max_seq_len})
self.cond.stop_gradient = True
```
There is no need to calculate the gradient of less_than.
```python
self.output_array = []
self.outputs = []
self.cond = self.helper.create_tmp_variable(dtype='bool')
self.cond.stop_gradient = False
```
I am a little confused about these two concepts, stop_gradient and no_gradient. Are they the same?
no_grad_set is a parameter of the backward(...) method, while stop_gradient is an attribute of a Python Variable.
Variables whose stop_gradient=True will be passed to backward(no_grad_set=...).
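To make the relationship concrete, here is a minimal framework-free sketch of that mechanism. The `Variable` class and `collect_no_grad_set` helper are hypothetical stand-ins, not the actual fluid API: a Variable carries a `stop_gradient` flag, and the names of flagged variables are gathered into the set handed to `backward(no_grad_set=...)`.

```python
class Variable:
    """Hypothetical stand-in for a framework Variable."""

    def __init__(self, name, stop_gradient=False):
        self.name = name
        # Per-variable flag: True means "do not compute a gradient for me".
        self.stop_gradient = stop_gradient


def collect_no_grad_set(variables):
    """Gather names of variables marked stop_gradient=True.

    The resulting set is what would be passed to backward(no_grad_set=...),
    so the backward pass skips gradient ops for those variables.
    """
    return {v.name for v in variables if v.stop_gradient}


cond = Variable('cond', stop_gradient=True)   # e.g. a boolean loop condition
mem = Variable('mem')                          # a memory state that needs gradients
no_grad = collect_no_grad_set([cond, mem])     # only 'cond' ends up in the set
```

So the two concepts are linked but not identical: `stop_gradient` is the per-variable marker, and `no_grad_set` is the aggregate view of those markers that the backward pass consumes.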
```python
self.step_idx.stop_gradient = False
self.status = DynamicRNN.IN_RNN
with self.while_op.block():
    yield
```
Why yield here, rather than before line 1922?
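The placement matters because of how generator-based context managers work: the caller's code runs exactly at the `yield` point, so yielding inside `with self.while_op.block():` is what puts the user's step code inside the while-op block. The following is a toy sketch of that control flow using `contextlib` (the `WhileBlock` and `rnn_block` names are illustrative, not the fluid implementation):

```python
from contextlib import contextmanager


class WhileBlock:
    """Toy stand-in for while_op.block(): records enter/exit order."""

    def __init__(self, log):
        self.log = log

    def __enter__(self):
        self.log.append('enter while block')

    def __exit__(self, *exc):
        self.log.append('exit while block')


@contextmanager
def rnn_block(log):
    # Mirrors DynamicRNN.block(): switch status, then open the while block.
    log.append('IN_RNN')
    with WhileBlock(log):
        yield  # the user's step code executes here, inside the while block
    log.append('AFTER_RNN')


log = []
with rnn_block(log):
    log.append('user step code')
```

If the `yield` were placed before entering `while_op.block()`, the user's step code would run outside the while block, so the ops it creates would not belong to the loop body.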
```python
    rnn.update_memory(mem, out_)
    rnn.output(out_)

last = fluid.layers.sequence_pool(input=rnn(), pool_type='last')
```
Just follow the API design in https://github.com/PaddlePaddle/talks/blob/develop/paddle-gtc-china.pdf
fixes #5861