dynamic recurrent op forward c++ implementation #4597
Superjomn merged 38 commits into PaddlePaddle:develop from
Conversation
…/dynamic-recurrent-op
namespace operators {

namespace detail {
If you add "namespace detail", you should put this into the "paddle/operators/detail" path.
reference https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/tensor_array.cc#L26
I think `detail` is a special namespace; it is meaningless to put it into a detail directory.
}  // namespace detail

class DynamicRecurrentAlgorithmProtoAndCheckerMaker
Why not name it DynamicRecurrentOpProtoAndCheckerMaker?
| "names of pre-memories"); | ||
| AddAttr<std::vector<std::string>>(name.memories, "names of memories"); | ||
|
|
||
| AddComment("This is a recurrent group operator."); |
What is the difference between a recurrent group operator and a recurrent operator?
paddle/pybind/CMakeLists.txt
  cc_library(paddle_pybind SHARED
    SRCS pybind.cc exception.cc protobuf.cc
-   DEPS pybind python backward proto_desc tensor_array
+   DEPS pybind python backward proto_desc tensor_array dynamic_recurrent_op
This does not need a special dependency.
namespace paddle {
namespace operators {

using framework::Scope;
There are two problems:
- You may not use a using-directive to make all names from a namespace available.
- Do not use namespace aliases at namespace scope in header files.
refs: https://google.github.io/styleguide/cppguide.html#Namespaces
Scope is not a namespace, so it should be OK.
The Google style guide only says that `using namespace xxx` is forbidden.
`using` is not recommended in a header file because it may pollute other files that include it; putting it into the cpp file is OK.
https://stackoverflow.com/questions/19940081/using-keyword-in-header-implementation-in-c
// call stepnet in all the time steps
for (size_t step = 0; step < cache_.num_steps; step++) {
  stepnet_->Run(scope, dev_ctx);

// TODO(superjom) check all the inputs has the same LoD
int level = 0;
const auto& inlinks = cache_.inlinks;
for (auto& item : inlinks) {
for (const auto& item : inlinks) {...}

  dy_seq_metas_[item.first] =
      ta.Unpack(tensor, level, true /*length_descend*/);
if (cache_.num_steps) {
if (cache_.num_steps == 0UL) {...}

Use 0 to inform the type of num_steps.
This is OK; 0 converting to false is clear.
auto ta_it = step_inputs_.find(item.first);
PADDLE_ENFORCE(ta_it != step_inputs_.end(),
               "step_inputs_ not compatible with memory set");
TensorArray& ta = step_inputs_[item.first];
TensorArray& ta = step_inputs_[item.first];
==>
TensorArray& ta = ta_it->second;
jacquesqiao
left a comment
LGTM! There will be a series of PRs to complete the dynamic recurrent op.
resolves: #4554