Conversation
paddle/math/cross_map_normal_op.cpp
Outdated
    const int start = -((int)sizeX) / 2;
    const int end = (int)sizeX + start;
    const real ratio = -(real)2 * scale * pow;
can we write: -2.0 * scale * pow ?
paddle/math/Matrix.cpp
Outdated
    const int start = -((int)sizeX) / 2;
    const int end = (int)sizeX + start;
    const real ratio = -(real)2 * scale * pow;
can we write: -2.0 * scale * pow ?
I think we cannot, since 2.0 is treated as a double-typed literal by the compiler, while real is a macro that may evaluate to float.
but const real ratio might convert double into float if real is float.
But `const real ratio` would then convert the double into float when real is float, and that conversion may trigger a compiler warning.
paddle/math/cross_map_normal_op.cpp
Outdated
    }

    template <>
    void CrossMapNormalGrad<DEVICE_TYPE_CPU>::operator()(CpuMatrix& inputsGrad,
can you write down the math expression here for better understanding the code?
The backward formula is derived from the forward formula.
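For reference, a sketch of the standard cross-map (LRN) normalization formulas that this kind of code implements; the symbols here (α for scale, β for pow, W(i) for the channel window around channel i) are my convention, and the exact window bounds may differ slightly from this code:

```latex
% Forward, per channel i:
\[
  d_i = 1 + \alpha \sum_{j \in W(i)} x_j^2, \qquad y_i = x_i\, d_i^{-\beta}
\]
% Backward, by the chain rule over all windows containing i:
\[
  \frac{\partial L}{\partial x_i}
    = \frac{\partial L}{\partial y_i}\, d_i^{-\beta}
      \;-\; 2\alpha\beta\, x_i \sum_{j:\, i \in W(j)}
            \frac{\partial L}{\partial y_j}\, \frac{y_j}{d_j}
\]
```

Note the factor \(-2\alpha\beta\) is exactly the `ratio = -(real)2 * scale * pow` that appears elsewhere in this diff.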
paddle/math/cross_map_normal_op.cpp
Outdated
    // NCHW
    template <>
    void CrossMapNormal<DEVICE_TYPE_CPU>::operator()(CpuMatrix& outputs,
can you write down the math expression here for better understanding the code?
tianbingsz
left a comment
Looks very good, I only have minor comments.
    }
    }

    void testCrossMapNormalFwd(
The test is written here for now; it will later be moved into the function directory.
    REGISTER_TYPED_FUNC(CrossMapNormal, CPU, CrossMapNormalFunc);
    REGISTER_TYPED_FUNC(CrossMapNormalGrad, CPU, CrossMapNormalGradFunc);
    #ifndef PADDLE_ONLY_CPU
Later, the PADDLE_ONLY_CPU check will be moved inside REGISTER_TYPED_FUNC.
Some Function implementations may not support GPU, so this is left unchanged for now.
paddle/gserver/layers/Layer.h
Outdated
    std::vector<bool> markInBackward_;

    /// Layer forward function
    FunctionBase* forward_;
This will later be changed to std::vector, so that one Layer can contain multiple Functions.
    /* the size of inputs for norm-layer is 1 */
    CHECK_EQ(config_.inputs_size(), 1);

    if (useGpu_) {
Very exciting, will follow the work.
    forward_->calc(
        {Tensor(input->getData(), dims_)},
        {Tensor(outV->getData(), dims_), Tensor(denoms_->getData(), dims_)},
        {});
Will the data later be constructed as Tensor in the first place, instead of being converted in the middle?
The data structure used as Function arguments and the data structure used for computation are two different things, with one conversion in between.
The argument data structure (which can represent multi-dimensional, multi-typed data) is meant to replace the current paddle::Argument, while the computation data structure inherits from paddle::TensorExpression. Two separate issues will be written later to explain these two points.
tianbingsz
left a comment
Looks very good to me, only minor comments. You may ignore some of them in the interest of fast development.
    template <>
    FuncConfig& FuncConfig::set<size_t>(const std::string& key, size_t v) {
      CHECK(valueMap_.count(key) == 0) << "Duplicated value: " << key;
Should we use count > 0 to indicate there is already a value for the key?
CHECK(valueMap_.count(key) > 0) << "Duplicated value: " << key;
CHECK passes when the condition is true, so we need valueMap_.count(key) == 0 to hold, i.e. the key must not already exist.
This should be CHECK_EQ(0, valueMap_.count(key)); that way, if count returns something other than 0, glog can print the returned value, which makes debugging easier.
    template <>
    FuncConfig& FuncConfig::set<real>(const std::string& key, real v) {
      CHECK(valueMap_.count(key) == 0) << "Duplicated value: " << key;
    template <DeviceType Device>
    void CrossMapNormal(real* outputs,
                        real* denoms,
                        real* inputs,
Should we use const for inputs, such as const real * denoms? Anyway, if it is too complicated to do, please ignore this comment.
    */
    template <DeviceType Device>
    void CrossMapNormalGrad(real* inputsGrad,
                            real* inputsValue,
    const int idx = threadIdx.x + blockIdx.x * blockDim.x;
    if (idx < imageSize) {
      const int w = idx % width;
      const int h = (idx / width) % height;
Should we check width > 0 and height > 0 here? Seems not so important here though.
width and height are size_t, which is unsigned, so they cannot be negative.
    template <>
    void CrossMapNormal<DEVICE_TYPE_GPU>(real* outputs,
                                         real* denoms,
                                         real* inputs,
Should we write like const real* inputs?
    template <>
    void CrossMapNormalGrad<DEVICE_TYPE_GPU>(real* inputsGrad,
                                             real* inputsValue,
    template <>
    size_t FuncConfig::get<size_t>(const std::string& key) const {
      auto it = valueMap_.find(key);
      CHECK(it != valueMap_.end()) << "Cannot find value: '" << key << "'";
CHECK_NE does not support the type of it: glog needs to print both operands with operator<<, which map iterators do not provide.
    template <>
    real FuncConfig::get<real>(const std::string& key) const {
      auto it = valueMap_.find(key);
      CHECK(it != valueMap_.end()) << "Cannot find value: '" << key << "'";
CHECK_NE does not support the type of it (map iterators cannot be printed by glog).
    virtual void init(const FuncConfig& config) {}

    virtual void calc(const Arguments& inputs,
Need to have detailed comment for this interface function.
What this PR implements
The two parts are written together so that normal serves as an example guiding the subsequent refactoring of the Matrix API. Going forward, adding a new algorithm to Paddle will no longer mean adding a new API to Matrix; instead, Paddle will gain a new algorithm Function.
See the description in #892 for how Paddle Function is used; in this PR, a new cross map normalize Function is added.