Big data op_test benchmark, for checking output consistency across different runs. #10646
Conversation
dzhwinter
commented
May 14, 2018
- refine the op_test framework; move and refine some standalone functions into the test suite.
- add a benchmark suite for checking numerical stability and GPU/CPU speed.
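The "checking output consistency across runs" part of the benchmark suite could be sketched roughly as follows. This is a hypothetical helper, not the actual PR code: `check_output_stable` and its signature are illustrative assumptions.

```python
import numpy as np

def check_output_stable(run_once, iters=3, atol=1e-5):
    # Hypothetical sketch: run the op several times and verify that the
    # outputs agree within tolerance, i.e. results are stable across runs.
    baseline = run_once()
    for _ in range(iters - 1):
        out = run_once()
        np.testing.assert_allclose(out, baseline, atol=atol)
    return baseline
```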
Maybe the change of the mul_op test can be separated into another PR?

This PR is mainly focused on the big-input-data test case, which is how I noticed that the mul_op time cost is so bad. But that's fine if you think we should separate the mul_op change into a new PR.
```python
start = time.time()
for i in range(iters):
    callback(*args, **kwargs)
elapse = time.time() - start
```
`+=`? Should `elapse` be initialized before the loop?
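What the reviewer seems to suggest could look like the sketch below: initialize `elapse` before the loop and accumulate per-iteration costs with `+=` (a minimal illustration, not the PR's actual helper).

```python
import time

def benchmark(callback, iters, *args, **kwargs):
    # Initialize elapse before the loop and accumulate with +=,
    # timing each call individually rather than the whole loop.
    elapse = 0.0
    for _ in range(iters):
        start = time.time()
        callback(*args, **kwargs)
        elapse += time.time() - start
    return elapse
```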
```python
elif dtype == np.bool:
    return core.VarDesc.VarType.BOOL
elif dtype == np.uint16:
    return core.VarDesc.VarType.INT16
```
uint16 and int16 are different?
You are obviously right, but we have to use INT16 here, because:
- we do not have unsigned data types in OpProto, neither for uint16, uint32, nor uint64.
  https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/framework/framework.proto#L97
- the uint16 in this PR is only used for float16 in op_test; it seems a little confusing, but that is the fact.
  https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/fluid/tests/unittests/op_test.py#L473

The person who did the fp16 work explained the reason: pybind does not have built-in float16 support, so he chose INT16 to allocate the same amount of memory.
Considering how messy our datatypes are on the Python side, they definitely need to be cleaned up.
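The "same size memory" point can be illustrated with plain NumPy: `float16` and `uint16` both occupy 2 bytes, so fp16 data can be carried through a binding that lacks float16 support by reinterpreting the raw bits as uint16. This is a general workaround sketch, not Paddle-specific API.

```python
import numpy as np

# float16 and uint16 share the same 2-byte item size, so reinterpreting
# the buffer (no copy, no conversion) preserves the fp16 bit patterns.
x = np.array([1.0, 2.5, -0.125], dtype=np.float16)
bits = x.view(np.uint16)          # same memory, viewed as uint16
roundtrip = bits.view(np.float16) # view back: bits are unchanged
assert x.itemsize == bits.itemsize == 2
assert np.array_equal(x, roundtrip)
```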
```python
    Block. Users can use the build in instructions to describe their neural
    network.
    """
    OP_WITHOUT_KERNEL_SET = {
```
This set is sad... Can you file an issue and assign it to me to clean this up? So that I don't forget.
```python
def customize_fetch_list(self):
    """
    customize fetch list, configure the wanted variables.
    >>> self.fetch_list = ["Out"]
```
The outputs of the operator are automatically inserted into the fetch list; the same applies here.
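A subclass overriding `customize_fetch_list` might then look like the sketch below. Only the method name comes from the quoted snippet; the base class, the `MulOpBenchmark` subclass, and the `"Intermediate"` variable are hypothetical illustrations.

```python
# Hypothetical sketch of an op_test-style suite; not Paddle's actual API.
class OpTestBase:
    def __init__(self):
        # By default the operator's outputs are fetched automatically.
        self.fetch_list = ["Out"]

    def customize_fetch_list(self):
        """Override in subclasses to configure the wanted variables."""
        pass

class MulOpBenchmark(OpTestBase):
    def customize_fetch_list(self):
        # Additionally fetch an intermediate variable for stability checks.
        self.fetch_list = ["Out", "Intermediate"]

t = MulOpBenchmark()
t.customize_fetch_list()
```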
