
make benchmark really work #11215

Merged
gongweibao merged 4 commits into PaddlePaddle:develop from panyx0718:benchmark
Jun 7, 2018

Conversation

@panyx0718 (Contributor)

Users complained that they hit crashes when following our doc.

@panyx0718 panyx0718 requested a review from typhoonzero June 6, 2018 02:42
@panyx0718 panyx0718 requested a review from jacquesqiao June 6, 2018 02:45
@gongweibao (Contributor) left a comment

If we have run_fluid_benchmark.sh, do we need the run step in README.md?

You can choose GPU or CPU training. With GPU training, you can specify
`--gpus <gpu_num>` to run multi-GPU training.
* Run distributed training with parameter servers:
* see run_fluid_benchmark.sh as an example.
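For concreteness, a single-machine multi-GPU invocation following the flags described above might look like this (a sketch only; the model name and GPU count are illustrative placeholders, not taken from the diff):

```bash
# Hypothetical example: train the mnist model on 2 GPUs on one machine.
# --gpus selects the number of GPUs, as the README excerpt describes.
python fluid_benchmark.py --model mnist --device GPU --gpus 2
```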
Contributor

Need a link here.

Contributor Author

Not sure this is needed. Links can break, and the file is right in this folder.

@gongweibao (Contributor) Jun 6, 2018

[run_fluid_benchmark.sh](./run_fluid_benchmark.sh)

* see run_fluid_benchmark.sh as an example.
* start parameter servers:
```bash
PADDLE_TRAINING_ROLE=PSERVER PADDLE_PSERVER_PORT=7164 PADDLE_PSERVER_IPS=127.0.0.1 PADDLE_TRAINERS=1 PADDLE_CURRENT_IP=127.0.0.1 PADDLE_TRAINER_ID=0 python fluid_benchmark.py --model mnist --device GPU --update_method pserver
```
Contributor

pserver runs on CPU

Contributor Author

Done
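Presumably the fix lands as something like the following, with the pserver's `--device` switched to CPU per the review; this is a sketch of the README command after the change, not the exact committed line:

```bash
# Parameter server process runs on CPU (per the review comment);
# trainer processes would keep --device GPU.
PADDLE_TRAINING_ROLE=PSERVER PADDLE_PSERVER_PORT=7164 PADDLE_PSERVER_IPS=127.0.0.1 \
PADDLE_TRAINERS=1 PADDLE_CURRENT_IP=127.0.0.1 PADDLE_TRAINER_ID=0 \
python fluid_benchmark.py --model mnist --device CPU --update_method pserver
```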

@@ -0,0 +1,10 @@
#!/bin/bash

PADDLE_TRAINING_ROLE=PSERVER PADDLE_PSERVER_PORT=7164 PADDLE_PSERVER_IPS=127.0.0.1 PADDLE_TRAINERS=2 PADDLE_CURRENT_IP=127.0.0.1 PADDLE_TRAINER_ID=0 python fluid_benchmark.py --model resnet --device GPU --update_method pserver --iterations=10000 &
Contributor

It seems this command would print all logs to the terminal; we can start them up as follows:

PADDLE_TRAINING_ROLE=PSERVER ... stdbuf -oL nohup python fluid_benchmark.py <args> > server.log 2>&1 &

Then users can check the logs in the server.log file.
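The redirect order matters here: `> server.log 2>&1` sends both streams to the file, whereas `2>&1 > server.log` leaves stderr on the terminal. A minimal demonstration with a stand-in command (`my_job` is a hypothetical placeholder for the long-running `python fluid_benchmark.py` invocation):

```bash
# my_job stands in for the real benchmark process, emitting output on
# both stdout and stderr.
my_job() {
  echo "training step ok"          # progress on stdout
  echo "warning: slow reader" >&2  # diagnostics on stderr
}

# Redirect stdout to the log first, then duplicate stderr onto it,
# so both streams land in server.log.
my_job > server.log 2>&1

cat server.log  # both lines end up in the file
```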

Contributor Author

I think it's fine to print out some logs to give users feedback. There isn't much output:

Pass 0, batch 162, loss [2.7855887 2.973915 ]
Pass 0, batch 162, loss [3.0754983 3.2426462]
Pass 0, batch 171, loss [3.4701207 4.438573 ]
Pass 0, batch 171, loss [3.7791452 3.3191109]
typhoonzero
typhoonzero previously approved these changes Jun 6, 2018
gongweibao
gongweibao previously approved these changes Jun 6, 2018
@gongweibao (Contributor) left a comment

LGTM

@gongweibao gongweibao merged commit 53a509d into PaddlePaddle:develop Jun 7, 2018
