cam1llynha/trl#1 (Open)
Labels: ❓ question (seeking clarification or more information), 📚 documentation (improvements or additions to documentation)
Description
In the example scripts, there is a final `trainer.evaluate()` that performs one last evaluation; its eval metrics are then saved to a file via calls like `trainer.save_metrics` (https://github.com/huggingface/trl/blob/d4222a1e08def2be56572eb2973ef3bf50143a4f/trl/scripts/dpo.py#L128C1-L131C46):
```python
if training_args.eval_strategy != "no":
    metrics = trainer.evaluate()
    trainer.log_metrics("eval", metrics)
    trainer.save_metrics("eval", metrics)
```
Is there a way to also save these metrics to files during the regular evaluations the trainer performs throughout training? I couldn't find anything in `TrainingArguments`.
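For context, here is a minimal sketch of the kind of workaround I'm imagining: a custom `TrainerCallback` whose `on_evaluate` hook appends each intermediate evaluation's metrics to a file. The callback name and the `eval_metrics.jsonl` output path are hypothetical choices of mine, not anything provided by trl:

```python
import json
import os

from transformers import TrainerCallback


class SaveEvalMetricsCallback(TrainerCallback):
    """Hypothetical callback: append each intermediate eval's metrics to a JSONL file."""

    def on_evaluate(self, args, state, control, metrics=None, **kwargs):
        # `metrics` is the dict the Trainer just computed for this evaluation.
        if metrics is None:
            return
        path = os.path.join(args.output_dir, "eval_metrics.jsonl")  # assumed location
        with open(path, "a") as f:
            f.write(json.dumps({"step": state.global_step, **metrics}) + "\n")


# Usage (sketch):
# trainer.add_callback(SaveEvalMetricsCallback())
```

Is there a built-in equivalent, or is a callback like this the intended approach?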
Also, are the metrics logged to W&B during these intermediate evaluations accumulated over the whole eval dataset, or do they refer to a single eval batch?
Thanks