In most LLM inference applications, the token generation loop needs to know the EOS token id to decide when to stop. The tokenizer already has this information, but it does not expose any API that callers can use to retrieve it.
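To make the dependency concrete, here is a minimal, self-contained sketch of such a loop. The `SampleNextToken` stub stands in for the model forward pass and sampling and is purely illustrative, not an ORTX or GenAI call; the point is that the loop has no reliable stop condition unless the caller already knows `eos_token_id`.

```cpp
#include <cstdint>
#include <vector>

// Stub standing in for the model forward pass + sampling (illustration only).
int64_t SampleNextToken(const std::vector<int64_t>& tokens) {
  // Pretend the model emits token 2 (EOS in this toy setup) after a few steps.
  return tokens.size() >= 8 ? 2 : 100 + static_cast<int64_t>(tokens.size());
}

// Generation loop: without eos_token_id there is no correct way to break out early.
std::vector<int64_t> Generate(std::vector<int64_t> tokens, int64_t eos_token_id,
                              size_t max_new_tokens) {
  for (size_t i = 0; i < max_new_tokens; ++i) {
    const int64_t next = SampleNextToken(tokens);
    tokens.push_back(next);
    if (next == eos_token_id) break;  // this id has to come from the tokenizer's metadata
  }
  return tokens;
}
```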
As a result, applications/callers either hardcode the EOS/BOS ids or have to fall back on other metadata (e.g., the ONNX Runtime GenAI SDK has to duplicate this information in its GenAI config file).
It would be great if such an API could be exposed by the ORTX Tokenizer itself.
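For discussion, a rough sketch of what such an accessor could look like, in the spirit of the existing C-style API. Everything here is hypothetical: the function name `OrtxGetSpecialTokenIds`, its signature, and the placeholder `OrtxTokenizer`, `extTokenId_t`, and `extError_t` definitions are assumptions made only so the example compiles on its own, not the current onnxruntime-extensions API.

```cpp
#include <cstdint>
#include <cstdio>

// Placeholder types standing in for the real handle/error types (assumptions).
using extTokenId_t = uint32_t;
enum extError_t { kOrtxOK = 0, kOrtxErrorInvalidArgument = 1 };

// Minimal stand-in for the tokenizer object; the real tokenizer already holds these ids.
struct OrtxTokenizer {
  extTokenId_t bos_token_id;
  extTokenId_t eos_token_id;
};

// Proposed-style accessor: expose the special token ids directly so callers do not
// have to hardcode them or duplicate them in a separate config file.
extError_t OrtxGetSpecialTokenIds(const OrtxTokenizer* tokenizer,
                                  extTokenId_t* bos_token_id,
                                  extTokenId_t* eos_token_id) {
  if (tokenizer == nullptr || bos_token_id == nullptr || eos_token_id == nullptr) {
    return kOrtxErrorInvalidArgument;
  }
  *bos_token_id = tokenizer->bos_token_id;
  *eos_token_id = tokenizer->eos_token_id;
  return kOrtxOK;
}

int main() {
  OrtxTokenizer tok{1 /*bos*/, 2 /*eos*/};
  extTokenId_t bos = 0, eos = 0;
  if (OrtxGetSpecialTokenIds(&tok, &bos, &eos) == kOrtxOK) {
    std::printf("bos=%u eos=%u\n", bos, eos);  // eos feeds the generation loop's stop check
  }
  return 0;
}
```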