Make transformer lazy import #292
Conversation
Review by Korbit AI
Korbit automatically attempts to detect when you fix issues in new commits.
| Issue | Status |
|---|---|
| Missing Upper Bound Temperature Validation | |
| Inconsistent temperature default values | |
Files scanned
| File Path | Reviewed |
|---|---|
| src/agentlab/llm/huggingface_utils.py | ✅ |
| src/agentlab/llm/chat_api.py | ✅ |
| src/agentlab/llm/llm_utils.py | ✅ |
```python
self.temperature = temperature
```

```python
if token is None:
    # support both env var names used elsewhere
    token = os.environ.get("TGI_TOKEN") or os.environ.get("AGENTLAB_MODEL_TOKEN")
```
Missing Upper Bound Temperature Validation 
What is the issue?
The temperature parameter is set but never validated for the upper bound, while the lower bound is checked for < 1e-3.
Why this matters
High temperature values (> 1.0) can lead to extremely random outputs, potentially making the model's responses unusable.
Suggested change
Add an upper bound check for temperature:

```python
if temperature is not None:
    if temperature < 1e-3:
        logging.warning("Models might behave weirdly when temperature is too low.")
    if temperature > 2.0:
        logging.warning("High temperature values (>2.0) may result in extremely random outputs.")
self.temperature = temperature
```
```python
temperature = (
    temperature
    if temperature is not None
    else getattr(self, "temperature", 0.1)
)
```
Inconsistent temperature default values 
What is the issue?
The temperature default value of 0.1 is hardcoded in multiple places - in the getattr() call and in the class initialization (1e-1).
Why this matters
Having the same magic number in different formats (0.1 vs 1e-1) in multiple places makes it harder to maintain consistent temperature defaults and increases the risk of inconsistency when updating values.
Suggested change
Define a class-level constant `DEFAULT_TEMPERATURE = 0.1` and use it consistently throughout the code:

```python
DEFAULT_TEMPERATURE = 0.1

def __init__(..., temperature: Optional[float] = DEFAULT_TEMPERATURE, ...):
    ...

def __call__(...):
    temperature = temperature if temperature is not None else getattr(self, "temperature", DEFAULT_TEMPERATURE)
```
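For illustration, here is a runnable sketch of this constant-based approach; the class name, method shapes, and fallback order are assumptions for the example, not the PR's actual code:

```python
from typing import Optional

class ChatModel:
    """Illustrative stand-in for the chat model class in chat_api.py."""

    DEFAULT_TEMPERATURE = 0.1  # single source of truth for the default

    def __init__(self, temperature: Optional[float] = None):
        # None means "use the shared class default" rather than
        # hardcoding 0.1 (or 1e-1) again here.
        self.temperature = (
            temperature if temperature is not None else self.DEFAULT_TEMPERATURE
        )

    def __call__(self, temperature: Optional[float] = None) -> float:
        # A per-call override wins; otherwise fall back to the instance value,
        # and finally to the shared class-level constant.
        return (
            temperature
            if temperature is not None
            else getattr(self, "temperature", self.DEFAULT_TEMPERATURE)
        )
```

Because every fallback routes through the one constant, a future change to the default touches a single line.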
optimass left a comment
LGTM
Make transformer import lazy
This pull request refactors the way HuggingFace and Transformers dependencies are handled throughout the codebase, making these imports lazy and optional. This change reduces unnecessary import overhead and avoids requiring users to install heavy dependencies unless they are actually needed (for example, when using HuggingFace backends). The `HuggingFaceURLChatModel` class is moved and restructured to further support this lazy-loading strategy.

Dependency management and lazy imports:

All imports of `transformers` and `huggingface_hub` are now performed lazily and only when required, with clear error messages if the dependencies are missing. This affects functions and classes in `chat_api.py`, `huggingface_utils.py`, and `llm_utils.py`. [1] [2] [3] [4] [5] [6] [7]

The `HuggingFaceURLChatModel` class is moved from `chat_api.py` to `huggingface_utils.py` and is now only imported when explicitly needed, either via a lazy import in code or through a module-level `__getattr__` hook. This keeps the main API lightweight for non-HuggingFace use cases. [1] [2] [3]

HuggingFace backend improvements:
General code quality improvements:
These changes help make the codebase more modular, lightweight, and user-friendly for those not using HuggingFace or Transformers-based models.
Description by Korbit AI
What change is being made?
Make the import of Hugging Face and transformers libraries lazy to reduce startup overhead and improve performance when they are not needed.
Why are these changes being made?
The changes optimize the code by avoiding the import of heavy dependencies unless they are specifically required, thus minimizing resource usage and potential delays during the startup of systems that do not need these libraries. This approach also provides clearer error handling for missing packages when users attempt to use the optional functionalities that require these libraries.