Conversation

@lorenzejay
Collaborator

  • Introduced LLMCallBlockedError to manage blocked LLM calls from before_llm_call hooks.
  • Updated LLM class to raise LLMCallBlockedError instead of returning a boolean.
  • Enhanced Agent class to emit events and handle LLMCallBlockedError during task execution.
  • Added error handling in CrewAgentExecutor and agent utilities to gracefully manage blocked calls.
  • Updated tests to verify behavior when LLM calls are blocked.
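A minimal sketch of the raise-instead-of-return pattern the bullets above describe. Only the `LLMCallBlockedError` name comes from this PR; the hook runner, its signature, and the `deny_all` hook below are illustrative assumptions, not crewai's actual API.

```python
class LLMCallBlockedError(Exception):
    """Raised when a before_llm_call hook blocks the call (name from this PR)."""

    def __init__(self, message: str = "LLM call blocked by before_llm_call hook"):
        super().__init__(message)


def run_before_hooks(hooks, context) -> None:
    # Hypothetical runner: instead of returning False (easy for callers to
    # silently ignore), raise so every caller must handle the blocked call.
    for hook in hooks:
        if hook(context) is False:
            raise LLMCallBlockedError(f"Blocked by hook {hook.__name__!r}")


def deny_all(context) -> bool:
    # Example hook that blocks every LLM call.
    return False


try:
    run_before_hooks([deny_all], context={})
except LLMCallBlockedError as exc:
    print(exc)  # Blocked by hook 'deny_all'
```

Raising an exception also lets intermediate layers (executor, agent utilities) add handling without threading a boolean through every return path.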
[Screenshot: 2025-12-21 at 9:44:26 PM]
…LM call handling

- Updated Agent class to emit TaskFailedEvent instead of AgentExecutionErrorEvent when LLM calls are blocked.
- Removed unnecessary LLMCallBlockedError handling from CrewAgentExecutor.
- Enhanced test cases to ensure proper exception handling for blocked LLM calls.
- Improved code clarity and consistency in event handling across agent execution.
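The commit's event-handling change can be sketched as follows. `TaskFailedEvent`, the event list, and the `emit` helper are simplified stand-ins for crewai's event bus, shown only to illustrate catching the blocked-call error at the agent layer and reporting it as a task failure rather than an agent-execution error.

```python
from dataclasses import dataclass


class LLMCallBlockedError(Exception):
    """Stand-in for the error introduced in this PR."""


@dataclass
class TaskFailedEvent:
    reason: str


events = []  # simplified stand-in for crewai's event bus


def emit(event) -> None:
    events.append(event)


def execute_task(call_llm):
    try:
        return call_llm()
    except LLMCallBlockedError as exc:
        # Per the commit: emit a task-failure event (not an
        # agent-execution error) and re-raise so callers still
        # observe the blocked call.
        emit(TaskFailedEvent(reason=str(exc)))
        raise


def blocked_call():
    raise LLMCallBlockedError("blocked by before_llm_call hook")


try:
    execute_task(blocked_call)
except LLMCallBlockedError:
    pass
print(len(events))  # 1
```

Re-raising after emitting keeps a single source of truth for the failure while still producing the event.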
```python
get_llm_response,
handle_agent_action_core,
handle_context_length,
handle_llm_call_blocked_error,
```
Contributor


Did you want to utilize this?

```python
... if response.lower() == "no":
...     print("LLM call skipped by user")
"""
```
```python
# from crewai.events.event_listener import event_listener
```
Contributor


Wondering if we can remove this commented-out import.
