Feat/adding directory and file prompt loading in genkit #3962
Conversation
Summary of Changes

Hello @MengqinShen, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request significantly expands Genkit's prompt-management capabilities by introducing a system for loading prompts directly from file directories. The new feature supports structured prompt organization with variants and reusable partials, improving modularity and developer workflow. It also refines the internal prompt registration and lookup mechanisms so that programmatically defined and file-based prompts integrate seamlessly, and it updates core dependencies for broader Python version compatibility.
New changes were accidentally pushed to an old branch; closing this PR.
Code Review
This pull request introduces a significant new feature for loading prompts from directories and files, including support for lazy loading. The implementation is quite extensive, particularly in py/packages/genkit/src/genkit/blocks/prompt.py. While the overall approach is sound, I've identified a critical bug in the handling of weak references that will cause runtime errors. Additionally, there are some inconsistencies in the caching logic that could lead to memory leaks. I've also found issues in the new tests, including one that appears to be incorrect and several others with weak assertions that could be improved. My review includes suggestions to fix these issues.
```python
if hasattr(action, '_executable_prompt') and action._executable_prompt is not None:
    return action._executable_prompt
```
The _executable_prompt attribute on the action is a weakref.ref object. You are returning the weak reference itself instead of the referenced ExecutablePrompt object. You need to call the weak reference to get the object it points to. You should also handle the case where the object has been garbage collected and the weak reference returns None.
Suggested change:

```python
if hasattr(action, '_executable_prompt') and action._executable_prompt is not None:
    prompt = action._executable_prompt()
    if prompt:
        return prompt
```
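The weakref semantics behind this fix can be sketched with the standard library alone; `ExecutablePromptStub` below is a hypothetical stand-in for the real `ExecutablePrompt`:

```python
import gc
import weakref


class ExecutablePromptStub:
    """Hypothetical stand-in for genkit's ExecutablePrompt."""


prompt = ExecutablePromptStub()
ref = weakref.ref(prompt)

# While the target is alive, calling the ref returns the object itself.
assert ref() is prompt

# Once the last strong reference is gone, calling the ref returns None.
# This is the case the lookup code must handle explicitly.
del prompt
gc.collect()
assert ref() is None
```

This is why returning `action._executable_prompt` unchanged hands callers a `weakref.ref`, not the prompt: the ref must be called, and the `None` result checked.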
```python
if not hasattr(action, '_executable_prompt') or action._executable_prompt is None:
    action._executable_prompt = executable_prompt
```
There's an inconsistency in how the _executable_prompt is cached. The create_prompt_from_file function (which is what _async_factory calls) correctly uses a weakref to avoid circular dependencies. However, here you are caching the ExecutablePrompt instance directly, not as a weak reference. This can lead to memory leaks and inconsistent behavior. You should consistently use weakref.ref() for caching.
Suggested change:

```python
if not hasattr(action, '_executable_prompt') or action._executable_prompt() is None:
    action._executable_prompt = weakref.ref(executable_prompt)
```
```python
if not hasattr(action, '_executable_prompt') or action._executable_prompt is None:
    action._executable_prompt = executable_prompt
```
Similar to the block above, you are caching the ExecutablePrompt instance directly instead of using a weakref. This is inconsistent with how create_prompt_from_file caches the prompt and can lead to memory leaks. Please use weakref.ref() consistently.
Suggested change:

```python
if not hasattr(action, '_executable_prompt') or action._executable_prompt() is None:
    action._executable_prompt = weakref.ref(executable_prompt)
```
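The full cache-or-rebuild path this convention implies can be sketched with the standard library only; `Action`, `ExecutablePromptStub`, and `get_or_build` are hypothetical names for illustration, not the genkit API:

```python
import weakref


class Action:
    """Hypothetical stand-in for a genkit prompt action."""


class ExecutablePromptStub:
    """Hypothetical stand-in for ExecutablePrompt."""


def get_or_build(action, build):
    """Return the cached prompt, rebuilding the weakref cache when stale."""
    cached = getattr(action, '_executable_prompt', None)
    if cached is not None:
        prompt = cached()          # dereference the weak reference
        if prompt is not None:
            return prompt          # target still alive: cache hit
    # Never cached, or the target was garbage collected: rebuild and
    # store only a weak reference so the cache itself cannot leak.
    prompt = build()
    action._executable_prompt = weakref.ref(prompt)
    return prompt


action = Action()
first = get_or_build(action, ExecutablePromptStub)
second = get_or_build(action, ExecutablePromptStub)
assert first is second             # cache hit while the object is alive
```

Storing the strong reference directly, as the original code does, defeats the point of the weakref used elsewhere: the cache would keep the prompt alive forever.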
```python
async def test_messages_with_explicit_override() -> None:
    """Test that explicit messages in render options are included."""
    ai, *_ = setup_test()

    my_prompt = ai.define_prompt(
        prompt='Final question',
    )

    override_messages = [
        Message(role=Role.USER, content=[TextPart(text='First message')]),
        Message(role=Role.MODEL, content=[TextPart(text='First response')]),
    ]

    # The override messages should be prepended to the prompt
    rendered = await my_prompt.render(input=None, config=None)

    # Check that we have the final prompt message
    assert any('Final question' in str(msg) for msg in rendered.messages)
```
This test seems to be incorrect or incomplete. The override_messages variable is defined but never used. The test name suggests it's testing an override, but no override is performed as the render method is called without passing these messages.
Please either fix the test to perform the override it's meant to test, or remove the unused variable and rename the test to reflect what it actually does.
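For illustration only, the behavior the test name implies could be checked against a self-contained stub like the following; none of these names are the real genkit API, and the actual fix should pass `override_messages` through whatever parameter `render` actually exposes:

```python
from dataclasses import dataclass


@dataclass
class Msg:
    """Minimal stand-in for a chat message (hypothetical)."""
    role: str
    text: str


def render_with_override(prompt_text, override_messages):
    """Stub: prepend caller-supplied messages to the prompt's own message."""
    return list(override_messages) + [Msg(role='user', text=prompt_text)]


override = [
    Msg(role='user', text='First message'),
    Msg(role='model', text='First response'),
]
rendered = render_with_override('Final question', override)

assert rendered[0].text == 'First message'    # overrides come first
assert rendered[-1].text == 'Final question'  # prompt message comes last
```

A test shaped like this would actually fail if the override were silently dropped, which is the property the current test never exercises.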
```python
Args:
    registry: The registry to look up the prompt from.
    name: The name of the prompt.
    variant: Optional variant name.
    dir: Optional directory parameter (accepted for compatibility but not used).
```
The docstring for the prompt method incorrectly lists registry and dir as arguments. These are not part of the method's signature and can be confusing for users of the library.
Suggested change:

```python
Args:
    name: The name of the prompt.
    variant: Optional variant name.
```
```python
executable_prompt._prompt_action = prompt_action
# Also store ExecutablePrompt reference on the action
# prompt_action._executable_prompt = executable_prompt
prompt_action._executable_prompt = weakref.ref(executable_prompt)
```
Using a weakref here is a good practice to prevent circular dependencies and potential memory leaks. However, the corresponding lookup logic in lookup_prompt seems to handle this incorrectly, which I've commented on separately. It's great that you're thinking about memory management with these complex object relationships.
```python
# Test variant prompt
casual_exec = await prompt(ai.registry, 'greeting', variant='casual')
casual_response = await casual_exec({'name': 'Bob'})
assert 'Hey' in casual_response.text or "what's up" in casual_response.text.lower()
```
The assertion here uses or, which makes the test weaker. Given that you're using an echoModel, the output should be deterministic. You should use and to ensure both parts of the expected string are present in the response. This applies to several other new tests in this file as well (e.g., lines 320, 341, 407-408).
Suggested change:

```python
assert 'Hey' in casual_response.text and "what's up" in casual_response.text.lower()
```