20 changes: 10 additions & 10 deletions docs/dotprompt.md
@@ -155,31 +155,31 @@ You can set the format and output schema of a prompt to coerce into JSON:
model: vertexai/gemini-1.0-pro
input:
schema:
location: string
theme: string
output:
format: json
schema:
name: string
hitPoints: integer
description: string
price: integer
ingredients(array): string
---

Generate a tabletop RPG character that would be found in {{location}}.
Generate a menu item that could be found at a {{theme}} themed restaurant.
```

When generating from a prompt with structured output, use the `output()` helper to
retrieve and validate it:

```ts
const characterPrompt = await prompt('create_character');
const createMenuPrompt = await prompt('create_menu');

const character = await characterPrompt.generate({
const menu = await createMenuPrompt.generate({
input: {
location: 'the beach',
theme: 'banana',
},
});

console.log(character.output());
console.log(menu.output());
```

## Multi-message prompts
@@ -199,8 +199,8 @@ input:
---

{{role "system"}}
You are a helpful AI assistant that really loves to talk about puppies. Try to work puppies
into all of your conversations.
You are a helpful AI assistant that really loves to talk about food. Try to work
food items into all of your conversations.
{{role "user"}}
{{userQuestion}}
```
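For illustration, here is a minimal sketch of how a multi-message prompt like the one above might be invoked. The file name `food_assistant.prompt`, the import path, and the `text()` accessor are assumptions, while `prompt()` and `generate()` follow the same pattern as the `create_menu` example earlier in this file:

```ts
import { prompt } from '@genkit-ai/dotprompt';

// Assumed file name: prompts/food_assistant.prompt, containing the
// system and user messages shown above.
const foodAssistantPrompt = await prompt('food_assistant');

// The system message is baked into the prompt file, so only the
// user's question needs to be supplied as input.
const response = await foodAssistantPrompt.generate({
  input: {
    userQuestion: 'What should I have for dinner tonight?',
  },
});

// text() is assumed here as the accessor for the plain-text response.
console.log(response.text());
```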
12 changes: 8 additions & 4 deletions docs/evaluation.md
@@ -34,14 +34,18 @@ Note: The configuration above requires installing the `@genkit-ai/evaluator` and
Start by defining a set of inputs that you want to use as an input dataset called `testQuestions.json`. This input dataset represents the test cases you will use to generate output for evaluation.

```json
["How old is Bob?", "Where does Bob lives?", "Does Bob have any friends?"]
[
"What is on the menu?",
"Does the restaurant have clams?",
"What is the special of the day?"
]
```

You can then use the `eval:flow` command to evaluate your flow against the test
cases provided in `testQuestions.json`.

```posix-terminal
genkit eval:flow bobQA --input testQuestions.json
genkit eval:flow menuQA --input testQuestions.json
```

You can then see evaluation results in the Developer UI by running:
Expand All @@ -55,7 +59,7 @@ Then navigate to `localhost:4000/evaluate`.
Alternatively, you can provide an output file to inspect the results in a JSON file.

```posix-terminal
genkit eval:flow bobQA --input testQuestions.json --output eval-result.json
genkit eval:flow menuQA --input testQuestions.json --output eval-result.json
```

Member comment on the line above: There is another reference to bobQA much further down below, under "Running on existing datasets"

Note: Below you can see an example of how an LLM can help you generate the test
@@ -167,7 +171,7 @@ genkit eval:run customLabel_dataset.json
To output to a different location, use the `--output` flag.

```posix-terminal
genkit eval:flow bobQA --input testQuestions.json --output customLabel_evalresult.json
genkit eval:flow menuQA --input testQuestions.json --output customLabel_evalresult.json
```

To run on a subset of the configured evaluators, use the `--evaluators` flag and provide a comma-separated list of evaluators by name:
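
For example (a sketch only; the evaluator names shown are placeholders and depend on which evaluator plugins you have configured):

```posix-terminal
genkit eval:flow menuQA --input testQuestions.json --evaluators=genkitEval/faithfulness,genkitEval/answer_relevancy
```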