
Commit 026e9b5

feat: added new notebooks of agents fast start
1 parent 2669c44 commit 026e9b5


3 files changed: +768 -97 lines changed

Lines changed: 99 additions & 97 deletions
@@ -1,47 +1,36 @@
 {
-  "nbformat": 4,
-  "nbformat_minor": 0,
-  "metadata": {
-    "colab": {
-      "provenance": []
-    },
-    "kernelspec": {
-      "name": "python3",
-      "display_name": "Python 3"
-    },
-    "language_info": {
-      "name": "python"
-    }
-  },
   "cells": [
     {
       "cell_type": "markdown",
+      "metadata": {
+        "id": "StkY5oHGU-iN"
+      },
       "source": [
         "# LLMWare Model Exploration\n",
         "\n",
         "## This is the 'entrypoint' example that provides a general introduction of llmware models.\n",
         "\n",
         "This notebook provides an introduction to LLMWare Agentic AI models and demonstrates their usage."
-      ],
-      "metadata": {
-        "id": "StkY5oHGU-iN"
-      }
+      ]
     },
     {
       "cell_type": "code",
-      "source": [
-        "# install dependencies\n",
-        "!pip3 install llmware"
-      ],
+      "execution_count": null,
       "metadata": {
         "collapsed": true,
         "id": "KyaEnPzOVTJe"
       },
-      "execution_count": null,
-      "outputs": []
+      "outputs": [],
+      "source": [
+        "# install dependencies\n",
+        "!pip3 install llmware"
+      ]
     },
     {
       "cell_type": "markdown",
+      "metadata": {
+        "id": "mcOxXgs1XTjD"
+      },
       "source": [
         "If you have any dependency install issues, please review the README, docs link, or raise an Issue.\n",
         "\n",
@@ -58,55 +47,58 @@
         "The second script `\"welcome_to_llmware.sh\"` will install all of the dependencies.\n",
         "\n",
         "If using Windows, then use the `\"welcome_to_llmware_windows.sh\"` script."
-      ],
-      "metadata": {
-        "id": "mcOxXgs1XTjD"
-      }
+      ]
     },
     {
       "cell_type": "code",
-      "source": [
-        "# Import Library\n",
-        "from llmware.models import ModelCatalog"
-      ],
+      "execution_count": null,
       "metadata": {
         "id": "n4aKjcEiVjYE"
       },
-      "execution_count": null,
-      "outputs": []
+      "outputs": [],
+      "source": [
+        "# Import Library\n",
+        "from llmware.models import ModelCatalog"
+      ]
     },
     {
       "cell_type": "markdown",
+      "metadata": {
+        "id": "ePtRGBIlZEkP"
+      },
       "source": [
         "## GETTING STARTED WITH AGENTIC AI\n",
         "All LLMWare models are accessible through the ModelCatalog generally consisting of two steps to access any model\n",
         "\n",
         "- Step 1 - load the model - pulls from global repo the first time, and then automatically caches locally\n",
         "- Step 2 - use the model with inference or function call"
-      ],
-      "metadata": {
-        "id": "ePtRGBIlZEkP"
-      }
+      ]
     },
     {
       "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "collapsed": true,
+        "id": "D0xL5WOgVlGX"
+      },
+      "outputs": [],
       "source": [
         "# 'Standard' Models use 'inference' and take a general text input and provide a general text output\n",
         "\n",
         "model = ModelCatalog().load_model(\"bling-answer-tool\")\n",
         "response = model.inference(\"My son is 21 years old.\\nHow old is my son?\")\n",
         "\n",
         "print(\"\\nresponse: \", response)"
-      ],
-      "metadata": {
-        "collapsed": true,
-        "id": "D0xL5WOgVlGX"
-      },
-      "execution_count": null,
-      "outputs": []
+      ]
     },
     {
       "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "collapsed": true,
+        "id": "1AkSZ3Z_VqWt"
+      },
+      "outputs": [],
       "source": [
         "# Optional parameters can improve results\n",
         "model = ModelCatalog().load_model(\"bling-phi-3-gguf\", temperature=0.0,sample=False, max_output=200)\n",
@@ -120,16 +112,13 @@
         "response = model.inference(prompt,add_context=text_passage)\n",
         "\n",
         "print(\"\\nresponse: \", response)"
-      ],
-      "metadata": {
-        "collapsed": true,
-        "id": "1AkSZ3Z_VqWt"
-      },
-      "execution_count": null,
-      "outputs": []
+      ]
     },
     {
       "cell_type": "markdown",
+      "metadata": {
+        "id": "OuNEktB-aPVw"
+      },
       "source": [
         "## Models we have and support\n",
         "Inference models can also be integrated into Prompts - which provide advanced handling for integrating with knowledge retrieval, managing source information, and providing fact-checking\n",
@@ -140,13 +129,16 @@
         "- we do **include other popular models** such as `phi-3`, `qwen-2`, `yi`, `llama-3`, `mistral`\n",
         "- it is easy to extend the model catalog to **include other 3rd party models**, including `ollama` and `lm studio`.\n",
         "- we do **support** `open ai`, `anthropic`, `cohere` and `google api` models as well."
-      ],
-      "metadata": {
-        "id": "OuNEktB-aPVw"
-      }
+      ]
     },
     {
       "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "collapsed": true,
+        "id": "erEHenbjaYqi"
+      },
+      "outputs": [],
       "source": [
         "all_generative_models = ModelCatalog().list_generative_local_models()\n",
         "print(\"\\n\\nModel Catalog - load model with ModelCatalog().load_model(model_name)\")\n",
@@ -156,28 +148,28 @@
         " model_family = model[\"model_family\"]\n",
         "\n",
         " print(\"model: \", i, model)"
-      ],
-      "metadata": {
-        "collapsed": true,
-        "id": "erEHenbjaYqi"
-      },
-      "execution_count": null,
-      "outputs": []
+      ]
     },
     {
       "cell_type": "markdown",
+      "metadata": {
+        "id": "tLCuxZcYdTHn"
+      },
       "source": [
         "## Slim Models\n",
         "Slim models are 'Function Calling' Models that perform a specialized task and output python dictionaries\n",
         "- by design, slim models are specialists that **perform single function**.\n",
         "- by design, slim models generally **do not require any specific** `'prompt instructions'`, but will often accept a `\"parameter\"` which is passed to the function."
-      ],
-      "metadata": {
-        "id": "tLCuxZcYdTHn"
-      }
+      ]
     },
     {
       "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "collapsed": true,
+        "id": "1ZS2wo8zdDOd"
+      },
+      "outputs": [],
       "source": [
         "model = ModelCatalog().load_model(\"slim-sentiment-tool\")\n",
         "response = model.function_call(\"That was the worst earnings call ever - what a disaster.\")\n",
@@ -186,16 +178,15 @@
         "print(\"\\nresponse: \", response)\n",
         "print(\"llm_response: \", response['llm_response'])\n",
         "print(\"sentiment: \", response['llm_response']['sentiment'])"
-      ],
-      "metadata": {
-        "collapsed": true,
-        "id": "1ZS2wo8zdDOd"
-      },
-      "execution_count": null,
-      "outputs": []
+      ]
     },
     {
       "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "Ohp-shGjkDkz"
+      },
+      "outputs": [],
       "source": [
         "# here is one of the slim model applied against a common earnings extract\n",
         "\n",
@@ -212,15 +203,15 @@
         "response = model.function_call(text_passage,function=\"extract\",params=[\"revenue\"])\n",
         "\n",
         "print(\"\\nextract response: \", response)"
-      ],
-      "metadata": {
-        "id": "Ohp-shGjkDkz"
-      },
-      "execution_count": null,
-      "outputs": []
+      ]
     },
     {
       "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "w13LLW_OdxCm"
+      },
+      "outputs": [],
       "source": [
         "# Function calling models generally come with a test set that is a great way to learn how they work\n",
         "# please note that each test can take a few minutes with 20-40 test questions\n",
@@ -232,39 +223,50 @@
         "ModelCatalog().tool_test_run(\"slim-summary-tool\")\n",
         "ModelCatalog().tool_test_run(\"slim-xsum-tool\")\n",
         "ModelCatalog().tool_test_run(\"slim-boolean-tool\")"
-      ],
-      "metadata": {
-        "id": "w13LLW_OdxCm"
-      },
-      "execution_count": null,
-      "outputs": []
+      ]
     },
     {
       "cell_type": "markdown",
+      "metadata": {
+        "id": "kOPly8bfdnan"
+      },
       "source": [
         "## Agentic AI\n",
         "Function calling models can be integrated into Agent processes which can orchestrate processes comprising multiple models and steps - most of our use cases will use the function calling models in that context\n",
         "\n",
         "## Last note:\n",
         "Most of the models are packaged as `\"gguf\"` usually identified as GGUFGenerativeModel, or with `'-gguf'` or `'-tool` at the end of their name. These models are optimized to run most efficiently on a CPU-based laptop (especially Mac OS). You can also try the standard Pytorch versions of these models, which should yield virtually identical results, but will be slower."
-      ],
-      "metadata": {
-        "id": "kOPly8bfdnan"
-      }
+      ]
     },
     {
       "cell_type": "markdown",
+      "metadata": {
+        "id": "rvLVgWYMe6RO"
+      },
       "source": [
         "## Journey is yet to start!\n",
         "Loved it?? This is just an example of our models. Please check out our other Agentic AI examples with every model in detail here: https://github.com/llmware-ai/llmware/tree/main/fast_start/agents\n",
         "\n",
         "Also, if you have more interest in RAG, then please go with our RAG examples, which you can find here: https://github.com/llmware-ai/llmware/tree/main/fast_start/rag\n",
         "\n",
-        "If you liked it, then please **star our repo https://github.com/llmware-ai/llmware** ⭐"
-      ],
-      "metadata": {
-        "id": "rvLVgWYMe6RO"
-      }
+        "If you liked it, then please **star our repo https://github.com/llmware-ai/llmware** ⭐\n",
+        "\n",
+        "Any doubts?? Join our **discord server: https://discord.gg/GN49aWx2H3** 🫂"
+      ]
+    }
+  ],
+  "metadata": {
+    "colab": {
+      "provenance": []
+    },
+    "kernelspec": {
+      "display_name": "Python 3",
+      "name": "python3"
+    },
+    "language_info": {
+      "name": "python"
     }
-  ]
-}
+  },
+  "nbformat": 4,
+  "nbformat_minor": 0
+}
