Commit d0d9203

feat: added colab notebook for agents
1 parent 2563496 commit d0d9203

File tree

1 file changed: +270 −0 lines


colab notebook/Agents_01.ipynb

Lines changed: 270 additions & 0 deletions
@@ -0,0 +1,270 @@
{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "provenance": []
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "cells": [
    {
      "cell_type": "markdown",
      "source": [
        "# LLMWare Model Exploration\n",
        "\n",
        "## This is the 'entrypoint' example that provides a general introduction to llmware models.\n",
        "\n",
        "This notebook introduces LLMWare Agentic AI models and demonstrates their usage."
      ],
      "metadata": {
        "id": "StkY5oHGU-iN"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# install dependencies\n",
        "!pip3 install llmware"
      ],
      "metadata": {
        "collapsed": true,
        "id": "KyaEnPzOVTJe"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "If you have any dependency install issues, please review the README, the docs, or raise an Issue.\n",
        "\n",
        "Usually, if a dependency is missing, the code will give a warning and a clear direction, e.g. `pip install transformers` required for this example.\n",
        "\n",
        "As an alternative to pip install, you can also clone the repo from GitHub, which provides the benefit of access to 100+ examples.\n",
        "\n",
        "To clone the repo:\n",
        "```\n",
        "git clone \"https://www.github.com/llmware-ai/llmware.git\"\n",
        "sh \"welcome_to_llmware.sh\"\n",
        "```\n",
        "\n",
        "The second script, `\"welcome_to_llmware.sh\"`, will install all of the dependencies.\n",
        "\n",
        "If you are using Windows, use the `\"welcome_to_llmware_windows.sh\"` script instead."
      ],
      "metadata": {
        "id": "mcOxXgs1XTjD"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# import library\n",
        "from llmware.models import ModelCatalog"
      ],
      "metadata": {
        "id": "n4aKjcEiVjYE"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## GETTING STARTED WITH AGENTIC AI\n",
        "All LLMWare models are accessible through the ModelCatalog. Accessing any model generally consists of two steps:\n",
        "\n",
        "- Step 1 - load the model - pulls from the global repo the first time, then automatically caches locally\n",
        "- Step 2 - use the model with an inference or function call"
      ],
      "metadata": {
        "id": "ePtRGBIlZEkP"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# 'standard' models use 'inference': they take a general text input and produce a general text output\n",
        "\n",
        "model = ModelCatalog().load_model(\"bling-answer-tool\")\n",
        "response = model.inference(\"My son is 21 years old.\\nHow old is my son?\")\n",
        "\n",
        "print(\"\\nresponse: \", response)"
      ],
      "metadata": {
        "collapsed": true,
        "id": "D0xL5WOgVlGX"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# optional parameters can improve results\n",
        "model = ModelCatalog().load_model(\"bling-phi-3-gguf\", temperature=0.0, sample=False, max_output=200)\n",
        "\n",
        "# all LLMWare models have been fine-tuned to assume that the input will include a text passage, and that the\n",
        "# model's main job is to 'read' the passage, and then 'answer' a question based on that information\n",
        "\n",
        "text_passage = \"The company's stock price increased by $3 after reporting positive earnings.\"\n",
        "prompt = \"What was the increase in the stock price?\"\n",
        "\n",
        "response = model.inference(prompt, add_context=text_passage)\n",
        "\n",
        "print(\"\\nresponse: \", response)"
      ],
      "metadata": {
        "collapsed": true,
        "id": "1AkSZ3Z_VqWt"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Models we have and support\n",
        "Inference models can also be integrated into Prompts, which provide advanced handling for integrating with knowledge retrieval, managing source information, and fact-checking.\n",
        "\n",
        "Discovering other models is easy - to invoke a model, simply pass its `'model_name'` to `.load_model()`.\n",
        "\n",
        "***note***: *model names starting with `'bling'`, `'dragon'`, and `'slim'` are llmware models.*\n",
        "- we also **include other popular models** such as `phi-3`, `qwen-2`, `yi`, `llama-3`, and `mistral`\n",
        "- it is easy to extend the model catalog to **include other 3rd-party models**, including `ollama` and `lm studio`.\n",
        "- we also **support** `open ai`, `anthropic`, `cohere`, and `google api` models."
      ],
      "metadata": {
        "id": "OuNEktB-aPVw"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "all_generative_models = ModelCatalog().list_generative_local_models()\n",
        "print(\"\\n\\nModel Catalog - load model with ModelCatalog().load_model(model_name)\")\n",
        "for i, model in enumerate(all_generative_models):\n",
        "\n",
        "    model_name = model[\"model_name\"]\n",
        "    model_family = model[\"model_family\"]\n",
        "\n",
        "    print(\"model: \", i, model)"
      ],
      "metadata": {
        "collapsed": true,
        "id": "erEHenbjaYqi"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Slim Models\n",
        "Slim models are 'function calling' models that perform a specialized task and output Python dictionaries.\n",
        "- by design, slim models are specialists that **perform a single function**.\n",
        "- by design, slim models generally **do not require any specific** `'prompt instructions'`, but will often accept a `\"parameter\"` that is passed to the function."
      ],
      "metadata": {
        "id": "tLCuxZcYdTHn"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "model = ModelCatalog().load_model(\"slim-sentiment-tool\")\n",
        "response = model.function_call(\"That was the worst earnings call ever - what a disaster.\")\n",
        "\n",
        "# the 'overall' model response is just a python dictionary\n",
        "print(\"\\nresponse: \", response)\n",
        "print(\"llm_response: \", response['llm_response'])\n",
        "print(\"sentiment: \", response['llm_response']['sentiment'])"
      ],
      "metadata": {
        "collapsed": true,
        "id": "1ZS2wo8zdDOd"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# here is one of the slim models applied to a common earnings extract\n",
        "\n",
        "text_passage = (\"Here’s what Costco reported for its fiscal second quarter of 2024 compared with what Wall Street \"\n",
        "                \"was expecting, based on a survey of analysts by LSEG, formerly known as Refinitiv: Earnings \"\n",
        "                \"per share: $3.92 vs. $3.62 expected. Revenue: $58.44 billion vs. $59.16 billion expected \"\n",
        "                \"In the three-month period that ended Feb. 18, Costco’s net income rose to $1.74 billion, or \"\n",
        "                \"$3.92 per share, compared with $1.47 billion, or $3.30 per share, a year earlier. \")\n",
        "\n",
        "# the extract model takes a 'key' as a parameter, and looks for the corresponding 'value' in the text\n",
        "model = ModelCatalog().load_model(\"slim-extract-tool\")\n",
        "\n",
        "# the general structure of a function call includes a text passage input, a function and parameters\n",
        "response = model.function_call(text_passage, function=\"extract\", params=[\"revenue\"])\n",
        "\n",
        "print(\"\\nextract response: \", response)"
      ],
      "metadata": {
        "id": "Ohp-shGjkDkz"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# function calling models generally come with a test set, which is a great way to learn how they work\n",
        "# please note that each test can take a few minutes with 20-40 test questions\n",
        "\n",
        "# uncomment any of the lines below to try the other tools\n",
        "ModelCatalog().tool_test_run(\"slim-topics-tool\")\n",
        "# ModelCatalog().tool_test_run(\"slim-tags-tool\")\n",
        "# ModelCatalog().tool_test_run(\"slim-emotions-tool\")\n",
        "# ModelCatalog().tool_test_run(\"slim-summary-tool\")\n",
        "# ModelCatalog().tool_test_run(\"slim-xsum-tool\")\n",
        "# ModelCatalog().tool_test_run(\"slim-boolean-tool\")"
      ],
      "metadata": {
        "id": "w13LLW_OdxCm"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Agentic AI\n",
        "Function calling models can be integrated into Agent processes, which can orchestrate workflows comprising multiple models and steps - most of our use cases will use function calling models in that context.\n",
        "\n",
        "## Last note\n",
        "Most of the models are packaged as `\"gguf\"`, usually identified as GGUFGenerativeModel, or with `'-gguf'` or `'-tool'` at the end of the model name. These models are optimized to run most efficiently on a CPU-based laptop (especially macOS). You can also try the standard PyTorch versions of these models, which should yield virtually identical results, but will be slower."
      ],
      "metadata": {
        "id": "kOPly8bfdnan"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "## The journey has just begun!\n",
        "Loved it? This is just one example of our models. Please check out our other Agentic AI examples, which cover every model in detail: https://github.com/llmware-ai/llmware/tree/main/fast_start/agents\n",
        "\n",
        "If you are also interested in RAG, please see our RAG examples here: https://github.com/llmware-ai/llmware/tree/main/fast_start/rag\n",
        "\n",
        "If you liked it, please **star our repo https://github.com/llmware-ai/llmware** ⭐"
      ],
      "metadata": {
        "id": "rvLVgWYMe6RO"
      }
    }
  ]
}
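
The slim-sentiment cell in the diff above prints `response['llm_response']['sentiment']` directly, which raises `KeyError` if the model returns an unexpected shape. Outside the notebook, the same lookup can be sketched defensively in plain Python - note that the sample payload below is illustrative only, not actual model output, and only the keys `llm_response` and `sentiment` are taken from the diff:

```python
from typing import Optional


def extract_sentiment(response: dict) -> Optional[str]:
    """Defensively pull the 'sentiment' value out of a function-call response dict."""
    llm_response = response.get("llm_response")
    if not isinstance(llm_response, dict):
        return None
    sentiment = llm_response.get("sentiment")
    # function-calling models often return list-valued outputs; normalize to one string
    if isinstance(sentiment, list):
        return sentiment[0] if sentiment else None
    return sentiment


# hypothetical sample payload for illustration
sample = {"llm_response": {"sentiment": ["negative"]}, "usage": {}}
print(extract_sentiment(sample))  # → negative
```

Using `.get()` plus an `isinstance` check keeps the lookup from crashing on malformed output, at the cost of silently returning `None`, which a caller can then treat as "no sentiment detected".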
