"## This is the 'entrypoint' example that provides a general introduction of llmware models.\n",
"\n",
"This notebook provides an introduction to LLMWare Agentic AI models and demonstrates their usage."
],
"metadata": {
"id": "StkY5oHGU-iN"
}
},
{
"cell_type": "code",
"source": [
"# install dependencies\n",
"!pip3 install llmware"
],
"metadata": {
"collapsed": true,
"id": "KyaEnPzOVTJe"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"If you have any dependency install issues, please review the README, docs link, or raise an Issue.\n",
"\n",
"Usually, if there is a missing dependency, the code will give the warning - and a clear direction like `pip install transformers'` required for this example, etc.\n",
"\n",
"As an alternative to pip install ... if you prefer, you can also clone the repo from github which provides a benefit of having access to 100+ examples.\n",
"Inference models can also be integrated into Prompts - which provide advanced handling for integrating with knowledge retrieval, managing source information, and providing fact-checking\n",
"\n",
"Discovering other models is easy -> to invoke a model, simply use the `'model_name'` and pass in `.load_model()`.\n",
"\n",
"***note***: *model_names starting with `'bling'`, `'dragon'`, and `'slim'` are llmware models.*\n",
"- we do **include other popular models** such as `phi-3`, `qwen-2`, `yi`, `llama-3`, `mistral`\n",
"- it is easy to extend the model catalog to **include other 3rd party models**, including `ollama` and `lm studio`.\n",
"- we do **support** `open ai`, `anthropic`, `cohere` and `google api` models as well."
"print(\"\\n\\nModel Catalog - load model with ModelCatalog().load_model(model_name)\")\n",
"for i, model in enumerate(all_generative_models):\n",
"\n",
" model_name = model[\"model_name\"]\n",
" model_family = model[\"model_family\"]\n",
"\n",
" print(\"model: \", i, model)"
],
"metadata": {
"collapsed": true,
"id": "erEHenbjaYqi"
},
"execution_count": null,
"outputs": []
},
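{
"cell_type": "markdown",
"source": [
"As a minimal sketch of the loading pattern above - the model name and sample text are illustrative choices only, and the `model.inference()` call with `add_context` is the assumed usage; any `model_name` from the catalog can be swapped in:\n",
"\n",
"```python\n",
"from llmware.models import ModelCatalog\n",
"\n",
"# load a model from the catalog by its model_name\n",
"model = ModelCatalog().load_model(\"bling-answer-tool\")\n",
"\n",
"# run a simple inference, passing a question and a short context passage\n",
"response = model.inference(\"What is the total amount due?\",\n",
"                           add_context=\"The invoice total is $1,000, due on March 1.\")\n",
"\n",
"print(\"llm response: \", response)\n",
"```"
],
"metadata": {}
},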
{
"cell_type": "markdown",
"source": [
"## Slim Models\n",
"Slim models are 'Function Calling' Models that perform a specialized task and output python dictionaries\n",
"- by design, slim models are specialists that **perform single function**.\n",
"- by design, slim models generally **do not require any specific** `'prompt instructions'`, but will often accept a `\"parameter\"` which is passed to the function."
"Function calling models can be integrated into Agent processes which can orchestrate processes comprising multiple models and steps - most of our use cases will use the function calling models in that context\n",
"\n",
"## Last note:\n",
"Most of the models are packaged as `\"gguf\"` usually identified as GGUFGenerativeModel, or with `'-gguf'` or `'-tool` at the end of their name. These models are optimized to run most efficiently on a CPU-based laptop (especially Mac OS). You can also try the standard Pytorch versions of these models, which should yield virtually identical results, but will be slower."
],
"metadata": {
"id": "kOPly8bfdnan"
}
},
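{
"cell_type": "markdown",
"source": [
"As a rough sketch of that agent pattern - this assumes the `LLMfx` class from `llmware.agents` with its `load_work` / `load_tool` methods; see the fast_start agent examples linked below for the exact API:\n",
"\n",
"```python\n",
"from llmware.agents import LLMfx\n",
"\n",
"# create an agent, load a work item (text to analyze), and attach a slim tool\n",
"agent = LLMfx()\n",
"agent.load_work(\"The customer is angry that the order arrived two weeks late.\")\n",
"agent.load_tool(\"sentiment\")\n",
"\n",
"# each tool call runs a slim function-calling model and returns a python dictionary\n",
"sentiment = agent.sentiment()\n",
"print(\"sentiment: \", sentiment)\n",
"```"
],
"metadata": {}
},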
{
"cell_type": "markdown",
"source": [
"## Journey is yet to start!\n",
"Loved it?? This is just an example of our models. Please check out our other Agentic AI examples with every model in detail here: https://github.com/llmware-ai/llmware/tree/main/fast_start/agents\n",
"\n",
"Also, if you have more interest in RAG, then please go with our RAG examples, which you can find here: https://github.com/llmware-ai/llmware/tree/main/fast_start/rag\n",
"\n",
"If you liked it, then please **star our repo https://github.com/llmware-ai/llmware** ⭐"