"Inference models can also be integrated into Prompts - which provide advanced handling for integrating with knowledge retrieval, managing source information, and providing fact-checking\n",
@@ -140,13 +129,16 @@
"- we do **include other popular models** such as `phi-3`, `qwen-2`, `yi`, `llama-3`, `mistral`\n",
"- it is easy to extend the model catalog to **include other 3rd party models**, including `ollama` and `lm studio`.\n",
"- we do **support** `open ai`, `anthropic`, `cohere` and `google api` models as well."
"print(\"\\n\\nModel Catalog - load model with ModelCatalog().load_model(model_name)\")\n",
@@ -156,28 +148,28 @@
" model_family = model[\"model_family\"]\n",
"\n",
" print(\"model: \", i, model)"
- ],
- "metadata": {
-   "collapsed": true,
-   "id": "erEHenbjaYqi"
- },
- "execution_count": null,
- "outputs": []
+ ]
},
{
"cell_type": "markdown",
+ "metadata": {
+   "id": "tLCuxZcYdTHn"
+ },
"source": [
"## Slim Models\n",
"Slim models are 'Function Calling' models that perform a specialized task and output Python dictionaries.\n",
"- by design, slim models are specialists that **perform a single function**.\n",
"- by design, slim models generally **do not require any specific** `'prompt instructions'`, but will often accept a `\"parameter\"` that is passed to the function."
"Function calling models can be integrated into Agent processes which can orchestrate processes comprising multiple models and steps - most of our use cases will use the function calling models in that context\n",
"\n",
"## Last note:\n",
"Most of the models are packaged as `\"gguf\"`, usually identified as GGUFGenerativeModel or with `'-gguf'` or `'-tool'` at the end of their name. These models are optimized to run most efficiently on a CPU-based laptop (especially macOS). You can also try the standard PyTorch versions of these models, which should yield virtually identical results, but will be slower."
- ],
- "metadata": {
-   "id": "kOPly8bfdnan"
- }
+ ]
},
{
"cell_type": "markdown",
+ "metadata": {
+   "id": "rvLVgWYMe6RO"
+ },
"source": [
"## Journey is yet to start!\n",
"Loved it? This is just a sample of our models. Please check out our other Agentic AI examples, which cover every model in detail, here: https://github.com/llmware-ai/llmware/tree/main/fast_start/agents\n",
"\n",
"Also, if you are interested in RAG, please check out our RAG examples, which you can find here: https://github.com/llmware-ai/llmware/tree/main/fast_start/rag\n",
"\n",
- "If you liked it, then please **star our repo https://github.com/llmware-ai/llmware** ⭐"
- ],
- "metadata": {
-   "id": "rvLVgWYMe6RO"
- }
+ "If you liked it, then please **star our repo https://github.com/llmware-ai/llmware** ⭐\n",