Commit 5230210
rename E2E examples and add a new one for PyTorch (#4)
Signed-off-by: Abolfazl Shahbazi <abolfazl.shahbazi@intel.com>
1 parent 521c762 commit 5230210

File tree

26 files changed (+544, −612 lines)

‎AIKit_Sample_Overview.ipynb‎

Lines changed: 66 additions & 0 deletions
@@ -0,0 +1,66 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "# Samples to Get Started with Intel® oneAPI AI Analytics Toolkit\n",
    "\n",
    "This AI Kit JupyterHub platform also features several samples that demonstrate how the Intel® oneAPI AI Analytics Toolkit delivers optimized and scalable solutions for deep learning and machine learning data science workflows.\n",
    "\n",
    "Below is the list of samples included for RHODS users:\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "1. **Intel_Optimized_TensorFlow_and_INC_Quantization_Tool_E2E_Sample**: This sample utilizes Intel-optimized TensorFlow and INC (Intel® Neural Compressor) from the Intel® oneAPI AI Analytics Toolkit offered in the RHODS platform. The sample trains an AlexNet model on MNIST with Intel-optimized TensorFlow, then uses INC to quantize the trained fp32 model to a low-precision int8 model and perform optimized inference. It also provides performance and accuracy comparisons of fp32 vs. int8 inference, highlighting the value of using INC in the Intel® oneAPI AI Analytics Toolkit for low-precision inference. **Open the [inc_sample_tensorflow.ipynb](./Intel_Optimized_TensorFlow_and_INC_Quantization_Tool_E2E_Sample/inc_sample_tensorflow.ipynb) notebook and follow the instructions to run.**\n",
    "\n",
    "2. **Intel_Modin_and_Intel_Extension_for_Scikit-Learn_E2E_Sample**: This sample introduces users to Intel® Distribution of Modin and Intel® Extension for Scikit-Learn from the Intel® oneAPI AI Analytics Toolkit offered in the RHODS platform. The sample trains on US Census data with Intel® Extension for Scikit-Learn and uses the pandas-compatible Intel® Distribution of Modin to perform optimized and distributed data-preprocessing calls such as read_csv and other ETL operations. **Open the [census_modin.ipynb](./Intel_Modin_and_Intel_Extension_for_Scikit-Learn_E2E_Sample/census_modin.ipynb) notebook and follow the instructions to run.**\n",
    "\n",
    "3. **Intel_Extension_for_PyTorch_GettingStarted_and_AutoMixedPrecision_Sample**: This sample shows how to get started with Intel® Extension for PyTorch and how to use its Auto Mixed Precision feature to extend official PyTorch with optimizations for an extra performance boost on Intel hardware. **Open the [Intel_Extension_for_PyTorch_GettingStarted_and_AutoMixedPrecision.ipynb](./Intel_Extension_for_PyTorch_GettingStarted_and_AutoMixedPrecision_Sample/Intel_Extension_for_PyTorch_Getting_Started_and_AutoMixedPrecision.ipynb) notebook and follow the instructions to run.**\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "4. Intel Model Zoo is also shipped as part of the toolkit and can be found in the \"models\" folder. Go to `/models.git/quickstart` to learn more about how to run the various models offered as part of Intel Model Zoo.\n",
    "\n",
    "**For even more samples,** please visit the [Intel® oneAPI AI Analytics Toolkit](https://github.com/oneapi-src/oneAPI-samples/tree/master/AI-and-Analytics) repository."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
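The fp32-to-int8 conversion that the quantization sample above describes can be illustrated with basic affine-quantization arithmetic. The helper below is a hypothetical pure-Python sketch of that arithmetic, not the sample's actual code and not the INC API; tools like Intel® Neural Compressor perform this mapping (plus calibration and accuracy checks) automatically.

```python
# Hypothetical sketch of affine (asymmetric) quantization: the arithmetic
# underlying fp32 -> int8 post-training quantization. Illustrative only.

def quantize_params(values, qmin=-128, qmax=127):
    """Derive a scale and zero point mapping the fp32 range onto int8."""
    lo, hi = min(values), max(values)
    lo, hi = min(lo, 0.0), max(hi, 0.0)  # the range must include zero
    scale = (hi - lo) / (qmax - qmin) or 1.0
    zero_point = round(qmin - lo / scale)
    return scale, zero_point

def quantize(values, scale, zero_point, qmin=-128, qmax=127):
    """Map fp32 values to clamped int8 codes."""
    return [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]

def dequantize(codes, scale, zero_point):
    """Recover approximate fp32 values from int8 codes."""
    return [(c - zero_point) * scale for c in codes]

weights = [-1.5, -0.2, 0.0, 0.7, 2.3]
scale, zp = quantize_params(weights)
codes = quantize(weights, scale, zp)
print(codes)                        # int8 codes in [-128, 127]
print(dequantize(codes, scale, zp)) # close to the original fp32 values
```

The int8 codes occupy a quarter of the memory of fp32 and enable vectorized integer instructions, at the cost of a bounded rounding error per value, which is why the sample compares fp32 and int8 accuracy.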

‎AI_Kit_Welcome.ipynb‎

Lines changed: 101 additions & 0 deletions
@@ -0,0 +1,101 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Welcome to Intel® oneAPI AI Analytics Toolkit on Red Hat OpenShift Data Science!\n",
    "This document covers the basics of the Intel® oneAPI AI Analytics Toolkit (AI Kit) on Red Hat OpenShift Data Science for data science projects. It describes where to access Intel AI optimizations within the AI Kit pre-built kernel environments and points to examples and further resources on the Intel® oneAPI AI Analytics Toolkit.\n",
    "\n",
    "## Overview of Intel® oneAPI AI Analytics Toolkit\n",
    "The Intel® oneAPI AI Analytics Toolkit gives data scientists, AI developers, and researchers familiar Python* tools and frameworks to accelerate end-to-end data science and machine learning pipelines on Intel® architectures. The components are built using oneAPI libraries for low-level compute optimizations. This toolkit maximizes performance from preprocessing through machine and deep learning phases and provides interoperability for efficient model development and deployment across single and multiple nodes.\n",
    "\n",
    "Using this toolkit, you can:\n",
    "\n",
    "- Deliver high-performance deep learning (DL) training and inference on Intel® XPUs with Intel-optimized DL frameworks (TensorFlow* and PyTorch*), pretrained models, and low-precision tools.\n",
    "\n",
    "- Achieve drop-in acceleration for data preprocessing and machine learning workflows with compute-intensive Python packages: Modin*, Scikit-Learn*, and XGBoost* optimized for Intel.\n",
    "\n",
    "- Seamlessly scale up and scale out to leverage the AI compute continuum across single and multiple nodes.\n",
    "\n",
    "- Gain direct access to Intel’s latest machine and deep learning optimizations in a single integrated package tested for interoperability.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Table of Contents\n",
    "1. [Intel® oneAPI AI Analytics Toolkit Pre-built Environment Packages Information](#sec-env)\n",
    "2. [Intel® oneAPI AI Analytics Toolkit Getting Started Resources](#sec-gettingstarted)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "tags": []
   },
   "source": [
    "<a id=\"sec-env\"></a>\n",
    "## 1. Intel® oneAPI AI Analytics Toolkit Pre-built Environment Packages Information\n",
    "\n",
    "You can find more detailed information on using the Intel® oneAPI AI Analytics Toolkit at <a href=\"https://software.intel.com/oneapi/ai-kit\">software.intel.com/oneapi/ai-kit</a>.\n",
    "\n",
    "On Red Hat OpenShift Data Science, we have provided pre-installed AI Kit environments for different workload needs:\n",
    "\n",
    "**`Intel SKLearn, XGBoost, & Modin` Kernel Environment**\n",
    "- <a href=\"https://software.intel.com/content/www/us/en/develop/tools/oneapi/components/distribution-of-modin.html\">Intel® Distribution of Modin*</a>\n",
    "- <a href=\"https://www.intel.com/content/www/us/en/developer/tools/oneapi/scikit-learn.html\">Intel® Extension for Scikit-Learn*</a>\n",
    "- <a href=\"https://software.intel.com/content/www/us/en/develop/tools/oneapi/components/distribution-for-python.html\">Intel® Distribution for Python (including Intel optimizations for NumPy, SciPy, and Numba)</a>\n",
    "- <a href=\"https://www.intel.com/content/www/us/en/developer/tools/frameworks/overview.html#xgboost\">XGBoost Optimized for Intel Architecture</a>\n",
    "\n",
    "**`Intel TensorFlow & Quantization` Kernel Environment**\n",
    "- <a href=\"https://software.intel.com/content/www/us/en/develop/tools/frameworks.html#tensorflow\">Intel® Optimization for TensorFlow</a>\n",
    "- <a href=\"https://www.intel.com/content/www/us/en/developer/tools/oneapi/neural-compressor.html\">Intel® Neural Compressor</a>\n",
    "- <a href=\"https://github.com/IntelAI/models\">Model Zoo for Intel® Architecture</a>\n",
    "\n",
    "**`Intel PyTorch & Quantization` Kernel Environment**\n",
    "- <a href=\"https://software.intel.com/content/www/us/en/develop/tools/frameworks.html#pytorch\">Intel® Optimization for PyTorch</a>\n",
    "- <a href=\"https://www.intel.com/content/www/us/en/developer/tools/oneapi/neural-compressor.html\">Intel® Neural Compressor</a>\n",
    "- <a href=\"https://github.com/IntelAI/models\">Model Zoo for Intel® Architecture</a>\n",
    "\n",
    "You can also create and install additional AI Kit packages and environments on Red Hat OpenShift Data Science using the <a href=\"https://www.intel.com/content/www/us/en/develop/documentation/installation-guide-for-intel-oneapi-toolkits-linux/top/installation/install-using-package-managers/conda/install-intel-ai-analytics-toolkit-via-conda.html\">Conda* Package Manager</a>."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<a id=\"sec-gettingstarted\"></a>\n",
    "## 2. Intel® oneAPI AI Analytics Toolkit Getting Started Resources\n",
    "As part of the Intel® oneAPI AI Analytics Toolkit on Red Hat OpenShift Data Science, we have provided several examples to help you get started with different AI Kit optimizations. These can be found in the `oneAPI-samples.git` directory.\n",
    "\n",
    "For even more examples and information on AI Kit optimizations, consider visiting the following resources:\n",
    "- <a href=\"https://software.intel.com/en-us/oneapi/ai-kit\">Intel® oneAPI AI Analytics Toolkit Website</a>\n",
    "- <a href=\"https://github.com/oneapi-src/oneAPI-samples/tree/master/AI-and-Analytics\">Intel® oneAPI AI Analytics Toolkit Code Samples</a>\n",
    "- <a href=\"https://software.intel.com/content/www/us/en/develop/articles/oneapi-ai-analytics-toolkit-release-notes.html\">Intel® oneAPI AI Analytics Toolkit Release Notes</a>\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
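Since each kernel environment above bundles a different subset of Intel-optimized packages, a quick way to see which ones the current kernel provides is to probe for their modules. The snippet below is a hypothetical helper (not shipped with the AI Kit images) using only the standard library; the module names listed are assumptions about how the packages above import.

```python
# Hypothetical helper that reports which Intel-optimized packages are
# importable in the current kernel environment, using only the stdlib.
from importlib.util import find_spec

def available_packages(names):
    """Return {module_name: bool} indicating which modules can be imported."""
    return {name: find_spec(name) is not None for name in names}

# Assumed import names for the packages listed in the kernel environments.
aikit_modules = ["modin", "sklearnex", "xgboost", "tensorflow",
                 "torch", "neural_compressor"]
for module, present in available_packages(aikit_modules).items():
    print(f"{module}: {'found' if present else 'not installed'}")
```

Running this in each of the three kernels should show a different subset of packages present, matching the environment descriptions above.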

‎E2E_use_case_with_Intel_Modin_Intel_optimized_scikit-learn/.gitkeep‎

Whitespace-only changes.

‎E2E_use_case_with_Intel_optimized_Tensorflow_LPOT/run_jupyter.sh‎

Lines changed: 0 additions & 2 deletions
This file was deleted.
