This repository was archived by the owner on Oct 25, 2024. It is now read-only.

Gaudi Tensor split for memory optimization #1575

Open: ClarkChin08 wants to merge 2 commits into main.

Conversation

ClarkChin08 (Contributor)

No description provided.
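
The PR title points at splitting tensors in the Gaudi LLaMA modeling code (modeling_llama.py) to reduce peak device memory. Since no description is given, the following is only a rough sketch of the general technique, not this PR's actual diff: a LLaMA-style (SwiGLU) MLP computed in sequence chunks so the full (seq_len x intermediate_size) activation is never materialized at once. All function names, shapes, and the split factor below are hypothetical.

```python
# Illustrative sketch only: a LLaMA-style (SwiGLU) MLP computed in
# sequence chunks to cap the size of the intermediate activation.
# Names, shapes, and the split factor are hypothetical, not this PR's code.
import torch
import torch.nn.functional as F


def chunked_mlp(x, w_gate, w_up, w_down, num_splits=2):
    """Compute down(silu(gate(x)) * up(x)) one sequence chunk at a time.

    Each chunk only materializes a (chunk_len x intermediate_size)
    activation instead of the full (seq_len x intermediate_size) one,
    which lowers peak device memory at the cost of extra kernel launches.
    """
    outputs = []
    for x_chunk in torch.chunk(x, num_splits, dim=1):  # split along seq_len
        hidden = F.silu(F.linear(x_chunk, w_gate)) * F.linear(x_chunk, w_up)
        outputs.append(F.linear(hidden, w_down))
    return torch.cat(outputs, dim=1)


if __name__ == "__main__":
    hidden_size, intermediate_size = 4096, 11008   # LLaMA-7B sizes
    x = torch.randn(1, 256, hidden_size)
    w_gate = torch.randn(intermediate_size, hidden_size) * 0.02
    w_up = torch.randn(intermediate_size, hidden_size) * 0.02
    w_down = torch.randn(hidden_size, intermediate_size) * 0.02
    y = chunked_mlp(x, w_gate, w_up, w_down, num_splits=4)
    y_ref = F.linear(F.silu(F.linear(x, w_gate)) * F.linear(x, w_up), w_down)
    # Chunked and unsplit forward passes produce the same result.
    torch.testing.assert_close(y, y_ref, rtol=1e-3, atol=1e-3)
```

Splitting along the sequence dimension keeps the result numerically equivalent to the unsplit forward pass while bounding the size of the intermediate activations, which is typically the goal of this kind of memory optimization on accelerators such as Gaudi.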

Signed-off-by: Chen Xi <xi2.chen@intel.com>
Signed-off-by: Chen Xi <xi2.chen@intel.com>
ClarkChin08 requested a review from PenghuiCheng as a code owner on May 29, 2024, 08:46.

github-actions bot commented May 29, 2024

⛈️ Required checks status: Has failure 🔴

Warning
If you do not have access to re-run the CI-Summary bot, please contact VincyZhang for help. If you push a new commit, all of the workflows will be re-triggered.

Groups summary

🔴 Format Scan Tests workflow
Check ID                Status    Error details
format-scan (pylint)    failure   download
format-scan (bandit)    success
format-scan (cloc)      success
format-scan (cpplint)   success
These checks are required after the changes to intel_extension_for_transformers/transformers/modeling/modeling_gaudi/models/llama/modeling_llama.py.

🟢 Optimize Unit Test workflow
Check ID                       Status    Error details
optimize-unit-test-baseline    success
optimize-unit-test-PR-test     success
Genreate-OptimizeUT-Report     success

These checks are required after the changes to intel_extension_for_transformers/transformers/modeling/modeling_gaudi/models/llama/modeling_llama.py.

🟢 NeuralChat Unit Test
Check ID                         Status    Error details
neuralchat-unit-test-baseline    success
neuralchat-unit-test-PR-test     success
Generate-NeuralChat-Report       success

These checks are required after the changes to intel_extension_for_transformers/transformers/modeling/modeling_gaudi/models/llama/modeling_llama.py.

🟢 Engine Unit Test workflow
Check ID                     Status    Error details
engine-unit-test-baseline    success
engine-unit-test-PR-test     success
Genreate-Engine-Report       success

These checks are required after the changes to intel_extension_for_transformers/transformers/modeling/modeling_gaudi/models/llama/modeling_llama.py.

🟢 Chat Bot Test workflow
Check ID                                              Status    Error details
call-inference-llama-2-7b-chat-hf / inference test    success
call-inference-mpt-7b-chat / inference test           success

These checks are required after the changes to intel_extension_for_transformers/transformers/modeling/modeling_gaudi/models/llama/modeling_llama.py.


Thank you for your contribution! 💜

Note
This comment is automatically generated and will be updated every 180 seconds within the next 6 hours. If you have any other questions, contact VincyZhang or XuehaoSun for help.

ClarkChin08 (Contributor, Author) commented Jun 7, 2024

Labels: None yet
Participants: 1