Overall Adversarial Framework.
Built on top of OpenVLA, a remarkable generalist vision-language-action model.
- [2025-03-08]
- Update UADA.
- Bug fixes.
- Simple and flexible new evaluation tool (see roboticAttack/evaluation_tool).
- Released a new version of the paper (arXiv release ETA: 11 Mar 2025, 00:00 GMT).
- All patches released (see roboticAttack/adversarial_patches).
- [2024-11-26] Pre-release
- Adversarial patches (see adversarial_patches)
- Videos (see videos)
(a) Use the setup commands below to get started:
conda create -n roboticAttack python=3.10 -y
conda activate roboticAttack
# Install PyTorch.
conda install pytorch torchvision torchaudio pytorch-cuda=12.4 -c pytorch -c nvidia -y
# Clone and install this repo
git clone https://github.com/William-wAng618/roboticAttack.git
cd roboticAttack
pip install -e .
pip install packaging ninja
ninja --version; echo $? # Verify Ninja --> should return exit code "0"
pip install "flash-attn==2.5.5" --no-build-isolation(b) Install LIBERO evaluation environment
(b) Install the LIBERO evaluation environment:
# Install the LIBERO repo
git clone https://github.com/Lifelong-Robot-Learning/LIBERO.git
cd LIBERO
pip install -e .
# install other required packages:
cd ../ # cd roboticAttack/
pip install -r experiments/robot/libero/libero_requirements.txt
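To confirm the LIBERO install, a short check (a sketch assuming LIBERO's benchmark registry API) can list the available task suites:
# libero_check.py - list registered LIBERO task suites
from libero.libero import benchmark

# Should include libero_spatial, libero_object, libero_goal, libero_10
benchmark_dict = benchmark.get_benchmark_dict()
print(sorted(benchmark_dict.keys()))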
We utilize two datasets for generating adversarial examples:
(a) BridgeData V2:
Download the BridgeData V2 dataset:
# Change directory to your base datasets folder
cd roboticAttack
# Download the full dataset (124 GB)
wget -r -nH --cut-dirs=4 --reject="index.html*" https://rail.eecs.berkeley.edu/datasets/bridge_release/data/tfds/bridge_dataset/
# Rename the dataset to `bridge_orig` (NOTE: Omitting this step may lead to runtime errors later)
mv bridge_dataset datasets/bridge_orig
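Optionally, verify that the download loads as a TFDS builder. This sketch assumes the standard TFDS layout with a version subdirectory (e.g., 1.0.0); adjust the path if yours differs:
# bridge_check.py - load the BridgeData V2 TFDS builder and print its splits
import tensorflow_datasets as tfds

builder = tfds.builder_from_directory("datasets/bridge_orig/1.0.0")
print(builder.info.splits)  # should report the available splits and episode counts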
(b) LIBERO:
Please download this preprocessed version of the LIBERO dataset and place it in the dataset/ folder.
(c) The structure should look like:
├── roboticAttack
│   └── dataset
│       ├── bridge_orig
│       ├── libero_spatial_no_noops
│       ├── libero_object_no_noops
│       ├── libero_goal_no_noops
│       └── libero_10_no_noops
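A small script can verify this layout before launching any attacks (a minimal sketch; adjust the root path if yours differs):
# check_datasets.py - verify the expected dataset layout
from pathlib import Path

ROOT = Path("roboticAttack/dataset")
EXPECTED = [
    "bridge_orig",
    "libero_spatial_no_noops",
    "libero_object_no_noops",
    "libero_goal_no_noops",
    "libero_10_no_noops",
]
for name in EXPECTED:
    path = ROOT / name
    print(f"{'OK' if path.is_dir() else 'MISSING':8s} {path}")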
(a) Target Manipulation Attack (TMA)
bash scripts/run_TMA.sh
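Conceptually, TMA optimizes a patch so the model outputs a chosen target action. The sketch below is illustrative only; the vla_predict wrapper, the patch placement, and the target action are hypothetical placeholders, not this repo's API:
# tma_sketch.py - one targeted-patch gradient step (illustrative; names hypothetical)
import torch

def tma_step(patch, image, target_action, vla_predict, lr=1e-3):
    """Pull the predicted action toward target_action.
    patch: (3, h, w) leaf tensor with requires_grad=True
    vla_predict: differentiable wrapper returning a continuous action vector
    """
    patched = image.clone()
    patched[:, :patch.shape[1], :patch.shape[2]] = patch  # paste at top-left
    loss = torch.nn.functional.mse_loss(vla_predict(patched), target_action)
    loss.backward()
    with torch.no_grad():
        patch -= lr * patch.grad.sign()  # signed gradient step
        patch.clamp_(0.0, 1.0)           # keep pixels in valid range
        patch.grad.zero_()
    return loss.item()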
(b) Untargeted Action Discrepancy (UADA)
bash scripts/run_UADA.sh
(b2) DDP version of UADA:
CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --nproc_per_node=4 --master_port=29501 VLAAttacker/UADA_wrapper3_ddp.py \
--maskidx 0 \
--lr 1e-3 \
--iter 2000 \
--MSE_weights 5 \
--accumulate 1 \
--bs 8 \
--warmup 20 \
--tags XXX \
--geometry True \
--patch_size 3,50,50 \
--wandb_project your_wandb_project \
--wandb_entity your_wandb_entity \
--innerLoop 50 \
--dataset bridge_orig \
--resize_patch False \
--reverse_direction True
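For intuition: UADA maximizes the discrepancy between patched and clean action predictions, and with --reverse_direction it instead pushes predictions toward the sign-flipped clean action. A rough sketch of such a loss, with the weight playing the role of --MSE_weights (helper names are hypothetical):
# uada_loss_sketch.py - untargeted action-discrepancy loss (illustrative)
import torch.nn.functional as F

def uada_loss(pred_patched, pred_clean, mse_weight=5.0, reverse_direction=True):
    """Loss to minimize: drives patched predictions away from clean ones."""
    pred_clean = pred_clean.detach()
    if reverse_direction:
        # Pull predictions toward the sign-flipped clean action
        return mse_weight * F.mse_loss(pred_patched, -pred_clean)
    # Otherwise maximize discrepancy (negated MSE, so minimizing increases it)
    return -mse_weight * F.mse_loss(pred_patched, pred_clean)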
(c) Untargeted Position-aware Attack (UPA)
bash scripts/run_UPA.sh
To evaluate the attacks in simulation:
bash scripts/run_simulation.sh
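At evaluation time, the saved patch is pasted onto each observation before the policy is queried. A rough sketch of that rollout loop (the env and policy interfaces here are hypothetical placeholders, not LIBERO's exact API):
# eval_patch_sketch.py - patched rollout loop (illustrative; interfaces hypothetical)
def patched_rollout(env, policy, patch, max_steps=200):
    """Roll out a policy while pasting patch (h, w, 3) onto each frame."""
    obs = env.reset()
    ph, pw = patch.shape[:2]
    for _ in range(max_steps):
        frame = obs["image"].copy()
        frame[:ph, :pw] = patch  # paste at top-left corner
        obs, reward, done, info = env.step(policy(frame))
        if done:
            break
    return info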
High-level overview of the repository file-tree:
VLAAttacker/ - Code for generating adversarial examples (UADA, UPA, TMA).
scripts/ - Scripts for attacks and simulation.
experiments/ - Code for evaluating OpenVLA policies in robot environments.
LICENSE - All code is made available under the MIT License; happy hacking!
If you find this project useful, please consider citing our paper:
@InProceedings{Wang_2025_ICCV,
author = {Wang, Taowen and Han, Cheng and Liang, James and Yang, Wenhao and Liu, Dongfang and Zhang, Luna Xinyu and Wang, Qifan and Luo, Jiebo and Tang, Ruixiang},
title = {Exploring the Adversarial Vulnerabilities of Vision-Language-Action Models in Robotics},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2025},
pages = {6948--6958}
}
