Merged · 21 commits
2 changes: 1 addition & 1 deletion .devcontainer/devcontainer.json
@@ -1,3 +1,3 @@
{
"onCreateCommand": "./codespaces_create_and_start_containers.sh"
"onCreateCommand": "./scripts/codespaces_create_and_start_containers.sh"
}
17 changes: 9 additions & 8 deletions .gitignore
@@ -15,11 +15,12 @@ src/hackingBuddyGPT/usecases/web_api_testing/openapi_spec/
src/hackingBuddyGPT/usecases/web_api_testing/converted_files/
/src/hackingBuddyGPT/usecases/web_api_testing/documentation/openapi_spec/
/src/hackingBuddyGPT/usecases/web_api_testing/documentation/reports/
codespaces_ansible.cfg
codespaces_ansible_hosts.ini
codespaces_ansible_id_rsa
codespaces_ansible_id_rsa.pub
mac_ansible.cfg
mac_ansible_hosts.ini
mac_ansible_id_rsa
mac_ansible_id_rsa.pub
scripts/codespaces_ansible.cfg
scripts/codespaces_ansible_hosts.ini
scripts/codespaces_ansible_id_rsa
scripts/codespaces_ansible_id_rsa.pub
scripts/mac_ansible.cfg
scripts/mac_ansible_hosts.ini
scripts/mac_ansible_id_rsa
scripts/mac_ansible_id_rsa.pub
.aider*
18 changes: 13 additions & 5 deletions MAC.md
@@ -14,7 +14,15 @@ There are bugs in Docker Desktop on Mac that prevent creation of a custom Docker

Therefore, localhost TCP port 49152 (or higher) dynamic port number is used for an ansible-ready-ubuntu container

http://localhost:8080 is genmini-openai-proxy
http://localhost:8080 is gemini-openai-proxy

gpt-4 maps to gemini-1.5-flash-latest

Hence use gpt-4 below in --llm.model=gpt-4

Gemini free tier has a limit of 15 requests per minute, and 1500 requests per day

Hence --max_turns 999999999 will exceed the daily limit

For example:

@@ -23,7 +31,7 @@ export GEMINI_API_KEY=

export PORT=49152

wintermute LinuxPrivesc --llm.api_key=$GEMINI_API_KEY --llm.model=gemini-1.5-flash-latest --llm.context_size=1000000 --conn.host=localhost --conn.port $PORT --conn.username=lowpriv --conn.password=trustno1 --conn.hostname=test1 --llm.api_url=http://localhost:8080 --llm.api_backoff=60 --max_turns 999999999
wintermute LinuxPrivesc --llm.api_key=$GEMINI_API_KEY --llm.model=gpt-4 --llm.context_size=1000000 --conn.host=localhost --conn.port $PORT --conn.username=lowpriv --conn.password=trustno1 --conn.hostname=test1 --llm.api_url=http://localhost:8080 --llm.api_backoff=60 --max_turns 999999999
```
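
The example above assumes gemini-openai-proxy is already listening on http://localhost:8080. As a minimal sketch, the proxy can be started with the same Docker image referenced later in this diff:

```zsh
docker run --restart=unless-stopped -it -d -p 8080:8080 --name gemini zhu327/gemini-openai-proxy:latest
```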

The above example is consolidated into shell scripts with prerequisites as follows:
@@ -40,7 +48,7 @@ The above example is consolidated into shell scripts with prerequisites as follo
brew install bash
```

Bash version 4 or higher is needed for `mac_create_and_start_containers.sh`
Bash version 4 or higher is needed for `scripts/mac_create_and_start_containers.sh`

Homebrew provides GNU Bash version 5 via license GPLv3+

@@ -49,7 +57,7 @@ Whereas Mac provides Bash version 3 via license GPLv2
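
To confirm which Bash the scripts will pick up, both interpreters can be checked directly (the /opt/homebrew prefix assumes an Apple Silicon Homebrew install; Intel Macs use /usr/local):

```zsh
/opt/homebrew/bin/bash --version   # Homebrew bash, expected to report version 5.x
/bin/bash --version                # macOS system bash, reports version 3.2.x
```
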
**Create and start containers:**

```zsh
./mac_create_and_start_containers.sh
./scripts/mac_create_and_start_containers.sh
```

**Start hackingBuddyGPT against a container:**
@@ -59,7 +67,7 @@ export GEMINI_API_KEY=
```

```zsh
./mac_start_hackingbuddygpt_against_a_container.sh
./scripts/mac_start_hackingbuddygpt_against_a_container.sh
```

**Troubleshooting:**
2 changes: 1 addition & 1 deletion README.md
@@ -231,7 +231,7 @@ In the Command Palette, type `>` and `Terminal: Create New Terminal` and press t

Type the following to manually run:
```bash
./codespaces_start_hackingbuddygpt_against_a_container.sh
./scripts/codespaces_start_hackingbuddygpt_against_a_container.sh
```
7. Eventually, you should see:

scripts/codespaces_create_and_start_containers.sh
@@ -2,14 +2,23 @@

# Purpose: In GitHub Codespaces, automates the setup of Docker containers,
# preparation of Ansible inventory, and modification of tasks for testing.
# Usage: ./codespaces_create_and_start_containers.sh
# Usage: ./scripts/codespaces_create_and_start_containers.sh

# Enable strict error handling for better script robustness
set -e # Exit immediately if a command exits with a non-zero status
set -u # Treat unset variables as an error and exit immediately
set -o pipefail # Return the exit status of the last command in a pipeline that failed
set -x # Print each command before executing it (useful for debugging)

cd $(dirname $0)

bash_version=$(/bin/bash --version | head -n 1 | awk '{print $4}' | cut -d. -f1)
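# Note: `/bin/bash --version` prints e.g. "GNU bash, version 5.2.21(1)-release ...",
# so awk's fourth field is the version string and `cut -d. -f1` keeps the major number.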

if (( bash_version < 4 )); then
echo 'Error: Requires Bash version 4 or higher.'
exit 1
fi

# Step 1: Initialization

if [ ! -f hosts.ini ]; then
scripts/codespaces_start_hackingbuddygpt_against_a_container.sh
@@ -1,17 +1,27 @@
#!/bin/bash

# Purpose: In GitHub Codespaces, start hackingBuddyGPT against a container
# Usage: ./codespaces_start_hackingbuddygpt_against_a_container.sh
# Usage: ./scripts/codespaces_start_hackingbuddygpt_against_a_container.sh

# Enable strict error handling for better script robustness
set -e # Exit immediately if a command exits with a non-zero status
set -u # Treat unset variables as an error and exit immediately
set -o pipefail # Return the exit status of the last command in a pipeline that failed
set -x # Print each command before executing it (useful for debugging)

cd $(dirname $0)

bash_version=$(/bin/bash --version | head -n 1 | awk '{print $4}' | cut -d. -f1)

if (( bash_version < 4 )); then
echo 'Error: Requires Bash version 4 or higher.'
exit 1
fi

# Step 1: Install prerequisites

# setup virtual python environment
cd ..
python -m venv venv
source ./venv/bin/activate
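# Note: `cd ..` above returns from scripts/ to the repository root (the script first
# changed into its own directory via `cd $(dirname $0)`), so the venv is created at
# the top of the checkout; this reflects the new scripts/ layout introduced here.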

@@ -35,3 +45,21 @@ echo "Starting hackingBuddyGPT against a container..."
echo

wintermute LinuxPrivesc --llm.api_key=$OPENAI_API_KEY --llm.model=gpt-4-turbo --llm.context_size=8192 --conn.host=192.168.122.151 --conn.username=lowpriv --conn.password=trustno1 --conn.hostname=test1

# Alternatively, the following comments demonstrate using gemini-openai-proxy and Gemini

# http://localhost:8080 is gemini-openai-proxy

# gpt-4 maps to gemini-1.5-flash-latest

# Hence use gpt-4 below in --llm.model=gpt-4

# Gemini free tier has a limit of 15 requests per minute, and 1500 requests per day

# Hence --max_turns 999999999 will exceed the daily limit

# docker run --restart=unless-stopped -it -d -p 8080:8080 --name gemini zhu327/gemini-openai-proxy:latest

# export GEMINI_API_KEY=

# wintermute LinuxPrivesc --llm.api_key=$GEMINI_API_KEY --llm.model=gpt-4 --llm.context_size=1000000 --conn.host=192.168.122.151 --conn.username=lowpriv --conn.password=trustno1 --conn.hostname=test1 --llm.api_url=http://localhost:8080 --llm.api_backoff=60 --max_turns 999999999
File renamed without changes.
scripts/mac_create_and_start_containers.sh
@@ -1,13 +1,22 @@
#!/opt/homebrew/bin/bash

# Purpose: Automates the setup of docker containers for local testing on Mac.
# Usage: ./mac_create_and_start_containers.sh
# Purpose: Automates the setup of docker containers for local testing on Mac
# Usage: ./scripts/mac_create_and_start_containers.sh

# Enable strict error handling
set -e
set -u
set -o pipefail
set -x
# Enable strict error handling for better script robustness
set -e # Exit immediately if a command exits with a non-zero status
set -u # Treat unset variables as an error and exit immediately
set -o pipefail # Return the exit status of the last command in a pipeline that failed
set -x # Print each command before executing it (useful for debugging)

cd $(dirname $0)

bash_version=$(/opt/homebrew/bin/bash --version | head -n 1 | awk '{print $4}' | cut -d. -f1)

if (( bash_version < 4 )); then
echo 'Error: Requires Bash version 4 or higher.'
exit 1
fi

# Step 1: Initialization

@@ -21,9 +30,6 @@ if [ ! -f tasks.yaml ]; then
exit 1
fi

# Default value for base port
# BASE_PORT=${BASE_PORT:-49152}

# Default values for network and base port, can be overridden by environment variables
DOCKER_NETWORK_NAME=${DOCKER_NETWORK_NAME:-192_168_65_0_24}
DOCKER_NETWORK_SUBNET="192.168.65.0/24"
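# The ${VAR:-default} expansion keeps the value above unless the caller overrides it,
# e.g. (hypothetical value):
#   DOCKER_NETWORK_NAME=my_local_net ./scripts/mac_create_and_start_containers.sh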
@@ -251,6 +257,6 @@ docker --debug run --restart=unless-stopped -it -d -p 8080:8080 --name gemini-op

# Step 14: Ready to run hackingBuddyGPT

echo "You can now run ./mac_start_hackingbuddygpt_against_a_container.sh"
echo "You can now run ./scripts/mac_start_hackingbuddygpt_against_a_container.sh"

exit 0
scripts/mac_start_hackingbuddygpt_against_a_container.sh
@@ -1,17 +1,27 @@
#!/bin/bash

# Purpose: On a Mac, start hackingBuddyGPT against a container
# Usage: ./mac_start_hackingbuddygpt_against_a_container.sh
# Usage: ./scripts/mac_start_hackingbuddygpt_against_a_container.sh

# Enable strict error handling for better script robustness
set -e # Exit immediately if a command exits with a non-zero status
set -u # Treat unset variables as an error and exit immediately
set -o pipefail # Return the exit status of the last command in a pipeline that failed
set -x # Print each command before executing it (useful for debugging)

cd $(dirname $0)

bash_version=$(/bin/bash --version | head -n 1 | awk '{print $4}' | cut -d. -f1)

if (( bash_version < 3 )); then
echo 'Error: Requires Bash version 3 or higher.'
exit 1
fi

# Step 1: Install prerequisites

# setup virtual python environment
cd ..
python -m venv venv
source ./venv/bin/activate

@@ -44,9 +54,23 @@ echo

PORT=$(docker ps | grep ansible-ready-ubuntu | cut -d ':' -f2 | cut -d '-' -f1)
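# The pipeline above pulls the host port out of the docker ps PORTS column,
# e.g. "0.0.0.0:49152->22/tcp" becomes 49152.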

# http://localhost:8080 is gemini-openai-proxy

# gpt-4 maps to gemini-1.5-flash-latest

# https://github.com/zhu327/gemini-openai-proxy/blob/559085101f0ce5e8c98a94fb75fefd6c7a63d26d/README.md?plain=1#L146

# | gpt-4 | gemini-1.5-flash-latest |

# https://github.com/zhu327/gemini-openai-proxy/blob/559085101f0ce5e8c98a94fb75fefd6c7a63d26d/pkg/adapter/models.go#L60-L61

# case strings.HasPrefix(openAiModelName, openai.GPT4):
# return Gemini1Dot5Flash

# Hence use gpt-4 below in --llm.model=gpt-4

# Gemini free tier has a limit of 15 requests per minute, and 1500 requests per day
# Hence --max_turns 999999999 will exceed the daily limit

# http://localhost:8080 is genmini-openai-proxy
# Hence --max_turns 999999999 will exceed the daily limit

wintermute LinuxPrivesc --llm.api_key=$GEMINI_API_KEY --llm.model=gemini-1.5-flash-latest --llm.context_size=1000000 --conn.host=localhost --conn.port $PORT --conn.username=lowpriv --conn.password=trustno1 --conn.hostname=test1 --llm.api_url=http://localhost:8080 --llm.api_backoff=60 --max_turns 999999999
wintermute LinuxPrivesc --llm.api_key=$GEMINI_API_KEY --llm.model=gpt-4 --llm.context_size=1000000 --conn.host=localhost --conn.port $PORT --conn.username=lowpriv --conn.password=trustno1 --conn.hostname=test1 --llm.api_url=http://localhost:8080 --llm.api_backoff=60 --max_turns 999999999
File renamed without changes.