
Commit 791e0b6

Merge pull request #13 from ipa-lab/docs: update README
2 parents d88dd2c + 72af5a3

File tree

1 file changed (+33, −31 lines)


README.md

Lines changed: 33 additions & 31 deletions
@@ -6,16 +6,16 @@ What is it doing? it uses SSH to connect to a (presumably) vulnerable virtual ma
 
 This tool is only intended for experimenting with this setup, only use it against virtual machines. Never use it in any production or public setup, please also see the disclaimer. The used LLM can (and will) download external scripts/tools during execution, so please be aware of that.
 
-For information about its implemenation, please see our [implemenation notes](docs/implementation_notes.md). All source code can be found on [github](https://github.com/ipa-lab/hackingbuddyGPT).
+For information about its implementation, please see our [implementation notes](docs/implementation_notes.md). All source code can be found on [github](https://github.com/ipa-lab/hackingbuddyGPT).
 
 ## Current features:
 
 - connects over SSH (linux targets) or SMB/PSExec (windows targets)
-- supports multiple openai models (gpt-3.5-turbo, gpt4, gpt-3.5-turbo-16k, etc.)
+- supports OpenAI REST-API compatible models (gpt-3.5-turbo, gpt4, gpt-3.5-turbo-16k, etc.)
 - supports locally running LLMs
 - beautiful console output
-- log storage in sqlite either into a file or in-memory
-- automatic (very rough) root detection
+- logs run data through sqlite either into a file or in-memory
+- automatic root detection
 - can limit rounds (how often the LLM will be asked for a new command)
 
 ## Vision Paper
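The feature list in the hunk above mentions logging run data through sqlite, either into a file or in-memory. As a rough sketch of how that can work with Python's standard `sqlite3` module (the table layout and function names here are hypothetical illustrations, not taken from the project's code):

```python
import sqlite3

def open_run_log(path=":memory:"):
    # ":memory:" keeps the log in RAM; any file path persists it to disk
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS commands (round INTEGER, cmd TEXT, result TEXT)"
    )
    return conn

def log_round(conn, round_no, cmd, result):
    # parameterized insert, one row per LLM-suggested command
    conn.execute("INSERT INTO commands VALUES (?, ?, ?)", (round_no, cmd, result))
    conn.commit()

conn = open_run_log()  # in-memory log; pass e.g. "run.sqlite" to keep it on disk
log_round(conn, 1, "id", "uid=1000(lowpriv) gid=1000(lowpriv)")
rows = conn.execute("SELECT round, cmd FROM commands").fetchall()
print(rows)  # → [(1, 'id')]
```

Switching between the in-memory and on-disk variants is just a matter of the path passed to `sqlite3.connect`, which is presumably why the README advertises both.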
@@ -25,28 +25,37 @@ hackingBuddyGPT is described in the paper [Getting pwn'd by AI: Penetration Test
 If you cite this repository/paper, please use:
 
 ~~~ bibtex
-@inproceedings{getting_pwned,
-author = {Happe, Andreas and Cito, Jürgen},
-title = {Getting pwn’d by AI: Penetration Testing with Large Language Models},
-year = {2023},
-publisher = {Association for Computing Machinery},
-address = {New York, NY, USA},
-url = {https://doi.org/10.1145/3611643.3613083},
-doi = {10.1145/3611643.3613083},
-booktitle = {Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering},
-numpages = {5},
-keywords = {machine learning, penetration testing},
-location = {San Francisco, USA},
-series = {ESEC/FSE 2023}
+@inproceedings{Happe_2023, series={ESEC/FSE ’23},
+title={Getting pwn’d by AI: Penetration Testing with Large Language Models},
+url={http://dx.doi.org/10.1145/3611643.3613083},
+DOI={10.1145/3611643.3613083},
+booktitle={Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering},
+publisher={ACM},
+author={Happe, Andreas and Cito, Jürgen},
+year={2023},
+month=nov, collection={ESEC/FSE ’23}
+}
+~~~
+
+This work is partially based upon our empiric research into [how hackers work](https://arxiv.org/abs/2308.07057):
+
+~~~ bibtex
+@inproceedings{Happe_2023, series={ESEC/FSE ’23},
+title={Understanding Hackers’ Work: An Empirical Study of Offensive Security Practitioners},
+url={http://dx.doi.org/10.1145/3611643.3613900},
+DOI={10.1145/3611643.3613900},
+booktitle={Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering},
+publisher={ACM},
+author={Happe, Andreas and Cito, Jürgen},
+year={2023},
+month=nov, collection={ESEC/FSE ’23}
 }
 ~~~
 
 ## Example run
 
 This is a simple example run of `wintermute.py` using GPT-4 against a vulnerable VM. More example runs can be seen in [our collection of historic runs](docs/old_runs/old_runs.md).
 
-This happened during a recent run:
-
 ![Example wintermute run](example_run_gpt4.png)
 
 Some things to note:
@@ -59,14 +68,13 @@ Some things to note:
 
 ## Setup and Usage
 
-You'll need:
+We try to keep our python dependencies as light as possible. This should allow for easier experimentation. To run the main priv-escalation program (which is called `wintermute`) together with an OpenAI-based model you need:
 
-1. a vulnerable virtual machine, I am currenlty using [Lin.Security.1](https://www.vulnhub.com/entry/linsecurity-1,244/) as a target.
-    - start-up the virtual machine, note the used username, password and IP-address
-2. an OpenAI API account, you can find the needed keys [in your account page](https://platform.openai.com/account/api-keys)
+1. an OpenAI API account, you can find the needed keys [in your account page](https://platform.openai.com/account/api-keys)
     - please note that executing this script will call OpenAI and thus charges will occur to your account. Please keep track of those.
+2. a potential target that is accessible over SSH. You can either use a deliberately vulnerable machine such as [Lin.Security.1](https://www.vulnhub.com/entry/) or a security benchmark such as our [own priv-esc benchmark](https://github.com/ipa-lab/hacking-benchmark).
 
-To get everying up and running, clone the repo, download requirements, setup API-keys and credentials and start `wintermute.py`:
+To get everything up and running, clone the repo, download requirements, setup API-keys and credentials and start `wintermute.py`:
 
 ~~~ bash
 # clone the repository
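The setup steps above revolve around a `.env` file copied from `.env.example` and then filled with the OpenAI API key, the VM's IP and credentials. Such a file is just `KEY=VALUE` lines; a minimal stdlib parser, sketched here with placeholder key names (the real names are defined in the repository's `.env.example`), could look like:

```python
import tempfile

def load_env(path):
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    env = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                env[key.strip()] = value.strip().strip('"')
    return env

# demo with a throwaway file standing in for the repo's .env;
# the key names below are illustrative placeholders
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as fh:
    fh.write('# connection settings\nOPENAI_API_KEY="sk-..."\nTARGET_IP=192.168.56.107\n')

env = load_env(fh.name)
print(env["TARGET_IP"])  # → 192.168.56.107
```

In practice projects often use a library such as `python-dotenv` for this instead of a hand-rolled parser; the sketch only shows what the file format amounts to.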
@@ -85,14 +93,8 @@ $ cp .env.example .env
 
 # IMPORTANT: setup your OpenAI API key, the VM's IP and credentials within .env
 $ vi .env
-~~~
-
-### Usage
 
-It's just a simple python script, so..
-
-~~~ bash
-# start wintermute
+# start wintermute, i.e., attack the configured virtual machine
 $ python wintermute.py
 ~~~
 
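Taken together, the README fragments in this diff describe wintermute's core behaviour: each round the LLM is asked for a new command, the command runs on the target, and the run stops after a configurable round limit or once root is detected. A schematic sketch of such a loop (the function names and the `uid=0` check are illustrative assumptions, not the project's actual code):

```python
def run_rounds(ask_llm, execute, max_rounds):
    """Ask the LLM for one command per round until root or the round limit."""
    history = []
    for round_no in range(1, max_rounds + 1):
        cmd = ask_llm(history)        # next command suggested by the model
        output = execute(cmd)         # run it on the target (e.g. over SSH)
        history.append((round_no, cmd, output))
        if "uid=0(root)" in output:   # very rough root detection via `id` output
            break
    return history

# stub run: the "LLM" always suggests `id`, the "target" is already root,
# so the loop stops after a single round
history = run_rounds(lambda h: "id", lambda c: "uid=0(root) gid=0(root)", max_rounds=5)
print(len(history))  # → 1
```

Passing the accumulated `history` back to `ask_llm` is what lets the model refine its next command based on earlier output, which matches the round-limiting feature the README lists.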
