Merged

V4 #3

28 commits
5771d46
try to improve token counting
andreashappe Sep 16, 2023
4c0eb6d
add timestamps to runs
andreashappe Sep 16, 2023
4ca10b6
allow for different hostnames during root detection
andreashappe Sep 16, 2023
0423702
fix: history in next-cmd, SSH root detection; add: logging
andreashappe Sep 18, 2023
b8ed6bf
do not reuse host SSH keys
andreashappe Sep 18, 2023
9d220fc
remove some newlines
andreashappe Sep 18, 2023
9ad8bad
add a hint for each virtual machine
andreashappe Sep 18, 2023
55b42db
increase SSH timeout to allow for docker operations
andreashappe Sep 18, 2023
4830017
split up analyze_response into response/state
andreashappe Sep 18, 2023
bacf3df
make code a bit more readable
andreashappe Sep 19, 2023
772e05e
add hints for two new test VMs
andreashappe Sep 19, 2023
1414390
fix: status code checking for openai connection
andreashappe Sep 19, 2023
6fefda2
fix: actually perform back-off in case of rate-limiting
andreashappe Sep 19, 2023
4fdee6e
colorize important stuff on console output
andreashappe Sep 19, 2023
18a1fb1
switch from JSON to text-based prompt format
andreashappe Sep 19, 2023
8b2f665
chg: make root detection more resistant with a regexp
andreashappe Sep 20, 2023
cc92546
try to remove more weird wrapping from LLM results
andreashappe Sep 20, 2023
f67b903
output the command before it is executed
andreashappe Sep 20, 2023
11a1d2b
fix: array index for hints
andreashappe Sep 20, 2023
3269080
make openai connection more configurable
andreashappe Sep 20, 2023
e5c773f
fix whitespace
andreashappe Sep 20, 2023
3c995b4
remove unused code
andreashappe Sep 20, 2023
3cdf85a
del: remove openai lib based interface, we're using the REST interface
andreashappe Sep 20, 2023
d275421
make LLM server url configurable to allow for running local LLMs
andreashappe Sep 20, 2023
ff957be
oobabooga can use existing llm server config too
andreashappe Sep 20, 2023
ab735a4
try to allow for non-openAI tokenizers
andreashappe Sep 20, 2023
d564e4f
use openai_rest as default connection
andreashappe Sep 20, 2023
af2c8fe
wrap llama2 prompts to get better results
andreashappe Sep 20, 2023
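
Several of the commits above revolve around the OpenAI REST handling: status-code checking (1414390), actually sleeping when rate-limited (6fefda2), and a configurable server URL for running local LLMs (d275421). The following is a minimal sketch of what such a retry loop could look like; it is not taken from the PR, and the endpoint path, payload layout, and function name are assumptions.

```python
# Hypothetical sketch of a REST query with back-off on HTTP 429 (rate limiting).
# Endpoint, payload fields and function name are assumptions, not the PR's code.
import time
import requests

def query_llm_rest(base_url: str, api_key: str, model: str, prompt: str,
                   max_retries: int = 5) -> dict:
    headers = {"Authorization": f"Bearer {api_key}"}
    body = {"model": model, "messages": [{"role": "user", "content": prompt}]}

    backoff = 1.0
    for _ in range(max_retries):
        response = requests.post(f"{base_url}/v1/chat/completions",
                                 json=body, headers=headers, timeout=60)
        if response.status_code == 200:
            return response.json()
        if response.status_code == 429:
            # rate-limited: actually wait before retrying instead of hammering the API
            time.sleep(backoff)
            backoff *= 2
            continue
        # any other status code is a hard error
        response.raise_for_status()
    raise RuntimeError(f"still rate-limited after {max_retries} attempts")
```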
remove unused code
andreashappe committed Sep 20, 2023
commit 3c995b4096ab9203a3f54523e648b31d2380a424
18 changes: 1 addition & 17 deletions llm_with_state.py
@@ -1,4 +1,3 @@
-import json
 import time
 import typing
 
@@ -95,19 +94,4 @@ def create_and_ask_prompt_text(self, template_file, **params):
         tic = time.perf_counter()
         result, tok_query, tok_res = self.llm_connection.exec_query(self.llm_connection.get_model(), self.llm_connection.get_context_size(), prompt)
         toc = time.perf_counter()
-        return LLMResult(result, prompt, result, toc - tic, tok_query, tok_res)
-
-
-    def create_and_ask_prompt(self, template_file, **params):
-        template = Template(filename='templates/' + template_file)
-        prompt = template.render(**params)
-        tic = time.perf_counter()
-        result, tok_query, tok_res = self.llm_connection.exec_query(self.llm_connection.get_model(), self.llm_connection.get_context_size(), prompt)
-        toc = time.perf_counter()
-        try:
-            json_answer = json.loads(result)
-        except Exception as e:
-            print("there as an exception with JSON parsing: " + str(e))
-            print("debug[the plain result]: " + str(result))
-
-        return LLMResult(json_answer, prompt, result, toc - tic, tok_query, tok_res)
+        return LLMResult(result, prompt, result, toc - tic, tok_query, tok_res)
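
For reference, the removed create_and_ask_prompt was the old JSON-parsing variant; the retained create_and_ask_prompt_text renders a template, times the call with time.perf_counter(), and returns an LLMResult. A hypothetical sketch of the container implied by the constructor call LLMResult(result, prompt, result, toc - tic, tok_query, tok_res) is shown below; the field names are guesses, not taken from the repository.

```python
# Hypothetical sketch of the result container implied by the constructor call in
# the diff above; field names and types are assumptions, not repository code.
from dataclasses import dataclass

@dataclass
class LLMResult:
    answer: str            # usable answer (identical to raw_result now that JSON parsing is gone)
    prompt: str            # fully rendered prompt that was sent to the LLM
    raw_result: str        # unmodified text returned by the LLM
    duration: float        # wall-clock seconds measured with time.perf_counter()
    tokens_query: int      # token count of the prompt
    tokens_response: int   # token count of the response
```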