A simple chatbot built with Ollama running the gemma3:1b model and a Streamlit UI.
This project lets you run a local chatbot with conversation history, multi-chat support, and an interactive interface.
- Chat with the `gemma3:1b` model locally
- Conversation history (saved per chat)
- Multiple chat sessions (switch between them in the sidebar)
- Clear or create new chat sessions easily
- Scrollable chat UI
- Docker support for deployment
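The per-chat bookkeeping behind the multi-chat and clear features can be sketched with a plain dict keyed by chat id. This is a simplified stand-in for what the app would keep in Streamlit's `st.session_state`; the class and method names here are illustrative, not the project's actual code:

```python
# Simplified multi-chat store: each chat id maps to its own message history.
# In the real app this state lives in st.session_state; this is only a sketch.
class ChatStore:
    def __init__(self):
        self.chats = {}    # chat_id -> list of {"role": ..., "content": ...} dicts
        self.active = None  # currently selected chat

    def new_chat(self, chat_id):
        """Create an empty history and make it the active chat."""
        self.chats[chat_id] = []
        self.active = chat_id

    def switch(self, chat_id):
        """Switch between sessions, as the sidebar does."""
        self.active = chat_id

    def add_message(self, role, content):
        """Append one turn to the active chat's history."""
        self.chats[self.active].append({"role": role, "content": content})

    def clear(self):
        """Wipe the active chat's history ('Clear chat')."""
        self.chats[self.active] = []
```

Because each chat id owns an independent history list, switching chats in the sidebar only changes which list the UI renders and appends to.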
```
Gemma3_chatbot/
├── __pycache__/
├── Image/
├── venv/
├── backend.py
├── Dockerfile
├── main.py
├── README.md
└── requirements.txt
```
- Python 3.10+
- Streamlit
- Ollama
- Requests / httpx (for API calls)
Make sure you have Python 3.10+ installed.
Install VS Code and open the project folder.
Ollama provides CLI tools for downloading and running its models locally.

Install Ollama: follow the instructions at https://ollama.com/docs for your OS.
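Once Ollama is installed, pull the model with `ollama pull gemma3:1b`. The backend can then talk to Ollama's local HTTP chat API. A minimal sketch, assuming the default server at `localhost:11434` (the function names are illustrative, not the project's actual `backend.py`):

```python
# Minimal sketch of calling a local Ollama server's /api/chat endpoint.
# Assumes Ollama is running on its default port 11434 with gemma3:1b pulled.
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"

def build_payload(history, user_message, model="gemma3:1b"):
    """Append the new user turn and build the request body Ollama expects."""
    messages = history + [{"role": "user", "content": user_message}]
    return {"model": model, "messages": messages, "stream": False}

def chat(history, user_message):
    """Send the conversation so far plus the new message; return the reply text."""
    resp = requests.post(OLLAMA_URL, json=build_payload(history, user_message),
                         timeout=120)
    resp.raise_for_status()
    return resp.json()["message"]["content"]
```

Passing the full `history` on every call is what gives the bot conversation memory: the model itself is stateless between requests.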
Create a new folder named gemma3_chatbot.
Add the project dependencies to `requirements.txt`:

```
subprocess-tee
ollama
```
Open a terminal in the folder and run:
```bash
python -m venv venv
source venv/bin/activate   # Linux/macOS
venv\Scripts\activate      # Windows
pip install -r requirements.txt
```
### Local

```bash
streamlit run main.py
```
```text
💬 Chatbot using Ollama Gemma3:1b. Type 'exit' to quit.
You: Hello!
Bot: Hello! How can I assist you today?
```
### Docker

```bash
docker build -t gemma3-chatbot .
docker run -p 8501:8501 gemma3-chatbot
```
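For reference, the `Dockerfile` in the project tree for a Streamlit app like this typically looks roughly as follows (a sketch under common conventions, not necessarily the project's exact file):

```dockerfile
# Sketch of a typical Dockerfile for a Streamlit app; details may differ
# from the actual project file.
FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8501
CMD ["streamlit", "run", "main.py", "--server.address=0.0.0.0"]
```

Note that Ollama itself runs outside this container, so the app inside it must be able to reach the Ollama server on the host (e.g., via `host.docker.internal` on Docker Desktop) rather than `localhost`.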
### ✍️ Author

Debbrath Debnath

📫 Connect on LinkedIn
🌐 GitHub Profile