An iOS chat application that connects to a local AI assistant running on your Mac. Features voice input, web search, document analysis, and seamless CloudKit synchronization.
Pigeon Chat is the iOS companion to LLM Pigeon Server, enabling you to:
- 💬 Chat with AI models running on your Mac
- 🎤 Use voice transcription for hands-free input
- 🔍 Enable web search for real-time information
- 📎 Attach images and PDFs with automatic text extraction
- 🔊 Listen to responses with text-to-speech
- 🧠 Toggle "thinking mode" for detailed reasoning
- 📱 Access conversations offline with intelligent caching
Feature highlights:
- Real-time sync via CloudKit with push notifications
- Offline mode with local caching for conversations
- Multiple LLM providers: Built-in models, LM Studio, and Ollama
- Advanced parameters: Configure temperature, context length, top-k, top-p
- Stop generation: Interrupt long responses mid-generation
- Voice input using on-device WhisperKit transcription (600MB model)
- Text-to-speech with language detection and premium voices (see the sketch below)
- Hands-free operation with voice recording button
- Web search integration for current information (requires Serper API key on macOS)
- Image attachments with OCR text extraction
- PDF attachments with automatic text extraction
- Multi-file support for batch processing
- Chain-of-thought reasoning for complex queries
- Separate parameters for thinking vs regular responses
- Toggle per conversation for flexibility
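As a rough illustration of how the text-to-speech feature could be wired with Apple's frameworks (a sketch only, not the app's actual code; the `speak` helper and the English fallback are assumptions):

```swift
import AVFoundation
import NaturalLanguage

let synthesizer = AVSpeechSynthesizer() // keep a strong reference while speaking

// Hypothetical helper: detect the text's dominant language, then speak it.
func speak(_ text: String) {
    let recognizer = NLLanguageRecognizer()
    recognizer.processString(text)
    let languageCode = recognizer.dominantLanguage?.rawValue ?? "en" // assumed fallback
    let utterance = AVSpeechUtterance(string: text)
    utterance.voice = AVSpeechSynthesisVoice(language: languageCode)
    synthesizer.speak(utterance)
}
```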
Requirements:
- iOS 16.0+
- iPhone 13 or newer (for voice transcription)
- Xcode 14.0+
- Active Apple Developer account
- iCloud account (same as macOS server)
- LLM Pigeon Server running on macOS
To install, clone the repository:

```bash
git clone https://github.com/permaevidence/LLM-Pigeon.git
cd LLM-Pigeon
```

Important: Use the SAME CloudKit container as your macOS server!
- Open the project in Xcode
- Select your development team
- Update the bundle identifier (e.g., `com.yourcompany.pigeonchat`)
- The CloudKit container MUST match: `iCloud.com.pigeonchat.pigeonchat` (see the sketch after the capabilities list)
In Xcode → Target → Signing & Capabilities, enable:
- CloudKit (using shared container)
- Push Notifications
- Background Modes: Remote notifications
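Because the app and the macOS server must share one CloudKit container, the container is always referenced by that exact identifier. A minimal sketch (variable names are illustrative):

```swift
import CloudKit

// The identifier must match the macOS server's container exactly.
let container = CKContainer(identifier: "iCloud.com.pigeonchat.pigeonchat")
let privateDatabase = container.privateCloudDatabase
```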
On first launch:
- Accept the terms of service
- Grant notification permissions
- (Optional) Download voice model for transcription
- Ensure macOS server is running
Quick start:
- Start the macOS server first with your chosen LLM
- Launch Pigeon Chat on your iPhone
- Verify the connection: check for the "Synced ✓" indicator (see the sketch after this list)
- Start chatting!
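One way the sync check could look under the hood, assuming a plain CloudKit account-status query (a sketch, not the app's actual implementation):

```swift
import CloudKit

// Hypothetical helper: returns true when iCloud is reachable for this container.
func isCloudKitAvailable() async -> Bool {
    let container = CKContainer(identifier: "iCloud.com.pigeonchat.pigeonchat")
    let status = (try? await container.accountStatus()) ?? .couldNotDetermine
    return status == .available
}
```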
Tap the model name in the navigation bar to switch between:
- Built-in models (Qwen 0.6B to 32B)
- LM Studio (custom model names)
- Ollama (custom model names)
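Conceptually, the three backends could be modeled like this (illustrative types only; the names are assumptions, not the app's API):

```swift
// Illustrative model-selection type.
enum LLMProvider {
    case builtIn(model: String)  // e.g. Qwen variants from 0.6B to 32B
    case lmStudio(model: String) // custom model name served by LM Studio
    case ollama(model: String)   // custom model name served by Ollama
}

let selection: LLMProvider = .ollama(model: "llama3") // example value
```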
To use voice input:
- Tap the microphone button
- On first use, download the 600 MB transcription model
- Speak your message
- Tap stop when done
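Transcription runs on-device via WhisperKit; a rough sketch might look like the following (the exact WhisperKit API surface varies between versions, so treat these calls as an approximation):

```swift
import WhisperKit

// Approximate sketch: load the downloaded model and transcribe a recording.
func transcribe(audioPath: String) async throws -> String {
    let pipeline = try await WhisperKit()             // loads the on-device model
    let results = try await pipeline.transcribe(audioPath: audioPath)
    return results.map(\.text).joined(separator: " ") // stitch segment texts together
}
```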
To attach files:
- Tap the + button
- Choose images or PDFs
- Text is automatically extracted
- Send with your message
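The extraction step can be done entirely with Apple frameworks; a minimal sketch using Vision for image OCR and PDFKit for PDFs (helper names are illustrative):

```swift
import Vision
import PDFKit
import UIKit

// OCR an image with the Vision framework.
func extractText(from image: UIImage) throws -> String {
    guard let cgImage = image.cgImage else { return "" }
    let request = VNRecognizeTextRequest()
    request.recognitionLevel = .accurate
    try VNImageRequestHandler(cgImage: cgImage).perform([request])
    return (request.results ?? [])
        .compactMap { $0.topCandidates(1).first?.string }
        .joined(separator: "\n")
}

// Pull the text layer out of a PDF.
func extractText(fromPDF url: URL) -> String {
    PDFDocument(url: url)?.string ?? ""
}
```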
Web search:
- Toggle the 🌐 button to enable or disable web search
- Searches are performed on the macOS server
- Results are integrated into AI responses
Thinking mode:
- Toggle the 🧠 button when available
- Enables step-by-step reasoning
- Uses separate model parameters
Each provider (Built-in, LM Studio, Ollama) has:
- Regular parameters: For standard responses
- Thinking parameters: For reasoning mode
- System prompts, temperature, context length, top-k/p
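A plausible shape for these per-provider settings (field names and default values are assumptions for illustration, not the app's actual types):

```swift
// Hypothetical settings model.
struct GenerationParameters {
    var systemPrompt: String = ""
    var temperature: Double = 0.7
    var contextLength: Int = 4096 // in tokens
    var topK: Int = 40
    var topP: Double = 0.95
}

struct ProviderSettings {
    var regular: GenerationParameters  // standard responses
    var thinking: GenerationParameters // reasoning mode
}
```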
Settings and storage:
- Enable or disable transcription
- Download/manage voice model
- View cache size and conversation count
- Clear local cache when needed
- Automatic offline access
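The cache-size figure shown in Settings could be computed with plain FileManager calls; a sketch (the cache directory location and function name are assumptions):

```swift
import Foundation

// Illustrative: sum the sizes of files in a cache directory.
func cacheSizeInBytes(at directory: URL) -> Int64 {
    let files = (try? FileManager.default.contentsOfDirectory(
        at: directory,
        includingPropertiesForKeys: [.fileSizeKey])) ?? []
    return files.reduce(Int64(0)) { total, file in
        let size = (try? file.resourceValues(forKeys: [.fileSizeKey]).fileSize) ?? 0
        return total + Int64(size)
    }
}
```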
If sync isn't working:
- Verify both devices use the same iCloud account
- Check CloudKit container matches
- Ensure macOS server is running
- Check network connectivity
If voice transcription is unavailable:
- It is only available on iPhone 13 or newer
- Check that Settings → Voice Transcription is enabled
- Ensure model is downloaded
If attachments aren't working:
- Grant photo library permissions
- For PDFs: Ensure file access permissions
- Check attachment preview before sending
Conversation length limits (see the sketch after this list):
- Each conversation is limited to ~800k characters
- Start new conversation when warned
- Old conversations remain in history
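The warning amounts to a simple character-count check; for illustration (the constant and function names are hypothetical):

```swift
// Hypothetical guard for the ~800k-character ceiling.
let conversationCharacterLimit = 800_000

func shouldWarnAboutLength(of conversationText: String) -> Bool {
    conversationText.count >= conversationCharacterLimit
}
```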
Privacy:
- On-device transcription: voice never leaves your phone
- Local processing: AI runs on your Mac
- Private sync: CloudKit encryption for all data
- No external servers: Except optional web search
To contribute:
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing`)
- Commit your changes (`git commit -m 'Add feature'`)
- Push the branch (`git push origin feature/amazing`)
- Open a Pull Request
Built with:
- WhisperKit for voice transcription
- MarkdownUI for formatting
- CloudKit for seamless sync
- Vision framework for OCR
MIT License - see LICENSE file for details
Project Link: https://github.com/permaevidence/LLM-Pigeon