A self-hosted web API that provides free, unlimited access to modern LLM providers through a single, simple HTTP interface. It includes an optional web GUI for configuration and can run via Python or Docker.
Free to use - No API keys or subscriptions required
Unlimited requests - No rate limiting
Simple HTTP interface - Returns plain text responses
Optional Web GUI - Easy configuration through browser
Docker support - Ready-to-use container available
Smart timeout handling - Automatic retry with optimized timeouts
Note: The demo server, when available, may be overloaded and might not always respond.
Features
Screenshots
Quick start
Run with Docker
Run from source
Usage
Quick examples (browser, curl, Python)
File input
Web GUI
Command-line options
Configuration
Cookies
Proxies
Models and providers
Private mode and password
Siri integration
Requirements
Star history
Contributing
License
Option A: Run with Docker
Pull and run the image, optionally mounting a cookies.json file and mapping the port. When using the Web GUI in Docker, setting a password is recommended (and required by some setups).
docker run -p 5500:5500 d0ckmg/free-gpt4-web-api:latest
With cookies (read-only mount):
docker run \
-v /path/to/your/cookies.json:/cookies.json:ro \
-p 5500:5500 \
d0ckmg/free-gpt4-web-api:latest
Override container port mapping:
docker run -p YOUR_PORT:5500 d0ckmg/free-gpt4-web-api:latest
version: "3.9"
services:
  api:
    image: "d0ckmg/free-gpt4-web-api:latest"
    ports:
      - "YOUR_PORT:5500"
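If you also need cookies in Compose, a volumes entry mirroring the read-only mount from the docker run example should work (a sketch; adjust the host path to your setup):

```yaml
version: "3.9"
services:
  api:
    image: "d0ckmg/free-gpt4-web-api:latest"
    ports:
      - "YOUR_PORT:5500"
    volumes:
      # read-only mount, same as the -v flag in the docker run example
      - "/path/to/your/cookies.json:/cookies.json:ro"
```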
Note:
If you plan to use the Web GUI in Docker, set a password (see “Command-line options”).
The API listens on port 5500 in the container.
Option B: Run from source
Clone the repo
git clone https://github.com/aledipa/Free-GPT4-WEB-API.git
cd Free-GPT4-WEB-API
Install dependencies
pip install -r requirements.txt
Start the server (basic)
python3 src/FreeGPT4_Server.py
When using the Web GUI, always set a secure password:
python3 src/FreeGPT4_Server.py --enable-gui --password your_secure_password
The API returns plain text by default.
Examples:
curl "http://127.0.0.1:5500/?text=Explain%20quicksort%20in%20simple%20terms"
File input (see --file-input in options):
#!/bin/sh
# Send a file as the prompt (requires the server to run with --file-input)
fileTMP="$1"
curl -s -F file=@"${fileTMP}" http://127.0.0.1:5500/
import requests

resp = requests.get("http://127.0.0.1:5500/", params={"text": "Give me a limerick"})
print(resp.text)
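The examples above can be wrapped in small helper functions (a sketch; `ask`, `ask_file`, and `API_URL` are illustrative names, and the multipart field name `file` follows the curl example):

```python
import requests

API_URL = "http://127.0.0.1:5500/"  # adjust host/port to your deployment

def ask(text, keyword="text", timeout=30):
    """GET a prompt; requests URL-encodes the text for you."""
    resp = requests.get(API_URL, params={keyword: text}, timeout=timeout)
    resp.raise_for_status()
    return resp.text

def ask_file(path, timeout=60):
    """POST a file as the prompt (requires --file-input on the server)."""
    with open(path, "rb") as fh:
        resp = requests.post(API_URL, files={"file": fh}, timeout=timeout)
    resp.raise_for_status()
    return resp.text

# Usage (with the server running):
# print(ask("Give me a limerick"))
# print(ask_file("prompt.txt"))
```

The `keyword` parameter mirrors the server's `--keyword` option, so the helper keeps working if you change the query keyword.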
python3 src/FreeGPT4_Server.py --enable-gui
From the GUI you can configure common options (e.g., model, provider, keyword, history, cookies).
Show help:
python3 src/FreeGPT4_Server.py [-h] [--remove-sources] [--enable-gui]
[--private-mode] [--enable-history] [--password PASSWORD]
[--cookie-file COOKIE_FILE] [--file-input] [--port PORT]
[--model MODEL] [--provider PROVIDER] [--keyword KEYWORD]
[--system-prompt SYSTEM_PROMPT] [--enable-proxies] [--enable-virtual-users]
Options:
-h, --help Show help and exit
--remove-sources Remove sources from responses
--enable-gui Enable graphical settings interface
--private-mode Require a private token to access the API
--enable-history Enable message history
--password PASSWORD Set/change the password for the settings page
Note: Mandatory in some Docker environments
--cookie-file COOKIE_FILE Use a cookie file (e.g., /cookies.json)
--file-input Enable file-as-input support (see curl example)
--port PORT HTTP port (default: 5500)
--model MODEL Model to use (default: gpt-4)
--provider PROVIDER Provider to use (default: Bing)
--keyword KEYWORD Change input query keyword (default: text)
--system-prompt SYSTEM_PROMPT
System prompt to steer answers
--enable-proxies Use one or more proxies to reduce blocking
--enable-virtual-users
Enable virtual users to divide requests among multiple users
Some providers require cookies to work properly. For the Bing model, only the “_U” cookie is needed.
Passing cookies via file:
Use --cookie-file /cookies.json when running from source
In Docker, mount your cookies file read-only: -v /path/to/cookies.json:/cookies.json:ro
The GUI also exposes cookie-related settings.
Enable proxies to mitigate blocks:
Start with --enable-proxies
Ensure your environment is configured for aiohttp/aiohttp_socks if you need SOCKS/HTTP proxies.
Models: gpt-4, gpt-4o, deepseek-r1, and other modern LLMs
Default model: gpt-4
Default provider: DuckDuckGo (reliable fallback)
Provider Fallback: Automatic switching between Bing, DuckDuckGo, and other providers
Health Monitoring: Real-time provider status tracking
Change via flags or in the GUI:
--model gpt-4o --provider Bing
Smart Timeout Handling: Optimized 30-second timeouts with automatic retry
Provider Fallback: Automatic switching when the primary provider fails
Health Monitoring: Continuous provider status tracking
Blacklist System: Automatic exclusion of problematic providers
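On the client side, the same retry-on-timeout idea can be sketched with a small stdlib-only helper (`with_retry` is a hypothetical name, not part of this project's API; the server's own fallback logic is separate):

```python
import time

def with_retry(fn, attempts=3, delay=1.0, errors=(TimeoutError, ConnectionError)):
    """Call fn(), retrying on transient errors with a short pause between attempts."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except errors:
            if attempt == attempts:
                raise  # out of attempts: surface the last error
            time.sleep(delay)

# Usage sketch, pairing with a real request function:
# text = with_retry(lambda: requests.get(API_URL, params={"text": "hi"}, timeout=30).text)
```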
Private mode and password
--private-mode requires a private token to access the API
--password protects the settings page (mandatory in some Docker setups)
Security Enhancement: Authentication system hardened against bypass attacks
Logging: All authentication attempts are logged for security monitoring
Use a strong password if you expose the API beyond localhost
Important : Always set a password when using the Web GUI to prevent unauthorized access.
Use the GPTMode Apple Shortcut to ask your self-hosted API via Siri.
Shortcut:
Say “GPT Mode” to Siri and ask your question when prompted.
Troubleshooting:
Timeout Errors: The system automatically retries with fallback providers
Provider Blocks: Health monitoring automatically switches to working providers
Authentication Issues: Set a strong password and check the logs for failed attempts
Docker Permission Issues: Use read-only mounts for sensitive files such as cookies.json
If you encounter issues:
Check the application logs for detailed error information
Verify your provider configuration in the Web GUI
Ensure cookies are properly formatted (if using)
Try different providers through the fallback system
Contributions are welcome! Feel free to open issues and pull requests to improve features, docs, or reliability.
GNU General Public License v3.0
See LICENSE for details.