Create customizable AI assistants, automations, chatbots, and agents that run 100% locally. No need for agentic Python libraries or cloud service keys: just bring your GPU (or even just a CPU) and a web browser.
LocalAGI is a powerful, self-hostable AI Agent platform that lets you design AI automations without writing code. Create agents with a couple of clicks, connect them via MCP, and give them skills with skillserver. Every agent exposes a complete drop-in replacement for OpenAI's Responses API with advanced agentic capabilities. No clouds. No data leaks. Just pure local AI that works on consumer-grade hardware (CPU and GPU).
Are you tired of AI wrappers calling out to cloud APIs, risking your privacy? So were we.
LocalAGI ensures your data stays exactly where you want it—on your hardware. No API keys, no cloud subscriptions, no compromise.
# Clone the repository
git clone https://github.com/mudler/LocalAGI
cd LocalAGI
# CPU setup (default)
docker compose up
# NVIDIA GPU setup
docker compose -f docker-compose.nvidia.yaml up
# Intel GPU setup (for Intel Arc and integrated GPUs)
docker compose -f docker-compose.intel.yaml up
# AMD GPU setup
docker compose -f docker-compose.amd.yaml up
# Start with a specific model (see available models at models.localai.io, or localai.io to use any model from Hugging Face)
MODEL_NAME=gemma-3-12b-it docker compose up
# NVIDIA GPU setup with custom multimodal and image models
MODEL_NAME=gemma-3-12b-it \
MULTIMODAL_MODEL=moondream2-20250414 \
IMAGE_MODEL=flux.1-dev-ggml \
docker compose -f docker-compose.nvidia.yaml up
Now you can access and manage your agents at http://localhost:8080
Still having issues? See this YouTube video: https://youtu.be/HtVwIxW3ePg
🆕 LocalAGI is now part of a comprehensive suite of AI tools designed to work together, alongside LocalAI (model serving) and LocalRecall (semantic memory/RAG).
LocalAGI supports multiple hardware configurations through Docker Compose profiles:
The NVIDIA GPU profile (`docker compose -f docker-compose.nvidia.yaml up`) and the Intel GPU profile (`docker compose -f docker-compose.intel.yaml up`) both default to `gemma-3-4b-it-qat`, `moondream2-20250414`, and `sd-1.5-ggml`, and support the following variables:

- `MODEL_NAME`: Text model to use
- `MULTIMODAL_MODEL`: Multimodal model to use
- `IMAGE_MODEL`: Image generation model to use
- `LOCALAI_SINGLE_ACTIVE_BACKEND`: Set to `true` to enable single active backend mode

You can customize the models used by LocalAGI by setting environment variables when running docker-compose. For example:
# CPU with custom model
MODEL_NAME=gemma-3-12b-it docker compose up
# NVIDIA GPU with custom models
MODEL_NAME=gemma-3-12b-it \
MULTIMODAL_MODEL=moondream2-20250414 \
IMAGE_MODEL=flux.1-dev-ggml \
docker compose -f docker-compose.nvidia.yaml up
# Intel GPU with custom models
MODEL_NAME=gemma-3-12b-it \
MULTIMODAL_MODEL=moondream2-20250414 \
IMAGE_MODEL=sd-1.5-ggml \
docker compose -f docker-compose.intel.yaml up
# With custom actions directory
LOCALAGI_CUSTOM_ACTIONS_DIR=/app/custom-actions docker compose up
If no models are specified, it will use the defaults:

- `gemma-3-4b-it-qat` (text)
- `moondream2-20250414` (multimodal)
- `sd-1.5-ggml` (image)

Good (relatively small) models that have been tested are:

- `qwen_qwq-32b` (best at coordinating agents)
- `gemma-3-12b-it`
- `gemma-3-27b-it`
Explore the detailed documentation below, including environment configuration, connectors, and the REST API.
LocalAGI supports environment-based configuration. Note that these environment variables need to be set on the `localagi` container in the docker-compose file to take effect.

| Variable | What It Does |
|---|---|
| `LOCALAGI_MODEL` | Your go-to model |
| `LOCALAGI_MULTIMODAL_MODEL` | Optional model for multimodal capabilities |
| `LOCALAGI_LLM_API_URL` | OpenAI-compatible API server URL |
| `LOCALAGI_LLM_API_KEY` | API authentication |
| `LOCALAGI_TIMEOUT` | Request timeout settings |
| `LOCALAGI_STATE_DIR` | Where state gets stored |
| `LOCALAGI_LOCALRAG_URL` | LocalRecall connection |
| `LOCALAGI_SSHBOX_URL` | LocalAGI SSHBox URL, e.g. user:pass@ip:port |
| `LOCALAGI_ENABLE_CONVERSATIONS_LOGGING` | Toggle conversation logs |
| `LOCALAGI_API_KEYS` | A comma-separated list of API keys used for authentication |
| `LOCALAGI_CUSTOM_ACTIONS_DIR` | Directory containing custom Go action files to be automatically loaded |
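As a minimal sketch of how these variables might be set on the `localagi` service in `docker-compose.yaml` (the service name and values here are illustrative, borrowed from the development setup later in this document):

```yaml
services:
  localagi:
    environment:
      - LOCALAGI_MODEL=gemma-3-4b-it-qat
      - LOCALAGI_LLM_API_URL=http://localai:8080
      - LOCALAGI_STATE_DIR=/pool
      - LOCALAGI_TIMEOUT=5m
```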
Download ready-to-run binaries from the Releases page.
Requirements: Go (to build the backend) and Bun (to build the web UI).
# Clone repo
git clone https://github.com/mudler/LocalAGI.git
cd LocalAGI
# Build it
cd webui/react-ui && bun i && bun run build
cd ../..
go build -o localagi
# Run it
./localagi
LocalAGI can be used as a Go library to programmatically create and manage AI agents. Let's start with a simple example of creating a single agent:
import (
	"log"

	"github.com/mudler/LocalAGI/core/agent"
	"github.com/mudler/LocalAGI/core/types" // used when defining custom actions
)

// Create a new agent with basic configuration.
// The variable is named myAgent so it does not shadow the agent package.
myAgent, err := agent.New(
	agent.WithModel("gpt-4"),
	agent.WithLLMAPIURL("http://localhost:8080"),
	agent.WithLLMAPIKey("your-api-key"),
	agent.WithSystemPrompt("You are a helpful assistant."),
	agent.WithCharacter(agent.Character{
		Name: "my-agent",
	}),
	agent.WithActions(
	// Add your custom actions here
	),
	agent.WithStateFile("./state/my-agent.state.json"),
	agent.WithCharacterFile("./state/my-agent.character.json"),
	agent.WithTimeout("10m"),
	agent.EnableKnowledgeBase(),
	agent.EnableReasoning(),
)
if err != nil {
	log.Fatal(err)
}

// Start the agent
go func() {
	if err := myAgent.Run(); err != nil {
		log.Printf("Agent stopped: %v", err)
	}
}()

// Stop the agent when done
myAgent.Stop()
This basic example shows how to:

- Configure an agent with a model, API endpoint, character, and state files
- Start the agent in a background goroutine
- Stop the agent when done
For managing multiple agents, you can use the AgentPool system:
import (
"github.com/mudler/LocalAGI/core/state"
"github.com/mudler/LocalAGI/core/types"
)
// Create a new agent pool
pool, err := state.NewAgentPool(
"default-model", // default model name
"default-multimodal-model", // default multimodal model
"image-model", // image generation model
"http://localhost:8080", // API URL
"your-api-key", // API key
"./state", // state directory
"http://localhost:8081", // LocalRAG API URL
func(config *AgentConfig) func(ctx context.Context, pool *AgentPool) []types.Action {
// Define available actions for agents
return func(ctx context.Context, pool *AgentPool) []types.Action {
return []types.Action{
// Add your custom actions here
}
}
},
func(config *AgentConfig) []Connector {
// Define connectors for agents
return []Connector{
// Add your custom connectors here
}
},
func(config *AgentConfig) []DynamicPrompt {
// Define dynamic prompts for agents
return []DynamicPrompt{
// Add your custom prompts here
}
},
func(config *AgentConfig) types.JobFilters {
// Define job filters for agents
return types.JobFilters{
// Add your custom filters here
}
},
"10m", // timeout
true, // enable conversation logs
)
// Create a new agent in the pool
agentConfig := &AgentConfig{
Name: "my-agent",
Model: "gpt-4",
SystemPrompt: "You are a helpful assistant.",
EnableKnowledgeBase: true,
EnableReasoning: true,
// Add more configuration options as needed
}
err = pool.CreateAgent("my-agent", agentConfig)
// Start all agents
err = pool.StartAll()
// Get agent status
status := pool.GetStatusHistory("my-agent")
// Stop an agent
pool.Stop("my-agent")
// Remove an agent
err = pool.Remove("my-agent")
Key features available through the library:

- Single-agent creation and lifecycle management (start, stop)
- Agent pools for managing multiple agents with shared defaults
- Custom actions, connectors, dynamic prompts, and job filters
- Status history and persistent state
For more details about available configuration options and features, refer to the Agent Configuration Reference section.
LocalAGI provides two powerful ways to extend its functionality with custom actions:
LocalAGI supports custom actions written in Go that can be defined inline when creating an agent. These actions are interpreted at runtime, so no compilation is required.
You can also place custom Go action files in a directory and have LocalAGI automatically load them. Set the LOCALAGI_CUSTOM_ACTIONS_DIR environment variable to point to a directory containing your custom action files. Each .go file in this directory will be automatically loaded and made available to all agents.
Example setup:
# Set the environment variable
export LOCALAGI_CUSTOM_ACTIONS_DIR="/path/to/custom/actions"
# Or in docker-compose.yaml
environment:
- LOCALAGI_CUSTOM_ACTIONS_DIR=/app/custom-actions
Directory structure:
custom-actions/
├── weather_action.go
├── file_processor.go
└── database_query.go
Each file should contain the three required functions (Run, Definition, RequiredFields) as described below.
When creating a new agent, select the "custom" action in the actions section; you can add the Go code directly there.
Custom actions in LocalAGI require three main functions:
- `Run(config map[string]interface{}) (string, map[string]interface{}, error)` - The main execution function
- `Definition() map[string][]string` - Defines the action's parameters and their types
- `RequiredFields() []string` - Specifies which parameters are required

Note: You can't use additional modules; only libraries included in the Go standard library are available.
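As a minimal sketch of how these three functions fit together (the action name and its `name` parameter are illustrative):

```go
import "fmt"

// Run executes the action with the parameters resolved by the agent.
func Run(config map[string]interface{}) (string, map[string]interface{}, error) {
	name, _ := config["name"].(string)
	return fmt.Sprintf("Hello, %s!", name), map[string]interface{}{}, nil
}

// Definition declares each parameter as [type, description].
func Definition() map[string][]string {
	return map[string][]string{
		"name": {"string", "Who to greet"},
	}
}

// RequiredFields lists the parameters the agent must always provide.
func RequiredFields() []string {
	return []string{"name"}
}
```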
Here's a practical example of a custom action that fetches weather information:
import (
"encoding/json"
"fmt"
"net/http"
"io"
)
type WeatherParams struct {
City string `json:"city"`
Country string `json:"country"`
}
type WeatherResponse struct {
Main struct {
Temp float64 `json:"temp"`
Humidity int `json:"humidity"`
} `json:"main"`
Weather []struct {
Description string `json:"description"`
} `json:"weather"`
}
func Run(config map[string]interface{}) (string, map[string]interface{}, error) {
// Parse parameters
p := WeatherParams{}
b, err := json.Marshal(config)
if err != nil {
return "", map[string]interface{}{}, err
}
if err := json.Unmarshal(b, &p); err != nil {
return "", map[string]interface{}{}, err
}
// Make API call to weather service
url := fmt.Sprintf("http://api.openweathermap.org/data/2.5/weather?q=%s,%s&appid=YOUR_API_KEY&units=metric", p.City, p.Country)
resp, err := http.Get(url)
if err != nil {
return "", map[string]interface{}{}, err
}
defer resp.Body.Close()
body, err := io.ReadAll(resp.Body)
if err != nil {
return "", map[string]interface{}{}, err
}
var weather WeatherResponse
if err := json.Unmarshal(body, &weather); err != nil {
return "", map[string]interface{}{}, err
}
// Format response
result := fmt.Sprintf("Weather in %s, %s: %.1f°C, %s, Humidity: %d%%",
p.City, p.Country, weather.Main.Temp, weather.Weather[0].Description, weather.Main.Humidity)
return result, map[string]interface{}{}, nil
}
func Definition() map[string][]string {
return map[string][]string{
"city": []string{
"string",
"The city name to get weather for",
},
"country": []string{
"string",
"The country code (e.g., US, UK, DE)",
},
}
}
func RequiredFields() []string {
return []string{"city", "country"}
}
Here's another example that demonstrates file system operations:
import (
"encoding/json"
"fmt"
"os"
"path/filepath"
)
type FileParams struct {
Path string `json:"path"`
Action string `json:"action"`
Content string `json:"content,omitempty"`
}
func Run(config map[string]interface{}) (string, map[string]interface{}, error) {
p := FileParams{}
b, err := json.Marshal(config)
if err != nil {
return "", map[string]interface{}{}, err
}
if err := json.Unmarshal(b, &p); err != nil {
return "", map[string]interface{}{}, err
}
switch p.Action {
case "read":
content, err := os.ReadFile(p.Path)
if err != nil {
return "", map[string]interface{}{}, err
}
return string(content), map[string]interface{}{}, nil
case "write":
err := os.WriteFile(p.Path, []byte(p.Content), 0644)
if err != nil {
return "", map[string]interface{}{}, err
}
return fmt.Sprintf("Successfully wrote to %s", p.Path), map[string]interface{}{}, nil
case "list":
files, err := os.ReadDir(p.Path)
if err != nil {
return "", map[string]interface{}{}, err
}
var fileList []string
for _, file := range files {
fileList = append(fileList, file.Name())
}
result, _ := json.Marshal(fileList)
return string(result), map[string]interface{}{}, nil
default:
return "", map[string]interface{}{}, fmt.Errorf("unknown action: %s", p.Action)
}
}
func Definition() map[string][]string {
return map[string][]string{
"path": []string{
"string",
"The file or directory path",
},
"action": []string{
"string",
"The action to perform: read, write, or list",
},
"content": []string{
"string",
"Content to write (required for write action)",
},
}
}
func RequiredFields() []string {
return []string{"path", "action"}
}
To use custom actions, add them to your agent configuration from the web UI, or drop them into the custom actions directory described above.
LocalAGI supports both local and remote MCP servers, allowing you to extend functionality with external tools and services.
The Model Context Protocol (MCP) is a standard for connecting AI applications to external data sources and tools. LocalAGI can connect to any MCP-compliant server to access additional capabilities.
Local MCP servers run as processes that LocalAGI can spawn and communicate with via STDIO.
{
"mcpServers": {
"github": {
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"-e",
"GITHUB_PERSONAL_ACCESS_TOKEN",
"ghcr.io/github/github-mcp-server"
],
"env": {
"GITHUB_PERSONAL_ACCESS_TOKEN": "<YOUR_TOKEN>"
}
}
}
}
Remote MCP servers are HTTP-based and can be accessed over the network.
You can create MCP servers in any language that supports the MCP protocol and add the URLs of the servers to LocalAGI.
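Conceptually, a remote entry only needs the server's URL (and a token if the server requires authentication). The field names below are purely illustrative, not LocalAGI's confirmed schema:

```json
{
  "remoteServers": [
    {
      "url": "https://mcp.example.com/sse",
      "token": "<OPTIONAL_AUTH_TOKEN>"
    }
  ]
}
```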
The development workflow is similar to the source build, but with additional steps for hot reloading of the frontend:
# Clone repo
git clone https://github.com/mudler/LocalAGI.git
cd LocalAGI
cd webui/react-ui
# Install dependencies
bun i
# Compile frontend (the build directory needs to exist for the backend to start)
bun run build
# Start frontend development server
bun run dev
Then, in a separate terminal:
cd LocalAGI
# Create a "pool" directory for agent state
mkdir pool
# Set required environment variables
export LOCALAGI_MODEL=gemma-3-4b-it-qat
export LOCALAGI_MULTIMODAL_MODEL=moondream2-20250414
export LOCALAGI_IMAGE_MODEL=sd-1.5-ggml
export LOCALAGI_LLM_API_URL=http://localai:8080
export LOCALAGI_LOCALRAG_URL=http://localrecall:8080
export LOCALAGI_STATE_DIR=./pool
export LOCALAGI_TIMEOUT=5m
export LOCALAGI_ENABLE_CONVERSATIONS_LOGGING=false
export LOCALAGI_SSHBOX_URL=root:root@sshbox:22
# Start development server
go run main.go
Note: see webui/react-ui/.vite.config.js for environment variables that can be used to configure the backend URL.
Link your agents to the services you already use. Configuration examples below.
{
"token": "YOUR_PAT_TOKEN",
"repository": "repo-to-monitor",
"owner": "repo-owner",
"botUserName": "bot-username"
}
After creating your Discord bot:
{
"token": "Bot YOUR_DISCORD_TOKEN",
"defaultChannel": "OPTIONAL_CHANNEL_ID"
}
Don't forget to enable "Message Content Intent" in the Bot tab of your app's settings!
Use the included slack.yaml manifest to create your app, then configure:
{
"botToken": "xoxb-your-bot-token",
"appToken": "xapp-your-app-token"
}
Get a token from @BotFather, then:
{
"token": "your-bot-father-token",
"group_mode": "true",
"mention_only": "true",
"admins": "username1,username2"
}
Configuration options:
Configuration options:

- `token`: Your bot token from BotFather
- `group_mode`: Enable/disable group chat functionality
- `mention_only`: When enabled, the bot only responds when mentioned in groups
- `admins`: Comma-separated list of Telegram usernames allowed to use the bot in private chats
- `channel_id`: Optional channel ID for the bot to send messages to

Important: For group functionality to work properly:
- Go to @BotFather
- Select your bot
- Go to "Bot Settings" > "Group Privacy"
- Select "Turn off" to allow the bot to read all messages in groups
- Restart your bot after changing this setting
Connect to IRC networks:
{
"server": "irc.example.com",
"port": "6667",
"nickname": "LocalAGIBot",
"channel": "#yourchannel",
"alwaysReply": "false"
}
{
"smtpServer": "smtp.gmail.com:587",
"imapServer": "imap.gmail.com:993",
"smtpInsecure": "false",
"imapInsecure": "false",
"username": "user@gmail.com",
"email": "user@gmail.com",
"password": "correct-horse-battery-staple",
"name": "LogalAGI Agent"
}
| Endpoint | Method | Description | Example |
|---|---|---|---|
| `/api/agents` | GET | List all available agents | Example |
| `/api/agent/:name/status` | GET | View agent status history | Example |
| `/api/agent/create` | POST | Create a new agent | Example |
| `/api/agent/:name` | DELETE | Remove an agent | Example |
| `/api/agent/:name/pause` | PUT | Pause agent activities | Example |
| `/api/agent/:name/start` | PUT | Resume a paused agent | Example |
| `/api/agent/:name/config` | GET | Get agent configuration | |
| `/api/agent/:name/config` | PUT | Update agent configuration | |
| `/api/meta/agent/config` | GET | Get agent configuration metadata | |
| `/settings/export/:name` | GET | Export agent config | Example |
| `/settings/import` | POST | Import agent config | Example |
| Endpoint | Method | Description | Example |
|---|---|---|---|
| `/api/actions` | GET | List available actions | |
| `/api/action/:name/run` | POST | Execute an action | |
| `/api/agent/group/generateProfiles` | POST | Generate group profiles | |
| `/api/agent/group/create` | POST | Create a new agent group | |
| Endpoint | Method | Description | Example |
|---|---|---|---|
| `/api/chat/:name` | POST | Send message & get response | Example |
| `/api/notify/:name` | POST | Send notification to agent | Example |
| `/api/sse/:name` | GET | Real-time agent event stream | Example |
| `/v1/responses` | POST | Send message & get response | OpenAI's Responses |
curl -X GET "http://localhost:3000/api/agents"
curl -X GET "http://localhost:3000/api/agent/my-agent/status"
curl -X POST "http://localhost:3000/api/agent/create" \
-H "Content-Type: application/json" \
-d '{
"name": "my-agent",
"model": "gpt-4",
"system_prompt": "You are an AI assistant.",
"enable_kb": true,
"enable_reasoning": true
}'
curl -X DELETE "http://localhost:3000/api/agent/my-agent"
curl -X PUT "http://localhost:3000/api/agent/my-agent/pause"
curl -X PUT "http://localhost:3000/api/agent/my-agent/start"
curl -X GET "http://localhost:3000/api/agent/my-agent/config"
curl -X PUT "http://localhost:3000/api/agent/my-agent/config" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4",
"system_prompt": "You are an AI assistant."
}'
curl -X GET "http://localhost:3000/settings/export/my-agent" --output my-agent.json
curl -X POST "http://localhost:3000/settings/import" \
-F "file=@/path/to/my-agent.json"
curl -X POST "http://localhost:3000/api/chat/my-agent" \
-H "Content-Type: application/json" \
-d '{"message": "Hello, how are you today?"}'
curl -X POST "http://localhost:3000/api/notify/my-agent" \
-H "Content-Type: application/json" \
-d '{"message": "Important notification"}'
curl -N -X GET "http://localhost:3000/api/sse/my-agent"
Note: For proper SSE handling, you should use a client that supports SSE natively.
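As an illustration, here is a minimal Go sketch of consuming the event stream (the agent name and port mirror the examples above; error handling is kept minimal):

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"net/http"
	"strings"
)

func main() {
	// Open the SSE stream for the agent named "my-agent".
	resp, err := http.Get("http://localhost:3000/api/sse/my-agent")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// SSE frames arrive line by line; payload lines start with "data: ".
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		line := scanner.Text()
		if strings.HasPrefix(line, "data: ") {
			fmt.Println(strings.TrimPrefix(line, "data: "))
		}
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
}
```

The `/v1/responses` endpoint follows the shape of OpenAI's Responses API, so a request might look like the sketch below; treat the exact supported fields as an assumption and check the endpoint's metadata for specifics:

```bash
curl -X POST "http://localhost:3000/v1/responses" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "my-agent",
    "input": "Hello, how are you today?"
  }'
```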
The agent configuration defines how an agent behaves and what capabilities it has. You can view the available configuration options and their descriptions by using the metadata endpoint:
curl -X GET "http://localhost:3000/api/meta/agent/config"
This will return a JSON object containing all available configuration fields, their types, and descriptions.
Here's an example of the agent configuration structure:
{
"name": "my-agent",
"model": "gpt-4",
"multimodal_model": "gpt-4-vision",
"hud": true,
"standalone_job": false,
"random_identity": false,
"initiate_conversations": true,
"enable_planning": true,
"identity_guidance": "You are a helpful assistant.",
"periodic_runs": "0 * * * *",
"permanent_goal": "Help users with their questions.",
"enable_kb": true,
"enable_reasoning": true,
"kb_results": 5,
"can_stop_itself": false,
"system_prompt": "You are an AI assistant.",
"long_term_memory": true,
"summary_long_term_memory": false
}
MIT License — See the LICENSE file for details.
LOCAL PROCESSING. GLOBAL THINKING.
Made with ❤️ by mudler