How We Slashed ERP Frustration with LLM-Powered Agents and MCP Inference
The Problem: ERPs Are Annoying
Let’s be honest: most ERPs are built for compliance, not for usability. Even simple tasks—like listing all unpaid invoices or creating a new customer—can take minutes of clicking, searching, and double-checking. Multiply that by hundreds of daily operations, and you’re looking at a massive productivity drain.
Our goal: cut the time spent on routine ERP queries and updates by 80%, by letting users interact with Dolibarr through a conversational agent.
The Solution: LLMs + MCP + API Automation
We built a Python agent that leverages OpenAI’s GPT models (optionally via Nebius for cost-effective inference) and runs on MCP servers. The agent acts as a smart middleware between the user and Dolibarr’s REST API.
Key Features
- Natural Language Interface: Users can ask for “all customers,” “recent invoices,” or “create a new product” in plain English.
- Direct API Calls: The agent interprets intent, maps it to the correct Dolibarr API endpoint, and executes the call.
- Structured Output: Results are formatted as clean tables or detailed records—no more JSON dumps or cryptic codes.
- No Truncation: By default, the agent shows all data returned by the API, unless the user asks for a summary or filter.
- Error Transparency: If something goes wrong, the agent explains the error and suggests alternatives.
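The "Structured Output" step is plain post-processing: the raw JSON returned by Dolibarr is flattened into a table before it reaches the user. A minimal sketch of that idea (the `format_records` helper and field names are illustrative, not the exact production code):

```python
def format_records(records: list[dict], fields: list[str]) -> str:
    """Render a list of API records as an aligned plain-text table (illustrative)."""
    # Header row plus one row per record, each cell padded to the widest value in its column
    rows = [fields] + [[str(r.get(f, "")) for f in fields] for r in records]
    widths = [max(len(row[i]) for row in rows) for i in range(len(fields))]
    lines = ["  ".join(cell.ljust(widths[i]) for i, cell in enumerate(row)) for row in rows]
    return "\n".join(lines) + f"\n({len(records)} records)"
```

Feeding it a `GET /thirdparties` result with `fields=["name", "town"]` yields a compact table plus a record count, which is what the agent hands back instead of a JSON dump.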
How It Works (Under the Hood)
```python
class OpenAIDolibarrAgent:
    def chat(self, message: str, history: List[List[str]]) -> str:
        # 1. Build a system prompt with Dolibarr context and rules
        # 2. Convert chat history to OpenAI format
        # 3. Call OpenAI API with function-calling for dolibarr_api
        # 4. If a function_call is returned, execute the API call
        # 5. Feed the result back to the LLM for final formatting
        # 6. Return the formatted response to the user
```
The agent uses OpenAI’s function-calling API to decide when to hit Dolibarr’s endpoints. Here’s a simplified flow:
- User asks: “Show me all customers.”
- LLM decides: “I need to call GET /thirdparties.”
- Agent executes: Calls Dolibarr’s API, gets the data.
- LLM formats: Presents the data in a readable table, with record counts and key fields.
All of this happens in real time, thanks to the scalable inference provided by MCP servers.
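The function-calling step boils down to advertising a single tool to the model, so it requests a Dolibarr call instead of answering from memory. A sketch of what that tool schema might look like (the names and descriptions here are illustrative; the real schema lives inside the agent):

```python
# Tool definition passed to the OpenAI chat completions API (illustrative sketch).
# The model fills in method/endpoint/payload; the agent executes the actual HTTP call.
DOLIBARR_TOOL = {
    "type": "function",
    "function": {
        "name": "dolibarr_api",
        "description": "Call a Dolibarr REST endpoint on behalf of the user.",
        "parameters": {
            "type": "object",
            "properties": {
                "method": {"type": "string", "enum": ["GET", "POST", "PUT", "DELETE"]},
                "endpoint": {"type": "string", "description": "e.g. 'thirdparties' or 'invoices'"},
                "payload": {"type": "string", "description": "JSON body for POST/PUT; empty for GET"},
            },
            "required": ["method", "endpoint"],
        },
    },
}
```

When the model returns a call like `dolibarr_api(method="GET", endpoint="thirdparties")`, the agent runs it and feeds the result back for formatting.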
Why MCP?
Running inference on MCP (Managed Compute Platform) servers means we can scale up or down as needed, keep latency low, and avoid the headaches of self-hosting heavy LLMs. For teams with fluctuating workloads or demo environments, this is a game-changer.
Real-World Impact
- 80% Time Savings: Most users report that tasks that took 5-10 minutes now take less than a minute.
- Zero Training Required: If you can chat, you can use the agent.
- No More “ERP Anxiety”: Users don’t have to remember menu paths or field names—just ask.
Example: From Clicks to Queries
Old Way:
- Log in to Dolibarr
- Navigate to “Third Parties”
- Click “List”
- Filter by status
- Export to CSV
- Open in Excel
- …you get the idea
New Way:
“Show me all active customers”
And you get a clean, formatted table—instantly.
Technical Stack
- Python (Gradio for UI, Requests for API calls)
- OpenAI GPT-3.5/4 (or Nebius for cost savings)
- Dolibarr REST API
- MCP Servers (for scalable, reliable inference)
- .env for secrets (never hardcode your keys!)
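Loading secrets from the environment keeps keys out of the repo. A minimal sketch using only the standard library (the variable names and default base URL are assumptions, adjust to your setup):

```python
import os

def load_config() -> dict:
    """Read API credentials from environment variables (populated from .env)."""
    key = os.getenv("DOLIBARR_API_KEY")
    if not key:
        raise RuntimeError("DOLIBARR_API_KEY is not set; check your .env file")
    return {
        "dolibarr_api_key": key,
        "openai_api_key": os.getenv("OPENAI_API_KEY", ""),
        # Hypothetical default; point this at your own Dolibarr instance
        "dolibarr_base_url": os.getenv("DOLIBARR_BASE_URL", "http://localhost/api/index.php"),
    }
```

Pair this with python-dotenv's `load_dotenv()` if you want the `.env` file read into the environment automatically at startup.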
Code Snippet: API Call Interface
```python
def dolibarr_interface(method: str, endpoint: str, api_key: str, payload_str: str = "") -> str:
    api = DolibarrAPI(api_key)
    # Parse the optional JSON payload string into a dict
    payload = json.loads(payload_str) if payload_str else None
    if method == 'GET':
        result = api.get_req(endpoint, payload)
    # ... handles POST, PUT, DELETE similarly
    return json.dumps(result, indent=2)
```
Try It Yourself
We’ve set up a demo instance you can play with:
- Dolibarr Test Instance
- Username: admin
- Password: admin123
Or, just run the agent locally and start chatting with your ERP!
TL;DR:
ERPs still suck. But LLMs can at least make them suck less.