Building Reliable Office Automation with Temporal, Local LLMs, and Robot Framework
Most automation projects fail not because the AI is weak, but because workflow reliability is underestimated.
Emails arrive late.
Humans respond days later.
Systems crash.
Legacy SAP transactions often expose no usable API.
If your automation cannot pause, retry, resume, and audit, it will eventually break.
This article explains a modern, production-ready architecture that combines:
- Temporal — durable workflow orchestration
- Local LLMs (Ollama) — intelligent routing & extraction
- Robot Framework — UI automation for systems without APIs
This stack is especially suitable for finance, ERP, procurement, and office processes.
The Problem with Typical Automation Stacks
Most teams start with:
- Cron jobs
- Message queues
- Low-code tools
- LLM agents
These work well for short tasks, but break down when workflows:
- run for hours or days
- require human approval
- involve retries and compensations
- must survive restarts and deployments
Example failures:
- Invoice posted twice after retry
- Approval lost because a worker crashed
- LLM made a confident but wrong decision
- UI automation ran out of order
What’s missing is a durable workflow engine.
The Core Idea: Separate Intelligence, Orchestration, and Execution
The key architectural principle is separation of responsibility:
| Layer | Responsibility |
|---|---|
| Temporal | Orchestration, state, retries, guarantees |
| Local LLM | Understanding & classification |
| Robot Framework | UI execution (SAP, legacy systems) |
No layer does another layer’s job.
High-Level Architecture (Text Diagram)
Email / API / Schedule
        |
        v
Temporal Workflow (Source of Truth)
        |
        +--> Activity: Local LLM (Ollama)
        |       - classify email
        |       - extract fields
        |       - return confidence
        |
        +--> Gate (confidence / rules)
        |
        +--> Activity: Robot Framework
        |       - SAP UI automation
        |       - legacy system interaction
        |
        +--> Signals
                - human approval
                - correction
Temporal owns what happens next, not the AI.
Why Temporal (Not Cron, Not Queues, Not n8n Alone)
Temporal is different because it provides:
- Durable execution — workflows survive crashes
- Exactly-once semantics — no duplicate invoices
- Deterministic replay — perfect audit trail
- Human-in-the-loop — wait days without hacks
- Built-in retries & timeouts
In short:
Temporal treats workflows like a database treats data.
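In practice, "no duplicate invoices" is achieved by combining Temporal's workflow-ID uniqueness with an ID derived deterministically from the input. A minimal sketch (the function name and attribute choice are illustrative, not from any particular SDK):

```python
import hashlib

def invoice_workflow_id(message_id: str, sender: str) -> str:
    """Derive a deterministic workflow ID from stable email attributes.

    Starting a workflow twice with the same ID lets the orchestrator
    (e.g. Temporal's workflow-ID reuse policy) reject the duplicate,
    so a re-delivered email cannot post the same invoice twice.
    """
    digest = hashlib.sha256(f"{sender}|{message_id}".encode()).hexdigest()[:16]
    return f"invoice-receive-{digest}"
```

A re-delivered email produces the same ID, so the second start attempt is rejected instead of posting a second invoice.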
Role of the Local LLM (Ollama)
The LLM is not the brain of the system.
It acts as an advisor.
Typical responsibilities:
- Classify incoming email (invoice, RFQ, reminder)
- Extract structured data (invoice number, amount)
- Report confidence level
- Explain uncertainty
Example LLM output:
{
  "workflow_key": "invoice.receive",
  "confidence": 0.82,
  "extracted_fields": {
    "invoice_no": "INV-2025-001",
    "amount": 125000,
    "vendor": "ABC SUPPLY"
  }
}
Important rule:
LLMs suggest. Temporal decides.
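"Temporal decides" means the workflow validates the advisor's reply before acting on it. A hedged sketch of that validation step, using the field names from the example output above (stdlib only; the dataclass shape is an assumption, not a fixed contract):

```python
import json
from dataclasses import dataclass

@dataclass
class LlmDecision:
    workflow_key: str
    confidence: float
    extracted_fields: dict

def parse_llm_decision(raw: str) -> LlmDecision:
    """Validate the LLM's JSON reply; fail loudly instead of acting on bad output."""
    data = json.loads(raw)  # malformed JSON raises here, before any side effect
    confidence = float(data["confidence"])
    if not 0.0 <= confidence <= 1.0:
        raise ValueError(f"confidence out of range: {confidence}")
    return LlmDecision(
        workflow_key=str(data["workflow_key"]),
        confidence=confidence,
        extracted_fields=dict(data.get("extracted_fields", {})),
    )
```

A reply that fails parsing never reaches the execution layer; Temporal's retry or escalation policy takes over instead.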
Gate Design: Where Automation Becomes Safe
Temporal enforces gate logic:
if decision.confidence < 0.8:
    wait_for_human()
Other gates may include:
- required fields present
- amount threshold
- vendor allowlist
- duplicate detection
This prevents:
- hallucinated actions
- silent failures
- irreversible mistakes
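The gates listed above can be combined into one pure function that the workflow calls, so the decision is deterministic and replayable. The thresholds and allowlist here are illustrative assumptions:

```python
REQUIRED_FIELDS = {"invoice_no", "amount", "vendor"}   # illustrative
AMOUNT_THRESHOLD = 100_000                             # illustrative
VENDOR_ALLOWLIST = {"ABC SUPPLY", "XYZ PARTS"}         # illustrative

def gate(confidence: float, fields: dict, seen_invoices: set) -> str:
    """Return 'proceed' or 'wait_for_human'; every rule is explicit code."""
    if confidence < 0.8:
        return "wait_for_human"
    if not REQUIRED_FIELDS.issubset(fields):           # required fields present
        return "wait_for_human"
    if fields["amount"] >= AMOUNT_THRESHOLD:           # amount threshold
        return "wait_for_human"
    if fields["vendor"] not in VENDOR_ALLOWLIST:       # vendor allowlist
        return "wait_for_human"
    if fields["invoice_no"] in seen_invoices:          # duplicate detection
        return "wait_for_human"
    return "proceed"
```

Because the function is pure, the same inputs always produce the same route, which keeps the workflow history replayable and auditable.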
Robot Framework: Automating Systems Without APIs
Many enterprise systems (SAP GUI, legacy ERP) still rely on UI.
Robot Framework is ideal because:
- deterministic
- testable
- screenshot & log based
- widely supported
Temporal runs Robot Framework as an activity, meaning:
- retries are controlled
- failures are visible
- execution is isolated
If Robot crashes:
- Temporal retries or escalates
- workflow state remains intact
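One way to wrap a Robot suite as an activity is to invoke the `robot` CLI and map its documented return codes (0 means all tests passed, 1-249 is the number of failed tests, higher values signal execution-level problems) to an outcome the workflow can act on. The suite path and output directory below are illustrative:

```python
import subprocess

def build_robot_command(suite: str, output_dir: str) -> list:
    """Assemble the Robot Framework CLI invocation (paths are illustrative)."""
    return ["robot", "--outputdir", output_dir, "--exitonfailure", suite]

def interpret_robot_rc(rc: int) -> str:
    """Map Robot Framework's documented return codes to an activity outcome."""
    if rc == 0:
        return "success"
    if 1 <= rc <= 249:
        return "test_failures"      # retryable; Temporal's retry policy decides
    return "execution_error"        # e.g. invalid data or an interrupted run

def run_robot_suite(suite: str, output_dir: str) -> str:
    """Activity body: let the process fail; Temporal handles retry/escalation."""
    rc = subprocess.run(build_robot_command(suite, output_dir)).returncode
    return interpret_robot_rc(rc)
```

The activity returns a plain status string; whether a `test_failures` result is retried, escalated, or routed to a human stays with the workflow, not the runner.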
Example Workflow: Vendor Sends Invoice by Email
- Email arrives
- Temporal workflow starts
- LLM classifies email → invoice.receive
- Confidence = 0.91 → passes gate
- Robot Framework posts invoice in SAP UI
- SAP document number returned
- Workflow completes
- Audit trail preserved forever
If confidence were low:
- workflow pauses
- human approves or corrects
- workflow resumes automatically
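The flow above, including the low-confidence pause, can be sketched as plain control flow. In production this function body would be a Temporal workflow and `wait_for_approval` a signal handler; here all three collaborators are stubbed as callables purely for illustration:

```python
def run_invoice_workflow(classify, post_invoice, wait_for_approval,
                         threshold: float = 0.8) -> dict:
    """Orchestration skeleton: classify, gate, pause if unsure, then execute.

    classify / post_invoice / wait_for_approval stand in for the LLM
    activity, the Robot Framework activity, and a human-approval signal.
    """
    decision = classify()                       # LLM advises
    if decision["confidence"] < threshold:      # gate
        decision = wait_for_approval(decision)  # human approves or corrects
    doc_no = post_invoice(decision["fields"])   # Robot Framework executes
    return {"status": "completed", "sap_document": doc_no}
```

Note the shape of the logic: the model's output only ever feeds a branch; the workflow code owns the branch itself, so "resume automatically" is just the next line after the approval returns.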
Why This Stack Scales
Scaling LLM usage
- Local inference (Ollama)
- Stateless activities
- Easy model swap
Scaling automation
- Temporal workers scale horizontally
- Robot runners isolated per VM
- No shared mutable state
Scaling teams
- Developers work in Git
- Ops monitor Temporal UI
- Business sees audit logs
When NOT to Use This Stack
This architecture is not for:
- simple Zapier-style automation
- chatbot-only projects
- short-lived scripts
- one-off integrations
It is for:
- finance workflows
- ERP processes
- approvals
- compliance-sensitive automation
- systems that must never double-execute
Final Takeaway
If your automation:
- touches money
- waits for humans
- integrates with fragile systems
- must survive failure
Then:
Temporal is the backbone,
LLMs are advisors,
Robot Framework is the hands.
This separation is what turns AI automation from a demo into a reliable system.