Building Agentic AI with Python, Langchain, and Ollama for eCommerce & Factory Automation
Artificial Intelligence has moved beyond passive Q&A bots. Agentic AI is a new frontier: AI systems that don't just respond, but reason, plan, and act autonomously. In this post, we'll build a hands-on Agentic AI system using:
- 🧠 Langchain — to structure our agent’s logic, memory, and tools
- 🔓 Ollama — to run powerful open-source LLMs locally (like LLaMA3 or Mistral)
- 🏭 Custom tools — for eCommerce and Factory Automation use cases
🚀 What Is Agentic AI?
Agentic AI refers to systems that operate with goals, autonomy, and tool-using capabilities. Unlike basic chatbots, an agentic AI can:
- Retain memory of past actions and conversations
- Use external tools (e.g., databases, APIs, scripts)
- Reason and plan steps to solve complex tasks
- Take multiple actions toward achieving a goal
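Before bringing in Langchain, the core loop behind those four capabilities can be sketched in plain Python. This is an illustrative toy, not Langchain's actual implementation: real agents use an LLM to plan and select tools, while here a naive keyword match stands in for that step.

```python
# Toy agentic loop: memory + tool selection + action.
# Illustrative only; a real agent lets the LLM do the planning.

def track_order(order_id: str) -> str:
    return f"Order {order_id} is in transit."

def search_products(keyword: str) -> str:
    return f"Found products matching '{keyword}'."

TOOLS = {"order": track_order, "search": search_products}
memory = []  # retained across turns, like a conversation buffer

def toy_agent(query: str) -> str:
    memory.append(query)                    # remember past input
    for trigger, tool in TOOLS.items():     # naive "planning" step
        if trigger in query.lower():
            return tool(query.split()[-1])  # act with the chosen tool
    return "No tool matched; answering from the LLM alone."

print(toy_agent("Please search bluetooth"))
# → Found products matching 'bluetooth'.
```

The rest of the post replaces each piece of this toy with the real thing: the keyword match becomes an LLM-driven tool selector, and the list becomes Langchain's conversation memory.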
🧰 Tech Stack Overview
| Component | Description |
|---|---|
| Python | Main language |
| Langchain | Agent framework (tool use, planning, memory) |
| Ollama | Local LLM runtime for open-source models |
| Custom Tools | Functions for eCommerce & factory scenarios |
🧠 Agent Architecture
Here’s a high-level view of how our agent works:
```mermaid
graph TD
    A["User Query"] --> B["Langchain Agent"]
    B --> C["LLM via Ollama (e.g., LLaMA3)"]
    B --> D["Memory (ConversationBuffer)"]
    B --> E["Tool Selector"]
    E --> F["search_products()"]
    E --> G["track_order()"]
    E --> H["check_production_status()"]
    E --> I["log_factory_issue()"]
    F --> J["eCommerce Inventory"]
    G --> K["Order System"]
    H --> L["Factory Machine DB"]
    I --> M["Maintenance Logging System"]
    C -->|Plan & Reason| B
    D -->|Context| B
```
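The "Tool Selector" node deserves a closer look: it works because every tool advertises a name and a description, and the LLM reads those descriptions to decide which tool to call. A minimal stdlib sketch of that registry (the names and return strings here are illustrative mocks):

```python
# Sketch of the registry behind the Tool Selector: each tool carries a
# name and a description that get rendered into the LLM's prompt.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    func: Callable[[str], str]

registry = [
    Tool("track_order",
         "Look up the shipping status for an order id.",
         lambda oid: f"Order {oid} is in transit."),
    Tool("check_production_status",
         "Report a factory machine's batch progress.",
         lambda mid: f"Machine {mid} is running batch #42."),
]

# The agent builds its prompt from these descriptions, so writing a
# clear description is effectively prompt engineering.
tool_prompt = "\n".join(f"{t.name}: {t.description}" for t in registry)
print(tool_prompt)
```

This is why the tool descriptions in the next step matter: they are the only thing the LLM sees when choosing an action.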
📦 Step 1: Define Tools
Each tool simulates a real-world function such as looking up products or checking a machine’s status.
```python
# tools.py
from langchain.tools import tool

# Each tool needs a docstring: the @tool decorator uses it as the
# description the agent reads when deciding which tool to call,
# and raises an error if it is missing.

@tool
def search_products(keyword: str) -> str:
    """Search the eCommerce catalog for products matching a keyword."""
    return f"Found 3 products matching '{keyword}': A, B, C."

@tool
def track_order(order_id: str) -> str:
    """Look up the shipping status of an order by its id."""
    return f"Order {order_id} is currently in transit."

@tool
def check_production_status(machine_id: str) -> str:
    """Report the current batch progress of a factory machine."""
    return f"Machine {machine_id} is 87% complete with batch #42."

@tool
def log_factory_issue(issue: str) -> str:
    """Log a factory issue and assign it to a technician."""
    return f"Issue '{issue}' has been logged and assigned to a technician."
```
🤖 Step 2: Initialize the Agent with Langchain + Ollama
```python
# agent.py
from langchain_community.chat_models import ChatOllama
from langchain.agents import initialize_agent, AgentType
from langchain.memory import ConversationBufferMemory

from tools import search_products, track_order, check_production_status, log_factory_issue

# Requires a running Ollama server with the model downloaded: `ollama pull llama3`
llm = ChatOllama(model="llama3")

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

agent = initialize_agent(
    tools=[search_products, track_order, check_production_status, log_factory_issue],
    llm=llm,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    memory=memory,
    verbose=True,
)
```
💬 Step 3: Run the Agent
```python
# main.py
from agent import agent

print("=== Agentic AI for eCommerce + Factory ===")

while True:
    query = input("You: ")
    if query.lower() in ("exit", "quit"):
        break
    result = agent.run(query)
    print("Agent:", result)
```
🧪 Sample Interaction
```
You: Search for bluetooth speakers
Agent: Found 3 products matching 'bluetooth speakers': A, B, C.

You: Track order #99123
Agent: Order 99123 is currently in transit.

You: What's the status of machine A7?
Agent: Machine A7 is 87% complete with batch #42.

You: Log issue: Machine A7 overheating
Agent: Issue 'Machine A7 overheating' has been logged and assigned to a technician.
```
✅ Why It Matters
Agentic AI enables next-generation automation and digital assistants. With this approach:
- Businesses can create smart back-office tools that act, not just chat.
- Factories can monitor equipment, log issues, and respond faster.
- eCommerce platforms can offer smarter support without human intervention.
And because it’s powered by open-source LLMs, you have:
- Full control over data and model behavior
- No vendor lock-in
- Cost-effective deployment
📦 Next Steps
Want to go further?
- Add FastAPI or Streamlit for a frontend
- Integrate real APIs instead of mock functions
- Use vector memory for long-term knowledge retention
- Add multi-agent collaboration using CrewAI or LangGraph