Build a Local Product Recommendation System with LangChain, Ollama, and Open-Source Embeddings
In this post, you’ll learn how to create a fully local, privacy-friendly product recommendation engine for your e-commerce site using LangChain, Ollama (for LLMs), and open-source embeddings. No OpenAI API or external cloud needed—run everything on your machine or private server!
Why This Approach?
- Keep your customer data private
- Zero API cost—no pay-per-call fees
- Use powerful open-source LLMs (like Llama 3, Mistral)
- Flexible: works for product catalogs, FAQs, or any knowledge base
Solution Overview
We combine three key components:
- SentenceTransformers for generating semantic product embeddings.
- Chroma for efficient local vector search.
- Ollama to run LLMs (like Llama 3) locally, generating human-like recommendations.
Data Flow Diagram
Here’s how data flows through the system:
flowchart TD
U["User Query<br/>(e.g., 'waterproof running shoe for women')"]
Q["LangChain<br/>Similarity Search"]
V["Chroma Vector Store<br/>+ Embeddings"]
P["Product Data<br/>(JSON, CSV, DB)"]
R["Relevant Products"]
LLM["Ollama LLM<br/>(Llama 3, Mistral, etc.)"]
A["Final Recommendation<br/>(Chatbot Response)"]
U --> Q
Q --> V
V -->|Top Matches| R
R --> LLM
LLM --> A
P --> V
Flow:
- User enters a query.
- LangChain searches for the most relevant products using embeddings and Chroma.
- The matched products are passed to the LLM (via Ollama) to generate a friendly, personalized recommendation.
Step-by-Step Implementation
1. Prepare Product Data
Format your product catalog in a structured format like JSON:
[
  {
    "id": "1",
    "name": "Nike Pegasus 39",
    "description": "Waterproof women's running shoe",
    "category": "Running Shoes",
    "tags": ["waterproof", "running", "women"]
  },
  ...
]
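The description field alone is quite short, so one tweak worth experimenting with (my own suggestion, not something the libraries require) is to fold the name and tags into the text that gets embedded. A minimal sketch:

import json

# Load the catalog from step 1
with open('products.json', encoding='utf-8') as f:
    products = json.load(f)

# Combine name, description, and tags into one searchable string per product.
# Richer text generally gives the embedding model more signal to match on.
texts = [
    f"{p['name']}. {p['description']}. Tags: {', '.join(p['tags'])}"
    for p in products
]
print(texts[0])
# -> "Nike Pegasus 39. Waterproof women's running shoe. Tags: waterproof, running, women"

If you adopt this, pass these enriched texts to Chroma in step 3 instead of the bare descriptions.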
2. Install Required Packages
pip install langchain-community langchain-core chromadb sentence-transformers ollama
Make sure Ollama is installed and running with your chosen model (e.g., ollama pull llama3).
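Before wiring up the full pipeline, a quick smoke test helps confirm the Ollama server is actually reachable on its default local port. This is just a sanity-check sketch, assuming you have already pulled llama3:

from langchain_community.llms import Ollama

# Talks to the local Ollama server (default: http://localhost:11434)
llm = Ollama(model="llama3")
print(llm.invoke("Reply with one short sentence to confirm you are running."))

If this prints a response, the LLM side of the stack is ready.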
3. Python Code: Bringing It All Together
from langchain_community.llms import Ollama
from langchain_community.vectorstores import Chroma
from langchain_community.embeddings import SentenceTransformerEmbeddings
import json
# Load product data
with open('products.json', encoding='utf-8') as f:
    products = json.load(f)
texts = [p['description'] for p in products]
# Chroma metadata values must be scalars (str/int/float/bool), so join the tags into one string
metadatas = [{"id": p["id"], "name": p["name"], "category": p["category"], "tags": ", ".join(p["tags"])} for p in products]
# Embedding model (runs locally via sentence-transformers)
embeddings = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")
# Build vector store
vectorstore = Chroma.from_texts(texts, embeddings, metadatas=metadatas)
# User query
query = "waterproof running shoe for women"
results = vectorstore.similarity_search(query, k=2)
print("Recommended products:")
for r in results:
    print("-", r.metadata['name'], "|", r.page_content)
# LLM: Generate final recommendation
llm = Ollama(model="llama3")
context = "\n".join([f"{r.metadata['name']}: {r.page_content}" for r in results])
user_question = f"Which of these products would you recommend for a woman who needs waterproof running shoes?\n\n{context}"
response = llm.invoke(user_question)
print("\nChatbot answer:")
print(response)
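One detail the script glosses over: Chroma.from_texts re-embeds the entire catalog on every run. For anything beyond a handful of products, you can persist the store to disk and reload it later. The directory name below is just an illustration:

# Build once, persisting the index to disk
vectorstore = Chroma.from_texts(
    texts, embeddings, metadatas=metadatas,
    persist_directory="./chroma_products"  # hypothetical path, pick your own
)

# On subsequent runs, reload the existing index instead of rebuilding it
vectorstore = Chroma(
    persist_directory="./chroma_products",
    embedding_function=embeddings
)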
How Does It Work?
- Semantic Search: When the user asks for a product, we don't just match keywords; we find the closest matches in meaning using embeddings (see the small sketch after this list).
- Chroma Vector DB: Handles fast, efficient similarity search on your local machine.
- Ollama LLM: Receives the search results and generates a natural, human-like reply, as if a real product expert were answering.
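To make the "closest matches in meaning" point concrete, here is a small standalone sketch that uses sentence-transformers directly (outside LangChain) to score two product descriptions against a query with cosine similarity. The example documents and exact scores are illustrative; the waterproof shoe should simply rank higher despite sharing few exact keywords with the query:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

query = "waterproof running shoe for women"
docs = [
    "Waterproof women's running shoe",
    "Leather men's office loafer",
]

# Encode query and documents into the same vector space
q_emb = model.encode(query, convert_to_tensor=True)
d_emb = model.encode(docs, convert_to_tensor=True)

# Cosine similarity: higher means semantically closer, even without keyword overlap
scores = util.cos_sim(q_emb, d_emb)[0]
for doc, score in zip(docs, scores):
    print(f"{score:.3f}  {doc}")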
What’s Next?
- Add more product metadata for richer answers.
- Connect this backend to your website’s chat UI (a minimal sketch follows this list).
- Swap in different LLMs with Ollama—try Mistral, Phi, Gemma, etc.
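As a starting point for hooking this backend up to a chat UI, the retrieval and generation steps from step 3 can be folded into a single function that whatever web framework you prefer can call. This is only a sketch that reuses the vectorstore and llm objects defined above, with a prompt of my own wording:

def recommend(query: str, k: int = 3) -> str:
    """Return a conversational recommendation for a free-text product query."""
    # 1. Retrieve the k most semantically similar products
    results = vectorstore.similarity_search(query, k=k)

    # 2. Build a context block from the matched products
    context = "\n".join(f"{r.metadata['name']}: {r.page_content}" for r in results)

    # 3. Ask the local LLM to turn the matches into a friendly answer
    prompt = (
        f"A customer asked: {query}\n\n"
        f"Matching products:\n{context}\n\n"
        "Recommend the best option in a short, friendly reply."
    )
    return llm.invoke(prompt)

print(recommend("waterproof running shoe for women"))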
Ready to supercharge your e-commerce with open-source AI—without sending data to the cloud?
Try this setup, and your customers will enjoy smarter, more personal recommendations with full privacy and control.
Got questions or want more features? Leave a comment or contact me!