How to Use Embedding Models with LLMs for Smarter AI Applications
In today’s AI landscape, Large Language Models (LLMs) like GPT-4, Llama-3, or Qwen2.5 grab all the headlines — but if you want them to work with your data, you need another type of model alongside them: embedding models.
In this post, we’ll explore what embeddings are, why they matter, and how to combine them with LLMs to build powerful applications like semantic search and Retrieval-Augmented Generation (RAG).
1. What is an Embedding Model?
An embedding model converts text (or other data) into a list of numbers — a vector — that captures the meaning of the content.
In this vector space, similar ideas are located close together, even if the exact words are different.
Example:
"dog" → [0.12, -0.09, 0.33, ...]
"puppy" → [0.11, -0.08, 0.31, ...] ← close in meaning
"airplane" → [-0.44, 0.88, 0.05, ...] ← far in meaning
Popular embedding models:
- OpenAI: `text-embedding-3-large` (3072 dims), `text-embedding-3-small` (1536 dims)
- Local: `mxbai-embed-large`, `all-MiniLM-L6-v2`, `Qwen3-Embedding-0.6B-GGUF`
- Multilingual: `embed-multilingual-v3.0` (Cohere)
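If you want to try one of the local options, here is a minimal sketch using the sentence-transformers library with `all-MiniLM-L6-v2` from the list above:

```python
from sentence_transformers import SentenceTransformer

# Downloads the model on first use; produces 384-dimensional vectors
model = SentenceTransformer("all-MiniLM-L6-v2")
vectors = model.encode(["dog", "puppy", "airplane"])
print(vectors.shape)  # (3, 384)
```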
2. Why Pair Embeddings with LLMs?
LLMs are great at reasoning and generating text — but they can’t magically access your private data unless you feed it to them.
Embedding models solve this by enabling semantic retrieval from your data store.
This combination is the backbone of RAG:
- Embedding Model → Converts all your documents into vectors and stores them in a vector database.
- Vector DB → Finds the stored vectors most similar to your embedded question and returns the matching text chunks.
- LLM → Combines your question with those retrieved chunks and generates a grounded answer.
3. The RAG Pipeline in Action
```mermaid
graph TD
    A["User Question"] --> B["Embedding Model → Query Vector"]
    B --> C["Vector DB → Find Similar Document Vectors"]
    C --> D["Relevant Docs"]
    D --> E["LLM → Combine Question + Docs → Final Answer"]
```
Step-by-step
Step 1: Preprocess & Store Documents
- Split documents into chunks (e.g., 500 tokens each); a simple chunker sketch follows this list.
- Use the embedding model to convert each chunk into a vector.
- Store vectors + metadata in a vector database (e.g., Qdrant, Milvus, Weaviate).
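Here is a minimal token-based chunker sketch. It assumes the tiktoken library (the tokenizer used by OpenAI's models); the overlap value is an illustrative choice, not a fixed rule:

```python
import tiktoken

def chunk_text(text: str, max_tokens: int = 500, overlap: int = 50) -> list[str]:
    """Split text into chunks of at most max_tokens tokens, with a small
    overlap so sentences cut at a boundary still appear whole in one chunk."""
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode(text)
    step = max_tokens - overlap
    return [enc.decode(tokens[i:i + max_tokens]) for i in range(0, len(tokens), step)]
```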
Step 2: Handle User Queries
- Convert the query into a vector using the same embedding model.
- Search for the nearest vectors in the DB.
- Retrieve the original text chunks.
Step 3: Generate the Answer
- Pass both the query and retrieved chunks into your LLM prompt.
- Let the LLM compose a coherent, accurate answer.
4. Code Example: OpenAI API + Qdrant + GPT-4o
```python
from openai import OpenAI
import qdrant_client
from qdrant_client.models import Distance, VectorParams, PointStruct

# Setup
client = OpenAI(api_key="YOUR_KEY")
qdrant = qdrant_client.QdrantClient(":memory:")  # in-memory instance, handy for demos

# 1. Embed a document
doc = "Durian is a tropical fruit grown in Southeast Asia."
embedding = client.embeddings.create(
    model="text-embedding-3-large",
    input=doc,
).data[0].embedding

# Store in Qdrant: the collection's vector size must match the embedding model's output
qdrant.recreate_collection(
    collection_name="docs",
    vectors_config=VectorParams(size=len(embedding), distance=Distance.COSINE),
)
qdrant.upsert(
    collection_name="docs",
    points=[PointStruct(id=0, vector=embedding, payload={"text": doc})],
)

# 2. Embed the query with the SAME model used for the documents
query = "Where is durian grown?"
query_vec = client.embeddings.create(
    model="text-embedding-3-large",
    input=query,
).data[0].embedding

# Search for the nearest stored vector
results = qdrant.search(collection_name="docs", query_vector=query_vec, limit=1)
context = results[0].payload["text"]

# 3. Ask the LLM with the retrieved context
answer = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Answer based on the provided context."},
        {"role": "user", "content": f"Context: {context}\n\nQuestion: {query}"},
    ],
)
print(answer.choices[0].message.content)
```
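If everything is wired up correctly, the printed answer should say, in some form, that durian is grown in Southeast Asia, grounded in the retrieved chunk rather than the model's built-in knowledge.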
5. Best Practices
- Match the embedding model to your domain (multilingual if needed).
- Chunk size matters: too small = loss of context; too big = poor match quality.
- Use the same embedding model for indexing documents and for embedding queries; vectors from different models live in different spaces and aren't comparable.
- Use LLMs with long context windows if you plan to retrieve many chunks (see the token-budget sketch after this list).
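To make that last point concrete, here is a hedged sketch (the function name and budget are illustrative) that trims a ranked list of retrieved chunks to a fixed token budget before building the prompt:

```python
import tiktoken

def fit_context(chunks: list[str], budget_tokens: int = 8000) -> list[str]:
    """Keep the best-ranked chunks that fit within the token budget.
    Assumes chunks are already sorted by similarity, best first."""
    enc = tiktoken.get_encoding("cl100k_base")
    kept, used = [], 0
    for chunk in chunks:
        n = len(enc.encode(chunk))
        if used + n > budget_tokens:
            break
        kept.append(chunk)
        used += n
    return kept
```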
6. When to Use This Approach
- Knowledge base Q&A
- Semantic search over large corpora
- Chatbots that “remember” your documents
- Contextual assistants in enterprise apps
Final Thought
The magic of combining embedding models with LLMs is that you get the precision of search and the fluency of generation in one pipeline.
That’s why nearly every serious AI-powered application — from ChatGPT Enterprise to local RAG bots — uses this two-model setup.