# Building Context-Aware Agents with LangGraph
2025-12-22
*How to add memory, state, and long-term reasoning to LangGraph agents.*

Most AI agents behave like goldfish: they respond only to the last message and forget everything else.
But real intelligence needs memory.
Context changes decisions.
History shapes reasoning.

Today we'll build a LangGraph agent that:

- remembers past interactions
- stores its state
- adapts reasoning based on context
- reads/writes persistent memory
- loops intelligently instead of starting fresh every time
## 1. Setup & Installation

Make sure you have LangGraph installed:

```bash
pip install langgraph langchain-openai
```

If you're running this locally or on Colab, you're good.
## 2. Idea: Context-Aware Agent Loop

Unlike stateless chatbot calls, a context-aware agent has:

- State: what it knows so far
- Memory: persistent information across runs
- Tools: actions it can take
- LLM nodes: thinking steps

In LangGraph, this becomes a state graph:

```
User → Planner → MemoryCheck → Executor → MemoryUpdate → Planner (loop)
```
## 3. Define the Agent State

LangGraph agents use pydantic-style states:

```python
from typing import List, Optional

from pydantic import BaseModel


class AgentState(BaseModel):
    history: List[str] = []       # conversation log
    task: Optional[str] = None    # current objective
    memory: dict = {}             # persistent knowledge
    memory_matches: list = []     # written by the MemoryCheck node
    result: Optional[str] = None  # written by the Executor node
```

This is the entire brain of your agent:

- history: conversation log
- task: current objective
- memory: persistent knowledge
## 4. Add a Memory Backend (Simple JSON File)

Let's create a tiny persistent memory store:

```python
import json
import os

MEMORY_FILE = "agent_memory.json"


def load_memory():
    if not os.path.exists(MEMORY_FILE):
        return {}
    with open(MEMORY_FILE) as f:
        return json.load(f)


def save_memory(memory):
    with open(MEMORY_FILE, "w") as f:
        json.dump(memory, f, indent=2)
```

You can replace this later with Pinecone vector memory, but for demo purposes, JSON works beautifully.
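The two-function interface is the important part: any backend that implements `load_memory`/`save_memory` can be dropped in without touching the rest of the graph. As one illustration, here is a sketch of the same interface on top of SQLite (standard library only; the `agent_memory.db` filename is my choice, not from the original):

```python
import json
import sqlite3

DB_FILE = "agent_memory.db"  # hypothetical filename


def _ensure_table(con):
    con.execute("CREATE TABLE IF NOT EXISTS memory (key TEXT PRIMARY KEY, value TEXT)")


def load_memory():
    con = sqlite3.connect(DB_FILE)
    _ensure_table(con)
    rows = con.execute("SELECT key, value FROM memory").fetchall()
    con.close()
    # values are stored as JSON so arbitrary structures round-trip
    return {key: json.loads(value) for key, value in rows}


def save_memory(memory):
    con = sqlite3.connect(DB_FILE)
    _ensure_table(con)
    con.executemany(
        "INSERT OR REPLACE INTO memory (key, value) VALUES (?, ?)",
        [(key, json.dumps(value)) for key, value in memory.items()],
    )
    con.commit()
    con.close()
```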
## 5. LLM Nodes (Thinking + Planning)

In LangGraph, a node is just a function that reads the state and returns a partial update, so the planner is a plain function wrapping the LLM call:

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)


def planner_node(state: AgentState):
    prompt = (
        "Given the memory and the current user task, "
        "decide: (1) what the user wants, (2) what steps to take next.\n"
        f"Memory: {state.memory}\n"
        f"Task: {state.task}\n"
        f"History: {state.history}\n"
    )
    plan = llm.invoke(prompt).content
    return {"history": state.history + [f"planner: {plan}"]}
```

The planner uses all accumulated context, not just the latest message.
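You can sanity-check the planner in isolation before wiring up the graph (needs `OPENAI_API_KEY` set; the sample state below is made up for illustration):

```python
# hypothetical sample state, just to smoke-test the planner node
sample = AgentState(
    history=["user: hi"],
    task="Find articles on LangGraph",
    memory={"LangGraph": "[Search results for 'LangGraph basics']"},
)
print(planner_node(sample)["history"][-1])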
## 6. MemoryCheck Node

This step checks whether the agent already knows something relevant:

```python
def memory_check_node(state: AgentState):
    task = state.task or ""
    matches = []
    for key, value in state.memory.items():
        if key.lower() in task.lower():
            matches.append((key, value))
    return {"memory_matches": matches}
```

You can later replace this naive keyword match with semantic retrieval, as sketched below.
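For instance, a minimal semantic version, assuming `langchain-openai` is installed and `OPENAI_API_KEY` is set; the embedding model and the 0.8 threshold are arbitrary choices for illustration:

```python
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(model="text-embedding-3-small")


def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)


def semantic_memory_check_node(state: AgentState):
    task = state.task or ""
    if not task or not state.memory:
        return {"memory_matches": []}
    task_vec = embeddings.embed_query(task)
    matches = []
    for key, value in state.memory.items():
        # embedding every key on every run is slow; cache these in production
        if cosine(task_vec, embeddings.embed_query(key)) > 0.8:
            matches.append((key, value))
    return {"memory_matches": matches}
```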
## 7. Executor Node (Actions)

The executor performs the actual action; here a stub search tool stands in for a real one:

```python
def search_tool(query):
    # stub: swap in a real search API here
    return f"[Search results for '{query}']"


def executor_node(state: AgentState):
    task = state.task
    result = search_tool(task)
    return {"result": result}
```
## 8. MemoryUpdate Node

Store new knowledge after each run:

```python
def memory_update_node(state: AgentState):
    memory = load_memory()
    last_result = state.result
    memory[state.task] = last_result
    save_memory(memory)
    return {"memory": memory}
```

Now your agent gets smarter with every loop.
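One caveat: the memory file gains a key per task, forever. If that becomes a problem, a small pruning helper keeps the demo tidy (the `max_keys` limit is a made-up illustration; Python dicts preserve insertion order, so the oldest entries come first):

```python
def prune_memory(memory: dict, max_keys: int = 100) -> dict:
    # keep only the most recently inserted entries
    if len(memory) <= max_keys:
        return memory
    return dict(list(memory.items())[-max_keys:])
```

If you adopt it, call `prune_memory(memory)` inside `memory_update_node` just before `save_memory`.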
## 9. Build the LangGraph

```python
from langgraph.graph import END, StateGraph

graph = StateGraph(AgentState)

graph.add_node("planner", planner_node)
graph.add_node("memory_check", memory_check_node)
graph.add_node("executor", executor_node)
graph.add_node("memory_update", memory_update_node)

graph.add_edge("planner", "memory_check")
graph.add_edge("memory_check", "executor")
graph.add_edge("executor", "memory_update")


def loop_or_end(state: AgentState):
    # close the loop back to the planner until a result exists;
    # an unconditional memory_update -> planner edge would never terminate
    return "planner" if state.result is None else END


graph.add_conditional_edges("memory_update", loop_or_end)

graph.set_entry_point("planner")
agent = graph.compile()
```

This is a fully context-aware agent loop: the conditional edge closes the `MemoryUpdate → Planner` cycle from the diagram while still guaranteeing termination.
## 10. Run the Full Demo

```python
state = agent.invoke({
    "history": [],
    "task": "Find articles on LangGraph",
    "memory": load_memory(),
})
print(state)
```

Run it again and watch memory kick in:
```python
state = agent.invoke({
    "history": ["Hi again!"],
    "task": "Find articles on LangGraph",
    "memory": load_memory(),
})
```

The second run will skip unnecessary work because the agent "remembers."
## Why Context Makes Agents Powerful

- Fewer hallucinations: the agent doesn't forget past results
- Action optimization: avoids repeating tasks
- Long-term workflows: multi-step reasoning over time
- Personalization: your agent remembers preferences
- Multi-agent cooperation: context is shared across nodes

Context is the difference between an LLM and an agentic system.

## Final Reflection

Building agents is no longer about chaining prompts.
It's about orchestrating stateful intelligence.

And suddenly you're not just prompting a model;
you're designing a mind.