# 🌱 Science Teacher Chatbot: Full Explanation (Everything)

2026-02-21 · admin
## 🌱 What We Are Building

We are building a Science Teacher AI chatbot backend.

- **Student:** What is gravity?
- **AI:** Gravity is a force…
- **Student:** explain again
- **AI:** As I explained earlier…

👉 The AI remembers the conversation context across messages.
## 🏆 What You Built

✅ LLM backend
✅ LangChain integration
✅ Memory chatbot
✅ Science filter
✅ Express API
✅ Teacher AI

This is real AI app architecture.

## 🎓 How to Explain to Students

We built an AI teacher using OpenAI:

- Express receives student questions.
- LangChain connects the AI and memory.
- Memory stores the conversation.
- The filter ensures science-only answers.
## ⚙️ How the System Works

```
Student → API → Filter → LangChain → OpenAI → Answer → Memory
```
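To see the flow as code, here is a toy sketch in plain JavaScript. The model call is stubbed out with a fake function (`fakeModel` is an illustration, not the real OpenAI or LangChain API); the point is only the order of the steps: filter first, then the model with history as context, then the memory update.

```javascript
// Toy pipeline mirroring: Student → Filter → Model → Answer → Memory
const history = [];

// Tiny keyword filter (shortened word list, for illustration only)
function isScience(text) {
  return ["gravity", "atom", "cell"].some((w) => text.toLowerCase().includes(w));
}

// Stubbed "model": stands in for the OpenAI call
function fakeModel(question, chatHistory) {
  return `Answer to "${question}" (context: ${chatHistory.length} earlier exchanges)`;
}

function ask(text) {
  if (!isScience(text)) return "I only answer science questions."; // filter step
  const answer = fakeModel(text, history);                         // model step
  history.push({ q: text, a: answer });                            // memory step last
  return answer;
}

console.log(ask("What is gravity?"));
console.log(ask("What is football?")); // rejected by the filter
```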
## 📦 Create the Project

```bash
mkdir science-teacher-bot
cd science-teacher-bot
pnpm init
```
## 📦 Install Dependencies

```bash
pnpm add express cors dotenv langchain @langchain/openai nodemon
```
## 📂 Folder Structure

```
science-teacher-bot/
│
├── src/
│   ├── memory.mjs
│   ├── llm.mjs
│   ├── filter.mjs
│   ├── route.mjs
│   └── server.mjs
│
├── .env
└── package.json
```
## 🔑 OpenAI Key

Create a `.env` file:

```
OPENAI_API_KEY=your_key_here
PORT=3000
```
## 💾 memory.mjs

```js
import { BufferMemory } from "langchain/memory";

// Stores the whole conversation and exposes it to the chain
// under the "chat_history" key.
export const memory = new BufferMemory({
  returnMessages: true,
  memoryKey: "chat_history",
});
```
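Conceptually, a buffer memory just keeps a running list of messages and hands the whole history back on each call. A minimal plain-JavaScript sketch of that idea (this is an illustration, not the real LangChain `BufferMemory` class):

```javascript
// Minimal stand-in for a buffer memory: store every message,
// return the full history as a context string.
class SimpleBufferMemory {
  constructor() {
    this.messages = [];
  }
  saveContext(input, output) {
    this.messages.push({ role: "student", text: input });
    this.messages.push({ role: "teacher", text: output });
  }
  loadHistory() {
    return this.messages.map((m) => `${m.role}: ${m.text}`).join("\n");
  }
}

const mem = new SimpleBufferMemory();
mem.saveContext("What is gravity?", "Gravity is a force...");
console.log(mem.loadHistory());
// student: What is gravity?
// teacher: Gravity is a force...
```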
## 🧠 llm.mjs

```js
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { ConversationChain } from "langchain/chains";
import { PromptTemplate } from "@langchain/core/prompts";
import { memory } from "./memory.mjs";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0.3,
});

// ConversationChain expects a prompt template object, not a raw string.
const prompt = PromptTemplate.fromTemplate(`
You are a science teacher.
Answer only science questions.
Explain in a simple way.

Conversation:
{chat_history}

Student: {input}
Teacher:
`);

export const chatChain = new ConversationChain({ llm, memory, prompt });
```
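Under the hood, a prompt template is essentially string interpolation: the `{chat_history}` and `{input}` placeholders get filled in before the text is sent to the model. A hand-rolled sketch of that mechanism (illustrative only, not LangChain's actual implementation):

```javascript
// Hand-rolled {placeholder} substitution, mimicking what a prompt
// template does with chat_history and input.
function formatPrompt(template, values) {
  return template.replace(/\{(\w+)\}/g, (_, key) => values[key] ?? "");
}

const template = `You are a science teacher.
Conversation:
{chat_history}
Student: {input}
Teacher:`;

const prompt = formatPrompt(template, {
  chat_history: "Student: What is gravity?\nTeacher: Gravity is a force...",
  input: "explain again",
});

console.log(prompt); // placeholders replaced with history and question
```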
## 🔬 filter.mjs

```js
// Simple keyword check: a question counts as "science"
// if it contains any of these words.
export function isScience(text) {
  const words = [
    "physics", "chemistry", "biology",
    "atom", "cell", "energy", "force",
    "gravity", "plant", "reaction",
    "photosynthesis", "molecule",
  ];
  return words.some((w) => text.toLowerCase().includes(w));
}
```
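Note that `includes` matches substrings, so "plantation" would pass the filter because it contains "plant". If that matters for your use case, a word-boundary variant is stricter (a sketch using the same word list; `isScienceStrict` is a name introduced here, not from the original code):

```javascript
// Stricter variant: match whole words only, so "plantation"
// does not trigger the "plant" keyword.
const SCIENCE_WORDS = [
  "physics", "chemistry", "biology", "atom", "cell", "energy",
  "force", "gravity", "plant", "reaction", "photosynthesis", "molecule",
];

function isScienceStrict(text) {
  const lower = text.toLowerCase();
  // \b marks a word boundary, so "plant" matches but "plantation" does not.
  return SCIENCE_WORDS.some((w) => new RegExp(`\\b${w}\\b`).test(lower));
}
```

A side effect worth knowing: plurals like "plants" also stop matching "plant", so the word list may need extending.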
## 🌐 route.mjs

```js
import express from "express";
import { chatChain } from "./llm.mjs";
import { isScience } from "./filter.mjs";

export const router = express.Router();

router.post("/", async (req, res) => {
  const { text } = req.body;

  if (!text) {
    return res.status(400).json({ error: "Question required" });
  }

  if (!isScience(text)) {
    return res.json({ answer: "I only answer science questions." });
  }

  const response = await chatChain.predict({ input: text });
  res.json({ answer: response });
});
```
## 🚀 server.mjs

```js
import express from "express";
import cors from "cors";
import "dotenv/config";
import { router } from "./route.mjs";

const app = express();

app.use(cors());
app.use(express.json());
app.use("/ask", router);

app.get("/", (req, res) => {
  res.send("Science Teacher AI running");
});

const PORT = process.env.PORT || 3000;

app.listen(PORT, () => {
  console.log("Server running on", PORT);
});
```
## ▶️ Run the Project

Add this script to `package.json`:

```json
"scripts": {
  "dev": "node src/server.mjs"
}
```

Then start the server:

```bash
pnpm dev
```

Since nodemon is installed, you can also use `"dev": "nodemon src/server.mjs"` to get auto-reload during development.
## 🧪 Test the API

Send a POST request to `http://localhost:3000/ask`:

```json
{ "text": "What is photosynthesis?" }
```

Response:

```json
{ "answer": "Photosynthesis is the process..." }
```
The bot:

- answers science questions
- explains like a teacher
- remembers the chat
- ignores non-science questions

Request flow:

- Student sends a question
- Express API receives it
- The science filter checks it
- LangChain adds memory
- OpenAI generates the answer
- The API returns the response

Technologies used:

- Node.js: backend runtime
- Express: API framework
- LangChain: LLM framework
- OpenAI: the AI brain
- BufferMemory: chat memory
- dotenv: loads the API key
- pnpm: package manager

What each file does:

- `memory.mjs`: stores chat history, remembers messages, gives context to the AI
- `llm.mjs`: creates the AI model, defines the teacher behavior, connects memory, creates the chatbot
- `filter.mjs`: checks whether a question is about science, blocks other topics
- `route.mjs`: receives the question, validates the text, checks the science filter, sends the answer
- `server.mjs`: creates the server, enables JSON parsing, mounts the API route, starts the backend

## 🧠 Memory Demo

Ask these in order:

- What is gravity?
- explain again

The second answer uses the context stored from the first.

## ⚠️ Important Notes

- Memory is shared for all users
- Memory resets on restart
- For production, use per-user sessions and a vector store
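Since the memory above is one shared object, a first step toward fixing the "shared for all users" problem is keeping one history per session. A minimal in-memory sketch (the `sessionId` concept and these function names are assumptions added here, not part of the original code; this still resets on restart):

```javascript
// One conversation history per session id, instead of a single
// shared memory object. In-memory only: still lost on restart.
const sessions = new Map();

function getHistory(sessionId) {
  if (!sessions.has(sessionId)) {
    sessions.set(sessionId, []);
  }
  return sessions.get(sessionId);
}

function remember(sessionId, question, answer) {
  getHistory(sessionId).push({ question, answer });
}

remember("alice", "What is gravity?", "Gravity is a force...");
remember("bob", "What is a cell?", "A cell is...");

console.log(getHistory("alice").length); // 1 — alice only sees her own history
console.log(getHistory("bob").length);   // 1
```

In the real app, the session id would come from the request (a header or cookie), and each session would get its own `BufferMemory` instance.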