I Almost Used LangGraph for Social Media Automation (Here's Why I Built an MCP Server Instead)
2025-12-12
*How choosing protocol over framework saved me $45/month and shipped in one day*

I just posted my first AI-generated tweet to Twitter/X. It was generated by Groq's free AI, processed by a custom MCP server I built, and posted through Ayrshare—all running for $0/month. Total build time? Less than a day.

I almost used LangGraph. I studied LangChain's 2,000-line social-media-agent for days. It's production-ready, battle-tested, and impressive. But I chose a different path—and saved $45/month in the process.

Here's why I built an MCP server instead of using LangGraph, and why the decision might matter for your next AI project.

## The Context

I'm a full-stack developer from Port Harcourt, Nigeria, and I've been living in the Cloudflare Workers ecosystem for years. I run FPL Hub—a Fantasy Premier League platform serving 2,000+ users with 500K+ daily API calls and 99.9% uptime. Everything runs on Workers.

When I needed social media automation for client work, my instinct was to reach for the established solution: LangGraph. After all, LangChain has a production social-media-agent with 1.7k stars. Why reinvent the wheel?

But then I remembered: I'm not building a wheel. I'm building something simpler.

## The LangGraph Temptation

LangGraph is impressive. I spent days studying LangChain's social-media-agent repository (1.7k+ stars) and it's genuinely production-ready.

### What I Found

- Human-in-the-loop workflows with approval gates
- State checkpointing for complex processes
- Multi-step agent orchestration
- Battle-tested reference implementation

For social media automation, LangGraph offers everything: content generation, review cycles, posting workflows, and analytics—all with built-in state management.

So why didn't I use it?

## The Aha Moment

I was halfway through setting up LangGraph Cloud when I realized: I was solving the wrong problem.

LangGraph is built for complex, multi-step agent workflows with state management. My use case was simpler:

- User gives me content
- AI optimizes it for the platform
- Save as draft
- Post when ready

I didn't need a graph. I needed a function call.

That's when I remembered the Model Context Protocol (MCP) work I'd been doing. What if I just... built an MCP server?

## The Problems I Saw With LangGraph

### 1. Complexity for Simple Use Cases

The reference implementation is 2,000+ lines of Python. For my use case (generate content, review, post), I needed maybe 500 lines of TypeScript. The rest was framework overhead.

### 2. Deployment Constraints

LangGraph needs:

- Persistent runtime (LangGraph Cloud at $29/month minimum, or a VPS)
- PostgreSQL/MongoDB for checkpointing
- Always-on infrastructure

I wanted edge deployment on Cloudflare Workers with zero cold starts.

### 3. Vendor Lock-in

LangGraph code is LangGraph-specific. If I want to switch frameworks or use a different client, I'm rewriting everything.

### 4. Edge Incompatibility

LangGraph can't run on Cloudflare Workers. The entire paradigm assumes persistent processes and stateful checkpointing.

### 5. Learning Curve

To use LangGraph effectively, I'd need to learn:

- Framework-specific concepts (graphs, nodes, edges)
- Checkpointer management
- LangChain ecosystem conventions

With MCP, I just need to understand a simple JSON-RPC protocol.

## Why MCP Won

Here's what made me choose MCP.

### Protocol Over Framework

MCP is a protocol, not a framework. Any MCP client (Claude Desktop, VS Code with Continue, custom UIs) can use my server without modification.
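To make "simple" concrete: under the hood, every MCP interaction is a plain JSON-RPC 2.0 message. A tools/call request for the draft_post tool you'll see later looks roughly like this (a sketch of the wire format, not a captured trace):

```typescript
// What any MCP client sends to invoke a tool: plain JSON-RPC 2.0.
// Values here are illustrative, not from a real session.
const toolCallRequest = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/call',
  params: {
    name: 'draft_post',
    arguments: {
      content: 'edge computing benefits',
      platform: 'twitter',
      tone: 'professional',
    },
  },
};
```

Because every client speaks this same envelope, the server never needs client-specific code.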
### Edge-Native from Day One

I built this for Cloudflare Workers. The entire architecture assumes:

- Stateless request/response
- SQLite (D1) for persistence
- No cold starts
- Global distribution

(See the sketch at the end of this section for what "stateless" means in practice.)

### Lightweight Implementation

My complete implementation:

- ~800 lines of TypeScript
- 8 tools, 3 resources, 3 prompts
- Full CRUD for drafts
- AI content generation
- Multi-platform posting

### Future-Proof

As MCP adoption grows (Anthropic, Zed, Continue, Cody), my server's utility multiplies. It's not tied to any specific framework's lifecycle.

### Cost Structure

My MCP stack:

- Groq API: $0 (free tier, rate-limited but plenty)
- Ayrshare: $0 (10 posts/month free)
- Cloudflare Workers: $5/month (includes D1, KV, Vectorize, R2)
- Total: $5/month

The equivalent LangGraph stack:

- LangGraph Cloud: $29/month (or a VPS at $10-20/month)
- OpenAI API: $20-30/month
- Database hosting: $0-10/month
- Total: $50-80/month

For a freelancer testing ideas, $5 vs $50 matters.
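Here's the stateless shape in code, before the architecture details. This is my sketch of the eventual Workers deployment (the build actually started locally), so the Env bindings and route are assumptions:

```typescript
// Hypothetical Workers entry point: each request is self-contained.
// All state lives in D1 (env.DB); nothing persists between invocations.
export interface Env {
  DB: D1Database;
  AYRSHARE_API_KEY: string;
  GROQ_API_KEY: string;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    const id = url.searchParams.get('id');
    if (!id) return Response.json({ error: 'missing id' }, { status: 400 });

    // Look up a draft; the worker holds no state of its own.
    const draft = await env.DB.prepare('SELECT * FROM drafts WHERE id = ?')
      .bind(id)
      .first();
    return Response.json(draft ?? { error: 'not found' }, { status: draft ? 200 : 404 });
  },
};
```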
## Architecture: The MCP Approach
```text
Claude Desktop (Client)
        ↓ JSON-RPC over stdio
MCP Server (TypeScript)
├── Tools (LLM-controlled actions)
│   ├── draft_post
│   ├── schedule_post
│   ├── post_immediately
│   ├── generate_thread
│   └── analyze_engagement
├── Resources (App-controlled data)
│   ├── drafts://list
│   ├── scheduled://posts
│   └── stats://summary
├── Prompts (User-invoked templates)
│   ├── write_tweet
│   ├── linkedin_post
│   └── thread_generator
├── Storage (SQLite → D1)
└── Integrations
    ├── Ayrshare (multi-platform posting)
    └── Groq (AI content generation)
```
The beauty? Each layer is independent and replaceable.

## Building the MCP Server

I'll walk through the key components.

### Storage Layer: SQLite to D1

Started with SQLite for local development:
```typescript
import Database from 'better-sqlite3';

export class StorageService {
  private db: Database.Database;

  constructor(dbPath: string) {
    this.db = new Database(dbPath);
    this.initializeSchema();
  }

  createDraft(data: DraftData): Draft {
    const id = crypto.randomUUID();
    const now = new Date().toISOString();

    this.db.prepare(`
      INSERT INTO drafts (id, content, platform, tone, status, created_at)
      VALUES (?, ?, ?, ?, 'draft', ?)
    `).run(id, data.content, data.platform, data.tone, now);

    return this.getDraft(id);
  }
}
```
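initializeSchema() isn't shown in the excerpt above. A minimal version, with columns inferred from the INSERT statement (my reconstruction, not the original code), would be:

```typescript
// Inside StorageService: hypothetical reconstruction of the schema
// the INSERT above expects.
private initializeSchema(): void {
  this.db.exec(`
    CREATE TABLE IF NOT EXISTS drafts (
      id          TEXT PRIMARY KEY,
      content     TEXT NOT NULL,
      platform    TEXT NOT NULL,
      tone        TEXT,
      status      TEXT NOT NULL DEFAULT 'draft',
      created_at  TEXT NOT NULL
    )
  `);
}
```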
Migration to Cloudflare D1 is literally:
```typescript
// Local
const db = new Database('./social_media.db');

// Edge
const db = env.DB; // Cloudflare D1 binding
```
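Concretely, here's what createDraft might look like against D1 (my sketch; D1 uses async prepare().bind().run(), unlike better-sqlite3's synchronous run()):

```typescript
// Hypothetical D1 version of createDraft: same SQL, async call shape.
async function createDraft(db: D1Database, data: DraftData): Promise<string> {
  const id = crypto.randomUUID();
  const now = new Date().toISOString();

  await db
    .prepare(`
      INSERT INTO drafts (id, content, platform, tone, status, created_at)
      VALUES (?, ?, ?, ?, 'draft', ?)
    `)
    .bind(id, data.content, data.platform, data.tone, now)
    .run();

  return id;
}
```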
Same SQL, different runtime. Beautiful.

### Social Media Integration: Ayrshare

Instead of managing OAuth for Twitter, LinkedIn, Facebook, Instagram, TikTok, YouTube, Pinterest, Reddit, Telegram, and Google My Business individually, I used Ayrshare:
```typescript
export class SocialMediaAPI {
  private apiKey: string;
  private baseUrl = 'https://app.ayrshare.com/api';

  constructor(apiKey: string) {
    this.apiKey = apiKey;
  }

  async postImmediately(content: string, platforms: string[]) {
    const response = await fetch(`${this.baseUrl}/post`, {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${this.apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        post: content,
        platforms,
      }),
    });

    return response.json();
  }
}
```
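Calling it is a one-liner per post. A usage sketch (the env-var name is my assumption):

```typescript
// Illustrative usage; AYRSHARE_API_KEY is an assumed env-var name.
const api = new SocialMediaAPI(process.env.AYRSHARE_API_KEY!);
const result = await api.postImmediately(
  'Edge computing reduces latency for IoT & AI. #EdgeComputing',
  ['twitter', 'linkedin'],
);
console.log(result); // Ayrshare responds with per-platform post results
```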
One API, ten platforms. Done.

### AI Content Generation: Groq (FREE)

Here's where I saved the most money. Instead of OpenAI or Claude, I used Groq's free tier with Llama 3.3 70B:
```typescript
import Groq from 'groq-sdk';

export class ContentGenerator {
  private client: Groq;

  constructor(apiKey: string) {
    this.client = new Groq({ apiKey });
  }

  async generate(request: GenerateRequest): Promise<GenerateResponse> {
    const systemPrompt = this.buildSystemPrompt(request);

    const response = await this.client.chat.completions.create({
      model: 'llama-3.3-70b-versatile', // FREE
      messages: [
        { role: 'system', content: systemPrompt },
        { role: 'user', content: request.input },
      ],
      max_tokens: 1000,
      temperature: 0.7,
    });

    const text = response.choices[0]?.message?.content || '';
    return this.parseResponse(text, request);
  }

  private buildSystemPrompt(request: GenerateRequest): string {
    const platformPrompts = {
      twitter: `Create engaging tweets that:
- Stay under ${request.maxLength} characters (STRICT)
- Use ${request.tone} tone
- Hook readers in the first line
- End with engagement (question or CTA)`,
      linkedin: `Create professional posts that:
- Are detailed (1,300-1,500 characters)
- Use ${request.tone} tone
- Start with a compelling hook
- Include 3-5 key insights with takeaways`,
    };

    return platformPrompts[request.platform];
  }
}
```
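One caveat worth noting: a prompt that says "STRICT" doesn't actually guarantee the model stays under 280 characters. A small guard after generation (a hypothetical helper, not in the original code) keeps overruns from ever reaching the posting API:

```typescript
// Hypothetical safety net: truncate at a word boundary if the model overshoots.
function enforceLimit(text: string, maxLength: number): string {
  if (text.length <= maxLength) return text;
  const cut = text.slice(0, maxLength - 1);
  const lastSpace = cut.lastIndexOf(' ');
  return (lastSpace > 0 ? cut.slice(0, lastSpace) : cut) + '…';
}
```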
Quality? Excellent. Cost? $0. Rate limits? 30 requests/minute (more than enough).

### MCP Protocol Implementation

The MCP server itself is straightforward:
```typescript
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from '@modelcontextprotocol/sdk/types.js';

const server = new Server(
  { name: 'social-media-server', version: '1.0.0' },
  { capabilities: { tools: {}, resources: {}, prompts: {} } }
);

// Define tools
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: 'draft_post',
      description: 'Create AI-generated social media post',
      inputSchema: {
        type: 'object',
        properties: {
          content: { type: 'string', description: 'Topic or raw content' },
          platform: {
            type: 'string',
            enum: ['twitter', 'linkedin', 'facebook', 'instagram'],
          },
          tone: {
            type: 'string',
            enum: ['professional', 'casual', 'technical', 'engaging'],
          },
        },
        required: ['content', 'platform'],
      },
    },
    // ... 7 more tools
  ],
}));

// Handle tool calls
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;

  switch (name) {
    case 'draft_post': {
      const generated = await contentGen.generate({
        input: args.content,
        platform: args.platform,
        tone: args.tone || 'professional',
        maxLength: args.platform === 'twitter' ? 280 : 3000,
      });

      const draft = await storage.createDraft({
        content: generated.text,
        platform: args.platform,
        hashtags: generated.hashtags,
        metadata: { engagement_score: generated.score },
      });

      return {
        content: [{
          type: 'text',
          text:
            `✅ Draft created!\n\n` +
            `ID: ${draft.id}\n` +
            `Platform: ${args.platform}\n` +
            `Score: ${generated.score}/100\n\n` +
            `${generated.text}`,
        }],
      };
    }
  }
});

// Connect transport
const transport = new StdioServerTransport();
await server.connect(transport);
```
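Resources follow the same handler pattern. The drafts://list resource from the architecture diagram might be wired like this (a sketch, assuming the SDK's resource schemas; the article's actual resource code isn't shown, and listDrafts is a hypothetical helper on StorageService):

```typescript
import {
  ListResourcesRequestSchema,
  ReadResourceRequestSchema,
} from '@modelcontextprotocol/sdk/types.js';

// Sketch: expose saved drafts as an app-controlled MCP resource.
server.setRequestHandler(ListResourcesRequestSchema, async () => ({
  resources: [
    { uri: 'drafts://list', name: 'Draft posts', mimeType: 'application/json' },
  ],
}));

server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
  if (request.params.uri === 'drafts://list') {
    const drafts = await storage.listDrafts(); // hypothetical helper
    return {
      contents: [{
        uri: 'drafts://list',
        mimeType: 'application/json',
        text: JSON.stringify(drafts, null, 2),
      }],
    };
  }
  throw new Error(`Unknown resource: ${request.params.uri}`);
});
```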
That's it. ~800 lines total for 8 tools, 3 resources, 3 prompts, storage, integrations, and AI generation.

## Real-World Usage

Here's what it looks like in Claude Desktop:

Me: Create a professional tweet about edge computing benefits

Claude: [calls draft_post tool]
```text
✅ Draft created successfully!

Draft ID: 61c5fa20-de39-458e-a198-98fbf077a3d4
Platform: twitter
Character Count: 115
Engagement Score: 70/100

Content:
[Sent with Free Plan] Edge computing reduces latency for IoT & AI. Where will you apply it first? #EdgeComputing

💡 Use schedule_post with this draft ID to schedule it.
```
Me: Post it immediately to Twitter

Claude: [calls post_immediately tool]
```text
✅ Posted successfully!

Platforms: twitter
Post ID: lneLUiDHcrie5cr1ydGO

Content: [tweet content]
```
And it's live. 5 views within minutes. Just like that.

![Live Tweet Posted via MCP Server]
*My first AI-generated tweet posted live via the MCP server*

## Lessons Learned

### What Worked Well
- MCP is simpler than expected. The protocol has three primitives (tools, resources, prompts), all carried over plain JSON-RPC. That's it.
- TypeScript types catch errors early. Strict mode prevented countless runtime bugs.
- Groq is underrated. Free tier, fast responses, good quality. Perfect for prototyping.
- Ayrshare abstracts complexity. I didn't touch a single OAuth flow. Just API keys and posting.

### Challenges Faced

- MCP documentation is still evolving. I relied heavily on examples and source-code reading.
- Ayrshare free-tier limits. 10 posts/month is fine for testing, but scheduling requires a paid plan ($20/month).
- Platform-specific optimization. Each platform has different character limits, best practices, and engagement patterns. My prompts needed per-platform tuning.
- Error handling across async operations. Proper error boundaries took iteration to get right (see the sketch below).
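What "proper error boundaries" ended up meaning, roughly: every tool handler gets wrapped so a failed Groq or Ayrshare call returns a clean MCP error result instead of killing the server. A hypothetical wrapper (not the article's actual code):

```typescript
// Hypothetical error boundary for tool handlers: failures surface as MCP
// tool results flagged isError, so the client can display them gracefully.
type ToolResult = { content: { type: 'text'; text: string }[]; isError?: boolean };

async function safeToolCall(fn: () => Promise<ToolResult>): Promise<ToolResult> {
  try {
    return await fn();
  } catch (err) {
    const message = err instanceof Error ? err.message : String(err);
    return {
      content: [{ type: 'text', text: `❌ Tool failed: ${message}` }],
      isError: true,
    };
  }
}
```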
### What I'd Do Differently

- Start with edge deployment. I built locally first, then migrated to Workers. I should have gone edge-native from day one.
- Add semantic search earlier. Vectorize for content templates would have been useful from the start.
- Build a template library upfront. Reusable content patterns speed up generation significantly.

## When to Choose Each Approach

### Choose MCP if:

- You want lightweight automation without framework overhead
- Edge deployment is important (Cloudflare Workers, Deno Deploy)
- You value client portability (any MCP client works)
- You prefer protocol-over-framework thinking
- You're building for the future (MCP adoption is growing fast)
- Cost matters ($5/month vs $50+/month)

### Choose LangGraph if:

- You need complex multi-agent workflows with sophisticated orchestration
- You have existing LangChain investment and team expertise
- You want built-in human approval gates with checkpointing
- Edge deployment isn't a priority
- You prefer a mature ecosystem with extensive documentation
- Your use case requires persistent state across long-running processes

## Why This Matters for Freelancers

Beyond the technical wins, this project has been valuable for my freelance practice:

- Learning investment: Understanding MCP took a week of deep work, but it's positioned me differently in the market. Instead of "Full Stack Developer," I can now say "MCP Specialist" - which attracts different (and higher-paying) projects.
- Cost awareness: When pitching clients, being able to say "$5/month to run" vs "$50-80/month" matters. Especially for startups and small businesses watching their burn rate.
- Content as learning: Writing forces you to clarify your thinking. I understood MCP much better after writing about it than after just building with it.
- Portfolio depth: Having production code (with real integrations, real users, real costs) beats tutorial projects every time.

If you're freelancing or consulting, building tools like this - and writing about them - compounds over time.

## What's Next

Immediate improvements:

- Template library with Vectorize semantic search
- Multi-account support (manage multiple brands)
- Analytics dashboard (engagement tracking)
- Webhook integrations (post on external triggers)

Edge deployment guide: I'm planning a follow-up article on deploying to Cloudflare Workers:

- D1 database setup
- SSE transport for remote access
- Environment variable management
- Production deployment checklist

Open source: Once I clean up the code and add comprehensive docs, I'll publish the full implementation on GitHub.

## Conclusion

LangGraph is powerful, battle-tested, and feature-rich. For complex multi-agent systems with sophisticated state management, it's an excellent choice.

But for lightweight automation, edge deployment, and cost-conscious projects, MCP is the better fit. The protocol-based approach offers portability, simplicity, and future-proofing that framework-specific solutions can't match.

I built a production social media automation system in less than a day for $5/month. It runs on the edge with <100ms latency globally. It works with any MCP client. And it's simple enough that I can explain the entire architecture in one article.

Sometimes the newer approach is the better approach.

## Try It Yourself

Want to build your own MCP server? Here's where to start:

- Read the MCP docs: modelcontextprotocol.io
- Clone the SDK: npm install @modelcontextprotocol/sdk
- Start with a simple tool (calculator, weather, whatever; see the sketch below)
- Add complexity gradually (storage, integrations, AI)
- Deploy to edge (Cloudflare Workers, Deno Deploy)
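"Start with a simple tool" can be genuinely tiny. Here's a complete single-tool server sketch to copy from (my minimal example, not from the article's repo):

```typescript
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from '@modelcontextprotocol/sdk/types.js';

// A complete MCP server with one trivial tool: echo text back upper-cased.
const server = new Server(
  { name: 'hello-mcp', version: '0.1.0' },
  { capabilities: { tools: {} } }
);

server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [{
    name: 'shout',
    description: 'Upper-case the given text',
    inputSchema: {
      type: 'object',
      properties: { text: { type: 'string' } },
      required: ['text'],
    },
  }],
}));

server.setRequestHandler(CallToolRequestSchema, async (request) => ({
  content: [{
    type: 'text',
    text: String(request.params.arguments?.text ?? '').toUpperCase(),
  }],
}));

await server.connect(new StdioServerTransport());
```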
Questions? Drop them in the comments. I'm happy to help other developers explore MCP.

Want to see the code? It's open source: GitHub repo. Star the repo if you find it useful! ⭐