Tools: Weekend Project: Build a ₹0 Amazon India Price Tracker in 50 Lines of Python (2026)

What we're building

Saturday afternoon. You've been eyeing that ₹45,000 laptop on Amazon India for three weeks. Does it dip during the Great Indian Sale? Does it spike on weekends? Nobody knows — because nobody is watching. Today we'll fix that with ~50 lines of Python. No paid APIs, no ₹499/month SaaS, no Chrome extensions that sell your data. Just a weekend project you can finish before dinner.

The script:

- Visits an Amazon India product URL.
- Scrapes the current price.
- Logs it to a CSV with a timestamp.
- Pings you on Telegram if the price drops below a target.

Total build time: ~45 minutes. Total cost: ₹0.

Step 1 — Install the basics

You need Python 3.9+ and two libraries:

```bash
pip install requests beautifulsoup4
```

That's it. No Selenium, no headless Chrome, no scraping service.

Step 2 — The scraper

Amazon blocks bare requests calls, so we send a real browser's User-Agent and Accept-Language header. This works 90% of the time for public product pages; if you hit a CAPTCHA, just wait an hour and try again.

```python
import requests
from bs4 import BeautifulSoup

HEADERS = {
    "User-Agent": (
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 13_5) "
        "AppleWebKit/537.36 (KHTML, like Gecko) "
        "Chrome/124.0.0.0 Safari/537.36"
    ),
    "Accept-Language": "en-IN,en;q=0.9",
}

def get_price(url: str) -> tuple[str, int]:
    r = requests.get(url, headers=HEADERS, timeout=20)
    r.raise_for_status()
    soup = BeautifulSoup(r.text, "html.parser")
    title_el = soup.select_one("#productTitle")
    price_el = soup.select_one(".a-price .a-offscreen")
    if not title_el or not price_el:
        raise RuntimeError("Title/price element not found — page layout changed.")
    title = title_el.get_text(strip=True)
    # "₹45,999.00" -> 45999
    raw = price_el.get_text(strip=True).replace("₹", "").replace(",", "")
    rupees = int(float(raw))
    return title, rupees
```

Two things worth noting:

- The CSS selectors (`#productTitle`, `.a-price .a-offscreen`) are stable on Amazon India as of 2026, but Amazon rotates layouts. If your script breaks, right-click the price → Inspect → copy a fresh selector.
- Always call `raise_for_status()`. A 503 usually means you hit Amazon's rate limit — back off, don't hammer it.

Step 3 — The logger

CSV is fine. You don't need a database for a personal price tracker. After a week of data you'll have a CSV you can open in Excel or pandas. Plot it. Watch the weekend dips.

```python
import csv
from datetime import datetime
from pathlib import Path

LOG = Path("prices.csv")

def log_price(title: str, rupees: int) -> None:
    new = not LOG.exists()
    with LOG.open("a", newline="", encoding="utf-8") as f:
        w = csv.writer(f)
        if new:
            w.writerow(["timestamp", "title", "rupees"])
        w.writerow([
            datetime.now().isoformat(timespec="seconds"),
            title,
            rupees,
        ])
```

Step 4 — The Telegram alert

Create a Telegram bot via @BotFather, note the token, and message your bot once so it knows your chat ID (fetch it from https://api.telegram.org/bot<TOKEN>/getUpdates). Environment variables, not hardcoded tokens. Always.

```python
import os

BOT_TOKEN = os.environ["TG_BOT_TOKEN"]
CHAT_ID = os.environ["TG_CHAT_ID"]

def ping(msg: str) -> None:
    requests.post(
        f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
        data={"chat_id": CHAT_ID, "text": msg},
        timeout=10,
    )
```

Step 5 — Tie it together

```python
PRODUCTS = [
    ("https://www.amazon.in/dp/B0CHX1W1XY", 45000),  # (url, target_rupees)
    ("https://www.amazon.in/dp/B0BDHWDR12", 12000),
]

def main():
    for url, target in PRODUCTS:
        try:
            title, price = get_price(url)
            log_price(title, price)
            if price <= target:
                ping(f"💸 {title[:60]} is ₹{price:,} (target ₹{target:,})\n{url}")
            print(f"OK ₹{price:,} — {title[:60]}")
        except Exception as e:
            print(f"FAIL {url[:60]} — {e}")

if __name__ == "__main__":
    main()
```

That's the full 50 lines. Save it as price_tracker.py.

Step 6 — Schedule it

On macOS or Linux, cron runs it every 6 hours:

```
0 */6 * * * cd /home/you/tracker && /usr/bin/python3 price_tracker.py >> tracker.log 2>&1
```

On Windows, Task Scheduler does the same. On a Raspberry Pi? Even better — your tracker runs 24/7 on ₹200/year of electricity.

What you'll notice after 2 weeks

Running this on 5–10 products for two weeks taught me three things I didn't know:

- Amazon India prices move daily, not seasonally. Same laptop: ₹45,999 on Tuesday, ₹43,499 on Saturday, ₹46,499 on Monday.
- "Deals of the Day" aren't usually the lowest price that month. The real dip often happens the week after a sale ends.
- Pin codes matter. A product shown at ₹1,299 in Mumbai can be ₹1,399 in a Tier-2 pin code — the script above uses whatever pin code Amazon defaults to. Add a ?pincode=110001 variant if you want consistency.

Ways to extend it this weekend

If you finish early, three upgrades worth ~30 min each:

- Flipkart support — different selectors, same pattern. You now track both in one CSV.
- 7-day rolling min/max — load the CSV with pandas, alert only when price hits a new 7-day low.
- Chart generator — a weekly email with a matplotlib PNG of every product you're tracking.

A small warning

Scraping at human-level frequency (every few hours, a handful of products) is fine. Scraping 10,000 URLs every 60 seconds will get your IP blocked and is against Amazon's ToS. Be a good citizen. If you need scale, pay for a proper scraping API — but for personal use, this script is plenty.

That's the whole weekend project. Clone, customize, commit. By Sunday evening you'll have a working tracker and a week of data starting to pile up.
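The "back off, don't hammer it" advice for 503s can be turned into a tiny retry wrapper. This is an illustrative sketch, not part of the original script: the function name `with_backoff`, the injected `fetch` callable, and the one-minute base delay are all my choices.

```python
import time

def with_backoff(fetch, tries: int = 3, base_delay: float = 60.0):
    """Call fetch() — e.g. lambda: requests.get(url, headers=HEADERS, timeout=20) —
    and retry with a growing pause whenever Amazon answers HTTP 503.

    Any other error status still raises immediately via raise_for_status().
    """
    for attempt in range(tries):
        r = fetch()
        if r.status_code != 503:
            r.raise_for_status()  # non-503 errors are not worth retrying
            return r
        time.sleep(base_delay * (attempt + 1))  # wait 1 min, then 2 min, ...
    raise RuntimeError(f"Still rate-limited after {tries} tries")
```

Injecting `fetch` as a callable keeps the wrapper independent of requests, so you can drop it in front of `get_price` without touching the scraper itself.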
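Instead of eyeballing the getUpdates JSON for your chat ID (Step 4), you can parse it. A small sketch, assuming the standard Telegram Bot API payload shape; the helper name `extract_chat_ids` is mine. Feed it the result of `requests.get(f"https://api.telegram.org/bot{token}/getUpdates", timeout=10).json()`.

```python
def extract_chat_ids(payload: dict) -> list[int]:
    """Return the unique chat IDs found in a getUpdates response.

    Each update that carries a "message" has message.chat.id — the
    value you export as TG_CHAT_ID.
    """
    seen: list[int] = []
    for update in payload.get("result", []):
        msg = update.get("message")
        if msg and msg["chat"]["id"] not in seen:
            seen.append(msg["chat"]["id"])
    return seen
```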
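The 7-day rolling low upgrade from the extensions list doesn't actually need pandas. Here's a stdlib-only sketch; the function name `is_new_low` and the strict "below every recent price" rule are my choices, not the article's.

```python
from datetime import datetime, timedelta

def is_new_low(history: list[tuple[str, int]], current: int, days: int = 7) -> bool:
    """True if `current` is strictly below every price logged in the
    last `days` days.

    `history` holds (ISO timestamp, rupees) rows — the same shape as
    the rows in prices.csv. An empty window never triggers an alert.
    """
    cutoff = datetime.now() - timedelta(days=days)
    recent = [price for ts, price in history
              if datetime.fromisoformat(ts) >= cutoff]
    return bool(recent) and current < min(recent)
```

Wire it into `main()` by loading the product's rows from the CSV and calling `ping` only when `is_new_low(rows, price)` is true, so you stop getting alerts for prices you've already seen this week.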
Ping me on Dev.to if you ship it — I read every reply. I'm Archit Mittal — I automate chaos for businesses. Follow me for daily automation content.
