Tools: How I Built a ZZZ Code Aggregator in a Weekend

Source: Dev.to

I play Zenless Zone Zero, and like most gacha games, HoYoverse drops redemption codes for free currency. The annoying part? I kept missing codes. So I built zenlesscodes.com - a simple aggregator that pulls from multiple sources and shows all active codes in one place.

## The Stack

- Python/Flask for the backend
- BeautifulSoup for scraping (respectfully) the Fandom wiki
- APScheduler for hourly background updates
- Gunicorn as the WSGI server
- Runs on a $3/month VPS

Costs:

- Domain: ~$10/year
- VPS: ~$3/month (already had this, prepaid for 3 years)
- Cloudflare: free tier

## How It Works

The site fetches codes from 3 sources every hour:

```python
def fetch_all_codes():
    primary_codes = fetch_primary_api()     # hoyo-codes API
    github_codes = fetch_github()           # community GitHub list
    fandom_codes, expired = fetch_fandom()  # wiki scraping
    return aggregate_codes(primary_codes, github_codes, fandom_codes, expired)
```

The tricky part was deduplication and expiration detection. Different sources report the same codes with slightly different formatting, and the Fandom wiki is actually the most reliable for knowing when codes expire.

## The Fandom Scraping

Fandom uses a MediaWiki backend, so instead of fighting Cloudflare, I just use their API:

```python
def fetch_fandom_via_api():
    params = {
        "action": "parse",
        "page": "Redemption_Code",
        "format": "json",
        "prop": "text",
    }
    response = requests.get(FANDOM_API_URL, params=params)
    return response.json()["parse"]["text"]["*"]
```

Then I parse the HTML table, checking for bg-green (active) vs bg-red (expired) CSS classes on the status cells.

## SEO Stuff

Threw in IndexNow pings so search engines know when content updates. It's literally one API call to Bing and they share it with the other search engines.

## What I'd Do Differently

Honestly, not much. Flask might be overkill - this could probably be a static site generated by a cron job. But the Flask setup gives me a /api/codes endpoint for free, and the scheduling is cleaner.

That's it. Launched a couple days ago. zenlesscodes.com if you play ZZZ and want to stop missing codes.