Autoscraper-n-blogger

Overview

An automated Udemy coupon scraper that scrapes coupons, auto-posts the results as a Blogspot post, and sends a notification via a Telegram bot.

Requirements

  • A Blogger account and its blog ID
  • A Telegram bot API key and your Telegram chat ID, used to notify you and send the results

    Setup

    Before running the setup, place your Telegram bot API key, Telegram chat ID, and Blogger blog ID in the config.json file.
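    A config.json along these lines is expected; the key names below are illustrative, so check the repository's sample config for the exact names:

    ```json
    {
      "telegram_bot_api_key": "123456:ABC-your-bot-token",
      "telegram_chat_id": "123456789",
      "blogger_blog_id": "1234567890123456789"
    }
    ```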

    How do I get my Telegram bot API key? - Telegram-bot api-key

    How do I get my Telegram chat ID? - Telegram chat-id
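    The notification step relies on Telegram's standard Bot API (the sendMessage method). A minimal sketch using only the standard library is shown below; the function names are illustrative and not taken from auto.py:

    ```python
    import json
    import urllib.parse
    import urllib.request

    API_BASE = "https://api.telegram.org"

    def build_send_message_url(token: str) -> str:
        # The Bot API exposes sendMessage at /bot<token>/sendMessage
        return f"{API_BASE}/bot{token}/sendMessage"

    def send_telegram_message(token: str, chat_id: str, text: str) -> dict:
        # POST the message; Telegram replies with a JSON envelope
        data = urllib.parse.urlencode({"chat_id": chat_id, "text": text}).encode()
        with urllib.request.urlopen(build_send_message_url(token), data=data) as resp:
            return json.load(resp)
    ```

    The bot token and chat ID are the same values you place in config.json.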

    pip3 install -r requirements.txt

    Once all the requirements are installed, set up EasyBlogger with the command below:

    easyblogger --blogid get

    To find your blog ID, refer to https://subinsb.com/how-to-find-blogger-blog-id

    This will open a browser window that you use to authenticate with your Google account.

    Note: authenticate with the Google account associated with your Blogger account.

    You're all set to use EasyBlogger!

    python3 auto.py

    Running the script above scrapes all the Udemy courses and coupons, posts them to Blogger, and sends a copy of the scraped results via the Telegram bot.
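    The scraping step presumably works along these lines. The sketch below extracts Udemy coupon links from an HTML snippet with the standard-library HTML parser; the markup and the couponCode pattern are illustrative, as the real coupon source and selectors live in auto.py:

    ```python
    from html.parser import HTMLParser

    class CouponLinkParser(HTMLParser):
        """Collects the href of every <a> tag that looks like a Udemy coupon link."""
        def __init__(self):
            super().__init__()
            self.coupons = []

        def handle_starttag(self, tag, attrs):
            if tag != "a":
                return
            href = dict(attrs).get("href") or ""
            # Udemy coupon links carry the code in a couponCode query parameter
            if "udemy.com" in href and "couponCode=" in href:
                self.coupons.append(href)

    # Illustrative markup standing in for a scraped coupon page
    html = '''
    <div class="deal">
      <a href="https://www.udemy.com/course/python-bootcamp/?couponCode=FREE2023">Python Bootcamp</a>
      <a href="https://example.com/not-a-coupon">Other link</a>
    </div>
    '''
    parser = CouponLinkParser()
    parser.feed(html)
    print(parser.coupons)
    ```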

    This can be hosted on a cloud server to run automatically every day.
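    For example, a daily crontab entry could look like the following; the install path is hypothetical:

    ```
    # crontab -e — run the scraper every day at 09:00 server time
    0 9 * * * cd /path/to/Autoscraper-n-blogger && python3 auto.py
    ```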

    Demo

    Autoscraper.mp4
  • Owner
    GOKUL A.P
    Pythonist | Web Application Pentester | CTF player | Automation developer