学习强国 Automation: 100% correct, instant answers, worth 45 points

Overview

Project Introduction

An automation script for 学习强国 (Xuexi Qiangguo) that frees up your time!

Built with Selenium, requests, mitmproxy, and 百度智能云 (Baidu AI Cloud) OCR.

Instructions

Note: check your installed Chrome version.


The ChromeDriver is downloaded automatically.
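One common way to fetch a matching driver automatically is the third-party webdriver-manager package; the sketch below only illustrates the idea and may not be how this project actually does it.

```python
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager

# Download (and cache) a ChromeDriver that matches the installed Chrome,
# then start a browser session with it.
service = Service(ChromeDriverManager().install())
driver = webdriver.Chrome(service=service)
driver.get("https://www.xuexi.cn/")
```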

On first use, a database file db.db is generated, which is used to make the article and video tasks more efficient.
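As a rough idea of what such a cache can look like, here is a minimal sketch using sqlite3 with a hypothetical articles table that records which article URLs were already completed; the real schema of db.db may differ.

```python
import sqlite3

conn = sqlite3.connect("db.db")
# Hypothetical table: the actual db.db layout may differ.
conn.execute(
    "CREATE TABLE IF NOT EXISTS articles (url TEXT PRIMARY KEY, read_at TEXT)"
)
conn.commit()

def already_read(url: str) -> bool:
    """Skip articles that were completed in a previous run."""
    row = conn.execute("SELECT 1 FROM articles WHERE url = ?", (url,)).fetchone()
    return row is not None

def mark_read(url: str) -> None:
    """Record a finished article so it is not re-read next time."""
    conn.execute(
        "INSERT OR IGNORE INTO articles (url, read_at) VALUES (?, datetime('now'))",
        (url,),
    )
    conn.commit()
```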

Installing Dependencies

pip install -r requirements.txt

If you don't have a VPN, you can use the domestic Aliyun mirror:

pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple

How to Use

Make sure you are somewhere with a good network connection, otherwise errors may occur.

1. Run from the console: python main.py

2. Choose the options (unless you really need to watch it, prefer not to display the automation process, so you don't accidentally interfere with it).

Wait a moment; connecting to the 学习强国 servers takes time, and the wait depends heavily on your network speed.

3. Scan the QR code to log in.

4. Select the tasks (currently only articles, videos, the daily quiz, the weekly quiz, and the special-topic quiz are supported; more features are in development, but this is already roughly enough: 45 points). Multiple tasks can be selected; separate the options with spaces (see the sketch after this list). Selecting articles or videos takes a little longer to start.

5. After the tasks finish, end the program manually.
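The sketch below shows how a space-separated multi-select prompt like the one in step 4 could be parsed; the option numbers and task names are illustrative, not the script's actual menu.

```python
# Illustrative menu; the real main.py menu text and numbering may differ.
TASKS = {
    "1": "articles",
    "2": "videos",
    "3": "daily quiz",
    "4": "weekly quiz",
    "5": "special-topic quiz",
}

def choose_tasks() -> list:
    for key, name in TASKS.items():
        print(f"{key}. {name}")
    raw = input("Select tasks (separate multiple choices with spaces): ")
    # Split on whitespace, keep only valid choices, drop duplicates while preserving order.
    chosen = []
    for token in raw.split():
        if token in TASKS and TASKS[token] not in chosen:
            chosen.append(TASKS[token])
    return chosen

if __name__ == "__main__":
    print("Selected tasks:", choose_tasks())
```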

Usage Examples

(screenshots of an example run)

百度智能云 (Baidu AI Cloud) Setup

1. Log in to the console

Click → 百度智能云


2. Create an application


3. Select the required options


4. Get the API Key and Secret Key

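With the API Key and Secret Key in hand, calling the OCR service from Python looks roughly like the sketch below: it exchanges the keys for an access token and then calls the general_basic OCR endpoint with requests. Treat it as an outline of the API flow, not this project's exact code.

```python
import base64
import requests

API_KEY = "your-api-key"        # from the 百度智能云 console
SECRET_KEY = "your-secret-key"

def get_access_token() -> str:
    """Exchange the API Key / Secret Key for an OAuth access token."""
    resp = requests.post(
        "https://aip.baidubce.com/oauth/2.0/token",
        params={
            "grant_type": "client_credentials",
            "client_id": API_KEY,
            "client_secret": SECRET_KEY,
        },
    )
    return resp.json()["access_token"]

def ocr_image(path: str) -> list:
    """Run general text recognition on a local image and return the recognized lines."""
    with open(path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    resp = requests.post(
        "https://aip.baidubce.com/rest/2.0/ocr/v1/general_basic",
        params={"access_token": get_access_token()},
        data={"image": image_b64},
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )
    return [item["words"] for item in resp.json().get("words_result", [])]
```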

Changelog

  • v0.1: articles and videos; score: 25
  • v0.2: optimized articles and videos, added the daily quiz (100% correct); score: 30
  • v0.3: added the weekly quiz and the special-topic quiz (also 100% correct); score: 45
  • v0.31: optimized record storage, the directory structure, and the config file structure; added a progress bar, automatic driver download, and cross-platform support (Linux, Windows, macOS)
  • v1.0: rewrote the whole project; added persistent login, driver detection matched to the installed Chrome version, automatic driver download, faster login, adaptive handling of articles and videos, faster and more accurate answering, stronger anti-detection, and explanatory comments in every file (to make it easier to modify)
  • v1.1: because answer matching for special-topic quiz videos was unreliable, 百度智能云 OCR has been added to extract the answers shown in the video. The extracted answers still have to be entered manually, since there is no good way to filter the OCR output yet. At least you no longer have to watch the video, right?
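As a rough illustration of the v1.1 idea, the sketch below screenshots the playing video element with Selenium and feeds the frame to an OCR helper like the one sketched in the 百度智能云 section; the page URL, selector, and helper name are assumptions, not the project's actual code.

```python
import time

from selenium import webdriver
from selenium.webdriver.common.by import By

from ocr_helper import ocr_image  # hypothetical module wrapping the OCR sketch above

driver = webdriver.Chrome()
driver.get("https://www.xuexi.cn/")   # assumed entry point; the quiz page is reached from here
time.sleep(5)                          # give the video a moment to start playing

# Capture the current frame of the first <video> element on the page.
video = driver.find_element(By.TAG_NAME, "video")
video.screenshot("frame.png")

# OCR the frame; the answer still has to be picked out and typed in manually.
for line in ocr_image("frame.png"):
    print(line)
```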

A Final Note

I thought for a long time about persistent login: should it support batch persistence for multiple accounts? In the end I decided against it. The goal of this project is to save time for individuals who do not have time for their 学习强国 tasks; a batch mode would likely turn it into a profit-making tool for some people. So only a single-user persistent mode is provided. If you want to build a batch version, read the warnings on the points page of the 学习强国 app carefully first.

If this helps you, please give the repo a star; it keeps me motivated to bring you more features!
