Provably Rare Gem Miner.

Overview

Provably Rare Gem Miner

just another random project by yoyoismee.eth

useful link

useful things you should know

  • read contract -> gems(gemID) to get useful info
  • write contract -> claim(kind, salt) to claim your NFT once you've found a valid salt
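
For example, here is a minimal sketch of that read call with web3.py. The RPC URL, the gem address, and especially the ABI fragment and its field order are assumptions for illustration only; verify them against the verified contract source on the block explorer.

```python
# Sketch: read gem info via web3.py. The ABI fragment below is an assumption --
# check the verified contract source for the real field layout.
from web3 import Web3

RPC_URL = "https://mainnet.infura.io/v3/<your-project-id>"   # placeholder endpoint
GEM_ADDR = "0x0000000000000000000000000000000000000000"      # placeholder gem contract address
GEM_ID = 1                                                    # the gem kind you want to inspect

GEMS_ABI = [{
    "name": "gems", "type": "function", "stateMutability": "view",
    "inputs": [{"name": "", "type": "uint256"}],
    "outputs": [
        {"name": "name", "type": "string"},
        {"name": "color", "type": "string"},
        {"name": "entropy", "type": "uint256"},
        {"name": "difficulty", "type": "uint256"},
        {"name": "gemsPerMine", "type": "uint256"},
        {"name": "multiplier", "type": "uint256"},
        {"name": "crafter", "type": "address"},
        {"name": "manager", "type": "address"},
        {"name": "pendingManager", "type": "address"},
    ],
}]

w3 = Web3(Web3.HTTPProvider(RPC_URL))
gem = w3.eth.contract(address=GEM_ADDR, abi=GEMS_ABI)
print(gem.functions.gems(GEM_ID).call())   # the tuple includes entropy and difficulty
```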

To run the original miner, just edit the Python file and run it:

pip install -r requirements.txt
python3 stick_the_miner.py

Or use the newer auto_mine.py, which needs less manual input, but you'll need an Infura account.

PS: too lazy to write docs, but it's 50 LoC. Have fun.


Why "stick the miner"? Welp.. this is part of the "stick the BUIDLer" series.

TL;DR - I'm working on a series of open-source NFT-related projects, just for fun.

Key parameters to change if you are using the original version, stick_the_miner.py (cr. K Nattakit's FB post):

  • chain_id - eth:1, fantom:250
  • entropy - the gem's entropy value; it can be read from the contract (read contract -> gems(gemID))
  • gemAddr - gem contract address; you can get it from https://gems.alphafinance.io/ (loot/bloot/rarity)
  • userAddr - your wallet address
  • kind - the type of gem to mine; I recommend Emerald because it has the highest return/difficulty ratio, which simply means you'll turn a profit faster
  • nonce - number of times you've minted a gem (https://gems.alphafinance.io/ with your wallet connected)
  • diff - difficulty of the gem (https://gems.alphafinance.io/); note that this changes every time someone mints that gem, so you need to update it as well
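
Putting those parameters together, here is a rough sketch of the brute-force search stick_the_miner.py performs. The values are placeholders, and the exact hash packing (a keccak over the abi-packed parameters, compared against a target scaled down by diff) is an assumption, so check the actual script and contract before relying on it:

```python
# Sketch of the salt search. The packing order below is an assumption -- verify it
# against stick_the_miner.py / the gem contract.
import random
from web3 import Web3

chain_id = 1                     # eth: 1, fantom: 250
entropy  = 0xDEADBEEF            # placeholder -- read the real value via gems(gemID)
gemAddr  = "0x0000000000000000000000000000000000000000"   # placeholder gem contract address
userAddr = "0x0000000000000000000000000000000000000000"   # placeholder wallet address
kind     = 3                     # gem kind to mine (placeholder)
nonce    = 0                     # number of gems you've already minted
diff     = 1_000_000             # current difficulty (placeholder)

target = (2**256 - 1) // diff    # a salt "wins" when the hash, read as a uint256, is below this

while True:
    salt = random.randint(1, 2**256 - 1)
    digest = Web3.solidity_keccak(   # Web3.solidityKeccak on older web3.py versions
        ["uint256", "uint256", "address", "address", "uint256", "uint256", "uint256"],
        [chain_id, entropy, gemAddr, userAddr, kind, nonce, salt],
    )
    if int.from_bytes(digest, "big") < target:
        print("found salt:", salt)
        break
```

The higher diff is, the smaller the target and the longer the expected search.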

(More detail) How to use 'auto_mine.py', the updated version of stick_the_miner.py:

  • benefits: the manual version (stick_the_miner.py) requires you to update the 'diff' parameter every time someone mints the NFT of the target gem, and 'nonce' whenever you successfully mint one. This version automates that, so you only have to rerun it to pick up the new values (a minimal setup sketch follows this list).
  • steps:
    1. update requirements: pip install -r requirements.txt
    2. create an account at https://infura.io/, select your chain (e.g. Ethereum), create a project and obtain your project ID
    3. create a .env file in the same format as .env-example, filling in your information from step 2, your wallet address and gem ID
    4. run python3 auto_mine.py
  • Note: although you don't have to manually adjust the 'diff' parameter every time, you still need to restart the process whenever someone mints the target gem's NFT.
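
A minimal sketch of the setup this relies on, assuming python-dotenv for the .env file and an Infura HTTPS endpoint; the variable names are placeholders and only loosely mirror .env-example:

```python
# Sketch: load the .env file and connect through Infura (names are placeholders).
import os
from dotenv import load_dotenv
from web3 import Web3

load_dotenv()   # reads the .env file you created from .env-example

INFURA_PROJECT_ID = os.environ["INFURA_PROJECT_ID"]   # from your Infura project settings
USER_ADDR = os.environ["USER_ADDR"]                   # your wallet address
GEM_ID = int(os.environ["GEM_ID"])                    # the gem kind to mine

w3 = Web3(Web3.HTTPProvider(f"https://mainnet.infura.io/v3/{INFURA_PROJECT_ID}"))
print("connected:", w3.is_connected())   # w3.isConnected() on older web3.py versions

# With a live connection, the script can read the current difficulty and your nonce
# from the gem contract on every run instead of asking you to edit them by hand.
```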

Once you get the salt, submit it through the gem contract's claim(kind, salt) function (the "write contract" call mentioned above) to mint your gem.
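
If you prefer to send that claim from Python rather than through the explorer's Write Contract tab, a hedged sketch with web3.py could look like this; the ABI fragment, gas settings and names are assumptions, and the private key should live in your .env, never in the source:

```python
# Sketch: submit claim(kind, salt) with web3.py (assumed ABI fragment and gas settings).
import os
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://mainnet.infura.io/v3/<your-project-id>"))   # placeholder

CLAIM_ABI = [{   # minimal assumed fragment -- verify against the real contract
    "name": "claim", "type": "function", "stateMutability": "nonpayable",
    "inputs": [{"name": "kind", "type": "uint256"}, {"name": "salt", "type": "uint256"}],
    "outputs": [],
}]

gem = w3.eth.contract(address="0x0000000000000000000000000000000000000000", abi=CLAIM_ABI)
acct = w3.eth.account.from_key(os.environ["PRIVATE_KEY"])   # keep the key in .env

kind = 3            # the gem kind you mined (placeholder)
salt = 123456789    # the salt found by the miner (placeholder)

tx = gem.functions.claim(kind, salt).build_transaction({
    "from": acct.address,
    "nonce": w3.eth.get_transaction_count(acct.address),
    "gas": 200_000,                 # placeholder gas limit
    "gasPrice": w3.eth.gas_price,
    "chainId": w3.eth.chain_id,
})
signed = acct.sign_transaction(tx)
tx_hash = w3.eth.send_raw_transaction(signed.rawTransaction)   # .raw_transaction on newer versions
print("claim tx:", tx_hash.hex())
```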

Multicore version

  • The normal version uses only one CPU core; the multicore version should be roughly 8x faster, depending on your CPU and the coreNumber variable
  • You can select the number of processes by changing the coreNumber variable (it shouldn't exceed ~16 though)
  • "fantom_mining_pool_auto_multicore_line.py" is the multicore version of fantom_mining_pool.py
  • for mining by yourself with a manual claim, please use "fantom_multicore_line.py"
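
As a rough illustration of how the multicore variants can split the work, here is a sketch using Python's multiprocessing with coreNumber worker processes. The names are placeholders (not the ones in the repo's files) and is_winning_salt is a dummy stand-in for the real keccak/difficulty check:

```python
# Sketch: spread the salt search over several processes (placeholder names, dummy check).
import multiprocessing as mp
import random

coreNumber = 8   # number of worker processes; keep it around or below your physical core count

def is_winning_salt(salt):
    """Dummy stand-in for the keccak-vs-difficulty test, so this sketch runs on its own."""
    return salt % 1_000_000 == 0

def worker(found, results):
    while not found.is_set():            # stop as soon as any worker has won
        salt = random.randint(1, 2**256 - 1)
        if is_winning_salt(salt):
            results.put(salt)
            found.set()
            return

if __name__ == "__main__":
    found = mp.Event()
    results = mp.Queue()
    procs = [mp.Process(target=worker, args=(found, results)) for _ in range(coreNumber)]
    for p in procs:
        p.start()
    print("found salt:", results.get())  # blocks until one worker reports a salt
    for p in procs:
        p.join()
```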
Comments
  • 🎨Added colorlog package for output with colors

    I use the classic stick_the_miner.py for mining and had a hard time spotting the salt output because everything is monochrome, so I decided to highlight the salt output with the colorlog package 😁

    opened by mickyngub 2
  • Multicore version of the miner for both pool mining and self mining

    Depending on your CPU and the coreNumber variable, it should be ~8 times faster than the original version but with the drawback of a tremendous increase in CPU utilization.

    opened by mickyngub 1
  • Lowering the priority of python.exe to reduce lags

    If a user is mining gems in the background while using other compute-intensive programs, they might experience lag due to 100% CPU utilization. Lowering the priority of the python.exe miner process gives other programs higher priority, so users are less likely to experience lag (a minimal sketch appears after this list).

    Under normal circumstances, where CPU utilization is below 100%, this should have no impact on iter/sec.

    (Before / After screenshots)

    opened by mickyngub 1
  • update fantom_mining_pool

    • edit .env-example: add NOTIFY_AUTH_TOKEN, DIFF and PRIVATE_KEY
    • rename the private_key variable to PRIVATE_KEY
    • add a guard: if PRIVATE_KEY != ''
    • read PRIVATE_KEY from .env for safety
    opened by NuttakitDW 0
  • why other people mint so quickly

    https://ftmscan.com/address/0x729d74098f6669541ed1b69403ae75f080ccf1e1

    This person mints level 4 gems very quickly; their salt is quite low, yet the transaction still succeeds.

    Do you know the reason? (screenshot)

    opened by sumrise 3
  • refactor to support multiple chains properly

    Some of our code is unnecessarily Ethereum-specific, e.g. infura_key, the hard-coded chain number, and more. TODO: refactor it into a more generic form that works across all EVM-compatible chains, e.g. infura_key -> rpc_provider (and fix the other code to match this change), and more.

    Also TODO: remove the quick fix for the fantom file LOL

    opened by yoyoismee 0
  • Idea for sampling different ranges of random ints on multiple workers

    Will probably do this tomorrow: pass the worker index n to the get_salt function so each worker draws random ints from a different range, e.g. worker 1: 1 to 2^122, worker 2: 2^122 to 2^123 (a rough sketch appears after this list).

    opened by Duayt 1
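
For the "sampling different ranges" idea in the last item, here is a minimal sketch of splitting the salt space so each worker draws from its own slice; the helper names are hypothetical and not the repo's actual get_salt signature:

```python
# Sketch: give each worker a disjoint slice of the salt space (hypothetical helpers).
import random

SALT_BITS = 256   # salts are drawn from [1, 2**256)

def salt_range(worker_id, n_workers):
    """Split [1, 2**SALT_BITS) into n_workers roughly equal, non-overlapping slices."""
    width = (2**SALT_BITS - 1) // n_workers
    low = 1 + worker_id * width
    high = 2**SALT_BITS - 1 if worker_id == n_workers - 1 else low + width - 1
    return low, high

def random_salt(worker_id, n_workers):
    low, high = salt_range(worker_id, n_workers)
    return random.randint(low, high)

print(salt_range(0, 8))   # worker 0 samples from the first slice
print(salt_range(7, 8))   # worker 7 samples from the last slice
```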
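And for the "Lowering the priority of python.exe" item above, one hedged way to do it from inside the miner script itself is with psutil; the Windows priority-class constant and Unix niceness value below are just reasonable defaults:

```python
# Sketch: lower the miner's own scheduling priority so other programs stay responsive.
import sys
import psutil

proc = psutil.Process()   # the current Python process
if sys.platform == "win32":
    proc.nice(psutil.BELOW_NORMAL_PRIORITY_CLASS)   # Windows priority class
else:
    proc.nice(10)   # Unix niceness: higher value = lower priority
```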
Releases(v0.0.1d-test-build)