Provably Rare Gem Miner.

Overview

Provably Rare Gem Miner

just another random project by yoyoismee.eth

Useful links

  • https://gems.alphafinance.io/ - contract addresses, difficulty and nonce for each gem

Useful things you should know

  • read contract -> gems(gemID) to get useful info (see the read sketch below)
  • write contract -> claim(kind, salt) to claim your NFT
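
To make the read step concrete, here is a minimal web3.py sketch of querying gems(gemID). The RPC URL, the contract address, and the field layout of the returned struct are assumptions/placeholders - verify them against the verified contract on the block explorer.

```python
from web3 import Web3

RPC_URL = "https://rpc.ftm.tools"                          # placeholder RPC endpoint (Fantom)
GEM_ADDR = "0x0000000000000000000000000000000000000000"   # put the real gem contract address here
GEM_ID = 0                                                 # the gem kind you want to inspect

# ABI fragment for the public `gems` mapping; the output field layout below is
# an assumption -- check it against the contract before relying on the indices.
GEMS_ABI = [{
    "name": "gems",
    "type": "function",
    "stateMutability": "view",
    "inputs": [{"name": "", "type": "uint256"}],
    "outputs": [
        {"name": "name", "type": "string"},
        {"name": "color", "type": "string"},
        {"name": "entropy", "type": "bytes32"},
        {"name": "difficulty", "type": "uint256"},
        {"name": "gemsPerMine", "type": "uint256"},
        {"name": "multiplier", "type": "uint256"},
        {"name": "crafter", "type": "address"},
        {"name": "manager", "type": "address"},
        {"name": "pendingManager", "type": "address"},
    ],
}]

w3 = Web3(Web3.HTTPProvider(RPC_URL))
# web3.py v6 naming; v5 uses Web3.toChecksumAddress
gem_contract = w3.eth.contract(address=Web3.to_checksum_address(GEM_ADDR), abi=GEMS_ABI)
gem = gem_contract.functions.gems(GEM_ID).call()
print("entropy:", gem[2].hex(), "difficulty:", gem[3])     # indices follow the assumed layout above
```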

To run: just edit the Python file and run it.

pip install -r requirement.txt
python3 stick_the_miner.py

Or use the newer auto_mine.py for less manual input, but you'll need an Infura account.

P.S. too lazy to write docs, but it's 50 LoC - have fun.


Why "stick the miner"? Welp.. this is part of the stick the BUIDLer series.

TL;DR - I'm working on a series of open-source NFT-related projects just for fun.

Key parameters to change if you are using the original version, 'stick_the_miner.py' (cr. K Nattakit's FB post); the sketch after this list shows how they feed into the salt search.

  • chain_id - eth:1, fantom:250
  • entropy - ??
  • gemAddr - the gem contract address for your chosen game, which you can get from https://gems.alphafinance.io/ (loot/bloot/rarity)
  • userAddr - your wallet address
  • kind - the type of gem to mine; I recommend Emerald because it has the highest return/difficulty ratio - in short, you'll turn a profit faster
  • nonce - the number of times you've minted a gem (https://gems.alphafinance.io/ after connecting your wallet)
  • diff - the difficulty of the gem (https://gems.alphafinance.io/); note that this changes every time someone mints that gem, so you need to update it too
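
For reference, the salt search in stick_the_miner.py boils down to hashing these parameters and comparing the result against the difficulty. The sketch below assumes the standard Provably Rare Gem luck check (keccak256 of the packed chain id, entropy, gem address, user address, kind, nonce and salt must be small enough relative to diff); every concrete value here is a placeholder you would fill in from the list above.

```python
import secrets
from web3 import Web3

# Fill these in from the parameter list above -- all values here are placeholders.
chain_id  = 250                                             # fantom
entropy   = bytes(32)                                       # bytes32 entropy from gems(gemID)
gem_addr  = "0x0000000000000000000000000000000000000000"   # gem contract address
user_addr = "0x0000000000000000000000000000000000000000"   # your wallet address
kind, nonce, diff = 0, 0, 1_000_000

def is_winning(salt: int) -> bool:
    # keccak256(abi.encodePacked(chainId, entropy, gemAddr, userAddr, kind, nonce, salt))
    digest = Web3.solidity_keccak(                          # Web3.solidityKeccak on web3.py v5
        ["uint256", "bytes32", "address", "address", "uint256", "uint256", "uint256"],
        [chain_id, entropy, gem_addr, user_addr, kind, nonce, salt],
    )
    # the claim succeeds when the hash is small enough relative to the difficulty
    return int.from_bytes(digest, "big") * diff < 2**256

while True:
    salt = secrets.randbelow(2**256)
    if is_winning(salt):
        print("found salt:", salt)
        break
```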

(More detail) How to use 'auto_mine.py', the updated version of stick_the_miner

  • benefits: the manual version (stick_the_miner.py) requires you to update the 'diff' parameter every time someone mints the target gem's NFT, and 'nonce' whenever you successfully mint one. This version automates that, so you just have to rerun it to pick up the new values (a sketch of the automated flow follows this list).
  • steps:
    1. update requirements: pip install -r requirements.txt
    2. create an account at https://infura.io/, select your chain (e.g. Ethereum), create a project and obtain your project ID
    3. create a .env file in the same format as .env-example, filling in your information from step 2, your wallet address and gem ID
    4. run python3 auto_mine.py
  • Note: although you don't have to adjust the 'diff' parameter manually every time, you still need to restart the process whenever someone mints the target gem's NFT.
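
A rough sketch of what the automated flow looks like, assuming a .env in the style of .env-example. The variable names (INFURA_PROJECT_ID, USER_ADDR, GEM_ID) and the nonce lookup are illustrative assumptions - match them to the actual .env-example and contract.

```python
import os
from dotenv import load_dotenv     # pip install python-dotenv
from web3 import Web3

load_dotenv()                                        # reads the .env file in the current directory
project_id = os.environ["INFURA_PROJECT_ID"]         # illustrative variable names --
user_addr  = os.environ["USER_ADDR"]                 # match them to .env-example
gem_id     = int(os.environ["GEM_ID"])

w3 = Web3(Web3.HTTPProvider(f"https://mainnet.infura.io/v3/{project_id}"))
print("connected:", w3.is_connected())               # w3.isConnected() on web3.py v5

# With a contract object built from the gem ABI (see the read sketch above),
# the current difficulty and your nonce could be refreshed on every run, e.g.:
#   diff  = gem_contract.functions.gems(gem_id).call()[3]
#   nonce = gem_contract.functions.nonce(user_addr).call()
```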

Once you get the salt, call claim(kind, salt) on the gem contract (the write-contract step above) to claim your NFT; if you prefer to do it from a script, see the sketch below.
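
Hedged sketch of submitting claim(kind, salt) from a script instead of the explorer UI. The claim ABI fragment, RPC endpoint, and gas settings are assumptions, and GEM_ADDR/kind/salt are placeholders; keep your PRIVATE_KEY in .env, never in the source.

```python
import os
from dotenv import load_dotenv
from web3 import Web3

load_dotenv()
w3 = Web3(Web3.HTTPProvider("https://rpc.ftm.tools"))       # placeholder Fantom RPC
acct = w3.eth.account.from_key(os.environ["PRIVATE_KEY"])   # key loaded from .env

GEM_ADDR = "0x0000000000000000000000000000000000000000"     # placeholder gem contract address
kind, salt = 0, 123456789                                   # the kind you mined and the salt you found

# Minimal ABI fragment for claim(kind, salt); assumed from the usage described above.
CLAIM_ABI = [{"name": "claim", "type": "function", "stateMutability": "nonpayable",
              "inputs": [{"name": "kind", "type": "uint256"},
                         {"name": "salt", "type": "uint256"}],
              "outputs": []}]
gem = w3.eth.contract(address=Web3.to_checksum_address(GEM_ADDR), abi=CLAIM_ABI)

tx = gem.functions.claim(kind, salt).build_transaction({    # buildTransaction on web3.py v5
    "from": acct.address,
    "nonce": w3.eth.get_transaction_count(acct.address),    # the transaction nonce, not the gem nonce
    "gas": 200_000,                                         # rough guess; estimate_gas is safer
    "gasPrice": w3.eth.gas_price,
})
signed = acct.sign_transaction(tx)
tx_hash = w3.eth.send_raw_transaction(signed.rawTransaction)  # .raw_transaction on newer eth-account
print("claim tx:", tx_hash.hex())
```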

Multicore version

  • The normal version uses only one CPU core; the multicore version should be ~8 times faster, depending on your CPU and the coreNumber variable (a minimal sketch of the approach follows this list)
  • You can select the number of processes by changing the coreNumber variable (it should not exceed ~16, though)
  • "fantom_mining_pool_auto_multicore_line.py" is the multicore version of fantom_mining_pool.py
  • for mining by yourself with a manual claim, please use "fantom_multicore_line.py"
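
For illustration, here is a minimal multiprocessing sketch (not the repo's exact code) of splitting the salt space into disjoint per-worker ranges - the same idea discussed in the comments below. The luck-check parameters are placeholders, as in the single-core sketch above.

```python
import multiprocessing as mp
import secrets
from web3 import Web3

CORE_NUMBER = 8                                              # analogous to the coreNumber variable
CHAIN_ID, KIND, NONCE, DIFF = 250, 0, 0, 1_000_000           # placeholders -- fill in real values
ENTROPY   = bytes(32)                                        # bytes32 entropy from gems(gemID)
GEM_ADDR  = "0x0000000000000000000000000000000000000000"    # placeholder addresses
USER_ADDR = "0x0000000000000000000000000000000000000000"

def is_winning(salt: int) -> bool:
    # same luck check as in the single-core sketch above
    digest = Web3.solidity_keccak(
        ["uint256", "bytes32", "address", "address", "uint256", "uint256", "uint256"],
        [CHAIN_ID, ENTROPY, GEM_ADDR, USER_ADDR, KIND, NONCE, salt])
    return int.from_bytes(digest, "big") * DIFF < 2**256

def worker(worker_id: int, results) -> None:
    # each worker samples salts from its own disjoint slice of the 2**256 space
    span = 2**256 // CORE_NUMBER
    lo = worker_id * span
    while True:
        salt = lo + secrets.randbelow(span)
        if is_winning(salt):
            results.put(salt)
            return

if __name__ == "__main__":
    queue = mp.Queue()
    procs = [mp.Process(target=worker, args=(i, queue), daemon=True) for i in range(CORE_NUMBER)]
    for p in procs:
        p.start()
    print("found salt:", queue.get())                        # first worker to succeed wins
```
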
Comments
  • 🎨Added colorlog package for output with colors

    I use the classic stick_the_miner.py for mining and had a hard time looking for the salt output due to the monochrome color. So, I decided to differentiate the salt output with the colorlog package😁

    opened by mickyngub 2
  • Multicore version of the miner for both pool mining and self mining

    Depending on your CPU and the coreNumber variable, it should be ~8 times faster than the original version but with the drawback of a tremendous increase in CPU utilization.

    opened by mickyngub 1
  • Lowering the priority of python.exe to reduce lags

    If a user is mining gems in the background while running other compute-intensive programs, the user might experience lag due to 100% CPU utilization. By lowering the priority of the python.exe miner process, other programs get higher priority, so users are less likely to experience lag.

    Under normal circumstances, when CPU utilization is below 100%, this should have no impact on iter/sec.

    (Before/after screenshots of CPU priority and utilization omitted.)

    opened by mickyngub 1
  • update fantom_mining_pool

    • edit .env-example: add NOTIFY_AUTH_TOKEN, DIFF and PRIVATE_KEY
    • rename the private_key variable to PRIVATE_KEY
    • insert an if PRIVATE_KEY != '' check
    • read PRIVATE_KEY from .env for safety
    opened by NuttakitDW 0
  • why other people mint so quickly

    https://ftmscan.com/address/0x729d74098f6669541ed1b69403ae75f080ccf1e1

    This person mints level 4 gems very quickly; their salt is quite low, yet the claim executes successfully.

    Do you know the reason?

    opened by sumrise 3
  • refactor to support multiple chain properly

    Some of our code is unnecessarily Ethereum-specific, e.g. infura_key, the hard-coded chain number, and more. TODO: refactor to a more generic setup that works across all EVM-compatible chains, e.g. infura_key -> rpc_provider (and fix the other code to match this change), and more.

    Also TODO: remove the quick fix for the fantom file LOL

    opened by yoyoismee 0
  • Idea for sampling different range of int random on multiple workers

    Will probably do it tmr: pass the worker number n to the get_salt function so each worker draws random ints from a different range, e.g. worker 1: 1 to 2^122, worker 2: 2^122 to 2^123.

    opened by Duayt 1
Releases: v0.0.1d-test-build