Comment Webpage Screenshot

Comment Webpage Screenshot is a GitHub Action that captures screenshots of web pages and HTML files located in the repository, uploads them to an image upload service, and posts the screenshots as a comment on the pull request that triggered the action.

Note: This action only works on pull requests.

Workflow inputs

These are the inputs that can be provided to the workflow.

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| `upload_to` | No | Image upload service name (options: `github_branch`, `imgur`; see Available Image Upload Services below) | `github_branch` |
| `capture_changed_html_files` | No | Enable or disable screenshot capture for changed HTML files on the pull request (options: `yes`, `no`) | `yes` |
| `capture_html_file_paths` | No | Comma-separated paths to the HTML files to be captured (example: `/pages/index.html, about.html`) | `null` |
| `capture_urls` | No | Comma-separated URLs to be captured (example: `https://dev.example.com, https://dev.example.com/about.html`) | `null` |
| `github_token` | No | `GITHUB_TOKEN` provided by the workflow run or a Personal Access Token (PAT) | `github.token` |

Example Workflow

name: Comment Webpage Screenshot

on:
  pull_request:
    types: [opened, reopened, synchronize]

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2

      - name: Comment Webpage Screenshot
        uses: saadmk11/comment-webpage-screenshot@v0.5
        with:
          # Optional, the action will create a new branch and
          # upload the screenshots to that branch.
          upload_to: github_branch  # Or, imgur
          # Optional, the action will capture screenshots
          # of all the changed html files on the pull request.
          capture_changed_html_files: yes  # Or, no
          # Optional, the action will capture screenshots
          # of the html files provided in this input.
          # Comma-separated file paths are accepted.
          capture_html_file_paths: "/pages/index.html, about.html"
          # Optional, the action will capture screenshots
          # of the URLs provided in this input.
          # You can add URLs of your development server or
          # run the server in the previous step
          # and add that URL here (For Example: http://172.17.0.1:8000/).
          # Comma-separated URLs are accepted.
          capture_urls: "https://dev.example.com, https://dev.example.com/about.html"
          # Optional
          github_token: ${{ secrets.MY_GITHUB_TOKEN }}
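
Note that posting a comment and pushing a screenshots branch both require write access. If your repository restricts the default GITHUB_TOKEN to read-only, you may need to grant the relevant scopes explicitly. A minimal sketch using the standard GitHub Actions permissions keys (which scopes the action actually needs depends on the upload_to option you choose):

jobs:
  build:
    runs-on: ubuntu-latest
    # Grant the default GITHUB_TOKEN the scopes the action needs
    permissions:
      contents: write        # push the screenshots branch (github_branch upload)
      pull-requests: write   # post the screenshot comment on the pull request
    steps:
      - uses: actions/checkout@v2
      - uses: saadmk11/comment-webpage-screenshot@v0.5
        with:
          upload_to: github_branch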

Run Local Development Server Inside the Workflow and Capture Screenshots

If you want to run your application's development server inside the workflow and capture screenshots from it, you can structure the workflow YAML file similar to this:

Using Docker to Run The Application:

name: Comment App Screenshot

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    # Checkout your pull request code
    - uses: actions/checkout@v2
    
    # Build Development Docker Image
    - run: docker build -t local .
    # Run the Docker Image
    # You need to run this detached (-d)
    # so that the action is not blocked
    # and can move on to the next step.
    # You need to publish the port on the host (-p 8000:8000)
    # so that it is reachable outside the container.
    - run: docker run --name demo -d -p 8000:8000 local
    # Sleep for a few seconds to let the container start
    # (a more robust polling alternative is sketched after this example)
    - run: sleep 10
    
    # Run Screenshot Comment Action
    - name: Run Screenshot Comment Action
      uses: saadmk11/comment-webpage-screenshot@v0.5
      with:
        upload_to: github_branch
        capture_changed_html_files: no
        # You must use `172.17.0.1` if you are running
        # the application locally inside the workflow
        # Otherwise the container which will run this action 
        # will not be able to reach the application
        capture_urls: 'http://172.17.0.1:8000/, http://172.17.0.1:8000/admin/login/'
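
A fixed sleep can be flaky if the container takes longer to start. As a more robust alternative, you can poll the server until it responds. A minimal sketch using curl, assuming the application from the example above answers on port 8000 (the 30-second timeout is an arbitrary choice):

    # Poll the published port (for up to 30 seconds) instead of
    # sleeping for a fixed amount of time
    - run: |
        timeout 30 sh -c 'until curl --silent --fail http://172.17.0.1:8000/; do sleep 1; done'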

Directly Running The Application:

name: Comment App Screenshot

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - name: Use Node.js
        uses: actions/setup-node@v2
        with:
          node-version: '16.x'
      # Use `nohup` to run the node app in the background
      # so that the execution of the next steps is not blocked
      - run: nohup node main.js &
      # Sleep for a few seconds to let the server start
      - run: sleep 5

      # Run Screenshot Comment Action
      - name: Run Screenshot Comment Action
        uses: saadmk11/comment-webpage-screenshot@v0.5
        with:
          upload_to: imgur
          capture_changed_html_files: no
          # You must use `172.17.0.1` if you are running
          # the application locally inside the workflow
          # Otherwise, the container which will run this action 
          # will not be able to reach the application
          capture_urls: 'http://172.17.0.1:8081'
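
This example starts main.js directly; if your application has dependencies, you would typically install them before launching the server. A minimal sketch of the extra step, assuming the project ships a package-lock.json (place it before the nohup step):

      # Install dependencies from package-lock.json before starting the app
      - run: npm ci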

Important Note:

If you run the application server inside the GitHub Actions workflow:

  • You need to run it in the background or detached mode.

  • If you are using Docker to run your application server, you need to publish the port to the host (for example: -p 8000:8000).

  • You cannot use a localhost URL in capture_urls. You need to use 172.17.0.1 so that the comment-webpage-screenshot action can send requests to the server running locally. So, http://localhost:8081 becomes http://172.17.0.1:8081. A quick reachability check is sketched just below this list.
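
To confirm that the URL you pass to capture_urls is reachable from inside a container (which is how the action accesses it), you can curl it from a throwaway container before running the action. A minimal sketch, assuming the server from the Docker example above listens on port 8000:

    # Hypothetical debugging step: verify the server is reachable from
    # inside a container, which is how this action will reach it
    - run: docker run --rm curlimages/curl --silent --fail http://172.17.0.1:8000/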

Examples including application code can be found here: Example Projects

Run External Development Server and Capture Screenshots

If your application has an external development server that deploys changes on every pull request, you can add the URLs of your development server to the capture_urls input. This lets the action capture screenshots from the external development server after each deployment.

Example:

name: Comment App Screenshot

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    # Run Screenshot Comment Action
    - name: Run Screenshot Comment Action
      uses: saadmk11/comment-webpage-screenshot@v0.5
      with:
        upload_to: github_branch
        capture_changed_html_files: no
        # Add your external development server URLs
        capture_urls: 'https://dev.example.com, https://dev.example.com/about.html'

Capture Screenshots for Static HTML Pages

If your repository contains only static files and does not require a server, you can just provide the paths of the HTML files you want to capture screenshots of.

Example:

name: Comment Static Site Screenshot

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    # Run Screenshot Comment Action
    - name: Run Screenshot Comment Action
      uses: saadmk11/comment-webpage-screenshot@v0.5
      with:
        upload_to: imgur
        # Capture Screenshots of Changed HTML Files
        capture_changed_html_files: yes
        # Comma-separated paths to any other HTML files
        capture_html_file_paths: "/pages/index.html, about.html"

Available Image Upload Services

As GitHub does not allow uploading images to a comment through the API, we need to rely on other services to host the screenshots.

These are the currently available image upload services.

Imgur

If the value of the upload_to input is imgur, the screenshots will be uploaded to Imgur. Keep in mind that the uploaded screenshots will be public and anyone can see them. Imgur also limits how many images can be uploaded per hour; refer to Imgur's Rate Limits docs for more details. This is suitable for small open source repositories.

Please refer to Imgur's terms of service for more details.

GitHub Branch (Default)

If the value of the upload_to input is github_branch, the screenshots will be pushed to a GitHub branch created by the action on your repository. The screenshots in the comments will reference the images pushed to this branch.

This is suitable for open source and private repositories.
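
Since the screenshots accumulate on the branch the action creates, you may want to delete that branch once it is no longer needed. A minimal sketch as a manual cleanup step (the branch name website-screenshots is hypothetical; substitute the branch the action actually created in your repository):

      # Hypothetical cleanup step: delete the screenshots branch
      # created by the action once it is no longer needed
      - run: git push origin --delete website-screenshots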

If you want to add/use a different image upload service, feel free to create a new issue/pull request.

Examples

You can find some example use cases of this action here: Example Projects

Here are some comments created by this action on the Example Project:

[Screenshots of example pull request comments]

License

The code in this project is released under the GNU GENERAL PUBLIC LICENSE Version 3.
