nfhs-scraper

for those who don't want to pay $10/month for high school game footage with ads

Overview

Disclaimer: I am in no way responsible for what you choose to do with this script and guide. I do not endorse avoiding paywalls or any illegal activity relating to this matter; I am simply providing a Python script to those who are interested.

NFHS Network is "the leader in streaming Live and On Demand high school sports."

In short, you need to pay $10 a month for a subscription to watch these games. As an athlete, I didn't want to spend $10 a month to watch my own games, with ads in them, so I made this.

Usage

Download the provided main.py script so you can run it yourself. Remember, whatever you do is your choice and your responsibility.

Navigate to https://www.nfhsnetwork.com/, find your school and sport, and select the game video you'd like to download.

In the last portion of the URL, you will find the game ID.

e.g. https://www.nfhsnetwork.com/events/cool-high-school-cool-town/gam4576a0f402 -> game ID is gam4576a0f402
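
If you'd rather not eyeball it, the ID is just the last path segment of the URL, so a quick snippet can pull it out (a standalone sketch, not part of main.py):

url = "https://www.nfhsnetwork.com/events/cool-high-school-cool-town/gam4576a0f402"
game_id = url.rstrip("/").split("/")[-1]
print(game_id)  # gam4576a0f402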


In the main.py file, do two things:

  • Change the game_id variable to your game ID.
  • Change the scrub_count variable to however much of the game you'd like to download. The footage sometimes runs 1-2 hours past the end of the game, so you can usually trim that off by lowering the count.
    • Scrubs are 10 seconds long each, so divide your desired video length in seconds by 10 to get the count (see the sketch below this list).
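
For example, to grab roughly 90 minutes of a game, the edits might look like this (the exact layout of main.py may differ; the variable names are the ones from the steps above):

game_id = "gam4576a0f402"  # the ID from your game's URL
scrub_count = 540          # 540 scrubs x 10 seconds each = 5400 s = 90 minutes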

Run the Python file and let the magic of computers do its thing. It can take a while, but by default the video will be saved to output.mp4 in the same directory as the project.

How it works

With only a bit of reverse engineering, it isn't too hard to understand how NFHS streams video to the user, and why this script works.

NFHS requires a subscription to watch its videos, and with that subscription comes an API key used to fetch the stream. Specifically, a valid API key gets you the stream playlist, a .m3u8 file that looks a little something like this:

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.000000,
gamed408a95df_000000.ts
#EXTINF:10.000000,
gamed408a95df_000001.ts
#EXTINF:10.000000,
gamed408a95df_000002.ts

...and so on

Here, we notice a few things.

  • a) each media file is 10 seconds long, as specified by EXTINF and EXT-X-TARGETDURATION
  • b) the media filenames are sequential, meaning we don't need the .m3u8 playlist at all; we can construct the segment URLs ourselves

In the network tab, while watching a game, I could see my browser making requests to these files, which are hosted at https://cfscrubbed.nfhsnetwork.com/. I tried downloading one of the files myself and succeeded with no authentication needed. So, hypothetically, I could download every file and then patch them together into one big video.
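
Putting it together, a minimal sketch of the approach (assuming the segments sit at the root of that host and follow the naming pattern from the playlist above; the real main.py may differ):

import requests

game_id = "gamed408a95df"  # example ID from the playlist above
scrub_count = 720          # 720 scrubs x 10 s = 2 hours of footage
base_url = "https://cfscrubbed.nfhsnetwork.com"

with open("output.mp4", "wb") as out:
    for i in range(scrub_count):
        # Segment names are the game ID plus a zero-padded six-digit index
        url = f"{base_url}/{game_id}_{i:06d}.ts"
        resp = requests.get(url)
        if resp.status_code != 200:
            break  # ran past the end of the stream
        # MPEG-TS segments concatenate byte-for-byte into one playable stream
        out.write(resp.content)

Strictly speaking, the concatenated segments form an MPEG-TS stream rather than a true MP4 container; most players handle it anyway, and ffmpeg can remux it losslessly if yours won't.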

Hence, nfhs-scraper. :)


Feel free to star this repo