EOD Historical Data Python Library (Unofficial)

Overview

An unofficial Python library for accessing the EOD Historical Data APIs:

https://eodhistoricaldata.com

Installation

python3 -m pip install eodhistoricaldata
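
To confirm the package resolved correctly, a quick import check is enough:

python3 -c "import eodhistoricaldata"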

Note

The demo API key below is provided by EOD Historical Data for testing purposes: https://eodhistoricaldata.com/financial-apis/new-real-time-data-api-websockets

Usage

None: """Main""" websocket = WebSocketClient( # Demo API key for testing purposes api_key="OeAFFmMliFG5orCUuwAKQ8l4WWFQ67YX", endpoint="crypto", symbols=["BTC-USD"] #api_key="OeAFFmMliFG5orCUuwAKQ8l4WWFQ67YX", endpoint="forex", symbols=["EURUSD"] #api_key="OeAFFmMliFG5orCUuwAKQ8l4WWFQ67YX", endpoint="us", symbols=["AAPL"] ) websocket.start() message_count = 0 while True: if websocket: if ( message_count != websocket.message_count ): print(websocket.message) message_count = websocket.message_count sleep(0.25) # output every 1/4 second, websocket is realtime if __name__ == "__main__": main() ">
"""Sample script"""

from time import sleep
from eodhistoricaldata import WebSocketClient

def main() -> None:
    """Main"""

    websocket = WebSocketClient(
        # Demo API key for testing purposes
        api_key="OeAFFmMliFG5orCUuwAKQ8l4WWFQ67YX", endpoint="crypto", symbols=["BTC-USD"]
        #api_key="OeAFFmMliFG5orCUuwAKQ8l4WWFQ67YX", endpoint="forex", symbols=["EURUSD"]
        #api_key="OeAFFmMliFG5orCUuwAKQ8l4WWFQ67YX", endpoint="us", symbols=["AAPL"]
    )
    websocket.start()

    message_count = 0
    while True:
        if websocket and message_count != websocket.message_count:
            # A new message has arrived since the last check
            print(websocket.message)
            message_count = websocket.message_count
        sleep(0.25)  # poll every 1/4 second; the websocket feed itself is realtime

if __name__ == "__main__":
    main()
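
The script above streams until interrupted. If you only need to sample the feed briefly, the same pattern can be bounded; the sketch below uses only the attributes shown above (start(), message, message_count), and the 20-message limit is an arbitrary choice for illustration.

"""Collect a fixed number of ticks from the stream, then exit."""

from time import sleep
from eodhistoricaldata import WebSocketClient

def sample_ticks(limit: int = 20) -> list:
    """Return up to `limit` raw messages from the crypto stream."""
    websocket = WebSocketClient(
        # Demo API key for testing purposes
        api_key="OeAFFmMliFG5orCUuwAKQ8l4WWFQ67YX", endpoint="crypto", symbols=["BTC-USD"]
    )
    websocket.start()

    messages = []
    seen = 0
    while len(messages) < limit:
        if websocket and seen != websocket.message_count:
            messages.append(websocket.message)
            seen = websocket.message_count
        sleep(0.25)  # poll every 1/4 second
    # The sample above shows no shutdown call, so none is assumed here;
    # the client keeps running in the background when this returns.
    return messages

if __name__ == "__main__":
    for tick in sample_ticks():
        print(tick)
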
Comments
  • Syntax issue with query Parameter in get_calendar_ functions

    Syntax issue with query Parameter in get_calendar_ functions

    Hello,

    When using the get_calendar_XXX functions, we cannot pass the query parameters defined by EOD, because the word "from" is a reserved keyword in Python. For instance:

    earning = client.get_calendar_earnings(from='2022-11-01', to='2022-11-30')

    raises a SyntaxError.

    Should I pass the argument differently?

    opened by ATCBGroup 1
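
    One possible workaround, assuming get_calendar_earnings forwards its query parameters as keyword arguments (for example via **kwargs) rather than declaring "from" as a named parameter, is dictionary unpacking; this is a sketch, not the library's documented interface:

    # Assumes the method accepts its query parameters as keyword arguments.
    params = {"from": "2022-11-01", "to": "2022-11-30"}
    earnings = client.get_calendar_earnings(**params)
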
  • dependency on matplotlib but it is not installed with pip

    dependency on matplotlib but it is not installed with pip


    ~/git/traderai/eod$ cat test.py
    from eodhd import APIClient
    api = APIClient("DEMO")
    
    ~/git/traderai/eod$ python3 test.py
    Traceback (most recent call last):
      File "/home/mshamber/.local/lib/python3.8/site-packages/eodhd/eodhdgraphs.py", line 5, in <module>
        import matplotlib.pyplot as plt
    ModuleNotFoundError: No module named 'matplotlib'
    
    ~/git/traderai/eod$ python3 -m pip install eodhd
    Requirement already satisfied: eodhd in /home/mshamber/.local/lib/python3.8/site-packages (1.0.8)
    Requirement already satisfied: websocket-client==1.3.3 in /home/mshamber/.local/lib/python3.8/site-packages (from eodhd) (1.3.3)
    Requirement already satisfied: rich==12.5.1 in /home/mshamber/.local/lib/python3.8/site-packages (from eodhd) (12.5.1)
    Requirement already satisfied: websockets==10.3 in /home/mshamber/.local/lib/python3.8/site-packages (from eodhd) (10.3)
    Requirement already satisfied: numpy==1.21.6 in /home/mshamber/.local/lib/python3.8/site-packages (from eodhd) (1.21.6)
    Requirement already satisfied: pandas==1.3.5 in /home/mshamber/.local/lib/python3.8/site-packages (from eodhd) (1.3.5)
    Requirement already satisfied: requests==2.28.1 in /home/mshamber/.local/lib/python3.8/site-packages (from eodhd) (2.28.1)
    Requirement already satisfied: commonmark<0.10.0,>=0.9.0 in /home/mshamber/.local/lib/python3.8/site-packages (from rich==12.5.1->eodhd) (0.9.1)
    Requirement already satisfied: typing-extensions<5.0,>=4.0.0; python_version < "3.9" in /home/mshamber/.local/lib/python3.8/site-packages (from rich==12.5.1->eodhd) (4.3.0)
    Requirement already satisfied: pygments<3.0.0,>=2.6.0 in /home/mshamber/.local/lib/python3.8/site-packages (from rich==12.5.1->eodhd) (2.13.0)
    Requirement already satisfied: python-dateutil>=2.7.3 in /home/mshamber/.local/lib/python3.8/site-packages (from pandas==1.3.5->eodhd) (2.8.2)
    Requirement already satisfied: pytz>=2017.3 in /home/mshamber/.local/lib/python3.8/site-packages (from pandas==1.3.5->eodhd) (2022.5)
    Requirement already satisfied: charset-normalizer<3,>=2 in /home/mshamber/.local/lib/python3.8/site-packages (from requests==2.28.1->eodhd) (2.1.1)
    Requirement already satisfied: idna<4,>=2.5 in /usr/lib/python3/dist-packages (from requests==2.28.1->eodhd) (2.8)
    Requirement already satisfied: certifi>=2017.4.17 in /usr/lib/python3/dist-packages (from requests==2.28.1->eodhd) (2019.11.28)
    Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/lib/python3/dist-packages (from requests==2.28.1->eodhd) (1.25.8)
    Requirement already satisfied: six>=1.5 in /home/mshamber/.local/lib/python3.8/site-packages (from python-dateutil>=2.7.3->pandas==1.3.5->eodhd) (1.16.0)
    
    opened by opme 1
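
    Until matplotlib is declared as a dependency of eodhd (assuming the missing declaration is the root cause, as the traceback suggests), installing it manually works around the ModuleNotFoundError:

    python3 -m pip install matplotlib
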
Releases(1.0.8)
Owner
Michael Whittle
Solution Architect