Nasdaq Cloud Data Service (NCDS) provides a modern and efficient method of delivery for real-time exchange data and other financial information. This repository provides an SDK for developing applications to access NCDS.

Overview

Nasdaq Cloud Data Service (NCDS)

Nasdaq Cloud Data Service (NCDS) provides a modern and efficient method of delivery for real-time exchange data and other financial information. Data is made available through a suite of APIs, allowing for effortless integration of data from disparate sources and a dramatic reduction in time to market for customer-designed applications. The API is highly scalable and robust enough to support the delivery of real-time exchange data.

Items To Note

  • Connecting to the API requires credentials, which are provided by the Nasdaq Data Operations team during an on-boarding process
  • This sample code connects to only one topic (NLSCTA); during the on-boarding process, you will receive the list of topics you are entitled to.
  • See https://github.com/Nasdaq/NasdaqCloudDataService-SDK-Java for our officially supported Java-based SDK.

Getting Started

Python version support

The SDK currently supports Python 3.9 and above

Get the SDK

The source code is currently hosted on GitHub at: https://github.com/Nasdaq/NasdaqCloudDataService-SDK-Python

  • Clone the repository: git clone https://github.com/Nasdaq/NasdaqCloudDataService-SDK-Python.git
  • Move into the directory: cd NasdaqCloudDataService-SDK-Python
  • Install the library and its dependencies from local source with pip install -e .
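
To verify the installation, you can check that the package imports cleanly (a quick sanity check; the NCDSClient import matches the examples later in this README):

python3.9 -c "from ncdssdk import NCDSClient; print('ncdssdk imported successfully')"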

Optional: to use the provided Jupyter notebook,

  • Install Jupyter Notebook using either pip (pip3 install notebook) or conda (conda install -c conda-forge notebook)
  • To run the notebook, use the command jupyter notebook; the Notebook Dashboard will open in your browser
  • Select the file python_sdk_examples.ipynb

Retrieving certificates

Run ncdssdk_client/src/main/python/ncdsclient/NCDSSession.py with the INSTALLCERTS option and the path where the certificate should be installed.

For example: python3.9 ncdssdk_client/src/main/python/ncdsclient/NCDSSession.py -opt INSTALLCERTS -path /my/trusted/store/ncdsinstallcerts

Stream configuration

Replace the example stream properties in the file kafka-config.json (https://github.com/Nasdaq/NasdaqCloudDataService-SDK-Python/blob/master/ncdssdk_client/src/main/python/resources/kafka-config.json) with the values provided during on-boarding.

Required Kafka configuration

"bootstrap.servers": {streams_endpoint_url}:9094
"ssl.ca.location": ca.crt

For optional consumer configurations see: https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md

Client Authentication configuration

Replace the example client authentication properties in the file client-authentication-config.json (https://github.com/Nasdaq/NasdaqCloudDataService-SDK-Python/blob/master/ncdssdk_client/src/main/python/resources/client-authentication-config.json) with the valid credentials provided during on-boarding.

oauth.token.endpoint.uri: https://{auth_endpoint_url}/auth/realms/demo/protocol/openid-connect/token
oauth.client.id: client
oauth.client.secret: client-secret
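
Once both files contain your on-boarding values, the snippet below is a minimal sketch of loading them with Python's standard json module and passing the resulting dictionaries to NCDSClient (the file paths are an assumption based on the repository layout referenced above; adjust them to wherever you keep your configuration):

import json
from ncdssdk import NCDSClient

# Paths follow the repository layout linked above; they are an assumption
# for this example and may differ in your setup.
with open("ncdssdk_client/src/main/python/resources/client-authentication-config.json") as f:
    security_cfg = json.load(f)
with open("ncdssdk_client/src/main/python/resources/kafka-config.json") as f:
    kafka_cfg = json.load(f)

# NCDSClient takes the security and Kafka configuration dictionaries,
# as shown in the SDK examples later in this README.
ncds_client = NCDSClient(security_cfg, kafka_cfg)
print(ncds_client.list_topics_for_client())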

Create NCDS Session Client

How to run:

-opt -- Provide the operation you want to perform
        * TOP - View the top nnn records in the Topic/Stream
        * SCHEMA - Display the Schema for the topic
        * METRICS - Display the Metrics for the topic
        * TOPICS - List of streams available on Nasdaq Cloud Data Service
        * GETMSG - Get one example message for the given message name
        * INSTALLCERTS - Install certificate to keystore
        * CONTSTREAM - Retrieve continuous stream
        * FILTERSTREAM - Retrieve continuous stream filtered by symbols and/or msgtypes
        * HELP - help
-topic -- Provide topic for selected option         --- REQUIRED for TOP, SCHEMA, METRICS, GETMSG, CONTSTREAM and FILTERSTREAM
-symbols -- Provide comma-separated list of symbols --- OPTIONAL for FILTERSTREAM
-msgnames -- Provide comma-separated list of msgnames --- OPTIONAL for FILTERSTREAM
-authprops -- Provide Client Properties File path   --- For using a different set of Client Authentication Properties
-kafkaprops -- Provide Kafka Properties File path   --- For using a different set of Kafka Properties
-n -- Provide number of messages to retrieve        --- REQUIRED for TOP
-msgName -- Provide name of message based on schema --- REQUIRED for GETMSG
-path -- Provide the path for key store             --- REQUIRED for INSTALLCERTS
-timestamp -- Provide timestamp in milliseconds     --- OPTIONAL for TOP, CONTSTREAM and FILTERSTREAM

A few examples:

Get first 100 records for given stream

python3.9 ncdssdk_client/src/main/python/ncdsclient/NCDSSession.py -opt TOP -n 100 -topic NLSCTA

Get all available streams

python3.9 ncdssdk_client/src/main/python/ncdsclient/NCDSSession.py -opt TOPICS

Using the SDK

Below are several examples of how to access data using the SDK. A Jupyter notebook with this same code and information is provided in the file python_sdk_examples.ipynb.

To run these examples, you will need the import and configuration dictionaries below. Replace the config information with your credentials.

from ncdssdk import NCDSClient

security_cfg = {
    "oauth.token.endpoint.uri": "https://{auth_endpoint_url}/auth/realms/demo/protocol/openid-connect/token",
    "oauth.client.id": "client",
    "oauth.client.secret": "client-secret"
}
kafka_cfg = {
    "bootstrap.servers": "{streams_endpoint_url}:9094",
    "ssl.ca.location": "ca.crt",
    "auto.offset.reset": "earliest"
}

Getting the list of available data streams

List all available data streams for the user

ncds_client = NCDSClient(security_cfg, kafka_cfg)
topics = ncds_client.list_topics_for_client()
print("Data set topics:")
for topic_entry in topics:
    print(topic_entry)

Example output:

List of streams available on Nasdaq Cloud Data Service:
GIDS
NLSUTP
NLSCTA

Getting the schema for the stream

This method returns the schema for the stream in Apache Avro format (https://avro.apache.org/docs/current/spec.html).

ncds_client = NCDSClient(security_cfg, kafka_cfg)
topic = "NLSCTA"
schema = ncds_client.get_schema_for_topic(topic)
print(schema)

Example output:

[ {
"type" : "record",
"name" : "SeqAdjClosingPrice",
"namespace" : "com.nasdaq.equities.trades.applications.nls.messaging.binary21",
"fields" : [ {
  "name" : "SoupPartition",
  "type" : "int"
}, {
  "name" : "SoupSequence",
  "type" : "long"
}, {
  "name" : "trackingID",
  "type" : "long"
}, {
  "name" : "msgType",
  "type" : "string"
}, {
  "name" : "symbol",
  "type" : "string"
}, {
  "name" : "securityClass",
  "type" : "string"
}, {
  "name" : "adjClosingPrice",
  "type" : "int"
} ],
"version" : "1"
}, {...
} .......
.... ]

Get first 10 messages of the stream

ncds_client = NCDSClient(security_cfg, kafka_cfg)
topic = "NLSCTA"
records = ncds_client.top_messages(topic)
for i in range(0, 10):
    print("key: ", records[i].key())
    print("value: ", str(records[i].value()))

Example output:

Top 10 Records for the Topic: NLSCTA
key: 14600739
value: {"SoupPartition": 0, "SoupSequence": 14600739, "trackingID": 72000000024569, "msgType": "S", "event": "E", "schema_name": "SeqSystemEventMessage"}
key: 14600740
value: {"SoupPartition": 0, "SoupSequence": 14600740, "trackingID": 72900000006514, "msgType": "J", "symbol": "A", "securityClass": "N", "consHigh": 1487799, "consLow": 1466600, "consClose": 1478100, "cosolidatedVolume": 1259303, "consOpen": 1486800, "schema_name": "SeqEndOfDayTradeSummary"}
key: 14600741
value: {"SoupPartition": 0, "SoupSequence": 14600741, "trackingID": 72900000006514, "msgType": "J", "symbol": "AA", "securityClass": "N", "consHigh": 378039, "consLow": 366800, "consClose": 368400, "cosolidatedVolume": 6047752, "consOpen": 372000, "schema_name": "SeqEndOfDayTradeSummary"}
key: 14600742
value: {"SoupPartition": 0, "SoupSequence": 14600742, "trackingID": 72900000006514, "msgType": "J", "symbol": "AAA", "securityClass": "P", "consHigh": 250400, "consLow": 250101, "consClose": 250250, "cosolidatedVolume": 3121, "consOpen": 250400, "schema_name": "SeqEndOfDayTradeSummary"}
key: 14600743
value: {"SoupPartition": 0, "SoupSequence": 14600743, "trackingID": 72900000006514, "msgType": "J", "symbol": "AAAU", "securityClass": "P", "consHigh": 176500, "consLow": 174700, "consClose": 176000, "cosolidatedVolume": 303143, "consOpen": 175000, "schema_name": "SeqEndOfDayTradeSummary"}
key: 14600744
value: {"SoupPartition": 0, "SoupSequence": 14600744, "trackingID": 72900000006514, "msgType": "J", "symbol": "AAC", "securityClass": "N", "consHigh": 97900, "consLow": 97500, "consClose": 97500, "cosolidatedVolume": 19787, "consOpen": 97600, "schema_name": "SeqEndOfDayTradeSummary"}
key: 14600745
value: {"SoupPartition": 0, "SoupSequence": 14600745, "trackingID": 72900000006514, "msgType": "J", "symbol": "AAC+", "securityClass": "N", "consHigh": 12800, "consLow": 12000, "consClose": 12500, "cosolidatedVolume": 85652, "consOpen": 12300, "schema_name": "SeqEndOfDayTradeSummary"}
key: 14600746
value: {"SoupPartition": 0, "SoupSequence": 14600746, "trackingID": 72900000006514, "msgType": "J", "symbol": "AAC=", "securityClass": "N", "consHigh": 100500, "consLow": 99500, "consClose": 100000, "cosolidatedVolume": 74060, "consOpen": 99500, "schema_name": "SeqEndOfDayTradeSummary"}
key: 14600747
value: {"SoupPartition": 0, "SoupSequence": 14600747, "trackingID": 72900000006514, "msgType": "J", "symbol": "AAIC", "securityClass": "N", "consHigh": 41850, "consLow": 40600, "consClose": 40600, "cosolidatedVolume": 241597, "consOpen": 41800, "schema_name": "SeqEndOfDayTradeSummary"}
key: 14600748
value: {"SoupPartition": 0, "SoupSequence": 14600748, "trackingID": 72900000006514, "msgType": "J", "symbol": "AAIC-B", "securityClass": "N", "consHigh": 249700, "consLow": 249700, "consClose": 249700, "cosolidatedVolume": 238, "consOpen": 249700, "schema_name": "SeqEndOfDayTradeSummary"}

Get first 10 messages of the stream from a given timestamp

This returns the first 10 available messages of the stream starting from the given timestamp, specified in milliseconds since the UNIX epoch.

ncds_client = NCDSClient(security_cfg, kafka_cfg)
topic="NLSCTA"
timestamp = 1590084446510
records = ncds_client.top_messages(topic, timestamp)
for i in range(0, 10):
    print("key: ", records[i].key())
    print("value: ", str(records[i].value()))

Example output:

Offset: 105834100
Top 10 Records for the Topic:NLSCTA
key:9362630
value :{"SoupPartition": 0, "SoupSequence": 9362630, "trackingID": 50845551492208, "msgType": "T", "marketCenter": "L", "symbol": "SIVR    ", "securityClass": "P", "controlNumber": "0000A2MLOB", "price": 164797, "size": 1, "saleCondition": "@  o", "cosolidatedVolume": 520174}
key:9362631
value :{"SoupPartition": 0, "SoupSequence": 9362631, "trackingID": 50845557908136, "msgType": "T", "marketCenter": "Q", "symbol": "TJX     ", "securityClass": "N", "controlNumber": "   8358213", "price": 540300, "size": 100, "saleCondition": "@   ", "cosolidatedVolume": 16278768}
key:9362632
value :{"SoupPartition": 0, "SoupSequence": 9362632, "trackingID": 50845565203932, "msgType": "T", "marketCenter": "L", "symbol": "CMI     ", "securityClass": "N", "controlNumber": "0000A2MLOC", "price": 1579900, "size": 100, "saleCondition": "@   ", "cosolidatedVolume": 568622}
key:9362633
value :{"SoupPartition": 0, "SoupSequence": 9362633, "trackingID": 50845565791061, "msgType": "T", "marketCenter": "L", "symbol": "UTI     ", "securityClass": "N", "controlNumber": "0000A2MLOD", "price": 70150, "size": 64, "saleCondition": "@  o", "cosolidatedVolume": 151359}
key:9362634
value :{"SoupPartition": 0, "SoupSequence": 9362634, "trackingID": 50845566628604, "msgType": "T", "marketCenter": "L", "symbol": "UFS     ", "securityClass": "N", "controlNumber": "0000A2MLOE", "price": 203660, "size": 24, "saleCondition": "@  o", "cosolidatedVolume": 664962}
key:9362635
value :{"SoupPartition": 0, "SoupSequence": 9362635, "trackingID": 50845569154140, "msgType": "T", "marketCenter": "L", "symbol": "KR      ", "securityClass": "N", "controlNumber": "0000A2MLOF", "price": 320350, "size": 100, "saleCondition": "@   ", "cosolidatedVolume": 4054473}
key:9362636
value :{"SoupPartition": 0, "SoupSequence": 9362636, "trackingID": 50845577944984, "msgType": "T", "marketCenter": "L", "symbol": "PAGP    ", "securityClass": "N", "controlNumber": "0000A2MLOG", "price": 98350, "size": 100, "saleCondition": "@   ", "cosolidatedVolume": 1557084}
key:9362637
value :{"SoupPartition": 0, "SoupSequence": 9362637, "trackingID": 50845588007117, "msgType": "T", "marketCenter": "L", "symbol": "LUV     ", "securityClass": "N", "controlNumber": "0000A2MLOH", "price": 297413, "size": 4, "saleCondition": "@  o", "cosolidatedVolume": 16791899}
key:9362638
value :{"SoupPartition": 0, "SoupSequence": 9362638, "trackingID": 50845596356365, "msgType": "T", "marketCenter": "L", "symbol": "M       ", "securityClass": "N", "controlNumber": "0000A2MLOI", "price": 54000, "size": 10, "saleCondition": "@  o", "cosolidatedVolume": 39273663}
key:9362639
value :{"SoupPartition": 0, "SoupSequence": 9362639, "trackingID": 50845600594567, "msgType": "T", "marketCenter": "L", "symbol": "TTM     ", "securityClass": "N", "controlNumber": "0000A2MLOJ", "price": 56000, "size": 400, "saleCondition": "@   ", "cosolidatedVolume": 1293244}

Get example message from stream

Print an example message to the console for the given message name.

ncds_client = NCDSClient(security_cfg, kafka_cfg)
topic = "NLSCTA"
print(ncds_client.get_sample_messages(topic, "SeqDirectoryMessage", all_messages=False))

Example output:

{'SoupPartition': 0, 'SoupSequence': 500, 'trackingID': 11578737109589, 'msgType': 'R', 'symbol': 'AMN', 'marketClass': 'N', 'fsi': '', 'roundLotSize': 100, 'roundLotOnly': 'N', 'issueClass': 'C', 'issueSubtype': 'Z', 'authenticity': 'P', 'shortThreshold': 'N', 'ipo': '', 'luldTier': '2', 'etf': 'N', 'etfFactor': 0, 'inverseETF': 'N', 'compositeId': 'BBG000BCT197', 'schema_name': 'SeqDirectoryMessage'}

Get continuous stream

ncds_client = NCDSClient(security_cfg, kafka_cfg)
topic = "NLSCTA"
consumer = ncds_client.ncds_kafka_consumer(topic)
while True:
    messages = consumer.consume(num_messages=1, timeout=5)
    if len(messages) == 0:
        print(f"No Records Found for the Topic: {topic}")

    for message in messages:
        print("value :", str(message.value()))

Example output (note that only the first ten messages of the stream are shown in this example):

value :{"SoupPartition": 0, "SoupSequence": 1, "trackingID": 7233292771056, "msgType": "S", "event": "O", "schema_name": "SeqSystemEventMessage"}
value :{"SoupPartition": 0, "SoupSequence": 2, "trackingID": 11578719526113, "msgType": "R", "symbol": "A", "marketClass": "N", "fsi": "", "roundLotSize": 100, "roundLotOnly": "N", "issueClass": "C", "issueSubtype": "Z", "authenticity": "P", "shortThreshold": "N", "ipo": "", "luldTier": "1", "etf": "N", "etfFactor": 0, "inverseETF": "N", "compositeId": "BBG000C2V3D6", "schema_name": "SeqDirectoryMessage"}
value :{"SoupPartition": 0, "SoupSequence": 3, "trackingID": 11578719526113, "msgType": "G", "symbol": "A", "securityClass": "N", "adjClosingPrice": 1500300, "schema_name": "SeqAdjClosingPrice"}
value :{"SoupPartition": 0, "SoupSequence": 4, "trackingID": 11578719831656, "msgType": "R", "symbol": "AA", "marketClass": "N", "fsi": "", "roundLotSize": 100, "roundLotOnly": "N", "issueClass": "C", "issueSubtype": "Z", "authenticity": "P", "shortThreshold": "N", "ipo": "", "luldTier": "1", "etf": "N", "etfFactor": 1, "inverseETF": "N", "compositeId": "BBG00B3T3HD3", "schema_name": "SeqDirectoryMessage"}
value :{"SoupPartition": 0, "SoupSequence": 5, "trackingID": 11578719831656, "msgType": "G", "symbol": "AA", "securityClass": "N", "adjClosingPrice": 374400, "schema_name": "SeqAdjClosingPrice"}
value :{"SoupPartition": 0, "SoupSequence": 6, "trackingID": 11578719879872, "msgType": "R", "symbol": "AAA", "marketClass": "P", "fsi": "", "roundLotSize": 100, "roundLotOnly": "N", "issueClass": "Q", "issueSubtype": "I", "authenticity": "P", "shortThreshold": "N", "ipo": "", "luldTier": "2", "etf": "Y", "etfFactor": 1, "inverseETF": "N", "compositeId": "BBG00X5FSP48", "schema_name": "SeqDirectoryMessage"}
value :{"SoupPartition": 0, "SoupSequence": 7, "trackingID": 11578719879872, "msgType": "G", "symbol": "AAA", "securityClass": "P", "adjClosingPrice": 250050, "schema_name": "SeqAdjClosingPrice"}
value :{"SoupPartition": 0, "SoupSequence": 8, "trackingID": 11578719916519, "msgType": "R", "symbol": "AAAU", "marketClass": "P", "fsi": "", "roundLotSize": 100, "roundLotOnly": "N", "issueClass": "Q", "issueSubtype": "I", "authenticity": "P", "shortThreshold": "N", "ipo": "", "luldTier": "1", "etf": "Y", "etfFactor": 1, "inverseETF": "N", "compositeId": "BBG00LPXX872", "schema_name": "SeqDirectoryMessage"}
value :{"SoupPartition": 0, "SoupSequence": 9, "trackingID": 11578719916519, "msgType": "G", "symbol": "AAAU", "securityClass": "P", "adjClosingPrice": 179850, "schema_name": "SeqAdjClosingPrice"}
value :{"SoupPartition": 0, "SoupSequence": 10, "trackingID": 11578719950254, "msgType": "R", "symbol": "AAC", "marketClass": "N", "fsi": "", "roundLotSize": 100, "roundLotOnly": "N", "issueClass": "O", "issueSubtype": "Z", "authenticity": "P", "shortThreshold": "N", "ipo": "", "luldTier": "2", "etf": "N", "etfFactor": 1, "inverseETF": "N", "compositeId": "BBG00YZC2Z91", "schema_name": "SeqDirectoryMessage"}

Example syntax to run the client based on this SDK

  1. To list streams available on Nasdaq Cloud Data Service

python3.9 NCDSSession.py -opt TOPICS

  2. To display the schema for the given topic

python3.9 NCDSSession.py -opt SCHEMA -topic NLSCTA

  3. To dump top n records from the given topic

python3.9 NCDSSession.py -opt TOP -n 10 -topic NLSCTA

  4. To use a specific client authentication file instead of the one in the resources of the client code base

python3.9 NCDSSession.py -opt TOP -n 10 -topic NLSCTA -authprops client-authentication-config.json

  5. To use a specific Kafka properties file instead of the one in the resources of the client code base

python3.9 NCDSSession.py -opt TOP -n 10 -topic NLSCTA -kafkaprops kafka-config.json

  6. To use both a specific client authentication file and a specific Kafka properties file

python3.9 NCDSSession.py -opt TOP -n 10 -topic NLSCTA -authprops client-authentication-config.json -kafkaprops kafka-config.json

  7. To display a specific message type

python3.9 NCDSSession.py -opt GETMSG -topic NLSCTA -msgname SeqDirectoryMessage

  8. To dump top n records from the given topic from a given timestamp in milliseconds since the UNIX epoch

python3.9 NCDSSession.py -opt TOP -n 10 -topic NLSCTA -timestamp 1590084445610

  9. To retrieve a continuous stream of messages from the given topic

python3.9 NCDSSession.py -opt CONTSTREAM -topic NLSCTA

  10. To retrieve a stream of messages from the given topic, filtered by symbols or message names (a combined filter example follows this list)

python3.9 NCDSSession.py -opt FILTERSTREAM -topic NLSCTA -symbols SPCE
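
The symbol and message-name filters can also be combined. The command below is an illustrative sketch that reuses a message name shown earlier in this README; substitute the symbols and message names relevant to your entitled topics:

python3.9 NCDSSession.py -opt FILTERSTREAM -topic NLSCTA -symbols SPCE -msgnames SeqDirectoryMessage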

Documentation

In addition to the example application, there is extra documentation at the package and class level, located in the project at https://github.com/Nasdaq/NasdaqCloudDataService-SDK-Python/tree/master/ncdssdk/docs

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Please make sure to update tests as appropriate.

License

Code and documentation are released under the Apache License, Version 2.0.

Comments
  • Getting pip installation errors

    I am trying to run pip install -e . and I am getting the error below:

    #10 15.37   × python setup.py bdist_wheel did not run successfully.
    #10 15.37   │ exit code: 1
    #10 15.37   ╰─> [45 lines of output]
    #10 15.37       running bdist_wheel
    #10 15.37       running build
    #10 15.37       running build_py
    #10 15.37       creating build
    #10 15.37       creating build/lib.linux-x86_64-3.9
    ...
    #10 15.37       error: command 'gcc' failed: No such file or directory
    #10 15.37       [end of output]
    ...
    #10 15.96   × Running setup.py install for confluent-kafka did not run successfully.
    #10 15.96   │ exit code: 1
    #10 15.96   ╰─> [45 lines of output]
    #10 15.96       running install
    #10 15.96       running build
    #10 15.96       running build_py
    #10 15.96       creating build
    #10 15.96       creating build/lib.linux-x86_64-3.9
    ...
    #10 15.96       error: command 'gcc' failed: No such file or directory
    #10 15.96       [end of output]
    #10 15.96   
    #10 15.96   note: This error originates from a subprocess, and is likely not a problem with pip.
    #10 15.97 error: legacy-install-failure
    #10 15.97 
    #10 15.97 × Encountered error while trying to install package.
    #10 15.97 ╰─> confluent-kafka
    ...
    

    The Python version that I am using is 3.9. NOTE: I am running the source code inside a docker container.

    Can someone please help me with it?

    Steps I have taken to fix the issue that didn't help: I tried pip install wheel setuptools, but the error still exists.

    opened by noorsheikh 1
  • Fix deserialization issue with a bytes field

    Remove the serialization of the Avro message into a JSON string. This is unneeded, as the deserialize function is allowed to return any object, and it causes issues when there is an Avro field of type bytes, since bytes is not a valid type for JSON objects.

    opened by ssortman 0
  • Update Jupyter notebook and README

    Adds more documentation to the Jupyter notebook as well as a code block to install dependencies. Updates the link to the Java GitHub repo in the README.

    opened by jenniferwang99 0
  • Integration test top-level and util file

    Adds in the top level pytest file containing our integration tests as well as a helper util file for generating and pushing mock messages to topics for testing

    opened by jenniferwang99 0
  • Add documentation for NCDS Python SDK

    Adds documentation for the Nasdaq Cloud Data Services Python SDK. Can be viewed by opening docs/build/index.html in your browser.

    Documentation generated with sphinx.

    opened by jenniferwang99 0
  • Adds in config loaders and other helper util files

    • Implements the authentication config and kafka config loaders
    • Adds in some helper util files: IsItPyTest.py for checking if a pytest is running, Oauth.py for returning the oauth callback, SeekToMidnight.py to help a consumer seek back to a certain timestamp
    opened by jenniferwang99 0
  • Add in NCDSSession file and file structure

    • creates file structure for the NCDSSession CLI
    • includes two helper util functions for printing help messages and validating command line input
    • adds temp authentication and kafka config files
    opened by jenniferwang99 0
  • Tracking Number Timestamp

    In the Nasdaq Basic docs, I am seeing that "TrackingNumber/trackingID" for a quote is composed of the Nasdaq internal tracking number and the Timestamp in nanoseconds from midnight. I need to access the unix timestamp of this quote, and wanted to first see if there was a better way to access this than from manipulating the trackingID?

    If not, I would like to confirm that the timestamp in nanoseconds from midnight assumes UTC?

    Thanks.

    opened by lsharples1 2
  • Fix invalid notebook

    I received the following error when trying to run the notebook:

    Unreadable Notebook: NasdaqCloudDataService-SDK-Python/python_sdk_examples.ipynb NotJSONError('Notebook does not appear to be JSON: \'{\\n "cells": [\\n {\\n "cell_type": "m...')
    

    After adding the missing comma, I was able to run the notebook with no issue

    opened by normand1 0
Releases: 0.4.0