Jarvis is a simple chatbot with a GUI, capable of chatting with its user and retrieving information and daily news from the internet.

Overview

J.A.R.V.I.S

Kindly consider starring this repository if you like the program :-)


[Screenshot of the J.A.R.V.I.S UI]

What/Who is J.A.R.V.I.S?

J.A.R.V.I.S is a chatbot built and coded in Python, whose aim is to chat with its user and retrieve information and daily news from the internet.

Backstory of J.A.R.V.I.S

J.A.R.V.I.S was inspired by Tony Stark's A.I. "J.A.R.V.I.S" from Marvel's Iron Man movies, paving the way for my dream of creating a bot that helps me with automation by keeping me informed, updated, and productive.

What Python packages are needed to run J.A.R.V.I.S (requirements)?

For J.A.R.V.I.S to work at full capacity, a few third-party Python packages need to be installed:

  1. pip install PySimpleGUI==4.33.0
  2. pip install requests==2.25.1
  3. pip install beautifulsoup4==4.9.3
  4. pip install wikipedia==1.4.0

How does J.A.R.V.I.S store info and is it safe?

Yes, your info is safe: it is stored locally on your personal computer. J.A.R.V.I.S stores mainly two types of info.

  1. Response-Intents: Stored in the Jarintents file, which J.A.R.V.I.S uses to match input against tags and provide the appropriate output (see the sketch after this list).

  2. Info-Intents: Stored in the Jarinfo file, which holds API keys, your name, and your location for data retrieval. You can edit these from the settings menu.

All CRITICAL INFO is STORED ON your PERSONAL COMPUTER and NOT on the INTERNET.
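
For illustration, here is a minimal sketch of how a Jarintents-style file could be matched against user input. The file name, JSON layout, and matching rule shown here are assumptions made for this example; the actual schema and logic used by J.A.R.V.I.S may differ.

```python
import json
import random

# Hypothetical layout of the Jarintents file (the real schema may differ):
# {"intents": [{"tag": "greeting",
#               "patterns": ["hello", "how are you"],
#               "responses": ["Hello! How can I help?"]}]}

def load_intents(path="Jarintents.json"):
    """Load the intent definitions from a local JSON file."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)["intents"]

def respond(user_input, intents):
    """Return a response for the first intent whose pattern appears in the input."""
    text = user_input.lower()
    for intent in intents:
        if any(pattern in text for pattern in intent["patterns"]):
            return random.choice(intent["responses"])
    return "Sorry, I don't understand that yet."
```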

Which APIs should I subscribe to?

After installation you can chat with J.A.R.V.I.S by default, but it cannot retrieve any information yet. To activate information retrieval, you will need API keys from the services below (see the sketch after this list):

  1. https://newsapi.org/ : For Live News, Morning Briefings and News Headlines.

  2. https://openweathermap.org/ : For current Weather information.
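
Once you have both keys, requests like the sketch below are roughly what J.A.R.V.I.S needs to make. It uses the public NewsAPI top-headlines and OpenWeatherMap current-weather endpoints via the requests package; the variable names and helper functions are placeholders for this example, not the actual J.A.R.V.I.S code.

```python
import requests

NEWS_API_KEY = "your-newsapi-key"        # from https://newsapi.org/
OWM_API_KEY = "your-openweathermap-key"  # from https://openweathermap.org/

def top_headlines(country="us"):
    # NewsAPI top-headlines endpoint for a given country code.
    r = requests.get(
        "https://newsapi.org/v2/top-headlines",
        params={"country": country, "apiKey": NEWS_API_KEY},
        timeout=10,
    )
    r.raise_for_status()
    return [article["title"] for article in r.json()["articles"]]

def current_weather(city="Singapore"):
    # OpenWeatherMap current-weather endpoint, metric units.
    r = requests.get(
        "https://api.openweathermap.org/data/2.5/weather",
        params={"q": city, "appid": OWM_API_KEY, "units": "metric"},
        timeout=10,
    )
    r.raise_for_status()
    data = r.json()
    return f'{data["weather"][0]["description"]}, {data["main"]["temp"]} °C'
```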

Some examples of J.A.R.V.I.S commands

To make JARVIS respond, users enter a command in the input bar; JARVIS scans it for keywords and provides an answer or the requested information (a simplified sketch of this keyword dispatch follows the list below).

Here is a sample list of available Commands:

  1. Hello
  2. How are you
  3. Are you fine
  4. Are you real
  5. What is the time
  6. News about [your input]-- Ex. News about GitHub.
  7. Get me news headlines-- NOTE: Type the country's abbreviation in the News UI input bar. Ex. US, SG, UK, AU.
  8. Send an email
  9. Wikipedia [Query]-- Ex. Wikipedia GitHub.
  10. Who is [Query] / What is [Query]-- NOTE: JARVIS gets the answer from Wikipedia.
  11. Get me stock price for [Query]-- NOTE: Use the stock's ticker symbol. Ex. TSLA, AAPL, MSFT.
  12. Goodbye jarvis-- NOTE: Command to quit JARVIS.
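
Below is a simplified sketch of that keyword dispatch, using the wikipedia package from the requirements. The function name and matching rules are illustrative only; the real command handling in J.A.R.V.I.S is more complete.

```python
import datetime
import wikipedia

def handle_command(command):
    """Tiny keyword dispatcher, for illustration only."""
    text = command.lower().strip()
    if "what is the time" in text:
        return datetime.datetime.now().strftime("The time is %H:%M")
    if text.startswith("wikipedia "):
        # "Wikipedia GitHub" -> summary of "GitHub"
        return wikipedia.summary(command.split(" ", 1)[1], sentences=2)
    if text.startswith(("who is ", "what is ")):
        # "Who is Ada Lovelace" -> summary of "Ada Lovelace"
        return wikipedia.summary(command.split(" ", 2)[2], sentences=2)
    if text == "goodbye jarvis":
        return "Goodbye!"
    return "Sorry, I didn't catch that."

# Example: handle_command("Wikipedia GitHub") returns a two-sentence summary.
```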

License

IMPORTANT NOTE: Any user who wishes to share or redistribute this program is kindly advised to:

  1. Keep at least ONE "(C) Epicalable" notice in the program.

  2. Include a link to this repository in the README file of the modified program.

This will be helpful for us, as users will know the program's original source and learn about our startup.

THANK YOU FOR YOUR COOPERATION :-)


J.A.R.V.I.S Copyright (C) 2021 Epicalable LLC. All Rights Reserved.

Owner

Epicalable: a small coding startup making cool projects and helping create real-world solutions.