Dictionary - Application focused on word search through web scraping

Overview

Dictionary

GeimerDroiid (Discord) | GeimerDroiid (Spotify) | GeimerDroiid (GitHub) | Email: jmanuelhv9@gmail.com

About

An application focused on looking up the meaning of words through web scraping, with additional functions such as Dictation, Spelling and Syllables.
I created this application as a way to test the knowledge I had started to acquire, so I decided to build this dictionary with some basic functions like spelling. From there more ideas came up, such as implementing a method that would tell me the meaning of words I didn't understand, or a way to enter a word just by speaking to the computer instead of typing it. When I created this application I was just starting to learn Python (the language I used for it), so the code may contain many bad practices, which I am correcting for future versions. During the creation of this application I learned how to build user interfaces, dabbled a bit in web scraping, investigated a way to convert text to speech and play it, and in the end used object-oriented programming to simplify building the interface.

Dictionary GUI (screenshots)

What's new in v1.5

  • Interface improvements

    A better interface, with buttons and colors that contrast better with each other, improved typography, and more minimalist animations for a better user experience.

  • Bugs fixed

    Correction of errors, mainly grammatical ones; the most notable is the removal of "Gua" and "Guo", since those letter combinations do not belong to Spanish grammar. The application's startup time has also been improved.

  • Code improvement

    I have focused on an almost total rebuild of the application, so all the code is new. To keep it readable, I have split each function into a separate file and looked for the most efficient and simple way to implement each one (all the code is in English). A hypothetical layout is sketched after this list.

  • The dictation function has been disabled

    I have decided to disable the dictation feature in the final version, as it caused me a lot of problems when packaging the application. It will stay disabled until I find a way to build this feature with as few bugs as possible and proper functioning.
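
As a rough illustration of the one-module-per-function idea described above, the project could be laid out like this (the file names are an assumption for illustration, not the repository's actual structure):

    Dictionary/
        main.py           # builds the GUI and wires each button to a feature
        dictation.py      # speech-to-text (disabled in this release)
        spelling.py       # spells a sentence letter by letter
        syllables.py      # syllable menu and sounds
        meaning.py        # DEM lookup through web scraping
        requirements.txt  # third-party dependencies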


Functions

  • Dictation

    The dictation function listens to your voice and converts it into text that is entered into the application's search bar, so you can then apply any other function to that text. For this function I used the SpeechRecognition library, which lets us use the computer's microphone to convert audio to text. All the code is in the file spelling.py (see the dictation sketch after this list).

  • Spelling

    The spelling function splits a sentence into words and spells each word letter by letter; when it reaches the end of a word, it says the whole word aloud (see the spelling sketch after this list).

  • Syllables

    The Syllables function has a menu containing the letter combinations and syllables together with their respective sounds.

  • Meaning

    By means of web scraping, this function looks up a word in the DEM dictionary and reads us its meaning with its respective examples; if the word is not found, it suggests search alternatives. For this function I used the BeautifulSoup4 library for web scraping and pyttsx3 to convert text to audio (see the lookup sketch after this list).
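
The snippets below are minimal sketches of how each feature could be implemented with the libraries mentioned above. They are not the exact code from the repository: the function names, the DEM URL, and the CSS selector are assumptions made for illustration.

  • Dictation (SpeechRecognition)

      # Minimal dictation sketch: capture microphone audio and turn it into text.
      # Assumes the SpeechRecognition and PyAudio packages are installed.
      import speech_recognition as sr

      def listen_word():
          recognizer = sr.Recognizer()
          with sr.Microphone() as source:
              recognizer.adjust_for_ambient_noise(source)   # compensate for background noise
              audio = recognizer.listen(source)
          # Google's free web recognizer, Spanish (Mexico)
          return recognizer.recognize_google(audio, language="es-MX")

  • Spelling (pyttsx3)

      # Minimal spelling sketch: say each letter of every word, then the whole word.
      import pyttsx3

      def spell(sentence):
          engine = pyttsx3.init()
          for word in sentence.split():
              for letter in word:
                  engine.say(letter)   # queue every letter
              engine.say(word)         # then the complete word
          engine.runAndWait()          # play everything that was queued

  • Meaning (BeautifulSoup4 + pyttsx3)

      # Minimal lookup sketch: scrape a definition and read it aloud.
      # The URL pattern and the "definicion" class are assumptions about the DEM site,
      # not the selectors the application actually uses.
      import requests
      import pyttsx3
      from bs4 import BeautifulSoup

      def meaning(word):
          url = f"https://dem.colmex.mx/Ver/{word}"        # hypothetical endpoint
          soup = BeautifulSoup(requests.get(url).text, "html.parser")
          entry = soup.find("div", class_="definicion")    # hypothetical selector
          if entry is None:
              return None                                  # caller can offer search alternatives
          text = entry.get_text(" ", strip=True)
          engine = pyttsx3.init()
          engine.say(text)
          engine.runAndWait()
          return text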


Requirements

  • It is important not to move the executable file out of its folder, as this will cause errors. The best option is to create a shortcut and place it on the desktop or wherever else you want it.

  • To get good performance from the application, I recommend downloading "Microsoft Sabina Desktop - Spanish (Mexico)", a voice provided by Microsoft for Windows devices.

How to download "Microsoft Sabina Desktop - Spanish (Mexico)".

In order to download the necessary voice for the program, the first thing to do is to go to:

Settings > Time and language > Voice > Manage voices > Add voices

In the search bar, type "Spanish" and download the one that says "Spanish (Mexico)". With that, everything is ready to use the application correctly and avoid pronunciation errors.
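
If you want to check from Python that the voice is installed and select it, a minimal sketch with pyttsx3 could look like this (the exact voice name string can vary between Windows installations):

    # Voice selection sketch: use the Microsoft Sabina (Spanish, Mexico) voice if present.
    import pyttsx3

    engine = pyttsx3.init()
    for voice in engine.getProperty("voices"):
        if "Sabina" in voice.name:               # the name may differ slightly per installation
            engine.setProperty("voice", voice.id)
            break

    engine.say("Hola, la voz está configurada correctamente.")
    engine.runAndWait()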

If you wish to contribute to the development of the application:

  • First clone the repository

      git clone https://github.com/GeimerDroiid/Dictionary.git
    
  • Then create a branch with your user name

      git checkout -b <your-username>
  • And finally install the requirements

      py -m pip install -r requirements.txt
    

Contribution

Pull requests are welcome, and I would appreciate your support in improving this application. For major changes, please open an issue first to discuss what you would like to change.
Releases

  • v1.5 (Jan 3, 2022)

    Full Changelog: https://github.com/DawntDev/Dictionary/compare/v1.0...v1.5

    Assets: Source code (tar.gz), Source code (zip), Dictionary.1.5.zip (75.35 MB)
  • v1.0 (Jan 3, 2022)


    Full Changelog: https://github.com/DawntDev/Dictionary/commits/v1.0

    Assets: Source code (tar.gz), Source code (zip), dictionary.exe (50.35 MB)
Owner
Juan Manuel