DeepAmandine is an artificial intelligence you can talk to for hours without noticing the difference from a human.

Overview

DeepAmandine

This is an artificial intelligence based on GPT-3 that you can chat with; it is very friendly and makes a lot of jokes. We wish you a good experience with the AI and hope you have fun.

screen_1


Installation and usage

- To use the Android version (v1.0-beta):

1. Install the required Python libraries:

$ wget https://github.com/BuyWithCrypto/deep-amandine/releases/download/v1.0-beta/requirements.txt && pip3 install -r requirements.txt

2. Download the executable file:

$ wget https://github.com/BuyWithCrypto/deep-amandine/releases/download/v1.0-beta/DeepAmandine-android-v1.0-beta.pyc

3. Run the executable:

$ python3 DeepAmandine-android-v1.0-beta.pyc

- To use the Desktop version (v1.0-beta):

1. Install the required Python libraries:

$ wget https://github.com/BuyWithCrypto/deep-amandine/releases/download/v1.0-beta/requirements.txt && pip3 install -r requirements.txt

2. Download the executable file:

$ wget https://github.com/BuyWithCrypto/deep-amandine/releases/download/v1.0-beta/DeepAmandine-desktop-v1.0-beta.pyc

3. Run the executable:

$ python3 DeepAmandine-desktop-v1.0-beta.pyc
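
Note: both builds reference the same requirements.txt, so step 1 only needs to be run once per machine. The releases also ship compiled .pyc bytecode rather than source, and a .pyc only runs on the CPython minor version it was compiled with; the version expected by these assets is not documented here, so if step 3 fails with a "Bad magic number" error, compare your interpreter version first. A minimal check (an illustrative sketch, not part of the project):

    # check_python.py -- prints the local interpreter version so it can be
    # compared against whatever version the .pyc assets were built for (the
    # expected version is not documented upstream, so treat a mismatch as a
    # first suspect rather than a certainty).
    import sys

    print(f"Running CPython {sys.version_info.major}.{sys.version_info.minor}")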

Examples of use

You can select a language to speak with our AI.

screen_1

You can select a username to chat with the AI.

screen_2

Once all the steps have been completed, you can start talking to the AI.

screen_3
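
The release only ships compiled .pyc files, so the actual implementation is not visible. For readers curious how a GPT-3 chat loop of this kind is typically wired up, here is a minimal sketch assuming the bot wraps the pre-1.0 openai Python client; the prompts, persona text, model name, and parameters below are illustrative assumptions, not the project's code.

    # chat_sketch.py -- illustrative only: the real DeepAmandine source is not
    # published, so every name, prompt, and parameter here is an assumption.
    import os

    import openai  # pre-1.0 "openai" client, assumed GPT-3 backend

    openai.api_key = os.environ["OPENAI_API_KEY"]  # assumed configuration

    language = input("Language (en/fr): ")  # mirrors the language-selection step
    username = input("Username: ")          # mirrors the username step

    # Running transcript used as the completion prompt.
    history = f"Amandine is a friendly assistant who makes jokes and replies in {language}.\n"
    while True:
        user_msg = input(f"{username}> ")
        if user_msg.lower() in {"quit", "exit"}:
            break
        history += f"{username}: {user_msg}\nAmandine:"
        resp = openai.Completion.create(
            engine="text-davinci-002",  # assumed GPT-3 engine
            prompt=history,
            max_tokens=150,
            temperature=0.8,
            stop=[f"{username}:"],
        )
        answer = resp.choices[0].text.strip()
        print(f"Amandine> {answer}")
        history += f" {answer}\n"

The loop keeps the whole transcript in the prompt, which is the usual way a single-turn completion model such as GPT-3 is given conversational context.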

Releases
  • v1.0-beta (Jan 15, 2022)
    Assets: DeepAmandine-android-v1.0-beta.pyc (3.17 KB), DeepAmandine-desktop-v1.0-beta.pyc (3.43 KB), requirements.txt (13 bytes), source code (zip / tar.gz)