Generate and Visualize Data Lineage from query history

Overview

Tokern Lineage Engine


Tokern Lineage Engine is a fast and easy-to-use application to collect, visualize, and analyze column-level data lineage in databases, data warehouses, and data lakes in AWS and GCP.

Tokern Lineage helps you browse column-level data lineage.

Resources

  • Demo of Tokern Lineage App

(Image: data-lineage demo)

Quick Start

Install a demo using Docker and Docker Compose

Download the docker-compose file from the GitHub repository.

# in a new directory run
wget https://raw.githubusercontent.com/tokern/data-lineage/master/install-manifests/docker-compose/catalog-demo.yml -O docker-compose.yml
# or run
curl https://raw.githubusercontent.com/tokern/data-lineage/master/install-manifests/docker-compose/catalog-demo.yml -o docker-compose.yml

Run docker-compose

docker-compose up -d

Check that the containers are running.

docker ps
CONTAINER ID   IMAGE                            COMMAND   CREATED        STATUS       PORTS                  NAMES
3f4e77845b81   tokern/data-lineage-viz:latest   ...   4 hours ago    Up 4 hours   0.0.0.0:8000->80/tcp     tokern-data-lineage-visualizer
1e1ce4efd792   tokern/data-lineage:latest       ...   5 days ago     Up 5 days                             tokern-data-lineage
38be15bedd39   tokern/demodb:latest             ...   2 weeks ago    Up 2 weeks                            tokern-demodb

Try out Tokern Lineage App

Head to http://localhost:8000/ to open the Tokern Lineage app.
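Optionally, verify from a script that the app is serving. A minimal sketch, assuming the demo stack above is running and the requests package is installed:

import requests

# The visualizer maps host port 8000 to the app (see the docker ps output above).
resp = requests.get("http://localhost:8000/")
print(resp.status_code)  # expect 200 once the app is up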

Install Tokern Lineage Engine

# in a new directory run
wget https://raw.githubusercontent.com/tokern/data-lineage/master/install-manifests/docker-compose/tokern-lineage-engine.yml
# or run
curl https://raw.githubusercontent.com/tokern/data-lineage/master/install-manifests/docker-compose/tokern-lineage-engine.yml -o tokern-lineage-engine.yml

Run docker-compose

docker-compose -f tokern-lineage-engine.yml up -d

If you want to use an external Postgres database, change the following parameters in tokern-lineage-engine.yml:

  • CATALOG_HOST
  • CATALOG_USER
  • CATALOG_PASSWORD
  • CATALOG_DB

You can also override the default values using environment variables.

CATALOG_HOST=... CATALOG_USER=... CATALOG_PASSWORD=... CATALOG_DB=... docker-compose -f ... up -d

For more advanced usage of environment variables with docker-compose, refer to the docker-compose docs.

Pro-tip

If you want to connect to a database in the host machine, set

CATALOG_HOST: host.docker.internal # for Mac or Windows
# OR
CATALOG_HOST: 172.17.0.1 # for Linux

Supported Technologies

  • Postgres
  • AWS Redshift
  • Snowflake

Coming Soon

  • SparkSQL
  • Presto

Documentation

For advanced usage, please refer to the data-lineage documentation.

Survey

Please take this survey if you are a user or are considering using data-lineage. Responses will help us prioritize features.

Comments
  • Error while parsing queries from json file


    I was able to successfully load the catalog using dbcat, but I'm getting the following error when I try to parse queries using a file in JSON format (I also tried the given test file):

    File "~/Python/3.8/lib/python/site-packages/data_lineage/parser/init.py", line 124, in parse name = str(hash(sql)) TypeError: unhashable type: 'dict'

    Here's line 124: https://github.com/tokern/data-lineage/blob/f347484c43f8cb9b97c44086dd2557e3b40904ab/data_lineage/parser/__init__.py#L124

    Code executed:

    from dbcat import catalog_connection
    from data_lineage.parser import parse_queries, visit_dml_query
    import json
    
    with open("queries2.json", "r") as file:
        queries = json.load(file)
    
    catalog_conf = """
    catalog:
      user: test
      password: [email protected]
      host: 127.0.0.1
      port: 5432
      database: postgres
    """
    catalog = catalog_connection(catalog_conf)
    
    parsed = parse_queries(queries)
    
    visited = visit_dml_query(catalog, parsed)
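
    The TypeError indicates that hash() received a dict, i.e. json.load returned a list of objects rather than plain SQL strings. A sketch of a possible workaround, assuming each entry in queries2.json is an object with a "query" field (a hypothetical key name):

    import json

    with open("queries2.json", "r") as file:
        # parse_queries expects SQL strings, so pull the text out of each object
        queries = [entry["query"] for entry in json.load(file)]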
    
    Support 
    opened by siva-mudiyanur 42
  • Snowflake source defaulting to prod even though I'm specifying a different db name


    I'm adding a snowflake source as follows, where sf_db_name is my database name, e.g. snowfoo (verified in the debugger):

    source = catalog.add_source(
        name=f"sf1_{time.time_ns()}",
        source_type="snowflake",
        database=sf_db_name,
        username=sf_username,
        password=sf_password,
        account=sf_account,
        role=sf_role,
        warehouse=sf_warehouse,
    )
    

    ... but when it goes to scan, it looks like the code thinks my database name is 'prod':

    tokern-data-lineage | sqlalchemy.exc.ProgrammingError: (snowflake.connector.errors.ProgrammingError) 002003 (02000): SQL compilation error:
    tokern-data-lineage | Database 'PROD' does not exist or not authorized.
    tokern-data-lineage | [SQL:
    tokern-data-lineage |     SELECT
    tokern-data-lineage |         lower(c.column_name) AS col_name,
    tokern-data-lineage |         c.comment AS col_description,
    tokern-data-lineage |         lower(c.data_type) AS col_type,
    tokern-data-lineage |         lower(c.ordinal_position) AS col_sort_order,
    tokern-data-lineage |         lower(c.table_catalog) AS database,
    tokern-data-lineage |         lower(c.table_catalog) AS cluster,
    tokern-data-lineage |         lower(c.table_schema) AS schema,
    tokern-data-lineage |         lower(c.table_name) AS name,
    tokern-data-lineage |         t.comment AS description,
    tokern-data-lineage |         decode(lower(t.table_type), 'view', 'true', 'false') AS is_view
    tokern-data-lineage |     FROM
    tokern-data-lineage |         prod.INFORMATION_SCHEMA.COLUMNS AS c
    tokern-data-lineage |     LEFT JOIN
    tokern-data-lineage |         prod.INFORMATION_SCHEMA.TABLES t
    tokern-data-lineage |             ON c.TABLE_NAME = t.TABLE_NAME
    tokern-data-lineage |             AND c.TABLE_SCHEMA = t.TABLE_SCHEMA
    tokern-data-lineage |      ;
    tokern-data-lineage |     ]
    tokern-data-lineage | (Background on this error at: http://sqlalche.me/e/13/f405)
    

    ... I'm trying to look through the tokern code repos to see where the disconnect might be happening, but I'm not sure yet.

    opened by peteclark3 10
  • Any way to increase timeout for scanning?


    When I add my snowflake DB for scanning, using this bit of code (with the values replaced as per my snowflake database):

    from data_lineage import Catalog
    
    catalog = Catalog(docker_address)
    
    # Register wikimedia datawarehouse with data-lineage app.
    
    source = catalog.add_source(name="wikimedia", source_type="postgresql", **wikimedia_db)
    
    # Scan the wikimedia data warehouse and register all schemata, tables and columns.
    
    catalog.scan_source(source)
    

    ... I get

    tokern-data-lineage-visualizer | 2021/10/08 21:51:40 [error] 34#34: *1 upstream prematurely closed connection while reading response header from upstream, client: 10.10.0.1, server: , request: "POST /api/v1/catalog/scanner HTTP/1.1", upstream: "http://10.10.0.3:4142/api/v1/catalog/scanner", host: "127.0.0.1:8000"
    

    ... I think it's because snowflake isn't returning fast enough, but I'm not sure. I tried updating the warehouse size to large to make the scan faster, but I get the same thing. It seems to time out pretty fast, at least for my large database. Any ideas?

    Python 3.8.0 in an isolated venv, data-lineage 0.8.3. Thanks for this package!

    opened by peteclark3 10
  • CTE visiting


    Currently it doesn't appear that the dml_visitor walks through common table expressions to build the lineage. Am I interpreting this wrong? Within visitor.py, lines 45 and 61 both visit the "with clause", but there doesn't seem to be any functionality for handling the CommonTableExpr or CTEs within the parsed statements. This causes any statement with CTEs to throw an error when calling parse_queries, as no table is found when attempting to bind a CTE in a FROM clause.
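
    For illustration, a statement of the shape described, where binding the CTE in the FROM clause fails (a sketch; the actual failing SQL isn't included in this report):

    # Hypothetical CTE-backed DML of the kind described above; parsing it
    # reportedly errors out because recent_orders is not a table in the catalog.
    query = """
    WITH recent_orders AS (
        SELECT customer_id, amount FROM orders
    )
    INSERT INTO order_summary (customer_id, amount)
    SELECT customer_id, amount FROM recent_orders
    """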

    opened by dhuettenmoser 8
  • Debian Buster can't find version


    Hi, I'm trying to install version 0.8 in a Docker image based on Debian Buster, and when it runs the pip install command it prints the following warning/error:

    #12 9.444 Collecting data-lineage==0.8.0 (from -r /project/requirements.txt (line 25))
    #12 9.466   Could not find a version that satisfies the requirement data-lineage==0.8.0 (from -r /project/requirements.txt (line 25)) (from versions: 0.1.2, 0.2.0, 0.3.0, 0.5.1, 0.5.2, 0.6.0, 0.7.0)
    #12 9.541 No matching distribution found for data-lineage==0.8.0 (from -r /project/requirements.txt (line 25))
    

    Is this normal behavior? Do I have to add something before trying to install?

    opened by jesusjackson 5
  • problem with docker-compose


    Hello! I used these docs: https://tokern.io/docs/data-lineage/installation

    1. curl https://raw.githubusercontent.com/tokern/data-lineage/master/install-manifests/docker-compose-demodb/docker-compose.yml -o docker-compose.yml
    2. docker-compose up -d
    3. Got the error: ERROR: In file './docker-compose.yml', the services name 404 must be a quoted string, i.e. '404'.

    opened by kirill3000 4
  • cannot import name 'parse_queries' from 'data_lineage.parser'


    Hi, I am trying to parse query history from Snowflake on Jupyter notebook.

    data-lineage version 0.3.0

    !pip install snowflake-connector-python[secure-local-storage,pandas] data-lineage
    
    import datetime
    end_time = datetime.datetime.now()
    start_time = end_time - datetime.timedelta(days=7)
    
    query = f"""
    SELECT query_text
    FROM table(information_schema.query_history(
        end_time_range_start=>to_timestamp_ltz('{start_time.isoformat()}'),
        end_time_range_end=>to_timestamp_ltz('{end_time.isoformat()}')));
    """
    
    cursors = conn.execute_string(
        sql_text=query
    )
    
    queries = []
    for cursor in cursors:
      for row in cursor:
        print(row[0])
        queries.append(row[0])
    
    from data_lineage.parser import parse_queries, visit_dml_queries
    
    # Parse all queries
    parsed = parse_queries(queries)
    
    # Visit the parse trees to extract source and target queries
    visited = visit_dml_queries(catalog, parsed)
    
    # Create a graph and visualize it
    
    from data_lineage.parser import create_graph
    graph = create_graph(catalog, visited)
    
    import plotly
    plotly.offline.iplot(graph.fig())
    

    Then I got this error. Would you help me find the root cause?

    ---------------------------------------------------------------------------
    ImportError                               Traceback (most recent call last)
    <ipython-input-33-151c67ea977c> in <module>
    ----> 1 from data_lineage.parser import parse_queries, visit_dml_queries
          2 
          3 # Parse all queries
          4 parsed = parse_queries(queries)
          5 
    
    ImportError: cannot import name 'parse_queries' from 'data_lineage.parser' (/opt/conda/lib/python3.8/site-packages/data_lineage/parser/__init__.py)
    
    opened by yohei1126 4
  • Not an issue with data-lineage but issue with required package: pglast


    Opening this issue to let everyone know until it gets fixed: the installation of pglast fails, requiring an xxhash.h file. Here's the link to the issue and how to resolve it: https://github.com/lelit/pglast/issues/82

    Please feel free to close this if you think it's inappropriate.

    opened by siva-mudiyanur 3
  • What query format to pass to Analyzer.analyze(...)?


    I am trying to use this example: https://tokern.io/docs/data-lineage/queries ... first issue: this bit of code looks like it's just going to fetch a single row from the query history in snowflake:

    queries = []
    with connection.get_cursor() as cursor:
      cursor.execute(query)
      row = cursor.fetchone()
    
      while row is not None:
        queries.append(row[0])
    

    ... is this intended? Note that it's using .fetchone() only once, before the loop, so row never advances (compare the sketch below).
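
    A version that advances the cursor each iteration would presumably look like this sketch (connection and query as in the example above):

    queries = []
    with connection.get_cursor() as cursor:
        cursor.execute(query)
        row = cursor.fetchone()
        while row is not None:
            queries.append(row[0])
            row = cursor.fetchone()  # fetch the next row; otherwise the loop never ends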

    Then.. second issue... when I go back to the example here: https://tokern.io/docs/data-lineage/example

    I see this bit of code...

    analyze = Analyze(docker_address)
    
    for query in queries:
        print(query)
        analyze.analyze(**query, source=source, start_time=datetime.now(), end_time=datetime.now())
    

    ... what does the queries array look like? Or better yet, what does a single query item look like? Above it, in the example, it looks to be a JSON payload:

    with open("queries.json", "r") as file:
        queries = json.load(file)
    

    ... but I've no idea what the payload is supposed to look like.
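
    Judging by another report in this list, which calls analyze.analyze(**{"query": query}, ...), each item is presumably a mapping with at least a "query" key holding the SQL text. A sketch of the assumed shape (not confirmed by the docs):

    # Assumed shape of queries.json; the "query" key is inferred from usage
    # elsewhere in these comments, not from documentation.
    queries = [
        {"query": "INSERT INTO page_lookup SELECT * FROM page"},
    ]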

    I've tried 8 different ways of passing this **query variable into analyze(...), using the results from the snowflake example on https://tokern.io/docs/data-lineage/queries, but I can never seem to get it right. Either I get an error saying that ** expects a mapping when I use strings or tuples (which is fine, but what's the mapping the function expects?), or I get an error in the API console itself like

    tokern-data-lineage |     raise ValueError('Bad argument, expected a ast.Node instance or a tuple')
    tokern-data-lineage | ValueError: Bad argument, expected a ast.Node instance or a tuple
    

    ... could we get a more concrete snowflake example, or at the bare minimum an indication of what the query variable is supposed to look like?

    Note that I am also trying to inspect the unit tests and use those as examples, but still not getting very far.

    Thanks for this package!

    opened by peteclark3 2
  • Support for large queries


    calling

    analyze.analyze(**{"query":query}, source=dl_source, start_time=datetime.now(), end_time=datetime.now())
    

    with a large query, I get a "request too long" error. It seems that even though it is POSTing, the query is still appended to the URL, so the request fails, e.g.

    tokern-data-lineage-visualizer | 10.10.0.1 - - [14/Oct/2021:14:39:00 +0000] "POST /api/v1/analyze?query=ANY_REALLY_LONG_QUERY_HERE
    
    opened by peteclark3 2
  • Conflicting package dependencies


    amundsen-databuilder, which is one of the package dependencies for dbcat, requires flask 1.0.2, whereas data-lineage requires flask 1.1.

    Please feel free to close this if it's not a valid issue.

    opened by siva-mudiyanur 2
  • chore(deps): Bump certifi from 2021.5.30 to 2022.12.7


    Bumps certifi from 2021.5.30 to 2022.12.7.


    dependencies 
    opened by dependabot[bot] 0
  • Redis dependency not documented


    Trying out the demo, I saw the scan command fail (see also https://github.com/tokern/data-lineage/issues/106), with the server looking for port 6379 on localhost. Sure enough, starting a local redis removed that problem. Can this be documented? It looks like the docker-compose file includes it; just the instructions don't.
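
    A quick way to confirm the dependency from the host, using only the standard library (a minimal sketch):

    import socket

    # Raises ConnectionRefusedError if nothing is listening on the Redis port.
    socket.create_connection(("localhost", 6379), timeout=2).close()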

    opened by debedb 0
  • Demo is wrong


    Trying out the demo, I ran catalog.scan_source(source). But that method does not exist. After some digging, it looks like this works:

    from data_lineage import Scan
    
    Scan('http://127.0.0.1:8000').start(source)
    

    Please fix the demo pages.

    opened by debedb 0
  • Use markupsafe==2.0.1


    $ data_lineage --catalog-user xxx --catalog-password yyy
    Traceback (most recent call last):
      File "/opt/homebrew/bin/data_lineage", line 5, in <module>
        from data_lineage.__main__ import main
      File "/opt/homebrew/lib/python3.9/site-packages/data_lineage/__main__.py", line 7, in <module>
        from data_lineage.server import create_server
      File "/opt/homebrew/lib/python3.9/site-packages/data_lineage/server.py", line 5, in <module>
        import flask_restless
      File "/opt/homebrew/lib/python3.9/site-packages/flask_restless/__init__.py", line 22, in <module>
        from .manager import APIManager  # noqa
      File "/opt/homebrew/lib/python3.9/site-packages/flask_restless/manager.py", line 24, in <module>
        from flask import Blueprint
      File "/opt/homebrew/lib/python3.9/site-packages/flask/__init__.py", line 14, in <module>
        from jinja2 import escape
      File "/opt/homebrew/lib/python3.9/site-packages/jinja2/__init__.py", line 12, in <module>
        from .environment import Environment
      File "/opt/homebrew/lib/python3.9/site-packages/jinja2/environment.py", line 25, in <module>
        from .defaults import BLOCK_END_STRING
      File "/opt/homebrew/lib/python3.9/site-packages/jinja2/defaults.py", line 3, in <module>
        from .filters import FILTERS as DEFAULT_FILTERS  # noqa: F401
      File "/opt/homebrew/lib/python3.9/site-packages/jinja2/filters.py", line 13, in <module>
        from markupsafe import soft_unicode
    ImportError: cannot import name 'soft_unicode' from 'markupsafe' (/opt/homebrew/lib/python3.9/site-packages/markupsafe/__init__.py)
    
    

    Looks like soft_unicode was removed in markupsafe 2.1.0. You may want to pin markupsafe==2.0.1.
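
    Until such a pin lands, a workaround is presumably to downgrade manually, e.g. pip install "markupsafe==2.0.1", before running data_lineage again.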

    opened by debedb 0
  • MySQL client binaries seem to be required


    This is probably due to SQLAlchemy's requirement of mysqlclient, but when doing

    pip install data-lineage
    

    The following is seen

    Collecting mysqlclient<3,>=1.3.6
      Using cached mysqlclient-2.1.1.tar.gz (88 kB)
      Preparing metadata (setup.py) ... error
      error: subprocess-exited-with-error
      
      × python setup.py egg_info did not run successfully.
      │ exit code: 1
      ╰─> [16 lines of output]
          /bin/sh: mysql_config: command not found
          /bin/sh: mariadb_config: command not found
          /bin/sh: mysql_config: command not found
          Traceback (most recent call last):
            File "<string>", line 2, in <module>
            File "<pip-setuptools-caller>", line 34, in <module>
            File "/private/var/folders/th/yz4tb0ss5t3_4df1xnfrkg3r0000gn/T/pip-install-auypdvbk/mysqlclient_42a825d5ee084d6686c16912ef8320cc/setup.py", line 15, in <module>
              metadata, options = get_config()
            File "/private/var/folders/th/yz4tb0ss5t3_4df1xnfrkg3r0000gn/T/pip-install-auypdvbk/mysqlclient_42a825d5ee084d6686c16912ef8320cc/setup_posix.py", line 70, in get_config
              libs = mysql_config("libs")
            File "/private/var/folders/th/yz4tb0ss5t3_4df1xnfrkg3r0000gn/T/pip-install-auypdvbk/mysqlclient_42a825d5ee084d6686c16912ef8320cc/setup_posix.py", line 31, in mysql_config
              raise OSError("{} not found".format(_mysql_config_path))
          OSError: mysql_config not found
          mysql_config --version
          mariadb_config --version
          mysql_config --libs
          [end of output]
      
      note: This error originates from a subprocess, and is likely not a problem with pip.
    error: metadata-generation-failed
    

    Installing the MySQL client fixes it.

    Since you are using SQLAlchemy, this is out of your hands, but this issue is to suggest maybe adding a note to that effect in the docs?

    opened by debedb 0
  • could not translate host name "---" to address


    I changed CATALOG_PASSWORD, CATALOG_USER, CATALOG_DB, and CATALOG_HOST accordingly and ran docker-compose -f tokern-lineage-engine.yml up. It throws this error:

    return self.dbapi.connect(*cargs, **cparams)
    tokern-data-lineage | File "/opt/pysetup/.venv/lib/python3.8/site-packages/psycopg2/__init__.py", line 122, in connect
    tokern-data-lineage |     conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
    tokern-data-lineage | sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not translate host name "-xxxxxxx.amazonaws.com" to address: Temporary failure in name resolution

    opened by Opperessor 0
Releases (v0.8.5)
  • v0.8.5(Oct 13, 2021)

  • v0.8.4(Oct 13, 2021)

  • v0.8.3(Aug 19, 2021)

  • v0.8.2(Aug 17, 2021)

  • v0.8.0(Jul 29, 2021)

  • v0.7.8(Jul 17, 2021)

  • v0.7.7(Jul 14, 2021)

  • v0.7.6(Jul 10, 2021)

  • v0.7.5(Jul 7, 2021)

    v0.7.5 (2021-07-07)

    Chore

    • Prepare release 0.7.5

    Feature

    • Update to pglast 3.3 for inbuilt visitor pattern

    Fix

    • Fix docker build to compile psycopg2
    • Update dbcat (connection labels, default schema). CTAS, Subqueries support
  • v0.7.4(Jul 4, 2021)

    v0.7.4 (2021-07-04)

    Chore

    • Prepare release 0.7.4

    Fix

    • Fix DB connection leak in resources. Set execution start/end time.
    • Update dbcat to 0.5.4. Pass source to binding and lineage functions
    • Fix documentation and examples for latest version.
  • v0.7.3(Jun 26, 2021)

  • v0.7.2(Jun 22, 2021)

  • v0.7.1(Jun 21, 2021)

  • v0.7.0(Jun 16, 2021)

    v0.7.0 (2021-06-16)

    Chore

    • Prepare release 0.7.0
    • Pin flask version to ~1.1
    • Update README with information on the app and installation

    Feature

    • Add Parser and Scanner API. Examples use REST APIs
    • Add install manifests for a complete example with notebooks
    • Expose data lineage models through a REST API

    Fix

    • Remove /api/[node|main]. Build remaining APIs using flask_restful
  • v0.6.0(May 7, 2021)

    v0.6.0 (2021-05-07)

    Chore

    • Prepare 0.6.0

    Feature

    • Build docker images on release

    Fix

    • Fix golang release scripts. Prepare 0.5.2
    • Catch parse exceptions and warn instead of exiting
  • v0.5.2(May 4, 2021)

  • v0.3.0(Nov 7, 2020)

    v0.3.0 (2020-11-07)

    Chore

    • Prepare release 0.3.0
    • Enable pre-commit and pre-push checks
    • Add links to docs and survey
    • Change supported databases list
    • Improve message on differentiation.

    Feature

    • data-lineage as a plotly dash server

    Fix

    • fix coverage generation. clean up pytest configuration
    • Install with editable dependency option
  • v0.2.0(Mar 29, 2020)

    v0.2.0 (2020-03-29)

    Chore

    • Prepare release 0.2.0
    • Fill up README with overview and installation
    • Fill out setup.py to add information on the project

    Feature

    • Support sub-graphs for specific tables. Plot graphs using plotly
    • Create a di-graph from DML queries
    • Parse and visit a list of queries

    Fix

    • Fix coverage report
    • Reduce size by removing outputs from notebook
  • v0.1.2(Mar 27, 2020)

Owner
Tokern
Automate Data Engineering Tasks with Column-Level Data Lineage