
FlowiseMesh - Textual Face Description with an LLM

License

Google MediaPipe, flowiseai/flowise on Docker Hub, and eCAL are licensed under the Apache License, Version 2.0. This repository follows the terms of these licenses.

Project Description

The goal of this project is to connect external programs with a Large Language Model designed and running in Flowise. The use case: a live webcam stream of a face, annotated with landmarks generated by Google MediaPipe, serves as input to a Flowise model running in a Docker container. The model, either a local Ollama model or OpenAI accessed via its API, interprets the data and publishes the result outside of the container. The following link points to the Flowise homepage.

Flowise - Build AI Agents Visually

Installation

Step 1:

Docker installation

Installation Ubuntu

Installation Windows 11 / WSL2

Step 2:

Flowise is free, and we will install its Docker image from Docker Hub.

FlowiseAI from the Dockerhub

docker pull flowiseai/flowise
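
To start the container afterwards, map the Flowise port 3000 to the host, for example (container name and port mapping are only a suggestion, adjust them to your setup):

docker run -d --name flowise -p 3000:3000 flowiseai/flowise

The Flowise UI should then be reachable at http://localhost:3000.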

Step 3:

a) Local Models - Install Ollama

Link to OLLAMA

Download the preferred model, e.g.:

ollama pull deepseek-r1:14b

b) Cloud-Based Models - Create an API key, e.g. for OpenAI:

https://platform.openai.com/api-keys

and import the API key into Flowise.

Step 4:

We are using a virtual Python environment, e.g. pyenv or conda.

  • pyenv usage

  • conda usage

  • Create your virtual environment based on Python 3.12; here we name it ecal (see the sketch after this list).

  • Activate the env

    pyenv activate ecal
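
With pyenv and the pyenv-virtualenv plugin (one possible choice; conda works just as well), creating and activating the environment could look like this:

pyenv install 3.12
pyenv virtualenv 3.12 ecal
pyenv activate ecal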

Step 5:

Install the requirements:

pip install -r requirements.txt

Step 6:

Install the communication layer eCAL: Landing page of eCAL

Download the latest version

Also install the Python binding in your virtual environment, e.g. for Linux:

pip install ecal5-5.13.3-1jammy-cp311-cp311-linux_x86_64.whl

For Windows:

pip install ecal5-5.13.3-cp311-cp311-win_amd64.whl

Step 7: Configuring .env

.env looks something like this:

###### FLOWISE ID NUMBER 
# Ollama Model 
# FLOWID = <Your Flowise ID>
# OpenAI Model
FLOWID = <Your Flowise ID>
###### Module Selection - VALID only for NON-Docker Application
# USECAM = False -> Sample image is used
# USECAM = True -> Webcam is used
# USECAM = False
USECAM = True
###### Picture Selection
FOLDER = Images/
# IMAGE = base.png
# IMAGE = looking_right.jpg
# IMAGE = looking_left.jpg
IMAGE = smiling.jpg

Your Flowise ID can be found here: flowid
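
As a minimal sketch of how these settings are used, the .env values can be loaded with python-dotenv and the flow queried over Flowise's prediction REST endpoint (assuming Flowise listens on localhost:3000; the modules in this repo may wire this up differently):

import os
import requests
from dotenv import load_dotenv

load_dotenv()                      # reads FLOWID, USECAM, FOLDER, IMAGE from .env
flow_id = os.getenv("FLOWID")

# Flowise exposes every chatflow under /api/v1/prediction/<flow id>
url = f"http://localhost:3000/api/v1/prediction/{flow_id}"
payload = {"question": "Describe the facial expression encoded in these landmarks: ..."}

response = requests.post(url, json=payload)
print(response.json())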

Elements of the Software

A) Communication Layer eCAL

eCAL (enhanced Communication Abstraction Layer) is a fast publish-subscribe middleware that manages inter-process data exchange as well as inter-host communication. It comes with tools such as the eCAL Monitor and the eCAL Player.

Communication in this project is based on Protobuf messages. The documentation can be found here: Protocol Buffers Documentation. The proto files in this project are:

  • facedata.proto
  • modeloutput.proto

They can be compiled into Python modules with the commands:

protoc -I=. --python_out=. facedata.proto

protoc -I=. --python_out=. modeloutput.proto
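
For illustration, publishing one of the compiled messages over eCAL could look roughly like the following sketch (the message name FaceData and the topic name "facedata" are assumptions; the actual modules in this repo may be structured differently):

import sys
import ecal.core.core as ecal_core
from ecal.core.publisher import ProtoPublisher
import facedata_pb2  # generated by protoc from facedata.proto

# initialize the eCAL API and register this process under a unit name
ecal_core.initialize(sys.argv, "face publisher")

# create a protobuf publisher on a topic (message type FaceData is assumed)
pub = ProtoPublisher("facedata", facedata_pb2.FaceData)

msg = facedata_pb2.FaceData()   # fill in the landmark fields here
pub.send(msg)

ecal_core.finalize()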

B) Using Face Mesh from Google MediaPipe

For face detection we are using Google MediaPipe (link here). From MediaPipe we use the Face Landmarker solution. The canonical face landmark model is shown here: Canonical Face Model. The model used here can be found under MediaPipe Solutions. The module facedata2ecal.py is based on a modified example provided by Google.
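
A minimal sketch of detecting landmarks on a still image with the MediaPipe Tasks API (the model file name and image path are assumptions; facedata2ecal.py additionally handles the live webcam stream and the eCAL publishing):

import mediapipe as mp
from mediapipe.tasks import python
from mediapipe.tasks.python import vision

# load the downloaded Face Landmarker model (file name is an assumption)
base_options = python.BaseOptions(model_asset_path="face_landmarker.task")
options = vision.FaceLandmarkerOptions(base_options=base_options, num_faces=1)
detector = vision.FaceLandmarker.create_from_options(options)

# run detection on one of the sample images from the Images/ folder
image = mp.Image.create_from_file("Images/smiling.jpg")
result = detector.detect(image)
print(len(result.face_landmarks[0]))   # 478 normalized landmarks per face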

Starting up the Project

Running on:

Ubuntu

Windows

Annotation

Using the Microsoft LifeCam HD-3000, you can adjust the video frame. Keyboard shortcuts to manage the camera's zoom-in/zoom-out feature:

Zoom out = Ctrl + Minus key, Zoom in = Ctrl + Plus key, Zoom to 100% = Ctrl + Zero key

Written with StackEdit.
