Tech Stack

What powers SenseMesh.

A modular multimodal engine built on real-time speech, gesture, sign assets, and adaptive outputs.

The stack spans five layers: Frontend, Backend, AI/ML, Realtime, and Design System.

React + Tailwind

Reactive UI for multimodal input/output simulation.

Supabase

Auth, storage for sign assets, and a real-time DB.
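A minimal sketch of the Supabase wiring, assuming a "sign-assets" storage bucket and a "messages" table; both names are illustrative, not the project's actual schema.

import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_ANON_KEY!
);

// Public URL for a stored sign clip (bucket and path are hypothetical).
const { data } = supabase.storage
  .from("sign-assets")
  .getPublicUrl("signs/hello.webm");

// Live-subscribe to new rows in a hypothetical message-stream table.
supabase
  .channel("messages")
  .on(
    "postgres_changes",
    { event: "INSERT", schema: "public", table: "messages" },
    (payload) => console.log("new message:", payload.new)
  )
  .subscribe();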

Web Speech API

Low-latency speech recognition for hearing users.
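The recognition loop might look like the sketch below: continuous mode with interim results keeps latency low by surfacing partial transcripts as the user speaks. (SpeechRecognition is vendor-prefixed in Chromium; the casts cover browsers without bundled typings.)

// Continuous, low-latency recognition with interim (partial) results.
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

const recognition = new SpeechRecognitionImpl();
recognition.continuous = true;      // keep listening across utterances
recognition.interimResults = true;  // surface partial hypotheses early

recognition.onresult = (event: any) => {
  const result = event.results[event.results.length - 1];
  // Interim transcripts arrive first; isFinal marks the settled text.
  console.log(result.isFinal ? "final:" : "interim:", result[0].transcript);
};

recognition.start();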

Media Pipeline

Video → subtitle → sign mapping engine.
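One way the mapping step could look: walk a video's subtitle cues and look up a sign clip per token. The Cue and SignClip shapes and the dictionary are assumptions for illustration.

interface Cue { startMs: number; endMs: number; text: string }
interface SignClip { gloss: string; url: string }

// Hypothetical in-memory dictionary: gloss → sign clip.
const signDictionary = new Map<string, SignClip>();

function mapCuesToSigns(cues: Cue[]): { cue: Cue; clips: SignClip[] }[] {
  return cues.map((cue) => ({
    cue,
    clips: cue.text
      .toLowerCase()
      .split(/\s+/)
      .map((token) => signDictionary.get(token))
      .filter((clip): clip is SignClip => clip !== undefined),
  }));
}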

Gesture / Sign Processing

Hook-ready endpoint for ML-based sign detection.
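"Hook-ready" suggests a thin client like this sketch, where the route and payload shape are placeholders until the ML model lands.

interface SignDetection { gloss: string; confidence: number }

// Hypothetical endpoint: POST a video frame, get candidate signs back.
async function detectSigns(frameBase64: string): Promise<SignDetection[]> {
  const res = await fetch("/api/detect-signs", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ frame: frameBase64 }),
  });
  if (!res.ok) throw new Error(`detection failed: ${res.status}`);
  return res.json();
}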

Adaptive Output Engine

Sign, captions, audio, and haptics per user profile.
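One way to read "per user profile": a channel table keyed by profile, as in this sketch. Only the "deaf" profile name appears elsewhere on this page; the others, and the mapping itself, are illustrative.

type Profile = "deaf" | "hard-of-hearing" | "hearing" | "deafblind";
type Channel = "sign" | "captions" | "audio" | "haptics";

// Which output channels each profile receives (mapping is illustrative).
const channelsFor: Record<Profile, Channel[]> = {
  deaf: ["sign", "captions"],
  "hard-of-hearing": ["captions", "audio"],
  hearing: ["audio", "captions"],
  deafblind: ["haptics"],
};

function render(profile: Profile, message: string) {
  for (const channel of channelsFor[profile]) {
    console.log(`[${channel}]`, message); // stand-in for the real renderer
  }
}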

How the system flows.

Inputs: speech, text, gesture, or video
  → Input Normalizer: cleans and aligns modalities
  → SenseFuse Engine: the brain of the pipeline
  → Context AI: infers emotion, urgency, and intent
  → Output Layer: sign, captions, audio, and haptics
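The same flow expressed as typed stages, with stub bodies standing in for the real engine (all shapes and names here are illustrative):

type Modality = "speech" | "text" | "gesture" | "video";
interface RawInput { modality: Modality; payload: string; timestampMs: number }
interface Normalized { text: string; timestampMs: number }
interface Context { emotion: string; urgency: number; intent: string }

// Input Normalizer: clean and time-align each modality (stubbed).
function normalize(input: RawInput): Normalized {
  return { text: input.payload.trim(), timestampMs: input.timestampMs };
}

// SenseFuse Engine: merge aligned modalities into one utterance (stubbed).
function fuse(inputs: Normalized[]): Normalized {
  return {
    text: inputs.map((i) => i.text).join(" "),
    timestampMs: Math.min(...inputs.map((i) => i.timestampMs)),
  };
}

// Context AI: attach emotion, urgency, and intent (stubbed).
function classify(fused: Normalized): Context {
  return { emotion: "neutral", urgency: 0, intent: "statement" };
}

// Output Layer: fan out to sign, captions, audio, and haptics.
function emit(fused: Normalized, ctx: Context): void {
  console.log(fused.text, ctx);
}

function handle(batch: RawInput[]) {
  const fused = fuse(batch.map(normalize));
  emit(fused, classify(fused));
}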

Low latency · Multi-modal · Node-based · Scalable

Frontend

  • React + Tailwind for the interface
  • Reusable card + toggle components
  • Dark/light mode engine (hook sketch below)
  • Simulated sign overlays and conversation panels
  • Mobile-first adaptive layouts

<MicListener />

<SignOverlayPlayer>
  ...video content...
</SignOverlayPlayer>

<AdaptiveOutputPanel profile="deaf" />
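The dark/light mode engine from the Frontend list above could be as small as a Tailwind dark: class toggle persisted to localStorage; this hook is an assumption about its shape, not the actual implementation.

import { useEffect, useState } from "react";

export function useThemeToggle() {
  // Initialize from the persisted preference (defaults to light).
  const [dark, setDark] = useState(
    () => localStorage.getItem("theme") === "dark"
  );

  useEffect(() => {
    // Tailwind's dark: variants key off this class on <html>.
    document.documentElement.classList.toggle("dark", dark);
    localStorage.setItem("theme", dark ? "dark" : "light");
  }, [dark]);

  return { dark, toggle: () => setDark((d) => !d) };
}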

Backend

Core stores: Sign Dictionary · Subtitle Files · User Profiles

  • Supabase storage for sign assets
  • Real-time DB for message streams
  • REST endpoints for mapping subtitles to signs (sketched below)
  • Lightweight function pipeline for preprocessing
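The subtitle-to-sign endpoint might be a Supabase Edge Function along these lines; the sign_dictionary table and its columns are assumptions, not the real schema.

import { createClient } from "npm:@supabase/supabase-js@2";

Deno.serve(async (req) => {
  const { text } = await req.json(); // e.g. { "text": "hello world" }
  const supabase = createClient(
    Deno.env.get("SUPABASE_URL")!,
    Deno.env.get("SUPABASE_ANON_KEY")!
  );

  // Look up a sign clip for each token (table/columns hypothetical).
  const tokens = text.toLowerCase().split(/\s+/);
  const { data, error } = await supabase
    .from("sign_dictionary")
    .select("gloss, clip_url")
    .in("gloss", tokens);

  if (error) return new Response(error.message, { status: 500 });
  return Response.json(data);
});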

Future-ready ML modules.

  • Gesture Recognition Model (coming soon)
  • Emotion/Tone Classifier (coming soon)
  • Scene Understanding Layer (coming soon)

Design System

  • Color palette
  • Typography: Plus Jakarta, Inter / Roboto
  • Corner radius
  • Shadows

Built to be extended.

Developers can plug new sensors, new sign libraries, new gesture models, or new audio outputs without rewriting the base engine.
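Concretely, that extensibility could mean a small adapter interface that new outputs register against. This sketch (names illustrative) adds a haptics backend via the standard navigator.vibrate API without touching the engine core.

interface OutputAdapter {
  id: string;
  render(message: string, context: { urgency: number }): void;
}

const adapters = new Map<string, OutputAdapter>();

function registerAdapter(adapter: OutputAdapter) {
  adapters.set(adapter.id, adapter);
}

// A new haptics output plugs in without rewriting the base engine.
registerAdapter({
  id: "haptics",
  render(_message, { urgency }) {
    if ("vibrate" in navigator) navigator.vibrate(urgency > 0.5 ? 200 : 80);
  },
});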

Built with React, Vercel, Supabase, and Web APIs.

Ready to explore how it all fits together?