Researchers from MetaMind Innovations – MINDS, the University of Western Macedonia, and Kingston University have developed an innovative visual-based translator system that leverages deep learning to translate sign language in real time. Unlike traditional text-based methods, this approach analyzes visual data—such as gestures, facial expressions, and environmental context—to accurately interpret and convert sign language into spoken words or text.
This research holds immense potential to improve communication across multiple sectors, including education, entertainment, tourism, and healthcare. By bridging communication gaps and enhancing accessibility, this technology represents a significant step forward for artificial intelligence and information technology.
📌 Presented at the prestigious 2024 13th International Conference on Modern Circuits and Systems Technologies (MOCAST).
Special thanks to the authors: Stavros Piperakis, Maria Papatsimouli, Vasilis A., Panagiotis Sarigiannidis, and George Fragulis.
Curious to learn more? Dive into the full paper for an in-depth look at this AI innovation here: https://ieeexplore.ieee.org/document/10615749/
