A linguistics professor at UNM is helping shape the future of sign language research, using unique teaching methods that help her students re-imagine communication.
Abstract: This paper proposes a Visual-Speech-Text Large Language Model framework for Human-Robot Interaction (VSTLLM HRI). By designing a Modality Language Model (MLM), the framework achieves a ...