NeuroRobo: Enhancing Human-Robot Interaction with Facial Sentiment Analysis and Object Detection


Scene 1 (0s)

[Audio] In August 2023, four B.Tech students and one Ph.D. student from the BVRIT Hyderabad College of Engineering for Women set out to advance Human-Robot Interaction. G. Himabindu, P. Aparna, P. Sai Sri Poojitha, P. Shobitha Rani and Mukhtar Sofi began work on Facial Sentiment Analysis and Object Detection. With this project, NeuroRobo, the five students were determined to take up the challenge and make a difference.

Scene 2 (38s)

[Audio] Four B.Tech students and one Ph.D. student from the BVRIT Hyderabad College of Engineering started the project in August 2023 with the objective of advancing Human-Robot Interaction through Facial Sentiment Analysis and Object Detection. The team developed the NeuroRobo model, an accomplishment in computer vision and deep learning that augments Natural Language Processing so that interactions between people and machines become more natural and straightforward.

Scene 3 (1m 10s)

[Audio] This project revolves around improving human-robot interaction through accurate natural language processing. Built by the five students, the NeuroRobo model employs advanced computer vision and deep learning techniques to provide fluent lip-synced responses, object identification, and imitation of user movements. The main ambition is to make interactions between people and computers real-time, more intuitive, and more natural. The students intend to refine the model's NLP accuracy and create a smoother, more satisfying experience for the user.

Scene 4 (1m 49s)

[Audio] The objective of this project is to create an advanced system that transforms Human-Robot Interaction. To achieve this, the five students combined their expertise to devise a new model called NeuroRobo. NeuroRobo uses advanced computer vision and deep learning to provide intuitive lip-synced responses, recognizes objects through a webcam to interact with the physical environment, and mimics the user's actions for a more natural interaction.

Scene 5 (2m 26s)

[Audio] The team developed a framework to enhance Human-Robot Interaction. It combines facial sentiment analysis and object detection algorithms, and the facial analysis doubles as a layer of biometric authentication that lets the robot recognize the person. The sentiment analysis helps the robot determine the emotions an individual is expressing, while the object detection component lets the robot identify and categorize different objects. Together, these algorithms allow the robot to interact with humans in a more natural and intuitive way.
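The sentiment-analysis step described above typically ends with a classifier producing one score per emotion. The sketch below is a minimal illustration, not the team's actual model, of how raw emotion logits could be turned into a probability distribution and a predicted label with a softmax; the logit values and the label set are illustrative assumptions.

```python
import numpy as np

# Hypothetical emotion classes; the actual label set used by NeuroRobo
# is not specified beyond "facial sentiment analysis".
EMOTIONS = ["angry", "happy", "neutral", "sad", "surprised"]

def softmax(logits):
    """Numerically stable softmax over a 1-D array of logits."""
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

def predict_emotion(logits):
    """Map classifier logits to a (label, confidence) pair."""
    probs = softmax(np.asarray(logits, dtype=float))
    idx = int(np.argmax(probs))
    return EMOTIONS[idx], float(probs[idx])

# Example: the second logit dominates, so "happy" wins.
label, conf = predict_emotion([0.2, 2.5, 0.1, -0.3, 0.4])
```

In a real system the logits would come from a convolutional network applied to a detected face crop; everything upstream of the logits is omitted here to keep the sketch self-contained.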

Scene 6 (3m 5s)

[Audio] The BVRIT Hyderabad College of Engineering project improves Human-Robot Interaction through Facial Sentiment Analysis and Object Detection. The Talk to Me module lets people hold natural conversations with the model: speech is captured through a microphone and converted to text, the Blenderbot model generates a response, and the response is converted back to speech. Mel Frequency Cepstral Coefficients extracted from that speech drive the model's lip sync, so the reply is played back with matching mouth movements and the user can interact with the robot in a natural manner.
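As a rough illustration of the Talk to Me flow (microphone audio → text → Blenderbot reply → speech), here is a minimal sketch with stand-in functions. `recognize_speech`, `generate_reply`, and `synthesize_speech` are hypothetical placeholders, not the project's code; a real implementation would back them with a speech recognizer, the Blenderbot model, and a TTS engine plus MFCC-driven lip sync.

```python
def recognize_speech(audio_bytes):
    # Placeholder: a real system would run automatic speech
    # recognition on the microphone audio here.
    return "hello robot"

def generate_reply(user_text):
    # Placeholder: the project feeds the text to a Blenderbot model;
    # a canned response keeps this sketch self-contained.
    return f"You said: {user_text}"

def synthesize_speech(reply_text):
    # Placeholder: a real system would run text-to-speech and extract
    # MFCC features from the waveform to drive the lip-sync animation.
    return reply_text.encode("utf-8")  # stand-in for audio bytes

def talk_to_me(audio_bytes):
    """One conversational turn: audio in, synthesized reply audio out."""
    text = recognize_speech(audio_bytes)
    reply = generate_reply(text)
    return synthesize_speech(reply)

audio_out = talk_to_me(b"\x00\x01")
```

The value of the sketch is the pipeline shape: each stage consumes the previous stage's output, so any component (recognizer, chatbot, TTS) can be swapped independently.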

Scene 7 (3m 44s)

[Audio] BVRIT Hyderabad College of Engineering has developed a project that aims to improve Human-Robot Interaction. The object detection module uses the YOLOv3-tiny model to detect objects in the webcam feed in real time, trained on the Common Objects in Context (COCO) dataset with its 80 labels. It produces an audio output to inform the user about the detected objects, which could be a significant step for human-robot interaction.
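Running YOLOv3-tiny itself requires the pretrained weights file, so as a self-contained illustration here is the post-processing step such detectors typically apply: filtering raw detections by confidence and suppressing overlapping duplicates with non-maximum suppression. The boxes, scores, and thresholds below are synthetic examples, not the project's configuration.

```python
import numpy as np

def iou(box, boxes):
    """Intersection-over-union of one [x1, y1, x2, y2] box
    against an array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter)

def nms(boxes, scores, conf_thresh=0.5, iou_thresh=0.4):
    """Keep high-confidence boxes, greedily suppressing overlaps."""
    mask = scores >= conf_thresh
    boxes, scores = boxes[mask], scores[mask]
    order = np.argsort(scores)[::-1]  # highest score first
    kept = []
    while order.size:
        i = order[0]
        kept.append(boxes[i])
        rest = order[1:]
        overlaps = iou(boxes[i], boxes[rest])
        order = rest[overlaps < iou_thresh]  # drop near-duplicates
    return np.array(kept)

# Two overlapping detections of one object plus one distinct object:
# NMS should keep exactly two boxes.
boxes = np.array([[10, 10, 50, 50], [12, 12, 52, 52], [100, 100, 140, 140]], float)
scores = np.array([0.9, 0.8, 0.7])
kept = nms(boxes, scores)
```

In practice OpenCV's `dnn` module loads the Darknet config and weights and provides an equivalent `cv2.dnn.NMSBoxes` helper, so this step rarely needs to be hand-written.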

Scene 8 (4m 19s)

[Audio] Our B.Tech students and Ph.D. student have collaborated on a project to improve Human-Robot Interaction using Facial Sentiment Analysis and Object Detection. One feature of the project is the I Am a Mimic module, which leverages SparkAR technology to map a user's facial movements onto the facial model's actions in real time, permitting a more immersive and organic interaction with the robot. The user can also switch to a video input, and the model then replicates the movements and actions seen in the video.

Scene 9 (4m 55s)

[Audio] In August 2023, four B.Tech students and one Ph.D. student from the BVRIT Hyderabad College of Engineering developed a project focused on improving Human-Robot Interaction by implementing Facial Sentiment Analysis and Object Detection. To accomplish this, they used Mel Frequency Cepstral Coefficients for lip sync, the Blenderbot model with the Blended Skill Talk dataset for conversation (together with speech recognition and text tokenisation), and the YOLOv3 algorithm with OpenCV to detect objects. The objective is to make it possible for robots to detect and respond to human emotions through facial expressions.
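Since Mel Frequency Cepstral Coefficients are the feature driving the lip sync, here is a compact sketch of the standard MFCC pipeline (framing, windowing, power spectrum, mel filterbank, log, DCT). The frame length, hop, and filter counts are conventional defaults for 16 kHz speech, not the project's documented settings.

```python
import numpy as np
from scipy.fftpack import dct

def hz_to_mel(hz):
    return 2595.0 * np.log10(1.0 + hz / 700.0)

def mel_to_hz(mel):
    return 700.0 * (10.0 ** (mel / 2595.0) - 1.0)

def mfcc(signal, sample_rate=16000, frame_len=400, hop=160,
         n_filters=26, n_ceps=13, n_fft=512):
    """Standard MFCC pipeline: frame -> window -> power spectrum ->
    mel filterbank -> log -> DCT."""
    # Slice the signal into overlapping, Hamming-windowed frames.
    n_frames = 1 + (len(signal) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hamming(frame_len)

    # Power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft

    # Triangular filters spaced evenly on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sample_rate / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sample_rate).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for m in range(1, n_filters + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, left:center] = (np.arange(left, center) - left) / max(center - left, 1)
        fbank[m - 1, center:right] = (right - np.arange(center, right)) / max(right - center, 1)

    # Log filterbank energies, then DCT to decorrelate the coefficients.
    feats = np.log(power @ fbank.T + 1e-10)
    return dct(feats, type=2, axis=1, norm="ortho")[:, :n_ceps]

# One second of a 440 Hz tone at 16 kHz yields 98 frames of 13 coefficients.
t = np.arange(16000) / 16000.0
coeffs = mfcc(np.sin(2 * np.pi * 440 * t))
```

Libraries such as `librosa` or `python_speech_features` implement this same pipeline with more options; the version above is only meant to make each stage visible.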

Scene 10 (5m 40s)

Home Page.

Scene 11 (5m 46s)

[Audio] "The team worked for two months to build a model to identify objects in the background, using a deep learning approach. They combined existing facial recognition algorithms and computer vision techniques to create a model that was able to recognize objects. The team also developed a novel sentiment analysis algorithm to interpret the facial expressions of the test subjects. Finally, they created an interface to allow users to interact with the model. After testing the model for accuracy and robustness, the team was able to successfully demonstrate its functionality."

Scene 12 (7m 0s)

[Audio] The BVRIT team has designed a Human-Robot Interaction system that enables robots to interpret human conversations through facial sentiment analysis and object detection. By examining facial features and expressions, the robot can pick up not only the spoken words but also the emotions behind them. The system also allows the robot to accurately recognize objects in the environment, improving its capability to interact with humans. Experiments conducted to make the system more accurate and efficient have produced encouraging outcomes. Thanks to this system, robots can comprehend and interact with humans better, increasing their level of realism.

Scene 13 (7m 46s)

[Audio] The BVRIT Hyderabad College of Engineering team has developed a module called 'I Am a Mimic'. With it, they intend to enhance the interaction between humans and robots using facial sentiment analysis and object detection. Comprising four B.Tech students and one Ph.D. student, the team has been working on the project since August 2023. It is a complicated task, requiring facial expression analysis and object detection so that robots can interact more intelligibly with humans. The project has the capability to transform human-robot interaction and will benefit both humans and robots in the future.

Scene 14 (8m 34s)

[Audio] Five students from the BVRIT Hyderabad College of Engineering, led by Mukhtar Sofi, a Ph.D. student whose thesis has recently been submitted, are working on an ambitious project combining facial sentiment analysis and object detection to enhance Human-Robot Interaction. The team includes G. Himabindu, P. Aparna, and P. Sai Sri Poojitha, all Bachelor of Technology students. This project could potentially shape the future of robotic technology.

Scene 15 (9m 8s)

[Audio] Four B.Tech students and one Ph.D. student from the BVRIT Hyderabad College of Engineering worked together from August 2023 to enhance Human-Robot Interaction with Facial Sentiment Analysis and Object Detection. The project has been a great experience for the students, and they hope their work will lead to improved Human-Robot Interaction in the near future. They are open to any questions or suggestions.