SOUND-BASED COMPUTER AUTOMATION USING CNN

Published on Slideshow

Scene 1 (0s)

SOUND-BASED COMPUTER AUTOMATION USING CNN. Supervisor: Dr. R. Pari, Asst. Professor, CSE Dept.

Scene 2 (22s)

Follow-up of First Review.

Scene 3 (28s)

Objective. The objective of sound-based computer automation is to enable users to interact with computers and devices through sound-based input. This kind of automation can make technology easier to use for people with disabilities or those who prefer hands-free operation, and can improve productivity and efficiency for all users. To process the input sound to separate the background noise and identify clap sounds. To build a system where we can control slideshows or audio-visual playback by sensing sound signals.

Scene 4 (51s)

Project Description. The project aims to develop a sound-based computer automation system for controlling desktop applications. The system will control music playback, playing or pausing a song using sound signals such as claps or finger snaps to trigger specific actions in the music player. It will also control slideshows by sensing sound signals through the microphone, for example clapping twice to move backward or once to move forward, and control video playback by recognizing sound gestures, pausing or playing on a single clap or snap.
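The clap-count-to-action mapping described above can be sketched as a small dispatcher. This is an illustrative sketch only; the action names and the `dispatch` function are placeholders, not the project's actual code:

```python
# Hypothetical mapping of detected clap counts to media/slideshow actions.
ACTIONS = {
    1: "next_slide",      # single clap: move forward (or toggle play/pause)
    2: "previous_slide",  # double clap: move backward
}

def dispatch(clap_count: int) -> str:
    """Return the action assigned to a detected clap count, or 'ignore'."""
    return ACTIONS.get(clap_count, "ignore")

print(dispatch(1))  # next_slide
print(dispatch(3))  # ignore
```

Keeping the mapping in one table makes it easy to later assign new gestures (e.g. three claps) to new functions without touching the detection code.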

Scene 5 (1m 18s)

Problem Definition. In this project we find a way to use sound gestures to automate processes such as controlling a presentation slideshow or playing/pausing songs and videos. This makes these operations convenient: there is no need to reach for the computer to change a slide or control music playback. We can change slides by simply clapping once when we are done with the current slide.

Scene 6 (1m 38s)

Proposed Solution. Sound-based computer automation refers to the use of sound as a means to control or interact with a computer system. This can be achieved through various means, including sound sensors or other sound-based technologies. This project explores the use of audio signals, such as claps or finger snaps, to trigger specific actions or functions in the media player. We receive sound input (a clap) from the user through the microphone and process it with a machine learning algorithm (CNN) that separates the background noise from the input and identifies only when the input is a clap sound. The function assigned to that processed input is then triggered.
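The detection step in the pipeline above can be illustrated with a minimal stand-in: instead of the CNN, a simple energy test flags frames whose RMS level spikes well above the background noise floor. The threshold values and synthetic signals here are assumptions for the sketch, not the trained model:

```python
import math
import random

def rms(frame):
    """Root-mean-square energy of one audio frame."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def is_clap(frame, noise_floor=0.01, ratio=10.0):
    """Stand-in for the CNN classifier: flag a frame whose energy
    spikes well above the estimated background noise floor."""
    return rms(frame) > ratio * noise_floor

# Synthetic data: quiet background noise vs. the same noise with a loud burst.
rng = random.Random(0)
noise = [0.005 * rng.gauss(0, 1) for _ in range(1024)]
clap = noise[:]
for i in range(100, 200):
    clap[i] += 0.8 * rng.gauss(0, 1)

print(is_clap(noise))  # False
print(is_clap(clap))   # True
```

In the real system the CNN replaces `is_clap`, classifying spectrogram frames so that loud non-clap sounds (speech, door slams) are rejected rather than triggering actions.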

Scene 7 (2m 9s)

Requirement Specification. Software / Hardware.
- Front-end tools: Tkinter
- Back-end tools: Python
- Markup languages: HTML
- Scripting languages: CSS
- Middleware technologies: DSP, PyAutoGUI, PyAudio
- IDE: VS Code
- Simulation tools: Matplotlib
- Network/wireless technologies: -
- Any other: -

Scene 8 (2m 23s)

Requirement Specification - Justification. <Justify the need of the Software/Hardware components> A comparison table may be presented.

Scene 9 (2m 32s)

Architecture Diagram. SOUND-BASED COMPUTER AUTOMATION.

Scene 10 (2m 46s)

Module Description. DIGITAL SIGNAL PROCESSING: Digital signal processing is used extensively in modern communication systems, such as wireless and wired networks, as well as in consumer electronics devices such as smartphones, digital cameras, and audio players. It has revolutionized the way signals are processed and analyzed, enabling faster and more accurate processing of signals in a wide range of applications. AUDIO FILTERING MODULE: An audio filtering module is a component or software tool used to manipulate audio signals by modifying or removing certain frequencies from the audio spectrum. This process, called audio filtering, alters the audio signal to improve its quality or remove unwanted noise. Filters can be used to remove background noise, enhance certain frequencies, or create special audio effects, and can be implemented using different techniques, such as analog filters or digital signal processing (DSP).
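As a concrete illustration of the audio filtering module, a minimal first-order digital high-pass filter suppresses slow background components such as mains hum while letting a clap's sharp transient through. The coefficient and test signal are assumptions for this sketch, not the project's actual filter design:

```python
import math
import random

def highpass(x, alpha=0.9):
    """First-order digital high-pass: y[n] = alpha * (y[n-1] + x[n] - x[n-1]).
    Attenuates slow components (hum, rumble) and keeps sharp transients."""
    y = [0.0] * len(x)
    for n in range(1, len(x)):
        y[n] = alpha * (y[n - 1] + x[n] - x[n - 1])
    return y

fs = 44100
rng = random.Random(1)
# Test signal: 50 Hz hum plus a short broadband burst around sample 1000.
x = [0.5 * math.sin(2 * math.pi * 50 * n / fs) for n in range(fs // 4)]
for n in range(1000, 1100):
    x[n] += 0.5 * rng.gauss(0, 1)

y = highpass(x)
print(max(abs(v) for v in y[5000:6000]) < 0.05)  # hum largely removed: True
print(max(abs(v) for v in y[1000:1100]) > 0.1)   # burst survives: True
```

The same idea scales up to the band-pass and spectral filters mentioned above; cleaning the signal this way before classification makes the clap detector far less sensitive to background noise.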

Scene 11 (3m 27s)

Module Description. PYAUDIO: PyAudio provides Python bindings for PortAudio, giving a way to capture audio data from microphones or other input devices and to play audio back through speakers or other output devices; combined with the standard wave module it can also read and write audio files. PyAudio can be used for a variety of applications, such as recording and analyzing audio data, creating audio processing applications, and building voice-enabled applications. PYAUTOGUI: PyAutoGUI provides a simple and easy-to-use interface for automating GUI interactions, allowing you to automate repetitive tasks such as clicking buttons, entering text, and selecting menus. It also provides functions for controlling the mouse cursor and the keyboard, including pressing keys and simulating mouse movements and clicks.
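The wiring between the two libraries can be sketched as follows. Each chunk returned by a PyAudio stream opened with `format=paInt16` is raw 16-bit PCM bytes; when a chunk's energy crosses a threshold, a PyAutoGUI action (e.g. `pyautogui.press('right')`) is fired. The threshold value and the helper names are assumptions; only the stdlib is used here so the sketch runs without a microphone:

```python
import struct

def frame_energy(raw: bytes) -> float:
    """RMS of one 16-bit little-endian PCM frame, the byte format a
    PyAudio stream returns when opened with format=paInt16."""
    samples = struct.unpack(f"<{len(raw) // 2}h", raw)
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

def on_frame(raw: bytes, threshold: float = 3000.0, trigger=lambda: None) -> bool:
    """Fire the assigned action when a frame crosses the energy threshold.
    In the real system, trigger could be: lambda: pyautogui.press('right')."""
    if frame_energy(raw) > threshold:
        trigger()
        return True
    return False

quiet = struct.pack("<4h", 10, -12, 8, -9)           # near-silence
loud = struct.pack("<4h", 9000, -11000, 10000, -8000)  # clap-like burst
print(on_frame(quiet))  # False
print(on_frame(loud))   # True
```

Injecting `trigger` as a callable keeps the audio loop testable and lets the same detector drive different actions (slideshow keys, media play/pause) without modification.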

Scene 12 (4m 1s)

Database Design. Database schema for database-oriented projects.

Scene 13 (4m 8s)

Algorithm Explanation. Explain the algorithm used - steps/pseudocode/flowchart, in different slides.

Scene 14 (4m 17s)

Implementation. Explain the program logic (part of coding) used, in different slides.

Scene 15 (4m 25s)

Screenshots.

Scene 16 (4m 35s)

Conclusion. In conclusion, sound-gesture recognition is an innovative technology that allows users to control audio-visual playback and slideshows without touching any physical buttons or screens. Sound gestures provide a hands-free and intuitive way to interact with background activities. This type of sound automation can make technology easier to use for people with disabilities or those who prefer hands-free operation, and can improve productivity and efficiency for all users. The project can be further extended to differentiate between various sound signals and perform additional functions, such as seeking within a video (fast-forwarding) and increasing or decreasing the volume.

Scene 17 (5m 3s)

References.
1. Sunčica Milivojša, Sandra Ivanović, Tatjana Erić, Marija Antić, Nikola Smiljković, March 2017, "Implementation of Voice Control Interface for Smart Home Automation System".
2. Anjali Baliyan, 2022, "Sound Control Home Automation".
3. Yufei Xia, Chengzhang Qu, 2021, "Design and Implementation of a Voice Controlled Music Player System Based on iFLYTEK Open Platform".
4. Dong Myung Lee, Tae-Wan Kim, Ho Chul Lee, Changmin Jeong, Gi Ho Nam, ICAIC 2020, "A Sound Source-based Intelligent Context Awareness System using CNN".

Scene 18 (5m 39s)

THANK YOU.