Perceptra
Cairo University - Faculty of Engineering - Credit Hour System (CUFE-CHS), 2024
Submitted by: • Shahd Abdelrahman Labib (1190439) • Salma Ahmed (1190253) • Nada Hesham Anwer (1190185) • Mazen Osama (1190164)
Under the supervision of: Prof. Amr Wassal.
[Audio] Welcome to our short demo video on Perceptra,
[Audio] a groundbreaking mobile application designed to make 3D scanning accessible to everyone.
[Audio] Our project focuses on making 3D scanning technology accessible through a mobile application. We utilize photogrammetry techniques to convert 2D images from smartphones into 3D models.
[Audio] Our app is user-friendly, designed for individuals without technical expertise, and aims to provide an affordable alternative to expensive traditional 3D scanners. By making 3D scanning technology accessible and easy to use, we hope to encourage widespread adoption across industries and personal projects. Additionally, we strive for a global reach, promoting technological equity and ensuring that more people can benefit from advanced 3D scanning capabilities.
[Audio] Economically, our project lowers the cost of 3D scanning, making it accessible to small businesses and individuals. It also serves as an educational tool for students and professionals to learn and apply 3D scanning. By bridging the gap between high-end technology and everyday users, our project promotes inclusivity. Furthermore, it acts as a catalyst for new applications and innovations in 3D modeling and related fields.
[Audio] So, how does our app work? The user starts by capturing images of an object, choosing to keep or retake every photo. The user then deletes any blurry images and uploads the remaining photos to the app. The app converts these images into a 3D model, which the user can save and view later within the app.
[Audio] The following video illustrates the sign-up and login process for the app.
[Audio] The following videos show how the user simply uses their smartphone camera to capture images of the object.
[Audio] Here, the user chooses to keep or retake each image, for example if an image is blurry like this one.
[Audio] If the user did not get the chance to retake an image they do not like, they can simply tap the image and press Delete.
[Audio] Once the user is satisfied with the captured images, they name the model and upload the images to the app to start the conversion process.
[Audio] So, how are the 2D images converted into a 3D model?
Input images. [image: a hammer on a tree stump]
[Audio] We start by masking the image, which means highlighting the parts of the image you want to include in the 3D model. It's necessary to focus on the object and remove any unwanted background, ensuring a cleaner and more accurate 3D scan.
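As an illustration only (the transcript does not specify how the app performs masking), the sketch below shows one common way to separate an object from its background using OpenCV's GrabCut. The helper name mask_object and the bounding box rect are hypothetical.

```python
import cv2
import numpy as np

def mask_object(image_path, rect):
    """Roughly separate the foreground object from the background.

    rect is an (x, y, w, h) box drawn around the object. The app's
    actual masking step may use a different method entirely.
    """
    img = cv2.imread(image_path)
    mask = np.zeros(img.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)

    # GrabCut iteratively labels pixels inside the box as foreground or background.
    cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

    # Keep only "definite" and "probable" foreground pixels.
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype("uint8")
    return img * fg[:, :, None]

# Hypothetical usage: masked = mask_object("hammer_01.jpg", rect=(50, 50, 400, 600))
```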
[Audio] We then extract and match features between images. This means identifying and connecting similar points in both images. It's necessary to accurately align and combine the images, creating a precise 3D model.
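A minimal sketch of this step, assuming an OpenCV-style pipeline with SIFT keypoints, a brute-force matcher, and Lowe's ratio test. The helper match_features and the 0.75 ratio threshold are illustrative choices, not necessarily what the app uses.

```python
import cv2

def match_features(img_a_path, img_b_path):
    """Detect keypoints in two images and return the good matches between them."""
    sift = cv2.SIFT_create()
    gray_a = cv2.imread(img_a_path, cv2.IMREAD_GRAYSCALE)
    gray_b = cv2.imread(img_b_path, cv2.IMREAD_GRAYSCALE)
    kp_a, des_a = sift.detectAndCompute(gray_a, None)
    kp_b, des_b = sift.detectAndCompute(gray_b, None)

    # For each descriptor in image A, find its two nearest neighbours in image B.
    matcher = cv2.BFMatcher()
    candidates = matcher.knnMatch(des_a, des_b, k=2)

    # Ratio test: keep a match only if it is clearly better than the runner-up.
    good = []
    for pair in candidates:
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])
    return kp_a, kp_b, good
```

The ratio test discards ambiguous correspondences, which keeps the later geometry estimation from being polluted by false matches.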
[Audio] After feature matching, we use the matched points to determine the object's shape and position, creating a point cloud. A point cloud is a collection of many tiny points in 3D space that represent the surface of the object.
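The sketch below illustrates the idea for a single image pair, assuming known camera intrinsics K: estimate the relative camera pose from the matched points, then triangulate each match into a 3D point. A full photogrammetry pipeline chains many views and refines the result with bundle adjustment; triangulate_pair is a hypothetical helper.

```python
import cv2
import numpy as np

def triangulate_pair(pts_a, pts_b, K):
    """Recover a sparse point cloud from one matched image pair.

    pts_a / pts_b are Nx2 float arrays of matched pixel coordinates
    and K is the 3x3 camera intrinsic matrix.
    """
    # Estimate the essential matrix and the relative camera pose.
    E, inliers = cv2.findEssentialMat(pts_a, pts_b, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts_a, pts_b, K, mask=inliers)

    # Projection matrices: first camera at the origin, second at (R, t).
    P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P1 = K @ np.hstack([R, t])

    # Triangulate: each matched pair of 2D points becomes one 3D point.
    pts_h = cv2.triangulatePoints(P0, P1, pts_a.T, pts_b.T)
    return (pts_h[:3] / pts_h[3]).T  # Nx3 point cloud
```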
[Audio] Here, we can see the models saved in the app for the user to view. We go from the point cloud to our final 3D model by connecting the points with triangles, a process called meshing. Meshing creates a continuous surface that forms the detailed shape of the 3D object.
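A minimal meshing sketch, assuming the point cloud is available as an Nx3 NumPy array and using Open3D's Poisson surface reconstruction as one common meshing approach; mesh_from_points is illustrative, and the app's actual mesher may differ.

```python
import open3d as o3d

def mesh_from_points(xyz):
    """Turn an Nx3 point cloud into a triangle mesh."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(xyz)

    # Poisson reconstruction needs per-point normals.
    pcd.estimate_normals()

    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
    return mesh

# Hypothetical usage:
# o3d.io.write_triangle_mesh("model.ply", mesh_from_points(points))
```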
[Audio] Thank you so much for watching!