[Audio] Good morning everyone. My name is Matt and I am the Information Security Officer. Today we will be discussing a significant cybersecurity risk that is no longer just a concept in science fiction: deepfakes. Deepfakes are media, typically videos or audio recordings, created using artificial intelligence to imitate real people. The technology uses deep learning, specifically generative adversarial networks (GANs), to produce highly realistic but completely fabricated content. While initially developed for entertainment and research purposes, deepfakes have quickly evolved into tools for malicious intent.

In the corporate world, deepfakes can be used to impersonate executives, deceive stakeholders, or steal valuable information. In a notable 2019 incident, fraudsters used an AI-generated voice of a CEO to trick an employee into transferring funds to a fraudulent supplier account, resulting in a loss of $243,000. This is just one example of the harm that deepfakes can cause. The danger lies in how realistic they appear and how easily they can be created. Just imagine receiving a video call or voicemail from your CFO, Kathy, approving a transaction, only to discover later that it was a deepfake. These scams can bypass traditional warning signs and exploit trust, leading to financial loss, reputational damage, and data breaches.

You may be wondering how to tell whether you are seeing or hearing a deepfake. This is a valid concern, because these fabricated media are becoming alarmingly convincing. Let's look at some indicators. In a video, there may be visual clues that something is not right. For instance, the person's facial expressions may not match what they are saying, or their mouth may not sync with their voice. You may also notice unnatural movements, such as strange blinking or a floating or glitching face.
This is because deepfake algorithms still struggle with factors like lighting, shadows, and fine details. As the technology continues to advance, it is essential to stay mindful of the potential dangers of deepfakes and remain vigilant. Thank you for your attention.
[Audio] Slide two continues our discussion of deepfakes in corporate environments. These are manipulated videos or audio recordings that use artificial intelligence to imitate executives, deceive stakeholders, or steal confidential information. Their high level of realism and ease of creation make them a significant threat in the digital world.

So, how can you identify a deepfake? If it is a voice message or phone call, pay attention to the tone and quality of the voice. A deepfake voice tends to sound flat or robotic, lacking natural emotion and often exhibiting strange pauses or unusual speech patterns. The background noise may also seem off, either too quiet or artificially altered.

However, the most telling signs are usually behavioral. Be skeptical of urgent requests, especially if they arrive as video or audio. Always double-check through a secondary channel, such as a direct call, text, or face-to-face conversation. Stay informed and trust your instincts: if something feels off, even in the slightest, report it. If you have any doubts, do not hesitate to reach out to your manager or our IT Information Security Team for assistance.

It is crucial to remember that deepfakes are a 21st-century threat, but our best defence lies in staying vigilant and following the right protocols. Thank you for your attention, and let us now proceed to our final slide.