Reducing SDN Switch-Controller Overhead Using Off-Policy Reinforcement Learning


Scene 1 (0s)

University of Benghazi, Faculty of Information Technology, Network and Computer Communications Department. A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science: Reducing SDN Switch-Controller Overhead Using Off-Policy Reinforcement Learning. Submitted by Nagi A. Mohamed Nagem. Supervised by Dr. Farag Sallabi. March 2024.

Scene 2 (18s)

Introduction. 1. Growing Networking Needs. 2. Limitations of Traditional Approaches.

Scene 3 (58s)

SDN Applications and Challenges. Network Flexibility.

Scene 4 (1m 18s)

Machine Learning. A subset of AI techniques that uses statistical methods to enable machines to improve with experience.

Scene 5 (1m 50s)

Reinforcement Learning. Agent. The entity that learns and interacts with the environment is called an agent. The agent makes decisions and takes actions to achieve its goals.

Scene 6 (2m 17s)

RL Concepts. Value Function (V or Q). The value function estimates the expected cumulative reward that an agent can obtain from a particular state (V) or state-action pair (Q).
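To make "expected cumulative reward" concrete, here is a minimal sketch (not taken from the thesis) of the discounted return that V(s) and Q(s, a) estimate in expectation; the rewards and discount factor are placeholders.

    # Discounted return: the quantity V(s) and Q(s, a) estimate in expectation.
    def discounted_return(rewards, gamma=0.99):
        g = 0.0
        for r in reversed(rewards):   # accumulate from the last reward backwards
            g = r + gamma * g
        return g

    print(discounted_return([1.0, 0.0, 2.0]))  # 1 + 0.99*0 + 0.99**2 * 2 = 2.9602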

Scene 7 (2m 37s)

Model-Based vs. Model-Free Methods. 1. MDP. 2. Off-policy and on-policy.

Scene 8 (2m 48s)

Research Questions. 1. TCAM Efficiency. 2. Optimization Methods.

Scene 9 (3m 9s)

Research Objectives. Study SDN. Study SDN, its applications, and the challenges it presents in the networking field.

Scene 10 (3m 33s)

Research Methodology. 1. Initial Study. Begin with an initial study to formulate key research questions aimed at improving TCAM usage in SDN switches.

Scene 11 (4m 6s)

Research Scope. Off-Policy RL Model. Complete Research Cycle.

Scene 12 (4m 27s)

Research Significance. Flow Rule Management. Addressing the challenge of managing a huge number of traffic flows and associated rules in large-scale networks.

Scene 13 (4m 50s)

TCAM and SDN Switches. TCAM Role in SDN. Efficiency Challenges.

Scene 14 (5m 9s)

Communication Overhead in SDN. Communication overhead remains a major challenge in SDN, impacting the connection between the control and data planes. Innovative solutions like DevoFlow and DIFANE have been developed, each proposing unique mechanisms to enhance network efficiency and reduce the overhead on SDN controllers.

Scene 15 (5m 41s)

Efficient management of flow entries is important for SDN performance. Dynamic timeout and eviction strategies have been proposed to optimize flow table utilization and adapt to network traffic patterns.
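As a loose illustration of the timeout idea (not the thesis's exact mechanism), a POX component can attach idle and hard timeouts when it installs a flow entry, so unused entries are evicted automatically; the timeout values below are placeholders.

    import pox.openflow.libopenflow_01 as of

    # Install a flow entry with illustrative idle/hard timeouts from a Packet-In handler.
    def install_rule(event, out_port, idle=10, hard=90):
        msg = of.ofp_flow_mod()
        msg.match = of.ofp_match.from_packet(event.parsed, event.port)
        msg.idle_timeout = idle                # evicted after 10 s without matching packets
        msg.hard_timeout = hard                # evicted unconditionally after 90 s
        msg.flags |= of.OFPFF_SEND_FLOW_REM    # ask the switch to report the removal
        msg.actions.append(of.ofp_action_output(port=out_port))
        event.connection.send(msg)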

Scene 16 (6m 26s)

Aggregating flow rules is a strategic approach to managing SDN's limited TCAM capacity. Techniques like IDFA and Agg-ExTable have been developed to reduce flow entries and enhance switch performance.
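As a simplified illustration of the aggregation idea (not the actual IDFA or Agg-ExTable algorithms), adjacent destination prefixes that share the same forwarding action can be collapsed into one wider prefix, shrinking the flow table; Python's standard ipaddress module shows the effect.

    import ipaddress

    # Two adjacent /25 destination prefixes with the same action collapse into one /24,
    # halving the TCAM entries needed for this pair of rules.
    rules = [ipaddress.ip_network("10.0.1.0/25"), ipaddress.ip_network("10.0.1.128/25")]
    aggregated = list(ipaddress.collapse_addresses(rules))
    print(aggregated)   # [IPv4Network('10.0.1.0/24')]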

Scene 17 (6m 56s)

Splitting and distributing flow rules across network switches is a technique to address TCAM shortages. Approaches like SA, SSP, and OFFICER aim to optimize rule allocation and network efficiency.

Scene 18 (7m 19s)

Machine Learning (ML) offers a revolutionary approach to SDN, providing intelligent solutions for flow table management, traffic classification, and resource optimization.

Scene 19 (8m 1s)

Experimental Setup. 1. Controlled Environment.

Scene 20 (8m 27s)

Setting up the Experimental Environment. Ubuntu Setup.

Scene 21 (9m 36s)

Network Topology. [Diagram of the experimental network topology.]

Scene 22 (9m 58s)

Data Collection in a Mininet-POX Environment.
Execution: Executing experiments with TCP traffic simulation using IPERF and recording flow entries.
Storage: Storing data in a structured CSV file for organized analysis.
Flow-Removed Event: Collecting detailed statistics about network flows, focusing on the duration and frequency of each flow.
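A minimal sketch of the Flow-Removed collection step, assuming a POX component; the CSV path and column set are illustrative assumptions, not the exact script used in the thesis.

    import csv
    from pox.core import core

    # Append one CSV row per Flow-Removed event reported by the switch.
    log_file = open("flow_removed.csv", "a")
    writer = csv.writer(log_file)
    writer.writerow(["duration_sec", "packet_count", "byte_count", "reason"])

    def _handle_FlowRemoved(event):
        fr = event.ofp   # the ofp_flow_removed message sent by the switch
        writer.writerow([fr.duration_sec, fr.packet_count, fr.byte_count, fr.reason])
        log_file.flush()

    def launch():
        core.openflow.addListenerByName("FlowRemoved", _handle_FlowRemoved)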

Scene 23 (10m 15s)

POX Controller Modifications for Overhead Measurement.
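One way such a modification could look is a small POX component that counts Packet-In messages, since each one represents a switch-to-controller interaction; this is a hedged sketch rather than the thesis's actual code, and the logging interval is arbitrary.

    from pox.core import core

    packet_in_count = 0   # running total of Packet-In messages seen by the controller

    def _handle_PacketIn(event):
        global packet_in_count
        packet_in_count += 1
        if packet_in_count % 100 == 0:
            core.getLogger().info("Packet-In messages so far: %d", packet_in_count)

    def launch():
        core.openflow.addListenerByName("PacketIn", _handle_PacketIn)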

Scene 24 (10m 40s)

Dynamic Flow Entry Insertion and Network Performance.

Scene 25 (11m 3s)

Utilizing RL and Interactive Tools for SDN Flow Management.

Scene 26 (11m 21s)

RL Agent State Space, Action Space and Reward Function.

Scene 27 (11m 54s)

Off-Policy RL with DQN. Off-Policy Learning. Off-policy RL algorithms, such as Q-learning and DQN, learn the value of the optimal policy independently of the agent's actions.
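The off-policy property is easiest to see in the tabular Q-learning update, sketched below in Python: the target bootstraps from the greedy action in the next state, regardless of which action the behaviour policy actually chose. The state and action names are hypothetical placeholders, not the thesis's design.

    from collections import defaultdict

    # Off-policy update: the target uses max over next actions (the greedy target policy).
    def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
        best_next = max(Q[s_next].values()) if Q[s_next] else 0.0
        td_target = r + gamma * best_next
        Q[s][a] += alpha * (td_target - Q[s][a])

    # Hypothetical state/action names, purely for illustration.
    Q = defaultdict(lambda: defaultdict(float))
    q_learning_update(Q, s="low_load", a="increase_timeout", r=1.0, s_next="low_load")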

Scene 28 (12m 17s)

Deep Q-Network (DQN) Algorithm Overview.
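A compact DQN sketch, assuming PyTorch and placeholder sizes and hyper-parameters (not the thesis's settings), showing the usual ingredients: an online Q-network with a target copy, an epsilon-greedy behaviour policy, an experience replay buffer, and a minibatch update toward r + gamma * max_a' Q_target(s', a').

    import random
    from collections import deque
    import torch
    import torch.nn as nn

    state_dim, n_actions = 4, 3          # placeholder dimensions
    q_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
    target_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
    target_net.load_state_dict(q_net.state_dict())    # target starts as a copy
    optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
    replay = deque(maxlen=10_000)                      # experience replay buffer
    gamma, epsilon = 0.99, 0.1

    def select_action(state):
        # Epsilon-greedy behaviour policy over the online network's Q-values.
        if random.random() < epsilon:
            return random.randrange(n_actions)
        with torch.no_grad():
            return int(q_net(torch.as_tensor(state, dtype=torch.float32)).argmax())

    def train_step(batch_size=32):
        # One gradient step toward r + gamma * max_a' Q_target(s', a').
        if len(replay) < batch_size:
            return
        s, a, r, s2, done = zip(*random.sample(replay, batch_size))
        s = torch.as_tensor(s, dtype=torch.float32)
        a = torch.as_tensor(a, dtype=torch.int64).unsqueeze(1)
        r = torch.as_tensor(r, dtype=torch.float32)
        s2 = torch.as_tensor(s2, dtype=torch.float32)
        done = torch.as_tensor(done, dtype=torch.float32)
        q = q_net(s).gather(1, a).squeeze(1)
        with torch.no_grad():
            target = r + gamma * (1 - done) * target_net(s2).max(1).values
        loss = nn.functional.mse_loss(q, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()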

Scene 29 (12m 42s)

RL Framework Integration. 1. Environment Setup. Configuring the Mininet network and POX controller to emulate the SDN topology and manage network traffic.
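A minimal Mininet script of this kind might look as follows; the single-switch, two-host topology and the controller address are placeholders rather than the thesis's exact setup.

    from mininet.net import Mininet
    from mininet.node import RemoteController
    from mininet.cli import CLI

    # Point Mininet at an external POX controller on the default OpenFlow port.
    net = Mininet(controller=RemoteController)
    net.addController("c0", ip="127.0.0.1", port=6633)
    s1 = net.addSwitch("s1")
    h1, h2 = net.addHost("h1"), net.addHost("h2")
    net.addLink(h1, s1)
    net.addLink(h2, s1)
    net.start()
    CLI(net)     # e.g. run "iperf h1 h2" here to generate TCP traffic
    net.stop()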

Scene 30 (13m 7s)

Training Procedures for the RL Agent. 1. Perception of State.

Scene 31 (13m 32s)

1. Controller Overhead Minimization. The agent demonstrated a remarkable ability to minimize controller overhead, an important element of efficient network management in SDN environments.

Scene 32 (13m 52s)

Deep RL (DQN) Agent's Efficiency. 1. Baseline Scenario.

Scene 33 (14m 5s)

2. 90 Seconds Hard Timeout. With the hard timeout adjusted to 90 seconds, the agent decreased overhead by 45%, showcasing its adaptability and efficiency in rule management.

Scene 34 (14m 19s)

150 Seconds Hard Timeout. [Graph of results for the 150-second hard timeout.]

Scene 35 (14m 32s)

With a hard timeout of 200 seconds, the agent achieved an overhead reduction of 65%, marking a significant achievement in SDN management.

Scene 36 (14m 43s)

Timeout Value (s) | Packet-In Messages
30 | 4822
90 | 3676

Scene 37 (15m 8s)

90 Seconds Timeout. 150 Seconds Timeout. An adjustment to 150 seconds resulted in a 40% reduction in the number of flow sessions, indicating improved efficiency.

Scene 38 (15m 39s)

Before DQN. Before the introduction of the DQN agent, the server and client had a consistent bandwidth of approximately 263 Kbits/sec for mice flows and 25.6 Mbits/sec for elephant flows.

Scene 39 (16m 5s)

Conclusion. 2. Utilization of DQN RL. Our study utilized the off-policy Deep Q-Network reinforcement learning algorithm (DQN RL) to automatically determine which entries should be kept in the switch flow table.

Scene 40 (16m 36s)

Optimizing SDN with DQN RL. Baseline Reduction: 25%.

Scene 41 (16m 58s)

Future Work. Implementing DQN RL Method. This involves implementing the Deep Q-Network (DQN) RL method across a variety of known SDN topologies, incorporating multiple switches, and utilizing different SDN controllers like OpenDaylight and Ryu.

Scene 42 (17m 19s)

Thank You.