Reducing SDN Switch-Controller Overhead using Off-Policy Reinforcement Learning


Scene 1 (0s)

[Virtual Presenter] This presentation discusses the research project conducted by Nagi A Mohamed Nagem, which aimed at reducing SDN Switch-Controller overhead through Off-Policy Reinforcement Learning. The project was submitted in March 2024 to the Network and Computer Communications Department, Faculty of Information Technology, University of Benghazi, in partial fulfillment of the requirements for the degree of Master of Science. The project was supervised by Dr Farag Sallabi. We will explore the results and implications of this research in the coming slides.

Scene 2 (38s)

[Audio] S-D-N, or Software-Defined Networking, is gaining importance given the need for efficient big data processing. This thesis focuses on optimizing a vital part of S-D-N networks: the T-C-A-M, or Ternary Content-Addressable Memory. Using off-policy reinforcement learning methods, this research strives to reduce control plane overhead and improve overall network performance.

Scene 3 (1m 6s)

[Audio] Today we will discuss the impact of S-D-N technology on network flexibility, scalability concerns, and security. S-D-N gives us the power to dynamically customize our networks to fit a variety of requirements. By centralizing control and network configuration, we can easily expand and manage larger-scale networks. With that centralized control comes the responsibility of keeping the network secure: strong mechanisms must be implemented to guard against malicious activity. As we can see, S-D-N is a powerful tool for managing modern networks.

Scene 4 (1m 45s)

[Audio] Reinforcement Learning is an AI technique in which agents interact with their environment and learn from their mistakes. Its goal is to maximize the rewards given by the environment over time, optimizing an agent's behaviour for its specific task. It uses algorithms such as Q-Learning and Deep Q-Learning to learn the best action to take in a given situation, and it is applied in areas like robotics, autonomous navigation, and game playing.
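
To make the Q-Learning idea concrete, here is a minimal tabular sketch; the state and action counts, learning rate, and discount factor are illustrative assumptions, not values from the thesis.

```python
import numpy as np

# Minimal tabular Q-learning sketch. The state/action sizes and
# hyperparameters below are illustrative assumptions.
N_STATES, N_ACTIONS = 10, 4
ALPHA, GAMMA = 0.1, 0.9              # learning rate, discount factor

Q = np.zeros((N_STATES, N_ACTIONS))  # Q-table: expected return per (s, a)

def q_learning_update(state, action, reward, next_state):
    """One Q-learning step: move Q(s, a) toward the bootstrapped target."""
    td_target = reward + GAMMA * Q[next_state].max()
    Q[state, action] += ALPHA * (td_target - Q[state, action])
```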

Scene 5 (2m 18s)

[Audio] The presentation we are discussing today focuses on the use of off-policy reinforcement learning to reduce SDN switch-controller overhead. Reinforcement learning is an artificial intelligence technique in which an agent interacts with an environment to reach a goal. The agent sets its own objectives, takes actions to reach them, and receives feedback from the environment in the form of rewards or penalties based on its decisions. That feedback depends on the current state of the environment, or the 'State': the information the agent uses to make decisions and take action. Using this information, the agent can make the best decisions and take the right actions to achieve its goal.

Scene 6 (2m 59s)

[Audio] This thesis examines how off-policy reinforcement learning can be used to reduce the overhead of S-D-N switch controllers. Reinforcement learning is a branch of machine learning in which an agent acts in an environment to maximize a reward. Building upon concepts such as the value function and the exploration-versus-exploitation trade-off, algorithms can help the agent learn and refine its policy. Using these algorithms, S-D-N switch controllers can be made more efficient and networks used more effectively.
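
As a concrete illustration of the exploration-versus-exploitation trade-off, here is a hedged sketch of the common epsilon-greedy rule; the epsilon value and the Q-table layout are assumptions carried over from the earlier sketch.

```python
import random
import numpy as np

def epsilon_greedy(Q, state, n_actions, epsilon=0.1):
    """Explore with probability epsilon; otherwise exploit current estimates."""
    if random.random() < epsilon:
        return random.randrange(n_actions)   # explore: try a random action
    return int(np.argmax(Q[state]))          # exploit: best-known action
```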

Scene 7 (3m 35s)

[Audio] The presentation titled "Reducing SDN Switch-Controller Overhead using Off-Policy Reinforcement Learning" compares Model-Based and Model-Free methods. Model-Based methods rely on a mathematical model of the system, while Model-Free methods learn directly from trial-and-error exploration without building such a model. Nagi A Mohamed Nagem studied two types of learning algorithms: off-policy and on-policy. Off-policy algorithms learn a target policy that can differ from the behavior policy used to collect the data, whereas on-policy algorithms evaluate and improve the same policy they use to act.
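
The off-policy/on-policy distinction is easiest to see in the update targets of Q-learning and SARSA; this sketch assumes the numpy Q-table from the earlier example.

```python
# Off-policy (Q-learning): bootstrap from the greedy action in the next
# state, regardless of what the behavior policy actually does next.
def q_learning_target(Q, reward, next_state, gamma=0.9):
    return reward + gamma * Q[next_state].max()

# On-policy (SARSA): bootstrap from the action the current policy really
# takes next, so learning depends on the behavior policy itself.
def sarsa_target(Q, reward, next_state, next_action, gamma=0.9):
    return reward + gamma * Q[next_state, next_action]
```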

Scene 8 (4m 11s)

[Audio] This research aims to identify methods to improve the efficiency of T-C-A-M in S-D-N switches. It looks into algorithms and techniques to optimize T-C-A-M utilization and implementation, and it seeks to measure the performance impact of such strategies on T-C-A-M efficiency and S-D-N switch operation. The results of this research will be useful for optimizing T-C-A-M usage in S-D-N switches.

Scene 9 (4m 38s)

[Audio] I am conducting research on software-defined networking (S-D-N) and its applications, as well as the challenges it poses in the networking field. Moreover, I plan to explore the use of reinforcement learning algorithms for S-D-N to enhance network management. To do so, I will use Mininet to simulate an S-D-N network for practical experiments. Ultimately, I intend to create and run an application to determine the best approach for dealing with the T-C-A-M problem via off-policy reinforcement learning.

Scene 10 (5m 13s)

[Audio] I propose to reduce the overhead of the S-D-N switch-controller channel by using off-policy reinforcement learning. To begin, I will identify key research questions regarding T-C-A-M usage in S-D-N switches. Research objectives will be established to develop an efficient algorithm that can manage S-D-N switch flow entries. A literature review will be conducted to understand S-D-N architecture, applications, challenges, and reinforcement learning algorithms. An experimental environment using Mininet, Jupyter, and TensorFlow will be created to test the algorithm. The core of my research will be to develop an off-policy reinforcement learning algorithm that manages SDN flow entries while considering the TCAM's limitations.

Scene 11 (5m 59s)

[Audio] This thesis investigates the possibility of decreasing SDN switch-controller overhead by using an off-policy reinforcement learning model. To start, the research scrutinizes the possibility of improving the management of flow entries by optimizing the efficiency of TCAMs. Subsequently, a suitable RL model is designed and implemented in order to appraise the efficiency of the proposed model. Ultimately, the research endeavors to reduce SDN switch-controller overhead and improve T-C-A-M efficiency through this off-policy reinforcement learning model.

Scene 12 (6m 37s)

[Audio] Our research aims to reduce the overhead of S-D-N switch-controller connections by employing off-policy reinforcement learning. This technique can be used to effectively manage a large number of traffic flows and associated rules, enabling more efficient operation of the network. We are developing a model to minimize the number of rules installed on switches, which would result in better use of network resources and lower overhead for network management.

Scene 13 (7m 6s)

[Audio] T-C-A-M and S-D-N switches can play an important role in reducing control plane overhead. These components, however, have limited storage capacity, which calls for advanced algorithms to manage flow rules. Although these algorithms improve searching operations, they are also energy intensive. Off-policy RL techniques can provide a solution to this problem by allowing more flexible network configurations, simplifying rule storage, and reducing control plane overhead. In March 2024, Nagi A Mohamed Nagem submitted a thesis to the Network and Computer Communications Department, Faculty of Information Technology, University of Benghazi, suggesting a solution that utilises off-policy reinforcement learning to reduce SDN switch-controller overhead.

Scene 14 (7m 55s)

[Audio] This presentation focuses on reducing SDN switch-controller overhead using off-policy reinforcement learning. Various studies have been conducted to reduce S-D-N communication overhead: Curtis A R et al proposed fine-grained wildcards and a C-L-O-N-E flag; Yu M et al proposed authority switches to minimize message volume and controller overhead; and Favaro et al proposed a blackhole mechanism to decrease redundant switch-controller traffic. We will discuss these existing techniques and propose a new mechanism to further reduce S-D-N switch-controller overhead using off-policy reinforcement learning.

Scene 15 (8m 35s)

[Audio] Off-policy reinforcement learning has become a widespread technique for solving numerous hard problems. We have studied the use of reinforcement learning to tackle the S-D-N switch-controller overhead issue. Our study has shown that, using a reinforcement learning approach, switch-controller overhead can be greatly reduced without compromising the effective use of available resources.

Scene 16 (9m 0s)

[Audio] Flow rule aggregation is a key technique for managing SDN's limited T-C-A-M capacity. To reduce flow entries and enhance switch performance, In-switch Dynamic Flow Aggregation (I-D-F-A) and Agg-ExTable have been developed. The Quine-McCluskey algorithm, Hidden Markov Models, the MINNIE algorithm, and flow entry compression are among the numerous works in this space, used to reduce flow processing time, maximize the utilization of S-D-N switches' flow tables, and compress matching headers.

Scene 17 (9m 34s)

[Audio] Nagi A Mohamed Nagem's thesis aims to lessen the control overhead of software-defined networking switch controllers. To do so, it proposes splitting and distributing flow rules across multiple network switches, a technique also used by Shue et al and Nguyen et al, both of whom address T-C-A-M shortages and optimize rule allocation and network performance. By optimizing the flow rule distribution, it is possible to reduce the control overhead of switch controllers and improve network performance at the same time.

Scene 18 (10m 9s)

[Audio] S-D-N can leverage machine learning algorithms to greatly improve traffic processing, flow table management, and resource optimization. This slide presents various machine learning techniques that have been proposed for SDN. These techniques can help S-D-N determine which flows should be cached, which flows should be evicted, and how flows should be scheduled for optimal usage. Research findings demonstrate that these methods can minimize control plane overhead and resource utilization while maximizing the flow table hit rate.

Scene 19 (10m 46s)

[Audio] We created an experimental setup to evaluate the efficiency of off-policy reinforcement learning in S-D-N flow entry management. This setup enabled us to investigate how our theoretical ideas can be applied in practice, assessing and demonstrating the effectiveness of the proposed off-policy RL approach.

Scene 20 (11m 9s)

[Audio] For the experimental environment, Ubuntu was set up as the open-source operating system and Mininet was installed as a network emulator. This allows us to quickly and efficiently create virtual networks with hosts, switches, and controllers, providing a realistic environment for testing S-D-N concepts. Finally, the POX controller was set up to manage the S-D-N network.

Scene 21 (11m 37s)

[Audio] This slide focuses on the network topology used in the thesis by Nagi A Mohamed Nagem. It illustrates three primary components: hosts, a controller, and an OpenFlow switch. The controller communicates with the OpenFlow switch, sending command instructions and receiving statistics, and the OpenFlow switch manages communication between the hosts connected to it.
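
A minimal Mininet sketch of the topology described here, assuming one switch, three hosts, and an external POX controller on the default port; the host count and the controller address are assumptions, not values from the thesis.

```python
from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.topo import SingleSwitchTopo

def build_network(num_hosts=3):
    """One OpenFlow switch, num_hosts hosts, external (POX) controller."""
    net = Mininet(topo=SingleSwitchTopo(k=num_hosts), controller=None)
    net.addController('c0', controller=RemoteController,
                      ip='127.0.0.1', port=6633)  # POX's default listen port
    return net

if __name__ == '__main__':
    net = build_network()
    net.start()
    net.pingAll()   # verify host-to-host connectivity through the switch
    net.stop()
```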

Scene 22 (12m 1s)

[Audio] Data collection is a crucial part of the proposed framework. Experiments were carried out in a Mininet-POX environment, during which vital data such as the duration and frequency of each flow was collected and stored. The Mininet-POX environment allowed us to conduct the experiments and acquire detailed flow-removed events, with traffic generated by the I-P-E-R-F tool. The obtained data was structured into a C-S-V file for orderly analysis.
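
A hedged sketch of how such collection could look as a POX component: it listens for flow-removed messages and appends per-flow statistics to a C-S-V file. The filename and column choice are assumptions.

```python
import csv
from pox.core import core

log = core.getLogger()

def _handle_FlowRemoved(event):
    stats = event.ofp   # ofp_flow_removed message sent by the switch
    with open('flow_stats.csv', 'a') as f:
        # duration and traffic counters for the expired flow entry
        csv.writer(f).writerow([stats.duration_sec,
                                stats.packet_count,
                                stats.byte_count])

def launch():
    core.openflow.addListenerByName("FlowRemoved", _handle_FlowRemoved)
    log.info("flow-removed CSV logger registered")
```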

Scene 23 (12m 29s)

[Audio] The connectionUp event was used to measure the controller-switch interaction time, giving an indication of the total time taken in the connection establishment process. The research by Nagi A Mohamed Nagem involved modifying the POX controller to minimize S-D-N switch-controller overhead through off-policy reinforcement learning. To achieve this, Nagi adjusted the handle_PacketIn event, used connectionDown events to calculate communication overhead, and employed connectionUp events to measure controller-switch interaction time. The results demonstrated a significant reduction in communication overhead, leading to enhanced network performance.
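
A sketch of the measurement hooks described, assuming POX's standard event names; the counters and bookkeeping below are illustrative, not the thesis's exact instrumentation.

```python
import time
from pox.core import core

packet_in_count = 0   # proxy for switch-to-controller overhead
connect_times = {}    # switch dpid -> time the connection came up

def _handle_ConnectionUp(event):
    connect_times[event.dpid] = time.time()

def _handle_ConnectionDown(event):
    uptime = time.time() - connect_times.pop(event.dpid, time.time())
    core.getLogger().info("dpid %s: %d PacketIns over %.1fs",
                          event.dpid, packet_in_count, uptime)

def _handle_PacketIn(event):
    global packet_in_count
    packet_in_count += 1    # each PacketIn is a controller round-trip

def launch():
    core.openflow.addListenerByName("ConnectionUp", _handle_ConnectionUp)
    core.openflow.addListenerByName("ConnectionDown", _handle_ConnectionDown)
    core.openflow.addListenerByName("PacketIn", _handle_PacketIn)
```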

Scene 24 (13m 10s)

[Audio] Our research has revealed that it is feasible to select optimal flow entries dynamically by customizing the connection-up event in S-D-N and leveraging the capabilities of an RL-based agent. This allows us to optimize switch behavior, reducing congestion and improving overall network performance. We believe this approach offers a valid solution for effectively managing network flows, furthering our quest for an efficient and effective system.

Scene 25 (13m 39s)

[Audio] Our research focused on using reinforcement learning and interactive tools to manage S-D-N flows more effectively. To accomplish this, we used Jupyter Notebook as a flexible and user-friendly interactive computing environment. We also took advantage of Keras for neural network model construction and implemented our RL agent with TensorFlow for computational efficiency. By doing so, we were able to reduce S-D-N switch-controller overhead and maximize network performance.
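
A small Keras Q-network of the kind this tooling supports; the layer sizes are assumptions, while the two-parameter state and nine actions follow the model described on the next slide.

```python
import tensorflow as tf

STATE_DIM, N_ACTIONS = 2, 9   # (flow match frequency, flow recentness) -> 9 actions

def build_q_network():
    """Map a state to one estimated Q-value per action."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(STATE_DIM,)),
        tf.keras.layers.Dense(32, activation='relu'),
        tf.keras.layers.Dense(32, activation='relu'),
        tf.keras.layers.Dense(N_ACTIONS)   # linear output: Q-values
    ])
    model.compile(optimizer='adam', loss='mse')
    return model
```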

Scene 26 (14m 10s)

[Audio] Nagi A Mohamed Nagem has proposed a model that uses an off-policy reinforcement learning technique to reduce S-D-N switch-controller overhead. The model's action space consists of nine actions that influence the network's flow management. Its state space is composed of combinations of two parameters, flow match frequency and flow recentness, which capture how often a flow is matched and how recently it was matched. The RL agent was designed to reduce control plane overhead while accounting for flow match frequency and flow recentness.
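
As a rough illustration of that state space, the two parameters could be discretized and combined as below; the bucket labels and action identifiers are hypothetical, since the slide does not enumerate them.

```python
import itertools

# Hypothetical discretization of the two state parameters.
FREQ_LEVELS = ['low', 'medium', 'high']        # flow match frequency
RECENT_LEVELS = ['stale', 'recent', 'fresh']   # flow recentness
STATES = list(itertools.product(FREQ_LEVELS, RECENT_LEVELS))

# Nine actions influencing flow management (e.g. timeout choices);
# the actual actions are defined in the thesis, not listed on this slide.
ACTIONS = list(range(9))
```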

Scene 27 (14m 45s)

[Audio] In this slide we discuss the topic of the thesis submitted by Nagi A Mohamed Nagem in March 2024. Specifically, the goal is to reduce SDN switch-controller overhead by utilizing off-policy reinforcement learning. We will delve into the methods of off-policy learning and experience replay, and into how the Deep Q-Network integrates neural networks and Q-learning to address the challenges of high-dimensional state spaces. This combination of strategies is expected to effectively improve the efficiency of the SDN switch-controller.
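
Experience replay is typically implemented as a bounded buffer of transitions sampled uniformly at random, as in this minimal sketch; the capacity is an assumption.

```python
import random
from collections import deque

class ReplayBuffer:
    """Store transitions; sample random mini-batches to de-correlate updates."""
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)   # oldest entries fall off

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)
```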

Scene 28 (15m 23s)

[Audio] The D-Q-N, or Deep Q-Network, algorithm is a powerful tool for reducing switch-controller overhead in software-defined networking. It uses a neural network to estimate the best actions for a given state and stabilizes this Q-value estimation with a target network. Through iterative refinement, D-Q-N is able to learn optimal policies over time, thus reducing switch-controller overhead.
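
The target-network mechanic can be sketched as follows, reusing the network shape from the earlier Keras example; the sync schedule and discount factor are assumptions, and terminal-state handling is omitted for brevity.

```python
import numpy as np
import tensorflow as tf

def make_net(state_dim=2, n_actions=9):
    return tf.keras.Sequential([
        tf.keras.Input(shape=(state_dim,)),
        tf.keras.layers.Dense(32, activation='relu'),
        tf.keras.layers.Dense(n_actions)])

online_net = make_net()                            # trained every step
target_net = make_net()                            # frozen copy for targets
target_net.set_weights(online_net.get_weights())   # start in sync

def td_targets(rewards, next_states, gamma=0.9):
    """Stabilized targets: r + gamma * max_a Q_target(s', a)."""
    next_q = target_net.predict(next_states, verbose=0)
    return rewards + gamma * next_q.max(axis=1)

def sync_target():
    """Periodically copy online weights into the target network."""
    target_net.set_weights(online_net.get_weights())
```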

Scene 29 (15m 51s)

[Audio] I integrated a reinforcement learning (R-L) framework to reduce SDN switch-controller overhead. I set up a Mininet network and a P-O-X controller to emulate the S-D-N topology and manage network traffic. I designed an RL agent with a state space, an action space, and a reward function for S-D-N flow entry management. Additionally, I designed a training loop to interact with the environment: the agent learned by observing states, selecting actions, and receiving rewards based on their impact on network performance. The P-O-X controller enabled the RL agent to transfer the learned policies to the S-D-N switch. This technique enabled me to reduce SDN switch-controller overhead.
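
The training loop described could be skeletonized as below. The `env` object standing in for the Mininet/POX environment, its reset/step interface, and the episode budget are all assumptions, and the network-fitting step is elided.

```python
import numpy as np

def train(env, online_net, buffer, episodes=100, epsilon=0.1, n_actions=9):
    """Observe, act epsilon-greedily, store the transition, learn from replay."""
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            if np.random.rand() < epsilon:
                action = np.random.randint(n_actions)     # explore
            else:
                q = online_net.predict(state[None], verbose=0)
                action = int(np.argmax(q))                # exploit
            next_state, reward, done = env.step(action)   # hypothetical API
            buffer.push(state, action, reward, next_state, done)
            state = next_state
            # ...sample a mini-batch from buffer and fit online_net here...
```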

Scene 30 (16m 34s)

[Audio] The agent starts by perceiving the environment, then chooses an action that leads to a state transition and earns a reward for that transition. These rewards are stored, and the agent updates its Q-Network to refine its policy over time.

Scene 31 (16m 51s)

[Audio] Our agent was able to reduce controller overhead in software-defined networking (S-D-N) environments while maintaining network performance. The results and discussion that follow highlight its remarkable ability to optimize controller overhead.

Scene 32 (17m 9s)

[Audio] The RL agent was able to autonomously identify optimal network forwarding rules for our baseline scenario, resulting in a 25% reduction in controller overhead. This result shows the potential of off-policy reinforcement learning to reduce traffic load and improve efficiency.

Scene 33 (17m 29s)

[Audio] Adjusting the Hard Timeout to 90 seconds enabled us to reduce SDN switch-controller overhead by 45%. This proved to be an efficient and adaptive solution for controlling and managing traffic rules in our system.
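
In POX, the Hard Timeout is set per flow rule when the rule is installed; a sketch of a rule installation with the 90-second value follows, where the idle timeout and output action are illustrative.

```python
import pox.openflow.libopenflow_01 as of

def install_rule(event, packet, hard_timeout=90):
    """Install a flow rule that the switch evicts after hard_timeout seconds."""
    msg = of.ofp_flow_mod()
    msg.match = of.ofp_match.from_packet(packet)
    msg.hard_timeout = hard_timeout   # evicted after 90 s no matter what
    msg.idle_timeout = 10             # or sooner if the flow goes quiet
    msg.actions.append(of.ofp_action_output(port=of.OFPP_NORMAL))
    event.connection.send(msg)
```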

Scene 34 (17m 43s)

[Audio] The results of the experiment showed that by adjusting the hard timeout to 150 seconds, the agent was able to reduce switch-controller overhead by up to 55%. This demonstrates the agent's capability to adjust and effectively manage rules so as to maximize controller resource utilization.

Scene 35 (18m 4s)

[Audio] Our research has shown reductions of over 65% in S-D-N switch-controller overhead with a Hard Timeout of 200 seconds. This is a strong indication of the success of our off-policy reinforcement learning approach. Further investigation into extending the timeouts could lead to even greater reductions.

Scene 36 (18m 25s)

[Audio] Our research found that adjusting hard timeouts strategically had a significant impact on cutting down the number of Packet-In messages in a software-defined network switch-controller environment. We tried various combinations of Flow_Frequency and Flow_Duration values, ranging from 90 to 80 and from 30 to 30 respectively, and discovered that the optimal values could reduce network resource utilization while sustaining efficiency.

Scene 37 (18m 53s)

[Audio] Nagi A Mohamed Nagem's research has enabled significant technical progress. Altering the timeout parameter from 200 seconds to 150 seconds revealed a 40% decrease in the number of flow sessions, which indicates increased efficacy. Modifying the timeout to 90 seconds produced a 20% reduction in the number of flow sessions. These results indicate that adjusting the timeout parameter substantially improves software-defined networking switch-controller overhead efficiency.

Scene 38 (19m 29s)

[Audio] The results of the thesis submitted by Nagi A Mohamed Nagem to the University of Benghazi are quite impressive. After integrating the D-Q-N agent, the network's bandwidth distribution remained consistent, without disrupting network performance. This is a testament to the success of the agent, which proves to be an efficient solution for reducing S-D-N switch-controller overhead.

Scene 39 (19m 55s)

[Audio] The Deep Q-Network reinforcement learning algorithm has been shown to be an effective method for decreasing the communication between network switches and controllers, thus reducing control plane overhead. Tests of the algorithm were conducted in various scenarios, and the results indicated that it is a suitable solution for optimizing network forwarding rules.

Scene 40 (20m 18s)

[Audio] This project focuses on optimizing the software-defined network, or S-D-N, using Deep Q-Network reinforcement learning. The agent achieved a 25% reduction in controller overhead in the baseline scenario, and the Hard Timeout adjustments raised the reduction to 45%, then 55%, and ultimately 65%. Overall, we achieved excellent results.

Scene 41 (20m 49s)

[Audio] Reinforcement learning through Deep Q-Networks is a promising approach for the future of S-D-N switch-controller networks. Applying it across different switch topologies and S-D-N controllers, such as OpenDaylight and Ryu, could help equip such networks to handle unpredictable real-world conditions. Ultimately, this could result in more accurate and efficient networks.

Scene 42 (21m 17s)

Thank You.