CHAPTER Six: Advanced Topics (Microprocessor Systems)


Scene 1 (0s)

[Virtual Presenter] Chapter Six: Advanced Topics. By Suroj Burlakoti, Lecturer, DoECE, National College of Engineering.

Scene 2 (11s)

[Audio] Chapter 6: Advanced Topics
6.1 Multiprocessing Systems
6.1.1 Real and Pseudo-Parallelism
6.1.2 Flynn's Classification
6.1.3 Instruction Level, Thread Level and Process Level Parallelism
6.1.4 Interprocess Communication, Resource Allocation and Deadlock
6.1.5 Features of a Typical Operating System
6.2 Different Microprocessor Architectures
6.2.1 Register Based and Accumulator Based Architectures
6.2.2 RISC and CISC Architectures
6.2.3 Digital Signal Processors
#Burlas 2.

Scene 3 (52s)

[Audio] Multiprocessing System: A multiprocessing system consists of two or more processors of comparable capability that can share common resources such as memory and I/O devices, improving the overall performance of the system. The processors share these resources through channels that provide paths to the same devices. The system is controlled by an integrated operating system that provides interaction between the processors and their programs. (Fig: Organization of a multiprocessing system, showing processors 1, 2 and 3 sharing common resources: memory, printer and plotter.) The organization of a multiprocessor system can be divided into the following types: Time Shared or Common Bus; Multiport Memory; Central Control Unit.

Scene 4 (1m 39s)

[Audio] 1. Time Shared or Common Bus: This is the simplest mechanism, in which a number of CPUs, I/O modules and memory modules are connected to the same bus. (Fig: Time Shared System, with CPU 1, CPU 2, I/O and memory on a common bus.) The system must distinguish between the modules on the bus to determine the source and destination of data. When one module is controlling or using the bus, the other devices must be locked out; access to each device is divided on the basis of time.

Scene 5 (2m 12s)

[Audio] 1. Time Shared or Common Bus (continued). Advantages:
Simplicity: simple, very similar to a uniprocessor system.
Reliability: failure of any one module does not cause failure of the whole system.
Flexibility: easy to expand the system by attaching more processors.
Disadvantage:
The performance of the whole system is limited by the performance of the common bus.
(Fig: Time Shared System, with CPU 1, CPU 2, Memory 1, Memory 2, I/O 1 and I/O 2 on a common bus.)

Scene 6 (2m 45s)

[Audio] 2. Multiport Memory: Multiport memory allows direct and independent access to the memory modules by each CPU. Each processor has a dedicated path to each memory module, with a common memory used to communicate between the two processors. (Fig: Multiport Memory System, with CPU 1 and CPU 2 each connected to MEM 1, MEM 2 and MEM 3.) This gives better performance but is more complex than the shared-bus system. A mechanism for resolving conflicts over the memory modules must be implemented: either permanently designated priorities are defined for each module, or a certain portion of memory is configured as private to one of the CPUs.

Scene 7 (3m 29s)

[Audio] 3. Central Control Unit (CCU): The central control unit manages the transfer of data streams back and forth between independent modules: CPUs, I/O and memory. The controller can manage timing functions and also pass control and status signals between processors. Since the logic for coordinating the multiprocessor configuration is centered in the CCU, the interfaces of the I/O modules, memories and CPUs remain undisturbed. Interfacing is therefore simple, but designing the CCU is complex.

Scene 8 (4m 2s)

[Audio] Parallelism: Parallelism is a concept used to perform data-processing tasks simultaneously for the purpose of increasing the computational speed of a computer system. Pseudo-Parallelism: The processor's time is divided among the processes, and rapid switching takes place between them. It is not true parallelism, but it gives the appearance of parallelism, hence the name pseudo-parallelism. Real Parallelism: Multiple processors execute multiple processes at the same time.

Scene 9 (4m 32s)

[Audio] Parallelism: Pseudo Parallelism versus Real Parallelism.
Pseudo Parallelism:
1. Multiprogramming within the same processor.
2. Resources (hardware and software) are shared.
3. Individual machine operations are overlapped so that they appear to execute in parallel.
Real Parallelism:
1. Multiprogramming on different processors.
2. Separate resources (also called hardware parallelism).
3. Separate processors work on separate chunks of the program.
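The time-slicing behind pseudo-parallelism can be sketched as a tiny round-robin scheduler: one "CPU" (a loop) interleaves several runnable "processes" in fixed quanta. The quantum size and the two processes below are illustrative assumptions, not part of the original material.

```python
# Pseudo-parallelism sketch: a single "CPU" (this loop) interleaves two
# runnable processes in fixed time quanta (round-robin scheduling).
from collections import deque

def round_robin(processes, quantum):
    """Interleave work items from each process, `quantum` items at a time."""
    ready = deque(processes.items())           # ready queue of (name, work list)
    trace = []                                 # order in which work actually ran
    while ready:
        name, work = ready.popleft()
        for _ in range(min(quantum, len(work))):
            trace.append((name, work.pop(0)))  # "execute" one unit of work
        if work:                               # unfinished: back of the queue
            ready.append((name, work))
    return trace

trace = round_robin({"P1": ["a", "b", "c"], "P2": ["x", "y"]}, quantum=1)
print(trace)  # P1 and P2 alternate even though only one "CPU" exists
```

Only one unit of work ever runs at a time, yet the trace alternates between P1 and P2, which is exactly the "sense of parallelism" the slide describes.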

Scene 10 (4m 56s)

[Audio] Flynn's Classification: This classification is based on the number of instruction streams and data streams that are manipulated simultaneously:
Single Instruction, Single Data (SISD) stream
Single Instruction, Multiple Data (SIMD) stream
Multiple Instruction, Single Data (MISD) stream
Multiple Instruction, Multiple Data (MIMD) stream

Scene 11 (5m 20s)

[Audio] 1. Single Instruction, Single Data (SISD) Stream: A single processor executes a single instruction stream to operate on data stored in a single memory. E.g. a uniprocessor.

Scene 12 (5m 36s)

[Audio] 2. Single Instruction, Multiple Data (SIMD) Stream: A single machine instruction controls the simultaneous execution of a number of data items; that is, the processing elements receive the same instruction from a single control unit and then operate on the different sets of data available to them. E.g. vector processors, array processors.
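One way to picture SIMD is a single operation broadcast to several data "lanes" at once. The lane model below is an illustrative sketch in plain Python, not real vector hardware.

```python
# SIMD sketch: ONE instruction ("add"), MANY data lanes.
# Each lane applies the same operation to its own element, the way a
# vector/array processor would in a single conceptual step.
def simd_add(lanes_a, lanes_b):
    """Apply one ADD instruction across all lanes simultaneously (conceptually)."""
    assert len(lanes_a) == len(lanes_b)
    return [a + b for a, b in zip(lanes_a, lanes_b)]

# Four lanes, one instruction: a whole vector addition in "one step".
result = simd_add([1, 2, 3, 4], [10, 20, 30, 40])
print(result)  # [11, 22, 33, 44]
```

A SISD machine would need four separate add instructions for the same work; here the control unit issues the operation once and every lane executes it on its own data.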

Scene 13 (6m 0s)

[Audio] 3. Multiple Instruction, Single Data (MISD) Stream: A single sequence of data is transmitted to a set of processors simultaneously, each of which executes a different instruction sequence. No practical system has been constructed yet; the class is largely theoretical, though pipelined computers are sometimes cited as an approximation.

Scene 14 (6m 21s)

[Audio] 4. Multiple Instruction, Multiple Data (MIMD) Stream: A set of processors simultaneously execute different instruction sequences on different data sets. E.g. a multiprocessor.

Scene 15 (6m 36s)

[Audio] Instruction Level, Thread Level and Process Level Parallelism. Process: an instance of a program loaded in memory for execution. Thread: a smaller unit of execution within a process.

Scene 16 (6m 50s)

[Audio] Instruction Level Parallelism: This is the degree to which the instructions of a program can be executed in parallel. Typically, multiple instructions are fetched at a time and then executed in parallel if they are independent of one another.

Scene 17 (7m 5s)

[Audio] Process Level Parallelism: This is the degree to which the processes of a program are loaded into memory in parallel. A system with multiple processors will yield better performance than one with a single processor of the same capability. It is concerned with scheduling, execution and resource allocation. Process switching is comparatively time-consuming.

Scene 18 (7m 24s)

[Audio] Thread Level Parallelism: The process is divided into several smaller streams, called threads, such that they can be executed in parallel. It is concerned with scheduling and execution. Thread switching consumes less time than process switching.
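A minimal sketch of thread-level parallelism using Python's standard `threading` module: one job is split across worker threads and the partial results are combined. The worker function and chunk sizes are assumptions for illustration.

```python
import threading

# Thread-level parallelism sketch: split one task across worker threads.
results = [0] * 4

def worker(i):
    # Each thread handles its own chunk of the overall job.
    results[i] = sum(range(i * 100, (i + 1) * 100))

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()          # threads are scheduled independently
for t in threads:
    t.join()           # wait for all chunks to finish

total = sum(results)
print(total)  # same answer as the sequential sum(range(400)), i.e. 79800
```

Because the threads share the process's address space, no data copying is needed between them, which is one reason thread switching is cheaper than process switching.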

Scene 19 (7m 41s)

[Audio] Data Parallelism: Data parallelism is the parallelism inherent in program loops; it focuses on distributing data across different computing nodes to be processed in parallel.

Scene 20 (7m 54s)

[Audio] Interprocess Communication: In any system, multiple processes may be active at one time. Communication between those processes is called interprocess communication (IPC). It deals with how one process can pass information to another while making sure that the processes do not get in each other's way. IPC methods include message passing, shared memory and remote procedure calls. The reasons for providing an environment that allows process cooperation include information sharing, speed-up, modularity, convenience and privilege separation. IPC is also referred to as inter-thread communication or inter-application communication.
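Of the IPC methods listed, message passing is the easiest to sketch. The example below passes messages through a queue between a producer and a consumer; threads stand in for processes here to keep the sketch portable, and the messages and sentinel are illustrative assumptions.

```python
import queue
import threading

# Message-passing IPC sketch: a producer sends messages through a channel,
# a consumer receives them; neither touches the other's private state.
channel = queue.Queue()
received = []

def producer():
    for msg in ["hello", "world", None]:   # None = end-of-stream sentinel
        channel.put(msg)

def consumer():
    while True:
        msg = channel.get()                # blocks until a message arrives
        if msg is None:
            break
        received.append(msg)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(received)  # ['hello', 'world']
```

The queue serializes access internally, so the two sides cooperate without stepping on each other, which is exactly the "not getting into each other's way" property IPC is meant to guarantee.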

Scene 21 (8m 33s)

[Audio] Resource Allocation: Resource allocation plays a vital role in multiprocessing systems, or systems with multiple processes, because the processes or processors may need to share the same resources. If resources are not allocated efficiently, the use of multiple processes or processors alone will not enhance the overall performance of the system. Resources can be the computer's memory, data in a device, an interface buffer, one or more files, or the required amount of processing power.

Scene 22 (9m 3s)

[Audio] Deadlock: It is the responsibility of the operating system to see that resources are used efficiently; otherwise the use of multiple processes or processors can result in a deadlock that freezes every process currently running. Conditions for deadlock:
Mutual exclusion: shared resources are used in a mutually exclusive manner.
Hold and wait: processes hold on to resources they already have while waiting for other resources to be allocated.
No preemption: resources cannot be taken away until the holding process releases them.
Circular wait: a circular chain of processes exists in which each process holds a resource wanted by the next process in the chain.
Deadlock may be a system deadlock or a process deadlock.
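Circular wait, the fourth condition above, can be broken by always acquiring locks in one fixed global order. The two-resource sketch below is an illustration of that idea, not a general deadlock handler.

```python
import threading

# Deadlock-avoidance sketch: two threads both need resources A and B.
# Acquiring them in the SAME global order (A first, then B) breaks the
# circular-wait condition, so no deadlock cycle can form.
lock_a = threading.Lock()
lock_b = threading.Lock()
log = []

def task(name):
    with lock_a:          # every thread takes A first...
        with lock_b:      # ...then B, so no circular chain can exist
            log.append(name)

t1 = threading.Thread(target=task, args=("T1",))
t2 = threading.Thread(target=task, args=("T2",))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(log))  # ['T1', 'T2'] -- both tasks completed, no freeze
```

If T1 took A-then-B while T2 took B-then-A, each could end up holding one lock and waiting forever for the other; the consistent ordering makes that cycle impossible.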

Scene 23 (9m 49s)

[Audio] Semaphore: A semaphore is a software technique for managing resources. Basically, it is a flag associated with each shared resource, such that every processor or process can access and modify the flag. Before accessing a resource, a process checks the semaphore: if it is '1' the resource is in use; otherwise the resource is available to any requesting processor.
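The flag described above generalizes to a counting semaphore. The sketch below uses Python's standard `threading.Semaphore` to limit a resource to two simultaneous users; the limit, worker count and sleep time are assumptions for illustration.

```python
import threading
import time

# Semaphore sketch: at most 2 threads may hold the shared resource at once.
resource = threading.Semaphore(2)
in_use = 0
peak = 0
guard = threading.Lock()   # protects the bookkeeping counters themselves

def use_resource():
    global in_use, peak
    with resource:                   # take a permit; block if none are left
        with guard:
            in_use += 1
            peak = max(peak, in_use)
        time.sleep(0.01)             # pretend to use the resource
        with guard:
            in_use -= 1              # releasing `resource` returns the permit

threads = [threading.Thread(target=use_resource) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # observed concurrency never exceeds the semaphore's limit of 2
```

Five threads request the resource, but the semaphore admits at most two at a time; the rest block until a permit is released, just as the slide's check-the-flag-before-access rule describes.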

Scene 24 (10m 13s)

[Audio] Operating System: An operating system is a program that controls the execution of application programs and acts as an interface between the user and the hardware of the computer. E.g. Windows, Linux, Android, iOS, etc. The operating system is responsible for preventing conflicting use of shared resources by several processes, through proper allocation and scheduling. The basic features of an OS are:
Resource allocation and management
Table and data protection
I/O management
Memory management
Abnormal program termination
Prevention of system deadlock
Process scheduling
Scheduling for multiprocessors
Reconfiguration for multiprocessors

Scene 25 (10m 54s)

[Audio] Different Microprocessor Architectures: Register Based and Accumulator Based Microprocessors; RISC and CISC Architectures; Digital Signal Processors.

Scene 26 (11m 2s)

[Audio] Register Based and Accumulator Based Microprocessors.
Accumulator Based Microprocessor:
1. The accumulator is the most significant register.
2. In arithmetic and logic operations, one operand is always the accumulator.
3. I/O data transfers are performed via the accumulator.
4. The accumulator is directly connected to the ALU, while the other registers are connected to the ALU through a temporary register or the accumulator. E.g. ADD R ; A ← A + R
5. E.g. the 8085 is an accumulator-based microprocessor.
Register Based Microprocessor:
1. The accumulator and the other registers are identical.
2. Arithmetic and logic operations can be performed in any register.
3. I/O operations, however, are still limited to the accumulator.
4. All registers are connected at the same point to the ALU. E.g. ADD R1,R2 ; R1 ← R1 + R2
5. E.g. the 8086 is a register-based microprocessor.

Scene 27 (12m 6s)

[Audio] RISC and CISC Architecture: CISC Based Architecture. CISC stands for Complex Instruction Set Computer. Characteristics of a CISC computer are:
Large number of complex instructions
Instructions require multiple cycles for execution
Fewer registers, hence frequent memory access and reduced processing speed
Variable-length instruction format
Instructions implemented in microcode and interpreted by a microprogram
Large variety of addressing modes
Less pipelining
Low cost
Used for general-purpose applications where speed is not critical
E.g. IBM 3090, Intel 8086, 80286, Pentium, AMD processors

Scene 28 (12m 54s)

[Audio] RISC and CISC Architecture: RISC Based Architecture. RISC stands for Reduced Instruction Set Computer. Characteristics of a RISC computer are:
Smaller number of simple instructions
Instructions are short and of equal length
One cycle per instruction execution
Fewer addressing modes
More registers, hence fewer memory references
Memory references limited to load/store instructions
Instructions implemented in hardware, with no microprogram
Heavy pipelining
Uses overlapped register windows
Used in scientific and research-oriented tasks where high speed is required
E.g. DEC Alpha, PowerPC, UltraSPARC

Scene 29 (13m 37s)

[Audio] Digital Signal Processor (DSP): A DSP is a special-purpose CPU that provides ultra-fast instruction sequences, such as shift-and-add and multiply-and-add, which are commonly used in math-intensive signal-processing applications. The digital signal obtained from an ADC can be operated upon, adding extra features, reducing redundancy in the signal, making it immune to noise, etc., by an external processor called a digital signal processor. DSP functions can be implemented in hardware or software; the software implementation is slow, and the hardware implementation requires a different architecture from the von Neumann machine. DSP architecture is based on the Harvard architecture, with separate spaces for program and data. DSPs are targeted at specific applications such as sound cards, video cards and the Fast Fourier Transform (FFT). They provide specialized addressing modes and single-cycle multiply-accumulate capability. E.g. Texas Instruments TMS320C series, Motorola 3600.
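The multiply-and-add (MAC) sequence mentioned above is the heart of FIR filtering, one of the most common DSP workloads. The sketch below computes y[n] = Σ h[k]·x[n−k] in plain Python; the coefficients and input samples are arbitrary illustrative values.

```python
# DSP multiply-accumulate (MAC) sketch: an FIR filter computes
# y[n] = sum over k of h[k] * x[n-k], one multiply-add per filter tap.
# A DSP executes each such multiply-accumulate in a single cycle.
def fir_filter(coeffs, samples):
    out = []
    for n in range(len(samples)):
        acc = 0                                  # the accumulator register
        for k, h in enumerate(coeffs):
            if n - k >= 0:
                acc += h * samples[n - k]        # one MAC step
        out.append(acc)
    return out

y = fir_filter([1, 2], [1, 2, 3])
print(y)  # [1, 4, 7]
```

For a filter with T taps, each output sample costs T multiply-accumulates; hardware that fuses the multiply and the add into one cycle roughly halves the inner-loop cost compared with a general-purpose CPU that needs separate instructions.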

Scene 30 (14m 46s)

[Audio] Digital Signal Processor (DSP), continued. (Fig: signal chain, analog input → ADC → DSP → DAC → analog output.)
Advantages:
Immune to noise
More accurate than analog processing
Versatile and flexible
Provides various configurations of on-chip memory
Disadvantages:
Requires high bandwidth
Depends on other devices, since analog-to-digital conversion and vice versa are required
Complexity
