Why Hardware Acceleration Matters: From TinyML to Large-Scale Embedded AI in the Era of LLMs.
2. What we'll cover today. 01. AI is running everywhere.
3. AI is already running all around you. 📱 Your phone unlocks with your face.
4. So... how does AI run? You write Python → Python interpreter.
5. ❓ QUIZ ❓ Which Python code runs faster? Both compute the same result: a weighted sum. Think before you answer!
6. ✅ ANSWER. 💡 Why? - Python is an interpreted language: your code is read and translated at runtime. - How you write the code has almost no effect on how the CPU executes it. - Different Python code ≠ different hardware behavior.
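The quiz code itself is not reproduced on the slide; as a hypothetical reconstruction of the kind of comparison it describes, here are two Python spellings of the same weighted sum. Both go through the same interpreter machinery, which is the point the answer slide makes:

```python
# Hypothetical reconstruction: the slide's actual quiz code is not shown.
x = [1.0, 2.0, 3.0, 4.0]
w = [0.5, 0.25, 0.125, 0.125]

# Version A: explicit index-based loop.
total_a = 0.0
for i in range(len(x)):
    total_a += x[i] * w[i]

# Version B: generator expression over zip.
total_b = sum(xi * wi for xi, wi in zip(x, w))

# Either way, the interpreter reads and translates the code at runtime:
# the style of the source has little effect on what the CPU ends up doing.
print(total_a, total_b)  # → 1.875 1.875
```

The weights here are exact binary fractions, so both versions agree bit-for-bit; with arbitrary floats the two summation orders could differ in the last bits.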
7. ❓ QUIZ ❓ What is the difference between these two options?
8. Writing Software vs Describing Hardware. Python tells the processor what to do; HDL tells the chip what to become.
9. Where does AI run? 🖥️ CPU. The standard processor in every computer. Flexible and easy to program, but it handles mostly one thing at a time.
10. Field Programmable Gate Array (FPGA): Experiment.
11. Field Programmable Gate Array (FPGA): circuit.
(Chart: latency vs. resources trade-off.)
12. So, what is hardware acceleration? 💡 The simple idea: instead of asking a general-purpose processor to run your AI step by step, you build (or configure) dedicated hardware that does exactly one job, and does it extremely fast, in parallel, using very little energy.
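As a loose software analogy of this idea (an illustration only; real acceleration happens in silicon, not in Python), compare a "general-purpose" path that executes one small step at a time with a "fixed-function" path that hands the whole job to a single call:

```python
# Software analogy of hardware acceleration; both functions are
# illustrative names invented for this sketch, not from the slides.

# General-purpose path: the "processor" fetches, multiplies, adds,
# and repeats, one instruction-sized step at a time.
def weighted_sum_stepwise(x, w):
    acc = 0.0
    for i in range(len(x)):
        acc += x[i] * w[i]
    return acc

# "Dedicated hardware" path: one block configured to do exactly this
# one job. Here Python's built-in sum/map/zip stand in for it, running
# the whole reduction in a single call.
def weighted_sum_fused(x, w):
    return sum(map(lambda p: p[0] * p[1], zip(x, w)))

x = [0.1, 0.2, 0.3]
w = [3.0, 2.0, 1.0]
assert abs(weighted_sum_stepwise(x, w) - weighted_sum_fused(x, w)) < 1e-12
```

The analogy only goes so far: an FPGA or ASIC block would compute all the multiplies in parallel in the same clock cycles, which no sequential Python rewrite can do.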
13. The fundamental difference. 💻 Software: instructions run in sequence.
Thank you! Questions & Discussion. Every AI model you've ever used ran on hardware someone designed. AI needs electronics engineers, and that someone could be you!