[Audio] Hello everyone, we would like to present our Sparkathon idea to you. Someone once said: "Innovation isn't just about new ideas; it's about creating value and solving problems in unique ways." We have developed a clever one-stop shop for producing RAG-based bots that are specific to a given product. All the product team needs to do is try out the bot built for their team.
[Audio] This is the agenda of the presentation. Team: Abhijit Jadhav (Senior Software Engineer), Akshay Jagtap (Specialist Software Engineer), Narendra Sharma (Specialist Software Engineer), Jayesh Sapariya (Specialist Software Engineer), Komal Doke (Senior Automation Engineer).
[Audio] The team that worked on the idea included Abhijit, Akshay, Narendra, Jayesh, and Komal from the WEM APA team.
[Audio] Creating a RAG-based bot for one product takes a minimum of one sprint; we need a faster solution. Security for data storage is crucial. Tailored bots with specific knowledge bases, rapid deployment, efficient knowledge retrieval, and automation of manual processes are essential for enhancing productivity.
[Audio] The solution: any product line can use our in-house bot generator app to generate an Intelligent Assistant/Chatbot for their product. A bot name, a PDF file with product-specific data, and the LLM model key are required. Once the user clicks submit and processing is complete, the user gets a configurable embed script that they can integrate into their product per their design standards.
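The three inputs above could be packaged into a single request to the generator's backend. A minimal JavaScript sketch; the endpoint path, field names, and `buildBotRequest` helper are assumptions for illustration, not the actual API:

```javascript
// Hypothetical helper that assembles the Bot Generator App's upload request.
// FormData and Blob are global in Node 18+ and in browsers.
function buildBotRequest({ botName, apiKey, model, pdfBytes }) {
  const form = new FormData();
  form.append("botName", botName);
  form.append("model", model);
  // The product-specific knowledge base travels as a PDF attachment.
  form.append("file", new Blob([pdfBytes], { type: "application/pdf" }), "kb.pdf");
  return {
    url: "/api/bots",                                  // assumed endpoint
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}` },    // the LLM model key
    body: form,
  };
}
```

The API key is kept in a header rather than the form body so it never mixes with the uploaded document data.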
[Audio] Let's look at how this is handled in the backend. As we can see in the diagram, the Bot Generator App interacts through an API, which creates the flow in the locally hosted LLM Orchestrator. The PDF is read, split into chunks, and embedded into the Vector DB using LangChain.js methods such as Document Loader and Conversational Retrieval QA Chain. Once the user sends a query, the model responds with an answer drawn from the PDF.
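The chunking step can be sketched as a fixed-size splitter with overlap, so that context spanning a chunk boundary is not lost. This is a simplified stand-in for LangChain.js's actual text splitters (e.g. RecursiveCharacterTextSplitter), which also respect sentence and paragraph boundaries:

```javascript
// Illustrative sketch only: split extracted PDF text into overlapping chunks.
// chunkSize and overlap are tuning knobs, not values from the real flow.
function splitIntoChunks(text, chunkSize = 500, overlap = 50) {
  const chunks = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached
  }
  return chunks;
}
```

Each chunk is then embedded separately, so retrieval can later return only the passages relevant to a query instead of the whole document.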
[Audio] Users add their product-specific knowledge-base PDF, bot name, and API key to the "CXOne Plug-and-Play RAG Bot Generator". The data is processed by an LLM Orchestrator and transformed into a chat flow on a locally hosted server. LangChain.js methods handle PDF loading, text splitting, and creating embeddings, which are stored in a Vector Database so the bot can respond to user queries.
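The retrieval step can be sketched as a cosine-similarity search over the stored chunk embeddings. In the real flow, LangChain.js's In-Memory Vector Store and an embedding model do this work; the tiny hand-made vectors below are purely illustrative:

```javascript
// Cosine similarity between two equal-length embedding vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the k chunks whose embeddings best match the query embedding.
function retrieve(store, queryVec, k = 1) {
  return [...store]
    .sort((x, y) => cosine(y.vector, queryVec) - cosine(x.vector, queryVec))
    .slice(0, k)
    .map((entry) => entry.chunk);
}
```

The retrieved chunks are then passed to the Conversational Retrieval QA Chain as context, which is how the model's answer stays grounded in the uploaded PDF.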
Cost and Time Saving: within a span of minutes you can create a product-specific, knowledge-based bot that used to take a couple of sprints.
Data Security: data stays on-premises, so it is not exposed to the outside world.
Scalability: the ability to quickly create and deploy bots allows the organization to scale support and engagement efforts as needed.
Platform Independent: can be integrated into any web application.
Innovation and Adaptability: leveraging advanced AI technology positions the organization as innovative and adaptable to changing market demands.
Data-Driven Decisions: collecting and analyzing data from bot interactions provides valuable insights that inform strategic decisions.
Improved Customer Satisfaction: providing instant, accurate support and information boosts customer satisfaction and loyalty.
[Audio] Here's our Bot Generator App. The user is required to fill in the bot name, a PDF file with product-specific data, the API key, and the LLM model. Once processing is successful, the user gets a configurable embed script that they can use in their application without worrying about the underlying tech stack. Behind the scenes, we create a flow in our LLM Orchestrator using methods like Document Loader, In-Memory Vector Store, Text Splitter, and Conversational Retrieval QA Chain with an OpenAI model. Let's take a look at the script in use: the first application is built with React, and we can see it works fine. Let's look at another application, built on JSP pages, and we can see it does its job efficiently there too.
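The tech-stack independence shown in the demo follows from the embed script being a plain HTML tag. A hypothetical helper that could produce such a tag on the generator side; the attribute names, widget URL, and `buildEmbedScript` function are assumptions, not the actual output:

```javascript
// Hypothetical sketch: build the configurable embed tag returned after
// processing. Because the result is a plain <script> tag, it drops into
// React, JSP, or any other web page without caring about the host stack.
function buildEmbedScript({ botId, theme = "light", host = "bots.example.internal" }) {
  return `<script src="https://${host}/widget.js"` +
    ` data-bot-id="${botId}" data-theme="${theme}"></scr` + `ipt>`;
}
```

The closing tag is split in the template string only so the snippet can itself be embedded safely inside another script block; the output is an ordinary `</script>`.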
Thank You. NICE Sparkathon.