This document provides a 50-hour roadmap for building large language model (LLM) applications. It introduces key concepts like text-based and image-based generative AI models, encoder-decoder models, attention mechanisms, and transformers. It then covers topics like intro to image generation, generative AI applications, embeddings, attention mechanisms, transformers, vector databases, semantic search, prompt engineering, fine-tuning foundation models, orchestration frameworks, autonomous agents, bias and fairness, and recommended LLM application projects. The document recommends several hands-on exercises and lists upcoming bootcamp dates and locations for learning to build LLM applications.
2. Types of generative AI models
Text-based models
Image-based models
Foundation models & LLMs
Encoder-decoder
Attention mechanism
Transformers model and BERT model
Intro to Image Generation
Image captioning models
Diffusion models
Generative AI applications
ChatGPT & Bard
DALL-E & Midjourney
Quick overview of generative AI, LLMs, and foundation models. Learn more about how transformers and the attention mechanism work behind text- and image-based models:
Introduction to Generative AI
Roadmap to become an LLM applications developer
Data Science for Everyone
https://datasciencedojo.com
3. Large language models and foundation models
Vector databases, embeddings, and LLM cache
Prompts and prompt engineering
Context window and token limits
Embeddings and vector databases
Build custom LLM applications by:
Training a new model from scratch
Fine-tuning foundation LLMs
In-context learning
Canonical architecture for an end-to-end LLM application
Understand the common use cases of large language models and the fundamental building blocks of such applications. Learners will be introduced to the following topics at a very high level, without going into the technical details:
Emerging Architectures
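The context-window and token-limit constraints mentioned above can be made concrete with a small chunking sketch. Real applications count model tokens with a tokenizer; here, as a simplifying assumption, tokens are approximated by whitespace-separated words:

```python
# Sketch: splitting a long document into chunks that fit a model's
# context window. Overlap between chunks preserves continuity so a
# sentence cut at a boundary still appears whole in some chunk.

def chunk_text(text: str, max_tokens: int = 100, overlap: int = 20) -> list[str]:
    """Split `text` into overlapping chunks of at most `max_tokens` words."""
    words = text.split()
    chunks = []
    step = max_tokens - overlap
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_tokens]))
        if start + max_tokens >= len(words):
            break
    return chunks

doc = ("word " * 250).strip()               # a 250-"token" document
pieces = chunk_text(doc, max_tokens=100, overlap=20)
print(len(pieces), [len(p.split()) for p in pieces])
```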
4. Review of classical techniques
Review of binary/one-hot, count-based, and TF-IDF techniques for vectorization
Capturing local context with n-grams and challenges
Semantic encoding techniques
Overview of Word2Vec and dense word embeddings
Application of Word2Vec in text analytics and NLP tasks
Hands-on exercise
Creating a TF-IDF and semantic embeddings on a document corpus
In this module, we will review how embeddings have evolved from the simplest one-hot encoding approach to more recent semantic embedding approaches. The module will go over the following topics:
Embeddings
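The count-based and TF-IDF techniques reviewed in this module reduce to a short formula. The sketch below computes TF-IDF from scratch on a toy corpus; libraries such as scikit-learn's TfidfVectorizer do the same with extra smoothing and normalization options:

```python
import math
from collections import Counter

# Sketch: TF-IDF from first principles on a tiny, made-up corpus.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs are pets",
]

docs = [doc.split() for doc in corpus]
N = len(docs)
vocab = sorted({w for doc in docs for w in doc})

# Document frequency: how many documents contain each term.
df = {w: sum(1 for doc in docs if w in doc) for w in vocab}

def tfidf(doc: list[str]) -> dict[str, float]:
    counts = Counter(doc)
    # term frequency * inverse document frequency
    return {w: (counts[w] / len(doc)) * math.log(N / df[w]) for w in counts}

weights = tfidf(docs[0])
# Rare, discriminative terms ("cat") outweigh common ones ("the", "sat").
print(sorted(weights.items(), key=lambda kv: -kv[1]))
```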
5. Text embeddings
Word and sentence embeddings
Multilingual sentence embeddings
Text similarity measures
Dot product, cosine similarity, inner product
Hands-on exercise
Calculating similarity between sentences using cosine similarity and dot product
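The hands-on exercise above boils down to two small functions. A minimal sketch follows, using hand-made vectors as stand-ins for real sentence embeddings (the example sentences in the comments are illustrative only):

```python
import math

# Sketch: dot product and cosine similarity between toy sentence embeddings.
def dot(u: list[float], v: list[float]) -> float:
    return sum(a * b for a, b in zip(u, v))

def cosine(u: list[float], v: list[float]) -> float:
    # Cosine similarity is the dot product of length-normalized vectors.
    return dot(u, v) / (math.sqrt(dot(u, u)) * math.sqrt(dot(v, v)))

emb_a = [0.2, 0.8, 0.1]    # e.g. "How do I reset my password?"
emb_b = [0.25, 0.75, 0.0]  # e.g. "I forgot my password"
emb_c = [0.9, 0.05, 0.4]   # e.g. "Best pizza in town"

print(cosine(emb_a, emb_b))  # high: similar meaning
print(cosine(emb_a, emb_c))  # low: unrelated
```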
Attention mechanism and transformer models
Neural machine translation (NMT) and sequence-to-sequence models
Attention mechanism components
Self-attention and multi-head attention
Transformer networks: Tokenization, embedding, positional encoding, and transformers block
Hands-on exercise
Understanding attention mechanisms: Self-attention for contextual word analysis
Dive into the world of large language models, discovering the potent mix of text embeddings, attention mechanisms, and the game-changing transformer model architecture. This module consists of:
Attention Mechanism and Transformers
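The self-attention step inside a transformer block can be sketched in a few lines. In a real model, Q, K, and V come from learned projections of token embeddings; here, as an illustration, they are a small random matrix so only the mechanics are shown:

```python
import numpy as np

# Sketch: scaled dot-product self-attention on toy data.
def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # each token's similarity to every other
    weights = softmax(scores)         # each row sums to 1
    return weights @ V                # weighted mix of value vectors

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8               # 4 tokens, 8-dimensional embeddings
X = rng.normal(size=(seq_len, d_model))
out = self_attention(X, X, X)         # self-attention: Q = K = V = X
print(out.shape)                      # one contextualized vector per token
```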
6. Overview
The rationale for vector databases
Importance of vector databases in LLMs
Popular vector databases
Indexing techniques
Product quantization (PQ), locality-sensitive hashing (LSH), and hierarchical navigable small world (HNSW)
Retrieval techniques
Cosine similarity
Nearest neighbor search
Hands-on exercise
Creating a vector store using HNSW
Creating, storing, and retrieving embeddings using cosine similarity and nearest neighbors
Learn about efficient vector storage and retrieval with vector databases, indexing techniques, retrieval methods, and hands-on exercises:
Vector Databases
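The retrieval contract of a vector database can be sketched as a brute-force nearest-neighbor scan. Production systems index with HNSW, LSH, or PQ precisely to avoid scanning every vector, but the interface (store embeddings, query by similarity) is the same; the document IDs and vectors below are made up:

```python
import math

# Sketch: an in-memory "vector store" with cosine-similarity retrieval.
def cosine(u: list[float], v: list[float]) -> float:
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

store: dict[str, list[float]] = {}   # doc id -> embedding

def add(doc_id: str, embedding: list[float]) -> None:
    store[doc_id] = embedding

def query(embedding: list[float], k: int = 2) -> list[str]:
    # Brute force: rank every stored vector by similarity to the query.
    ranked = sorted(store, key=lambda d: cosine(store[d], embedding), reverse=True)
    return ranked[:k]

add("refund-policy",  [0.9, 0.1, 0.0])
add("shipping-times", [0.1, 0.9, 0.1])
add("returns-howto",  [0.8, 0.2, 0.1])

print(query([0.85, 0.15, 0.05], k=2))
```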
7. Understanding and implementing semantic search
Introduction and importance of semantic search
Distinguishing semantic search from lexical search
Semantic search using text embeddings
Exploring advanced concepts and techniques in semantic search
Multilingual search
Limitations of embeddings and similarity in semantic search
Improving semantic search beyond embeddings and similarity
Hands-on exercise
Building a simple semantic search engine with multilingual capability
Understand how semantic search overcomes the fundamental limitation of lexical search, i.e., the lack of semantics. Learn how to use embeddings and similarity to build a semantic search model:
Semantic Search
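The core contrast of this module fits in a toy example: a query that shares no words with the relevant document defeats keyword matching, while embeddings still rank it first. The vectors here are hand-made stand-ins for a real sentence-embedding model:

```python
import math

# Sketch: why lexical search misses what semantic search finds.
docs = {
    "doc1": "how to reboot your laptop",
    "doc2": "banana bread recipe",
}
# Toy embeddings standing in for a real embedding model's output.
doc_vecs = {"doc1": [0.9, 0.1], "doc2": [0.1, 0.9]}

def lexical_search(query: str) -> list[str]:
    terms = set(query.split())
    return [d for d, text in docs.items() if terms & set(text.split())]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def semantic_search(query_vec: list[float]) -> str:
    return max(doc_vecs, key=lambda d: cosine(doc_vecs[d], query_vec))

query = "restart my computer"     # zero word overlap with doc1
query_vec = [0.85, 0.15]          # but embedded near doc1

print(lexical_search(query))      # lexical search finds nothing
print(semantic_search(query_vec)) # semantic search finds doc1
```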
8. Prompt design and engineering
Prompting by instruction
Prompting by example
Controlling the model output
When to stop
Being creative vs. predictable
Saving and sharing your prompts
Use case Ideation
Utilizing goal, task, and domain for the perfect prompt
Example use cases
Summarizing (summarizing a technical report)
Inferring (sentiment classification, topic extraction)
Transforming text (translation, spelling, and grammar correction)
Expanding (automatically writing emails)
Unleash your creativity and efficiency with prompt engineering. Seamlessly prompt models, control outputs, and generate captivating content across various domains and tasks. This module includes:
Prompt Engineering
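"Prompting by instruction" and "prompting by example" from this module can be combined in one reusable template. The sketch below builds a few-shot sentiment-classification prompt; the model call itself is out of scope, and the example texts are made up:

```python
# Sketch: composing an instruction, output constraints, and few-shot
# examples into a single prompt string.

def build_prompt(task: str, examples: list[tuple[str, str]], text: str) -> str:
    parts = [
        f"Instruction: {task}",
        "Respond with exactly one word: positive, negative, or neutral.",
    ]
    for sample, label in examples:          # prompting by example (few-shot)
        parts.append(f"Text: {sample}\nSentiment: {label}")
    parts.append(f"Text: {text}\nSentiment:")  # the actual input to classify
    return "\n\n".join(parts)

prompt = build_prompt(
    task="Classify the sentiment of the text.",
    examples=[("I loved this phone!", "positive"),
              ("Battery died in a day.", "negative")],
    text="The screen is okay, nothing special.",
)
print(prompt)
```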
9. Fine-tuning foundation LLMs
Rationale for fine-tuning
Limitations of fine-tuning
Parameter-efficient fine-tuning
Hands-on exercise
Fine-tuning and deploying the OpenAI GPT model on Azure
Discover the ins and outs of fine-tuning foundation LLMs through theory discussions, exploring the rationale, limitations, and parameter-efficient fine-tuning (PEFT):
Fine-Tuning Foundation Models
10. Why are Orchestration Frameworks (OF) needed?
Eliminating the need for foundation model retraining
Overcoming token limits
Connectors for data sources
Explore the necessity of orchestration frameworks, tackling issues like foundation model retraining, token limits, data source connectivity, and boilerplate code. Discover popular frameworks, their creators, and open-source availability:
Orchestration Frameworks
11. Introduction to LangChain
Schema, models, and prompts
Memory and chains
Loading, transforming, indexing, and retrieving data
Document loader
Text splitters
Retrievers
LangChain use cases
Summarization: Summarizing long documents
QnA using documents as context
Extraction: Getting structured data from unstructured text
Evaluation: Evaluating outputs generated from LLM models
Querying tabular data without using any extra code
Hands-on exercise
Using LangChain loaders, splitters, and retrievers on a PDF document
Build LLM apps using LangChain. Learn about LangChain's key components such as models, prompts, parsers, memory, chains, and QnA. Get hands-on evaluation experience:
LangChain
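The idea behind LangChain-style chains (prompt template, then model, then output parser, each step feeding the next) can be sketched without the library. This is a conceptual sketch with a stubbed model, not LangChain's actual API:

```python
# Sketch: the "chain" pattern -- prompt template -> model -> parser.

def prompt_template(question: str, context: str) -> str:
    return (f"Answer using only the context below.\n"
            f"Context: {context}\nQuestion: {question}\nAnswer:")

def stub_llm(prompt: str) -> str:
    # Stand-in for a real model call; returns a canned completion.
    return " Paris\n"

def output_parser(completion: str) -> str:
    # Parsers turn raw completions into clean, structured values.
    return completion.strip()

def chain(question: str, context: str) -> str:
    # Each component's output is the next component's input.
    return output_parser(stub_llm(prompt_template(question, context)))

answer = chain("What is the capital of France?",
               "France's capital and largest city is Paris.")
print(answer)
```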
12. Agents and tools
Agent types
Conversational agents
OpenAI functions agents
ReAct agents
Plan and execute agents
Hands-on exercise: Create and execute some of the following agents
Excel agent
JSON agent
Python Pandas agent
Document comparison agent
Power BI agent
Use LLMs to make decisions about what to do next, and enable these decisions with tools. We'll learn what agents and tools are, how they work, and how to use them within the LangChain library to superpower our LLMs. In this module, we'll talk about:
Autonomous Agents
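The decide-act-observe loop that agents run can be sketched with a stubbed decision policy in place of a real LLM. Frameworks like LangChain wrap this loop with actual model calls, tool schemas, and prompt formats; everything below is illustrative:

```python
# Sketch: a minimal agent loop with one tool and a stubbed "LLM" policy.

def calculator(expression: str) -> str:
    # Toy tool: evaluate simple arithmetic. (Real agents validate input.)
    return str(eval(expression, {"__builtins__": {}}, {}))

TOOLS = {"calculator": calculator}

def stub_llm(history: list[str]) -> str:
    # Stand-in policy: request one tool call, then give the final answer.
    if not any(h.startswith("Observation:") for h in history):
        return "Action: calculator: 6 * 7"
    return "Final Answer: 42"

def run_agent(question: str, max_steps: int = 5) -> str:
    history = [f"Question: {question}"]
    for _ in range(max_steps):
        decision = stub_llm(history)
        if decision.startswith("Final Answer:"):
            return decision.removeprefix("Final Answer:").strip()
        # Parse "Action: <tool>: <argument>", run the tool, record the result.
        _, tool_name, arg = [s.strip() for s in decision.split(":", 2)]
        history.append(f"Observation: {TOOLS[tool_name](arg)}")
    return "gave up"

print(run_agent("What is 6 times 7?"))
```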
13. Ethics, bias, fairness
Sources of bias in the acquisition/annotation of training data and in model building
Precautions for safeguarding the model against bias
Review some of the regulations/legislation
Principles of responsible AI
Fairness and eliminating bias
Reliability and safety
Privacy and data protection
Transparency and explainability
Accountability and governance
Inclusivity and accessibility
Review some of the tools available to assess the following in a large language model application
Correctness and security
Bias, fairness, and explainability of the model
Bias can creep in at any stage of a model's lifecycle. While large language models offer tremendous business value, humans are involved in all stages of an LLM's lifecycle, from acquisition of data to interpretation of insights. In this module, we will learn about the following:
Bias, Fairness and Explainability
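One simple, widely used fairness check for the assessment tools mentioned above is the demographic parity difference: the gap in positive-outcome rates between groups. Toolkits such as Fairlearn and AIF360 compute this among many other metrics; the decision data below is made up for illustration:

```python
# Sketch: demographic parity difference on toy model decisions.

def positive_rate(outcomes: list[int]) -> float:
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(by_group: dict[str, list[int]]) -> float:
    # Largest gap in favorable-outcome rates across groups; 0 is parity.
    rates = [positive_rate(v) for v in by_group.values()]
    return max(rates) - min(rates)

# 1 = favorable model decision (e.g. resume shortlisted), keyed by group.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 75% positive
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],   # 37.5% positive
}

gap = demographic_parity_difference(decisions)
print(gap)   # a large gap flags the model for closer review
```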
14. Virtual assistant: A dynamic customer service agent designed for the car manufacturing industry.
Content generation (Marketing co-pilot): Enhancing your marketing strategies with an intelligent co-pilot.
Conversational agent (Legal and compliance assistant): Assisting with legal and compliance matters through interactive conversations.
QnA (IRS tax bot): An intelligent bot designed to answer your questions about IRS tax-related topics.
Content personalizer: Tailoring content specifically to your preferences and needs.
YouTube virtual assistant: Engage in interactive conversations with your favorite YouTube channels and playlists.
Recommended Projects
15. Learn to Build LLM Applications
Join this 5-day | 40-hour bootcamp to get started with building large language model applications on your enterprise data.
Seattle: September 18-22, 2023
Washington, D.C.: October 16-20, 2023
Austin: November 6-10, 2023
New York: December 4-8, 2023
Singapore: January 2024