A non-technical overview of Large Language Models, exploring their potential, limitations, and customization for specific challenges. While this deck was prepared with an audience from the financial industry in mind, its content remains broadly applicable.
(Note: Discover a slightly updated version of this deck at slideshare.net/LoicMerckel/introduction-to-llms.)
2. 1966: ELIZA
Image source: en.wikipedia.org/wiki/ELIZA#/media/File:ELIZA_conversation.png
“While ELIZA was capable of engaging in discourse, it could not converse with true understanding. However, many early users were convinced of ELIZA's intelligence and understanding, despite Weizenbaum's insistence to the contrary.”
Source: en.wikipedia.org/wiki/ELIZA (and references therein).
3. 2005: SCIgen - An Automatic CS Paper Generator
nature.com/articles/d41586-021-01436-7
news.mit.edu/2015/how-three-mit-students-fooled-scientific-journals-0414
A project using a rather rudimentary technology that aimed to "maximize amusement, rather than coherence" still causes trouble today...
pdos.csail.mit.edu/archive/scigen
4. 2017: Google Revolutionized Text Generation
■ Vaswani (2017), Attention Is All You Need (doi.org/10.48550/arXiv.1706.03762)
■ openai.com/research/better-language-models
Image generated with DALL.E: “A small robot standing on the
shoulder of a giant robot” (and slightly modified with The Gimp)
OpenAI’s Generative Pre-trained Transformer (DALL.E, 2021; ChatGPT, 2022), as the name suggests, is built on Transformers.
Google introduced the Transformer, which rapidly became the state-of-the-art approach to solving most NLP problems.
5. ● Kiela et al. (2021), Dynabench: Rethinking Benchmarking in NLP: arxiv.org/abs/2104.14337
● Roser (2022), The brief history of artificial intelligence: The world has changed fast – what might be next?: ourworldindata.org/brief-history-of-ai
Transformers
2017
Text and shapes in blue have been added to the original work from Max Roser.
6. What Are Transformers?
Source: Vaswani (2017), Attention Is All You Need
(doi.org/10.48550/arXiv.1706.03762)
Generative (deep learning) models for understanding and generating text, images and many other types of data.
Transformers analyze chunks of data, called "tokens," and learn to predict the next token in a sequence, based on previous and, if available, following tokens.
The auto-regressive concept means that the output of the model, such as the prediction of a word in a sentence, is influenced by the previous words it has generated.
Music—MusicLM (Google) and Jukebox (OpenAI) generate music from text.
Image—Imagen (Google) and DALL.E (OpenAI) generate novel images from text.
Text—OpenAI’s GPT has become widely known, but other players have similar technology (including Google, Meta, Anthropic and others).
Others—Recommenders (movies, books, flight destinations), drug discovery…
Models that learn from a given dataset how to generate new data instances.
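The next-token mechanism described above can be sketched with a toy autoregressive loop. The bigram table below is a made-up stand-in for a trained Transformer (which would instead output a probability distribution over its whole vocabulary at each step); it only illustrates how each generated token is fed back into the context.

```python
import random

# Toy stand-in for a trained model: maps the last token to candidate
# next tokens. A real Transformer instead scores every vocabulary token.
BIGRAMS = {
    "<start>": ["the"],
    "the": ["model", "sequence"],
    "model": ["predicts"],
    "predicts": ["the"],
    "sequence": ["<end>"],
}

def generate(max_tokens=10, seed=0):
    """Autoregressive decoding: pick a next token conditioned on the
    output so far, append it, and repeat until <end> or the limit."""
    rng = random.Random(seed)
    tokens = ["<start>"]
    for _ in range(max_tokens):
        candidates = BIGRAMS.get(tokens[-1], ["<end>"])
        nxt = rng.choice(candidates)  # a real model samples from softmax probabilities
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return tokens[1:]  # drop the <start> marker

print(" ".join(generate()))
```

Swapping the lookup table for a network that returns next-token probabilities turns this loop into essentially the decoding procedure used by GPT-style models.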
7. 2022: ChatGPT
“ChatGPT, the popular chatbot
from OpenAI, is estimated to have
reached 100 million monthly
active users in January, just two
months after launch, making it the
fastest-growing consumer
application in history”
statista.com/chart/29174/time-to-one-million-users
Reuters, Feb 1, 2023
https://reut.rs/3yQNlGo
8. The Mushrooming of Transformer-Based LLMs
PaLM (540b), LaMDA (137b) and others (Bard relies on LaMDA)
OPT-IML (175b), Galactica (120b), BlenderBot3 (175b), Llama 2 (70b)
ERNIE 3.0 Titan (260b)
GPT-3 (175b), GPT-3.5 (?b), GPT-4 (?b)
BLOOM (176b)
PanGu-𝛼 (200b)
Jurassic-1 (178b), Jurassic-2 (?b)
Exaone (300b)
Megatron-Turing NLG (530b)
(It appears that all those models rely only on transformer-based decoders)
12. AI Mentions Boost Stock Prices
● AI-mentioning companies: +4.6% avg. stock-price increase (nearly double that of non-mentioning companies).
● In general, 67% of companies that mentioned AI observed an increase in their stock prices → +8.5% on average.
● Tech companies: 71% → +11.9% on avg.
● Non-tech companies: 65% → +6.8% on avg.
- Mentions of "AI" and related terms (machine learning, automation, robots, etc.).
- S&P 500 companies in 2023.
- 3-day change from the date the earnings call transcript was published.
Source: wallstreetzen.com/blog/ai-mention-moves-stock-prices-2023
13. GPUs Demand Skyrockets
Before LLMs, GPUs were primarily needed for training, and
CPUs were used for inference. However, with the emergence
of LLMs, GPUs have become almost essential for both tasks.
Paraphrasing Brannin McBee, co-founder of CoreWeave, in
Bloomberg Podcast*:
While you may train the model using 10,000 GPUs, the real
challenge arises when you need 1 million GPUs to meet the
entire inference demand. This surge in demand is expected
during the initial one to two years after the launch, and it's likely
to keep growing thereafter.
* How to Build the Ultimate GPU Cloud to Power AI | Odd Lots (youtube.com/watch?v=9OOn6u6GIqk&t=1308s)
14. Enhancing Productivity With Generative AI?
nature.com/articles/d41586-023-02270-9
science.org/doi/10.1126/science.adh2586
16. Beware of “Hallucinations” Which Do Remain Very Real
“Hallucinations” are “confident statements that are not true”¹.
For the moment, this phenomenon inexorably affects all known LLMs.
1: fr.wikipedia.org/wiki/Hallucination_(intelligence_artificielle)
Yves Montand in “Le Cercle Rouge” during an attack of delirium tremens
This thing probably doesn't exist.
17. Concrete Hallucinations (GPT-4)
We asked ChatGPT the first part of the third question of the British Mathematical Olympiad 1977: bmos.ukmt.org.uk/home/bmo-1977.pdf
Is that so? Although not an obvious hallucination, it may remind us of Fermat’s lack of space in the margin to give the proof of his last theorem… Perhaps here there is a lack of tokens?
Here, a total hallucination: this statement is evidently false. Perhaps it meant “the product of two negative numbers”.
Here, a total hallucination: this statement is evidently false. (Although in this case the inequality is indeed clearly true.)
18. The Saga of the Lawyer Who Used ChatGPT
nytimes.com/2023/06/08/nyregion/lawyer-chatgpt-sanctions.html
nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html
nytimes.com/2023/06/22/nyregion/lawyers-chatgpt-schwartz-loduca.html
19. ChatGPT: Achieving Human-Level Performance in
Professional and Academic Benchmarks
● GPT-4's performance in recent tests is
undeniably impressive.
● Study conducted by OpenAI
(openai.com/papers/gpt-4.pdf).
● Most of those tests focus mainly on high-school-level content.
● Many are prepared for through test-prep courses and resources.
● By contrast, university exams typically
require a deeper understanding of course
material and critical thinking skills.
● Uniform Bar Exam: Worth noting, but
potential overestimation concerns (see
dx.doi.org/10.2139/ssrn.4441311).
20. Exploring the MIT Mathematics and EECS Curriculum Using
Large Language Models
Published on Jun 15, 2023
Authors: Sarah J. Zhang, Samuel Florin, Ariel N. Lee, Eamon Niknafs, Andrei Marginean, Annie Wang, Keith
Tyser, Zad Chin, Yann Hicke, Nikhil Singh, Madeleine Udell, Yoon Kim, Tonio Buonassisi, Armando
Solar-Lezama, Iddo Drori
Abstract
We curate a comprehensive dataset of 4,550 questions and solutions from problem sets,
midterm exams, and final exams across all MIT Mathematics and Electrical Engineering and
Computer Science (EECS) courses required for obtaining a degree. We evaluate the ability of
large language models to fulfill the graduation requirements for any MIT major in Mathematics
and EECS. Our results demonstrate that GPT-3.5 successfully solves a third of the entire MIT
curriculum, while GPT-4, with prompt engineering, achieves a perfect solve rate on a test set
excluding questions based on images. We fine-tune an open-source large language model on
this dataset. We employ GPT-4 to automatically grade model responses, providing a detailed
performance breakdown by course, question, and answer type. By embedding questions in a
low-dimensional space, we explore the relationships between questions, topics, and classes and
discover which questions and classes are required for solving other questions and classes
through few-shot learning. Our analysis offers valuable insights into course prerequisites and
curriculum design, highlighting language models' potential for learning and improving
Mathematics and EECS education.
Source: arxiv.org/abs/2306.08997
i.e., GPT-4 scored 100% on the MIT EECS (Electrical Engineering and Computer Science) curriculum
21. “No, GPT4 can’t ace MIT”
Three MIT undergrads have debunked the myth.
- 4% of the questions were unsolvable. (How did GPT-4 achieve 100%?)
- Information leak in some few-shot prompts: for those, the answer was quasi-given in the question.
- The automatic grading using GPT-4 itself has severe issues: a prompt cascade reprompted (many times) when the given answer was deemed incorrect. Moreover, 16% of the questions were multiple-choice, hence a quasi-guaranteed correct response.
- Bugs found in the research script raise serious questions regarding the soundness of the study.
Source: flower-nutria-41d.notion.site/No-GPT4-can-t-ace-MIT-b27e6796ab5a48368127a98216c76864
Note: The paper has since been withdrawn (see official statement at people.csail.mit.edu/asolar/CoursesPaperStatement.pdf)
22. Chemistry May Not Be ChatGPT's Cup of Tea
A study conducted by three researchers at the University of Hertfordshire (UK) showed that ChatGPT is not a fan of chemistry.
Real exams were used, and the authors note that “[a] well-written
question item aims to create intellectual challenge and to require
interpretation and inquiry. Questions that cannot be easily
‘Googled’ or easily answered through a single click in an
internet search engine is a focus.”
“The overall grade on the year 1 paper calculated from the top
four graded answers would be 34.1%, which does not meet the
pass criteria. The overall grade on the year 2 paper would be
18.3%, which does not meet the pass criteria.”
Source: Fergus et al., 2023, Evaluating Academic Answers Generated Using ChatGPT (pubs.acs.org/doi/10.1021/acs.jchemed.3c00087)
23. The “Drift” Phenomenon
Sources:
- wsj.com/articles/chatgpt-openai-math-artificial-intelligence-8aba83f0
- Chen et al., 2023, arxiv.org/abs/2307.09009
● New research from Stanford and UC Berkeley
highlights a fundamental challenge in AI
development: "drift."
● Drift occurs when improving one aspect of
complex AI models leads to a decline in
performance in other areas.
● ChatGPT has shown deterioration in basic math
operations despite advancements in other tasks.
● GPT-4 exhibits reduced responsiveness to
chain-of-thought prompting (may be intended to
mitigate potential misuse with malicious
prompts).
The “behavior of the ‘same’ LLM service can change substantially in a relatively short amount of time, highlighting the need for continuous monitoring of LLMs” (Chen et al., 2023).
24. Techniques for Tailoring LLMs to
Specific Problems
Prompt Engineering
Fine-Tuning
Reinforcement Learning From Human Feedback (RLHF)
25. First We Must Have a Problem to Solve…
Source: DeepLearning.AI, licensed under CC BY-SA 2.0
26. Then We Need a Model
Commercial APIs
- Google, OpenAI, Anthropic, Microsoft...
- Privacy concerns may arise.
- No specific hardware requirement.
- Prompt engineering (OpenAI offers prompt fine-tuning).
Use a foundation model (many open-source models are available)
- As is (prompt engineering),
- or fine-tuned (either full or parameter-efficient fine-tuning).
- May require specific hardware/infrastructure for hosting, fine-tuning and inference.
Train a model from scratch
- Requires huge resources (both data and computing power).
- (e.g., BloombergGPT, arxiv.org/abs/2303.17564.)
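As a minimal sketch of the first option (a commercial API), the snippet below uses the official `openai` Python package's chat-completions call. The model name, prompts, and helper names are illustrative assumptions, not recommendations from the deck, and actually running `ask` requires an `OPENAI_API_KEY` in the environment.

```python
# Sketch of the "commercial API" route, assuming the official
# `openai` package (>= 1.0). Names below are placeholders.

def build_messages(system: str, user: str) -> list[dict]:
    """Assemble the chat-format payload that the OpenAI API expects."""
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

def ask(question: str, model: str = "gpt-4o-mini") -> str:
    """Send one question to the API and return the reply text.
    The import is lazy so the rest of the sketch works offline."""
    from openai import OpenAI
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model=model,  # placeholder model name
        messages=build_messages("You are a concise assistant.", question),
    )
    return resp.choices[0].message.content

print(build_messages("You are a concise assistant.",
                     "Summarize the Transformer architecture in one sentence."))
```

The same two-function shape applies to the other providers listed on the slide; only the client library and payload format change.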
27. A Plethora of Open
Source Pre-Trained
Models
huggingface.co/models
Models should be selected
depending on:
● The problem at hand.
● The strength of the model.
● The operating costs (larger
models require more
resources).
● Other considerations (e.g.,
license).
28. Prompt Engineering: “Query Crafting”
Improving the output with actions like rephrasing
queries, specifying styles, providing context, or
assigning roles (e.g., “Act as a mathematics
teacher”) (Wikipedia, 2023).
Some hints can be found in OpenAI’s “GPT best
practices” (OpenAI, 2023).
Chain-of-thought: a popular technique that consists
of “guiding [LLMs] to produce a sequence of
intermediate steps before giving the final answer”
(Wei et al., 2022).
Sources:
- Wei et al., 2022, Emergent Abilities of Large Language Models, arxiv.org/abs/2206.07682
- OpenAI, 2023, platform.openai.com/docs/guides/gpt-best-practices/six-strategies-for-getting-better-results
- Wikipedia, 2023, Prompt Engineering, en.wikipedia.org/wiki/Prompt_engineering
(graph from Wei et al., 2022)
About GSM8K benchmark: arxiv.org/abs/2110.14168
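A minimal sketch of a chain-of-thought prompt: a one-shot example demonstrating intermediate reasoning steps is prepended to the question, nudging the model to reason step by step before answering. The worked example text below is invented for illustration.

```python
# Sketch of chain-of-thought prompting: the exemplar shows explicit
# intermediate arithmetic steps before stating the final answer.
def build_cot_prompt(question: str) -> str:
    example = (
        "Q: A box holds 12 bars. I ate 5 and bought another box. "
        "How many bars do I have?\n"
        "A: I started with 12 bars. After eating 5, 12 - 5 = 7 remain. "
        "A new box adds 12, so 7 + 12 = 19. The answer is 19.\n\n"
    )
    return example + f"Q: {question}\nA:"

prompt = build_cot_prompt(
    "A train has 8 cars with 40 seats each. How many seats in total?"
)
```

The prompt ends with an open “A:” so the model completes it with its own step-by-step reasoning.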
29. Prompt Engineering: In-Context Learning (ICL)
In-Context Learning (ICL) consists of providing “a few input-output
examples in the model’s context (input) as a preamble
before asking the model to perform the task for an unseen
inference-time example” (Wei et al., 2022).
It is a kind of “ephemeral” supervised learning.
- Zero-shot prompting or zero-shot learning: no example
given (works for the largest LLMs; smaller ones may struggle).
- One-shot prompting: one example provided.
- Few-shot prompting: a few examples (typically 3 to 6).
⚠ Context window limits (e.g., 4096 tokens).
Tweet: @lufthansa Please find our
missing luggage!!
Sentiment: negative
Tweet: Will be on LH to FRA very soon.
Cheers!
Sentiment: positive
Tweet: Refused to compensate me for 2
days cancelled flights . Joke of a airline
Sentiment:
LLM
negative
Example of an input and
output for two-shot prompting
Source: Wei et al., 2022, Emergent Abilities of Large Language Models, arxiv.org/abs/2206.07682
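Assembling a two-shot prompt like the one on this slide is plain string formatting. A minimal sketch, reusing the tweet examples above (the “Tweet:/Sentiment:” formatting convention is an assumption, not a requirement of any model):

```python
# Build a few-shot (here, two-shot) sentiment prompt: labeled examples
# form the preamble, and the unlabeled query is left for the LLM to complete.
def few_shot_prompt(examples, query):
    lines = [f"Tweet: {tweet}\nSentiment: {label}" for tweet, label in examples]
    lines.append(f"Tweet: {query}\nSentiment:")
    return "\n\n".join(lines)

examples = [
    ("@lufthansa Please find our missing luggage!!", "negative"),
    ("Will be on LH to FRA very soon. Cheers!", "positive"),
]
prompt = few_shot_prompt(
    examples,
    "Refused to compensate me for 2 days cancelled flights . Joke of a airline",
)
```

The resulting string, sent as-is to the model, should elicit the completion “negative”.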
30. Fine-Tuning: Introduction
Few-shot learning:
- May not be sufficient for smaller models.
- Consumes tokens from the context window.
Fine-tuning is a supervised learning process
that leads to a new model (in contrast with
in-context learning that is “ephemeral”).
Task-specific prompt-completion pair data
are required.
Base LLM
Fine-tuned
LLM
(Prompt_1, completion_1)
(Prompt_2, completion_2)
…
(Prompt_n, completion_n)
Task-specific prompt-completion
pair data
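The training data for supervised fine-tuning is typically serialized as one JSON object per line (JSONL). A sketch, assuming the common “prompt”/“completion” field names (a widespread convention, not tied to any specific API):

```python
import json

# Task-specific prompt-completion pairs serialized as JSONL, the usual
# on-disk format for supervised fine-tuning data sets.
pairs = [
    {"prompt": "Summarize: Revenue grew 12% year over year, driven by...",
     "completion": "Revenue up 12% YoY."},
    {"prompt": "Classify the sentiment of this tweet: Joke of a airline.",
     "completion": "negative"},
]
jsonl = "\n".join(json.dumps(p) for p in pairs)
```

Each line is an independent training example; the fine-tuning job consumes the file as a whole.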
31. Full Fine-Tuning: Updating All Parameters
Fine-tuning very often means “instruction fine-tuning.”
Instruction fine-tuning: each prompt-completion pair includes a specific
instruction (summarize this, translate that, classify this tweet, …).
● Fine-tuning on a single task (e.g., summarization) may lead to a phenomenon
referred to as “catastrophic forgetting” (arxiv.org/pdf/1911.00202), where the
model loses its abilities on other tasks (which may not be a business issue, though).
● Fine-tuning on multiple tasks (e.g., summarization, translation, classification, …).
This requires a lot more training data. (E.g., see FLAN in Wei et al., 2022.)
Full fine-tuning is extremely resource-demanding, even more so for large models.
Source: Wei et al., 2022, Finetuned Language Models Are Zero-Shot Learners. arxiv.org/abs/2109.01652
32. Parameter Efficient Fine-Tuning (PEFT)
Unlike full fine-tuning, PEFT preserves the vast majority of the weights of the original
model.
● Less prone to “catastrophic forgetting” on single task.
● Often a single GPU is enough.
Three methods:
● Selective—fine-tune only a subset of the initial parameters.
● Reparameterization—reparameterize model weights using a low-rank
representation, e.g., LoRA (Hu et al., 2021).
● Additive—add trainable layers or parameters to the model, with two approaches:
- Adapters: add new trainable layers to the architecture of the model.
- Soft prompts: focus on manipulating the input (this is not prompt engineering).
Source:
- coursera.org/learn/generative-ai-with-llms/lecture/rCE9r/parameter-efficient-fine-tuning-peft
- Hu et al., 2021, LoRA: Low-Rank Adaptation of Large Language Models. arxiv.org/abs/2106.09685
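The LoRA idea (Hu et al., 2021) can be sketched in a few lines of NumPy: the frozen weight matrix W is augmented with a trainable low-rank product B·A, so only r·(d_in + d_out) parameters are trained instead of d_in·d_out. The dimensions below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 512, 512, 8   # rank r is much smaller than the layer dims

W = rng.standard_normal((d_out, d_in))      # frozen pre-trained weights
A = rng.standard_normal((r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                    # trainable, initialized to zero

def forward(x):
    # Effective weight is W + B @ A; with B = 0 at init, the output
    # matches the base model exactly, so training starts from it.
    return W @ x + B @ (A @ x)

trainable = A.size + B.size   # 2 * 8 * 512 = 8,192 parameters
full = W.size                 # 512 * 512 = 262,144 parameters
```

Here only about 3% of the layer’s parameters are trained, which is why a single GPU is often enough.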
33. Fine-Tuning With OpenAI GPT (PEFT)
The OpenAI API offers prompt tuning for
gpt-3.5-turbo, but not “yet” for GPT-4.
platform.openai.com/docs/guides/fine-tuning
34. Reinforcement Learning From Human Feedback
LLMs are trained on web data containing a lot of irrelevant (unhelpful) material or, worse,
abundant false (dishonest) and/or harmful information, e.g.,
● Potentially dangerous false medical advice.
● Valid techniques for illegal activities (hacking, deceiving, building weapons, …).
HHH (Helpful, Honest & Harmless) alignment (Askell et al., 2021): ensuring that the
model's behavior and outputs are consistent with human values, intentions, and ethical
standards.
Reinforcement Learning from Human Feedback, or RLHF (Casper et al., 2023)
● “is a technique for training AI systems to align with human goals.”
● “[It] has emerged as the central method used to finetune state-of-the-art [LLMs].”
● It relies on human judgment and consensus.
Source:
- Casper et al., 2023, Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback. arxiv.org/abs/2307.15217
- Ziegler et al., 2022, Fine-Tuning Language Models from Human Preferences. arxiv.org/abs/1909.08593
- Askell et al., 2021, A General Language Assistant as a Laboratory for Alignment. arxiv.org/abs/2112.00861
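The heart of RLHF’s reward modeling is a pairwise preference loss: given two outputs and a human label for which is better, the reward model is trained to score the preferred one higher. A minimal sketch of that loss (the Bradley-Terry-style objective commonly used; function and variable names are illustrative):

```python
import math

# Pairwise preference loss: -log(sigmoid(r_chosen - r_rejected)).
# It is small when the reward model already ranks the human-preferred
# output higher, and large when the ranking is inverted.
def preference_loss(r_chosen: float, r_rejected: float) -> float:
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

low = preference_loss(2.0, -1.0)   # model agrees with the human rater
high = preference_loss(-1.0, 2.0)  # model disagrees: much larger loss
```

The trained reward model then supplies the reward signal used to fine-tune the LLM with reinforcement learning (e.g., PPO).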
35. What Is RLHF by Sam Altman
5:59
What is RLHF? Reinforcement Learning with Human Feedback, …
6:07
… So, we trained these models on a lot of text data and, in that process, they
learned the underlying, …. And they can do amazing things.
6:26
But when you first play with that base model, that we call it, after you finish
training, … it can do a lot of, you know, there's knowledge in there. But it's not
very useful or, at least, it's not easy to use, let's say. And RLHF is how we
take some human feedback,
6:45
the simplest version of this is show two outputs, ask which one is better
than the other,
6:50
which one the human raters prefer, and then feed that back into the model
with reinforcement learning.
6:56
And that process works remarkably well with, in my opinion, remarkably little
data to make the model more useful. So, RLHF is how we align the model to
what humans want it to do.
Sam Altman: OpenAI CEO on
GPT-4, ChatGPT, and the Future of
AI | Lex Fridman Podcast #367
(youtu.be/L_Guz73e6fw?si=vfkdtN
CyrQa1RzZR&t=359)
36. RLHF: Example of Alignment Tasks
Source: Liu et al., 2022, Aligning Generative Language Models with Human Values. aclanthology.org/2022.findings-naacl.18
38. Assessing and Comparing LLMs
Metrics used while training the model—ROUGE (summarization) or BLEU (translation).
Benchmarks—A non-exhaustive list:
- ARC (Abstraction and Reasoning Corpus, arxiv.org/pdf/2305.18354),
- HellaSwag (arxiv.org/abs/1905.07830),
- TruthfulQA (arxiv.org/abs/2109.07958),
- GLUE & SuperGLUE (General Language Understanding Evaluation, gluebenchmark.com),
- HELM (Holistic Evaluation of Language Models, crfm.stanford.edu/helm),
- MMLU (Massive Multitask Language Understanding, arxiv.org/abs/2009.03300),
- BIG-bench (arxiv.org/pdf/2206.04615).
Others—“Auto-Eval of Question-Answering Tasks”
(blog.langchain.dev/auto-eval-of-question-answering-tasks).
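To make the training-time metrics concrete, here is a toy sketch of ROUGE-1 recall (the unigram-overlap component of ROUGE); real evaluations use the full ROUGE implementation with stemming and F-measures.

```python
# Toy ROUGE-1 recall: fraction of reference unigrams that also appear
# in the candidate summary (clipped by per-word counts).
def rouge1_recall(candidate: str, reference: str) -> float:
    cand = candidate.lower().split()
    ref = reference.lower().split()
    overlap = sum(min(cand.count(w), ref.count(w)) for w in set(ref))
    return overlap / len(ref)

score = rouge1_recall("revenue grew strongly", "revenue grew 12 percent")
```

Here 2 of the 4 reference words (“revenue”, “grew”) are recalled, giving 0.5; BLEU works analogously but measures n-gram precision with a brevity penalty.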
39. Source: Wu et al., 2023,
BloombergGPT: A Large Language
Model for Finance.
arxiv.org/abs/2303.17564 (Table 13:
“BIG-bench hard results using
standard 3-shot prompting”)
40. Source: Touvron et al., 2023, Llama 2: Open Foundation and Fine-Tuned Chat Models,
scontent-fra3-1.xx.fbcdn.net/v/t39.2365-6/10000000_662098952474184_2584067087619170692_n.pdf
42. Question ChatGPT About the Latest Financial
Reports?
—blog.langchain.dev/tutorial-
chatgpt-over-your-data
“[ChatGPT] doesn’t know about
your private data, it doesn’t know
about recent sources of data.
Wouldn’t it be useful if it did?”
43. Workflow Overview
Question
Answer
« Quels vont être les dividendes payés
par action par le Groupe Crit ? »
(“What dividend per share will Groupe Crit pay?”)
« Le Groupe CRIT proposera lors de sa prochaine Assemblée Générale, le 9
juin 2023, le versement d'un dividende exceptionnel de 3,5 € par action. »
(“At its next General Meeting, on June 9, 2023, Groupe CRIT will propose the
payment of an exceptional dividend of €3.50 per share.”)
The example (the question and its
associated answer) is real (the LLM was
“gpt-3.5-turbo” from OpenAI).
Technique described in: Lewis et al., 2020.
Retrieval-augmented generation for knowledge-intensive
nlp tasks. (doi.org/10.48550/arXiv.2005.11401)
The workflow (diagram) has three steps:
1. Split the documents into chunks and compute
embeddings, stored in a vector store.
2. Extract the relevant information (“context”)
for the question from the vector store.
3. Generate a prompt accordingly (“question +
context”) and pass it to the LLM.
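A toy sketch of this retrieval-augmented generation workflow, with a naive bag-of-words “embedding” standing in for a real embedding model and cosine similarity as the vector-store lookup (chunk texts and helper names are illustrative):

```python
import math
from collections import Counter

# Stand-in for a real embedding model: a bag-of-words count vector.
def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# 1. Split documents into chunks, embed them, keep them in a "vector store".
chunks = [
    "The board proposes an exceptional dividend of 3.5 euros per share.",
    "Headcount increased by 4% over the period.",
]
store = [(embed(c), c) for c in chunks]

# 2. Retrieve the chunk most similar to the question (the "context").
question = "What dividend per share is proposed?"
q = embed(question)
context = max(store, key=lambda entry: cosine(q, entry[0]))[1]

# 3. Assemble the prompt ("question + context") for the LLM.
prompt = f"Context: {context}\n\nQuestion: {question}\nAnswer:"
```

In practice a framework like LangChain with a real embedding model and vector database plays the roles sketched here.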
44. Preliminary Prototype
Financial reports retrieved directly from the French AMF (“Autorité
des marchés financiers”) via their API (info-financiere.fr).
XHTML documents in
French.
Question and answer
are in English (they
would be in French
should the question be
asked in French).
45. Except where otherwise noted, this work is licensed under
https://creativecommons.org/licenses/by/4.0/
619.io