This is part 1 of an 8-part introductory course for SharePoint Champions and Superusers focusing on integrating Large Language Models (LLMs) into corporate environments. Section 1 introduces LLMs, covering their definition, history, and capabilities. It explores how LLMs work, their impact across industries, and current limitations. The section also discusses popular LLM examples and future directions in the field, setting the foundation for understanding their potential in SharePoint contexts.
The course then takes a look at using online LLMs, local LLM deployment for corporate use, and the intricate process of installing and configuring these models. It provides detailed guidance on integrating LLMs with SharePoint, exploring various applications such as enhanced search, automated content tagging, and intelligent document processing. The later sections cover best practices and governance for LLM-enhanced SharePoint environments, addressing crucial aspects like data privacy, ethical considerations, and user adoption strategies.
The course concludes by examining future trends and considerations, preparing participants for the evolving landscape of AI-enhanced knowledge management. Throughout, it emphasizes practical applications, challenges, and solutions, equipping SharePoint Champions and Superusers with the knowledge to leverage LLMs effectively within their organizations.
Yes, most of it was written by an LLM.
An Introduction to AI LLMs & SharePoint For Champions and Super Users Part 1
1. AI LLMs & SharePoint
Using Large Language Models (LLMs) with SharePoint within the corporate firewall
Part 1: A brief introduction to Large Language Models
3. Definition and Basic Concepts
• What are Large Language Models (LLMs)?
• Key characteristics of LLMs
• How LLMs differ from traditional NLP models
4. What are Large Language Models (LLMs)?
• Large Language Models (LLMs) are advanced artificial intelligence systems designed to understand, generate, and manipulate human language.
• These models are trained on vast amounts of text data, allowing them to capture intricate patterns and nuances in language.
5. Key characteristics of LLMs
• Massive scale: Typically containing billions of parameters
• Generative capabilities: Able to produce human-like text
• Contextual understanding: Can interpret and respond to complex prompts
6. How LLMs differ from traditional NLP models
• NLP – Natural Language Processing
• LLMs differ from traditional NLP models in their scale, versatility, and ability to perform a wide range of language tasks without task-specific training.
7. Brief History and Evolution
• Early language models and their limitations
• Breakthrough developments (e.g., transformer architecture)
• Major milestones in LLM development (e.g., BERT, GPT series)
11. How LLMs Work
• Overview of neural network architecture
• Training process: unsupervised learning on vast text corpora
• Concept of "understanding" in LLMs
12. Overview of neural network architecture
• “At their core, most modern LLMs use transformer architectures, which allow for parallel processing of input data and capture long-range dependencies in text.”
• What does THAT mean?
13. What is a Transformer Architecture?
• A transformer architecture is a type of artificial intelligence (AI) model that allows computers to process and analyze large amounts of data quickly and efficiently. It's like a super-powerful, ultra-fast librarian that can find connections between different pieces of information.
14. How does it work?
• Imagine you're reading a long book. As you read, you might notice that certain words or phrases keep appearing throughout the text, even if they're on different pages. A transformer architecture is designed to help computers do the same thing – it looks for patterns and connections between different parts of a text.
15. Parallel Processing
• One of the key features of transformers is their ability to process multiple pieces of information at the same time, or "in parallel." This means that instead of reading the book page by page, the computer can look at multiple pages simultaneously and find connections between them.
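The "multiple pages at once" idea can be sketched in a few lines of Python. This is a toy illustration with made-up numbers (the token vectors and the projection matrix `W` are random, not from any real model): the same per-token computation done one token at a time gives the identical result as one matrix multiply that transforms every token simultaneously.

```python
import numpy as np

rng = np.random.default_rng(0)
tokens = rng.normal(size=(6, 4))   # 6 token vectors, 4 features each (made up)
W = rng.normal(size=(4, 4))        # a hypothetical learned projection

# Sequential: visit each "page" one at a time.
sequential = np.stack([tok @ W for tok in tokens])

# Parallel: one matrix multiply transforms every token at once.
parallel = tokens @ W

print(np.allclose(sequential, parallel))  # True – same answer, done in one step
```

The point is not the arithmetic but the shape of the work: because the parallel form is a single matrix operation, hardware like GPUs can compute all positions simultaneously instead of looping.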
16. Long-range Dependencies
• Transformers are also great at capturing "long-range dependencies" in data. What does this mean? Well, imagine you're trying to understand a joke. The punchline might not make sense until you've heard the setup and the context of the entire joke – it's not just about individual words or phrases, but how they all fit together. Transformers can capture these long-range dependencies by looking at large chunks of data and finding patterns that connect different parts of it.
17. Summary - What is a Transformer Architecture?
• In short, modern LLMs (Large Language Models) use transformer architectures to process and analyze text quickly and efficiently. This allows them to find connections between different pieces of information, even if they're far apart – which is super helpful for tasks like language translation, text summarization, and more!
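The mechanism behind those long-range connections is called attention, and a stripped-down version fits in a few lines. This is a deliberately simplified sketch (queries, keys, and values are all just the raw token vectors here, and the vectors themselves are random) – real transformers use separate learned projections for each – but it shows the key property: every token scores its relevance to every other token in one matrix operation, so token 0 can link directly to token 4 no matter how far apart they sit.

```python
import numpy as np

def softmax(x, axis=-1):
    """Turn raw scores into weights that sum to 1 along an axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(1)
n_tokens, dim = 5, 8
X = rng.normal(size=(n_tokens, dim))   # made-up token representations

scores = X @ X.T / np.sqrt(dim)        # every-token-to-every-token relevance
weights = softmax(scores, axis=-1)     # each row sums to 1
output = weights @ X                   # each token mixes in every other token

# The first token's attention to the last token – a direct connection,
# with no need to pass through the tokens in between:
print(weights[0, 4])
```

Distance between positions never appears in the computation, which is exactly why distant setup and punchline can be connected in a single step.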
18. Training process
• How LLMs Learn
• Large Language Models (LLMs) learn by reading lots of text from the internet, books, and articles. This helps them understand how language works.
• The Training Process
• The model tries to predict what word comes next in a sentence or paragraph. As it makes more predictions, it gets better at understanding patterns in language.
• Think of it like learning a new language by reading lots of texts, newspapers, and books. You start to recognize common phrases, sentence structures, and even idioms! The LLM is doing something similar, but with computers and algorithms.
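"Predict the next word" can be demonstrated with a back-of-the-envelope sketch: count which word follows which in a tiny made-up corpus, then predict the most common follower. Real LLMs learn this with neural networks over billions of examples rather than raw counts, but the training objective is the same idea.

```python
from collections import Counter, defaultdict

# A tiny, made-up training corpus.
corpus = ("the cat sat on the mat . the cat ran . "
          "the dog sat on the rug .").split()

# Count, for each word, which words follow it and how often.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))   # "cat" – it follows "the" more than any other word
```

More text means better counts means better predictions – which is one intuition for why the "vast amounts of text data" mentioned earlier matter so much.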
19. Concept of "understanding" in LLMs
• The concept of "understanding" in LLMs is a subject of debate.
• While they can produce remarkably human-like responses, their "understanding" is based on statistical patterns rather than true comprehension.
20. Capabilities of LLMs
• Natural language understanding and generation
• Translation and multilingual capabilities
• Text summarization and paraphrasing
• Question answering and information retrieval
• Code generation and analysis
25. Code generation and analysis
• Code Completion and Suggestions
• Code Generation from Natural Language
• Code Analysis and Inspection
• Code Synthesis and Generation from Abstract Specifications
26. Limitations and Challenges
• Biases in training data and outputs
• Hallucinations and factual inaccuracies
• Lack of true understanding or reasoning
• Ethical concerns and potential misuse
27. Biases in training data and outputs
• Unintended Biases in Training Data
• Implicit Biases in Model Outputs
• Cascading Biases
• Lack of Representation
29. Lack of true understanding or reasoning
• AI Systems that Don't Truly Understand
• Lack of Common Sense Reasoning
• Insufficient Contextual Understanding
• Over-Reliance on Memorization
32. Impact on Various Industries
• How LLMs are transforming business processes
• Potential applications in different sectors (e.g., healthcare, finance, education)
33. Transforming business
• Automating Routine Tasks
• Improving Customer Service
• Enhancing Product Development
• Streamlining Compliance and Risk Management
• Optimizing Operations and Supply Chain Management
• Enabling Strategic Decision-Making
34. Potential applications
• Healthcare: Medical Documentation and Research
• Finance: Risk Analysis and Compliance
• Education: Personalized Learning and Research Support
• Oil and Gas: Predictive Maintenance and Risk Analysis
• Telecommunications: Network Optimization and Customer Support
• Manufacturing: Quality Control and Supply Chain Optimization
36. Ongoing research
• Multitask Learning
• Adversarial Training
• Explainable AI (XAI)
• Transfer Learning
• Low-Resource Languages
• Human-Like Language Generation
37. Potential advancements and their implications
• Improved Language Understanding
• Increased Automation
• Enhanced Creative Capabilities
• Advanced Customer Service
• Faster Discovery and Innovation
• New Forms of Human-AI Interaction
38. AI LLMs & SharePoint
Using Large Language Models (LLMs) with SharePoint within the corporate firewall
Part 1: A brief introduction to Large Language Models
39. AI LLMs & SharePoint
Part 1 - Introduction to Large Language Models
Part 2 - Using Online LLMs
Part 3 - Local LLMs for Corporate Use
Part 4 - Installing and Configuring Local LLMs
Part 5 - Integrating LLMs with SharePoint
Part 6 - Benefits of LLM-Enhanced SharePoint
Part 7 - Best Practices and Governance
Part 8 - Future Trends and Considerations