Large Language Models, No-Code, and Responsible AI - Trends in Applied NLP in... – David Talby
An April 2023 presentation to the AMIA working group on natural language processing. The talk focuses on three current trends in NLP and how they apply in healthcare: Large language models, No-code, and Responsible AI.
In this session, you'll get answers about how ChatGPT and other GPT-X models can be applied to your current or future project. First, we'll sort out the terminology – OpenAI, GPT-3, ChatGPT, Codex, Dall-E, etc. – and explain why Microsoft and Azure are often mentioned in this context. Then, we'll go through the main capabilities of Azure OpenAI and the respective use cases that might inspire you to either optimize your product or build a completely new one.
A Comprehensive Review of Large Language Models for.pptx – SaiPragnaKancheti
The document presents a review of large language models (LLMs) for code generation. It discusses different types of LLMs including left-to-right, masked, and encoder-decoder models. Existing models for code generation like Codex, GPT-Neo, GPT-J, and CodeParrot are compared. A new model called PolyCoder with 2.7 billion parameters trained on 12 programming languages is introduced. Evaluation results show PolyCoder performs less well than comparably sized models but outperforms others on C language tasks. In general, performance improves with larger models and longer training, but training solely on code can be sufficient or advantageous for some languages.
What Is GPT-3 And Why Is It Revolutionizing Artificial Intelligence? – Bernard Marr
GPT-3 is an AI tool created by OpenAI that can generate text in human-like ways. It has been trained on vast amounts of text from the internet. GPT-3 can answer questions, summarize text, translate languages, and generate computer code. However, it has limitations as its output can become gibberish for complex tasks and it operates as a black box system. While impressive, GPT-3 is just an early glimpse of what advanced AI may be able to accomplish.
This talk overviews my background as a female data scientist, introduces many types of generative AI, discusses potential use cases, highlights the need for representation in generative AI, and showcases a few tools that currently exist.
The document provides an overview of transformers, large language models (LLMs), and artificial general intelligence (AGI). It discusses the architecture and applications of transformers in natural language processing. It describes how LLMs have evolved from earlier statistical models and now perform state-of-the-art results on NLP tasks through pre-training and fine-tuning. The document outlines the capabilities of GPT-3, the largest LLM to date, as well as its limitations and ethical concerns. It introduces AGI and the potential for such systems to revolutionize AI, while also noting the technical, ethical and societal challenges to developing AGI.
Automate your Job and Business with ChatGPT #3 - Fundamentals of LLM/GPT – Anant Corporation
This document provides an agenda for a full-day bootcamp on large language models (LLMs) like GPT-3. The bootcamp will cover fundamentals of machine learning and neural networks, the transformer architecture, how LLMs work, and popular LLMs beyond ChatGPT. The agenda includes sessions on LLM strategy and theory, design patterns for LLMs, no-code/code stacks for LLMs, and building a custom chatbot with an LLM and your own data.
OpenAI’s GPT 3 Language Model - guest Steve Omohundro – Numenta
In this research meeting, guest Stephen Omohundro gave a fascinating talk on GPT-3, the new massive OpenAI Natural Language Processing model. He reviewed the network architecture, training process, and results in the context of past work. There was extensive discussion on the implications for NLP and for Machine Intelligence / AGI.
Link to GPT-3 paper: https://arxiv.org/abs/2005.14165
Link to YouTube recording of Steve's talk: https://youtu.be/0ZVOmBp29E0
Leveraging Generative AI & Best practicesDianaGray10
In this event we will cover:
- What is Generative AI and how it is being used for the future of work.
- Best practices for developing and deploying generative AI-based models in production.
- The future of Generative AI: how it is expected to evolve in the coming years.
And then there were ... Large Language Models – Leon Dohmen
It is not often, even in the ICT world, that one witnesses a revolution. The rise of the Personal Computer, the rise of mobile telephony and, of course, the rise of the Internet are some of those revolutions. So what is ChatGPT really? Is ChatGPT also such a revolution? And like any revolution, does ChatGPT have its winners and losers? And who are they? How do we ensure that ChatGPT contributes to a positive impulse for "Smart Humanity"?
During keynotes on April 3 and 13, 2023, Piek Vossen explained the impact of Large Language Models like ChatGPT.
Prof. Piek Th.J.M. Vossen is full professor of Computational Lexicology at the Faculty of Humanities, Department of Language, Literature and Communication (LCC) at VU Amsterdam:
What is ChatGPT? What technology and thought processes underlie it? What are its consequences? What choices are being made? In the presentation, Piek elaborates on the basic principles behind Large Language Models and how they are used as a basis for deep learning, in which they are fine-tuned for specific tasks. He also discusses the specific GPT variant that underlies ChatGPT, covering what ChatGPT can and cannot do, what it is good for, and what the risks are.
Seminar on ChatGPT Large Language Model by Abhilash Majumder (Intel)
This presentation is solely for reading purposes and contains technical details about ChatGPT fundamentals
ChatGPT is an AI chatbot created by OpenAI that uses a fine-tuned GPT-3.5 language model to engage in natural conversations. It was trained using reinforcement learning with a reward model to generate helpful, harmless, and honest responses. The document discusses ChatGPT and how it compares to other AI technologies like AI painting, AI chatbots, and goals towards artificial general intelligence.
An Introduction to Generative AI - May 18, 2023 – CoriFaklaris1
For this plenary talk at the Charlotte AI Institute for Smarter Learning, Dr. Cori Faklaris introduces her fellow college educators to the exciting world of generative AI tools. She gives a high-level overview of the generative AI landscape and how these tools use machine learning algorithms to generate creative content such as music, art, and text. She then shares some examples of generative AI tools and demonstrates how she has used them to enhance teaching and learning in the classroom and to boost her productivity in other areas of academic life.
generative-ai-fundamentals and Large language models – AdventureWorld5
The document discusses advances in large language models from GPT-1 to the potential capabilities of GPT-4, including its ability to simulate human behavior, demonstrate sparks of artificial general intelligence, and generate virtual identities. It also provides tips on how to effectively prompt ChatGPT through techniques like prompt engineering, giving context and examples, and different response formats.
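The prompting techniques summarized above (prompt engineering, giving context and examples, response formats) can be illustrated with a minimal few-shot prompt builder. This is a hypothetical sketch, not tied to any particular API; the example questions are illustrative:

```python
# Few-shot prompting: prepend worked examples so the model mimics the
# pattern and answer format when it sees the final, unanswered query.

def few_shot_prompt(examples, query):
    """Build a prompt from (question, answer) pairs plus a new question."""
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\nQ: {query}\nA:"

prompt = few_shot_prompt(
    [("Capital of France?", "Paris"), ("Capital of Japan?", "Tokyo")],
    "Capital of Italy?",
)
print(prompt)
```

The trailing `A:` is the cue that tells a completion-style model to continue in the demonstrated format.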
Build an LLM-powered application using LangChain.pdf – StephenAmell4
LangChain is an advanced framework that allows developers to create language model-powered applications. It provides a set of tools, components, and interfaces that make building LLM-based applications easier. With LangChain, managing interactions with language models, chaining together various components, and integrating resources like APIs and databases is a breeze. The platform includes a set of APIs that can be integrated into applications, allowing developers to add language processing capabilities without having to start from scratch.
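To make the chaining idea concrete, here is a minimal conceptual sketch of a prompt-template chain in plain Python. It deliberately does not use the real LangChain API; `fake_llm` is a stand-in for an actual model call:

```python
# Conceptual sketch of the "chain" pattern: a prompt template is filled
# in with inputs, the result is sent to a language model, and the output
# can feed the next step in a pipeline.

def make_chain(template, llm):
    """Return a callable that formats the template and invokes the LLM."""
    def run(**kwargs):
        return llm(template.format(**kwargs))
    return run

def fake_llm(prompt):
    # Placeholder for a real model/API call.
    return f"[model answer to: {prompt}]"

summarize = make_chain("Summarize in one line: {text}", fake_llm)
print(summarize(text="LLMs power many new applications."))
```

The real framework adds memory, tool use, and integrations on top of this same compose-and-call structure.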
ChatGPT is a natural language processing model created by OpenAI that can generate human-like responses to text-based conversations. It uses deep learning and was pre-trained on vast amounts of text to understand language. Performance is evaluated using metrics like perplexity, accuracy, fluency and human evaluation. While very powerful, ChatGPT and other large language models raise legal and ethical concerns regarding copyright, privacy, bias and how training data was obtained. The future potential includes using conversational AI to streamline operations in areas like data entry, scheduling and customer service.
[DSC DACH 23] ChatGPT and Beyond: How generative AI is Changing the way peopl... – DataScienceConferenc1
In recent years, generative AI has made significant advancements in language understanding and generation, leading to the development of chatbots like ChatGPT. These models have the potential to change the way people interact with technology. In this session, we will explore the advancements in generative AI. I will show how these models have evolved, their strengths and limitations, and their potential for improving various applications. Additionally, I will show some of the ethical considerations that arise from the use of these models and their impact on society.
Unlocking the Power of Generative AI: An Executive's Guide.pdf – PremNaraindas1
Generative AI is here, and it can revolutionize your business. With its powerful capabilities, this technology can help companies create more efficient processes, unlock new insights from data, and drive innovation. But how do you make the most of these opportunities?
This guide will provide you with the information and resources needed to understand the ins and outs of Generative AI, so you can make informed decisions and capitalize on the potential. It covers important topics such as strategies for leveraging large language models, optimizing MLOps processes, and best practices for building with Generative AI.
The victory of AlphaGo against Lee Sedol in March 2016 is a new milestone in Artificial Intelligence history, next one might be the mass adoption of self-driving cars or the generalized use of chatbots. These successes both raise expectations and increase the fear of an Artificial Intelligence getting out of control, which is now addressed in books and scientific papers.
Even though some other human capabilities, like natural language, are still well beyond machines' reach, the fear of this "control problem" is sometimes intensified by the assumption that human intelligence will at some point be surpassed by machines in all domains, even though it is controversial whether elementary human abilities such as subjective experience, consciousness, and moral values are even theoretically transferable to computers.
https://tech.rakuten.co.jp/
LinuxCon Europe 2013 covered many topics related to Linux and open source software development. Some key discussions included collaboration models being adopted in new domains like healthcare; Linux running major systems like supercomputers and smartphones; and embracing open source to drive innovation. Speakers also discussed tracing and debugging tools, NetworkManager architectural changes, system firmware, ACPI-based device hotplug, memory snapshots, next generation cloud platforms, Samsung's open source contributions, the Linux kernel, systemd and containers. Overall the conference highlighted Linux and open source's growing role across industries and importance for continued technological progress.
Open source software promotes quality and reliability through independent peer review and rapid code evolution. It has become pervasive as computers have become more of a commodity. While patents protect new inventions, open source relies on collaborative development by groups like Linux organizations. Main players include non-profit enthusiasts, for-profit support companies, and new open source capitalists at firms like IBM and HP. Open source faces challenges from lack of business applications and home use, but projects aim to solve such issues while new users may be more accepting. The growth of open source is inevitable but managerial ignorance benefits monopolists; overcoming this can accelerate appropriate industry acceptance.
The first great artificial intelligence software warDESMOND YUEN
Companies are vying for dominance in the AI software space. This is not a sprint but a marathon. One needs to do research before making a selection of AI software.
SITB160: Rise of the Machines Here Come the Bots! – Ivanti
This document discusses the rise of artificial intelligence and intelligent bots/assistants in the workplace. It begins with an overview of how AI technologies are starting to transform the IT and end-user experience through intelligent agents, virtual assistants, bots, conversational interfaces, and automation. It then provides more details on concepts like chatbots, virtual assistants, virtual support agents, and different types of AI including narrow AI, general AI, and super AI. The document discusses the history of AI and highlights some of the key innovators and innovations. It also outlines how AI is already present in technologies people use every day like smartphones. Finally, it discusses how AI will impact businesses in two waves: bots and automation first, followed by decision support.
[Keynote IBM] Frederic Lavigne - From old to new IBM, leading to cognitive era – Codemotion
Frederic Lavigne of IBM discussed how cognitive computing will become more prevalent in the coming years. He outlined several technologies that enable cognitive capabilities, such as natural language processing, machine learning, and analytics. Lavigne also discussed how cognitive systems can be used to enhance customer experiences by understanding customer needs and delivering personalized, timely interactions.
The document discusses major developments in internet and technology over the past 25 years. It covers the evolution from the early PC internet age to the mobile internet age to current trends. Some key points:
- The internet has progressed from basic infrastructure and browsers in the 1990s to today's app economies, cloud computing, and mobile dominance.
- Emerging areas discussed include internet of things, artificial intelligence, robotics, virtual reality, smart cars, and wearables. However, many face challenges around standards, costs, and defining killer applications.
- Current IT focuses on automation, machine learning, data analytics, and security across cloud environments and enterprise applications. The document questions whether recent tech bubbles may be due to hype.
Past, Present and Future of Generative AI – abhishek36461
Generative AI creates new content (images, text, music) based on learned patterns.
It learns from vast examples and can produce original, unseen works.
Capable of blending learned elements to generate unique outputs.
Can produce customized creations based on specific prompts.
Improves and refines its output over time with more data and feedback.
A recap of interesting points and quotes from the May 2024 WSO2CON opensource application development conference. Focuses primarily on keynotes and panel sessions.
The document discusses the concept of "University 2.0" which refers to making universities more like continuously evolving beta software projects. It notes that universities should adopt an incremental improvement model with ongoing user feedback. Successful examples like Gmail, Dopplr and Joost are cited. The challenges mentioned include integration problems, scarce feedback and testing. It stresses the importance of early and frequent releases, agile development practices, and good communication. Overall it presents University 2.0 as an open model that focuses on continual evolution based on user input.
Artificial intelligence (AI) is a multidisciplinary field of science and engineering whose goal is to create intelligent machines.
We believe that AI will be a force multiplier on technological progress in our increasingly digital, data-driven world. This is because everything around us today, ranging from culture to consumer products, is a product of intelligence.
The State of AI Report is now in its sixth year. Consider this report as a compilation of the most interesting things we’ve seen with a goal of triggering an informed conversation about the state of AI and its implication for the future.
We consider the following key dimensions in our report:
Research: Technology breakthroughs and their capabilities.
Industry: Areas of commercial application for AI and its business impact.
Politics: Regulation of AI, its economic implications and the evolving geopolitics of AI.
Safety: Identifying and mitigating catastrophic risks that highly-capable future AI systems could pose to us.
Predictions: What we believe will happen in the next 12 months and a 2022 performance review to keep us honest.
Produced by Nathan Benaich and Air Street Capital team
This is only a small excerpt from my seminar paper on core competencies in Web 2.0. The excerpt serves solely to illustrate collective intelligence. The complete presentation can be found at http://kernkompetenzen.blogspot.com/
The document provides an introduction to generative AI and discusses its capabilities. It outlines the agenda which includes an introduction to AI, the current state of AI, types of AI, popular AI tools, an overview of the Azure OpenAI service, responsible AI, uses and capabilities of generative AI, and a demo. It defines generative AI as AI that can generate new content like text, images, audio or video based on a given input or prompt. The document discusses how generative AI works by learning patterns from large datasets to produce new content that fits within those patterns.
As presented on November 28, 2023 at the Christa McAuliffe Technology Conference. Please email me with any comments, questions, or suggestions. Maureen Yoder myoder@lesley.edu
[DSC Europe 23] Igor Ilic - Redefining User Experience with Large Language Mo... – DataScienceConferenc1
The document discusses how large language models will impact user interfaces and experiences. It begins by providing context on Microsoft's Copilot tool. It then describes current types of user interfaces as simple task-based apps, search-and-select interfaces, and complex system interfaces. New types of interactions are proposed, such as using chat for search-and-select interfaces, voice for simple apps, and adaptive UIs and vision for complex systems. Adaptive UIs could rebuild tools like Photoshop using agents to lower expertise needs. Vision-based interactions may allow pasting screenshots instead of retyping information. Overall, the large language models will make software more intuitive and expertise-reducing through new adaptive and multi-modal interfaces.
The Internet-Of-Things (IoT) is no longer a hype, but a reality. Connecting ANY devices, ANY place, ANY thing will transform the way we live. However from an engineers point of view how can he gain benefit from this? Here are some of the key technology trends that will play an important role.
A Review on the Determinants of a suitable Chatbot Framework- Empirical evide... – IRJET Journal
This document reviews and compares two popular chatbot frameworks: RASA and IBM Watson Assistant. It analyzes 30 publications using a systematic review approach to examine the development methodology and areas for improvement of each framework. An extensive comparative analysis is conducted using evaluation models to analyze the performance of each chatbot. The study concludes by discussing why one framework may be preferred over the other and future aspects of each based on data collected from 50 respondents at two companies that provide chatbot services.
This document discusses the economics of open source software. It explains that open source software is not just about sharing or giving things away for free, but is actually closely tied to capitalism. Open source software development spreads costs and risks across many contributors. Companies that adopt open source can benefit from lower costs and more customized software that is improved through peer review. The open source model is economically viable and may be applicable to other fields beyond just software.
Similar to The Rise of the LLMs - How I Learned to Stop Worrying & Love the GPT! (20)
Transcript: Details of description part II: Describing images in practice - T... – BookNet Canada
This presentation explores the practical application of image description techniques. Familiar guidelines will be demonstrated in practice, and descriptions will be developed “live”! If you have learned a lot about the theory of image description techniques but want to feel more confident putting them into practice, this is the presentation for you. There will be useful, actionable information for everyone, whether you are working with authors, colleagues, alone, or leveraging AI as a collaborator.
Link to presentation recording and slides: https://bnctechforum.ca/sessions/details-of-description-part-ii-describing-images-in-practice/
Presented by BookNet Canada on June 25, 2024, with support from the Department of Canadian Heritage.
this resume for sadika shaikh bca student – SadikaShaikh7
I am a dedicated BCA student with a strong foundation in web technologies, including PHP and MySQL. I have hands-on experience in Java and Python, and a solid understanding of data structures. My technical skills are complemented by my ability to learn quickly and adapt to new challenges in the ever-evolving field of computer science.
Sustainability requires ingenuity and stewardship. Did you know Pigging Solutions pigging systems help you achieve your sustainable manufacturing goals AND provide rapid return on investment.
How? Our systems recover over 99% of product in transfer piping. Recovering trapped product from transfer lines that would otherwise become flush-waste, means you can increase batch yields and eliminate flush waste. From raw materials to finished product, if you can pump it, we can pig it.
GDG Cloud Southlake #34: Neatsun Ziv: Automating Appsec – James Anderson
The lecture titled "Automating AppSec" delves into the critical challenges associated with manual application security (AppSec) processes and outlines strategic approaches for incorporating automation to enhance efficiency, accuracy, and scalability. The lecture is structured to highlight the inherent difficulties in traditional AppSec practices, emphasizing the labor-intensive triage of issues, the complexity of identifying responsible owners for security flaws, and the challenges of implementing security checks within CI/CD pipelines. Furthermore, it provides actionable insights on automating these processes to not only mitigate these pains but also to enable a more proactive and scalable security posture within development cycles.
The Pains of Manual AppSec:
This section will explore the time-consuming and error-prone nature of manually triaging security issues, including the difficulty of prioritizing vulnerabilities based on their actual risk to the organization. It will also discuss the challenges in determining ownership for remediation tasks, a process often complicated by cross-functional teams and microservices architectures. Additionally, the inefficiencies of manual checks within CI/CD gates will be examined, highlighting how they can delay deployments and introduce security risks.
Automating CI/CD Gates:
Here, the focus shifts to the automation of security within the CI/CD pipelines. The lecture will cover methods to seamlessly integrate security tools that automatically scan for vulnerabilities as part of the build process, thereby ensuring that security is a core component of the development lifecycle. Strategies for configuring automated gates that can block or flag builds based on the severity of detected issues will be discussed, ensuring that only secure code progresses through the pipeline.
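As a sketch of such an automated gate, the logic reduces to: parse the scanner's findings and fail the build when anything at or above a severity threshold is present. The report format and severity labels below are illustrative, not any particular scanner's:

```python
import json

# Map severity labels to a rank so findings can be compared to a threshold.
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(report_json, block_at="high"):
    """Return (exit_code, blocking_findings) for a scanner report.

    exit_code 1 (build blocked) if any finding is at or above `block_at`.
    """
    findings = json.loads(report_json)
    threshold = SEVERITY_RANK[block_at]
    blocking = [f for f in findings
                if SEVERITY_RANK[f["severity"]] >= threshold]
    return (1 if blocking else 0), blocking

report = '[{"id": "CVE-1", "severity": "critical"}, {"id": "X", "severity": "low"}]'
exit_code, blocking = gate(report)
print(exit_code, [f["id"] for f in blocking])  # 1 ['CVE-1']
```

In a real pipeline, the nonzero exit code is what causes the CI stage to fail and stop the deployment.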
Triaging Issues with Automation:
This segment addresses how automation can be leveraged to intelligently triage and prioritize security issues. It will cover technologies and methodologies for automatically assessing the context and potential impact of vulnerabilities, facilitating quicker and more accurate decision-making. The use of automated alerting and reporting mechanisms to ensure the right stakeholders are informed in a timely manner will also be discussed.
Identifying Ownership Automatically:
Automating the process of identifying who owns the responsibility for fixing specific security issues is critical for efficient remediation. This part of the lecture will explore tools and practices for mapping vulnerabilities to code owners, leveraging version control and project management tools.
Three Tips to Scale the Shift Left Program:
Finally, the lecture will offer three practical tips for organizations looking to scale their Shift Left security programs. These will include recommendations on fostering a security culture within development teams and employing DevSecOps principles to integrate security throughout the development lifecycle.
What's Next Web Development Trends to Watch.pdf – SeasiaInfotech2
Explore the latest advancements and upcoming innovations in web development with our guide to the trends shaping the future of digital experiences. Read our article today for more information.
How RPA Help in the Transportation and Logistics Industry.pptx – SynapseIndia
Revolutionize your transportation processes with our cutting-edge RPA software. Automate repetitive tasks, reduce costs, and enhance efficiency in the logistics sector with our advanced solutions.
MYIR Product Brochure - A Global Provider of Embedded SOMs & Solutions – Linda Zhang
This brochure gives introduction of MYIR Electronics company and MYIR's products and services.
MYIR Electronics Limited (MYIR for short), established in 2011, is a global provider of embedded System-On-Modules (SOMs) and comprehensive solutions based on various architectures such as ARM, FPGA, RISC-V, and AI. We cater to customers' needs for large-scale production, offering customized design, industry-specific application solutions, and one-stop OEM services.
MYIR, recognized as a national high-tech enterprise, is also listed among the "Specialized and Special New" enterprises in Shenzhen, China. Our core belief is that "Our success stems from our customers' success," and we embrace the philosophy of "Make Your Idea Real, then My Idea Realizing!"
Coordinate Systems in FME 101 - Webinar SlidesSafe Software
If you’ve ever had to analyze a map or GPS data, chances are you’ve encountered and even worked with coordinate systems. As historical data continually updates through GPS, understanding coordinate systems is increasingly crucial. However, not everyone knows why they exist or how to effectively use them for data-driven insights.
During this webinar, you’ll learn exactly what coordinate systems are and how you can use FME to maintain and transform your data’s coordinate systems in an easy-to-digest way, accurately representing the geographical space that it exists within. During this webinar, you will have the chance to:
- Enhance Your Understanding: Gain a clear overview of what coordinate systems are and their value
- Learn Practical Applications: Why we need datums and projections, plus units between coordinate systems
- Maximize with FME: Understand how FME handles coordinate systems, including a brief summary of the 3 main reprojectors
- Custom Coordinate Systems: Learn how to work with FME and coordinate systems beyond what is natively supported
- Look Ahead: Gain insights into where FME is headed with coordinate systems in the future
Don’t miss the opportunity to improve the value you receive from your coordinate system data, ultimately allowing you to streamline your data analysis and maximize your time. See you there!
The DealBook is our annual overview of the Ukrainian tech investment industry. This edition comprehensively covers the full year 2023 and the first deals of 2024.
AI_dev Europe 2024 - From OpenAI to Opensource AIRaphaël Semeteys
Navigating Between Commercial Ownership and Collaborative Openness
This presentation explores the evolution of generative AI, highlighting the trajectories of various models such as GPT-4, and examining the dynamics between commercial interests and the ethics of open collaboration. We offer an in-depth analysis of the levels of openness of different language models, assessing various components and aspects, and exploring how the (de)centralization of computing power and technology could shape the future of AI research and development. Additionally, we explore concrete examples like LLaMA and its descendants, as well as other open and collaborative projects, which illustrate the diversity and creativity in the field, while navigating the complex waters of intellectual property and licensing.
Interaction Latency: Square's User-Centric Mobile Performance Metric – ScyllaDB
Mobile performance metrics often take inspiration from the backend world and measure resource usage (CPU usage, memory usage, etc) and workload durations (how long a piece of code takes to run).
However, mobile apps are used by humans and the app performance directly impacts their experience, so we should primarily track user-centric mobile performance metrics. Following the lead of tech giants, the mobile industry at large is now adopting the tracking of app launch time and smoothness (jank during motion).
At Square, our customers spend most of their time in the app long after it's launched, and they don't scroll much, so app launch time and smoothness aren't critical metrics. What should we track instead?
This talk will introduce you to Interaction Latency, a user-centric mobile performance metric inspired by the Web Vital metric "Interaction to Next Paint" (web.dev/inp). We'll go over why apps need to track this, how to properly implement its tracking (it's tricky!), how to aggregate this metric, and what thresholds you should target.
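Whatever the chosen metric, field aggregation typically reports tail percentiles rather than averages, since averages hide the slow interactions users actually feel. A small nearest-rank percentile sketch (the sample latencies and thresholds are illustrative, not Square's):

```python
def percentile(samples, pct):
    """Nearest-rank percentile of a list of numeric samples."""
    ordered = sorted(samples)
    rank = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[rank]

# Interaction latencies in milliseconds collected from the field.
latencies_ms = [40, 55, 60, 80, 95, 120, 130, 150, 400, 900]

p50 = percentile(latencies_ms, 50)  # typical experience
p90 = percentile(latencies_ms, 90)  # tail experience
print(p50, p90)
```

Here the p90 is several times the p50, which is exactly the kind of tail behavior a mean would mask.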
Video traffic on the Internet is constantly growing; networked multimedia applications consume a predominant share of the available Internet bandwidth. A major technical breakthrough and enabler in multimedia systems research and of industrial networked multimedia services certainly was the HTTP Adaptive Streaming (HAS) technique. This resulted in the standardization of MPEG Dynamic Adaptive Streaming over HTTP (MPEG-DASH) which, together with HTTP Live Streaming (HLS), is widely used for multimedia delivery in today’s networks. Existing challenges in multimedia systems research deal with the trade-off between (i) the ever-increasing content complexity, (ii) various requirements with respect to time (most importantly, latency), and (iii) quality of experience (QoE). Optimizing towards one aspect usually negatively impacts at least one of the other two aspects if not both. This situation sets the stage for our research work in the ATHENA Christian Doppler (CD) Laboratory (Adaptive Streaming over HTTP and Emerging Networked Multimedia Services; https://athena.itec.aau.at/), jointly funded by public sources and industry. In this talk, we will present selected novel approaches and research results of the first year of the ATHENA CD Lab’s operation. We will highlight HAS-related research on (i) multimedia content provisioning (machine learning for video encoding); (ii) multimedia content delivery (support of edge processing and virtualized network functions for video networking); (iii) multimedia content consumption and end-to-end aspects (player-triggered segment retransmissions to improve video playout quality); and (iv) novel QoE investigations (adaptive point cloud streaming). We will also put the work into the context of international multimedia systems research.
The Rise of the LLMs - How I Learned to Stop Worrying & Love the GPT!
1. The rise of LLMs
or: How I Learned to Stop Worrying and Love the GPT
(ChatGPT did not write this presentation. Or did it?)
Feb 2023 – Laurent Lathieyre
*draft 0.2
7. coming in the near future: multi-modal models (text, image, video, etc.); continuous learning (i.e. not stuck in time)
8. coming in a not so distant future: Artificial General Intelligence (AGI)
9. There will be only a few LLM providers
aka "resistance is futile; don't try to train your own LLM"
OpenAI; Google; Microsoft; Facebook; Amazon; IBM; Alibaba; Tencent; Baidu; Intel; Nvidia
10. there is room in the value chain for category/vertical model fine-tuners on top of LLMs
11. importance of 1st- and 2nd-party data
Access to data will be key to success in the field of AI, and businesses will need to consider data privacy and security as well as how language models can be used to improve existing products and services, create new products and services, automate tasks, and provide personalized experiences for customers.
13. the way forward: embrace the unknown, harvest chaos to create opportunities and drive innovation
(to be continued)
Dall-e: draw a team of pioneers, some are holding a spear, they jump into a portal to another dimension, digital art