This document provides presentation tips and advice from various experts. It begins by noting that 99% of presentations "suck" according to Guy Kawasaki. It then discusses the importance of preparation, design, and delivery in creating an effective presentation. Specific tips include starting with the goal and knowing your audience, simplifying content to the essential, getting focused alone time without distractions, using analog tools like post-its for brainstorming, and exercising to boost brain power. The document also warns against common mistakes in presentations like including all text on slides and excessive bullet points. It concludes by providing design principles and best practices like limiting text, using visuals and quotes, applying color and alignment properly, and making data memorable.
Building a Neural Machine Translation System From Scratch – Natasha Latysheva
Human languages are complex, diverse and riddled with exceptions – translating between different languages is therefore a highly challenging technical problem. Deep learning approaches have proved powerful in modelling the intricacies of language, and have surpassed all statistics-based methods for automated translation. This session begins with an introduction to the problem of machine translation and discusses the two dominant neural architectures for solving it – recurrent neural networks and transformers. A practical overview of the workflow involved in training, optimising and adapting a competitive neural machine translation system is provided. Attendees will gain an understanding of the internal workings and capabilities of state-of-the-art systems for automatic translation, as well as an appreciation of the key challenges and open problems in the field.
Beyond the Symbols: A 30-minute Overview of NLP – MENGSAYLOEM1
This presentation delves into the world of Natural Language Processing (NLP), exploring its goal to make human language understandable to machines. The complexities of language, such as ambiguity and complex structures, are highlighted as major challenges. The talk underscores the evolution of NLP through deep learning methodologies, leading to a new era defined by large-scale language models. However, obstacles like low-resource languages and ethical issues including bias and hallucination are acknowledged as enduring challenges in the field. Overall, the presentation provides a condensed, yet comprehensive view of NLP's accomplishments and ongoing hurdles.
Thomas Wolf, "An Introduction to Transfer Learning and Hugging Face" – Fwdays
In this talk I'll start by introducing the recent breakthroughs in NLP that resulted from the combination of Transfer Learning schemes and Transformer architectures. The second part of the talk will be dedicated to an introduction of the open-source tools released by Hugging Face, in particular our transformers, tokenizers, and NLP libraries as well as our distilled and pruned models.
Yurii Pashchenko: Zero-shot learning capabilities of CLIP model from OpenAI – Lviv Startup Club
AI & BigData Online Day 2021
Website - https://aiconf.com.ua/
Youtube - https://www.youtube.com/startuplviv
FB - https://www.facebook.com/aiconf
Fine-tune and deploy Hugging Face NLP models – OVHcloud
This webinar discusses fine-tuning and deploying Hugging Face NLP models. The agenda includes an overview of Hugging Face and NLP, a demonstration of fine-tuning a model, a demonstration of deploying a model in production, and a summary. Hugging Face is presented as the most popular open source NLP library with over 4,000 models. Fine-tuning models allows them to be adapted for specific tasks and domains and is more data efficient than training from scratch. OVHcloud is highlighted as providing tools for full AI workflows from storage and processing to training and deployment.
OpenAI is an artificial intelligence research laboratory consisting of both non-profit and for-profit entities. It was founded in 2015 with the goal of developing AI that is beneficial to humanity. OpenAI conducts research in machine learning, computer vision, natural language processing, and robotics. Notable projects include ChatGPT, an AI assistant capable of answering questions and generating text. OpenAI makes many of its research findings and models openly available via its API and GitHub.
Docker: a tool to simplify software development and deployment – sdenier
This presentation is aimed both at beginners and at Docker users looking to discover new aspects of the tool.
- Docker's characteristics and its ecosystem
- use cases: building automated development environments, container deployment and orchestration, Docker on Windows
Talk given as part of Festival Transfo 2019: http://www.festival-transfo.fr/evenement/145/14-docker-un-outil-pour-faciliter-le-developpement-et-le-deploiement-informatique.htm
Join the "Matinales techniques de Sogilis" meetup: https://www.meetup.com/Les-matinales-techniques-de-Sogilis
This document discusses video editing and compares two video editing software programs, Adobe Premiere Pro CS6 and Edius 6. It outlines the types of video editing, including linear, non-linear, offline and online editing. It also discusses various video editing tools and identifies some problems with Adobe Premiere Pro CS6, such as requiring powerful hardware and being expensive. It proposes solutions to these problems and concludes by discussing opportunities for future enhancements.
This document discusses different approaches for building chatbots, including retrieval-based and generative models. It describes recurrent neural networks like LSTMs and GRUs that are well-suited for natural language processing tasks. Word embedding techniques like Word2Vec are explained for representing words as vectors. Finally, sequence-to-sequence models using encoder-decoder architectures are presented as a promising approach for chatbots by using a context vector to generate responses.
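The word-embedding idea summarized above rests on the distributional hypothesis: words that appear in similar contexts get similar vectors. The toy sketch below (not from the deck; Word2Vec learns dense vectors with a neural network, whereas this uses raw co-occurrence counts purely for intuition) builds count vectors and compares them with cosine similarity:

```python
from math import sqrt

def cooccurrence_vectors(sentences, window=2):
    """Build simple co-occurrence count vectors for each word in the corpus."""
    vocab = sorted({w for s in sentences for w in s})
    index = {w: i for i, w in enumerate(vocab)}
    vectors = {w: [0] * len(vocab) for w in vocab}
    for s in sentences:
        for i, w in enumerate(s):
            for j in range(max(0, i - window), min(len(s), i + window + 1)):
                if i != j:
                    vectors[w][index[s[j]]] += 1
    return vectors

def cosine(u, v):
    """Cosine similarity between two vectors (0 for a zero vector)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu, nv = sqrt(sum(a * a for a in u)), sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

sentences = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "a cat and a dog played".split(),
]
vecs = cooccurrence_vectors(sentences)
# "cat" and "dog" occur in similar contexts, so their vectors point in similar directions
print(cosine(vecs["cat"], vecs["dog"]))
```

Real embeddings replace these sparse count vectors with dense, learned ones, but the similarity computation downstream is the same.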
A non-technical overview of Large Language Models, exploring their potential, limitations, and customization for specific challenges. While this deck is tailored for an audience from the financial industry in mind, its content remains broadly applicable.
(Note: Discover a slightly updated version of this deck at slideshare.net/LoicMerckel/introduction-to-llms.)
Stemming And Lemmatization Tutorial | Natural Language Processing (NLP) With ... – Edureka!
( **Natural Language Processing Using Python: - https://www.edureka.co/python-natural... ** )
This PPT will provide you with detailed and comprehensive knowledge of two important aspects of Natural Language Processing, i.e. Stemming and Lemmatization. It will also cover the differences between the two, with a demo of each. The following topics are covered in this PPT:
Introduction to Big Data
What is Text Mining?
What is NLP?
Introduction to Stemming
Introduction to Lemmatization
Applications of Stemming & Lemmatization
Difference between stemming & Lemmatization
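The difference listed above can be made concrete in a few lines. This is an illustrative sketch only (real pipelines use tools such as NLTK's PorterStemmer and WordNetLemmatizer; the suffix rules and toy lexicon here are invented for the example):

```python
# Illustrative only: real systems use NLTK's PorterStemmer and WordNetLemmatizer.
SUFFIXES = ["ing", "ed", "es", "s"]  # crude stemming rules
LEMMA_DICT = {"mice": "mouse", "better": "good", "ran": "run"}  # toy lexicon

def stem(word):
    """Stemming: chop off a known suffix, with no guarantee the result is a real word."""
    for suf in SUFFIXES:
        if word.endswith(suf) and len(word) - len(suf) >= 3:
            return word[: -len(suf)]
    return word

def lemmatize(word):
    """Lemmatization: look the word up to find its dictionary form."""
    return LEMMA_DICT.get(word, stem(word))

print(stem("running"))    # 'runn' -- stems need not be valid words
print(lemmatize("mice"))  # 'mouse' -- lemmas always are
```

The contrast is the whole point: stemming is a fast, rule-based chop; lemmatization consults vocabulary and morphology to return a genuine dictionary form.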
Follow us to never miss an update in the future.
Instagram: https://www.instagram.com/edureka_learning/
Facebook: https://www.facebook.com/edurekaIN/
Twitter: https://twitter.com/edurekain
LinkedIn: https://www.linkedin.com/company/edureka
This document provides an overview of Python libraries for data science. It discusses NumPy for numerical processing and n-dimensional arrays, Matplotlib for visualization and 2D plotting, and Pandas for data manipulation and analysis. NumPy allows efficient storage and manipulation of multi-dimensional arrays. Matplotlib is used to create 2D plots from array data. Pandas provides tools for working with tabular data and performing statistical analysis. The document also gives basic installation instructions for these core Python data science libraries.
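The three libraries mentioned above fit together in a few lines; a minimal sketch (values are invented for illustration, and the Matplotlib call is shown only as a comment since it opens a window):

```python
import numpy as np
import pandas as pd

# NumPy: efficient n-dimensional arrays and vectorised math
a = np.arange(6).reshape(2, 3)  # [[0 1 2], [3 4 5]]
print(a.sum(axis=0))            # column sums

# Pandas: labelled tabular data built on top of NumPy arrays
df = pd.DataFrame({"city": ["Oslo", "Kyiv", "Lviv"], "temp": [4.0, 7.5, 6.5]})
print(df["temp"].mean())

# Matplotlib would render the same data as a 2D plot, e.g.:
# import matplotlib.pyplot as plt; plt.plot(df["temp"]); plt.show()
```

The layering is the design point: Pandas columns are NumPy arrays underneath, and Matplotlib plots accept both directly.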
Prompt Engineering - an Art, a Science, or your next Job Title? – Maxim Salnikov
It's quite ironic that to interact with the most advanced AI in our history – Large Language Models such as ChatGPT – we must use human language, not a programming language. But how do you get the most out of this dialogue, i.e. how do you create robust and efficient prompts so the AI returns exactly what your solution needs on the first try? After my session, you can add the (at least Junior) Prompt Engineer skill to your CV: I will introduce Prompt Engineering as an emerging discipline with its own methodologies, tools, and best practices. Expect lots of examples that will help you write ideal prompts for all occasions.
This session is based on my research and experiments in Prompt Engineering and is 100% relevant for cloud developers who are investigating adding LLM-powered features to their solutions. It's a guide to building proper prompts so AI delivers the desired results quickly and cost-efficiently.
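One recurring prompt-engineering practice is separating a reusable template (role, task, constraints, output format) from the per-request input. A hypothetical sketch, not taken from the talk – every field name and wording below is invented for illustration:

```python
# Hypothetical prompt template: all field names and wording are illustrative.
PROMPT_TEMPLATE = """You are a {role}.
Task: {task}
Constraints:
- Answer in {language}.
- Respond with valid JSON using the keys: {keys}.
Input:
{user_input}"""

prompt = PROMPT_TEMPLATE.format(
    role="helpful cloud-billing assistant",
    task="classify the support ticket below by urgency",
    language="English",
    keys='"urgency", "reason"',
    user_input="Our production cluster is down and customers are affected!",
)
print(prompt)
```

Pinning the output format in the template (here, JSON with fixed keys) is what makes the model's response machine-parseable, which matters as soon as the prompt sits inside an application rather than a chat window.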
This document provides an introduction and overview of Apache UIMA (Unstructured Information Management Architecture).
Apache UIMA is an open source framework for analyzing unstructured information like text, audio, and video. It allows defining type systems and building analysis pipelines using components called annotators that can extract metadata from unstructured data.
The document outlines some key aspects of Apache UIMA including its goals of supporting a community around analyzing unstructured content, how it can bridge different domains, and provides an example scenario of using it to extract metadata from articles about movies.
This document discusses structural design patterns from the Gang of Four (GoF) patterns book. It introduces proxies, decorators, adapters, façades, and composites patterns. Proxies provide a placeholder for another object to control access. Decorators dynamically add/remove responsibilities to objects. Adapters allow incompatible interfaces to work together. Façades provide a simplified interface to a subsystem. Composites represent part-whole hierarchies to access all parts uniformly. Bridge and flyweight patterns were not covered due to dependencies on other patterns. The document emphasizes introducing extra levels of indirection to solve problems and favoring composition over inheritance.
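Of the patterns listed above, the Adapter is the quickest to show in code. A minimal sketch (the socket/device metaphor is illustrative, not from the document): client code expects one interface, and the adapter wraps an incompatible class to provide it.

```python
class EuropeanSocket:
    """The incompatible interface: supplies 230 V."""
    def voltage(self) -> int:
        return 230

class USDevice:
    """Client code that expects a supply of at most 120 V."""
    def power_up(self, supply) -> str:
        return "ok" if supply.voltage() <= 120 else "fried!"

class SocketAdapter:
    """Adapter: wraps the incompatible interface and converts on the fly."""
    def __init__(self, socket: EuropeanSocket):
        self._socket = socket

    def voltage(self) -> int:
        return self._socket.voltage() // 2  # step the voltage down

device = USDevice()
print(device.power_up(EuropeanSocket()))                 # 'fried!'
print(device.power_up(SocketAdapter(EuropeanSocket())))  # 'ok'
```

Note the extra level of indirection and the use of composition (the adapter holds the socket) rather than inheritance – exactly the two themes the document emphasizes.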
This document provides an overview of continuous integration (CI), continuous delivery (CD), and continuous deployment. CI involves regularly integrating code changes into a central repository and running automated tests. CD builds on CI by automatically preparing code changes for release to testing environments. Continuous deployment further automates the release of changes to production without human intervention if tests pass. The benefits of CI/CD include higher quality, lower costs, faster delivery, and happier teams. Popular CI tools include Jenkins, Bamboo, CircleCI, and Travis. Key practices involve automating all stages, keeping environments consistent, and making the pipeline fast. Challenges include requiring organizational changes and technical knowledge to automate the full process.
GPT-2: Language Models are Unsupervised Multitask Learners – Young Seok Kim
This document summarizes a technical paper about GPT-2, an unsupervised language model created by OpenAI. GPT-2 is a transformer-based model trained on a large corpus of internet text using byte-pair encoding. The paper describes experiments showing GPT-2 can perform various NLP tasks like summarization, translation, and question answering with limited or no supervision, though performance is still below supervised models. It concludes that unsupervised task learning is a promising area for further research.
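The byte-pair encoding mentioned above is a simple iterative algorithm: repeatedly find the most frequent adjacent symbol pair and merge it into one symbol. A toy sketch of the core loop (GPT-2's actual tokenizer operates on bytes with a learned merge table; this illustrates only the counting-and-merging idea):

```python
from collections import Counter

def most_frequent_pair(tokens):
    """Count adjacent symbol pairs across the token sequence."""
    pairs = Counter(zip(tokens, tokens[1:]))
    return pairs.most_common(1)[0][0]

def merge_pair(tokens, pair):
    """Replace every occurrence of `pair` with a single merged symbol."""
    merged, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            merged.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged

tokens = list("low lower lowest".replace(" ", "_"))  # character-level start
for _ in range(3):  # three merge steps
    tokens = merge_pair(tokens, most_frequent_pair(tokens))
print(tokens)  # frequent substrings like 'low' become single tokens
```

After a few merges, frequent character sequences collapse into single vocabulary items, which is how BPE covers any input string while keeping common words as whole tokens.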
Introduction to natural language processing (NLP) – Alia Hamwi
The document provides an introduction to natural language processing (NLP). It defines NLP as a field of artificial intelligence devoted to creating computers that can use natural language as input and output. Some key NLP applications mentioned include data analysis of user-generated content, conversational agents, translation, classification, information retrieval, and summarization. The document also discusses various linguistic levels of analysis like phonology, morphology, syntax, and semantics that involve ambiguity challenges. Common NLP tasks like part-of-speech tagging, named entity recognition, parsing, and information extraction are described. Finally, the document outlines the typical steps in an NLP pipeline including data collection, text cleaning, preprocessing, feature engineering, modeling and evaluation.
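The pipeline steps listed at the end of that summary (cleaning, preprocessing, feature engineering) can be sketched in a few lines. A minimal illustration with an invented example sentence and stopword list, not taken from the document:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "is", "of", "and", "to"}  # toy stopword list

def clean(text):
    """Text cleaning: lowercase and strip everything except letters and spaces."""
    return re.sub(r"[^a-z ]", "", text.lower())

def preprocess(text):
    """Tokenize and drop stopwords."""
    return [t for t in clean(text).split() if t not in STOPWORDS]

def bag_of_words(tokens):
    """Feature engineering: token counts, ready to feed a classifier."""
    return Counter(tokens)

doc = "The cat sat on the mat, and the cat purred!"
features = bag_of_words(preprocess(doc))
print(features)
```

The modeling and evaluation stages then consume these feature vectors; the point of the sketch is that each pipeline stage is a small, composable function.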
Handle complex, contextual, back-and-forth conversations with interactive machine learning instead of hand-crafting rules. Understand your customers' intent and extract entities with state-of-the-art NLU. Build a bot that goes beyond answering simple questions using Rasa, a framework of open source machine learning tools.
Natural Language Processing (NLP) & Text Mining Tutorial Using NLTK | NLP Tra... – Edureka!
** NLP Using Python: - https://www.edureka.co/python-natural-language-processing-course **
This Edureka PPT will provide you with comprehensive and detailed knowledge of Natural Language Processing, popularly known as NLP. You will also learn about the different steps involved in processing human language, like Tokenization, Stemming, Lemmatization and much more, along with a demo of each topic.
The following topics are covered in this PPT:
1. The Evolution of Human Language
2. What is Text Mining?
3. What is Natural Language Processing?
4. Applications of NLP
5. NLP Components and Demo
Follow us to never miss an update in the future.
Instagram: https://www.instagram.com/edureka_learning/
Facebook: https://www.facebook.com/edurekaIN/
Twitter: https://twitter.com/edurekain
LinkedIn: https://www.linkedin.com/company/edureka
Natural language processing and transformer models – Ding Li
The document discusses several approaches for text classification using machine learning algorithms:
1. Count the frequency of individual words in tweets and sum for each tweet to create feature vectors for classification models like regression. However, this loses some word context information.
2. Use Bayes' rule and calculate word probabilities conditioned on class to perform naive Bayes classification. Laplacian smoothing is used to handle zero probabilities.
3. Incorporate word n-grams and context by calculating word probabilities within n-gram contexts rather than independently. This captures more linguistic information than the first two approaches.
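The second approach above (naive Bayes with Laplacian smoothing) is compact enough to sketch end to end. The training documents below are invented for illustration:

```python
from collections import Counter, defaultdict
from math import log

def train_nb(docs):
    """docs: list of (tokens, label). Collect class counts, per-class word counts, vocab."""
    class_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in docs:
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return class_counts, word_counts, vocab

def predict(tokens, class_counts, word_counts, vocab):
    """Pick the class maximizing log P(class) + sum of log P(word|class)."""
    total = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for c, n in class_counts.items():
        lp = log(n / total)  # class prior
        # Laplacian smoothing: add 1 to every count so unseen words get
        # a small nonzero probability instead of zeroing out the product.
        denom = sum(word_counts[c].values()) + len(vocab)
        for t in tokens:
            lp += log((word_counts[c][t] + 1) / denom)
        if lp > best_lp:
            best, best_lp = c, lp
    return best

docs = [
    (["great", "movie", "loved"], "pos"),
    (["loved", "it"], "pos"),
    (["terrible", "boring", "movie"], "neg"),
]
model = train_nb(docs)
print(predict(["loved", "movie"], *model))  # 'pos'
```

Working in log space avoids floating-point underflow from multiplying many small probabilities, and the add-one smoothing is exactly the fix for the zero-probability problem the summary mentions.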
The document discusses various use cases for learning ChatGPT through prompts provided in the book "The art of Prompt Engineering with ChatGPT". These use cases include brainstorming ideas in a table, translating a poem from Marathi to English, summarizing content for children, writing articles and blogs, academic writing, drafting emails, learning to code with Python, finding recipes based on available ingredients, and noting important points about ChatGPT's capabilities and limitations. The document provides examples of prompts and ChatGPT's responses for each use case.
The NLP muppets revolution! @ Data Science London 2019
video: https://skillsmatter.com/skillscasts/13940-a-deep-dive-into-contextual-word-embeddings-and-understanding-what-nlp-models-learn
event: https://www.meetup.com/Data-Science-London/events/261483332/
Neural Language Generation Head to Toe – Hady Elsahar
This is a gentle, intuitive introduction to Natural Language Generation (NLG) using deep learning, aimed at computer science practitioners with basic knowledge of machine learning. It takes you on a journey from the basic intuitions behind modelling language and the probabilities of sequences, through recurrent neural networks, to the large Transformer models you have seen in the news, like GPT-2/GPT-3. The tutorial wraps up with a summary of the ethical implications of training such large language models on uncurated text from the internet.
How do Chatbots Work? A Guide to Chatbot Architecture – Maruti Techlabs
A chatbot is a program that can have conversations with humans without human assistance. There are two types of chatbots: rule-based chatbots that are limited to their programming, and AI-based chatbots that can understand open-ended queries using machine learning. Chatbots work through question and answering systems, natural language processing to understand context, and by adopting classification methods like pattern matching, algorithms, and artificial neural networks.
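The pattern-matching approach described for rule-based chatbots can be sketched in a few lines; the rules and responses below are invented for illustration:

```python
import re

# A minimal rule-based chatbot: each rule pairs a regex pattern with a canned response.
RULES = [
    (re.compile(r"\b(hi|hello|hey)\b", re.I), "Hello! How can I help you?"),
    (re.compile(r"\border\s+status\b", re.I), "Please share your order number."),
    (re.compile(r"\b(bye|goodbye)\b", re.I), "Goodbye!"),
]
FALLBACK = "Sorry, I didn't understand that."

def reply(message):
    """Return the response of the first rule whose pattern matches."""
    for pattern, response in RULES:
        if pattern.search(message):
            return response
    # An AI-based bot would hand off here to an ML intent classifier
    # instead of giving up with a canned fallback.
    return FALLBACK

print(reply("hello there"))
print(reply("order status please"))
print(reply("asdf"))
```

The limitation the summary points out is visible immediately: anything outside the hand-written patterns hits the fallback, which is precisely the gap that AI-based chatbots close with learned intent classification.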
On the Evolution of Source Code and Software Defects – Marco D'Ambros
The document discusses techniques for analyzing software evolution and defects using mining software repositories (MSR) approaches. It describes change coupling analysis, which uses part of a meta-model to make sense of large amounts of change coupling information, with the goal of understanding how changes propagate through a software system. The technique examines the relationships between source code changes and software defects by analyzing the coupling between code changes and bug reports.
Commit 2.0 - Enriching commit comments with visualization – Marco D'Ambros
The document describes commit 2.0, a proposed IDE enhancement that aims to improve commit comments by visualizing changes and providing better communication of changes. It notes that developers often leave brief or blank commit comments due to time pressures and lack of resources. This makes understanding changes difficult. Commit 2.0 would enrich commit comments with visualization in the IDE to provide more context around changes without changing the underlying commit mechanism.
How do Chatbots Work? A Guide to Chatbot ArchitectureMaruti Techlabs
A chatbot is a program that can have conversations with humans without human assistance. There are two types of chatbots: rule-based chatbots that are limited to their programming, and AI-based chatbots that can understand open-ended queries using machine learning. Chatbots work through question and answering systems, natural language processing to understand context, and by adopting classification methods like pattern matching, algorithms, and artificial neural networks.
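The pattern-matching style of rule-based chatbot described above can be sketched as a first-match lookup table. The patterns and replies below are invented for illustration; real rule-based bots use far larger rule sets:

```python
import re

# Each rule is a (pattern, canned reply) pair; the first match wins.
RULES = [
    (re.compile(r"\b(hi|hello|hey)\b", re.I), "Hello! How can I help you?"),
    (re.compile(r"\b(opening hours|open)\b", re.I), "We are open 9am-5pm, Monday to Friday."),
    (re.compile(r"\b(price|cost)\b", re.I), "Pricing starts at $10 per month."),
]

FALLBACK = "Sorry, I didn't understand that. Could you rephrase?"

def reply(message: str) -> str:
    # Unmatched input falls through to a fallback, which is exactly the
    # limitation of rule-based bots noted above: they cannot handle
    # open-ended queries outside their programming.
    for pattern, answer in RULES:
        if pattern.search(message):
            return answer
    return FALLBACK

print(reply("Hi there!"))
print(reply("How much does it cost?"))
```

An AI-based bot replaces the hand-written `RULES` list with a trained intent classifier, but the request/response skeleton stays the same.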
On the Evolution of Source Code and Software DefectsMarco D'Ambros
The document discusses techniques for analyzing software evolution and defects using mining software repositories (MSR) approaches. It describes change coupling analysis, which uses part of a meta-model to make sense of large amounts of change coupling information and address the goal of understanding how changes propagate through a software system. The technique analyzes the relationships between source code changes and software defects by examining the coupling between changes in source code and bug reports.
Commit 2.0 - Enriching commit comments with visualization Marco D'Ambros
The document describes commit 2.0, a proposed IDE enhancement that aims to improve commit comments by visualizing changes and providing better communication of changes. It notes that developers often leave brief or blank commit comments due to time pressures and lack of resources. This makes understanding changes difficult. Commit 2.0 would enrich commit comments with visualization in the IDE to provide more context around changes without changing the underlying commit mechanism.
The document describes an approach for predicting and analyzing bugs in software. It involves:
1. Collecting metrics and historical bug data from code repositories and issue tracking databases.
2. Using the metrics and history to build classification and ranking models to predict which classes and components are most likely to contain bugs.
3. Evaluating the models by comparing their predictions to actual newly discovered bugs, and calculating precision, recall, and other metrics to assess prediction performance.
The goal is to focus testing and debugging efforts on the most bug-prone parts of the system based on the analysis. Past defect history was found to be the strongest predictor of future defects.
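The evaluation step can be sketched as a set comparison between the classes a model flags as bug-prone and the classes where bugs were actually found later. The class names below are hypothetical:

```python
# Classes the model predicted to be bug-prone vs. classes where new bugs
# were actually discovered (both invented for illustration).
predicted_buggy = {"Parser", "Renderer", "Cache", "Scheduler"}
actually_buggy = {"Parser", "Cache", "Network"}

true_positives = predicted_buggy & actually_buggy

# precision: how many flagged classes were really buggy
precision = len(true_positives) / len(predicted_buggy)
# recall: how many buggy classes the model managed to flag
recall = len(true_positives) / len(actually_buggy)

print(f"precision={precision:.2f} recall={recall:.2f}")
```

High precision with low recall means the model's warnings are trustworthy but miss many defects; the trade-off guides where to spend testing effort.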
On the Relationship Between Change Coupling and Software DefectsMarco D'Ambros
Change coupling metrics correlate more strongly with software defects than traditional complexity metrics like lines of code and number of changes. Change coupling, measured using number of coupled classes (NOCC) and sum of coupling (SOC), correlates more with both all defects and severe defects than baseline metrics. Additionally, decay models do not improve the correlation for change coupling metrics.
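Correlation studies like the one summarized above typically use a rank correlation between a per-class metric (e.g. SOC) and its defect count; whether this particular study used exactly Spearman's coefficient is an assumption here, and the numbers below are invented:

```python
def ranks(xs):
    # Assign average ranks; tied values get the mean of their rank positions.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # Spearman = Pearson correlation computed on the ranks.
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

coupling = [12, 3, 7, 25, 1]   # e.g. sum of coupling (SOC) per class
defects  = [5, 1, 2, 9, 0]     # defects later found in each class

print(round(spearman(coupling, defects), 2))
```

A coefficient near 1 means classes with higher coupling tend to accumulate more defects, which is the kind of evidence the comparison above rests on.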
This lesson was designed to help students re-think their senior graduation project presentations. Many of the ideas and samples included were inspired by presentations by David Jakes and Ken Rodoff attended at NECC and NCTE
---------
ORIGINAL PRESENTATIONS/ INSPIRATION
http://www.slideshare.net/000036hs
http://www.jakesonline.org/tenstrategies.html
and used / modified with permission for the purpose of giving students good models for how to turn their powerpoints into powerful presentations.
Why should we change the way in which we deliver our presentations? We aim to change the world of presentations forever, one speaker at a time. If you want to join the revolution, check the first issue of our lectures on "The Art of Presentation." Share it with your friends and colleagues, blog about it, spread the word in your social network. Help us eradicate Death by PowerPoint once and for all!
The document provides tips for creating effective presentations. It recommends standing to the left of the screen so the audience focuses on the presenter, not reading slides verbatim. Slides should complement but not replace the presenter. The document also suggests increasing retention by showing slides for 14-21 seconds before explaining them, allowing the visual to "sink in" first. Additionally, it notes attention spans average 18 minutes and provides guidelines for optimal pacing of slides and engaging the audience every few minutes.
Dr. John Medina is an evolutionary biologist. He knows how the brain works. This is what led him to encourage others to destroy their current PowerPoint presentations and start over.
Inspired by his recent book, Brain Rules, BrainSlides.com helps people design effective slide presentations or redesign their existing ones. Created especially for teachers and students, BrainSlides encourage research-based teaching and design practices to improve classroom experiences.
Free information is available on the blog. You can inquire about design or consulting services via the contact form.
Visit brainslides.com for more info.
Video created in Apple Keynote using images available from brainrules.net/news
Mining Cause Effect Chains from Version Archives - ISSRE 2011Kim Herzig
Software reliability is determined by software changes. How do these changes relate to each other? By analyzing the impacted method definitions and usages, we determine dependencies between changes, resulting in a change genealogy that captures how earlier changes enable and cause later ones. Model checking this genealogy reveals temporal process patterns that encode key features of the software process: "Whenever class A is changed, its test case is later updated as well." Such patterns can be validated automatically: In an evaluation of four open source histories, our prototype would recommend pending activities with a precision of 60–72%.
This document contains notes and ideas from various presentations on making effective presentations. It discusses focusing on the message over slides, using simplicity and unexpected elements. It emphasizes using stories, emotion, and building rapport with the audience. Tips include rehearsing, having passion, empathy and pacing your speech. The overall message is to engage the audience and make your presentation memorable through good design and different techniques.
What is important if you are choosing the social media platform for your brand/capaign?
Why context of the communication also matters with Social Media?
I'm talking in this presentation about Facebook, Instagram, Snapchat, LinkedIn, Tinder, Vine, Twitter, Slideshare, Pinterest, YouTube and Tumblr. You will also get some tips which will help you with being efficient in your communication.
The document provides 10 tips for creating effective slides: keep the design simple; limit the amount of text; limit visual effects; use high quality graphics; avoid overuse of templates; emphasize simplicity; carefully choose colors; select fonts appropriately; ensure media like videos work well; and stay organized.
This document offers advice for delivering effective presentations. It highlights the importance of knowing the audience, the venue, and the time available. It recommends starting with a brief, clear opening, continuing with a development that holds attention through striking phrases and repetition, and finishing with a conclusion that drives home the key message. It also emphasizes the use of visual aids, control of body language, and the speaker's conviction and passion.
The document discusses fair use and copyright in the context of digital learning. It outlines goals of gaining knowledge about how copyright and fair use apply, developing confidence sharing information with colleagues, and recognizing how media literacy depends on copyrighted materials. It provides examples of how students use copyrighted materials creatively and academically through activities like digital storytelling and remixing. While technology makes it easy to use and share such content, copyright owners assert their rights in ways that can discourage use. The document advocates replacing outdated copyright knowledge with an accurate understanding of fair use and exemptions, and balancing the rights of owners and users.
This document discusses the importance of preparing, practicing, and reviewing presentations. It emphasizes that presentations should transfer emotions to the audience and be a skill developed through research, storyboarding, and design. Body language, tone of voice, and limiting PowerPoint are key to effective performances, while preparation helps manage fear and allows for adjustments when things go wrong. The overall message is that an effective presentation requires preparation, practice, performance, and review.
Smartphoneography: minilesson on better pics with smartphones for polton
This document provides tips for taking better photos with a smartphone camera. It recommends sticking to basics like keeping the camera steady, getting close to subjects, and adding interest with angles. It also emphasizes the importance of lighting and suggests experimenting with editing apps and filters. The overall message is that the best camera is the one you have with you.
This document offers advice for designing professional, attractive presentations: choose an appropriate title; use graphic elements and colors in moderation so as not to tire the eye; use a legible typeface of at least 16 points; create slides that can be reused for different audiences without overloading them with information; customize templates instead of using the defaults; and identify authorship with a banner or avatar.
Communicate POWERFULLY Onstage - Michelle Villalobos Presentation to The Miam...Michelle Villalobos
Communicate powerfully onstage! Presentation skills and tips for people who get nervous, anxious or just plan SCARED onstage. Learn how to structure and prepare your presentation content, how to deliver it effectively, and how to get mentally prepared.
The document summarizes key points about improving PowerPoint presentations. It begins with statistics showing that the vast majority of presentations are ineffective. It then discusses common mistakes like putting entire scripts on slides, poor spelling, overuse of bullet points, and distracting color schemes. Examples are provided of sample slides that demonstrate these issues. The document concludes by proposing some changes like focusing on the message, engaging the audience, and challenging the status quo.
The document summarizes the Bēhance 99% Conference that took place in May 2012. It provides summaries of talks given by various speakers on topics related to design, creativity, innovation, and entrepreneurship. Key points emphasized include the importance of prototyping, testing ideas early, embracing failure, and focusing on execution over just idea generation. Overall, the conference seemed to aim to provide inspiration and practical advice for shifting the focus from coming up with ideas to implementing and developing them.
Deliver great presentations, talks or storiesAbdul Ghafoor
Lunch and learn workshop first delivered to staff at NHS Improvement outlining the ingredients for an amazing presentation for a range of environments, personal and professional.
The document provides tips and strategies for effective communication and public speaking. It discusses moving from simply providing information to telling stories to engage audiences on an emotional level. It emphasizes designing presentations around a central big idea or journey rather than just decorating slides with information. Some key points include contrasting common and lofty ideas, addressing resistance from audiences, and rehearsing to turn nerves into confidence when presenting. The overall message is that effective communication resonates by focusing on audiences through stories and meaning rather than just information.
- The document provides advice for giving effective research talks by focusing on engaging and motivating the audience rather than impressing them with technical details.
- It emphasizes identifying a clear key idea upfront, using examples to illustrate concepts, and maintaining enthusiasm to keep the audience interested and awake.
- Presenting the motivation and intuition behind the work is more important than outlining everything or discussing related work, and talks should finish on time while allowing for questions.
How to deliver effective presentations, by using the time-tested power of story-telling. Based largely upon guidance provided in Alexi Kapterev's book "Presentation Secrets."
First delivered at the Software Engineering Institute's (SEI's) CMMI Workshop in St. Petersburg, Florida, October 2012. [CmmiTraining.com]
Takaful IKHLAS Public Speaking & Presentation Skills for MarketingKenny Ong
B meller walsh-college-capturing from the platformfv030713Brenda Meller
The presentation provided tips on improving public speaking skills. It began by addressing common myths about public speaking, such as the myths that nervousness is a sign of weakness and that public speaking is an inborn talent. It then offered techniques for engaging audiences, such as maintaining eye contact, using relevant content, and practicing presentations. The presentation concluded by highlighting local resources for further developing public speaking abilities.
The document provides tips for effectively presenting ideas and work. It emphasizes that presentation is important because poorly presented work is often rejected, which is costly and hurts client relationships. Well-presented work gets approved and builds trust. The tips include knowing your audience, distilling ideas down to the essentials, having a clear purpose, confidently revealing the big idea, defending ideas with the thinking already presented, and ending strongly. Presenting is about what the audience hears, so explanations are important.
The document provides tips for effectively presenting ideas and work. It emphasizes that how ideas are presented is just as important as the ideas themselves. Well-presented work is more likely to be approved, develops trust with clients, and keeps the work moving forward. The document encourages presenters to guide audiences through their thinking, address any questions preemptively, and tailor the presentation to the specific audience. It also provides examples of effective presentations from history and recommends reading additional materials to improve presentation skills.
The document provides tips for effectively presenting ideas and work. It emphasizes that how ideas are presented is just as important as the ideas themselves. Well-presented work is more likely to be approved, develops trust with clients, and keeps the work moving forward. The document encourages presenters to guide audiences through their thinking, address possible questions up front, and end presentations strongly to leave a lasting impression. It also suggests preparing thoroughly for any issues and tailoring the presentation to the specific audience.
To Bore No More: Designing & Delivering Presentations That Engage Your AudienceSarah Halstead
This slide show supports a workshop presented in March 2010 at the Fulfilling the Promise Conference in Oconomowoc, WI. While this was a 75 minute workshop, it can easily be expanded to 2 hours, half day or full day presentations.
PLEASE NOTE: This presentation was originally titled "Bore No More." Five months AFTER this presentation was delivered and uploaded, the phrase "Bore No More" was trademarked by Jonathan Petz of Powell, OH. The title has been changed in order to comply with federal trademark rules.
The document provides guidance on effective moderating for usability testing. It discusses that usability testing involves observing users complete tasks while thinking aloud. The moderator plays several roles in guiding the participant and gathering useful feedback. The document outlines best practices for moderators, such as staying neutral, knowing testing goals, and using open-ended questions. It emphasizes the importance of moderating skillfully to obtain valid insights from participants. Regular practice and self-evaluation are recommended for moderators to continuously improve.
Demystifying Creativity: a handbook for left brainers.David Murphy
The document provides a framework for creative problem solving aimed at "left brainers". It begins by addressing common refrains from left-brainers that they are not creative. The goals are then to demystify creativity and provide a useful framework. This framework involves four steps: Define, Know, Collaborate, and Invert. Various techniques are described for each step, such as using the "five whys" to get to the root problem, gathering relevant knowledge from three categories, using a "six hat" team approach, and thinking about the problem from different perspectives. The document argues that creativity comes from structured processes and knowledge rather than being random or a "hollow exhortation".
Hospital pharmacy and it's organization (1).pdfShwetaGawande8
The document discusses hospital pharmacy and its organization, covering: the definition of hospital pharmacy; the functions and objectives of hospital pharmacy; the location and layout of the hospital pharmacy; personnel and floor space requirements; and the responsibilities and functions of the hospital pharmacist.
Integrated Marketing Communications (IMC)- Concept, Features, Elements, Role of advertising in IMC
Advertising: Concept, Features, Evolution of Advertising, Active Participants, Benefits of advertising to Business firms and consumers.
Classification of advertising: Geographic, Media, Target audience and Functions.
UGC CARE LIST OF JOURNALS 2024: UNLOCKING ACADEMIC EXCELLENCEaimlayresearch2
The UGC CARE initiative was launched to promote academic integrity and high-quality research. It aims to identify and maintain a comprehensive list of credible journals across a diverse range of subjects. The UGC CARE list is updated regularly to include quality journals while removing those that fail to meet the set standards.
Beginner's Guide to Bypassing Falco Container Runtime Security in Kubernetes ...anjaliinfosec
This presentation, crafted for the Kubernetes Village at BSides Bangalore 2024, delves into the essentials of bypassing Falco, a leading container runtime security solution in Kubernetes. Tailored for beginners, it covers fundamental concepts, practical techniques, and real-world examples to help you understand and navigate Falco's security mechanisms effectively. Ideal for developers, security professionals, and tech enthusiasts eager to enhance their expertise in Kubernetes security and container runtime defenses.
Creativity for Innovation and SpeechmakingMattVassar1
Tapping into the creative side of your brain to come up with truly innovative approaches. These strategies are based on original research from Stanford University lecturer Matt Vassar, who discusses how you can use them to arrive at truly innovative solutions, whether you are looking for a creative and memorable angle for a business pitch or for business and technical innovations.
Techno-pedagogic skills refer to the ability to effectively integrate technology into teaching and learning processes. In simple terms, it means having the knowledge and skills to use digital tools and resources in a way that enhances the learning experience for students. Teachers with these skills can make lessons more engaging and effective by incorporating technologies such as interactive whiteboards, educational apps, online resources, and multimedia tools in the classroom. This approach allows for the creation of interactive and multimedia-rich lessons, catering to different learning styles and providing personalized learning experiences. Overall, techno-pedagogic skills enable teachers to leverage technology to make learning more fun, interactive, and impactful for students in today's digital age. Here’s how it works:
1. Enhanced Engagement: By using technology, teachers can create more engaging lessons. For example, they might use interactive quizzes or educational games that make learning fun and interactive.
2. Personalized Learning: Technology allows teachers to tailor lessons to individual students’ needs and learning styles. They can provide different resources or activities that cater to each student’s strengths and weaknesses.
3. Access to Information: With digital tools and online resources, students have access to a wealth of information beyond traditional textbooks. This helps them explore topics more deeply and from different perspectives.
4. Collaboration: Technology enables collaborative learning experiences where students can work together on projects, share ideas, and learn from each other’s insights.
5. Impactful Teaching: By mastering techno-pedagogic skills, teachers can make their teaching more effective and impactful. They can deliver content in ways that resonate with today’s tech-savvy students, making learning more relevant and meaningful.
Overall, techno-pedagogic skills empower teachers to leverage technology creatively and effectively in the classroom, ultimately enhancing the educational experience and preparing
Slide Presentation from a Doctoral Virtual Open House presented on June 30, 2024 by staff and faculty of Capitol Technology University
Covers degrees offered, program details, tuition, financial aid and the application process.
Images as attribute values in the Odoo 17Celine George
Product variants may vary in color, size, style, or other features. Adding pictures for each variant helps customers see what they're buying. This gives a better idea of the product, making it simpler for customers to take decision. Including images for product variants on a website improves the shopping experience, makes products more visible, and can boost sales.
Cross-Cultural Leadership and CommunicationMattVassar1
Business is done in many different ways across the world. How you connect with colleagues and communicate feedback constructively differs tremendously depending on where a person comes from. Drawing on the culture map from the cultural anthropologist, Erin Meyer, this class discusses how best to manage effectively across the invisible lines of culture.
20. "Multitasking, when it comes to paying attention, is a myth. We are biologically incapable of processing attention-rich inputs simultaneously." —Dr. Medina
21. People who are interrupted make 50% more errors and take 50% longer to complete a task.
34. Exercise is not just good for general health, it actually improves cognition.
35. "Exercise increases oxygen flow into the brain, which reduces brain-bound free radicals [...] an increase in oxygen is always accompanied by an uptick in mental sharpness. Exercise acts directly on the molecular machinery of the brain itself. It increases neurons' creation, survival, and resistance." —Dr. Medina
36. Even more benefits!
• Reduces depression
• Treats dementia
• Improves reasoning
• Improves long-term memory
• Improves fluid intelligence
• Helps you solve problems
• and more...
37. If you are stuck, go for a walk or a run... just move!
44. "If keeping someone's attention in a lecture was a business, it would have an 80% failure rate." —Dr. John Medina
Rule #4: We do not pay attention to boring things
55. 1. TELEPROMPTING
People tend to put every word they are going to say on their PowerPoint slides. Although this eliminates the need to memorize your talk, ultimately this makes your slides crowded, wordy, and boring. You will lose your audience's attention before you even reach the bottom of your ...
Slide from Don McMillan, “Life After Death by PowerPoint”: http://bit.ly/aYxegN
56. 2. Spelling mistakes
Many people do not run spel cheek before there presentation
BIG MISTAK!!! Nothing makes you lok stupder than speling erors
Slide from Don McMillan, “Life After Death by PowerPoint”: http://bit.ly/aYxegN
57. 3. Bullet pointing
• Avoid • Bullet-Points
• Excessive • And
• Bullet-Pointing • Your
• Only • Key
• Bullet • Messages
• Key • Will
• Points • NOT
• Too • Stand
• Many • Out
(Read down each column: "Avoid Excessive Bullet-Pointing. Only Bullet Key Points. Too Many Bullet-Points And Your Key Messages Will NOT Stand Out.")
Slide from Don McMillan, “Life After Death by PowerPoint”: http://bit.ly/aYxegN
58. 4. Too many levels
• What is worse
  • Too many bullet point levels are shown
    • Type size gets smaller and smaller
      • Until it is utterly unreadable
        • Even for audiences in the 4th row
• So you better have just one bullet-point level
  • Better yet, forget about bullets (bullets, not guns, kill people. Don't you know?)
    • Use them sparingly
      • There are many other ways of detailing your ideas!
Slide from Don McMillan, “Life After Death by PowerPoint”: http://bit.ly/aYxegN
59. 5. Color schemes gone wrong
Bad color schemes can lead to...
• Distraction
• Confusion
• Headache
• Nausea
• Vomiting
• Loss of bladder control
Slide from Don McMillan, “Life After Death by PowerPoint”: http://bit.ly/aYxegN
61. In EDHCM (Exponentially Decayed HCM), entropies for earlier periods of time, i.e., earlier modifications, have their contribution reduced exponentially over time, modelling an exponential decay model. EDHCM was introduced by Hassan. Similarly, LDHCM (Linearly Decayed) and LGDHCM (LoGarithmically decayed) have their contributions reduced over time in a respectively linear and logarithmic fashion. Both are novel. The definition of the variants follow:

EDHCM_{\{a,..,b\}}(j) = \sum_{i \in \{a,..,b\}} \frac{HCPF_i(j)}{e^{\phi_1 \cdot (|\{a,..,b\}| - i)}}    (5)

LDHCM_{\{a,..,b\}}(j) = \sum_{i \in \{a,..,b\}} \frac{HCPF_i(j)}{\phi_2 \cdot (|\{a,..,b\}| + 1 - i)}    (6)

LGDHCM_{\{a,..,b\}}(j) = \sum_{i \in \{a,..,b\}} \frac{HCPF_i(j)}{\phi_3 \cdot \ln(|\{a,..,b\}| + 1.01 - i)}    (7)

where \phi_1, \phi_2 and \phi_3 are the decay factors.
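Assuming the decayed variants divide each period's HCPF contribution by the stated decay terms (exponential, linear, logarithmic), the three metrics can be sketched numerically; the HCPF values and decay factors below are invented:

```python
import math

# hcpf[i-1] stands for HCPF_i(j), the history complexity contribution of
# period i for file j, with periods running 1..n (most recent last).
def edhcm(hcpf, phi1):
    n = len(hcpf)
    return sum(f / math.exp(phi1 * (n - i)) for i, f in enumerate(hcpf, 1))

def ldhcm(hcpf, phi2):
    n = len(hcpf)
    return sum(f / (phi2 * (n + 1 - i)) for i, f in enumerate(hcpf, 1))

def lgdhcm(hcpf, phi3):
    n = len(hcpf)
    return sum(f / (phi3 * math.log(n + 1.01 - i)) for i, f in enumerate(hcpf, 1))

hcpf = [0.4, 0.7, 0.9]  # invented HCPF_i(j) for periods 1..3
print(edhcm(hcpf, 1.0), ldhcm(hcpf, 1.0), lgdhcm(hcpf, 1.0))
```

In all three, the divisor is smallest for the most recent period (i = n), so earlier modifications contribute less, matching the decay intent described above.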
62. Design the zen way: simplicity, clarity, uncluttered
67. We have better recall for visual information
68. "we are wired to pattern" —Dr. Medina
IRSYMCAWTFIBMKGBFBI
69. "we are wired to pattern" —Dr. Medina
IRS YMCA WTF IBM KGB FBI
70. Visual information is easier to remember
Oral: 10%
Visual: 35% (3x)
Oral & Visual: 65% (6x)
Source: Najjar, LJ (1998) Principles of educational multimedia user interface design (via Brain Rules by John Medina, 2008)
72. 90% of the freshwater in the world is in the
Slide from Garr Reynolds: http://www.slideshare.net/garr/sample-slides-by-garr-reynolds
73. ice
Source: SCAR
Inspired by www.slideshare.net/garr/sample-slides-by-garr-reynolds
74. 90% of the ice in our planet is in Antarctica
Inspired by www.slideshare.net/garr/sample-slides-by-garr-reynolds
75. 80% of the world's freshwater is ice in the Antarctic
Source: SCAR
Slide by www.slideshare.net/garr/sample-slides-by-garr-reynolds
81. Use metaphorical image
2% of the world owns 50% of the wealth
Slide from Christina Quick : http://www.slideshare.net/ChrisQuick/new-rules-for-power-point-presentations
82. The poorest 50% of the world owns 1% of the wealth
Slide from Christina Quick : http://www.slideshare.net/ChrisQuick/new-rules-for-power-point-presentations
83. Be provocative
66% of Americans are obese or overweight.
All adults: 134 million (66%)
Women: 65 million (62%)
Men: 69 million (71%)
OECD Factbook 2007
Slide from Garr Reynolds: http://www.slideshare.net/garr/sample-slides-by-garr-reynolds
94. Repetition
Repetition of design elements gives a cohesive look
Slide from Jesse Desjardins: http://www.slideshare.net/jessedee/steal-this-presentation-5038209
126. Break the rules, but do it sparingly
Slide from Eduardo S. de la Fuente: http://www.slideshare.net/eduardo.delafuente/the-art-of-presentation-following-the-zen-path-why
143. The 10-minute rule
[Chart: audience attention starts high and drops to low around the 10-minute mark, over 10–50 minutes of class time]
Source: www.brainrules.net/attention
161. Takeaways & Quotes from Dr. John Medina's Brain Rules
Credits:
"What all presenters need to know", a presentation (of sorts) by Garr Reynolds
SEMINAR (I)
http://slidesha.re/3mMo3c http://slidesha.re/fausgs
"Following the ZEN path"
http://slidesha.re/i8QMa
Zen Rocks by Lane Pierce
Alberto de Vega
http://slidesha.re/8Ykmry
Eduardo S. de la Fuente
http://slidesha.re/17P2Hh
Sample slides: here are a few before/after slides
Garr Reynolds
162. Marco D’Ambros
Computer science researcher
Marco earned a PhD in Informatics from the University of Lugano (Switzerland), and MSc degrees from both Politecnico di Milano (Italy) and the University of Illinois at Chicago (USA). His research interests lie in software engineering, software evolution, and software visualization. He authored more than 30 technical papers, and is the creator of several software visualization and program comprehension tools.
Marco is passionate about presentations: he distilled his experience, gained by giving more than 30 talks at international conferences, in this presentation.
www.inf.usi.ch/phd/dambros/
www.linkedin.com/in/dambros
www.slideshare.net/marcodambros
twitter.com/marquitodambros
On the Evolution of Source Code and Software Defects: amzn.com/1460953568