Unlocking the Power of Generative AI: An Executive's Guide - PremNaraindas1
Generative AI is here, and it can revolutionize your business. With its powerful capabilities, this technology can help companies create more efficient processes, unlock new insights from data, and drive innovation. But how do you make the most of these opportunities?
This guide will provide you with the information and resources needed to understand the ins and outs of Generative AI, so you can make informed decisions and capitalize on the potential. It covers important topics such as strategies for leveraging large language models, optimizing MLOps processes, and best practices for building with Generative AI.
This talk overviews my background as a female data scientist, introduces many types of generative AI, discusses potential use cases, highlights the need for representation in generative AI, and showcases a few tools that currently exist.
A Comprehensive Review of Large Language Models for Code Generation - SaiPragnaKancheti
The document presents a review of large language models (LLMs) for code generation. It discusses different types of LLMs, including left-to-right, masked, and encoder-decoder models. Existing models for code generation such as Codex, GPT-Neo, GPT-J, and CodeParrot are compared. A new model called PolyCoder, with 2.7 billion parameters trained on 12 programming languages, is introduced. Evaluation results show PolyCoder trails comparably sized models overall but outperforms them on C-language tasks. In general, performance improves with larger models and longer training, though training solely on code can be sufficient, or even advantageous, for some languages.
The document discusses generative AI and how it has evolved from earlier forms of AI like artificial intelligence, machine learning, and deep learning. It explains key concepts like generative adversarial networks, large language models, transformers, and techniques like reinforcement learning from human feedback and prompt engineering that are used to develop generative AI models. It also provides examples of using generative AI for image generation using diffusion models and how Stable Diffusion differs from earlier diffusion models by incorporating a text encoder and variational autoencoder.
This document provides a 50-hour roadmap for building large language model (LLM) applications. It introduces key concepts like text-based and image-based generative AI models, encoder-decoder models, attention mechanisms, and transformers. It then covers topics like intro to image generation, generative AI applications, embeddings, attention mechanisms, transformers, vector databases, semantic search, prompt engineering, fine-tuning foundation models, orchestration frameworks, autonomous agents, bias and fairness, and recommended LLM application projects. The document recommends several hands-on exercises and lists upcoming bootcamp dates and locations for learning to build LLM applications.
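Of the topics in that roadmap, embeddings and semantic search lend themselves to a minimal sketch: documents and a query are represented as vectors, and results are ranked by cosine similarity. The toy 3-dimensional vectors below are invented for illustration; real systems use embedding models that produce hundreds of dimensions and a vector database for retrieval.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical 3-d embeddings for three documents (purely illustrative).
docs = {
    "intro to transformers": [0.90, 0.10, 0.20],
    "cooking with cast iron": [0.10, 0.80, 0.30],
    "attention mechanisms": [0.85, 0.15, 0.25],
}
query = [0.88, 0.12, 0.22]  # pretend embedding of "how does attention work?"

# Rank documents by similarity to the query, most similar first.
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked)
```

A vector database performs the same ranking at scale, using approximate nearest-neighbor indexes instead of an exhaustive sort.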
Generative AI: Past, Present, and Future – A Practitioner's Perspective - Huahai Yang
As the academic realm grapples with the profound implications of generative AI
and related applications like ChatGPT, I will present a grounded view from my
experience as a practitioner. Starting with the origins of neural networks in
the fields of logic, psychology, and computer science, I trace their history and
align it within the wider context of the pursuit of artificial intelligence.
This perspective will also draw parallels with historical developments in
psychology. Against this backdrop, I chart a proposed trajectory for the future.
Finally, I provide actionable insights for both academics and enterprising
individuals in the field.
Let's talk about GPT: A crash course in Generative AI for researchers - Steven Van Vaerenbergh
This talk delves into the extraordinary capabilities of the emerging technology of generative AI, outlining its recent history and emphasizing its growing influence on scientific endeavors. Through a series of practical examples tailored for researchers, we will explore the transformative influence of these powerful tools on scientific tasks such as writing, coding, data wrangling and literature review.
Generative AI Use cases for Enterprise - Second Session - Gene Leybzon
This document provides an overview of generative AI use cases for enterprises. It begins with addressing concerns that generative AI will replace jobs. The presentation then defines generative AI as AI that generates new content like text, images or code based on patterns learned from training data.
Several examples of generative AI outputs are shown including code, text, images and advice. Potential use cases for enterprises are then outlined, including synthetic data generation, code generation, code quality checks, customer service, and data analysis. The presentation concludes by emphasizing that people will be "replaced by someone who knows how to use AI", not AI itself.
An Introduction to Generative AI - May 18, 2023 - CoriFaklaris1
For this plenary talk at the Charlotte AI Institute for Smarter Learning, Dr. Cori Faklaris introduces her fellow college educators to the exciting world of generative AI tools. She gives a high-level overview of the generative AI landscape and how these tools use machine learning algorithms to generate creative content such as music, art, and text. She then shares some examples of generative AI tools and demonstrates how she has used some of these tools to enhance teaching and learning in the classroom and to boost her productivity in other areas of academic life.
Explore the risks and concerns surrounding generative AI in this informative SlideShare presentation. It delves into the key areas of concern, including bias, misinformation, job loss, privacy, control, overreliance, unintended consequences, and environmental impact, with examples that highlight the potential challenges associated with generative AI, and it underscores the importance of responsible use and ethical consideration in navigating the complex landscape of this transformative technology.
A brief introduction to generative models in general is given, followed by a succinct discussion about text generation models and the "Transformer" architecture. Finally, the focus is set on a non-technical discussion about ChatGPT with a selection of recent news articles.
This presentation presents an overview of the challenges and opportunities of generative artificial intelligence in Web3. It includes a brief research history of generative AI as well as some of its immediate applications in Web3.
Presented at All Things Open RTP Meetup
Presented by Karthik Uppuluri, Fidelity
Title: Generative AI
Abstract: In this session, let us embark on a journey into the fascinating world of generative artificial intelligence. As an emergent and captivating branch of machine learning, generative AI has become instrumental in a myriad of sectors, ranging from the visual arts to software for technological solutions. This session requires no prior expertise in machine learning or AI. It aims to build a robust understanding of the fundamental concepts and principles of generative AI and its diverse applications. Join us as we delve into the mechanics of this transformative technology and unpack its potential.
What Is GPT-3 And Why Is It Revolutionizing Artificial Intelligence? - Bernard Marr
GPT-3 is an AI tool created by OpenAI that can generate text in human-like ways. It has been trained on vast amounts of text from the internet. GPT-3 can answer questions, summarize text, translate languages, and generate computer code. However, it has limitations as its output can become gibberish for complex tasks and it operates as a black box system. While impressive, GPT-3 is just an early glimpse of what advanced AI may be able to accomplish.
The Future of AI is Generative, not Discriminative (5/26/2021) - Steve Omohundro
The deep learning AI revolution has been sweeping the world for a decade now. Deep neural nets are routinely used for tasks like translation, fraud detection, and image classification. PwC estimates that they will create $15.7 trillion/year of value by 2030. But most current networks are "discriminative" in that they directly map inputs to predictions. This type of model requires lots of training examples, doesn't generalize well outside of its training set, creates inscrutable representations, is subject to adversarial examples, and makes knowledge transfer difficult. People, in contrast, can learn from just a few examples, generalize far beyond their experience, and can easily transfer and reuse knowledge. In recent years, new kinds of "generative" AI models have begun to exhibit these desirable human characteristics. They represent the causal generative processes by which the data is created and can be compositional, compact, and directly interpretable. Generative AI systems that assist people can model their needs and desires and interact with empathy. Their adaptability to changing circumstances will likely be required by rapidly changing AI-driven business and social systems. Generative AI will be the engine of future AI innovation.
Generative AI models, such as ChatGPT and Stable Diffusion, can create new and original content like text, images, video, audio, or other data from simple prompts, as well as handle complex dialogs and reason about problems with or without images. These models are disrupting traditional technologies, from search and content creation to automation and problem solving, and are fundamentally shaping the future user interface to computing devices. Generative AI can apply broadly across industries, providing significant enhancements for utility, productivity, and entertainment. As generative AI adoption grows at record-setting speeds and computing demands increase, on-device and hybrid processing are more important than ever. Just like traditional computing evolved from mainframes to today’s mix of cloud and edge devices, AI processing will be distributed between them for AI to scale and reach its full potential.
In this presentation you’ll learn about:
- Why on-device AI is key
- Full-stack AI optimizations to make on-device AI possible and efficient
- Advanced techniques like quantization, distillation, and speculative decoding
- How generative AI models can be run on device and examples of some running now
- Qualcomm Technologies’ role in scaling on-device generative AI
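Of the advanced techniques named above, quantization is the most self-contained to sketch. The snippet below is a minimal illustration, not Qualcomm's implementation: it maps float weights to int8 using a single symmetric per-tensor scale, then dequantizes and checks the round-trip error (production stacks add per-channel scales, calibration, and sometimes quantization-aware training).

```python
# Minimal sketch of symmetric int8 post-training quantization.
# The weight values are invented for illustration.
weights = [0.42, -1.37, 0.08, 0.91, -0.55]

scale = max(abs(w) for w in weights) / 127.0  # one per-tensor scale
q = [round(w / scale) for w in weights]       # int8 codes in [-127, 127]
dequant = [qi * scale for qi in q]            # back to float

max_err = max(abs(w - d) for w, d in zip(weights, dequant))
print(f"scale={scale:.5f}, max round-trip error={max_err:.5f}")
```

The payoff on device is that each weight shrinks from 4 bytes to 1 and integer arithmetic is cheaper, at the cost of an error bounded by half the scale per weight.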
The document discusses using generative AI to improve learning products by making them better, stronger, and faster. It provides examples of using generative models for game creation, runtime design, and postmortem data analysis. It also addresses ethics and copyright challenges and considers generative AI as both a tool and potential friend. The document explores what models are, how they work, examples of applications, and resources for staying up to date on generative AI advances.
Host:
Bart Raynaud - https://www.linkedin.com/in/bart-raynaud-160a0318/
Title: AI: Past, Present, and Future
Abstract: In 1956, the term "Artificial Intelligence" was coined for a workshop at Dartmouth. Since then there have been waves of waxing and waning enthusiasm and investment, with so-called "AI Winters" following periods when hype did not live up to reality. In late 2022, with the release of ChatGPT and over 100 million users in just 60 days, there is a new wave of hype, investment, excitement, and increased fear of AI use by 'bad actors' for misinformation and other harms to society. What are the future trajectories as this technology is tamed and becomes routine? Are we about to enter a 'golden age' of service in business and society, as technology comes to the service sector as it came to agriculture and manufacturing in the past?
Bio: Jim Spohrer is a retired industry executive (Apple, IBM). In the 1970's, after graduating MIT with a degree in physics, he worked at an AI startup doing speech recognition with mathematical models. In the 1980's, after completing his PhD in Computer Science/AI & Cognitive Science at Yale, he moved to California to join Apple and work on AI for Education. In the late 1990's, he joined IBM as CTO of the Venture Capital Relations group during the internet investment boom, and later started IBM Research's service research area, led IBM Global University Programs, and led IBM's open source AI efforts. Jim's most recent co-authored book, "Service in the AI Era" was published in late 2022.
The document provides information about David Cieslak and his company RKL eSolutions. It introduces Cieslak as the Chief Cloud Officer of RKL eSolutions, a subsidiary of RKL LLP that provides ERP sales and consulting. It then lists Cieslak's experience and accomplishments in the accounting industry. The rest of the document is an agenda for a presentation on technology trends that will discuss topics like AI, new devices, and connectivity standards.
Is it possible to build the Airbus of Supercomputing in Europe? - AMETIC
Presentation by Mateo Valero, Director of the Barcelona Supercomputing Center, as part of the 30th edition of the Encuentros de Telecomunicaciones y Economía Digital.
Community based software development: The GRASS GIS project - Markus Neteler
The document summarizes the GRASS GIS open source project. It discusses the project's objectives of developing free GIS software and algorithms. It describes the international development team and communication structures used, including mailing lists, wikis and bug trackers. Legal aspects of code contributions and licensing are also briefly covered.
Scaling Spatial Analytics with Google Cloud & CARTO - CARTO
In this webinar, we focus on how Google Cloud and CARTO can be used to tackle even the most challenging Location Intelligence use cases at scale. You can watch the recorded webinar at: https://go.carto.com/webinars/google-cloud-spatial-analytics-at-scale
This presentation will provide insight on the phenomenon and emerging trend that is ChatGPT.
It will elaborate on its history, usage, workings, popularity and usefulness in social media marketing.
Past, Present and Future of Generative AI - abhishek36461
Generative AI creates new content (images, text, music) based on learned patterns.
It learns from vast examples and can produce original, unseen works.
Capable of blending learned elements to generate unique outputs.
Can produce customized creations based on specific prompts.
Improves and refines its output over time with more data and feedback.
A non-technical overview of Large Language Models, exploring their potential, limitations, and customization for specific challenges. While this deck is tailored for an audience from the financial industry in mind, its content remains broadly applicable.
(This updated version builds on our previous deck: slideshare.net/LoicMerckel/intro-to-llms.)
The document discusses the first meeting of the Bucharest Google Technology Users Group (GTUG) which took place on March 2, 2010 in Bucharest. The agenda included introductions to Google Web Toolkit (GWT) and Google App Engine (GAE) with live demonstrations of hello world applications in GWT and GAE. The meeting provided overviews and resources for GWT and GAE and concluded with next steps for the Bucharest GTUG user group.
ChatGPT and Bard are AI language models. ChatGPT, based on GPT-3.5, provides engaging conversation and responses and is trained on vast amounts of internet text. Bard is left unspecified here, with too little information given for a direct comparison. Both aim to facilitate conversation and deliver meaningful interactions, but their capabilities depend on their respective architectures and training data.
The document discusses several US grid projects including campus and regional grids like Purdue and UCLA that provide tens of thousands of CPUs and petabytes of storage. It describes national grids like TeraGrid and Open Science Grid that provide over a petaflop of computing power through resource sharing agreements. It outlines specific communities and projects using these grids for sciences like high energy physics, astronomy, biosciences, and earthquake modeling through the Southern California Earthquake Center. Software providers and toolkits that enable these grids are also mentioned like Globus, Virtual Data Toolkit, and services like Introduce.
This document summarizes developments in natural language processing (NLP) in 2020. It discusses large language models like GPT-3, the increasing sizes of transformer-based models, issues with large models, multilingual models, more efficient transformer architectures, benchmarks for evaluating NLP systems, conversational agents, and APIs and cloud services for NLP.
Jim Spohrer discusses the evolution of AI and its applications, as well as the relationship between disciplines and professions. The goal of service science was originally to create a new discipline and profession, but the revised goal is to develop wisdom for rebuilding the world. Spohrer also discusses how disciplines can be categorized into clusters such as the humanities, social sciences, natural sciences, and formal sciences.
BigScience is a one-year research workshop involving over 800 researchers from 60 countries to build and study very large multilingual language models and datasets. It was granted 5 million GPU hours on the Jean Zay supercomputer in France. The workshop aims to advance AI/NLP research by creating shared models and data as well as tools for researchers. Several working groups are studying issues like bias, scaling, and engineering challenges of training such large models. The first model, T0, showed strong zero-shot performance. Upcoming work includes further model training and papers.
This document provides an overview of a geospatial metadata and spatial data workshop held at the University of Oxford. The workshop covered topics such as metadata standards, application profiles, geospatial metadata tools and portals for sharing spatial data and metadata. Hands-on sessions demonstrated how to create metadata using the Geodoc Metadata Editor tool and access spatial data repositories through the Go-Geo portal and ShareGeo open data portal.
This document presents a discussion of generative artificial intelligence led by Carlos J. Costa. It describes AI models that can generate new content such as text, and discusses ethical challenges and potential applications of generative AI in teaching and research.
This document contains notes from a course on machine learning taught by Carlos J. Costa. It discusses different approaches to machine learning, including analogizers, Bayesians, connectionists, evolutionaries, and symbolists. It also covers topics like supervised learning, unsupervised learning, reinforcement learning, regression, classification algorithms, logistic regression, random forests, and cluster analysis.
This document discusses various computing languages used for data analysis including Power Query M, DAX, R, and Python. Power Query M is the formula language used in Power Query, while DAX is primarily a formula or query language developed by Microsoft for data modeling and analysis. R is an open-source statistical computing and graphics environment that runs on various platforms. Python is a general-purpose programming language commonly used for tasks like getting data from scripts and customizing visualizations. References are provided for documentation on these languages.
scikit-learn (formerly scikits.learn) is an open-source machine learning library. It includes various algorithms for classification, regression, clustering, dimensionality reduction, model selection, and preprocessing. https://scikit-learn.org/. https://scikit-learn.org/stable/index.html
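As a minimal illustration of the fit-and-score workflow the library supports (the dataset and model choices below are illustrative, not prescribed by the deck):

```python
# Fit a classifier on scikit-learn's bundled iris dataset and
# evaluate it on a held-out test split.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```

The same `fit` / `predict` / `score` interface carries over to the other estimator families the library provides, which is much of its appeal.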
This document discusses the Pandas library in Python. It provides an introduction to the library, including how to create and manipulate DataFrames, handle data types, import and analyze data, select rows and columns, and deal with missing values. It also discusses how to use Pandas with other libraries such as Scikit-Learn and StatsModels.
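A small sketch of the operations mentioned, creating a DataFrame, imputing a missing value, and selecting rows and columns, using invented example data:

```python
import pandas as pd

# Build a small DataFrame with one missing value.
df = pd.DataFrame({
    "city": ["Lisbon", "Porto", "Faro"],
    "temp_c": [21.5, None, 24.0],
})

# Impute the gap with the column mean, then filter rows by condition.
df["temp_c"] = df["temp_c"].fillna(df["temp_c"].mean())
warm = df[df["temp_c"] > 22.0]          # boolean row selection
print(warm[["city", "temp_c"]])         # column selection
```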
NumPy is a fundamental Python library for scientific computing. It provides array-related functionality and delivers a higher level of performance than plain Python loops.
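A brief sketch of where that performance and convenience come from: whole-array operations and broadcasting replace explicit Python loops.

```python
import numpy as np

# Typed, contiguous arrays let whole-array operations run in compiled code.
a = np.arange(6, dtype=np.float64).reshape(2, 3)  # [[0,1,2],[3,4,5]]
col_means = a.mean(axis=0)                        # per-column means
centered = a - col_means                          # broadcast across rows
print(centered)
```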
This document presents a postgraduate course in project management, including its objectives, learning blocks, and elective course units. The course aims to provide university-level training focused on professional practice and is taught by experienced academics and practitioners.
This document presents the details of an introductory course on project management, including the course objectives, instructors, syllabus, recommended learning resources, and assessment method.
This document poses three key questions about payment usability: where payments are made, how much they cost, and what value a payment card provides. It lists them as three bullet points: "Where do I pay?", "How much do I pay?", and "What's the value in the card?"
This document discusses the WordPress content management system, including what WordPress is, how to create sites with it, and how to install it locally. WordPress is free, open-source software for creating and managing websites flexibly.
The document discusses client-side web development and provides an overview of HTML, CSS, and JavaScript. It explains that HTML is used to define the structure and presentation of web pages, CSS is used to style web pages, and JavaScript is used to add interactive elements and dynamic behavior. The document also includes various code examples of how to write basic HTML, CSS, and JavaScript.
This document describes the main protocols and concepts of the Internet and the World Wide Web, including HTTP, TCP, IP, DHCP, DNS, browsers, HTML, URLs, hyperlinks, images, tables, and forms. It also explains elements such as div, span, lists, and frames, and how to code web pages.
The document provides an overview of web page development using HTML, CSS, and JavaScript. It discusses HTML tags and structure, how to write HTML code, improving pages with elements, links and images. It also covers CSS for customizing page presentation, JavaScript for interactive elements and manipulating the DOM, and examples of using JavaScript to change element styles and positions. Bibliographic references are provided at the end.
This document provides an overview of Enterprise Resource Planning (ERP) systems. It begins with a brief history of ERP, tracing its evolution from early inventory control systems through modern ERP implementations. It defines ERP as a set of integrated software programs and databases that allow organizations to share information and business processes across various departments. The document outlines the key components and architecture of ERP systems, including an integrated database and modular applications. It discusses advantages of ERP like improved business processes, data standardization, and reduced costs. Major ERP vendors like SAP, Oracle, and Microsoft are also highlighted. The document concludes with topics like open source ERP options, ERP certification requirements in Portugal, and integrating ERP with e-commerce.
Weka is a machine learning and data mining software developed at the University of Waikato. It contains tools for data pre-processing, classification, regression, clustering, association rule mining and visualization. Weka is open-source, written in Java and supports a variety of machine learning algorithms. The document provides examples of using Weka for regression analysis, classification, clustering and association rule mining on sample datasets.
This document describes the main concepts of the relational database model, including its characteristics and the rules defined by Edgar Codd. The relational model represents data in two-dimensional tables and uses primary keys to uniquely identify records. Constraints guarantee the integrity of the data stored in a relational database.
This document contains links to 8 different YouTube videos about various business and economic topics including globalization, IT strategy, transaction costs theory, Porter's five competitive forces model, value chain analysis, and agency theory. Each video is authored by Carlos J. Costa and they provide overviews and introductions to fundamental concepts within each topic area.
The document discusses various roles in information systems organizations. It lists common career roles such as Chief Information Officer (CIO), Chief Security Officer (CSO), Chief Information Security Officer (CISO), Chief Technical Officer (CTO), and Database Administrator (DBA). It also provides definitions and descriptions of IT consulting, IT outsourcing, and IT temporary work. Videos and external links are referenced for additional information on some of the roles.
Sustainability requires ingenuity and stewardship. Did you know that Pigging Solutions' pigging systems can help you achieve your sustainable manufacturing goals AND provide rapid return on investment?
How? Our systems recover over 99% of product in transfer piping. Recovering trapped product from transfer lines that would otherwise become flush waste means you can increase batch yields and eliminate flush waste. From raw materials to finished product, if you can pump it, we can pig it.
Quantum Communications Q&A with the Gemini LLM. The questions are based on Shannon's noisy-channel theorem and explore how the classical theory applies to the quantum world.
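As a concrete anchor for the classical side of that discussion, the noisy-channel theorem gives the capacity of a binary symmetric channel with flip probability p as C = 1 - H(p), where H is the binary entropy. A short sketch:

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity in bits per channel use of a binary symmetric channel."""
    return 1.0 - h2(p)

print(bsc_capacity(0.0))  # noiseless: 1 bit per use
print(bsc_capacity(0.5))  # coin-flip noise: 0 bits per use
print(bsc_capacity(0.11))
```

Quantum channels generalize this picture with several inequivalent capacities (classical, private, quantum), which is where the classical intuition starts to bend.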
An invited talk given by Mark Billinghurst on Research Directions for Cross Reality Interfaces. This was given on July 2nd 2024 as part of the 2024 Summer School on Cross Reality in Hagenberg, Austria (July 1st - 7th)
Are you interested in dipping your toes in the cloud native observability waters, but as an engineer you are not sure where to get started with tracing problems through your microservices and application landscapes on Kubernetes? Then this is the session for you, where we take you on your first steps in an active open-source project that offers a buffet of languages, challenges, and opportunities for getting started with telemetry data.
The project is called OpenTelemetry, but before diving into the specifics, we'll start by de-mystifying key concepts and terms such as observability, telemetry, instrumentation, cardinality, and percentile to lay a foundation. After understanding the nuts and bolts of observability and distributed traces, we'll explore the OpenTelemetry community: its Special Interest Groups (SIGs), its repositories, and how to become not only an end user, but possibly a contributor. We will wrap up with an overview of the components in this project, such as the Collector, the OpenTelemetry protocol (OTLP), its APIs, and its SDKs.
Attendees will leave with an understanding of key observability concepts, become grounded in distributed tracing terminology, be aware of the components of OpenTelemetry, and know how to take their first steps toward an open-source contribution!
Key Takeaways: Open-source, vendor-neutral instrumentation is an exciting new reality as the industry standardizes on OpenTelemetry for observability. OpenTelemetry is on a mission to enable effective observability by making high-quality, portable telemetry ubiquitous. The world of observability and monitoring today has a steep learning curve, and in order to achieve ubiquity, the project would benefit from growing its contributor community.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/07/intels-approach-to-operationalizing-ai-in-the-manufacturing-sector-a-presentation-from-intel/
Tara Thimmanaik, AI Systems and Solutions Architect at Intel, presents the “Intel’s Approach to Operationalizing AI in the Manufacturing Sector,” tutorial at the May 2024 Embedded Vision Summit.
AI at the edge is powering a revolution in industrial IoT, from real-time processing and analytics that drive greater efficiency and learning to predictive maintenance. Intel is focused on developing tools and assets to help domain experts operationalize AI-based solutions in their fields of expertise.
In this talk, Thimmanaik explains how Intel’s software platforms simplify labor-intensive data upload, labeling, training, model optimization and retraining tasks. She shows how domain experts can quickly build vision models for a wide range of processes—detecting defective parts on a production line, reducing downtime on the factory floor, automating inventory management and other digitization and automation projects. And she introduces Intel-provided edge computing assets that empower faster localized insights and decisions, improving labor productivity through easy-to-use AI tools that democratize AI.
AC Atlassian Coimbatore Session Slides (22/06/2024) - apoorva2579
These are the combined sessions of the ACE Atlassian Coimbatore event held on 22nd June 2024.
The session order is as follows:
1. AI and the future of the help desk by Rajesh Shanmugam
2. Harnessing the power of GenAI for your business by Siddharth
3. Fallacies of GenAI by Raju Kandaswamy
Implementations of Fused Deposition Modeling in the real world - Emerging Tech
The presentation showcases the diverse real-world applications of Fused Deposition Modeling (FDM) across multiple industries:
1. **Manufacturing**: FDM is utilized in manufacturing for rapid prototyping, creating custom tools and fixtures, and producing functional end-use parts. Companies leverage its cost-effectiveness and flexibility to streamline production processes.
2. **Medical**: In the medical field, FDM is used to create patient-specific anatomical models, surgical guides, and prosthetics. Its ability to produce precise and biocompatible parts supports advancements in personalized healthcare solutions.
3. **Education**: FDM plays a crucial role in education by enabling students to learn about design and engineering through hands-on 3D printing projects. It promotes innovation and practical skill development in STEM disciplines.
4. **Science**: Researchers use FDM to prototype equipment for scientific experiments, build custom laboratory tools, and create models for visualization and testing purposes. It facilitates rapid iteration and customization in scientific endeavors.
5. **Automotive**: Automotive manufacturers employ FDM for prototyping vehicle components, tooling for assembly lines, and customized parts. It speeds up the design validation process and enhances efficiency in automotive engineering.
6. **Consumer Electronics**: FDM is utilized in consumer electronics for designing and prototyping product enclosures, casings, and internal components. It enables rapid iteration and customization to meet evolving consumer demands.
7. **Robotics**: Robotics engineers leverage FDM to prototype robot parts, create lightweight and durable components, and customize robot designs for specific applications. It supports innovation and optimization in robotic systems.
8. **Aerospace**: In aerospace, FDM is used to manufacture lightweight parts, complex geometries, and prototypes of aircraft components. It contributes to cost reduction, faster production cycles, and weight savings in aerospace engineering.
9. **Architecture**: Architects utilize FDM for creating detailed architectural models, prototypes of building components, and intricate designs. It aids in visualizing concepts, testing structural integrity, and communicating design ideas effectively.
Each industry example demonstrates how FDM enhances innovation, accelerates product development, and addresses specific challenges through advanced manufacturing capabilities.
INDIAN AIR FORCE FIGHTER PLANES LIST.pdf - jackson110191
These fighter aircraft have uses outside of traditional combat situations. They are essential in defending India's territorial integrity, averting dangers, and delivering aid to those in need during natural calamities. Additionally, the IAF improves its interoperability and fortifies international military alliances by working together and conducting joint exercises with other air forces.
What Not to Document and Why (North Bay Python 2024) - Margaret Fero
We’re hopefully all on board with writing documentation for our projects. However, especially with the rise of supply-chain attacks, there are some aspects of our projects that we really shouldn’t document, and should instead remediate as vulnerabilities. If we do document these aspects of a project, it may help someone compromise the project itself or our users. In this talk, you will learn why some aspects of documentation may help attackers more than users, how to recognize those aspects in your own projects, and what to do when you encounter such an issue.
These are slides as presented at North Bay Python 2024, with one minor modification to add the URL of a tweet screenshotted in the presentation.
Interaction Latency: Square's User-Centric Mobile Performance Metric - ScyllaDB
Mobile performance metrics often take inspiration from the backend world and measure resource usage (CPU usage, memory usage, etc) and workload durations (how long a piece of code takes to run).
However, mobile apps are used by humans and the app performance directly impacts their experience, so we should primarily track user-centric mobile performance metrics. Following the lead of tech giants, the mobile industry at large is now adopting the tracking of app launch time and smoothness (jank during motion).
At Square, our customers spend most of their time in the app long after it's launched, and they don't scroll much, so app launch time and smoothness aren't critical metrics. What should we track instead?
This talk will introduce you to Interaction Latency, a user-centric mobile performance metric inspired by the Web Vitals metric "Interaction to Next Paint" (web.dev/inp). We'll go over why apps need to track it, how to properly implement its tracking (it's tricky!), how to aggregate the metric, and what thresholds you should target.
Resume for Sadika Shaikh, BCA Student - SadikaShaikh7
I am a dedicated BCA student with a strong foundation in web technologies, including PHP and MySQL. I have hands-on experience in Java and Python, and a solid understanding of data structures. My technical skills are complemented by my ability to learn quickly and adapt to new challenges in the ever-evolving field of computer science.
GDG Cloud Southlake #34: Neatsun Ziv: Automating AppSec - James Anderson
The lecture titled "Automating AppSec" delves into the critical challenges associated with manual application security (AppSec) processes and outlines strategic approaches for incorporating automation to enhance efficiency, accuracy, and scalability. The lecture is structured to highlight the inherent difficulties in traditional AppSec practices, emphasizing the labor-intensive triage of issues, the complexity of identifying responsible owners for security flaws, and the challenges of implementing security checks within CI/CD pipelines. Furthermore, it provides actionable insights on automating these processes to not only mitigate these pains but also to enable a more proactive and scalable security posture within development cycles.
The Pains of Manual AppSec:
This section will explore the time-consuming and error-prone nature of manually triaging security issues, including the difficulty of prioritizing vulnerabilities based on their actual risk to the organization. It will also discuss the challenges in determining ownership for remediation tasks, a process often complicated by cross-functional teams and microservices architectures. Additionally, the inefficiencies of manual checks within CI/CD gates will be examined, highlighting how they can delay deployments and introduce security risks.
Automating CI/CD Gates:
Here, the focus shifts to the automation of security within the CI/CD pipelines. The lecture will cover methods to seamlessly integrate security tools that automatically scan for vulnerabilities as part of the build process, thereby ensuring that security is a core component of the development lifecycle. Strategies for configuring automated gates that can block or flag builds based on the severity of detected issues will be discussed, ensuring that only secure code progresses through the pipeline.
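The gating step described above can be sketched as a small script. The report format (a JSON-like list of finding records) and the severity policy are assumptions for illustration, not any specific scanner's output or configuration.

```python
# Severity levels that block the build; a policy assumption for
# illustration, not any specific scanner's configuration.
BLOCKING_SEVERITIES = {"critical", "high"}

def gate(findings):
    """Return the findings severe enough to fail the pipeline."""
    return [f for f in findings
            if f.get("severity", "").lower() in BLOCKING_SEVERITIES]

def exit_code(findings):
    """A non-zero exit code is what fails the CI job."""
    return 1 if gate(findings) else 0

# Hypothetical scanner output: a list of {"id", "severity"} records.
report = [
    {"id": "CVE-2024-0001", "severity": "HIGH"},
    {"id": "LINT-42", "severity": "low"},
]
for finding in gate(report):
    print(f"BLOCKING: {finding['id']} ({finding['severity']})")
print("exit code:", exit_code(report))  # exit code: 1
```

Wired into a pipeline, the job simply runs this script after the scanner and lets the non-zero exit status block the merge, which is how most CI systems interpret failure.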
Triaging Issues with Automation:
This segment addresses how automation can be leveraged to intelligently triage and prioritize security issues. It will cover technologies and methodologies for automatically assessing the context and potential impact of vulnerabilities, facilitating quicker and more accurate decision-making. The use of automated alerting and reporting mechanisms to ensure the right stakeholders are informed in a timely manner will also be discussed.
Identifying Ownership Automatically:
Automating the process of identifying who owns the responsibility for fixing specific security issues is critical for efficient remediation. This part of the lecture will explore tools and practices for mapping vulnerabilities to code owners, leveraging version control and project management tools.
Three Tips to Scale the Shift Left Program:
Finally, the lecture will offer three practical tips for organizations looking to scale their Shift Left security programs. These will include recommendations on fostering a security culture within development teams, employing DevSecOps principles to integrate security throughout the development
1. 2023 (1)
Carlos J. Costa (ISEG)
GENERATIVE AI: CHALLENGES AND IMPLICATIONS
Moderator: Prof. Carlos J. Costa, ISEG
2. Index
• Presentation of the moderator
• Generative AI
• LLM
• ChatGPT
• Bard
• Generative AI Ecosystem
• Challenges and Implications
3. Carlos J. Costa
Associate Professor with Habilitation
• Chat-GPT Task Force @ ISEG
• Costa, C. (2023, Jan. 6). "GPT-3 e a utilização de Inteligência Artificial no Ensino" [GPT-3 and the use of artificial intelligence in education]. isegtech.blogs.sapo. https://isegtech.blogs.sapo.pt/gpt-3-e-a-utilizacao-de-ia-no-ensino-7355
• Costa, C. (2023, Jul. 16). "Bard: Finalmente em Portugal" [Bard: finally in Portugal]. isegtech.blogs.sapo. https://isegtech.blogs.sapo.pt/bard-finalmente-em-portugal-8321
4. Generative AI
• Generative artificial intelligence includes models that can generate new content, such as images, music, and text.
• An example of such a model is ChatGPT.
5. LLM
• A large language model (LLM) is a type of language model characterized by its ability to achieve general-purpose language processing and generation.
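The definition above can be made concrete with a toy model: at its core, a language model assigns probabilities to the next token given the preceding text, which is exactly what this minimal bigram counter does. An LLM does the same job with a neural network and vastly more data; the corpus here is invented for illustration.

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    # Count how often each token follows each other token.
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def next_token_probs(counts, prev):
    # Normalize the counts for `prev` into a probability distribution.
    total = sum(counts[prev].values())
    return {tok: c / total for tok, c in counts[prev].items()}

# Invented toy corpus, for illustration only.
model = train_bigram("the cat sat on the mat".split())
print(next_token_probs(model, "the"))  # {'cat': 0.5, 'mat': 0.5}
```

Generation then amounts to repeatedly sampling from this distribution and appending the sampled token to the context.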
6. ChatGPT
• Conversational generative artificial intelligence chatbot
• LLM-based chatbot developed by OpenAI
• Launched on November 30, 2022
• The name stands for Generative Pre-trained Transformer
| Name | Release date | Developer | Number of parameters | Corpus size | Training cost (petaFLOP-day) | License |
|------|--------------|-----------|----------------------|-------------|------------------------------|---------|
| GPT-2 | 2019 | OpenAI | 1.5 billion | 40 GB (~10 billion tokens) | | MIT |
| GPT-3 | 2020 | OpenAI | 175 billion | 300 billion tokens | 3640 | proprietary |
| GPT-4 | March 2023 | OpenAI | exact number unknown | unknown | | proprietary |
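As a usage sketch, ChatGPT-style models are typically reached through the OpenAI Python SDK. The model name below is an assumption (substitute any chat model available to your account), and the live call requires an `OPENAI_API_KEY`, so it is wrapped in a function rather than executed here.

```python
def build_messages(system_prompt, user_prompt):
    # Chat models consume a list of role-tagged messages.
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

def ask_chatgpt(prompt, model="gpt-4o-mini"):
    # Live call: needs the `openai` package (v1+) and an OPENAI_API_KEY
    # in the environment. The model name is an assumption.
    from openai import OpenAI
    client = OpenAI()
    resp = client.chat.completions.create(
        model=model,
        messages=build_messages("You are a helpful assistant.", prompt),
    )
    return resp.choices[0].message.content

print(build_messages("You are concise.", "What is an LLM?"))
```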
7. Google Bard
• Conversational generative artificial intelligence chatbot
• Developed by Google
• Based initially on the LaMDA family of large language models (LLMs) and later on the PaLM LLM
| Name | Release date | Developer | Number of parameters | Corpus size | Training cost (petaFLOP-day) | License |
|------|--------------|-----------|----------------------|-------------|------------------------------|---------|
| BERT | 2018 | Google | 340 million | 3.3 billion words | 9 | Apache 2.0 |
| XLNet | 2019 | Google | ~340 million | 33 billion words | | Apache 2.0 |
| GLaM (Generalist Language Model) | December 2021 | Google | 1.2 trillion | 1.6 trillion tokens | 5600 | proprietary |
| Minerva | June 2022 | Google | 540 billion | 38.5B tokens from webpages | | proprietary |
| LaMDA (Language Models for Dialog Applications) | January 2022 | Google | 137 billion | 1.56T words, 168 billion tokens | 4110 | proprietary |
| PaLM (Pathways Language Model) | April 2022 | Google | 540 billion | 768 billion tokens | 29250 | proprietary |
| PaLM 2 (Pathways Language Model 2) | May 2023 | Google | 340 billion | 3.6 trillion tokens | 85000 | proprietary |
9. Generative AI Ecosystem
Other services:
• Chat PDF https://www.chatpdf.com/
• ChatGPT https://chat.openai.com/
• Chatsonic https://app.writesonic.com/
• Consensus https://consensus.app/
• Elicit https://elicit.org/
• Google Bard https://bard.google.com/
• Microsoft Bing Chat (in Edge) https://www.bing.com/
• Research Rabbit https://www.researchrabbit.ai/
• SciCite https://scite.ai/
• SciSpace https://typeset.io/
• SciStyle https://www.scistyle.com/
• You.com https://you.com/
10. Generative AI Ecosystem
• GPT4All (https://gpt4all.io) allows running many pretrained models locally
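Local inference can be sketched with the `gpt4all` Python bindings. The model filename below is an assumption (the library downloads it on first use, and the catalog changes over time), so the call is wrapped in a function rather than run here.

```python
def run_local(prompt, model_name="orca-mini-3b-gguf2-q4_0.gguf"):
    # Downloads the model on first use (several GB). The filename is an
    # assumption; check the GPT4All model catalog for current names.
    from gpt4all import GPT4All
    model = GPT4All(model_name)
    with model.chat_session():
        return model.generate(prompt, max_tokens=128)

# Example (not executed here): run_local("Summarize what an LLM is.")
```

Because everything runs on the local machine, no API key is needed and no prompt data leaves the device, which is the main appeal of this part of the ecosystem.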
11. Generative AI Ecosystem
• FreedomGPT (https://www.freedomgpt.com)
• 100% uncensored and private AI chatbot
• ALPACA and LLaMA models
12. Challenges and Implications
• Education
• Research
• Software development
• Creative industries
• Manufacturing
• Cybersecurity
• Client Support
• Health care
Editor's Notes
Transformer: a deep learning architecture that relies on the parallel multi-head attention mechanism