The document provides an overview of various digital technologies including AI, IoT, cloud computing, data analytics, and more. It discusses the "apples" or fundamental technologies in these areas like AR, VR, AI, IoT, and cloud computing. It then outlines several learning paths one could take to understand these technologies, beginning with foundations in areas like probability, statistics, computer science, and communications. It provides recommendations for books and courses to learn about each technology from roots to more advanced concepts. Finally, it discusses bringing all the pieces together using design thinking.
This document provides an overview of a presentation on deep learning given by Melanie Swan. The key points are:
1) Melanie Swan is a technology theorist who gave a presentation on deep learning and smart networks at a conference in Indianapolis.
2) She discussed the definition and technical details of deep learning, including how it is inspired by concepts from statistical mechanics and physics. Deep learning uses neural networks of processing units to model high-level abstractions in data.
3) Deep learning has many applications including image recognition, speech recognition, and question answering. It is seen as important due to the large worldwide spending on AI and the growth of data science jobs.
The 5G next-generation networking paradigm, with its envisioned capacity, coverage, and data transfer rates, provides fertile ground for novel application scenarios. Virtual, Mixed, and Augmented Reality will play a key role as visualization, interaction, and information delivery platforms. Recent hardware and software developments in immersive technologies (AR, VR, and MR), including the commercial availability of advanced headsets equipped with XR-accelerated processing units and Software Development Kits (SDKs), are significantly increasing the penetration of such devices for entertainment, corporate, and industrial use. This trend creates next-generation usage models that raise serious technical challenges at every networking and software architecture level supporting the immersive digital transformation.
Eliminating hidden data from an image (Iaetsd)
The document proposes a method for blindly extracting hidden data from digital media using multi-carrier iterative generalized least squares (M-IGLS). Data is hidden in the digital media through multi-carrier spread spectrum embedding and DCT transformation. M-IGLS extraction algorithm is then used to extract the hidden data without needing to know the original host or embedding carriers. Experimental results show the extracted data matches the hidden data and M-IGLS provides high signal to noise ratio for blind extraction. The technique aims to provide robust data hiding and extraction to protect against increasing data tracking and tampering attacks.
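The spread-spectrum embedding described above can be illustrated with a toy single-carrier version. This is not the paper's M-IGLS extractor (which works blindly, without the carrier); here the carrier is assumed known, and all signal values are invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(host, bits, carrier, amplitude=4.0):
    """Embed {0,1} bits by adding a signed, scaled carrier to host blocks."""
    blocks = host.reshape(len(bits), -1).astype(float)
    symbols = 2 * np.asarray(bits) - 1            # map {0,1} -> {-1,+1}
    return blocks + amplitude * symbols[:, None] * carrier

def extract(stego, carrier):
    """Correlate each block with the carrier; the sign recovers the bit."""
    return (stego @ carrier > 0).astype(int)

# Demo: hide 8 bits in a random "host" signal, 16 samples per bit.
carrier = rng.standard_normal(16)
carrier /= np.linalg.norm(carrier)                # unit-energy carrier
host = rng.standard_normal(8 * 16)
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])
stego = embed(host, bits, carrier)
recovered = extract(stego, carrier)
```

The noise-like carrier spreads each bit over many samples, which is what makes the embedding hard to detect and robust to perturbation; M-IGLS additionally estimates the carriers themselves from the stego signal.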
Research proposal on Computing Security and Reliability - Phdassistance.com (PhD Assistance)
From introducing new international standards to playing an important role in several industries, computer science is one of the most powerful subjects right now. It is hard to name a single area that does not need computer systems or efficient networking, because technology and computer science go hand in hand in every field. That said, a few core subjects within computer science have unpredictable future uses; computing technologies are one such case.
Designing Cross-Domain Semantic Web of Things Applications (Amélie Gyrard)
The document discusses designing cross-domain semantic web of things applications. It introduces challenges including how to interpret IoT data, combine data from different domains, and reuse domain knowledge. The proposed M3 framework addresses these challenges through components like a SWoT generator template, M3 language and ontology, sensor-based linked open rules, and linked open vocabularies for IoT. Evaluations show the framework helps developers build semantic applications and interprets data efficiently while reusing interoperable domain knowledge. The framework has potential applications in domains like health, tourism and transportation.
This document discusses future computing technologies and challenges. It describes how current computing relies on silicon chips that will soon hit physical limits. Alternative technologies like quantum, photonic and neuromorphic computing are presented as possibilities to overcome these limits. A new university, SIT, is proposed to conduct research on these new computing paradigms through interdisciplinary partnerships. SIT aims to become a top research university and prepare students for leadership roles in technology companies through new advanced degree programs.
Building AI with Security and Privacy in mind (geetachauhan)
The document discusses building AI with security and privacy in mind. It covers privacy challenges in AI like tensions between data privacy and model training. It then discusses various privacy preserving machine learning techniques like homomorphic encryption, differential privacy, secure multi-party computation, on-device computation, and federated learning. The document provides examples of how each technique works. It concludes by discussing tools and techniques for starting a privacy journey in AI and provides resources to learn more.
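Of the techniques listed, differential privacy is the simplest to sketch. Below is a minimal Laplace-mechanism example; the counting query and the parameter values are invented for illustration, not taken from the document:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Differentially private release: add Laplace(sensitivity/epsilon) noise."""
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Demo: privately release a count query. Sensitivity is 1 because one
# person joining or leaving the dataset changes the count by at most 1.
rng = np.random.default_rng(42)
true_count = 1000
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5, rng=rng)
```

Smaller epsilon means more noise and stronger privacy; this tension between privacy and utility is exactly the model-training trade-off the document describes.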
1) Deep learning is a type of machine learning that uses neural networks with many layers to learn representations of data with multiple levels of abstraction.
2) Deep learning techniques include unsupervised pretrained networks, convolutional neural networks, recurrent neural networks, and recursive neural networks.
3) The advantages of deep learning include automatic feature extraction from raw data with minimal human effort, and surpassing conventional machine learning algorithms in accuracy across many data types.
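A minimal numpy sketch of point 1 shows how stacked layers successively transform raw input into higher-level representations. The weights here are random and untrained, chosen only to demonstrate the forward pass:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))    # numerically stable
    return e / e.sum(axis=-1, keepdims=True)

def forward(x, layers):
    """Each hidden layer re-represents its input at a higher abstraction level."""
    for W, b in layers[:-1]:
        x = relu(x @ W + b)
    W, b = layers[-1]
    return softmax(x @ W + b)                        # class probabilities

rng = np.random.default_rng(0)
sizes = [4, 8, 8, 3]                                 # input -> 2 hidden -> output
layers = [(0.1 * rng.standard_normal((m, n)), np.zeros(n))
          for m, n in zip(sizes, sizes[1:])]
probs = forward(rng.standard_normal((2, 4)), layers)
```

Training would adjust the weights by backpropagation; the automatic feature extraction in point 3 is exactly these hidden layers learning useful intermediate representations instead of a human engineering them.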
International Journal of Computational Engineering Research (IJCER) (ijceronline)
International Journal of Computational Engineering Research (IJCER) is dedicated to protecting personal information and will make every reasonable effort to handle collected information appropriately. All information collected, as well as related requests, will be handled as carefully and efficiently as possible in accordance with IJCER standards for integrity and objectivity.
Analysis of Homomorphic Technique and Secure Hash Technique for Multimedia Co... (IJERA Editor)
This document discusses security techniques for multimedia content systems stored in the cloud. It analyzes homomorphic encryption and secure hash techniques. The document provides an overview of homomorphic encryption, describing how it allows computation on encrypted data while preserving privacy. It also reviews related work applying homomorphic encryption and searchable encryption to securely store and process multimedia files in cloud systems. The goal is to push workloads in a fully homomorphic encrypted form to cloud storage while maintaining privacy and flexibility.
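The homomorphic property at the heart of such schemes can be shown with a toy Paillier instance. The parameters below are tiny and insecure, chosen only for illustration, and Paillier is additively homomorphic rather than fully homomorphic like the schemes the document targets:

```python
from math import gcd

# Toy key generation (insecure demo primes; real keys use ~1024-bit primes).
p, q = 17, 19
n, n2, g = p * q, (p * q) ** 2, p * q + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)         # lcm(p-1, q-1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)                  # modular inverse mod n

def encrypt(m, r):
    """r must be random and coprime to n; fixed here for reproducibility."""
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(5, r=7), encrypt(12, r=11)
c_sum = (c1 * c2) % n2    # multiplying ciphertexts adds the plaintexts
```

The cloud can compute `c_sum` without ever seeing 5 or 12; only the key holder can decrypt the result, which is the privacy-preserving computation pattern the document reviews.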
Big Data and Next Generation Network Challenges - Phdassistance (PhD Assistance)
This document provides an overview of next generation networks and big data challenges. It discusses how 5G networks will generate huge amounts of data from billions of wireless devices. Modern data analytics techniques like big data analytics will be needed to efficiently handle and extract insights from this large and diverse data. The document also outlines some of the key requirements for 5G networks, such as high data rates and low latency, and the underlying technologies being developed to achieve this, including millimeter wave spectrum and massive MIMO. It discusses open issues regarding security, privacy, and analyzing heterogeneous data sources.
This document summarizes Melanie Swan's presentation on deep learning. It began with defining key deep learning concepts and techniques, including neural networks, supervised vs. unsupervised learning, and convolutional neural networks. It then explained how deep learning works by using multiple processing layers to extract higher-level features from data and make predictions. Deep learning has various applications like image recognition and speech recognition. The presentation concluded by discussing how deep learning is inspired by concepts from physics and statistical mechanics.
EclipseCon France 2015 - Science Track (Boris Adryan)
Software is increasingly playing a big part in scientific research, but in most cases the growth is organic. The lifetime of research software is often as short as the duration of a postdoctoral contract: once the researcher moves on, custom-written niche code is frequently not well documented, components are not reusable, and the overall development effort is likely lost.
This is a case study of the evolution of software for genomics research within my research group at the Department of Genetics at Cambridge University. As our research questions changed over the past decade, we moved from Perl code and regular expressions to R and statistical analysis, and from there to agent-based simulations in Java. I will discuss not only the languages and tools used but also the processes and how they have evolved over the years, including the factors that influence the nature of this growth, such as funding, and how 'open source' as a default has changed our development work. We also take a look into the future to predict how our software usage will grow.
In presenting these problems and discussing possible solutions, this talk will also look at the role institutions play in helping address these issues. In particular, the Software Sustainability Institute (SSI, http://software.ac.uk/) works in the UK to promote the development, maintenance and (re)use of research software.
The Eclipse Foundation, with the Science Working Group, works to facilitate software sharing and reuse. How can organisations like the SSI and Eclipse align their strategies and activities for maximum effect?
International Journal of Ad hoc, Sensor & Ubiquitous Computing (IJASUC) (ijasuc)
International Journal of Ad hoc, Sensor & Ubiquitous Computing (IJASUC) is a bimonthly open-access, peer-reviewed journal that provides an excellent international forum for sharing knowledge and results in the theory, methodology and applications of ad hoc and ubiquitous computing. The current information age is witnessing dramatic use of digital and electronic devices in the workplace and beyond. Ubiquitous computing places demanding requirements of robustness, reliability and availability on the end user. Ad hoc, sensor and ubiquitous computing has received significant and sustained research interest in designing and deploying large-scale, high-performance computational applications in real life.
Defining a Practical Path to Artificial Intelligence (Roman Chanclor)
With the evolution of purpose-built AI infrastructures and the advancement of Graphics Processing Units (GPUs) that enable massively parallel, deep analysis in real time, cognitive computing may become the norm in data centers in record time. But how?
LSTM deep learning method for network intrusion detection system (IJECEIAES)
Network security has become a primary concern for organizations. Attackers use different means to disrupt services, and this variety of attacks motivates the search for a single way to block them all. In addition, these intrusions can adapt and penetrate security devices. To address these issues, this paper proposes a new Network Intrusion Detection System (NIDS) based on Long Short-Term Memory (LSTM) that recognizes threats and retains a long-term memory of them, in order to stop new attacks resembling existing ones while providing a single mechanism for blocking intrusions. In our detection experiments, accuracy reaches 99.98% and 99.93% for two-class and multi-class classification respectively, while the False Positive Rate (FPR) is only 0.068% and 0.023% respectively. These results show that the proposed model is effective: it has a strong ability to memorize and to differentiate between normal traffic and attacks, and its identification is more accurate than that of other machine learning classifiers.
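The abstract does not include the model details, but the core LSTM mechanism it relies on can be sketched in numpy. The feature count, hidden size, and random weights below are invented placeholders, not the paper's configuration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: input/forget/output gates decide what the cell state
    keeps long-term, which is what lets a NIDS remember past attack patterns."""
    z = x @ W + h @ U + b
    i, f, o, g = np.split(z, 4, axis=-1)
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h_new = sigmoid(o) * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
n_features, hidden = 6, 8                        # e.g. 6 per-packet features
W = 0.1 * rng.standard_normal((n_features, 4 * hidden))
U = 0.1 * rng.standard_normal((hidden, 4 * hidden))
b = np.zeros(4 * hidden)

h = c = np.zeros(hidden)
for x in rng.standard_normal((5, n_features)):   # one 5-packet flow
    h, c = lstm_step(x, h, c, W, U, b)
# h now summarizes the whole sequence and could feed a benign/attack classifier
```

The gating is the point: the forget gate lets the cell retain evidence across long traffic sequences, rather than reacting to each packet in isolation.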
More and faster is not automatically better - data2day, 06.10.16 (Boris Adryan)
The law of large numbers always holds: statistical certainty increases with the number of data points, provided the data are collected fairly. Unfortunately, collecting data often costs money, so especially in the area of sensing (keyword: Internet of Things) one is forced to make sensible compromises. In this talk I summarize the findings of a project in which data analytics showed that in future only 60% of the deployed sensors will really be needed. Nor does it always have to be real-time analysis: with a data strategy tailored to the business case, unnecessary expenses can be avoided.
DLP Systems: Models, Architecture and Algorithms (Liwei Ren 任力偉)
DLP is a data security technology that detects and prevents data breach incidents by monitoring data in use, in motion and at rest. It has been widely applied for regulatory compliance, data privacy and intellectual property protection. This talk will introduce basic concepts and security models to describe DLP systems with a high-level architecture. DLP is an interesting discipline whose content inspection techniques are supported by sophisticated algorithms. Special attention will be given to a few algorithms: document fingerprinting, data record fingerprinting, scalable multi-pattern string matching, and others.
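Document fingerprinting, the first algorithm mentioned, can be sketched with simple 0-mod-p k-gram selection. This is a simplification for illustration; the talk's actual algorithms and production DLP engines are more sophisticated (e.g. winnowing-based selection):

```python
import hashlib

def fingerprint(text, k=8, mod=2):
    """Hash every k-character gram; keep hashes divisible by `mod`."""
    text = " ".join(text.lower().split())            # normalize case/whitespace
    kept = set()
    for i in range(len(text) - k + 1):
        h = int.from_bytes(hashlib.sha1(text[i:i + k].encode()).digest()[:8], "big")
        if h % mod == 0:
            kept.add(h)
    return kept

def similarity(a, b):
    """Jaccard similarity over the two documents' fingerprints."""
    fa, fb = fingerprint(a), fingerprint(b)
    return len(fa & fb) / max(1, len(fa | fb))
```

An exact copy scores 1.0, a near-copy scores high, and unrelated text scores near 0; a DLP engine compares outbound content against a database of registered-document fingerprints in exactly this spirit.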
Cloud computing has had a sweeping impact on human productivity. Today it is used for computing, storage, predictions and intelligent decision making, among other purposes. Intelligent decision making using machine learning has pushed cloud services to be even faster, more robust and more accurate. Security remains one of the major concerns affecting cloud computing growth, and there are further research challenges in cloud adoption such as the lack of well-managed service level agreements (SLAs), frequent disconnections, resource scarcity, interoperability, privacy, and reliability. A tremendous amount of work still needs to be done to explore the security challenges arising from the widespread use of container-based cloud deployments. We also discuss the impact of cloud computing and cloud standards. Hence, this research paper presents a detailed survey of cloud computing concepts, architectural principles, key services, and implementation, design and deployment challenges, and identifies important future research directions in the era of machine learning and data science.
ARTIFICIAL INTELLIGENCE IN CYBER SECURITY (Cynthia King)
Artificial intelligence techniques can help address challenges in cyber security that are difficult for humans to handle alone. Neural networks have proven effective for tasks like pattern recognition and classification that are well-suited to their speed of operation. Expert systems allow codifying security expertise to help with tasks like intrusion detection and response. As cyber threats evolve rapidly, applying learning approaches from artificial intelligence can help security systems adapt dynamically instead of relying only on fixed algorithms. Overall, artificial intelligence shows promise for enhancing cyber security capabilities by accelerating the intelligence of security systems.
The document discusses artificial intelligence (AI) in cloud computing. It covers applications of AI in the cloud like natural language processing, image recognition, and predictive analytics. It also discusses challenges of using AI in the cloud like data privacy/security, specialized hardware/software needs, integration issues, and lack of transparency. Additionally, it outlines the infrastructure needed for AI in the cloud, including compute resources, storage, networking, software, and management/monitoring tools. Major cloud service providers like AWS, Azure, and GCP offer these resources and services to support AI development and deployment in the cloud.
MAKING SENSE OF IOT DATA W/ BIG DATA + DATA SCIENCE - CHARLES CAI (Big Data Week)
Charles Cai has more than two decades of experience and a track record of global transformational programme deliveries, from vision and evangelism to end-to-end execution, in global investment banks and energy trading companies. As Chief Front Office Technical Architect and Head of Data Science, he excels at designing and building innovative, large-scale Big Data systems for high-volume, low-latency trading, global Energy Trading & Risk Management, and advanced temporal and geospatial predictive analytics. He is also a frequent speaker at Google Campus, Big Data Innovation Summit, Cloud World Forum, Data Science London, QCon London, the MoD CIO Symposium and elsewhere, promoting knowledge and best-practice sharing with audiences ranging from developers and data scientists to CXO-level senior executives from both IT and business backgrounds. He has in-depth knowledge of and experience with the Scala, Python, C#/F#, C++, Node.js, Java, R and Haskell programming languages across Mobile, Desktop, Hadoop/Spark, Cloud, IoT/MCU and Blockchain, and holds TOGAF9, EMC-DS and AWS CNE4 certifications.
Platform for Big Data Analytics and Visual Analytics: CSIRO use cases. Februa... (Tomasz Bednarz)
Presented at the ACEMS workshop at QUT in February 2015.
Credits: whole project team (names listed in the first slide).
Approved by CSIRO to be shared externally.
Phoenix Data Conference - Big Data Analytics for IoT 11/4/17 (Mark Goldstein)
“Big Data for IoT: Analytics from Descriptive to Predictive to Prescriptive” was presented to the Phoenix Data Conference on 11/4/17 at Grand Canyon University.
As the Internet of Things (IoT) floods data lakes and fills data oceans with sensor and real-world data, analytic tools and real-time responsiveness will require improved platforms and applications to deal with the data flow and move from descriptive to predictive to prescriptive analysis and outcomes.
Understanding the New World of Cognitive Computing (DATAVERSITY)
Cognitive Computing is a rapidly developing technology that has reached practical application and implementation. So what is it? Do you need it? How can it benefit your business?
In this webinar a panel of experts in Cognitive Computing will discuss the technology, the current practical applications, and where this technology is going. The discussion will start with a review of a recent survey produced by DATAVERSITY on how Cognitive Computing is currently understood by your peers. The panel will also review many components of the technology including:
Cognitive Analytics
Machine Learning
Deep Learning
Reasoning
Next-generation artificial intelligence (AI)
Get involved in the discussion by presenting your own questions to the panel.
What happens in the Innovation of Things? (Kim Escherich)
From the ComputerWorld Internet of Things conference in Copenhagen October 27 2015. On definitions, markets, trends, needed capabilities and how to implement using IBM BlueMix.
the world of technology is changing at an unprecedented pace, and th.docx (pelise1)
the world of technology is changing at an unprecedented pace, and these changes represent business opportunities as well as challenges. Mass connectivity and faster speeds create opportunities for businesses to network more devices, complete more transactions, and enhance transaction quality. Internet Protocol version 6 (IPv6) and Internet of things (IoT) are two such technologies that represent significant opportunities for strategic cybersecurity technology professionals to create lasting value for their organizations.
IoT is the phenomenon of connecting devices used in everyday life. It provides an interactive environment of human users and a myriad of devices in a global information highway, always on and always able to provide information. IoT connections happen among many types of devices — sensors, embedded technologies, machines, appliances, smart phones — all connected through wired and wireless networks.
Cloud architectures such as software as a service have allowed for big data analytics and improved areas such as automated manufacturing. Data and real-time analytics are now available to workers through wearables and mobile devices.
Such pervasive proliferation of IoT devices gives hackers avenues to gain access to personal data and financial information and increases the complexity of data protection. Given the increased risks of data breaches, newer techniques in data loss prevention should be examined.
Increased bandwidth and increased levels of interconnectivity have allowed data to become dispersed, creating issues for big data integrity. In such a world, even the financial transactions of the future are likely to be different — Bitcoin and digital currency may replace a large portion of future financial transactions.
To survive and thrive, organizational technology strategists must develop appropriate technology road maps. These strategists must consider appropriate function, protection, and tamper-proofing of these new communications and transactions.
It will be impossible to protect data by merely concentrating on protecting repositories such as networks or endpoints. Cybersecurity strategists have to concentrate on protecting the data themselves. They will need to ensure that the data are protected no matter where they reside.
Step 2
Select Devices and Technologies
By now, you have an idea of your team members and your role on the team project. Now, it's time to get the details about the devices and technologies needed to be included in the Strategic Technology Plan for Data Loss Prevention.
You should limit the scope of this project by selecting the set of devices and technologies most appropriate for data loss prevention for your business mission and future success. Based on your prior knowledge of your company and on the project roles you agreed upon in the previous step, perform some independent research on the following topics and identify the set of devices and technologies that you propose.
The document discusses the Internet of Things (IoT) and some of the key challenges. It notes that IoT data is multi-modal, distributed, heterogeneous, noisy and incomplete. It raises issues around data management, actuation and feedback, service descriptions, real-time analysis, and privacy and security. The document outlines research challenges around transforming raw data to actionable information, machine learning for large datasets, making data accessible and discoverable, and energy efficient data collection and communication. It emphasizes that IoT data integration requires solutions across physical, cyber and social domains.
This document discusses how telecom companies can leverage artificial intelligence and analytics to drive digital transformation. It identifies key opportunities for AI including improving the customer experience, fraud mitigation, and predictive maintenance. It then outlines the components of a telecom data lake that can support these advanced analytics initiatives. Examples of AI use cases for different telecom business functions like marketing, network operations, and security are also provided. The document argues that a data lake platform optimized for analytics can help telecom companies achieve business and innovation goals through improved operations, new revenue streams, and lower costs.
MBA-TU-Thailand: Big Data for business startup (stelligence)
This document provides an overview of big data presented by Santisook Limpeeticharoenchot. It begins with an introduction to big data, covering definitions, characteristics involving volume, velocity, variety and veracity. Examples of big data sources like machine data, sensor data, and internet of things data are described. The use of big data analytics in industries like manufacturing, healthcare, and transportation is discussed. Finally, the document touches on data visualization, different types of analytics, and how companies can use big data to better understand customers and optimize business processes.
AI in Business - Key drivers and future value (APPANION)
Artificial intelligence is undoubtedly a hyped topic at the moment. But why are investors and digital platform players betting very large amounts of money on this technology right now? To explain the current market dynamics and give an overview of well-known predictions for the upcoming two to three years, we compiled a practical overview of the topic. This report covers the major driving forces of AI, assumptions about the future from industry thought leaders, and practical advice on how to start AI projects within your company.
This document provides information about an Artificial Intelligence Engineer learning path offered by Simplilearn. The learning path includes courses in data science with Python, machine learning, and deep learning with TensorFlow. It describes the key features and benefits of the AI Engineer program, including 15+ in-demand skills and tools covered, 10+ real-life projects, hands-on experience, and an industry-recognized certification upon completion. Successful graduates will be prepared for roles as AI engineers and machine learning engineers.
This document discusses how cognitive computing can help realize the full potential of the Internet of Things (IoT). It notes that while early IoT applications are providing value, the vast majority of data generated by IoT devices is currently unused. Cognitive systems that can learn from large amounts of structured and unstructured data have the potential to extract much more insights from IoT data and enable more advanced IoT applications. The document outlines some key foundations for a successful IoT strategy and argues that cognitive systems like IBM's Watson platform can help address the data challenges of IoT by facilitating deeper human engagement, continuous learning, predictive capabilities, knowledge sharing and optimization of complex systems.
UNCOVERING FAKE NEWS BY MEANS OF SOCIAL NETWORK ANALYSIS (pijans)
This document discusses techniques for identifying fake news using social network analysis. It first reviews literature on existing fake news identification methods that use feature extraction from news content and social context. Deep learning models are then proposed to classify news as real or fake using datasets of news and social network information. The implementation achieves 99% accuracy on binary classification of news. Social network analysis factors like bot accounts, echo chambers, and information spread are discussed as enabling the spread of fake news online.
The easy access to information on social media networks, together with its exponential growth, has made it difficult to distinguish between fake news and real information. Rapid dissemination through sharing has increased falsification exponentially. Preventing the spread of fake information is also essential to the credibility of social media networks. It is therefore an emerging research task to automatically check information for misstatement via its source, content, or author, and to stop unauthenticated sources from spreading rumours. This paper demonstrates an artificial-intelligence-based approach for identifying fake statements made by social network entities. Variants of deep neural networks are applied to evaluate datasets and test for the presence of fake news. The implementation achieved up to 99% classification accuracy when the dataset was tested for binary (real or fake) labelling over multiple epochs.
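The deep-network pipeline itself is not shown in the abstract. As a stand-in, a toy bag-of-words classifier illustrates the binary real-vs-fake labelling task; the headlines are invented, and plain logistic regression replaces the paper's deep neural networks:

```python
import numpy as np

# Invented toy headlines; label 1 = fake, 0 = real.
texts = ["shocking miracle cure doctors hate",
         "aliens secretly control the government",
         "city council approves new budget",
         "university publishes climate study"]
labels = np.array([1, 1, 0, 0])

vocab = {w: i for i, w in enumerate(sorted({w for t in texts for w in t.split()}))}

def vectorize(batch):
    """Bag-of-words count vectors over the training vocabulary."""
    X = np.zeros((len(batch), len(vocab)))
    for i, t in enumerate(batch):
        for w in t.lower().split():
            if w in vocab:
                X[i, vocab[w]] += 1
    return X

X, y = vectorize(texts), labels

# Train logistic regression by gradient descent on the count vectors.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    grad = p - y
    w -= 0.5 * (X.T @ grad) / len(y)
    b -= 0.5 * grad.mean()

preds = (1 / (1 + np.exp(-(X @ w + b))) > 0.5).astype(int)
```

A real system would replace the count vectors with learned embeddings and the linear model with a deep network, and would add the social-context features (bot accounts, spread patterns) the paper discusses.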
The document discusses emerging technologies including artificial intelligence, blockchain, big data/analytics, internet of things, cybersecurity, augmented/virtual reality, and cloud computing. It provides brief definitions and examples for each technology. The document also discusses related topics like business intelligence vs data science, IoT and cybersecurity applications, and additional frameworks/methodologies like Angular/React, DevOps, intelligent apps, and robotic process automation. Overall the document provides a high-level overview of popular emerging technologies and how they are being applied.
Dell NVIDIA AI Powered Transformation in Financial Services Webinar (Bill Wong)
Digital transformation through data analytics and AI can help financial services firms address business, technology, and labor challenges caused by COVID-19. Key trends include increased reliance on remote work and digital platforms, and the importance of data analytics for decision making. By 2025, 90% of new apps will use AI. The document discusses NVIDIA and Dell Technologies' partnership and strategies for providing infrastructure to support AI workloads through solutions like the DGX A100 system, which can support training, inference, and analytics on one platform through technologies like GPUs and MIG. This helps provide a more flexible and efficient infrastructure compared to traditional siloed approaches.
Similar to 2018 learning approach-digitaltrends (20)
1. The document discusses key aspects of digital transformation including focusing on speed, data, and ecosystems. It emphasizes the importance of building digital capabilities like customer experience, operations, and business models.
2. Transformation requires changes in information technology, strategy, and organizational agility. Companies should move along a continuum from pre-digital to digital pure play.
3. Accelerating transformation involves increasing speed through shorter feedback loops, leveraging large amounts of available data, and developing partnerships within ecosystems. Digital thread and twin approaches can also drive continuous improvement.
Architects and designers do understand the principles of design. Yet delving into requirements without paying heed to the need to identify latent needs remains a challenge.
Distribution Automation - Emerging Trends and Challenges. Provides an overview of the challenges, goes into detail by introducing the IEC 61850 standard, and concludes by discussing the need for a maker approach or workshops, enabling better skills development at institutions.
This presentation discusses human-machine collaboration in industrial automation. It covers the need for human-machine interaction, handling uncertainty through a lifecycle approach using digital twins, and progressing towards more autonomous systems through a step-by-step approach. The overarching goal is a symbiotic relationship between humans and machines by leveraging their respective strengths to maximize benefits.
The document provides an overview of software architecture in an agile world. It discusses the need for speed in software development driven by factors like scaling, heterogeneous systems, and reduced infrastructure costs. It then covers foundations of software architecture including definitions, skills, deliverables, and examples of great reference architectures. The remainder outlines an agile approach to software architecture, including preparing an architecture vision, early decisions, decomposition, identifying significant elements, a risk-based roadmap, measuring progress, and communicating. The summary emphasizes that architects now play a continuous role on development teams.
The document summarizes power systems evolution and technology trends in India. It discusses how India is balancing universal electricity access with reducing climate change impacts. It outlines the development of India's power grid and challenges in ensuring efficient operations. The document then covers trends in power generation automation, transmission control and management, distribution microgrids, and emerging operational views. It emphasizes opportunities for original research in areas like renewable energy modeling, energy storage, distributed decision-making, and integrating power systems with communication technologies like 4G networks.
This document discusses industry expectations for entry-level engineers and steps for improvement. It notes that most new engineers require 8-12 months of additional training to be productive. It recommends developing core employability skills like communication, integrity, and willingness to learn during university. Industry expects professional skills like applying math/science, problem-solving, and customer service. Universities should focus more on "doing" through workshops, competitions, and industry projects to help students gain real-world experience and transition from classroom to workplace.
1. The document discusses using scenarios and use cases to visualize requirements for complex projects. It provides an example of Boeing's analysis and simulation studio, which uses scientific visualization to recreate scenarios.
2. An example use case for withdrawing cash from an ATM is described in detail, including basic and alternate flows.
3. It is important to develop use cases from requirements to establish a shared vision between stakeholders. Scenario visualization tools can be used to define, edit, load, and play scenarios to further illustrate requirements.
This is daily increasing. More relevant
http://publications.computer.org/software-magazine/2017/11/16/automotive-engineering-software-and-agile-development/
The document provides an overview of design thinking and innovation. It discusses the need for innovation in industries facing disruption, competition, and the need for new skills. Design thinking is presented as an approach that includes inspiration, ideation, and implementation through iterative cycles. Key aspects of design thinking are identifying real customer needs through research and observation, generating solutions to address those needs, and evaluating whether solutions are technically feasible (win) and commercially viable (worth pursuing based on net present value analysis). Examples are given of how design thinking was applied at IDEO and how it could transform product development processes at organizations.
This document discusses perspectives on innovation from economists and management scholars. It defines innovation as invention plus exploitation or commercialization. The steps involved in innovation are identified as identifying resources, understanding organizational limitations and abilities, managing interfaces, and assessing projects from a customer value and systems perspective. Principles of "jugaad", an Indian strategy, are presented, as are case studies of innovative individuals like Steve Jobs, Sir Jagadish Chandra Bose, Dr. Verghese Kurien, and Ratan Tata.
2. The apples seen on the tree include:
AR - Augmented Reality
VR - Virtual Reality
AI - Artificial Intelligence
IoT - Internet of Things
CC - Cloud Computing
DA - Data Analytics
CS - Cyber Security
SDN - Software Defined Networks

Other technologies seen on the tree include: Software Defined Everything, Voice Recognition, Micro Services, Model Based Engineering, Distributed Source Control, Open Communication Stacks, Bayesian Belief Networks, Natural Language Processing, Deep Learning, Open Embedded Systems, Distributed Databases, Big Data Analytics, Internet or World Wide Web, Data Mining, Computer Vision, ARM Architecture & System on Chip, Cryptography & Security Engineering, and Automated Test & Hardware in Loop.

How do we get to the apple and eat it as well? Sir Isaac Newton identified the force behind the apple falling down as gravitation. Many such strides led to the advancement of humanity. Today, as humans, we are no more mere observers; instead we have the power to create anew from our understanding of nature.

Every science has been through three distinct stages: classification, correlation, and Effect-Cause-Effect. We call a certain field of interest a science once it has reached the third phase.

The correlations done here are a humble attempt to compile and analyse these technologies and help accelerate their learning.
If you don’t know what you don’t know, that is a great beginning – Socrates
3. Summary – Page 1
The fundamentals include Probability and Statistics, Linear Algebra, Computer Science, Communication and Sensing, plus Software and Systems Engineering abilities. [6]
The Internet and its architecture, built by Tim Berners-Lee and team, are something to be understood. Roy Fielding's seminal dissertation describing REpresentational State Transfer (REST) is a must-read. [7]
Data Mining and Visualization have been powerful tools for decision support as well as for arriving at correlations. While Weka focuses on Data Mining, for visualization we can use R with RStudio, Qt Data Visualization, or Python itself. [7]
The emergence of consumer electronics, especially mobile phones, triggered the popularity of ARM-based architectures and System on Chip, which led to a reduction of hardware costs. The ARM architecture and its lifecycle approach are unique, considering the instruction set is now used worldwide in many millions of devices. [7]
Computer Vision, driven by OpenCV, was the first to mature with Optical Character Recognition, considering the ease with which vision elements can be transformed into matrices and processed. Automated Test and Hardware-in-Loop advancements in simulation provide significant ability to automate the development and test process, speeding time to market. [8]
Cryptography and Security Engineering emerged with digital solutions and the Internet. With open approaches like OpenSSL and OpenLDAP, driven by standards on Information Security such as IEC 62351 and ISA-99, and with new laws on data protection, cyber security becomes an ingredient of all digital solutions. [8]
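To make Fielding's REST idea concrete, here is a minimal sketch (not from the deck) of a REST-style resource served with only Python's standard library; the `/items` endpoint and its payload are invented for illustration.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A toy in-memory collection; "items" and its fields are invented for illustration.
ITEMS = {1: {"id": 1, "name": "sensor"}, 2: {"id": 2, "name": "gateway"}}

class ItemHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # REST: the URL names a resource, the HTTP verb names the action on it.
        if self.path == "/items":
            body = json.dumps(list(ITEMS.values())).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging

def serve_in_background(port):
    """Start the server on 127.0.0.1:<port> in a daemon thread and return it."""
    server = HTTPServer(("127.0.0.1", port), ItemHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    server = serve_in_background(8765)
    with urllib.request.urlopen("http://127.0.0.1:8765/items") as resp:
        print(resp.status, json.loads(resp.read()))
    server.shutdown()
    server.server_close()
```

A framework like Flask packages the same routing and serialization far more conveniently; the point here is only that REST is a constraint on how URLs, verbs, and representations are used, not a library.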
4. Summary – Page 2
Data Analytics is one area that has moved into the realm of science. Discussed in detail in the McKinsey report "The Age of Analytics", this area has significant influence on value propositions. Unless you require autonomy, as in robotics, data science could provide the solution. The tools for data science include R with RStudio, Python, and supporting tools. PyMC, a Python-based library, can be used for Bayesian Belief Networks (BBN), while Natural Language Processing is supported by libraries like NLTK. [9]
The considerations of AI start from Machine Learning and progress into the latest applications of Deep Learning, using open tools like TensorFlow from Google. Any discussion on learning AI should start with the Machine Learning course from Andrew Ng and then progress to deep learning. [10]
Micro Services, an architectural style based on evolutionary design of services, is the approach suggested and followed by organizations including Amazon to stay agile and support faster deployment. [11]
Open communication stacks like OpenStack, the basic TCP/IP stack, and the latest IoT protocol stacks have together started supporting an ecosystem for speed and innovation in communication development. Additionally supported by network emulators like Mininet that support Software Defined Networking, the emergence of Software Defined Everything is on the way. [12]
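Before reaching for PyMC, the rule underneath every Bayesian Belief Network can be worked by hand. The sketch below applies Bayes' rule to one binary node; the fault-detector numbers are invented for illustration.

```python
def bayes_posterior(prior, likelihood, false_positive_rate):
    """P(H | E) by Bayes' rule for a binary hypothesis H and evidence E."""
    # Total probability of the evidence: P(E) = P(E|H)P(H) + P(E|~H)P(~H)
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Invented numbers: a fault occurs in 1% of units, the detector catches 90%
# of faults but false-alarms on 5% of healthy units.
posterior = bayes_posterior(prior=0.01, likelihood=0.9, false_positive_rate=0.05)
print(round(posterior, 3))  # 0.154: most alarms are still false despite a good detector
```

A BBN chains many such conditional tables together; PyMC's contribution is inferring posteriors over whole networks where this arithmetic is no longer tractable by hand.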
5. Summary – Page 3
There are multiple learning paths, for instance:
Probability & Statistics -> Internet -> Data Mining -> Visualization -> Analytics
Communications & Sensing -> Internet -> ARM SoC -> HW in Loop -> Open Communications -> IoT
Computer Science -> Internet -> Micro Services -> Open Communications -> Cloud Computing
Linear Algebra, Computer Science -> Internet -> Machine Learning -> Deep Learning
A Software or Systems Engineering background is necessary, as is a basic understanding of Cyber Security.
The technologies to learn should be chosen based on your strengths. If you are strong in probability, analytics is a sure shot, whereas if you are good at communication protocols, then advancement in protocols and communication, including SDN and IoT, is the way to go. If you already have an idea of neural networks and are looking forward to autonomy, deep learning is surely the way to go. At the same time, building knowledge in parallel and applying it in some solution is advisable, considering the availability of open source frameworks to start with. Mastery of all these technologies in detail would be difficult without great teamwork!
To tie these all together, we need to select the right use cases and business cases, design them minimally, analyze the returns and the capabilities, and select what best serves the purpose. Here we can apply Design Thinking as a framework for analysis and selection, thus tying the pieces together to make a bigger impact. Industry standards are another important ingredient. [13]
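The paths above are ordered prerequisites, so they can be modelled as plain sequences. A small sketch (topic names abbreviated from the slide):

```python
# The four learning paths from this slide, as ordered topic lists.
PATHS = {
    "analytics": ["Probability & Statistics", "Internet", "Data Mining",
                  "Visualization", "Analytics"],
    "iot": ["Communications & Sensing", "Internet", "ARM SoC", "HW in Loop",
            "Open Communications", "IoT"],
    "cloud": ["Computer Science", "Internet", "Micro Services",
              "Open Communications", "Cloud Computing"],
    "deep_learning": ["Linear Algebra & Computer Science", "Internet",
                      "Machine Learning", "Deep Learning"],
}

def next_topic(path, completed):
    """Return the next topic to study on a path, or None if it is finished."""
    for topic in PATHS[path]:
        if topic not in completed:
            return topic
    return None

print(next_topic("analytics", {"Probability & Statistics", "Internet"}))  # Data Mining
```

Note that every path passes through "Internet": whatever your strength, the slide treats Internet fundamentals as the shared trunk.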
6. Roots - Foundations
Probability and Statistics
Book: https://www.dartmouth.edu/~chance/teaching_aids/books_articles/probability_book/amsbook.mac.pdf
Courses:
https://www.edx.org/course/introduction-probability-science-mitx-6-041x-2
PH525.1x: Data Analysis for Life Sciences 1: Statistics and R
https://www.edx.org/xseries/data-analysis-life-sciences

Computer Science
Book: Introduction to Computation and Programming Using Python, Revised and Expanded Edition by John V. Guttag
Course: https://www.edx.org/course/introduction-computer-science-mitx-6-00-1x-11

Communication & Sensing
Book: Computer Networks by Andrew S. Tanenbaum
Courses:
https://www.edx.org/course/system-view-communications-signals-hkustx-elec1200-1x-3
https://www.edx.org/course/digital-networks-essentials-imtx-net01x
Sensors:
http://engineering.nyu.edu/gk12/amps-cbri/pdf/Intro%20to%20Sensors.pdf
https://www.edx.org/course/introduction-control-system-design-first-mitx-6-302-0x
7. First steps
Internet
Book: Seminal dissertation by Roy Thomas Fielding on REST
Courses:
Java Web Services
Microsoft .NET Web Services
Python-based services: https://blog.miguelgrinberg.com/post/designing-a-restful-api-with-python-and-flask

Data Mining
Book: Data Mining: Practical Machine Learning Tools and Techniques (Morgan Kaufmann Series in Data Management Systems)
Open Source Software: Weka, https://www.cs.waikato.ac.nz/ml/weka/ (University of Waikato)

Data Visualization
Book: http://www.storytellingwithdata.com/book
Courses:
https://www.edx.org/course/data-visualization-all-trinityx-t005x
https://www.udemy.com/data-visualization/
https://www.coursera.org/learn/datavisualization
Open Source Software: Qt Data Visualization, HTML5-JQuery, Tableau Public

ARM architecture and SoC
Book: ARM System-on-Chip Architecture by Steve Furber
Free Software: ARM Keil
Course: Embedded Systems - Shape The World: Microcontroller Input/Output
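What a data-mining tool like Weka automates can be seen in miniature by hand-rolling one of its standard algorithms. Below is a plain 1-D k-means clustering sketch in pure Python; the data points are invented for illustration.

```python
import random

def kmeans_1d(values, k, iters=20, seed=0):
    """Plain 1-D k-means, the kind of clustering a tool like Weka automates."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    for _ in range(iters):
        # Assignment step: each value joins its nearest center's cluster.
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        # Update step: each center moves to its cluster's mean.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

data = [1.0, 1.2, 0.8, 9.9, 10.1, 10.0]
print(kmeans_1d(data, 2))  # two centers, near 1.0 and 10.0
```

Real tools add what this sketch omits: multi-dimensional distances, smarter initialization, and cluster-quality metrics.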
8. Next Steps
Cyber Security
Book: Security Engineering by Ross Anderson
Courses:
https://www.edx.org/course/introduction-cybersecurity-uwashingtonx-cyb001x
https://www.edx.org/micromasters/ritx-cybersecurity

Computer Vision
Book: Mastering OpenCV with Practical Computer Vision Projects
Course: https://www.udemy.com/hands-on-computer-vision-with-opencv-python/

Automated Test and Hardware in Loop
Book: 'Test Driven Development: By Example' by Kent Beck
Test-driven development for Embedded C: http://cpputest.github.io/
Hardware-in-the-Loop (HIL) Simulation
Courses:
https://www.edx.org/course/devops-testing-microsoft-devops200-5x-0
https://www.udemy.com/test-driven-development-for-professionals/
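Kent Beck's red/green cycle fits in a few lines. The sketch below (an invented leap-year example, shown in Python rather than the embedded C of CppUTest) writes the failing test first, then just enough production code to pass it.

```python
import unittest

# Red: the test is written first and fails until is_leap exists and is correct.
class TestLeapYear(unittest.TestCase):
    def test_leap_rules(self):
        self.assertTrue(is_leap(2000))   # divisible by 400
        self.assertFalse(is_leap(1900))  # century year, not divisible by 400
        self.assertTrue(is_leap(2024))
        self.assertFalse(is_leap(2023))

# Green: write just enough production code to make the test pass.
def is_leap(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

if __name__ == "__main__":
    unittest.main(argv=["tdd"], exit=False, verbosity=0)
```

The refactor step then cleans the code up with the test as a safety net; Hardware-in-the-Loop extends the same idea by running such tests against simulated plant hardware.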
9. Next Steps
Start with Data Analytics and AI with this report:
The Age of Analytics: Competing in a Data-Driven World, by McKinsey
https://www.mckinsey.com/business-functions/mckinsey-analytics/our-insights/the-age-of-analytics-competing-in-a-data-driven-world

Data Analytics
Books:
Python for Data Analysis: Data Wrangling with Pandas, NumPy, and IPython
Data Analytics - Models and Algorithms for Intelligent Data Analysis
Courses:
https://www.edx.org/xseries/data-analysis-life-sciences
https://www.edx.org/course/data-science-r-basics-harvardx-ph125-1x-0
https://www.edx.org/course/python-for-data-science
Open Source Software: Python NLTK for Natural Language Processing and PyMC for Bayesian Belief Networks
URLs:
https://www.kdnuggets.com/2018/01/gregory-piatetsky-data-science-incubator-january.html
https://www.kdnuggets.com/2017/07/machine-learning-big-data-explained.html
https://www.kdnuggets.com/2017/09/science-data-science.html
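The first step NLTK automates, tokenizing text and counting terms, can be approximated with the standard library alone. A crude sketch (the sample sentence and stopword list are invented):

```python
import re
from collections import Counter

def top_terms(text, n=3, stopwords=frozenset({"the", "of", "a", "in", "and", "into"})):
    """Crude term-frequency extraction; NLTK does this (and far more) properly."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(t for t in tokens if t not in stopwords)
    return counts.most_common(n)

doc = ("The age of analytics: competing in a data-driven world. "
       "Analytics turns data into decisions.")
print(top_terms(doc, n=2))  # [('analytics', 2), ('data', 2)]
```

NLTK replaces each piece with a linguistically informed version: proper tokenizers, curated stopword corpora, stemming, and part-of-speech tagging.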
10. Next Steps
Artificial Intelligence
Book: Artificial Intelligence: A Modern Approach (Pearson New International Edition) by Stuart Russell and Peter Norvig
Courses:
https://courses.edx.org/courses/BerkeleyX/CS188.1x-4/1T2015/course/
https://www.edx.org/course/artificial-intelligence-ai-columbiax-csmm-101x-4
MicroMasters:
https://www.edx.org/micromasters/columbiax-artificial-intelligence
A faster path, following Andrew Ng (Baidu), which also explains the math behind it:
https://www.edx.org/course/machine-learning-columbiax-csmm-102x-2
https://www.coursera.org/learn/machine-learning
https://www.coursera.org/specializations/deep-learning
Open Software: TensorFlow
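The opening algorithm of Andrew Ng's Machine Learning course, linear regression by gradient descent, needs no framework at all. A self-contained sketch with invented data:

```python
def fit_line(xs, ys, lr=0.01, epochs=2000):
    """One-variable linear regression by batch gradient descent, the first
    algorithm taught in Andrew Ng's Machine Learning course."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of the mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]  # generated from y = 2x + 1
w, b = fit_line(xs, ys)
print(round(w, 2), round(b, 2))  # converges close to 2.0 and 1.0
```

Deep learning with TensorFlow is this same loop scaled up: more parameters, automatic differentiation instead of hand-derived gradients, and nonlinear layers between input and output.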
11. Next Steps
Internet of Things
Book: Internet of Things: Principles and Paradigms
Courses:
https://www.edx.org/micromasters/curtinx-internet-of-things-iot
https://www.edx.org/course/iot-sensors-devices-curtinx-iot2x
https://www.mooc-list.com/tags/sensors

Micro Services
Book: Microservice Architecture: Aligning Principles, Practices, and Culture
Material:
https://martinfowler.com/articles/microservices.html
https://www.fullstackpython.com/microservices.html

Distributed Source Control
Book: Version Control with Git, 2e
Material: https://git-scm.com/book/en/v2

Model Based Engineering
Book: Model-Based Systems Engineering with OPM and SysML
Courses:
https://www.coursera.org/learn/mbse
https://sysengonline.mit.edu/

Distributed Databases
12. Next Steps
Open Embedded Systems
Books:
Real-Time Embedded Systems: Open-Source Operating Systems Perspective
Using the FreeRTOS Real Time Kernel
Material:
FreeRTOS
https://www.raspberrypi.org/help/

Open Communications
Books:
OpenStack Essentials
OpenStack for Architects
Material:
https://www.edx.org/course/introduction-openstack-linuxfoundationx-lfs152x

Software Defined Everything
Books:
SDN - Software Defined Networks
Software Defined Storage for Dummies
Cloud Computing: Principles and Paradigms
Distributed and Cloud Computing: From Parallel Processing to the Internet of Things
Courses:
https://www.edx.org/course/introduction-cloud-computing-ieeex-cloudintro-x-2
13. Putting it all together
Design Thinking
Book: Change by Design: How Design Thinking Transforms Organizations and Inspires Innovation
Short Presentation: Link in Slideshare
Courses:
https://emeritus.org/management-certificate-programs/innovation-design-thinking/
https://www.coursera.org/learn/uva-darden-design-thinking-innovation