This document discusses using AWS CloudFormation and AWS CodePipeline to implement continuous delivery for infrastructure. It begins by explaining the need for infrastructure as code and for continuous delivery workflows for infrastructure changes. AWS CloudFormation lets you treat infrastructure as code by authoring templates and provisioning AWS resources from them; AWS CodePipeline can then automate building, testing, and deploying infrastructure changes as the code is updated. The document demonstrates decomposing a sample application into CloudFormation templates and setting up a CodePipeline to continuously deliver changes, with examples of modeling pipelines for network resources and application components separately, with dependencies between them.
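The decomposition idea can be sketched with two template fragments: a network stack that exports a value, and an application stack that imports it. The templates below are illustrative Python dicts in CloudFormation's JSON shape; the resource and export names are hypothetical, not taken from the document.

```python
import json

# Hypothetical network stack: owns the VPC and exports its ID for other stacks.
network_template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "AppVpc": {
            "Type": "AWS::EC2::VPC",
            "Properties": {"CidrBlock": "10.0.0.0/16"},
        }
    },
    "Outputs": {
        "VpcId": {
            "Value": {"Ref": "AppVpc"},
            "Export": {"Name": "network-VpcId"},  # cross-stack export
        }
    },
}

# Hypothetical application stack: imports the VPC ID instead of owning it,
# so the network and application lifecycles can be pipelined independently.
app_template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "AppSecurityGroup": {
            "Type": "AWS::EC2::SecurityGroup",
            "Properties": {
                "GroupDescription": "App tier",
                "VpcId": {"Fn::ImportValue": "network-VpcId"},
            },
        }
    },
}

# Both templates serialize to valid CloudFormation JSON.
print(json.dumps(network_template, indent=2)[:60])
```

The point of the split is that a pipeline can deploy the network stack rarely and the application stack on every commit, with the export/import pair expressing the dependency.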
Docker containers have become a key component of modern application design. Increasingly, developers are breaking their applications apart into smaller components and distributing them across a pool of compute resources.
This document discusses DevOps practices at Amazon, including:
1. Amazon uses DevOps practices like continuous integration, deployment, and automation to deploy code changes frequently and reliably, averaging one deployment every 11.6 seconds, with as many as 10,000 hosts receiving a deployment at once.
2. Adopting DevOps practices has led to a 75% reduction in outages from software deployments and a 90% reduction in outage minutes since 2006.
3. The document outlines DevOps tools and practices used at Amazon like AWS services for version control, continuous integration, deployment automation, and monitoring.
In this session we’ll take a high-level overview of AWS Lambda, a serverless compute platform that has changed the way that developers around the world build applications. We’ll explore how Lambda works under the hood, the capabilities it has, and how it is used. By the end of this talk you’ll know how to create Lambda based applications and deploy and manage them easily.
Speaker: Chris Munns - Principal Developer Advocate, AWS Serverless Applications, AWS
How can you accelerate the delivery of new, high-quality services? How can you be able to experiment and get feedback quickly from your customers? To get the most out of the agility afforded by serverless and containers, it is essential to build CI/CD pipelines that help teams iterate on code and quickly release features. In this talk, we demonstrate how developers can build effective CI/CD release workflows to manage their serverless or containerized deployments on AWS. We cover infrastructure-as-code (IaC) application models, such as AWS Serverless Application Model (AWS SAM) and new imperative IaC tools. We also demonstrate how to set up CI/CD release pipelines with AWS CodePipeline and AWS CodeBuild, and we show you how to automate safer deployments with AWS CodeDeploy.
Amazon EC2 Container Service is a new AWS service that makes it easy to run and manage Docker-enabled applications across a cluster of Amazon EC2 instances. Amazon EC2 Container Service lets you define, schedule, and stop sets of containers. You have access to the state of your resources, making it easy to confirm that tasks are running or view the utilization of Amazon EC2 instances in your cluster. This session will describe the benefits of containers, introduce the Amazon EC2 Container Service, and demonstrate how to use Amazon EC2 Container Service for your applications.
Speakers:
Ian Massingham, AWS Technical Evangelist and
Boyan Dimitrov, Platform Automation Lead, Hailo Cabs
This session is focused on diving into the AWS IAM policy categories to understand the differences, learn how the policy evaluation logic works, and go over some best practices. We will then walk through how to use permission boundaries to truly delegate administration in AWS.
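The core of the permission-boundary evaluation logic is that an entity's effective permissions are the intersection of its identity policies and its boundary. The sketch below models that with made-up action lists; the policy document and action names are hypothetical, and this is a simplified model, not the full IAM evaluation algorithm (which also considers explicit denies, SCPs, and resource policies).

```python
# Hypothetical permissions boundary: the maximum permissions an IAM entity
# may ever use, regardless of what its identity policies grant.
boundary = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject", "dynamodb:GetItem"],
        "Resource": "*",
    }],
}

# Actions granted by an identity policy that a delegated admin attached
# (also hypothetical) -- note it tries to grant iam:CreateUser.
identity_policy_actions = {"s3:GetObject", "iam:CreateUser", "dynamodb:GetItem"}

# Simplified model of the evaluation logic: an action is allowed only if
# BOTH the identity policy and the boundary allow it.
boundary_actions = set(boundary["Statement"][0]["Action"])
effective = identity_policy_actions & boundary_actions
print(sorted(effective))  # iam:CreateUser is filtered out by the boundary
```

This is why boundaries enable safe delegation: the delegated admin can attach any identity policy, but nothing outside the boundary ever becomes effective.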
AWS Control Tower is a new AWS service for cloud administrators to set up and govern their secure, compliant, multi-account environments on AWS.
In this session, University of York will discuss their implementation of AWS Landing Zone. We’ll also explain how AWS Control Tower automates AWS Landing Zone creation with best-practice blueprints.
(DVO315) Log, Monitor and Analyze your IT with Amazon CloudWatch - Amazon Web Services
You may already know that you can use Amazon CloudWatch to view graphs of your AWS resources like Amazon Elastic Compute Cloud instances or Amazon Simple Storage Service. But, did you know that you can monitor your on-premises servers with Amazon CloudWatch Logs? Or, that you can integrate CloudWatch Logs with Elasticsearch for powerful visualization and analysis? This session offers a tour of the latest monitoring and automation capabilities we've added and shows how you can get even more done with Amazon CloudWatch.
API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. It allows hosting multiple API versions and stages, generating SDKs, adding authentication, throttling requests, and caching responses to improve performance and reduce latency. API Gateway supports building and deploying REST and WebSocket APIs. Pricing is based on the number of API calls and amount of data transferred out. Optional dedicated caching tiers are also available.
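A common way to serve a REST API through API Gateway is a Lambda proxy integration, where API Gateway hands the HTTP request to a function as a JSON event and expects a `statusCode`/`headers`/`body` dict back. The handler below is a minimal sketch of that contract; the parameter names in the fake event follow the proxy event shape, but the function and values are invented for illustration.

```python
import json

def handler(event, context=None):
    """Minimal handler for an API Gateway Lambda proxy integration.

    API Gateway delivers the HTTP request as a JSON event and expects
    a dict with statusCode, headers, and a string body in return.
    """
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local invocation with a fake proxy event, as API Gateway would send it.
resp = handler({"queryStringParameters": {"name": "dev"}})
print(resp["body"])
```

Because the event and response are plain JSON, handlers like this can be unit-tested locally before being wired into a stage.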
Do you want to run your code without the cost and effort of provisioning and managing servers? Find out how in this deep dive session on AWS Lambda, which allows you to run code for virtually any type of application or back end service – all with zero administration. During the session, we’ll look at a number of key AWS Lambda features and benefits, including automated application scaling with high availability; pay-as-you-consume billing; and the ability to automatically trigger your code from other AWS services or from any web or mobile app.
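Triggering code from other AWS services works by Lambda invoking your handler with a service-specific event document. The sketch below parses an S3 object-created notification locally; the event follows the S3 notification record shape, while the bucket and key are made-up test values.

```python
def handler(event, context=None):
    """Sketch of a Lambda handler triggered by S3 object-created events.

    The event follows the S3 notification format: a list of Records,
    each naming the bucket and object key that triggered the invocation.
    """
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        processed.append(f"s3://{bucket}/{key}")
    return processed

# Local test with a minimal fake S3 event (bucket and key are invented).
fake_event = {"Records": [
    {"s3": {"bucket": {"name": "uploads"}, "object": {"key": "img/cat.png"}}}
]}
print(handler(fake_event))
```

The same pattern applies to other event sources (SNS, DynamoDB Streams, and so on): only the record shape changes, not the zero-administration execution model.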
AWS CodeCommit, CodeDeploy & CodePipeline - Julien SIMON
The document summarizes AWS Code services for automating the development lifecycle including CodeCommit for source control, CodePipeline for continuous delivery, and CodeDeploy for automated deployments. It describes how these services work together to enable microservices architectures and continuous delivery practices for deploying updates with no downtime. Examples are provided of how to set up a delivery pipeline using these AWS Code services to connect development tools and deploy changes from testing to production environments.
The document discusses Amazon EKS (Elastic Kubernetes Service), which allows users to run Kubernetes on AWS. It highlights that EKS manages the control plane for users and provides native integrations with other AWS services like load balancers, IAM, and container registry. The document also summarizes key capabilities like high availability of the Kubernetes masters, networking options, version upgrades, and how to provision Kubernetes nodes on EKS.
The document discusses Amazon Virtual Private Cloud (Amazon VPC), which allows users to define virtual networks within the AWS cloud. It describes benefits of using VPC such as security, IP address management, and network access control. It then covers VPC capabilities, architecture scenarios, configuration options for public/private subnets, security features like security groups and network ACLs, and additional topics such as dedicated hardware, VPC peering, and default VPC configuration.
This document summarizes CI/CD on AWS by Bhargav Amin. It introduces DevOps practices like continuous integration, continuous delivery, and continuous deployment. It explains how to design a CI/CD pipeline and create one on AWS using services like CodeCommit, CodeBuild, CodeDeploy, and CodePipeline. The document provides examples of integrating these services to automate building, testing, and deploying code changes. It also includes a link to a demo repository and discusses managing infrastructure with CI/CD by updating CloudFormation templates in a pipeline.
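The pipeline that wires these services together is itself just a declaration of ordered stages. The dict below loosely follows the structure a CodePipeline definition takes; the stage, repository, and project names are hypothetical, and no AWS call is made here.

```python
# Sketch of a three-stage CI/CD pipeline declaration (names are
# hypothetical; this only models the structure, it does not call AWS).
pipeline = {
    "name": "demo-pipeline",
    "stages": [
        {"name": "Source",
         "actions": [{"provider": "CodeCommit",
                      "configuration": {"RepositoryName": "demo-app",
                                        "BranchName": "main"}}]},
        {"name": "Build",
         "actions": [{"provider": "CodeBuild",
                      "configuration": {"ProjectName": "demo-build"}}]},
        {"name": "Deploy",
         "actions": [{"provider": "CodeDeploy",
                      "configuration": {"ApplicationName": "demo-app"}}]},
    ],
}

# Every code change flows through the stages in order.
print(" -> ".join(stage["name"] for stage in pipeline["stages"]))
```

Managing infrastructure with CI/CD then just means adding a stage whose action deploys a CloudFormation template instead of application code.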
This document summarizes Paul Maddox's presentation on Amazon EKS (Elastic Container Service for Kubernetes). It includes an agenda for the presentation, introduces Maddox and his background, and addresses some frequently asked questions about EKS. The presentation then provides an introduction to Kubernetes and EKS, describing how EKS manages the Kubernetes control plane and allows customers to run Kubernetes clusters on AWS, while also integrating AWS services. It highlights new features of EKS like Kubernetes certification and cross-account networking capabilities.
Improving Infrastructure Governance on AWS - AWS June 2016 Webinar Series - Amazon Web Services
As your teams and infrastructure grow, it becomes more difficult to track IT resource changes as well as identify who made changes and when. It also becomes harder to enforce standards for your infrastructure resources, resulting in configuration drift and potential security issues. On AWS, you can easily standardize infrastructure configurations for commonly used IT services while also enabling self-service provisioning for your company. Once these resources are provisioned, you can then track how these resources are connected and monitor configuration changes and drift. In this session, we will discuss how you can achieve a sophisticated level of standardization, configuration compliance, and monitoring using a combination of AWS Service Catalog, AWS Config, and AWS CloudTrail.
Learning Objectives:
Understand how to use AWS services to enable governance while providing self-service
Learn to codify your business policies to promote compliance
Learn how to improve security without sacrificing developer productivity
Getting Started With Continuous Delivery on AWS - AWS April 2016 Webinar Series - Amazon Web Services
Today’s cutting-edge companies have software release cycles measured in days instead of months. This agility is enabled by the DevOps practice of continuous delivery, which automates building, testing, and deploying code changes. This automation helps you catch bugs sooner and increases developer productivity.
In this webinar, we’ll share the processes that Amazon engineers use to practice DevOps and discuss how you can bring these processes to your company by using a new set of AWS tools (AWS CodePipeline and AWS CodeDeploy). These services were inspired by Amazon's own internal developer tools and DevOps culture.
Learning Objectives:
• Learn what is continuous delivery, its benefits, and how to implement it
• Learn how to increase the frequency and reliability of your application updates
• Learn to create an automated software release workflow on AWS
• Understand the basics of AWS CodePipeline and AWS CodeDeploy
AWS Landing Zone Deep Dive (ENT350-R2) - AWS re:Invent 2018 - Amazon Web Services
In this session, we discuss how to deploy a scalable environment that considers the AWS account structure, security services, network architecture, and user access. We present an overview of the AWS Landing Zone solution, an automated solution for setting up a robust and flexible AWS environment designed from the collective experience of AWS and our customers. The AWS Landing Zone helps automate the setup of a flexible account structure, security baseline, network structure, and user access based on best practices. Future growth is facilitated by an account vending machine component that simplifies the creation of additional accounts. Learn how the AWS Landing Zone can ensure that you start your AWS journey with the right foundation. We encourage you to attend the full AWS Landing Zone track, including SEC303. Search for #awslandingzone in the session catalog.
AWS Elastic Beanstalk is a service that allows developers to quickly deploy and manage applications in the AWS cloud without worrying about the underlying infrastructure. It provides an easy way to launch applications developed in Java or other languages and have them automatically scaled across Amazon EC2 instances. Key features include automated provisioning and deployment, easy management of settings, built-in monitoring, and troubleshooting tools. Developers retain full control over their AWS resources while taking advantage of Elastic Beanstalk's management capabilities.
This document discusses infrastructure as code using the AWS Cloud Development Kit (CDK). It begins by describing manual infrastructure creation and then imperative and declarative infrastructure as code approaches. It introduces the CDK, which allows defining infrastructure in familiar programming languages like JavaScript. With the CDK, constructs can be used to provision many underlying AWS resources with a single class, making infrastructure definition more abstract and code-like.
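The "construct" abstraction can be illustrated with a toy model: a single high-level class that synthesizes several low-level resource definitions. To be clear, this is NOT the real CDK API (the real library is `aws-cdk-lib` with its own construct classes); it is a self-contained sketch of the idea that one construct expands into many underlying resources.

```python
# Toy model of the CDK construct idea (NOT the real CDK API): one
# high-level class synthesizes several CloudFormation-style resources.
class WebServiceConstruct:
    """Hypothetical construct bundling a load-balanced container service."""

    def __init__(self, name: str):
        self.name = name

    def synthesize(self) -> dict:
        # A single construct expands into multiple low-level resources,
        # the way a real CDK construct synthesizes a template.
        return {
            f"{self.name}LoadBalancer":
                {"Type": "AWS::ElasticLoadBalancingV2::LoadBalancer"},
            f"{self.name}Service": {"Type": "AWS::ECS::Service"},
            f"{self.name}TaskDef": {"Type": "AWS::ECS::TaskDefinition"},
        }

# One line of "application code" yields several provisioned resources.
resources = WebServiceConstruct("Shop").synthesize()
print(sorted(resources))
```

This is the sense in which the CDK makes infrastructure definition "more abstract and code-like": the developer composes classes, and the framework expands them into declarative resource graphs.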
AWS re:Invent 2016: Infrastructure Continuous Delivery Using AWS CloudFormati... - Amazon Web Services
In this session, we will review ways to manage the lifecycle of your dev, test, and production infrastructure using CloudFormation. Learn how to architect your infrastructure through loosely coupled stacks using cross-stack references, tightly coupled nested stacks and other best practices. Learn how to use CloudFormation to provision and manage a continuous deployment pipeline for your infrastructure-as-code. Automate deployment of new development environments as your infrastructure evolves, promote your new architecture for testing, and deploy changes to production.
AWS re:Invent 2016: Chalk Talk: Succeeding at Infrastructure-as-Code (GPSCT312) - Amazon Web Services
- Infrastructure as code is the practice of provisioning and managing infrastructure using code and software development techniques like version control. This allows infrastructure changes to be tested and deployed in a consistent, repeatable way.
- AWS services like CloudFormation, OpsWorks, and CodeDeploy allow defining infrastructure as code templates and automating the deployment of applications and infrastructure changes across environments like development, testing, and production.
- CloudFormation templates define AWS resources and their dependencies and can be used to create matching environments in different stages. OpsWorks and CodeDeploy help manage application deployments and ongoing configuration of running systems.
The document discusses AWS services for continuous integration, delivery, and deployment based on AWS. It describes how CodeCommit can be used for source code management, CodePipeline for continuous delivery, and CodeDeploy for continuous deployment. It also discusses how Elastic Beanstalk can be used to deploy and manage applications on AWS.
DevOps integrates development and operations teams to improve collaboration and productivity by automating infrastructure and workflows and continuously measuring application performance. The goal is to automate everything, from code testing to workflows and infrastructure, in order to deploy small chunks of code frequently to testing and production using the same infrastructure. AWS provides the platform and tools such as CodePipeline, CodeCommit, CodeBuild, and CodeDeploy to automate deployments from development to production.
AWS re:Invent 2016: How to Launch a 100K-User Corporate Back Office with Micr... - Amazon Web Services
Learn how to build a scalable, compliance-ready, and automated deployment of the Microsoft “backoffice” servers for 100K users running on AWS. In this session, we show a reference architecture deployment of Exchange, SharePoint, Skype for Business, SQL Server and Active Directory in a single VPC. We discuss the following: (1) how the solution is automated for 100K users, (2) how the solution is enabled for compliance (e.g., FedRAMP, HIPAA, PCI), and (3) how the solution is built from modular 10K user blocks. Attendees should have knowledge of AWS CloudFormation, PowerShell, instance bootstrapping, VPCs, and Amazon Route 53, as well as the relevant Microsoft technologies.
Deploy, Manage, and Scale your Apps with AWS Elastic Beanstalk - Amazon Web Services
AWS Elastic Beanstalk is the fastest and simplest way to deploy your application on AWS. It is ideal for developers that are new to the platform but is also used by large organizations that want to manage and scale production workloads with minimum operational overhead. This session shows you how to deploy your code to AWS Elastic Beanstalk, easily manage multiple environments (e.g. Test & Production) and perform zero-downtime deployments through interactive demos and code samples.
Day 3 - DevOps Culture - Continuous Integration & Continuous Deployment on th... - Amazon Web Services
This document discusses continuous integration (CI) and continuous deployment (CD) workflows on AWS. It provides examples of CI/CD pipelines and tools. It also demonstrates how to automate infrastructure deployment and management using AWS services like CloudFormation, containerization with Docker, and extending CI/CD tools to interact with AWS APIs. The document concludes with a discussion on how to implement best practices for innovation, quality and governance in CI/CD processes.
This document provides an overview of serverless applications and how to build one. It discusses what serverless means, common use cases, how to bundle and deploy code, continuous integration and delivery, versioning, monitoring, and more. Specific AWS services for building serverless applications are also covered, including AWS Lambda, API Gateway, DynamoDB, S3, CloudFormation, CodeBuild, CodePipeline, X-Ray and CloudWatch.
WKS401 Deploy a Deep Learning Framework on Amazon ECS and EC2 Spot Instances - Amazon Web Services
Deep learning is an implementation of machine learning that uses neural networks to solve difficult and complex problems, such as computer vision, natural language processing, and recommendations. Due to the availability of deep learning libraries and frameworks, developers have the ability to enhance the capabilities of their applications and projects.
In this workshop, you learn how to build and deploy a powerful deep learning framework called MXNet on containers. The portability and resource management benefit of containers means developers can focus less on infrastructure and more on building. The labs start by demonstrating the automation capabilities of AWS CloudFormation to stand up core infrastructure; as an added bonus, you use Spot Fleet to leverage the cost benefits of using Spot Instances, especially for developer environments. Then, you walk through creating an MXNet container in Docker and deploying it with Amazon ECS. Finally, you walk through an image classification demo of MXNet to validate that everything is working as expected.
Pre-reqs: Laptop and AWS account
This document provides an introduction to network infrastructure concepts, database concepts, and web application development using cloud services. It begins with an overview of Amazon Web Services (AWS) including Amazon EC2, S3, SimpleDB, Elastic Block Store, Glacier, Elastic Beanstalk, ElastiCache, IAM, Route 53, Elastic Load Balancing, CloudFormation, CloudWatch, DynamoDB, RDS, SQS, and Redshift. It then covers virtual private clouds and various VPC configurations, including public/private subnets, hardware VPN access, private subnets with VPN access, and disaster recovery. It also discusses the use of Chef, Puppet, Git, GitHub, JSON, and CloudFormation templates for infrastructure automation.
Continuous Integration e Delivery per (r)innovare lo sviluppo software e la g... - Amazon Web Services
This document discusses continuous integration and delivery practices using AWS services like CodeCommit, CodeBuild, CodeDeploy, and CodePipeline. It summarizes how these services can be used together in a software development pipeline to automate building, testing, and deploying code changes. It also discusses how infrastructure as code with CloudFormation templates allows infrastructure to be provisioned and managed like code. The document provides an example of how a company implemented continuous integration of their infrastructure stacks using CloudFormation across different environments.
Introduction to DevOps on AWS: a basic introduction to DevOps principles and practices and how they can be implemented on AWS. Also introduces basic CloudFormation.
This document section covers deploying and managing Azure compute resources. It discusses options for high availability like availability zones, virtual machine scale sets, and availability sets. It also covers automating deployment through infrastructure as code using ARM templates, container and web app deployment, and networking options like load balancing and virtual network peering.
The document provides an overview of application lifecycle management (ALM) in a serverless world. It discusses key concepts like continuous integration/delivery and testing practices for serverless applications. Serverless architectures using AWS Lambda and API Gateway are highlighted, along with how to manage deployments, configurations, and monitor applications.
This mid-level technical session will help you choose among the AWS services that can help you deploy and run your applications more easily. You will learn how to get an application running using AWS OpsWorks and AWS Elastic Beanstalk and how to use AWS CloudFormation templates to document, version control, and share your application configuration
AWS Architecting Cloud Apps - Best Practices and Design Patterns By Jinesh Varia - Amazon Web Services
Jinesh Varia, Technology Evangelist, discusses AWS architecture best practices and design patterns at the AWS Enterprise Tour - SF - 2010.
http://jineshvaria.s3.amazonaws.com/public/cloudbestpractices-jvaria.pdf
The document discusses architectural patterns and best practices for building scalable and resilient applications on Amazon Web Services (AWS). It provides examples of how to design for failure, implement loose coupling between components, and build elasticity into applications using AWS services like Auto Scaling, Elastic Load Balancing, and Amazon EC2. The document also outlines three approaches for creating standardized technology stacks and managed development environments on AWS.
Continuous Integration and Deployment Best Practices on AWS - Amazon Web Services
With AWS, organizations now have the ability to develop and run their applications with speed and flexibility like never before. Working with an infrastructure that can be 100% API-driven enables organizations to use lean methodologies and realize these benefits. In this session, we will explore some key concepts and design patterns for continuous deployment and continuous integration, two elements of lean application and infrastructure development. We will look at several use cases where IT organizations leveraged AWS to rapidly develop and iterate on applications for scale, high availability and cost optimization.
Speaker: Adrian White, Solutions Architect, Amazon Web Services
DevOps with Elastic Beanstalk - TCCC-2014 - scolestock
This document discusses using AWS Elastic Beanstalk for deploying applications. It describes Elastic Beanstalk as a platform as a service that handles provisioning infrastructure and managing application deployments. It covers how to deploy application versions through the AWS console, command line, IDE plugins, or a CI/CD tool like Jenkins. It also discusses how Elastic Beanstalk uses applications, environments, and versions to model deployments and provides configuration, monitoring, logging and scaling capabilities.
Delivering High-Availability Web Services with NGINX Plus on AWS - NGINX, Inc.
Over 1/3 of websites running on Amazon Web Services (AWS) are delivered and accelerated using NGINX. In this webinar, NGINX and Amazon explain how to get started with NGINX Plus on AWS and how to further increase the performance and availability of large, dynamic, cloud-based applications by integrating with critical AWS services.
Come costruire servizi di Forecasting sfruttando algoritmi di ML e deep learn... - Amazon Web Services
Forecasting is an important process for many companies and is used in many areas to accurately predict the growth and distribution of a product, the resources needed on production lines, financial projections, and much more. Amazon uses advanced forecasting techniques, and some of these services have been made available to all AWS customers.
In this session we will show how to pre-process data that contains a time component and then use an algorithm that, based on the type of data analyzed, produces an accurate forecast.
Big Data per le Startup: come creare applicazioni Big Data in modalità Server... - Amazon Web Services
The variety and quantity of data created every day is growing ever faster and represents a unique opportunity to innovate and create new startups.
However, managing large amounts of data can seem complex: building large-scale Big Data clusters looks like an investment accessible only to established companies. But the elasticity of the cloud and, in particular, serverless services allow us to break through these limits.
We will see how it is possible to develop Big Data applications quickly, without worrying about the infrastructure, dedicating all our resources to developing our ideas and creating innovative products.
You can now use Amazon Elastic Kubernetes Service (EKS) to run Kubernetes pods on AWS Fargate, the serverless compute engine built for containers on AWS. This makes it easier than ever to build and run your Kubernetes applications in the AWS cloud. In this session we present the main features of the service and show how to deploy your application in a few steps.
Twenty years ago, Amazon went through a radical transformation aimed at increasing the pace of innovation. During this period we learned how changing our approach to application development allowed us to significantly increase agility and release speed and, ultimately, enabled us to build more reliable and scalable applications. In this session we explain how we define modern applications and how building modern apps affects not only application architecture, but also organizational structure, development release pipelines, and even the operating model. We also describe common approaches to modernization, including the approach used by Amazon.com itself.
How to spend up to 90% less with containers and Spot Instances - Amazon Web Services
The use of containers continues to grow.
When properly designed, container-based applications are very often stateless and flexible.
The AWS services ECS, EKS, and Kubernetes on EC2 can take advantage of Spot Instances, leading to average savings of 70% compared to On-Demand Instances. In this session we will look at the characteristics of Spot Instances and how they can easily be used on AWS. We will also learn how Spreaker uses Spot Instances to run applications of various kinds, in production, at a fraction of the on-demand cost!
In recent months, many customers have been asking us how to monetise Open APIs, simplify Fintech integrations, and accelerate adoption of various Open Banking business models. Therefore, AWS and FinConecta would like to invite you to the Open Finance marketplace presentation on October 20th.
Event Agenda:
Open banking so far (short recap)
• PSD2, OB UK, OB Australia, OB LATAM, OB Israel
Intro to Open Finance marketplace
• Scope
• Features
• Tech overview and Demo
The role of the Cloud
The Future of APIs
• Complying with regulation
• Monetizing data / APIs
• Business models
• Time to market
One platform for all: a Strategic approach
Q&A
Rendi unica l’offerta della tua startup sul mercato con i servizi Machine Lea... - Amazon Web Services
To create value and build a differentiated, recognizable offering, successful startups know how to combine established technologies with innovative, purpose-built components.
AWS provides ready-to-use services and, at the same time, lets you customize and build the differentiating elements of your own offering.
Focusing on Machine Learning technologies, we will see how to select the artificial intelligence services offered by AWS and, also through a demo, how to build custom Machine Learning models using SageMaker Studio.
OpsWorks Configuration Management: automatizza la gestione e i deployment del... - Amazon Web Services
With the traditional approach to IT, for many years it was difficult to implement DevOps techniques, which until now have often involved manual activities, occasionally leading to application downtime and interrupting user operations. With the advent of the cloud, DevOps techniques are now within everyone's reach, at low cost, for any kind of workload, guaranteeing greater system reliability and resulting in significant business continuity improvements.
AWS provides AWS OpsWorks as a Configuration Management tool that aims to automate and simplify the management and deployment of EC2 instances by means of Chef and Puppet workloads.
Discover how to use AWS OpsWorks to guarantee the reliability of your application running on EC2 instances.
Microsoft Active Directory on AWS to support your Windows Workloads - Amazon Web Services
Do you want to learn about the options for running Microsoft Active Directory on AWS? When moving Microsoft workloads to AWS, it is important to consider how to deploy Microsoft Active Directory to support group policy management, authentication, and authorization. In this session, we discuss options for deploying Microsoft Active Directory on AWS, including AWS Directory Service for Microsoft Active Directory and deploying Active Directory on Windows on Amazon Elastic Compute Cloud (Amazon EC2). We cover topics such as integrating your on-premises Microsoft Active Directory environment into the cloud and using SaaS applications, such as Office 365, with AWS Single Sign-On.
From facial recognition to detecting fraud or manufacturing defects, image and video analysis based on artificial intelligence techniques is evolving and being refined at a rapid pace. In this webinar we explore the possibilities offered by AWS services for applying state-of-the-art computer vision techniques to real-world scenarios.
Amazon Web Services and VMware are organizing a free virtual event next Wednesday, October 14th, from 12:00 to 13:00, dedicated to VMware Cloud™ on AWS, the on-demand service that lets you run applications in cloud environments based on VMware vSphere® and access a wide range of AWS services, taking full advantage of the AWS cloud while protecting existing VMware investments.
Many organizations take advantage of the cloud by migrating their Oracle workloads, securing significant gains in agility and cost efficiency.
Migrating these workloads can create complexity during application modernization and refactoring, with the added performance risks that can be introduced when moving applications out of on-premises data centers.
Build your first serverless ledger-based app with QLDB and NodeJS - Amazon Web Services
Many companies today build applications with ledger functionality, for example to verify the history of credits and debits in banking transactions, or to track the flow of their products through the supply chain.
At the heart of these solutions are ledger databases, which provide a transparent, immutable, and cryptographically verifiable transaction log, but they are complex and costly tools to manage.
Amazon QLDB eliminates the need to build complex custom systems by providing a fully managed, serverless ledger database.
In this session we will see how to build a complete serverless application that uses the capabilities of QLDB.
Con l’ascesa delle architetture di microservizi e delle ricche applicazioni mobili e Web, le API sono più importanti che mai per offrire agli utenti finali una user experience eccezionale. In questa sessione impareremo come affrontare le moderne sfide di progettazione delle API con GraphQL, un linguaggio di query API open source utilizzato da Facebook, Amazon e altro e come utilizzare AWS AppSync, un servizio GraphQL serverless gestito su AWS. Approfondiremo diversi scenari, comprendendo come AppSync può aiutare a risolvere questi casi d’uso creando API moderne con funzionalità di aggiornamento dati in tempo reale e offline.
Inoltre, impareremo come Sky Italia utilizza AWS AppSync per fornire aggiornamenti sportivi in tempo reale agli utenti del proprio portale web.
Database Oracle e VMware Cloud™ on AWS: i miti da sfatareAmazon Web Services
Molte organizzazioni sfruttano i vantaggi del cloud migrando i propri carichi di lavoro Oracle e assicurandosi notevoli vantaggi in termini di agilità ed efficienza dei costi.
La migrazione di questi carichi di lavoro, può creare complessità durante la modernizzazione e il refactoring delle applicazioni e a questo si possono aggiungere rischi di prestazione che possono essere introdotti quando si spostano le applicazioni dai data center locali.
In queste slide, gli esperti AWS e VMware presentano semplici e pratici accorgimenti per facilitare e semplificare la migrazione dei carichi di lavoro Oracle accelerando la trasformazione verso il cloud, approfondiranno l’architettura e dimostreranno come sfruttare a pieno le potenzialità di VMware Cloud ™ on AWS.
1) The document discusses building a minimum viable product (MVP) using Amazon Web Services (AWS).
2) It provides an example of an MVP for an omni-channel messenger platform that was built from 2017 to connect ecommerce stores to customers via web chat, Facebook Messenger, WhatsApp, and other channels.
3) The founder discusses how they started with an MVP in 2017 with 200 ecommerce stores in Hong Kong and Taiwan, and have since expanded to over 5000 clients across Southeast Asia using AWS for scaling.
This document discusses pitch decks and fundraising materials. It explains that venture capitalists will typically spend only 3 minutes and 44 seconds reviewing a pitch deck. Therefore, the deck needs to tell a compelling story to grab their attention. It also provides tips on tailoring different types of decks for different purposes, such as creating a concise 1-2 page teaser, a presentation deck for pitching in-person, and a more detailed read-only or fundraising deck. The document stresses the importance of including key information like the problem, solution, product, traction, market size, plans, team, and ask.
This document discusses building serverless web applications using AWS services like API Gateway, Lambda, DynamoDB, S3 and Amplify. It provides an overview of each service and how they can work together to create a scalable, secure and cost-effective serverless application stack without having to manage servers or infrastructure. Key services covered include API Gateway for hosting APIs, Lambda for backend logic, DynamoDB for database needs, S3 for static content, and Amplify for frontend hosting and continuous deployment.
This document provides tips for fundraising from startup founders Roland Yau and Sze Lok Chan. It discusses generating competition to create urgency for investors, fundraising in parallel rather than sequentially, having a clear fundraising narrative focused on what you do and why it's compelling, and prioritizing relationships with people over firms. It also notes how the pandemic has changed fundraising, with examples of deals done virtually during this time. The tips emphasize being fully prepared before fundraising and cultivating connections with investors in advance.
AWS_HK_StartupDay_Building Interactive websites while automating for efficien...Amazon Web Services
This document discusses Amazon's machine learning services for building conversational interfaces and extracting insights from unstructured text and audio. It describes Amazon Lex for creating chatbots, Amazon Comprehend for natural language processing tasks like entity extraction and sentiment analysis, and how they can be used together for applications like intelligent call centers and content analysis. Pre-trained APIs simplify adding machine learning to apps without requiring ML expertise.
Amazon Elastic Container Service (Amazon ECS) è un servizio di gestione dei container altamente scalabile, che semplifica la gestione dei contenitori Docker attraverso un layer di orchestrazione per il controllo del deployment e del relativo lifecycle. In questa sessione presenteremo le principali caratteristiche del servizio, le architetture di riferimento per i differenti carichi di lavoro e i semplici passi necessari per poter velocemente migrare uno o più dei tuo container.
Introduction to dosage forms and routes of drug administrationDefinition, the need for dosage forms, classification, overview of dosage form design
❖ Introduction to pharmaceutical ingredients (definition, importance)
❖ Routes of administration
Destyney Duhon personal brand explorationminxxmaree
Destyney Duhon embodies a singular blend of creativity, resilience, and purpose that defines modern entrepreneurial spirit. As a visionary at the intersection of artistry and innovation, Destyney fearlessly navigates uncharted waters, sculpting her journey with a profound commitment to authenticity and impact.This Brand exploration power point is a great example of her dedication to her craft.
2. Aboot Me
Hubert Cheung hubertc@amazon.com
Solutions Architect
Canuck
@ AWS 4.5 Years
- AWS Support
- AWS Solutions Architecture
3. What to expect from this session
We’ll show you how to:
• Architect your infrastructure using AWS CloudFormation
• Use AWS CloudFormation to set up AWS CodePipeline pipelines
• Continuously deliver changes to stacks as you make changes to your templates
• Demo
4. Let’s look at release processes
https://www.flickr.com/photos/jurvetson/5201796697/
5. Release processes have four major phases
Source
• Check-in source code such as .java files
• Peer review new code
Build
• Compile code
• Unit tests
• Style checkers
• Code metrics
• Create container images
Test
• Integration tests with other systems
• Load testing
• UI tests
• Penetration testing
Production
• Deployment to production environments
9. What do we need for infrastructure continuous delivery?
• A way to treat infrastructure as code.
• Tools to manage the workflow that creates and updates infrastructure resources.
• Tools to properly test and inspect your changes for defects and potential issues.
10. What do we need for infrastructure continuous delivery?
Infrastructure as code: a practice in which infrastructure is provisioned and managed using code and software development techniques, such as version control and continuous integration.
Workflow: build, test, and deploy your code every time there is a code change, based on the release process models you define, enabling you to rapidly and reliably deliver changes.
12. AWS CloudFormation
• Create templates of your infrastructure
• Version control / code review / update templates like code
• CloudFormation provisions AWS resources based on dependency needs
• Integrates with development, CI/CD, and management tools
• No additional charge to use
13. Key new features
• Author templates in JSON or YAML
• Use change sets to preview your changes
• Continuous delivery workflows for stacks
• Support for the AWS Serverless Application Model
• Enable cross-stack references with exports
16. CloudFormation Change Sets
Preview the set of actions that CloudFormation will take on your behalf before you create or update stacks. Change sets show you which resources will be created, updated, or replaced, ensuring that only expected operations are executed.
17. Cross Stack References (Exports)
Network Stack:
  Outputs:
    VPC:
      Description: reference VPC
      Value: !Ref VPC
      Export:
        Name: ProdVPC
App Stack:
  Resources:
    myTargetGroup:
      Type: AWS::ElasticLoadBalancingV2::TargetGroup
      Properties:
        VpcId:
          Fn::ImportValue: ProdVPC
• Allows you to share information between independent stacks.
• Export a stack’s output values; other stacks in the same account and region can import the exported values.
19. Considerations for Exports and Nested Stacks
Nested Stacks
• Recommended use cases: template reuse; use multiple templates but manage as a single stack
• Advantages: convenient management (one stack manages all resources and nested stacks); creation order and dependencies are managed for you
• Considerations: updates and rollbacks have a wide surface area; reusing templates that have custom resource names
Cross Stack References
• Recommended use cases: sharing common resources; allows for independent stacks based on resource lifecycle or ownership; separation of concern; share databases and VPCs
• Advantages: lets you limit blast radius with safeguards
• Considerations: replacing updates requires changes to the importing stacks to execute; does not manage creation order
21. Let’s examine a sample application
• Deconstruct the application into the necessary AWS resources
• Create CloudFormation templates based on your management needs
• Model your continuous delivery pipeline
• Continuously deliver infrastructure changes as you iterate on your architecture
• Use CloudFormation to model, provision, and manage changes to your pipeline
22. Microservices application based on Amazon ECS
Two interconnecting microservices deployed as ECS services (website-service and product-service).
The application runs on a highly available ECS cluster deployed across multiple Availability Zones with auto scaling.
Available at github.com/awslabs/ecs-refarch-cloudformation
23. Reference architecture
Diagram: a VPC spanning two Availability Zones, each with a public and a private subnet. An Internet Gateway and an Application Load Balancer serve the public subnets, with a NAT Gateway per zone for outbound traffic. The private subnets host the ECS cluster: an Auto Scaling group of ECS hosts, with container logs sent to CloudWatch Logs.
github.com/awslabs/ecs-refarch-cloudformation
24. Decompose into AWS resource types
Network: VPC, Internet Gateway, public route table with a default public route, and, per Availability Zone, a public subnet, a private subnet, a NAT Gateway with an Elastic IP, and a private route table with a default private route
Security: load balancer security group, ECS host security group
Load balancing: Application Load Balancer, load balancer listener, load balancer default target group
ECS cluster: ECS cluster, Auto Scaling group, Auto Scaling launch configuration, ECS (IAM) role, IAM instance profile
Front end service: ECS service, ECS task definition, CloudWatch log group, target group, listener rule, service role
Back end service: ECS service, ECS task definition, CloudWatch log group, target group, listener rule, service role
25. Build CloudFormation templates based on this logical grouping
• Network: VPC, Availability Zones, subnets, routing, NAT and internet gateways
• Security groups: security groups for the application
• Load balancers: ALBs that are deployed to the public subnets
• ECS cluster: ECS cluster deployed to private subnets
• Back end service: ECS service and task definition for the back end app
• Front end service: ECS service and task definition for the webpage
26. Set up your templates to flow configuration to each other
Diagram: outputs flow from one template to the next.
• Network template outputs: VPC, public subnets, private subnets
• Security template outputs: load balancer security group, ECS host security group
• Load balancing template outputs: load balancer listener, load balancer DNS name
• ECS cluster template outputs: ECS cluster
These outputs feed the downstream templates, ending with the front end and back end service templates.
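As a sketch, the network template's Outputs section might export the values the downstream templates import. The export names and subnet logical IDs here are assumptions, not taken from the reference architecture:

```yaml
# Illustrative outputs for the network template; export names are assumptions.
Outputs:
  VPC:
    Description: The created VPC
    Value: !Ref VPC
    Export:
      Name: prod-VPC
  PublicSubnets:
    Description: Comma-separated list of public subnets
    Value: !Join [",", [!Ref PublicSubnet1, !Ref PublicSubnet2]]
    Export:
      Name: prod-PublicSubnets
```

Downstream templates would then consume these values with Fn::ImportValue, as shown on the cross-stack references slide.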
27. Use these templates to build your stacks
With nested stacks: a parent template references the Network, Security, Load Balancing, ECS Cluster, Front End, and Back End templates as nested templates, producing a single microservices stack.
With cross-stack references: each template (Network, Security, Load Balancing, ECS Cluster, Front End, Back End) is provisioned as its own independent stack.
29. Applying continuous delivery for your infrastructure
AWS CodePipeline
• Continuous delivery service for fast and reliable application and infrastructure updates
• Builds, tests, and deploys your code each time there is a code change
• Built-in actions for AWS CloudFormation
30. How does this align with release phases?
• Source: the source stage for CloudFormation templates can be AWS CodeCommit, S3, or GitHub
• Test: use CloudFormation change sets to verify deployments prior to execution
• Deploy: create, update, or delete stacks or change sets
31. Model your pipelines
• Iterate more often on your application and infrastructure code
• Launch new versions in dev and promote to prod
• Manage your network resources separately, on their own cadence
• Maintain separate, mirrored sandbox and production network environments
Network resources pipeline: Sandbox (VPC, security groups, load balancing), then Production (VPC, security groups, load balancing)
Application pipeline: Dev (ECS cluster, application front and back ends), then Production (ECS cluster, application front and back ends)
32. Create and manage your pipeline using CloudFormation
A CloudFormation template to set up your pipeline defines:
• Pipeline artifact store: S3 bucket
• Pipeline notifications: SNS email notifications
• Pipeline IAM roles (these could be provisioned in a separate stack with IAM resources, connected with cross-stack refs)
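A minimal sketch of what those supporting resources might look like in the pipeline template. Resource names and the email endpoint are placeholders:

```yaml
# Supporting resources for the pipeline stack; names are illustrative.
Resources:
  ArtifactStoreBucket:        # pipeline artifact store
    Type: AWS::S3::Bucket
  PipelineNotificationTopic:  # pipeline notifications
    Type: AWS::SNS::Topic
    Properties:
      Subscription:
        - Endpoint: team@example.com
          Protocol: email
```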
33. Create and manage your pipeline using CloudFormation
• Choose the ‘deploy’ action with CloudFormation as the provider; the action config includes the name of your CloudFormation template
• CloudFormation has enabled several action modes; REPLACE_ON_FAILURE creates a new stack if one doesn’t exist, updates it if it does, or replaces it if it’s in a failed state
• You can use template configuration files or specify parameter overrides within the template that defines your pipeline
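As a hedged sketch, a CloudFormation deploy action inside a CodePipeline stage could look like the following. The stack name, change set name, and artifact names are assumptions:

```yaml
# One stage of an AWS::CodePipeline::Pipeline resource; names are illustrative.
- Name: Deploy
  Actions:
    - Name: CreateChangeSet
      ActionTypeId:
        Category: Deploy
        Owner: AWS
        Provider: CloudFormation
        Version: '1'
      Configuration:
        ActionMode: CHANGE_SET_REPLACE
        StackName: network-stack
        ChangeSetName: network-stack-changes
        TemplatePath: SourceOutput::network.yaml
        RoleArn: !GetAtt CloudFormationRole.Arn
      InputArtifacts:
        - Name: SourceOutput
      RunOrder: 1
```

A second action with ActionMode CHANGE_SET_EXECUTE would then apply the change set, typically after a manual approval action.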
34. Pipeline for network resources
1. Source repo
2. Networking resources for sandbox/dev environments (individual stacks, ordered to account for dependencies)
3. Change sets to preview changes to prod
4. Manual approval before your changes are applied to prod
5. Apply changes to prod
35. Pipeline for your application
1. Pipeline triggered as soon as new versions are posted
2. Run your tests and clean up your dev environment when done, so you aren’t charged for the instances you don’t use
3. Review to ensure resource modification or replacement is what you expect
4. Continuously deliver changes to prod
38. FIN, ACK
We’ve seen how to compose and continuously deliver your infrastructure as code within a software release process:
• Different ways to decompose your infrastructure into templates and stacks
• Create and provision a continuous delivery pipeline for your infrastructure
• Deliver changes to your environments with speed and quality
39. But wait, there’s more!
re:Invent 2016 sessions on Continuous Delivery:
• DEV201 - DevOps on AWS: Accelerating Software Delivery with the AWS Developer Tools
• CON302 - Development Workflow with Docker and Amazon ECS
• DEV403 - DevOps on AWS: Advanced Continuous Delivery Techniques
Resources to learn more:
• Continuous delivery: https://aws.amazon.com/devops/continuous-delivery/
• Continuous delivery for CloudFormation stacks: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/continuous-delivery-codepipeline.html
• CodePipeline: https://aws.amazon.com/documentation/codepipeline/
And together we will take the next hour to focus more on “how” and less on “what”.
We will show you how to architect your infrastructure using AWS CloudFormation. Then, we will see how to set up a continuous delivery workflow using CloudFormation and AWS CodePipeline, and finally we will see how to use this workflow to update a CloudFormation stack and continuously deliver changes to this stack. Also, we will follow up with a demo.
But before doing all this, we will take a look at software release processes.
If you have any questions, we are happy to answer them after our session. After the session, Dominic and I will hang around a bit off stage. So, please come and find us. You can also visit our AWS management tools booth if you want to interact with us further.
https://www.flickr.com/photos/jurvetson/5201796697/
- Let’s take a step back and see all the phases involved in the software release process.
I want to take a moment to talk about different release processes.
Each team’s release process takes a different shape to accommodate the needs of each team.
Nearly all release processes can be simplified down to four stages – source, build, test and production. Each phase of the process provides increased confidence that the code being made available to customers will work in the way that was intended.
During the source phase, developers check changes into a source code repository. Many teams require peer feedback on code changes before shipping code into production. Some teams use code reviews to provide peer feedback on the quality of code change. Others use pair programming as a way to provide real time peer feedback.
During the Build phase an application’s source code is built and the quality of the code is tested on the build machine. The most common type of quality check are automated tests that do not require a server in order to execute and can be initiated from a test harness. Some teams extend their quality tests to include code metrics and style checks. There is an opportunity for automation any time a human is needed to make a decision on the code.
The goal of the test phase is to perform tests that cannot be done during the build phase and require the software to be deployed to production-like stages. Often these tests include testing integration with other live systems, load testing, UI testing and penetration testing. At Amazon we have many different pre-production stages we deploy to. A common pattern is for engineers to deploy builds to a personal development stage where an engineer can poke and prod their software running in a mini prod-like stage to check that their automated tests are working correctly. Teams deploy to pre-production stages where their application interacts with other systems to ensure that the newly changed software works in an integrated environment.
Finally code gets deployed to production. Different teams have different deployment strategies though we all share a goal of reducing risk when deploying new changes and minimizing the impact if a bad change does get out to production.
Each of these steps can be automated without the entire release process being automated. There are several levels of release automation that I’ll step through.
Continuous Integration
Continuous Integration is the practice of checking your code in continuously and verifying each change with an automated build and test process. Over the past 10 years Continuous Integration has gained popularity in the software community. In the past developers were working in isolation for an extended period of time and only attempting to merge their changes into the mainline of their code once their feature was completed. Batching up changes to merge back into the mainline made not only merging the business logic hard, but it also made merging the test logic difficult. Continuous Integration practices have made teams more productive and allowed them to develop new features faster. Continuous Integration requires teams to write automated tests which, as we learned, improve the quality of the software being released and reduce the time it takes to validate that the new version of the software is good.
There are different definitions of Continuous Integration, but the one we hear from our customers is that CI stops at the build stage, so I’m going to use that definition.
Continuous Delivery
Continuous Delivery extends Continuous Integration to include testing out to production-like stages and running verification testing against those deployments. Continuous Delivery may extend all the way to a production deployment, but they have some form of manual intervention between a code check-in and when that code is available for customers to use.
Continuous Delivery is a big step forward over Continuous Integration, allowing teams to gain a greater level of certainty that their software will work in production.
Continuous Deployment
Continuous Deployment extends continuous delivery and is the automated release of software to customers from check-in through to production without human intervention. Many of the teams at Amazon have reached a state of continuous deployment. Continuous Deployment reduces the time for your customers to get value from the code your team has just written, with the team getting faster feedback on the changes you’ve made. This fast customer feedback loop allows you to iterate quickly, letting you deliver more valuable software to your customers, sooner.
So what about tools that can help us implement continuous delivery for creating and updating infrastructure?
Let’s start at a high level and then we will filter down to specific AWS Services that you can make use of to implement continuous delivery for creating and updating infrastructure on AWS.
The first thing is that you have to start treating Infrastructure as code.
Then you need a tool or a service to manage the workflow that binds all the different phases that I talked about earlier.
Lastly, you should be in a position to test or preview your changes for any potential issues.
In a few minutes Dominic will show you how you can use a feature to preview proposed changes before executing them.
Ok, let’s distill this further.
We have to use code and software development techniques to provision and manage infrastructure.
Once you have something in a codified format you enjoy a number of benefits:
you can version it,
you can share it with your colleagues for review,
you can create and codify standards,
you can re-use it and use it to replicate environments rapidly.
And then we need a workflow. On every code commit your workflow should be able to build, test and deploy changes.
Build test and deploy Infrastructure changes.
You can use AWS CloudFormation - our Infrastructure as code service and AWS CodePipeline – our continuous delivery workflow service to achieve all this. You can use these two service together to continuously deliver fast and reliable infrastructure updates.
We’ve assumed that most of you are pretty familiar with CloudFormation, but lets start off with a quick overview of the basics
CloudFormation is an Infrastructure as code service by AWS. It helps you model and set up your AWS resources.
You use declarative ways to describe all the AWS resources that you need in a template and CloudFormation takes care of provisioning and configuring those resources for you. CloudFormation figures out all the dependencies and execution order, provisions and configures all your resources.
You can update your templates and version control them as you make incremental changes to your infrastructure.
CloudFormation integrates with popular CI/CD and management tools, including AWS CodePipeline.
Here are some of the key features that we've added this year:
We’ve added the capability to author templates in YAML. YAML provides for concise, readable templates that you can comment. You can also use new shorthands for functions and a new Sub function to substitute variables in a string.
Change sets provides you with a preview of the actions CloudFormation will take on your behalf when you create or update a stack
Cross-stack references let you use output values from another stack that are given an export name
You can now build continuous delivery workflows for CloudFormation stacks using CodePipeline. AWS CodePipeline has built-in integration with AWS CloudFormation, so you can specify AWS CloudFormation-specific actions, such as creating, updating, or deleting a stack and Change Sets, within a pipeline
We launched newer abstractions for serverless architectures. CloudFormation now supports the AWS Serverless Application Model with special resource types that simplify expression of Lambda functions, APIs, mappings and IAM resources to create serverless applications. It’s a little bit out of scope for this session, but please meet us after the session if you need more details on this.
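As a sketch of those abstractions, a minimal Serverless Application Model template could look like the following. The function name, handler, runtime, and code location are all placeholders:

```yaml
# Minimal SAM template sketch; Handler, Runtime, and CodeUri are illustrative.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs4.3
      CodeUri: s3://my-bucket/function.zip
```

The Transform line tells CloudFormation to expand the AWS::Serverless::Function shorthand into the underlying Lambda and IAM resources at deploy time.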
Based on customer feedback, we launched support for YAML version 1.1.
YAML is more concise and readable with a lot less punctuation.
YAML allows you to add comment blocks to your templates.
YAML supports all the functionality that is available with JSON.
1. We have not only added YAML support but also enhanced some syntax to further improve the template authoring experience.
2. We have introduced short forms. Using these short forms you can express CloudFormation intrinsic functions (such as Fn::Join, Fn::FindInMap, Fn::GetAtt, Fn::GetAZs) in a more readable and concise manner in CloudFormation YAML templates.
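For example, the short forms let you write intrinsic functions inline. The resource and its property values here are illustrative, not from the sample application:

```yaml
# Short-form intrinsic functions in a YAML template; values are illustrative.
Resources:
  MySubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      AvailabilityZone: !Select [0, !GetAZs '']  # first AZ in the region
      CidrBlock: 10.0.0.0/24
```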
3. Also, we have introduced a new intrinsic Function called Fn::Sub for conducting basic string interpolations within a CloudFormation template. This intrinsic function Fn::Sub substitutes variables in an input string with values that you specify.
4. Here is an arbitrary user data section in a CloudFormation JSON template, and here is how you can express the same thing in YAML using these new enhancements: the same user data section with the new Sub function that can substitute variables, in this case pseudo parameters, in a string.
There’s another enhancement - you can use YAML tag directive and then the CloudFormation intrinsic function name as a short form.
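The slide’s side-by-side example isn’t reproduced in this text; a comparable user data section using the Sub function with pseudo parameters might look like this (the signaled resource name is a placeholder):

```yaml
# !Sub substitutes the ${AWS::StackName} and ${AWS::Region} pseudo parameters.
UserData:
  Fn::Base64: !Sub |
    #!/bin/bash
    yum update -y aws-cfn-bootstrap
    /opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource WebServerGroup --region ${AWS::Region}
```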
1. Many customers asked for insight into, or a preview of, the changes that CloudFormation is planning to perform when it creates or updates a stack based on whatever is present in the CloudFormation template and parameter values. Previewing the changes can help verify that they are in line with expectations.
2. Change sets lets you preview and approve the set of actions CloudFormation will take on your behalf when you create or update a stack.
You can view exactly what resources will get created, modified or replaced, so you can ensure only expected operations are executed.
3. What I am showing you here is a basic flow involving change sets during stack update.
Suppose you have your original stack, and you want to update it. Next step is to provide updated template and create change sets
A change set is generated. It provides blow-by-blow information on what is going to be created, modified and deleted. You can review this and make sure that the changes are in line with your expectations, and if you’re happy you can go ahead and execute the change set to update the stack.
There was this need for making configuration values flow from one independent stack to another independent stack. You can tackle some of the bits using custom resources, but a native CloudFormation feature would just make things easier, more standardized, and improve the overall experience with CloudFormation.
2. So we launched a feature called cross stack references :
Cross stack reference feature can help you do things such as share IAM roles, VPC information, and security groups across CloudFormation stacks in a standard way.
You can export values from one stack and use them in another
Let’s take a short example: In this case the Network stack is exporting its VPC with an export name of ProdVPC
The App stack is consuming this value using a new function Fn::ImportValue
Just considering this example, using this feature you can manage your network resources separately from the application resources that support your application.
Any stack within the account and region can consume the exported value
You can view your available exports using the console, API or CLI
I wanted to talk a little bit about nested stacks. It’s not a new CloudFormation concept at all. It has been there for years, but it’s worth recapping as this concept is used later in this session.
The Nested stacks feature lets you create stacks using multiple templates.
You can describe nested templates within a parent template using the AWS::CloudFormation::Stack resource type and a pointer to the S3 URL where the template is located.
When you use CloudFormation to create a stack using the parent template, CloudFormation creates all the resources in the template including nested stacks with its resources that are described in the nested templates.
Nested stacks provides a way to break up and organize templates that are too large.
It allows you to separate commonly used infrastructure components into their own templates
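A minimal parent-template sketch, assuming the nested template has been uploaded to S3. The bucket URL and the parameter passed down are placeholders:

```yaml
# Parent template declaring a nested stack; TemplateURL is a placeholder.
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  NetworkStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/my-bucket/network.yaml
      Parameters:
        EnvironmentName: Production
```

Creating a stack from this parent template makes CloudFormation create the NetworkStack nested stack, and all of the resources it describes, as part of the same operation.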
Let’s recap the use cases, advantages and considerations for nested stacks and cross-stack references.
It’s important to remember that these are not alternative approaches but rather tools that can be used together!
If you’d like to separate and re-use templates that contain descriptions of commonly used infrastructure resources, nested stacks give you a good way to provision and manage them together as a single unit.
When you use nested stacks, CloudFormation manages all the dependencies and creation order, including your nested templates, to provision all of your resources.
However, you do need to consider that all the resources in the stack are subject to updates and rollbacks, and if you are looking to reuse generic templates you’ll want to make sure you don’t have any custom names for resources.
Cross-stack references are a great way to share common resources, like networking or database resources, that may need to be used by several applications.
They allow you to manage stacks of resources independently. The folks who own network and security can manage those resources independently of the instances, containers, or functions.
However, you do need to consider that CloudFormation will prevent you from deleting or replacing a resource that is being imported by another stack.
Nested stacks are a convenient way to deploy using a library of common/shared templates. Cross stack references are a convenient way to share resources across stacks.
Again, it's important to remember that these are not alternative approaches but rather tools that can be used together!
Consider a simple microservices-based application.
Deconstruct the application into the necessary AWS resources
Create CloudFormation templates based on your management needs – determine if you need to structure your resources in nested stacks for convenience or cross stack references so they may be managed independently
Use CloudFormation to model your continuous delivery pipeline as code and version control it like you would your application code
CodePipeline uses CloudFormation to continuously deliver changes you make to your infrastructure code
The sample application we've picked to examine is available on GitHub at awslabs/ecs-refarch-cloudformation.
The application consists of two interconnecting microservices deployed as ECS services
The application runs on a highly available ECS cluster deployed across multiple availability zones with auto scaling
Let's look at the published reference architecture for this application.
A tiered VPC with public and private subnets, spanning an AWS region.
A highly available ECS cluster
NAT gateways (1 per Zone) to handle outbound traffic.
An Application Load Balancer (ALB) attached to the public subnets to handle inbound traffic.
ALB path-based routes for each ECS service to route the inbound traffic to the correct service.
Centralized container logging with Amazon CloudWatch Logs.
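The path-based routing mentioned above maps a URL path on the ALB to the target group of one ECS service. A minimal sketch, assuming hypothetical logical names (`LoadBalancerListener`, `ProductServiceTargetGroup`) and an example path pattern:

```yaml
# Illustrative ALB listener rule: requests matching /products* are
# forwarded to one ECS service's target group (names are placeholders)
Resources:
  ProductServiceRule:
    Type: AWS::ElasticLoadBalancingV2::ListenerRule
    Properties:
      ListenerArn: !Ref LoadBalancerListener
      Priority: 1
      Conditions:
        - Field: path-pattern
          Values:
            - /products*
      Actions:
        - Type: forward
          TargetGroupArn: !Ref ProductServiceTargetGroup
```

Each ECS service gets its own rule with a distinct path pattern and priority, so one ALB can front several services.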
Let's look at the AWS resource types that are needed to support this application.
Decomposing the requisite elements by category, we will need:
VPC
A pair of public and private subnets split across two zones
A pair of private route tables, default routes and associations that tie them together
A pair of NAT gateways and an EIP for each public subnet
A public route table, default route and an internet gateway
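A condensed sketch of a few of those building blocks in a network template (CIDR ranges and logical names are illustrative; the full template would repeat the subnet and NAT gateway pattern for the second zone and add the route tables and associations):

```yaml
Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16

  PublicSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      AvailabilityZone: !Select [0, !GetAZs '']
      CidrBlock: 10.0.0.0/24
      MapPublicIpOnLaunch: true

  # One EIP and NAT gateway per public subnet, for outbound traffic
  NatGateway1EIP:
    Type: AWS::EC2::EIP
    Properties:
      Domain: vpc
  NatGateway1:
    Type: AWS::EC2::NatGateway
    Properties:
      AllocationId: !GetAtt NatGateway1EIP.AllocationId
      SubnetId: !Ref PublicSubnet1

  # Internet gateway for the public route table's default route
  InternetGateway:
    Type: AWS::EC2::InternetGateway
  InternetGatewayAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      InternetGatewayId: !Ref InternetGateway
      VpcId: !Ref VPC
```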
Create templates based on the logical grouping
The network template outputs the VPC and subnets that are consumed by all other templates.
Security groups are managed and exported out of the security template.
The load balancer is launched in the public subnets and outputs the DNS name and listener ARN.
The ECS cluster exports the ARN of the ECS cluster
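An `Outputs` section for the load balancer template might look like the following sketch, where prefixing export names with `${AWS::StackName}` is one convention (assumed here, not prescribed by the session) for keeping exports unique across stacks:

```yaml
# Illustrative Outputs of the load balancer template
Outputs:
  LoadBalancerUrl:
    Description: DNS name of the load balancer
    Value: !GetAtt LoadBalancer.DNSName
    Export:
      Name: !Sub ${AWS::StackName}-LoadBalancerUrl
  Listener:
    Description: ARN of the load balancer listener
    Value: !Ref LoadBalancerListener
    Export:
      Name: !Sub ${AWS::StackName}-Listener
```

The security group and ECS cluster templates follow the same pattern, exporting the security group IDs and the cluster ARN respectively.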
Different ways to compose infrastructure
Single stack supporting the application per region
One parent stack and several nested stacks leveraging standard templates
Multiple standalone stacks, loosely coupled together with cross stack references
Combination of nested stacks and cross stack references
For the rest of this demo let's examine using cross stack references within a continuous delivery workflow.
Integrate your infrastructure with your continuous delivery framework. Let's review modeling a release pipeline for this infrastructure.
To automate the rollout of infrastructure changes, let's use CodePipeline to trigger deployments using CloudFormation.
Let's look at incorporating continuous delivery for your infrastructure.
AWS CodePipeline is a continuous delivery service for fast and reliable application and infrastructure updates.
CodePipeline builds, tests and deploys your code each time there is a code change, based on the release process you define.
We’ve added new built-in actions for CloudFormation that let you create, update or delete stacks and create and execute change sets
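A hedged sketch of one of those built-in actions inside a pipeline's `Stages` list, assuming placeholder artifact, stack, and role names:

```yaml
# Deploy stage fragment: the built-in CloudFormation action creates the
# stack if it doesn't exist, or updates it if it does (CREATE_UPDATE)
- Name: DeployNetwork
  Actions:
    - Name: CreateUpdateStack
      ActionTypeId:
        Category: Deploy
        Owner: AWS
        Provider: CloudFormation
        Version: '1'
      InputArtifacts:
        - Name: TemplateSource
      Configuration:
        ActionMode: CREATE_UPDATE
        StackName: network-stack
        TemplatePath: TemplateSource::network.yaml
        RoleArn: !GetAtt CloudFormationRole.Arn
        Capabilities: CAPABILITY_NAMED_IAM
      RunOrder: 1
```

Other `ActionMode` values cover the change-set workflow (create a change set in one action, then execute it in a later action after a manual approval).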
Take a common scenario for many customers: network resources are managed by a separate team, with separate policies and update needs from the resources that support applications.
Let's model an example for this pipeline with two separate VPCs.
Use a CloudFormation template to setup and manage your pipeline
This particular example creates:
an S3 bucket as an artifact store for the pipeline
SNS Topic to subscribe to email notifications for approvals
The pipeline with its various stages
IAM roles that CloudFormation will use to provision resources, and that CodePipeline will need to call CloudFormation on your behalf. A best practice would be to model these roles in a separate stack.
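A skeleton of that pipeline-management template, with illustrative logical names and the pipeline's stages elided as a comment (a real pipeline needs at least a source stage and one more stage):

```yaml
# Sketch of the template that sets up and manages the pipeline itself
Resources:
  ArtifactStoreBucket:
    Type: AWS::S3::Bucket

  ApprovalTopic:
    Type: AWS::SNS::Topic
    Properties:
      Subscription:
        - Endpoint: ops-team@example.com   # hypothetical address
          Protocol: email

  # Role CloudFormation assumes when provisioning resources
  CloudFormationRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: cloudformation.amazonaws.com
            Action: sts:AssumeRole

  Pipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      RoleArn: !GetAtt PipelineRole.Arn   # PipelineRole defined similarly
      ArtifactStore:
        Type: S3
        Location: !Ref ArtifactStoreBucket
      Stages:
        # source, deploy, and manual-approval stages go here
        []
```

Version-controlling this template alongside the infrastructure templates means the pipeline itself is delivered the same way as everything it deploys.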