Serverless promises to change the way we consume software: we pay only for what we use, which helps drive operational costs down to the minimum resources necessary.
Architecting for serverless requires a fresh look at application logic and the way it is deployed, combining the logical and physical worlds. An architectural pattern has emerged in which ephemeral compute scales separately from the services that need to persist state.
We use Kubernetes to deliver exactly this: a “serverless” experience driven and enabled by compute pods and storage pods. We have also used our experience running thousands of database clusters on Kubernetes to automate the operational expertise of managing a distributed database.
In this talk, we will take a deep dive into the architecture of our application and share:
* A definition of serverless and an outline of its challenges
* How we reworked our logic for a serverless approach
* How we use Kubernetes to gain serverless autoscaling
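The compute/storage split described above can be sketched in Kubernetes terms. The manifests below are a hypothetical, minimal illustration of the pattern (the names `sql-compute` and `kv-storage` and the container images are invented for this sketch, not taken from the talk): the ephemeral compute tier is a stateless Deployment scaled by a HorizontalPodAutoscaler, while the persistent tier is a StatefulSet with durable volumes that keeps its identity as compute scales around it.

```yaml
# Ephemeral compute tier: stateless pods that can scale with load.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sql-compute            # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sql-compute
  template:
    metadata:
      labels:
        app: sql-compute
    spec:
      containers:
        - name: sql-compute
          image: example/sql-compute:latest   # hypothetical image
          resources:
            requests:
              cpu: 500m
---
# Autoscaler: grows and shrinks the compute tier on CPU pressure.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: sql-compute
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sql-compute
  minReplicas: 1
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
---
# Persistent storage tier: stable identity and durable volumes.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kv-storage             # hypothetical name
spec:
  serviceName: kv-storage
  replicas: 3
  selector:
    matchLabels:
      app: kv-storage
  template:
    metadata:
      labels:
        app: kv-storage
    spec:
      containers:
        - name: kv-storage
          image: example/kv-storage:latest    # hypothetical image
          volumeMounts:
            - name: data
              mountPath: /var/lib/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

Under this sketch, the compute tier scales between 1 and 20 replicas as demand fluctuates, while the storage tier retains its pod identities and PersistentVolumes, which is what lets compute stay ephemeral without losing data.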
This talk was given by Jim Walker for DoK Day Europe @ KubeCon 2022.
BIO
- A Developer turned Solution Architect.
- Working at Komodor, a startup building the first K8s-native troubleshooting platform.
- Love everything in infrastructure: storage, networks & security - from 70’s era mainframes to cloud-native.
- All about “plan well, sleep well”.
KEY TAKE-AWAYS FROM THE TALK
- Understand how critical stateful workloads are for any system, and that the key challenges to migrating it to Kubernetes are knowledge and confidence.
- How to build the foundational knowledge required to overcome adoption challenges by creating a learning path for individuals and teams.
- How to gain confidence to run stateful workloads on Kubernetes with support from the community (and yourself!)
Mastering MongoDB on Kubernetes, the power of operators DoKC
Link: https://youtu.be/Pi5ueyl_1jU
https://go.dok.community/slack
https://dok.community/
ABSTRACT OF THE TALK
During my first talk for DoK community I want to walk you through the world of NoSQL database MongoDB and Kubernetes Operators - Community Edition, Enterprise Edition (MongoDB and Ops Manager on K8s), and Atlas operator, highlight the most important capabilities, talk about use cases and challenges, the theory will be mixed with a live demos!
BIO
I'm a SRE / NoSQL / DevOps professional. I hold CKA, CKAD, CKS, also I’m MongoDB Certified DBA and MongoDB Champion. I have experience with multiple cloud providers, Kubernetes, different types of K8s operators (Strimzi, RabbitMQ Cluster Operator), but especially MongoDB K8s Operator. I also work with KEDA. Since 2017, I have been a speaker at MongoDB conferences all around the world (USA, China, Europe).
KEY TAKE-AWAYS FROM THE TALK
I would like to share the best practices of running NoSQL database - MongoDB on Kubernetes also I want to show how to manage Atlas (MongoDB cloud) via K8s operator
https://www.mongodb.com/developer/community-champions/arkadiusz-borucki/
AI_dev Europe 2024 - From OpenAI to Opensource AIRaphaël Semeteys
Navigating Between Commercial Ownership and Collaborative Openness
This presentation explores the evolution of generative AI, highlighting the trajectories of various models such as GPT-4, and examining the dynamics between commercial interests and the ethics of open collaboration. We offer an in-depth analysis of the levels of openness of different language models, assessing various components and aspects, and exploring how the (de)centralization of computing power and technology could shape the future of AI research and development. Additionally, we explore concrete examples like LLaMA and its descendants, as well as other open and collaborative projects, which illustrate the diversity and creativity in the field, while navigating the complex waters of intellectual property and licensing.
How RPA Help in the Transportation and Logistics Industry.pptxSynapseIndia
Revolutionize your transportation processes with our cutting-edge RPA software. Automate repetitive tasks, reduce costs, and enhance efficiency in the logistics sector with our advanced solutions.
The Rise of Supernetwork Data Intensive ComputingLarry Smarr
Invited Remote Lecture to SC21
The International Conference for High Performance Computing, Networking, Storage, and Analysis
St. Louis, Missouri
November 18, 2021
UiPath Community Day Kraków: Devs4Devs ConferenceUiPathCommunity
We are honored to launch and host this event for our UiPath Polish Community, with the help of our partners - Proservartner!
We certainly hope we have managed to spike your interest in the subjects to be presented and the incredible networking opportunities at hand, too!
Check out our proposed agenda below 👇👇
08:30 ☕ Welcome coffee (30')
09:00 Opening note/ Intro to UiPath Community (10')
Cristina Vidu, Global Manager, Marketing Community @UiPath
Dawid Kot, Digital Transformation Lead @Proservartner
09:10 Cloud migration - Proservartner & DOVISTA case study (30')
Marcin Drozdowski, Automation CoE Manager @DOVISTA
Pawel Kamiński, RPA developer @DOVISTA
Mikolaj Zielinski, UiPath MVP, Senior Solutions Engineer @Proservartner
09:40 From bottlenecks to breakthroughs: Citizen Development in action (25')
Pawel Poplawski, Director, Improvement and Automation @McCormick & Company
Michał Cieślak, Senior Manager, Automation Programs @McCormick & Company
10:05 Next-level bots: API integration in UiPath Studio (30')
Mikolaj Zielinski, UiPath MVP, Senior Solutions Engineer @Proservartner
10:35 ☕ Coffee Break (15')
10:50 Document Understanding with my RPA Companion (45')
Ewa Gruszka, Enterprise Sales Specialist, AI & ML @UiPath
11:35 Power up your Robots: GenAI and GPT in REFramework (45')
Krzysztof Karaszewski, Global RPA Product Manager
12:20 🍕 Lunch Break (1hr)
13:20 From Concept to Quality: UiPath Test Suite for AI-powered Knowledge Bots (30')
Kamil Miśko, UiPath MVP, Senior RPA Developer @Zurich Insurance
13:50 Communications Mining - focus on AI capabilities (30')
Thomasz Wierzbicki, Business Analyst @Office Samurai
14:20 Polish MVP panel: Insights on MVP award achievements and career profiling
What's Next Web Development Trends to Watch.pdfSeasiaInfotech2
Explore the latest advancements and upcoming innovations in web development with our guide to the trends shaping the future of digital experiences. Read our article today for more information.
Sustainability requires ingenuity and stewardship. Did you know Pigging Solutions pigging systems help you achieve your sustainable manufacturing goals AND provide rapid return on investment.
How? Our systems recover over 99% of product in transfer piping. Recovering trapped product from transfer lines that would otherwise become flush-waste, means you can increase batch yields and eliminate flush waste. From raw materials to finished product, if you can pump it, we can pig it.
this resume for sadika shaikh bca studentSadikaShaikh7
I am a dedicated BCA student with a strong foundation in web technologies, including PHP and MySQL. I have hands-on experience in Java and Python, and a solid understanding of data structures. My technical skills are complemented by my ability to learn quickly and adapt to new challenges in the ever-evolving field of computer science.
INDIAN AIR FORCE FIGHTER PLANES LIST.pdfjackson110191
These fighter aircraft have uses outside of traditional combat situations. They are essential in defending India's territorial integrity, averting dangers, and delivering aid to those in need during natural calamities. Additionally, the IAF improves its interoperability and fortifies international military alliances by working together and conducting joint exercises with other air forces.
Video traffic on the Internet is constantly growing; networked multimedia applications consume a predominant share of the available Internet bandwidth. A major technical breakthrough and enabler in multimedia systems research and of industrial networked multimedia services certainly was the HTTP Adaptive Streaming (HAS) technique. This resulted in the standardization of MPEG Dynamic Adaptive Streaming over HTTP (MPEG-DASH) which, together with HTTP Live Streaming (HLS), is widely used for multimedia delivery in today’s networks. Existing challenges in multimedia systems research deal with the trade-off between (i) the ever-increasing content complexity, (ii) various requirements with respect to time (most importantly, latency), and (iii) quality of experience (QoE). Optimizing towards one aspect usually negatively impacts at least one of the other two aspects if not both. This situation sets the stage for our research work in the ATHENA Christian Doppler (CD) Laboratory (Adaptive Streaming over HTTP and Emerging Networked Multimedia Services; https://athena.itec.aau.at/), jointly funded by public sources and industry. In this talk, we will present selected novel approaches and research results of the first year of the ATHENA CD Lab’s operation. We will highlight HAS-related research on (i) multimedia content provisioning (machine learning for video encoding); (ii) multimedia content delivery (support of edge processing and virtualized network functions for video networking); (iii) multimedia content consumption and end-to-end aspects (player-triggered segment retransmissions to improve video playout quality); and (iv) novel QoE investigations (adaptive point cloud streaming). We will also put the work into the context of international multimedia systems research.
Are you interested in dipping your toes in the cloud native observability waters, but as an engineer you are not sure where to get started with tracing problems through your microservices and application landscapes on Kubernetes? Then this is the session for you, where we take you on your first steps in an active open-source project that offers a buffet of languages, challenges, and opportunities for getting started with telemetry data.
The project is called openTelemetry, but before diving into the specifics, we’ll start with de-mystifying key concepts and terms such as observability, telemetry, instrumentation, cardinality, percentile to lay a foundation. After understanding the nuts and bolts of observability and distributed traces, we’ll explore the openTelemetry community; its Special Interest Groups (SIGs), repositories, and how to become not only an end-user, but possibly a contributor.We will wrap up with an overview of the components in this project, such as the Collector, the OpenTelemetry protocol (OTLP), its APIs, and its SDKs.
Attendees will leave with an understanding of key observability concepts, become grounded in distributed tracing terminology, be aware of the components of openTelemetry, and know how to take their first steps to an open-source contribution!
Key Takeaways: Open source, vendor neutral instrumentation is an exciting new reality as the industry standardizes on openTelemetry for observability. OpenTelemetry is on a mission to enable effective observability by making high-quality, portable telemetry ubiquitous. The world of observability and monitoring today has a steep learning curve and in order to achieve ubiquity, the project would benefit from growing our contributor community.
How to Avoid Learning the Linux-Kernel Memory ModelScyllaDB
The Linux-kernel memory model (LKMM) is a powerful tool for developing highly concurrent Linux-kernel code, but it also has a steep learning curve. Wouldn't it be great to get most of LKMM's benefits without the learning curve?
This talk will describe how to do exactly that by using the standard Linux-kernel APIs (locking, reference counting, RCU) along with a simple rules of thumb, thus gaining most of LKMM's power with less learning. And the full LKMM is always there when you need it!
MYIR Product Brochure - A Global Provider of Embedded SOMs & SolutionsLinda Zhang
This brochure gives introduction of MYIR Electronics company and MYIR's products and services.
MYIR Electronics Limited (MYIR for short), established in 2011, is a global provider of embedded System-On-Modules (SOMs) and
comprehensive solutions based on various architectures such as ARM, FPGA, RISC-V, and AI. We cater to customers' needs for large-scale production, offering customized design, industry-specific application solutions, and one-stop OEM services.
MYIR, recognized as a national high-tech enterprise, is also listed among the "Specialized
and Special new" Enterprises in Shenzhen, China. Our core belief is that "Our success stems from our customers' success" and embraces the philosophy
of "Make Your Idea Real, then My Idea Realizing!"
In this follow-up session on knowledge and prompt engineering, we will explore structured prompting, chain of thought prompting, iterative prompting, prompt optimization, emotional language prompts, and the inclusion of user signals and industry-specific data to enhance LLM performance.
Join EIS Founder & CEO Seth Earley and special guest Nick Usborne, Copywriter, Trainer, and Speaker, as they delve into these methodologies to improve AI-driven knowledge processes for employees and customers alike.
Navigating Post-Quantum Blockchain: Resilient Cryptography in Quantum Threatsanupriti
In the rapidly evolving landscape of blockchain technology, the advent of quantum computing poses unprecedented challenges to traditional cryptographic methods. As quantum computing capabilities advance, the vulnerabilities of current cryptographic standards become increasingly apparent.
This presentation, "Navigating Post-Quantum Blockchain: Resilient Cryptography in Quantum Threats," explores the intersection of blockchain technology and quantum computing. It delves into the urgent need for resilient cryptographic solutions that can withstand the computational power of quantum adversaries.
Key topics covered include:
An overview of quantum computing and its implications for blockchain security.
Current cryptographic standards and their vulnerabilities in the face of quantum threats.
Emerging post-quantum cryptographic algorithms and their applicability to blockchain systems.
Case studies and real-world implications of quantum-resistant blockchain implementations.
Strategies for integrating post-quantum cryptography into existing blockchain frameworks.
Join us as we navigate the complexities of securing blockchain networks in a quantum-enabled future. Gain insights into the latest advancements and best practices for safeguarding data integrity and privacy in the era of quantum threats.
2. Before we get started
Jim Walker
• Principal Product Evangelist
• @jaymce
This session is INTERMEDIATE
• I am not a database expert
• I am curious and love tech
• I think this stuff is cool and these concepts define the future
• GOAL: a high-level context of the concept
6. Serverless as a computing paradigm…
1. Little to no manual server management
2. Automatic, elastic app/service scale
3. Built-in resilience and inherently fault tolerant service
4. Always available and instant access
5. Consumption-based rating or billing mechanism
6. Survive any failure domain, including regions
7. Geographic scale and low latencies
8. Infrastructure-less
Over the past 5 or so years, the serverless execution model has been most commonly used for:
● Serverless compute
● Serverless functions
● Serverless app development
8. So, how do you make a database app serverless?
Most databases stack three layers: Language (SQL), Distributed Execution, and Storage (Replication & Distribution). The task is to find the divide: which layers can run as ephemeral compute, and which must persist.
9. Pulling it together, into serverless and beyond
[Diagram: a single cluster presenting one logical database. CockroachDB Nodes 1-3, each containing the SQL Layer, Execution, Distribution, and Storage & Replication layers.]
10. Serverless decouples execution and storage
[Diagram: a virtual cluster of CockroachDB SQL Pods 1-3, each containing the SQL Layer, Execution, and Distribution, running on top of a shared, storage-only CockroachDB cluster that handles Storage & Replication]
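The decoupling on this slide can be sketched in miniature: stateless SQL pods parse and execute statements while delegating all persistence to one shared storage cluster. The class names, method names, and toy statement syntax below are illustrative assumptions, not CockroachDB internals.

```python
# Toy sketch of the compute/storage split: SQL pods hold no state of
# their own, so any number of them can come and go; the storage
# cluster is the only stateful component.

class StorageCluster:
    """Shared, replicated key-value store (the stateful layer)."""
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

class SQLPod:
    """Stateless SQL layer: parses and executes, delegates persistence."""
    def __init__(self, storage):
        self.storage = storage  # every pod shares the same storage

    def execute(self, statement):
        op, key, *rest = statement.split()
        if op == "PUT":
            self.storage.put(key, rest[0])
            return "OK"
        if op == "GET":
            return self.storage.get(key)

storage = StorageCluster()
pod1, pod2 = SQLPod(storage), SQLPod(storage)

pod1.execute("PUT name Kimball")   # written through pod 1...
print(pod2.execute("GET name"))    # ...readable from pod 2: Kimball
```

Because the pods share nothing, killing `pod1` after the write loses no data; that property is what makes the execution layer safe to scale elastically.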
11. This allows us to scale storage and execution separately
[Diagram: three virtual clusters share the storage-only cluster: Tenant 1 with SQL Pods 1-3, Tenant 2 with SQL Pod 1, and Tenant 3 with SQL Pods 1-3. Each pod contains the SQL Layer, Execution, and Distribution.]
12. App pods are spread across AZs and the storage cluster is spread to optimize for resilience
[Diagram: Availability Zones 1-3, with Tenant 1's SQL Pods 1-3 placed one per zone above the spread storage cluster]
13. App pods are spread across AZs and the storage cluster is spread to optimize for resilience
[Diagram: as slide 12, with Tenant 2's SQL Pod 1 added alongside Tenant 1's pods]
14. App pods are spread across AZs and the storage cluster is spread to optimize for resilience
[Diagram: Tenant 1 (SQL Pods 1-3), Tenant 2 (SQL Pod 1), and Tenant 3 (SQL Pods 1-3) all spread across the three availability zones]
15. We introduce proxy pods that route tenant queries to their SQL pods
[Diagram: a load balancer in front of four proxy pods, which route each tenant's queries to that tenant's SQL pods across the three availability zones]
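The proxy's job can be sketched as a small routing table: identify the tenant behind an incoming connection, then pick one of that tenant's SQL pods. The names and the round-robin policy are assumptions for illustration; the real proxy layer also handles concerns like TLS, authentication, and resuming dormant tenants.

```python
from itertools import count

class TenantProxy:
    """Routes each tenant's queries to one of that tenant's SQL pods."""
    def __init__(self, assignments):
        # tenant id -> list of pod names currently serving that tenant
        self.assignments = assignments
        self._rr = {t: count() for t in assignments}  # round-robin counters

    def route(self, tenant_id):
        pods = self.assignments.get(tenant_id)
        if not pods:
            raise LookupError(f"tenant {tenant_id} has no running pods")
        # rotate across the tenant's pods to spread load
        return pods[next(self._rr[tenant_id]) % len(pods)]

proxy = TenantProxy({
    "tenant1": ["t1-sql-pod-1", "t1-sql-pod-2", "t1-sql-pod-3"],
    "tenant2": ["t2-sql-pod-1"],
})
print(proxy.route("tenant1"))  # t1-sql-pod-1
print(proxy.route("tenant1"))  # t1-sql-pod-2
print(proxy.route("tenant2"))  # t2-sql-pod-1
```

The key design point the slide makes is that clients never address pods directly; only the proxies know which pods currently belong to which tenant, so pods can be added, removed, or reassigned without clients noticing.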
16. Serverless Scale
Data volume scale is accomplished in the storage cluster, using native distribution and range splitting.
Transactional scale is quite different. We need to:
● Accommodate elastic usage
● Deal with spikes in traffic
● Spin down to dormant usage
The autoscaler monitors CPU load on each SQL pod in the cluster and calculates the number of SQL pods per tenant based on two metrics:
• Average CPU usage over the last 5 minutes
• Peak CPU usage during the last 5 minutes
This is all accomplished using ephemeral SQL pods.
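A minimal version of that calculation might look like the following. The target utilization, the per-pod CPU capacity, and the exact way the two metrics are combined are assumptions for illustration, not the actual autoscaler's formula.

```python
import math

def desired_pod_count(avg_cpu, peak_cpu,
                      target_util=0.7, burst_util=1.0, min_pods=0):
    """Compute SQL pods for a tenant from 5-minute CPU metrics.

    avg_cpu / peak_cpu: total CPU (in vCPUs) the tenant consumed,
    averaged over / peaked during the last 5 minutes.
    Each pod is assumed to contribute 1 vCPU of capacity.
    """
    # enough pods to keep average load at the target utilization...
    baseline = math.ceil(avg_cpu / target_util)
    # ...but also enough to absorb the recent peak at full utilization
    burst = math.ceil(peak_cpu / burst_util)
    return max(baseline, burst, min_pods)

print(desired_pod_count(avg_cpu=1.4, peak_cpu=3.2))  # 4: sized for the peak
print(desired_pod_count(avg_cpu=2.8, peak_cpu=3.0))  # 4: sized for the average
print(desired_pod_count(avg_cpu=0.0, peak_cpu=0.0))  # 0: tenant goes dormant
```

Taking the max of the two estimates captures the slide's intent: the average keeps steady-state load healthy, while the peak ensures recent spikes are not averaged away; a result of zero is what lets an idle tenant spin down entirely.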
17. Autoscaling your application
[Diagram: Tenant 1's SQL Pods 1-3 running across the three availability zones behind the load balancer and proxy pods, with a pool of unassigned slots and pre-warmed "hot" pods held in reserve; a gauge shows Tenant 1's capacity]
18. Accommodates peaks in traffic by adding SQL pods
[Diagram: Tenant 1 gains SQL Pod 4, spun up from the reserve pool, alongside SQL Pods 1-3 across the three availability zones]
19. ...and returns to "steady state" after the event
[Diagram: Tenant 1 is back to SQL Pods 1-3; the extra pod has returned to the unassigned pool]
20. And when there is no traffic, the tenant goes dormant and the pods spin down to zero
[Diagram: no Tenant 1 SQL pods remain; only the load balancer, proxy pods, unassigned slots, "hot" pods, and the storage cluster are running]
21. When it needs to "wake up", it spins up a "hot" pod
[Diagram: a pre-warmed "hot" pod is assigned to Tenant 1 as SQL Pod 4 to serve the waking tenant]
22. Serverless gives developers what they want
● Start Instantly: spin up a CockroachDB Serverless cluster in seconds, for free, without a credit card
● No Operations: no need to manage, upgrade, or operate the database; just connect and code
● Auto Scale: use a SQL database that scales storage and transaction volumes up and down to meet demand
● Eliminate Downtime: CockroachDB Serverless replicates your data across AZs to ensure it is always available
23. Serverless gives developers what they want
● Start Instantly: spin up a CockroachDB Serverless cluster in seconds, for free, without a credit card
● No Operations: no need to manage, upgrade, or operate the database; just connect and code
● Auto Scale: use a SQL database that scales storage and transaction volumes up and down to meet demand
● Eliminate Downtime: CockroachDB Serverless replicates your data across AZs to ensure it is always available
Just Build:
● Relational models and SQL
● Guaranteed correct, low-latency transactions
● Developer focus on CODE, not OPERATIONS: code against an API, wire compatible with PostgreSQL
● GENEROUS free tier
For us, it's a database!
27. Create a CockroachDB instance now…
Serverless (beta): a single-region instance with a generous free tier and capped pay-for-usage beyond the free limits. No credit card required! Free every month, up to:
• 5 GB of storage
• 250M Request Units
Dedicated: a full-featured, single-tenant instance. Deploy instantly on AWS or GCP, in a single region or across multiple regions, with 99.99% guaranteed uptime. Let our SRE team provision and manage your database.
www.cockroachlabs.com
28. What if I run out of request units (RUs)?
CockroachDB Serverless doesn't turn off once you use up your spend limit; it ensures you will have at least 100 RUs/second for the remainder of the billing period (one month).
What is it good for? In its current state, we believe CockroachDB Serverless will be good for:
● side projects
● smaller apps
● low-code apps
● learning SQL, or gaining familiarity with SQL or CockroachDB
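The "at least 100 RUs/second after the spend limit" behavior can be modeled as a simple throttle: burst freely while paid budget remains, then fall back to a guaranteed baseline rate. The 100 RU/s figure comes from the slide; the mechanism itself is an illustrative assumption, not the actual billing implementation.

```python
class RequestUnitThrottle:
    """Paid RUs burst freely; once the spend limit is exhausted,
    only a baseline rate (100 RU/s, per the slide) is guaranteed."""
    BASELINE_RU_PER_SEC = 100

    def __init__(self, spend_limit_rus):
        self.remaining_paid = spend_limit_rus

    def allowed_this_second(self, requested_rus):
        """RUs granted for one second's worth of requests."""
        if self.remaining_paid >= requested_rus:
            self.remaining_paid -= requested_rus
            return requested_rus            # within budget: no throttling
        granted = self.remaining_paid + self.BASELINE_RU_PER_SEC
        self.remaining_paid = 0
        return min(requested_rus, granted)  # budget gone: baseline only

throttle = RequestUnitThrottle(spend_limit_rus=500)
print(throttle.allowed_this_second(400))  # 400 (within budget)
print(throttle.allowed_this_second(400))  # 200 (last 100 paid + 100 baseline)
print(throttle.allowed_this_second(400))  # 100 (baseline only)
```

The point the slide is making survives the simplification: exhausting the budget degrades throughput gracefully rather than cutting the service off.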
29. CockroachDB: Architected For the Cloud
A fundamentally better database for your developers and applications
CockroachDB is a distributed, relational database that can be used for mundane and high-value workloads. It is a database cluster comprised of nodes that appear as a single logical database. It gives your developers familiar, standard SQL.
[Diagram: user Ashley issues "INSERT (Kimball) INTO CUSTOMER;" against the single logical database]
30. CockroachDB: Architected For the Cloud
A fundamentally better database for your developers and applications
Scale the database by simply adding more nodes. CockroachDB auto-balances to incorporate the new resource; no manual work is required.
● Easy scale for increases in database size
● Every node accepts reads and writes, so you also scale transactional volume
[Diagram: users Ashley and Lindsay issue queries against the single logical database]
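The auto-balancing described above can be sketched as redistributing data ranges when a node joins. Real CockroachDB rebalances replicated ranges based on load and capacity; this even-spread toy, with made-up node and range names, only illustrates the principle that adding a node requires no manual data movement.

```python
def rebalance(ranges_by_node, new_node):
    """Move ranges from the fullest nodes onto a newly added node
    until every node holds a near-equal share."""
    ranges_by_node = {n: list(r) for n, r in ranges_by_node.items()}
    ranges_by_node[new_node] = []
    total = sum(len(r) for r in ranges_by_node.values())
    target = total // len(ranges_by_node)  # even share per node
    for node, ranges in ranges_by_node.items():
        while (node != new_node and len(ranges) > target
               and len(ranges_by_node[new_node]) < target):
            ranges_by_node[new_node].append(ranges.pop())
    return ranges_by_node

cluster = {"node1": ["r1", "r2", "r3"], "node2": ["r4", "r5", "r6"]}
print(rebalance(cluster, "node3"))
# each node now holds 2 ranges; no range is lost and none is duplicated
```

Because every node then serves reads and writes for the ranges it holds, spreading ranges this way is what converts "add a node" into "add transactional capacity".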
31. CockroachDB: Architected For the Cloud
A fundamentally better database for your developers and applications
Scale even further across regions and even clouds, yet still deliver a single logical database. It excels when deployed across multiple data centers.
[Diagram: the single logical database spanning multiple regions, with users Ashley, Lindsay, and Peter issuing queries]
32. CockroachDB: Architected For the Cloud
A fundamentally better database for your developers and applications
Scale even further across regions and even clouds, yet still deliver a single logical database. It excels when deployed across multiple data centers... and even multi-cloud!
[Diagram: users Ashley, Lindsay, and Peter issuing queries against the single logical database]
33. CockroachDB: Architected For the Cloud
A fundamentally better database for your developers and applications
CockroachDB is naturally resilient, so you can survive the failure of a node or even an entire region without service disruption.
● Always-on, always available with zero RPO/RTO
● Allows for no-downtime rolling upgrades
[Diagram: users issuing queries against the single logical database spanning three regions]
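Surviving a node failure with zero RPO follows from replication with quorum writes. Here is a toy majority-quorum sketch (3 replicas, any 2 suffice); it illustrates the principle rather than CockroachDB's actual Raft-based consensus, and all names are made up.

```python
class Replica:
    def __init__(self):
        self.data, self.alive = {}, True

class QuorumStore:
    """Writes and reads succeed on a majority, so losing any single
    replica of three causes no data loss and no downtime."""
    def __init__(self, n=3):
        self.replicas = [Replica() for _ in range(n)]
        self.quorum = n // 2 + 1  # majority: 2 of 3

    def write(self, key, value):
        acks = sum(1 for r in self.replicas if r.alive)
        for r in self.replicas:
            if r.alive:
                r.data[key] = value
        if acks < self.quorum:
            raise RuntimeError("write failed: no majority")

    def read(self, key):
        values = [r.data.get(key) for r in self.replicas if r.alive]
        if len(values) < self.quorum:
            raise RuntimeError("read failed: no majority")
        # return the value the surviving majority agrees on
        return max(set(values), key=values.count)

store = QuorumStore()
store.write("customer", "Kimball")
store.replicas[0].alive = False        # lose an entire replica...
print(store.read("customer"))          # ...the data survives: Kimball
```

The same quorum property is what enables rolling upgrades: taking one replica down at a time never drops below a majority, so the service keeps answering throughout.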
34. CockroachDB: Architected For the Cloud
A fundamentally better database for your developers and applications
Ask any node for data and it will find it in the cluster.
[Diagram: regions US-WEST, US-EAST, and EMEA, each behind a load balancer and each holding the Kimball, Mattis, and Stewart rows; user Random's INSERT is satisfied wherever the data lives]
35. CockroachDB: Architected For the Cloud
A fundamentally better database for your developers and applications
Ask any node for data and it will find it in the cluster. Geo-locate data near the user to reduce read/write latencies (or comply with regulations).
[Diagram: regions US-WEST, US-EAST, and EMEA behind load balancers, each holding the Kimball, Mattis, and Stewart rows; user Kimball's SELECT is served from the nearest region]
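Geo-locating data near users, as on the final slides, amounts to serving each row from the closest region that holds a replica of it. A toy router follows; the region names mirror the slides, while the latency table and function are illustrative assumptions.

```python
# Toy geo-router: serve each read from the closest region holding a
# replica. Region names match the slides; the latencies are made up.
LATENCY_MS = {  # client region -> {serving region: round-trip ms}
    "us-west": {"us-west": 2, "us-east": 70, "emea": 140},
    "us-east": {"us-west": 70, "us-east": 2, "emea": 80},
    "emea":    {"us-west": 140, "us-east": 80, "emea": 2},
}

def nearest_replica(client_region, replica_regions):
    """Pick the replica region with the lowest latency to the client."""
    return min(replica_regions, key=lambda r: LATENCY_MS[client_region][r])

# With the rows replicated in all three regions, each user reads locally.
replicas = ["us-west", "us-east", "emea"]
print(nearest_replica("emea", replicas))     # emea
print(nearest_replica("us-west", replicas))  # us-west
```

When regulations instead require a row to stay inside one region, the same routine simply receives a shorter replica list for that row, trading latency for compliance.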