This document discusses adding Forge modules to Puppet Enterprise. It describes moving the HTTP load balancer from Pound to nginx, keeping existing manifests pulled from GitHub, and using the Puppet Forge to install the puppetlabs/nginx module or integrating it via Git submodules. It also covers parameterizing classes on the Puppet Enterprise console and merging site.pp files, as well as updating nginx configurations on the fly and considering alternative nginx modules on the Forge.
Gulp is a streaming build system that automates and enhances workflows. It uses plugins to compile Sass/Less, autoprefix CSS, run JavaScript linting, bundle files, minify assets, and more. Yeoman generates project scaffolds with preconfigured Gulp setups for tasks like minimizing, autoprefixing, and collecting static assets. Together, Gulp and Yeoman provide tools to automate common development tasks and enhance workflows.
1. The document discusses setting up a continuous integration workflow for Drupal projects using tools like Jenkins, Drush, and Vagrant.
2. It identifies problems with current development practices, like code being merged without testing and environments that differ between dev and production.
3. The proposed workflow uses scripts to automate rebuilding development and production environments from source control, running tests, and deploying code.
The document discusses modern web technologies including Composer, Laravel, Sass, Compass, Node.js, Bower, Gulp and SemanticUI. It provides overviews of each tool, why they are useful, how to install them and includes demos. Key topics covered are dependency management with Composer, PHP framework Laravel, CSS preprocessor Sass and framework Compass, front-end package manager Bower, task runner Gulp and theming framework SemanticUI.
The document provides steps to dockerize a WordPress application. It involves installing Docker, creating a Dockerfile to define the WordPress application environment, building a Docker image from the Dockerfile, running the image as a container and configuring WordPress. Key steps include creating a Dockerfile to install Apache, MySQL, PHP and WordPress, building an image from the Dockerfile, running the image as a container and mapping ports, and configuring WordPress inside the container.
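The steps above can be sketched with the official images rather than a hand-rolled Dockerfile; this is a minimal sketch, assuming the official `wordpress` and `mysql` Docker Hub images, and the container names and passwords are illustrative.

```shell
# create a user-defined network so the containers can reach each other by name
docker network create wp-net

# database container
docker run -d --name wp-db --network wp-net \
  -e MYSQL_ROOT_PASSWORD=secret \
  -e MYSQL_DATABASE=wordpress \
  mysql:5.7

# WordPress container: map host port 8080 to Apache inside the container
docker run -d --name wp-app --network wp-net -p 8080:80 \
  -e WORDPRESS_DB_HOST=wp-db \
  -e WORDPRESS_DB_USER=root \
  -e WORDPRESS_DB_PASSWORD=secret \
  wordpress

# configuration of WordPress itself then finishes in the browser at http://localhost:8080
```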
OSDC.no 2015 introduction to Node.js workshop - leffen
This document provides a short introduction to Node.js, Express, and MQTT for IoT applications. It discusses using Node.js and its non-blocking I/O model on devices like the Raspberry Pi. It then demonstrates setting up a basic Express app, adding static files and templates. Finally, it introduces MQTT as a lightweight protocol for IoT with publish/subscribe messaging and shows a simple example of connecting and publishing with the MQTT Node.js client library.
Become the happiest front-end developer with Gulp.js (Devenez le plus heureux des Front-end avec Gulp.js) - Rémy Savard
This document discusses using Gulp as a task runner for front-end development. It introduces Gulp and compares it to Grunt, explaining that Gulp uses streaming for faster performance. The document then demonstrates how to set up a basic Gulpfile to compile Sass files to CSS. It covers the main Gulp functions like gulp.task(), gulp.src(), gulp.dest(), and gulp.watch(), and shows how to create dependent tasks. Finally, it recommends some common Gulp plugins for tasks like autoprefixing, uncss, concatenation, minification, and more.
Setup Kubernetes with flannel on Ubuntu platform - Ajeet Singh
This document provides instructions for setting up Kubernetes with 2 nodes using kubeadm and flannel on Ubuntu. It describes installing Docker and Kubernetes, initializing the master node with kubeadm init, joining the worker node with the kubeadm join command, installing and configuring flannel as the pod network, and deploying and exposing the Kubernetes dashboard for management. Specific steps include installing Docker, initializing the master, saving the join command, installing flannel on all nodes, joining the worker, installing the dashboard and changing its service type to NodePort for external access.
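Condensed, the steps described come down to a short command sequence; this is a sketch, not the document's exact commands - the join token is a placeholder and the flannel manifest URL may move between releases.

```shell
# On the master: initialise with flannel's default pod CIDR
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# make kubectl work for the regular user
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# install flannel as the pod network on the cluster
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# On the worker: paste the join command printed by kubeadm init, e.g.
#   sudo kubeadm join <master-ip>:6443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:<hash>

# expose the dashboard externally by switching its service to NodePort
# (the namespace and service name vary by dashboard version)
kubectl -n kube-system patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
```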
This document provides an overview of developing applications in Docker. It defines key Docker terminology like Dockerfile, image, and container. It demonstrates how to build an image from a Dockerfile, run containers, and use Dockerfiles to package applications. Tips are given for optimizing images like using lightweight base images, combining commands, and removing temporary files. Volumes are demonstrated as a way to share files between the host and container during development.
Deploying Rails applications with Moonshine - Robot Mode
Moonshine is a tool that automates server deployment and configuration. It uses Puppet and Capistrano to update packages, configure services like SSH, deploy code, install gems and their dependencies, pull down git submodules, and keep the server configuration under version control. Moonshine plugins allow it to perform additional tasks during deployment. To use Moonshine, a server is configured using a YAML file and application manifest, then deployments can be automated using Capistrano commands. This allows servers to be deployed and maintained in a consistent, repeatable manner.
uWSGI - Swiss army knife for your Python web apps - Tomislav Raseta
uWSGI is a full-stack tool for building hosting services that acts as an application server for Python web apps using the WSGI specification. It provides a pluggable architecture, versatility, high performance, low resource usage, and reliability. Configuration options are extensive and allow for processes, threads, reloading, monitoring and more. Proper configuration and testing is required to optimize performance for production deployments.
"Wix Serverless from inside", Mykola Borozdin - Fwdays
There were three Scala developers and a task: drastically improve Wix Node.js development velocity. They created Wix Serverless, which does indeed give you blazingly fast development but, despite the name, does have servers. This talk covers the internal cornerstones and the history of the framework, which gives developers the power of the entire Wix infrastructure in one function and deploys to production in seconds.
EuroPython 2014 - How we switched our 800+ projects from Apache to uWSGI - Max Tepkeev
During the last 7 years, the company I work for has developed more than 800 projects in PHP and Python. All this time we were using Apache+nginx to host these projects. In this talk I will explain why we decided to switch all our projects from Apache+nginx to uWSGI+nginx and how we did it.
This document discusses JavaScript task runners Gulp and Grunt. It describes common web development tasks like compiling Sass/Less to CSS, concatenating and minifying JavaScript files. Task runners automate repetitive tasks and are also called build systems. Gulp is a streaming build system while Grunt uses configuration over code. Both are useful for modern front-end workflows involving preprocessors, package managers, and building/optimizing assets.
SaltConf 2014 keynote - Thomas Jackson, LinkedIn
Safety with Power tools
As infrastructure scales, simple tasks become increasingly difficult. For large infrastructures to be manageable, we use automation. But automation, like any power tool, comes with its own set of risks and challenges. Automation should be handled like production code, and great care should be exercised with power tools. This talk will cover how SaltStack is used at LinkedIn and offer tips and tricks for automating management with SaltStack at massive scale including a look at LinkedIn-inspired Salt features such as blacklist and pre-req states. It will also cover Salt master and minion instrumentation and a compilation of how not to use Salt.
This document provides instructions for setting up Docker and Fig on Mac OS X. It describes installing Homebrew, VirtualBox, Vagrant, and Docker client tools. It also explains setting up a CoreOS VM using Vagrant to run Docker containers remotely, and installing Fig to manage Docker containers. Finally, it provides examples of running a Docker image directly and using Fig to test the full Docker configuration.
Wt is a web framework written entirely in C++, while Phalcon is a PHP framework delivered as a C extension. Performance tests showed Wt had higher request throughput and lower memory usage than Phalcon. Wt also has advantages in security, as its logic is automatically verified, while Phalcon requires manual filtering. Overall, Wt's C++ implementation makes it faster, more secure, and more lightweight (and, the authors argue, more environmentally friendly) than Phalcon.
1) The document describes how to create a .NET Core web API application using 4 Docker containers: NGINX load balancer, 2 Core API containers, and a MySQL database container.
2) The Core API containers are built from a Dockerfile and published to Ubuntu containers on the Docker network with static IPs.
3) The MySQL container is run on the network.
4) An NGINX container is built from a Dockerfile using an NGINX configuration file for load balancing, and run on the network.
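The NGINX configuration file mentioned in step 4 might look like the minimal sketch below; the upstream name and the static container IPs are illustrative, matching the kind of addresses assumed in steps 2-3.

```shell
# generate a minimal load-balancing nginx.conf for the NGINX container
cat > nginx-lb.conf <<'EOF'
events {}
http {
    upstream coreapi {
        server 172.18.0.2:5000;   # first Core API container
        server 172.18.0.3:5000;   # second Core API container
    }
    server {
        listen 80;
        location / {
            proxy_pass http://coreapi;   # round-robin across the two APIs
        }
    }
}
EOF

# the NGINX image would then be built with this file copied in, e.g.
#   docker build -t lb . && docker run -d --network api-net -p 80:80 lb
echo "wrote $(wc -l < nginx-lb.conf) lines"
```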
This document provides instructions for installing and configuring nginx, uWSGI, and a simple Python Bottle application on an Ubuntu server. It describes installing necessary packages like nginx, uWSGI, Bottle and its dependencies. It then provides details on configuring uWSGI as an emperor to manage applications, creating a simple test app, writing the uWSGI and nginx configuration files, and testing the running application.
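The pieces described - a test app, a uWSGI vassal config for the emperor, and the socket nginx proxies to - can be sketched as below; paths, the socket location, and the vassal directory are illustrative.

```shell
mkdir -p app vassals

# a simple Bottle test app exposing a WSGI callable for uWSGI
cat > app/app.py <<'EOF'
import bottle

@bottle.route('/')
def index():
    return 'Hello from Bottle behind uWSGI'

# uWSGI looks for a module-level "application" by default
application = bottle.default_app()
EOF

# a vassal config for the uWSGI emperor to pick up
cat > vassals/app.ini <<'EOF'
[uwsgi]
; imports app.py from chdir and uses its "application" callable
chdir = /srv/app
module = app
; nginx proxies to this socket via uwsgi_pass
socket = /tmp/app.sock
master = true
processes = 2
EOF

# the emperor would then be started with:
#   uwsgi --emperor /path/to/vassals
```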
NoSQL databases are often touted for their performance, and whilst it's true that they usually offer great performance out of the box, it still really depends on how you deploy your infrastructure. Dedicated vs cloud? In memory vs on disk? Spindle vs SSD? Replication lag. Multi-data-centre deployment.
This talk considers all the infrastructure requirements of a successful high performance infrastructure with hints and tips that can be applied to any NoSQL technology. It includes things like OS tweaks, disk benchmarks, replication, monitoring and backups.
MongoDB: Optimising for Performance, Scale & Analytics - Server Density
MongoDB is easy to download and run locally but requires some thought and further understanding when deploying to production. At scale, schema design, indexes and query patterns really matter. So does data structure on disk, sharding, replication and data centre awareness. This talk will examine these factors in the context of analytics, and more generally, to help you optimise MongoDB for any scale.
Presented at MongoDB Days London 2013 by David Mytton.
The customer lifecycle - from visitor to customer. Techniques for driving traffic, trials, nurturing, conversion, success monitoring and handling churn.
Presented by David Mytton at Startup Camp Berlin 2015-03-13.
The document discusses how to handle incidents, downtime, and outages. It cites Q1 2015 downtime cost figures of $2.9 billion, $870 million, and $4.1 billion for different companies. It recommends preparing for incidents by having on-call staff and documentation, responding quickly by following an incident response checklist and notifying stakeholders, and performing a postmortem within days to analyse what failed and how to prevent future issues.
Remote startup - building a company from everywhere in the world - Server Density
This document discusses how Server Density grew from 2 employees in 2009 to 12 employees in 2013 while being fully remote. It outlines the company's timeline and some advantages of being fully remote such as access to worldwide talent, lower costs, and no distractions of an office. However, it also notes disadvantages like collaboration being more difficult without in-person interactions. While the company added an office in 2012, the document emphasizes that working remotely is a mindset that is difficult to adopt after initially co-locating. Effective communication and availability are important for fully distributed teams.
This document discusses MongoDB infrastructure at Server Density. It notes that Server Density runs 27 MongoDB nodes and ingests 20TB of data per month, having migrated from MySQL. Key reasons for choosing MongoDB include replication, official drivers, easy deployment, and fast performance out of the box. The document then discusses various MongoDB performance and infrastructure considerations like network throughput, replication lag, failover processes, disk types, backups, and monitoring.
Going from zero to Puppet by Pedro Pessoa, Operations Engineer at Server Density.
Abstract: Using out-of-the-box Puppet for non-sysadmin work - steps from going from no config management to managing 100 nodes and allowing non-sysadmin tasks to be performed.
Speaker Bio: Linux admin for 10+ years. Java/Python/C developer 12+ years. Ops engineer at http://www.serverdensity.com - a hosted server and website monitoring service. Currently processing 12TB+ per month into MongoDB running on dedicated and virtual instances.
www.serverdensity.com/puppetcamp/
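The journey from no config management to managing 100 nodes starts with a first manifest; a minimal sketch of what that might look like (the ntp example is illustrative, not taken from the talk):

```shell
# site.pp - applied to every node that has no more specific match
cat > site.pp <<'EOF'
node default {
  package { 'ntp':
    ensure => installed,
  }
  service { 'ntp':
    ensure  => running,
    enable  => true,
    require => Package['ntp'],
  }
}
EOF

# this can be tried locally without a puppet master:
#   puppet apply site.pp
```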
StartOps: Growing an ops team from 1 founder - Server Density
Bootstrapped startups don't have the luxury of a full team of ops engineers available to respond to issues 24/7, so how can you survive on your own? This talk will tell the story of how to run your infrastructure as a single founder through to growing that into a team of on call engineers. It will include some interesting war stories as well as tips and suggestions for how to run ops at a startup.
Presented at DevOpsDays London 2013 by David Mytton.
Containers seem to have suddenly become the hot new thing everyone is talking about, but what are they?
Why are they important?
How should you use them and what do they mean for cloud infrastructure? This talk will examine the history, technical details and strategy around containerisation from the perspective of developers and operations, consider container runtimes like Rocket and minimal container OSs like Ubuntu Core as well as management layers like Docker and Apache Mesos, and take a look at why cloud providers are launching their own services around them.
Presented by David Mytton at Datacloud Monaco 2015-06-04
The document discusses high performance infrastructure for Server Density which includes 150 servers that have been running since June 2009 and migrated from MySQL to MongoDB. It stores 25TB of data per month. Key aspects of performance discussed are using fast networks like 10 Gigabit Ethernet on AWS, ensuring high memory, using SSDs over spinning disks for performance, and factors like replication lag based on location. The document also compares options like using cloud, dedicated servers, or colocation and discusses monitoring, backups, dealing with outages, and other operational aspects.
Scaling humans - Ops teams and incident management - Server Density
The document discusses the costs of downtime for companies and best practices for incident management teams to prepare for, respond to, and review incidents to minimize downtime costs. It notes that downtime costs companies billions per quarter and recommends teams prepare documentation and contact information, have on-call rotation schedules, log all responses to incidents, provide frequent status updates, gather teams to escalate incidents as needed, and conduct post-mortem reviews within days of an incident.
DevOps Incident Handling - Making friends not enemies - Server Density
David Mytton, CEO of Server Density, presented this talk to the DevOps Meetup in London. It takes you through how to handle DevOps incidents, outages and downtime -- and more specifically how to make friends, not enemies, in the process.
Why Puppet? Why now? Can you get by without using any config management? You probably think you don't have time, or that your project is too small. What can using Puppet really add? How can you justify investing time up front? Maybe you can just do it later?
Getting started with config management can often seem like a big project, especially if you only manage a few systems or have a small team. This talk will examine why you should use Puppet from the beginning. It will examine what you can do with Puppet that you couldn't do otherwise, how much time it will save, and why it's especially important if you think your project has even the smallest chance of scaling in the future.
Presented by David Mytton at Puppet Camp London 2015-04-13
The document discusses Server Density's architecture which includes 100 Ubuntu servers with 50% being virtual, using Nginx, Python, and MongoDB. It handles 25TB of data per month. Puppet is used for configuration, failover, code deploys, and system updates. The document also considers colocating servers versus using a dedicated provider and factors like hardware specs, costs, skills required, and fun.
Infrastructure choices - cloud vs colo vs bare metal - Server Density
This document discusses the differences between cloud, colocation, and bare metal infrastructure options. It covers key performance considerations including CPU, memory, disk, and network latency and bandwidth. Colocation provides hardware at a specific location while maintaining internal skills, but requires managing total spend, hardware specification, and power usage. Cloud infrastructure suits elastic workloads, demand spikes, and unknown requirements, while bare metal is preferable when managed hardware replacement and networking are the priority. Overall, the best option depends on an organisation's specific workload characteristics and skills.
This document discusses improving incident response procedures through practices such as checklists, documented procedures, realistic incident simulations and postmortems. It recommends extended use of checklists to guide responses while still allowing for experience and independent thought. Regular incident response simulations that test both general processes and specific failures can help refine procedures and build confidence. Postmortems should objectively review incidents, suggest improvements and run through scenarios again over time to prevent complacency.
The document discusses how content marketing is a path to reaching customers' goals, not just a company's marketing goals. It emphasizes creating high-quality, useful content that serves customers' needs over superficial or keyword-focused content. The key is developing content that provides long-term value for customers through discipline, honesty and developing one's unique voice.
Joined by Rick Nelson, Technical Solutions Architect from NGINX, Server Density takes you through the do's and don'ts of monitoring NGINX: critical and non-critical metrics to monitor, important alerts to configure, and the best monitoring tools available.
GitHub Actions is the CI/CD tool made by GitHub. Deeply integrated with GitHub features, it can automate not only deployments but also GitHub repository management. In this talk I will cover how we use GitHub Actions at LikeCoin and some issues we encountered.
Mcollective is an open source framework for server orchestration and parallel job execution. It provides asynchronous and event-driven communication between nodes using a message broker like RabbitMQ. Nodes can be targeted based on facts, classes, or other criteria. Plugins allow mcollective to manage configurations, run puppet, install packages, manage firewall rules, and more across large server fleets. It provides a scalable and decentralized alternative to SSH loops for orchestrating infrastructure changes and operations.
The document summarizes Henry Schreiner's work on several Python and C++ scientific computing projects. It describes a scientific Python development guide built from the Scikit-HEP summit. It also outlines Henry's work on pybind11 for C++ bindings, scikit-build for building extensions, cibuildwheel for building wheels on CI, and several other related projects.
Webpack is a module bundler that builds out a dependency graph from entry points to bundle assets. It understands JavaScript and JSON files by default but uses loaders to process other file types. Plugins provide additional functionality beyond loading and bundling like generating HTML files and service workers. Workbox plugins help precache assets and implement caching strategies in service workers to improve performance. Webpack supports different modes for development and production builds and includes optimizations like scope hoisting to improve bundle performance.
Kubernetes from Dev to Prod summarizes GoEuro's transition from a legacy environment to using Kubernetes and CI/CD pipelines from development to production. Key points:
- GoEuro transitioned 50+ services across 150 engineers from separate development and operations teams to using Kubernetes, Docker, and CI/CD pipelines in 4 months.
- They developed "hyper-vm" single-node Kubernetes VMs for local development and testing and "y8s" for sharing Kubernetes configurations across environments from development through production.
- CI/CD pipelines were automated using GitLab CI and custom implementations running jobs on "hyper-vm" agents to deploy to environments from preview through production.
- Additional services
Presentation at the March 2019 Dutch Postgres User Group Meetup on lessons learnt while migrating from Oracle to Postgres, demonstrated via Vagrant test environments and generic pgbench datasets.
Kubernetes is exploding in popularity right now and has all the buzz and cargo-culting that Docker enjoyed just a few years ago. But what even is Kubernetes? How do I run my PHP apps in it? And should I run my PHP apps in it at all?
Tuesday, July 30th session of the vBrownBag OpenStack Sack Lunch Series: Couch to OpenStack. We cover Nova, the Compute Service that deploys and runs VMs.
Setting up the Hyperledger Composer in Ubuntu - kesavan N B
The document provides steps to set up Hyperledger Composer in Ubuntu by:
1. Installing development tools like composer-cli, generator-hyperledger-composer, and composer-rest-server.
2. Starting Hyperledger Fabric.
3. Creating a business network definition from a sample, modifying files, and defining models and transactions.
4. Building a business network archive (.bna) file.
5. Deploying the .bna file to the running Hyperledger Fabric.
6. Generating a REST API using composer-rest-server to interact with the business network.
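Steps 4-6 above can be condensed into a command sequence like the sketch below; exact flags differ between Composer releases, and the card, network name, and version here are illustrative assumptions.

```shell
# 4. build the .bna archive from the business network directory
composer archive create --sourceType dir --sourceName . -a my-network.bna

# 5. deploy it to the running Hyperledger Fabric
composer network install --card PeerAdmin@hlfv1 --archiveFile my-network.bna
composer network start --networkName my-network --networkVersion 0.0.1 \
  --card PeerAdmin@hlfv1 --networkAdmin admin \
  --networkAdminEnrollSecret adminpw --file admin.card

# 6. expose a REST API for the deployed business network
composer-rest-server -c admin@my-network -n never
```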
This document discusses packaging Ruby and Rails applications for production. It covers using system packages versus gems, configuration management tools like Chef and Puppet, creating Debian packages, packaging gems, build servers, pain points like outdated Rubygems packages, and ideas for deeper Bundler integration and packaging gems by default. Overall it presents strategies for deploying Ruby applications as system packages for production servers.
Hands on Docker - Launch your own LEMP or LAMP stack - SunshinePHP - Dana Luther
In this tutorial we will go over setting up a standard LEMP stack for development use and learn how to modify it to mimic your production/pre-production environments as closely as possible. We will go over how to switch from Nginx to Apache, upgrade PHP versions and introduce additional storage engines such as Redis to the equation. We'll also step through how to run both unit and acceptance suites using headless Selenium images in the stack. Leave here fully confident in knowing that whatever environment you get thrown into, you can replicate it and work in it comfortably.
Puppet Camp Berlin 2015: Pedro Pessoa | Puppet at the center of everything ... - NETWAYS
Pedro shows a short overview on how they use puppet at Server Density to manage the entire infrastructure and code, content mostly taken from previous presentations.
The focus will be on reducing the 4-year-old code base by making more use of Forge modules while migrating to Puppet Enterprise 3.
Plack provides a common interface called PSGI (Perl Server Gateway Interface) that allows Perl web applications to run on different web servers. It includes tools like Plackup for running PSGI applications from the command line and middleware for adding functionality. Plack has adapters that allow many existing Perl web frameworks to run under PSGI. It also provides high performance PSGI servers and utilities for building and testing PSGI applications.
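A PSGI application is just a code reference taking the request environment, which is why Plackup can serve it directly; a minimal sketch (file name and port are the plackup defaults):

```shell
# the smallest possible PSGI application, saved where plackup expects it
cat > app.psgi <<'EOF'
# a PSGI app: a code ref taking the environment hash and returning
# [ status, headers, body ]
my $app = sub {
    my $env = shift;
    return [ 200, [ 'Content-Type' => 'text/plain' ], [ "Hello, PSGI\n" ] ];
};
$app;   # the .psgi file must return the app as its last value
EOF

# plackup app.psgi          # serves it on http://localhost:5000
# plackup -s Starman app.psgi   # or swap in a high-performance PSGI server
```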
This presentation was given at the Boston Django meetup on November 16, and surveyed several leading PaaS providers including Stackato, Dotcloud, OpenShift and Heroku.
For each PaaS provider, I documented the steps necessary to deploy Mezzanine, a popular Django-based CMS and blogging platform.
At the end of the presentation, I do a wrap-up of the different providers and provide a comparison matrix showing which providers have which features. This matrix is likely to go out-of-date quickly because these providers are adding new features all the time.
TensorFlow can be installed and run in a distributed environment using Docker. The document discusses setting up TensorFlow workers and parameter servers in Docker containers using a Docker compose file. It demonstrates building Docker images for each role, and configuring the containers to communicate over gRPC. A Jupyter server container is also built to host notebooks. The distributed TensorFlow environment is deployed locally for demonstration purposes. Future directions discussed include running the distributed setup on a native cluster using tools like Docker Swarm or RancherOS, and testing TensorFlow with GPU support in Docker.
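The compose file described might look like the sketch below; the image tag, trainer script name, and ports are illustrative assumptions, with each service assumed to start a `tf.train.Server` for the gRPC communication mentioned above.

```shell
# illustrative docker-compose file for one parameter server, two workers,
# and a Jupyter container
cat > docker-compose.yml <<'EOF'
version: '2'
services:
  ps0:                      # parameter server
    image: tensorflow/tensorflow
    command: python /trainer.py --job_name=ps --task_index=0
  worker0:
    image: tensorflow/tensorflow
    command: python /trainer.py --job_name=worker --task_index=0
  worker1:
    image: tensorflow/tensorflow
    command: python /trainer.py --job_name=worker --task_index=1
  notebook:                 # Jupyter server hosting the notebooks
    image: tensorflow/tensorflow
    command: jupyter notebook --ip=0.0.0.0
    ports:
      - "8888:8888"
EOF
```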
Apache Bigtop and ARM64 / AArch64 - Empowering Big Data Everywhere - Ganesh Raju
Apache Bigtop packages the Hadoop ecosystem into RPM and DEB packages. It provides a foundation for commercial Hadoop distributions and services. Bigtop features include a build toolchain, package framework, Puppet deployment scripts, and integration test framework. The next release of Bigtop 1.4 is upcoming in early April 2019, adding AArch64 support, improved testing, and package version updates. Future work includes focusing on core big data components like Spark and Flink, adding Kubernetes and cloud support, and expanding integrations.
CAS 4.2 focuses on easy configuration through auto-configuration and reducing XML, universal protocol support through additional modules, and delegated authentication through pac4j integrations. New features include configuration via properties files instead of XML, support for OAuth, SAML, and OpenID Connect through additional dependencies, and replacing Spring Security with pac4j for security. Future plans for CAS 4.3 include Java 8 support, multifactor authentication, improved OAuth/OpenID Connect, SAML SSO, and a Groovy management console.
5. HTTP Load Balancer
from Pound to nginx

New product: new load balancer.

nginx gives us:
- WebSockets
- the SPDY standard

Keep:
- manifests pulled from our GitHub repo by the puppet master
- use of the Puppet Console and Live Management to trigger transient changes
6. Reinventing the wheel
(don't)

Writing our own nginx manifest?
- that would add yet another one to the collection

Community reach?
- had our problem already been solved?
- or was there a kick start we could stand on?

http://www.flickr.com/photos/conskeptical/
9. Integration

A) Get the actual code in:

puppet module install puppetlabs/nginx

(or)

git submodule add https://github.com/puppetlabs/puppetlabs-nginx.git

B) Run it on existing nodes:

- no parameterised classes on the PE Console
(or)
- merge our site.pp (which is empty) and the console, it being an ENC and all - (how-merging-works)