Intro to MongoDB
Get a jumpstart on MongoDB, use cases, and next steps for building your first app with Buzz Moschetti, MongoDB Enterprise Architect.
@BuzzMoschetti
The document is a slide presentation that introduces MongoDB and provides an overview of the topic. It defines MongoDB as a document-oriented, open-source database that provides high performance, high availability, and easy scalability. It also discusses MongoDB's use in big data applications and explains that it is non-relational, storing data as JSON-like documents in collections without a predefined schema. The presentation provides steps for installing MongoDB and describes basic concepts such as databases, collections, documents, and commands.
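To make the "JSON-like documents in collections" idea concrete, here is a minimal sketch of what such a document can look like. The collection and field names are hypothetical, not taken from the slides:

```javascript
// A hypothetical "users" document: no schema is declared up front,
// and fields can be nested objects or arrays.
const userDoc = {
  name: "Ada Lovelace",
  joined: new Date("2016-03-30"),
  interests: ["databases", "mathematics"],  // array field
  address: {                                // embedded document
    city: "London",
    country: "UK"
  }
};

// In the mongo shell the same literal would be inserted with:
//   db.users.insertOne(userDoc)
console.log(Object.keys(userDoc)); // [ 'name', 'joined', 'interests', 'address' ]
```

Because no schema is enforced, a second document in the same collection could add or omit fields freely.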
This document provides an introduction to NoSQL and MongoDB. It explains that NoSQL databases are non-relational database management systems that avoid joins and are easy to scale. It then summarizes the different flavors of NoSQL, including key-value stores, graphs, BigTable, and document stores. The remainder of the document focuses on MongoDB, describing its structure, how to perform inserts and searches, and features like map-reduce and replication. It concludes by encouraging the reader to try MongoDB themselves.
The document provides an introduction and overview of MongoDB, including what NoSQL is, the different types of NoSQL databases, when to use MongoDB, its key features like scalability and flexibility, how to install and use basic commands like creating databases and collections, and references for further learning.
The document discusses MongoDB concepts including:
- MongoDB uses a document-oriented data model with dynamic schemas and supports embedding and linking of related data.
- Replication allows for high availability and data redundancy across multiple nodes.
- Sharding provides horizontal scalability by distributing data across nodes in a cluster.
- MongoDB supports both eventual and immediate consistency models.
The document introduces MongoDB as a scalable, high-performance, open source, schema-free, document-oriented database. It discusses MongoDB's philosophy of flexibility and scalability over relational semantics. The main features covered are document storage, querying, indexing, replication, MapReduce and auto-sharding. Concepts like collections, documents and cursors are mapped to relational database terms. Example uses include data warehousing and debugging.
The Right (and Wrong) Use Cases for MongoDB (MongoDB)
The document discusses the right and wrong use cases for MongoDB. It outlines some of the key benefits of MongoDB, including its performance, scalability, data model and query model. Specific use cases that are well-suited for MongoDB include building a single customer view, powering mobile applications, and performing real-time analytics. Cache-only workloads are identified as not being a good use case. The document provides examples of large companies successfully using MongoDB for these right use cases.
MongoDB is an open-source, document-oriented database that provides flexible schemas, horizontal scaling, and high performance. It stores data as JSON-like documents with dynamic schemas, making the integration of data easier for developers. MongoDB can be scaled horizontally and supports replication and load balancing for high availability.
This document provides an overview and introduction to MongoDB. It discusses how new types of applications, data, volumes, development methods and architectures necessitated new database technologies like NoSQL. It then defines MongoDB and describes its features, including using documents to store data, dynamic schemas, querying capabilities, indexing, auto-sharding for scalability, replication for availability, and using memory for performance. Use cases are presented for companies like Foursquare and Craigslist that have migrated large volumes of data and traffic to MongoDB to gain benefits like flexibility, scalability, availability and ease of use over traditional relational database systems.
MongoDB is a cross-platform document-oriented database that provides high performance, high availability, and easy scalability. It uses a document-based data model where data is stored in JSON-like documents within collections, instead of using tables with rows as in relational databases. MongoDB can be scaled horizontally and supports replication and sharding. It also supports dynamic queries on documents using a document-based query language.
This document discusses how to achieve scale with MongoDB. It covers optimization tips like schema design, indexing, and monitoring. Vertical scaling involves upgrading hardware like RAM and SSDs. Horizontal scaling involves adding shards to distribute load. The document also discusses how MongoDB scales for large customers through examples of deployments handling high throughput and large datasets.
Elasticsearch is a free and open source distributed search and analytics engine. It allows documents to be indexed and searched quickly and at scale. Elasticsearch is built on Apache Lucene and uses RESTful APIs. Documents are stored in JSON format across distributed shards and replicas for fault tolerance and scalability. Elasticsearch is used by many large companies due to its ability to easily scale with data growth and handle advanced search functions.
The document summarizes a meetup about NoSQL databases hosted by AWS in Sydney in 2012. It includes an agenda with presentations on Introduction to NoSQL and using EMR and DynamoDB. NoSQL is introduced as a class of databases that don't use SQL as the primary query language and are focused on scalability, availability and handling large volumes of data in real-time. Common NoSQL databases mentioned include DynamoDB, BigTable and document databases.
This document provides an overview of MongoDB administration commands and CRUD operations. It discusses how to select databases, show collections, import/export data, and perform basic CRUD operations like insert, find, update, and remove in MongoDB. It also covers additional find methods like logical operators, array operations, and accessing embedded documents. Methods for updating include $set, $inc, $unset, and multi updates.
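As an illustration of the update operators mentioned above, the plain-JavaScript sketch below mimics what `$set`, `$inc`, and `$unset` do to a single document; in the real shell you would run something like `db.items.updateOne({_id: 1}, {$inc: {qty: 5}})`. The collection and field names here are hypothetical:

```javascript
// In-memory illustration of MongoDB's $set / $inc / $unset semantics.
function applyUpdate(doc, update) {
  const out = { ...doc };
  for (const [field, value] of Object.entries(update.$set ?? {})) out[field] = value;
  for (const [field, amount] of Object.entries(update.$inc ?? {})) out[field] = (out[field] ?? 0) + amount;
  for (const field of Object.keys(update.$unset ?? {})) delete out[field];
  return out;
}

const item = { _id: 1, name: "widget", qty: 10, legacyFlag: true };
const updated = applyUpdate(item, {
  $set: { name: "super widget" },
  $inc: { qty: 5 },
  $unset: { legacyFlag: "" }
});
console.log(updated); // { _id: 1, name: 'super widget', qty: 15 }
```

In the real server, these operators are applied atomically to one document; the sketch only shows the resulting shape.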
Jay Runkel presented a methodology for sizing MongoDB clusters to meet the requirements of an application. The key steps are: 1) Analyze data size and index size, 2) Estimate the working set based on frequently accessed data, 3) Use a simplified model to estimate IOPS and adjust for real-world factors, 4) Calculate the number of shards needed based on storage, memory and IOPS requirements. He demonstrated this process for an application that collects mobile events, requiring a cluster that can store over 200 billion documents with 50,000 IOPS.
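The sizing arithmetic behind steps 1-4 can be sketched with a back-of-the-envelope calculation. The 200 billion documents and 50,000 IOPS come from the summary above; every other number (average document size, working-set ratio, per-shard capacity) is an illustrative assumption, not a MongoDB recommendation:

```javascript
// Hypothetical cluster-sizing sketch: every capacity figure below is an
// illustrative assumption, not a vendor recommendation.
const docs = 200e9;            // documents to store (from the talk)
const avgDocBytes = 250;       // assumed average document size
const dataTB = docs * avgDocBytes / 1e12;          // total data size in TB

const workingSetRatio = 0.05;  // assume 5% of the data is "hot"
const ramNeededTB = dataTB * workingSetRatio;

const requiredIOPS = 50000;    // from the talk
const perShard = { storageTB: 4, ramTB: 0.25, iops: 10000 }; // assumed per-shard capacity

// Shard count is driven by whichever resource is the bottleneck.
const shards = Math.max(
  Math.ceil(dataTB / perShard.storageTB),
  Math.ceil(ramNeededTB / perShard.ramTB),
  Math.ceil(requiredIOPS / perShard.iops)
);
console.log({ dataTB, shards }); // { dataTB: 50, shards: 13 }
```

With these assumed numbers, storage (50 TB / 4 TB per shard) is the bottleneck, so the cluster needs 13 shards; changing any assumption can shift which resource dominates.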
Whether you're a MongoDB professional or totally new to document databases, our MongoDB performance success factors & evaluation framework has something for you.
Curious about MongoDB performance?
Mydbops CTO Manosh Malai walks through MongoDB performance best practices and analysis tools.
Hadoop 3.0 has been years in the making, and now it's finally arriving. Andrew Wang and Daniel Templeton offer an overview of new features, including HDFS erasure coding, YARN Timeline Service v2, YARN federation, and much more, and discuss current release management status and community testing efforts dedicated to making Hadoop 3.0 the best Hadoop major release yet.
This document discusses common use cases for MongoDB and why it is well-suited for them. It describes how MongoDB can handle high volumes of data feeds, operational intelligence and analytics, product data management, user data management, and content management. Its flexible data model, high performance, scalability through sharding and replication, and support for dynamic schemas make it a good fit for applications that need to store large amounts of data, handle high throughput of reads and writes, and have low latency requirements.
This document discusses using MongoDB to build location-based applications. It describes how to model location and check-in data, perform queries and analytics on that data, and deploy MongoDB in both unsharded and sharded configurations to scale the application. Examples of using MongoDB for a location application include storing location documents with name, address, tags, latitude/longitude, and user tips, and user documents with check-in arrays referencing location IDs.
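A sketch of the two document shapes described above, with field names following the summary; the concrete values, and the `$near` query in the comment, are illustrative:

```javascript
// Hypothetical location document for a check-in app.
const location = {
  _id: "loc123",
  name: "Cafe Grumpy",
  address: "193 Meserole Ave, Brooklyn, NY",
  tags: ["coffee", "wifi"],
  latlong: [40.7262, -73.9542],  // indexed with db.locations.createIndex({ latlong: "2d" })
  tips: [{ user: "nosh", tip: "great cold brew" }]
};

// Hypothetical user document whose check-in array references location _ids.
const user = {
  _id: "nosh",
  email: "nosh@example.com",
  checkins: ["loc123"]
};

// A proximity query in the shell might look like:
//   db.locations.find({ latlong: { $near: [40.7262, -73.9542] } })
console.log(user.checkins.includes(location._id)); // true
```

Note the trade-off the talk describes: tips are embedded in the location (read together), while check-ins reference locations by `_id` (unbounded growth, queried separately).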
Webinar: General Technical Overview of MongoDB for Dev Teams (MongoDB)
In this talk we will focus on several of the reasons why developers have come to love the richness, flexibility, and ease of use that MongoDB provides. First we will give a brief introduction of MongoDB, comparing and contrasting it to the traditional relational database. Next, we’ll give an overview of the APIs and tools that are part of the MongoDB ecosystem. Then we’ll look at how MongoDB CRUD (Create, Read, Update, Delete) operations work, and also explore query, update, and projection operators. Finally, we will discuss MongoDB indexes and look at some examples of how indexes are used.
Norberto Leite gives an introduction to MongoDB. He discusses that MongoDB is a document database that is open source, high performance, and horizontally scalable. He demonstrates how to install MongoDB, insert documents into collections, query documents, and update documents. Leite emphasizes that MongoDB allows for flexible schema design and the ability to evolve schemas over time to match application needs.
Building Web Applications with MongoDB presentation (Murat Çakal)
The document introduces building web applications using MongoDB, a document-oriented database. It discusses MongoDB's data modeling and querying capabilities, including examples of modeling user and location data for a check-in application. The document also covers indexing, insertion, updating, and analytics queries for the sample location and user data models.
Building Your First MongoDB App ~ Metadata Catalog (hungarianhc)
These are the slides I used for a MongoDB webinar about creating your first application with MongoDB. They start with a general MongoDB overview, continuing onto how to model data for a metadata catalog. At this point in the presentation, I break to do a live demonstration. Afterwards, I touch on scaling your application with MongoDB.
MongoDB is a non-relational database that uses a document-based data model. It is an alternative to traditional relational databases and is optimized for storing large amounts of unstructured and semi-structured data. MongoDB does not require a predefined schema and allows flexible, dynamic queries against documents using JavaScript. While relational databases are better suited for transactions, MongoDB is designed for horizontal scalability, faster queries, and flexible data modeling.
Christian Kvalheim gave an introduction to NoSQL and MongoDB. Some key points:
1) MongoDB is a scalable, high-performance, open source NoSQL database that uses a document-oriented model.
2) It supports indexing, replication, auto-sharding for horizontal scaling, and querying.
3) Documents are stored in JSON-like records which can contain various data types including nested objects and arrays.
The document describes MongoDB as an open-source, high-performance, document-oriented database. It stores data in flexible, JSON-like documents, with schemaless collections. It supports dynamic queries, indexing, aggregation and scaling horizontally. MongoDB is suited for scaling out web applications, caching, and high volume use cases where SQL may not be a good fit.
MongoDB, PHP and the Cloud - PHP Cloud Summit 2011 (Steven Francia)
An introduction to using MongoDB with PHP.
Walking through the basics of schema design, connecting to a DB, performing CRUD operations and queries in PHP.
MongoDB runs great in the cloud, but there are some things you should know. In this session we'll explore scaling and performance characteristics of running Mongo in the cloud as well as best practices for running on platforms like Amazon EC2.
Analytics with MongoDB Aggregation Framework and Hadoop Connector (Henrik Ingo)
This document provides an overview of analytics with MongoDB and Hadoop Connector. It discusses how to collect and explore data, use visualization and aggregation, and make predictions. It describes how MongoDB can be used for data collection, pre-aggregation, and real-time queries. The Aggregation Framework and MapReduce in MongoDB are explained. It also covers using the Hadoop Connector to process large amounts of MongoDB data in Hadoop and writing results back to MongoDB. Examples of analytics use cases like recommendations, A/B testing, and personalization are briefly outlined.
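As an illustration of the Aggregation Framework style of analysis described above, the comment shows a hypothetical shell pipeline, and the plain-JavaScript code beneath it computes the same per-key count in memory:

```javascript
// Shell equivalent over a hypothetical "events" collection:
//   db.events.aggregate([
//     { $match: { type: "click" } },
//     { $group: { _id: "$page", clicks: { $sum: 1 } } }
//   ])

// The same $match + $group logic over an in-memory array:
const events = [
  { type: "click", page: "/home" },
  { type: "click", page: "/pricing" },
  { type: "view",  page: "/home" },
  { type: "click", page: "/home" }
];

const clicksPerPage = events
  .filter(e => e.type === "click")     // $match stage
  .reduce((acc, e) => {                // $group with $sum: 1
    acc[e.page] = (acc[e.page] ?? 0) + 1;
    return acc;
  }, {});

console.log(clicksPerPage); // { '/home': 2, '/pricing': 1 }
```

The pre-aggregation pattern mentioned in the summary is this same idea run incrementally at write time rather than at query time.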
MongoDB Evenings Dallas: What's the Scoop on MongoDB & Hadoop (MongoDB)
What's the Scoop on MongoDB & Hadoop
Jake Angerman, Sr. Solutions Architect, MongoDB
MongoDB Evenings Dallas
March 30, 2016 at the Addison Treehouse, Dallas, TX
Introduction to MongoDB
MongoDB Database
Document Model
BSON
Data Model
CRUD operations
High Availability and Scalability
Replication
Sharding
Hands-On MongoDB
This document discusses MongoDB and the needs of Rivera Group, an IT services company. It notes that Rivera Group has been using MongoDB since 2012 to store large, multi-dimensional datasets with heavy read/write and audit requirements. The document outlines some of the challenges Rivera Group faces around indexing, aggregation, and flexibility in querying datasets.
Eagle6 is a product that uses system artifacts to create a replica model representing a near real-time view of system architecture. Eagle6 was built to collect system data (log files, application source code, etc.) and to link system behaviors so that the user can quickly identify risks associated with unknown or unwanted behavioral events that may have unexpected impacts on seemingly unrelated downstream systems. This session presents the capabilities of the Eagle6 modeling product and how we are using MongoDB to support near real-time analysis of large disparate datasets.
Back to Basics Webinar 2 - Your First MongoDB Application (Joe Drumgoole)
How to build a MongoDB application from scratch in the MongoDB Shell and Python. How to add indexes and use explain to make sure you are using them properly.
Back to Basics Webinar 2: Your First MongoDB Application (MongoDB)
The document provides instructions for installing and using MongoDB to build a simple blogging application. It demonstrates how to install MongoDB, connect to it using the mongo shell, insert and query sample data like users and blog posts, update documents to add comments, and more. The goal is to illustrate how to model and interact with data in MongoDB for a basic blogging use case.
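The "update documents to add comments" step is typically done with the `$push` operator in the shell, e.g. `db.posts.updateOne({_id: postId}, {$push: {comments: comment}})`. The in-memory sketch below, with hypothetical field names, shows the effect of such an update on a blog post document:

```javascript
// A hypothetical blog post document with comments embedded as an array.
const post = {
  _id: "post1",
  author: "jdrumgoole",
  title: "My first MongoDB post",
  comments: []
};

// What { $push: { comments: comment } } does to the document in memory.
function pushComment(doc, comment) {
  return { ...doc, comments: [...doc.comments, comment] };
}

const updated = pushComment(post, { user: "alice", text: "Nice intro!" });
console.log(updated.comments.length); // 1
```

Embedding comments inside the post means the whole conversation comes back in one read, which is the modeling choice this kind of blogging demo usually makes.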
Back to Basics, Webinar 2: Your First MongoDB Application (MongoDB)
This is the second webinar in the Back to Basics series, which offers an introduction to the MongoDB database. In this webinar we demonstrate how to build a basic blogging application in MongoDB.
Back to Basics 2017: My First MongoDB Application (MongoDB)
Discover:
How to install MongoDB and use the MongoDB shell
Basic CRUD operations
How to analyze query performance and add an index
On Tuesday 18th March, the MongoDB team held an online Cloud Workshop in place of the in-person event that was planned.
Attendees learnt how to build modern, event-driven applications powered by MongoDB Atlas on Google Cloud Platform (GCP) and were shown relevant operational and security best practices, so they could get started immediately with their own digital transformations.
MongoDB SoCal 2020: Migrate Anything* to MongoDB Atlas (MongoDB)
This presentation discusses migrating data from other data stores to MongoDB Atlas. It begins by explaining why MongoDB and Atlas are good choices for data management. Several preparation steps are covered, including sizing the target Atlas cluster, increasing the source oplog, and testing connectivity. Live migration, mongomirror, and dump/restore options are presented for migrating between replica sets or sharded clusters. Post-migration steps like monitoring and backups are also discussed. Finally, migrating from other data stores like AWS DocumentDB, Azure CosmosDB, DynamoDB, and relational databases is briefly covered.
MongoDB SoCal 2020: Go on a Data Safari with MongoDB Charts! (MongoDB)
These days, everyone is expected to be a data analyst. But with so much data available, how can you make sense of it and be sure you're making the best decisions? One great approach is to use data visualizations. In this session, we take a complex dataset and show how the breadth of capabilities in MongoDB Charts can help you turn bits and bytes into insights.
MongoDB SoCal 2020: Using MongoDB Services in Kubernetes: Any Platform, Devel... (MongoDB)
MongoDB Kubernetes operator and MongoDB Open Service Broker are ready for production operations. Learn about how MongoDB can be used with the most popular container orchestration platform, Kubernetes, and bring self-service, persistent storage to your containerized applications. A demo will show you how easy it is to enable MongoDB clusters as an External Service using the Open Service Broker API for MongoDB.
MongoDB SoCal 2020: A Complete Methodology of Data Modeling for MongoDB (MongoDB)
Are you new to schema design for MongoDB, or are you looking for a more complete or agile process than what you are following currently? In this talk, we will guide you through the phases of a flexible methodology that you can apply to projects ranging from small to large with very demanding requirements.
MongoDB SoCal 2020: From Pharmacist to Analyst: Leveraging MongoDB for Real-T... (MongoDB)
Humana, like many companies, is tackling the challenge of creating real-time insights from data that is diverse and rapidly changing. This is our journey of how we used MongoDB to combine traditional batch approaches with streaming technologies to provide continuous alerting capabilities from real-time data streams.
MongoDB SoCal 2020: Best Practices for Working with IoT and Time-series Data (MongoDB)
Time-series data is increasingly at the heart of modern applications - think IoT, stock trading, clickstreams, social media, and more. With the move from batch to real-time systems, the efficient capture and analysis of time-series data can enable organizations to better detect and respond to events ahead of their competitors, or to improve operational efficiency to reduce cost and risk. Working with time-series data is often different from working with regular application data, and there are best practices you should observe.
This talk covers:
Common components of an IoT solution
The challenges involved with managing time-series data in IoT applications
Different schema designs, and how these affect memory and disk utilization – two critical factors in application performance.
How to query, analyze and present IoT time-series data using MongoDB Compass and MongoDB Charts
At the end of the session, you will have a better understanding of key best practices in managing IoT time-series data with MongoDB.
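One schema-design technique commonly applied to this kind of workload is the bucket pattern: grouping many readings into one document per sensor per time window, trading slightly larger documents for far fewer documents and index entries. A minimal in-memory sketch, with hypothetical sensor IDs and an assumed one-hour window (in the shell this would be an upsert using `$push` and `$inc`):

```javascript
// Bucket pattern sketch: one document per sensor per hour instead of
// one document per reading.
function bucketFor(reading) {
  const hour = new Date(reading.ts);
  hour.setUTCMinutes(0, 0, 0);   // truncate the timestamp to the hour
  return `${reading.sensorId}:${hour.toISOString()}`;
}

const buckets = {};
for (const r of [
  { sensorId: "s1", ts: "2020-01-01T10:05:00Z", temp: 21.5 },
  { sensorId: "s1", ts: "2020-01-01T10:45:00Z", temp: 22.0 },
  { sensorId: "s1", ts: "2020-01-01T11:02:00Z", temp: 22.4 }
]) {
  const key = bucketFor(r);
  if (!buckets[key]) buckets[key] = { readings: [], count: 0 };
  buckets[key].readings.push({ ts: r.ts, temp: r.temp });  // $push in the shell
  buckets[key].count += 1;                                 // $inc in the shell
}
console.log(Object.keys(buckets).length); // 2 buckets: the 10:00 and 11:00 hours
```

Fewer, larger documents means fewer index entries per sensor, which is exactly the memory/disk trade-off the talk highlights.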
MongoDB .local San Francisco 2020: Powering the new age data demands [Infosys] (MongoDB)
Our clients have unique use cases and data patterns that mandate the choice of a particular strategy. To implement these strategies, it is mandatory that we unlearn a lot of relational concepts while designing and rapidly developing efficient applications on NoSQL. In this session, we will talk about some of our client use cases, the strategies we have adopted, and the features of MongoDB that assisted in implementing these strategies.
MongoDB .local San Francisco 2020: Using Client Side Encryption in MongoDB 4.2 (MongoDB)
Encryption is not a new concept to MongoDB. Encryption may occur in-transit (with TLS) and at-rest (with the encrypted storage engine). But MongoDB 4.2 introduces support for Client Side Encryption, ensuring the most sensitive data is encrypted before ever leaving the client application. Even full access to your MongoDB servers is not enough to decrypt this data. And better yet, Client Side Encryption can be enabled at the "flick of a switch".
This session covers using Client Side Encryption in your applications. This includes the necessary setup, how to encrypt data without sacrificing queryability, and what trade-offs to expect.
MongoDB .local San Francisco 2020: Using MongoDB Services in Kubernetes: any ... (MongoDB)
MongoDB Kubernetes operator is ready for prime time. Learn how MongoDB can be used with the most popular orchestration platform, Kubernetes, and bring self-service, persistent storage to your containerized applications.
MongoDB .local San Francisco 2020: Go on a Data Safari with MongoDB Charts! (MongoDB)
These days, everyone is expected to be a data analyst. But with so much data available, how can you make sense of it and be sure you're making the best decisions? One great approach is to use data visualizations. In this session, we take a complex dataset and show how the breadth of capabilities in MongoDB Charts can help you turn bits and bytes into insights.
MongoDB .local San Francisco 2020: From SQL to NoSQL -- Changing Your Mindset (MongoDB)
When you need to model data, is your first instinct to start breaking it down into rows and columns? Mine used to be too. When you want to develop apps in a modern, agile way, NoSQL databases can be the best option. Come to this talk to learn how to take advantage of all that NoSQL databases have to offer and discover the benefits of changing your mindset from the legacy, tabular way of modeling data. We’ll compare and contrast the terms and concepts in SQL databases and MongoDB, explain the benefits of using MongoDB compared to SQL databases, and walk through data modeling basics so you feel confident as you begin using MongoDB.
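The mindset shift described above often comes down to replacing joins with embedding. A small sketch of the same data in both shapes; the table, collection, and field names are hypothetical:

```javascript
// Tabular mindset: two tables joined by a foreign key.
const customerRow = { id: 1, name: "Acme Corp" };
const orderRows = [
  { id: 100, customerId: 1, total: 250 },
  { id: 101, customerId: 1, total: 75 }
];

// Document mindset: one customer document embedding its orders,
// so the common "customer with their orders" read needs no join.
const customerDoc = {
  _id: customerRow.id,
  name: customerRow.name,
  orders: orderRows
    .filter(o => o.customerId === customerRow.id)
    .map(o => ({ orderId: o.id, total: o.total }))
};

console.log(customerDoc.orders.length); // 2
```

The guiding question changes from "what are my entities?" to "what does the application read and write together?", which drives whether to embed or reference.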
MongoDB .local San Francisco 2020: MongoDB Atlas Jumpstart (MongoDB)
Join this talk and test session with a MongoDB Developer Advocate where you'll go over the setup, configuration, and deployment of an Atlas environment. Create a service that you can take back in a production-ready state and prepare to unleash your inner genius.
MongoDB .local San Francisco 2020: Tips and Tricks++ for Querying and Indexin... (MongoDB)
The document discusses guidelines for ordering fields in compound indexes to optimize query performance. It recommends the E-S-R approach: placing equality fields first, followed by sort fields, and range fields last. This allows indexes to leverage equality matches, provide non-blocking sorts, and minimize scanning. Examples show how indexes ordered by these guidelines can support queries more efficiently by narrowing the search bounds.
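Applied to a concrete query, the E-S-R guideline looks like the commented shell snippet below (collection and field names are hypothetical); the small helper underneath just encodes the ordering rule itself:

```javascript
// Query: California orders over $100, newest first.
//   db.orders.find({ state: "CA", total: { $gt: 100 } }).sort({ orderDate: -1 })
//
// E-S-R ordering for the supporting compound index:
// Equality (state) first, then Sort (orderDate), then Range (total):
//   db.orders.createIndex({ state: 1, orderDate: -1, total: 1 })

// A tiny helper that orders index fields by the E-S-R rule.
function esrOrder(fields) {
  const rank = { equality: 0, sort: 1, range: 2 };
  return [...fields].sort((a, b) => rank[a.role] - rank[b.role]).map(f => f.name);
}

console.log(esrOrder([
  { name: "total", role: "range" },
  { name: "orderDate", role: "sort" },
  { name: "state", role: "equality" }
])); // [ 'state', 'orderDate', 'total' ]
```

Putting the range field last matters because once the index hits a range predicate, later index fields can no longer provide a non-blocking sort.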
MongoDB .local San Francisco 2020: Aggregation Pipeline Power++ (MongoDB)
Aggregation pipeline has been able to power your analysis of data since version 2.2. In 4.2 we added more power and now you can use it for more powerful queries, updates, and outputting your data to existing collections. Come hear how you can do everything with the pipeline, including single-view, ETL, data roll-ups and materialized views.
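The "outputting your data to existing collections" capability refers to the `$merge` stage added in 4.2. The comment sketches a hypothetical materialized-view pipeline, and the code beneath shows, in memory, the upsert-style behavior that `whenMatched: "replace"` describes:

```javascript
// Shell sketch of a 4.2 materialized view (hypothetical collections):
//   db.sales.aggregate([
//     { $group: { _id: "$region", revenue: { $sum: "$amount" } } },
//     { $merge: { into: "salesByRegion", whenMatched: "replace" } }
//   ])

// In-memory picture of what $merge does: upsert grouped results
// into an existing keyed collection instead of overwriting it wholesale.
const salesByRegion = { east: { _id: "east", revenue: 100 } }; // existing view
const newResults = [
  { _id: "east", revenue: 140 },  // matched: replaced
  { _id: "west", revenue: 60 }    // not matched: inserted
];
for (const doc of newResults) {
  salesByRegion[doc._id] = doc;
}
console.log(salesByRegion.east.revenue, salesByRegion.west.revenue); // 140 60
```

This is what distinguishes `$merge` from the older `$out` stage, which replaces the entire target collection on every run.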
MongoDB .local San Francisco 2020: A Complete Methodology of Data Modeling fo... (MongoDB)
The document describes a methodology for data modeling with MongoDB. It begins by recognizing the differences between document and tabular databases, then outlines a three-step methodology: 1) describe the workload by listing queries, 2) identify and model relationships between entities, and 3) apply relevant patterns when modeling for MongoDB. The document uses examples around modeling a coffee shop franchise to illustrate modeling approaches and techniques.
MongoDB .local San Francisco 2020: MongoDB Atlas Data Lake Technical Deep Dive (MongoDB)
MongoDB Atlas Data Lake is a new service offered by MongoDB Atlas. Many organizations store long term, archival data in cost-effective storage like S3, GCP, and Azure Blobs. However, many of them do not have robust systems or tools to effectively utilize large amounts of data to inform decision making. MongoDB Atlas Data Lake is a service allowing organizations to analyze their long-term data to discover a wealth of information about their business.
This session will take a deep dive into the features that are currently available in MongoDB Atlas Data Lake and how they are implemented. In addition, we'll discuss future plans and opportunities and offer ample Q&A time with the engineers on the project.
MongoDB .local San Francisco 2020: Developing Alexa Skills with MongoDB & Golang (MongoDB)
Virtual assistants are becoming the new norm when it comes to daily life, with Amazon’s Alexa being the leader in the space. As a developer, not only do you need to make web and mobile compliant applications, but you need to be able to support virtual assistants like Alexa. However, the process isn’t quite the same between the platforms.
How do you handle requests? Where do you store your data and work with it to create meaningful responses with little delay? How much of your code needs to change between platforms?
In this session we’ll see how to design and develop applications known as Skills for Amazon Alexa powered devices using the Go programming language and MongoDB.
MongoDB .local Paris 2020: Realm: the secret ingredient for better app... (MongoDB)
Realm is a mobile database alternative to Core Data, appreciated by hundreds of thousands of developers. Learn what makes Realm special and how it can be used to build better applications faster.
MongoDB .local Paris 2020: Upply @MongoDB: Upply: When Machine Learning... (MongoDB)
It has never been easier to order online and receive delivery in under 48 hours, very often for free. This ease of use hides a complex market worth more than $8 trillion.
Data is well known in the Supply Chain world (routes, information about goods, customs, ...), but the value of this operational data remains largely untapped. By combining business expertise and Data Science, Upply is redefining the fundamentals of the Supply Chain, enabling each player to overcome the volatility and inefficiency of the market.
The presentation showcases the diverse real-world applications of Fused Deposition Modeling (FDM) across multiple industries:
1. **Manufacturing**: FDM is utilized in manufacturing for rapid prototyping, creating custom tools and fixtures, and producing functional end-use parts. Companies leverage its cost-effectiveness and flexibility to streamline production processes.
2. **Medical**: In the medical field, FDM is used to create patient-specific anatomical models, surgical guides, and prosthetics. Its ability to produce precise and biocompatible parts supports advancements in personalized healthcare solutions.
3. **Education**: FDM plays a crucial role in education by enabling students to learn about design and engineering through hands-on 3D printing projects. It promotes innovation and practical skill development in STEM disciplines.
4. **Science**: Researchers use FDM to prototype equipment for scientific experiments, build custom laboratory tools, and create models for visualization and testing purposes. It facilitates rapid iteration and customization in scientific endeavors.
5. **Automotive**: Automotive manufacturers employ FDM for prototyping vehicle components, tooling for assembly lines, and customized parts. It speeds up the design validation process and enhances efficiency in automotive engineering.
6. **Consumer Electronics**: FDM is utilized in consumer electronics for designing and prototyping product enclosures, casings, and internal components. It enables rapid iteration and customization to meet evolving consumer demands.
7. **Robotics**: Robotics engineers leverage FDM to prototype robot parts, create lightweight and durable components, and customize robot designs for specific applications. It supports innovation and optimization in robotic systems.
8. **Aerospace**: In aerospace, FDM is used to manufacture lightweight parts, complex geometries, and prototypes of aircraft components. It contributes to cost reduction, faster production cycles, and weight savings in aerospace engineering.
9. **Architecture**: Architects utilize FDM for creating detailed architectural models, prototypes of building components, and intricate designs. It aids in visualizing concepts, testing structural integrity, and communicating design ideas effectively.
Each industry example demonstrates how FDM enhances innovation, accelerates product development, and addresses specific challenges through advanced manufacturing capabilities.
Details of description part II: Describing images in practice - Tech Forum 2024BookNet Canada
This presentation explores the practical application of image description techniques. Familiar guidelines will be demonstrated in practice, and descriptions will be developed “live”! If you have learned a lot about the theory of image description techniques but want to feel more confident putting them into practice, this is the presentation for you. There will be useful, actionable information for everyone, whether you are working with authors, colleagues, alone, or leveraging AI as a collaborator.
Link to presentation recording and transcript: https://bnctechforum.ca/sessions/details-of-description-part-ii-describing-images-in-practice/
Presented by BookNet Canada on June 25, 2024, with support from the Department of Canadian Heritage.
The DealBook is our annual overview of the Ukrainian tech investment industry. This edition comprehensively covers the full year 2023 and the first deals of 2024.
In this follow-up session on knowledge and prompt engineering, we will explore structured prompting, chain of thought prompting, iterative prompting, prompt optimization, emotional language prompts, and the inclusion of user signals and industry-specific data to enhance LLM performance.
Join EIS Founder & CEO Seth Earley and special guest Nick Usborne, Copywriter, Trainer, and Speaker, as they delve into these methodologies to improve AI-driven knowledge processes for employees and customers alike.
AI_dev Europe 2024 - From OpenAI to Opensource AIRaphaël Semeteys
Navigating Between Commercial Ownership and Collaborative Openness
This presentation explores the evolution of generative AI, highlighting the trajectories of various models such as GPT-4, and examining the dynamics between commercial interests and the ethics of open collaboration. We offer an in-depth analysis of the levels of openness of different language models, assessing various components and aspects, and exploring how the (de)centralization of computing power and technology could shape the future of AI research and development. Additionally, we explore concrete examples like LLaMA and its descendants, as well as other open and collaborative projects, which illustrate the diversity and creativity in the field, while navigating the complex waters of intellectual property and licensing.
2. Who is Talking To You?
• Yes, I use “Buzz” on my business cards
• Former Investment Bank Chief Architect at JPMorganChase
and Bear Stearns
• Over 30 years of designing and building systems
• Big and small
• Super-specialized to broadly useful in any vertical
• “Traditional” to completely disruptive
• Advocate of language leverage and strong factoring
• Inventor of perl DBI/DBD
• Not an award winner for PowerPoint
• Still programming – using emacs, of course
3. Agenda
• What is MongoDB?
• What are some good use cases?
• How do I use it?
• How do I deploy it?
4. MongoDB: The Leading NoSQL Database
Document
Data Model
Open-
Source
Fully Featured
High Performance
Scalable
{
  name: "John Smith",
  pfxs: ["Dr.", "Mr."],
  address: "10 3rd St.",
  phones: [
    { number: "555-1212", type: "land" },
    { number: "444-1212", type: "mobile" }
  ]
}
5. MongoDB Enterprise Edition
The best way to run MongoDB: Automated. Supported. Secured.
Features beyond those in the community edition:
• Enterprise-Grade Support
• Commercial License
• Ops Manager or Cloud Manager Premium
• Encrypted & In-Memory Storage Engines
• MongoDB Compass
• BI Connector (SQL Bridge)
• Advanced Security
• Platform Certification
• On-Demand Training
6. Company Vital Stats
• 500+ employees, 2000+ customers
• Over $311 million in funding
• Offices in NY & Palo Alto and across EMEA and APAC
14. Agenda
• What is MongoDB?
• What are some good use cases?
• How do I use it?
• How do I deploy it?
15. MongoDB 3.0 Set The Stage…
7x-10x Performance, 50%-80% Less Storage
How: WiredTiger Storage Engine
• Same data model, query language, & ops
• 100% backwards compatible API
• Non-disruptive upgrade
• Storage savings driven by native
compression
• Write performance gains driven by
– Document-level concurrency control
– More efficient use of HW threads
• Much better ability to scale vertically
(Chart: performance comparison, MongoDB 2.6 vs. MongoDB 3.0)
16. MongoDB 3.2: Efficient Enterprise MongoDB
• Much better ability to scale vertically
+
• Document Validation Rules
• Encryption at rest
• BI Connector (SQL bridge)
• MongoDB Compass
• New Relic & AppDynamics integration
• Backup snapshots on filesystem
• Advanced Full-text languages
• $lookup (“left outer JOIN”)
More general-purpose solutions
17. MongoDB Sweet Spot Use Cases
• Big Data
• Product & Asset Catalogs
• Security & Fraud
• Internet of Things
• Database-as-a-Service
• Mobile Apps
• Customer Data Management
• Single View
• Social & Collaboration
• Content Management
• Complex Data Management
• Embedded / ISV
Example customers: intelligence agencies, top investment and retail banks, a top global shipping company, a top industrial equipment manufacturer, a top media company, and Cushman & Wakefield.
18. Agenda
• What is MongoDB?
• What are some good use cases?
• How do I use it?
• How do I deploy it?
20. Unpack and Start The Server
$ tar xf mongodb-osx-x86_64-enterprise-3.2.0.tgz
$ mkdir -p ~/mydb/data
$ mongodb-osx-x86_64-enterprise-3.2.0/bin/mongod \
>     --dbpath ~/mydb/data \
>     --logpath ~/mydb/mongod.log \
>     --fork
about to fork child process, waiting until server is ready for connections.
forked process: 6517
child process started successfully, parent exiting
21. Verify Operation
$ mongodb-osx-x86_64-enterprise-3.2.0/bin/mongo
MongoDB shell version: 3.2.0
connecting to: 127.0.0.1:27017/test
Server has startup warnings:
2016-01-01T12:44:01.646-0500 I CONTROL [initandlisten]
2016-01-01T12:44:01.646-0500 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. Number of files is 256, should be at least 1000
MongoDB Enterprise > use mug
switched to db mug
MongoDB Enterprise > db.foo.insert({name:"bob", hd: new ISODate()});
MongoDB Enterprise > db.foo.insert({name:"buzz"});
MongoDB Enterprise > db.foo.insert({pets:["dog","cat"]});
MongoDB Enterprise > db.foo.find();
{ "_id" : ObjectId("5686cef538ea4981e63111dd"), "name" : "bob", "hd" : ISODate("2016-01-01T19:09:41.442Z") }
{ "_id" : ObjectId("5686…79d5"), "name" : "buzz" }
{ "_id" : ObjectId("5686…79d6"), "pets" : [ "dog", "cat" ] }
22. The Simple Java App
import com.mongodb.client.*;
import com.mongodb.*;
import org.bson.Document;

public class mug1 {
    public static void main(String[] args) {
        try (MongoClient mongoClient = new MongoClient()) {
            MongoDatabase db = mongoClient.getDatabase("mug");
            MongoCollection<Document> coll = db.getCollection("foo");
            MongoCursor<Document> c = coll.find().iterator();
            while (c.hasNext()) {
                Document doc = c.next(); // a Document is a Map<String,Object>
                System.out.println(doc);
            }
        } catch (Exception e) {
            // ...
        }
    }
}
28. A Slightly Bigger Example
MongoDB document:
{ vers: 1,
  customer_id: 1,
  name: { "f": "Mark", "l": "Smith" },
  city: "San Francisco",
  phones: [
    { number: "1-212-777-1212", dnc: true, type: "home" },
    { number: "1-212-777-1213", type: "cell" }
  ]
}
Relational equivalent:

Customer table:
ID | First Name | Last Name | City
 0 | John       | Doe       | New York
 1 | Mark       | Smith     | San Francisco
 2 | Jay        | White     | Dallas
 3 | Meagan     | White     | London
 4 | Edward     | Daniels   | Boston

Phone table:
Phone Number   | Type | DNC    | Customer ID
1-212-555-1212 | home | T      | 0
1-212-555-1213 | home | T      | 0
1-212-555-1214 | cell | F      | 0
1-212-777-1212 | home | T      | 1
1-212-777-1213 | cell | (null) | 1
1-212-888-1212 | home | F      | 2
29. MongoDB Queries Are Expressive
SQL:
  select A.did, A.lname, A.hiredate, B.type, B.number
  from contact A left outer join phones B on (B.did = A.did)
  where B.type = 'home' or A.hiredate > '2014-02-02'::date

MongoDB CLI:
  db.contacts.find({"$or": [
      {"phones.type": "home"},
      {"hiredate": {"$gt": new ISODate("2014-02-02")}}
  ]});

Find all contacts with at least one home phone or hired after 2014-02-02.
30. MongoDB Aggregation Is Powerful
Sum the different types of phones and create a list of the owners if there is more than 1 of that type
> db.contacts.aggregate([
   {$unwind: "$phones"}
  ,{$group: {"_id": "$phones.type", "count": {$sum:1},
             "names": {$push: "$name"} }}
  ,{$match: {"count": {$gt: 1}}}
]);
{ "_id" : "home", "count" : 2, "names" : [
    { "f" : "John", "l" : "Doe" },
    { "f" : "Mark", "l" : "Smith" } ] }
{ "_id" : "cell", "count" : 4, "names" : [
    { "f" : "John", "l" : "Doe" },
    { "f" : "Meagan", "l" : "White" },
    { "f" : "Edward", "l" : "Daniels" },
    { "f" : "Mark", "l" : "Smith" } ] }
31. $lookup: “Left Outer Join++”
> db.leases.aggregate([ ]);
{
"_id" : ObjectId("5642559e0d4f2076a43584fc"),
"leaseID" : "A5",
"sku" : "GD652",
"origDate" : ISODate("2010-01-01T00:00:00Z"),
"histDate" : ISODate("2010-10-28T00:00:00Z"),
"monthlyDue" : 10,
"vers" : 11,
"delinq" : { "d30" : 10, "d60" : 10, "d90" : 60 },
"credit" : 0
}
// 66 more ….
Step 1: Get a sense of the raw material
32. $lookup: “Left Outer Join++”
Step 2: Group leases by SKU and capture count and max value of 90-day delinquency
> db.leases.aggregate([
{$group: { _id: "$sku", n:{$sum:1},
max90:{$max:"$delinq.d90"} }}
]);
{ "_id" : "AC775", "n" : 27, "max90" : 20 }
{ "_id" : "AB123", "n" : 26, "max90" : 5 }
{ "_id" : "GD652", "n" : 14, "max90" : 80 }
33. $lookup: “Left Outer Join++”
Step 3: Reverse sort and then limit to the top 2
> db.leases.aggregate([
{$group: { _id: "$sku", n:{$sum:1},
max90:{$max:"$delinq.d90"} }}
,{$sort: {max90:-1}}
,{$limit: 2}
]);
{ "_id" : "GD652", "n" : 14, "max90" : 80 }
{ "_id" : "AC775", "n" : 27, "max90" : 20 }
36. Agenda
• What is MongoDB?
• What are some good use cases?
• How do I use it?
• How do I deploy it?
37. MongoDB Ops/Cloud Manager
• Single-click provisioning
• Scaling & upgrades
• Admin tasks
• Monitoring with charts
• Dashboards and alerts on 100+ metrics
• Backup and restore with point-in-time recovery
• Support for sharded clusters
41. HA and DR Are Isomorphic
(Diagram: the application's DRIVER connects to a replica set. The PRIMARY and secondaries are split across Data Center 1 and Data Center 2, with an Arbiter in DC3 or the cloud, giving dual data center HA/DR.)
43. Horizontal Scalability Through Sharding
(Diagram: the application's DRIVER connects via mongos to multiple shards, each a replica set with a PRIMARY and two secondaries. Shard 1 holds symbols A-D, Shard 2 symbols E-H, … Shard n symbols ?-Z.)
Three Sharding Models:
1. Range
2. Tag
3. Hash
44. For More Information
Resource              Location
Case Studies          mongodb.com/customers
Presentations         mongodb.com/presentations
Free Online Training  education.mongodb.com
Webinars and Events   mongodb.com/events
Documentation         docs.mongodb.org
MongoDB Downloads     mongodb.com/download
Additional Info       info@mongodb.com
HELLO!
This is Buzz Moschetti at MongoDB, and welcome to today’s webinar entitled “Thinking in Documents”, part of our Back To Basics series.
If your travel plans today do not include exploring the document model in MongoDB, then please exit the aircraft immediately and see an agent at the gate.
Otherwise – WELCOME ABOARD for about the next hour.
Let’s talk about some of the terms.
JOINS: an RDBMS uses joins to stitch together fundamentally simple things into larger, more complex data.
MongoDB uses embedding of data within data, and linking, to produce the same result.
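To make that concrete, here is a rough sketch in plain Python (not MongoDB driver code; the customer and phone rows are taken from the slides, and the helper name `embed` is made up) of how two relational tables collapse into one embedded document:

```python
# The relational rows from the slide example...
customer_row = {"id": 1, "first": "Mark", "last": "Smith", "city": "San Francisco"}
phone_rows = [
    {"customer_id": 1, "number": "1-212-777-1212", "type": "home", "dnc": True},
    {"customer_id": 1, "number": "1-212-777-1213", "type": "cell"},
]

def embed(customer, phones):
    """Fold the child rows into the parent, as the document model does."""
    return {
        "customer_id": customer["id"],
        "name": {"f": customer["first"], "l": customer["last"]},
        "city": customer["city"],
        "phones": [
            # the foreign key disappears: containment replaces the join
            {k: v for k, v in p.items() if k != "customer_id"}
            for p in phones
            if p["customer_id"] == customer["id"]
        ],
    }

doc = embed(customer_row, phone_rows)
```

Reading that document back needs no join at all: the phones arrive already inside the customer.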
There are three themes that are important to grasp when Thinking In Documents in MongoDB and these will be reinforced throughout the presentation.
First, great schema design…. I’ll repeat it: great…
This is not new or something revolutionary to MongoDB.
It is something we have been doing all along in the construction of solutions. Sometimes well, sometimes not.
It’s just that the data structures and APIs used by MongoDB make it much easier to satisfy the first two bullet points.
Particularly for an up-stack software engineer kind of person like myself, the ease and power of well-harmonizing your persistence with your code – java, javascript, perl, python – is a vital part of
ensuring your overall information architecture is robust.
Part of the exercise also involves candidly addressing legacy RDBMS issues that we see over and over again after 40 years, like schema explosion and field overloading and flattening
Boils down to success = “schema” + code
Very briefly, a little bit about the person talking to you today over the net.
Yep!
A document is not a PDF or an MS Word artifact.
A document is a term for a rich shape: structures of structures of lists of structures that ultimately, at the leaves, have familiar scalars like int, double, datetime, and string.
In this example we also see that we’re carrying a thumbnail photo in a binary byte array type; that’s natively supported as well.
This is different than the traditional row-column approach used in RDBMS.
Another important difference is that in MongoDB it is not required for every document in a collection to be the same shape; shapes can VARY.
With the upcoming release of 3.2, we will be supporting document validation, so in those designs where certain fields and their types are absolutely mandatory, we’ll be able to enforce that at the DB engine level, similar to (but not exactly like) traditional schemas.
Truth is, in most non-trivial systems, even with RDBMS and stored procs, plenty of validation and logic is being handled outside the database.
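As a plain-Python illustration of the concept (this is not MongoDB's validator syntax, and the rule and field names are invented), engine-level validation boils down to checking mandatory fields at write time:

```python
# Toy model of document validation: a write is accepted only if every
# mandatory field is present with a value. (MongoDB 3.2 expresses such
# rules with a "validator" document on the collection; this shows only
# the concept, not the real API.)
def validates(doc, required_fields):
    return all(doc.get(f) is not None for f in required_fields)

rules = ["name", "city"]
ok = validates({"name": "bob", "city": "NYC", "pets": ["dog"]}, rules)   # True
bad = validates({"name": "bob"}, rules)                                  # False
```

Because the check lives at the engine level, every driver and every application gets the same enforcement for free.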
Now here is something very important:
For the purposes of the webinar, we will be seeing this “to-string” representation of a document as it is emitted from the MongoDB CLI.
This is easy to read and gets the structural design points across nicely.
But make no mistake: you want most of your actual software interaction with MongoDB (and frankly any DB) to be via high fidelity types, not a big string with whitespaces and CR and quotes and whatnot.
This is a very, very exciting part of MongoDB.
No need to come up with userDefined column 1, column 2, etc.
We see here that Kristina and Mike have very different substructures inside the personalData field.
We call this polymorphism: the variation of shape from document-to-document within the same collection.
The library application logic is only looking for a field called “personalData”; actions will be taken dynamically based on the shape and types in the substructure!
For example, It is a very straightforward exercise to recursively “walk” the structure and construct a panel in a GUI – especially if you are using AngularJS and the MEAN stack
(MongoDB / Express / Angular / Node.js )
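A rough sketch of that recursive walk in plain Python (the documents and field names here, including personalData, are invented for illustration):

```python
# Recursively walk a polymorphic document and collect (dotpath, value)
# leaves -- the kind of traversal a generic GUI panel builder might do.
def walk(node, path=""):
    leaves = []
    if isinstance(node, dict):
        for k, v in node.items():
            leaves += walk(v, f"{path}.{k}" if path else k)
    elif isinstance(node, list):
        for i, v in enumerate(node):
            leaves += walk(v, f"{path}.{i}")
    else:
        leaves.append((path, node))
    return leaves

# Two documents in the same collection with very different substructures:
kristina = {"name": "Kristina", "personalData": {"cats": 1, "city": "Palo Alto"}}
mike = {"name": "Mike", "personalData": {"bikes": [{"brand": "Trek"}]}}
```

Each leaf path could drive one row of a dynamically constructed panel, with no schema known in advance.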
No need to use XML or blobs or serialized objects. It’s all native MongoDB, and documents are represented in the form most naturally manipulated by the language.
Every field is queryable and if so desired, indexable! Documents that do not contain fields in a query predicate are simply treated as unset.
Drivers in each language represent documents in a language-native form most appropriate for that language.
Java has maps, python has dictionaries. You deal with actual objects like Dates, not strings that must be constructed or parsed.
Another important note: We’ll be using query functionality to kill 2 birds with one stone:
To show the shape of the document
To show just a bit of the MongoDB query language itself including dotpath notation to “dig into” substructures
Note also that in MongoDB, documents go into collections in the same shape they come out so we won’t focus on insert.
This is a very different design paradigm from RDBMS, where, for example, the read-side of an operation implemented as an 8 way join is very different than the set of insert statements (some of them in a loop) required for the write side.
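Here is a small plain-Python model of that dotpath behavior (again, not driver code; the contact shapes are invented). The key point: documents missing a field in the predicate simply yield no matches rather than errors:

```python
# Resolve a dotpath like "phones.type" against a document, descending
# into arrays the way MongoDB does when matching.
def get_dotpath(doc, path):
    cur = [doc]
    for part in path.split("."):
        nxt = []
        for node in cur:
            if isinstance(node, dict) and part in node:
                v = node[part]
                nxt.extend(v if isinstance(v, list) else [v])
        cur = nxt
    return cur  # an empty list means the field is simply unset

contacts = [
    {"name": "bob", "phones": [{"type": "home"}, {"type": "cell"}]},
    {"name": "buzz"},  # no phones field at all: treated as unset, not an error
]
matches = [c for c in contacts if "home" in get_dotpath(c, "phones.type")]
```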
Let’s get back to data design.
…
Traditional data design is characterized by some of the points above, and this is largely because the design goals and constraints of legacy RDBMS engines heavily influence the data being put into them.
These platforms were designed when CPUs were slow and memory was VERY expensive.
Perhaps more interesting is that the languages of the time – COBOL, FORTRAN, APL, PASCAL, C – were very compile time oriented and very rectangular in their expression of data structures.
One could say rigid schema combined with these languages was in fact well-harmonized.
Overall, the platform is VERY focused on the physical representation of data.
For example, although most have been conflated, the legacy types of char, varchar, text, CLOB etc. to represent a string suggest a strong coupling to byte-wise storage concerns.
Documents, on the other hand, are more like business entities.
You’ll want to think of your data moving in and out as objects.
And the types and features of Document APIs are designed to be well-harmonized with today’s programming languages – Java, C#, python, node.js, Scala , C++ -- languages that are not nearly as compile-time oriented and offer great capabilities to dynamically manipulate data and perform reflection/introspection upon it.
Some quick logistics.
In the last 5 to 10 mins today, we will answer the most common questions that have appeared in the webinar.
Customer Data Management (e.g., Customer Relationship Management, Biometrics, User Profile Management)
Product and Asset Catalogs (e.g., eCommerce, Inventory Management)
Social and Collaboration Apps: (e.g., Social Networks and Feeds, Document and Project Collaboration Tools)
Mobile Apps (e.g., for Smartphones and Tablets)
Content Management (e.g., Web CMS, Document Management, Digital Asset and Metadata Management)
Internet of Things / Machine to Machine (e.g., mHealth, Connected Home, Smart Meters)
Security and Fraud Apps (e.g., Fraud Detection, Cyberthreat Analysis)
DBaaS (Cloud Database-as-a-Service)
Data Hub (Aggregating Data from Multiple Sources for Operational or Analytical Purposes)
Big Data (e.g., Genomics, Clickstream Analysis, Customer Sentiment Analysis)
Today we’ll explore data structures and schema for a library management application
Good example because in general most of you have some familiarity with the entities involved and we can explore some 1:1, 1:n and other design elements
We’ll close with something really cool – document validation that adapts to change over time.
Assuming you “soft version” your documents by including a logical version ID (in this case, a simple integer in field v), you can maintain multiple different shapes of documents in one collection, each of them validation-enforced to the version rules appropriate at the time. And again, because it is at the DB engine level, enforcement is guaranteed through all drivers.
SUPER POWERFUL!
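In plain Python, the gist of version-adaptive validation looks something like this (the field name v follows the slide's convention; the per-version rules themselves are made up for illustration):

```python
# Dispatch validation rules on the document's logical version field "v",
# so several shapes coexist in one collection, each held to the rules
# that applied when that shape was current.
RULES = {
    1: {"required": {"name"}},
    2: {"required": {"name", "contact"}},  # v2 made a contact block mandatory
}

def validate(doc):
    rules = RULES.get(doc.get("v"))
    if rules is None:
        return False  # unknown or missing version: reject the write
    return rules["required"] <= set(doc)  # all required fields present?

v1_doc = {"v": 1, "name": "bob"}
v2_doc = {"v": 2, "name": "bob", "contact": {"email": "bob@example.com"}}
```

Old v1 documents remain valid under v1 rules even after v2 tightens the requirements, which is exactly the adapt-over-time behavior described above.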
On behalf of all of us at MongoDB , thank you for attending this webinar!
I hope what you saw and heard today gave you some insight and clues into what you might face in your own data design efforts.
Remember you can always reach out to us at MongoDB for guidance.
With that, code well and be well.